scons: Reading SConscript files ...
scons version: 1.3.1
python version: 2 7 2 'final' 0
Checking whether the C++ compiler works... (cached) yes
Checking for C header file unistd.h... (cached) yes
Checking for C library rt... (cached) yes
Checking for C++ header file execinfo.h... (cached) yes
Checking whether backtrace is declared... (cached) yes
Checking whether backtrace_symbols is declared... (cached) yes
Checking for C library pcap... (cached) no
Checking for C library wpcap... (cached) no
Checking for C library nsl... (cached) yes
scons: done reading SConscript files.
scons: Building targets ...
generate_buildinfo(["build/buildinfo.cpp"], ['\n#include <string>\n#include <boost/version.hpp>\n\n#include "mongo/util/version.h"\n\nnamespace mongo {\n const char * gitVersion() { return "%(git_version)s"; }\n const char * compiledJSEngine() { return "%(js_engine)s"; }\n const char * allocator() { return "%(allocator)s"; }\n const char * loaderFlags() { return "%(loader_flags)s"; }\n const char * compilerFlags() { return "%(compiler_flags)s"; }\n std::string sysInfo() { return "%(sys_info)s BOOST_LIB_VERSION=" BOOST_LIB_VERSION ; }\n} // namespace mongo\n'])
/opt/local/bin/python2.7 /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/buildscripts/smoke.py --with-cleanbb jsSlowNightly
cwd [/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo]
nokill requested, not killing anybody
cwd [/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo]
num procs:1
buildlogger: could not find or import buildbot.tac for authentication
Fri Feb 22 11:14:57.123 [initandlisten] MongoDB starting : pid=18143 port=27999 dbpath=/data/db/sconsTests/ 64-bit host=bs-smartos-x86-64-1.10gen.cc
Fri Feb 22 11:14:57.123 [initandlisten]
Fri Feb 22 11:14:57.123 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
Fri Feb 22 11:14:57.123 [initandlisten] ** uses to detect impending page faults.
Fri Feb 22 11:14:57.123 [initandlisten] ** This may result in slower performance for certain use cases
Fri Feb 22 11:14:57.123 [initandlisten]
Fri Feb 22 11:14:57.123 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
Fri Feb 22 11:14:57.123 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
Fri Feb 22 11:14:57.123 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
Fri Feb 22 11:14:57.124 [initandlisten] allocator: system
Fri Feb 22 11:14:57.124 [initandlisten] options: { dbpath: "/data/db/sconsTests/", port: 27999, setParameter: [ "enableTestCommands=1" ] }
Fri Feb 22 11:14:57.124 [initandlisten] journal dir=/data/db/sconsTests/journal
Fri Feb 22 11:14:57.124 [initandlisten] recover : no journal files present, no recovery needed
Fri Feb 22 11:14:57.139 [FileAllocator] allocating new datafile /data/db/sconsTests/local.ns, filling with zeroes...
Fri Feb 22 11:14:57.140 [FileAllocator] creating directory /data/db/sconsTests/_tmp
Fri Feb 22 11:14:57.140 [FileAllocator] done allocating datafile /data/db/sconsTests/local.ns, size: 16MB, took 0 secs
Fri Feb 22 11:14:57.140 [FileAllocator] allocating new datafile /data/db/sconsTests/local.0, filling with zeroes...
Fri Feb 22 11:14:57.140 [FileAllocator] done allocating datafile /data/db/sconsTests/local.0, size: 64MB, took 0 secs
Fri Feb 22 11:14:57.143 [websvr] admin web console waiting for connections on port 28999
Fri Feb 22 11:14:57.143 [initandlisten] waiting for connections on port 27999
Fri Feb 22 11:14:57.949 [initandlisten] connection accepted from 127.0.0.1:56914 #1 (1 connection now open)
running /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 27999 --dbpath /data/db/sconsTests/ --setParameter enableTestCommands=1
*******************************************
Test : 32bit.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/32bit.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/32bit.js";TestData.testFile = "32bit.js";TestData.testName = "32bit";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:14:57 2013
Fri Feb 22 11:14:57.962 [conn1] end connection 127.0.0.1:56914 (0 connections now open)
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:14:58.113 [initandlisten] connection accepted from 127.0.0.1:51266 #2 (1 connection now open)
null
32bit.js running - this test is slow so only runs at night.
Fri Feb 22 11:14:58.122 [conn2] dropDatabase test_32bit starting
Fri Feb 22 11:14:58.130 [conn2] removeJournalFiles
Fri Feb 22 11:14:58.131 [conn2] dropDatabase test_32bit finished
32bit.js PASS #1 seed=0.650223703822121
Fri Feb 22 11:14:58.132 [FileAllocator] allocating new datafile /data/db/sconsTests/test_32bit.ns, filling with zeroes...
Fri Feb 22 11:14:58.132 [FileAllocator] done allocating datafile /data/db/sconsTests/test_32bit.ns, size: 16MB, took 0 secs
Fri Feb 22 11:14:58.132 [FileAllocator] allocating new datafile /data/db/sconsTests/test_32bit.0, filling with zeroes...
Fri Feb 22 11:14:58.132 [FileAllocator] done allocating datafile /data/db/sconsTests/test_32bit.0, size: 64MB, took 0 secs
Fri Feb 22 11:14:58.133 [FileAllocator] allocating new datafile /data/db/sconsTests/test_32bit.1, filling with zeroes...
Fri Feb 22 11:14:58.133 [FileAllocator] done allocating datafile /data/db/sconsTests/test_32bit.1, size: 128MB, took 0 secs
Fri Feb 22 11:14:58.136 [conn2] build index test_32bit.colltest_32bit { _id: 1 }
Fri Feb 22 11:14:58.137 [conn2] build index done. scanned 0 total records. 0.001 secs
Fri Feb 22 11:14:58.137 [conn2] build index test_32bit.colltest_32bit { a: 1.0 }
Fri Feb 22 11:14:58.138 [conn2] build index done. scanned 1 total records. 0 secs
Fri Feb 22 11:14:58.139 [conn2] build index test_32bit.colltest_32bit { b: 1.0 }
Fri Feb 22 11:14:58.140 [conn2] build index done. scanned 1 total records. 0 secs
Fri Feb 22 11:14:58.140 [conn2] build index test_32bit.colltest_32bit { x: 1.0 }
Fri Feb 22 11:14:58.142 [conn2] build index done. scanned 1 total records. 0.001 secs
Fri Feb 22 11:14:58.142 [conn2] build index test_32bit.colltest_32bit { c: 1.0 }
Fri Feb 22 11:14:58.143 [conn2] build index done. scanned 1 total records. 0.001 secs
Fri Feb 22 11:14:58.144 [conn2] build index test_32bit.colltest_32bit { d: 1.0 }
Fri Feb 22 11:14:58.145 [conn2] build index done. scanned 1 total records. 0.001 secs
Fri Feb 22 11:14:58.145 [conn2] build index test_32bit.colltest_32bit { e: 1.0 }
Fri Feb 22 11:14:58.146 [conn2] build index done. scanned 1 total records. 0.001 secs
Fri Feb 22 11:14:58.147 [conn2] build index test_32bit.colltest_32bit { f: 1.0 }
Fri Feb 22 11:14:58.148 [conn2] build index done. scanned 1 total records. 0.001 secs
32bit.js eta_secs:315.5
Fri Feb 22 11:15:01.967 [FileAllocator] allocating new datafile /data/db/sconsTests/test_32bit.2, filling with zeroes...
Fri Feb 22 11:15:01.967 [FileAllocator] done allocating datafile /data/db/sconsTests/test_32bit.2, size: 256MB, took 0 secs
Fri Feb 22 11:15:13.484 [FileAllocator] allocating new datafile /data/db/sconsTests/test_32bit.3, filling with zeroes...
Fri Feb 22 11:15:13.484 [FileAllocator] done allocating datafile /data/db/sconsTests/test_32bit.3, size: 512MB, took 0 secs
100000
200000
Fri Feb 22 11:15:51.711 [FileAllocator] allocating new datafile /data/db/sconsTests/test_32bit.4, filling with zeroes...
Fri Feb 22 11:15:51.711 [FileAllocator] done allocating datafile /data/db/sconsTests/test_32bit.4, size: 1024MB, took 0 secs
300000
400000
500000
600000
700000
Fri Feb 22 11:17:32.974 [FileAllocator] allocating new datafile /data/db/sconsTests/test_32bit.5, filling with zeroes...
Fri Feb 22 11:17:32.975 [FileAllocator] done allocating datafile /data/db/sconsTests/test_32bit.5, size: 2047MB, took 0 secs
count: 723263
Fri Feb 22 11:17:38.621 [conn2] CMD: validate test_32bit.colltest_32bit
Fri Feb 22 11:17:38.621 [conn2] validating index 0: test_32bit.colltest_32bit.$_id_
Fri Feb 22 11:17:38.649 [conn2] validating index 1: test_32bit.colltest_32bit.$a_1
Fri Feb 22 11:17:38.675 [conn2] validating index 2: test_32bit.colltest_32bit.$b_1
Fri Feb 22 11:17:38.696 [conn2] validating index 3: test_32bit.colltest_32bit.$x_1
Fri Feb 22 11:17:38.721 [conn2] validating index 4: test_32bit.colltest_32bit.$c_1
Fri Feb 22 11:17:39.074 [conn2] validating index 5: test_32bit.colltest_32bit.$d_1
Fri Feb 22 11:17:39.111 [conn2] validating index 6: test_32bit.colltest_32bit.$e_1
Fri Feb 22 11:17:39.132 [conn2] validating index 7: test_32bit.colltest_32bit.$f_1
Fri Feb 22 11:17:39.158 [conn2] command test_32bit.$cmd command: { validate: "colltest_32bit", full: undefined } ntoreturn:1 keyUpdates:0 locks(micros) r:537722 reslen:1085 537ms
Fri Feb 22 11:17:39.159 [conn2] dropDatabase test_32bit starting
Fri Feb 22 11:17:39.540 [conn2] removeJournalFiles
Fri Feb 22 11:17:40.114 [conn2] dropDatabase test_32bit finished
Fri Feb 22 11:17:40.114 [conn2] command test_32bit.$cmd command: { dropDatabase: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:955072 reslen:61 955ms
32bit.js SUCCESS
Fri Feb 22 11:17:40.134 [conn2] end connection 127.0.0.1:51266 (0 connections now open)
2.7034 minutes
Fri Feb 22 11:17:40.153 [initandlisten] connection accepted from 127.0.0.1:49308 #3 (1 connection now open)
Fri Feb 22 11:17:40.155 [conn3] end connection 127.0.0.1:49308 (0 connections now open)
*******************************************
Test : autosplit_heuristics.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/autosplit_heuristics.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/autosplit_heuristics.js";TestData.testFile = "autosplit_heuristics.js";TestData.testName = "autosplit_heuristics";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:17:40 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:17:40.301 [initandlisten] connection accepted from 127.0.0.1:38033 #4 (1 connection now open)
null
Resetting db path '/data/db/test0'
Fri Feb 22 11:17:40.309 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30000 --dbpath /data/db/test0 --setParameter enableTestCommands=1
m30000| Fri Feb 22 11:17:40.384 [initandlisten] MongoDB starting : pid=19376 port=30000 dbpath=/data/db/test0 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30000| Fri Feb 22 11:17:40.384 [initandlisten]
m30000| Fri Feb 22 11:17:40.384 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30000| Fri Feb 22 11:17:40.384 [initandlisten] ** uses to detect impending page faults.
m30000| Fri Feb 22 11:17:40.384 [initandlisten] ** This may result in slower performance for certain use cases
m30000| Fri Feb 22 11:17:40.384 [initandlisten]
m30000| Fri Feb 22 11:17:40.384 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30000| Fri Feb 22 11:17:40.384 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30000| Fri Feb 22 11:17:40.384 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30000| Fri Feb 22 11:17:40.384 [initandlisten] allocator: system
m30000| Fri Feb 22 11:17:40.384 [initandlisten] options: { dbpath: "/data/db/test0", port: 30000, setParameter: [ "enableTestCommands=1" ] }
m30000| Fri Feb 22 11:17:40.385 [initandlisten] journal dir=/data/db/test0/journal
m30000| Fri Feb 22 11:17:40.385 [initandlisten] recover : no journal files present, no recovery needed
m30000| Fri Feb 22 11:17:40.400 [FileAllocator] allocating new datafile /data/db/test0/local.ns, filling with zeroes...
m30000| Fri Feb 22 11:17:40.400 [FileAllocator] creating directory /data/db/test0/_tmp
m30000| Fri Feb 22 11:17:40.400 [FileAllocator] done allocating datafile /data/db/test0/local.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 11:17:40.400 [FileAllocator] allocating new datafile /data/db/test0/local.0, filling with zeroes...
m30000| Fri Feb 22 11:17:40.400 [FileAllocator] done allocating datafile /data/db/test0/local.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 11:17:40.403 [initandlisten] waiting for connections on port 30000
m30000| Fri Feb 22 11:17:40.403 [websvr] admin web console waiting for connections on port 31000
m30000| Fri Feb 22 11:17:40.511 [initandlisten] connection accepted from 127.0.0.1:40147 #1 (1 connection now open)
Resetting db path '/data/db/test-config0'
Fri Feb 22 11:17:40.515 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 29000 --dbpath /data/db/test-config0 --configsvr --setParameter enableTestCommands=1
m29000| Fri Feb 22 11:17:40.585 [initandlisten] MongoDB starting : pid=19377 port=29000 dbpath=/data/db/test-config0 master=1 64-bit host=bs-smartos-x86-64-1.10gen.cc
m29000| Fri Feb 22 11:17:40.586 [initandlisten]
m29000| Fri Feb 22 11:17:40.586 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m29000| Fri Feb 22 11:17:40.586 [initandlisten] ** uses to detect impending page faults.
m29000| Fri Feb 22 11:17:40.586 [initandlisten] ** This may result in slower performance for certain use cases
m29000| Fri Feb 22 11:17:40.586 [initandlisten]
m29000| Fri Feb 22 11:17:40.586 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m29000| Fri Feb 22 11:17:40.586 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m29000| Fri Feb 22 11:17:40.586 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m29000| Fri Feb 22 11:17:40.586 [initandlisten] allocator: system
m29000| Fri Feb 22 11:17:40.586 [initandlisten] options: { configsvr: true, dbpath: "/data/db/test-config0", port: 29000, setParameter: [ "enableTestCommands=1" ] }
m29000| Fri Feb 22 11:17:40.586 [initandlisten] journal dir=/data/db/test-config0/journal
m29000| Fri Feb 22 11:17:40.586 [initandlisten] recover : no journal files present, no recovery needed
m29000| Fri Feb 22 11:17:40.599 [FileAllocator] allocating new datafile /data/db/test-config0/local.ns, filling with zeroes...
m29000| Fri Feb 22 11:17:40.599 [FileAllocator] creating directory /data/db/test-config0/_tmp
m29000| Fri Feb 22 11:17:40.599 [FileAllocator] done allocating datafile /data/db/test-config0/local.ns, size: 16MB, took 0 secs
m29000| Fri Feb 22 11:17:40.599 [FileAllocator] allocating new datafile /data/db/test-config0/local.0, filling with zeroes...
m29000| Fri Feb 22 11:17:40.599 [FileAllocator] done allocating datafile /data/db/test-config0/local.0, size: 16MB, took 0 secs
m29000| Fri Feb 22 11:17:40.602 [initandlisten] ******
m29000| Fri Feb 22 11:17:40.602 [initandlisten] creating replication oplog of size: 5MB...
m29000| Fri Feb 22 11:17:40.606 [initandlisten] ******
m29000| Fri Feb 22 11:17:40.606 [initandlisten] waiting for connections on port 29000
m29000| Fri Feb 22 11:17:40.606 [websvr] ERROR: listen(): bind() failed errno:125 Address already in use for socket: 0.0.0.0:30000
m29000| Fri Feb 22 11:17:40.606 [websvr] ERROR: addr already in use
m29000| Fri Feb 22 11:17:40.716 [initandlisten] connection accepted from 127.0.0.1:52563 #1 (1 connection now open)
"localhost:29000"
m29000| Fri Feb 22 11:17:40.717 [initandlisten] connection accepted from 127.0.0.1:55728 #2 (2 connections now open)
ShardingTest test : { "config" : "localhost:29000", "shards" : [ connection to localhost:30000 ] }
Fri Feb 22 11:17:40.723 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30999 --configdb localhost:29000 -v --chunkSize 1 --setParameter enableTestCommands=1
m30999| Fri Feb 22 11:17:40.750 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Fri Feb 22 11:17:40.751 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=19378 port=30999 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage)
m30999| Fri Feb 22 11:17:40.751 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30999| Fri Feb 22 11:17:40.751 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30999| Fri Feb 22 11:17:40.751 [mongosMain] options: { chunkSize: 1, configdb: "localhost:29000", port: 30999, setParameter: [ "enableTestCommands=1" ], verbose: true }
m30999| Fri Feb 22 11:17:40.751 [mongosMain] config string : localhost:29000
m30999| Fri Feb 22 11:17:40.751 [mongosMain] creating new connection to:localhost:29000
m30999| Fri Feb 22 11:17:40.752 BackgroundJob starting: ConnectBG
m29000| Fri Feb 22 11:17:40.752 [initandlisten] connection accepted from 127.0.0.1:54691 #3 (3 connections now open)
m30999| Fri Feb 22 11:17:40.752 [mongosMain] connected connection!
m30999| Fri Feb 22 11:17:40.753 BackgroundJob starting: CheckConfigServers
m30999| Fri Feb 22 11:17:40.753 [mongosMain] creating new connection to:localhost:29000
m30999| Fri Feb 22 11:17:40.753 BackgroundJob starting: ConnectBG
m29000| Fri Feb 22 11:17:40.753 [initandlisten] connection accepted from 127.0.0.1:64791 #4 (4 connections now open)
m30999| Fri Feb 22 11:17:40.753 [mongosMain] connected connection!
m29000| Fri Feb 22 11:17:40.754 [conn4] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:17:40.758 [mongosMain] created new distributed lock for configUpgrade on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 11:17:40.759 [mongosMain] trying to acquire new distributed lock for configUpgrade on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838 )
m30999| Fri Feb 22 11:17:40.759 [LockPinger] creating distributed lock ping thread for localhost:29000 and process bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838 (sleeping for 30000ms)
m30999| Fri Feb 22 11:17:40.759 [mongosMain] inserting initial doc in config.locks for lock configUpgrade
m30999| Fri Feb 22 11:17:40.760 [mongosMain] about to acquire distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838:mongosMain:5758",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:17:40 2013" },
m30999| "why" : "upgrading config database to new format v4",
m30999| "ts" : { "$oid" : "512753d414c149b7a4b0a7b1" } }
m30999| { "_id" : "configUpgrade",
m30999| "state" : 0 }
m29000| Fri Feb 22 11:17:40.760 [FileAllocator] allocating new datafile /data/db/test-config0/config.ns, filling with zeroes...
m29000| Fri Feb 22 11:17:40.760 [FileAllocator] done allocating datafile /data/db/test-config0/config.ns, size: 16MB, took 0 secs
m29000| Fri Feb 22 11:17:40.760 [FileAllocator] allocating new datafile /data/db/test-config0/config.0, filling with zeroes...
m29000| Fri Feb 22 11:17:40.760 [FileAllocator] done allocating datafile /data/db/test-config0/config.0, size: 16MB, took 0 secs
m29000| Fri Feb 22 11:17:40.760 [FileAllocator] allocating new datafile /data/db/test-config0/config.1, filling with zeroes...
m29000| Fri Feb 22 11:17:40.761 [FileAllocator] done allocating datafile /data/db/test-config0/config.1, size: 32MB, took 0 secs
m29000| Fri Feb 22 11:17:40.763 [conn3] build index config.lockpings { _id: 1 }
m29000| Fri Feb 22 11:17:40.764 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:17:40.764 [conn4] build index config.locks { _id: 1 }
m29000| Fri Feb 22 11:17:40.765 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:17:40.765 [LockPinger] cluster localhost:29000 pinged successfully at Fri Feb 22 11:17:40 2013 by distributed lock pinger 'localhost:29000/bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838', sleeping for 30000ms
m29000| Fri Feb 22 11:17:40.765 [conn3] build index config.lockpings { ping: new Date(1) }
m29000| Fri Feb 22 11:17:40.766 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 11:17:40.766 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838' acquired, ts : 512753d414c149b7a4b0a7b1
m30999| Fri Feb 22 11:17:40.768 [mongosMain] starting upgrade of config server from v0 to v4
m30999| Fri Feb 22 11:17:40.768 [mongosMain] starting next upgrade step from v0 to v4
m30999| Fri Feb 22 11:17:40.768 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:40-512753d414c149b7a4b0a7b2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361531860768), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m29000| Fri Feb 22 11:17:40.768 [conn4] build index config.changelog { _id: 1 }
m29000| Fri Feb 22 11:17:40.769 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:17:40.769 [mongosMain] writing initial config version at v4
m29000| Fri Feb 22 11:17:40.769 [conn4] build index config.version { _id: 1 }
m29000| Fri Feb 22 11:17:40.770 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:17:40.770 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:40-512753d414c149b7a4b0a7b4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361531860770), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30999| Fri Feb 22 11:17:40.770 [mongosMain] upgrade of config server to v4 successful
m30999| Fri Feb 22 11:17:40.771 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838' unlocked.
m29000| Fri Feb 22 11:17:40.772 [conn3] build index config.settings { _id: 1 }
m29000| Fri Feb 22 11:17:40.772 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:17:40.773 [websvr] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 11:17:40.773 BackgroundJob starting: Balancer
m30999| Fri Feb 22 11:17:40.773 BackgroundJob starting: cursorTimeout
m30999| Fri Feb 22 11:17:40.773 [mongosMain] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 11:17:40.773 BackgroundJob starting: PeriodicTask::Runner
m30999| Fri Feb 22 11:17:40.773 [Balancer] about to contact config servers and shards
m30999| Fri Feb 22 11:17:40.773 [websvr] admin web console waiting for connections on port 31999
m30999| Fri Feb 22 11:17:40.773 [mongosMain] waiting for connections on port 30999
m29000| Fri Feb 22 11:17:40.773 [conn3] build index config.chunks { _id: 1 }
m29000| Fri Feb 22 11:17:40.774 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:17:40.774 [conn3] info: creating collection config.chunks on add index
m29000| Fri Feb 22 11:17:40.774 [conn3] build index config.chunks { ns: 1, min: 1 }
m29000| Fri Feb 22 11:17:40.775 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:17:40.775 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29000| Fri Feb 22 11:17:40.775 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:17:40.775 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m29000| Fri Feb 22 11:17:40.776 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:17:40.776 [conn3] build index config.shards { _id: 1 }
m29000| Fri Feb 22 11:17:40.776 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:17:40.776 [conn3] info: creating collection config.shards on add index
m29000| Fri Feb 22 11:17:40.777 [conn3] build index config.shards { host: 1 }
m29000| Fri Feb 22 11:17:40.777 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:17:40.778 [Balancer] config servers and shards contacted successfully
m30999| Fri Feb 22 11:17:40.778 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30999 started at Feb 22 11:17:40
m30999| Fri Feb 22 11:17:40.778 [Balancer] created new distributed lock for balancer on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 11:17:40.778 [Balancer] creating new connection to:localhost:29000
m29000| Fri Feb 22 11:17:40.778 [conn3] build index config.mongos { _id: 1 }
m30999| Fri Feb 22 11:17:40.778 BackgroundJob starting: ConnectBG
m29000| Fri Feb 22 11:17:40.778 [initandlisten] connection accepted from 127.0.0.1:64925 #5 (5 connections now open)
m30999| Fri Feb 22 11:17:40.778 [Balancer] connected connection!
m29000| Fri Feb 22 11:17:40.779 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:17:40.779 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:17:40.779 [Balancer] trying to acquire new distributed lock for balancer on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838 )
m30999| Fri Feb 22 11:17:40.779 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Fri Feb 22 11:17:40.779 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:17:40 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512753d414c149b7a4b0a7b6" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Fri Feb 22 11:17:40.780 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838' acquired, ts : 512753d414c149b7a4b0a7b6
m30999| Fri Feb 22 11:17:40.780 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:17:40.780 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:17:40.780 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:17:40.780 [Balancer] no collections to balance
m30999| Fri Feb 22 11:17:40.780 [Balancer] no need to move any chunk
m30999| Fri Feb 22 11:17:40.780 [Balancer] *** end of balancing round
m30999| Fri Feb 22 11:17:40.780 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838' unlocked.
m30999| Fri Feb 22 11:17:40.924 [mongosMain] connection accepted from 127.0.0.1:60762 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Fri Feb 22 11:17:40.926 [conn1] couldn't find database [admin] in config db
m29000| Fri Feb 22 11:17:40.927 [conn3] build index config.databases { _id: 1 }
m29000| Fri Feb 22 11:17:40.927 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:17:40.928 [conn1] put [admin] on: config:localhost:29000
m30999| Fri Feb 22 11:17:40.928 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 11:17:40.928 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:17:40.928 [conn1] connected connection!
m30000| Fri Feb 22 11:17:40.928 [initandlisten] connection accepted from 127.0.0.1:54131 #2 (2 connections now open)
m30999| Fri Feb 22 11:17:40.929 [conn1] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
m30999| Fri Feb 22 11:17:40.931 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 11:17:40.931 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:17:40.939 [conn1] connected connection!
m30999| Fri Feb 22 11:17:40.939 [conn1] creating WriteBackListener for: localhost:30000 serverID: 512753d414c149b7a4b0a7b5
m30000| Fri Feb 22 11:17:40.939 [initandlisten] connection accepted from 127.0.0.1:61319 #3 (3 connections now open)
m30999| Fri Feb 22 11:17:40.939 [conn1] initializing shard connection to localhost:30000
m30999| Fri Feb 22 11:17:40.939 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Fri Feb 22 11:17:40.939 [conn1] creating new connection to:localhost:29000
m30999| Fri Feb 22 11:17:40.940 BackgroundJob starting: ConnectBG
m29000| Fri Feb 22 11:17:40.940 [initandlisten] connection accepted from 127.0.0.1:51914 #6 (6 connections now open)
m30999| Fri Feb 22 11:17:40.940 [conn1] connected connection!
m30999| Fri Feb 22 11:17:40.940 [conn1] creating WriteBackListener for: localhost:29000 serverID: 512753d414c149b7a4b0a7b5
m30999| Fri Feb 22 11:17:40.940 [conn1] initializing shard connection to localhost:29000
m30999| Fri Feb 22 11:17:40.940 BackgroundJob starting: WriteBackListener-localhost:29000
m30999| Fri Feb 22 11:17:40.940 [WriteBackListener-localhost:29000] localhost:29000 is not a shard node
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
m30999| Fri Feb 22 11:17:40.942 [conn1] couldn't find database [foo] in config db
m30999| Fri Feb 22 11:17:40.942 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 11:17:40.942 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 11:17:40.942 [initandlisten] connection accepted from 127.0.0.1:63773 #4 (4 connections now open)
m30999| Fri Feb 22 11:17:40.942 [conn1] connected connection!
m30999| Fri Feb 22 11:17:40.943 [conn1] best shard for new allocation is shard: shard0000:localhost:30000 mapped: 80 writeLock: 0 version: 2.4.0-rc1-pre-
m30999| Fri Feb 22 11:17:40.943 [conn1] put [foo] on: shard0000:localhost:30000
m30999| Fri Feb 22 11:17:40.943 [conn1] enabling sharding on: foo
{ "ok" : 1 }
m30000| Fri Feb 22 11:17:40.945 [FileAllocator] allocating new datafile /data/db/test0/foo.ns, filling with zeroes...
m30000| Fri Feb 22 11:17:40.945 [FileAllocator] done allocating datafile /data/db/test0/foo.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 11:17:40.945 [FileAllocator] allocating new datafile /data/db/test0/foo.0, filling with zeroes...
m30000| Fri Feb 22 11:17:40.945 [FileAllocator] done allocating datafile /data/db/test0/foo.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 11:17:40.946 [FileAllocator] allocating new datafile /data/db/test0/foo.1, filling with zeroes...
m30000| Fri Feb 22 11:17:40.946 [FileAllocator] done allocating datafile /data/db/test0/foo.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 11:17:40.949 [conn4] build index foo.hashBar { _id: 1 }
m30000| Fri Feb 22 11:17:40.950 [conn4] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 11:17:40.950 [conn4] info: creating collection foo.hashBar on add index
m30999| Fri Feb 22 11:17:40.950 [conn1] CMD: shardcollection: { shardCollection: "foo.hashBar", key: { _id: 1.0 } }
m30999| Fri Feb 22 11:17:40.950 [conn1] enable sharding on: foo.hashBar with shard key: { _id: 1.0 }
m30999| Fri Feb 22 11:17:40.951 [conn1] going to create 1 chunk(s) for: foo.hashBar using new epoch 512753d414c149b7a4b0a7b7
m30999| Fri Feb 22 11:17:40.951 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 2 version: 1|0||512753d414c149b7a4b0a7b7 based on: (empty)
m29000| Fri Feb 22 11:17:40.952 [conn3] build index config.collections { _id: 1 }
m29000| Fri Feb 22 11:17:40.953 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:17:40.953 [conn1] setShardVersion shard0000 localhost:30000 foo.hashBar { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 2
m30999| Fri Feb 22 11:17:40.953 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "foo.hashBar", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'foo.hashBar'" }
m30999| Fri Feb 22 11:17:40.953 [conn1] setShardVersion shard0000 localhost:30000 foo.hashBar { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 2
m30000| Fri Feb 22 11:17:40.954 [conn3] no current chunk manager found for this shard, will initialize
m29000| Fri Feb 22 11:17:40.954 [initandlisten] connection accepted from 127.0.0.1:47773 #7 (7 connections now open)
m30999| Fri Feb 22 11:17:40.954 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
{ "collectionsharded" : "foo.hashBar", "ok" : 1 }
m30999| Fri Feb 22 11:17:40.955 [conn1] splitting: foo.hashBar shard: ns:foo.hashBar shard: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30000| Fri Feb 22 11:17:40.956 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 0.0 } ], shardId: "foo.hashBar-_id_MinKey", configdb: "localhost:29000" }
m29000| Fri Feb 22 11:17:40.956 [initandlisten] connection accepted from 127.0.0.1:39233 #8 (8 connections now open)
m30000| Fri Feb 22 11:17:40.957 [LockPinger] creating distributed lock ping thread for localhost:29000 and process bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070 (sleeping for 30000ms)
m30000| Fri Feb 22 11:17:40.958 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753d452997dd3b08a6e57
m30000| Fri Feb 22 11:17:40.959 [conn4] splitChunk accepted at version 1|0||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:17:40.959 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:40-512753d452997dd3b08a6e58", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531860959), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:17:40.960 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:17:40.960 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 3 version: 1|2||512753d414c149b7a4b0a7b7 based on: 1|0||512753d414c149b7a4b0a7b7
{ "ok" : 1 }
m30999| Fri Feb 22 11:17:40.961 [conn1] splitting: foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|2||000000000000000000000000min: { _id: 0.0 }max: { _id: MaxKey }
m30000| Fri Feb 22 11:17:40.961 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 10.0 } ], shardId: "foo.hashBar-_id_0.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:17:40.962 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753d452997dd3b08a6e59
m30000| Fri Feb 22 11:17:40.963 [conn4] splitChunk accepted at version 1|2||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:17:40.963 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:40-512753d452997dd3b08a6e5a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531860963), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 10.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:17:40.964 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:17:40.964 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 4 version: 1|4||512753d414c149b7a4b0a7b7 based on: 1|2||512753d414c149b7a4b0a7b7
{ "ok" : 1 }
m30999| Fri Feb 22 11:17:40.965 [conn1] splitting: foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|3||000000000000000000000000min: { _id: 0.0 }max: { _id: 10.0 }
m30000| Fri Feb 22 11:17:40.965 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: 10.0 }, from: "shard0000", splitKeys: [ { _id: 1.0 } ], shardId: "foo.hashBar-_id_0.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:17:40.966 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753d452997dd3b08a6e5b
m30000| Fri Feb 22 11:17:40.967 [conn4] splitChunk accepted at version 1|4||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:17:40.967 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:40-512753d452997dd3b08a6e5c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531860967), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 0.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 1.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 1.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:17:40.967 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:17:40.968 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 5 version: 1|6||512753d414c149b7a4b0a7b7 based on: 1|4||512753d414c149b7a4b0a7b7
{ "ok" : 1 }
m30999| Fri Feb 22 11:17:40.969 [conn1] splitting: foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|6||000000000000000000000000min: { _id: 1.0 }max: { _id: 10.0 }
m30000| Fri Feb 22 11:17:40.969 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 1.0 }, max: { _id: 10.0 }, from: "shard0000", splitKeys: [ { _id: 2.0 } ], shardId: "foo.hashBar-_id_1.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:17:40.970 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753d452997dd3b08a6e5d
m30000| Fri Feb 22 11:17:40.980 [conn4] splitChunk accepted at version 1|6||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:17:40.980 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:40-512753d452997dd3b08a6e5e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531860980), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 1.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1.0 }, max: { _id: 2.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 2.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:17:40.980 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:17:40.981 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 6 version: 1|8||512753d414c149b7a4b0a7b7 based on: 1|6||512753d414c149b7a4b0a7b7
{ "ok" : 1 }
m30999| Fri Feb 22 11:17:40.982 [conn1] splitting: foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|8||000000000000000000000000min: { _id: 2.0 }max: { _id: 10.0 }
m30000| Fri Feb 22 11:17:40.982 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 2.0 }, max: { _id: 10.0 }, from: "shard0000", splitKeys: [ { _id: 3.0 } ], shardId: "foo.hashBar-_id_2.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:17:40.983 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753d452997dd3b08a6e5f
m30000| Fri Feb 22 11:17:40.984 [conn4] splitChunk accepted at version 1|8||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:17:40.984 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:40-512753d452997dd3b08a6e60", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531860984), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 2.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2.0 }, max: { _id: 3.0 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 3.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:17:40.984 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:17:40.985 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 7 version: 1|10||512753d414c149b7a4b0a7b7 based on: 1|8||512753d414c149b7a4b0a7b7
{ "ok" : 1 }
m30999| Fri Feb 22 11:17:40.986 [conn1] splitting: foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|10||000000000000000000000000min: { _id: 3.0 }max: { _id: 10.0 }
m30000| Fri Feb 22 11:17:40.986 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 3.0 }, max: { _id: 10.0 }, from: "shard0000", splitKeys: [ { _id: 4.0 } ], shardId: "foo.hashBar-_id_3.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:17:40.987 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753d452997dd3b08a6e61
m30000| Fri Feb 22 11:17:40.987 [conn4] splitChunk accepted at version 1|10||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:17:40.988 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:40-512753d452997dd3b08a6e62", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531860988), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 3.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 3.0 }, max: { _id: 4.0 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 4.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:17:40.988 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:17:40.989 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 8 version: 1|12||512753d414c149b7a4b0a7b7 based on: 1|10||512753d414c149b7a4b0a7b7
{ "ok" : 1 }
m30999| Fri Feb 22 11:17:40.989 [conn1] splitting: foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|12||000000000000000000000000min: { _id: 4.0 }max: { _id: 10.0 }
m30000| Fri Feb 22 11:17:40.989 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 4.0 }, max: { _id: 10.0 }, from: "shard0000", splitKeys: [ { _id: 5.0 } ], shardId: "foo.hashBar-_id_4.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:17:40.990 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753d452997dd3b08a6e63
m30000| Fri Feb 22 11:17:40.991 [conn4] splitChunk accepted at version 1|12||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:17:40.991 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:40-512753d452997dd3b08a6e64", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531860991), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 4.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 4.0 }, max: { _id: 5.0 }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 5.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:17:40.992 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:17:40.992 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 9 version: 1|14||512753d414c149b7a4b0a7b7 based on: 1|12||512753d414c149b7a4b0a7b7
{ "ok" : 1 }
m30999| Fri Feb 22 11:17:40.993 [conn1] splitting: foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|14||000000000000000000000000min: { _id: 5.0 }max: { _id: 10.0 }
m30000| Fri Feb 22 11:17:40.993 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 5.0 }, max: { _id: 10.0 }, from: "shard0000", splitKeys: [ { _id: 6.0 } ], shardId: "foo.hashBar-_id_5.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:17:40.994 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753d452997dd3b08a6e65
m30000| Fri Feb 22 11:17:40.995 [conn4] splitChunk accepted at version 1|14||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:17:40.995 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:40-512753d452997dd3b08a6e66", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531860995), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 5.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 5.0 }, max: { _id: 6.0 }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 6.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:17:40.995 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:17:40.996 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 10 version: 1|16||512753d414c149b7a4b0a7b7 based on: 1|14||512753d414c149b7a4b0a7b7
{ "ok" : 1 }
m30999| Fri Feb 22 11:17:40.997 [conn1] splitting: foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|16||000000000000000000000000min: { _id: 6.0 }max: { _id: 10.0 }
m30000| Fri Feb 22 11:17:40.997 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 6.0 }, max: { _id: 10.0 }, from: "shard0000", splitKeys: [ { _id: 7.0 } ], shardId: "foo.hashBar-_id_6.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:17:40.998 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753d452997dd3b08a6e67
m30000| Fri Feb 22 11:17:40.998 [conn4] splitChunk accepted at version 1|16||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:17:40.999 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:40-512753d452997dd3b08a6e68", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531860999), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 6.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 6.0 }, max: { _id: 7.0 }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 7.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:17:40.999 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:17:41.000 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 11 version: 1|18||512753d414c149b7a4b0a7b7 based on: 1|16||512753d414c149b7a4b0a7b7
{ "ok" : 1 }
m30999| Fri Feb 22 11:17:41.001 [conn1] splitting: foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|18||000000000000000000000000min: { _id: 7.0 }max: { _id: 10.0 }
m30000| Fri Feb 22 11:17:41.001 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 7.0 }, max: { _id: 10.0 }, from: "shard0000", splitKeys: [ { _id: 8.0 } ], shardId: "foo.hashBar-_id_7.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:17:41.002 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753d552997dd3b08a6e69
m30000| Fri Feb 22 11:17:41.002 [conn4] splitChunk accepted at version 1|18||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:17:41.003 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:41-512753d552997dd3b08a6e6a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531861003), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 7.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 7.0 }, max: { _id: 8.0 }, lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 8.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:17:41.003 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:17:41.004 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 12 version: 1|20||512753d414c149b7a4b0a7b7 based on: 1|18||512753d414c149b7a4b0a7b7
{ "ok" : 1 }
m30999| Fri Feb 22 11:17:41.004 [conn1] splitting: foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|20||000000000000000000000000min: { _id: 8.0 }max: { _id: 10.0 }
m30000| Fri Feb 22 11:17:41.005 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 8.0 }, max: { _id: 10.0 }, from: "shard0000", splitKeys: [ { _id: 9.0 } ], shardId: "foo.hashBar-_id_8.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:17:41.005 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753d552997dd3b08a6e6b
m30000| Fri Feb 22 11:17:41.006 [conn4] splitChunk accepted at version 1|20||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:17:41.006 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:41-512753d552997dd3b08a6e6c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531861006), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 8.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 8.0 }, max: { _id: 9.0 }, lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 9.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:17:41.007 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:17:41.007 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 13 version: 1|22||512753d414c149b7a4b0a7b7 based on: 1|20||512753d414c149b7a4b0a7b7
{ "ok" : 1 }
----
Setup collection...
----
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("512753d414c149b7a4b0a7b3") }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "foo", "partitioned" : true, "primary" : "shard0000" }
                foo.hashBar
                        shard key: { "_id" : 1 }
                        chunks:
                                shard0000  12
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : 0 } on : shard0000 { "t" : 1000, "i" : 1 }
                        { "_id" : 0 } -->> { "_id" : 1 } on : shard0000 { "t" : 1000, "i" : 5 }
                        { "_id" : 1 } -->> { "_id" : 2 } on : shard0000 { "t" : 1000, "i" : 7 }
                        { "_id" : 2 } -->> { "_id" : 3 } on : shard0000 { "t" : 1000, "i" : 9 }
                        { "_id" : 3 } -->> { "_id" : 4 } on : shard0000 { "t" : 1000, "i" : 11 }
                        { "_id" : 4 } -->> { "_id" : 5 } on : shard0000 { "t" : 1000, "i" : 13 }
                        { "_id" : 5 } -->> { "_id" : 6 } on : shard0000 { "t" : 1000, "i" : 15 }
                        { "_id" : 6 } -->> { "_id" : 7 } on : shard0000 { "t" : 1000, "i" : 17 }
                        { "_id" : 7 } -->> { "_id" : 8 } on : shard0000 { "t" : 1000, "i" : 19 }
                        { "_id" : 8 } -->> { "_id" : 9 } on : shard0000 { "t" : 1000, "i" : 21 }
                        { "_id" : 9 } -->> { "_id" : 10 } on : shard0000 { "t" : 1000, "i" : 22 }
                        { "_id" : 10 } -->> { "_id" : { "$maxKey" : 1 } } on : shard0000 { "t" : 1000, "i" : 4 }
----
Starting inserts of approx size: 18...
----
{ "chunkSizeBytes" : 1048576, "insertsForSplit" : 81556, "totalInserts" : 815560 }
m30999| Fri Feb 22 11:17:41.049 [conn1] setShardVersion shard0000 localhost:30000 foo.hashBar { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|22, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 13
m30999| Fri Feb 22 11:17:41.050 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), ok: 1.0 }
m30999| Fri Feb 22 11:17:43.765 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|5||000000000000000000000000min: { _id: 0.0 }max: { _id: 1.0 } dataWritten: 209715 splitThreshold: 1048576
m30999| Fri Feb 22 11:17:43.765 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 11:17:43.765 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|7||000000000000000000000000min: { _id: 1.0 }max: { _id: 2.0 } dataWritten: 209715 splitThreshold: 1048576
m30999| Fri Feb 22 11:17:43.765 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 11:17:43.766 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|9||000000000000000000000000min: { _id: 2.0 }max: { _id: 3.0 } dataWritten: 209715 splitThreshold: 1048576
m30999| Fri Feb 22 11:17:43.766 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 11:17:43.766 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|11||000000000000000000000000min: { _id: 3.0 }max: { _id: 4.0 } dataWritten: 209715 splitThreshold: 1048576
m30999| Fri Feb 22 11:17:43.766 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 11:17:43.766 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|13||000000000000000000000000min: { _id: 4.0 }max: { _id: 5.0 } dataWritten: 209715 splitThreshold: 1048576
m30999| Fri Feb 22 11:17:43.766 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 11:17:43.767 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|15||000000000000000000000000min: { _id: 5.0 }max: { _id: 6.0 } dataWritten: 209715 splitThreshold: 1048576
m30999| Fri Feb 22 11:17:43.767 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 11:17:43.841 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|17||000000000000000000000000min: { _id: 6.0 }max: { _id: 7.0 } dataWritten: 209728 splitThreshold: 1048576
m30000| Fri Feb 22 11:17:43.841 [conn4] request split points lookup for chunk foo.hashBar { : 6.0 } -->> { : 7.0 }
m30999| Fri Feb 22 11:17:43.849 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 11:17:43.850 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|19||000000000000000000000000min: { _id: 7.0 }max: { _id: 8.0 } dataWritten: 209728 splitThreshold: 1048576
m30000| Fri Feb 22 11:17:43.850 [conn4] request split points lookup for chunk foo.hashBar { : 7.0 } -->> { : 8.0 }
m30999| Fri Feb 22 11:17:43.858 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 11:17:43.858 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|21||000000000000000000000000min: { _id: 8.0 }max: { _id: 9.0 } dataWritten: 209728 splitThreshold: 1048576
m30000| Fri Feb 22 11:17:43.858 [conn4] request split points lookup for chunk foo.hashBar { : 8.0 } -->> { : 9.0 }
m30999| Fri Feb 22 11:17:43.866 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 11:17:43.866 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|22||000000000000000000000000min: { _id: 9.0 }max: { _id: 10.0 } dataWritten: 209728 splitThreshold: 1048576
m30000| Fri Feb 22 11:17:43.866 [conn4] request split points lookup for chunk foo.hashBar { : 9.0 } -->> { : 10.0 }
m30999| Fri Feb 22 11:17:43.874 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 11:17:46.781 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:17:46.781 [Balancer] skipping balancing round because balancing is disabled
m30999| Fri Feb 22 11:17:49.108 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|5||000000000000000000000000min: { _id: 0.0 }max: { _id: 1.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:17:49.108 [conn4] request split points lookup for chunk foo.hashBar { : 0.0 } -->> { : 1.0 }
m30999| Fri Feb 22 11:17:49.133 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 11:17:49.133 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|7||000000000000000000000000min: { _id: 1.0 }max: { _id: 2.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:17:49.133 [conn4] request split points lookup for chunk foo.hashBar { : 1.0 } -->> { : 2.0 }
m30999| Fri Feb 22 11:17:49.157 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 11:17:49.158 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|9||000000000000000000000000min: { _id: 2.0 }max: { _id: 3.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:17:49.158 [conn4] request split points lookup for chunk foo.hashBar { : 2.0 } -->> { : 3.0 }
m30999| Fri Feb 22 11:17:49.180 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 11:17:49.180 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|11||000000000000000000000000min: { _id: 3.0 }max: { _id: 4.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:17:49.181 [conn4] request split points lookup for chunk foo.hashBar { : 3.0 } -->> { : 4.0 }
m30999| Fri Feb 22 11:17:49.204 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 11:17:49.205 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|13||000000000000000000000000min: { _id: 4.0 }max: { _id: 5.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:17:49.205 [conn4] request split points lookup for chunk foo.hashBar { : 4.0 } -->> { : 5.0 }
m30999| Fri Feb 22 11:17:49.229 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 11:17:49.229 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|15||000000000000000000000000min: { _id: 5.0 }max: { _id: 6.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:17:49.229 [conn4] request split points lookup for chunk foo.hashBar { : 5.0 } -->> { : 6.0 }
m30999| Fri Feb 22 11:17:49.252 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 11:17:49.296 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|17||000000000000000000000000min: { _id: 6.0 }max: { _id: 7.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:17:49.296 [conn4] request split points lookup for chunk foo.hashBar { : 6.0 } -->> { : 7.0 }
m30999| Fri Feb 22 11:17:49.321 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 11:17:49.321 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|19||000000000000000000000000min: { _id: 7.0 }max: { _id: 8.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:17:49.321 [conn4] request split points lookup for chunk foo.hashBar { : 7.0 } -->> { : 8.0 }
m30999| Fri Feb 22 11:17:49.344 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 11:17:49.344 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|21||000000000000000000000000min: { _id: 8.0 }max: { _id: 9.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:17:49.345 [conn4] request split points lookup for chunk foo.hashBar { : 8.0 } -->> { : 9.0 }
m30999| Fri Feb 22 11:17:49.367 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 11:17:49.367 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|22||000000000000000000000000min: { _id: 9.0 }max: { _id: 10.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:17:49.368 [conn4] request split points lookup for chunk foo.hashBar { : 9.0 } -->> { : 10.0 }
m30999| Fri Feb 22 11:17:49.390 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 11:17:52.782 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:17:52.782 [Balancer] skipping balancing round because balancing is disabled
m30999| Fri Feb 22 11:17:54.595 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|5||000000000000000000000000min: { _id: 0.0 }max: { _id: 1.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:17:54.596 [conn4] request split points lookup for chunk foo.hashBar { : 0.0 } -->> { : 1.0 }
m30999| Fri Feb 22 11:17:54.637 [conn1] chunk not full enough to trigger auto-split { _id: 0.321423316494188 }
m30999| Fri Feb 22 11:17:54.638 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|7||000000000000000000000000min: { _id: 1.0 }max: { _id: 2.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:17:54.638 [conn4] request split points lookup for chunk foo.hashBar { : 1.0 } -->> { : 2.0 }
m30999| Fri Feb 22 11:17:54.678 [conn1] chunk not full enough to trigger auto-split { _id: 1.321424542645544 }
m30999| Fri Feb 22 11:17:54.679 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|9||000000000000000000000000min: { _id: 2.0 }max: { _id: 3.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:17:54.679 [conn4] request split points lookup for chunk foo.hashBar { : 2.0 } -->> { : 3.0 }
m30999| Fri Feb 22 11:17:54.717 [conn1] chunk not full enough to trigger auto-split { _id: 2.3214257687969 }
m30999| Fri Feb 22 11:17:54.717 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|11||000000000000000000000000min: { _id: 3.0 }max: { _id: 4.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:17:54.717 [conn4] request split points lookup for chunk foo.hashBar { : 3.0 } -->> { : 4.0 }
m30999| Fri Feb 22 11:17:54.757 [conn1] chunk not full enough to trigger auto-split { _id: 3.321426994948256 }
m30999| Fri Feb 22 11:17:54.757 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|13||000000000000000000000000min: { _id: 4.0 }max: { _id: 5.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:17:54.757 [conn4] request split points lookup for chunk foo.hashBar { : 4.0 } -->> { : 5.0 }
m30999| Fri Feb 22 11:17:54.798 [conn1] chunk not full enough to trigger auto-split { _id: 4.321428221099612 }
m30999| Fri Feb 22 11:17:54.798 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|15||000000000000000000000000min: { _id: 5.0 }max: { _id: 6.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:17:54.798 [conn4] request split points lookup for chunk foo.hashBar { : 5.0 } -->> { : 6.0 }
m30999| Fri Feb 22 11:17:54.836 [conn1] chunk not full enough to trigger auto-split { _id: 5.321429447250969 }
m30999| Fri Feb 22 11:17:54.897 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|17||000000000000000000000000min: { _id: 6.0 }max: { _id: 7.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:17:54.897 [conn4] request split points lookup for chunk foo.hashBar { : 6.0 } -->> { : 7.0 }
m30999| Fri Feb 22 11:17:54.935 [conn1] chunk not full enough to trigger auto-split { _id: 6.321430673402324 }
m30999| Fri Feb 22 11:17:54.935 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|19||000000000000000000000000min: { _id: 7.0 }max: { _id: 8.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:17:54.935 [conn4] request split points lookup for chunk foo.hashBar { : 7.0 } -->> { : 8.0 }
m30999| Fri Feb 22 11:17:54.975 [conn1] chunk not full enough to trigger auto-split { _id: 7.321431899553681 }
m30999| Fri Feb 22 11:17:54.976 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|21||000000000000000000000000min: { _id: 8.0 }max: { _id: 9.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:17:54.976 [conn4] request split points lookup for chunk foo.hashBar { : 8.0 } -->> { : 9.0 }
m30999| Fri Feb 22 11:17:55.017 [conn1] chunk not full enough to trigger auto-split { _id: 8.321433125705036 }
m30999| Fri Feb 22 11:17:55.017 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|22||000000000000000000000000min: { _id: 9.0 }max: { _id: 10.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:17:55.017 [conn4] request split points lookup for chunk foo.hashBar { : 9.0 } -->> { : 10.0 }
m30999| Fri Feb 22 11:17:55.057 [conn1] chunk not full enough to trigger auto-split { _id: 9.321434351856393 }
m30999| Fri Feb 22 11:17:58.783 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:17:58.783 [Balancer] skipping balancing round because balancing is disabled
m30999| Fri Feb 22 11:18:00.234 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|5||000000000000000000000000min: { _id: 0.0 }max: { _id: 1.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:00.234 [conn4] request split points lookup for chunk foo.hashBar { : 0.0 } -->> { : 1.0 }
m30999| Fri Feb 22 11:18:00.272 [conn1] chunk not full enough to trigger auto-split { _id: 0.321423316494188 }
m30999| Fri Feb 22 11:18:00.272 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|7||000000000000000000000000min: { _id: 1.0 }max: { _id: 2.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:00.272 [conn4] request split points lookup for chunk foo.hashBar { : 1.0 } -->> { : 2.0 }
m30999| Fri Feb 22 11:18:00.308 [conn1] chunk not full enough to trigger auto-split { _id: 1.321424542645544 }
m30999| Fri Feb 22 11:18:00.308 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|9||000000000000000000000000min: { _id: 2.0 }max: { _id: 3.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:00.308 [conn4] request split points lookup for chunk foo.hashBar { : 2.0 } -->> { : 3.0 }
m30999| Fri Feb 22 11:18:00.342 [conn1] chunk not full enough to trigger auto-split { _id: 2.3214257687969 }
m30999| Fri Feb 22 11:18:00.342 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|11||000000000000000000000000min: { _id: 3.0 }max: { _id: 4.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:00.342 [conn4] request split points lookup for chunk foo.hashBar { : 3.0 } -->> { : 4.0 }
m30999| Fri Feb 22 11:18:00.378 [conn1] chunk not full enough to trigger auto-split { _id: 3.321426994948256 }
m30999| Fri Feb 22 11:18:00.378 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|13||000000000000000000000000min: { _id: 4.0 }max: { _id: 5.0 }
dataWritten: 209718 splitThreshold: 1048576 m30000| Fri Feb 22 11:18:00.378 [conn4] request split points lookup for chunk foo.hashBar { : 4.0 } -->> { : 5.0 } m30999| Fri Feb 22 11:18:00.414 [conn1] chunk not full enough to trigger auto-split { _id: 4.321428221099612 } m30999| Fri Feb 22 11:18:00.414 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|15||000000000000000000000000min: { _id: 5.0 }max: { _id: 6.0 } dataWritten: 209718 splitThreshold: 1048576 m30000| Fri Feb 22 11:18:00.414 [conn4] request split points lookup for chunk foo.hashBar { : 5.0 } -->> { : 6.0 } m30999| Fri Feb 22 11:18:00.449 [conn1] chunk not full enough to trigger auto-split { _id: 5.321429447250969 } m30999| Fri Feb 22 11:18:00.493 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|17||000000000000000000000000min: { _id: 6.0 }max: { _id: 7.0 } dataWritten: 209718 splitThreshold: 1048576 m30000| Fri Feb 22 11:18:00.493 [conn4] request split points lookup for chunk foo.hashBar { : 6.0 } -->> { : 7.0 } m30999| Fri Feb 22 11:18:00.527 [conn1] chunk not full enough to trigger auto-split { _id: 6.321430673402324 } m30999| Fri Feb 22 11:18:00.527 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|19||000000000000000000000000min: { _id: 7.0 }max: { _id: 8.0 } dataWritten: 209718 splitThreshold: 1048576 m30000| Fri Feb 22 11:18:00.527 [conn4] request split points lookup for chunk foo.hashBar { : 7.0 } -->> { : 8.0 } m30999| Fri Feb 22 11:18:00.566 [conn1] chunk not full enough to trigger auto-split { _id: 7.321431899553681 } m30999| Fri Feb 22 11:18:00.566 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|21||000000000000000000000000min: { _id: 8.0 }max: { _id: 9.0 } dataWritten: 209718 splitThreshold: 1048576 m30000| Fri Feb 22 11:18:00.566 [conn4] request split points lookup for chunk foo.hashBar { : 8.0 } -->> { : 9.0 } 
m30999| Fri Feb 22 11:18:00.622 [conn1] chunk not full enough to trigger auto-split { _id: 8.321433125705036 } m30999| Fri Feb 22 11:18:00.633 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|22||000000000000000000000000min: { _id: 9.0 }max: { _id: 10.0 } dataWritten: 209718 splitThreshold: 1048576 m30000| Fri Feb 22 11:18:00.633 [conn4] request split points lookup for chunk foo.hashBar { : 9.0 } -->> { : 10.0 } m30999| Fri Feb 22 11:18:00.689 [conn1] chunk not full enough to trigger auto-split { _id: 9.321434351856393 } m30999| Fri Feb 22 11:18:04.783 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:18:04.784 [Balancer] skipping balancing round because balancing is disabled m30999| Fri Feb 22 11:18:05.876 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|5||000000000000000000000000min: { _id: 0.0 }max: { _id: 1.0 } dataWritten: 209718 splitThreshold: 1048576 m30000| Fri Feb 22 11:18:05.877 [conn4] request split points lookup for chunk foo.hashBar { : 0.0 } -->> { : 1.0 } m30999| Fri Feb 22 11:18:05.952 [conn1] chunk not full enough to trigger auto-split { _id: 0.321423316494188 } m30999| Fri Feb 22 11:18:05.952 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|7||000000000000000000000000min: { _id: 1.0 }max: { _id: 2.0 } dataWritten: 209718 splitThreshold: 1048576 m30000| Fri Feb 22 11:18:05.952 [conn4] request split points lookup for chunk foo.hashBar { : 1.0 } -->> { : 2.0 } m30999| Fri Feb 22 11:18:06.026 [conn1] chunk not full enough to trigger auto-split { _id: 1.321424542645544 } m30999| Fri Feb 22 11:18:06.026 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|9||000000000000000000000000min: { _id: 2.0 }max: { _id: 3.0 } dataWritten: 209718 splitThreshold: 1048576 m30000| Fri Feb 22 11:18:06.026 [conn4] request split points lookup for chunk foo.hashBar { : 2.0 } -->> { : 
3.0 } m30999| Fri Feb 22 11:18:06.085 [conn1] chunk not full enough to trigger auto-split { _id: 2.3214257687969 } m30999| Fri Feb 22 11:18:06.092 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|11||000000000000000000000000min: { _id: 3.0 }max: { _id: 4.0 } dataWritten: 209718 splitThreshold: 1048576 m30000| Fri Feb 22 11:18:06.092 [conn4] request split points lookup for chunk foo.hashBar { : 3.0 } -->> { : 4.0 } m30999| Fri Feb 22 11:18:06.169 [conn1] chunk not full enough to trigger auto-split { _id: 3.321426994948256 } m30999| Fri Feb 22 11:18:06.170 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|13||000000000000000000000000min: { _id: 4.0 }max: { _id: 5.0 } dataWritten: 209718 splitThreshold: 1048576 m30000| Fri Feb 22 11:18:06.170 [conn4] request split points lookup for chunk foo.hashBar { : 4.0 } -->> { : 5.0 } m30999| Fri Feb 22 11:18:06.243 [conn1] chunk not full enough to trigger auto-split { _id: 4.321428221099612 } m30999| Fri Feb 22 11:18:06.243 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|15||000000000000000000000000min: { _id: 5.0 }max: { _id: 6.0 } dataWritten: 209718 splitThreshold: 1048576 m30000| Fri Feb 22 11:18:06.243 [conn4] request split points lookup for chunk foo.hashBar { : 5.0 } -->> { : 6.0 } m30999| Fri Feb 22 11:18:06.317 [conn1] chunk not full enough to trigger auto-split { _id: 5.321429447250969 } m30999| Fri Feb 22 11:18:06.370 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|17||000000000000000000000000min: { _id: 6.0 }max: { _id: 7.0 } dataWritten: 209718 splitThreshold: 1048576 m30000| Fri Feb 22 11:18:06.370 [conn4] request split points lookup for chunk foo.hashBar { : 6.0 } -->> { : 7.0 } m30999| Fri Feb 22 11:18:06.443 [conn1] chunk not full enough to trigger auto-split { _id: 6.321430673402324 } m30999| Fri Feb 22 11:18:06.443 [conn1] about to 
initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|19||000000000000000000000000min: { _id: 7.0 }max: { _id: 8.0 } dataWritten: 209718 splitThreshold: 1048576 m30000| Fri Feb 22 11:18:06.443 [conn4] request split points lookup for chunk foo.hashBar { : 7.0 } -->> { : 8.0 } m30999| Fri Feb 22 11:18:06.512 [conn1] chunk not full enough to trigger auto-split { _id: 7.321431899553681 } m30999| Fri Feb 22 11:18:06.513 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|21||000000000000000000000000min: { _id: 8.0 }max: { _id: 9.0 } dataWritten: 209718 splitThreshold: 1048576 m30000| Fri Feb 22 11:18:06.513 [conn4] request split points lookup for chunk foo.hashBar { : 8.0 } -->> { : 9.0 } m30999| Fri Feb 22 11:18:06.581 [conn1] chunk not full enough to trigger auto-split { _id: 8.321433125705036 } m30999| Fri Feb 22 11:18:06.582 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|22||000000000000000000000000min: { _id: 9.0 }max: { _id: 10.0 } dataWritten: 209718 splitThreshold: 1048576 m30000| Fri Feb 22 11:18:06.582 [conn4] request split points lookup for chunk foo.hashBar { : 9.0 } -->> { : 10.0 } m30999| Fri Feb 22 11:18:06.653 [conn1] chunk not full enough to trigger auto-split { _id: 9.321434351856393 } m30999| Fri Feb 22 11:18:10.767 [LockPinger] cluster localhost:29000 pinged successfully at Fri Feb 22 11:18:10 2013 by distributed lock pinger 'localhost:29000/bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838', sleeping for 30000ms m30999| Fri Feb 22 11:18:10.784 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:18:10.784 [Balancer] skipping balancing round because balancing is disabled m30000| Fri Feb 22 11:18:11.657 [FileAllocator] allocating new datafile /data/db/test0/foo.2, filling with zeroes... 
m30000| Fri Feb 22 11:18:11.657 [FileAllocator] done allocating datafile /data/db/test0/foo.2, size: 256MB, took 0 secs
m30999| Fri Feb 22 11:18:12.015 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|5||000000000000000000000000min: { _id: 0.0 }max: { _id: 1.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:12.015 [conn4] request split points lookup for chunk foo.hashBar { : 0.0 } -->> { : 1.0 }
m30000| Fri Feb 22 11:18:12.091 [conn4] max number of requested split points reached (2) before the end of chunk foo.hashBar { : 0.0 } -->> { : 1.0 }
m30000| Fri Feb 22 11:18:12.091 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: 1.0 }, from: "shard0000", splitKeys: [ { _id: 0.321423316494188 } ], shardId: "foo.hashBar-_id_0.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:18:12.092 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753f452997dd3b08a6e6d
m30000| Fri Feb 22 11:18:12.093 [conn4] splitChunk accepted at version 1|22||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:18:12.094 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:18:12-512753f452997dd3b08a6e6e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531892094), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 0.0 }, max: { _id: 1.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 0.321423316494188 }, lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 0.321423316494188 }, max: { _id: 1.0 }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:18:12.094 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:18:12.095 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 14 version: 1|24||512753d414c149b7a4b0a7b7 based on: 1|22||512753d414c149b7a4b0a7b7
m30999| Fri Feb 22 11:18:12.095 [conn1] autosplitted foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|5||000000000000000000000000min: { _id: 0.0 }max: { _id: 1.0 } on: { _id: 0.321423316494188 } (splitThreshold 1048576)
m30999| Fri Feb 22 11:18:12.095 [conn1] setShardVersion  shard0000 localhost:30000  foo.hashBar  { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|24, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 14
m30999| Fri Feb 22 11:18:12.096 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), ok: 1.0 }
m30999| Fri Feb 22 11:18:12.096 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|7||000000000000000000000000min: { _id: 1.0 }max: { _id: 2.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:12.096 [conn4] request split points lookup for chunk foo.hashBar { : 1.0 } -->> { : 2.0 }
m30000| Fri Feb 22 11:18:12.169 [conn4] max number of requested split points reached (2) before the end of chunk foo.hashBar { : 1.0 } -->> { : 2.0 }
m30000| Fri Feb 22 11:18:12.169 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 1.0 }, max: { _id: 2.0 }, from: "shard0000", splitKeys: [ { _id: 1.321424542645544 } ], shardId: "foo.hashBar-_id_1.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:18:12.170 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753f452997dd3b08a6e6f
m30000| Fri Feb 22 11:18:12.171 [conn4] splitChunk accepted at version 1|24||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:18:12.172 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:18:12-512753f452997dd3b08a6e70", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531892172), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 1.0 }, max: { _id: 2.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1.0 }, max: { _id: 1.321424542645544 }, lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 1.321424542645544 }, max: { _id: 2.0 }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:18:12.175 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:18:12.176 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 15 version: 1|26||512753d414c149b7a4b0a7b7 based on: 1|24||512753d414c149b7a4b0a7b7
m30999| Fri Feb 22 11:18:12.177 [conn1] autosplitted foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|7||000000000000000000000000min: { _id: 1.0 }max: { _id: 2.0 } on: { _id: 1.321424542645544 } (splitThreshold 1048576)
m30999| Fri Feb 22 11:18:12.177 [conn1] setShardVersion  shard0000 localhost:30000  foo.hashBar  { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|26, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 15
m30999| Fri Feb 22 11:18:12.177 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), ok: 1.0 }
m30999| Fri Feb 22 11:18:12.177 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|9||000000000000000000000000min: { _id: 2.0 }max: { _id: 3.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:12.177 [conn4] request split points lookup for chunk foo.hashBar { : 2.0 } -->> { : 3.0 }
m30000| Fri Feb 22 11:18:12.251 [conn4] max number of requested split points reached (2) before the end of chunk foo.hashBar { : 2.0 } -->> { : 3.0 }
m30000| Fri Feb 22 11:18:12.251 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 2.0 }, max: { _id: 3.0 }, from: "shard0000", splitKeys: [ { _id: 2.3214257687969 } ], shardId: "foo.hashBar-_id_2.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:18:12.252 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753f452997dd3b08a6e71
m30000| Fri Feb 22 11:18:12.253 [conn4] splitChunk accepted at version 1|26||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:18:12.254 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:18:12-512753f452997dd3b08a6e72", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531892254), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 2.0 }, max: { _id: 3.0 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2.0 }, max: { _id: 2.3214257687969 }, lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 2.3214257687969 }, max: { _id: 3.0 }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:18:12.254 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:18:12.255 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 16 version: 1|28||512753d414c149b7a4b0a7b7 based on: 1|26||512753d414c149b7a4b0a7b7
m30999| Fri Feb 22 11:18:12.255 [conn1] autosplitted foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|9||000000000000000000000000min: { _id: 2.0 }max: { _id: 3.0 } on: { _id: 2.3214257687969 } (splitThreshold 1048576)
m30999| Fri Feb 22 11:18:12.255 [conn1] setShardVersion  shard0000 localhost:30000  foo.hashBar  { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|28, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 16
m30999| Fri Feb 22 11:18:12.255 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), ok: 1.0 }
m30999| Fri Feb 22 11:18:12.255 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|11||000000000000000000000000min: { _id: 3.0 }max: { _id: 4.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:12.256 [conn4] request split points lookup for chunk foo.hashBar { : 3.0 } -->> { : 4.0 }
m30000| Fri Feb 22 11:18:12.330 [conn4] max number of requested split points reached (2) before the end of chunk foo.hashBar { : 3.0 } -->> { : 4.0 }
m30000| Fri Feb 22 11:18:12.330 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 3.0 }, max: { _id: 4.0 }, from: "shard0000", splitKeys: [ { _id: 3.321426994948256 } ], shardId: "foo.hashBar-_id_3.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:18:12.331 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753f452997dd3b08a6e73
m30000| Fri Feb 22 11:18:12.332 [conn4] splitChunk accepted at version 1|28||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:18:12.333 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:18:12-512753f452997dd3b08a6e74", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531892333), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 3.0 }, max: { _id: 4.0 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 3.0 }, max: { _id: 3.321426994948256 }, lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 3.321426994948256 }, max: { _id: 4.0 }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:18:12.333 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:18:12.334 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 17 version: 1|30||512753d414c149b7a4b0a7b7 based on: 1|28||512753d414c149b7a4b0a7b7
m30999| Fri Feb 22 11:18:12.334 [conn1] autosplitted foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|11||000000000000000000000000min: { _id: 3.0 }max: { _id: 4.0 } on: { _id: 3.321426994948256 } (splitThreshold 1048576)
m30999| Fri Feb 22 11:18:12.334 [conn1] setShardVersion  shard0000 localhost:30000  foo.hashBar  { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|30, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 17
m30999| Fri Feb 22 11:18:12.335 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), ok: 1.0 }
m30999| Fri Feb 22 11:18:12.335 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|13||000000000000000000000000min: { _id: 4.0 }max: { _id: 5.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:12.335 [conn4] request split points lookup for chunk foo.hashBar { : 4.0 } -->> { : 5.0 }
m30000| Fri Feb 22 11:18:12.409 [conn4] max number of requested split points reached (2) before the end of chunk foo.hashBar { : 4.0 } -->> { : 5.0 }
m30000| Fri Feb 22 11:18:12.409 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 4.0 }, max: { _id: 5.0 }, from: "shard0000", splitKeys: [ { _id: 4.321428221099612 } ], shardId: "foo.hashBar-_id_4.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:18:12.410 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753f452997dd3b08a6e75
m30000| Fri Feb 22 11:18:12.411 [conn4] splitChunk accepted at version 1|30||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:18:12.411 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:18:12-512753f452997dd3b08a6e76", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531892411), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 4.0 }, max: { _id: 5.0 }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 4.0 }, max: { _id: 4.321428221099612 }, lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 4.321428221099612 }, max: { _id: 5.0 }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:18:12.411 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:18:12.412 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 18 version: 1|32||512753d414c149b7a4b0a7b7 based on: 1|30||512753d414c149b7a4b0a7b7
m30999| Fri Feb 22 11:18:12.412 [conn1] autosplitted foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|13||000000000000000000000000min: { _id: 4.0 }max: { _id: 5.0 } on: { _id: 4.321428221099612 } (splitThreshold 1048576)
m30999| Fri Feb 22 11:18:12.412 [conn1] setShardVersion  shard0000 localhost:30000  foo.hashBar  { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|32, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 18
m30999| Fri Feb 22 11:18:12.413 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), ok: 1.0 }
m30999| Fri Feb 22 11:18:12.413 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|15||000000000000000000000000min: { _id: 5.0 }max: { _id: 6.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:12.413 [conn4] request split points lookup for chunk foo.hashBar { : 5.0 } -->> { : 6.0 }
m30000| Fri Feb 22 11:18:12.482 [conn4] max number of requested split points reached (2) before the end of chunk foo.hashBar { : 5.0 } -->> { : 6.0 }
m30000| Fri Feb 22 11:18:12.482 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 5.0 }, max: { _id: 6.0 }, from: "shard0000", splitKeys: [ { _id: 5.321429447250969 } ], shardId: "foo.hashBar-_id_5.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:18:12.483 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753f452997dd3b08a6e77
m30000| Fri Feb 22 11:18:12.484 [conn4] splitChunk accepted at version 1|32||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:18:12.485 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:18:12-512753f452997dd3b08a6e78", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531892485), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 5.0 }, max: { _id: 6.0 }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 5.0 }, max: { _id: 5.321429447250969 }, lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 5.321429447250969 }, max: { _id: 6.0 }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:18:12.485 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:18:12.486 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 19 version: 1|34||512753d414c149b7a4b0a7b7 based on: 1|32||512753d414c149b7a4b0a7b7
m30999| Fri Feb 22 11:18:12.486 [conn1] autosplitted foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|15||000000000000000000000000min: { _id: 5.0 }max: { _id: 6.0 } on: { _id: 5.321429447250969 } (splitThreshold 1048576)
m30999| Fri Feb 22 11:18:12.486 [conn1] setShardVersion  shard0000 localhost:30000  foo.hashBar  { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|34, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 19
m30999| Fri Feb 22 11:18:12.486 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), ok: 1.0 }
m30999| Fri Feb 22 11:18:12.532 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|17||000000000000000000000000min: { _id: 6.0 }max: { _id: 7.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:12.532 [conn4] request split points lookup for chunk foo.hashBar { : 6.0 } -->> { : 7.0 }
m30000| Fri Feb 22 11:18:12.603 [conn4] max number of requested split points reached (2) before the end of chunk foo.hashBar { : 6.0 } -->> { : 7.0 }
m30000| Fri Feb 22 11:18:12.603 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 6.0 }, max: { _id: 7.0 }, from: "shard0000", splitKeys: [ { _id: 6.321430673402324 } ], shardId: "foo.hashBar-_id_6.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:18:12.604 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753f452997dd3b08a6e79
m30000| Fri Feb 22 11:18:12.605 [conn4] splitChunk accepted at version 1|34||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:18:12.605 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:18:12-512753f452997dd3b08a6e7a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531892605), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 6.0 }, max: { _id: 7.0 }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 6.0 }, max: { _id: 6.321430673402324 }, lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 6.321430673402324 }, max: { _id: 7.0 }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:18:12.606 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:18:12.607 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 20 version: 1|36||512753d414c149b7a4b0a7b7 based on: 1|34||512753d414c149b7a4b0a7b7
m30999| Fri Feb 22 11:18:12.607 [conn1] autosplitted foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|17||000000000000000000000000min: { _id: 6.0 }max: { _id: 7.0 } on: { _id: 6.321430673402324 } (splitThreshold 1048576)
m30999| Fri Feb 22 11:18:12.607 [conn1] setShardVersion shard0000 localhost:30000 foo.hashBar { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|36, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 20
m30999| Fri Feb 22 11:18:12.607 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), ok: 1.0 }
m30999| Fri Feb 22 11:18:12.607 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|19||000000000000000000000000min: { _id: 7.0 }max: { _id: 8.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:12.607 [conn4] request split points lookup for chunk foo.hashBar { : 7.0 } -->> { : 8.0 }
m30000| Fri Feb 22 11:18:12.687 [conn4] max number of requested split points reached (2) before the end of chunk foo.hashBar { : 7.0 } -->> { : 8.0 }
m30000| Fri Feb 22 11:18:12.687 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 7.0 }, max: { _id: 8.0 }, from: "shard0000", splitKeys: [ { _id: 7.321431899553681 } ], shardId: "foo.hashBar-_id_7.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:18:12.688 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753f452997dd3b08a6e7b
m30000| Fri Feb 22 11:18:12.689 [conn4] splitChunk accepted at version 1|36||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:18:12.689 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:18:12-512753f452997dd3b08a6e7c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531892689), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 7.0 }, max: { _id: 8.0 }, lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 7.0 }, max: { _id: 7.321431899553681 }, lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 7.321431899553681 }, max: { _id: 8.0 }, lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:18:12.690 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:18:12.690 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 21 version: 1|38||512753d414c149b7a4b0a7b7 based on: 1|36||512753d414c149b7a4b0a7b7
m30999| Fri Feb 22 11:18:12.691 [conn1] autosplitted foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|19||000000000000000000000000min: { _id: 7.0 }max: { _id: 8.0 } on: { _id: 7.321431899553681 } (splitThreshold 1048576)
m30999| Fri Feb 22 11:18:12.691 [conn1] setShardVersion shard0000 localhost:30000 foo.hashBar { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|38, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 21
m30999| Fri Feb 22 11:18:12.691 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), ok: 1.0 }
m30999| Fri Feb 22 11:18:12.691 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|21||000000000000000000000000min: { _id: 8.0 }max: { _id: 9.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:12.691 [conn4] request split points lookup for chunk foo.hashBar { : 8.0 } -->> { : 9.0 }
m30000| Fri Feb 22 11:18:12.764 [conn4] max number of requested split points reached (2) before the end of chunk foo.hashBar { : 8.0 } -->> { : 9.0 }
m30000| Fri Feb 22 11:18:12.764 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 8.0 }, max: { _id: 9.0 }, from: "shard0000", splitKeys: [ { _id: 8.321433125705036 } ], shardId: "foo.hashBar-_id_8.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:18:12.765 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753f452997dd3b08a6e7d
m30000| Fri Feb 22 11:18:12.765 [conn4] splitChunk accepted at version 1|38||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:18:12.766 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:18:12-512753f452997dd3b08a6e7e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531892766), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 8.0 }, max: { _id: 9.0 }, lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 8.0 }, max: { _id: 8.321433125705036 }, lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 8.321433125705036 }, max: { _id: 9.0 }, lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:18:12.766 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:18:12.767 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 22 version: 1|40||512753d414c149b7a4b0a7b7 based on: 1|38||512753d414c149b7a4b0a7b7
m30999| Fri Feb 22 11:18:12.767 [conn1] autosplitted foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|21||000000000000000000000000min: { _id: 8.0 }max: { _id: 9.0 } on: { _id: 8.321433125705036 } (splitThreshold 1048576)
m30999| Fri Feb 22 11:18:12.767 [conn1] setShardVersion shard0000 localhost:30000 foo.hashBar { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|40, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 22
m30999| Fri Feb 22 11:18:12.768 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), ok: 1.0 }
m30999| Fri Feb 22 11:18:16.326 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|22||000000000000000000000000min: { _id: 9.0 }max: { _id: 10.0 } dataWritten: 209717 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:16.326 [conn4] request split points lookup for chunk foo.hashBar { : 9.0 } -->> { : 10.0 }
m30000| Fri Feb 22 11:18:16.398 [conn4] max number of requested split points reached (2) before the end of chunk foo.hashBar { : 9.0 } -->> { : 10.0 }
m30000| Fri Feb 22 11:18:16.398 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 9.0 }, max: { _id: 10.0 }, from: "shard0000", splitKeys: [ { _id: 9.321434351856393 } ], shardId: "foo.hashBar-_id_9.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:18:16.400 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753f852997dd3b08a6e7f
m30000| Fri Feb 22 11:18:16.410 [conn4] splitChunk accepted at version 1|40||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:18:16.411 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:18:16-512753f852997dd3b08a6e80", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531896411), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 9.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 9.0 }, max: { _id: 9.321434351856393 }, lastmod: Timestamp 1000|41, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 9.321434351856393 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:18:16.411 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:18:16.412 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 23 version: 1|42||512753d414c149b7a4b0a7b7 based on: 1|40||512753d414c149b7a4b0a7b7
m30999| Fri Feb 22 11:18:16.412 [conn1] autosplitted foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|22||000000000000000000000000min: { _id: 9.0 }max: { _id: 10.0 } on: { _id: 9.321434351856393 } (splitThreshold 1048576)
m30999| Fri Feb 22 11:18:16.412 [conn1] setShardVersion shard0000 localhost:30000 foo.hashBar { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|42, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 23
m30999| Fri Feb 22 11:18:16.413 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), ok: 1.0 }
m30999| Fri Feb 22 11:18:16.413 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|24||000000000000000000000000min: { _id: 0.321423316494188 }max: { _id: 1.0 } dataWritten: 209717 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:16.413 [conn4] request split points lookup for chunk foo.hashBar { : 0.321423316494188 } -->> { : 1.0 }
m30999| Fri Feb 22 11:18:16.479 [conn1] chunk not full enough to trigger auto-split { _id: 0.6428466329883761 }
m30999| Fri Feb 22 11:18:16.480 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|26||000000000000000000000000min: { _id: 1.321424542645544 }max: { _id: 2.0 } dataWritten: 209717 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:16.480 [conn4] request split points lookup for chunk foo.hashBar { : 1.321424542645544 } -->> { : 2.0 }
m30999| Fri Feb 22 11:18:16.544 [conn1] chunk not full enough to trigger auto-split { _id: 1.642847859139732 }
m30999| Fri Feb 22 11:18:16.544 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|28||000000000000000000000000min: { _id: 2.3214257687969 }max: { _id: 3.0 } dataWritten: 209717 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:16.545 [conn4] request split points lookup for chunk foo.hashBar { : 2.3214257687969 } -->> { : 3.0 }
m30999| Fri Feb 22 11:18:16.605 [conn1] chunk not full enough to trigger auto-split { _id: 2.642849085291088 }
m30999| Fri Feb 22 11:18:16.606 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|30||000000000000000000000000min: { _id: 3.321426994948256 }max: { _id: 4.0 } dataWritten: 209717 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:16.606 [conn4] request split points lookup for chunk foo.hashBar { : 3.321426994948256 } -->> { : 4.0 }
m30999| Fri Feb 22 11:18:16.672 [conn1] chunk not full enough to trigger auto-split { _id: 3.642850311442444 }
m30999| Fri Feb 22 11:18:16.672 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|32||000000000000000000000000min: { _id: 4.321428221099612 }max: { _id: 5.0 } dataWritten: 209717 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:16.673 [conn4] request split points lookup for chunk foo.hashBar { : 4.321428221099612 } -->> { : 5.0 }
m30999| Fri Feb 22 11:18:16.737 [conn1] chunk not full enough to trigger auto-split { _id: 4.642851537593801 }
m30999| Fri Feb 22 11:18:16.780 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|34||000000000000000000000000min: { _id: 5.321429447250969 }max: { _id: 6.0 } dataWritten: 209717 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:16.780 [conn4] request split points lookup for chunk foo.hashBar { : 5.321429447250969 } -->> { : 6.0 }
m30999| Fri Feb 22 11:18:16.785 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:18:16.785 [Balancer] skipping balancing round because balancing is disabled
m30999| Fri Feb 22 11:18:16.842 [conn1] chunk not full enough to trigger auto-split { _id: 5.642852763745156 }
m30999| Fri Feb 22 11:18:16.843 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|36||000000000000000000000000min: { _id: 6.321430673402324 }max: { _id: 7.0 } dataWritten: 209717 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:16.843 [conn4] request split points lookup for chunk foo.hashBar { : 6.321430673402324 } -->> { : 7.0 }
m30999| Fri Feb 22 11:18:16.904 [conn1] chunk not full enough to trigger auto-split { _id: 6.642853989896513 }
m30999| Fri Feb 22 11:18:16.904 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|38||000000000000000000000000min: { _id: 7.321431899553681 }max: { _id: 8.0 } dataWritten: 209717 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:16.904 [conn4] request split points lookup for chunk foo.hashBar { : 7.321431899553681 } -->> { : 8.0 }
m30999| Fri Feb 22 11:18:16.968 [conn1] chunk not full enough to trigger auto-split { _id: 7.642855216047869 }
m30999| Fri Feb 22 11:18:20.884 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|40||000000000000000000000000min: { _id: 8.321433125705036 }max: { _id: 9.0 } dataWritten: 209727 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:20.885 [conn4] request split points lookup for chunk foo.hashBar { : 8.321433125705036 } -->> { : 9.0 }
m30000| Fri Feb 22 11:18:20.960 [conn4] max number of requested split points reached (2) before the end of chunk foo.hashBar { : 8.321433125705036 } -->> { : 9.0 }
m30000| Fri Feb 22 11:18:20.960 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 8.321433125705036 }, max: { _id: 9.0 }, from: "shard0000", splitKeys: [ { _id: 8.642856442199225 } ], shardId: "foo.hashBar-_id_8.321433125705036", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:18:20.962 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753fc52997dd3b08a6e81
m30000| Fri Feb 22 11:18:20.963 [conn4] splitChunk accepted at version 1|42||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:18:20.963 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:18:20-512753fc52997dd3b08a6e82", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531900963), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 8.321433125705036 }, max: { _id: 9.0 }, lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 8.321433125705036 }, max: { _id: 8.642856442199225 }, lastmod: Timestamp 1000|43, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 8.642856442199225 }, max: { _id: 9.0 }, lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:18:20.964 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:18:20.964 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 24 version: 1|44||512753d414c149b7a4b0a7b7 based on: 1|42||512753d414c149b7a4b0a7b7
m30999| Fri Feb 22 11:18:20.965 [conn1] autosplitted foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|40||000000000000000000000000min: { _id: 8.321433125705036 }max: { _id: 9.0 } on: { _id: 8.642856442199225 } (splitThreshold 1048576)
m30999| Fri Feb 22 11:18:20.965 [conn1] setShardVersion shard0000 localhost:30000 foo.hashBar { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|44, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 24
m30999| Fri Feb 22 11:18:20.965 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), ok: 1.0 }
----
Inserts completed...
----
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("512753d414c149b7a4b0a7b3") }
  shards:
	{ "_id" : "shard0000", "host" : "localhost:30000" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "foo", "partitioned" : true, "primary" : "shard0000" }
		foo.hashBar
			shard key: { "_id" : 1 }
			chunks:
				shard0000	23
			too many chunks to print, use verbose if you want to force print
m30999| Fri Feb 22 11:18:20.978 [conn1] warning: mongos collstats doesn't know about: systemFlags
m30999| Fri Feb 22 11:18:20.978 [conn1] warning: mongos collstats doesn't know about: userFlags
{
	"sharded" : true,
	"ns" : "foo.hashBar",
	"count" : 815560,
	"numExtents" : 8,
	"size" : 16311224,
	"storageSize" : 37797888,
	"totalIndexSize" : 39277504,
	"indexSizes" : { "_id_" : 39277504 },
	"avgObjSize" : 20.000029427632548,
	"nindexes" : 1,
	"nchunks" : 23,
	"shards" : {
		"shard0000" : {
			"ns" : "foo.hashBar",
			"count" : 815560,
			"size" : 16311224,
			"avgObjSize" : 20.000029427632548,
			"storageSize" : 37797888,
			"numExtents" : 8,
			"nindexes" : 1,
			"lastExtentSize" : 15290368,
			"paddingFactor" : 1,
			"systemFlags" : 1,
			"userFlags" : 0,
			"totalIndexSize" : 39277504,
			"indexSizes" : { "_id_" : 39277504 },
			"ok" : 1
		}
	},
	"ok" : 1
}
----
DONE!
----
m30999| Fri Feb 22 11:18:20.979 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Fri Feb 22 11:18:20.992 [conn4] end connection 127.0.0.1:63773 (3 connections now open)
m29000| Fri Feb 22 11:18:20.992 [conn4] end connection 127.0.0.1:64791 (7 connections now open)
m29000| Fri Feb 22 11:18:20.992 [conn6] end connection 127.0.0.1:51914 (7 connections now open)
m30000| Fri Feb 22 11:18:20.992 [conn3] end connection 127.0.0.1:61319 (3 connections now open)
m29000| Fri Feb 22 11:18:20.992 [conn5] end connection 127.0.0.1:64925 (7 connections now open)
m29000| Fri Feb 22 11:18:20.992 [conn3] end connection 127.0.0.1:54691 (7 connections now open)
Fri Feb 22 11:18:21.979 shell: stopped mongo program on port 30999
m30000| Fri Feb 22 11:18:21.980 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Fri Feb 22 11:18:21.980 [interruptThread] now exiting
m30000| Fri Feb 22 11:18:21.980 dbexit:
m30000| Fri Feb 22 11:18:21.980 [interruptThread] shutdown: going to close listening sockets...
m30000| Fri Feb 22 11:18:21.980 [interruptThread] closing listening socket: 12
m30000| Fri Feb 22 11:18:21.980 [interruptThread] closing listening socket: 13
m30000| Fri Feb 22 11:18:21.980 [interruptThread] closing listening socket: 14
m30000| Fri Feb 22 11:18:21.980 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Fri Feb 22 11:18:21.980 [interruptThread] shutdown: going to flush diaglog...
m30000| Fri Feb 22 11:18:21.980 [interruptThread] shutdown: going to close sockets...
m30000| Fri Feb 22 11:18:21.980 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Fri Feb 22 11:18:21.980 [interruptThread] shutdown: lock for final commit...
m30000| Fri Feb 22 11:18:21.980 [interruptThread] shutdown: final commit...
m30000| Fri Feb 22 11:18:21.980 [conn1] end connection 127.0.0.1:40147 (1 connection now open)
m29000| Fri Feb 22 11:18:21.980 [conn7] end connection 127.0.0.1:47773 (3 connections now open)
m29000| Fri Feb 22 11:18:21.980 [conn8] end connection 127.0.0.1:39233 (3 connections now open)
m30000| Fri Feb 22 11:18:22.034 [interruptThread] shutdown: closing all files...
m30000| Fri Feb 22 11:18:22.049 [interruptThread] closeAllFiles() finished
m30000| Fri Feb 22 11:18:22.049 [interruptThread] journalCleanup...
m30000| Fri Feb 22 11:18:22.049 [interruptThread] removeJournalFiles
m30000| Fri Feb 22 11:18:22.050 dbexit: really exiting now
Fri Feb 22 11:18:22.980 shell: stopped mongo program on port 30000
m29000| Fri Feb 22 11:18:22.980 got signal 15 (Terminated), will terminate after current cmd ends
m29000| Fri Feb 22 11:18:22.980 [interruptThread] now exiting
m29000| Fri Feb 22 11:18:22.980 dbexit:
m29000| Fri Feb 22 11:18:22.980 [interruptThread] shutdown: going to close listening sockets...
m29000| Fri Feb 22 11:18:22.980 [interruptThread] closing listening socket: 15
m29000| Fri Feb 22 11:18:22.980 [interruptThread] closing listening socket: 17
m29000| Fri Feb 22 11:18:22.980 [interruptThread] removing socket file: /tmp/mongodb-29000.sock
m29000| Fri Feb 22 11:18:22.980 [interruptThread] shutdown: going to flush diaglog...
m29000| Fri Feb 22 11:18:22.980 [interruptThread] shutdown: going to close sockets...
m29000| Fri Feb 22 11:18:22.980 [interruptThread] shutdown: waiting for fs preallocator...
m29000| Fri Feb 22 11:18:22.980 [interruptThread] shutdown: lock for final commit...
m29000| Fri Feb 22 11:18:22.980 [interruptThread] shutdown: final commit...
m29000| Fri Feb 22 11:18:22.981 [conn1] end connection 127.0.0.1:52563 (1 connection now open)
m29000| Fri Feb 22 11:18:22.981 [conn2] end connection 127.0.0.1:55728 (1 connection now open)
m29000| Fri Feb 22 11:18:22.990 [interruptThread] shutdown: closing all files...
m29000| Fri Feb 22 11:18:22.991 [interruptThread] closeAllFiles() finished
m29000| Fri Feb 22 11:18:22.991 [interruptThread] journalCleanup...
m29000| Fri Feb 22 11:18:22.991 [interruptThread] removeJournalFiles
m29000| Fri Feb 22 11:18:22.991 dbexit: really exiting now
Fri Feb 22 11:18:23.980 shell: stopped mongo program on port 29000
*** ShardingTest test completed successfully in 43.699 seconds ***
Fri Feb 22 11:18:24.013 [conn4] end connection 127.0.0.1:38033 (0 connections now open)
43.8797 seconds
Fri Feb 22 11:18:24.036 [initandlisten] connection accepted from 127.0.0.1:48800 #5 (1 connection now open)
Fri Feb 22 11:18:24.037 [conn5] end connection 127.0.0.1:48800 (0 connections now open)

*******************************************
Test : background.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/background.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/background.js";TestData.testFile = "background.js";TestData.testName = "background";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:18:24 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:18:24.203 [initandlisten] connection accepted from 127.0.0.1:42334 #6 (1 connection now open)
null
Fri Feb 22 11:18:24.209 [conn6] CMD: drop test.bg1
Fri Feb 22 11:18:24.210 [initandlisten] connection accepted from 127.0.0.1:40872 #7 (2 connections now open)
Fri Feb 22 11:18:24.211 [FileAllocator] allocating new datafile /data/db/sconsTests/test.ns, filling with zeroes...
Fri Feb 22 11:18:24.211 [FileAllocator] done allocating datafile /data/db/sconsTests/test.ns, size: 16MB, took 0 secs
Fri Feb 22 11:18:24.211 [FileAllocator] allocating new datafile /data/db/sconsTests/test.0, filling with zeroes...
Fri Feb 22 11:18:24.211 [FileAllocator] done allocating datafile /data/db/sconsTests/test.0, size: 64MB, took 0 secs
Fri Feb 22 11:18:24.211 [FileAllocator] allocating new datafile /data/db/sconsTests/test.1, filling with zeroes...
Fri Feb 22 11:18:24.212 [FileAllocator] done allocating datafile /data/db/sconsTests/test.1, size: 128MB, took 0 secs
Fri Feb 22 11:18:24.215 [conn6] build index test.bg1 { _id: 1 }
Fri Feb 22 11:18:24.216 [conn6] build index done. scanned 0 total records. 0.001 secs
0
10000
20000
30000
40000
50000
60000
70000
80000
90000
Fri Feb 22 11:18:27.942 [conn7] build index test.bg1 { i: 1.0 } background
{ "inprog" : [ { "opid" : 866440, "active" : true, "secs_running" : 0, "op" : "insert", "ns" : "test.system.indexes", "insert" : { "v" : 1, "key" : { "i" : 1 }, "ns" : "test.bg1", "name" : "i_1", "background" : true }, "client" : "127.0.0.1:40872", "desc" : "conn7", "threadId" : "0x12", "connectionId" : 7, "waitingForLock" : false, "msg" : "bg index build Background Index Build Progress: 128/99797 0%", "progress" : { "done" : 128, "total" : 99797 }, "numYields" : 3, "lockStats" : { "timeLockedMicros" : { "r" : NumberLong(0), "w" : NumberLong(11375) }, "timeAcquiringMicros" : { "r" : NumberLong(0), "w" : NumberLong(25592) } } } ] }
0
{ "inprog" : [ { "opid" : 866440, "active" : true, "secs_running" : 0, "op" : "insert", "ns" : "test.system.indexes", "insert" : { "v" : 1, "key" : { "i" : 1 }, "ns" : "test.bg1", "name" : "i_1", "background" : true }, "client" : "127.0.0.1:40872", "desc" : "conn7", "threadId" : "0x12", "connectionId" : 7, "waitingForLock" : false, "msg" : "bg index build Background Index Build Progress: 3840/109947 3%", "progress" : { "done" : 3840, "total" : 109947 }, "numYields" : 49, "lockStats" : { "timeLockedMicros" : { "r" : NumberLong(0), "w" : NumberLong(110018) }, "timeAcquiringMicros" : { "r" : NumberLong(0), "w" : NumberLong(526952) } } } ] }
10000
{ "inprog" : [ { "opid" : 866440, "active" : true, "secs_running" : 1, "op" : "insert", "ns" : "test.system.indexes", "insert" : { "v" : 1, "key" : { "i" : 1 }, "ns" : "test.bg1", "name" : "i_1", "background" : true }, "client" : "127.0.0.1:40872", "desc" : "conn7", "threadId" : "0x12", "connectionId" : 7, "waitingForLock" : false, "msg" : "bg index build Background Index Build Progress: 9856/119777 8%", "progress" : { "done" : 9856, "total" : 119777 }, "numYields" : 96, "lockStats" : { "timeLockedMicros" : { "r" : NumberLong(0), "w" : NumberLong(287248) }, "timeAcquiringMicros" : { "r" : NumberLong(0), "w" : NumberLong(1104551) } } } ] }
20000
{ "inprog" : [ { "opid" : 866440, "active" : true, "secs_running" : 1, "op" : "insert", "ns" : "test.system.indexes", "insert" : { "v" : 1, "key" : { "i" : 1 }, "ns" : "test.bg1", "name" : "i_1", "background" : true }, "client" : "127.0.0.1:40872", "desc" : "conn7", "threadId" : "0x12", "connectionId" : 7, "waitingForLock" : false, "msg" : "bg index build Background Index Build Progress: 15360/129914 11%", "progress" : { "done" : 15360, "total" : 129914 }, "numYields" : 139, "lockStats" : { "timeLockedMicros" : { "r" : NumberLong(0), "w" : NumberLong(441567) }, "timeAcquiringMicros" : { "r" : NumberLong(0), "w" : NumberLong(1632933) } } } ] }
30000
Fri Feb 22 11:18:30.007 [conn7] Background Index Build Progress: 19800/137606 14%
{ "inprog" : [ { "opid" : 866440, "active" : true, "secs_running" : 2, "op" : "insert", "ns" : "test.system.indexes", "insert" : { "v" : 1, "key" : { "i" : 1 }, "ns" : "test.bg1", "name" : "i_1", "background" : true }, "client" : "127.0.0.1:40872", "desc" : "conn7", "threadId" : "0x12", "connectionId" : 7, "waitingForLock" : false, "msg" : "bg index build Background Index Build Progress: 21120/139916 15%", "progress" : { "done" : 21120, "total" : 139916 }, "numYields" : 184, "lockStats" : { "timeLockedMicros" : { "r" : NumberLong(0), "w" : NumberLong(605477) }, "timeAcquiringMicros" : { "r" : NumberLong(0), "w" : NumberLong(2185753) } } } ] }
40000
{ "inprog" : [ { "opid" : 866440, "active" : true, "secs_running" : 2, "op" : "insert", "ns" : "test.system.indexes", "insert" : { "v" : 1, "key" : { "i" : 1 }, "ns" : "test.bg1", "name" : "i_1", "background" : true }, "client" : "127.0.0.1:40872", "desc" : "conn7", "threadId" : "0x12", "connectionId" : 7, "waitingForLock" : false, "msg" : "bg index build Background Index Build Progress: 27520/149991 18%", "progress" : { "done" : 27520, "total" : 149991 }, "numYields" : 234, "lockStats" : { "timeLockedMicros" : { "r" : NumberLong(0), "w" : NumberLong(766833) }, "timeAcquiringMicros" : { "r" : NumberLong(0), "w" : NumberLong(2789378) } } } ] }
50000
{ "inprog" : [ { "opid" : 866440, "active" : true, "secs_running" : 3, "op" : "insert", "ns" : "test.system.indexes", "insert" : { "v" : 1, "key" : { "i" : 1 }, "ns" : "test.bg1", "name" : "i_1", "background" : true }, "client" : "127.0.0.1:40872", "desc" : "conn7", "threadId" : "0x12", "connectionId" : 7, "waitingForLock" : false, "msg" : "bg index build Background Index Build Progress: 36096/159863 22%", "progress" : { "done" : 36096, "total" : 159863 }, "numYields" : 301, "lockStats" : { "timeLockedMicros" : { "r" : NumberLong(0), "w" : NumberLong(912212) }, "timeAcquiringMicros" : { "r" : NumberLong(0), "w" : NumberLong(3542167) } } } ] }
60000
{ "inprog" : [ { "opid" : 866440, "active" : true, "secs_running" : 4, "op" : "insert", "ns" : "test.system.indexes", "insert" : { "v" : 1, "key" : { "i" : 1 }, "ns" : "test.bg1", "name" : "i_1", "background" : true }, "client" : "127.0.0.1:40872", "desc" : "conn7", "threadId" : "0x12", "connectionId" : 7, "waitingForLock" : false, "msg" : "bg index build Background Index Build Progress: 45696/169925 26%", "progress" : { "done" : 45696, "total" : 169925 }, "numYields" : 375, "lockStats" : { "timeLockedMicros" : { "r" : NumberLong(0), "w" : NumberLong(1097449) }, "timeAcquiringMicros" : { "r" : NumberLong(0), "w" : NumberLong(4379971) } } } ] }
70000
{ "inprog" : [ { "opid" : 866440, "active" : true, "secs_running" : 4, "op" : "insert", "ns" : "test.system.indexes", "insert" : { "v" : 1, "key" : { "i" : 1 }, "ns" : "test.bg1", "name" : "i_1", "background" : true }, "client" : "127.0.0.1:40872", "desc" : "conn7", "threadId" : "0x12", "connectionId" : 7, "waitingForLock" : false, "msg" : "bg index build Background Index Build Progress: 52224/179881 29%", "progress" : { "done" : 52224, "total" : 179881 }, "numYields" : 426, "lockStats" : { "timeLockedMicros" : { "r" : NumberLong(0), "w" : NumberLong(1271744) }, "timeAcquiringMicros" : { "r" : NumberLong(0), "w" : NumberLong(4984808) } } } ] }
80000
Fri Feb 22 11:18:33.006 [conn7] Background Index Build Progress: 52900/181201 29%
{ "inprog" : [ { "opid" : 866440, "active" : true, "secs_running" : 5, "op" : "insert", "ns" : "test.system.indexes", "insert" : { "v" : 1, "key" : { "i" : 1 }, "ns" : "test.bg1", "name" : "i_1", "background" : true }, "client" : "127.0.0.1:40872", "desc" : "conn7", "threadId" : "0x12", "connectionId" : 7, "waitingForLock" : false, "msg" : "bg index build Background Index Build Progress: 58112/189870 30%", "progress" : { "done" : 58112, "total" : 189870 }, "numYields" : 472, "lockStats" : { "timeLockedMicros" : { "r" : NumberLong(0), "w" : NumberLong(1445095) }, "timeAcquiringMicros" : { "r" : NumberLong(0), "w" : NumberLong(5550709) } } } ] }
90000
{ "n" : 0, "connectionId" : 6, "err" : null, "ok" : 1 }
{ "inprog" : [ { "opid" : 866440, "active" : true, "secs_running" : 6, "op" : "insert", "ns" : "test.system.indexes", "insert" : { "v" : 1, "key" : { "i" : 1 }, "ns" : "test.bg1", "name" : "i_1", "background" : true }, "client" : "127.0.0.1:40872", "desc" : "conn7", "threadId" : "0x12", "connectionId" : 7, "waitingForLock" : false, "msg" : "bg index build Background Index Build Progress: 63872/199880 31%", "progress" : { "done" : 63872, "total" : 199880 }, "numYields" : 517, "lockStats" : { "timeLockedMicros" : { "r" : NumberLong(0), "w" : NumberLong(1617262) }, "timeAcquiringMicros" : { "r" : NumberLong(0), "w" : NumberLong(6103182) } } } ] }
waiting
waiting
Fri Feb 22 11:18:36.002 [conn7] Background Index Build Progress: 129400/200000 64%
waiting
waiting
waiting
Fri Feb 22 11:18:38.326 [conn7] build index done. scanned 200000 total records. 10.383 secs
Fri Feb 22 11:18:38.326 [conn7] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 558 locks(micros) w:9350693 10384ms
{ "n" : 0, "connectionId" : 7, "err" : null, "ok" : 1 }
Fri Feb 22 11:18:39.072 [conn6] end connection 127.0.0.1:42334 (1 connection now open)
Fri Feb 22 11:18:39.072 [conn7] end connection 127.0.0.1:40872 (1 connection now open)
15.0540 seconds
Fri Feb 22 11:18:39.093 [initandlisten] connection accepted from 127.0.0.1:45824 #8 (1 connection now open)
Fri Feb 22 11:18:39.093 [conn8] end connection 127.0.0.1:45824 (0 connections now open)

*******************************************
Test : balance_repl.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/balance_repl.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/balance_repl.js";TestData.testFile = "balance_repl.js";TestData.testName = "balance_repl";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:18:39 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:18:39.265 [initandlisten] connection accepted from 127.0.0.1:57590 #9 (1 connection now open)
null
Replica set test!
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31100, 31101 ] 31100 number
{ "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31100, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "rs1-rs0", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : true, "pathOpts" : { "testName" : "rs1", "shard" : 0, "node" : 0, "set" : "rs1-rs0" }, "restart" : undefined }
ReplSetTest Starting....
Resetting db path '/data/db/rs1-rs0-0'
Fri Feb 22 11:18:39.279 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31100 --noprealloc --smallfiles --rest --replSet rs1-rs0 --dbpath /data/db/rs1-rs0-0 --nopreallocj --setParameter enableTestCommands=1
m31100| note: noprealloc may hurt performance in many applications
m31100| Fri Feb 22 11:18:39.371 [initandlisten] MongoDB starting : pid=19530 port=31100 dbpath=/data/db/rs1-rs0-0 64-bit host=bs-smartos-x86-64-1.10gen.cc
m31100| Fri Feb 22 11:18:39.371 [initandlisten]
m31100| Fri Feb 22 11:18:39.371 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m31100| Fri Feb 22 11:18:39.371 [initandlisten] **       uses to detect impending page faults.
m31100| Fri Feb 22 11:18:39.371 [initandlisten] **       This may result in slower performance for certain use cases
m31100| Fri Feb 22 11:18:39.371 [initandlisten]
m31100| Fri Feb 22 11:18:39.371 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m31100| Fri Feb 22 11:18:39.371 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m31100| Fri Feb 22 11:18:39.371 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m31100| Fri Feb 22 11:18:39.371 [initandlisten] allocator: system
m31100| Fri Feb 22 11:18:39.371 [initandlisten] options: { dbpath: "/data/db/rs1-rs0-0", noprealloc: true, nopreallocj: true, oplogSize: 40, port: 31100, replSet: "rs1-rs0", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true }
m31100| Fri Feb 22 11:18:39.371 [initandlisten] journal dir=/data/db/rs1-rs0-0/journal
m31100| Fri Feb 22 11:18:39.372 [initandlisten] recover : no journal files present, no recovery needed
m31100| Fri Feb 22 11:18:39.373 [FileAllocator] allocating new datafile /data/db/rs1-rs0-0/local.ns, filling with zeroes...
m31100| Fri Feb 22 11:18:39.373 [FileAllocator] creating directory /data/db/rs1-rs0-0/_tmp
m31100| Fri Feb 22 11:18:39.373 [FileAllocator] done allocating datafile /data/db/rs1-rs0-0/local.ns, size: 16MB, took 0 secs
m31100| Fri Feb 22 11:18:39.373 [FileAllocator] allocating new datafile /data/db/rs1-rs0-0/local.0, filling with zeroes...
m31100| Fri Feb 22 11:18:39.374 [FileAllocator] done allocating datafile /data/db/rs1-rs0-0/local.0, size: 16MB, took 0 secs
m31100| Fri Feb 22 11:18:39.377 [initandlisten] waiting for connections on port 31100
m31100| Fri Feb 22 11:18:39.377 [websvr] admin web console waiting for connections on port 32100
m31100| Fri Feb 22 11:18:39.380 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31100| Fri Feb 22 11:18:39.380 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31100| Fri Feb 22 11:18:39.481 [initandlisten] connection accepted from 127.0.0.1:54051 #1 (1 connection now open)
[ connection to bs-smartos-x86-64-1.10gen.cc:31100 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31100, 31101 ] 31101 number
{ "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31101, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "rs1-rs0", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : true, "pathOpts" : { "testName" : "rs1", "shard" : 0, "node" : 1, "set" : "rs1-rs0" }, "restart" : undefined }
ReplSetTest Starting....
Resetting db path '/data/db/rs1-rs0-1'
Fri Feb 22 11:18:39.489 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31101 --noprealloc --smallfiles --rest --replSet rs1-rs0 --dbpath /data/db/rs1-rs0-1 --nopreallocj --setParameter enableTestCommands=1
m31101| note: noprealloc may hurt performance in many applications
m31101| Fri Feb 22 11:18:39.580 [initandlisten] MongoDB starting : pid=19531 port=31101 dbpath=/data/db/rs1-rs0-1 64-bit host=bs-smartos-x86-64-1.10gen.cc
m31101| Fri Feb 22 11:18:39.581 [initandlisten]
m31101| Fri Feb 22 11:18:39.581 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m31101| Fri Feb 22 11:18:39.581 [initandlisten] **       uses to detect impending page faults.
m31101| Fri Feb 22 11:18:39.581 [initandlisten] **       This may result in slower performance for certain use cases
m31101| Fri Feb 22 11:18:39.581 [initandlisten]
m31101| Fri Feb 22 11:18:39.581 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m31101| Fri Feb 22 11:18:39.581 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m31101| Fri Feb 22 11:18:39.581 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m31101| Fri Feb 22 11:18:39.581 [initandlisten] allocator: system
m31101| Fri Feb 22 11:18:39.581 [initandlisten] options: { dbpath: "/data/db/rs1-rs0-1", noprealloc: true, nopreallocj: true, oplogSize: 40, port: 31101, replSet: "rs1-rs0", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true }
m31101| Fri Feb 22 11:18:39.581 [initandlisten] journal dir=/data/db/rs1-rs0-1/journal
m31101| Fri Feb 22 11:18:39.581 [initandlisten] recover : no journal files present, no recovery needed
m31101| Fri Feb 22 11:18:39.583 [FileAllocator] allocating new datafile /data/db/rs1-rs0-1/local.ns, filling with zeroes...
m31101| Fri Feb 22 11:18:39.583 [FileAllocator] creating directory /data/db/rs1-rs0-1/_tmp
m31101| Fri Feb 22 11:18:39.583 [FileAllocator] done allocating datafile /data/db/rs1-rs0-1/local.ns, size: 16MB, took 0 secs
m31101| Fri Feb 22 11:18:39.583 [FileAllocator] allocating new datafile /data/db/rs1-rs0-1/local.0, filling with zeroes...
m31101| Fri Feb 22 11:18:39.583 [FileAllocator] done allocating datafile /data/db/rs1-rs0-1/local.0, size: 16MB, took 0 secs
m31101| Fri Feb 22 11:18:39.586 [initandlisten] waiting for connections on port 31101
m31101| Fri Feb 22 11:18:39.586 [websvr] admin web console waiting for connections on port 32101
m31101| Fri Feb 22 11:18:39.589 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31101| Fri Feb 22 11:18:39.589 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31101| Fri Feb 22 11:18:39.691 [initandlisten] connection accepted from 127.0.0.1:65148 #1 (1 connection now open)
[ connection to bs-smartos-x86-64-1.10gen.cc:31100, connection to bs-smartos-x86-64-1.10gen.cc:31101 ]
{ "replSetInitiate" : { "_id" : "rs1-rs0", "members" : [ { "_id" : 0, "host" : "bs-smartos-x86-64-1.10gen.cc:31100" }, { "_id" : 1, "host" : "bs-smartos-x86-64-1.10gen.cc:31101" } ] } }
m31100| Fri Feb 22 11:18:39.694 [conn1] replSet replSetInitiate admin command received from client
m31100| Fri Feb 22 11:18:39.695 [conn1] replSet replSetInitiate config object parses ok, 2 members specified
m31100| Fri Feb 22 11:18:39.695 [initandlisten] connection accepted from 165.225.128.186:56471 #2 (2 connections now open)
m31101| Fri Feb 22 11:18:39.696 [initandlisten] connection accepted from 165.225.128.186:34514 #2 (2 connections now open)
m31100| Fri Feb 22 11:18:39.697 [conn1] replSet replSetInitiate all members seem up
m31100| Fri Feb 22 11:18:39.697 [conn1] ******
m31100| Fri Feb 22 11:18:39.697 [conn1] creating replication oplog of size: 40MB...
m31100| Fri Feb 22 11:18:39.697 [FileAllocator] allocating new datafile /data/db/rs1-rs0-0/local.1, filling with zeroes...
m31100| Fri Feb 22 11:18:39.698 [FileAllocator] done allocating datafile /data/db/rs1-rs0-0/local.1, size: 64MB, took 0 secs
m31100| Fri Feb 22 11:18:39.709 [conn2] end connection 165.225.128.186:56471 (1 connection now open)
m31100| Fri Feb 22 11:18:39.710 [conn1] ******
m31100| Fri Feb 22 11:18:39.710 [conn1] replSet info saving a newer config version to local.system.replset
m31100| Fri Feb 22 11:18:39.726 [conn1] replSet saveConfigLocally done
m31100| Fri Feb 22 11:18:39.726 [conn1] replSet replSetInitiate config now saved locally.  Should come online in about a minute.
{ "info" : "Config now saved locally.  Should come online in about a minute.", "ok" : 1 }
m31100| Fri Feb 22 11:18:49.380 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 11:18:49.380 [rsStart] replSet STARTUP2
m31100| Fri Feb 22 11:18:49.380 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is up
m31100| Fri Feb 22 11:18:49.380 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
m31101| Fri Feb 22 11:18:49.589 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 11:18:49.590 [initandlisten] connection accepted from 165.225.128.186:50991 #3 (2 connections now open)
m31101| Fri Feb 22 11:18:49.591 [initandlisten] connection accepted from 165.225.128.186:53042 #3 (3 connections now open)
m31101| Fri Feb 22 11:18:49.591 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31101
m31101| Fri Feb 22 11:18:49.591 [rsStart] replSet got config version 1 from a remote, saving locally
m31101| Fri Feb 22 11:18:49.591 [rsStart] replSet info saving a newer config version to local.system.replset
m31101| Fri Feb 22 11:18:49.593 [rsStart] replSet saveConfigLocally done
m31101| Fri Feb 22 11:18:49.594 [rsStart] replSet STARTUP2
m31101| Fri Feb 22 11:18:49.594 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
m31101| Fri Feb 22 11:18:49.594 [rsSync] ******
m31101| Fri Feb 22 11:18:49.594 [rsSync] creating replication oplog of size: 40MB...
m31101| Fri Feb 22 11:18:49.594 [FileAllocator] allocating new datafile /data/db/rs1-rs0-1/local.1, filling with zeroes...
m31101| Fri Feb 22 11:18:49.595 [FileAllocator] done allocating datafile /data/db/rs1-rs0-1/local.1, size: 64MB, took 0 secs
m31101| Fri Feb 22 11:18:49.604 [conn3] end connection 165.225.128.186:53042 (2 connections now open)
m31101| Fri Feb 22 11:18:49.606 [rsSync] ******
m31101| Fri Feb 22 11:18:49.606 [rsSync] replSet initial sync pending
m31101| Fri Feb 22 11:18:49.606 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31100| Fri Feb 22 11:18:50.381 [rsSync] replSet SECONDARY
m31100| Fri Feb 22 11:18:51.381 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31101 thinks that we are down
m31100| Fri Feb 22 11:18:51.381 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state STARTUP2
m31100| Fri Feb 22 11:18:51.381 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31101 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31100 is electable'
m31101| Fri Feb 22 11:18:51.591 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is up
m31101| Fri Feb 22 11:18:51.591 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state SECONDARY
m31100| Fri Feb 22 11:18:57.382 [rsMgr] replSet info electSelf 0
m31101| Fri Feb 22 11:18:57.382 [conn2] replSet RECOVERING
m31101| Fri Feb 22 11:18:57.382 [conn2] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31100 (0)
m31100| Fri Feb 22 11:18:58.381 [rsMgr] replSet PRIMARY
m31100| Fri Feb 22 11:18:59.382 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state RECOVERING
m31101| Fri Feb 22 11:18:59.592 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state PRIMARY
m31101| Fri Feb 22 11:19:05.606 [rsSync] replSet initial sync pending
m31101| Fri Feb 22 11:19:05.606 [rsSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 11:19:05.607 [initandlisten] connection accepted from 165.225.128.186:57241 #4 (3 connections now open)
m31101| Fri Feb 22 11:19:05.612 [rsSync] build index local.me { _id: 1 }
m31101| Fri Feb 22 11:19:05.615 [rsSync] build index done. scanned 0 total records. 0.002 secs
m31101| Fri Feb 22 11:19:05.616 [rsSync] build index local.replset.minvalid { _id: 1 }
m31101| Fri Feb 22 11:19:05.617 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Fri Feb 22 11:19:05.617 [rsSync] replSet initial sync drop all databases
m31101| Fri Feb 22 11:19:05.617 [rsSync] dropAllDatabasesExceptLocal 1
m31101| Fri Feb 22 11:19:05.617 [rsSync] replSet initial sync clone all databases
m31101| Fri Feb 22 11:19:05.617 [rsSync] replSet initial sync data copy, starting syncup
m31101| Fri Feb 22 11:19:05.617 [rsSync] oplog sync 1 of 3
m31101| Fri Feb 22 11:19:05.618 [rsSync] oplog sync 2 of 3
m31101| Fri Feb 22 11:19:05.618 [rsSync] replSet initial sync building indexes
m31101| Fri Feb 22 11:19:05.618 [rsSync] oplog sync 3 of 3
m31101| Fri Feb 22 11:19:05.618 [rsSync] replSet initial sync finishing up
m31101| Fri Feb 22 11:19:05.626 [rsSync] replSet set minValid=5127540f:b
m31101| Fri Feb 22 11:19:05.630 [rsSync] replSet initial sync done
m31100| Fri Feb 22 11:19:05.631 [conn4] end connection 165.225.128.186:57241 (2 connections now open)
m31101| Fri Feb 22 11:19:06.595 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 11:19:06.595 [initandlisten] connection accepted from 165.225.128.186:62991 #5 (3 connections now open)
m31101| Fri Feb 22 11:19:06.630 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 11:19:06.631 [initandlisten] connection accepted from 165.225.128.186:40786 #6 (4 connections now open)
m31101| Fri Feb 22 11:19:07.632 [rsSync] replSet SECONDARY
m31100| Fri Feb 22 11:19:07.638 [slaveTracking] build index local.slaves { _id: 1 }
m31100| Fri Feb 22 11:19:07.639 [slaveTracking] build index done. scanned 0 total records. 0.001 secs
Replica set test!
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31200, 31201 ] 31200 number
{ "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31200, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "rs1-rs1", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : true, "pathOpts" : { "testName" : "rs1", "shard" : 1, "node" : 0, "set" : "rs1-rs1" }, "restart" : undefined }
ReplSetTest Starting....
Resetting db path '/data/db/rs1-rs1-0'
Fri Feb 22 11:19:07.817 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31200 --noprealloc --smallfiles --rest --replSet rs1-rs1 --dbpath /data/db/rs1-rs1-0 --nopreallocj --setParameter enableTestCommands=1
m31200| note: noprealloc may hurt performance in many applications
m31200| Fri Feb 22 11:19:07.905 [initandlisten] MongoDB starting : pid=19558 port=31200 dbpath=/data/db/rs1-rs1-0 64-bit host=bs-smartos-x86-64-1.10gen.cc
m31200| Fri Feb 22 11:19:07.905 [initandlisten]
m31200| Fri Feb 22 11:19:07.905 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m31200| Fri Feb 22 11:19:07.905 [initandlisten] **       uses to detect impending page faults.
m31200| Fri Feb 22 11:19:07.905 [initandlisten] **       This may result in slower performance for certain use cases
m31200| Fri Feb 22 11:19:07.905 [initandlisten]
m31200| Fri Feb 22 11:19:07.905 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m31200| Fri Feb 22 11:19:07.905 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m31200| Fri Feb 22 11:19:07.905 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m31200| Fri Feb 22 11:19:07.905 [initandlisten] allocator: system
m31200| Fri Feb 22 11:19:07.905 [initandlisten] options: { dbpath: "/data/db/rs1-rs1-0", noprealloc: true, nopreallocj: true, oplogSize: 40, port: 31200, replSet: "rs1-rs1", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true }
m31200| Fri Feb 22 11:19:07.906 [initandlisten] journal dir=/data/db/rs1-rs1-0/journal
m31200| Fri Feb 22 11:19:07.906 [initandlisten] recover : no journal files present, no recovery needed
m31200| Fri Feb 22 11:19:07.907 [FileAllocator] allocating new datafile /data/db/rs1-rs1-0/local.ns, filling with zeroes...
m31200| Fri Feb 22 11:19:07.907 [FileAllocator] creating directory /data/db/rs1-rs1-0/_tmp
m31200| Fri Feb 22 11:19:07.908 [FileAllocator] done allocating datafile /data/db/rs1-rs1-0/local.ns, size: 16MB, took 0 secs
m31200| Fri Feb 22 11:19:07.908 [FileAllocator] allocating new datafile /data/db/rs1-rs1-0/local.0, filling with zeroes...
m31200| Fri Feb 22 11:19:07.908 [FileAllocator] done allocating datafile /data/db/rs1-rs1-0/local.0, size: 16MB, took 0 secs
m31200| Fri Feb 22 11:19:07.911 [initandlisten] waiting for connections on port 31200
m31200| Fri Feb 22 11:19:07.911 [websvr] admin web console waiting for connections on port 32200
m31200| Fri Feb 22 11:19:07.913 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31200| Fri Feb 22 11:19:07.913 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31200| Fri Feb 22 11:19:08.019 [initandlisten] connection accepted from 127.0.0.1:34027 #1 (1 connection now open)
[ connection to bs-smartos-x86-64-1.10gen.cc:31200 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31200, 31201 ] 31201 number
{ "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31201, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "rs1-rs1", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : true, "pathOpts" : { "testName" : "rs1", "shard" : 1, "node" : 1, "set" : "rs1-rs1" }, "restart" : undefined }
ReplSetTest Starting....
Resetting db path '/data/db/rs1-rs1-1'
Fri Feb 22 11:19:08.023 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31201 --noprealloc --smallfiles --rest --replSet rs1-rs1 --dbpath /data/db/rs1-rs1-1 --nopreallocj --setParameter enableTestCommands=1
m31201| note: noprealloc may hurt performance in many applications
m31201| Fri Feb 22 11:19:08.094 [initandlisten] MongoDB starting : pid=19559 port=31201 dbpath=/data/db/rs1-rs1-1 64-bit host=bs-smartos-x86-64-1.10gen.cc
m31201| Fri Feb 22 11:19:08.095 [initandlisten]
m31201| Fri Feb 22 11:19:08.095 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m31201| Fri Feb 22 11:19:08.095 [initandlisten] **       uses to detect impending page faults.
m31201| Fri Feb 22 11:19:08.095 [initandlisten] **       This may result in slower performance for certain use cases
m31201| Fri Feb 22 11:19:08.095 [initandlisten]
m31201| Fri Feb 22 11:19:08.095 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m31201| Fri Feb 22 11:19:08.095 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m31201| Fri Feb 22 11:19:08.095 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m31201| Fri Feb 22 11:19:08.095 [initandlisten] allocator: system
m31201| Fri Feb 22 11:19:08.095 [initandlisten] options: { dbpath: "/data/db/rs1-rs1-1", noprealloc: true, nopreallocj: true, oplogSize: 40, port: 31201, replSet: "rs1-rs1", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true }
m31201| Fri Feb 22 11:19:08.095 [initandlisten] journal dir=/data/db/rs1-rs1-1/journal
m31201| Fri Feb 22 11:19:08.095 [initandlisten] recover : no journal files present, no recovery needed
m31201| Fri Feb 22 11:19:08.096 [FileAllocator] allocating new datafile /data/db/rs1-rs1-1/local.ns, filling with zeroes...
m31201| Fri Feb 22 11:19:08.096 [FileAllocator] creating directory /data/db/rs1-rs1-1/_tmp
m31201| Fri Feb 22 11:19:08.097 [FileAllocator] done allocating datafile /data/db/rs1-rs1-1/local.ns, size: 16MB, took 0 secs
m31201| Fri Feb 22 11:19:08.097 [FileAllocator] allocating new datafile /data/db/rs1-rs1-1/local.0, filling with zeroes...
m31201| Fri Feb 22 11:19:08.097 [FileAllocator] done allocating datafile /data/db/rs1-rs1-1/local.0, size: 16MB, took 0 secs
m31201| Fri Feb 22 11:19:08.100 [initandlisten] waiting for connections on port 31201
m31201| Fri Feb 22 11:19:08.100 [websvr] admin web console waiting for connections on port 32201
m31201| Fri Feb 22 11:19:08.102 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31201| Fri Feb 22 11:19:08.102 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31201| Fri Feb 22 11:19:08.224 [initandlisten] connection accepted from 127.0.0.1:47001 #1 (1 connection now open)
[ connection to bs-smartos-x86-64-1.10gen.cc:31200, connection to bs-smartos-x86-64-1.10gen.cc:31201 ]
{ "replSetInitiate" : { "_id" : "rs1-rs1", "members" : [ { "_id" : 0, "host" : "bs-smartos-x86-64-1.10gen.cc:31200" }, { "_id" : 1, "host" : "bs-smartos-x86-64-1.10gen.cc:31201" } ] } }
m31200| Fri Feb 22 11:19:08.226 [conn1] replSet replSetInitiate admin command received from client
m31200| Fri Feb 22 11:19:08.229 [conn1] replSet replSetInitiate config object parses ok, 2 members specified
m31200| Fri Feb 22 11:19:08.230 [initandlisten] connection accepted from 165.225.128.186:56222 #2 (2 connections now open)
m31201| Fri Feb 22 11:19:08.231 [initandlisten] connection accepted from 165.225.128.186:50247 #2 (2 connections now open)
m31200| Fri Feb 22 11:19:08.232 [conn1] replSet replSetInitiate all members seem up
m31200| Fri Feb 22 11:19:08.232 [conn1] ******
m31200| Fri Feb 22 11:19:08.232 [conn1] creating replication oplog of size: 40MB...
m31200| Fri Feb 22 11:19:08.232 [FileAllocator] allocating new datafile /data/db/rs1-rs1-0/local.1, filling with zeroes...
m31200| Fri Feb 22 11:19:08.232 [FileAllocator] done allocating datafile /data/db/rs1-rs1-0/local.1, size: 64MB, took 0 secs
m31200| Fri Feb 22 11:19:08.247 [conn1] ******
m31200| Fri Feb 22 11:19:08.247 [conn1] replSet info saving a newer config version to local.system.replset
m31200| Fri Feb 22 11:19:08.254 [conn2] end connection 165.225.128.186:56222 (1 connection now open)
m31200| Fri Feb 22 11:19:08.260 [conn1] replSet saveConfigLocally done
m31200| Fri Feb 22 11:19:08.260 [conn1] replSet replSetInitiate config now saved locally.  Should come online in about a minute.
{ "info" : "Config now saved locally.  Should come online in about a minute.", "ok" : 1 }
m31100| Fri Feb 22 11:19:09.383 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state SECONDARY
m31101| Fri Feb 22 11:19:17.384 [conn2] end connection 165.225.128.186:34514 (1 connection now open)
m31101| Fri Feb 22 11:19:17.385 [initandlisten] connection accepted from 165.225.128.186:46605 #4 (2 connections now open)
m31200| Fri Feb 22 11:19:17.914 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31200
m31200| Fri Feb 22 11:19:17.914 [rsStart] replSet STARTUP2
m31200| Fri Feb 22 11:19:17.914 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
m31200| Fri Feb 22 11:19:17.914 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31201 is up
m31201| Fri Feb 22 11:19:18.103 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31200
m31200| Fri Feb 22 11:19:18.103 [initandlisten] connection accepted from 165.225.128.186:41805 #3 (2 connections now open)
m31201| Fri Feb 22 11:19:18.104 [initandlisten] connection accepted from 165.225.128.186:38161 #3 (3 connections now open)
m31201| Fri Feb 22 11:19:18.104 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31201
m31201| Fri Feb 22 11:19:18.105 [rsStart] replSet got config version 1 from a remote, saving locally
m31201| Fri Feb 22 11:19:18.105 [rsStart] replSet info saving a newer config version to local.system.replset
m31201| Fri Feb 22 11:19:18.111 [rsStart] replSet saveConfigLocally done
m31201| Fri Feb 22 11:19:18.111 [rsStart] replSet STARTUP2
m31201| Fri Feb 22 11:19:18.112 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
m31201| Fri Feb 22 11:19:18.112 [rsSync] ******
m31201| Fri Feb 22 11:19:18.112 [rsSync] creating replication oplog of size: 40MB...
m31201| Fri Feb 22 11:19:18.112 [FileAllocator] allocating new datafile /data/db/rs1-rs1-1/local.1, filling with zeroes...
m31201| Fri Feb 22 11:19:18.112 [FileAllocator] done allocating datafile /data/db/rs1-rs1-1/local.1, size: 64MB, took 0 secs
m31201| Fri Feb 22 11:19:18.126 [conn3] end connection 165.225.128.186:38161 (2 connections now open)
m31201| Fri Feb 22 11:19:18.127 [rsSync] ******
m31201| Fri Feb 22 11:19:18.127 [rsSync] replSet initial sync pending
m31201| Fri Feb 22 11:19:18.128 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31200| Fri Feb 22 11:19:18.915 [rsSync] replSet SECONDARY
m31100| Fri Feb 22 11:19:19.595 [conn3] end connection 165.225.128.186:50991 (3 connections now open)
m31100| Fri Feb 22 11:19:19.596 [initandlisten] connection accepted from 165.225.128.186:59785 #7 (4 connections now open)
m31200| Fri Feb 22 11:19:19.914 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31201 thinks that we are down
m31200| Fri Feb 22 11:19:19.914 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31201 is now in state STARTUP2
m31200| Fri Feb 22 11:19:19.915 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31201 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31200 is electable'
m31201| Fri Feb 22 11:19:20.105 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31200 is up
m31201| Fri Feb 22 11:19:20.105 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31200 is now in state SECONDARY
m31200| Fri Feb 22 11:19:25.916 [rsMgr] replSet info electSelf 0
m31201| Fri Feb 22 11:19:25.916 [conn2] replSet RECOVERING
m31201| Fri Feb 22 11:19:25.916 [conn2] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31200 (0)
m31200| Fri Feb 22 11:19:26.915 [rsMgr] replSet PRIMARY
m31200| Fri Feb 22 11:19:27.916 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31201 is now in state RECOVERING
m31201| Fri Feb 22 11:19:28.106 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31200 is now in state PRIMARY
m31201| Fri Feb 22 11:19:34.128 [rsSync] replSet initial sync pending
m31201| Fri Feb 22 11:19:34.128 [rsSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31200
m31200| Fri Feb 22 11:19:34.128 [initandlisten] connection accepted from 165.225.128.186:33716 #4 (3 connections now open)
m31201| Fri Feb 22 11:19:34.135 [rsSync] build index local.me { _id: 1 }
m31201| Fri Feb 22 11:19:34.138 [rsSync] build index done. scanned 0 total records. 0.003 secs
m31201| Fri Feb 22 11:19:34.139 [rsSync] build index local.replset.minvalid { _id: 1 }
m31201| Fri Feb 22 11:19:34.140 [rsSync] build index done. scanned 0 total records. 0 secs
m31201| Fri Feb 22 11:19:34.140 [rsSync] replSet initial sync drop all databases
m31201| Fri Feb 22 11:19:34.140 [rsSync] dropAllDatabasesExceptLocal 1
m31201| Fri Feb 22 11:19:34.140 [rsSync] replSet initial sync clone all databases
m31201| Fri Feb 22 11:19:34.141 [rsSync] replSet initial sync data copy, starting syncup
m31201| Fri Feb 22 11:19:34.141 [rsSync] oplog sync 1 of 3
m31201| Fri Feb 22 11:19:34.141 [rsSync] oplog sync 2 of 3
m31201| Fri Feb 22 11:19:34.141 [rsSync] replSet initial sync building indexes
m31201| Fri Feb 22 11:19:34.141 [rsSync] oplog sync 3 of 3
m31201| Fri Feb 22 11:19:34.141 [rsSync] replSet initial sync finishing up
m31201| Fri Feb 22 11:19:34.151 [rsSync] replSet set minValid=5127542c:1
m31201| Fri Feb 22 11:19:34.158 [rsSync] replSet initial sync done
m31200| Fri Feb 22 11:19:34.158 [conn4] end connection 165.225.128.186:33716 (2 connections now open)
m31201| Fri Feb 22 11:19:35.112 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31200
m31200| Fri Feb 22 11:19:35.113 [initandlisten] connection accepted from 165.225.128.186:39096 #5 (3 connections now open)
m31201| Fri Feb 22 11:19:35.158 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31200
m31200| Fri Feb 22 11:19:35.158 [initandlisten] connection accepted from 165.225.128.186:33069 #6 (4 connections now open)
m31201| Fri Feb 22 11:19:36.159 [rsSync] replSet SECONDARY
m31200| Fri Feb 22 11:19:36.167 [slaveTracking] build index local.slaves { _id: 1 }
m31200| Fri Feb 22 11:19:36.170 [slaveTracking] build index done. scanned 0 total records. 0.002 secs
m31100| Fri Feb 22 11:19:36.328 [FileAllocator] allocating new datafile /data/db/rs1-rs0-0/admin.ns, filling with zeroes...
m31100| Fri Feb 22 11:19:36.328 [FileAllocator] done allocating datafile /data/db/rs1-rs0-0/admin.ns, size: 16MB, took 0 secs
m31100| Fri Feb 22 11:19:36.328 [FileAllocator] allocating new datafile /data/db/rs1-rs0-0/admin.0, filling with zeroes...
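Editor's note, not part of the original log: both two-member sets above log "replSet total number of votes is even - add arbiter or give one member an extra vote". The warning follows from simple majority arithmetic; a minimal stdlib-Python sketch (the standard majority formula, not MongoDB's actual election code) shows why a two-member set cannot survive the loss of either member:

```python
# Minimal sketch of the majority-vote arithmetic behind the "votes is even"
# warning. Not MongoDB's implementation; assumption: one vote per member.
def majority(total_votes):
    """Smallest vote count that is a strict majority of total_votes."""
    return total_votes // 2 + 1

# With 2 voting members a candidate needs both votes, so if either member
# is down the survivor can never be elected primary.
assert majority(2) == 2
# Adding an arbiter (3 votes total) lets the set elect a primary with one
# member down, since only 2 of 3 votes are required.
assert majority(3) == 2
print(majority(2), majority(3))  # -> 2 2
```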
m31100| Fri Feb 22 11:19:36.328 [FileAllocator] done allocating datafile /data/db/rs1-rs0-0/admin.0, size: 16MB, took 0 secs
m31100| Fri Feb 22 11:19:36.331 [conn1] build index admin.foo { _id: 1 }
m31100| Fri Feb 22 11:19:36.332 [conn1] build index done. scanned 0 total records. 0.001 secs
m31101| Fri Feb 22 11:19:36.333 [FileAllocator] allocating new datafile /data/db/rs1-rs0-1/admin.ns, filling with zeroes...
m31101| Fri Feb 22 11:19:36.333 [FileAllocator] done allocating datafile /data/db/rs1-rs0-1/admin.ns, size: 16MB, took 0 secs
m31101| Fri Feb 22 11:19:36.334 [FileAllocator] allocating new datafile /data/db/rs1-rs0-1/admin.0, filling with zeroes...
m31101| Fri Feb 22 11:19:36.334 [FileAllocator] done allocating datafile /data/db/rs1-rs0-1/admin.0, size: 16MB, took 0 secs
ReplSetTest awaitReplication: starting: timestamp for primary, bs-smartos-x86-64-1.10gen.cc:31100, is { "t" : 1361531976000, "i" : 1 }
ReplSetTest awaitReplication: checking secondaries against timestamp { "t" : 1361531976000, "i" : 1 }
m31101| Fri Feb 22 11:19:36.337 [repl writer worker 1] build index admin.foo { _id: 1 }
ReplSetTest awaitReplication: checking secondary #1: bs-smartos-x86-64-1.10gen.cc:31101
m31101| Fri Feb 22 11:19:36.338 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs
ReplSetTest awaitReplication: secondary #1, bs-smartos-x86-64-1.10gen.cc:31101, is synced
ReplSetTest awaitReplication: finished: all 1 secondaries synced at timestamp { "t" : 1361531976000, "i" : 1 }
Fri Feb 22 11:19:36.341 starting new replica set monitor for replica set rs1-rs0 with seed of bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101
Fri Feb 22 11:19:36.342 successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31100 for replica set rs1-rs0
m31100| Fri Feb 22 11:19:36.342 [initandlisten] connection accepted from 165.225.128.186:33843 #8 (5 connections now open)
Fri Feb 22 11:19:36.342 changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31100", 1: "bs-smartos-x86-64-1.10gen.cc:31101" } from rs1-rs0/
Fri Feb 22 11:19:36.342 trying to add new host bs-smartos-x86-64-1.10gen.cc:31100 to replica set rs1-rs0
m31100| Fri Feb 22 11:19:36.342 [initandlisten] connection accepted from 165.225.128.186:56264 #9 (6 connections now open)
Fri Feb 22 11:19:36.342 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31100 in replica set rs1-rs0
Fri Feb 22 11:19:36.342 trying to add new host bs-smartos-x86-64-1.10gen.cc:31101 to replica set rs1-rs0
Fri Feb 22 11:19:36.343 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31101 in replica set rs1-rs0
m31101| Fri Feb 22 11:19:36.343 [initandlisten] connection accepted from 165.225.128.186:34675 #5 (3 connections now open)
m31100| Fri Feb 22 11:19:36.343 [initandlisten] connection accepted from 165.225.128.186:47962 #10 (7 connections now open)
m31100| Fri Feb 22 11:19:36.343 [conn8] end connection 165.225.128.186:33843 (6 connections now open)
Fri Feb 22 11:19:36.343 Primary for replica set rs1-rs0 changed to bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 11:19:36.344 [initandlisten] connection accepted from 165.225.128.186:62481 #6 (4 connections now open)
Fri Feb 22 11:19:36.344 replica set monitor for replica set rs1-rs0 started, address is rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101
Fri Feb 22 11:19:36.344 [ReplicaSetMonitorWatcher] starting
m31200| Fri Feb 22 11:19:36.346 [FileAllocator] allocating new datafile /data/db/rs1-rs1-0/admin.ns, filling with zeroes...
m31200| Fri Feb 22 11:19:36.346 [FileAllocator] done allocating datafile /data/db/rs1-rs1-0/admin.ns, size: 16MB, took 0 secs
m31200| Fri Feb 22 11:19:36.346 [FileAllocator] allocating new datafile /data/db/rs1-rs1-0/admin.0, filling with zeroes...
m31200| Fri Feb 22 11:19:36.346 [FileAllocator] done allocating datafile /data/db/rs1-rs1-0/admin.0, size: 16MB, took 0 secs
m31200| Fri Feb 22 11:19:36.349 [conn1] build index admin.foo { _id: 1 }
m31200| Fri Feb 22 11:19:36.351 [conn1] build index done. scanned 0 total records. 0.001 secs
m31201| Fri Feb 22 11:19:36.352 [FileAllocator] allocating new datafile /data/db/rs1-rs1-1/admin.ns, filling with zeroes...
ReplSetTest awaitReplication: starting: timestamp for primary, bs-smartos-x86-64-1.10gen.cc:31200, is { "t" : 1361531976000, "i" : 1 }
m31201| Fri Feb 22 11:19:36.352 [FileAllocator] done allocating datafile /data/db/rs1-rs1-1/admin.ns, size: 16MB, took 0 secs
ReplSetTest awaitReplication: checking secondaries against timestamp { "t" : 1361531976000, "i" : 1 }
m31201| Fri Feb 22 11:19:36.352 [FileAllocator] allocating new datafile /data/db/rs1-rs1-1/admin.0, filling with zeroes...
m31201| Fri Feb 22 11:19:36.353 [FileAllocator] done allocating datafile /data/db/rs1-rs1-1/admin.0, size: 16MB, took 0 secs
ReplSetTest awaitReplication: checking secondary #1: bs-smartos-x86-64-1.10gen.cc:31201
m31201| Fri Feb 22 11:19:36.356 [repl writer worker 1] build index admin.foo { _id: 1 }
m31201| Fri Feb 22 11:19:36.357 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs
ReplSetTest awaitReplication: secondary #1, bs-smartos-x86-64-1.10gen.cc:31201, is synced
ReplSetTest awaitReplication: finished: all 1 secondaries synced at timestamp { "t" : 1361531976000, "i" : 1 }
Fri Feb 22 11:19:36.360 starting new replica set monitor for replica set rs1-rs1 with seed of bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
Fri Feb 22 11:19:36.361 successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31200 for replica set rs1-rs1
m31200| Fri Feb 22 11:19:36.361 [initandlisten] connection accepted from 165.225.128.186:56538 #7 (5 connections now open)
Fri Feb 22 11:19:36.361 changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31200", 1: "bs-smartos-x86-64-1.10gen.cc:31201" } from rs1-rs1/
Fri Feb 22 11:19:36.361 trying to add new host bs-smartos-x86-64-1.10gen.cc:31200 to replica set rs1-rs1
Fri Feb 22 11:19:36.361 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31200 in replica set rs1-rs1
Fri Feb 22 11:19:36.361 trying to add new host bs-smartos-x86-64-1.10gen.cc:31201 to replica set rs1-rs1
m31200| Fri Feb 22 11:19:36.361 [initandlisten] connection accepted from 165.225.128.186:47585 #8 (6 connections now open)
Fri Feb 22 11:19:36.361 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31201 in replica set rs1-rs1
m31201| Fri Feb 22 11:19:36.361 [initandlisten] connection accepted from 165.225.128.186:54721 #4 (3 connections now open)
m31200| Fri Feb 22 11:19:36.362 [initandlisten] connection accepted from 165.225.128.186:46153 #9 (7 connections now open)
m31200| Fri Feb 22 11:19:36.362 [conn7] end connection 165.225.128.186:56538 (6 connections now open)
Fri Feb 22 11:19:36.362 Primary for replica set rs1-rs1 changed to bs-smartos-x86-64-1.10gen.cc:31200
m31201| Fri Feb 22 11:19:36.363 [initandlisten] connection accepted from 165.225.128.186:33137 #5 (4 connections now open)
Fri Feb 22 11:19:36.363 replica set monitor for replica set rs1-rs1 started, address is
rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 Resetting db path '/data/db/rs1-config0' Fri Feb 22 11:19:36.367 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 29000 --dbpath /data/db/rs1-config0 --configsvr --nopreallocj --setParameter enableTestCommands=1 m29000| Fri Feb 22 11:19:36.438 [initandlisten] MongoDB starting : pid=19611 port=29000 dbpath=/data/db/rs1-config0 master=1 64-bit host=bs-smartos-x86-64-1.10gen.cc m29000| Fri Feb 22 11:19:36.438 [initandlisten] m29000| Fri Feb 22 11:19:36.438 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m29000| Fri Feb 22 11:19:36.438 [initandlisten] ** uses to detect impending page faults. m29000| Fri Feb 22 11:19:36.438 [initandlisten] ** This may result in slower performance for certain use cases m29000| Fri Feb 22 11:19:36.438 [initandlisten] m29000| Fri Feb 22 11:19:36.439 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m29000| Fri Feb 22 11:19:36.439 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m29000| Fri Feb 22 11:19:36.439 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m29000| Fri Feb 22 11:19:36.439 [initandlisten] allocator: system m29000| Fri Feb 22 11:19:36.439 [initandlisten] options: { configsvr: true, dbpath: "/data/db/rs1-config0", nopreallocj: true, port: 29000, setParameter: [ "enableTestCommands=1" ] } m29000| Fri Feb 22 11:19:36.439 [initandlisten] journal dir=/data/db/rs1-config0/journal m29000| Fri Feb 22 11:19:36.439 [initandlisten] recover : no journal files present, no recovery needed m29000| Fri Feb 22 11:19:36.440 [FileAllocator] allocating new datafile /data/db/rs1-config0/local.ns, filling with zeroes... 
m29000| Fri Feb 22 11:19:36.440 [FileAllocator] creating directory /data/db/rs1-config0/_tmp
m29000| Fri Feb 22 11:19:36.440 [FileAllocator] done allocating datafile /data/db/rs1-config0/local.ns, size: 16MB, took 0 secs
m29000| Fri Feb 22 11:19:36.440 [FileAllocator] allocating new datafile /data/db/rs1-config0/local.0, filling with zeroes...
m29000| Fri Feb 22 11:19:36.441 [FileAllocator] done allocating datafile /data/db/rs1-config0/local.0, size: 16MB, took 0 secs
m29000| Fri Feb 22 11:19:36.443 [initandlisten] ******
m29000| Fri Feb 22 11:19:36.443 [initandlisten] creating replication oplog of size: 5MB...
m29000| Fri Feb 22 11:19:36.447 [initandlisten] ******
m29000| Fri Feb 22 11:19:36.447 [initandlisten] waiting for connections on port 29000
m29000| Fri Feb 22 11:19:36.447 [websvr] admin web console waiting for connections on port 30000
m29000| Fri Feb 22 11:19:36.568 [initandlisten] connection accepted from 127.0.0.1:48681 #1 (1 connection now open)
"bs-smartos-x86-64-1.10gen.cc:29000"
m29000| Fri Feb 22 11:19:36.569 [initandlisten] connection accepted from 165.225.128.186:57785 #2 (2 connections now open)
ShardingTest rs1 : { "config" : "bs-smartos-x86-64-1.10gen.cc:29000", "shards" : [ connection to rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101, connection to rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 ] }
Fri Feb 22 11:19:36.573 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30999 --configdb bs-smartos-x86-64-1.10gen.cc:29000 -v --chunkSize 1 --setParameter enableTestCommands=1
m30999| Fri Feb 22 11:19:36.591 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Fri Feb 22 11:19:36.592 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=19612 port=30999 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage)
m30999| Fri Feb 22 11:19:36.592 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30999| Fri Feb 22 11:19:36.592 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30999| Fri Feb 22 11:19:36.592 [mongosMain] options: { chunkSize: 1, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", port: 30999, setParameter: [ "enableTestCommands=1" ], verbose: true }
m30999| Fri Feb 22 11:19:36.592 [mongosMain] config string : bs-smartos-x86-64-1.10gen.cc:29000
m30999| Fri Feb 22 11:19:36.592 [mongosMain] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000
m30999| Fri Feb 22 11:19:36.593 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:19:36.593 [mongosMain] connected connection!
m29000| Fri Feb 22 11:19:36.593 [initandlisten] connection accepted from 165.225.128.186:56121 #3 (3 connections now open)
m30999| Fri Feb 22 11:19:36.594 BackgroundJob starting: CheckConfigServers
m30999| Fri Feb 22 11:19:36.594 [mongosMain] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000
m30999| Fri Feb 22 11:19:36.594 BackgroundJob starting: ConnectBG
m29000| Fri Feb 22 11:19:36.594 [initandlisten] connection accepted from 165.225.128.186:58413 #4 (4 connections now open)
m30999| Fri Feb 22 11:19:36.594 [mongosMain] connected connection!
m29000| Fri Feb 22 11:19:36.595 [conn4] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:19:36.600 [mongosMain] created new distributed lock for configUpgrade on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 11:19:36.601 [mongosMain] trying to acquire new distributed lock for configUpgrade on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838 )
m30999| Fri Feb 22 11:19:36.601 [LockPinger] creating distributed lock ping thread for bs-smartos-x86-64-1.10gen.cc:29000 and process bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838 (sleeping for 30000ms)
m30999| Fri Feb 22 11:19:36.601 [mongosMain] inserting initial doc in config.locks for lock configUpgrade
m29000| Fri Feb 22 11:19:36.601 [FileAllocator] allocating new datafile /data/db/rs1-config0/config.ns, filling with zeroes...
m30999| Fri Feb 22 11:19:36.601 [mongosMain] about to acquire distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838:mongosMain:5758",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 11:19:36 2013" },
m30999|   "why" : "upgrading config database to new format v4",
m30999|   "ts" : { "$oid" : "5127544800fc1508e4df1cdc" } }
m30999| { "_id" : "configUpgrade",
m30999|   "state" : 0 }
m29000| Fri Feb 22 11:19:36.601 [FileAllocator] done allocating datafile /data/db/rs1-config0/config.ns, size: 16MB, took 0 secs
m29000| Fri Feb 22 11:19:36.601 [FileAllocator] allocating new datafile /data/db/rs1-config0/config.0, filling with zeroes...
m29000| Fri Feb 22 11:19:36.601 [FileAllocator] done allocating datafile /data/db/rs1-config0/config.0, size: 16MB, took 0 secs
m29000| Fri Feb 22 11:19:36.602 [FileAllocator] allocating new datafile /data/db/rs1-config0/config.1, filling with zeroes...
m29000| Fri Feb 22 11:19:36.602 [FileAllocator] done allocating datafile /data/db/rs1-config0/config.1, size: 32MB, took 0 secs
m29000| Fri Feb 22 11:19:36.604 [conn3] build index config.lockpings { _id: 1 }
m29000| Fri Feb 22 11:19:36.605 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:19:36.606 [conn4] build index config.locks { _id: 1 }
m29000| Fri Feb 22 11:19:36.607 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:19:36.607 [LockPinger] cluster bs-smartos-x86-64-1.10gen.cc:29000 pinged successfully at Fri Feb 22 11:19:36 2013 by distributed lock pinger 'bs-smartos-x86-64-1.10gen.cc:29000/bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838', sleeping for 30000ms
m29000| Fri Feb 22 11:19:36.607 [conn3] build index config.lockpings { ping: new Date(1) }
m29000| Fri Feb 22 11:19:36.608 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 11:19:36.608 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838' acquired, ts : 5127544800fc1508e4df1cdc
m30999| Fri Feb 22 11:19:36.611 [mongosMain] starting upgrade of config server from v0 to v4
m30999| Fri Feb 22 11:19:36.611 [mongosMain] starting next upgrade step from v0 to v4
m30999| Fri Feb 22 11:19:36.611 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:36-5127544800fc1508e4df1cdd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361531976611), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m29000| Fri Feb 22 11:19:36.611 [conn4] build index config.changelog { _id: 1 }
m29000| Fri Feb 22 11:19:36.612 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:19:36.612 [mongosMain] writing initial config version at v4
m29000| Fri Feb 22 11:19:36.612 [conn4] build index config.version { _id: 1 }
m29000| Fri Feb 22 11:19:36.613 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:19:36.614 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:36-5127544800fc1508e4df1cdf", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361531976614), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30999| Fri Feb 22 11:19:36.614 [mongosMain] upgrade of config server to v4 successful
m30999| Fri Feb 22 11:19:36.614 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838' unlocked.
m29000| Fri Feb 22 11:19:36.615 [conn3] build index config.settings { _id: 1 }
m30999| Fri Feb 22 11:19:36.616 [websvr] fd limit hard:65536 soft:1024 max conn: 819
m29000| Fri Feb 22 11:19:36.616 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 11:19:36.616 BackgroundJob starting: Balancer
m30999| Fri Feb 22 11:19:36.616 [Balancer] about to contact config servers and shards
m30999| Fri Feb 22 11:19:36.616 BackgroundJob starting: cursorTimeout
m30999| Fri Feb 22 11:19:36.616 [mongosMain] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 11:19:36.616 BackgroundJob starting: PeriodicTask::Runner
m30999| Fri Feb 22 11:19:36.616 [websvr] admin web console waiting for connections on port 31999
m30999| Fri Feb 22 11:19:36.617 [mongosMain] waiting for connections on port 30999
m29000| Fri Feb 22 11:19:36.617 [conn3] build index config.chunks { _id: 1 }
m29000| Fri Feb 22 11:19:36.618 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:19:36.618 [conn3] info: creating collection config.chunks on add index
m29000| Fri Feb 22 11:19:36.618 [conn3] build index config.chunks { ns: 1, min: 1 }
m29000| Fri Feb 22 11:19:36.618 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:19:36.618 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29000| Fri Feb 22 11:19:36.619 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:19:36.619 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m29000| Fri Feb 22 11:19:36.619 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:19:36.620 [conn3] build index config.shards { _id: 1 }
m29000| Fri Feb 22 11:19:36.620 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:19:36.620 [conn3] info: creating collection config.shards on add index
m29000| Fri Feb 22 11:19:36.620 [conn3] build index config.shards { host: 1 }
m29000| Fri Feb 22 11:19:36.621 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:19:36.621 [Balancer] config servers and shards contacted successfully
m30999| Fri Feb 22 11:19:36.621 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30999 started at Feb 22 11:19:36
m30999| Fri Feb 22 11:19:36.621 [Balancer] created new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 11:19:36.621 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000
m29000| Fri Feb 22 11:19:36.622 [conn3] build index config.mongos { _id: 1 }
m30999| Fri Feb 22 11:19:36.622 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:19:36.622 [Balancer] connected connection!
m29000| Fri Feb 22 11:19:36.622 [initandlisten] connection accepted from 165.225.128.186:50526 #5 (5 connections now open)
m29000| Fri Feb 22 11:19:36.622 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:19:36.623 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:19:36.623 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838 )
m30999| Fri Feb 22 11:19:36.623 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Fri Feb 22 11:19:36.623 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 11:19:36 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "5127544800fc1508e4df1ce1" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0 }
m30999| Fri Feb 22 11:19:36.624 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838' acquired, ts : 5127544800fc1508e4df1ce1
m30999| Fri Feb 22 11:19:36.624 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:19:36.624 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:19:36.624 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:19:36.624 [Balancer] no collections to balance
m30999| Fri Feb 22 11:19:36.624 [Balancer] no need to move any chunk
m30999| Fri Feb 22 11:19:36.624 [Balancer] *** end of balancing round
m30999| Fri Feb 22 11:19:36.624 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838' unlocked.
m30999| Fri Feb 22 11:19:36.775 [mongosMain] connection accepted from 127.0.0.1:39839 #1 (1 connection now open)
ShardingTest undefined going to add shard : rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:36.777 [conn1] couldn't find database [admin] in config db
m29000| Fri Feb 22 11:19:36.777 [conn3] build index config.databases { _id: 1 }
m29000| Fri Feb 22 11:19:36.778 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:19:36.778 [conn1] put [admin] on: config:bs-smartos-x86-64-1.10gen.cc:29000
m30999| Fri Feb 22 11:19:36.778 [conn1] starting new replica set monitor for replica set rs1-rs0 with seed of bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:36.778 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:36.779 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:19:36.779 [conn1] connected connection!
m31100| Fri Feb 22 11:19:36.779 [initandlisten] connection accepted from 165.225.128.186:47616 #11 (7 connections now open) m30999| Fri Feb 22 11:19:36.779 [conn1] successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31100 for replica set rs1-rs0 m30999| Fri Feb 22 11:19:36.779 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531976779), ok: 1.0 } m30999| Fri Feb 22 11:19:36.779 [conn1] changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31100", 1: "bs-smartos-x86-64-1.10gen.cc:31101" } from rs1-rs0/ m30999| Fri Feb 22 11:19:36.779 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31100 to replica set rs1-rs0 m30999| Fri Feb 22 11:19:36.779 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:19:36.779 BackgroundJob starting: ConnectBG m31100| Fri Feb 22 11:19:36.780 [initandlisten] connection accepted from 165.225.128.186:55630 #12 (8 connections now open) m30999| Fri Feb 22 11:19:36.780 [conn1] connected connection! m30999| Fri Feb 22 11:19:36.780 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31100 in replica set rs1-rs0 m30999| Fri Feb 22 11:19:36.780 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31101 to replica set rs1-rs0 m30999| Fri Feb 22 11:19:36.780 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:19:36.780 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:19:36.780 [conn1] connected connection! 
m30999| Fri Feb 22 11:19:36.780 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31101 in replica set rs1-rs0 m31101| Fri Feb 22 11:19:36.780 [initandlisten] connection accepted from 165.225.128.186:44068 #7 (5 connections now open) m30999| Fri Feb 22 11:19:36.780 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:19:36.780 BackgroundJob starting: ConnectBG m31100| Fri Feb 22 11:19:36.780 [initandlisten] connection accepted from 165.225.128.186:52897 #13 (9 connections now open) m30999| Fri Feb 22 11:19:36.780 [conn1] connected connection! m30999| Fri Feb 22 11:19:36.781 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:19:36.781 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:19:36.781 [conn1] replicaSetChange: shard not found for set: rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:19:36.781 [conn1] _check : rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 m31100| Fri Feb 22 11:19:36.781 [conn11] end connection 165.225.128.186:47616 (8 connections now open) m30999| Fri Feb 22 11:19:36.781 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531976781), ok: 1.0 } m30999| Fri Feb 22 11:19:36.781 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:19:36.781 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:19:36.781 [conn1] Primary for replica set rs1-rs0 changed to bs-smartos-x86-64-1.10gen.cc:31100 
m30999| Fri Feb 22 11:19:36.781 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531976781), ok: 1.0 } m30999| Fri Feb 22 11:19:36.781 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:19:36.781 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:19:36.781 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531976781), ok: 1.0 } m30999| Fri Feb 22 11:19:36.781 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:19:36.782 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:19:36.782 [conn1] connected connection! 
m31101| Fri Feb 22 11:19:36.782 [initandlisten] connection accepted from 165.225.128.186:47479 #8 (6 connections now open) m30999| Fri Feb 22 11:19:36.782 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:19:36.782 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:19:36.782 [conn1] replica set monitor for replica set rs1-rs0 started, address is rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:19:36.782 BackgroundJob starting: ReplicaSetMonitorWatcher m30999| Fri Feb 22 11:19:36.782 [ReplicaSetMonitorWatcher] starting m30999| Fri Feb 22 11:19:36.782 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:19:36.782 BackgroundJob starting: ConnectBG m31100| Fri Feb 22 11:19:36.782 [initandlisten] connection accepted from 165.225.128.186:38058 #14 (9 connections now open) m30999| Fri Feb 22 11:19:36.782 [conn1] connected connection! m30999| Fri Feb 22 11:19:36.784 [conn1] going to add shard: { _id: "rs1-rs0", host: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } { "shardAdded" : "rs1-rs0", "ok" : 1 } ShardingTest undefined going to add shard : rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 11:19:36.785 [conn1] starting new replica set monitor for replica set rs1-rs1 with seed of bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 11:19:36.785 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 11:19:36.785 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:19:36.785 [conn1] connected connection! 
m31200| Fri Feb 22 11:19:36.785 [initandlisten] connection accepted from 165.225.128.186:35800 #10 (7 connections now open) m30999| Fri Feb 22 11:19:36.785 [conn1] successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31200 for replica set rs1-rs1 m30999| Fri Feb 22 11:19:36.785 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531976785), ok: 1.0 } m30999| Fri Feb 22 11:19:36.785 [conn1] changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31200", 1: "bs-smartos-x86-64-1.10gen.cc:31201" } from rs1-rs1/ m30999| Fri Feb 22 11:19:36.785 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31200 to replica set rs1-rs1 m30999| Fri Feb 22 11:19:36.785 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 11:19:36.786 BackgroundJob starting: ConnectBG m31200| Fri Feb 22 11:19:36.786 [initandlisten] connection accepted from 165.225.128.186:41224 #11 (8 connections now open) m30999| Fri Feb 22 11:19:36.786 [conn1] connected connection! m30999| Fri Feb 22 11:19:36.786 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31200 in replica set rs1-rs1 m30999| Fri Feb 22 11:19:36.786 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31201 to replica set rs1-rs1 m30999| Fri Feb 22 11:19:36.786 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 11:19:36.786 BackgroundJob starting: ConnectBG m31201| Fri Feb 22 11:19:36.786 [initandlisten] connection accepted from 165.225.128.186:45330 #6 (5 connections now open) m30999| Fri Feb 22 11:19:36.786 [conn1] connected connection! 
m30999| Fri Feb 22 11:19:36.786 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31201 in replica set rs1-rs1 m30999| Fri Feb 22 11:19:36.786 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 11:19:36.787 BackgroundJob starting: ConnectBG m31200| Fri Feb 22 11:19:36.787 [initandlisten] connection accepted from 165.225.128.186:47705 #12 (9 connections now open) m30999| Fri Feb 22 11:19:36.787 [conn1] connected connection! m30999| Fri Feb 22 11:19:36.787 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 11:19:36.787 [conn1] dbclient_rs nodes[1].ok = false bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 11:19:36.787 [conn1] replicaSetChange: shard not found for set: rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 11:19:36.787 [conn1] _check : rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m31200| Fri Feb 22 11:19:36.787 [conn10] end connection 165.225.128.186:35800 (8 connections now open) m30999| Fri Feb 22 11:19:36.787 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531976787), ok: 1.0 } m30999| Fri Feb 22 11:19:36.788 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 11:19:36.788 [conn1] dbclient_rs nodes[1].ok = false bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 11:19:36.788 [conn1] Primary for replica set rs1-rs1 changed to bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 11:19:36.788 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: 
"rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531976788), ok: 1.0 } m30999| Fri Feb 22 11:19:36.788 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 11:19:36.788 [conn1] dbclient_rs nodes[1].ok = false bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 11:19:36.788 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31201 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31201", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531976788), ok: 1.0 } m30999| Fri Feb 22 11:19:36.788 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 11:19:36.788 BackgroundJob starting: ConnectBG m31201| Fri Feb 22 11:19:36.788 [initandlisten] connection accepted from 165.225.128.186:61872 #7 (6 connections now open) m30999| Fri Feb 22 11:19:36.788 [conn1] connected connection! 
m30999| Fri Feb 22 11:19:36.788 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:36.788 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:36.788 [conn1] replica set monitor for replica set rs1-rs1 started, address is rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:36.788 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:36.789 BackgroundJob starting: ConnectBG
m31200| Fri Feb 22 11:19:36.789 [initandlisten] connection accepted from 165.225.128.186:46311 #13 (9 connections now open)
m30999| Fri Feb 22 11:19:36.789 [conn1] connected connection!
m30999| Fri Feb 22 11:19:36.790 [conn1] going to add shard: { _id: "rs1-rs1", host: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" }
{ "shardAdded" : "rs1-rs1", "ok" : 1 }
m30999| Fri Feb 22 11:19:36.791 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31100 serverID: 5127544800fc1508e4df1ce0
m30999| Fri Feb 22 11:19:36.791 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31101 serverID: 5127544800fc1508e4df1ce0
m30999| Fri Feb 22 11:19:36.791 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:36.791 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:36.791 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:36.791 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:36.791 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31101] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:36.792 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:19:36.792 [initandlisten] connection accepted from 165.225.128.186:40845 #15 (10 connections now open)
m30999| Fri Feb 22 11:19:36.792 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:19:36.792 [conn1] connected connection!
m31100| Fri Feb 22 11:19:36.792 [initandlisten] connection accepted from 165.225.128.186:49532 #16 (11 connections now open)
m30999| Fri Feb 22 11:19:36.792 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:19:36.792 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] connected connection!
m30999| Fri Feb 22 11:19:36.792 [conn1] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:36.792 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31101] connected connection!
m31101| Fri Feb 22 11:19:36.792 [initandlisten] connection accepted from 165.225.128.186:63370 #9 (7 connections now open)
m30999| Fri Feb 22 11:19:36.792 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31200 serverID: 5127544800fc1508e4df1ce0
m30999| Fri Feb 22 11:19:36.792 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31201 serverID: 5127544800fc1508e4df1ce0
m30999| Fri Feb 22 11:19:36.792 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:36.792 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31200] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:36.792 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:36.792 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:36.793 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31201] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:36.793 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:19:36.793 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:19:36.793 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31200] connected connection!
m31200| Fri Feb 22 11:19:36.793 [initandlisten] connection accepted from 165.225.128.186:40626 #14 (10 connections now open)
m30999| Fri Feb 22 11:19:36.793 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:19:36.793 [conn1] connected connection!
m30999| Fri Feb 22 11:19:36.793 [conn1] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31200
m31200| Fri Feb 22 11:19:36.793 [initandlisten] connection accepted from 165.225.128.186:48503 #15 (11 connections now open)
m31201| Fri Feb 22 11:19:36.793 [initandlisten] connection accepted from 165.225.128.186:40100 #8 (7 connections now open)
m30999| Fri Feb 22 11:19:36.793 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31201] connected connection!
m30999| Fri Feb 22 11:19:36.793 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000
m30999| Fri Feb 22 11:19:36.794 BackgroundJob starting: ConnectBG
m29000| Fri Feb 22 11:19:36.794 [initandlisten] connection accepted from 165.225.128.186:54107 #6 (6 connections now open)
m30999| Fri Feb 22 11:19:36.794 [conn1] connected connection!
m30999| Fri Feb 22 11:19:36.794 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:29000 serverID: 5127544800fc1508e4df1ce0
m30999| Fri Feb 22 11:19:36.794 [conn1] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:29000
m30999| Fri Feb 22 11:19:36.794 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:29000
m30999| Fri Feb 22 11:19:36.794 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:29000] bs-smartos-x86-64-1.10gen.cc:29000 is not a shard node
m30999| Fri Feb 22 11:19:36.795 [conn1] couldn't find database [test] in config db
m30999| Fri Feb 22 11:19:36.795 [conn1] best shard for new allocation is shard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 mapped: 128 writeLock: 0 version: 2.4.0-rc1-pre-
m30999| Fri Feb 22 11:19:36.796 [conn1] put [test] on: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101
m31100| Fri Feb 22 11:19:36.796 [FileAllocator] allocating new datafile /data/db/rs1-rs0-0/test.ns, filling with zeroes...
m31100| Fri Feb 22 11:19:36.797 [FileAllocator] done allocating datafile /data/db/rs1-rs0-0/test.ns, size: 16MB, took 0 secs
m31100| Fri Feb 22 11:19:36.797 [FileAllocator] allocating new datafile /data/db/rs1-rs0-0/test.0, filling with zeroes...
m31100| Fri Feb 22 11:19:36.797 [FileAllocator] done allocating datafile /data/db/rs1-rs0-0/test.0, size: 16MB, took 0 secs
m31100| Fri Feb 22 11:19:36.800 [conn15] build index test.foo { _id: 1 }
m31100| Fri Feb 22 11:19:36.801 [conn15] build index done. scanned 0 total records. 0 secs
m31101| Fri Feb 22 11:19:36.802 [FileAllocator] allocating new datafile /data/db/rs1-rs0-1/test.ns, filling with zeroes...
m31101| Fri Feb 22 11:19:36.802 [FileAllocator] done allocating datafile /data/db/rs1-rs0-1/test.ns, size: 16MB, took 0 secs
m31101| Fri Feb 22 11:19:36.802 [FileAllocator] allocating new datafile /data/db/rs1-rs0-1/test.0, filling with zeroes...
m31101| Fri Feb 22 11:19:36.802 [FileAllocator] done allocating datafile /data/db/rs1-rs0-1/test.0, size: 16MB, took 0 secs
m31101| Fri Feb 22 11:19:36.808 [repl writer worker 1] build index test.foo { _id: 1 }
m31101| Fri Feb 22 11:19:36.810 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 11:19:36.948 [conn1] enabling sharding on: test
m30999| Fri Feb 22 11:19:36.950 [conn1] CMD: shardcollection: { shardcollection: "test.foo", key: { _id: 1.0 } }
m30999| Fri Feb 22 11:19:36.950 [conn1] enable sharding on: test.foo with shard key: { _id: 1.0 }
m30999| Fri Feb 22 11:19:36.950 [conn1] going to create 1 chunk(s) for: test.foo using new epoch 5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:36.952 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||5127544800fc1508e4df1ce2 based on: (empty)
m29000| Fri Feb 22 11:19:36.952 [conn3] build index config.collections { _id: 1 }
m29000| Fri Feb 22 11:19:36.953 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 11:19:36.954 [conn1] setShardVersion  rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 2
m30999| Fri Feb 22 11:19:36.954 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.foo'" }
m30999| Fri Feb 22 11:19:36.954 [conn1] setShardVersion  rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), authoritative: true, shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 2
m31100| Fri Feb 22 11:19:36.954 [conn15] no current chunk manager found for this shard, will initialize
m29000| Fri Feb 22 11:19:36.955 [initandlisten] connection accepted from 165.225.128.186:45712 #7 (7 connections now open)
m30999| Fri Feb 22 11:19:36.955 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:19:36.956 [conn1] splitting: test.foo  shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:36.957 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 0.0 } ], shardId: "test.foo-_id_MinKey", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m29000| Fri Feb 22 11:19:36.957 [initandlisten] connection accepted from 165.225.128.186:41614 #8 (8 connections now open)
m31100| Fri Feb 22 11:19:36.958 [LockPinger] creating distributed lock ping thread for bs-smartos-x86-64-1.10gen.cc:29000 and process bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633 (sleeping for 30000ms)
m31100| Fri Feb 22 11:19:36.960 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754488cfa445167059523
m31100| Fri Feb 22 11:19:36.961 [conn14] splitChunk accepted at version 1|0||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:36.962 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:36-512754488cfa445167059524", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531976962), what: "split", ns: "test.foo", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:36.962 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:36.963 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|2||5127544800fc1508e4df1ce2 based on: 1|0||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:36.965 [conn1] splitting: test.foo  shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|2||000000000000000000000000min: { _id: 0.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:36.965 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 100.0 } ], shardId: "test.foo-_id_0.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:36.965 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754488cfa445167059525
m31100| Fri Feb 22 11:19:36.966 [conn14] splitChunk accepted at version 1|2||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:36.967 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:36-512754488cfa445167059526", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531976967), what: "split", ns: "test.foo", details: { before: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 100.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 100.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:36.967 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:36.968 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 1|4||5127544800fc1508e4df1ce2 based on: 1|2||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:36.969 [conn1] splitting: test.foo  shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|4||000000000000000000000000min: { _id: 100.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:36.969 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 100.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 200.0 } ], shardId: "test.foo-_id_100.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:36.970 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754488cfa445167059527
m31100| Fri Feb 22 11:19:36.971 [conn14] splitChunk accepted at version 1|4||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:36.972 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:36-512754488cfa445167059528", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531976972), what: "split", ns: "test.foo", details: { before: { min: { _id: 100.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 100.0 }, max: { _id: 200.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 200.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:36.972 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:36.973 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 5 version: 1|6||5127544800fc1508e4df1ce2 based on: 1|4||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:36.974 [conn1] splitting: test.foo  shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|6||000000000000000000000000min: { _id: 200.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:36.974 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 200.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 300.0 } ], shardId: "test.foo-_id_200.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:36.975 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754488cfa445167059529
m31100| Fri Feb 22 11:19:36.975 [conn14] splitChunk accepted at version 1|6||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:36.976 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:36-512754488cfa44516705952a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531976976), what: "split", ns: "test.foo", details: { before: { min: { _id: 200.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 200.0 }, max: { _id: 300.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 300.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:36.976 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:36.977 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 6 version: 1|8||5127544800fc1508e4df1ce2 based on: 1|6||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:36.978 [conn1] splitting: test.foo  shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|8||000000000000000000000000min: { _id: 300.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:36.978 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 300.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 400.0 } ], shardId: "test.foo-_id_300.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:36.979 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754488cfa44516705952b
m31100| Fri Feb 22 11:19:36.980 [conn14] splitChunk accepted at version 1|8||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:36.980 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:36-512754488cfa44516705952c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531976980), what: "split", ns: "test.foo", details: { before: { min: { _id: 300.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 300.0 }, max: { _id: 400.0 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 400.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:36.981 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:36.982 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 7 version: 1|10||5127544800fc1508e4df1ce2 based on: 1|8||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:36.983 [conn1] splitting: test.foo  shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|10||000000000000000000000000min: { _id: 400.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:36.983 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 400.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 500.0 } ], shardId: "test.foo-_id_400.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:36.984 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754488cfa44516705952d
m31100| Fri Feb 22 11:19:36.985 [conn14] splitChunk accepted at version 1|10||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:36.985 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:36-512754488cfa44516705952e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531976985), what: "split", ns: "test.foo", details: { before: { min: { _id: 400.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 400.0 }, max: { _id: 500.0 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 500.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:36.986 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:36.987 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 8 version: 1|12||5127544800fc1508e4df1ce2 based on: 1|10||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:36.988 [conn1] splitting: test.foo  shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|12||000000000000000000000000min: { _id: 500.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:36.988 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 500.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 600.0 } ], shardId: "test.foo-_id_500.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:36.988 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754488cfa44516705952f
m31100| Fri Feb 22 11:19:36.989 [conn14] splitChunk accepted at version 1|12||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:36.990 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:36-512754488cfa445167059530", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531976990), what: "split", ns: "test.foo", details: { before: { min: { _id: 500.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 500.0 }, max: { _id: 600.0 }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 600.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:36.990 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:36.991 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 9 version: 1|14||5127544800fc1508e4df1ce2 based on: 1|12||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:36.992 [conn1] splitting: test.foo  shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|14||000000000000000000000000min: { _id: 600.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:36.992 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 600.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 700.0 } ], shardId: "test.foo-_id_600.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:36.993 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa445167059531
m31100| Fri Feb 22 11:19:36.994 [conn14] splitChunk accepted at version 1|14||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:36.994 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:36-512754498cfa445167059532", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531976994), what: "split", ns: "test.foo", details: { before: { min: { _id: 600.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 600.0 }, max: { _id: 700.0 }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 700.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:36.995 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:36.996 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 10 version: 1|16||5127544800fc1508e4df1ce2 based on: 1|14||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:36.996 [conn1] splitting: test.foo  shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|16||000000000000000000000000min: { _id: 700.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:36.997 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 700.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 800.0 } ], shardId: "test.foo-_id_700.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:36.997 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa445167059533
m31100| Fri Feb 22 11:19:36.998 [conn14] splitChunk accepted at version 1|16||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:36.999 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:36-512754498cfa445167059534", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531976999), what: "split", ns: "test.foo", details: { before: { min: { _id: 700.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 700.0 }, max: { _id: 800.0 }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 800.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:36.999 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:37.000 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 11 version: 1|18||5127544800fc1508e4df1ce2 based on: 1|16||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:37.001 [conn1] splitting: test.foo  shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|18||000000000000000000000000min: { _id: 800.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:37.001 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 800.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 900.0 } ], shardId: "test.foo-_id_800.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:37.002 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa445167059535
m31100| Fri Feb 22 11:19:37.003 [conn14] splitChunk accepted at version 1|18||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:37.003 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:37-512754498cfa445167059536", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531977003), what: "split", ns: "test.foo", details: { before: { min: { _id: 800.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 800.0 }, max: { _id: 900.0 }, lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 900.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:37.004 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:37.004 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 12 version: 1|20||5127544800fc1508e4df1ce2 based on: 1|18||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:37.005 [conn1] splitting: test.foo  shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|20||000000000000000000000000min: { _id: 900.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:37.006 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 900.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 1000.0 } ], shardId: "test.foo-_id_900.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:37.006 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa445167059537
m31100| Fri Feb 22 11:19:37.007 [conn14] splitChunk accepted at version 1|20||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:37.008 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:37-512754498cfa445167059538", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531977008), what: "split", ns: "test.foo", details: { before: { min: { _id: 900.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 900.0 }, max: { _id: 1000.0 }, lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 1000.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:37.008 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:37.009 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 13 version: 1|22||5127544800fc1508e4df1ce2 based on: 1|20||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:37.010 [conn1] splitting: test.foo  shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|22||000000000000000000000000min: { _id: 1000.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:37.010 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1000.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 1100.0 } ], shardId: "test.foo-_id_1000.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:37.011 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa445167059539
m31100| Fri Feb 22 11:19:37.012 [conn14] splitChunk accepted at version 1|22||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:37.012 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:37-512754498cfa44516705953a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531977012), what: "split", ns: "test.foo", details: { before: { min: { _id: 1000.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1000.0 }, max: { _id: 1100.0 }, lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 1100.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:37.013 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:37.014 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 14 version: 1|24||5127544800fc1508e4df1ce2 based on: 1|22||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:37.015 [conn1] splitting: test.foo  shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|24||000000000000000000000000min: { _id: 1100.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:37.015 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1100.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 1200.0 } ], shardId: "test.foo-_id_1100.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:37.015 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa44516705953b
m31100| Fri Feb 22 11:19:37.016 [conn14] splitChunk accepted at version 1|24||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:37.017 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:37-512754498cfa44516705953c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531977017), what: "split", ns: "test.foo", details: { before: { min: { _id: 1100.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1100.0 }, max: { _id: 1200.0 }, lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 1200.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:37.017 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:37.018 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 15 version: 1|26||5127544800fc1508e4df1ce2 based on: 1|24||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:37.019 [conn1] splitting: test.foo  shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|26||000000000000000000000000min: { _id: 1200.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:37.019 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1200.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 1300.0 } ], shardId: "test.foo-_id_1200.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:37.020 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa44516705953d
m31100| Fri Feb 22 11:19:37.021 [conn14] splitChunk accepted at version 1|26||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:37.022 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:37-512754498cfa44516705953e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531977022), what: "split", ns: "test.foo", details: { before: { min: { _id: 1200.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1200.0 }, max: { _id: 1300.0 }, lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 1300.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:37.022 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:37.023 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 16 version: 1|28||5127544800fc1508e4df1ce2 based on: 1|26||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:37.024 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|28||000000000000000000000000min: { _id: 1300.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:37.024 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1300.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 1400.0 } ], shardId: "test.foo-_id_1300.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:37.025 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa44516705953f
m31100| Fri Feb 22 11:19:37.026 [conn14] splitChunk accepted at version 1|28||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:37.026 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:37-512754498cfa445167059540", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531977026), what: "split", ns: "test.foo", details: { before: { min: { _id: 1300.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1300.0 }, max: { _id: 1400.0 }, lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 1400.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:37.027 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:37.028 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 17 version: 1|30||5127544800fc1508e4df1ce2 based on: 1|28||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:37.029 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|30||000000000000000000000000min: { _id: 1400.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:37.029 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1400.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 1500.0 } ], shardId: "test.foo-_id_1400.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:37.029 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa445167059541
m31100| Fri Feb 22 11:19:37.030 [conn14] splitChunk accepted at version 1|30||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:37.031 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:37-512754498cfa445167059542", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531977031), what: "split", ns: "test.foo", details: { before: { min: { _id: 1400.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1400.0 }, max: { _id: 1500.0 }, lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 1500.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:37.031 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:37.032 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 18 version: 1|32||5127544800fc1508e4df1ce2 based on: 1|30||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:37.033 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|32||000000000000000000000000min: { _id: 1500.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:37.033 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1500.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 1600.0 } ], shardId: "test.foo-_id_1500.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:37.034 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa445167059543
m31100| Fri Feb 22 11:19:37.035 [conn14] splitChunk accepted at version 1|32||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:37.035 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:37-512754498cfa445167059544", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531977035), what: "split", ns: "test.foo", details: { before: { min: { _id: 1500.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1500.0 }, max: { _id: 1600.0 }, lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 1600.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:37.036 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:37.037 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 19 version: 1|34||5127544800fc1508e4df1ce2 based on: 1|32||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:37.038 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|34||000000000000000000000000min: { _id: 1600.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:37.038 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1600.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 1700.0 } ], shardId: "test.foo-_id_1600.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:37.038 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa445167059545
m31100| Fri Feb 22 11:19:37.039 [conn14] splitChunk accepted at version 1|34||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:37.040 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:37-512754498cfa445167059546", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531977040), what: "split", ns: "test.foo", details: { before: { min: { _id: 1600.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1600.0 }, max: { _id: 1700.0 }, lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 1700.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:37.040 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:37.041 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 20 version: 1|36||5127544800fc1508e4df1ce2 based on: 1|34||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:37.042 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|36||000000000000000000000000min: { _id: 1700.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:37.042 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1700.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 1800.0 } ], shardId: "test.foo-_id_1700.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:37.043 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa445167059547
m31100| Fri Feb 22 11:19:37.044 [conn14] splitChunk accepted at version 1|36||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:37.045 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:37-512754498cfa445167059548", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531977045), what: "split", ns: "test.foo", details: { before: { min: { _id: 1700.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1700.0 }, max: { _id: 1800.0 }, lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 1800.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:37.045 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:37.046 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 21 version: 1|38||5127544800fc1508e4df1ce2 based on: 1|36||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:37.047 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|38||000000000000000000000000min: { _id: 1800.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:37.047 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1800.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 1900.0 } ], shardId: "test.foo-_id_1800.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:37.048 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa445167059549
m31100| Fri Feb 22 11:19:37.049 [conn14] splitChunk accepted at version 1|38||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:37.050 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:37-512754498cfa44516705954a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531977050), what: "split", ns: "test.foo", details: { before: { min: { _id: 1800.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1800.0 }, max: { _id: 1900.0 }, lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 1900.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:37.050 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:37.051 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 22 version: 1|40||5127544800fc1508e4df1ce2 based on: 1|38||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:37.052 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|40, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 22
m30999| Fri Feb 22 11:19:37.052 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:19:37.054 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:37.054 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:19:37.054 [initandlisten] connection accepted from 165.225.128.186:62098 #17 (12 connections now open)
m30999| Fri Feb 22 11:19:37.054 [conn1] connected connection!
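The mongos lines above track the split sequence through chunk versions written as `major|minor||epoch` (e.g. `1|40||5127544800fc1508e4df1ce2`): in this log each split advances the minor counter by two (one per resulting chunk) under the same epoch, while the later migration bumps the major counter to `2|0`. A minimal sketch of parsing that notation, in Python with a hypothetical helper name:

```python
def parse_chunk_version(version):
    """Split a chunk version string of the form 'major|minor||epoch',
    as printed by mongos (e.g. '1|40||5127544800fc1508e4df1ce2'),
    into its numeric counters and the collection epoch."""
    counters, _, epoch = version.partition("||")
    major, _, minor = counters.partition("|")
    return int(major), int(minor), epoch

# In the log, '1|24' becomes '1|26' after one split (two new chunks),
# and the moveChunk commit later sets the version to '2|0'.
print(parse_chunk_version("1|40||5127544800fc1508e4df1ce2"))
```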
m30999| Fri Feb 22 11:19:37.093 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 0.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true }
m30999| Fri Feb 22 11:19:37.093 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|3||000000000000000000000000min: { _id: 0.0 }max: { _id: 100.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:19:37.094 [conn14] moveChunk waiting for full cleanup after move
m31100| Fri Feb 22 11:19:37.094 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 0.0 }, max: { _id: 100.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true }
m31100| Fri Feb 22 11:19:37.095 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa44516705954b
m31100| Fri Feb 22 11:19:37.095 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:37-512754498cfa44516705954c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531977095), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 100.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:19:37.096 [conn14] moveChunk request accepted at version 1|40||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:37.096 [conn14] moveChunk number of documents: 100
m31100| Fri Feb 22 11:19:37.096 [conn14] starting new replica set monitor for replica set rs1-rs1 with seed of bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:19:37.097 [conn14] successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31200 for replica set rs1-rs1
m31200| Fri Feb 22 11:19:37.097 [initandlisten] connection accepted from 165.225.128.186:46351 #16 (12 connections now open)
m31100| Fri Feb 22 11:19:37.097 [conn14] changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31200", 1: "bs-smartos-x86-64-1.10gen.cc:31201" } from rs1-rs1/
m31100| Fri Feb 22 11:19:37.097 [conn14] trying to add new host bs-smartos-x86-64-1.10gen.cc:31200 to replica set rs1-rs1
m31200| Fri Feb 22 11:19:37.097 [initandlisten] connection accepted from 165.225.128.186:40234 #17 (13 connections now open)
m31100| Fri Feb 22 11:19:37.097 [conn14] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31200 in replica set rs1-rs1
m31100| Fri Feb 22 11:19:37.097 [conn14] trying to add new host bs-smartos-x86-64-1.10gen.cc:31201 to replica set rs1-rs1
m31100| Fri Feb 22 11:19:37.098 [conn14] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31201 in replica set rs1-rs1
m31201| Fri Feb 22 11:19:37.098 [initandlisten] connection accepted from 165.225.128.186:56063 #9 (8 connections now open)
m31200| Fri Feb 22 11:19:37.098 [initandlisten] connection accepted from 165.225.128.186:43875 #18 (14 connections now open)
m31200| Fri Feb 22 11:19:37.098 [conn16] end connection 165.225.128.186:46351 (13 connections now open)
m31100| Fri Feb 22 11:19:37.099 [conn14] Primary for replica set rs1-rs1 changed to bs-smartos-x86-64-1.10gen.cc:31200
m31201| Fri Feb 22 11:19:37.099 [initandlisten] connection accepted from 165.225.128.186:39391 #10 (9 connections now open)
m31100| Fri Feb 22 11:19:37.100 [conn14] replica set monitor for replica set rs1-rs1 started, address is rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:19:37.100 [ReplicaSetMonitorWatcher] starting
m31200| Fri Feb 22 11:19:37.100 [initandlisten] connection accepted from 165.225.128.186:49821 #19 (14 connections now open)
m31200| Fri Feb 22 11:19:37.100 [migrateThread] starting receiving-end of migration of chunk { _id: 0.0 } -> { _id: 100.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected)
m31200| Fri Feb 22 11:19:37.100 [migrateThread] starting new replica set monitor for replica set rs1-rs0 with seed of bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101
m31100| Fri Feb 22 11:19:37.101 [initandlisten] connection accepted from 165.225.128.186:39169 #18 (13 connections now open)
m31200| Fri Feb 22 11:19:37.101 [migrateThread] successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31100 for replica set rs1-rs0
m31200| Fri Feb 22 11:19:37.101 [migrateThread] changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31100", 1: "bs-smartos-x86-64-1.10gen.cc:31101" } from rs1-rs0/
m31200| Fri Feb 22 11:19:37.101 [migrateThread] trying to add new host bs-smartos-x86-64-1.10gen.cc:31100 to replica set rs1-rs0
m31100| Fri Feb 22 11:19:37.101 [initandlisten] connection accepted from 165.225.128.186:36167 #19 (14 connections now open)
m31200| Fri Feb 22 11:19:37.101 [migrateThread] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31100 in replica set rs1-rs0
m31200| Fri Feb 22 11:19:37.102 [migrateThread] trying to add new host bs-smartos-x86-64-1.10gen.cc:31101 to replica set rs1-rs0
m31200| Fri Feb 22 11:19:37.102 [migrateThread] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31101 in replica set rs1-rs0
m31101| Fri Feb 22 11:19:37.102 [initandlisten] connection accepted from 165.225.128.186:39290 #10 (8 connections now open)
m31100| Fri Feb 22 11:19:37.102 [initandlisten] connection accepted from 165.225.128.186:39920 #20 (15 connections now open)
m31100| Fri Feb 22 11:19:37.103 [conn18] end connection 165.225.128.186:39169 (14 connections now open)
m31200| Fri Feb 22 11:19:37.103 [migrateThread] Primary for replica set rs1-rs0 changed to bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 11:19:37.103 [initandlisten] connection accepted from 165.225.128.186:36721 #11 (9 connections now open)
m31200| Fri Feb 22 11:19:37.104 [migrateThread] replica set monitor for replica set rs1-rs0 started, address is rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101
m31200| Fri Feb 22 11:19:37.104 [ReplicaSetMonitorWatcher] starting
m31100| Fri Feb 22 11:19:37.104 [initandlisten] connection accepted from 165.225.128.186:45792 #21 (15 connections now open)
m31200| Fri Feb 22 11:19:37.105 [FileAllocator] allocating new datafile /data/db/rs1-rs1-0/test.ns, filling with zeroes...
m31200| Fri Feb 22 11:19:37.106 [FileAllocator] done allocating datafile /data/db/rs1-rs1-0/test.ns, size: 16MB, took 0 secs
m31200| Fri Feb 22 11:19:37.106 [FileAllocator] allocating new datafile /data/db/rs1-rs1-0/test.0, filling with zeroes...
m31200| Fri Feb 22 11:19:37.106 [FileAllocator] done allocating datafile /data/db/rs1-rs1-0/test.0, size: 16MB, took 0 secs
m31200| Fri Feb 22 11:19:37.110 [migrateThread] build index test.foo { _id: 1 }
m31100| Fri Feb 22 11:19:37.110 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 0.0 }, max: { _id: 100.0 }, shardKeyPattern: { _id: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31200| Fri Feb 22 11:19:37.111 [migrateThread] build index done. scanned 0 total records. 0.001 secs
m31200| Fri Feb 22 11:19:37.111 [migrateThread] info: creating collection test.foo on add index
m31200| Fri Feb 22 11:19:37.112 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31201| Fri Feb 22 11:19:37.112 [FileAllocator] allocating new datafile /data/db/rs1-rs1-1/test.ns, filling with zeroes...
m31201| Fri Feb 22 11:19:37.112 [FileAllocator] done allocating datafile /data/db/rs1-rs1-1/test.ns, size: 16MB, took 0 secs
m31201| Fri Feb 22 11:19:37.113 [FileAllocator] allocating new datafile /data/db/rs1-rs1-1/test.0, filling with zeroes...
m31201| Fri Feb 22 11:19:37.113 [FileAllocator] done allocating datafile /data/db/rs1-rs1-1/test.0, size: 16MB, took 0 secs
m31201| Fri Feb 22 11:19:37.117 [repl writer worker 1] build index test.foo { _id: 1 }
m31201| Fri Feb 22 11:19:37.118 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs
m31201| Fri Feb 22 11:19:37.118 [repl writer worker 1] info: creating collection test.foo on add index
m31100| Fri Feb 22 11:19:37.121 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 0.0 }, max: { _id: 100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:37.131 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 0.0 }, max: { _id: 100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:37.141 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 0.0 }, max: { _id: 100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:37.157 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 0.0 }, max: { _id: 100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 5, clonedBytes: 145, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:37.189 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 0.0 }, max: { _id: 100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 8, clonedBytes: 232, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:37.253 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 0.0 }, max: { _id: 100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 14, clonedBytes: 406, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:37.382 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 0.0 }, max: { _id: 100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 27, clonedBytes: 783, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:37.638 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 0.0 }, max: { _id: 100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 52, clonedBytes: 1508, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31200| Fri Feb 22 11:19:37.917 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31201 is now in state SECONDARY
m31200| Fri Feb 22 11:19:38.131 [migrateThread] Waiting for replication to catch up before entering critical section
m31200| Fri Feb 22 11:19:38.131 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.0 } -> { _id: 100.0 }
m31200| Fri Feb 22 11:19:38.131 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.0 } -> { _id: 100.0 }
m31100| Fri Feb 22 11:19:38.150 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 0.0 }, max: { _id: 100.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:38.150 [conn14] moveChunk setting version to: 2|0||5127544800fc1508e4df1ce2
m31200| Fri Feb 22 11:19:38.151 [initandlisten] connection accepted from 165.225.128.186:45938 #20 (15 connections now open)
m31200| Fri Feb 22 11:19:38.151 [conn20] Waiting for commit to finish
m31200| Fri Feb 22 11:19:38.152 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.0 } -> { _id: 100.0 }
m31200| Fri Feb 22 11:19:38.152 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.0 } -> { _id: 100.0 }
m31200| Fri Feb 22 11:19:38.152 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:38-5127544a4384cdc634ba227e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361531978152), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 100.0 }, step1 of 5: 11, step2 of 5: 0, step3 of 5: 1019, step4 of 5: 0, step5 of 5: 20 } }
m29000| Fri Feb 22 11:19:38.152 [initandlisten] connection accepted from 165.225.128.186:64688 #9 (9 connections now open)
m31100| Fri Feb 22 11:19:38.161 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 0.0 }, max: { _id: 100.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Fri Feb 22 11:19:38.161 [conn14] moveChunk updating self version to: 2|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo'
m29000| Fri Feb 22 11:19:38.162 [initandlisten] connection accepted from 165.225.128.186:44442 #10 (10 connections now open)
m31100| Fri Feb 22 11:19:38.162 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:38-5127544a8cfa44516705954d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531978162), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 100.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:19:38.162 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:19:38.162 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:19:38.162 [conn14] doing delete inline for cleanup of chunk data
m31100| Fri Feb 22 11:19:38.162 [conn14] moveChunk starting delete for: test.foo from { _id: 0.0 } -> { _id: 100.0 }
m31100| Fri Feb 22 11:19:39.175 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 989ms
m31100| Fri Feb 22 11:19:39.175 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 0.0 } -> { _id: 100.0 }
m31100| Fri Feb 22 11:19:39.175 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:19:39.175 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:19:39.175 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m31100| Fri Feb 22 11:19:39.176 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:39-5127544b8cfa44516705954e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531979176), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 100.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 4, step4 of 6: 1050, step5 of 6: 12, step6 of 6: 1012 } }
m31100| Fri Feb 22 11:19:39.176 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 0.0 }, max: { _id: 100.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:26 r:345 w:10941 reslen:37 2081ms
m30999| Fri Feb 22 11:19:39.176 [conn1] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:19:39.177 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 23 version: 2|1||5127544800fc1508e4df1ce2 based on: 1|40||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:39.177 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 2000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 23
m30999| Fri Feb 22 11:19:39.178 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:19:39.178 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:39.178 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:19:39.178 [conn1] connected connection!
m31101| Fri Feb 22 11:19:39.178 [initandlisten] connection accepted from 165.225.128.186:59280 #12 (10 connections now open)
m30999| Fri Feb 22 11:19:39.178 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 23
m30999| Fri Feb 22 11:19:39.178 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.foo'" }
m30999| Fri Feb 22 11:19:39.178 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), authoritative: true, shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 23
m31200| Fri Feb 22 11:19:39.178 [conn15] no current chunk manager found for this shard, will initialize
m30999| Fri Feb 22 11:19:39.179 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:19:39.179 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:39.180 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:19:39.180 [conn1] connected connection!
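The `moveChunk.from` metadata event above records per-step timings as `stepN of 6: <ms>` pairs (here summing to roughly the 2081ms the command reports). A small sketch for pulling those timings out of such a details string, using a hypothetical helper name and the values from this log:

```python
import re

def step_timings(details):
    """Extract 'stepN of M: <ms>' timings from a moveChunk metadata
    event's details, as printed in the mongod log, into {step: ms}."""
    return {int(n): int(ms)
            for n, ms in re.findall(r"step(\d+) of \d+: (\d+)", details)}

details = ("step1 of 6: 0, step2 of 6: 1, step3 of 6: 4, "
           "step4 of 6: 1050, step5 of 6: 12, step6 of 6: 1012")
timings = step_timings(details)
print(sum(timings.values()))  # → 2079, close to the 2081ms command total
```

Most of the time here sits in step 4 (the clone/catch-up phase) and step 6 (the post-move range delete with `waitForDelete: true`), which matches the ~1s replication wait logged before the delete finishes.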
m31201| Fri Feb 22 11:19:39.180 [initandlisten] connection accepted from 165.225.128.186:39345 #11 (10 connections now open)
m30999| Fri Feb 22 11:19:39.181 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:39.182 BackgroundJob starting: ConnectBG
m31101| Fri Feb 22 11:19:39.182 [initandlisten] connection accepted from 165.225.128.186:43377 #13 (11 connections now open)
m30999| Fri Feb 22 11:19:39.182 [conn1] connected connection!
m30999| Fri Feb 22 11:19:39.210 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 100.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true }
m30999| Fri Feb 22 11:19:39.210 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|5||000000000000000000000000min: { _id: 100.0 }max: { _id: 200.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:19:39.210 [conn14] moveChunk waiting for full cleanup after move
m31100| Fri Feb 22 11:19:39.210 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 100.0 }, max: { _id: 200.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_100.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true }
m31100| Fri Feb 22 11:19:39.211 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 5127544b8cfa44516705954f
m31100| Fri Feb 22 11:19:39.211 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:39-5127544b8cfa445167059550", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531979211), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 100.0 }, max: { _id: 200.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:19:39.212 [conn14] moveChunk request accepted at version 2|1||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:39.213 [conn14] moveChunk number of documents: 100
m31200| Fri Feb 22 11:19:39.213 [migrateThread] starting receiving-end of migration of chunk { _id: 100.0 } -> { _id: 200.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected)
m31200| Fri Feb 22 11:19:39.214 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31100| Fri Feb 22 11:19:39.223 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 100.0 }, max: { _id: 200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:39.233 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 100.0 }, max: { _id: 200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:39.244 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 100.0 }, max: { _id: 200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:39.254 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 100.0 }, max: { _id: 200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:39.270 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 100.0 }, max: { _id: 200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:39.302 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 100.0 }, max: { _id: 200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:39.366 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 100.0 }, max: { _id: 200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:39.494 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 100.0 }, max: { _id: 200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:39.751 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 100.0 }, max: { _id: 200.0
}, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 11:19:40.234 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 11:19:40.234 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 100.0 } -> { _id: 200.0 } m31200| Fri Feb 22 11:19:40.234 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 100.0 } -> { _id: 200.0 } m31100| Fri Feb 22 11:19:40.263 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 100.0 }, max: { _id: 200.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:40.263 [conn14] moveChunk setting version to: 3|0||5127544800fc1508e4df1ce2 m31200| Fri Feb 22 11:19:40.263 [conn20] Waiting for commit to finish m31200| Fri Feb 22 11:19:40.265 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 100.0 } -> { _id: 200.0 } m31200| Fri Feb 22 11:19:40.265 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 100.0 } -> { _id: 200.0 } m31200| Fri Feb 22 11:19:40.265 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:40-5127544c4384cdc634ba227f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361531980265), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 100.0 }, max: { _id: 200.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1020, step4 of 5: 0, step5 of 5: 31 } } m31100| Fri Feb 22 11:19:40.273 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { 
_id: 100.0 }, max: { _id: 200.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 11:19:40.273 [conn14] moveChunk updating self version to: 3|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo' m31100| Fri Feb 22 11:19:40.274 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:40-5127544c8cfa445167059551", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531980274), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 100.0 }, max: { _id: 200.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:40.274 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:40.274 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:40.274 [conn14] doing delete inline for cleanup of chunk data m31100| Fri Feb 22 11:19:40.274 [conn14] moveChunk starting delete for: test.foo from { _id: 100.0 } -> { _id: 200.0 } m31100| Fri Feb 22 11:19:41.283 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 986ms m31100| Fri Feb 22 11:19:41.283 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 100.0 } -> { _id: 200.0 } m31100| Fri Feb 22 11:19:41.283 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:41.283 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:41.284 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. 
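
The repeated "moveChunk data transfer progress" entries in the migration above can be mined with a short script. This is a sketch using only the Python standard library, with the regex keyed to the `cloned`/`clonedBytes` field names visible in this log and a few abridged sample entries copied from it:

```python
import re

# Abridged "moveChunk data transfer progress" entries from the log above.
LOG = """
m31100| 11:19:39.223 [conn14] moveChunk data transfer progress: { state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| 11:19:39.233 [conn14] moveChunk data transfer progress: { state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| 11:19:40.263 [conn14] moveChunk data transfer progress: { state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 }
"""

PROGRESS = re.compile(r'cloned: (\d+), clonedBytes: (\d+)')

def clone_progress(text):
    """Return (cloned, clonedBytes) pairs in the order they appear."""
    return [(int(c), int(b)) for c, b in PROGRESS.findall(text)]

pairs = clone_progress(LOG)
print(pairs)  # [(1, 29), (2, 58), (100, 2900)]
# Every document in this chunk is 29 bytes: clonedBytes == 29 * cloned.
print(all(b == 29 * c for c, b in pairs))  # True
```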
m31100| Fri Feb 22 11:19:41.284 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:41-5127544d8cfa445167059552", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531981284), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 100.0 }, max: { _id: 200.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1009 } } m31100| Fri Feb 22 11:19:41.284 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 100.0 }, max: { _id: 200.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_100.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:26 r:315 w:10984 reslen:37 2073ms m30999| Fri Feb 22 11:19:41.284 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:19:41.285 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 24 version: 3|1||5127544800fc1508e4df1ce2 based on: 2|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:19:41.286 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 3000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 24 m30999| Fri Feb 22 11:19:41.286 [conn1] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:19:41.286 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 
test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 3000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 24 m30999| Fri Feb 22 11:19:41.287 [conn1] setShardVersion success: { oldVersion: Timestamp 2000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:19:41.289 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 11:19:41.289 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:19:41.289 [conn1] connected connection! m31201| Fri Feb 22 11:19:41.289 [initandlisten] connection accepted from 165.225.128.186:42134 #12 (11 connections now open) m30999| Fri Feb 22 11:19:41.315 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 200.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true } m30999| Fri Feb 22 11:19:41.315 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|7||000000000000000000000000min: { _id: 200.0 }max: { _id: 300.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m31100| Fri Feb 22 11:19:41.315 [conn14] moveChunk waiting for full cleanup after move m31100| Fri Feb 22 11:19:41.316 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 200.0 }, max: { _id: 300.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_200.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", 
secondaryThrottle: true, waitForDelete: true } m31100| Fri Feb 22 11:19:41.317 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 5127544d8cfa445167059553 m31100| Fri Feb 22 11:19:41.317 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:41-5127544d8cfa445167059554", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531981317), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 200.0 }, max: { _id: 300.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:41.318 [conn14] moveChunk request accepted at version 3|1||5127544800fc1508e4df1ce2 m31100| Fri Feb 22 11:19:41.318 [conn14] moveChunk number of documents: 100 m31200| Fri Feb 22 11:19:41.318 [migrateThread] starting receiving-end of migration of chunk { _id: 200.0 } -> { _id: 300.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected) m31200| Fri Feb 22 11:19:41.319 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 11:19:41.328 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 200.0 }, max: { _id: 300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:41.338 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 200.0 }, max: { _id: 300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:41.348 [conn14] moveChunk data transfer progress: { active: true, ns: 
"test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 200.0 }, max: { _id: 300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:41.359 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 200.0 }, max: { _id: 300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:41.375 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 200.0 }, max: { _id: 300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:41.407 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 200.0 }, max: { _id: 300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:41.471 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 200.0 }, max: { _id: 300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:41.599 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 200.0 }, max: { _id: 300.0 
}, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:41.856 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 200.0 }, max: { _id: 300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 11:19:42.338 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 11:19:42.338 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 200.0 } -> { _id: 300.0 } m31200| Fri Feb 22 11:19:42.338 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 200.0 } -> { _id: 300.0 } m31100| Fri Feb 22 11:19:42.368 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 200.0 }, max: { _id: 300.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:42.368 [conn14] moveChunk setting version to: 4|0||5127544800fc1508e4df1ce2 m31200| Fri Feb 22 11:19:42.368 [conn20] Waiting for commit to finish m31200| Fri Feb 22 11:19:42.368 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 200.0 } -> { _id: 300.0 } m31200| Fri Feb 22 11:19:42.368 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 200.0 } -> { _id: 300.0 } m31200| Fri Feb 22 11:19:42.369 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:42-5127544e4384cdc634ba2280", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361531982369), what: 
"moveChunk.to", ns: "test.foo", details: { min: { _id: 200.0 }, max: { _id: 300.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1019, step4 of 5: 0, step5 of 5: 30 } } m31100| Fri Feb 22 11:19:42.378 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 200.0 }, max: { _id: 300.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 11:19:42.378 [conn14] moveChunk updating self version to: 4|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo' m31100| Fri Feb 22 11:19:42.379 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:42-5127544e8cfa445167059555", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531982379), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 200.0 }, max: { _id: 300.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:42.379 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:42.379 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:42.379 [conn14] doing delete inline for cleanup of chunk data m31100| Fri Feb 22 11:19:42.379 [conn14] moveChunk starting delete for: test.foo from { _id: 200.0 } -> { _id: 300.0 } m30999| Fri Feb 22 11:19:42.625 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:19:42.625 [Balancer] skipping balancing round because balancing is disabled m31100| Fri Feb 22 11:19:43.389 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 984ms m31100| Fri Feb 22 11:19:43.389 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 200.0 } -> { _id: 300.0 } m31100| Fri Feb 22 11:19:43.389 [conn14] MigrateFromStatus::done About to 
acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:43.389 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:43.389 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. m31100| Fri Feb 22 11:19:43.389 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:43-5127544f8cfa445167059556", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531983389), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 200.0 }, max: { _id: 300.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1049, step5 of 6: 11, step6 of 6: 1009 } } m31100| Fri Feb 22 11:19:43.390 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 200.0 }, max: { _id: 300.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_200.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:26 r:310 w:11093 reslen:37 2074ms m30999| Fri Feb 22 11:19:43.390 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:19:43.391 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 25 version: 4|1||5127544800fc1508e4df1ce2 based on: 3|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:19:43.395 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 4000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 25 
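
Each completed migration logs a per-step timing breakdown in its "moveChunk.from" metadata event. A small parser, sketched here with a regex keyed to the exact `stepN of 6` field text shown above, can recover and total those steps; for the second migration they account for 2070 of the reported 2074ms command time:

```python
import re

# The step breakdown from the second migration's "moveChunk.from" event above.
DETAILS = ("step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, "
           "step4 of 6: 1049, step5 of 6: 11, step6 of 6: 1009")

STEP = re.compile(r'step(\d+) of 6: (\d+)')

def step_times(details):
    """Map step number -> milliseconds spent, as logged by the donor shard."""
    return {int(n): int(ms) for n, ms in STEP.findall(details)}

steps = step_times(DETAILS)
print(steps)                 # {1: 0, 2: 1, 3: 0, 4: 1049, 5: 11, 6: 1009}
print(sum(steps.values()))   # 2070 -- step 4 (data transfer) and step 6
                             # (waitForDelete cleanup) dominate
```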
m30999| Fri Feb 22 11:19:43.395 [conn1] setShardVersion success: { oldVersion: Timestamp 3000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:19:43.395 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 4000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 25 m30999| Fri Feb 22 11:19:43.396 [conn1] setShardVersion success: { oldVersion: Timestamp 3000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:19:43.426 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 300.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true } m30999| Fri Feb 22 11:19:43.426 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|9||000000000000000000000000min: { _id: 300.0 }max: { _id: 400.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m31100| Fri Feb 22 11:19:43.426 [conn14] moveChunk waiting for full cleanup after move m31100| Fri Feb 22 11:19:43.426 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 300.0 }, max: { _id: 400.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_300.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } m31100| Fri Feb 22 11:19:43.427 [conn14] 
distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 5127544f8cfa445167059557 m31100| Fri Feb 22 11:19:43.427 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:43-5127544f8cfa445167059558", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531983427), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 300.0 }, max: { _id: 400.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:43.428 [conn14] moveChunk request accepted at version 4|1||5127544800fc1508e4df1ce2 m31100| Fri Feb 22 11:19:43.428 [conn14] moveChunk number of documents: 100 m31200| Fri Feb 22 11:19:43.429 [migrateThread] starting receiving-end of migration of chunk { _id: 300.0 } -> { _id: 400.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected) m31200| Fri Feb 22 11:19:43.429 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 11:19:43.439 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 300.0 }, max: { _id: 400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:43.449 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 300.0 }, max: { _id: 400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:43.459 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 300.0 }, max: { _id: 400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:43.469 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 300.0 }, max: { _id: 400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:43.485 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 300.0 }, max: { _id: 400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:43.518 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 300.0 }, max: { _id: 400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:43.582 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 300.0 }, max: { _id: 400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:43.710 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 300.0 }, max: { _id: 400.0 }, shardKeyPattern: 
{ _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:43.966 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 300.0 }, max: { _id: 400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 11:19:44.449 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 11:19:44.449 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 300.0 } -> { _id: 400.0 } m31200| Fri Feb 22 11:19:44.449 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 300.0 } -> { _id: 400.0 } m31100| Fri Feb 22 11:19:44.479 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 300.0 }, max: { _id: 400.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:44.479 [conn14] moveChunk setting version to: 5|0||5127544800fc1508e4df1ce2 m31200| Fri Feb 22 11:19:44.479 [conn20] Waiting for commit to finish m31200| Fri Feb 22 11:19:44.480 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 300.0 } -> { _id: 400.0 } m31200| Fri Feb 22 11:19:44.480 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 300.0 } -> { _id: 400.0 } m31200| Fri Feb 22 11:19:44.480 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:44-512754504384cdc634ba2281", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361531984480), what: "moveChunk.to", ns: "test.foo", 
details: { min: { _id: 300.0 }, max: { _id: 400.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1019, step4 of 5: 0, step5 of 5: 31 } } m31100| Fri Feb 22 11:19:44.489 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 300.0 }, max: { _id: 400.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 11:19:44.489 [conn14] moveChunk updating self version to: 5|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo' m31100| Fri Feb 22 11:19:44.490 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:44-512754508cfa445167059559", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531984490), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 300.0 }, max: { _id: 400.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:44.490 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:44.490 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:44.490 [conn14] doing delete inline for cleanup of chunk data m31100| Fri Feb 22 11:19:44.490 [conn14] moveChunk starting delete for: test.foo from { _id: 300.0 } -> { _id: 400.0 } m31100| Fri Feb 22 11:19:45.501 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 988ms m31100| Fri Feb 22 11:19:45.501 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 300.0 } -> { _id: 400.0 } m31100| Fri Feb 22 11:19:45.501 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:45.501 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:45.501 [conn14] distributed lock 
'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m31100| Fri Feb 22 11:19:45.501 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:45-512754518cfa44516705955a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531985501), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 300.0 }, max: { _id: 400.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1010 } }
m31100| Fri Feb 22 11:19:45.501 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 300.0 }, max: { _id: 400.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_300.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:23 r:285 w:11569 reslen:37 2075ms
m30999| Fri Feb 22 11:19:45.501 [conn1] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:19:45.503 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 26 version: 5|1||5127544800fc1508e4df1ce2 based on: 4|1||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:45.504 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 5000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 26
m30999| Fri Feb 22 11:19:45.504 [conn1] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:19:45.504 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 5000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 26
m30999| Fri Feb 22 11:19:45.505 [conn1] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:19:45.537 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 400.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true }
m30999| Fri Feb 22 11:19:45.537 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|11||000000000000000000000000min: { _id: 400.0 }max: { _id: 500.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:19:45.537 [conn14] moveChunk waiting for full cleanup after move
m31100| Fri Feb 22 11:19:45.538 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 400.0 }, max: { _id: 500.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_400.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true }
m31100| Fri Feb 22 11:19:45.539 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754518cfa44516705955b
m31100| Fri Feb 22 11:19:45.539 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:45-512754518cfa44516705955c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531985539), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 400.0 }, max: { _id: 500.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:19:45.540 [conn14] moveChunk request accepted at version 5|1||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:45.540 [conn14] moveChunk number of documents: 100
m31200| Fri Feb 22 11:19:45.541 [migrateThread] starting receiving-end of migration of chunk { _id: 400.0 } -> { _id: 500.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected)
m31200| Fri Feb 22 11:19:45.542 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31100| Fri Feb 22 11:19:45.551 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 400.0 }, max: { _id: 500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:45.561 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 400.0 }, max: { _id: 500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:45.571 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 400.0 }, max: { _id: 500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:45.581 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 400.0 }, max: { _id: 500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:45.598 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 400.0 }, max: { _id: 500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:45.630 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 400.0 }, max: { _id: 500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:45.694 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 400.0 }, max: { _id: 500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:45.822 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 400.0 }, max: { _id: 500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31201| Fri Feb 22 11:19:45.918 [conn2] end connection 165.225.128.186:50247 (10 connections now open)
m31201| Fri Feb 22 11:19:45.918 [initandlisten] connection accepted from 165.225.128.186:45106 #13 (11 connections now open)
m31100| Fri Feb 22 11:19:46.078 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 400.0 }, max: { _id: 500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31200| Fri Feb 22 11:19:46.562 [migrateThread] Waiting for replication to catch up before entering critical section
m31200| Fri Feb 22 11:19:46.562 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 400.0 } -> { _id: 500.0 }
m31200| Fri Feb 22 11:19:46.563 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 400.0 } -> { _id: 500.0 }
m31100| Fri Feb 22 11:19:46.591 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 400.0 }, max: { _id: 500.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:46.591 [conn14] moveChunk setting version to: 6|0||5127544800fc1508e4df1ce2
m31200| Fri Feb 22 11:19:46.591 [conn20] Waiting for commit to finish
m31200| Fri Feb 22 11:19:46.593 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 400.0 } -> { _id: 500.0 }
m31200| Fri Feb 22 11:19:46.593 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 400.0 } -> { _id: 500.0 }
m31200| Fri Feb 22 11:19:46.593 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:46-512754524384cdc634ba2282", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361531986593), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 400.0 }, max: { _id: 500.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1020, step4 of 5: 0, step5 of 5: 31 } }
m31100| Fri Feb 22 11:19:46.601 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 400.0 }, max: { _id: 500.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Fri Feb 22 11:19:46.601 [conn14] moveChunk updating self version to: 6|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo'
m31100| Fri Feb 22 11:19:46.602 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:46-512754528cfa44516705955d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531986602), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 400.0 }, max: { _id: 500.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:19:46.602 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:19:46.602 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:19:46.602 [conn14] doing delete inline for cleanup of chunk data
m31100| Fri Feb 22 11:19:46.602 [conn14] moveChunk starting delete for: test.foo from { _id: 400.0 } -> { _id: 500.0 }
m30999| Fri Feb 22 11:19:46.782 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs0
m30999| Fri Feb 22 11:19:46.783 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531986782), ok: 1.0 }
m30999| Fri Feb 22 11:19:46.783 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:46.783 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:46.783 [ReplicaSetMonitorWatcher] _check : rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:46.783 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531986783), ok: 1.0 }
m30999| Fri Feb 22 11:19:46.783 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:46.783 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:46.783 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531986783), ok: 1.0 }
m30999| Fri Feb 22 11:19:46.783 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:46.783 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:46.783 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531986783), ok: 1.0 }
m30999| Fri Feb 22 11:19:46.790 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:46.790 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:46.790 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs1
m30999| Fri Feb 22 11:19:46.790 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531986790), ok: 1.0 }
m30999| Fri Feb 22 11:19:46.790 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:46.790 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:46.790 [ReplicaSetMonitorWatcher] _check : rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:46.790 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531986790), ok: 1.0 }
m30999| Fri Feb 22 11:19:46.791 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:46.791 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:46.791 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531986791), ok: 1.0 }
m30999| Fri Feb 22 11:19:46.791 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:46.791 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:46.791 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31201 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31201", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531986791), ok: 1.0 }
m30999| Fri Feb 22 11:19:46.791 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:46.791 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m31101| Fri Feb 22 11:19:47.389 [conn4] end connection 165.225.128.186:46605 (10 connections now open)
m31101| Fri Feb 22 11:19:47.389 [initandlisten] connection accepted from 165.225.128.186:40295 #14 (11 connections now open)
m31100| Fri Feb 22 11:19:47.614 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 976ms
m31100| Fri Feb 22 11:19:47.614 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 400.0 } -> { _id: 500.0 }
m31100| Fri Feb 22 11:19:47.614 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:19:47.614 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:19:47.615 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m31100| Fri Feb 22 11:19:47.615 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:47-512754538cfa44516705955e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531987615), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 400.0 }, max: { _id: 500.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1049, step5 of 6: 11, step6 of 6: 1012 } }
m31100| Fri Feb 22 11:19:47.615 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 400.0 }, max: { _id: 500.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_400.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:23 r:293 w:11361 reslen:37 2077ms
m30999| Fri Feb 22 11:19:47.615 [conn1] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:19:47.616 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 27 version: 6|1||5127544800fc1508e4df1ce2 based on: 5|1||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:47.617 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 6000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 27
m30999| Fri Feb 22 11:19:47.617 [conn1] setShardVersion success: { oldVersion: Timestamp 5000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:19:47.617 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 6000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 27
m30999| Fri Feb 22 11:19:47.618 [conn1] setShardVersion success: { oldVersion: Timestamp 5000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:19:47.647 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 500.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true }
m30999| Fri Feb 22 11:19:47.647 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|13||000000000000000000000000min: { _id: 500.0 }max: { _id: 600.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:19:47.647 [conn14] moveChunk waiting for full cleanup after move
m31100| Fri Feb 22 11:19:47.648 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 500.0 }, max: { _id: 600.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_500.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true }
m31100| Fri Feb 22 11:19:47.648 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754538cfa44516705955f
m31100| Fri Feb 22 11:19:47.649 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:47-512754538cfa445167059560", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531987649), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 500.0 }, max: { _id: 600.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:19:47.650 [conn14] moveChunk request accepted at version 6|1||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:47.650 [conn14] moveChunk number of documents: 100
m31200| Fri Feb 22 11:19:47.650 [migrateThread] starting receiving-end of migration of chunk { _id: 500.0 } -> { _id: 600.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected)
m31200| Fri Feb 22 11:19:47.651 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31100| Fri Feb 22 11:19:47.660 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 500.0 }, max: { _id: 600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:47.671 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 500.0 }, max: { _id: 600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:47.681 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 500.0 }, max: { _id: 600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:47.691 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 500.0 }, max: { _id: 600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:47.707 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 500.0 }, max: { _id: 600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:47.739 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 500.0 }, max: { _id: 600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:47.804 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 500.0 }, max: { _id: 600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:47.932 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 500.0 }, max: { _id: 600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31200| Fri Feb 22 11:19:48.108 [conn3] end connection 165.225.128.186:41805 (14 connections now open)
m31200| Fri Feb 22 11:19:48.109 [initandlisten] connection accepted from 165.225.128.186:33784 #21 (15 connections now open)
m31100| Fri Feb 22 11:19:48.188 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 500.0 }, max: { _id: 600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30999| Fri Feb 22 11:19:48.626 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:19:48.626 [Balancer] skipping balancing round because balancing is disabled
m31200| Fri Feb 22 11:19:48.670 [migrateThread] Waiting for replication to catch up before entering critical section
m31200| Fri Feb 22 11:19:48.670 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 500.0 } -> { _id: 600.0 }
m31200| Fri Feb 22 11:19:48.671 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 500.0 } -> { _id: 600.0 }
m31100| Fri Feb 22 11:19:48.700 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 500.0 }, max: { _id: 600.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:48.700 [conn14] moveChunk setting version to: 7|0||5127544800fc1508e4df1ce2
m31200| Fri Feb 22 11:19:48.700 [conn20] Waiting for commit to finish
m31200| Fri Feb 22 11:19:48.701 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 500.0 } -> { _id: 600.0 }
m31200| Fri Feb 22 11:19:48.701 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 500.0 } -> { _id: 600.0 }
m31200| Fri Feb 22 11:19:48.701 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:48-512754544384cdc634ba2283", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361531988701), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 500.0 }, max: { _id: 600.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1019, step4 of 5: 0, step5 of 5: 31 } }
m31100| Fri Feb 22 11:19:48.711 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 500.0 }, max: { _id: 600.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Fri Feb 22 11:19:48.711 [conn14] moveChunk updating self version to: 7|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo'
m31100| Fri Feb 22 11:19:48.711 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:48-512754548cfa445167059561", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531988711), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 500.0 }, max: { _id: 600.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:19:48.712 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:19:48.712 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:19:48.712 [conn14] doing delete inline for cleanup of chunk data
m31100| Fri Feb 22 11:19:48.712 [conn14] moveChunk starting delete for: test.foo from { _id: 500.0 } -> { _id: 600.0 }
m31100| Fri Feb 22 11:19:49.600 [conn7] end connection 165.225.128.186:59785 (14 connections now open)
m31100| Fri Feb 22 11:19:49.600 [initandlisten] connection accepted from 165.225.128.186:64933 #22 (15 connections now open)
m31100| Fri Feb 22 11:19:49.722 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 989ms
m31100| Fri Feb 22 11:19:49.722 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 500.0 } -> { _id: 600.0 }
m31100| Fri Feb 22 11:19:49.722 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:19:49.722 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:19:49.723 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m31100| Fri Feb 22 11:19:49.723 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:49-512754558cfa445167059562", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531989723), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 500.0 }, max: { _id: 600.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1010 } }
m31100| Fri Feb 22 11:19:49.723 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 500.0 }, max: { _id: 600.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_500.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:25 r:314 w:10697 reslen:37 2075ms
m30999| Fri Feb 22 11:19:49.723 [conn1] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:19:49.724 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 28 version: 7|1||5127544800fc1508e4df1ce2 based on: 6|1||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:49.725 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 7000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 28
m30999| Fri Feb 22 11:19:49.725 [conn1] setShardVersion success: { oldVersion: Timestamp 6000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:19:49.725 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 7000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 28
m30999| Fri Feb 22 11:19:49.726 [conn1] setShardVersion success: { oldVersion: Timestamp 6000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:19:49.753 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 600.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true }
m30999| Fri Feb 22 11:19:49.753 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|15||000000000000000000000000min: { _id: 600.0 }max: { _id: 700.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:19:49.754 [conn14] moveChunk waiting for full cleanup after move
m31100| Fri Feb 22 11:19:49.754 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 600.0 }, max: { _id: 700.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_600.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true }
m31100| Fri Feb 22 11:19:49.755 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754558cfa445167059563
m31100| Fri Feb 22 11:19:49.755 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:49-512754558cfa445167059564", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531989755), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 600.0 }, max: { _id: 700.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:19:49.756 [conn14] moveChunk request accepted at version 7|1||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:49.756 [conn14] moveChunk number of documents: 100
m31200| Fri Feb 22 11:19:49.757 [migrateThread] starting receiving-end of migration of chunk { _id: 600.0 } -> { _id: 700.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected)
m31200| Fri Feb 22 11:19:49.757 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31100| Fri Feb 22 11:19:49.767 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 600.0 }, max: { _id: 700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:49.777 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 600.0 }, max: { _id: 700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:49.787 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 600.0 }, max: { _id: 700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:49.797 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 600.0 }, max: { _id: 700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:49.814 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 600.0 }, max: { _id: 700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:49.846 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 600.0 }, max: { _id: 700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:49.910 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 600.0 }, max: { _id: 700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:50.038 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 600.0 }, max: { _id: 700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:50.295 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 600.0 }, max: { _id: 700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31200| Fri Feb 22 11:19:50.778 [migrateThread] Waiting for replication to catch up before entering critical section
m31200| Fri Feb 22 11:19:50.778 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 600.0 } -> { _id: 700.0 }
m31200| Fri Feb 22 11:19:50.778 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 600.0 } -> { _id: 700.0 }
m31100| Fri Feb 22 11:19:50.807 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 600.0 }, max: { _id: 700.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:50.807 [conn14] moveChunk setting version to: 8|0||5127544800fc1508e4df1ce2
m31200| Fri Feb 22 11:19:50.807 [conn20] Waiting for commit to finish
m31200| Fri Feb 22 11:19:50.809 [migrateThread]
migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 600.0 } -> { _id: 700.0 } m31200| Fri Feb 22 11:19:50.809 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 600.0 } -> { _id: 700.0 } m31200| Fri Feb 22 11:19:50.809 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:50-512754564384cdc634ba2284", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361531990809), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 600.0 }, max: { _id: 700.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1020, step4 of 5: 0, step5 of 5: 31 } } m31100| Fri Feb 22 11:19:50.817 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 600.0 }, max: { _id: 700.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 11:19:50.817 [conn14] moveChunk updating self version to: 8|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo' m31100| Fri Feb 22 11:19:50.818 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:50-512754568cfa445167059565", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531990818), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 600.0 }, max: { _id: 700.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:50.818 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:50.818 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:50.818 [conn14] doing delete inline for cleanup of chunk data m31100| Fri Feb 22 11:19:50.818 [conn14] moveChunk starting delete for: test.foo from { _id: 600.0 } -> { 
_id: 700.0 } m31100| Fri Feb 22 11:19:51.829 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 989ms m31100| Fri Feb 22 11:19:51.830 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 600.0 } -> { _id: 700.0 } m31100| Fri Feb 22 11:19:51.830 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:51.830 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:51.840 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. m31100| Fri Feb 22 11:19:51.840 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:51-512754578cfa445167059566", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531991840), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 600.0 }, max: { _id: 700.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1011 } } m31100| Fri Feb 22 11:19:51.840 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 600.0 }, max: { _id: 700.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_600.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:35 r:341 w:10844 reslen:37 2086ms m30999| Fri Feb 22 11:19:51.840 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:19:51.841 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 29 version: 8|1||5127544800fc1508e4df1ce2 based on: 7|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:19:51.842 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { 
setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 8000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 29 m30999| Fri Feb 22 11:19:51.842 [conn1] setShardVersion success: { oldVersion: Timestamp 7000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:19:51.842 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 8000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 29 m30999| Fri Feb 22 11:19:51.843 [conn1] setShardVersion success: { oldVersion: Timestamp 7000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:19:51.870 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 700.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true } m30999| Fri Feb 22 11:19:51.870 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|17||000000000000000000000000min: { _id: 700.0 }max: { _id: 800.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m31100| Fri Feb 22 11:19:51.870 [conn14] moveChunk waiting for full cleanup after move m31100| Fri Feb 22 11:19:51.870 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: 
"rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 700.0 }, max: { _id: 800.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_700.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } m31100| Fri Feb 22 11:19:51.871 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754578cfa445167059567 m31100| Fri Feb 22 11:19:51.871 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:51-512754578cfa445167059568", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531991871), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 700.0 }, max: { _id: 800.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:51.872 [conn14] moveChunk request accepted at version 8|1||5127544800fc1508e4df1ce2 m31100| Fri Feb 22 11:19:51.872 [conn14] moveChunk number of documents: 100 m31200| Fri Feb 22 11:19:51.873 [migrateThread] starting receiving-end of migration of chunk { _id: 700.0 } -> { _id: 800.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected) m31200| Fri Feb 22 11:19:51.873 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 11:19:51.883 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 700.0 }, max: { _id: 800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:51.893 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { 
_id: 700.0 }, max: { _id: 800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:51.903 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 700.0 }, max: { _id: 800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:51.913 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 700.0 }, max: { _id: 800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:51.930 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 700.0 }, max: { _id: 800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:51.962 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 700.0 }, max: { _id: 800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:52.026 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 700.0 }, max: { _id: 800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 
0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:52.154 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 700.0 }, max: { _id: 800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:52.410 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 700.0 }, max: { _id: 800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 11:19:52.892 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 11:19:52.892 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 700.0 } -> { _id: 800.0 } m31200| Fri Feb 22 11:19:52.893 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 700.0 } -> { _id: 800.0 } m31100| Fri Feb 22 11:19:52.923 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 700.0 }, max: { _id: 800.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:52.923 [conn14] moveChunk setting version to: 9|0||5127544800fc1508e4df1ce2 m31200| Fri Feb 22 11:19:52.923 [conn20] Waiting for commit to finish m31200| Fri Feb 22 11:19:52.923 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 700.0 } -> { _id: 800.0 } m31200| Fri Feb 22 11:19:52.923 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 700.0 } -> { _id: 800.0 } m31200| 
Fri Feb 22 11:19:52.923 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:52-512754584384cdc634ba2285", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361531992923), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 700.0 }, max: { _id: 800.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1018, step4 of 5: 0, step5 of 5: 30 } } m31100| Fri Feb 22 11:19:52.933 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 700.0 }, max: { _id: 800.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 11:19:52.933 [conn14] moveChunk updating self version to: 9|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo' m31100| Fri Feb 22 11:19:52.934 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:52-512754588cfa445167059569", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531992934), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 700.0 }, max: { _id: 800.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:52.934 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:52.934 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:52.934 [conn14] doing delete inline for cleanup of chunk data m31100| Fri Feb 22 11:19:52.934 [conn14] moveChunk starting delete for: test.foo from { _id: 700.0 } -> { _id: 800.0 } m31100| Fri Feb 22 11:19:53.943 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 985ms m31100| Fri Feb 22 11:19:53.943 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 700.0 } -> { 
_id: 800.0 } m31100| Fri Feb 22 11:19:53.943 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:53.943 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:53.944 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. m31100| Fri Feb 22 11:19:53.944 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:53-512754598cfa44516705956a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531993944), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 700.0 }, max: { _id: 800.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1049, step5 of 6: 11, step6 of 6: 1009 } } m31100| Fri Feb 22 11:19:53.944 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 700.0 }, max: { _id: 800.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_700.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:26 r:313 w:11356 reslen:37 2073ms m30999| Fri Feb 22 11:19:53.944 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:19:53.945 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 30 version: 9|1||5127544800fc1508e4df1ce2 based on: 8|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:19:53.946 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 9000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 30 m30999| Fri Feb 22 11:19:53.946 [conn1] setShardVersion success: { oldVersion: Timestamp 8000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:19:53.946 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 9000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 30 m30999| Fri Feb 22 11:19:53.947 [conn1] setShardVersion success: { oldVersion: Timestamp 8000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:19:53.973 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 800.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true } m30999| Fri Feb 22 11:19:53.973 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|19||000000000000000000000000min: { _id: 800.0 }max: { _id: 900.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m31100| Fri Feb 22 11:19:53.974 [conn14] moveChunk waiting for full cleanup after move m31100| Fri Feb 22 11:19:53.974 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 800.0 }, max: { _id: 900.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_800.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", 
secondaryThrottle: true, waitForDelete: true } m31100| Fri Feb 22 11:19:53.975 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754598cfa44516705956b m31100| Fri Feb 22 11:19:53.975 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:53-512754598cfa44516705956c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531993975), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 800.0 }, max: { _id: 900.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:53.976 [conn14] moveChunk request accepted at version 9|1||5127544800fc1508e4df1ce2 m31100| Fri Feb 22 11:19:53.976 [conn14] moveChunk number of documents: 100 m31200| Fri Feb 22 11:19:53.976 [migrateThread] starting receiving-end of migration of chunk { _id: 800.0 } -> { _id: 900.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected) m31200| Fri Feb 22 11:19:53.977 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 11:19:53.986 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 800.0 }, max: { _id: 900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:53.996 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 800.0 }, max: { _id: 900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:54.007 [conn14] moveChunk data transfer progress: { active: true, ns: 
"test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 800.0 }, max: { _id: 900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:54.017 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 800.0 }, max: { _id: 900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:54.033 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 800.0 }, max: { _id: 900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:54.065 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 800.0 }, max: { _id: 900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:54.129 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 800.0 }, max: { _id: 900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:54.258 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 800.0 }, max: { _id: 900.0 
}, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:54.514 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 800.0 }, max: { _id: 900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30999| Fri Feb 22 11:19:54.627 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:19:54.627 [Balancer] skipping balancing round because balancing is disabled m31200| Fri Feb 22 11:19:54.997 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 11:19:54.997 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 800.0 } -> { _id: 900.0 } m31200| Fri Feb 22 11:19:54.997 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 800.0 } -> { _id: 900.0 } m31100| Fri Feb 22 11:19:55.026 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 800.0 }, max: { _id: 900.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:55.026 [conn14] moveChunk setting version to: 10|0||5127544800fc1508e4df1ce2 m31200| Fri Feb 22 11:19:55.026 [conn20] Waiting for commit to finish m31200| Fri Feb 22 11:19:55.028 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 800.0 } -> { _id: 900.0 } m31200| Fri Feb 22 11:19:55.028 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 800.0 } -> { _id: 900.0 } m31200| Fri Feb 22 11:19:55.028 [migrateThread] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:55-5127545b4384cdc634ba2286", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361531995028), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 800.0 }, max: { _id: 900.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1020, step4 of 5: 0, step5 of 5: 31 } } m31100| Fri Feb 22 11:19:55.036 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 800.0 }, max: { _id: 900.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 11:19:55.037 [conn14] moveChunk updating self version to: 10|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo' m31100| Fri Feb 22 11:19:55.038 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:55-5127545b8cfa44516705956d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531995038), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 800.0 }, max: { _id: 900.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:55.038 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:55.038 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:55.038 [conn14] doing delete inline for cleanup of chunk data m31100| Fri Feb 22 11:19:55.038 [conn14] moveChunk starting delete for: test.foo from { _id: 800.0 } -> { _id: 900.0 } m31100| Fri Feb 22 11:19:56.050 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 987ms m31100| Fri Feb 22 11:19:56.050 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 800.0 } -> { _id: 900.0 } m31100| Fri Feb 22 11:19:56.050 [conn14] MigrateFromStatus::done 
About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:56.050 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:56.050 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. m31100| Fri Feb 22 11:19:56.050 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:56-5127545c8cfa44516705956e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531996050), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 800.0 }, max: { _id: 900.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1012 } } m31100| Fri Feb 22 11:19:56.051 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 800.0 }, max: { _id: 900.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_800.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:26 r:366 w:11162 reslen:37 2076ms m30999| Fri Feb 22 11:19:56.051 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:19:56.052 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 31 version: 10|1||5127544800fc1508e4df1ce2 based on: 9|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:19:56.052 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 10000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 
0x1187540 31
m30999| Fri Feb 22 11:19:56.052 [conn1] setShardVersion success: { oldVersion: Timestamp 9000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:19:56.053 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 10000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 31
m30999| Fri Feb 22 11:19:56.053 [conn1] setShardVersion success: { oldVersion: Timestamp 9000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:19:56.106 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 900.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true }
m30999| Fri Feb 22 11:19:56.106 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|21||000000000000000000000000min: { _id: 900.0 }max: { _id: 1000.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:19:56.106 [conn14] moveChunk waiting for full cleanup after move
m31100| Fri Feb 22 11:19:56.107 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 900.0 }, max: { _id: 1000.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_900.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true }
m31100| Fri Feb 22 11:19:56.108 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 5127545c8cfa44516705956f
m31100| Fri Feb 22 11:19:56.108 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:56-5127545c8cfa445167059570", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531996108), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 900.0 }, max: { _id: 1000.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:19:56.109 [conn14] moveChunk request accepted at version 10|1||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:56.109 [conn14] moveChunk number of documents: 100
m31200| Fri Feb 22 11:19:56.109 [migrateThread] starting receiving-end of migration of chunk { _id: 900.0 } -> { _id: 1000.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected)
m31200| Fri Feb 22 11:19:56.110 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31100| Fri Feb 22 11:19:56.119 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 900.0 }, max: { _id: 1000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:56.130 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 900.0 }, max: { _id: 1000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:56.140 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 900.0 }, max: { _id: 1000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:56.150 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 900.0 }, max: { _id: 1000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:56.166 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 900.0 }, max: { _id: 1000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:56.198 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 900.0 }, max: { _id: 1000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:56.262 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 900.0 }, max: { _id: 1000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:56.391 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 900.0 }, max: { _id: 1000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:56.647 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 900.0 }, max: { _id: 1000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30999| Fri Feb 22 11:19:56.791 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs0
m30999| Fri Feb 22 11:19:56.792 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531996792), ok: 1.0 }
m30999| Fri Feb 22 11:19:56.792 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:56.792 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:56.792 [ReplicaSetMonitorWatcher] _check : rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:56.792 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531996792), ok: 1.0 }
m30999| Fri Feb 22 11:19:56.792 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:56.792 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:56.792 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531996792), ok: 1.0 }
m30999| Fri Feb 22 11:19:56.792 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:56.792 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:56.792 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531996792), ok: 1.0 }
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs1
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531996793), ok: 1.0 }
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] _check : rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531996793), ok: 1.0 }
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531996793), ok: 1.0 }
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:56.794 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31201 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31201", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531996794), ok: 1.0 }
m30999| Fri Feb 22 11:19:56.794 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:56.794 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m31200| Fri Feb 22 11:19:57.129 [migrateThread] Waiting for replication to catch up before entering critical section
m31200| Fri Feb 22 11:19:57.129 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 900.0 } -> { _id: 1000.0 }
m31200| Fri Feb 22 11:19:57.130 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 900.0 } -> { _id: 1000.0 }
m31100| Fri Feb 22 11:19:57.159 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 900.0 }, max: { _id: 1000.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:57.159 [conn14] moveChunk setting version to: 11|0||5127544800fc1508e4df1ce2
m31200| Fri Feb 22 11:19:57.159 [conn20] Waiting for commit to finish
m31200| Fri Feb 22 11:19:57.160 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 900.0 } -> { _id: 1000.0 }
m31200| Fri Feb 22 11:19:57.160 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 900.0 } -> { _id: 1000.0 }
m31200| Fri Feb 22 11:19:57.161 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:57-5127545d4384cdc634ba2287", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361531997161), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 900.0 }, max: { _id: 1000.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1019, step4 of 5: 0, step5 of 5: 31 } }
m31100| Fri Feb 22 11:19:57.169 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 900.0 }, max: { _id: 1000.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Fri Feb 22 11:19:57.169 [conn14] moveChunk updating self version to: 11|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo'
m31100| Fri Feb 22 11:19:57.170 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:57-5127545d8cfa445167059571", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531997170), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 900.0 }, max: { _id: 1000.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:19:57.170 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:19:57.170 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:19:57.170 [conn14] doing delete inline for cleanup of chunk data
m31100| Fri Feb 22 11:19:57.170 [conn14] moveChunk starting delete for: test.foo from { _id: 900.0 } -> { _id: 1000.0 }
m31100| Fri Feb 22 11:19:58.182 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 988ms
m31100| Fri Feb 22 11:19:58.182 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 900.0 } -> { _id: 1000.0 }
m31100| Fri Feb 22 11:19:58.182 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:19:58.182 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:19:58.183 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m31100| Fri Feb 22 11:19:58.183 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:58-5127545e8cfa445167059572", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531998183), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 900.0 }, max: { _id: 1000.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1049, step5 of 6: 11, step6 of 6: 1011 } }
m31100| Fri Feb 22 11:19:58.183 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 900.0 }, max: { _id: 1000.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_900.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:28 r:317 w:11088 reslen:37 2076ms
m30999| Fri Feb 22 11:19:58.183 [conn1] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:19:58.184 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 32 version: 11|1||5127544800fc1508e4df1ce2 based on: 10|1||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:58.185 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 11000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 32
m30999| Fri Feb 22 11:19:58.185 [conn1] setShardVersion success: { oldVersion: Timestamp 10000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:19:58.185 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 11000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 32
m30999| Fri Feb 22 11:19:58.186 [conn1] setShardVersion success: { oldVersion: Timestamp 10000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:19:58.213 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 1000.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true }
m30999| Fri Feb 22 11:19:58.213 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|23||000000000000000000000000min: { _id: 1000.0 }max: { _id: 1100.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:19:58.213 [conn14] moveChunk waiting for full cleanup after move
m31100| Fri Feb 22 11:19:58.213 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1000.0 }, max: { _id: 1100.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1000.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true }
m31100| Fri Feb 22 11:19:58.214 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 5127545e8cfa445167059573
m31100| Fri Feb 22 11:19:58.214 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:58-5127545e8cfa445167059574", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531998214), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1000.0 }, max: { _id: 1100.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:19:58.215 [conn14] moveChunk request accepted at version 11|1||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:58.216 [conn14] moveChunk number of documents: 100
m31200| Fri Feb 22 11:19:58.216 [migrateThread] starting receiving-end of migration of chunk { _id: 1000.0 } -> { _id: 1100.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected)
m31200| Fri Feb 22 11:19:58.216 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31100| Fri Feb 22 11:19:58.226 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1000.0 }, max: { _id: 1100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:58.236 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1000.0 }, max: { _id: 1100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:58.246 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1000.0 }, max: { _id: 1100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:58.256 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1000.0 }, max: { _id: 1100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:58.273 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1000.0 }, max: { _id: 1100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:58.305 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1000.0 }, max: { _id: 1100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:58.369 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1000.0 }, max: { _id: 1100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:58.497 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1000.0 }, max: { _id: 1100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:58.753 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1000.0 }, max: { _id: 1100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31200| Fri Feb 22 11:19:59.238 [migrateThread] Waiting for replication to catch up before entering critical section
m31200| Fri Feb 22 11:19:59.238 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1000.0 } -> { _id: 1100.0 }
m31200| Fri Feb 22 11:19:59.239 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1000.0 } -> { _id: 1100.0 }
m31100| Fri Feb 22 11:19:59.266 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1000.0 }, max: { _id: 1100.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:59.266 [conn14] moveChunk setting version to: 12|0||5127544800fc1508e4df1ce2
m31200| Fri Feb 22 11:19:59.266 [conn20] Waiting for commit to finish
m31200| Fri Feb 22 11:19:59.269 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1000.0 } -> { _id: 1100.0 }
m31200| Fri Feb 22 11:19:59.269 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1000.0 } -> { _id: 1100.0 }
m31200| Fri Feb 22 11:19:59.269 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:59-5127545f4384cdc634ba2288", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361531999269), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1000.0 }, max: { _id: 1100.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1021, step4 of 5: 0, step5 of 5: 31 } }
m31100| Fri Feb 22 11:19:59.276 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1000.0 }, max: { _id: 1100.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Fri Feb 22 11:19:59.276 [conn14] moveChunk updating self version to: 12|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo'
m31100| Fri Feb 22 11:19:59.277 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:59-5127545f8cfa445167059575", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531999277), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1000.0 }, max: { _id: 1100.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:19:59.277 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:19:59.277 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:19:59.277 [conn14] doing delete inline for cleanup of chunk data
m31100| Fri Feb 22 11:19:59.277 [conn14] moveChunk starting delete for: test.foo from { _id: 1000.0 } -> { _id: 1100.0 }
m31100| Fri Feb 22 11:20:00.289 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 987ms
m31100| Fri Feb 22 11:20:00.289 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 1000.0 } -> { _id: 1100.0 }
m31100| Fri Feb 22 11:20:00.289 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:20:00.289 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:20:00.290 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m31100| Fri Feb 22 11:20:00.290 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:00-512754608cfa445167059576", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532000290), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1000.0 }, max: { _id: 1100.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1049, step5 of 6: 11, step6 of 6: 1011 } }
m31100| Fri Feb 22 11:20:00.290 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1000.0 }, max: { _id: 1100.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1000.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:27 r:312 w:11094 reslen:37 2077ms
m30999| Fri Feb 22 11:20:00.290 [conn1] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:20:00.292 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 33 version: 12|1||5127544800fc1508e4df1ce2 based on: 11|1||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:20:00.293 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 12000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 33
m30999| Fri Feb 22 11:20:00.293 [conn1] setShardVersion success: { oldVersion: Timestamp 11000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:20:00.293 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 12000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 33
m30999| Fri Feb 22 11:20:00.294 [conn1] setShardVersion success: { oldVersion: Timestamp 11000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:20:00.322 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 1100.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true }
m30999| Fri Feb 22 11:20:00.322 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|25||000000000000000000000000min: { _id: 1100.0 }max: { _id: 1200.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:20:00.322 [conn14] moveChunk waiting for full cleanup after move
m31100| Fri Feb 22 11:20:00.323 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1100.0 }, max: { _id: 1200.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1100.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true }
m31100| Fri Feb 22 11:20:00.323 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754608cfa445167059577
m31100| Fri Feb 22 11:20:00.324 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:00-512754608cfa445167059578", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532000323), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1100.0 }, max: { _id: 1200.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:20:00.324 [conn14] moveChunk request accepted at version 12|1||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:20:00.325 [conn14] moveChunk number of documents: 100
m31200| Fri Feb 22 11:20:00.325 [migrateThread] starting receiving-end of migration of chunk { _id: 1100.0 } -> { _id: 1200.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected)
m31200| Fri Feb 22 11:20:00.326 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31100| Fri Feb 22 11:20:00.335 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1100.0 }, max: { _id: 1200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:00.345 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1100.0 }, max: { _id: 1200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:00.356 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1100.0 }, max: { _id: 1200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:00.366 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1100.0 }, max: { _id: 1200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:00.382 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1100.0 }, max: { _id: 1200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:00.414 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1100.0 }, max: { _id: 1200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:00.478 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1100.0 }, max: { _id: 1200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:00.606 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1100.0 }, max: { _id: 1200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30999| Fri Feb 22 11:20:00.628 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:20:00.628 [Balancer] skipping balancing round because balancing is disabled
m31100| Fri Feb 22 11:20:00.863 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1100.0 }, max: { _id: 1200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31200| Fri Feb 22 11:20:01.347 [migrateThread] Waiting for replication to catch up before entering critical section
m31200| Fri Feb 22 11:20:01.347 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1100.0 } -> { _id: 1200.0 }
m31200| Fri Feb 22 11:20:01.347 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1100.0 } -> { _id: 1200.0 }
m31100| Fri Feb 22 11:20:01.375 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1100.0 }, max: { _id: 1200.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:01.375 [conn14] moveChunk setting version to: 13|0||5127544800fc1508e4df1ce2
m31200| Fri Feb 22 11:20:01.375 [conn20] Waiting for commit to finish
m31200| Fri Feb 22 11:20:01.378 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1100.0 } -> { _id: 1200.0 }
m31200| Fri Feb 22 11:20:01.378 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1100.0 } -> { _id: 1200.0 }
m31200| Fri Feb 22 11:20:01.378 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:01-512754614384cdc634ba2289", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532001378), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1100.0 }, max: { _id: 1200.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1020, step4 of 5: 0, step5 of 5: 31 } }
m31100| Fri Feb 22 11:20:01.385 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1100.0 }, max: { _id: 1200.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Fri Feb 22 11:20:01.385 [conn14] moveChunk updating self version to: 13|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo'
m31100| Fri Feb 22 11:20:01.386 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:01-512754618cfa445167059579", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532001386), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1100.0 }, max: { _id: 1200.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:20:01.386 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:20:01.386 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:20:01.386 [conn14] doing delete inline for cleanup of chunk data
m31100| Fri Feb 22 11:20:01.386 [conn14] moveChunk starting delete for: test.foo from { _id: 1100.0 } -> { _id: 1200.0 }
m31100| Fri Feb 22 11:20:02.400 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 989ms
m31100| Fri Feb 22 11:20:02.400 [conn14] moveChunk deleted 100
documents for test.foo from { _id: 1100.0 } -> { _id: 1200.0 } m31100| Fri Feb 22 11:20:02.400 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:20:02.400 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:20:02.401 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. m31100| Fri Feb 22 11:20:02.401 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:02-512754628cfa44516705957a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532002401), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1100.0 }, max: { _id: 1200.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1014 } } m31100| Fri Feb 22 11:20:02.401 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1100.0 }, max: { _id: 1200.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1100.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:28 r:311 w:11663 reslen:37 2078ms m30999| Fri Feb 22 11:20:02.401 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:20:02.402 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 34 version: 13|1||5127544800fc1508e4df1ce2 based on: 12|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:20:02.403 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 13000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: 
ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 34 m30999| Fri Feb 22 11:20:02.403 [conn1] setShardVersion success: { oldVersion: Timestamp 12000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:20:02.403 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 13000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 34 m30999| Fri Feb 22 11:20:02.404 [conn1] setShardVersion success: { oldVersion: Timestamp 12000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:20:02.432 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 1200.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true } m30999| Fri Feb 22 11:20:02.432 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|27||000000000000000000000000min: { _id: 1200.0 }max: { _id: 1300.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m31100| Fri Feb 22 11:20:02.432 [conn14] moveChunk waiting for full cleanup after move m31100| Fri Feb 22 11:20:02.433 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1200.0 }, max: { _id: 1300.0 }, maxChunkSizeBytes: 1048576, 
shardId: "test.foo-_id_1200.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } m31100| Fri Feb 22 11:20:02.434 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754628cfa44516705957b m31100| Fri Feb 22 11:20:02.434 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:02-512754628cfa44516705957c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532002434), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1200.0 }, max: { _id: 1300.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:20:02.435 [conn14] moveChunk request accepted at version 13|1||5127544800fc1508e4df1ce2 m31100| Fri Feb 22 11:20:02.435 [conn14] moveChunk number of documents: 100 m31200| Fri Feb 22 11:20:02.435 [migrateThread] starting receiving-end of migration of chunk { _id: 1200.0 } -> { _id: 1300.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected) m31200| Fri Feb 22 11:20:02.436 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 11:20:02.445 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1200.0 }, max: { _id: 1300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:02.456 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1200.0 }, max: { _id: 1300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri 
Feb 22 11:20:02.466 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1200.0 }, max: { _id: 1300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:02.476 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1200.0 }, max: { _id: 1300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:02.492 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1200.0 }, max: { _id: 1300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:02.524 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1200.0 }, max: { _id: 1300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:02.589 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1200.0 }, max: { _id: 1300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:02.717 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1200.0 }, max: { _id: 1300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:02.973 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1200.0 }, max: { _id: 1300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 11:20:03.461 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 11:20:03.462 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1200.0 } -> { _id: 1300.0 } m31200| Fri Feb 22 11:20:03.462 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1200.0 } -> { _id: 1300.0 } m31100| Fri Feb 22 11:20:03.486 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1200.0 }, max: { _id: 1300.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:03.486 [conn14] moveChunk setting version to: 14|0||5127544800fc1508e4df1ce2 m31200| Fri Feb 22 11:20:03.486 [conn20] Waiting for commit to finish m31200| Fri Feb 22 11:20:03.493 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1200.0 } -> { _id: 1300.0 } m31200| Fri Feb 22 11:20:03.493 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1200.0 } -> { _id: 1300.0 } m31200| Fri Feb 22 11:20:03.493 [migrateThread] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:03-512754634384cdc634ba228a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532003493), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1200.0 }, max: { _id: 1300.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1025, step4 of 5: 0, step5 of 5: 31 } } m31100| Fri Feb 22 11:20:03.496 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1200.0 }, max: { _id: 1300.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 11:20:03.496 [conn14] moveChunk updating self version to: 14|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo' m31100| Fri Feb 22 11:20:03.497 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:03-512754638cfa44516705957d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532003497), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1200.0 }, max: { _id: 1300.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:20:03.497 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:20:03.497 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:20:03.497 [conn14] doing delete inline for cleanup of chunk data m31100| Fri Feb 22 11:20:03.497 [conn14] moveChunk starting delete for: test.foo from { _id: 1200.0 } -> { _id: 1300.0 } m31100| Fri Feb 22 11:20:04.509 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 987ms m31100| Fri Feb 22 11:20:04.509 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 1200.0 } -> { _id: 1300.0 } m31100| Fri Feb 22 11:20:04.509 [conn14] 
MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:20:04.509 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:20:04.510 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. m31100| Fri Feb 22 11:20:04.510 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:04-512754648cfa44516705957e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532004510), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1200.0 }, max: { _id: 1300.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1012 } } m31100| Fri Feb 22 11:20:04.510 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1200.0 }, max: { _id: 1300.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1200.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:34 r:292 w:11307 reslen:37 2077ms m30999| Fri Feb 22 11:20:04.510 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:20:04.511 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 35 version: 14|1||5127544800fc1508e4df1ce2 based on: 13|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:20:04.512 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 14000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 35 m30999| Fri Feb 22 11:20:04.512 [conn1] setShardVersion success: { oldVersion: Timestamp 13000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:20:04.512 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 14000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 35 m30999| Fri Feb 22 11:20:04.512 [conn1] setShardVersion success: { oldVersion: Timestamp 13000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:20:04.549 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 1300.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true } m30999| Fri Feb 22 11:20:04.549 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|29||000000000000000000000000min: { _id: 1300.0 }max: { _id: 1400.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m31100| Fri Feb 22 11:20:04.549 [conn14] moveChunk waiting for full cleanup after move m31100| Fri Feb 22 11:20:04.549 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1300.0 }, max: { _id: 1400.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1300.0", configdb: 
"bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } m31100| Fri Feb 22 11:20:04.550 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754648cfa44516705957f m31100| Fri Feb 22 11:20:04.550 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:04-512754648cfa445167059580", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532004550), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1300.0 }, max: { _id: 1400.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:20:04.551 [conn14] moveChunk request accepted at version 14|1||5127544800fc1508e4df1ce2 m31100| Fri Feb 22 11:20:04.551 [conn14] moveChunk number of documents: 100 m31200| Fri Feb 22 11:20:04.551 [migrateThread] starting receiving-end of migration of chunk { _id: 1300.0 } -> { _id: 1400.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected) m31200| Fri Feb 22 11:20:04.552 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 11:20:04.562 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1300.0 }, max: { _id: 1400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:04.572 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1300.0 }, max: { _id: 1400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:04.582 [conn14] moveChunk data 
transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1300.0 }, max: { _id: 1400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:04.592 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1300.0 }, max: { _id: 1400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:04.609 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1300.0 }, max: { _id: 1400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:04.641 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1300.0 }, max: { _id: 1400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:04.705 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1300.0 }, max: { _id: 1400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:04.833 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1300.0 }, max: { _id: 1400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:05.089 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1300.0 }, max: { _id: 1400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 11:20:05.576 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 11:20:05.576 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1300.0 } -> { _id: 1400.0 } m31200| Fri Feb 22 11:20:05.576 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1300.0 } -> { _id: 1400.0 } m31100| Fri Feb 22 11:20:05.602 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1300.0 }, max: { _id: 1400.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:05.602 [conn14] moveChunk setting version to: 15|0||5127544800fc1508e4df1ce2 m31200| Fri Feb 22 11:20:05.602 [conn20] Waiting for commit to finish m31200| Fri Feb 22 11:20:05.607 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1300.0 } -> { _id: 1400.0 } m31200| Fri Feb 22 11:20:05.607 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1300.0 } -> { _id: 1400.0 } m31200| Fri Feb 22 11:20:05.607 [migrateThread] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:05-512754654384cdc634ba228b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532005607), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1300.0 }, max: { _id: 1400.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1023, step4 of 5: 0, step5 of 5: 31 } } m31100| Fri Feb 22 11:20:05.612 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1300.0 }, max: { _id: 1400.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 11:20:05.612 [conn14] moveChunk updating self version to: 15|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo' m31100| Fri Feb 22 11:20:05.613 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:05-512754658cfa445167059581", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532005613), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1300.0 }, max: { _id: 1400.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:20:05.613 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:20:05.613 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:20:05.613 [conn14] doing delete inline for cleanup of chunk data m31100| Fri Feb 22 11:20:05.613 [conn14] moveChunk starting delete for: test.foo from { _id: 1300.0 } -> { _id: 1400.0 } m30999| Fri Feb 22 11:20:06.609 [LockPinger] cluster bs-smartos-x86-64-1.10gen.cc:29000 pinged successfully at Fri Feb 22 11:20:06 2013 by distributed lock pinger 'bs-smartos-x86-64-1.10gen.cc:29000/bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838', sleeping for 30000ms m31100| Fri Feb 22 
11:20:06.627 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 989ms m31100| Fri Feb 22 11:20:06.627 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 1300.0 } -> { _id: 1400.0 } m31100| Fri Feb 22 11:20:06.627 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:20:06.627 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:20:06.627 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. m31100| Fri Feb 22 11:20:06.627 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:06-512754668cfa445167059582", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532006627), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1300.0 }, max: { _id: 1400.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1013 } } m31100| Fri Feb 22 11:20:06.627 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1300.0 }, max: { _id: 1400.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1300.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:21 r:269 w:10214 reslen:37 2078ms m30999| Fri Feb 22 11:20:06.628 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:20:06.629 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 36 version: 15|1||5127544800fc1508e4df1ce2 based on: 14|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:20:06.629 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000 m30999| Fri Feb 22 11:20:06.629 
BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:20:06.629 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:20:06.629 [conn1] connected connection! m29000| Fri Feb 22 11:20:06.629 [initandlisten] connection accepted from 165.225.128.186:50718 #11 (11 connections now open) m30999| Fri Feb 22 11:20:06.629 [Balancer] skipping balancing round because balancing is disabled m30999| Fri Feb 22 11:20:06.630 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 15000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 36 m30999| Fri Feb 22 11:20:06.630 [conn1] setShardVersion success: { oldVersion: Timestamp 14000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:20:06.630 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 15000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 36 m30999| Fri Feb 22 11:20:06.631 [conn1] setShardVersion success: { oldVersion: Timestamp 14000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:20:06.660 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 1400.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true } m30999| Fri Feb 22 11:20:06.660 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|31||000000000000000000000000min: { _id: 1400.0 
}max: { _id: 1500.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:20:06.661 [conn14] moveChunk waiting for full cleanup after move
m31100| Fri Feb 22 11:20:06.661 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1400.0 }, max: { _id: 1500.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1400.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true }
m31100| Fri Feb 22 11:20:06.662 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754668cfa445167059583
m31100| Fri Feb 22 11:20:06.662 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:06-512754668cfa445167059584", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532006662), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1400.0 }, max: { _id: 1500.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:20:06.663 [conn14] moveChunk request accepted at version 15|1||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:20:06.663 [conn14] moveChunk number of documents: 100
m31200| Fri Feb 22 11:20:06.663 [migrateThread] starting receiving-end of migration of chunk { _id: 1400.0 } -> { _id: 1500.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected)
m31200| Fri Feb 22 11:20:06.664 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31100| Fri Feb 22 11:20:06.674 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1400.0 }, max: { _id: 1500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:06.684 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1400.0 }, max: { _id: 1500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:06.694 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1400.0 }, max: { _id: 1500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:06.705 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1400.0 }, max: { _id: 1500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:06.721 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1400.0 }, max: { _id: 1500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:06.753 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1400.0 }, max: { _id: 1500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30999| Fri Feb 22 11:20:06.794 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs0
m30999| Fri Feb 22 11:20:06.794 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532006794), ok: 1.0 }
m30999| Fri Feb 22 11:20:06.794 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:20:06.794 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:20:06.794 [ReplicaSetMonitorWatcher] _check : rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:20:06.794 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532006794), ok: 1.0 }
m30999| Fri Feb 22 11:20:06.795 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:20:06.795 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:20:06.795 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532006795), ok: 1.0 }
m30999| Fri Feb 22 11:20:06.795 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:20:06.795 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:20:06.795 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532006795), ok: 1.0 }
m30999| Fri Feb 22 11:20:06.795 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:20:06.795 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:20:06.795 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs1
m30999| Fri Feb 22 11:20:06.795 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532006795), ok: 1.0 }
m30999| Fri Feb 22 11:20:06.795 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:20:06.796 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:20:06.796 [ReplicaSetMonitorWatcher] _check : rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:20:06.796 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532006796), ok: 1.0 }
m30999| Fri Feb 22 11:20:06.796 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:20:06.796 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:20:06.796 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532006796), ok: 1.0 }
m30999| Fri Feb 22 11:20:06.796 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:20:06.796 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:20:06.796 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31201 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31201", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532006796), ok: 1.0 }
m30999| Fri Feb 22 11:20:06.796 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:20:06.796 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:20:06.817 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1400.0 }, max: { _id: 1500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:06.945 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1400.0 }, max: { _id: 1500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:07.202 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1400.0 }, max: { _id: 1500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31200| Fri Feb 22 11:20:07.690 [migrateThread] Waiting for replication to catch up before entering critical section
m31200| Fri Feb 22 11:20:07.690 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1400.0 } -> { _id: 1500.0 }
m31200| Fri Feb 22 11:20:07.691 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1400.0 } -> { _id: 1500.0 }
m31100| Fri Feb 22 11:20:07.714 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1400.0 }, max: { _id: 1500.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:07.714 [conn14] moveChunk setting version to: 16|0||5127544800fc1508e4df1ce2
m31200| Fri Feb 22 11:20:07.714 [conn20] Waiting for commit to finish
m31200| Fri Feb 22 11:20:07.722 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1400.0 } -> { _id: 1500.0 }
m31200| Fri Feb 22 11:20:07.722 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1400.0 } -> { _id: 1500.0 }
m31200| Fri Feb 22 11:20:07.722 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:07-512754674384cdc634ba228c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532007722), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1400.0 }, max: { _id: 1500.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1025, step4 of 5: 0, step5 of 5: 31 } }
m31100| Fri Feb 22 11:20:07.724 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1400.0 }, max: { _id: 1500.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Fri Feb 22 11:20:07.724 [conn14] moveChunk updating self version to: 16|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo'
m31100| Fri Feb 22 11:20:07.725 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:07-512754678cfa445167059585", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532007725), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1400.0 }, max: { _id: 1500.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:20:07.725 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:20:07.725 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:20:07.725 [conn14] doing delete inline for cleanup of chunk data
m31100| Fri Feb 22 11:20:07.725 [conn14] moveChunk starting delete for: test.foo from { _id: 1400.0 } -> { _id: 1500.0 }
m31100| Fri Feb 22 11:20:08.739 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 988ms
m31100| Fri Feb 22 11:20:08.739 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 1400.0 } -> { _id: 1500.0 }
m31100| Fri Feb 22 11:20:08.739 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:20:08.739 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:20:08.740 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m31100| Fri Feb 22 11:20:08.740 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:08-512754688cfa445167059586", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532008740), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1400.0 }, max: { _id: 1500.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1013 } }
m31100| Fri Feb 22 11:20:08.740 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1400.0 }, max: { _id: 1500.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1400.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:24 r:290 w:11489 reslen:37 2079ms
m30999| Fri Feb 22 11:20:08.740 [conn1] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:20:08.741 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 37 version: 16|1||5127544800fc1508e4df1ce2 based on: 15|1||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:20:08.742 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 16000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 37
m30999| Fri Feb 22 11:20:08.742 [conn1] setShardVersion success: { oldVersion: Timestamp 15000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:20:08.742 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 16000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 37
m30999| Fri Feb 22 11:20:08.743 [conn1] setShardVersion success: { oldVersion: Timestamp 15000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:20:08.770 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 1500.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true }
m30999| Fri Feb 22 11:20:08.770 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|33||000000000000000000000000min: { _id: 1500.0 }max: { _id: 1600.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:20:08.770 [conn14] moveChunk waiting for full cleanup after move
m31100| Fri Feb 22 11:20:08.770 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1500.0 }, max: { _id: 1600.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1500.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true }
m31100| Fri Feb 22 11:20:08.771 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754688cfa445167059587
m31100| Fri Feb 22 11:20:08.771 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:08-512754688cfa445167059588", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532008771), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1500.0 }, max: { _id: 1600.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:20:08.772 [conn14] moveChunk request accepted at version 16|1||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:20:08.773 [conn14] moveChunk number of documents: 100
m31200| Fri Feb 22 11:20:08.773 [migrateThread] starting receiving-end of migration of chunk { _id: 1500.0 } -> { _id: 1600.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected)
m31200| Fri Feb 22 11:20:08.773 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31100| Fri Feb 22 11:20:08.783 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1500.0 }, max: { _id: 1600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:08.793 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1500.0 }, max: { _id: 1600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:08.803 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1500.0 }, max: { _id: 1600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:08.814 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1500.0 }, max: { _id: 1600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:08.830 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1500.0 }, max: { _id: 1600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:08.862 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1500.0 }, max: { _id: 1600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:08.926 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1500.0 }, max: { _id: 1600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:09.054 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1500.0 }, max: { _id: 1600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:09.311 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1500.0 }, max: { _id: 1600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31200| Fri Feb 22 11:20:09.795 [migrateThread] Waiting for replication to catch up before entering critical section
m31200| Fri Feb 22 11:20:09.795 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1500.0 } -> { _id: 1600.0 }
m31200| Fri Feb 22 11:20:09.795 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1500.0 } -> { _id: 1600.0 }
m31100| Fri Feb 22 11:20:09.823 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1500.0 }, max: { _id: 1600.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:09.823 [conn14] moveChunk setting version to: 17|0||5127544800fc1508e4df1ce2
m31200| Fri Feb 22 11:20:09.823 [conn20] Waiting for commit to finish
m31200| Fri Feb 22 11:20:09.825 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1500.0 } -> { _id: 1600.0 }
m31200| Fri Feb 22 11:20:09.825 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1500.0 } -> { _id: 1600.0 }
m31200| Fri Feb 22 11:20:09.825 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:09-512754694384cdc634ba228d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532009825), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1500.0 }, max: { _id: 1600.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1020, step4 of 5: 0, step5 of 5: 30 } }
m31100| Fri Feb 22 11:20:09.834 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1500.0 }, max: { _id: 1600.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Fri Feb 22 11:20:09.834 [conn14] moveChunk updating self version to: 17|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo'
m31100| Fri Feb 22 11:20:09.834 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:09-512754698cfa445167059589", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532009834), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1500.0 }, max: { _id: 1600.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:20:09.834 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:20:09.834 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:20:09.835 [conn14] doing delete inline for cleanup of chunk data
m31100| Fri Feb 22 11:20:09.835 [conn14] moveChunk starting delete for: test.foo from { _id: 1500.0 } -> { _id: 1600.0 }
m31100| Fri Feb 22 11:20:10.846 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 989ms
m31100| Fri Feb 22 11:20:10.846 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 1500.0 } -> { _id: 1600.0 }
m31100| Fri Feb 22 11:20:10.846 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:20:10.846 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:20:10.847 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m31100| Fri Feb 22 11:20:10.847 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:10-5127546a8cfa44516705958a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532010847), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1500.0 }, max: { _id: 1600.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1011 } }
m31100| Fri Feb 22 11:20:10.847 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1500.0 }, max: { _id: 1600.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1500.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:30 r:288 w:11084 reslen:37 2076ms
m30999| Fri Feb 22 11:20:10.847 [conn1] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:20:10.848 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 38 version: 17|1||5127544800fc1508e4df1ce2 based on: 16|1||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:20:10.849 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 17000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 38
m30999| Fri Feb 22 11:20:10.849 [conn1] setShardVersion success: { oldVersion: Timestamp 16000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:20:10.849 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 17000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 38
m30999| Fri Feb 22 11:20:10.850 [conn1] setShardVersion success: { oldVersion: Timestamp 16000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:20:10.878 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 1600.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true }
m30999| Fri Feb 22 11:20:10.878 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|35||000000000000000000000000min: { _id: 1600.0 }max: { _id: 1700.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:20:10.878 [conn14] moveChunk waiting for full cleanup after move
m31100| Fri Feb 22 11:20:10.878 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1600.0 }, max: { _id: 1700.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1600.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true }
m31100| Fri Feb 22 11:20:10.879 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 5127546a8cfa44516705958b
m31100| Fri Feb 22 11:20:10.879 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:10-5127546a8cfa44516705958c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532010879), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1600.0 }, max: { _id: 1700.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:20:10.880 [conn14] moveChunk request accepted at version 17|1||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:20:10.880 [conn14] moveChunk number of documents: 100
m31200| Fri Feb 22 11:20:10.881 [migrateThread] starting receiving-end of migration of chunk { _id: 1600.0 } -> { _id: 1700.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected)
m31200| Fri Feb 22 11:20:10.881 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31100| Fri Feb 22 11:20:10.891 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1600.0 }, max: { _id: 1700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:10.901 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1600.0 }, max: { _id: 1700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:10.911 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1600.0 }, max: { _id: 1700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:10.921 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1600.0 }, max: { _id: 1700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:10.937 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1600.0 }, max: { _id: 1700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:10.970 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1600.0 }, max: { _id: 1700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:11.034 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1600.0 }, max: { _id: 1700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:11.162 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1600.0 }, max: { _id: 1700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:11.418 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1600.0 }, max: { _id: 1700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31200| Fri Feb 22 11:20:11.905 [migrateThread] Waiting for replication to catch up before entering critical section
m31200| Fri Feb 22 11:20:11.905 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1600.0 } -> { _id: 1700.0 }
m31200| Fri Feb 22 11:20:11.905 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1600.0 } -> { _id: 1700.0 }
m31100| Fri Feb 22 11:20:11.931 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1600.0 }, max: { _id: 1700.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:11.931 [conn14] moveChunk setting version to: 18|0||5127544800fc1508e4df1ce2
m31200| Fri Feb 22 11:20:11.931 [conn20] Waiting for commit to finish
m31200| Fri Feb 22 11:20:11.936 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1600.0 } -> { _id: 1700.0 }
m31200| Fri Feb 22 11:20:11.936 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1600.0 } -> { _id: 1700.0 }
m31200| Fri Feb 22 11:20:11.936 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:11-5127546b4384cdc634ba228e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532011936), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1600.0 }, max: { _id: 1700.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1023, step4 of 5: 0, step5 of 5: 31 } }
m31100| Fri Feb 22 11:20:11.941 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1600.0 }, max: { _id: 1700.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Fri Feb 22 11:20:11.941 [conn14] moveChunk updating self version to: 18|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo'
m31100| Fri Feb 22 11:20:11.942 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:11-5127546b8cfa44516705958d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532011942), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1600.0 }, max: { _id: 1700.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:20:11.942 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:20:11.942 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:20:11.942 [conn14] doing delete inline for cleanup of chunk data
m31100| Fri Feb 22 11:20:11.942 [conn14] moveChunk starting delete for: test.foo from { _id: 1600.0 } -> { _id: 1700.0 }
m30999| Fri Feb 22 11:20:12.631 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:20:12.631 [Balancer] skipping balancing round because balancing is disabled
m31100| Fri Feb 22 11:20:12.953 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 988ms
m31100| Fri Feb 22 11:20:12.953 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 1600.0 } -> { _id: 1700.0 }
m31100| Fri Feb 22 11:20:12.953 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:20:12.953 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:20:12.953 [conn14] distributed lock 
'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. m31100| Fri Feb 22 11:20:12.953 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:12-5127546c8cfa44516705958e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532012953), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1600.0 }, max: { _id: 1700.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1049, step5 of 6: 11, step6 of 6: 1010 } } m31100| Fri Feb 22 11:20:12.953 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1600.0 }, max: { _id: 1700.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1600.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:28 r:315 w:10751 reslen:37 2075ms m30999| Fri Feb 22 11:20:12.953 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:20:12.955 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 39 version: 18|1||5127544800fc1508e4df1ce2 based on: 17|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:20:12.956 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 18000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 39 m30999| Fri Feb 22 11:20:12.956 [conn1] setShardVersion success: { oldVersion: Timestamp 17000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 
11:20:12.956 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 18000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 39 m30999| Fri Feb 22 11:20:12.957 [conn1] setShardVersion success: { oldVersion: Timestamp 17000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:20:12.988 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 1700.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true } m30999| Fri Feb 22 11:20:12.988 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|37||000000000000000000000000min: { _id: 1700.0 }max: { _id: 1800.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m31100| Fri Feb 22 11:20:12.988 [conn14] moveChunk waiting for full cleanup after move m31100| Fri Feb 22 11:20:12.988 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1700.0 }, max: { _id: 1800.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1700.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } m31100| Fri Feb 22 11:20:12.989 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 5127546c8cfa44516705958f m31100| Fri Feb 22 11:20:12.989 [conn14] about to log 
metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:12-5127546c8cfa445167059590", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532012989), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1700.0 }, max: { _id: 1800.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:20:12.991 [conn14] moveChunk request accepted at version 18|1||5127544800fc1508e4df1ce2 m31100| Fri Feb 22 11:20:12.991 [conn14] moveChunk number of documents: 100 m31200| Fri Feb 22 11:20:12.991 [migrateThread] starting receiving-end of migration of chunk { _id: 1700.0 } -> { _id: 1800.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected) m31200| Fri Feb 22 11:20:12.992 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 11:20:13.001 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1700.0 }, max: { _id: 1800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:13.012 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1700.0 }, max: { _id: 1800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:13.022 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1700.0 }, max: { _id: 1800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 
} my mem used: 0 m31100| Fri Feb 22 11:20:13.032 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1700.0 }, max: { _id: 1800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:13.048 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1700.0 }, max: { _id: 1800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:13.081 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1700.0 }, max: { _id: 1800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:13.145 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1700.0 }, max: { _id: 1800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:13.273 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1700.0 }, max: { _id: 1800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:13.529 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", 
from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1700.0 }, max: { _id: 1800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 11:20:14.012 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 11:20:14.012 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1700.0 } -> { _id: 1800.0 } m31200| Fri Feb 22 11:20:14.013 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1700.0 } -> { _id: 1800.0 } m31100| Fri Feb 22 11:20:14.041 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1700.0 }, max: { _id: 1800.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:14.042 [conn14] moveChunk setting version to: 19|0||5127544800fc1508e4df1ce2 m31200| Fri Feb 22 11:20:14.042 [conn20] Waiting for commit to finish m31200| Fri Feb 22 11:20:14.043 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1700.0 } -> { _id: 1800.0 } m31200| Fri Feb 22 11:20:14.043 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1700.0 } -> { _id: 1800.0 } m31200| Fri Feb 22 11:20:14.043 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:14-5127546e4384cdc634ba228f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532014043), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1700.0 }, max: { _id: 1800.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1020, step4 of 5: 0, step5 of 5: 31 } } m31100| Fri Feb 22 11:20:14.052 [conn14] moveChunk migrate commit accepted 
by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1700.0 }, max: { _id: 1800.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 11:20:14.052 [conn14] moveChunk updating self version to: 19|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo' m31100| Fri Feb 22 11:20:14.053 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:14-5127546e8cfa445167059591", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532014053), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1700.0 }, max: { _id: 1800.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:20:14.053 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:20:14.053 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:20:14.053 [conn14] doing delete inline for cleanup of chunk data m31100| Fri Feb 22 11:20:14.053 [conn14] moveChunk starting delete for: test.foo from { _id: 1700.0 } -> { _id: 1800.0 } m31100| Fri Feb 22 11:20:15.064 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 988ms m31100| Fri Feb 22 11:20:15.064 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 1700.0 } -> { _id: 1800.0 } m31100| Fri Feb 22 11:20:15.064 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:20:15.064 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:20:15.064 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. 
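Each completed migration logs a `moveChunk.from` metadata event whose `details` carry per-step durations in milliseconds (`step1 of 6` through `step6 of 6`); their sum closely tracks the total command time reported at the end of the entry (2076ms in the surrounding log, dominated by step 4, the data clone, and step 6, the post-move delete forced by `waitForDelete: true`). A hedged Python sketch that sums the steps from the details fragment as it appears in the log — the helper name and regex are illustrative only:

```python
import re

# details fragment from a moveChunk.from event in the log above
DETAILS = ('step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, '
           'step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1011')

def step_durations(details):
    """Return the per-step millisecond durations, keyed by step number."""
    return {int(n): int(ms)
            for n, ms in re.findall(r'step(\d+) of 6: (\d+)', details)}

steps = step_durations(DETAILS)
print(steps[4], steps[6], sum(steps.values()))  # 1050 1011 2074
```

The small gap between the step sum (2074ms) and the reported command time (2076ms) is lock acquisition and logging overhead outside the timed steps.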
m31100| Fri Feb 22 11:20:15.065 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:15-5127546f8cfa445167059592", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532015065), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1700.0 }, max: { _id: 1800.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1011 } } m31100| Fri Feb 22 11:20:15.065 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1700.0 }, max: { _id: 1800.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1700.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:28 r:309 w:11307 reslen:37 2076ms m30999| Fri Feb 22 11:20:15.065 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:20:15.066 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 40 version: 19|1||5127544800fc1508e4df1ce2 based on: 18|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:20:15.067 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 19000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 40 m30999| Fri Feb 22 11:20:15.067 [conn1] setShardVersion success: { oldVersion: Timestamp 18000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:20:15.067 [conn1] setShardVersion rs1-rs1 
bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 19000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 40 m30999| Fri Feb 22 11:20:15.068 [conn1] setShardVersion success: { oldVersion: Timestamp 18000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:20:15.095 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 1800.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true } m30999| Fri Feb 22 11:20:15.095 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|39||000000000000000000000000min: { _id: 1800.0 }max: { _id: 1900.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m31100| Fri Feb 22 11:20:15.095 [conn14] moveChunk waiting for full cleanup after move m31100| Fri Feb 22 11:20:15.095 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1800.0 }, max: { _id: 1900.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1800.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } m31100| Fri Feb 22 11:20:15.096 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 5127546f8cfa445167059593 m31100| Fri Feb 22 11:20:15.096 [conn14] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:15-5127546f8cfa445167059594", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532015096), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1800.0 }, max: { _id: 1900.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:20:15.097 [conn14] moveChunk request accepted at version 19|1||5127544800fc1508e4df1ce2 m31100| Fri Feb 22 11:20:15.098 [conn14] moveChunk number of documents: 100 m31200| Fri Feb 22 11:20:15.098 [migrateThread] starting receiving-end of migration of chunk { _id: 1800.0 } -> { _id: 1900.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected) m31200| Fri Feb 22 11:20:15.099 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 11:20:15.108 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1800.0 }, max: { _id: 1900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:15.118 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1800.0 }, max: { _id: 1900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:15.128 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1800.0 }, max: { _id: 1900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 
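The shard versions threaded through these entries (`17|1||5127544800fc1508e4df1ce2`, `18|0||…`, `19|1||…`, `20|1||…`) follow a `major|minor||epoch` layout, and each successful move bumps the major component. A small Python sketch that splits such a string — the parsing helper is hypothetical, written only against the format visible in this log:

```python
def parse_chunk_version(s):
    """Split a 'major|minor||epoch' version string as it appears in the log."""
    version, epoch = s.split('||')
    major, minor = (int(part) for part in version.split('|'))
    return major, minor, epoch

print(parse_chunk_version('19|1||5127544800fc1508e4df1ce2'))
# (19, 1, '5127544800fc1508e4df1ce2')
```

Comparing the major components of two such strings is enough to tell which side of a migration a shard's metadata reflects.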
m31100| Fri Feb 22 11:20:15.139 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1800.0 }, max: { _id: 1900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:15.155 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1800.0 }, max: { _id: 1900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:15.187 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1800.0 }, max: { _id: 1900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:15.251 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1800.0 }, max: { _id: 1900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:15.380 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1800.0 }, max: { _id: 1900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:15.636 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1800.0 }, max: { _id: 1900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31201| Fri Feb 22 11:20:15.930 [conn13] end connection 165.225.128.186:45106 (10 connections now open) m31201| Fri Feb 22 11:20:15.931 [initandlisten] connection accepted from 165.225.128.186:38149 #14 (11 connections now open) m31200| Fri Feb 22 11:20:16.122 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 11:20:16.122 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1800.0 } -> { _id: 1900.0 } m31200| Fri Feb 22 11:20:16.122 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1800.0 } -> { _id: 1900.0 } m31100| Fri Feb 22 11:20:16.148 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1800.0 }, max: { _id: 1900.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:16.148 [conn14] moveChunk setting version to: 20|0||5127544800fc1508e4df1ce2 m31200| Fri Feb 22 11:20:16.148 [conn20] Waiting for commit to finish m31200| Fri Feb 22 11:20:16.153 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1800.0 } -> { _id: 1900.0 } m31200| Fri Feb 22 11:20:16.153 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1800.0 } -> { _id: 1900.0 } m31200| Fri Feb 22 11:20:16.153 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:16-512754704384cdc634ba2290", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532016153), what: "moveChunk.to", ns: 
"test.foo", details: { min: { _id: 1800.0 }, max: { _id: 1900.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1022, step4 of 5: 0, step5 of 5: 31 } } m31100| Fri Feb 22 11:20:16.158 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1800.0 }, max: { _id: 1900.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 11:20:16.158 [conn14] moveChunk updating self version to: 20|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo' m31100| Fri Feb 22 11:20:16.159 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:16-512754708cfa445167059595", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532016159), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1800.0 }, max: { _id: 1900.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:20:16.159 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:20:16.159 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:20:16.159 [conn14] doing delete inline for cleanup of chunk data m31100| Fri Feb 22 11:20:16.159 [conn14] moveChunk starting delete for: test.foo from { _id: 1800.0 } -> { _id: 1900.0 } m30999| Fri Feb 22 11:20:16.797 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs0 m30999| Fri Feb 22 11:20:16.797 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, 
maxMessageSizeBytes: 48000000, localTime: new Date(1361532016797), ok: 1.0 } m30999| Fri Feb 22 11:20:16.797 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:20:16.797 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:20:16.797 [ReplicaSetMonitorWatcher] _check : rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:20:16.798 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532016797), ok: 1.0 } m30999| Fri Feb 22 11:20:16.798 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:20:16.798 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:20:16.798 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532016798), ok: 1.0 } m30999| Fri Feb 22 11:20:16.798 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:20:16.798 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:20:16.798 [ReplicaSetMonitorWatcher] 
ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532016798), ok: 1.0 } m30999| Fri Feb 22 11:20:16.798 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:20:16.798 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:20:16.798 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs1 m30999| Fri Feb 22 11:20:16.798 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532016798), ok: 1.0 } m30999| Fri Feb 22 11:20:16.799 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 11:20:16.799 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 11:20:16.799 [ReplicaSetMonitorWatcher] _check : rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 11:20:16.799 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 
16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532016799), ok: 1.0 } m30999| Fri Feb 22 11:20:16.799 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 11:20:16.799 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 11:20:16.799 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532016799), ok: 1.0 } m30999| Fri Feb 22 11:20:16.799 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 11:20:16.799 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 11:20:16.799 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31201 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31201", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532016799), ok: 1.0 } m30999| Fri Feb 22 11:20:16.799 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 11:20:16.799 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m31100| Fri Feb 22 11:20:17.171 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 987ms m31100| Fri Feb 22 11:20:17.171 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 1800.0 } -> { 
_id: 1900.0 } m31100| Fri Feb 22 11:20:17.171 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:20:17.171 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:20:17.172 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. m31100| Fri Feb 22 11:20:17.172 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:17-512754718cfa445167059596", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532017172), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1800.0 }, max: { _id: 1900.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1011 } } m31100| Fri Feb 22 11:20:17.172 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1800.0 }, max: { _id: 1900.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1800.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:23 r:319 w:11955 reslen:37 2076ms m30999| Fri Feb 22 11:20:17.172 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:20:17.173 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 41 version: 20|1||5127544800fc1508e4df1ce2 based on: 19|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:20:17.174 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 20000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", 
shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 41 m30999| Fri Feb 22 11:20:17.174 [conn1] setShardVersion success: { oldVersion: Timestamp 19000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:20:17.174 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 20000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 41 m30999| Fri Feb 22 11:20:17.175 [conn1] setShardVersion success: { oldVersion: Timestamp 19000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:20:17.202 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 1900.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true } m30999| Fri Feb 22 11:20:17.202 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|40||000000000000000000000000min: { _id: 1900.0 }max: { _id: MaxKey }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m31100| Fri Feb 22 11:20:17.202 [conn14] moveChunk waiting for full cleanup after move m31100| Fri Feb 22 11:20:17.203 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1900.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1900.0", configdb: 
"bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } m31100| Fri Feb 22 11:20:17.203 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754718cfa445167059597 m31100| Fri Feb 22 11:20:17.203 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:17-512754718cfa445167059598", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532017203), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1900.0 }, max: { _id: MaxKey }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:20:17.205 [conn14] moveChunk request accepted at version 20|1||5127544800fc1508e4df1ce2 m31100| Fri Feb 22 11:20:17.205 [conn14] moveChunk number of documents: 200 m31200| Fri Feb 22 11:20:17.205 [migrateThread] starting receiving-end of migration of chunk { _id: 1900.0 } -> { _id: MaxKey } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected) m31200| Fri Feb 22 11:20:17.206 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 11:20:17.215 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1900.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:17.226 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1900.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:17.236 [conn14] moveChunk data 
transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1900.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:17.246 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1900.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:17.262 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1900.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:17.295 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1900.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:17.359 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1900.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31101| Fri Feb 22 11:20:17.394 [conn14] end connection 165.225.128.186:40295 (10 connections now open) m31101| Fri Feb 22 11:20:17.394 [initandlisten] connection accepted from 
165.225.128.186:61602 #15 (11 connections now open) m31100| Fri Feb 22 11:20:17.487 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1900.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:17.744 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1900.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 11:20:18.113 [conn21] end connection 165.225.128.186:33784 (14 connections now open) m31200| Fri Feb 22 11:20:18.113 [initandlisten] connection accepted from 165.225.128.186:38882 #22 (15 connections now open) m31100| Fri Feb 22 11:20:18.256 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1900.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 103, clonedBytes: 2987, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30999| Fri Feb 22 11:20:18.632 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:20:18.632 [Balancer] skipping balancing round because balancing is disabled m31200| Fri Feb 22 11:20:19.250 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 11:20:19.250 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1900.0 } -> { _id: MaxKey } m31200| Fri Feb 22 11:20:19.251 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1900.0 } -> { _id: MaxKey } m31100| Fri Feb 22 
11:20:19.280 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1900.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 200, clonedBytes: 5800, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:19.280 [conn14] moveChunk setting version to: 21|0||5127544800fc1508e4df1ce2 m31200| Fri Feb 22 11:20:19.280 [conn20] Waiting for commit to finish m31200| Fri Feb 22 11:20:19.281 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1900.0 } -> { _id: MaxKey } m31200| Fri Feb 22 11:20:19.281 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1900.0 } -> { _id: MaxKey } m31200| Fri Feb 22 11:20:19.281 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:19-512754734384cdc634ba2291", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532019281), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1900.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 2044, step4 of 5: 0, step5 of 5: 30 } } m31100| Fri Feb 22 11:20:19.290 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1900.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 200, clonedBytes: 5800, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 11:20:19.290 [conn14] moveChunk updating self version to: 21|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo' m31100| Fri Feb 22 11:20:19.291 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:19-512754738cfa445167059599", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: 
"165.225.128.186:38058", time: new Date(1361532019291), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1900.0 }, max: { _id: MaxKey }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:20:19.291 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:20:19.291 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:20:19.291 [conn14] doing delete inline for cleanup of chunk data m31100| Fri Feb 22 11:20:19.291 [conn14] moveChunk starting delete for: test.foo from { _id: 1900.0 } -> { _id: MaxKey } m31100| Fri Feb 22 11:20:19.605 [conn22] end connection 165.225.128.186:64933 (14 connections now open) m31100| Fri Feb 22 11:20:19.605 [initandlisten] connection accepted from 165.225.128.186:53699 #23 (15 connections now open) m31100| Fri Feb 22 11:20:21.325 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 1988ms m31100| Fri Feb 22 11:20:21.325 [conn14] moveChunk deleted 200 documents for test.foo from { _id: 1900.0 } -> { _id: MaxKey } m31100| Fri Feb 22 11:20:21.325 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:20:21.325 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:20:21.326 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. 
m31100| Fri Feb 22 11:20:21.326 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:21-512754758cfa44516705959a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532021326), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1900.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2074, step5 of 6: 11, step6 of 6: 2034 } } m31100| Fri Feb 22 11:20:21.326 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1900.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1900.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:21 r:525 w:20654 reslen:37 4123ms m30999| Fri Feb 22 11:20:21.326 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:20:21.327 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 42 version: 21|1||5127544800fc1508e4df1ce2 based on: 20|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:20:21.328 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 21000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 42 m30999| Fri Feb 22 11:20:21.328 [conn1] setShardVersion success: { oldVersion: Timestamp 20000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:20:21.328 [conn1] setShardVersion rs1-rs1 
bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 21000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 42 m30999| Fri Feb 22 11:20:21.329 [conn1] setShardVersion success: { oldVersion: Timestamp 20000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:20:21.346 [mongosMain] dbexit: received signal 15 rc:0 received signal 15 m31101| Fri Feb 22 11:20:21.347 [conn8] end connection 165.225.128.186:47479 (10 connections now open) m31201| Fri Feb 22 11:20:21.347 [conn7] end connection 165.225.128.186:61872 (10 connections now open) m31200| Fri Feb 22 11:20:21.347 [conn12] end connection 165.225.128.186:47705 (14 connections now open) m31101| Fri Feb 22 11:20:21.347 [conn7] end connection 165.225.128.186:44068 (10 connections now open) m31201| Fri Feb 22 11:20:21.347 [conn6] end connection 165.225.128.186:45330 (10 connections now open) m31100| Fri Feb 22 11:20:21.347 [conn12] end connection 165.225.128.186:55630 (14 connections now open) m31100| Fri Feb 22 11:20:21.347 [conn14] end connection 165.225.128.186:38058 (14 connections now open) m31100| Fri Feb 22 11:20:21.347 [conn13] end connection 165.225.128.186:52897 (14 connections now open) m31100| Fri Feb 22 11:20:21.347 [conn15] end connection 165.225.128.186:40845 (13 connections now open) m31200| Fri Feb 22 11:20:21.347 [conn11] end connection 165.225.128.186:41224 (14 connections now open) m31101| Fri Feb 22 11:20:21.347 [conn12] end connection 165.225.128.186:59280 (8 connections now open) m31200| Fri Feb 22 11:20:21.347 [conn13] end connection 165.225.128.186:46311 (14 connections now open) m31100| Fri Feb 22 11:20:21.347 [conn17] end connection 165.225.128.186:62098 (11 connections now open) m31201| Fri Feb 22 
11:20:21.347 [conn11] end connection 165.225.128.186:39345 (8 connections now open) m31101| Fri Feb 22 11:20:21.347 [conn13] end connection 165.225.128.186:43377 (8 connections now open) m31201| Fri Feb 22 11:20:21.347 [conn12] end connection 165.225.128.186:42134 (8 connections now open) m31200| Fri Feb 22 11:20:21.347 [conn15] end connection 165.225.128.186:48503 (14 connections now open) m29000| Fri Feb 22 11:20:21.359 [conn3] end connection 165.225.128.186:56121 (10 connections now open) m29000| Fri Feb 22 11:20:21.359 [conn11] end connection 165.225.128.186:50718 (10 connections now open) m29000| Fri Feb 22 11:20:21.359 [conn4] end connection 165.225.128.186:58413 (10 connections now open) m29000| Fri Feb 22 11:20:21.359 [conn5] end connection 165.225.128.186:50526 (10 connections now open) m29000| Fri Feb 22 11:20:21.359 [conn6] end connection 165.225.128.186:54107 (10 connections now open) Fri Feb 22 11:20:22.346 shell: stopped mongo program on port 30999 Fri Feb 22 11:20:22.347 No db started on port: 30000 Fri Feb 22 11:20:22.347 shell: stopped mongo program on port 30000 Fri Feb 22 11:20:22.347 No db started on port: 30001 Fri Feb 22 11:20:22.347 shell: stopped mongo program on port 30001 ReplSetTest n: 0 ports: [ 31100, 31101 ] 31100 number ReplSetTest stop *** Shutting down mongod in port 31100 *** m31100| Fri Feb 22 11:20:22.347 got signal 15 (Terminated), will terminate after current cmd ends m31100| Fri Feb 22 11:20:22.347 [interruptThread] now exiting m31100| Fri Feb 22 11:20:22.347 dbexit: m31100| Fri Feb 22 11:20:22.347 [interruptThread] shutdown: going to close listening sockets... 
m31100| Fri Feb 22 11:20:22.347 [interruptThread] closing listening socket: 12 m31100| Fri Feb 22 11:20:22.347 [interruptThread] closing listening socket: 13 m31100| Fri Feb 22 11:20:22.348 [interruptThread] closing listening socket: 14 m31100| Fri Feb 22 11:20:22.348 [interruptThread] removing socket file: /tmp/mongodb-31100.sock m31100| Fri Feb 22 11:20:22.348 [interruptThread] shutdown: going to flush diaglog... m31100| Fri Feb 22 11:20:22.348 [interruptThread] shutdown: going to close sockets... m31100| Fri Feb 22 11:20:22.348 [interruptThread] shutdown: waiting for fs preallocator... m31100| Fri Feb 22 11:20:22.348 [interruptThread] shutdown: lock for final commit... m31100| Fri Feb 22 11:20:22.348 [interruptThread] shutdown: final commit... m31100| Fri Feb 22 11:20:22.348 [conn23] end connection 165.225.128.186:53699 (9 connections now open) m31100| Fri Feb 22 11:20:22.348 [conn1] end connection 127.0.0.1:54051 (9 connections now open) m31100| Fri Feb 22 11:20:22.348 [conn9] end connection 165.225.128.186:56264 (9 connections now open) m31101| Fri Feb 22 11:20:22.348 [conn15] end connection 165.225.128.186:61602 (6 connections now open) m31100| Fri Feb 22 11:20:22.348 [conn10] end connection 165.225.128.186:47962 (9 connections now open) m29000| Fri Feb 22 11:20:22.348 [conn7] end connection 165.225.128.186:45712 (5 connections now open) m29000| Fri Feb 22 11:20:22.348 [conn8] end connection 165.225.128.186:41614 (5 connections now open) m31200| Fri Feb 22 11:20:22.348 [conn17] end connection 165.225.128.186:40234 (10 connections now open) m31200| Fri Feb 22 11:20:22.348 [conn18] end connection 165.225.128.186:43875 (10 connections now open) m31201| Fri Feb 22 11:20:22.348 [conn9] end connection 165.225.128.186:56063 (6 connections now open) m31200| Fri Feb 22 11:20:22.348 [conn19] end connection 165.225.128.186:49821 (10 connections now open) m31201| Fri Feb 22 11:20:22.348 [conn10] end connection 165.225.128.186:39391 (6 connections now open) m31100| Fri 
Feb 22 11:20:22.348 [conn21] end connection 165.225.128.186:45792 (9 connections now open) m31101| Fri Feb 22 11:20:22.348 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:20:22.348 [conn19] end connection 165.225.128.186:36167 (9 connections now open) m31100| Fri Feb 22 11:20:22.348 [conn20] end connection 165.225.128.186:39920 (9 connections now open) m31101| Fri Feb 22 11:20:22.348 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31100 m29000| Fri Feb 22 11:20:22.348 [conn10] end connection 165.225.128.186:44442 (3 connections now open) m31200| Fri Feb 22 11:20:22.348 [conn20] end connection 165.225.128.186:45938 (7 connections now open) m31100| Fri Feb 22 11:20:22.365 [interruptThread] shutdown: closing all files... m31100| Fri Feb 22 11:20:22.366 [interruptThread] closeAllFiles() finished m31100| Fri Feb 22 11:20:22.366 [interruptThread] journalCleanup... m31100| Fri Feb 22 11:20:22.366 [interruptThread] removeJournalFiles m31100| Fri Feb 22 11:20:22.366 dbexit: really exiting now Fri Feb 22 11:20:23.347 shell: stopped mongo program on port 31100 ReplSetTest n: 1 ports: [ 31100, 31101 ] 31101 number ReplSetTest stop *** Shutting down mongod in port 31101 *** m31101| Fri Feb 22 11:20:23.348 got signal 15 (Terminated), will terminate after current cmd ends m31101| Fri Feb 22 11:20:23.348 [interruptThread] now exiting m31101| Fri Feb 22 11:20:23.348 dbexit: m31101| Fri Feb 22 11:20:23.348 [interruptThread] shutdown: going to close listening sockets... 
m31101| Fri Feb 22 11:20:23.348 [interruptThread] closing listening socket: 15 m31101| Fri Feb 22 11:20:23.348 [interruptThread] closing listening socket: 16 m31101| Fri Feb 22 11:20:23.348 [interruptThread] closing listening socket: 17 m31101| Fri Feb 22 11:20:23.348 [interruptThread] removing socket file: /tmp/mongodb-31101.sock m31101| Fri Feb 22 11:20:23.348 [interruptThread] shutdown: going to flush diaglog... m31101| Fri Feb 22 11:20:23.348 [interruptThread] shutdown: going to close sockets... m31101| Fri Feb 22 11:20:23.348 [interruptThread] shutdown: waiting for fs preallocator... m31101| Fri Feb 22 11:20:23.348 [interruptThread] shutdown: lock for final commit... m31101| Fri Feb 22 11:20:23.348 [interruptThread] shutdown: final commit... m31101| Fri Feb 22 11:20:23.348 [conn1] end connection 127.0.0.1:65148 (5 connections now open) m31101| Fri Feb 22 11:20:23.348 [conn5] end connection 165.225.128.186:34675 (5 connections now open) m31101| Fri Feb 22 11:20:23.348 [conn6] end connection 165.225.128.186:62481 (5 connections now open) m31101| Fri Feb 22 11:20:23.348 [conn11] end connection 165.225.128.186:36721 (5 connections now open) m31101| Fri Feb 22 11:20:23.349 [conn10] end connection 165.225.128.186:39290 (4 connections now open) m31101| Fri Feb 22 11:20:23.367 [interruptThread] shutdown: closing all files... m31101| Fri Feb 22 11:20:23.367 [interruptThread] closeAllFiles() finished m31101| Fri Feb 22 11:20:23.367 [interruptThread] journalCleanup... 
m31101| Fri Feb 22 11:20:23.367 [interruptThread] removeJournalFiles m31101| Fri Feb 22 11:20:23.368 dbexit: really exiting now Fri Feb 22 11:20:24.348 shell: stopped mongo program on port 31101 ReplSetTest stopSet deleting all dbpaths ReplSetTest stopSet *** Shut down repl set - test worked **** ReplSetTest n: 0 ports: [ 31200, 31201 ] 31200 number ReplSetTest stop *** Shutting down mongod in port 31200 *** m31200| Fri Feb 22 11:20:24.356 got signal 15 (Terminated), will terminate after current cmd ends m31200| Fri Feb 22 11:20:24.356 [interruptThread] now exiting m31200| Fri Feb 22 11:20:24.356 dbexit: m31200| Fri Feb 22 11:20:24.356 [interruptThread] shutdown: going to close listening sockets... m31200| Fri Feb 22 11:20:24.356 [interruptThread] closing listening socket: 18 m31200| Fri Feb 22 11:20:24.356 [interruptThread] closing listening socket: 19 m31200| Fri Feb 22 11:20:24.356 [interruptThread] closing listening socket: 20 m31200| Fri Feb 22 11:20:24.357 [interruptThread] removing socket file: /tmp/mongodb-31200.sock m31200| Fri Feb 22 11:20:24.357 [interruptThread] shutdown: going to flush diaglog... m31200| Fri Feb 22 11:20:24.357 [interruptThread] shutdown: going to close sockets... m31200| Fri Feb 22 11:20:24.357 [interruptThread] shutdown: waiting for fs preallocator... m31200| Fri Feb 22 11:20:24.357 [interruptThread] shutdown: lock for final commit... m31200| Fri Feb 22 11:20:24.357 [interruptThread] shutdown: final commit... 
m31200| Fri Feb 22 11:20:24.357 [conn1] end connection 127.0.0.1:34027 (6 connections now open) m31200| Fri Feb 22 11:20:24.357 [conn22] end connection 165.225.128.186:38882 (6 connections now open) m31201| Fri Feb 22 11:20:24.357 [conn14] end connection 165.225.128.186:38149 (4 connections now open) m31200| Fri Feb 22 11:20:24.357 [conn6] end connection 165.225.128.186:33069 (6 connections now open) m31200| Fri Feb 22 11:20:24.357 [conn8] end connection 165.225.128.186:47585 (6 connections now open) m31200| Fri Feb 22 11:20:24.357 [conn9] end connection 165.225.128.186:46153 (6 connections now open) m29000| Fri Feb 22 11:20:24.357 [conn9] end connection 165.225.128.186:64688 (2 connections now open) m31201| Fri Feb 22 11:20:24.357 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31200 m31200| Fri Feb 22 11:20:24.377 [interruptThread] shutdown: closing all files... m31200| Fri Feb 22 11:20:24.378 [interruptThread] closeAllFiles() finished m31200| Fri Feb 22 11:20:24.378 [interruptThread] journalCleanup... m31200| Fri Feb 22 11:20:24.378 [interruptThread] removeJournalFiles m31200| Fri Feb 22 11:20:24.378 dbexit: really exiting now Fri Feb 22 11:20:25.356 shell: stopped mongo program on port 31200 ReplSetTest n: 1 ports: [ 31200, 31201 ] 31201 number ReplSetTest stop *** Shutting down mongod in port 31201 *** m31201| Fri Feb 22 11:20:25.357 got signal 15 (Terminated), will terminate after current cmd ends m31201| Fri Feb 22 11:20:25.357 [interruptThread] now exiting m31201| Fri Feb 22 11:20:25.357 dbexit: m31201| Fri Feb 22 11:20:25.357 [interruptThread] shutdown: going to close listening sockets... 
m31201| Fri Feb 22 11:20:25.357 [interruptThread] closing listening socket: 21 m31201| Fri Feb 22 11:20:25.357 [interruptThread] closing listening socket: 22 m31201| Fri Feb 22 11:20:25.357 [interruptThread] closing listening socket: 23 m31201| Fri Feb 22 11:20:25.357 [interruptThread] removing socket file: /tmp/mongodb-31201.sock m31201| Fri Feb 22 11:20:25.357 [interruptThread] shutdown: going to flush diaglog... m31201| Fri Feb 22 11:20:25.357 [interruptThread] shutdown: going to close sockets... m31201| Fri Feb 22 11:20:25.357 [interruptThread] shutdown: waiting for fs preallocator... m31201| Fri Feb 22 11:20:25.357 [interruptThread] shutdown: lock for final commit... m31201| Fri Feb 22 11:20:25.357 [interruptThread] shutdown: final commit... m31201| Fri Feb 22 11:20:25.357 [conn1] end connection 127.0.0.1:47001 (3 connections now open) m31201| Fri Feb 22 11:20:25.357 [conn4] end connection 165.225.128.186:54721 (3 connections now open) m31201| Fri Feb 22 11:20:25.357 [conn5] end connection 165.225.128.186:33137 (3 connections now open) m31201| Fri Feb 22 11:20:25.380 [interruptThread] shutdown: closing all files... m31201| Fri Feb 22 11:20:25.381 [interruptThread] closeAllFiles() finished m31201| Fri Feb 22 11:20:25.381 [interruptThread] journalCleanup... m31201| Fri Feb 22 11:20:25.381 [interruptThread] removeJournalFiles m31201| Fri Feb 22 11:20:25.382 dbexit: really exiting now Fri Feb 22 11:20:26.357 shell: stopped mongo program on port 31201 ReplSetTest stopSet deleting all dbpaths Fri Feb 22 11:20:26.361 [ReplicaSetMonitorWatcher] Socket recv() errno:131 Connection reset by peer 165.225.128.186:31100 ReplSetTest stopSet *** Shut down repl set - test worked **** m29000| Fri Feb 22 11:20:26.366 got signal 15 (Terminated), will terminate after current cmd ends m29000| Fri Feb 22 11:20:26.366 [interruptThread] now exiting m29000| Fri Feb 22 11:20:26.366 dbexit: m29000| Fri Feb 22 11:20:26.366 [interruptThread] shutdown: going to close listening sockets... 
m29000| Fri Feb 22 11:20:26.366 [interruptThread] closing listening socket: 32 m29000| Fri Feb 22 11:20:26.366 [interruptThread] closing listening socket: 33 m29000| Fri Feb 22 11:20:26.366 [interruptThread] closing listening socket: 34 m29000| Fri Feb 22 11:20:26.366 [interruptThread] removing socket file: /tmp/mongodb-29000.sock m29000| Fri Feb 22 11:20:26.366 [interruptThread] shutdown: going to flush diaglog... m29000| Fri Feb 22 11:20:26.366 [interruptThread] shutdown: going to close sockets... m29000| Fri Feb 22 11:20:26.366 [interruptThread] shutdown: waiting for fs preallocator... m29000| Fri Feb 22 11:20:26.366 [interruptThread] shutdown: lock for final commit... m29000| Fri Feb 22 11:20:26.366 [interruptThread] shutdown: final commit... m29000| Fri Feb 22 11:20:26.366 [conn1] end connection 127.0.0.1:48681 (1 connection now open) m29000| Fri Feb 22 11:20:26.366 [conn2] end connection 165.225.128.186:57785 (0 connections now open) Fri Feb 22 11:20:26.367 [ReplicaSetMonitorWatcher] SocketException: remote: 165.225.128.186:31100 error: 9001 socket exception [1] server [165.225.128.186:31100] Fri Feb 22 11:20:26.367 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed Fri Feb 22 11:20:26.367 [ReplicaSetMonitorWatcher] trying reconnect to bs-smartos-x86-64-1.10gen.cc:31100 Fri Feb 22 11:20:26.368 [ReplicaSetMonitorWatcher] reconnect bs-smartos-x86-64-1.10gen.cc:31100 failed couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 Fri Feb 22 11:20:26.368 [ReplicaSetMonitorWatcher] Socket recv() errno:131 Connection reset by peer 165.225.128.186:31101 Fri Feb 22 11:20:26.368 [ReplicaSetMonitorWatcher] SocketException: remote: 165.225.128.186:31101 error: 9001 socket exception [1] server [165.225.128.186:31101] Fri Feb 22 11:20:26.368 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed Fri Feb 22 11:20:26.377 [conn9] end connection 127.0.0.1:57590 (0 connections now open) m29000| Fri Feb 22 11:20:26.376 [interruptThread] shutdown: closing 
all files... m29000| Fri Feb 22 11:20:26.377 [interruptThread] closeAllFiles() finished m29000| Fri Feb 22 11:20:26.377 [interruptThread] journalCleanup... m29000| Fri Feb 22 11:20:26.377 [interruptThread] removeJournalFiles m29000| Fri Feb 22 11:20:26.377 dbexit: really exiting now Fri Feb 22 11:20:27.366 shell: stopped mongo program on port 29000 Fri Feb 22 11:20:27.368 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception bs-smartos-x86-64-1.10gen.cc:31100 socket exception [FAILED_STATE] for bs-smartos-x86-64-1.10gen.cc:31100 Fri Feb 22 11:20:27.368 [ReplicaSetMonitorWatcher] trying reconnect to bs-smartos-x86-64-1.10gen.cc:31101 Fri Feb 22 11:20:27.368 [ReplicaSetMonitorWatcher] reconnect bs-smartos-x86-64-1.10gen.cc:31101 failed couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31101 Fri Feb 22 11:20:27.368 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception bs-smartos-x86-64-1.10gen.cc:31101 socket exception [CONNECT_ERROR] for bs-smartos-x86-64-1.10gen.cc:31101 *** ShardingTest rs1 completed successfully in 108.103 seconds *** 1.8052 minutes Fri Feb 22 11:20:27.406 [initandlisten] connection accepted from 127.0.0.1:39368 #10 (1 connection now open) Fri Feb 22 11:20:27.407 [conn10] end connection 127.0.0.1:39368 (0 connections now open) ******************************************* Test : balance_tags1.js ... 
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/balance_tags1.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/balance_tags1.js";TestData.testFile = "balance_tags1.js";TestData.testName = "balance_tags1";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:20:27 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:20:27.582 [initandlisten] connection accepted from 127.0.0.1:60212 #11 (1 connection now open)
null
Resetting db path '/data/db/balance_tags10'
Fri Feb 22 11:20:27.597 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30000 --dbpath /data/db/balance_tags10 --nopreallocj --setParameter enableTestCommands=1
m30000| Fri Feb 22 11:20:27.688 [initandlisten] MongoDB starting : pid=20728 port=30000 dbpath=/data/db/balance_tags10 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30000| Fri Feb 22 11:20:27.689 [initandlisten]
m30000| Fri Feb 22 11:20:27.689 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30000| Fri Feb 22 11:20:27.689 [initandlisten] ** uses to detect impending page faults.
m30000| Fri Feb 22 11:20:27.689 [initandlisten] ** This may result in slower performance for certain use cases
m30000| Fri Feb 22 11:20:27.689 [initandlisten]
m30000| Fri Feb 22 11:20:27.689 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30000| Fri Feb 22 11:20:27.689 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30000| Fri Feb 22 11:20:27.689 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30000| Fri Feb 22 11:20:27.689 [initandlisten] allocator: system
m30000| Fri Feb 22 11:20:27.689 [initandlisten] options: { dbpath: "/data/db/balance_tags10", nopreallocj: true, port: 30000, setParameter: [ "enableTestCommands=1" ] }
m30000| Fri Feb 22 11:20:27.689 [initandlisten] journal dir=/data/db/balance_tags10/journal
m30000| Fri Feb 22 11:20:27.689 [initandlisten] recover : no journal files present, no recovery needed
m30000| Fri Feb 22 11:20:27.691 [FileAllocator] allocating new datafile /data/db/balance_tags10/local.ns, filling with zeroes...
m30000| Fri Feb 22 11:20:27.691 [FileAllocator] creating directory /data/db/balance_tags10/_tmp
m30000| Fri Feb 22 11:20:27.691 [FileAllocator] done allocating datafile /data/db/balance_tags10/local.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 11:20:27.691 [FileAllocator] allocating new datafile /data/db/balance_tags10/local.0, filling with zeroes...
m30000| Fri Feb 22 11:20:27.691 [FileAllocator] done allocating datafile /data/db/balance_tags10/local.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 11:20:27.695 [initandlisten] waiting for connections on port 30000
m30000| Fri Feb 22 11:20:27.695 [websvr] admin web console waiting for connections on port 31000
m30000| Fri Feb 22 11:20:27.799 [initandlisten] connection accepted from 127.0.0.1:51670 #1 (1 connection now open)
Resetting db path '/data/db/balance_tags11'
Fri Feb 22 11:20:27.803 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30001 --dbpath /data/db/balance_tags11 --nopreallocj --setParameter enableTestCommands=1
m30001| Fri Feb 22 11:20:27.893 [initandlisten] MongoDB starting : pid=20729 port=30001 dbpath=/data/db/balance_tags11 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30001| Fri Feb 22 11:20:27.893 [initandlisten]
m30001| Fri Feb 22 11:20:27.893 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30001| Fri Feb 22 11:20:27.893 [initandlisten] ** uses to detect impending page faults.
m30001| Fri Feb 22 11:20:27.893 [initandlisten] ** This may result in slower performance for certain use cases
m30001| Fri Feb 22 11:20:27.893 [initandlisten]
m30001| Fri Feb 22 11:20:27.893 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30001| Fri Feb 22 11:20:27.893 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30001| Fri Feb 22 11:20:27.893 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30001| Fri Feb 22 11:20:27.893 [initandlisten] allocator: system
m30001| Fri Feb 22 11:20:27.893 [initandlisten] options: { dbpath: "/data/db/balance_tags11", nopreallocj: true, port: 30001, setParameter: [ "enableTestCommands=1" ] }
m30001| Fri Feb 22 11:20:27.893 [initandlisten] journal dir=/data/db/balance_tags11/journal
m30001| Fri Feb 22 11:20:27.894 [initandlisten] recover : no journal files present, no recovery needed
m30001| Fri Feb 22 11:20:27.895 [FileAllocator] allocating new datafile /data/db/balance_tags11/local.ns, filling with zeroes...
m30001| Fri Feb 22 11:20:27.895 [FileAllocator] creating directory /data/db/balance_tags11/_tmp
m30001| Fri Feb 22 11:20:27.895 [FileAllocator] done allocating datafile /data/db/balance_tags11/local.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 11:20:27.896 [FileAllocator] allocating new datafile /data/db/balance_tags11/local.0, filling with zeroes...
m30001| Fri Feb 22 11:20:27.896 [FileAllocator] done allocating datafile /data/db/balance_tags11/local.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 11:20:27.899 [initandlisten] waiting for connections on port 30001
m30001| Fri Feb 22 11:20:27.899 [websvr] admin web console waiting for connections on port 31001
m30001| Fri Feb 22 11:20:28.004 [initandlisten] connection accepted from 127.0.0.1:61815 #1 (1 connection now open)
Resetting db path '/data/db/balance_tags12'
Fri Feb 22 11:20:28.007 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30002 --dbpath /data/db/balance_tags12 --nopreallocj --setParameter enableTestCommands=1
m30002| Fri Feb 22 11:20:28.097 [initandlisten] MongoDB starting : pid=20730 port=30002 dbpath=/data/db/balance_tags12 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30002| Fri Feb 22 11:20:28.097 [initandlisten]
m30002| Fri Feb 22 11:20:28.097 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30002| Fri Feb 22 11:20:28.097 [initandlisten] ** uses to detect impending page faults.
m30002| Fri Feb 22 11:20:28.097 [initandlisten] ** This may result in slower performance for certain use cases
m30002| Fri Feb 22 11:20:28.097 [initandlisten]
m30002| Fri Feb 22 11:20:28.097 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30002| Fri Feb 22 11:20:28.097 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30002| Fri Feb 22 11:20:28.097 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30002| Fri Feb 22 11:20:28.097 [initandlisten] allocator: system
m30002| Fri Feb 22 11:20:28.097 [initandlisten] options: { dbpath: "/data/db/balance_tags12", nopreallocj: true, port: 30002, setParameter: [ "enableTestCommands=1" ] }
m30002| Fri Feb 22 11:20:28.097 [initandlisten] journal dir=/data/db/balance_tags12/journal
m30002| Fri Feb 22 11:20:28.098 [initandlisten] recover : no journal files present, no recovery needed
m30002| Fri Feb 22 11:20:28.099 [FileAllocator] allocating new datafile /data/db/balance_tags12/local.ns, filling with zeroes...
m30002| Fri Feb 22 11:20:28.099 [FileAllocator] creating directory /data/db/balance_tags12/_tmp
m30002| Fri Feb 22 11:20:28.099 [FileAllocator] done allocating datafile /data/db/balance_tags12/local.ns, size: 16MB, took 0 secs
m30002| Fri Feb 22 11:20:28.100 [FileAllocator] allocating new datafile /data/db/balance_tags12/local.0, filling with zeroes...
m30002| Fri Feb 22 11:20:28.100 [FileAllocator] done allocating datafile /data/db/balance_tags12/local.0, size: 64MB, took 0 secs
m30002| Fri Feb 22 11:20:28.103 [initandlisten] waiting for connections on port 30002
m30002| Fri Feb 22 11:20:28.103 [websvr] admin web console waiting for connections on port 31002
m30002| Fri Feb 22 11:20:28.209 [initandlisten] connection accepted from 127.0.0.1:34448 #1 (1 connection now open)
"localhost:30000,localhost:30001,localhost:30002"
Fri Feb 22 11:20:28.213 SyncClusterConnection connecting to [localhost:30000]
Fri Feb 22 11:20:28.213 SyncClusterConnection connecting to [localhost:30001]
m30000| Fri Feb 22 11:20:28.213 [initandlisten] connection accepted from 127.0.0.1:58478 #2 (2 connections now open)
Fri Feb 22 11:20:28.214 SyncClusterConnection connecting to [localhost:30002]
m30001| Fri Feb 22 11:20:28.214 [initandlisten] connection accepted from 127.0.0.1:58147 #2 (2 connections now open)
m30002| Fri Feb 22 11:20:28.214 [initandlisten] connection accepted from 127.0.0.1:47206 #2 (2 connections now open)
ShardingTest balance_tags1 : { "config" : "localhost:30000,localhost:30001,localhost:30002", "shards" : [ connection to localhost:30000, connection to localhost:30001, connection to localhost:30002 ] }
Fri Feb 22 11:20:28.218 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30999 --configdb localhost:30000,localhost:30001,localhost:30002 -v --chunkSize 1 --setParameter enableTestCommands=1
m30999| Fri Feb 22 11:20:28.236 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=20731 port=30999 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage)
m30999| Fri Feb 22 11:20:28.236 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30999| Fri Feb 22 11:20:28.236 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30999| Fri Feb 22 11:20:28.236 [mongosMain] options: { chunkSize: 1, configdb: "localhost:30000,localhost:30001,localhost:30002", port: 30999, setParameter: [ "enableTestCommands=1" ], verbose: true }
m30999| Fri Feb 22 11:20:28.236 [mongosMain] config string : localhost:30000,localhost:30001,localhost:30002
m30999| Fri Feb 22 11:20:28.236 [mongosMain] creating new connection to:localhost:30000
m30999| Fri Feb 22 11:20:28.237 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:28.238 [mongosMain] connected connection!
m30000| Fri Feb 22 11:20:28.238 [initandlisten] connection accepted from 127.0.0.1:40427 #3 (3 connections now open)
m30999| Fri Feb 22 11:20:28.238 [mongosMain] creating new connection to:localhost:30001
m30999| Fri Feb 22 11:20:28.238 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:28.239 [mongosMain] connected connection!
m30001| Fri Feb 22 11:20:28.238 [initandlisten] connection accepted from 127.0.0.1:55562 #3 (3 connections now open)
m30999| Fri Feb 22 11:20:28.239 [mongosMain] creating new connection to:localhost:30002
m30999| Fri Feb 22 11:20:28.239 BackgroundJob starting: ConnectBG
m30002| Fri Feb 22 11:20:28.239 [initandlisten] connection accepted from 127.0.0.1:64894 #3 (3 connections now open)
m30999| Fri Feb 22 11:20:28.239 [mongosMain] connected connection!
m30999| Fri Feb 22 11:20:28.240 BackgroundJob starting: CheckConfigServers
m30999| Fri Feb 22 11:20:28.240 [mongosMain] SyncClusterConnection connecting to [localhost:30000]
m30999| Fri Feb 22 11:20:28.240 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 11:20:28.240 [initandlisten] connection accepted from 127.0.0.1:33503 #4 (4 connections now open)
m30999| Fri Feb 22 11:20:28.240 [mongosMain] SyncClusterConnection connecting to [localhost:30001]
m30999| Fri Feb 22 11:20:28.240 BackgroundJob starting: ConnectBG
m30001| Fri Feb 22 11:20:28.241 [initandlisten] connection accepted from 127.0.0.1:42942 #4 (4 connections now open)
m30999| Fri Feb 22 11:20:28.241 [mongosMain] SyncClusterConnection connecting to [localhost:30002]
m30999| Fri Feb 22 11:20:28.241 BackgroundJob starting: ConnectBG
m30002| Fri Feb 22 11:20:28.241 [initandlisten] connection accepted from 127.0.0.1:48691 #4 (4 connections now open)
m30000| Fri Feb 22 11:20:28.242 [conn4] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:28.249 [conn4] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:28.264 [conn4] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:28.273 [mongosMain] scoped connection to localhost:30000,localhost:30001,localhost:30002 not being returned to the pool
m30999| Fri Feb 22 11:20:28.273 [mongosMain] created new distributed lock for configUpgrade on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 11:20:28.274 [mongosMain] SyncClusterConnection connecting to [localhost:30000]
m30999| Fri Feb 22 11:20:28.274 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:28.274 [mongosMain] SyncClusterConnection connecting to [localhost:30001]
m30000| Fri Feb 22 11:20:28.274 [initandlisten] connection accepted from 127.0.0.1:60590 #5 (5 connections now open)
m30999| Fri Feb 22 11:20:28.274 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:28.274 [mongosMain] SyncClusterConnection connecting to [localhost:30002]
m30001| Fri Feb 22 11:20:28.274 [initandlisten] connection accepted from 127.0.0.1:48409 #5 (5 connections now open)
m30999| Fri Feb 22 11:20:28.274 BackgroundJob starting: ConnectBG
m30002| Fri Feb 22 11:20:28.274 [initandlisten] connection accepted from 127.0.0.1:52548 #5 (5 connections now open)
m30999| Fri Feb 22 11:20:28.275 [mongosMain] trying to acquire new distributed lock for configUpgrade on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:20:28.275 [LockPinger] creating distributed lock ping thread for localhost:30000,localhost:30001,localhost:30002 and process bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 (sleeping for 30000ms)
m30999| Fri Feb 22 11:20:28.275 [LockPinger] SyncClusterConnection connecting to [localhost:30000]
m30999| Fri Feb 22 11:20:28.275 [mongosMain] inserting initial doc in config.locks for lock configUpgrade
m30000| Fri Feb 22 11:20:28.276 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:28.276 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:28.276 [LockPinger] SyncClusterConnection connecting to [localhost:30001]
m30999| Fri Feb 22 11:20:28.276 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 11:20:28.276 [initandlisten] connection accepted from 127.0.0.1:58494 #6 (6 connections now open)
m30999| Fri Feb 22 11:20:28.276 [LockPinger] SyncClusterConnection connecting to [localhost:30002]
m30001| Fri Feb 22 11:20:28.276 [initandlisten] connection accepted from 127.0.0.1:46581 #6 (6 connections now open)
m30999| Fri Feb 22 11:20:28.276 BackgroundJob starting: ConnectBG
m30002| Fri Feb 22 11:20:28.276 [initandlisten] connection accepted from 127.0.0.1:52531 #6 (6 connections now open)
m30000| Fri Feb 22 11:20:28.284 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:28.284 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:28.291 [conn4] end connection 127.0.0.1:42942 (5 connections now open)
m30002| Fri Feb 22 11:20:28.291 [conn4] end connection 127.0.0.1:48691 (5 connections now open)
m30001| Fri Feb 22 11:20:28.292 [conn6] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:28.298 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:28.299 [conn4] end connection 127.0.0.1:33503 (5 connections now open)
m30002| Fri Feb 22 11:20:28.306 [FileAllocator] allocating new datafile /data/db/balance_tags12/config.ns, filling with zeroes...
m30000| Fri Feb 22 11:20:28.306 [FileAllocator] allocating new datafile /data/db/balance_tags10/config.ns, filling with zeroes...
m30002| Fri Feb 22 11:20:28.306 [conn6] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:28.306 [FileAllocator] done allocating datafile /data/db/balance_tags10/config.ns, size: 16MB, took 0 secs
m30002| Fri Feb 22 11:20:28.306 [FileAllocator] done allocating datafile /data/db/balance_tags12/config.ns, size: 16MB, took 0 secs
m30002| Fri Feb 22 11:20:28.306 [FileAllocator] allocating new datafile /data/db/balance_tags12/config.0, filling with zeroes...
m30000| Fri Feb 22 11:20:28.306 [FileAllocator] allocating new datafile /data/db/balance_tags10/config.0, filling with zeroes...
m30002| Fri Feb 22 11:20:28.307 [FileAllocator] done allocating datafile /data/db/balance_tags12/config.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 11:20:28.307 [FileAllocator] done allocating datafile /data/db/balance_tags10/config.0, size: 64MB, took 0 secs
m30002| Fri Feb 22 11:20:28.307 [FileAllocator] allocating new datafile /data/db/balance_tags12/config.1, filling with zeroes...
m30000| Fri Feb 22 11:20:28.307 [FileAllocator] allocating new datafile /data/db/balance_tags10/config.1, filling with zeroes...
m30002| Fri Feb 22 11:20:28.307 [FileAllocator] done allocating datafile /data/db/balance_tags12/config.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 11:20:28.307 [FileAllocator] done allocating datafile /data/db/balance_tags10/config.1, size: 128MB, took 0 secs
m30002| Fri Feb 22 11:20:28.310 [conn5] build index config.locks { _id: 1 }
m30000| Fri Feb 22 11:20:28.310 [conn5] build index config.locks { _id: 1 }
m30002| Fri Feb 22 11:20:28.310 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 11:20:28.311 [conn5] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 11:20:28.316 [FileAllocator] allocating new datafile /data/db/balance_tags11/config.ns, filling with zeroes...
m30001| Fri Feb 22 11:20:28.316 [FileAllocator] done allocating datafile /data/db/balance_tags11/config.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 11:20:28.316 [FileAllocator] allocating new datafile /data/db/balance_tags11/config.0, filling with zeroes...
m30001| Fri Feb 22 11:20:28.316 [FileAllocator] done allocating datafile /data/db/balance_tags11/config.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 11:20:28.317 [FileAllocator] allocating new datafile /data/db/balance_tags11/config.1, filling with zeroes...
m30001| Fri Feb 22 11:20:28.317 [FileAllocator] done allocating datafile /data/db/balance_tags11/config.1, size: 128MB, took 0 secs
m30001| Fri Feb 22 11:20:28.320 [conn5] build index config.locks { _id: 1 }
m30001| Fri Feb 22 11:20:28.320 [conn5] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 11:20:28.329 [conn6] build index config.lockpings { _id: 1 }
m30000| Fri Feb 22 11:20:28.329 [conn6] build index config.lockpings { _id: 1 }
m30000| Fri Feb 22 11:20:28.330 [conn6] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 11:20:28.330 [conn6] build index done. scanned 0 total records. 0.001 secs
m30002| Fri Feb 22 11:20:28.331 [conn6] build index config.lockpings { _id: 1 }
m30002| Fri Feb 22 11:20:28.335 [conn6] build index done. scanned 0 total records. 0.003 secs
m30999| Fri Feb 22 11:20:28.406 [mongosMain] about to acquire distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:mongosMain:5758",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:20:28 2013" },
m30999| "why" : "upgrading config database to new format v4",
m30999| "ts" : { "$oid" : "5127547cd4b973931fc9a223" } }
m30999| { "_id" : "configUpgrade",
m30000| Fri Feb 22 11:20:28.407 [conn6] CMD fsync: sync:1 lock:0
m30999| "state" : 0 }
m30000| Fri Feb 22 11:20:28.407 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:28.430 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:28.443 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:28.453 [conn6] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:28.480 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:28.543 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:28.566 [conn6] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:28.578 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:28.586 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:28.599 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:28.602 [conn6] build index config.lockpings { ping: new Date(1) }
m30000| Fri Feb 22 11:20:28.602 [conn6] build index config.lockpings { ping: new Date(1) }
m30001| Fri Feb 22 11:20:28.602 [conn6] build index config.lockpings { ping: new Date(1) }
m30001| Fri Feb 22 11:20:28.604 [conn6] build index done. scanned 1 total records. 0.001 secs
m30002| Fri Feb 22 11:20:28.604 [conn6] build index done. scanned 1 total records. 0.001 secs
m30000| Fri Feb 22 11:20:28.604 [conn6] build index done. scanned 1 total records. 0.002 secs
m30002| Fri Feb 22 11:20:28.627 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:28.680 [LockPinger] cluster localhost:30000,localhost:30001,localhost:30002 pinged successfully at Fri Feb 22 11:20:28 2013 by distributed lock pinger 'localhost:30000,localhost:30001,localhost:30002/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838', sleeping for 30000ms
m30999| Fri Feb 22 11:20:28.714 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 5127547cd4b973931fc9a223
m30999| Fri Feb 22 11:20:28.723 [mongosMain] starting upgrade of config server from v0 to v4
m30999| Fri Feb 22 11:20:28.723 [mongosMain] starting next upgrade step from v0 to v4
m30999| Fri Feb 22 11:20:28.723 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:28-5127547cd4b973931fc9a224", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361532028723), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30000| Fri Feb 22 11:20:28.723 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:28.746 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:28.775 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:28.792 [conn5] build index config.changelog { _id: 1 }
m30000| Fri Feb 22 11:20:28.793 [conn5] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 11:20:28.797 [conn5] build index config.changelog { _id: 1 }
m30001| Fri Feb 22 11:20:28.798 [conn5] build index done. scanned 0 total records. 0 secs
m30002| Fri Feb 22 11:20:28.801 [conn5] build index config.changelog { _id: 1 }
m30002| Fri Feb 22 11:20:28.802 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 11:20:28.885 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:28.908 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:28.936 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:29.022 [mongosMain] writing initial config version at v4
m30000| Fri Feb 22 11:20:29.030 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:29.054 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:29.083 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:29.101 [conn5] build index config.version { _id: 1 }
m30002| Fri Feb 22 11:20:29.101 [conn5] build index config.version { _id: 1 }
m30001| Fri Feb 22 11:20:29.101 [conn5] build index config.version { _id: 1 }
m30002| Fri Feb 22 11:20:29.102 [conn5] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 11:20:29.102 [conn5] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 11:20:29.103 [conn5] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 11:20:29.160 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:29-5127547dd4b973931fc9a226", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361532029160), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30000| Fri Feb 22 11:20:29.160 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:29.184 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:29.213 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:29.295 [mongosMain] upgrade of config server to v4 successful
m30000| Fri Feb 22 11:20:29.295 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:29.318 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:29.346 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:29.500 [conn5] command admin.$cmd command: { getlasterror: 1, fsync: 1 } ntoreturn:1 keyUpdates:0 reslen:97 102ms
m30999| Fri Feb 22 11:20:29.500 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
m30000| Fri Feb 22 11:20:29.504 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:29.526 [conn6] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:29.546 [conn6] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:29.564 [conn6] build index config.settings { _id: 1 }
m30001| Fri Feb 22 11:20:29.567 [conn6] build index config.settings { _id: 1 }
m30002| Fri Feb 22 11:20:29.568 [conn6] build index config.settings { _id: 1 }
m30000| Fri Feb 22 11:20:29.568 [conn6] build index done. scanned 0 total records. 0.004 secs
m30001| Fri Feb 22 11:20:29.572 [conn6] build index done. scanned 0 total records. 0.004 secs
m30002| Fri Feb 22 11:20:29.572 [conn6] build index done. scanned 0 total records. 0.004 secs
m30000| Fri Feb 22 11:20:29.637 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:29.660 [conn6] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:29.681 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:29.701 [conn6] build index config.chunks { _id: 1 }
m30000| Fri Feb 22 11:20:29.701 [conn6] build index config.chunks { _id: 1 }
m30002| Fri Feb 22 11:20:29.702 [conn6] build index config.chunks { _id: 1 }
m30000| Fri Feb 22 11:20:29.705 [conn6] build index done. scanned 0 total records. 0.004 secs
m30000| Fri Feb 22 11:20:29.706 [conn6] info: creating collection config.chunks on add index
m30000| Fri Feb 22 11:20:29.706 [conn6] build index config.chunks { ns: 1, min: 1 }
m30001| Fri Feb 22 11:20:29.706 [conn6] build index done. scanned 0 total records. 0.004 secs
m30001| Fri Feb 22 11:20:29.706 [conn6] info: creating collection config.chunks on add index
m30001| Fri Feb 22 11:20:29.706 [conn6] build index config.chunks { ns: 1, min: 1 }
m30002| Fri Feb 22 11:20:29.706 [conn6] build index done. scanned 0 total records. 0.004 secs
m30002| Fri Feb 22 11:20:29.706 [conn6] info: creating collection config.chunks on add index
m30002| Fri Feb 22 11:20:29.706 [conn6] build index config.chunks { ns: 1, min: 1 }
m30000| Fri Feb 22 11:20:29.708 [conn6] build index done. scanned 0 total records. 0.002 secs
m30001| Fri Feb 22 11:20:29.708 [conn6] build index done. scanned 0 total records. 0.002 secs
m30002| Fri Feb 22 11:20:29.708 [conn6] build index done. scanned 0 total records. 0.002 secs
m30000| Fri Feb 22 11:20:29.774 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:29.798 [conn6] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:29.821 [conn6] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:29.838 [conn6] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30002| Fri Feb 22 11:20:29.838 [conn6] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30001| Fri Feb 22 11:20:29.838 [conn6] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30002| Fri Feb 22 11:20:29.840 [conn6] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 11:20:29.841 [conn6] build index done. scanned 0 total records. 0.002 secs
m30001| Fri Feb 22 11:20:29.841 [conn6] build index done. scanned 0 total records. 0.002 secs
m30000| Fri Feb 22 11:20:29.876 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:29.900 [conn6] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:29.921 [conn6] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:29.938 [conn6] build index config.chunks { ns: 1, lastmod: 1 }
m30001| Fri Feb 22 11:20:29.938 [conn6] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Fri Feb 22 11:20:29.938 [conn6] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Fri Feb 22 11:20:29.940 [conn6] build index done. scanned 0 total records. 0.002 secs
m30002| Fri Feb 22 11:20:29.940 [conn6] build index done. scanned 0 total records. 0.002 secs
m30001| Fri Feb 22 11:20:29.940 [conn6] build index done. scanned 0 total records. 0.002 secs
m30000| Fri Feb 22 11:20:29.979 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:30.001 [conn6] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:30.024 [conn6] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:30.041 [conn6] build index config.shards { _id: 1 }
m30000| Fri Feb 22 11:20:30.044 [conn6] build index done. scanned 0 total records. 0.002 secs
m30000| Fri Feb 22 11:20:30.044 [conn6] info: creating collection config.shards on add index
m30002| Fri Feb 22 11:20:30.044 [conn6] build index config.shards { _id: 1 }
m30000| Fri Feb 22 11:20:30.044 [conn6] build index config.shards { host: 1 }
m30001| Fri Feb 22 11:20:30.044 [conn6] build index config.shards { _id: 1 }
m30002| Fri Feb 22 11:20:30.048 [conn6] build index done. scanned 0 total records. 0.003 secs
m30002| Fri Feb 22 11:20:30.048 [conn6] info: creating collection config.shards on add index
m30002| Fri Feb 22 11:20:30.048 [conn6] build index config.shards { host: 1 }
m30001| Fri Feb 22 11:20:30.048 [conn6] build index done. scanned 0 total records. 0.003 secs
m30001| Fri Feb 22 11:20:30.048 [conn6] info: creating collection config.shards on add index
m30001| Fri Feb 22 11:20:30.048 [conn6] build index config.shards { host: 1 }
m30000| Fri Feb 22 11:20:30.048 [conn6] build index done. scanned 0 total records. 0.003 secs
m30002| Fri Feb 22 11:20:30.052 [conn6] build index done. scanned 0 total records. 0.003 secs
m30001| Fri Feb 22 11:20:30.052 [conn6] build index done. scanned 0 total records. 0.004 secs
m30000| Fri Feb 22 11:20:30.151 [conn6] command admin.$cmd command: { getlasterror: 1, fsync: 1 } ntoreturn:1 keyUpdates:0 reslen:79 102ms
m30999| Fri Feb 22 11:20:30.185 [websvr] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 11:20:30.185 BackgroundJob starting: Balancer
m30999| Fri Feb 22 11:20:30.185 [Balancer] about to contact config servers and shards
m30999| Fri Feb 22 11:20:30.185 BackgroundJob starting: cursorTimeout
m30999| Fri Feb 22 11:20:30.185 [mongosMain] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 11:20:30.185 BackgroundJob starting: PeriodicTask::Runner
m30999| Fri Feb 22 11:20:30.185 [websvr] admin web console waiting for connections on port 31999
m30999| Fri Feb 22 11:20:30.185 [Balancer] config servers and shards contacted successfully
m30999| Fri Feb 22 11:20:30.185 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30999 started at Feb 22 11:20:30
m30999| Fri Feb 22 11:20:30.185 [Balancer] created new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 11:20:30.185 [Balancer] SyncClusterConnection connecting to [localhost:30000]
m30999| Fri Feb 22 11:20:30.185 [mongosMain] waiting for connections on port 30999
m30999| Fri Feb 22 11:20:30.185 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 11:20:30.185 [initandlisten] connection accepted from 127.0.0.1:56278 #7 (6 connections now open)
m30999| Fri Feb 22 11:20:30.186 [Balancer] SyncClusterConnection connecting to [localhost:30001]
m30999| Fri Feb 22 11:20:30.186 BackgroundJob starting: ConnectBG
m30002| Fri Feb 22 11:20:30.186 [conn6] build index config.mongos { _id: 1 }
m30001| Fri Feb 22 11:20:30.186 [conn6] build index config.mongos { _id: 1 }
m30000| Fri Feb 22 11:20:30.186 [conn6] build index config.mongos { _id: 1 }
m30001| Fri Feb 22 11:20:30.186 [initandlisten] connection accepted from 127.0.0.1:58330 #7 (6 connections now open)
m30999| Fri Feb 22 11:20:30.186 [Balancer] SyncClusterConnection connecting to [localhost:30002]
m30999| Fri Feb 22 11:20:30.186 BackgroundJob starting: ConnectBG
m30002| Fri Feb 22 11:20:30.186 [initandlisten] connection accepted from 127.0.0.1:61888 #7 (6 connections now open)
m30002| Fri Feb 22 11:20:30.188 [conn6] build index done. scanned 0 total records. 0.002 secs
m30001| Fri Feb 22 11:20:30.188 [conn6] build index done. scanned 0 total records. 0.002 secs
m30000| Fri Feb 22 11:20:30.188 [conn6] build index done. scanned 0 total records. 0.002 secs
m30999| Fri Feb 22 11:20:30.189 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:20:30.189 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:20:30.189 [Balancer] inserting initial doc in config.locks for lock balancer
m30000| Fri Feb 22 11:20:30.189 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:30.213 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:30.223 [mongosMain] connection accepted from 127.0.0.1:34561 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Fri Feb 22 11:20:30.226 [conn1] couldn't find database [admin] in config db
m30000| Fri Feb 22 11:20:30.226 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:30.240 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:30.242 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:30.267 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:30.289 [conn7] build index config.databases { _id: 1 }
m30002| Fri Feb 22 11:20:30.289 [conn7] build index config.databases { _id: 1 }
m30000| Fri Feb 22 11:20:30.289 [conn7] build index config.databases { _id: 1 }
m30002| Fri Feb 22 11:20:30.291 [conn7] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 11:20:30.291 [conn7] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 11:20:30.291 [conn7] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 11:20:30.320 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:20:30 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "5127547ed4b973931fc9a228" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30000| Fri Feb 22 11:20:30.320 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:30.341 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:30.354 [conn1] put [admin] on: config:localhost:30000,localhost:30001,localhost:30002
m30999| Fri Feb 22 11:20:30.355 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 11:20:30.355 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:30.355 [conn1] connected connection!
m30000| Fri Feb 22 11:20:30.355 [initandlisten] connection accepted from 127.0.0.1:50872 #8 (7 connections now open)
m30999| Fri Feb 22 11:20:30.356 [conn1] going to add shard: { _id: "shard0000", host: "localhost:30000" }
m30000| Fri Feb 22 11:20:30.356 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:30.366 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:30.371 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:30.394 [conn7] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:30.395 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:30.420 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:30.446 [conn5] CMD fsync: sync:1 lock:0
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Fri Feb 22 11:20:30.497 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 11:20:30.497 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:30.497 [conn1] connected connection!
m30001| Fri Feb 22 11:20:30.497 [initandlisten] connection accepted from 127.0.0.1:48237 #8 (7 connections now open)
m30999| Fri Feb 22 11:20:30.499 [conn1] going to add shard: { _id: "shard0001", host: "localhost:30001" }
m30000| Fri Feb 22 11:20:30.499 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:30.513 [conn7] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:30.530 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 5127547ed4b973931fc9a228
m30999| Fri Feb 22 11:20:30.530 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:20:30.530 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:20:30.530 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:20:30.531 [Balancer] no collections to balance
m30999| Fri Feb 22 11:20:30.531 [Balancer] no need to move any chunk
m30999| Fri Feb 22 11:20:30.531 [Balancer] *** end of balancing round
m30000| Fri Feb 22 11:20:30.531 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:30.538 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:30.549 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:30.576 [conn5] CMD fsync: sync:1 lock:0
{ "shardAdded" : "shard0001", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30002
m30999| Fri Feb 22 11:20:30.633 [conn1] creating new connection to:localhost:30002
m30999| Fri Feb 22 11:20:30.634 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:30.634 [conn1] connected connection!
m30002| Fri Feb 22 11:20:30.634 [initandlisten] connection accepted from 127.0.0.1:54222 #8 (7 connections now open)
m30999| Fri Feb 22 11:20:30.635 [conn1] going to add shard: { _id: "shard0002", host: "localhost:30002" }
m30000| Fri Feb 22 11:20:30.635 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:30.653 [conn7] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:30.666 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
m30002| Fri Feb 22 11:20:30.681 [conn7] CMD fsync: sync:1 lock:0
{ "shardAdded" : "shard0002", "ok" : 1 }
m30999| Fri Feb 22 11:20:30.804 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 11:20:30.804 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:30.804 [conn1] connected connection!
m30000| Fri Feb 22 11:20:30.804 [initandlisten] connection accepted from 127.0.0.1:38220 #9 (8 connections now open)
m30999| Fri Feb 22 11:20:30.804 [conn1] creating WriteBackListener for: localhost:30000 serverID: 5127547ed4b973931fc9a227
m30999| Fri Feb 22 11:20:30.804 [conn1] initializing shard connection to localhost:30000
m30999| Fri Feb 22 11:20:30.804 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Fri Feb 22 11:20:30.805 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 11:20:30.805 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:30.805 [conn1] connected connection!
m30001| Fri Feb 22 11:20:30.805 [initandlisten] connection accepted from 127.0.0.1:57851 #9 (8 connections now open)
m30999| Fri Feb 22 11:20:30.805 [conn1] creating WriteBackListener for: localhost:30001 serverID: 5127547ed4b973931fc9a227
m30999| Fri Feb 22 11:20:30.805 [conn1] initializing shard connection to localhost:30001
m30999| Fri Feb 22 11:20:30.805 BackgroundJob starting: WriteBackListener-localhost:30001
m30999| Fri Feb 22 11:20:30.806 [conn1] creating new connection to:localhost:30002
m30999| Fri Feb 22 11:20:30.806 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:30.806 [conn1] connected connection!
m30002| Fri Feb 22 11:20:30.806 [initandlisten] connection accepted from 127.0.0.1:42564 #9 (8 connections now open)
m30999| Fri Feb 22 11:20:30.806 [conn1] creating WriteBackListener for: localhost:30002 serverID: 5127547ed4b973931fc9a227
m30999| Fri Feb 22 11:20:30.806 [conn1] initializing shard connection to localhost:30002
m30999| Fri Feb 22 11:20:30.806 BackgroundJob starting: WriteBackListener-localhost:30002
m30999| Fri Feb 22 11:20:30.807 [conn1] SyncClusterConnection connecting to [localhost:30000]
m30999| Fri Feb 22 11:20:30.807 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 11:20:30.807 [initandlisten] connection accepted from 127.0.0.1:44398 #10 (9 connections now open)
m30999| Fri Feb 22 11:20:30.807 [conn1] SyncClusterConnection connecting to [localhost:30001]
m30999| Fri Feb 22 11:20:30.807 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:30.807 [conn1] SyncClusterConnection connecting to [localhost:30002]
m30001| Fri Feb 22 11:20:30.807 [initandlisten] connection accepted from 127.0.0.1:50063 #10 (9 connections now open)
m30999| Fri Feb 22 11:20:30.807 BackgroundJob starting: ConnectBG
m30002| Fri Feb 22 11:20:30.808 [initandlisten] connection accepted from 127.0.0.1:59349 #10 (9 connections now open)
m30000| Fri Feb 22 11:20:30.808 [conn10] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:30.833 [conn10] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:30.854 [conn10] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:30.940 [conn1] couldn't find database [test] in config db
m30999| Fri Feb 22 11:20:30.940 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 11:20:30.940 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:30.940 [conn1] connected connection!
m30000| Fri Feb 22 11:20:30.940 [initandlisten] connection accepted from 127.0.0.1:53660 #11 (10 connections now open)
m30999| Fri Feb 22 11:20:30.941 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 11:20:30.941 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:30.941 [conn1] connected connection!
m30001| Fri Feb 22 11:20:30.941 [initandlisten] connection accepted from 127.0.0.1:42693 #11 (10 connections now open)
m30999| Fri Feb 22 11:20:30.942 [conn1] creating new connection to:localhost:30002
m30999| Fri Feb 22 11:20:30.942 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:30.942 [conn1] connected connection!
m30002| Fri Feb 22 11:20:30.942 [initandlisten] connection accepted from 127.0.0.1:44320 #11 (10 connections now open)
m30999| Fri Feb 22 11:20:30.942 [conn1] best shard for new allocation is shard: shard0000:localhost:30000 mapped: 160 writeLock: 0 version: 2.4.0-rc1-pre-
m30000| Fri Feb 22 11:20:30.942 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:30.960 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:30.989 [conn7] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:31.110 [conn1] put [test] on: shard0000:localhost:30000
m30000| Fri Feb 22 11:20:31.110 [FileAllocator] allocating new datafile /data/db/balance_tags10/test.ns, filling with zeroes...
m30000| Fri Feb 22 11:20:31.110 [FileAllocator] done allocating datafile /data/db/balance_tags10/test.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 11:20:31.111 [FileAllocator] allocating new datafile /data/db/balance_tags10/test.0, filling with zeroes...
m30000| Fri Feb 22 11:20:31.111 [FileAllocator] done allocating datafile /data/db/balance_tags10/test.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 11:20:31.111 [FileAllocator] allocating new datafile /data/db/balance_tags10/test.1, filling with zeroes...
m30000| Fri Feb 22 11:20:31.111 [FileAllocator] done allocating datafile /data/db/balance_tags10/test.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 11:20:31.114 [conn9] build index test.foo { _id: 1 }
m30000| Fri Feb 22 11:20:31.115 [conn9] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 11:20:31.117 [conn1] enabling sharding on: test
m30000| Fri Feb 22 11:20:31.117 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:31.143 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:31.168 [conn7] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:31.281 [conn1] CMD: shardcollection: { shardcollection: "test.foo", key: { _id: 1.0 } }
m30999| Fri Feb 22 11:20:31.281 [conn1] enable sharding on: test.foo with shard key: { _id: 1.0 }
m30999| Fri Feb 22 11:20:31.282 [conn1] going to create 1 chunk(s) for: test.foo using new epoch 5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:31.282 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:31.306 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:31.332 [conn7] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:31.451 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||5127547fd4b973931fc9a229 based on: (empty)
m30000| Fri Feb 22 11:20:31.451 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:31.475 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:31.501 [conn7] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:31.587 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:31.612 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:31.640 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:31.665 [conn7] build index config.collections { _id: 1 }
m30001| Fri Feb 22 11:20:31.665 [conn7] build index config.collections { _id: 1 }
m30000| Fri Feb 22 11:20:31.665 [conn7] build index config.collections { _id: 1 }
m30000| Fri Feb 22 11:20:31.670 [conn7] build index done. scanned 0 total records. 0.004 secs
m30002| Fri Feb 22 11:20:31.670 [conn7] build index done. scanned 0 total records. 0.004 secs
m30001| Fri Feb 22 11:20:31.670 [conn7] build index done. scanned 0 total records. 0.004 secs
m30999| Fri Feb 22 11:20:31.758 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000,localhost:30001,localhost:30002", version: Timestamp 1000|0, versionEpoch: ObjectId('5127547fd4b973931fc9a229'), serverID: ObjectId('5127547ed4b973931fc9a227'), shard: "shard0000", shardHost: "localhost:30000" } 0x1180c80 2
m30999| Fri Feb 22 11:20:31.759 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.foo'" }
m30999| Fri Feb 22 11:20:31.759 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000,localhost:30001,localhost:30002", version: Timestamp 1000|0, versionEpoch: ObjectId('5127547fd4b973931fc9a229'), serverID: ObjectId('5127547ed4b973931fc9a227'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x1180c80 2
m30000| Fri Feb 22 11:20:31.759 [conn9] no current chunk manager found for this shard, will initialize
m30000| Fri Feb 22 11:20:31.759 [conn9] SyncClusterConnection connecting to [localhost:30000]
m30000| Fri Feb 22 11:20:31.760 [conn9] SyncClusterConnection connecting to [localhost:30001]
m30000| Fri Feb 22 11:20:31.760 [initandlisten] connection accepted from 127.0.0.1:40271 #12 (11 connections now open)
m30000| Fri Feb 22 11:20:31.760 [conn9] SyncClusterConnection connecting to [localhost:30002]
m30001| Fri Feb 22 11:20:31.760 [initandlisten] connection accepted from 127.0.0.1:43078 #12 (11 connections now open)
m30002| Fri Feb 22 11:20:31.760 [initandlisten] connection accepted from 127.0.0.1:60374 #12 (11 connections now open)
m30999| Fri Feb 22 11:20:31.761 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30000| Fri Feb 22 11:20:31.761 [conn10] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:31.795 [conn10] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:31.816 [conn10] CMD fsync: sync:1 lock:0
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
m30999| Fri Feb 22 11:20:31.898 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey }
m30000| Fri Feb 22 11:20:31.898 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 0.0 } ], shardId: "test.foo-_id_MinKey", configdb: "localhost:30000,localhost:30001,localhost:30002" }
m30000| Fri Feb 22 11:20:31.899 [initandlisten] connection accepted from 127.0.0.1:34264 #13 (12 connections now open)
m30001| Fri Feb 22 11:20:31.899 [initandlisten] connection accepted from 127.0.0.1:49945 #13 (12 connections now open)
m30002| Fri Feb 22 11:20:31.900 [initandlisten] connection accepted from 127.0.0.1:43096 #13 (12 connections now open)
m30000| Fri Feb 22 11:20:31.902 [conn11] SyncClusterConnection connecting to [localhost:30000]
m30000| Fri Feb 22 11:20:31.902 [LockPinger] creating distributed lock ping thread for localhost:30000,localhost:30001,localhost:30002 and process bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257 (sleeping for 30000ms)
m30000| Fri Feb 22 11:20:31.902 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:31.902 [initandlisten] connection accepted from 127.0.0.1:49538 #14 (13 connections now open)
m30000| Fri Feb 22 11:20:31.902 [conn11] SyncClusterConnection connecting to [localhost:30001]
m30001| Fri Feb 22 11:20:31.913 [initandlisten] connection accepted from 127.0.0.1:51906 #14 (13 connections now open)
m30000| Fri Feb 22 11:20:31.913 [conn11] SyncClusterConnection connecting to [localhost:30002]
m30002| Fri Feb 22 11:20:31.913 [initandlisten] connection accepted from 127.0.0.1:33814 #14 (13 connections now open)
m30000| Fri Feb 22 11:20:31.913 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:31.935 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:31.946 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:31.956 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:31.981 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:32.032 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:32.032 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:32.065 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:32.069 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:32.091 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:32.108 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:32.168 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:32.168 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:32.205 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:32.209 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:32.231 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:32.248 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:32.305 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754800cfd6a2130a0abd0
m30000| Fri Feb 22 11:20:32.306 [conn11] splitChunk accepted at version 1|0||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:32.306 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:32.334 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:32.355 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:32.480 [conn12] command admin.$cmd command: { getlasterror: 1, fsync: 1 } ntoreturn:1 keyUpdates:0 reslen:79 102ms
m30000| Fri Feb 22 11:20:32.509 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:32-512754800cfd6a2130a0abd1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532032509), what: "split", ns: "test.foo", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } }
m30000| Fri Feb 22 11:20:32.510 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:32.539 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:32.561 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:32.612 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:32.640 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:32.660 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:32.714 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:32.739 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:32.764 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:32.816 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:20:32.816 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 0.0 } ], shardId: "test.foo-_id_MinKey", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:111 reslen:37 918ms
m30999| Fri Feb 22 11:20:32.817 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|2||5127547fd4b973931fc9a229 based on: 1|0||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:20:32.818 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|2||000000000000000000000000min: { _id: 0.0 }max: { _id: MaxKey }
m30000| Fri Feb 22 11:20:32.818 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 1.0 } ], shardId: "test.foo-_id_0.0", configdb: "localhost:30000,localhost:30001,localhost:30002" }
m30000| Fri Feb 22 11:20:32.819 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:32.843 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:32.868 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:32.919 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:32.944 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:32.968 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:33.021 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754800cfd6a2130a0abd2
m30000| Fri Feb 22 11:20:33.022 [conn11] splitChunk accepted at version 1|2||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:33.022 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:33.051 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:33.072 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:33.158 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:33-512754810cfd6a2130a0abd3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532033158), what: "split", ns: "test.foo", details: { before: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 1.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } }
m30000| Fri Feb 22 11:20:33.158 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:33.187 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:33.208 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:33.260 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:33.285 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:33.310 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:33.363 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:20:33.363 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 1.0 } ], shardId: "test.foo-_id_0.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:60 reslen:103 544ms
m30999| Fri Feb 22 11:20:33.364 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 1|4||5127547fd4b973931fc9a229 based on: 1|2||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:20:33.365 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|4||000000000000000000000000min: { _id: 1.0 }max: { _id: MaxKey }
m30000| Fri Feb 22 11:20:33.365 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 2.0 } ], shardId: "test.foo-_id_1.0", configdb: "localhost:30000,localhost:30001,localhost:30002" }
m30000| Fri Feb 22 11:20:33.365 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:33.390 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:33.414 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:33.465 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:33.490 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:33.515 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:33.568 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754810cfd6a2130a0abd4
m30000| Fri Feb 22 11:20:33.569 [conn11] splitChunk accepted at version 1|4||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:33.569 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:33.601 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:33.625 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:33.705 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:33-512754810cfd6a2130a0abd5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532033705), what: "split", ns: "test.foo", details: { before: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1.0 }, max: { _id: 2.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 2.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } }
m30000| Fri Feb 22 11:20:33.705 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:33.733 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:33.755 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:33.807 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:33.832 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:33.857 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:33.909 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:20:33.909 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 2.0 } ], shardId: "test.foo-_id_1.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:59 reslen:103 544ms
m30999| Fri Feb 22 11:20:33.910 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 5 version: 1|6||5127547fd4b973931fc9a229 based on: 1|4||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:20:33.911 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|6||000000000000000000000000min: { _id: 2.0 }max: { _id: MaxKey }
m30000| Fri Feb 22 11:20:33.911 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 2.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 3.0 } ], shardId: "test.foo-_id_2.0", configdb: "localhost:30000,localhost:30001,localhost:30002" }
m30000| Fri Feb 22 11:20:33.912 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:33.936 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:33.961 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:34.012 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:34.037 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:34.063 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:34.114 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754810cfd6a2130a0abd6
m30000| Fri Feb 22 11:20:34.115 [conn11] splitChunk accepted at version 1|6||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:34.115 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:34.144 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:34.166 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:34.251 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:34-512754820cfd6a2130a0abd7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532034251), what: "split", ns: "test.foo", details: { before: { min: { _id: 2.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2.0 }, max: { _id: 3.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 3.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } }
m30000| Fri Feb 22 11:20:34.251 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:34.280 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:34.302 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:34.353 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:34.378 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:34.403 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:34.456 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:20:34.456 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 2.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 3.0 } ], shardId: "test.foo-_id_2.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:58 reslen:103 544ms
m30999| Fri Feb 22 11:20:34.457 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 6 version: 1|8||5127547fd4b973931fc9a229 based on: 1|6||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:20:34.458 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|8||000000000000000000000000min: { _id: 3.0 }max: { _id: MaxKey }
m30000| Fri Feb 22 11:20:34.458 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 3.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 4.0 } ], shardId: "test.foo-_id_3.0", configdb: "localhost:30000,localhost:30001,localhost:30002" }
m30000| Fri Feb 22 11:20:34.458 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:34.484 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:34.511 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:34.593 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:34.617 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:34.642 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:34.696 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754820cfd6a2130a0abd8
m30000| Fri Feb 22 11:20:34.697 [conn11] splitChunk accepted at version 1|8||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:34.697 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:34.726 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:34.750 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:34.832 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:34-512754820cfd6a2130a0abd9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532034832), what: "split", ns: "test.foo", details: { before: { min: { _id: 3.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 3.0 }, max: { _id: 4.0 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 4.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } }
m30000| Fri Feb 22 11:20:34.832 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:34.861 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:34.883 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:34.935 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:34.960 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:34.985 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:35.037 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:20:35.037 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 3.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 4.0 } ], shardId: "test.foo-_id_3.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:48 reslen:103 579ms
m30999| Fri Feb 22 11:20:35.038 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 7 version: 1|10||5127547fd4b973931fc9a229 based on: 1|8||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:20:35.039 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|10||000000000000000000000000min: { _id: 4.0 }max: { _id: MaxKey }
m30000| Fri Feb 22 11:20:35.039 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 4.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 5.0 } ], shardId: "test.foo-_id_4.0", configdb: "localhost:30000,localhost:30001,localhost:30002" }
m30000| Fri Feb 22 11:20:35.039 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:35.064 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:35.089 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:35.139 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:35.165 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:35.190 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:35.242 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754830cfd6a2130a0abda
m30000| Fri Feb 22 11:20:35.243 [conn11] splitChunk accepted at version 1|10||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:35.243 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:35.271 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:35.293 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:35.378 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:35-512754830cfd6a2130a0abdb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532035378), what: "split", ns: "test.foo", details: { before: { min: { _id: 4.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 4.0 }, max: { _id: 5.0 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 5.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } }
m30000| Fri Feb 22 11:20:35.378 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:35.407 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:35.429 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:35.481 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:35.506 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:35.531 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:35.583 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:20:35.583 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 4.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 5.0 } ], shardId: "test.foo-_id_4.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:48 reslen:103 544ms m30999| Fri Feb 22 11:20:35.584 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 8 version: 1|12||5127547fd4b973931fc9a229 based on: 1|10||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:20:35.585 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|12||000000000000000000000000min: { _id: 5.0 }max: { _id: MaxKey } m30000| Fri Feb 22 11:20:35.586 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 5.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 6.0 } ], shardId: "test.foo-_id_5.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } m30000| Fri Feb 22 11:20:35.586 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:35.611 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:35.640 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:35.720 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:35.745 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:35.773 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:35.857 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754830cfd6a2130a0abdc m30000| Fri Feb 22 11:20:35.858 [conn11] splitChunk accepted at version 1|12||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:20:35.858 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:35.892 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:35.914 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:35.993 [conn11] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:35-512754830cfd6a2130a0abdd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532035993), what: "split", ns: "test.foo", details: { before: { min: { _id: 5.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 5.0 }, max: { _id: 6.0 }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 6.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } } m30000| Fri Feb 22 11:20:35.993 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:36.027 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:36.048 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:36.130 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:36.150 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:36.174 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:36.232 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. 
m30000| Fri Feb 22 11:20:36.232 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 5.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 6.0 } ], shardId: "test.foo-_id_5.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:67 reslen:103 646ms m30999| Fri Feb 22 11:20:36.233 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 9 version: 1|14||5127547fd4b973931fc9a229 based on: 1|12||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:20:36.234 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|14||000000000000000000000000min: { _id: 6.0 }max: { _id: MaxKey } m30000| Fri Feb 22 11:20:36.234 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 6.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 7.0 } ], shardId: "test.foo-_id_6.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } m30000| Fri Feb 22 11:20:36.234 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:36.254 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:36.278 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:36.335 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:36.354 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:36.379 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:36.437 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754840cfd6a2130a0abde m30000| Fri Feb 22 11:20:36.438 [conn11] splitChunk accepted at version 1|14||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:20:36.438 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:36.465 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:36.483 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:36.540 [conn11] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:36-512754840cfd6a2130a0abdf", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532036540), what: "split", ns: "test.foo", details: { before: { min: { _id: 6.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 6.0 }, max: { _id: 7.0 }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 7.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } } m30000| Fri Feb 22 11:20:36.540 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:36.568 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:36.590 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:36.642 [conn14] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:20:36.667 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:20:36.667 [Balancer] skipping balancing round because balancing is disabled m30001| Fri Feb 22 11:20:36.667 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:36.693 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:36.779 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. 
m30000| Fri Feb 22 11:20:36.779 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 6.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 7.0 } ], shardId: "test.foo-_id_6.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:59 reslen:103 544ms m30999| Fri Feb 22 11:20:36.780 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 10 version: 1|16||5127547fd4b973931fc9a229 based on: 1|14||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:20:36.781 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|16||000000000000000000000000min: { _id: 7.0 }max: { _id: MaxKey } m30000| Fri Feb 22 11:20:36.781 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 7.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 8.0 } ], shardId: "test.foo-_id_7.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } m30000| Fri Feb 22 11:20:36.781 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:36.806 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:36.830 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:36.881 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:36.906 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:36.931 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:36.984 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754840cfd6a2130a0abe0 m30000| Fri Feb 22 11:20:36.984 [conn11] splitChunk accepted at version 1|16||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:20:36.985 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:37.014 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:37.038 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:37.120 [conn11] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:37-512754850cfd6a2130a0abe1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532037120), what: "split", ns: "test.foo", details: { before: { min: { _id: 7.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 7.0 }, max: { _id: 8.0 }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 8.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } } m30000| Fri Feb 22 11:20:37.120 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:37.149 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:37.171 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:37.223 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:37.248 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:37.273 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:37.325 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. 
m30000| Fri Feb 22 11:20:37.325 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 7.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 8.0 } ], shardId: "test.foo-_id_7.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:60 reslen:103 544ms m30999| Fri Feb 22 11:20:37.326 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 11 version: 1|18||5127547fd4b973931fc9a229 based on: 1|16||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:20:37.327 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|18||000000000000000000000000min: { _id: 8.0 }max: { _id: MaxKey } m30000| Fri Feb 22 11:20:37.327 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 8.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 9.0 } ], shardId: "test.foo-_id_8.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } m30000| Fri Feb 22 11:20:37.327 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:37.352 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:37.377 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:37.428 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:37.452 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:37.477 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:37.530 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754850cfd6a2130a0abe2 m30000| Fri Feb 22 11:20:37.531 [conn11] splitChunk accepted at version 1|18||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:20:37.531 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:37.559 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:37.581 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:37.705 [conn12] command admin.$cmd command: { getlasterror: 1, fsync: 1 } 
ntoreturn:1 keyUpdates:0 reslen:79 102ms m30000| Fri Feb 22 11:20:37.735 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:37-512754850cfd6a2130a0abe3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532037735), what: "split", ns: "test.foo", details: { before: { min: { _id: 8.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 8.0 }, max: { _id: 9.0 }, lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 9.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } } m30000| Fri Feb 22 11:20:37.735 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:37.763 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:37.785 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:37.837 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:37.862 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:37.887 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:37.940 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. 
m30000| Fri Feb 22 11:20:37.940 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 8.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 9.0 } ], shardId: "test.foo-_id_8.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:59 reslen:103 612ms m30999| Fri Feb 22 11:20:37.940 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 12 version: 1|20||5127547fd4b973931fc9a229 based on: 1|18||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:20:37.941 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|20||000000000000000000000000min: { _id: 9.0 }max: { _id: MaxKey } m30000| Fri Feb 22 11:20:37.942 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 9.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 10.0 } ], shardId: "test.foo-_id_9.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } m30000| Fri Feb 22 11:20:37.942 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:37.966 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:37.991 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:38.042 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:38.067 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:38.094 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:38.145 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754850cfd6a2130a0abe4 m30000| Fri Feb 22 11:20:38.146 [conn11] splitChunk accepted at version 1|20||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:20:38.146 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:38.176 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:38.199 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:38.281 [conn11] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:38-512754860cfd6a2130a0abe5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532038281), what: "split", ns: "test.foo", details: { before: { min: { _id: 9.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 9.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 10.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } } m30000| Fri Feb 22 11:20:38.281 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:38.310 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:38.333 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:38.384 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:38.409 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:38.436 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:38.486 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. 
m30000| Fri Feb 22 11:20:38.486 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 9.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 10.0 } ], shardId: "test.foo-_id_9.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:58 reslen:103 544ms m30999| Fri Feb 22 11:20:38.487 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 13 version: 1|22||5127547fd4b973931fc9a229 based on: 1|20||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:20:38.488 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|22||000000000000000000000000min: { _id: 10.0 }max: { _id: MaxKey } m30000| Fri Feb 22 11:20:38.488 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 10.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 11.0 } ], shardId: "test.foo-_id_10.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } m30000| Fri Feb 22 11:20:38.488 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:38.515 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:38.541 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:38.623 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:38.648 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:38.676 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:38.725 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754860cfd6a2130a0abe6 m30000| Fri Feb 22 11:20:38.726 [conn11] splitChunk accepted at version 1|22||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:20:38.726 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:38.759 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:38.780 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:38.862 [conn11] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:38-512754860cfd6a2130a0abe7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532038862), what: "split", ns: "test.foo", details: { before: { min: { _id: 10.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 10.0 }, max: { _id: 11.0 }, lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 11.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } } m30000| Fri Feb 22 11:20:38.862 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:38.895 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:38.916 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:38.998 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:39.024 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:39.052 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:39.173 [conn14] command admin.$cmd command: { getlasterror: 1, fsync: 1 } ntoreturn:1 keyUpdates:0 reslen:97 102ms m30000| Fri Feb 22 11:20:39.203 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. 
m30000| Fri Feb 22 11:20:39.203 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 10.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 11.0 } ], shardId: "test.foo-_id_10.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:68 reslen:103 714ms m30999| Fri Feb 22 11:20:39.204 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 14 version: 1|24||5127547fd4b973931fc9a229 based on: 1|22||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:20:39.205 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|24||000000000000000000000000min: { _id: 11.0 }max: { _id: MaxKey } m30000| Fri Feb 22 11:20:39.205 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 11.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 12.0 } ], shardId: "test.foo-_id_11.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } m30000| Fri Feb 22 11:20:39.206 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:39.232 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:39.262 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:39.339 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:39.364 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:39.393 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:39.442 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754870cfd6a2130a0abe8 m30000| Fri Feb 22 11:20:39.443 [conn11] splitChunk accepted at version 1|24||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:20:39.443 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:39.475 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:39.497 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:39.578 [conn11] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:39-512754870cfd6a2130a0abe9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532039578), what: "split", ns: "test.foo", details: { before: { min: { _id: 11.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 11.0 }, max: { _id: 12.0 }, lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 12.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } } m30000| Fri Feb 22 11:20:39.579 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:39.611 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:39.632 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:39.715 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:39.740 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:39.769 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:39.818 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. 
m30000| Fri Feb 22 11:20:39.818 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 11.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 12.0 } ], shardId: "test.foo-_id_11.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:108 reslen:103 612ms m30999| Fri Feb 22 11:20:39.819 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 15 version: 1|26||5127547fd4b973931fc9a229 based on: 1|24||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:20:39.820 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|26||000000000000000000000000min: { _id: 12.0 }max: { _id: MaxKey } m30000| Fri Feb 22 11:20:39.820 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 12.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 13.0 } ], shardId: "test.foo-_id_12.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } m30000| Fri Feb 22 11:20:39.820 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:39.845 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:39.873 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:39.954 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:39.979 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:40.007 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:40.057 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754870cfd6a2130a0abea m30000| Fri Feb 22 11:20:40.058 [conn11] splitChunk accepted at version 1|26||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:20:40.058 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:40.087 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:40.108 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:40.193 [conn11] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:40-512754880cfd6a2130a0abeb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532040193), what: "split", ns: "test.foo", details: { before: { min: { _id: 12.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 12.0 }, max: { _id: 13.0 }, lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 13.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } } m30000| Fri Feb 22 11:20:40.193 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:40.222 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:40.243 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:40.296 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:40.321 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:40.346 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:40.398 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. 
m30000| Fri Feb 22 11:20:40.398 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 12.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 13.0 } ], shardId: "test.foo-_id_12.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:59 reslen:103 578ms m30999| Fri Feb 22 11:20:40.399 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 16 version: 1|28||5127547fd4b973931fc9a229 based on: 1|26||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:20:40.400 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|28||000000000000000000000000min: { _id: 13.0 }max: { _id: MaxKey } m30000| Fri Feb 22 11:20:40.400 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 13.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 14.0 } ], shardId: "test.foo-_id_13.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } m30000| Fri Feb 22 11:20:40.401 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:40.427 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:40.453 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:40.573 [conn14] command admin.$cmd command: { getlasterror: 1, fsync: 1 } ntoreturn:1 keyUpdates:0 reslen:97 102ms m30000| Fri Feb 22 11:20:40.603 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:40.628 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:40.653 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:40.705 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754880cfd6a2130a0abec m30000| Fri Feb 22 11:20:40.706 [conn11] splitChunk accepted at version 1|28||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:20:40.706 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:40.734 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 
22 11:20:40.755 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:40.842 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:40-512754880cfd6a2130a0abed", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532040842), what: "split", ns: "test.foo", details: { before: { min: { _id: 13.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 13.0 }, max: { _id: 14.0 }, lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 14.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } } m30000| Fri Feb 22 11:20:40.842 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:40.871 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:40.893 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:40.945 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:40.970 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:40.995 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:41.047 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. 
m30000| Fri Feb 22 11:20:41.047 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 13.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 14.0 } ], shardId: "test.foo-_id_13.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:59 reslen:103 647ms m30999| Fri Feb 22 11:20:41.048 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 17 version: 1|30||5127547fd4b973931fc9a229 based on: 1|28||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:20:41.049 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|30||000000000000000000000000min: { _id: 14.0 }max: { _id: MaxKey } m30000| Fri Feb 22 11:20:41.049 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 14.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 15.0 } ], shardId: "test.foo-_id_14.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } m30000| Fri Feb 22 11:20:41.049 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:41.074 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:41.099 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:41.150 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:41.174 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:41.199 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:41.252 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754890cfd6a2130a0abee m30000| Fri Feb 22 11:20:41.253 [conn11] splitChunk accepted at version 1|30||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:20:41.253 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:41.281 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:41.303 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:41.389 [conn11] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:41-512754890cfd6a2130a0abef", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532041389), what: "split", ns: "test.foo", details: { before: { min: { _id: 14.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 14.0 }, max: { _id: 15.0 }, lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 15.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } } m30000| Fri Feb 22 11:20:41.389 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:41.418 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:41.439 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:41.491 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:41.518 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:41.547 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:41.628 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. 
m30000| Fri Feb 22 11:20:41.628 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 14.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 15.0 } ], shardId: "test.foo-_id_14.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:58 reslen:103 578ms m30999| Fri Feb 22 11:20:41.628 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 18 version: 1|32||5127547fd4b973931fc9a229 based on: 1|30||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:20:41.630 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|32||000000000000000000000000min: { _id: 15.0 }max: { _id: MaxKey } m30000| Fri Feb 22 11:20:41.630 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 15.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 16.0 } ], shardId: "test.foo-_id_15.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } m30000| Fri Feb 22 11:20:41.630 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:41.654 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:41.679 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:41.730 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:41.755 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:41.780 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:41.833 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754890cfd6a2130a0abf0 m30000| Fri Feb 22 11:20:41.833 [conn11] splitChunk accepted at version 1|32||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:20:41.833 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:41.862 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:41.883 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:41.969 [conn11] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:41-512754890cfd6a2130a0abf1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532041969), what: "split", ns: "test.foo", details: { before: { min: { _id: 15.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 15.0 }, max: { _id: 16.0 }, lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 16.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } } m30000| Fri Feb 22 11:20:41.969 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:41.998 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:42.020 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:42.072 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:42.096 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:42.121 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:42.174 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. 
m30000| Fri Feb 22 11:20:42.174 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 15.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 16.0 } ], shardId: "test.foo-_id_15.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:58 reslen:103 544ms m30999| Fri Feb 22 11:20:42.175 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 19 version: 1|34||5127547fd4b973931fc9a229 based on: 1|32||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:20:42.176 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|34||000000000000000000000000min: { _id: 16.0 }max: { _id: MaxKey } m30000| Fri Feb 22 11:20:42.176 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 16.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 17.0 } ], shardId: "test.foo-_id_16.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } m30000| Fri Feb 22 11:20:42.176 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:42.201 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:42.227 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:42.311 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:42.335 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:42.360 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:42.413 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 5127548a0cfd6a2130a0abf2 m30000| Fri Feb 22 11:20:42.414 [conn11] splitChunk accepted at version 1|34||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:20:42.414 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:42.443 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:42.464 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:42.549 [conn11] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:42-5127548a0cfd6a2130a0abf3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532042549), what: "split", ns: "test.foo", details: { before: { min: { _id: 16.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 16.0 }, max: { _id: 17.0 }, lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 17.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } } m30000| Fri Feb 22 11:20:42.549 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:42.582 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:42.605 [conn12] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:20:42.668 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:20:42.668 [Balancer] skipping balancing round because balancing is disabled m30000| Fri Feb 22 11:20:42.687 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:42.714 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:42.744 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:42.823 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. 
m30000| Fri Feb 22 11:20:42.823 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 16.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 17.0 } ], shardId: "test.foo-_id_16.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:74 reslen:103 647ms m30999| Fri Feb 22 11:20:42.824 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 20 version: 1|36||5127547fd4b973931fc9a229 based on: 1|34||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:20:42.825 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|36||000000000000000000000000min: { _id: 17.0 }max: { _id: MaxKey } m30000| Fri Feb 22 11:20:42.825 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 17.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 18.0 } ], shardId: "test.foo-_id_17.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } m30000| Fri Feb 22 11:20:42.826 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:42.850 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:42.879 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:42.960 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:42.985 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:43.013 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:43.096 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 5127548a0cfd6a2130a0abf4 m30000| Fri Feb 22 11:20:43.097 [conn11] splitChunk accepted at version 1|36||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:20:43.097 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:43.130 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:43.152 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:43.233 [conn11] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:43-5127548b0cfd6a2130a0abf5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532043233), what: "split", ns: "test.foo", details: { before: { min: { _id: 17.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 17.0 }, max: { _id: 18.0 }, lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 18.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } } m30000| Fri Feb 22 11:20:43.233 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:43.266 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:43.288 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:43.369 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:43.395 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:43.424 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:43.506 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. 
m30000| Fri Feb 22 11:20:43.506 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 17.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 18.0 } ], shardId: "test.foo-_id_17.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:77 reslen:103 680ms m30999| Fri Feb 22 11:20:43.507 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 21 version: 1|38||5127547fd4b973931fc9a229 based on: 1|36||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:20:43.508 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|38||000000000000000000000000min: { _id: 18.0 }max: { _id: MaxKey } m30000| Fri Feb 22 11:20:43.508 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 18.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 19.0 } ], shardId: "test.foo-_id_18.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } m30000| Fri Feb 22 11:20:43.509 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:43.537 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:43.566 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:43.643 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:43.667 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:43.696 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:43.780 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 5127548b0cfd6a2130a0abf6 m30000| Fri Feb 22 11:20:43.781 [conn11] splitChunk accepted at version 1|38||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:20:43.781 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:43.815 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:43.837 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:43.916 [conn11] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:43-5127548b0cfd6a2130a0abf7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532043916), what: "split", ns: "test.foo", details: { before: { min: { _id: 18.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 18.0 }, max: { _id: 19.0 }, lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 19.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } } m30000| Fri Feb 22 11:20:43.916 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:43.949 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:43.971 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:44.053 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:44.078 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:44.106 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:44.189 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. 
m30000| Fri Feb 22 11:20:44.189 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 18.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 19.0 } ], shardId: "test.foo-_id_18.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:79 reslen:103 680ms
m30999| Fri Feb 22 11:20:44.190 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 22 version: 1|40||5127547fd4b973931fc9a229 based on: 1|38||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:44.191 [conn10] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:44.223 [conn10] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:44.244 [conn10] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:48.669 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:20:48.670 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:20:48.670 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 11:20:48 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51275490d4b973931fc9a22a" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "5127547ed4b973931fc9a228" } }
m30000| Fri Feb 22 11:20:48.670 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:48.699 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:48.724 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:48.785 [conn5] CMD fsync: sync:1 lock:0
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("5127547dd4b973931fc9a225") }
  shards:
	{ "_id" : "shard0000", "host" : "localhost:30000" }
	{ "_id" : "shard0001", "host" : "localhost:30001" }
	{ "_id" : "shard0002", "host" : "localhost:30002" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "test", "partitioned" : true, "primary" : "shard0000" }
		test.foo
			shard key: { "_id" : 1 }
			chunks:
				shard0000	21
			{ "_id" : { "$minKey" : 1 } } -->> { "_id" : 0 } on : shard0000 { "t" : 1000, "i" : 1 }
			{ "_id" : 0 } -->> { "_id" : 1 } on : shard0000 { "t" : 1000, "i" : 3 }
			{ "_id" : 1 } -->> { "_id" : 2 } on : shard0000 { "t" : 1000, "i" : 5 }
			{ "_id" : 2 } -->> { "_id" : 3 } on : shard0000 { "t" : 1000, "i" : 7 }
			{ "_id" : 3 } -->> { "_id" : 4 } on : shard0000 { "t" : 1000, "i" : 9 }
			{ "_id" : 4 } -->> { "_id" : 5 } on : shard0000 { "t" : 1000, "i" : 11 }
			{ "_id" : 5 } -->> { "_id" : 6 } on : shard0000 { "t" : 1000, "i" : 13 }
			{ "_id" : 6 } -->> { "_id" : 7 } on : shard0000 { "t" : 1000, "i" : 15 }
			{ "_id" : 7 } -->> { "_id" : 8 } on : shard0000 { "t" : 1000, "i" : 17 }
			{ "_id" : 8 } -->> { "_id" : 9 } on : shard0000 { "t" : 1000, "i" : 19 }
			{ "_id" : 9 } -->> { "_id" : 10 } on : shard0000 { "t" : 1000, "i" : 21 }
			{ "_id" : 10 } -->> { "_id" : 11 } on : shard0000 { "t" : 1000, "i" : 23 }
			{ "_id" : 11 } -->> { "_id" : 12 } on : shard0000 { "t" : 1000, "i" : 25 }
			{ "_id" : 12 } -->> { "_id" : 13 } on : shard0000 { "t" : 1000, "i" : 27 }
			{ "_id" : 13 } -->> { "_id" : 14 } on : shard0000 { "t" : 1000, "i" : 29 }
			{ "_id" : 14 } -->> { "_id" : 15 } on : shard0000 { "t" : 1000, "i" : 31 }
			{ "_id" : 15 } -->> { "_id" : 16 } on : shard0000 { "t" : 1000, "i" : 33 }
			{ "_id" : 16 } -->> { "_id" : 17 } on : shard0000 { "t" : 1000, "i" : 35 }
			{ "_id" : 17 } -->> { "_id" : 18 } on : shard0000 { "t" : 1000, "i" : 37 }
			{ "_id" : 18 } -->> { "_id" : 19 } on : shard0000 { "t" : 1000, "i" : 39 }
			{ "_id" : 19 } -->> { "_id" : { "$maxKey" : 1 } } on : shard0000 { "t" : 1000, "i" : 40 }
m30001| Fri Feb 22 11:20:48.816 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:48.841 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:48.922 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 51275490d4b973931fc9a22a
m30999| Fri Feb 22 11:20:48.922 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:20:48.922 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:20:48.922 [Balancer] secondaryThrottle: 1
m30000| Fri Feb 22 11:20:48.923 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:48.948 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:48.973 [conn7] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:48.994 [conn7] build index config.tags { _id: 1 }
m30002| Fri Feb 22 11:20:48.994 [conn7] build index config.tags { _id: 1 }
m30001| Fri Feb 22 11:20:48.994 [conn7] build index config.tags { _id: 1 }
m30001| Fri Feb 22 11:20:48.997 [conn7] build index done. scanned 0 total records. 0.002 secs
m30001| Fri Feb 22 11:20:48.997 [conn7] info: creating collection config.tags on add index
m30001| Fri Feb 22 11:20:48.997 [conn7] build index config.tags { ns: 1, min: 1 }
m30002| Fri Feb 22 11:20:48.997 [conn7] build index done. scanned 0 total records. 0.002 secs
m30002| Fri Feb 22 11:20:48.997 [conn7] info: creating collection config.tags on add index
m30002| Fri Feb 22 11:20:48.997 [conn7] build index config.tags { ns: 1, min: 1 }
m30000| Fri Feb 22 11:20:48.997 [conn7] build index done. scanned 0 total records. 0.003 secs
m30000| Fri Feb 22 11:20:48.998 [conn7] info: creating collection config.tags on add index
m30000| Fri Feb 22 11:20:48.998 [conn7] build index config.tags { ns: 1, min: 1 }
m30001| Fri Feb 22 11:20:48.999 [conn7] build index done. scanned 0 total records. 0.001 secs
m30002| Fri Feb 22 11:20:48.999 [conn7] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 11:20:49.000 [conn7] build index done.
scanned 0 total records. 0.002 secs m30000| Fri Feb 22 11:20:49.103 [conn7] command admin.$cmd command: { getlasterror: 1, fsync: 1 } ntoreturn:1 keyUpdates:0 reslen:79 102ms m30999| Fri Feb 22 11:20:49.127 [Balancer] shard0002 has more chunks me:0 best: shard0001:0 m30999| Fri Feb 22 11:20:49.127 [Balancer] collection : test.foo m30999| Fri Feb 22 11:20:49.127 [Balancer] donor : shard0000 chunks on 21 m30999| Fri Feb 22 11:20:49.127 [Balancer] receiver : shard0001 chunks on 0 m30999| Fri Feb 22 11:20:49.127 [Balancer] threshold : 4 m30999| Fri Feb 22 11:20:49.127 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0000" } from: shard0000 to: shard0001 tag [] m30999| Fri Feb 22 11:20:49.127 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 1|1||000000000000000000000000min: { _id: MinKey }max: { _id: 0.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 11:20:49.127 [conn11] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 11:20:49.127 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: MinKey }, max: { _id: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_MinKey", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 11:20:49.128 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:49.154 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:49.180 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:49.229 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:49.253 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:49.278 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 
11:20:49.331 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754910cfd6a2130a0abf8 m30000| Fri Feb 22 11:20:49.332 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:49-512754910cfd6a2130a0abf9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532049332), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 0.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 11:20:49.332 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:49.360 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:49.381 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:49.435 [conn11] moveChunk request accepted at version 1|40||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:20:49.435 [conn11] moveChunk number of documents: 0 m30001| Fri Feb 22 11:20:49.435 [migrateThread] starting receiving-end of migration of chunk { _id: MinKey } -> { _id: 0.0 } for collection test.foo from localhost:30000 (0 slaves detected) m30000| Fri Feb 22 11:20:49.436 [initandlisten] connection accepted from 127.0.0.1:62544 #15 (14 connections now open) m30001| Fri Feb 22 11:20:49.437 [FileAllocator] allocating new datafile /data/db/balance_tags11/test.ns, filling with zeroes... m30001| Fri Feb 22 11:20:49.437 [FileAllocator] done allocating datafile /data/db/balance_tags11/test.ns, size: 16MB, took 0 secs m30001| Fri Feb 22 11:20:49.437 [FileAllocator] allocating new datafile /data/db/balance_tags11/test.0, filling with zeroes... m30001| Fri Feb 22 11:20:49.437 [FileAllocator] done allocating datafile /data/db/balance_tags11/test.0, size: 64MB, took 0 secs m30001| Fri Feb 22 11:20:49.437 [FileAllocator] allocating new datafile /data/db/balance_tags11/test.1, filling with zeroes... 
m30001| Fri Feb 22 11:20:49.437 [FileAllocator] done allocating datafile /data/db/balance_tags11/test.1, size: 128MB, took 0 secs m30001| Fri Feb 22 11:20:49.440 [migrateThread] build index test.foo { _id: 1 } m30001| Fri Feb 22 11:20:49.441 [migrateThread] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 11:20:49.441 [migrateThread] info: creating collection test.foo on add index m30001| Fri Feb 22 11:20:49.441 [migrateThread] Waiting for replication to catch up before entering critical section m30001| Fri Feb 22 11:20:49.441 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 0.0 } m30001| Fri Feb 22 11:20:49.442 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: 0.0 } m30000| Fri Feb 22 11:20:49.445 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 11:20:49.446 [conn11] moveChunk setting version to: 2|0||5127547fd4b973931fc9a229 m30001| Fri Feb 22 11:20:49.446 [initandlisten] connection accepted from 127.0.0.1:38369 #15 (14 connections now open) m30001| Fri Feb 22 11:20:49.446 [conn15] Waiting for commit to finish m30001| Fri Feb 22 11:20:49.452 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 0.0 } m30001| Fri Feb 22 11:20:49.452 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: 0.0 } m30001| Fri Feb 22 11:20:49.452 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:49-5127549178e37a7f0861eba1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532049452), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: MinKey }, max: 
{ _id: 0.0 }, step1 of 5: 5, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30001| Fri Feb 22 11:20:49.452 [migrateThread] SyncClusterConnection connecting to [localhost:30000] m30000| Fri Feb 22 11:20:49.453 [initandlisten] connection accepted from 127.0.0.1:43618 #16 (15 connections now open) m30001| Fri Feb 22 11:20:49.453 [migrateThread] SyncClusterConnection connecting to [localhost:30001] m30001| Fri Feb 22 11:20:49.453 [initandlisten] connection accepted from 127.0.0.1:62454 #16 (15 connections now open) m30001| Fri Feb 22 11:20:49.453 [migrateThread] SyncClusterConnection connecting to [localhost:30002] m30002| Fri Feb 22 11:20:49.453 [initandlisten] connection accepted from 127.0.0.1:45677 #15 (14 connections now open) m30000| Fri Feb 22 11:20:49.453 [conn16] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:49.456 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 11:20:49.456 [conn11] moveChunk updating self version to: 2|1||5127547fd4b973931fc9a229 through { _id: 0.0 } -> { _id: 1.0 } for collection 'test.foo' m30000| Fri Feb 22 11:20:49.456 [conn11] SyncClusterConnection connecting to [localhost:30000] m30000| Fri Feb 22 11:20:49.462 [conn11] SyncClusterConnection connecting to [localhost:30001] m30000| Fri Feb 22 11:20:49.462 [initandlisten] connection accepted from 127.0.0.1:45549 #17 (16 connections now open) m30000| Fri Feb 22 11:20:49.464 [conn11] SyncClusterConnection connecting to [localhost:30002] m30001| Fri Feb 22 11:20:49.464 [initandlisten] connection accepted from 127.0.0.1:40060 #17 (16 connections now open) m30002| Fri Feb 22 11:20:49.464 [initandlisten] connection accepted from 127.0.0.1:56139 #16 (15 connections now open) m30000| Fri Feb 22 11:20:49.464 [conn17] CMD fsync: sync:1 
lock:0 m30001| Fri Feb 22 11:20:49.480 [conn16] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:49.490 [conn17] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:49.519 [conn15] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:49.537 [conn16] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:49.571 [conn16] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:49.599 [conn16] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:49.605 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:49-512754910cfd6a2130a0abfa", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532049605), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 0.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 11:20:49.605 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:49.631 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:49.634 [conn15] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:49.661 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:49.742 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 11:20:49.742 [conn11] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 11:20:49.742 [conn11] forking for cleanup of chunk data m30000| Fri Feb 22 11:20:49.742 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 11:20:49.742 [conn11] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 11:20:49.742 [cleanupOldData-512754910cfd6a2130a0abfb] (start) waiting to cleanup test.foo from { _id: MinKey } -> { _id: 0.0 }, # cursors remaining: 0 m30000| Fri Feb 22 11:20:49.742 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:49.762 [cleanupOldData-512754910cfd6a2130a0abfb] waiting to remove documents for test.foo from { _id: MinKey } -> { _id: 0.0 } m30000| Fri Feb 22 11:20:49.762 
[cleanupOldData-512754910cfd6a2130a0abfb] moveChunk starting delete for: test.foo from { _id: MinKey } -> { _id: 0.0 } m30000| Fri Feb 22 11:20:49.762 [cleanupOldData-512754910cfd6a2130a0abfb] moveChunk deleted 0 documents for test.foo from { _id: MinKey } -> { _id: 0.0 } m30001| Fri Feb 22 11:20:49.767 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:49.802 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:49.878 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. m30000| Fri Feb 22 11:20:49.878 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:49-512754910cfd6a2130a0abfc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532049878), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 0.0 }, step1 of 6: 0, step2 of 6: 307, step3 of 6: 0, step4 of 6: 10, step5 of 6: 296, step6 of 6: 0 } } m30000| Fri Feb 22 11:20:49.879 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:49.908 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:49.938 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:50.015 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: MinKey }, max: { _id: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_MinKey", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:18 r:45 w:20 reslen:37 887ms m30999| Fri Feb 22 11:20:50.015 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:20:50.016 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 23 version: 2|1||5127547fd4b973931fc9a229 based on: 1|40||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:20:50.016 [Balancer] *** end of balancing round m30000| Fri 
Feb 22 11:20:50.016 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:50.045 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:50.080 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:50.152 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
m30999| Fri Feb 22 11:20:55.152 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:20:55.153 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:20:55.154 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 11:20:55 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51275497d4b973931fc9a22b" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51275490d4b973931fc9a22a" } }
m30000| Fri Feb 22 11:20:55.154 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:55.189 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:55.230 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:55.302 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:55.335 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:55.375 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:55.439 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 51275497d4b973931fc9a22b
m30999| Fri Feb 22 11:20:55.439 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:20:55.439 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:20:55.439 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22
11:20:55.440 [Balancer] collection : test.foo m30999| Fri Feb 22 11:20:55.440 [Balancer] donor : shard0000 chunks on 20 m30999| Fri Feb 22 11:20:55.440 [Balancer] receiver : shard0002 chunks on 0 m30999| Fri Feb 22 11:20:55.440 [Balancer] threshold : 2 m30999| Fri Feb 22 11:20:55.441 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_0.0", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 0.0 }, max: { _id: 1.0 }, shard: "shard0000" } from: shard0000 to: shard0002 tag [] m30999| Fri Feb 22 11:20:55.441 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 2|1||000000000000000000000000min: { _id: 0.0 }max: { _id: 1.0 }) shard0000:localhost:30000 -> shard0002:localhost:30002 m30000| Fri Feb 22 11:20:55.441 [conn11] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 11:20:55.441 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 0.0 }, max: { _id: 1.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 11:20:55.441 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:55.466 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:55.506 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:55.575 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:55.600 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:55.640 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:55.712 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754970cfd6a2130a0abfd m30000| Fri Feb 22 11:20:55.712 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:55-512754970cfd6a2130a0abfe", server: 
"bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532055712), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, from: "shard0000", to: "shard0002" } } m30000| Fri Feb 22 11:20:55.712 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:55.745 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:55.775 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:55.849 [conn11] moveChunk request accepted at version 2|1||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:20:55.849 [conn11] moveChunk number of documents: 1 m30002| Fri Feb 22 11:20:55.850 [migrateThread] starting receiving-end of migration of chunk { _id: 0.0 } -> { _id: 1.0 } for collection test.foo from localhost:30000 (0 slaves detected) m30000| Fri Feb 22 11:20:55.850 [initandlisten] connection accepted from 127.0.0.1:35927 #18 (17 connections now open) m30002| Fri Feb 22 11:20:55.851 [FileAllocator] allocating new datafile /data/db/balance_tags12/test.ns, filling with zeroes... m30002| Fri Feb 22 11:20:55.851 [FileAllocator] done allocating datafile /data/db/balance_tags12/test.ns, size: 16MB, took 0 secs m30002| Fri Feb 22 11:20:55.851 [FileAllocator] allocating new datafile /data/db/balance_tags12/test.0, filling with zeroes... m30002| Fri Feb 22 11:20:55.852 [FileAllocator] done allocating datafile /data/db/balance_tags12/test.0, size: 64MB, took 0 secs m30002| Fri Feb 22 11:20:55.852 [FileAllocator] allocating new datafile /data/db/balance_tags12/test.1, filling with zeroes... m30002| Fri Feb 22 11:20:55.852 [FileAllocator] done allocating datafile /data/db/balance_tags12/test.1, size: 128MB, took 0 secs m30002| Fri Feb 22 11:20:55.856 [migrateThread] build index test.foo { _id: 1 } m30002| Fri Feb 22 11:20:55.857 [migrateThread] build index done. scanned 0 total records. 
0.001 secs m30002| Fri Feb 22 11:20:55.857 [migrateThread] info: creating collection test.foo on add index m30002| Fri Feb 22 11:20:55.858 [migrateThread] Waiting for replication to catch up before entering critical section m30002| Fri Feb 22 11:20:55.858 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.0 } -> { _id: 1.0 } m30002| Fri Feb 22 11:20:55.858 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.0 } -> { _id: 1.0 } m30000| Fri Feb 22 11:20:55.860 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.0 }, max: { _id: 1.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 11:20:55.860 [conn11] moveChunk setting version to: 3|0||5127547fd4b973931fc9a229 m30002| Fri Feb 22 11:20:55.860 [initandlisten] connection accepted from 127.0.0.1:33670 #17 (16 connections now open) m30002| Fri Feb 22 11:20:55.860 [conn17] Waiting for commit to finish m30002| Fri Feb 22 11:20:55.868 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.0 } -> { _id: 1.0 } m30002| Fri Feb 22 11:20:55.868 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.0 } -> { _id: 1.0 } m30002| Fri Feb 22 11:20:55.868 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:55-51275497aaaba61d9eb250fb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532055868), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, step1 of 5: 7, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30002| Fri Feb 22 11:20:55.868 [migrateThread] SyncClusterConnection connecting to [localhost:30000] m30000| Fri Feb 22 11:20:55.869 [initandlisten] connection accepted from 127.0.0.1:61265 #19 (18 connections now open) 
m30002| Fri Feb 22 11:20:55.869 [migrateThread] SyncClusterConnection connecting to [localhost:30001] m30002| Fri Feb 22 11:20:55.869 [migrateThread] SyncClusterConnection connecting to [localhost:30002] m30001| Fri Feb 22 11:20:55.869 [initandlisten] connection accepted from 127.0.0.1:58911 #18 (17 connections now open) m30002| Fri Feb 22 11:20:55.869 [initandlisten] connection accepted from 127.0.0.1:39075 #18 (17 connections now open) m30000| Fri Feb 22 11:20:55.869 [conn19] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:55.871 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.0 }, max: { _id: 1.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 11:20:55.871 [conn11] moveChunk updating self version to: 3|1||5127547fd4b973931fc9a229 through { _id: 1.0 } -> { _id: 2.0 } for collection 'test.foo' m30000| Fri Feb 22 11:20:55.871 [conn17] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:55.903 [conn17] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:55.905 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:55.933 [conn16] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:55.941 [conn18] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:56.019 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:56-512754980cfd6a2130a0abff", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532056019), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, from: "shard0000", to: "shard0002" } } m30000| Fri Feb 22 11:20:56.019 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:56.019 [conn19] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:56.058 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:56.062 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:56.092 
[conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:56.097 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:56.190 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 11:20:56.190 [conn11] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 11:20:56.190 [conn11] forking for cleanup of chunk data m30000| Fri Feb 22 11:20:56.190 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 11:20:56.190 [conn11] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 11:20:56.190 [cleanupOldData-512754980cfd6a2130a0ac00] (start) waiting to cleanup test.foo from { _id: 0.0 } -> { _id: 1.0 }, # cursors remaining: 0 m30000| Fri Feb 22 11:20:56.190 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:56.210 [cleanupOldData-512754980cfd6a2130a0ac00] waiting to remove documents for test.foo from { _id: 0.0 } -> { _id: 1.0 } m30000| Fri Feb 22 11:20:56.210 [cleanupOldData-512754980cfd6a2130a0ac00] moveChunk starting delete for: test.foo from { _id: 0.0 } -> { _id: 1.0 } m30000| Fri Feb 22 11:20:56.211 [cleanupOldData-512754980cfd6a2130a0ac00] moveChunk deleted 1 documents for test.foo from { _id: 0.0 } -> { _id: 1.0 } m30001| Fri Feb 22 11:20:56.215 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:56.255 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:56.326 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. 
m30000| Fri Feb 22 11:20:56.326 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:56-512754980cfd6a2130a0ac01", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532056326), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, step1 of 6: 0, step2 of 6: 408, step3 of 6: 0, step4 of 6: 10, step5 of 6: 329, step6 of 6: 0 } } m30000| Fri Feb 22 11:20:56.327 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:56.361 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:56.392 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:56.463 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 0.0 }, max: { _id: 1.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:31 r:62 w:20 reslen:37 1022ms m30999| Fri Feb 22 11:20:56.463 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:20:56.465 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 24 version: 3|1||5127547fd4b973931fc9a229 based on: 2|1||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:20:56.465 [Balancer] *** end of balancing round m30000| Fri Feb 22 11:20:56.465 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:56.498 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:56.536 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:20:56.600 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked. 
m30000| Fri Feb 22 11:20:58.680 [conn7] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:58.707 [conn7] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:58.751 [conn7] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:58.847 [conn7] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:58.872 [conn7] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:58.912 [conn7] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:20:58.983 [LockPinger] cluster localhost:30000,localhost:30001,localhost:30002 pinged successfully at Fri Feb 22 11:20:58 2013 by distributed lock pinger 'localhost:30000,localhost:30001,localhost:30002/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838', sleeping for 30000ms m30999| Fri Feb 22 11:21:01.600 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:21:01.601 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 ) m30999| Fri Feb 22 11:21:01.601 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:21:01 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127549dd4b973931fc9a22c" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51275497d4b973931fc9a22b" } } m30000| Fri Feb 22 11:21:01.601 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:01.633 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:01.671 [conn5] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:01.740 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:01.772 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:01.810 [conn5] CMD fsync: sync:1 lock:0 
m30999| Fri Feb 22 11:21:01.877 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 5127549dd4b973931fc9a22c m30999| Fri Feb 22 11:21:01.877 [Balancer] *** start balancing round m30999| Fri Feb 22 11:21:01.877 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:21:01.877 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:21:01.878 [Balancer] shard0002 has more chunks me:1 best: shard0001:1 m30999| Fri Feb 22 11:21:01.878 [Balancer] collection : test.foo m30999| Fri Feb 22 11:21:01.878 [Balancer] donor : shard0000 chunks on 19 m30999| Fri Feb 22 11:21:01.878 [Balancer] receiver : shard0001 chunks on 1 m30999| Fri Feb 22 11:21:01.878 [Balancer] threshold : 2 m30999| Fri Feb 22 11:21:01.878 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_1.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0000" } from: shard0000 to: shard0001 tag [] m30999| Fri Feb 22 11:21:01.878 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 3|1||000000000000000000000000min: { _id: 1.0 }max: { _id: 2.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 11:21:01.879 [conn11] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 11:21:01.879 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: 2.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 11:21:01.879 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:01.899 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:01.938 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:02.013 [conn14] CMD fsync: 
sync:1 lock:0 m30001| Fri Feb 22 11:21:02.033 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:02.074 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:02.150 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 5127549d0cfd6a2130a0ac02 m30000| Fri Feb 22 11:21:02.150 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:02-5127549e0cfd6a2130a0ac03", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532062150), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 11:21:02.150 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:02.184 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:02.216 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:02.287 [conn11] moveChunk request accepted at version 3|1||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:21:02.287 [conn11] moveChunk number of documents: 1 m30001| Fri Feb 22 11:21:02.288 [migrateThread] starting receiving-end of migration of chunk { _id: 1.0 } -> { _id: 2.0 } for collection test.foo from localhost:30000 (0 slaves detected) m30001| Fri Feb 22 11:21:02.289 [migrateThread] Waiting for replication to catch up before entering critical section m30001| Fri Feb 22 11:21:02.289 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1.0 } -> { _id: 2.0 } m30001| Fri Feb 22 11:21:02.289 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1.0 } -> { _id: 2.0 } m30000| Fri Feb 22 11:21:02.298 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: 2.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 11:21:02.298 [conn11] 
moveChunk setting version to: 4|0||5127547fd4b973931fc9a229 m30001| Fri Feb 22 11:21:02.298 [conn15] Waiting for commit to finish m30001| Fri Feb 22 11:21:02.300 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1.0 } -> { _id: 2.0 } m30001| Fri Feb 22 11:21:02.300 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1.0 } -> { _id: 2.0 } m30001| Fri Feb 22 11:21:02.300 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:02-5127549e78e37a7f0861eba2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532062300), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30000| Fri Feb 22 11:21:02.300 [conn16] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:02.305 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:02.308 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: 2.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 11:21:02.308 [conn11] moveChunk updating self version to: 4|1||5127547fd4b973931fc9a229 through { _id: 2.0 } -> { _id: 3.0 } for collection 'test.foo' m30000| Fri Feb 22 11:21:02.308 [conn17] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:02.326 [conn16] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:02.342 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:02.346 [conn17] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:02.369 [conn15] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:02.393 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:02.398 [conn16] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:02.491 [conn11] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:02-5127549e0cfd6a2130a0ac04", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532062491), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 11:21:02.491 [conn11] SyncClusterConnection connecting to [localhost:30000] m30000| Fri Feb 22 11:21:02.491 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:02.492 [initandlisten] connection accepted from 127.0.0.1:51262 #20 (19 connections now open) m30000| Fri Feb 22 11:21:02.492 [conn11] SyncClusterConnection connecting to [localhost:30001] m30000| Fri Feb 22 11:21:02.503 [conn11] SyncClusterConnection connecting to [localhost:30002] m30001| Fri Feb 22 11:21:02.503 [initandlisten] connection accepted from 127.0.0.1:41492 #19 (18 connections now open) m30002| Fri Feb 22 11:21:02.503 [initandlisten] connection accepted from 127.0.0.1:64097 #19 (18 connections now open) m30000| Fri Feb 22 11:21:02.503 [conn20] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:02.525 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:02.536 [conn19] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:02.555 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:02.564 [conn19] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:02.628 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 11:21:02.628 [conn11] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 11:21:02.628 [conn11] forking for cleanup of chunk data m30000| Fri Feb 22 11:21:02.628 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 11:21:02.628 [conn11] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 11:21:02.628 [cleanupOldData-5127549e0cfd6a2130a0ac05] (start) waiting to cleanup test.foo from { _id: 1.0 } -> { _id: 2.0 }, # cursors 
remaining: 0 m30000| Fri Feb 22 11:21:02.628 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:02.648 [cleanupOldData-5127549e0cfd6a2130a0ac05] waiting to remove documents for test.foo from { _id: 1.0 } -> { _id: 2.0 } m30000| Fri Feb 22 11:21:02.648 [cleanupOldData-5127549e0cfd6a2130a0ac05] moveChunk starting delete for: test.foo from { _id: 1.0 } -> { _id: 2.0 } m30000| Fri Feb 22 11:21:02.648 [cleanupOldData-5127549e0cfd6a2130a0ac05] moveChunk deleted 1 documents for test.foo from { _id: 1.0 } -> { _id: 2.0 } m30001| Fri Feb 22 11:21:02.653 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:02.694 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:02.764 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. m30000| Fri Feb 22 11:21:02.764 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:02-5127549e0cfd6a2130a0ac06", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532062764), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, step1 of 6: 0, step2 of 6: 408, step3 of 6: 0, step4 of 6: 10, step5 of 6: 329, step6 of 6: 0 } } m30000| Fri Feb 22 11:21:02.771 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:02.804 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:02.834 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:02.901 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: 2.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:26 r:121 w:18 reslen:37 1022ms m30999| Fri Feb 22 11:21:02.901 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:21:02.902 
[Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 25 version: 4|1||5127547fd4b973931fc9a229 based on: 3|1||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:21:02.902 [Balancer] *** end of balancing round m30000| Fri Feb 22 11:21:02.903 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:02.936 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:02.974 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:21:03.037 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked. m30999| Fri Feb 22 11:21:08.038 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:21:08.039 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 ) m30999| Fri Feb 22 11:21:08.039 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:21:08 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512754a4d4b973931fc9a22d" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127549dd4b973931fc9a22c" } } m30000| Fri Feb 22 11:21:08.039 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:08.072 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:08.107 [conn5] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:08.153 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:08.182 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:08.216 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:21:08.256 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts 
: 512754a4d4b973931fc9a22d m30999| Fri Feb 22 11:21:08.256 [Balancer] *** start balancing round m30999| Fri Feb 22 11:21:08.256 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:21:08.256 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:21:08.257 [Balancer] collection : test.foo m30999| Fri Feb 22 11:21:08.257 [Balancer] donor : shard0000 chunks on 18 m30999| Fri Feb 22 11:21:08.257 [Balancer] receiver : shard0002 chunks on 1 m30999| Fri Feb 22 11:21:08.257 [Balancer] threshold : 2 m30999| Fri Feb 22 11:21:08.257 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_2.0", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 2.0 }, max: { _id: 3.0 }, shard: "shard0000" } from: shard0000 to: shard0002 tag [] m30999| Fri Feb 22 11:21:08.257 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 4|1||000000000000000000000000min: { _id: 2.0 }max: { _id: 3.0 }) shard0000:localhost:30000 -> shard0002:localhost:30002 m30000| Fri Feb 22 11:21:08.258 [conn11] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 11:21:08.258 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 2.0 }, max: { _id: 3.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_2.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 11:21:08.258 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:08.283 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:08.317 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:08.358 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:08.383 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:08.417 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:08.460 [conn11] distributed lock 
'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754a40cfd6a2130a0ac07 m30000| Fri Feb 22 11:21:08.460 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:08-512754a40cfd6a2130a0ac08", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532068460), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, from: "shard0000", to: "shard0002" } } m30000| Fri Feb 22 11:21:08.461 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:08.489 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:08.518 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:08.563 [conn11] moveChunk request accepted at version 4|1||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:21:08.564 [conn11] moveChunk number of documents: 1 m30002| Fri Feb 22 11:21:08.564 [migrateThread] starting receiving-end of migration of chunk { _id: 2.0 } -> { _id: 3.0 } for collection test.foo from localhost:30000 (0 slaves detected) m30002| Fri Feb 22 11:21:08.565 [migrateThread] Waiting for replication to catch up before entering critical section m30002| Fri Feb 22 11:21:08.565 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 2.0 } -> { _id: 3.0 } m30002| Fri Feb 22 11:21:08.565 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 2.0 } -> { _id: 3.0 } m30000| Fri Feb 22 11:21:08.574 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 2.0 }, max: { _id: 3.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 11:21:08.574 [conn11] moveChunk setting version to: 5|0||5127547fd4b973931fc9a229 m30002| Fri Feb 22 11:21:08.574 [conn17] Waiting for commit to finish m30002| Fri Feb 22 11:21:08.575 [migrateThread] migrate commit succeeded flushing 
to secondaries for 'test.foo' { _id: 2.0 } -> { _id: 3.0 } m30002| Fri Feb 22 11:21:08.575 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 2.0 } -> { _id: 3.0 } m30002| Fri Feb 22 11:21:08.576 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:08-512754a4aaaba61d9eb250fc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532068575), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30000| Fri Feb 22 11:21:08.576 [conn19] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:08.584 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 2.0 }, max: { _id: 3.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 11:21:08.584 [conn11] moveChunk updating self version to: 5|1||5127547fd4b973931fc9a229 through { _id: 3.0 } -> { _id: 4.0 } for collection 'test.foo' m30000| Fri Feb 22 11:21:08.585 [conn17] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:08.601 [conn18] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:08.605 [conn17] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:08.636 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:08.640 [conn16] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:08.699 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:08-512754a40cfd6a2130a0ac09", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532068699), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, from: "shard0000", to: "shard0002" } } m30000| Fri Feb 22 11:21:08.699 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:08.729 [conn12] CMD fsync: sync:1 lock:0 m30002| 
Fri Feb 22 11:21:08.759 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:08.802 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 11:21:08.802 [conn11] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 11:21:08.802 [conn11] forking for cleanup of chunk data m30000| Fri Feb 22 11:21:08.802 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 11:21:08.802 [conn11] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 11:21:08.802 [cleanupOldData-512754a40cfd6a2130a0ac0a] (start) waiting to cleanup test.foo from { _id: 2.0 } -> { _id: 3.0 }, # cursors remaining: 0 m30000| Fri Feb 22 11:21:08.802 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:08.822 [cleanupOldData-512754a40cfd6a2130a0ac0a] waiting to remove documents for test.foo from { _id: 2.0 } -> { _id: 3.0 } m30000| Fri Feb 22 11:21:08.822 [cleanupOldData-512754a40cfd6a2130a0ac0a] moveChunk starting delete for: test.foo from { _id: 2.0 } -> { _id: 3.0 } m30000| Fri Feb 22 11:21:08.822 [cleanupOldData-512754a40cfd6a2130a0ac0a] moveChunk deleted 1 documents for test.foo from { _id: 2.0 } -> { _id: 3.0 } m30001| Fri Feb 22 11:21:08.827 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:08.863 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:08.904 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. 
m30000| Fri Feb 22 11:21:08.904 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:08-512754a40cfd6a2130a0ac0b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532068904), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, step1 of 6: 0, step2 of 6: 305, step3 of 6: 0, step4 of 6: 10, step5 of 6: 227, step6 of 6: 0 } } m30000| Fri Feb 22 11:21:08.905 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:08.935 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:08.967 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:09.007 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 2.0 }, max: { _id: 3.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_2.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:32 r:92 w:21 reslen:37 749ms m30999| Fri Feb 22 11:21:09.007 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:21:09.009 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 26 version: 5|1||5127547fd4b973931fc9a229 based on: 4|1||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:21:09.009 [Balancer] *** end of balancing round m30000| Fri Feb 22 11:21:09.009 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:09.039 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:09.074 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:21:09.110 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked. 
m30999| Fri Feb 22 11:21:14.110 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:21:14.111 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 ) m30999| Fri Feb 22 11:21:14.111 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:21:14 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512754aad4b973931fc9a22e" } } m30000| Fri Feb 22 11:21:14.111 [conn5] CMD fsync: sync:1 lock:0 m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512754a4d4b973931fc9a22d" } } m30001| Fri Feb 22 11:21:14.140 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:14.175 [conn5] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:14.224 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:14.253 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:14.289 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:21:14.361 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 512754aad4b973931fc9a22e m30999| Fri Feb 22 11:21:14.361 [Balancer] *** start balancing round m30999| Fri Feb 22 11:21:14.361 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:21:14.361 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:21:14.362 [Balancer] shard0002 has more chunks me:2 best: shard0001:2 m30999| Fri Feb 22 11:21:14.362 [Balancer] collection : test.foo m30999| Fri Feb 22 11:21:14.362 [Balancer] donor : shard0000 chunks on 17 m30999| Fri Feb 22 11:21:14.362 [Balancer] receiver : shard0001 chunks on 2 m30999| Fri Feb 22 11:21:14.362 
[Balancer] threshold : 2 m30999| Fri Feb 22 11:21:14.362 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_3.0", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 3.0 }, max: { _id: 4.0 }, shard: "shard0000" } from: shard0000 to: shard0001 tag [] m30999| Fri Feb 22 11:21:14.363 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 5|1||000000000000000000000000min: { _id: 3.0 }max: { _id: 4.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 11:21:14.363 [conn11] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 11:21:14.363 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 3.0 }, max: { _id: 4.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_3.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 11:21:14.363 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:14.388 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:14.423 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:14.551 [conn14] command admin.$cmd command: { getlasterror: 1, fsync: 1 } ntoreturn:1 keyUpdates:0 reslen:97 102ms m30000| Fri Feb 22 11:21:14.565 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:14.593 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:14.635 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:14.702 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754aa0cfd6a2130a0ac0c m30000| Fri Feb 22 11:21:14.702 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:14-512754aa0cfd6a2130a0ac0d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new 
Date(1361532074702), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 3.0 }, max: { _id: 4.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 11:21:14.702 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:14.736 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:14.766 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:14.839 [conn11] moveChunk request accepted at version 5|1||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:21:14.839 [conn11] moveChunk number of documents: 1 m30001| Fri Feb 22 11:21:14.840 [migrateThread] starting receiving-end of migration of chunk { _id: 3.0 } -> { _id: 4.0 } for collection test.foo from localhost:30000 (0 slaves detected) m30001| Fri Feb 22 11:21:14.841 [migrateThread] Waiting for replication to catch up before entering critical section m30001| Fri Feb 22 11:21:14.841 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 3.0 } -> { _id: 4.0 } m30001| Fri Feb 22 11:21:14.841 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 3.0 } -> { _id: 4.0 } m30000| Fri Feb 22 11:21:14.850 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 3.0 }, max: { _id: 4.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 11:21:14.850 [conn11] moveChunk setting version to: 6|0||5127547fd4b973931fc9a229 m30001| Fri Feb 22 11:21:14.850 [conn15] Waiting for commit to finish m30001| Fri Feb 22 11:21:14.852 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 3.0 } -> { _id: 4.0 } m30001| Fri Feb 22 11:21:14.852 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 3.0 } -> { _id: 4.0 } m30001| Fri Feb 22 11:21:14.852 [migrateThread] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:14-512754aa78e37a7f0861eba3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532074852), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 3.0 }, max: { _id: 4.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30000| Fri Feb 22 11:21:14.852 [conn16] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:14.860 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 3.0 }, max: { _id: 4.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 11:21:14.860 [conn11] moveChunk updating self version to: 6|1||5127547fd4b973931fc9a229 through { _id: 4.0 } -> { _id: 5.0 } for collection 'test.foo' m30000| Fri Feb 22 11:21:14.860 [conn17] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:14.877 [conn16] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:14.882 [conn17] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:14.925 [conn16] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:14.925 [conn15] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:15.009 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:15-512754ab0cfd6a2130a0ac0e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532075009), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 3.0 }, max: { _id: 4.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 11:21:15.009 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:15.042 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:15.074 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:15.145 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 11:21:15.145 [conn11] MigrateFromStatus::done Global lock acquired 
m30000| Fri Feb 22 11:21:15.145 [conn11] forking for cleanup of chunk data m30000| Fri Feb 22 11:21:15.146 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 11:21:15.146 [conn11] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 11:21:15.146 [cleanupOldData-512754ab0cfd6a2130a0ac0f] (start) waiting to cleanup test.foo from { _id: 3.0 } -> { _id: 4.0 }, # cursors remaining: 0 m30000| Fri Feb 22 11:21:15.146 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:15.166 [cleanupOldData-512754ab0cfd6a2130a0ac0f] waiting to remove documents for test.foo from { _id: 3.0 } -> { _id: 4.0 } m30000| Fri Feb 22 11:21:15.166 [cleanupOldData-512754ab0cfd6a2130a0ac0f] moveChunk starting delete for: test.foo from { _id: 3.0 } -> { _id: 4.0 } m30000| Fri Feb 22 11:21:15.166 [cleanupOldData-512754ab0cfd6a2130a0ac0f] moveChunk deleted 1 documents for test.foo from { _id: 3.0 } -> { _id: 4.0 } m30001| Fri Feb 22 11:21:15.171 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:15.211 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:15.316 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. 
m30000| Fri Feb 22 11:21:15.316 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:15-512754ab0cfd6a2130a0ac10", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532075316), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 3.0 }, max: { _id: 4.0 }, step1 of 6: 0, step2 of 6: 476, step3 of 6: 0, step4 of 6: 10, step5 of 6: 295, step6 of 6: 0 } } m30000| Fri Feb 22 11:21:15.316 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:15.350 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:15.380 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:15.487 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 3.0 }, max: { _id: 4.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_3.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:27 r:121 w:19 reslen:37 1124ms m30999| Fri Feb 22 11:21:15.487 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:21:15.488 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 27 version: 6|1||5127547fd4b973931fc9a229 based on: 5|1||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:21:15.488 [Balancer] *** end of balancing round m30000| Fri Feb 22 11:21:15.488 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:15.522 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:15.562 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:21:15.657 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked. 
m30999| Fri Feb 22 11:21:20.658 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:21:20.658 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 ) m30999| Fri Feb 22 11:21:20.659 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:21:20 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512754b0d4b973931fc9a22f" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512754aad4b973931fc9a22e" } } m30000| Fri Feb 22 11:21:20.659 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:20.688 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:20.722 [conn5] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:20.806 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:20.834 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:20.868 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:21:20.976 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 512754b0d4b973931fc9a22f m30999| Fri Feb 22 11:21:20.976 [Balancer] *** start balancing round m30999| Fri Feb 22 11:21:20.976 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:21:20.976 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:21:20.978 [Balancer] collection : test.foo m30999| Fri Feb 22 11:21:20.978 [Balancer] donor : shard0000 chunks on 16 m30999| Fri Feb 22 11:21:20.978 [Balancer] receiver : shard0002 chunks on 2 m30999| Fri Feb 22 11:21:20.978 [Balancer] threshold : 2 m30999| Fri Feb 22 11:21:20.978 [Balancer] ns: test.foo going to 
move { _id: "test.foo-_id_4.0", lastmod: Timestamp 6000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 4.0 }, max: { _id: 5.0 }, shard: "shard0000" } from: shard0000 to: shard0002 tag [] m30999| Fri Feb 22 11:21:20.978 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 6|1||000000000000000000000000min: { _id: 4.0 }max: { _id: 5.0 }) shard0000:localhost:30000 -> shard0002:localhost:30002 m30000| Fri Feb 22 11:21:20.978 [conn11] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 11:21:20.978 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 4.0 }, max: { _id: 5.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_4.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 11:21:20.978 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:21.003 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:21.039 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:21.147 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:21.172 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:21.207 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:21.317 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754b00cfd6a2130a0ac11 m30000| Fri Feb 22 11:21:21.317 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:21-512754b10cfd6a2130a0ac12", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532081317), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 4.0 }, max: { _id: 5.0 }, from: "shard0000", to: "shard0002" } } m30000| Fri Feb 22 11:21:21.317 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 
11:21:21.346 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:21.376 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:21.489 [conn11] moveChunk request accepted at version 6|1||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:21:21.489 [conn11] moveChunk number of documents: 1 m30002| Fri Feb 22 11:21:21.489 [migrateThread] starting receiving-end of migration of chunk { _id: 4.0 } -> { _id: 5.0 } for collection test.foo from localhost:30000 (0 slaves detected) m30002| Fri Feb 22 11:21:21.490 [migrateThread] Waiting for replication to catch up before entering critical section m30002| Fri Feb 22 11:21:21.490 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 4.0 } -> { _id: 5.0 } m30002| Fri Feb 22 11:21:21.490 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 4.0 } -> { _id: 5.0 } m30000| Fri Feb 22 11:21:21.499 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 4.0 }, max: { _id: 5.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 11:21:21.499 [conn11] moveChunk setting version to: 7|0||5127547fd4b973931fc9a229 m30002| Fri Feb 22 11:21:21.499 [conn17] Waiting for commit to finish m30002| Fri Feb 22 11:21:21.500 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 4.0 } -> { _id: 5.0 } m30002| Fri Feb 22 11:21:21.500 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 4.0 } -> { _id: 5.0 } m30002| Fri Feb 22 11:21:21.501 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:21-512754b1aaaba61d9eb250fd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532081501), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 4.0 }, max: { _id: 5.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, 
step4 of 5: 0, step5 of 5: 10 } } m30000| Fri Feb 22 11:21:21.509 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 4.0 }, max: { _id: 5.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 11:21:21.509 [conn11] moveChunk updating self version to: 7|1||5127547fd4b973931fc9a229 through { _id: 5.0 } -> { _id: 6.0 } for collection 'test.foo' m30000| Fri Feb 22 11:21:21.510 [conn17] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:21.511 [conn19] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:21.545 [conn18] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:21.547 [conn17] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:21.580 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:21.584 [conn16] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:21.696 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:21-512754b10cfd6a2130a0ac13", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532081696), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 4.0 }, max: { _id: 5.0 }, from: "shard0000", to: "shard0002" } } m30000| Fri Feb 22 11:21:21.696 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:21.726 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:21.756 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:21.867 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 11:21:21.867 [conn11] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 11:21:21.867 [conn11] forking for cleanup of chunk data m30000| Fri Feb 22 11:21:21.867 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 11:21:21.867 [conn11] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 
22 11:21:21.867 [cleanupOldData-512754b10cfd6a2130a0ac14] (start) waiting to cleanup test.foo from { _id: 4.0 } -> { _id: 5.0 }, # cursors remaining: 0 m30000| Fri Feb 22 11:21:21.867 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:21.887 [cleanupOldData-512754b10cfd6a2130a0ac14] waiting to remove documents for test.foo from { _id: 4.0 } -> { _id: 5.0 } m30000| Fri Feb 22 11:21:21.887 [cleanupOldData-512754b10cfd6a2130a0ac14] moveChunk starting delete for: test.foo from { _id: 4.0 } -> { _id: 5.0 } m30000| Fri Feb 22 11:21:21.887 [cleanupOldData-512754b10cfd6a2130a0ac14] moveChunk deleted 1 documents for test.foo from { _id: 4.0 } -> { _id: 5.0 } m30001| Fri Feb 22 11:21:21.892 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:21.927 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:22.037 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. m30000| Fri Feb 22 11:21:22.037 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:22-512754b20cfd6a2130a0ac15", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532082037), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 4.0 }, max: { _id: 5.0 }, step1 of 6: 0, step2 of 6: 510, step3 of 6: 0, step4 of 6: 10, step5 of 6: 367, step6 of 6: 0 } } m30000| Fri Feb 22 11:21:22.037 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:22.066 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:22.099 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:22.208 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 4.0 }, max: { _id: 5.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_4.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 
locks(micros) W:28 r:89 w:20 reslen:37 1229ms m30999| Fri Feb 22 11:21:22.208 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:21:22.209 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 28 version: 7|1||5127547fd4b973931fc9a229 based on: 6|1||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:21:22.209 [Balancer] *** end of balancing round m30000| Fri Feb 22 11:21:22.209 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:22.238 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:22.271 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:21:22.378 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked. m30999| Fri Feb 22 11:21:27.379 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:21:27.379 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 ) m30999| Fri Feb 22 11:21:27.380 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:21:27 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512754b7d4b973931fc9a230" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512754b0d4b973931fc9a22f" } } m30000| Fri Feb 22 11:21:27.380 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:27.413 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:27.453 [conn5] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:27.553 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:27.586 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:27.625 [conn5] CMD 
fsync: sync:1 lock:0 m30999| Fri Feb 22 11:21:27.723 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 512754b7d4b973931fc9a230 m30999| Fri Feb 22 11:21:27.723 [Balancer] *** start balancing round m30999| Fri Feb 22 11:21:27.723 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:21:27.723 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:21:27.725 [Balancer] shard0002 has more chunks me:3 best: shard0001:3 m30999| Fri Feb 22 11:21:27.725 [Balancer] collection : test.foo m30999| Fri Feb 22 11:21:27.725 [Balancer] donor : shard0000 chunks on 15 m30999| Fri Feb 22 11:21:27.725 [Balancer] receiver : shard0001 chunks on 3 m30999| Fri Feb 22 11:21:27.725 [Balancer] threshold : 2 m30999| Fri Feb 22 11:21:27.725 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_5.0", lastmod: Timestamp 7000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 5.0 }, max: { _id: 6.0 }, shard: "shard0000" } from: shard0000 to: shard0001 tag [] m30999| Fri Feb 22 11:21:27.725 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 7|1||000000000000000000000000min: { _id: 5.0 }max: { _id: 6.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 11:21:27.725 [conn11] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 11:21:27.726 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 5.0 }, max: { _id: 6.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_5.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 11:21:27.726 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:27.751 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:27.791 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:27.894 
[conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:27.918 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:27.957 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:28.064 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754b70cfd6a2130a0ac16 m30000| Fri Feb 22 11:21:28.065 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:28-512754b80cfd6a2130a0ac17", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532088064), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 5.0 }, max: { _id: 6.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 11:21:28.065 [conn20] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:28.090 [conn19] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:28.120 [conn19] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:28.236 [conn11] moveChunk request accepted at version 7|1||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:21:28.236 [conn11] moveChunk number of documents: 1 m30001| Fri Feb 22 11:21:28.236 [migrateThread] starting receiving-end of migration of chunk { _id: 5.0 } -> { _id: 6.0 } for collection test.foo from localhost:30000 (0 slaves detected) m30001| Fri Feb 22 11:21:28.238 [migrateThread] Waiting for replication to catch up before entering critical section m30001| Fri Feb 22 11:21:28.238 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 5.0 } -> { _id: 6.0 } m30001| Fri Feb 22 11:21:28.238 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 5.0 } -> { _id: 6.0 } m30000| Fri Feb 22 11:21:28.247 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 5.0 }, max: { _id: 6.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 
11:21:28.247 [conn11] moveChunk setting version to: 8|0||5127547fd4b973931fc9a229
m30001| Fri Feb 22 11:21:28.247 [conn15] Waiting for commit to finish
m30001| Fri Feb 22 11:21:28.248 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 5.0 } -> { _id: 6.0 }
m30001| Fri Feb 22 11:21:28.248 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 5.0 } -> { _id: 6.0 }
m30001| Fri Feb 22 11:21:28.248 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:28-512754b878e37a7f0861eba4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532088248), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 5.0 }, max: { _id: 6.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
m30000| Fri Feb 22 11:21:28.249 [conn16] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:28.257 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 5.0 }, max: { _id: 6.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 11:21:28.257 [conn11] moveChunk updating self version to: 8|1||5127547fd4b973931fc9a229 through { _id: 6.0 } -> { _id: 7.0 } for collection 'test.foo'
m30000| Fri Feb 22 11:21:28.257 [conn17] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:28.303 [conn16] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:28.306 [conn17] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:28.347 [conn15] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:28.361 [conn16] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:28.440 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:28-512754b80cfd6a2130a0ac18", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532088440), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 5.0 }, max: { _id: 6.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 11:21:28.440 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:28.463 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:28.493 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:28.542 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:21:28.542 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:21:28.542 [conn11] forking for cleanup of chunk data
m30000| Fri Feb 22 11:21:28.542 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:21:28.542 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:21:28.543 [cleanupOldData-512754b80cfd6a2130a0ac19] (start) waiting to cleanup test.foo from { _id: 5.0 } -> { _id: 6.0 }, # cursors remaining: 0
m30000| Fri Feb 22 11:21:28.543 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:28.563 [cleanupOldData-512754b80cfd6a2130a0ac19] waiting to remove documents for test.foo from { _id: 5.0 } -> { _id: 6.0 }
m30000| Fri Feb 22 11:21:28.563 [cleanupOldData-512754b80cfd6a2130a0ac19] moveChunk starting delete for: test.foo from { _id: 5.0 } -> { _id: 6.0 }
m30000| Fri Feb 22 11:21:28.563 [cleanupOldData-512754b80cfd6a2130a0ac19] moveChunk deleted 1 documents for test.foo from { _id: 5.0 } -> { _id: 6.0 }
m30001| Fri Feb 22 11:21:28.569 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:28.604 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:28.679 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:21:28.679 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:28-512754b80cfd6a2130a0ac1a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532088679), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 5.0 }, max: { _id: 6.0 }, step1 of 6: 0, step2 of 6: 510, step3 of 6: 0, step4 of 6: 10, step5 of 6: 295, step6 of 6: 0 } }
m30000| Fri Feb 22 11:21:28.679 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:28.705 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:28.734 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:28.815 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 5.0 }, max: { _id: 6.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_5.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:33 r:115 w:25 reslen:37 1090ms
m30999| Fri Feb 22 11:21:28.815 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:21:28.816 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 29 version: 8|1||5127547fd4b973931fc9a229 based on: 7|1||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:21:28.817 [Balancer] *** end of balancing round
m30000| Fri Feb 22 11:21:28.817 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:28.846 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:28.880 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:21:28.952 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
m30000| Fri Feb 22 11:21:28.983 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:29.009 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:29.045 [conn7] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:29.122 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:29.148 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:29.182 [conn7] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:21:29.258 [LockPinger] cluster localhost:30000,localhost:30001,localhost:30002 pinged successfully at Fri Feb 22 11:21:28 2013 by distributed lock pinger 'localhost:30000,localhost:30001,localhost:30002/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838', sleeping for 30000ms
m30000| Fri Feb 22 11:21:32.662 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:32.690 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:32.722 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:32.798 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:32.825 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:32.856 [conn19] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:21:33.953 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:21:33.961 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:21:33.961 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:21:33 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512754bdd4b973931fc9a231" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512754b7d4b973931fc9a230" } }
m30000| Fri Feb 22 11:21:33.961 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:33.991 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:34.026 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:34.093 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:34.122 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:34.157 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:21:34.229 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 512754bdd4b973931fc9a231
m30999| Fri Feb 22 11:21:34.229 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:21:34.229 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:21:34.229 [Balancer] secondaryThrottle: 1
m30000| Fri Feb 22 11:21:34.230 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:34.260 [conn6] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:34.290 [conn6] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:21:34.365 [Balancer] collection : test.foo
m30999| Fri Feb 22 11:21:34.365 [Balancer] donor : shard0000 chunks on 14
m30999| Fri Feb 22 11:21:34.365 [Balancer] receiver : shard0002 chunks on 3
m30999| Fri Feb 22 11:21:34.365 [Balancer] threshold : 2
m30999| Fri Feb 22 11:21:34.365 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_6.0", lastmod: Timestamp 8000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 6.0 }, max: { _id: 7.0 }, shard: "shard0000" } from: shard0000 to: shard0002 tag []
m30999| Fri Feb 22 11:21:34.365 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 8|1||000000000000000000000000min: { _id: 6.0 }max: { _id: 7.0 }) shard0000:localhost:30000 -> shard0002:localhost:30002
m30000| Fri Feb 22 11:21:34.366 [conn11] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 11:21:34.366 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 6.0 }, max: { _id: 7.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_6.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 11:21:34.366 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:34.391 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:34.427 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:34.502 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:34.527 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:34.562 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:34.638 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754be0cfd6a2130a0ac1b
m30000| Fri Feb 22 11:21:34.638 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:34-512754be0cfd6a2130a0ac1c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532094638), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 6.0 }, max: { _id: 7.0 }, from: "shard0000", to: "shard0002" } }
m30000| Fri Feb 22 11:21:34.638 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:34.665 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:34.697 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:34.775 [conn11] moveChunk request accepted at version 8|1||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:21:34.776 [conn11] moveChunk number of documents: 1
m30002| Fri Feb 22 11:21:34.776 [migrateThread] starting receiving-end of migration of chunk { _id: 6.0 } -> { _id: 7.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30002| Fri Feb 22 11:21:34.777 [migrateThread] Waiting for replication to catch up before entering critical section
m30002| Fri Feb 22 11:21:34.777 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 6.0 } -> { _id: 7.0 }
m30002| Fri Feb 22 11:21:34.777 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 6.0 } -> { _id: 7.0 }
m30000| Fri Feb 22 11:21:34.786 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 6.0 }, max: { _id: 7.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 11:21:34.786 [conn11] moveChunk setting version to: 9|0||5127547fd4b973931fc9a229
m30002| Fri Feb 22 11:21:34.786 [conn17] Waiting for commit to finish
m30002| Fri Feb 22 11:21:34.787 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 6.0 } -> { _id: 7.0 }
m30002| Fri Feb 22 11:21:34.787 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 6.0 } -> { _id: 7.0 }
m30002| Fri Feb 22 11:21:34.787 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:34-512754beaaaba61d9eb250fe", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532094787), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 6.0 }, max: { _id: 7.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
m30000| Fri Feb 22 11:21:34.787 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:34.796 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 6.0 }, max: { _id: 7.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 11:21:34.796 [conn11] moveChunk updating self version to: 9|1||5127547fd4b973931fc9a229 through { _id: 7.0 } -> { _id: 8.0 } for collection 'test.foo'
m30000| Fri Feb 22 11:21:34.796 [conn17] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:34.822 [conn17] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:34.825 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:34.853 [conn16] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:34.863 [conn18] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:34.911 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:34-512754be0cfd6a2130a0ac1d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532094911), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 6.0 }, max: { _id: 7.0 }, from: "shard0000", to: "shard0002" } }
m30000| Fri Feb 22 11:21:34.911 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:34.942 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:34.978 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:35.047 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:21:35.048 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:21:35.048 [conn11] forking for cleanup of chunk data
m30000| Fri Feb 22 11:21:35.048 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:21:35.048 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:21:35.048 [cleanupOldData-512754bf0cfd6a2130a0ac1e] (start) waiting to cleanup test.foo from { _id: 6.0 } -> { _id: 7.0 }, # cursors remaining: 0
m30000| Fri Feb 22 11:21:35.048 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:35.068 [cleanupOldData-512754bf0cfd6a2130a0ac1e] waiting to remove documents for test.foo from { _id: 6.0 } -> { _id: 7.0 }
m30000| Fri Feb 22 11:21:35.068 [cleanupOldData-512754bf0cfd6a2130a0ac1e] moveChunk starting delete for: test.foo from { _id: 6.0 } -> { _id: 7.0 }
m30000| Fri Feb 22 11:21:35.068 [cleanupOldData-512754bf0cfd6a2130a0ac1e] moveChunk deleted 1 documents for test.foo from { _id: 6.0 } -> { _id: 7.0 }
m30001| Fri Feb 22 11:21:35.074 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:35.111 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:35.184 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:21:35.184 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:35-512754bf0cfd6a2130a0ac1f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532095184), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 6.0 }, max: { _id: 7.0 }, step1 of 6: 0, step2 of 6: 409, step3 of 6: 0, step4 of 6: 10, step5 of 6: 261, step6 of 6: 0 } }
m30000| Fri Feb 22 11:21:35.184 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:35.210 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:35.242 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:35.320 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 6.0 }, max: { _id: 7.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_6.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:22 r:104 w:18 reslen:37 954ms
m30999| Fri Feb 22 11:21:35.321 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:21:35.322 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 30 version: 9|1||5127547fd4b973931fc9a229 based on: 8|1||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:21:35.322 [Balancer] *** end of balancing round
m30000| Fri Feb 22 11:21:35.322 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:35.352 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:35.389 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:21:35.457 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
m30999| Fri Feb 22 11:21:40.458 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:21:40.459 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:21:40.459 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:21:40 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512754c4d4b973931fc9a232" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512754bdd4b973931fc9a231" } }
m30000| Fri Feb 22 11:21:40.459 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:40.493 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:40.533 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:40.641 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:40.676 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:40.719 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:21:40.846 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 512754c4d4b973931fc9a232
m30999| Fri Feb 22 11:21:40.846 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:21:40.846 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:21:40.846 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:21:40.848 [Balancer] shard0002 has more chunks me:4 best: shard0001:4
m30999| Fri Feb 22 11:21:40.848 [Balancer] collection : test.foo
m30999| Fri Feb 22 11:21:40.848 [Balancer] donor : shard0000 chunks on 13
m30999| Fri Feb 22 11:21:40.848 [Balancer] receiver : shard0001 chunks on 4
m30999| Fri Feb 22 11:21:40.848 [Balancer] threshold : 2
m30999| Fri Feb 22 11:21:40.848 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_7.0", lastmod: Timestamp 9000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 7.0 }, max: { _id: 8.0 }, shard: "shard0000" } from: shard0000 to: shard0001 tag []
m30999| Fri Feb 22 11:21:40.848 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 9|1||000000000000000000000000min: { _id: 7.0 }max: { _id: 8.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 11:21:40.848 [conn11] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 11:21:40.848 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 7.0 }, max: { _id: 8.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_7.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 11:21:40.848 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:40.877 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:40.918 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:41.017 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:41.048 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:41.088 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:41.187 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754c40cfd6a2130a0ac20
m30000| Fri Feb 22 11:21:41.187 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:41-512754c50cfd6a2130a0ac21", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532101187), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 7.0 }, max: { _id: 8.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 11:21:41.187 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:41.213 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:41.243 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:41.359 [conn11] moveChunk request accepted at version 9|1||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:21:41.359 [conn11] moveChunk number of documents: 1
m30001| Fri Feb 22 11:21:41.359 [migrateThread] starting receiving-end of migration of chunk { _id: 7.0 } -> { _id: 8.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 11:21:41.360 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 11:21:41.360 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 7.0 } -> { _id: 8.0 }
m30001| Fri Feb 22 11:21:41.361 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 7.0 } -> { _id: 8.0 }
m30000| Fri Feb 22 11:21:41.369 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 7.0 }, max: { _id: 8.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 11:21:41.370 [conn11] moveChunk setting version to: 10|0||5127547fd4b973931fc9a229
m30001| Fri Feb 22 11:21:41.370 [conn15] Waiting for commit to finish
m30001| Fri Feb 22 11:21:41.371 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 7.0 } -> { _id: 8.0 }
m30001| Fri Feb 22 11:21:41.371 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 7.0 } -> { _id: 8.0 }
m30001| Fri Feb 22 11:21:41.371 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:41-512754c578e37a7f0861eba5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532101371), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 7.0 }, max: { _id: 8.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
m30000| Fri Feb 22 11:21:41.371 [conn16] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:41.380 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 7.0 }, max: { _id: 8.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 11:21:41.380 [conn11] moveChunk updating self version to: 10|1||5127547fd4b973931fc9a229 through { _id: 8.0 } -> { _id: 9.0 } for collection 'test.foo'
m30000| Fri Feb 22 11:21:41.380 [conn17] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:41.410 [conn16] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:41.413 [conn17] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:41.459 [conn15] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:41.459 [conn16] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:41.596 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:41-512754c50cfd6a2130a0ac22", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532101596), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 7.0 }, max: { _id: 8.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 11:21:41.597 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:41.622 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:41.653 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:41.767 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:21:41.767 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:21:41.767 [conn11] forking for cleanup of chunk data
m30000| Fri Feb 22 11:21:41.767 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:21:41.767 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:21:41.767 [cleanupOldData-512754c50cfd6a2130a0ac23] (start) waiting to cleanup test.foo from { _id: 7.0 } -> { _id: 8.0 }, # cursors remaining: 0
m30000| Fri Feb 22 11:21:41.767 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:41.787 [cleanupOldData-512754c50cfd6a2130a0ac23] waiting to remove documents for test.foo from { _id: 7.0 } -> { _id: 8.0 }
m30000| Fri Feb 22 11:21:41.788 [cleanupOldData-512754c50cfd6a2130a0ac23] moveChunk starting delete for: test.foo from { _id: 7.0 } -> { _id: 8.0 }
m30000| Fri Feb 22 11:21:41.788 [cleanupOldData-512754c50cfd6a2130a0ac23] moveChunk deleted 1 documents for test.foo from { _id: 7.0 } -> { _id: 8.0 }
m30001| Fri Feb 22 11:21:41.795 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:41.832 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:41.938 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:21:41.938 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:41-512754c50cfd6a2130a0ac24", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532101938), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 7.0 }, max: { _id: 8.0 }, step1 of 6: 0, step2 of 6: 510, step3 of 6: 0, step4 of 6: 10, step5 of 6: 397, step6 of 6: 0 } }
m30000| Fri Feb 22 11:21:41.938 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:41.964 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:41.996 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:42.108 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 7.0 }, max: { _id: 8.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_7.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:40 r:127 w:22 reslen:37 1260ms
m30999| Fri Feb 22 11:21:42.108 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:21:42.109 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 31 version: 10|1||5127547fd4b973931fc9a229 based on: 9|1||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:21:42.110 [Balancer] *** end of balancing round
m30000| Fri Feb 22 11:21:42.110 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:42.139 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:42.174 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:21:42.279 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
m30999| Fri Feb 22 11:21:47.279 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:21:47.280 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:21:47.280 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:21:47 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512754cbd4b973931fc9a233" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512754c4d4b973931fc9a232" } }
m30000| Fri Feb 22 11:21:47.280 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:47.311 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:47.348 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:47.393 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:47.423 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:47.460 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:21:47.495 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 512754cbd4b973931fc9a233
m30999| Fri Feb 22 11:21:47.495 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:21:47.495 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:21:47.495 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:21:47.497 [Balancer] collection : test.foo
m30999| Fri Feb 22 11:21:47.497 [Balancer] donor : shard0000 chunks on 12
m30999| Fri Feb 22 11:21:47.497 [Balancer] receiver : shard0002 chunks on 4
m30999| Fri Feb 22 11:21:47.497 [Balancer] threshold : 2
m30999| Fri Feb 22 11:21:47.497 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_8.0", lastmod: Timestamp 10000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 8.0 }, max: { _id: 9.0 }, shard: "shard0000" } from: shard0000 to: shard0002 tag []
m30999| Fri Feb 22 11:21:47.497 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 10|1||000000000000000000000000min: { _id: 8.0 }max: { _id: 9.0 }) shard0000:localhost:30000 -> shard0002:localhost:30002
m30000| Fri Feb 22 11:21:47.497 [conn11] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 11:21:47.497 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 8.0 }, max: { _id: 9.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_8.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 11:21:47.497 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:47.522 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:47.557 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:47.598 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:47.623 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:47.663 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:47.700 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754cb0cfd6a2130a0ac25
m30000| Fri Feb 22 11:21:47.700 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:47-512754cb0cfd6a2130a0ac26", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532107700), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 8.0 }, max: { _id: 9.0 }, from: "shard0000", to: "shard0002" } }
m30000| Fri Feb 22 11:21:47.700 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:47.726 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:47.757 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:47.803 [conn11] moveChunk request accepted at version 10|1||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:21:47.803 [conn11] moveChunk number of documents: 1
m30002| Fri Feb 22 11:21:47.804 [migrateThread] starting receiving-end of migration of chunk { _id: 8.0 } -> { _id: 9.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30002| Fri Feb 22 11:21:47.805 [migrateThread] Waiting for replication to catch up before entering critical section
m30002| Fri Feb 22 11:21:47.805 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 8.0 } -> { _id: 9.0 }
m30002| Fri Feb 22 11:21:47.805 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 8.0 } -> { _id: 9.0 }
m30000| Fri Feb 22 11:21:47.814 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 8.0 }, max: { _id: 9.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 11:21:47.814 [conn11] moveChunk setting version to: 11|0||5127547fd4b973931fc9a229
m30002| Fri Feb 22 11:21:47.814 [conn17] Waiting for commit to finish
m30002| Fri Feb 22 11:21:47.815 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 8.0 } -> { _id: 9.0 }
m30002| Fri Feb 22 11:21:47.815 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 8.0 } -> { _id: 9.0 }
m30002| Fri Feb 22 11:21:47.815 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:47-512754cbaaaba61d9eb250ff", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532107815), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 8.0 }, max: { _id: 9.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
m30000| Fri Feb 22 11:21:47.815 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:47.824 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 8.0 }, max: { _id: 9.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 11:21:47.824 [conn11] moveChunk updating self version to: 11|1||5127547fd4b973931fc9a229 through { _id: 9.0 } -> { _id: 10.0 } for collection 'test.foo'
m30000| Fri Feb 22 11:21:47.824 [conn17] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:47.841 [conn18] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:47.854 [conn17] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:47.871 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:47.882 [conn16] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:47.939 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:47-512754cb0cfd6a2130a0ac27", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532107939), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 8.0 }, max: { _id: 9.0 }, from: "shard0000", to: "shard0002" } }
m30000| Fri Feb 22 11:21:47.939 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:47.965 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:47.995 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:48.041 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:21:48.041 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:21:48.041 [conn11] forking for cleanup of chunk data
m30000| Fri Feb 22 11:21:48.041 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:21:48.041 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:21:48.042 [cleanupOldData-512754cc0cfd6a2130a0ac28] (start) waiting to cleanup test.foo from { _id: 8.0 } -> { _id: 9.0 }, # cursors remaining: 0
m30000| Fri Feb 22 11:21:48.042 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:48.062 [cleanupOldData-512754cc0cfd6a2130a0ac28] waiting to remove documents for test.foo from { _id: 8.0 } -> { _id: 9.0 }
m30000| Fri Feb 22 11:21:48.062 [cleanupOldData-512754cc0cfd6a2130a0ac28] moveChunk starting delete for: test.foo from { _id: 8.0 } -> { _id: 9.0 }
m30000| Fri Feb 22 11:21:48.062 [cleanupOldData-512754cc0cfd6a2130a0ac28] moveChunk deleted 1 documents for test.foo from { _id: 8.0 } -> { _id: 9.0 }
m30001| Fri Feb 22 11:21:48.067 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:48.108 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:48.144 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:21:48.144 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:48-512754cc0cfd6a2130a0ac29", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532108144), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 8.0 }, max: { _id: 9.0 }, step1 of 6: 0, step2 of 6: 306, step3 of 6: 0, step4 of 6: 10, step5 of 6: 227, step6 of 6: 0 } }
m30000| Fri Feb 22 11:21:48.144 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:48.170 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:48.200 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:48.246 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 8.0 }, max: { _id: 9.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_8.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:29 r:119 w:15 reslen:37 749ms
m30999| Fri Feb 22 11:21:48.246 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:21:48.248 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 32 version: 11|1||5127547fd4b973931fc9a229 based on: 10|1||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:21:48.248 [Balancer] *** end of balancing round
m30000| Fri Feb 22 11:21:48.248 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:48.282 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:48.322 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:21:48.383 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
m30999| Fri Feb 22 11:21:53.383 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:21:53.384 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:21:53.384 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:21:53 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512754d1d4b973931fc9a234" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512754cbd4b973931fc9a233" } }
m30000| Fri Feb 22 11:21:53.384 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:53.414 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:53.449 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:53.497 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:53.526 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:53.561 [conn5] CMD
fsync: sync:1 lock:0 m30999| Fri Feb 22 11:21:53.634 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 512754d1d4b973931fc9a234 m30999| Fri Feb 22 11:21:53.634 [Balancer] *** start balancing round m30999| Fri Feb 22 11:21:53.634 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:21:53.634 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:21:53.635 [Balancer] shard0002 has more chunks me:5 best: shard0001:5 m30999| Fri Feb 22 11:21:53.635 [Balancer] collection : test.foo m30999| Fri Feb 22 11:21:53.635 [Balancer] donor : shard0000 chunks on 11 m30999| Fri Feb 22 11:21:53.635 [Balancer] receiver : shard0001 chunks on 5 m30999| Fri Feb 22 11:21:53.635 [Balancer] threshold : 2 m30999| Fri Feb 22 11:21:53.635 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_9.0", lastmod: Timestamp 11000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 9.0 }, max: { _id: 10.0 }, shard: "shard0000" } from: shard0000 to: shard0001 tag [] m30999| Fri Feb 22 11:21:53.635 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 11|1||000000000000000000000000min: { _id: 9.0 }max: { _id: 10.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 11:21:53.635 [conn11] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 11:21:53.636 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 9.0 }, max: { _id: 10.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_9.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 11:21:53.636 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:53.661 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:53.696 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 
11:21:53.736 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:53.761 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:53.796 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:53.838 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754d10cfd6a2130a0ac2a m30000| Fri Feb 22 11:21:53.839 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:53-512754d10cfd6a2130a0ac2b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532113838), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 9.0 }, max: { _id: 10.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 11:21:53.839 [conn20] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:53.864 [conn19] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:53.894 [conn19] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:53.942 [conn11] moveChunk request accepted at version 11|1||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:21:53.942 [conn11] moveChunk number of documents: 1 m30001| Fri Feb 22 11:21:53.942 [migrateThread] starting receiving-end of migration of chunk { _id: 9.0 } -> { _id: 10.0 } for collection test.foo from localhost:30000 (0 slaves detected) m30001| Fri Feb 22 11:21:53.943 [migrateThread] Waiting for replication to catch up before entering critical section m30001| Fri Feb 22 11:21:53.943 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 9.0 } -> { _id: 10.0 } m30001| Fri Feb 22 11:21:53.944 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 9.0 } -> { _id: 10.0 } m30000| Fri Feb 22 11:21:53.961 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 9.0 }, max: { _id: 10.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| 
Fri Feb 22 11:21:53.961 [conn11] moveChunk setting version to: 12|0||5127547fd4b973931fc9a229 m30001| Fri Feb 22 11:21:53.961 [conn15] Waiting for commit to finish m30001| Fri Feb 22 11:21:53.964 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 9.0 } -> { _id: 10.0 } m30001| Fri Feb 22 11:21:53.964 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 9.0 } -> { _id: 10.0 } m30001| Fri Feb 22 11:21:53.964 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:53-512754d178e37a7f0861eba6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532113964), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 9.0 }, max: { _id: 10.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 20 } } m30000| Fri Feb 22 11:21:53.964 [conn16] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:53.971 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 9.0 }, max: { _id: 10.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 11:21:53.971 [conn11] moveChunk updating self version to: 12|1||5127547fd4b973931fc9a229 through { _id: 10.0 } -> { _id: 11.0 } for collection 'test.foo' m30000| Fri Feb 22 11:21:53.971 [conn17] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:54.001 [conn16] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:54.004 [conn17] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:54.049 [conn15] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:54.049 [conn16] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:54.112 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:54-512754d20cfd6a2130a0ac2c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532114112), what: 
"moveChunk.commit", ns: "test.foo", details: { min: { _id: 9.0 }, max: { _id: 10.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 11:21:54.112 [conn20] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:54.139 [conn19] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:54.170 [conn19] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:54.214 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 11:21:54.214 [conn11] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 11:21:54.214 [conn11] forking for cleanup of chunk data m30000| Fri Feb 22 11:21:54.214 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 11:21:54.214 [conn11] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 11:21:54.214 [cleanupOldData-512754d20cfd6a2130a0ac2d] (start) waiting to cleanup test.foo from { _id: 9.0 } -> { _id: 10.0 }, # cursors remaining: 0 m30000| Fri Feb 22 11:21:54.221 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:54.234 [cleanupOldData-512754d20cfd6a2130a0ac2d] waiting to remove documents for test.foo from { _id: 9.0 } -> { _id: 10.0 } m30000| Fri Feb 22 11:21:54.234 [cleanupOldData-512754d20cfd6a2130a0ac2d] moveChunk starting delete for: test.foo from { _id: 9.0 } -> { _id: 10.0 } m30000| Fri Feb 22 11:21:54.235 [cleanupOldData-512754d20cfd6a2130a0ac2d] moveChunk deleted 1 documents for test.foo from { _id: 9.0 } -> { _id: 10.0 } m30001| Fri Feb 22 11:21:54.248 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:54.285 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:54.351 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. 
m30000| Fri Feb 22 11:21:54.351 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:54-512754d20cfd6a2130a0ac2e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532114351), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 9.0 }, max: { _id: 10.0 }, step1 of 6: 0, step2 of 6: 306, step3 of 6: 0, step4 of 6: 18, step5 of 6: 253, step6 of 6: 0 } } m30000| Fri Feb 22 11:21:54.352 [conn20] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:54.377 [conn19] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:54.407 [conn19] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:54.454 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 9.0 }, max: { _id: 10.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_9.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:41 r:107 w:18 reslen:37 818ms m30999| Fri Feb 22 11:21:54.454 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:21:54.455 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 33 version: 12|1||5127547fd4b973931fc9a229 based on: 11|1||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:21:54.455 [Balancer] *** end of balancing round m30000| Fri Feb 22 11:21:54.455 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:54.484 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:54.519 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:21:54.590 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked. 
m30000| Fri Feb 22 11:21:59.259 [conn6] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:59.288 [conn6] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:59.338 [conn6] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:59.425 [conn6] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:59.454 [conn6] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:59.484 [conn6] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:21:59.561 [LockPinger] cluster localhost:30000,localhost:30001,localhost:30002 pinged successfully at Fri Feb 22 11:21:59 2013 by distributed lock pinger 'localhost:30000,localhost:30001,localhost:30002/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838', sleeping for 30000ms m30999| Fri Feb 22 11:21:59.592 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:21:59.593 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 ) m30999| Fri Feb 22 11:21:59.593 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:21:59 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512754d7d4b973931fc9a235" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512754d1d4b973931fc9a234" } } m30000| Fri Feb 22 11:21:59.593 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:59.623 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:59.660 [conn5] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:59.765 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:59.794 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:59.830 [conn5] CMD fsync: sync:1 lock:0 
m30999| Fri Feb 22 11:21:59.936 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 512754d7d4b973931fc9a235 m30999| Fri Feb 22 11:21:59.936 [Balancer] *** start balancing round m30999| Fri Feb 22 11:21:59.936 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:21:59.936 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:21:59.938 [Balancer] collection : test.foo m30999| Fri Feb 22 11:21:59.938 [Balancer] donor : shard0000 chunks on 10 m30999| Fri Feb 22 11:21:59.938 [Balancer] receiver : shard0002 chunks on 5 m30999| Fri Feb 22 11:21:59.938 [Balancer] threshold : 2 m30999| Fri Feb 22 11:21:59.938 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_10.0", lastmod: Timestamp 12000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 10.0 }, max: { _id: 11.0 }, shard: "shard0000" } from: shard0000 to: shard0002 tag [] m30999| Fri Feb 22 11:21:59.938 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 12|1||000000000000000000000000min: { _id: 10.0 }max: { _id: 11.0 }) shard0000:localhost:30000 -> shard0002:localhost:30002 m30000| Fri Feb 22 11:21:59.938 [conn11] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 11:21:59.939 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 10.0 }, max: { _id: 11.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_10.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 11:21:59.939 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:59.968 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:00.005 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:22:00.106 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:00.132 [conn14] CMD fsync: sync:1 lock:0 
m30002| Fri Feb 22 11:22:00.172 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:22:00.277 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754d70cfd6a2130a0ac2f m30000| Fri Feb 22 11:22:00.277 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:00-512754d80cfd6a2130a0ac30", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532120277), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 10.0 }, max: { _id: 11.0 }, from: "shard0000", to: "shard0002" } } m30000| Fri Feb 22 11:22:00.277 [conn20] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:00.303 [conn19] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:00.333 [conn19] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:00.470 [conn19] command admin.$cmd command: { getlasterror: 1, fsync: 1 } ntoreturn:1 keyUpdates:0 reslen:79 102ms m30000| Fri Feb 22 11:22:00.482 [conn11] moveChunk request accepted at version 12|1||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:22:00.483 [conn11] moveChunk number of documents: 1 m30002| Fri Feb 22 11:22:00.483 [migrateThread] starting receiving-end of migration of chunk { _id: 10.0 } -> { _id: 11.0 } for collection test.foo from localhost:30000 (0 slaves detected) m30002| Fri Feb 22 11:22:00.484 [migrateThread] Waiting for replication to catch up before entering critical section m30002| Fri Feb 22 11:22:00.484 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 10.0 } -> { _id: 11.0 } m30002| Fri Feb 22 11:22:00.484 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 10.0 } -> { _id: 11.0 } m30000| Fri Feb 22 11:22:00.493 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 10.0 }, max: { _id: 11.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 
1.0 } my mem used: 0 m30000| Fri Feb 22 11:22:00.493 [conn11] moveChunk setting version to: 13|0||5127547fd4b973931fc9a229 m30002| Fri Feb 22 11:22:00.493 [conn17] Waiting for commit to finish m30002| Fri Feb 22 11:22:00.495 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 10.0 } -> { _id: 11.0 } m30002| Fri Feb 22 11:22:00.495 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 10.0 } -> { _id: 11.0 } m30002| Fri Feb 22 11:22:00.495 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:00-512754d8aaaba61d9eb25100", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532120495), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 10.0 }, max: { _id: 11.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30000| Fri Feb 22 11:22:00.501 [conn19] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:22:00.503 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 10.0 }, max: { _id: 11.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 11:22:00.503 [conn11] moveChunk updating self version to: 13|1||5127547fd4b973931fc9a229 through { _id: 11.0 } -> { _id: 12.0 } for collection 'test.foo' m30000| Fri Feb 22 11:22:00.503 [conn17] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:00.536 [conn18] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:00.539 [conn17] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:00.575 [conn16] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:00.584 [conn18] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:22:00.686 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:00-512754d80cfd6a2130a0ac31", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new 
Date(1361532120686), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 10.0 }, max: { _id: 11.0 }, from: "shard0000", to: "shard0002" } } m30000| Fri Feb 22 11:22:00.686 [conn20] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:00.712 [conn19] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:00.743 [conn19] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:22:00.823 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 11:22:00.823 [conn11] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 11:22:00.823 [conn11] forking for cleanup of chunk data m30000| Fri Feb 22 11:22:00.823 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 11:22:00.823 [conn11] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 11:22:00.823 [cleanupOldData-512754d80cfd6a2130a0ac32] (start) waiting to cleanup test.foo from { _id: 10.0 } -> { _id: 11.0 }, # cursors remaining: 0 m30000| Fri Feb 22 11:22:00.823 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:22:00.843 [cleanupOldData-512754d80cfd6a2130a0ac32] waiting to remove documents for test.foo from { _id: 10.0 } -> { _id: 11.0 } m30000| Fri Feb 22 11:22:00.843 [cleanupOldData-512754d80cfd6a2130a0ac32] moveChunk starting delete for: test.foo from { _id: 10.0 } -> { _id: 11.0 } m30000| Fri Feb 22 11:22:00.843 [cleanupOldData-512754d80cfd6a2130a0ac32] moveChunk deleted 1 documents for test.foo from { _id: 10.0 } -> { _id: 11.0 } m30001| Fri Feb 22 11:22:00.849 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:00.889 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:22:00.993 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. 
m30000| Fri Feb 22 11:22:00.993 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:00-512754d80cfd6a2130a0ac33", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532120993), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 10.0 }, max: { _id: 11.0 }, step1 of 6: 0, step2 of 6: 543, step3 of 6: 0, step4 of 6: 10, step5 of 6: 329, step6 of 6: 0 } } m30000| Fri Feb 22 11:22:00.993 [conn20] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:01.020 [conn19] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:01.050 [conn19] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:22:01.130 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 10.0 }, max: { _id: 11.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_10.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:40 r:125 w:14 reslen:37 1191ms m30999| Fri Feb 22 11:22:01.130 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:22:01.132 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 34 version: 13|1||5127547fd4b973931fc9a229 based on: 12|1||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:22:01.132 [Balancer] *** end of balancing round m30000| Fri Feb 22 11:22:01.132 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:01.167 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:01.210 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:22:01.301 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked. 
m30000| Fri Feb 22 11:22:02.934 [conn20] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:02.960 [conn19] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:02.991 [conn19] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:22:03.071 [conn20] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:03.097 [conn19] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:03.127 [conn19] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:22:06.301 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:22:06.302 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 ) m30999| Fri Feb 22 11:22:06.302 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:22:06 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512754ded4b973931fc9a236" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512754d7d4b973931fc9a235" } } m30000| Fri Feb 22 11:22:06.302 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:06.331 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:06.366 [conn5] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:22:06.441 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:06.469 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:06.504 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:22:06.577 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 512754ded4b973931fc9a236 m30999| Fri Feb 22 11:22:06.577 [Balancer] *** start balancing round m30999| Fri Feb 22 11:22:06.577 [Balancer] waitForDelete: 0 
m30999| Fri Feb 22 11:22:06.577 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:22:06.579 [Balancer] shard0002 has more chunks me:6 best: shard0001:6 m30999| Fri Feb 22 11:22:06.579 [Balancer] collection : test.foo m30999| Fri Feb 22 11:22:06.579 [Balancer] donor : shard0000 chunks on 9 m30999| Fri Feb 22 11:22:06.579 [Balancer] receiver : shard0001 chunks on 6 m30999| Fri Feb 22 11:22:06.579 [Balancer] threshold : 2 m30999| Fri Feb 22 11:22:06.579 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_11.0", lastmod: Timestamp 13000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 11.0 }, max: { _id: 12.0 }, shard: "shard0000" } from: shard0000 to: shard0001 tag [] m30999| Fri Feb 22 11:22:06.579 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 13|1||000000000000000000000000min: { _id: 11.0 }max: { _id: 12.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 11:22:06.579 [conn11] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 11:22:06.579 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 11.0 }, max: { _id: 12.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_11.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 11:22:06.579 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:06.605 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:06.640 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:22:06.714 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:06.741 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:06.776 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:22:06.850 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 
512754de0cfd6a2130a0ac34 m30000| Fri Feb 22 11:22:06.850 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:06-512754de0cfd6a2130a0ac35", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532126850), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 11.0 }, max: { _id: 12.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 11:22:06.850 [conn20] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:06.878 [conn19] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:06.914 [conn19] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:22:07.022 [conn11] moveChunk request accepted at version 13|1||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:22:07.022 [conn11] moveChunk number of documents: 1 m30001| Fri Feb 22 11:22:07.022 [migrateThread] starting receiving-end of migration of chunk { _id: 11.0 } -> { _id: 12.0 } for collection test.foo from localhost:30000 (0 slaves detected) m30001| Fri Feb 22 11:22:07.023 [migrateThread] Waiting for replication to catch up before entering critical section m30001| Fri Feb 22 11:22:07.023 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 11.0 } -> { _id: 12.0 } m30001| Fri Feb 22 11:22:07.024 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 11.0 } -> { _id: 12.0 } m30000| Fri Feb 22 11:22:07.032 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 11.0 }, max: { _id: 12.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 11:22:07.032 [conn11] moveChunk setting version to: 14|0||5127547fd4b973931fc9a229 m30001| Fri Feb 22 11:22:07.032 [conn15] Waiting for commit to finish m30001| Fri Feb 22 11:22:07.034 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 11.0 } -> { _id: 12.0 } 
m30001| Fri Feb 22 11:22:07.034 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 11.0 } -> { _id: 12.0 }
m30001| Fri Feb 22 11:22:07.034 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:07-512754df78e37a7f0861eba7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532127034), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 11.0 }, max: { _id: 12.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
m30000| Fri Feb 22 11:22:07.034 [conn16] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:07.042 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 11.0 }, max: { _id: 12.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 11:22:07.043 [conn11] moveChunk updating self version to: 14|1||5127547fd4b973931fc9a229 through { _id: 12.0 } -> { _id: 13.0 } for collection 'test.foo'
m30000| Fri Feb 22 11:22:07.043 [conn17] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:07.060 [conn16] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:07.065 [conn17] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:07.098 [conn15] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:07.116 [conn16] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:07.225 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:07-512754df0cfd6a2130a0ac36", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532127225), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 11.0 }, max: { _id: 12.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 11:22:07.225 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:07.251 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:07.281 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:07.361 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:22:07.361 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:22:07.362 [conn11] forking for cleanup of chunk data
m30000| Fri Feb 22 11:22:07.362 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:22:07.362 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:22:07.362 [cleanupOldData-512754df0cfd6a2130a0ac37] (start) waiting to cleanup test.foo from { _id: 11.0 } -> { _id: 12.0 }, # cursors remaining: 0
m30000| Fri Feb 22 11:22:07.362 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:07.382 [cleanupOldData-512754df0cfd6a2130a0ac37] waiting to remove documents for test.foo from { _id: 11.0 } -> { _id: 12.0 }
m30000| Fri Feb 22 11:22:07.382 [cleanupOldData-512754df0cfd6a2130a0ac37] moveChunk starting delete for: test.foo from { _id: 11.0 } -> { _id: 12.0 }
m30000| Fri Feb 22 11:22:07.382 [cleanupOldData-512754df0cfd6a2130a0ac37] moveChunk deleted 1 documents for test.foo from { _id: 11.0 } -> { _id: 12.0 }
m30001| Fri Feb 22 11:22:07.387 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:07.423 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:07.498 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:22:07.498 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:07-512754df0cfd6a2130a0ac38", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532127498), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 11.0 }, max: { _id: 12.0 }, step1 of 6: 0, step2 of 6: 442, step3 of 6: 0, step4 of 6: 10, step5 of 6: 329, step6 of 6: 0 } }
m30000| Fri Feb 22 11:22:07.498 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:07.524 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:07.554 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:07.635 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 11.0 }, max: { _id: 12.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_11.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:28 r:106 w:15 reslen:37 1055ms
m30999| Fri Feb 22 11:22:07.635 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:22:07.636 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 35 version: 14|1||5127547fd4b973931fc9a229 based on: 13|1||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:22:07.636 [Balancer] *** end of balancing round
m30000| Fri Feb 22 11:22:07.636 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:07.666 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:07.703 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:22:07.805 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
m30999| Fri Feb 22 11:22:12.806 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:22:12.807 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:22:12.807 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:22:12 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512754e4d4b973931fc9a237" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512754ded4b973931fc9a236" } }
m30000| Fri Feb 22 11:22:12.807 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:12.841 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:12.882 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:12.989 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:13.023 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:13.065 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:22:13.160 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 512754e4d4b973931fc9a237
m30999| Fri Feb 22 11:22:13.160 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:22:13.160 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:22:13.160 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:22:13.162 [Balancer] collection : test.foo
m30999| Fri Feb 22 11:22:13.162 [Balancer] donor : shard0000 chunks on 8
m30999| Fri Feb 22 11:22:13.162 [Balancer] receiver : shard0002 chunks on 6
m30999| Fri Feb 22 11:22:13.162 [Balancer] threshold : 2
m30999| Fri Feb 22 11:22:13.162 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_12.0", lastmod: Timestamp 14000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 12.0 }, max: { _id: 13.0 }, shard: "shard0000" } from: shard0000 to: shard0002 tag []
m30999| Fri Feb 22 11:22:13.162 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 14|1||000000000000000000000000min: { _id: 12.0 }max: { _id: 13.0 }) shard0000:localhost:30000 -> shard0002:localhost:30002
m30000| Fri Feb 22 11:22:13.162 [conn11] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 11:22:13.162 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 12.0 }, max: { _id: 13.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_12.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 11:22:13.162 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:13.189 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:13.230 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:13.330 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:13.364 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:13.405 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:13.501 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754e50cfd6a2130a0ac39
m30000| Fri Feb 22 11:22:13.501 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:13-512754e50cfd6a2130a0ac3a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532133501), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 12.0 }, max: { _id: 13.0 }, from: "shard0000", to: "shard0002" } }
m30000| Fri Feb 22 11:22:13.501 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:13.527 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:13.558 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:13.673 [conn11] moveChunk request accepted at version 14|1||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:22:13.673 [conn11] moveChunk number of documents: 1
m30002| Fri Feb 22 11:22:13.673 [migrateThread] starting receiving-end of migration of chunk { _id: 12.0 } -> { _id: 13.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30002| Fri Feb 22 11:22:13.675 [migrateThread] Waiting for replication to catch up before entering critical section
m30002| Fri Feb 22 11:22:13.675 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 12.0 } -> { _id: 13.0 }
m30002| Fri Feb 22 11:22:13.676 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 12.0 } -> { _id: 13.0 }
m30000| Fri Feb 22 11:22:13.684 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 12.0 }, max: { _id: 13.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 11:22:13.684 [conn11] moveChunk setting version to: 15|0||5127547fd4b973931fc9a229
m30002| Fri Feb 22 11:22:13.684 [conn17] Waiting for commit to finish
m30002| Fri Feb 22 11:22:13.686 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 12.0 } -> { _id: 13.0 }
m30002| Fri Feb 22 11:22:13.686 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 12.0 } -> { _id: 13.0 }
m30002| Fri Feb 22 11:22:13.686 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:13-512754e5aaaba61d9eb25101", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532133686), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 12.0 }, max: { _id: 13.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
m30000| Fri Feb 22 11:22:13.686 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:13.694 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 12.0 }, max: { _id: 13.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 11:22:13.694 [conn11] moveChunk updating self version to: 15|1||5127547fd4b973931fc9a229 through { _id: 13.0 } -> { _id: 14.0 } for collection 'test.foo'
m30000| Fri Feb 22 11:22:13.694 [conn17] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:13.727 [conn17] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:13.731 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:13.759 [conn16] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:13.778 [conn18] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:13.876 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:13-512754e50cfd6a2130a0ac3b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532133876), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 12.0 }, max: { _id: 13.0 }, from: "shard0000", to: "shard0002" } }
m30000| Fri Feb 22 11:22:13.876 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:13.903 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:13.933 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:13.960 [conn10] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:13.994 [conn10] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:14.013 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:22:14.013 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:22:14.013 [conn11] forking for cleanup of chunk data
m30000| Fri Feb 22 11:22:14.013 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:22:14.013 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:22:14.013 [cleanupOldData-512754e60cfd6a2130a0ac3c] (start) waiting to cleanup test.foo from { _id: 12.0 } -> { _id: 13.0 }, # cursors remaining: 0
m30000| Fri Feb 22 11:22:14.013 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:14.030 [conn10] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:14.033 [cleanupOldData-512754e60cfd6a2130a0ac3c] waiting to remove documents for test.foo from { _id: 12.0 } -> { _id: 13.0 }
m30000| Fri Feb 22 11:22:14.033 [cleanupOldData-512754e60cfd6a2130a0ac3c] moveChunk starting delete for: test.foo from { _id: 12.0 } -> { _id: 13.0 }
m30000| Fri Feb 22 11:22:14.034 [cleanupOldData-512754e60cfd6a2130a0ac3c] moveChunk deleted 1 documents for test.foo from { _id: 12.0 } -> { _id: 13.0 }
m30001| Fri Feb 22 11:22:14.040 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:14.082 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:14.118 [conn10] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:14.150 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:22:14.150 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:14-512754e60cfd6a2130a0ac3d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532134150), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 12.0 }, max: { _id: 13.0 }, step1 of 6: 0, step2 of 6: 510, step3 of 6: 0, step4 of 6: 10, step5 of 6: 329, step6 of 6: 0 } }
m30000| Fri Feb 22 11:22:14.150 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:14.153 [conn10] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:14.178 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:14.195 [conn10] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:14.207 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:14.288 [conn10] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:14.321 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 12.0 }, max: { _id: 13.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_12.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:31 r:186 w:15 reslen:37 1158ms
m30999| Fri Feb 22 11:22:14.321 [Balancer] moveChunk result: { ok: 1.0 }
m30001| Fri Feb 22 11:22:14.322 [conn10] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:22:14.322 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 36 version: 15|1||5127547fd4b973931fc9a229 based on: 14|1||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:22:14.323 [Balancer] *** end of balancing round
m30000| Fri Feb 22 11:22:14.323 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:14.355 [conn10] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:14.359 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:14.398 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:14.458 [conn10] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:22:14.491 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
m30001| Fri Feb 22 11:22:14.492 [conn10] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:14.523 [conn10] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:14.629 [conn10] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:14.662 [conn10] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:14.687 [conn10] CMD fsync: sync:1 lock:0
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("5127547dd4b973931fc9a225") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000", "tags" : [ "a" ] }
    { "_id" : "shard0001", "host" : "localhost:30001", "tags" : [ "a" ] }
    { "_id" : "shard0002", "host" : "localhost:30002" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "test", "partitioned" : true, "primary" : "shard0000" }
      test.foo
        shard key: { "_id" : 1 }
        chunks:
          shard0001 7
          shard0002 7
          shard0000 7
        { "_id" : { "$minKey" : 1 } } -->> { "_id" : 0 } on : shard0001 { "t" : 2000, "i" : 0 }
        { "_id" : 0 } -->> { "_id" : 1 } on : shard0002 { "t" : 3000, "i" : 0 }
        { "_id" : 1 } -->> { "_id" : 2 } on : shard0001 { "t" : 4000, "i" : 0 }
        { "_id" : 2 } -->> { "_id" : 3 } on : shard0002 { "t" : 5000, "i" : 0 }
        { "_id" : 3 } -->> { "_id" : 4 } on : shard0001 { "t" : 6000, "i" : 0 }
        { "_id" : 4 } -->> { "_id" : 5 } on : shard0002 { "t" : 7000, "i" : 0 }
        { "_id" : 5 } -->> { "_id" : 6 } on : shard0001 { "t" : 8000, "i" : 0 }
        { "_id" : 6 } -->> { "_id" : 7 } on : shard0002 { "t" : 9000, "i" : 0 }
        { "_id" : 7 } -->> { "_id" : 8 } on : shard0001 { "t" : 10000, "i" : 0 }
        { "_id" : 8 } -->> { "_id" : 9 } on : shard0002 { "t" : 11000, "i" : 0 }
        { "_id" : 9 } -->> { "_id" : 10 } on : shard0001 { "t" : 12000, "i" : 0 }
        { "_id" : 10 } -->> { "_id" : 11 } on : shard0002 { "t" : 13000, "i" : 0 }
        { "_id" : 11 } -->> { "_id" : 12 } on : shard0001 { "t" : 14000, "i" : 0 }
        { "_id" : 12 } -->> { "_id" : 13 } on : shard0002 { "t" : 15000, "i" : 0 }
        { "_id" : 13 } -->> { "_id" : 14 } on : shard0000 { "t" : 15000, "i" : 1 }
        { "_id" : 14 } -->> { "_id" : 15 } on : shard0000 { "t" : 1000, "i" : 31 }
        { "_id" : 15 } -->> { "_id" : 16 } on : shard0000 { "t" : 1000, "i" : 33 }
        { "_id" : 16 } -->> { "_id" : 17 } on : shard0000 { "t" : 1000, "i" : 35 }
        { "_id" : 17 } -->> { "_id" : 18 } on : shard0000 { "t" : 1000, "i" : 37 }
        { "_id" : 18 } -->> { "_id" : 19 } on : shard0000 { "t" : 1000, "i" : 39 }
        { "_id" : 19 } -->> { "_id" : { "$maxKey" : 1 } } on : shard0000 { "t" : 1000, "i" : 40 }
        tag: a { "_id" : -1 } -->> { "_id" : 1000 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 7 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 7 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 7 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 7 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 7 }
m30999| Fri Feb 22 11:22:19.492 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:22:19.493 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:22:19.493 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:22:19 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512754ebd4b973931fc9a238" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512754e4d4b973931fc9a237" } }
m30000| Fri Feb 22 11:22:19.493 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:19.524 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:19.561 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:19.632 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:19.661 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:19.696 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:22:19.769 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 512754ebd4b973931fc9a238
m30999| Fri Feb 22 11:22:19.769 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:22:19.769 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:22:19.769 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:22:19.771 [Balancer] ns: test.foo need to split on { _id: -1.0 } because there is a range there
m30001| Fri Feb 22 11:22:19.771 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: 0.0 }, from: "shard0001", splitKeys: [ { _id: -1.0 } ], shardId: "test.foo-_id_MinKey", configdb: "localhost:30000,localhost:30001,localhost:30002" }
m30001| Fri Feb 22 11:22:19.772 [initandlisten] connection accepted from 127.0.0.1:48199 #20 (19 connections now open)
m30002| Fri Feb 22 11:22:19.773 [initandlisten] connection accepted from 127.0.0.1:61722 #20 (19 connections now open)
m30001| Fri Feb 22 11:22:19.774 [LockPinger] creating distributed lock ping thread for localhost:30000,localhost:30001,localhost:30002 and process bs-smartos-x86-64-1.10gen.cc:30001:1361532139:10113 (sleeping for 30000ms)
m30001| Fri Feb 22 11:22:19.774 [conn11] SyncClusterConnection connecting to [localhost:30000]
m30000| Fri Feb 22 11:22:19.774 [conn16] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:19.774 [initandlisten] connection accepted from 127.0.0.1:46014 #21 (20 connections now open)
m30001| Fri Feb 22 11:22:19.774 [conn11] SyncClusterConnection connecting to [localhost:30001]
m30001| Fri Feb 22 11:22:19.775 [initandlisten] connection accepted from 127.0.0.1:38642 #21 (20 connections now open)
m30001| Fri Feb 22 11:22:19.775 [conn11] SyncClusterConnection connecting to [localhost:30002]
m30002| Fri Feb 22 11:22:19.775 [initandlisten] connection accepted from 127.0.0.1:36772 #21 (20 connections now open)
m30000| Fri Feb 22 11:22:19.775 [conn21] CMD fsync: sync:1 lock:0
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 7 }
m30001| Fri Feb 22 11:22:19.807 [conn16] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:19.810 [conn21] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:19.851 [conn15] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:19.861 [conn21] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:19.939 [conn21] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:19.939 [conn16] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:19.980 [conn21] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:19.985 [conn16] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:20.021 [conn15] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:20.027 [conn21] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:20.110 [conn16] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:20.110 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361532139:10113' acquired, ts : 512754eb78e37a7f0861eba8
m30001| Fri Feb 22 11:22:20.110 [conn11] SyncClusterConnection connecting to [localhost:30000]
m30001| Fri Feb 22 11:22:20.110 [conn11] SyncClusterConnection connecting to [localhost:30001]
m30000| Fri Feb 22 11:22:20.110 [initandlisten] connection accepted from 127.0.0.1:46249 #22 (21 connections now open)
m30001| Fri Feb 22 11:22:20.110 [conn11] SyncClusterConnection connecting to [localhost:30002]
m30001| Fri Feb 22 11:22:20.110 [initandlisten] connection accepted from 127.0.0.1:42216 #22 (21 connections now open)
m30002| Fri Feb 22 11:22:20.111 [initandlisten] connection accepted from 127.0.0.1:50697 #22 (21 connections now open)
m30001| Fri Feb 22 11:22:20.121 [conn11] no current chunk manager found for this shard, will initialize
m30001| Fri Feb 22 11:22:20.122 [conn11] splitChunk accepted at version 14|0||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:22:20.122 [conn22] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:20.135 [conn16] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:20.149 [conn22] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:20.173 [conn15] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:20.182 [conn22] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:20.280 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:20-512754ec78e37a7f0861eba9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:42693", time: new Date(1361532140280), what: "split", ns: "test.foo", details: { before: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: -1.0 }, lastmod: Timestamp 15000|2, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: -1.0 }, max: { _id: 0.0 }, lastmod: Timestamp 15000|3, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } }
m30000| Fri Feb 22 11:22:20.280 [conn22] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:20.308 [conn22] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:20.346 [conn22] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:20.417 [conn21] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:20.447 [conn21] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:20.485 [conn21] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:20.587 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361532139:10113' unlocked.
m30001| Fri Feb 22 11:22:20.587 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: 0.0 }, from: "shard0001", splitKeys: [ { _id: -1.0 } ], shardId: "test.foo-_id_MinKey", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:126 reslen:37 816ms
m30999| Fri Feb 22 11:22:20.589 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 37 version: 15|3||5127547fd4b973931fc9a229 based on: 15|1||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:22:20.589 [Balancer] split worked: { ok: 1.0 }
m30999| Fri Feb 22 11:22:20.589 [Balancer] no need to move any chunk
m30999| Fri Feb 22 11:22:20.589 [Balancer] *** end of balancing round
m30000| Fri Feb 22 11:22:20.589 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:20.623 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:20.664 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:22:20.758 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
m30000| Fri Feb 22 11:22:29.561 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:29.596 [conn6] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:29.628 [conn6] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:29.711 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:29.742 [conn6] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:29.769 [conn6] CMD fsync: sync:1 lock:0
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
m30999| Fri Feb 22 11:22:29.847 [LockPinger] cluster localhost:30000,localhost:30001,localhost:30002 pinged successfully at Fri Feb 22 11:22:29 2013 by distributed lock pinger 'localhost:30000,localhost:30001,localhost:30002/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838', sleeping for 30000ms
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
m30000| Fri Feb 22 11:22:33.208 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:33.238 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:33.266 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:33.353 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:33.384 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:33.412 [conn12] CMD fsync: sync:1 lock:0
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
m30000| Fri Feb 22 11:22:50.247 [conn16] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:50.269 [conn16] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:50.306 [conn15] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:50.405 [conn16] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:50.427 [conn16] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:50.466 [conn15] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:22:50.759 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:22:50.759 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:22:50.759 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:22:50 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "5127550ad4b973931fc9a239" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512754ebd4b973931fc9a238" } }
m30000| Fri Feb 22 11:22:50.759 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:50.786 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:50.819 [conn5] CMD fsync: sync:1 lock:0
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
m30000| Fri Feb 22 11:22:50.882 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:50.909 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:50.942 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:22:51.018 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 5127550ad4b973931fc9a239
m30999| Fri Feb 22 11:22:51.018 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:22:51.018 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:22:51.019 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:22:51.020 [Balancer] chunk { _id: "test.foo-_id_0.0", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 0.0 }, max: { _id: 1.0 }, shard: "shard0002" } is not on a shard with the right tag: a
m30999| Fri Feb 22 11:22:51.020 [Balancer] shard0001 has more chunks me:8 best: shard0000:7
m30999| Fri Feb 22 11:22:51.020 [Balancer] shard0002 doesn't have right tag
m30999| Fri Feb 22 11:22:51.020 [Balancer] going to move to: shard0000
m30999| Fri Feb 22 11:22:51.020 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0002:localhost:30002lastmod: 3|0||000000000000000000000000min: { _id: 0.0 }max: { _id: 1.0 }) shard0002:localhost:30002 -> shard0000:localhost:30000
m30002| Fri Feb 22 11:22:51.021 [conn11] warning: secondaryThrottle selected but no replication
m30002| Fri Feb 22 11:22:51.021 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30000", fromShard: "shard0002", toShard: "shard0000", min: { _id: 0.0 }, max: { _id: 1.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 11:22:51.021 [initandlisten] connection accepted from 127.0.0.1:58098 #23 (22 connections now open)
m30002| Fri Feb 22 11:22:51.022 [initandlisten] connection accepted from 127.0.0.1:41207 #23 (22 connections now open)
m30002| Fri Feb 22 11:22:51.024 [conn11] SyncClusterConnection connecting to [localhost:30000]
m30002| Fri Feb 22 11:22:51.024 [LockPinger] creating distributed lock ping thread for localhost:30000,localhost:30001,localhost:30002 and process bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548 (sleeping for 30000ms)
m30000| Fri Feb 22 11:22:51.024 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:51.024 [conn11] SyncClusterConnection connecting to [localhost:30001]
m30000| Fri Feb 22 11:22:51.024 [initandlisten] connection accepted from 127.0.0.1:40174 #23 (22 connections now open)
m30002| Fri Feb 22 11:22:51.024 [conn11] SyncClusterConnection connecting to [localhost:30002]
m30001| Fri Feb 22 11:22:51.024 [initandlisten] connection accepted from 127.0.0.1:44990 #24 (23 connections now open)
m30002| Fri Feb 22 11:22:51.024 [initandlisten] connection accepted from 127.0.0.1:62763 #24 (23 connections now open)
m30000| Fri Feb 22 11:22:51.032 [conn23] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:51.049 [conn18] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:51.053 [conn24] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:51.083 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:51.091 [conn24] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:51.155 [conn23] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:51.181 [conn24] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:51.189 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:51.215 [conn24] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:51.216 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:51.245 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:51.291 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' acquired, ts : 5127550baaaba61d9eb25102
m30002| Fri Feb 22 11:22:51.291 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:51-5127550baaaba61d9eb25103", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532171291), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, from: "shard0002", to: "shard0000" } }
m30002| Fri Feb 22 11:22:51.291 [conn11] SyncClusterConnection connecting to [localhost:30000]
m30002| Fri Feb 22 11:22:51.291 [conn11] SyncClusterConnection connecting to [localhost:30001]
m30000| Fri Feb 22 11:22:51.291 [initandlisten] connection accepted from 127.0.0.1:45942 #24 (23 connections now open)
m30002| Fri Feb 22 11:22:51.291 [conn11] SyncClusterConnection connecting to [localhost:30002]
m30001| Fri Feb 22 11:22:51.291 [initandlisten] connection accepted from 127.0.0.1:42360 #25 (24 connections now open)
m30002| Fri Feb 22 11:22:51.292 [initandlisten] connection accepted from 127.0.0.1:33710 #25 (24 connections now open)
m30000| Fri Feb 22 11:22:51.292 [conn24] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:51.318 [conn25] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:51.325 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:51.351 [conn25] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:51.351 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:51.379 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:51.427 [conn11] no current chunk manager found for this shard, will initialize
m30002| Fri Feb 22 11:22:51.428 [conn11] moveChunk request accepted at version 15|0||5127547fd4b973931fc9a229
m30002| Fri Feb 22 11:22:51.428 [conn11] moveChunk number of documents: 1
m30000| Fri Feb 22 11:22:51.429 [migrateThread] starting receiving-end of migration of chunk { _id: 0.0 } -> { _id: 1.0 } for collection 
test.foo from localhost:30002 (0 slaves detected) m30000| Fri Feb 22 11:22:51.430 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 11:22:51.430 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.0 } -> { _id: 1.0 } m30000| Fri Feb 22 11:22:51.430 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.0 } -> { _id: 1.0 } m30002| Fri Feb 22 11:22:51.439 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 0.0 }, max: { _id: 1.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 11:22:51.439 [conn11] moveChunk setting version to: 16|0||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:22:51.439 [initandlisten] connection accepted from 127.0.0.1:57636 #25 (24 connections now open) m30000| Fri Feb 22 11:22:51.439 [conn25] Waiting for commit to finish m30000| Fri Feb 22 11:22:51.441 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.0 } -> { _id: 1.0 } m30000| Fri Feb 22 11:22:51.441 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.0 } -> { _id: 1.0 } m30000| Fri Feb 22 11:22:51.441 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:51-5127550b0cfd6a2130a0ac3e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532171441), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30000| Fri Feb 22 11:22:51.441 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:51.450 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 0.0 }, max: { _id: 1.0 }, shardKeyPattern: { _id: 1.0 }, 
state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } m30002| Fri Feb 22 11:22:51.450 [conn11] moveChunk updating self version to: 16|1||5127547fd4b973931fc9a229 through { _id: 2.0 } -> { _id: 3.0 } for collection 'test.foo' m30002| Fri Feb 22 11:22:51.450 [conn11] SyncClusterConnection connecting to [localhost:30000] m30002| Fri Feb 22 11:22:51.450 [conn11] SyncClusterConnection connecting to [localhost:30001] m30000| Fri Feb 22 11:22:51.450 [initandlisten] connection accepted from 127.0.0.1:65387 #26 (25 connections now open) m30002| Fri Feb 22 11:22:51.450 [conn11] SyncClusterConnection connecting to [localhost:30002] m30001| Fri Feb 22 11:22:51.450 [initandlisten] connection accepted from 127.0.0.1:53938 #26 (25 connections now open) m30002| Fri Feb 22 11:22:51.450 [initandlisten] connection accepted from 127.0.0.1:37179 #26 (25 connections now open) m30000| Fri Feb 22 11:22:51.451 [conn26] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:51.471 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:51.477 [conn26] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:51.512 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:51.524 [conn26] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:51.598 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:51-5127550baaaba61d9eb25104", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532171598), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, from: "shard0002", to: "shard0000" } } m30000| Fri Feb 22 11:22:51.598 [conn19] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:51.624 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:51.654 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:51.734 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30002| Fri Feb 22 11:22:51.734 [conn11] 
MigrateFromStatus::done Global lock acquired m30002| Fri Feb 22 11:22:51.734 [conn11] forking for cleanup of chunk data m30002| Fri Feb 22 11:22:51.734 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30002| Fri Feb 22 11:22:51.735 [conn11] MigrateFromStatus::done Global lock acquired m30002| Fri Feb 22 11:22:51.735 [cleanupOldData-5127550baaaba61d9eb25105] (start) waiting to cleanup test.foo from { _id: 0.0 } -> { _id: 1.0 }, # cursors remaining: 0 m30000| Fri Feb 22 11:22:51.735 [conn23] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:51.755 [cleanupOldData-5127550baaaba61d9eb25105] waiting to remove documents for test.foo from { _id: 0.0 } -> { _id: 1.0 } m30002| Fri Feb 22 11:22:51.755 [cleanupOldData-5127550baaaba61d9eb25105] moveChunk starting delete for: test.foo from { _id: 0.0 } -> { _id: 1.0 } m30002| Fri Feb 22 11:22:51.755 [cleanupOldData-5127550baaaba61d9eb25105] moveChunk deleted 1 documents for test.foo from { _id: 0.0 } -> { _id: 1.0 } m30001| Fri Feb 22 11:22:51.761 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:51.801 [conn24] CMD fsync: sync:1 lock:0 { "shard0002" : 6, "shard0000" : 8, "shard0001" : 8 } m30002| Fri Feb 22 11:22:51.871 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' unlocked. 
m30002| Fri Feb 22 11:22:51.871 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:51-5127550baaaba61d9eb25106", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532171871), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, step1 of 6: 0, step2 of 6: 407, step3 of 6: 0, step4 of 6: 10, step5 of 6: 295, step6 of 6: 0 } } m30000| Fri Feb 22 11:22:51.871 [conn19] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:51.897 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:51.927 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:52.008 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30000", fromShard: "shard0002", toShard: "shard0000", min: { _id: 0.0 }, max: { _id: 1.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:42 r:72 w:13 reslen:37 987ms m30999| Fri Feb 22 11:22:52.008 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:22:52.009 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 38 version: 16|1||5127547fd4b973931fc9a229 based on: 15|3||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:22:52.009 [Balancer] *** end of balancing round m30000| Fri Feb 22 11:22:52.009 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:52.044 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:52.084 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:22:52.144 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked. 
{ "shard0002" : 6, "shard0000" : 8, "shard0001" : 8 } { "shard0002" : 6, "shard0000" : 8, "shard0001" : 8 } { "shard0002" : 6, "shard0000" : 8, "shard0001" : 8 } { "shard0002" : 6, "shard0000" : 8, "shard0001" : 8 } { "shard0002" : 6, "shard0000" : 8, "shard0001" : 8 } m30999| Fri Feb 22 11:22:57.145 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:22:57.146 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 ) m30999| Fri Feb 22 11:22:57.146 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:22:57 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51275511d4b973931fc9a23a" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127550ad4b973931fc9a239" } } m30000| Fri Feb 22 11:22:57.146 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:57.180 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:57.221 [conn5] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:22:57.285 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:57.319 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:57.359 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:22:57.421 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 51275511d4b973931fc9a23a m30999| Fri Feb 22 11:22:57.421 [Balancer] *** start balancing round m30999| Fri Feb 22 11:22:57.421 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:22:57.421 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:22:57.423 [Balancer] chunk { _id: 
"test.foo-_id_2.0", lastmod: Timestamp 16000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 2.0 }, max: { _id: 3.0 }, shard: "shard0002" } is not on a shard with the right tag: a m30999| Fri Feb 22 11:22:57.423 [Balancer] shard0001 has more chunks me:8 best: shard0000:8 m30999| Fri Feb 22 11:22:57.423 [Balancer] shard0002 doesn't have right tag m30999| Fri Feb 22 11:22:57.423 [Balancer] going to move to: shard0000 m30999| Fri Feb 22 11:22:57.423 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0002:localhost:30002lastmod: 16|1||000000000000000000000000min: { _id: 2.0 }max: { _id: 3.0 }) shard0002:localhost:30002 -> shard0000:localhost:30000 m30002| Fri Feb 22 11:22:57.423 [conn11] warning: secondaryThrottle selected but no replication m30002| Fri Feb 22 11:22:57.424 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30000", fromShard: "shard0002", toShard: "shard0000", min: { _id: 2.0 }, max: { _id: 3.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_2.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 11:22:57.424 [conn23] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:57.449 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:57.490 [conn24] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:22:57.558 [conn23] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:57.583 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:57.624 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:57.694 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' acquired, ts : 51275511aaaba61d9eb25107 m30002| Fri Feb 22 11:22:57.694 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:57-51275511aaaba61d9eb25108", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new 
Date(1361532177694), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, from: "shard0002", to: "shard0000" } } m30000| Fri Feb 22 11:22:57.694 [conn19] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:57.720 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:57.750 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:57.832 [conn11] moveChunk request accepted at version 16|1||5127547fd4b973931fc9a229 m30002| Fri Feb 22 11:22:57.832 [conn11] moveChunk number of documents: 1 m30000| Fri Feb 22 11:22:57.832 [migrateThread] starting receiving-end of migration of chunk { _id: 2.0 } -> { _id: 3.0 } for collection test.foo from localhost:30002 (0 slaves detected) m30000| Fri Feb 22 11:22:57.833 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 11:22:57.833 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 2.0 } -> { _id: 3.0 } m30000| Fri Feb 22 11:22:57.834 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 2.0 } -> { _id: 3.0 } m30002| Fri Feb 22 11:22:57.843 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 2.0 }, max: { _id: 3.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 11:22:57.843 [conn11] moveChunk setting version to: 17|0||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:22:57.843 [conn25] Waiting for commit to finish m30000| Fri Feb 22 11:22:57.844 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 2.0 } -> { _id: 3.0 } m30000| Fri Feb 22 11:22:57.844 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 2.0 } -> { _id: 3.0 } m30000| Fri Feb 22 11:22:57.844 [migrateThread] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:57-512755110cfd6a2130a0ac3f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532177844), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30000| Fri Feb 22 11:22:57.844 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:57.853 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 2.0 }, max: { _id: 3.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } m30002| Fri Feb 22 11:22:57.853 [conn11] moveChunk updating self version to: 17|1||5127547fd4b973931fc9a229 through { _id: 4.0 } -> { _id: 5.0 } for collection 'test.foo' m30000| Fri Feb 22 11:22:57.853 [conn26] CMD fsync: sync:1 lock:0 { "shard0002" : 6, "shard0000" : 8, "shard0001" : 8 } m30001| Fri Feb 22 11:22:57.893 [conn26] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:57.893 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:57.935 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:57.944 [conn26] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:58.036 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:58-51275512aaaba61d9eb25109", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532178036), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, from: "shard0002", to: "shard0000" } } m30000| Fri Feb 22 11:22:58.036 [conn19] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:58.064 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:58.096 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:58.172 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30002| Fri Feb 22 11:22:58.172 
[conn11] MigrateFromStatus::done Global lock acquired m30002| Fri Feb 22 11:22:58.172 [conn11] forking for cleanup of chunk data m30002| Fri Feb 22 11:22:58.173 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30002| Fri Feb 22 11:22:58.173 [conn11] MigrateFromStatus::done Global lock acquired m30002| Fri Feb 22 11:22:58.173 [cleanupOldData-51275512aaaba61d9eb2510a] (start) waiting to cleanup test.foo from { _id: 2.0 } -> { _id: 3.0 }, # cursors remaining: 0 m30000| Fri Feb 22 11:22:58.173 [conn23] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:58.193 [cleanupOldData-51275512aaaba61d9eb2510a] waiting to remove documents for test.foo from { _id: 2.0 } -> { _id: 3.0 } m30002| Fri Feb 22 11:22:58.193 [cleanupOldData-51275512aaaba61d9eb2510a] moveChunk starting delete for: test.foo from { _id: 2.0 } -> { _id: 3.0 } m30002| Fri Feb 22 11:22:58.193 [cleanupOldData-51275512aaaba61d9eb2510a] moveChunk deleted 1 documents for test.foo from { _id: 2.0 } -> { _id: 3.0 } m30001| Fri Feb 22 11:22:58.199 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:58.239 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:58.309 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' unlocked. 
m30002| Fri Feb 22 11:22:58.309 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:58-51275512aaaba61d9eb2510b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532178309), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, step1 of 6: 0, step2 of 6: 408, step3 of 6: 0, step4 of 6: 10, step5 of 6: 329, step6 of 6: 0 } } m30000| Fri Feb 22 11:22:58.309 [conn19] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:58.335 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:58.365 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:58.446 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30000", fromShard: "shard0002", toShard: "shard0000", min: { _id: 2.0 }, max: { _id: 3.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_2.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:27 r:139 w:22 reslen:37 1022ms m30999| Fri Feb 22 11:22:58.446 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:22:58.447 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 39 version: 17|1||5127547fd4b973931fc9a229 based on: 16|1||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:22:58.447 [Balancer] *** end of balancing round m30000| Fri Feb 22 11:22:58.447 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:58.481 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:58.522 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:22:58.616 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked. 
{ "shard0002" : 5, "shard0000" : 9, "shard0001" : 8 } m30000| Fri Feb 22 11:22:59.847 [conn7] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:59.873 [conn7] CMD fsync: sync:1 lock:0 { "shard0002" : 5, "shard0000" : 9, "shard0001" : 8 } m30002| Fri Feb 22 11:22:59.908 [conn7] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:22:59.979 [conn7] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:00.004 [conn7] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:00.040 [conn7] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:23:00.115 [LockPinger] cluster localhost:30000,localhost:30001,localhost:30002 pinged successfully at Fri Feb 22 11:22:59 2013 by distributed lock pinger 'localhost:30000,localhost:30001,localhost:30002/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838', sleeping for 30000ms { "shard0002" : 5, "shard0000" : 9, "shard0001" : 8 } { "shard0002" : 5, "shard0000" : 9, "shard0001" : 8 } { "shard0002" : 5, "shard0000" : 9, "shard0001" : 8 } m30000| Fri Feb 22 11:23:03.489 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:03.518 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:03.548 [conn12] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:23:03.617 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:23:03.617 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 ) m30999| Fri Feb 22 11:23:03.618 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:23:03 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51275517d4b973931fc9a23b" } } m30999| { "_id" : "balancer", 
m30999| "state" : 0, m30999| "ts" : { "$oid" : "51275511d4b973931fc9a23a" } } m30000| Fri Feb 22 11:23:03.618 [conn5] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:23:03.628 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:03.647 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:03.665 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:03.685 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:03.698 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:23:03.764 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:03.794 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:03.827 [conn5] CMD fsync: sync:1 lock:0 { "shard0002" : 5, "shard0000" : 9, "shard0001" : 8 } m30999| Fri Feb 22 11:23:03.901 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 51275517d4b973931fc9a23b m30999| Fri Feb 22 11:23:03.901 [Balancer] *** start balancing round m30999| Fri Feb 22 11:23:03.901 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:23:03.901 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:23:03.903 [Balancer] chunk { _id: "test.foo-_id_4.0", lastmod: Timestamp 17000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 4.0 }, max: { _id: 5.0 }, shard: "shard0002" } is not on a shard with the right tag: a m30999| Fri Feb 22 11:23:03.903 [Balancer] shard0002 doesn't have right tag m30999| Fri Feb 22 11:23:03.903 [Balancer] going to move to: shard0001 m30999| Fri Feb 22 11:23:03.903 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0002:localhost:30002lastmod: 17|1||000000000000000000000000min: { _id: 4.0 }max: { _id: 5.0 }) shard0002:localhost:30002 -> shard0001:localhost:30001 m30002| Fri Feb 22 11:23:03.903 [conn11] warning: secondaryThrottle selected but no replication m30002| Fri Feb 22 11:23:03.903 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30001", fromShard: 
"shard0002", toShard: "shard0001", min: { _id: 4.0 }, max: { _id: 5.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_4.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 11:23:03.903 [conn23] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:03.929 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:03.964 [conn24] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:23:04.037 [conn23] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:04.063 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:04.098 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:04.174 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' acquired, ts : 51275517aaaba61d9eb2510c m30002| Fri Feb 22 11:23:04.174 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:04-51275518aaaba61d9eb2510d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532184174), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 4.0 }, max: { _id: 5.0 }, from: "shard0002", to: "shard0001" } } m30000| Fri Feb 22 11:23:04.174 [conn19] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:04.199 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:04.228 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:04.311 [conn11] moveChunk request accepted at version 17|1||5127547fd4b973931fc9a229 m30002| Fri Feb 22 11:23:04.311 [conn11] moveChunk number of documents: 1 m30001| Fri Feb 22 11:23:04.311 [migrateThread] starting receiving-end of migration of chunk { _id: 4.0 } -> { _id: 5.0 } for collection test.foo from localhost:30002 (0 slaves detected) m30001| Fri Feb 22 11:23:04.312 [migrateThread] Waiting for replication to catch up before entering critical section m30001| Fri Feb 22 11:23:04.312 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 4.0 } -> { 
_id: 5.0 } m30001| Fri Feb 22 11:23:04.313 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 4.0 } -> { _id: 5.0 } m30002| Fri Feb 22 11:23:04.322 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 4.0 }, max: { _id: 5.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 11:23:04.322 [conn11] moveChunk setting version to: 18|0||5127547fd4b973931fc9a229 m30001| Fri Feb 22 11:23:04.322 [initandlisten] connection accepted from 127.0.0.1:49673 #27 (26 connections now open) m30001| Fri Feb 22 11:23:04.322 [conn27] Waiting for commit to finish m30001| Fri Feb 22 11:23:04.323 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 4.0 } -> { _id: 5.0 } m30001| Fri Feb 22 11:23:04.323 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 4.0 } -> { _id: 5.0 } m30001| Fri Feb 22 11:23:04.323 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:04-5127551878e37a7f0861ebaa", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532184323), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 4.0 }, max: { _id: 5.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30000| Fri Feb 22 11:23:04.323 [conn16] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:04.332 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 4.0 }, max: { _id: 5.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } m30002| Fri Feb 22 11:23:04.332 [conn11] moveChunk updating self version to: 18|1||5127547fd4b973931fc9a229 through { _id: 6.0 } -> { _id: 7.0 } for collection 'test.foo' m30000| Fri Feb 22 11:23:04.332 
[conn26] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:04.348 [conn16] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:04.353 [conn26] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:04.393 [conn26] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:04.403 [conn15] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:04.481 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:04-51275518aaaba61d9eb2510e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532184481), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 4.0 }, max: { _id: 5.0 }, from: "shard0002", to: "shard0001" } } m30000| Fri Feb 22 11:23:04.481 [conn19] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:04.507 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:04.538 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:04.617 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30002| Fri Feb 22 11:23:04.617 [conn11] MigrateFromStatus::done Global lock acquired m30002| Fri Feb 22 11:23:04.617 [conn11] forking for cleanup of chunk data m30002| Fri Feb 22 11:23:04.617 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30002| Fri Feb 22 11:23:04.617 [conn11] MigrateFromStatus::done Global lock acquired m30002| Fri Feb 22 11:23:04.618 [cleanupOldData-51275518aaaba61d9eb2510f] (start) waiting to cleanup test.foo from { _id: 4.0 } -> { _id: 5.0 }, # cursors remaining: 0 m30000| Fri Feb 22 11:23:04.618 [conn23] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:04.638 [cleanupOldData-51275518aaaba61d9eb2510f] waiting to remove documents for test.foo from { _id: 4.0 } -> { _id: 5.0 } m30002| Fri Feb 22 11:23:04.638 [cleanupOldData-51275518aaaba61d9eb2510f] moveChunk starting delete for: test.foo from { _id: 4.0 } -> { _id: 5.0 } m30002| Fri Feb 22 11:23:04.638 [cleanupOldData-51275518aaaba61d9eb2510f] moveChunk 
deleted 1 documents for test.foo from { _id: 4.0 } -> { _id: 5.0 }
 m30001| Fri Feb 22 11:23:04.643 [conn24] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:04.679 [conn24] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:04.754 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' unlocked.
 m30002| Fri Feb 22 11:23:04.754 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:04-51275518aaaba61d9eb25110", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532184754), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 4.0 }, max: { _id: 5.0 }, step1 of 6: 0, step2 of 6: 407, step3 of 6: 0, step4 of 6: 10, step5 of 6: 295, step6 of 6: 0 } }
 m30000| Fri Feb 22 11:23:04.754 [conn19] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:04.782 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:04.817 [conn18] CMD fsync: sync:1 lock:0
{ "shard0002" : 4, "shard0000" : 9, "shard0001" : 9 }
 m30002| Fri Feb 22 11:23:04.890 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30001", fromShard: "shard0002", toShard: "shard0001", min: { _id: 4.0 }, max: { _id: 5.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_4.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:17 r:125 w:14 reslen:37 987ms
 m30999| Fri Feb 22 11:23:04.890 [Balancer] moveChunk result: { ok: 1.0 }
 m30999| Fri Feb 22 11:23:04.892 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 40 version: 18|1||5127547fd4b973931fc9a229 based on: 17|1||5127547fd4b973931fc9a229
 m30999| Fri Feb 22 11:23:04.892 [Balancer] *** end of balancing round
 m30000| Fri Feb 22 11:23:04.892 [conn5] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:04.922 [conn5] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:04.958 [conn5] CMD fsync: sync:1 lock:0
 m30999| Fri Feb 22 11:23:05.027 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
{ "shard0002" : 4, "shard0000" : 9, "shard0001" : 9 }
{ "shard0002" : 4, "shard0000" : 9, "shard0001" : 9 }
{ "shard0002" : 4, "shard0000" : 9, "shard0001" : 9 }
{ "shard0002" : 4, "shard0000" : 9, "shard0001" : 9 }
{ "shard0002" : 4, "shard0000" : 9, "shard0001" : 9 }
 m30999| Fri Feb 22 11:23:10.027 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Fri Feb 22 11:23:10.028 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
 m30999| Fri Feb 22 11:23:10.028 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 11:23:10 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "5127551ed4b973931fc9a23c" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0,
 m30999|   "ts" : { "$oid" : "51275517d4b973931fc9a23b" } }
 m30000| Fri Feb 22 11:23:10.028 [conn5] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:10.062 [conn5] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:10.103 [conn5] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:23:10.210 [conn5] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:10.244 [conn5] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:10.281 [conn5] CMD fsync: sync:1 lock:0
 m30999| Fri Feb 22 11:23:10.347 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 5127551ed4b973931fc9a23c
 m30999| Fri Feb 22 11:23:10.347 [Balancer] *** start balancing round
 m30999| Fri Feb 22 11:23:10.347 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 11:23:10.347 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 11:23:10.349 [Balancer] chunk { _id: "test.foo-_id_6.0", lastmod: Timestamp 18000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 6.0 }, max: { _id: 7.0 }, shard: "shard0002" } is not on a shard with the right tag: a
 m30999| Fri Feb 22 11:23:10.349 [Balancer] shard0001 has more chunks me:9 best: shard0000:9
 m30999| Fri Feb 22 11:23:10.349 [Balancer] shard0002 doesn't have right tag
 m30999| Fri Feb 22 11:23:10.349 [Balancer] going to move to: shard0000
 m30999| Fri Feb 22 11:23:10.349 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0002:localhost:30002lastmod: 18|1||000000000000000000000000min: { _id: 6.0 }max: { _id: 7.0 }) shard0002:localhost:30002 -> shard0000:localhost:30000
 m30002| Fri Feb 22 11:23:10.349 [conn11] warning: secondaryThrottle selected but no replication
 m30002| Fri Feb 22 11:23:10.349 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30000", fromShard: "shard0002", toShard: "shard0000", min: { _id: 6.0 }, max: { _id: 7.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_6.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false }
 m30000| Fri Feb 22 11:23:10.350 [conn23] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:10.375 [conn24] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:10.416 [conn24] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:23:10.518 [conn23] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:10.545 [conn24] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:10.583 [conn24] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:10.654 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' acquired, ts : 5127551eaaaba61d9eb25111
 m30002| Fri Feb 22 11:23:10.654 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:10-5127551eaaaba61d9eb25112", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532190654), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 6.0 }, max: { _id: 7.0 }, from: "shard0002", to: "shard0000" } }
 m30000| Fri Feb 22 11:23:10.654 [conn19] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:10.680 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:10.710 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:10.859 [conn18] command admin.$cmd command: { getlasterror: 1, fsync: 1 } ntoreturn:1 keyUpdates:0 reslen:79 102ms
 m30002| Fri Feb 22 11:23:10.860 [conn11] moveChunk request accepted at version 18|1||5127547fd4b973931fc9a229
 m30002| Fri Feb 22 11:23:10.860 [conn11] moveChunk number of documents: 1
 m30000| Fri Feb 22 11:23:10.860 [migrateThread] starting receiving-end of migration of chunk { _id: 6.0 } -> { _id: 7.0 } for collection test.foo from localhost:30002 (0 slaves detected)
 m30000| Fri Feb 22 11:23:10.861 [migrateThread] Waiting for replication to catch up before entering critical section
 m30000| Fri Feb 22 11:23:10.861 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 6.0 } -> { _id: 7.0 }
 m30000| Fri Feb 22 11:23:10.861 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 6.0 } -> { _id: 7.0 }
 m30002| Fri Feb 22 11:23:10.870 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 6.0 }, max: { _id: 7.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30002| Fri Feb 22 11:23:10.870 [conn11] moveChunk setting version to: 19|0||5127547fd4b973931fc9a229
 m30000| Fri Feb 22 11:23:10.870 [conn25] Waiting for commit to finish
 m30000| Fri Feb 22 11:23:10.872 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 6.0 } -> { _id: 7.0 }
 m30000| Fri Feb 22 11:23:10.872 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 6.0 } -> { _id: 7.0 }
 m30000| Fri Feb 22 11:23:10.872 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:10-5127551e0cfd6a2130a0ac40", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532190872), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 6.0 }, max: { _id: 7.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
 m30000| Fri Feb 22 11:23:10.872 [conn12] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:10.880 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 6.0 }, max: { _id: 7.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
 m30002| Fri Feb 22 11:23:10.881 [conn11] moveChunk updating self version to: 19|1||5127547fd4b973931fc9a229 through { _id: 8.0 } -> { _id: 9.0 } for collection 'test.foo'
 m30000| Fri Feb 22 11:23:10.881 [conn26] CMD fsync: sync:1 lock:0
{ "shard0002" : 4, "shard0000" : 9, "shard0001" : 9 }
 m30001| Fri Feb 22 11:23:10.917 [conn26] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:10.917 [conn12] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:10.957 [conn12] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:10.962 [conn26] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:11.063 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:11-5127551faaaba61d9eb25113", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532191063), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 6.0 }, max: { _id: 7.0 }, from: "shard0002", to: "shard0000" } }
 m30000| Fri Feb 22 11:23:11.063 [conn19] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:11.089 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:11.120 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:11.200 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30002| Fri Feb 22 11:23:11.200 [conn11] MigrateFromStatus::done Global lock acquired
 m30002| Fri Feb 22 11:23:11.200 [conn11] forking for cleanup of chunk data
 m30002| Fri Feb 22 11:23:11.200 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30002| Fri Feb 22 11:23:11.200 [conn11] MigrateFromStatus::done Global lock acquired
 m30002| Fri Feb 22 11:23:11.200 [cleanupOldData-5127551faaaba61d9eb25114] (start) waiting to cleanup test.foo from { _id: 6.0 } -> { _id: 7.0 }, # cursors remaining: 0
 m30000| Fri Feb 22 11:23:11.200 [conn23] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:11.220 [cleanupOldData-5127551faaaba61d9eb25114] waiting to remove documents for test.foo from { _id: 6.0 } -> { _id: 7.0 }
 m30002| Fri Feb 22 11:23:11.220 [cleanupOldData-5127551faaaba61d9eb25114] moveChunk starting delete for: test.foo from { _id: 6.0 } -> { _id: 7.0 }
 m30002| Fri Feb 22 11:23:11.220 [cleanupOldData-5127551faaaba61d9eb25114] moveChunk deleted 1 documents for test.foo from { _id: 6.0 } -> { _id: 7.0 }
 m30001| Fri Feb 22 11:23:11.226 [conn24] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:11.261 [conn24] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:11.370 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' unlocked.
 m30002| Fri Feb 22 11:23:11.370 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:11-5127551faaaba61d9eb25115", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532191370), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 6.0 }, max: { _id: 7.0 }, step1 of 6: 0, step2 of 6: 510, step3 of 6: 0, step4 of 6: 10, step5 of 6: 329, step6 of 6: 0 } }
 m30000| Fri Feb 22 11:23:11.371 [conn19] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:11.396 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:11.426 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:11.507 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30000", fromShard: "shard0002", toShard: "shard0000", min: { _id: 6.0 }, max: { _id: 7.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_6.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:18 r:109 w:14 reslen:37 1157ms
 m30999| Fri Feb 22 11:23:11.507 [Balancer] moveChunk result: { ok: 1.0 }
 m30999| Fri Feb 22 11:23:11.508 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 41 version: 19|1||5127547fd4b973931fc9a229 based on: 18|1||5127547fd4b973931fc9a229
 m30999| Fri Feb 22 11:23:11.508 [Balancer] *** end of balancing round
 m30000| Fri Feb 22 11:23:11.508 [conn5] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:11.537 [conn5] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:11.573 [conn5] CMD fsync: sync:1 lock:0
 m30999| Fri Feb 22 11:23:11.677 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
{ "shard0002" : 3, "shard0000" : 10, "shard0001" : 9 }
{ "shard0002" : 3, "shard0000" : 10, "shard0001" : 9 }
{ "shard0002" : 3, "shard0000" : 10, "shard0001" : 9 }
{ "shard0002" : 3, "shard0000" : 10, "shard0001" : 9 }
{ "shard0002" : 3, "shard0000" : 10, "shard0001" : 9 }
 m30999| Fri Feb 22 11:23:16.678 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Fri Feb 22 11:23:16.679 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
 m30999| Fri Feb 22 11:23:16.679 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 11:23:16 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "51275524d4b973931fc9a23d" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0,
 m30999|   "ts" : { "$oid" : "5127551ed4b973931fc9a23c" } }
 m30000| Fri Feb 22 11:23:16.679 [conn5] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:16.710 [conn5] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:16.747 [conn5] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:23:16.826 [conn5] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:16.859 [conn5] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:16.897 [conn5] CMD fsync: sync:1 lock:0
{ "shard0002" : 3, "shard0000" : 10, "shard0001" : 9 }
 m30999| Fri Feb 22 11:23:16.963 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 51275524d4b973931fc9a23d
 m30999| Fri Feb 22 11:23:16.963 [Balancer] *** start balancing round
 m30999| Fri Feb 22 11:23:16.963 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 11:23:16.963 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 11:23:16.965 [Balancer] chunk { _id: "test.foo-_id_8.0", lastmod: Timestamp 19000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 8.0 }, max: { _id: 9.0 }, shard: "shard0002" } is not on a shard with the right tag: a
 m30999| Fri Feb 22 11:23:16.965 [Balancer] shard0002 doesn't have right tag
 m30999| Fri Feb 22 11:23:16.965 [Balancer] going to move to: shard0001
 m30999| Fri Feb 22 11:23:16.965 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0002:localhost:30002lastmod: 19|1||000000000000000000000000min: { _id: 8.0 }max: { _id: 9.0 }) shard0002:localhost:30002 -> shard0001:localhost:30001
 m30002| Fri Feb 22 11:23:16.965 [conn11] warning: secondaryThrottle selected but no replication
 m30002| Fri Feb 22 11:23:16.965 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30001", fromShard: "shard0002", toShard: "shard0001", min: { _id: 8.0 }, max: { _id: 9.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_8.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false }
 m30000| Fri Feb 22 11:23:16.965 [conn23] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:16.992 [conn24] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:17.029 [conn24] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:23:17.099 [conn23] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:17.128 [conn24] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:17.166 [conn24] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:17.236 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' acquired, ts : 51275524aaaba61d9eb25116
 m30002| Fri Feb 22 11:23:17.236 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:17-51275525aaaba61d9eb25117", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532197236), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 8.0 }, max: { _id: 9.0 }, from: "shard0002", to: "shard0001" } }
 m30000| Fri Feb 22 11:23:17.236 [conn19] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:17.264 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:17.297 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:17.373 [conn11] moveChunk request accepted at version 19|1||5127547fd4b973931fc9a229
 m30002| Fri Feb 22 11:23:17.373 [conn11] moveChunk number of documents: 1
 m30001| Fri Feb 22 11:23:17.374 [migrateThread] starting receiving-end of migration of chunk { _id: 8.0 } -> { _id: 9.0 } for collection test.foo from localhost:30002 (0 slaves detected)
 m30001| Fri Feb 22 11:23:17.375 [migrateThread] Waiting for replication to catch up before entering critical section
 m30001| Fri Feb 22 11:23:17.375 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 8.0 } -> { _id: 9.0 }
 m30001| Fri Feb 22 11:23:17.375 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 8.0 } -> { _id: 9.0 }
 m30002| Fri Feb 22 11:23:17.384 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 8.0 }, max: { _id: 9.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30002| Fri Feb 22 11:23:17.384 [conn11] moveChunk setting version to: 20|0||5127547fd4b973931fc9a229
 m30001| Fri Feb 22 11:23:17.384 [conn27] Waiting for commit to finish
 m30001| Fri Feb 22 11:23:17.385 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 8.0 } -> { _id: 9.0 }
 m30001| Fri Feb 22 11:23:17.385 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 8.0 } -> { _id: 9.0 }
 m30001| Fri Feb 22 11:23:17.385 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:17-5127552578e37a7f0861ebab", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532197385), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 8.0 }, max: { _id: 9.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
 m30000| Fri Feb 22 11:23:17.385 [conn16] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:17.394 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 8.0 }, max: { _id: 9.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
 m30002| Fri Feb 22 11:23:17.394 [conn11] moveChunk updating self version to: 20|1||5127547fd4b973931fc9a229 through { _id: 10.0 } -> { _id: 11.0 } for collection 'test.foo'
 m30000| Fri Feb 22 11:23:17.394 [conn26] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:17.412 [conn16] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:17.425 [conn26] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:17.450 [conn15] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:17.473 [conn26] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:17.543 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:17-51275525aaaba61d9eb25118", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532197543), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 8.0 }, max: { _id: 9.0 }, from: "shard0002", to: "shard0001" } }
 m30000| Fri Feb 22 11:23:17.543 [conn19] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:17.569 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:17.601 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:17.680 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30002| Fri Feb 22 11:23:17.680 [conn11] MigrateFromStatus::done Global lock acquired
 m30002| Fri Feb 22 11:23:17.680 [conn11] forking for cleanup of chunk data
 m30002| Fri Feb 22 11:23:17.680 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30002| Fri Feb 22 11:23:17.680 [conn11] MigrateFromStatus::done Global lock acquired
 m30002| Fri Feb 22 11:23:17.680 [cleanupOldData-51275525aaaba61d9eb25119] (start) waiting to cleanup test.foo from { _id: 8.0 } -> { _id: 9.0 }, # cursors remaining: 0
 m30000| Fri Feb 22 11:23:17.680 [conn23] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:17.700 [cleanupOldData-51275525aaaba61d9eb25119] waiting to remove documents for test.foo from { _id: 8.0 } -> { _id: 9.0 }
 m30002| Fri Feb 22 11:23:17.700 [cleanupOldData-51275525aaaba61d9eb25119] moveChunk starting delete for: test.foo from { _id: 8.0 } -> { _id: 9.0 }
 m30002| Fri Feb 22 11:23:17.700 [cleanupOldData-51275525aaaba61d9eb25119] moveChunk deleted 1 documents for test.foo from { _id: 8.0 } -> { _id: 9.0 }
 m30001| Fri Feb 22 11:23:17.706 [conn24] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:17.743 [conn24] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:17.816 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' unlocked.
 m30002| Fri Feb 22 11:23:17.816 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:17-51275525aaaba61d9eb2511a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532197816), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 8.0 }, max: { _id: 9.0 }, step1 of 6: 0, step2 of 6: 408, step3 of 6: 0, step4 of 6: 10, step5 of 6: 295, step6 of 6: 0 } }
 m30000| Fri Feb 22 11:23:17.816 [conn19] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:17.842 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:17.873 [conn18] CMD fsync: sync:1 lock:0
{ "shard0002" : 2, "shard0000" : 10, "shard0001" : 10 }
 m30002| Fri Feb 22 11:23:17.953 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30001", fromShard: "shard0002", toShard: "shard0001", min: { _id: 8.0 }, max: { _id: 9.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_8.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:30 r:106 w:8 reslen:37 987ms
 m30999| Fri Feb 22 11:23:17.953 [Balancer] moveChunk result: { ok: 1.0 }
 m30999| Fri Feb 22 11:23:17.954 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 42 version: 20|1||5127547fd4b973931fc9a229 based on: 19|1||5127547fd4b973931fc9a229
 m30999| Fri Feb 22 11:23:17.954 [Balancer] *** end of balancing round
 m30000| Fri Feb 22 11:23:17.954 [conn5] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:17.989 [conn5] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:18.026 [conn5] CMD fsync: sync:1 lock:0
 m30999| Fri Feb 22 11:23:18.090 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
{ "shard0002" : 2, "shard0000" : 10, "shard0001" : 10 }
{ "shard0002" : 2, "shard0000" : 10, "shard0001" : 10 }
 m30000| Fri Feb 22 11:23:20.541 [conn16] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:20.567 [conn16] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:20.602 [conn15] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:23:20.678 [conn16] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:20.705 [conn16] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:20.740 [conn15] CMD fsync: sync:1 lock:0
{ "shard0002" : 2, "shard0000" : 10, "shard0001" : 10 }
 m30000| Fri Feb 22 11:23:21.461 [conn19] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:21.487 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:21.517 [conn18] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:23:21.597 [conn19] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:21.622 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:21.653 [conn18] CMD fsync: sync:1 lock:0
{ "shard0002" : 2, "shard0000" : 10, "shard0001" : 10 }
{ "shard0002" : 2, "shard0000" : 10, "shard0001" : 10 }
 m30999| Fri Feb 22 11:23:23.090 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Fri Feb 22 11:23:23.091 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
 m30999| Fri Feb 22 11:23:23.091 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 11:23:23 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "5127552bd4b973931fc9a23e" } }
 m30000| Fri Feb 22 11:23:23.091 [conn5] CMD fsync: sync:1 lock:0
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0,
 m30999|   "ts" : { "$oid" : "51275524d4b973931fc9a23d" } }
 m30001| Fri Feb 22 11:23:23.124 [conn5] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:23.164 [conn5] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:23:23.232 [conn5] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:23.265 [conn5] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:23.304 [conn5] CMD fsync: sync:1 lock:0
 m30999| Fri Feb 22 11:23:23.368 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 5127552bd4b973931fc9a23e
 m30999| Fri Feb 22 11:23:23.368 [Balancer] *** start balancing round
 m30999| Fri Feb 22 11:23:23.368 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 11:23:23.368 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 11:23:23.370 [Balancer] chunk { _id: "test.foo-_id_10.0", lastmod: Timestamp 20000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 10.0 }, max: { _id: 11.0 }, shard: "shard0002" } is not on a shard with the right tag: a
 m30999| Fri Feb 22 11:23:23.370 [Balancer] shard0001 has more chunks me:10 best: shard0000:10
 m30999| Fri Feb 22 11:23:23.370 [Balancer] shard0002 doesn't have right tag
 m30999| Fri Feb 22 11:23:23.370 [Balancer] going to move to: shard0000
 m30999| Fri Feb 22 11:23:23.370 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0002:localhost:30002lastmod: 20|1||000000000000000000000000min: { _id: 10.0 }max: { _id: 11.0 }) shard0002:localhost:30002 -> shard0000:localhost:30000
 m30002| Fri Feb 22 11:23:23.371 [conn11] warning: secondaryThrottle selected but no replication
 m30002| Fri Feb 22 11:23:23.371 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30000", fromShard: "shard0002", toShard: "shard0000", min: { _id: 10.0 }, max: { _id: 11.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_10.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false }
 m30000| Fri Feb 22 11:23:23.371 [conn23] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:23.396 [conn24] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:23.436 [conn24] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:23:23.505 [conn23] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:23.530 [conn24] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:23.570 [conn24] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:23.641 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' acquired, ts : 5127552baaaba61d9eb2511b
 m30002| Fri Feb 22 11:23:23.642 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:23-5127552baaaba61d9eb2511c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532203641), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 10.0 }, max: { _id: 11.0 }, from: "shard0002", to: "shard0000" } }
 m30000| Fri Feb 22 11:23:23.642 [conn19] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:23.667 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:23.697 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:23.779 [conn11] moveChunk request accepted at version 20|1||5127547fd4b973931fc9a229
 m30002| Fri Feb 22 11:23:23.779 [conn11] moveChunk number of documents: 1
 m30000| Fri Feb 22 11:23:23.779 [migrateThread] starting receiving-end of migration of chunk { _id: 10.0 } -> { _id: 11.0 } for collection test.foo from localhost:30002 (0 slaves detected)
 m30000| Fri Feb 22 11:23:23.780 [migrateThread] Waiting for replication to catch up before entering critical section
 m30000| Fri Feb 22 11:23:23.780 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 10.0 } -> { _id: 11.0 }
 m30000| Fri Feb 22 11:23:23.781 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 10.0 } -> { _id: 11.0 }
 m30002| Fri Feb 22 11:23:23.789 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 10.0 }, max: { _id: 11.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30002| Fri Feb 22 11:23:23.789 [conn11] moveChunk setting version to: 21|0||5127547fd4b973931fc9a229
 m30000| Fri Feb 22 11:23:23.789 [conn25] Waiting for commit to finish
 m30000| Fri Feb 22 11:23:23.791 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 10.0 } -> { _id: 11.0 }
 m30000| Fri Feb 22 11:23:23.791 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 10.0 } -> { _id: 11.0 }
 m30000| Fri Feb 22 11:23:23.791 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:23-5127552b0cfd6a2130a0ac41", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532203791), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 10.0 }, max: { _id: 11.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
 m30000| Fri Feb 22 11:23:23.791 [conn12] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:23.800 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 10.0 }, max: { _id: 11.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
 m30002| Fri Feb 22 11:23:23.800 [conn11] moveChunk updating self version to: 21|1||5127547fd4b973931fc9a229 through { _id: 12.0 } -> { _id: 13.0 } for collection 'test.foo'
 m30000| Fri Feb 22 11:23:23.800 [conn26] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:23.837 [conn26] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:23.837 [conn12] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:23.876 [conn12] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:23.882 [conn26] CMD fsync: sync:1 lock:0
{ "shard0002" : 1, "shard0000" : 11, "shard0001" : 10 }
 m30002| Fri Feb 22 11:23:23.948 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:23-5127552baaaba61d9eb2511d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532203948), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 10.0 }, max: { _id: 11.0 }, from: "shard0002", to: "shard0000" } }
 m30000| Fri Feb 22 11:23:23.949 [conn19] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:23.974 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:24.005 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:24.085 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30002| Fri Feb 22 11:23:24.085 [conn11] MigrateFromStatus::done Global lock acquired
 m30002| Fri Feb 22 11:23:24.086 [conn11] forking for cleanup of chunk data
 m30002| Fri Feb 22 11:23:24.086 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30002| Fri Feb 22 11:23:24.086 [conn11] MigrateFromStatus::done Global lock acquired
 m30002| Fri Feb 22 11:23:24.086 [cleanupOldData-5127552caaaba61d9eb2511e] (start) waiting to cleanup test.foo from { _id: 10.0 } -> { _id: 11.0 }, # cursors remaining: 0
 m30000| Fri Feb 22 11:23:24.086 [conn23] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:24.106 [cleanupOldData-5127552caaaba61d9eb2511e] waiting to remove documents for test.foo from { _id: 10.0 } -> { _id: 11.0 }
 m30002| Fri Feb 22 11:23:24.106 [cleanupOldData-5127552caaaba61d9eb2511e] moveChunk starting delete for: test.foo from { _id: 10.0 } -> { _id: 11.0 }
 m30002| Fri Feb 22 11:23:24.106 [cleanupOldData-5127552caaaba61d9eb2511e] moveChunk deleted 1 documents for test.foo from { _id: 10.0 } -> { _id: 11.0 }
 m30001| Fri Feb 22 11:23:24.114 [conn24] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:24.150 [conn24] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:24.222 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' unlocked.
 m30002| Fri Feb 22 11:23:24.222 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:24-5127552caaaba61d9eb2511f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532204222), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 10.0 }, max: { _id: 11.0 }, step1 of 6: 0, step2 of 6: 408, step3 of 6: 0, step4 of 6: 10, step5 of 6: 296, step6 of 6: 0 } }
 m30000| Fri Feb 22 11:23:24.222 [conn19] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:24.248 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:24.278 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:24.359 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30000", fromShard: "shard0002", toShard: "shard0000", min: { _id: 10.0 }, max: { _id: 11.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_10.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:15 r:117 w:11 reslen:37 988ms
 m30999| Fri Feb 22 11:23:24.359 [Balancer] moveChunk result: { ok: 1.0 }
 m30999| Fri Feb 22 11:23:24.362 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 43 version: 21|1||5127547fd4b973931fc9a229 based on: 20|1||5127547fd4b973931fc9a229
 m30999| Fri Feb 22 11:23:24.362 [Balancer] *** end of balancing round
 m30000| Fri Feb 22 11:23:24.362 [conn5] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:23:24.392 [conn5] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:23:24.425 [conn5] CMD fsync: sync:1 lock:0
 m30999| Fri Feb 22 11:23:24.495 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
{ "shard0002" : 1, "shard0000" : 11, "shard0001" : 10 } { "shard0002" : 1, "shard0000" : 11, "shard0001" : 10 } { "shard0002" : 1, "shard0000" : 11, "shard0001" : 10 } { "shard0002" : 1, "shard0000" : 11, "shard0001" : 10 } { "shard0002" : 1, "shard0000" : 11, "shard0001" : 10 } m30999| Fri Feb 22 11:23:29.496 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:23:29.496 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 ) m30999| Fri Feb 22 11:23:29.497 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:23:29 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51275531d4b973931fc9a23f" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127552bd4b973931fc9a23e" } } m30000| Fri Feb 22 11:23:29.497 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:29.527 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:29.562 [conn5] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:23:29.635 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:29.667 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:29.706 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:23:29.772 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 51275531d4b973931fc9a23f m30999| Fri Feb 22 11:23:29.772 [Balancer] *** start balancing round m30999| Fri Feb 22 11:23:29.772 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:23:29.772 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:23:29.774 [Balancer] chunk { _id: 
"test.foo-_id_12.0", lastmod: Timestamp 21000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 12.0 }, max: { _id: 13.0 }, shard: "shard0002" } is not on a shard with the right tag: a m30999| Fri Feb 22 11:23:29.774 [Balancer] shard0002 doesn't have right tag m30999| Fri Feb 22 11:23:29.774 [Balancer] going to move to: shard0001 m30999| Fri Feb 22 11:23:29.774 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0002:localhost:30002lastmod: 21|1||000000000000000000000000min: { _id: 12.0 }max: { _id: 13.0 }) shard0002:localhost:30002 -> shard0001:localhost:30001 m30002| Fri Feb 22 11:23:29.774 [conn11] warning: secondaryThrottle selected but no replication m30002| Fri Feb 22 11:23:29.774 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30001", fromShard: "shard0002", toShard: "shard0001", min: { _id: 12.0 }, max: { _id: 13.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_12.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 11:23:29.774 [conn23] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:29.797 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:29.835 [conn24] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:23:29.908 [conn23] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:29.931 [conn24] CMD fsync: sync:1 lock:0 { "shard0002" : 1, "shard0000" : 11, "shard0001" : 10 } m30002| Fri Feb 22 11:23:29.969 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.045 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' acquired, ts : 51275531aaaba61d9eb25120 m30002| Fri Feb 22 11:23:30.045 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:30-51275532aaaba61d9eb25121", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532210045), what: 
"moveChunk.start", ns: "test.foo", details: { min: { _id: 12.0 }, max: { _id: 13.0 }, from: "shard0002", to: "shard0001" } } m30000| Fri Feb 22 11:23:30.045 [conn24] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:30.068 [conn25] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.106 [conn25] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:23:30.115 [conn6] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:30.144 [conn6] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.172 [conn6] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.173 [conn11] moveChunk request accepted at version 21|1||5127547fd4b973931fc9a229 m30002| Fri Feb 22 11:23:30.173 [conn11] moveChunk number of documents: 1 m30001| Fri Feb 22 11:23:30.174 [migrateThread] starting receiving-end of migration of chunk { _id: 12.0 } -> { _id: 13.0 } for collection test.foo from localhost:30002 (0 slaves detected) m30001| Fri Feb 22 11:23:30.175 [migrateThread] Waiting for replication to catch up before entering critical section m30001| Fri Feb 22 11:23:30.175 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 12.0 } -> { _id: 13.0 } m30001| Fri Feb 22 11:23:30.175 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 12.0 } -> { _id: 13.0 } m30002| Fri Feb 22 11:23:30.184 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 12.0 }, max: { _id: 13.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 11:23:30.184 [conn11] moveChunk setting version to: 22|0||5127547fd4b973931fc9a229 m30001| Fri Feb 22 11:23:30.184 [conn27] Waiting for commit to finish m30001| Fri Feb 22 11:23:30.186 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 12.0 } -> { _id: 13.0 } m30001| Fri Feb 22 11:23:30.186 [migrateThread] migrate commit flushed to journal for 'test.foo' 
{ _id: 12.0 } -> { _id: 13.0 } m30001| Fri Feb 22 11:23:30.186 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:30-5127553278e37a7f0861ebac", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532210186), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 12.0 }, max: { _id: 13.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30000| Fri Feb 22 11:23:30.186 [conn22] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.194 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 12.0 }, max: { _id: 13.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } m30002| Fri Feb 22 11:23:30.194 [conn11] moveChunk moved last chunk out for collection 'test.foo' m30000| Fri Feb 22 11:23:30.194 [conn26] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:30.209 [conn22] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:30.224 [conn26] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.247 [conn22] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:23:30.248 [conn6] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:30.284 [conn6] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.287 [conn26] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.318 [conn6] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.354 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:30-51275532aaaba61d9eb25122", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532210354), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 12.0 }, max: { _id: 13.0 }, from: "shard0002", to: "shard0001" } } m30000| Fri Feb 22 11:23:30.354 [conn24] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:30.380 [conn25] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:23:30.388 
[LockPinger] cluster localhost:30000,localhost:30001,localhost:30002 pinged successfully at Fri Feb 22 11:23:30 2013 by distributed lock pinger 'localhost:30000,localhost:30001,localhost:30002/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838', sleeping for 30000ms m30002| Fri Feb 22 11:23:30.418 [conn25] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.490 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30002| Fri Feb 22 11:23:30.490 [conn11] MigrateFromStatus::done Global lock acquired m30002| Fri Feb 22 11:23:30.490 [conn11] forking for cleanup of chunk data m30002| Fri Feb 22 11:23:30.490 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30002| Fri Feb 22 11:23:30.490 [conn11] MigrateFromStatus::done Global lock acquired m30002| Fri Feb 22 11:23:30.490 [cleanupOldData-51275532aaaba61d9eb25123] (start) waiting to cleanup test.foo from { _id: 12.0 } -> { _id: 13.0 }, # cursors remaining: 0 m30000| Fri Feb 22 11:23:30.490 [conn23] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.510 [cleanupOldData-51275532aaaba61d9eb25123] waiting to remove documents for test.foo from { _id: 12.0 } -> { _id: 13.0 } m30002| Fri Feb 22 11:23:30.510 [cleanupOldData-51275532aaaba61d9eb25123] moveChunk starting delete for: test.foo from { _id: 12.0 } -> { _id: 13.0 } m30002| Fri Feb 22 11:23:30.511 [cleanupOldData-51275532aaaba61d9eb25123] moveChunk deleted 1 documents for test.foo from { _id: 12.0 } -> { _id: 13.0 } m30001| Fri Feb 22 11:23:30.516 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.554 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.627 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' unlocked. 
m30002| Fri Feb 22 11:23:30.627 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:30-51275532aaaba61d9eb25124", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532210627), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 12.0 }, max: { _id: 13.0 }, step1 of 6: 0, step2 of 6: 399, step3 of 6: 0, step4 of 6: 10, step5 of 6: 306, step6 of 6: 0 } } m30000| Fri Feb 22 11:23:30.627 [conn24] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:30.653 [conn25] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.694 [conn25] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.763 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30001", fromShard: "shard0002", toShard: "shard0001", min: { _id: 12.0 }, max: { _id: 13.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_12.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:27 r:106 w:6 reslen:37 989ms m30999| Fri Feb 22 11:23:30.763 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:23:30.765 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 44 version: 22|0||5127547fd4b973931fc9a229 based on: 21|1||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:23:30.765 [Balancer] *** end of balancing round m30000| Fri Feb 22 11:23:30.765 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:30.799 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.841 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:23:30.901 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked. 
{ "shard0002" : 0, "shard0000" : 11, "shard0001" : 11 }
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("5127547dd4b973931fc9a225") }
  shards:
	{ "_id" : "shard0000", "host" : "localhost:30000", "tags" : [ "a" ] }
	{ "_id" : "shard0001", "host" : "localhost:30001", "tags" : [ "a" ] }
	{ "_id" : "shard0002", "host" : "localhost:30002" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "test", "partitioned" : true, "primary" : "shard0000" }
		test.foo
			shard key: { "_id" : 1 }
			chunks:
				shard0001	11
				shard0000	11
			too many chunks to print, use verbose if you want to force print
			tag: a  { "_id" : -1 } -->> { "_id" : 1000 }
undefined m30999| Fri Feb 22 11:23:30.956 [mongosMain] dbexit: received signal 15 rc:0 received signal 15 m30000| Fri Feb 22 11:23:30.957 [conn3] end connection 127.0.0.1:40427 (24 connections now open) m30001| Fri Feb 22 11:23:30.957 [conn3] end connection 127.0.0.1:55562 (25 connections now open) m30000| Fri Feb 22 11:23:30.957 [conn5] end connection 127.0.0.1:60590 (24 connections now open) m30001| Fri Feb 22 11:23:30.957 [conn5] end connection 127.0.0.1:48409 (25 connections now open) m30002| Fri Feb 22 11:23:30.957 [conn3] end connection 127.0.0.1:64894 (24 connections now open) m30002| Fri Feb 22 11:23:30.957 [conn5] end connection 127.0.0.1:52548 (24 connections now open) m30000| Fri Feb 22 11:23:30.957 [conn6] end connection 127.0.0.1:58494 (23 connections now open) m30001| Fri Feb 22 11:23:30.957 [conn6] end connection 127.0.0.1:46581 (24 connections now open) m30002| Fri Feb 22 11:23:30.957 [conn6] end connection 127.0.0.1:52531 (24 connections now open) m30000| Fri Feb 22 11:23:30.957 [conn7] end connection 127.0.0.1:56278 (21 connections now open) m30001| Fri Feb 22 11:23:30.957 [conn7] end connection 127.0.0.1:58330 (22 connections now open) m30002| Fri Feb 22 11:23:30.957 [conn7] end connection 127.0.0.1:61888 (21 
connections now open) m30000| Fri Feb 22 11:23:30.957 [conn9] end connection 127.0.0.1:38220 (20 connections now open) m30001| Fri Feb 22 11:23:30.957 [conn9] end connection 127.0.0.1:57851 (22 connections now open) m30000| Fri Feb 22 11:23:30.957 [conn10] end connection 127.0.0.1:44398 (20 connections now open) m30002| Fri Feb 22 11:23:30.957 [conn9] end connection 127.0.0.1:42564 (20 connections now open) m30001| Fri Feb 22 11:23:30.957 [conn10] end connection 127.0.0.1:50063 (21 connections now open) m30000| Fri Feb 22 11:23:30.957 [conn11] end connection 127.0.0.1:53660 (20 connections now open) m30002| Fri Feb 22 11:23:30.957 [conn10] end connection 127.0.0.1:59349 (20 connections now open) m30001| Fri Feb 22 11:23:30.957 [conn11] end connection 127.0.0.1:42693 (20 connections now open) m30002| Fri Feb 22 11:23:30.957 [conn11] end connection 127.0.0.1:44320 (20 connections now open) Fri Feb 22 11:23:31.956 shell: stopped mongo program on port 30999 m30000| Fri Feb 22 11:23:31.956 got signal 15 (Terminated), will terminate after current cmd ends m30000| Fri Feb 22 11:23:31.957 [interruptThread] now exiting m30000| Fri Feb 22 11:23:31.957 dbexit: m30000| Fri Feb 22 11:23:31.957 [interruptThread] shutdown: going to close listening sockets... m30000| Fri Feb 22 11:23:31.957 [interruptThread] closing listening socket: 12 m30000| Fri Feb 22 11:23:31.957 [interruptThread] closing listening socket: 13 m30000| Fri Feb 22 11:23:31.957 [interruptThread] closing listening socket: 14 m30000| Fri Feb 22 11:23:31.957 [interruptThread] removing socket file: /tmp/mongodb-30000.sock m30000| Fri Feb 22 11:23:31.957 [interruptThread] shutdown: going to flush diaglog... m30000| Fri Feb 22 11:23:31.957 [interruptThread] shutdown: going to close sockets... m30000| Fri Feb 22 11:23:31.957 [interruptThread] shutdown: waiting for fs preallocator... m30000| Fri Feb 22 11:23:31.957 [interruptThread] shutdown: lock for final commit... 
m30000| Fri Feb 22 11:23:31.957 [interruptThread] shutdown: final commit... m30000| Fri Feb 22 11:23:31.957 [conn1] end connection 127.0.0.1:51670 (17 connections now open) m30000| Fri Feb 22 11:23:31.957 [conn2] end connection 127.0.0.1:58478 (17 connections now open) m30001| Fri Feb 22 11:23:31.957 [conn15] end connection 127.0.0.1:38369 (18 connections now open) m30000| Fri Feb 22 11:23:31.957 [conn17] end connection 127.0.0.1:45549 (17 connections now open) m30000| Fri Feb 22 11:23:31.957 [conn18] end connection 127.0.0.1:35927 (17 connections now open) m30001| Fri Feb 22 11:23:31.957 [conn19] end connection 127.0.0.1:41492 (17 connections now open) m30001| Fri Feb 22 11:23:31.957 [conn17] end connection 127.0.0.1:40060 (17 connections now open) m30000| Fri Feb 22 11:23:31.957 [conn20] end connection 127.0.0.1:51262 (17 connections now open) m30002| Fri Feb 22 11:23:31.957 [conn16] end connection 127.0.0.1:56139 (17 connections now open) m30000| Fri Feb 22 11:23:31.957 [conn25] end connection 127.0.0.1:57636 (17 connections now open) m30000| Fri Feb 22 11:23:31.957 [conn26] end connection 127.0.0.1:65387 (17 connections now open) m30000| Fri Feb 22 11:23:31.957 [conn19] end connection 127.0.0.1:61265 (17 connections now open) m30002| Fri Feb 22 11:23:31.957 [conn17] end connection 127.0.0.1:33670 (16 connections now open) m30000| Fri Feb 22 11:23:31.957 [conn24] end connection 127.0.0.1:45942 (17 connections now open) m30000| Fri Feb 22 11:23:31.957 [conn14] end connection 127.0.0.1:49538 (17 connections now open) m30000| Fri Feb 22 11:23:31.957 [conn12] end connection 127.0.0.1:40271 (17 connections now open) m30002| Fri Feb 22 11:23:31.957 [conn12] end connection 127.0.0.1:60374 (15 connections now open) m30001| Fri Feb 22 11:23:31.958 [conn12] end connection 127.0.0.1:43078 (15 connections now open) m30002| Fri Feb 22 11:23:31.958 [conn13] end connection 127.0.0.1:43096 (14 connections now open) m30002| Fri Feb 22 11:23:31.958 [conn14] end connection 
127.0.0.1:33814 (14 connections now open) m30001| Fri Feb 22 11:23:31.958 [conn13] end connection 127.0.0.1:49945 (14 connections now open) m30000| Fri Feb 22 11:23:31.958 [conn16] end connection 127.0.0.1:43618 (17 connections now open) m30001| Fri Feb 22 11:23:31.958 [conn14] end connection 127.0.0.1:51906 (14 connections now open) m30000| Fri Feb 22 11:23:31.958 [conn22] end connection 127.0.0.1:46249 (17 connections now open) m30002| Fri Feb 22 11:23:31.958 [conn19] end connection 127.0.0.1:64097 (12 connections now open) m30000| Fri Feb 22 11:23:31.967 [conn13] end connection 127.0.0.1:34264 (4 connections now open) m30000| Fri Feb 22 11:23:31.968 [conn21] end connection 127.0.0.1:46014 (3 connections now open) m30000| Fri Feb 22 11:23:31.968 [conn23] end connection 127.0.0.1:40174 (2 connections now open) m30000| Fri Feb 22 11:23:31.968 [conn15] end connection 127.0.0.1:62544 (2 connections now open) m30000| Fri Feb 22 11:23:31.988 [interruptThread] shutdown: closing all files... m30000| Fri Feb 22 11:23:31.990 [interruptThread] closeAllFiles() finished m30000| Fri Feb 22 11:23:31.990 [interruptThread] journalCleanup... m30000| Fri Feb 22 11:23:31.990 [interruptThread] removeJournalFiles m30000| Fri Feb 22 11:23:31.990 dbexit: really exiting now Fri Feb 22 11:23:32.956 shell: stopped mongo program on port 30000 m30001| Fri Feb 22 11:23:32.957 got signal 15 (Terminated), will terminate after current cmd ends m30001| Fri Feb 22 11:23:32.957 [interruptThread] now exiting m30001| Fri Feb 22 11:23:32.957 dbexit: m30001| Fri Feb 22 11:23:32.957 [interruptThread] shutdown: going to close listening sockets... 
m30001| Fri Feb 22 11:23:32.957 [interruptThread] closing listening socket: 15 m30001| Fri Feb 22 11:23:32.957 [interruptThread] closing listening socket: 16 m30001| Fri Feb 22 11:23:32.957 [interruptThread] closing listening socket: 17 m30001| Fri Feb 22 11:23:32.957 [interruptThread] removing socket file: /tmp/mongodb-30001.sock m30001| Fri Feb 22 11:23:32.957 [interruptThread] shutdown: going to flush diaglog... m30001| Fri Feb 22 11:23:32.957 [interruptThread] shutdown: going to close sockets... m30001| Fri Feb 22 11:23:32.957 [interruptThread] shutdown: waiting for fs preallocator... m30001| Fri Feb 22 11:23:32.957 [interruptThread] shutdown: lock for final commit... m30001| Fri Feb 22 11:23:32.957 [interruptThread] shutdown: final commit... m30001| Fri Feb 22 11:23:32.957 [conn1] end connection 127.0.0.1:61815 (12 connections now open) m30001| Fri Feb 22 11:23:32.957 [conn2] end connection 127.0.0.1:58147 (12 connections now open) m30001| Fri Feb 22 11:23:32.957 [conn16] end connection 127.0.0.1:62454 (12 connections now open) m30001| Fri Feb 22 11:23:32.957 [conn18] end connection 127.0.0.1:58911 (12 connections now open) m30002| Fri Feb 22 11:23:32.957 [conn15] end connection 127.0.0.1:45677 (11 connections now open) m30001| Fri Feb 22 11:23:32.957 [conn20] end connection 127.0.0.1:48199 (12 connections now open) m30002| Fri Feb 22 11:23:32.957 [conn20] end connection 127.0.0.1:61722 (11 connections now open) m30001| Fri Feb 22 11:23:32.957 [conn21] end connection 127.0.0.1:38642 (12 connections now open) m30001| Fri Feb 22 11:23:32.957 [conn22] end connection 127.0.0.1:42216 (12 connections now open) m30002| Fri Feb 22 11:23:32.958 [conn21] end connection 127.0.0.1:36772 (9 connections now open) m30001| Fri Feb 22 11:23:32.958 [conn24] end connection 127.0.0.1:44990 (12 connections now open) m30002| Fri Feb 22 11:23:32.958 [conn22] end connection 127.0.0.1:50697 (9 connections now open) m30001| Fri Feb 22 11:23:32.958 [conn25] end connection 
127.0.0.1:42360 (12 connections now open) m30001| Fri Feb 22 11:23:32.958 [conn23] end connection 127.0.0.1:58098 (12 connections now open) m30001| Fri Feb 22 11:23:32.958 [conn26] end connection 127.0.0.1:53938 (12 connections now open) m30001| Fri Feb 22 11:23:32.958 [conn27] end connection 127.0.0.1:49673 (12 connections now open) m30001| Fri Feb 22 11:23:32.987 [interruptThread] shutdown: closing all files... m30001| Fri Feb 22 11:23:32.989 [interruptThread] closeAllFiles() finished m30001| Fri Feb 22 11:23:32.989 [interruptThread] journalCleanup... m30001| Fri Feb 22 11:23:32.989 [interruptThread] removeJournalFiles m30001| Fri Feb 22 11:23:32.989 dbexit: really exiting now Fri Feb 22 11:23:33.957 shell: stopped mongo program on port 30001 m30002| Fri Feb 22 11:23:33.957 got signal 15 (Terminated), will terminate after current cmd ends m30002| Fri Feb 22 11:23:33.957 [interruptThread] now exiting m30002| Fri Feb 22 11:23:33.957 dbexit: m30002| Fri Feb 22 11:23:33.957 [interruptThread] shutdown: going to close listening sockets... m30002| Fri Feb 22 11:23:33.957 [interruptThread] closing listening socket: 18 m30002| Fri Feb 22 11:23:33.957 [interruptThread] closing listening socket: 19 m30002| Fri Feb 22 11:23:33.957 [interruptThread] closing listening socket: 20 m30002| Fri Feb 22 11:23:33.957 [interruptThread] removing socket file: /tmp/mongodb-30002.sock m30002| Fri Feb 22 11:23:33.957 [interruptThread] shutdown: going to flush diaglog... m30002| Fri Feb 22 11:23:33.957 [interruptThread] shutdown: going to close sockets... m30002| Fri Feb 22 11:23:33.957 [interruptThread] shutdown: waiting for fs preallocator... m30002| Fri Feb 22 11:23:33.957 [interruptThread] shutdown: lock for final commit... m30002| Fri Feb 22 11:23:33.957 [interruptThread] shutdown: final commit... 
m30002| Fri Feb 22 11:23:33.958 [conn1] end connection 127.0.0.1:34448 (7 connections now open) m30002| Fri Feb 22 11:23:33.958 [conn2] end connection 127.0.0.1:47206 (7 connections now open) m30002| Fri Feb 22 11:23:33.958 [conn18] end connection 127.0.0.1:39075 (7 connections now open) m30002| Fri Feb 22 11:23:33.958 [conn23] end connection 127.0.0.1:41207 (7 connections now open) m30002| Fri Feb 22 11:23:33.958 [conn24] end connection 127.0.0.1:62763 (7 connections now open) m30002| Fri Feb 22 11:23:33.958 [conn25] end connection 127.0.0.1:33710 (7 connections now open) m30002| Fri Feb 22 11:23:33.958 [conn26] end connection 127.0.0.1:37179 (7 connections now open) m30002| Fri Feb 22 11:23:33.984 [interruptThread] shutdown: closing all files... m30002| Fri Feb 22 11:23:33.988 [interruptThread] closeAllFiles() finished m30002| Fri Feb 22 11:23:33.988 [interruptThread] journalCleanup... m30002| Fri Feb 22 11:23:33.988 [interruptThread] removeJournalFiles m30002| Fri Feb 22 11:23:33.988 dbexit: really exiting now Fri Feb 22 11:23:34.957 shell: stopped mongo program on port 30002 *** ShardingTest balance_tags1 completed successfully in 187.381 seconds *** Fri Feb 22 11:23:34.985 [conn11] end connection 127.0.0.1:60212 (0 connections now open) 3.1267 minutes Fri Feb 22 11:23:35.007 [initandlisten] connection accepted from 127.0.0.1:54305 #12 (1 connection now open) Fri Feb 22 11:23:35.008 [conn12] end connection 127.0.0.1:54305 (0 connections now open) ******************************************* Test : btreedel.js ... 
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/btreedel.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/btreedel.js";TestData.testFile = "btreedel.js";TestData.testName = "btreedel";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null; Date : Fri Feb 22 11:23:35 2013 buildlogger: could not find or import buildbot.tac for authentication MongoDB shell version: 2.4.0-rc1-pre- connecting to: 127.0.0.1:27999/test Fri Feb 22 11:23:35.190 [initandlisten] connection accepted from 127.0.0.1:41651 #13 (1 connection now open) null Fri Feb 22 11:23:35.201 [conn13] build index test.foo { _id: 1 } Fri Feb 22 11:23:35.205 [conn13] build index done. scanned 0 total records. 0.003 secs Fri Feb 22 11:23:39.849 [FileAllocator] allocating new datafile /data/db/sconsTests/test.2, filling with zeroes... Fri Feb 22 11:23:39.849 [FileAllocator] done allocating datafile /data/db/sconsTests/test.2, size: 256MB, took 0 secs Fri Feb 22 11:24:07.875 [FileAllocator] allocating new datafile /data/db/sconsTests/test.3, filling with zeroes... 
Fri Feb 22 11:24:07.875 [FileAllocator] done allocating datafile /data/db/sconsTests/test.3, size: 512MB, took 0 secs 1 insert done count: 1000000 { "_id" : 1, "x" : "a b" } Fri Feb 22 11:24:24.865 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 numYields: 38 locks(micros) r:180164 nreturned:39199 reslen:4194313 174ms Fri Feb 22 11:24:25.329 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:190051 nreturned:39199 reslen:4194313 190ms Fri Feb 22 11:24:25.766 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:190862 nreturned:39199 reslen:4194313 190ms Fri Feb 22 11:24:26.202 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:196068 nreturned:39199 reslen:4194313 196ms Fri Feb 22 11:24:26.643 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:352461 nreturned:39199 reslen:4194313 202ms Fri Feb 22 11:24:27.209 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:192556 nreturned:39199 reslen:4194313 192ms { "_id" : 200002, "x" : "a b" } Fri Feb 22 11:24:27.767 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:234290 nreturned:39199 reslen:4194313 194ms Fri Feb 22 11:24:28.296 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:176823 nreturned:39199 reslen:4194313 176ms Fri Feb 22 11:24:28.718 [conn13] getmore 
test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:291071 nreturned:39199 reslen:4194313 187ms Fri Feb 22 11:24:29.154 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:193883 nreturned:39199 reslen:4194313 193ms Fri Feb 22 11:24:29.604 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 numYields: 917 locks(micros) r:406013 nreturned:39199 reslen:4194313 216ms { "_id" : 400002, "x" : "a b" } 2 3 true { "_id" : 400003, "x" : "a b" } Fri Feb 22 11:24:42.867 [conn13] remove test.foo query: { _id: { $gt: 200000.0, $lt: 600000.0 } } ndeleted:399999 keyUpdates:0 numYields: 123 locks(micros) w:23926391 13214ms Fri Feb 22 11:24:42.996 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:128592 nreturned:39199 reslen:4194313 128ms Fri Feb 22 11:24:43.398 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:124170 nreturned:39199 reslen:4194313 124ms Fri Feb 22 11:24:43.836 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:133705 nreturned:39199 reslen:4194313 133ms Fri Feb 22 11:24:44.293 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:123602 nreturned:39199 reslen:4194313 123ms Fri Feb 22 11:24:44.869 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:203996 nreturned:39199 reslen:4194313 204ms Fri Feb 22 11:24:45.393 [conn13] getmore test.foo 
query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:208413 nreturned:39199 reslen:4194313 208ms Fri Feb 22 11:24:45.846 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:193932 nreturned:39199 reslen:4194313 193ms Fri Feb 22 11:24:46.327 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:214002 nreturned:39199 reslen:4194313 214ms Fri Feb 22 11:24:46.777 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:266115 nreturned:39199 reslen:4194313 185ms Fri Feb 22 11:24:47.276 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:194124 nreturned:39199 reslen:4194313 194ms 4. n:431286 { "_id" : 999999, "x" : "a b" } btreedel.js success Fri Feb 22 11:24:47.689 [conn13] end connection 127.0.0.1:41651 (0 connections now open) 1.2118 minutes Fri Feb 22 11:24:47.721 [initandlisten] connection accepted from 127.0.0.1:60941 #14 (1 connection now open) Fri Feb 22 11:24:47.722 [conn14] end connection 127.0.0.1:60941 (0 connections now open) ******************************************* Test : bulk_shard_insert.js ... 
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/bulk_shard_insert.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/bulk_shard_insert.js";TestData.testFile = "bulk_shard_insert.js";TestData.testName = "bulk_shard_insert";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:24:47 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:24:47.898 [initandlisten] connection accepted from 127.0.0.1:65134 #15 (1 connection now open)
null
Seeded with 1361532287907
Resetting db path '/data/db/bulk_shard_insert0'
Fri Feb 22 11:24:47.918 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30000 --dbpath /data/db/bulk_shard_insert0 --setParameter enableTestCommands=1
m30000| Fri Feb 22 11:24:48.011 [initandlisten] MongoDB starting : pid=22355 port=30000 dbpath=/data/db/bulk_shard_insert0 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30000| Fri Feb 22 11:24:48.011 [initandlisten]
m30000| Fri Feb 22 11:24:48.011 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30000| Fri Feb 22 11:24:48.011 [initandlisten] ** uses to detect impending page faults.
m30000| Fri Feb 22 11:24:48.011 [initandlisten] ** This may result in slower performance for certain use cases
m30000| Fri Feb 22 11:24:48.011 [initandlisten]
m30000| Fri Feb 22 11:24:48.011 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30000| Fri Feb 22 11:24:48.011 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30000| Fri Feb 22 11:24:48.011 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30000| Fri Feb 22 11:24:48.011 [initandlisten] allocator: system
m30000| Fri Feb 22 11:24:48.011 [initandlisten] options: { dbpath: "/data/db/bulk_shard_insert0", port: 30000, setParameter: [ "enableTestCommands=1" ] }
m30000| Fri Feb 22 11:24:48.012 [initandlisten] journal dir=/data/db/bulk_shard_insert0/journal
m30000| Fri Feb 22 11:24:48.012 [initandlisten] recover : no journal files present, no recovery needed
m30000| Fri Feb 22 11:24:48.026 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert0/local.ns, filling with zeroes...
m30000| Fri Feb 22 11:24:48.026 [FileAllocator] creating directory /data/db/bulk_shard_insert0/_tmp
m30000| Fri Feb 22 11:24:48.026 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert0/local.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 11:24:48.026 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert0/local.0, filling with zeroes...
m30000| Fri Feb 22 11:24:48.027 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert0/local.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 11:24:48.030 [initandlisten] waiting for connections on port 30000
m30000| Fri Feb 22 11:24:48.030 [websvr] admin web console waiting for connections on port 31000
m30000| Fri Feb 22 11:24:48.121 [initandlisten] connection accepted from 127.0.0.1:40234 #1 (1 connection now open)
Resetting db path '/data/db/bulk_shard_insert1'
Fri Feb 22 11:24:48.125 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30001 --dbpath /data/db/bulk_shard_insert1 --setParameter enableTestCommands=1
m30001| Fri Feb 22 11:24:48.219 [initandlisten] MongoDB starting : pid=22356 port=30001 dbpath=/data/db/bulk_shard_insert1 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30001| Fri Feb 22 11:24:48.220 [initandlisten]
m30001| Fri Feb 22 11:24:48.220 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30001| Fri Feb 22 11:24:48.220 [initandlisten] ** uses to detect impending page faults.
m30001| Fri Feb 22 11:24:48.220 [initandlisten] ** This may result in slower performance for certain use cases
m30001| Fri Feb 22 11:24:48.220 [initandlisten]
m30001| Fri Feb 22 11:24:48.220 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30001| Fri Feb 22 11:24:48.220 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30001| Fri Feb 22 11:24:48.220 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30001| Fri Feb 22 11:24:48.220 [initandlisten] allocator: system
m30001| Fri Feb 22 11:24:48.220 [initandlisten] options: { dbpath: "/data/db/bulk_shard_insert1", port: 30001, setParameter: [ "enableTestCommands=1" ] }
m30001| Fri Feb 22 11:24:48.220 [initandlisten] journal dir=/data/db/bulk_shard_insert1/journal
m30001| Fri Feb 22 11:24:48.220 [initandlisten] recover : no journal files present, no recovery needed
m30001| Fri Feb 22 11:24:48.237 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert1/local.ns, filling with zeroes...
m30001| Fri Feb 22 11:24:48.237 [FileAllocator] creating directory /data/db/bulk_shard_insert1/_tmp
m30001| Fri Feb 22 11:24:48.237 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert1/local.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 11:24:48.237 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert1/local.0, filling with zeroes...
m30001| Fri Feb 22 11:24:48.238 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert1/local.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 11:24:48.241 [initandlisten] waiting for connections on port 30001
m30001| Fri Feb 22 11:24:48.241 [websvr] admin web console waiting for connections on port 31001
m30001| Fri Feb 22 11:24:48.327 [initandlisten] connection accepted from 127.0.0.1:56952 #1 (1 connection now open)
Resetting db path '/data/db/bulk_shard_insert2'
Fri Feb 22 11:24:48.330 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30002 --dbpath /data/db/bulk_shard_insert2 --setParameter enableTestCommands=1
m30002| Fri Feb 22 11:24:48.420 [initandlisten] MongoDB starting : pid=22357 port=30002 dbpath=/data/db/bulk_shard_insert2 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30002| Fri Feb 22 11:24:48.420 [initandlisten]
m30002| Fri Feb 22 11:24:48.420 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30002| Fri Feb 22 11:24:48.420 [initandlisten] ** uses to detect impending page faults.
m30002| Fri Feb 22 11:24:48.420 [initandlisten] ** This may result in slower performance for certain use cases
m30002| Fri Feb 22 11:24:48.420 [initandlisten]
m30002| Fri Feb 22 11:24:48.420 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30002| Fri Feb 22 11:24:48.420 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30002| Fri Feb 22 11:24:48.420 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30002| Fri Feb 22 11:24:48.420 [initandlisten] allocator: system
m30002| Fri Feb 22 11:24:48.420 [initandlisten] options: { dbpath: "/data/db/bulk_shard_insert2", port: 30002, setParameter: [ "enableTestCommands=1" ] }
m30002| Fri Feb 22 11:24:48.421 [initandlisten] journal dir=/data/db/bulk_shard_insert2/journal
m30002| Fri Feb 22 11:24:48.421 [initandlisten] recover : no journal files present, no recovery needed
m30002| Fri Feb 22 11:24:48.436 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert2/local.ns, filling with zeroes...
m30002| Fri Feb 22 11:24:48.436 [FileAllocator] creating directory /data/db/bulk_shard_insert2/_tmp
m30002| Fri Feb 22 11:24:48.437 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert2/local.ns, size: 16MB, took 0 secs
m30002| Fri Feb 22 11:24:48.437 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert2/local.0, filling with zeroes...
m30002| Fri Feb 22 11:24:48.437 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert2/local.0, size: 64MB, took 0 secs
m30002| Fri Feb 22 11:24:48.440 [initandlisten] waiting for connections on port 30002
m30002| Fri Feb 22 11:24:48.440 [websvr] admin web console waiting for connections on port 31002
m30002| Fri Feb 22 11:24:48.531 [initandlisten] connection accepted from 127.0.0.1:36630 #1 (1 connection now open)
Resetting db path '/data/db/bulk_shard_insert3'
Fri Feb 22 11:24:48.538 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30003 --dbpath /data/db/bulk_shard_insert3 --setParameter enableTestCommands=1
m30003| Fri Feb 22 11:24:48.618 [initandlisten] MongoDB starting : pid=22358 port=30003 dbpath=/data/db/bulk_shard_insert3 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30003| Fri Feb 22 11:24:48.619 [initandlisten]
m30003| Fri Feb 22 11:24:48.619 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30003| Fri Feb 22 11:24:48.619 [initandlisten] ** uses to detect impending page faults.
m30003| Fri Feb 22 11:24:48.619 [initandlisten] ** This may result in slower performance for certain use cases
m30003| Fri Feb 22 11:24:48.619 [initandlisten]
m30003| Fri Feb 22 11:24:48.619 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30003| Fri Feb 22 11:24:48.619 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30003| Fri Feb 22 11:24:48.619 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30003| Fri Feb 22 11:24:48.619 [initandlisten] allocator: system
m30003| Fri Feb 22 11:24:48.619 [initandlisten] options: { dbpath: "/data/db/bulk_shard_insert3", port: 30003, setParameter: [ "enableTestCommands=1" ] }
m30003| Fri Feb 22 11:24:48.619 [initandlisten] journal dir=/data/db/bulk_shard_insert3/journal
m30003| Fri Feb 22 11:24:48.619 [initandlisten] recover : no journal files present, no recovery needed
m30003| Fri Feb 22 11:24:48.632 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert3/local.ns, filling with zeroes...
m30003| Fri Feb 22 11:24:48.632 [FileAllocator] creating directory /data/db/bulk_shard_insert3/_tmp
m30003| Fri Feb 22 11:24:48.633 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert3/local.ns, size: 16MB, took 0 secs
m30003| Fri Feb 22 11:24:48.633 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert3/local.0, filling with zeroes...
m30003| Fri Feb 22 11:24:48.633 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert3/local.0, size: 64MB, took 0 secs
m30003| Fri Feb 22 11:24:48.636 [websvr] admin web console waiting for connections on port 31003
m30003| Fri Feb 22 11:24:48.636 [initandlisten] waiting for connections on port 30003
m30003| Fri Feb 22 11:24:48.740 [initandlisten] connection accepted from 127.0.0.1:35811 #1 (1 connection now open)
"localhost:30000"
m30000| Fri Feb 22 11:24:48.740 [initandlisten] connection accepted from 127.0.0.1:35416 #2 (2 connections now open)
ShardingTest bulk_shard_insert : { "config" : "localhost:30000", "shards" : [ connection to localhost:30000, connection to localhost:30001, connection to localhost:30002, connection to localhost:30003 ] }
Fri Feb 22 11:24:48.744 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30999 --configdb localhost:30000 --chunkSize 1 --setParameter enableTestCommands=1
m30999| Fri Feb 22 11:24:48.757 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Fri Feb 22 11:24:48.758 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=22359 port=30999 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage)
m30999| Fri Feb 22 11:24:48.758 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30999| Fri Feb 22 11:24:48.758 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30999| Fri Feb 22 11:24:48.758 [mongosMain] options: { chunkSize: 1, configdb: "localhost:30000", port: 30999, setParameter: [ "enableTestCommands=1" ] }
m30000| Fri Feb 22 11:24:48.759 [initandlisten] connection accepted from 127.0.0.1:42442 #3 (3 connections now open)
m30000| Fri Feb 22 11:24:48.762 [initandlisten] connection accepted from 127.0.0.1:40998 #4 (4 connections now open)
m30000| Fri Feb 22 11:24:48.762 [conn4] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:24:48.775 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838 (sleeping for 30000ms)
m30000| Fri Feb 22 11:24:48.775 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert0/config.ns, filling with zeroes...
m30000| Fri Feb 22 11:24:48.775 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert0/config.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 11:24:48.775 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert0/config.0, filling with zeroes...
m30000| Fri Feb 22 11:24:48.775 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert0/config.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 11:24:48.775 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert0/config.1, filling with zeroes...
m30000| Fri Feb 22 11:24:48.776 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert0/config.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 11:24:48.778 [conn4] build index config.locks { _id: 1 }
m30000| Fri Feb 22 11:24:48.778 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 11:24:48.779 [conn3] build index config.lockpings { _id: 1 }
m30000| Fri Feb 22 11:24:48.780 [conn3] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 11:24:48.781 [conn3] build index config.lockpings { ping: new Date(1) }
m30000| Fri Feb 22 11:24:48.782 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 11:24:48.782 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 51275580ce6119f732c457ec
m30999| Fri Feb 22 11:24:48.784 [mongosMain] starting upgrade of config server from v0 to v4
m30999| Fri Feb 22 11:24:48.784 [mongosMain] starting next upgrade step from v0 to v4
m30999| Fri Feb 22 11:24:48.784 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:48-51275580ce6119f732c457ed", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361532288784), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30000| Fri Feb 22 11:24:48.784 [conn4] build index config.changelog { _id: 1 }
m30000| Fri Feb 22 11:24:48.784 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:24:48.785 [mongosMain] writing initial config version at v4
m30000| Fri Feb 22 11:24:48.785 [conn4] build index config.version { _id: 1 }
m30000| Fri Feb 22 11:24:48.785 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:24:48.786 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:48-51275580ce6119f732c457ef", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361532288786), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30999| Fri Feb 22 11:24:48.786 [mongosMain] upgrade of config server to v4 successful
m30999| Fri Feb 22 11:24:48.786 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked.
m30000| Fri Feb 22 11:24:48.787 [conn3] build index config.settings { _id: 1 }
m30999| Fri Feb 22 11:24:48.788 [Balancer] about to contact config servers and shards
m30999| Fri Feb 22 11:24:48.788 [websvr] admin web console waiting for connections on port 31999
m30000| Fri Feb 22 11:24:48.788 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 11:24:48.788 [mongosMain] waiting for connections on port 30999
m30000| Fri Feb 22 11:24:48.789 [conn3] build index config.chunks { _id: 1 }
m30000| Fri Feb 22 11:24:48.790 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 11:24:48.790 [conn3] info: creating collection config.chunks on add index
m30000| Fri Feb 22 11:24:48.790 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Fri Feb 22 11:24:48.790 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 11:24:48.790 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Fri Feb 22 11:24:48.791 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 11:24:48.791 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Fri Feb 22 11:24:48.791 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 11:24:48.791 [conn3] build index config.shards { _id: 1 }
m30000| Fri Feb 22 11:24:48.792 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 11:24:48.792 [conn3] info: creating collection config.shards on add index
m30000| Fri Feb 22 11:24:48.792 [conn3] build index config.shards { host: 1 }
m30000| Fri Feb 22 11:24:48.793 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:24:48.793 [Balancer] config servers and shards contacted successfully
m30999| Fri Feb 22 11:24:48.793 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30999 started at Feb 22 11:24:48
m30000| Fri Feb 22 11:24:48.794 [conn3] build index config.mongos { _id: 1 }
m30000| Fri Feb 22 11:24:48.794 [initandlisten] connection accepted from 127.0.0.1:55370 #5 (5 connections now open)
m30000| Fri Feb 22 11:24:48.795 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:24:48.796 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 51275580ce6119f732c457f1
m30999| Fri Feb 22 11:24:48.796 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked.
m30999| Fri Feb 22 11:24:48.945 [mongosMain] connection accepted from 127.0.0.1:33354 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Fri Feb 22 11:24:48.947 [conn1] couldn't find database [admin] in config db
m30000| Fri Feb 22 11:24:48.947 [conn3] build index config.databases { _id: 1 }
m30000| Fri Feb 22 11:24:48.948 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:24:48.948 [conn1] put [admin] on: config:localhost:30000
m30999| Fri Feb 22 11:24:48.949 [conn1] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30001| Fri Feb 22 11:24:48.951 [initandlisten] connection accepted from 127.0.0.1:59009 #2 (2 connections now open)
m30999| Fri Feb 22 11:24:48.952 [conn1] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30002
m30002| Fri Feb 22 11:24:48.953 [initandlisten] connection accepted from 127.0.0.1:47913 #2 (2 connections now open)
m30999| Fri Feb 22 11:24:48.954 [conn1] going to add shard: { _id: "shard0002", host: "localhost:30002" }
{ "shardAdded" : "shard0002", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30003
m30003| Fri Feb 22 11:24:48.955 [initandlisten] connection accepted from 127.0.0.1:41716 #2 (2 connections now open)
m30999| Fri Feb 22 11:24:48.956 [conn1] going to add shard: { _id: "shard0003", host: "localhost:30003" }
{ "shardAdded" : "shard0003", "ok" : 1 }
m30000| Fri Feb 22 11:24:48.957 [initandlisten] connection accepted from 127.0.0.1:63280 #6 (6 connections now open)
m30999| Fri Feb 22 11:24:48.957 [conn1] creating WriteBackListener for: localhost:30000 serverID: 51275580ce6119f732c457f0
m30999| Fri Feb 22 11:24:48.957 [conn1] creating WriteBackListener for: localhost:30001 serverID: 51275580ce6119f732c457f0
m30001| Fri Feb 22 11:24:48.957 [initandlisten] connection accepted from 127.0.0.1:60370 #3 (3 connections now open)
m30002| Fri Feb 22 11:24:48.958 [initandlisten] connection accepted from 127.0.0.1:50557 #3 (3 connections now open)
m30999| Fri Feb 22 11:24:48.958 [conn1] creating WriteBackListener for: localhost:30002 serverID: 51275580ce6119f732c457f0
m30003| Fri Feb 22 11:24:48.958 [initandlisten] connection accepted from 127.0.0.1:42191 #3 (3 connections now open)
m30999| Fri Feb 22 11:24:48.958 [conn1] creating WriteBackListener for: localhost:30003 serverID: 51275580ce6119f732c457f0
m30999| Fri Feb 22 11:24:48.959 [conn1] couldn't find database [bulk_shard_insert] in config db
m30000| Fri Feb 22 11:24:48.959 [initandlisten] connection accepted from 127.0.0.1:61993 #7 (7 connections now open)
m30001| Fri Feb 22 11:24:48.960 [initandlisten] connection accepted from 127.0.0.1:56609 #4 (4 connections now open)
m30002| Fri Feb 22 11:24:48.961 [initandlisten] connection accepted from 127.0.0.1:41136 #4 (4 connections now open)
m30003| Fri Feb 22 11:24:48.961 [initandlisten] connection accepted from 127.0.0.1:52636 #4 (4 connections now open)
m30999| Fri Feb 22 11:24:48.961 [conn1] put [bulk_shard_insert] on: shard0001:localhost:30001
m30999| Fri Feb 22 11:24:48.963 [conn1] enabling sharding on: bulk_shard_insert
m30001| Fri Feb 22 11:24:48.964 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert1/bulk_shard_insert.ns, filling with zeroes...
m30001| Fri Feb 22 11:24:48.964 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert1/bulk_shard_insert.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 11:24:48.964 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert1/bulk_shard_insert.0, filling with zeroes...
m30001| Fri Feb 22 11:24:48.964 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert1/bulk_shard_insert.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 11:24:48.965 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert1/bulk_shard_insert.1, filling with zeroes...
m30001| Fri Feb 22 11:24:48.965 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert1/bulk_shard_insert.1, size: 128MB, took 0 secs
m30001| Fri Feb 22 11:24:48.967 [conn4] build index bulk_shard_insert.coll { _id: 1 }
m30001| Fri Feb 22 11:24:48.968 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 11:24:48.968 [conn4] info: creating collection bulk_shard_insert.coll on add index
m30999| Fri Feb 22 11:24:48.968 [conn1] CMD: shardcollection: { shardcollection: "bulk_shard_insert.coll", key: { _id: 1.0 } }
m30999| Fri Feb 22 11:24:48.968 [conn1] enable sharding on: bulk_shard_insert.coll with shard key: { _id: 1.0 }
m30999| Fri Feb 22 11:24:48.968 [conn1] going to create 1 chunk(s) for: bulk_shard_insert.coll using new epoch 51275580ce6119f732c457f2
m30999| Fri Feb 22 11:24:48.969 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 2 version: 1|0||51275580ce6119f732c457f2 based on: (empty)
m30000| Fri Feb 22 11:24:48.970 [conn3] build index config.collections { _id: 1 }
m30000| Fri Feb 22 11:24:48.971 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 11:24:48.971 [conn3] no current chunk manager found for this shard, will initialize
m30000| Fri Feb 22 11:24:48.972 [initandlisten] connection accepted from 127.0.0.1:64913 #8 (8 connections now open)
Bulk size is 4000
Document size is 141
m30001| Fri Feb 22 11:24:49.280 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : MinKey } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:49.281 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : MinKey } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:49.281 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755815b44ae98eaa729fc') } ], shardId: "bulk_shard_insert.coll-_id_MinKey", configdb: "localhost:30000" }
m30000| Fri Feb 22 11:24:49.282 [initandlisten] connection accepted from 127.0.0.1:37721 #9 (9 connections now open)
m30001| Fri Feb 22 11:24:49.283 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480 (sleeping for 30000ms)
m30001| Fri Feb 22 11:24:49.284 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275581ad0d9d7dc768fedd
m30001| Fri Feb 22 11:24:49.285 [conn4] splitChunk accepted at version 1|0||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:24:49.286 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:49-51275581ad0d9d7dc768fede", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532289286), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: ObjectId('512755815b44ae98eaa729fc') }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:24:49.287 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:24:49.288 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 3 version: 1|2||51275580ce6119f732c457f2 based on: 1|0||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:24:49.288 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey } on: { _id: ObjectId('512755815b44ae98eaa729fc') } (splitThreshold 921)
m30001| Fri Feb 22 11:24:49.667 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755815b44ae98eaa729fc') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:49.671 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755815b44ae98eaa729fc') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:49.672 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755815b44ae98eaa7493b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755815b44ae98eaa729fc')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:24:49.673 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275581ad0d9d7dc768fedf
m30001| Fri Feb 22 11:24:49.674 [conn4] splitChunk accepted at version 1|2||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:24:49.674 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:49-51275581ad0d9d7dc768fee0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532289674), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: ObjectId('512755815b44ae98eaa7493b') }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:24:49.675 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:24:49.675 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 4 version: 1|4||51275580ce6119f732c457f2 based on: 1|2||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:24:49.676 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|2||000000000000000000000000min: { _id: ObjectId('512755815b44ae98eaa729fc') }max: { _id: MaxKey } on: { _id: ObjectId('512755815b44ae98eaa7493b') } (splitThreshold 471859) (migrate suggested, but no migrations allowed)
m30001| Fri Feb 22 11:24:49.919 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755815b44ae98eaa7493b') } -->> { : MaxKey }
Inserted 12000 documents.
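The splitThreshold values mongos reports above (921, then 471859, then 943718) are consistent with a split threshold that ramps up toward roughly 90% of the 1 MB --chunkSize as the collection grows. A back-of-the-envelope sketch of that arithmetic, checked against the numbers printed in this log (the 90% ramp factor is an assumption for illustration, not something the log states):

```python
# Rough arithmetic behind the autosplit thresholds seen in the log above.
# Assumption (not from the log): the steady-state threshold is ~90% of
# the configured chunk size, and the intermediate threshold is half that.
CHUNK_SIZE_BYTES = 1 * 1024 * 1024   # mongos started with --chunkSize 1 (MB)
DOC_SIZE = 141                        # "Document size is 141"
BULK_SIZE = 4000                      # "Bulk size is 4000"

full_threshold = int(CHUNK_SIZE_BYTES * 0.9)   # matches (splitThreshold 943718)
half_threshold = full_threshold // 2           # matches (splitThreshold 471859)

bytes_per_batch = BULK_SIZE * DOC_SIZE         # 564000 bytes per insert batch
docs_before_split = full_threshold // DOC_SIZE # ~6693 docs fill a chunk

print(full_threshold, half_threshold, bytes_per_batch, docs_before_split)
```

So each 4000-document batch writes about 564 KB, more than the intermediate threshold, which is why nearly every batch in this phase of the test triggers a "request split points lookup" on shard0001.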
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
        { "_id" : "shard0001", "host" : "localhost:30001" }
        { "_id" : "shard0002", "host" : "localhost:30002" }
        { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
                bulk_shard_insert.coll
                        shard key: { "_id" : 1 }
                        chunks:
                                shard0001  3
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 }
                        { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 }
                        { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 4 }
m30001| Fri Feb 22 11:24:50.204 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755815b44ae98eaa7493b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:50.446 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755815b44ae98eaa7493b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:50.454 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755815b44ae98eaa7493b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:50.455 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755825b44ae98eaa7781b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755815b44ae98eaa7493b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:24:50.456 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275582ad0d9d7dc768fee1
m30001| Fri Feb 22 11:24:50.457 [conn4] splitChunk accepted at version 1|4||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:24:50.458 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:50-51275582ad0d9d7dc768fee2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532290458), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: ObjectId('512755825b44ae98eaa7781b') }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755825b44ae98eaa7781b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:24:50.458 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:24:50.459 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 5 version: 1|6||51275580ce6119f732c457f2 based on: 1|4||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:24:50.459 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|4||000000000000000000000000min: { _id: ObjectId('512755815b44ae98eaa7493b') }max: { _id: MaxKey } on: { _id: ObjectId('512755825b44ae98eaa7781b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
Inserted 20000 documents.
--- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0001 4 { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 } { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 } { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 } { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 6 } m30001| Fri Feb 22 11:24:50.692 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755825b44ae98eaa7781b') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:50.903 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755825b44ae98eaa7781b') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:51.136 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755825b44ae98eaa7781b') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:51.148 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755825b44ae98eaa7781b') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:51.148 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: 
ObjectId('512755825b44ae98eaa7781b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755835b44ae98eaa7a6fb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755825b44ae98eaa7781b')", configdb: "localhost:30000" } m30001| Fri Feb 22 11:24:51.149 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275583ad0d9d7dc768fee3 m30001| Fri Feb 22 11:24:51.150 [conn4] splitChunk accepted at version 1|6||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:24:51.151 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:51-51275583ad0d9d7dc768fee4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532291151), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755825b44ae98eaa7781b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755825b44ae98eaa7781b') }, max: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30001| Fri Feb 22 11:24:51.151 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. 
m30999| Fri Feb 22 11:24:51.152 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 6 version: 1|8||51275580ce6119f732c457f2 based on: 1|6||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:24:51.153 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|6||000000000000000000000000min: { _id: ObjectId('512755825b44ae98eaa7781b') }max: { _id: MaxKey } on: { _id: ObjectId('512755835b44ae98eaa7a6fb') } (splitThreshold 943718) (migrate suggested, but no migrations allowed) Inserted 32000 documents. --- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0001 5 { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 } { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 } { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 } { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 } { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 8 } m30001| Fri Feb 22 11:24:51.496 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : 
ObjectId('512755835b44ae98eaa7a6fb') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:51.846 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755835b44ae98eaa7a6fb') } -->> { : MaxKey } Inserted 40000 documents. --- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0001 5 { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 } { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 } { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 } { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 } { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 8 } m30001| Fri Feb 22 11:24:52.199 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755835b44ae98eaa7a6fb') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:52.211 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755835b44ae98eaa7a6fb') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:52.213 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", 
keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755845b44ae98eaa7d5db') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755835b44ae98eaa7a6fb')", configdb: "localhost:30000" } m30001| Fri Feb 22 11:24:52.214 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275584ad0d9d7dc768fee5 m30001| Fri Feb 22 11:24:52.215 [conn4] splitChunk accepted at version 1|8||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:24:52.215 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:52-51275584ad0d9d7dc768fee6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532292215), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: ObjectId('512755845b44ae98eaa7d5db') }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30001| Fri Feb 22 11:24:52.216 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. 
m30999| Fri Feb 22 11:24:52.217 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 7 version: 1|10||51275580ce6119f732c457f2 based on: 1|8||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:24:52.217 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|8||000000000000000000000000min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }max: { _id: MaxKey } on: { _id: ObjectId('512755845b44ae98eaa7d5db') } (splitThreshold 943718) (migrate suggested, but no migrations allowed) m30001| Fri Feb 22 11:24:52.559 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755845b44ae98eaa7d5db') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:52.904 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755845b44ae98eaa7d5db') } -->> { : MaxKey } Inserted 52000 documents. --- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0001 6 { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 } { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 } { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 } { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { 
"_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 } { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 } { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 10 } m30001| Fri Feb 22 11:24:53.274 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755845b44ae98eaa7d5db') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:53.286 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755845b44ae98eaa7d5db') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:53.287 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755855b44ae98eaa804bb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755845b44ae98eaa7d5db')", configdb: "localhost:30000" } m30001| Fri Feb 22 11:24:53.288 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275585ad0d9d7dc768fee7 m30001| Fri Feb 22 11:24:53.289 [conn4] splitChunk accepted at version 1|10||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:24:53.290 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:53-51275585ad0d9d7dc768fee8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532293290), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: ObjectId('512755855b44ae98eaa804bb') }, lastmod: Timestamp 1000|11, 
lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755855b44ae98eaa804bb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30001| Fri Feb 22 11:24:53.290 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. m30999| Fri Feb 22 11:24:53.291 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 8 version: 1|12||51275580ce6119f732c457f2 based on: 1|10||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:24:53.291 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|10||000000000000000000000000min: { _id: ObjectId('512755845b44ae98eaa7d5db') }max: { _id: MaxKey } on: { _id: ObjectId('512755855b44ae98eaa804bb') } (splitThreshold 943718) (migrate suggested, but no migrations allowed) m30001| Fri Feb 22 11:24:53.612 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755855b44ae98eaa804bb') } -->> { : MaxKey } Inserted 60000 documents. 
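The ChunkManager lines express the collection version as `major|minor||epoch`: `major|minor` is the version Timestamp and the epoch ObjectId identifies the collection incarnation. Each split leaves the major component alone and advances the minor component twice, once per resulting chunk, which is why the versions in this log march 1|4 → 1|6 → 1|8 and so on. A small helper to pick the string apart (illustrative only, not a MongoDB API):

```python
def parse_chunk_version(s):
    """Split a mongos chunk version string such as
    '1|12||51275580ce6119f732c457f2' into (major, minor, epoch)."""
    ver, epoch = s.split("||")
    major, minor = ver.split("|")
    return int(major), int(minor), epoch

print(parse_chunk_version("1|12||51275580ce6119f732c457f2"))
# -> (1, 12, '51275580ce6119f732c457f2')
```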
--- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0001 7 { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 } { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 } { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 } { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 } { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 } { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 } { "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 12 } m30001| Fri Feb 22 11:24:53.883 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755855b44ae98eaa804bb') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:54.140 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755855b44ae98eaa804bb') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:54.148 [conn4] max number of requested split points reached (2) before the end 
of chunk bulk_shard_insert.coll { : ObjectId('512755855b44ae98eaa804bb') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:54.149 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755855b44ae98eaa804bb') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755865b44ae98eaa8339b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755855b44ae98eaa804bb')", configdb: "localhost:30000" } m30001| Fri Feb 22 11:24:54.149 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275586ad0d9d7dc768fee9 m30001| Fri Feb 22 11:24:54.150 [conn4] splitChunk accepted at version 1|12||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:24:54.151 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:54-51275586ad0d9d7dc768feea", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532294151), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755855b44ae98eaa804bb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755855b44ae98eaa804bb') }, max: { _id: ObjectId('512755865b44ae98eaa8339b') }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30001| Fri Feb 22 11:24:54.151 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. 
m30999| Fri Feb 22 11:24:54.152 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 9 version: 1|14||51275580ce6119f732c457f2 based on: 1|12||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:24:54.152 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|12||000000000000000000000000min: { _id: ObjectId('512755855b44ae98eaa804bb') }max: { _id: MaxKey } on: { _id: ObjectId('512755865b44ae98eaa8339b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed) m30001| Fri Feb 22 11:24:54.395 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755865b44ae98eaa8339b') } -->> { : MaxKey } Inserted 72000 documents. --- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0001 8 { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 } { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 } { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 } { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 } { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : 
ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 } { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 } { "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 } { "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 14 } m30001| Fri Feb 22 11:24:54.647 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755865b44ae98eaa8339b') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:54.901 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755865b44ae98eaa8339b') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:54.909 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755865b44ae98eaa8339b') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:54.909 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755865b44ae98eaa8627b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755865b44ae98eaa8339b')", configdb: "localhost:30000" } m30001| Fri Feb 22 11:24:54.910 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275586ad0d9d7dc768feeb m30001| Fri Feb 22 11:24:54.911 [conn4] splitChunk accepted at version 1|14||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:24:54.911 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:54-51275586ad0d9d7dc768feec", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532294911), what: "split", ns: "bulk_shard_insert.coll", details: { before: 
{ min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: ObjectId('512755865b44ae98eaa8627b') }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755865b44ae98eaa8627b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30001| Fri Feb 22 11:24:54.912 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. m30999| Fri Feb 22 11:24:54.913 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 10 version: 1|16||51275580ce6119f732c457f2 based on: 1|14||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:24:54.913 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|14||000000000000000000000000min: { _id: ObjectId('512755865b44ae98eaa8339b') }max: { _id: MaxKey } on: { _id: ObjectId('512755865b44ae98eaa8627b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed) Inserted 80000 documents. 
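Each "about to log metadata event" entry records a split as a `before` chunk plus the resulting `left` and `right` halves. A well-formed split satisfies three invariants: `left.min == before.min`, `right.max == before.max`, and `left.max == right.min` (the split key). A quick check on values taken from the split logged just above (helper name and dict layout are ours, for illustration):

```python
def check_split(before, left, right):
    """Verify the split-event invariants and return the split point."""
    assert left["min"] == before["min"], "left half must start where the old chunk did"
    assert right["max"] == before["max"], "right half must end where the old chunk did"
    assert left["max"] == right["min"], "split key must join the two halves"
    return left["max"]

# Values from the 11:24:54.911 metadata event above:
split_point = check_split(
    before={"min": "512755865b44ae98eaa8339b", "max": "MaxKey"},
    left={"min": "512755865b44ae98eaa8339b", "max": "512755865b44ae98eaa8627b"},
    right={"min": "512755865b44ae98eaa8627b", "max": "MaxKey"},
)
print(split_point)
# -> 512755865b44ae98eaa8627b
```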
--- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0001 9 { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 } { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 } { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 } { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 } { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 } { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 } { "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 } { "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : ObjectId("512755865b44ae98eaa8627b") } on : shard0001 { "t" : 1000, "i" : 15 } { "_id" : ObjectId("512755865b44ae98eaa8627b") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 16 } m30001| Fri Feb 22 11:24:55.178 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755865b44ae98eaa8627b') } 
-->> { : MaxKey } m30001| Fri Feb 22 11:24:55.434 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755865b44ae98eaa8627b') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:55.683 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755865b44ae98eaa8627b') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:55.702 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755865b44ae98eaa8627b') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:55.702 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755865b44ae98eaa8627b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755875b44ae98eaa8915b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755865b44ae98eaa8627b')", configdb: "localhost:30000" } m30001| Fri Feb 22 11:24:55.703 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275587ad0d9d7dc768feed m30001| Fri Feb 22 11:24:55.704 [conn4] splitChunk accepted at version 1|16||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:24:55.705 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:55-51275587ad0d9d7dc768feee", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532295705), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755865b44ae98eaa8627b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755865b44ae98eaa8627b') }, max: { _id: ObjectId('512755875b44ae98eaa8915b') }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|18, 
lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30001| Fri Feb 22 11:24:55.705 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. m30999| Fri Feb 22 11:24:55.706 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 11 version: 1|18||51275580ce6119f732c457f2 based on: 1|16||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:24:55.706 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|16||000000000000000000000000min: { _id: ObjectId('512755865b44ae98eaa8627b') }max: { _id: MaxKey } on: { _id: ObjectId('512755875b44ae98eaa8915b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed) Inserted 92000 documents. --- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0001 10 { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 } { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 } { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 } { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 } { "_id" : 
ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 } { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 } { "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 } { "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : ObjectId("512755865b44ae98eaa8627b") } on : shard0001 { "t" : 1000, "i" : 15 } { "_id" : ObjectId("512755865b44ae98eaa8627b") } -->> { "_id" : ObjectId("512755875b44ae98eaa8915b") } on : shard0001 { "t" : 1000, "i" : 17 } { "_id" : ObjectId("512755875b44ae98eaa8915b") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 18 } m30001| Fri Feb 22 11:24:56.048 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755875b44ae98eaa8915b') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:56.386 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755875b44ae98eaa8915b') } -->> { : MaxKey } Inserted 100000 documents. 
--- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0001 10 { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 } { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 } { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 } { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 } { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 } { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 } { "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 } { "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : ObjectId("512755865b44ae98eaa8627b") } on : shard0001 { "t" : 1000, "i" : 15 } { "_id" : ObjectId("512755865b44ae98eaa8627b") } -->> { "_id" : ObjectId("512755875b44ae98eaa8915b") } on : shard0001 { "t" : 1000, "i" : 17 } { "_id" : ObjectId("512755875b44ae98eaa8915b") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 18 } 
m30001| Fri Feb 22 11:24:56.749 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755875b44ae98eaa8915b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:56.761 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755875b44ae98eaa8915b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:56.762 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755885b44ae98eaa8c03b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755875b44ae98eaa8915b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:24:56.763 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275588ad0d9d7dc768feef
m30001| Fri Feb 22 11:24:56.766 [conn4] splitChunk accepted at version 1|18||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:24:56.767 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:56-51275588ad0d9d7dc768fef0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532296767), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: ObjectId('512755885b44ae98eaa8c03b') }, lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:24:56.767 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:24:56.768 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 12 version: 1|20||51275580ce6119f732c457f2 based on: 1|18||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:24:56.768 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|18||000000000000000000000000min: { _id: ObjectId('512755875b44ae98eaa8915b') }max: { _id: MaxKey } on: { _id: ObjectId('512755885b44ae98eaa8c03b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30001| Fri Feb 22 11:24:57.093 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755885b44ae98eaa8c03b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:57.354 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755885b44ae98eaa8c03b') } -->> { : MaxKey }
Inserted 112000 documents.
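Each autosplit cycle in the log follows the same bookkeeping: the shard picks a split key, and the chunk's half-open [min, max) range is replaced by a left and a right sub-range whose version minor components ("i" in the status output) are bumped past the collection's current maximum. A minimal sketch of that bookkeeping, with hypothetical names (`Chunk`, `split_chunk`) that are illustrative rather than MongoDB internals:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Chunk:
    min_key: str   # inclusive lower bound of the range
    max_key: str   # exclusive upper bound ("-->>" in the status output)
    major: int     # "t" component of the chunk version
    minor: int     # "i" component of the chunk version

def split_chunk(chunk: Chunk, split_key: str, top_minor: int) -> Tuple[Chunk, Chunk]:
    """Split [min_key, max_key) at split_key; each half gets a fresh minor version."""
    left = Chunk(chunk.min_key, split_key, chunk.major, top_minor + 1)
    right = Chunk(split_key, chunk.max_key, chunk.major, top_minor + 2)
    return left, right

# Mirrors the first split event above: the 1000|18 chunk becomes 1000|19 and 1000|20.
before = Chunk("ObjectId('512755875b44ae98eaa8915b')", "MaxKey", 1000, 18)
left, right = split_chunk(before, "ObjectId('512755885b44ae98eaa8c03b')", top_minor=18)
```

The two halves cover exactly the old range, and their shared boundary is the split key reported in the `splitKeys` field of the request.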
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  11
          { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 }
          { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 }
          { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 }
          { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 }
          { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 }
          { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 }
          { "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 }
          { "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : ObjectId("512755865b44ae98eaa8627b") } on : shard0001 { "t" : 1000, "i" : 15 }
          { "_id" : ObjectId("512755865b44ae98eaa8627b") } -->> { "_id" : ObjectId("512755875b44ae98eaa8915b") } on : shard0001 { "t" : 1000, "i" : 17 }
          { "_id" : ObjectId("512755875b44ae98eaa8915b") } -->> { "_id" : ObjectId("512755885b44ae98eaa8c03b") } on : shard0001 { "t" : 1000, "i" : 19 }
          { "_id" : ObjectId("512755885b44ae98eaa8c03b") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 20 }
m30001| Fri Feb 22 11:24:57.620 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755885b44ae98eaa8c03b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:57.628 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755885b44ae98eaa8c03b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:57.629 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755895b44ae98eaa8ef1b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755885b44ae98eaa8c03b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:24:57.629 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275589ad0d9d7dc768fef1
m30001| Fri Feb 22 11:24:57.630 [conn4] splitChunk accepted at version 1|20||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:24:57.631 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:57-51275589ad0d9d7dc768fef2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532297631), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:24:57.631 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:24:57.632 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 13 version: 1|22||51275580ce6119f732c457f2 based on: 1|20||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:24:57.632 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|20||000000000000000000000000min: { _id: ObjectId('512755885b44ae98eaa8c03b') }max: { _id: MaxKey } on: { _id: ObjectId('512755895b44ae98eaa8ef1b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30001| Fri Feb 22 11:24:57.878 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755895b44ae98eaa8ef1b') } -->> { : MaxKey }
Inserted 120000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  12
          { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 }
          { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 }
          { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 }
          { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 }
          { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 }
          { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 }
          { "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 }
          { "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : ObjectId("512755865b44ae98eaa8627b") } on : shard0001 { "t" : 1000, "i" : 15 }
          { "_id" : ObjectId("512755865b44ae98eaa8627b") } -->> { "_id" : ObjectId("512755875b44ae98eaa8915b") } on : shard0001 { "t" : 1000, "i" : 17 }
          { "_id" : ObjectId("512755875b44ae98eaa8915b") } -->> { "_id" : ObjectId("512755885b44ae98eaa8c03b") } on : shard0001 { "t" : 1000, "i" : 19 }
          { "_id" : ObjectId("512755885b44ae98eaa8c03b") } -->> { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } on : shard0001 { "t" : 1000, "i" : 21 }
          { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 22 }
m30001| Fri Feb 22 11:24:58.138 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755895b44ae98eaa8ef1b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:58.389 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755895b44ae98eaa8ef1b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:58.401 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755895b44ae98eaa8ef1b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:58.401 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('5127558a5b44ae98eaa91dfb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755895b44ae98eaa8ef1b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:24:58.402 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 5127558aad0d9d7dc768fef3
m30001| Fri Feb 22 11:24:58.403 [conn4] splitChunk accepted at version 1|22||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:24:58.404 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:58-5127558aad0d9d7dc768fef4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532298404), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:24:58.404 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:24:58.405 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 14 version: 1|24||51275580ce6119f732c457f2 based on: 1|22||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:24:58.405 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|22||000000000000000000000000min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }max: { _id: MaxKey } on: { _id: ObjectId('5127558a5b44ae98eaa91dfb') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30001| Fri Feb 22 11:24:58.652 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558a5b44ae98eaa91dfb') } -->> { : MaxKey }
Inserted 132000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  13
          { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 }
          { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 }
          { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 }
          { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 }
          { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 }
          { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 }
          { "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 }
          { "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : ObjectId("512755865b44ae98eaa8627b") } on : shard0001 { "t" : 1000, "i" : 15 }
          { "_id" : ObjectId("512755865b44ae98eaa8627b") } -->> { "_id" : ObjectId("512755875b44ae98eaa8915b") } on : shard0001 { "t" : 1000, "i" : 17 }
          { "_id" : ObjectId("512755875b44ae98eaa8915b") } -->> { "_id" : ObjectId("512755885b44ae98eaa8c03b") } on : shard0001 { "t" : 1000, "i" : 19 }
          { "_id" : ObjectId("512755885b44ae98eaa8c03b") } -->> { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } on : shard0001 { "t" : 1000, "i" : 21 }
          { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } -->> { "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } on : shard0001 { "t" : 1000, "i" : 23 }
          { "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 24 }
m30001| Fri Feb 22 11:24:58.936 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558a5b44ae98eaa91dfb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:59.185 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558a5b44ae98eaa91dfb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:59.192 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('5127558a5b44ae98eaa91dfb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:59.193 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('5127558b5b44ae98eaa94cdb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558a5b44ae98eaa91dfb')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:24:59.193 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 5127558bad0d9d7dc768fef5
m30001| Fri Feb 22 11:24:59.194 [conn4] splitChunk accepted at version 1|24||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:24:59.195 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:59-5127558bad0d9d7dc768fef6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532299195), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:24:59.195 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:24:59.196 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 15 version: 1|26||51275580ce6119f732c457f2 based on: 1|24||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:24:59.196 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|24||000000000000000000000000min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }max: { _id: MaxKey } on: { _id: ObjectId('5127558b5b44ae98eaa94cdb') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
Inserted 140000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  14
          { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 }
          { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 }
          { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 }
          { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 }
          { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 }
          { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 }
          { "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 }
          { "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : ObjectId("512755865b44ae98eaa8627b") } on : shard0001 { "t" : 1000, "i" : 15 }
          { "_id" : ObjectId("512755865b44ae98eaa8627b") } -->> { "_id" : ObjectId("512755875b44ae98eaa8915b") } on : shard0001 { "t" : 1000, "i" : 17 }
          { "_id" : ObjectId("512755875b44ae98eaa8915b") } -->> { "_id" : ObjectId("512755885b44ae98eaa8c03b") } on : shard0001 { "t" : 1000, "i" : 19 }
          { "_id" : ObjectId("512755885b44ae98eaa8c03b") } -->> { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } on : shard0001 { "t" : 1000, "i" : 21 }
          { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } -->> { "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } on : shard0001 { "t" : 1000, "i" : 23 }
          { "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } -->> { "_id" : ObjectId("5127558b5b44ae98eaa94cdb") } on : shard0001 { "t" : 1000, "i" : 25 }
          { "_id" : ObjectId("5127558b5b44ae98eaa94cdb") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 26 }
m30001| Fri Feb 22 11:24:59.457 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558b5b44ae98eaa94cdb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:59.715 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558b5b44ae98eaa94cdb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:59.997 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558b5b44ae98eaa94cdb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:00.007 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('5127558b5b44ae98eaa94cdb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:00.007 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('5127558b5b44ae98eaa97bbb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558b5b44ae98eaa94cdb')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:00.008 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 5127558cad0d9d7dc768fef7
m30001| Fri Feb 22 11:25:00.009 [conn4] splitChunk accepted at version 1|26||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:00.010 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:00-5127558cad0d9d7dc768fef8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532300010), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:00.010 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
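The recurring "(splitThreshold 943718)" note explains the cadence of these events: mongos tracks roughly how many bytes have been written to each chunk and only asks the shard for split points once that running total crosses the threshold. An illustrative sketch of that trigger, simplified and with hypothetical names (`ChunkWriteTracker` is not mongos's actual accounting):

```python
# Illustrative autosplit trigger, patterned on the "(splitThreshold 943718)"
# messages in the log. Simplified accounting; not mongos's real implementation.

SPLIT_THRESHOLD = 943718  # bytes, as reported in the log


class ChunkWriteTracker:
    def __init__(self) -> None:
        self.bytes_written = 0

    def note_insert(self, doc_size: int) -> bool:
        """Return True when enough data has accumulated to request split points."""
        self.bytes_written += doc_size
        if self.bytes_written >= SPLIT_THRESHOLD:
            self.bytes_written = 0  # reset after a split-points request
            return True
        return False


tracker = ChunkWriteTracker()
# 4000 inserts of 256-byte documents cross the ~943 KB threshold exactly once.
splits = sum(tracker.note_insert(256) for _ in range(4000))
```

This is why several "request split points lookup" lines can go by between splits: most batches leave the tracker below the threshold, and only occasionally does a check find enough split points to justify a `splitChunk` request.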
m30999| Fri Feb 22 11:25:00.011 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 16 version: 1|28||51275580ce6119f732c457f2 based on: 1|26||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:00.011 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|26||000000000000000000000000min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }max: { _id: MaxKey } on: { _id: ObjectId('5127558b5b44ae98eaa97bbb') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
Inserted 152000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  15
          { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 }
          { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 }
          { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 }
          { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 }
          { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 }
          { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 }
          { "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 }
          { "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : ObjectId("512755865b44ae98eaa8627b") } on : shard0001 { "t" : 1000, "i" : 15 }
          { "_id" : ObjectId("512755865b44ae98eaa8627b") } -->> { "_id" : ObjectId("512755875b44ae98eaa8915b") } on : shard0001 { "t" : 1000, "i" : 17 }
          { "_id" : ObjectId("512755875b44ae98eaa8915b") } -->> { "_id" : ObjectId("512755885b44ae98eaa8c03b") } on : shard0001 { "t" : 1000, "i" : 19 }
          { "_id" : ObjectId("512755885b44ae98eaa8c03b") } -->> { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } on : shard0001 { "t" : 1000, "i" : 21 }
          { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } -->> { "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } on : shard0001 { "t" : 1000, "i" : 23 }
          { "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } -->> { "_id" : ObjectId("5127558b5b44ae98eaa94cdb") } on : shard0001 { "t" : 1000, "i" : 25 }
          { "_id" : ObjectId("5127558b5b44ae98eaa94cdb") } -->> { "_id" : ObjectId("5127558b5b44ae98eaa97bbb") } on : shard0001 { "t" : 1000, "i" : 27 }
          { "_id" : ObjectId("5127558b5b44ae98eaa97bbb") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 28 }
m30001| Fri Feb 22 11:25:00.340 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558b5b44ae98eaa97bbb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:00.673 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558b5b44ae98eaa97bbb') } -->> { : MaxKey }
Inserted 160000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  15
          { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 }
          { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 }
          { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 }
          { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 }
          { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 }
          { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 }
          { "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 }
          { "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : ObjectId("512755865b44ae98eaa8627b") } on : shard0001 { "t" : 1000, "i" : 15 }
          { "_id" : ObjectId("512755865b44ae98eaa8627b") } -->> { "_id" : ObjectId("512755875b44ae98eaa8915b") } on : shard0001 { "t" : 1000, "i" : 17 }
          { "_id" : ObjectId("512755875b44ae98eaa8915b") } -->> { "_id" : ObjectId("512755885b44ae98eaa8c03b") } on : shard0001 { "t" : 1000, "i" : 19 }
          { "_id" : ObjectId("512755885b44ae98eaa8c03b") } -->> { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } on : shard0001 { "t" : 1000, "i" : 21 }
          { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } -->> { "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } on : shard0001 { "t" : 1000, "i" : 23 }
          { "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } -->> { "_id" : ObjectId("5127558b5b44ae98eaa94cdb") } on : shard0001 { "t" : 1000, "i" : 25 }
          { "_id" : ObjectId("5127558b5b44ae98eaa94cdb") } -->> { "_id" : ObjectId("5127558b5b44ae98eaa97bbb") } on : shard0001 { "t" : 1000, "i" : 27 }
          { "_id" : ObjectId("5127558b5b44ae98eaa97bbb") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 28 }
m30001| Fri Feb 22 11:25:01.069 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558b5b44ae98eaa97bbb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:01.080 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('5127558b5b44ae98eaa97bbb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:01.082 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('5127558c5b44ae98eaa9aa9b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558b5b44ae98eaa97bbb')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:01.083 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 5127558dad0d9d7dc768fef9
m30001| Fri Feb 22 11:25:01.083 [conn4] splitChunk accepted at version 1|28||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:01.084 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:01-5127558dad0d9d7dc768fefa", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532301084), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:01.085 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:25:01.085 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 17 version: 1|30||51275580ce6119f732c457f2 based on: 1|28||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:01.086 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|28||000000000000000000000000min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }max: { _id: MaxKey } on: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30001| Fri Feb 22 11:25:01.336 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558c5b44ae98eaa9aa9b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:01.579 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558c5b44ae98eaa9aa9b') } -->> { : MaxKey }
Inserted 172000 documents.
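When scanning a long run like this, the "Inserted N documents." progress counters are the quickest way to correlate insert volume with the split events around them. A small sketch of extracting them (the two-line `log` string here is a shortened stand-in for the real excerpt):

```python
import re

# Pull the "Inserted N documents." progress counters out of a test-log
# excerpt. The sample text below is abbreviated from the log above.
log = """
m30999| ... autosplitted bulk_shard_insert.coll ... Inserted 152000 documents.
m30001| ... request split points lookup ... Inserted 160000 documents.
"""

counts = [int(n) for n in re.findall(r"Inserted (\d+) documents\.", log)]
```

Plotting these counters against the split timestamps makes the roughly fixed number of inserts per autosplit visible at a glance.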
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
      { "_id" : "shard0000", "host" : "localhost:30000" }
      { "_id" : "shard0001", "host" : "localhost:30001" }
      { "_id" : "shard0002", "host" : "localhost:30002" }
      { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
          bulk_shard_insert.coll
              shard key: { "_id" : 1 }
              chunks:
                  shard0001  16
            { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 }
            { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 }
            { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 }
            { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 }
            { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 }
            { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 }
            { "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 }
            { "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : ObjectId("512755865b44ae98eaa8627b") } on : shard0001 { "t" : 1000, "i" : 15 }
            { "_id" : ObjectId("512755865b44ae98eaa8627b") } -->> { "_id" : ObjectId("512755875b44ae98eaa8915b") } on : shard0001 { "t" : 1000, "i" : 17 }
            { "_id" : ObjectId("512755875b44ae98eaa8915b") } -->> { "_id" : ObjectId("512755885b44ae98eaa8c03b") } on : shard0001 { "t" : 1000, "i" : 19 }
            { "_id" : ObjectId("512755885b44ae98eaa8c03b") } -->> { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } on : shard0001 { "t" : 1000, "i" : 21 }
            { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } -->> { "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } on : shard0001 { "t" : 1000, "i" : 23 }
            { "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } -->> { "_id" : ObjectId("5127558b5b44ae98eaa94cdb") } on : shard0001 { "t" : 1000, "i" : 25 }
            { "_id" : ObjectId("5127558b5b44ae98eaa94cdb") } -->> { "_id" : ObjectId("5127558b5b44ae98eaa97bbb") } on : shard0001 { "t" : 1000, "i" : 27 }
            { "_id" : ObjectId("5127558b5b44ae98eaa97bbb") } -->> { "_id" : ObjectId("5127558c5b44ae98eaa9aa9b") } on : shard0001 { "t" : 1000, "i" : 29 }
            { "_id" : ObjectId("5127558c5b44ae98eaa9aa9b") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 30 }
m30001| Fri Feb 22 11:25:01.854 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558c5b44ae98eaa9aa9b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:01.863 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('5127558c5b44ae98eaa9aa9b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:01.863 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('5127558d5b44ae98eaa9d97b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558c5b44ae98eaa9aa9b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:01.864 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 5127558dad0d9d7dc768fefb
m30001| Fri Feb 22 11:25:01.865 [conn4] splitChunk accepted at version 1|30||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:01.865 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:01-5127558dad0d9d7dc768fefc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532301865), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:01.866 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:25:01.873 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 18 version: 1|32||51275580ce6119f732c457f2 based on: 1|30||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:01.873 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.coll shard: shard0001:localhost:30001 lastmod: 1|30||000000000000000000000000 min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') } max: { _id: MaxKey } on: { _id: ObjectId('5127558d5b44ae98eaa9d97b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30001| Fri Feb 22 11:25:02.130 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558d5b44ae98eaa9d97b') } -->> { : MaxKey }
Inserted 180000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
      { "_id" : "shard0000", "host" : "localhost:30000" }
      { "_id" : "shard0001", "host" : "localhost:30001" }
      { "_id" : "shard0002", "host" : "localhost:30002" }
      { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
          bulk_shard_insert.coll
              shard key: { "_id" : 1 }
              chunks:
                  shard0001  17
            { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 }
            { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 }
            { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 }
            { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 }
            { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 }
            { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 }
            { "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 }
            { "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : ObjectId("512755865b44ae98eaa8627b") } on : shard0001 { "t" : 1000, "i" : 15 }
            { "_id" : ObjectId("512755865b44ae98eaa8627b") } -->> { "_id" : ObjectId("512755875b44ae98eaa8915b") } on : shard0001 { "t" : 1000, "i" : 17 }
            { "_id" : ObjectId("512755875b44ae98eaa8915b") } -->> { "_id" : ObjectId("512755885b44ae98eaa8c03b") } on : shard0001 { "t" : 1000, "i" : 19 }
            { "_id" : ObjectId("512755885b44ae98eaa8c03b") } -->> { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } on : shard0001 { "t" : 1000, "i" : 21 }
            { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } -->> { "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } on : shard0001 { "t" : 1000, "i" : 23 }
            { "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } -->> { "_id" : ObjectId("5127558b5b44ae98eaa94cdb") } on : shard0001 { "t" : 1000, "i" : 25 }
            { "_id" : ObjectId("5127558b5b44ae98eaa94cdb") } -->> { "_id" : ObjectId("5127558b5b44ae98eaa97bbb") } on : shard0001 { "t" : 1000, "i" : 27 }
            { "_id" : ObjectId("5127558b5b44ae98eaa97bbb") } -->> { "_id" : ObjectId("5127558c5b44ae98eaa9aa9b") } on : shard0001 { "t" : 1000, "i" : 29 }
            { "_id" : ObjectId("5127558c5b44ae98eaa9aa9b") } -->> { "_id" : ObjectId("5127558d5b44ae98eaa9d97b") } on : shard0001 { "t" : 1000, "i" : 31 }
            { "_id" : ObjectId("5127558d5b44ae98eaa9d97b") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 32 }
m30001| Fri Feb 22 11:25:02.424 [conn3] insert bulk_shard_insert.coll ninserted:4000 keyUpdates:0 locks(micros) w:105771 105ms
m30001| Fri Feb 22 11:25:02.425 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558d5b44ae98eaa9d97b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:02.681 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558d5b44ae98eaa9d97b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:02.689 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('5127558d5b44ae98eaa9d97b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:02.691 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('5127558e5b44ae98eaaa085b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558d5b44ae98eaa9d97b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:02.692 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 5127558ead0d9d7dc768fefd
m30001| Fri Feb 22 11:25:02.692 [conn4] splitChunk accepted at version 1|32||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:02.693 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:02-5127558ead0d9d7dc768fefe", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532302693), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:02.694 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
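The cycle repeating in this log — writes accumulate in the top chunk, mongos asks the shard for split points, the shard splits at a single key just below MaxKey, and the config metadata gains a new `lastmod` pair — can be sketched as follows. This is an illustrative model, not MongoDB source code: `Chunk`, `autosplit`, and the 1 KB document size are hypothetical, while the 943718-byte threshold is taken from the `(splitThreshold 943718)` lines above.

```python
# Hypothetical sketch of the autosplit cycle seen in the log. mongos tracks
# approximate bytes written per chunk; once a chunk crosses the split
# threshold, it requests split points and issues a splitChunk to the shard.

SPLIT_THRESHOLD = 943718  # bytes, from "(splitThreshold 943718)" in the log


class Chunk(object):
    def __init__(self, min_key, max_key):
        self.min_key = min_key
        self.max_key = max_key
        self.bytes_written = 0  # approximate data written since last split


def autosplit(chunks, chunk, doc_key, doc_size):
    """Record a write into `chunk`; split it once it crosses the threshold.

    With monotonically increasing keys (like ObjectId _ids), the split lands
    near the top of the chunk, mirroring the "max number of requested split
    points reached (2)" messages in the log. Returns the split key, or None.
    """
    chunk.bytes_written += doc_size
    if chunk.bytes_written < SPLIT_THRESHOLD:
        return None
    split_key = doc_key  # stand-in for the shard's split-point lookup
    left = Chunk(chunk.min_key, split_key)
    right = Chunk(split_key, chunk.max_key)
    idx = chunks.index(chunk)
    chunks[idx:idx + 1] = [left, right]
    return split_key


# Simulate the bulk insert: increasing keys, ~1 KB documents, 4000 docs per
# batch as in the "ninserted:4000" log line. Writes always hit the last
# (MaxKey-bounded) chunk, so only the top chunk ever splits.
chunks = [Chunk("MinKey", "MaxKey")]
for i in range(4000):
    autosplit(chunks, chunks[-1], i, 1024)

print(len(chunks))
```

Each split resets the byte counter of the new top chunk, so roughly one split occurs per ~944 KB of inserts; migrations are suggested but suppressed in this test, which is why every chunk stays on shard0001.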
m30999| Fri Feb 22 11:25:02.695 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 19 version: 1|34||51275580ce6119f732c457f2 based on: 1|32||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:02.695 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.coll shard: shard0001:localhost:30001 lastmod: 1|32||000000000000000000000000 min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') } max: { _id: MaxKey } on: { _id: ObjectId('5127558e5b44ae98eaaa085b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30001| Fri Feb 22 11:25:02.937 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558e5b44ae98eaaa085b') } -->> { : MaxKey }
Inserted 192000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
      { "_id" : "shard0000", "host" : "localhost:30000" }
      { "_id" : "shard0001", "host" : "localhost:30001" }
      { "_id" : "shard0002", "host" : "localhost:30002" }
      { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
          bulk_shard_insert.coll
              shard key: { "_id" : 1 }
              chunks:
                  shard0001  18
            { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 }
            { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 }
            { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 }
            { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 }
            { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 10