2012-08-22 09:45:20 EDT | Wed Aug 22 09:45:20 [conn37] end connection 127.0.0.1:64877 (0 connections now open) |
2012-08-22 09:45:21 EDT | MongoDB shell version: 2.2.0-rc2-pre- |
| null |
| Resetting db path '/data/db/find_and_modify_sharded_20' |
| Wed Aug 22 09:45:21 shell: started program /data/buildslaves/OS_X_105_32bit_V2.2/mongo/mongod --port 30000 --dbpath /data/db/find_and_modify_sharded_20 |
| m30000| Wed Aug 22 09:45:21 [initandlisten] |
| m30000| Wed Aug 22 09:45:21 [initandlisten] MongoDB starting : pid=16413 port=30000 dbpath=/data/db/find_and_modify_sharded_20 64-bit host=bs-osx-106-i386-1.local |
| m30000| Wed Aug 22 09:45:21 [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 532 processes, 10240 files. Number of processes should be at least 5120 : 0.5 times number of files. |
| m30000| Wed Aug 22 09:45:21 [initandlisten] db version v2.2.0-rc2-pre-, pdfile version 4.5 |
| m30000| Wed Aug 22 09:45:21 [initandlisten] build info: Darwin bs-osx-106-i386-1.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386 BOOST_LIB_VERSION=1_49 |
| m30000| Wed Aug 22 09:45:21 [initandlisten] git version: 45d66f6b12d8e6faee340c915340256ae1f0a221 |
| m30000| Wed Aug 22 09:45:21 [initandlisten] options: { dbpath: "/data/db/find_and_modify_sharded_20", port: 30000 } |
| m30000| Wed Aug 22 09:45:21 [initandlisten] journal dir=/data/db/find_and_modify_sharded_20/journal |
| m30000| Wed Aug 22 09:45:21 [initandlisten] recover : no journal files present, no recovery needed |
| m30000| Wed Aug 22 09:45:21 [initandlisten] waiting for connections on port 30000 |
| m30000| Wed Aug 22 09:45:21 [websvr] admin web console waiting for connections on port 31000 |
| Resetting db path '/data/db/find_and_modify_sharded_21' |
| m30000| Wed Aug 22 09:45:21 [initandlisten] connection accepted from 127.0.0.1:64885 #1 (1 connection now open) |
| Wed Aug 22 09:45:22 shell: started program /data/buildslaves/OS_X_105_32bit_V2.2/mongo/mongod --port 30001 --dbpath /data/db/find_and_modify_sharded_21 |
| m30001| Wed Aug 22 09:45:22 [initandlisten] |
| m30001| Wed Aug 22 09:45:22 [initandlisten] MongoDB starting : pid=16414 port=30001 dbpath=/data/db/find_and_modify_sharded_21 64-bit host=bs-osx-106-i386-1.local |
| m30001| Wed Aug 22 09:45:22 [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 532 processes, 10240 files. Number of processes should be at least 5120 : 0.5 times number of files. |
| m30001| Wed Aug 22 09:45:22 [initandlisten] db version v2.2.0-rc2-pre-, pdfile version 4.5 |
| m30001| Wed Aug 22 09:45:22 [initandlisten] build info: Darwin bs-osx-106-i386-1.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386 BOOST_LIB_VERSION=1_49 |
| m30001| Wed Aug 22 09:45:22 [initandlisten] git version: 45d66f6b12d8e6faee340c915340256ae1f0a221 |
| m30001| Wed Aug 22 09:45:22 [initandlisten] options: { dbpath: "/data/db/find_and_modify_sharded_21", port: 30001 } |
| m30001| Wed Aug 22 09:45:22 [initandlisten] journal dir=/data/db/find_and_modify_sharded_21/journal |
| m30001| Wed Aug 22 09:45:22 [initandlisten] recover : no journal files present, no recovery needed |
| m30001| Wed Aug 22 09:45:22 [initandlisten] waiting for connections on port 30001 |
| m30001| Wed Aug 22 09:45:22 [initandlisten] connection accepted from 127.0.0.1:64887 #1 (1 connection now open) |
| m30001| Wed Aug 22 09:45:22 [websvr] admin web console waiting for connections on port 31001 |
| "localhost:30000" |
| m30000| Wed Aug 22 09:45:22 [initandlisten] connection accepted from 127.0.0.1:64888 #2 (2 connections now open) |
| ShardingTest find_and_modify_sharded_2 : |
| { |
| "config" : "localhost:30000", |
| "shards" : [ |
| connection to localhost:30000, |
| connection to localhost:30001 |
| ] |
| } |
| Wed Aug 22 09:45:22 shell: started program /data/buildslaves/OS_X_105_32bit_V2.2/mongo/mongos --port 30999 --configdb localhost:30000 -vv --chunkSize 1 |
2012-08-22 09:45:24 EDT | m30999| Wed Aug 22 09:45:22 warning: running with 1 config server should be done only for testing purposes and is not recommended for production |
| m30999| Wed Aug 22 09:45:22 [mongosMain] MongoS version 2.2.0-rc2-pre- starting: pid=16415 port=30999 64-bit host=bs-osx-106-i386-1.local (--help for usage) |
| m30999| Wed Aug 22 09:45:22 [mongosMain] build info: Darwin bs-osx-106-i386-1.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386 BOOST_LIB_VERSION=1_49 |
| m30999| Wed Aug 22 09:45:22 [mongosMain] git version: 45d66f6b12d8e6faee340c915340256ae1f0a221 |
| m30999| Wed Aug 22 09:45:22 [mongosMain] config string : localhost:30000 |
| m30999| Wed Aug 22 09:45:22 [mongosMain] creating new connection to:localhost:30000 |
| m30999| Wed Aug 22 09:45:22 BackgroundJob starting: ConnectBG |
| m30000| Wed Aug 22 09:45:22 [initandlisten] connection accepted from 127.0.0.1:64890 #3 (3 connections now open) |
| m30999| Wed Aug 22 09:45:22 [mongosMain] connected connection! |
| m30999| Wed Aug 22 09:45:22 BackgroundJob starting: CheckConfigServers |
| m30999| Wed Aug 22 09:45:22 [CheckConfigServers] creating new connection to:localhost:30000 |
| m30999| Wed Aug 22 09:45:22 BackgroundJob starting: ConnectBG |
| m30999| Wed Aug 22 09:45:22 [mongosMain] options: { chunkSize: 1, configdb: "localhost:30000", port: 30999, vv: true } |
| m30000| Wed Aug 22 09:45:22 [initandlisten] connection accepted from 127.0.0.1:64891 #4 (4 connections now open) |
| m30999| Wed Aug 22 09:45:22 [CheckConfigServers] connected connection! |
| m30999| Wed Aug 22 09:45:22 [mongosMain] creating new connection to:localhost:30000 |
| m30999| Wed Aug 22 09:45:22 BackgroundJob starting: ConnectBG |
| m30000| Wed Aug 22 09:45:22 [initandlisten] connection accepted from 127.0.0.1:64892 #5 (5 connections now open) |
| m30999| Wed Aug 22 09:45:22 [mongosMain] connected connection! |
| m30000| Wed Aug 22 09:45:22 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded_20/config.ns, filling with zeroes... |
| m30000| Wed Aug 22 09:45:22 [FileAllocator] creating directory /data/db/find_and_modify_sharded_20/_tmp |
| m30000| Wed Aug 22 09:45:22 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded_20/config.0, filling with zeroes... |
| m30000| Wed Aug 22 09:45:22 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded_20/config.ns, size: 16MB, took 0.46 secs |
| m30000| Wed Aug 22 09:45:24 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded_20/config.1, filling with zeroes... |
| m30000| Wed Aug 22 09:45:24 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded_20/config.0, size: 64MB, took 1.518 secs |
| m30000| Wed Aug 22 09:45:24 [conn5] build index config.version { _id: 1 } |
| m30000| Wed Aug 22 09:45:24 [conn5] build index done. scanned 0 total records. 0 secs |
| m30000| Wed Aug 22 09:45:24 [conn3] build index config.settings { _id: 1 } |
| m30000| Wed Aug 22 09:45:24 [conn3] build index done. scanned 0 total records. 0.002 secs |
| m30000| Wed Aug 22 09:45:24 [conn3] build index config.chunks { _id: 1 } |
| m30000| Wed Aug 22 09:45:24 [conn3] build index done. scanned 0 total records. 0 secs |
| m30000| Wed Aug 22 09:45:24 [conn3] info: creating collection config.chunks on add index |
| m30000| Wed Aug 22 09:45:24 [conn3] build index config.chunks { ns: 1, min: 1 } |
| m30000| Wed Aug 22 09:45:24 [conn3] build index done. scanned 0 total records. 0 secs |
| m30000| Wed Aug 22 09:45:24 [conn5] insert config.version keyUpdates:0 locks(micros) w:2070343 2070ms |
| m30000| Wed Aug 22 09:45:24 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 } |
| m30000| Wed Aug 22 09:45:24 [conn3] build index config.chunks { ns: 1, lastmod: 1 } |
| m30000| Wed Aug 22 09:45:24 [conn3] build index done. scanned 0 total records. 0 secs |
| m30000| Wed Aug 22 09:45:24 [conn3] build index config.shards { _id: 1 } |
| m30000| Wed Aug 22 09:45:24 [conn3] build index done. scanned 0 total records. 0 secs |
| m30999| Wed Aug 22 09:45:24 [websvr] admin web console waiting for connections on port 31999 |
| m30999| Wed Aug 22 09:45:24 BackgroundJob starting: Balancer |
| m30999| Wed Aug 22 09:45:24 [websvr] fd limit hard:10240 soft:10240 max conn: 8192 |
| m30999| Wed Aug 22 09:45:24 BackgroundJob starting: cursorTimeout |
| m30999| Wed Aug 22 09:45:24 BackgroundJob starting: PeriodicTask::Runner |
| m30999| Wed Aug 22 09:45:24 [Balancer] about to contact config servers and shards |
| m30999| Wed Aug 22 09:45:24 [mongosMain] fd limit hard:10240 soft:10240 max conn: 8192 |
| m30000| Wed Aug 22 09:45:24 [conn3] build index done. scanned 0 total records. 0 secs |
| m30000| Wed Aug 22 09:45:24 [conn3] info: creating collection config.shards on add index |
| m30000| Wed Aug 22 09:45:24 [conn3] build index config.shards { host: 1 } |
| m30000| Wed Aug 22 09:45:24 [conn3] build index done. scanned 0 total records. 0 secs |
| m30999| Wed Aug 22 09:45:24 [mongosMain] waiting for connections on port 30999 |
| m30000| Wed Aug 22 09:45:24 [conn5] build index config.mongos { _id: 1 } |
| m30000| Wed Aug 22 09:45:24 [conn5] build index done. scanned 0 total records. 0 secs |
| m30000| Wed Aug 22 09:45:24 [conn3] build index config.lockpings { _id: 1 } |
| m30000| Wed Aug 22 09:45:24 [conn3] build index done. scanned 0 total records. 0 secs |
| m30000| Wed Aug 22 09:45:24 [initandlisten] connection accepted from 127.0.0.1:64903 #6 (6 connections now open) |
| m30999| Wed Aug 22 09:45:24 [Balancer] config servers and shards contacted successfully |
| m30999| Wed Aug 22 09:45:24 [Balancer] balancer id: bs-osx-106-i386-1.local:30999 started at Aug 22 09:45:24 |
| m30999| Wed Aug 22 09:45:24 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30999| Wed Aug 22 09:45:24 [Balancer] creating new connection to:localhost:30000 |
| m30999| Wed Aug 22 09:45:24 BackgroundJob starting: ConnectBG |
| m30999| Wed Aug 22 09:45:24 [Balancer] connected connection! |
| m30999| Wed Aug 22 09:45:24 [Balancer] Refreshing MaxChunkSize: 1 |
| m30999| Wed Aug 22 09:45:24 [Balancer] skew from remote server localhost:30000 found: -1 |
| m30999| Wed Aug 22 09:45:24 [Balancer] skew from remote server localhost:30000 found: 0 |
| m30999| Wed Aug 22 09:45:24 [Balancer] skew from remote server localhost:30000 found: 0 |
| m30999| Wed Aug 22 09:45:24 [Balancer] total clock skew of 0ms for servers localhost:30000 is in 30000ms bounds. |
| m30999| Wed Aug 22 09:45:24 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-osx-106-i386-1.local:30999:1345643124:16807 (sleeping for 30000ms) |
| m30000| Wed Aug 22 09:45:24 [conn3] build index config.lockpings { ping: 1 } |
| m30000| Wed Aug 22 09:45:24 [conn3] build index done. scanned 1 total records. 0.001 secs |
| m30999| Wed Aug 22 09:45:24 [LockPinger] cluster localhost:30000 pinged successfully at Wed Aug 22 09:45:24 2012 by distributed lock pinger 'localhost:30000/bs-osx-106-i386-1.local:30999:1345643124:16807', sleeping for 30000ms |
| m30999| Wed Aug 22 09:45:24 [Balancer] inserting initial doc in config.locks for lock balancer |
| m30000| Wed Aug 22 09:45:24 [conn6] build index config.locks { _id: 1 } |
| m30000| Wed Aug 22 09:45:24 [conn6] build index done. scanned 0 total records. 0.001 secs |
| m30999| Wed Aug 22 09:45:24 [Balancer] about to acquire distributed lock 'balancer/bs-osx-106-i386-1.local:30999:1345643124:16807: |
| m30999| { "state" : 1, |
| m30999| "who" : "bs-osx-106-i386-1.local:30999:1345643124:16807:Balancer:282475249", |
| m30999| "process" : "bs-osx-106-i386-1.local:30999:1345643124:16807", |
| m30999| "when" : { "$date" : "Wed Aug 22 09:45:24 2012" }, |
| m30999| "why" : "doing balance round", |
| m30999| "ts" : { "$oid" : "5034e27434a70f9f6800f33f" } } |
| m30999| { "_id" : "balancer", |
| m30999| "state" : 0 } |
| m30999| Wed Aug 22 09:45:24 [Balancer] distributed lock 'balancer/bs-osx-106-i386-1.local:30999:1345643124:16807' acquired, ts : 5034e27434a70f9f6800f33f |
| m30999| Wed Aug 22 09:45:24 [Balancer] *** start balancing round |
| m30999| Wed Aug 22 09:45:24 [Balancer] no collections to balance |
| m30999| Wed Aug 22 09:45:24 [Balancer] no need to move any chunk |
| m30999| Wed Aug 22 09:45:24 [Balancer] *** end of balancing round |
| m30999| Wed Aug 22 09:45:24 [Balancer] distributed lock 'balancer/bs-osx-106-i386-1.local:30999:1345643124:16807' unlocked. |
| m30999| Wed Aug 22 09:45:24 [mongosMain] connection accepted from 127.0.0.1:64904 #1 (1 connection now open) |
| ShardingTest undefined going to add shard : localhost:30000 |
| m30000| Wed Aug 22 09:45:24 [conn3] build index config.databases { _id: 1 } |
| m30000| Wed Aug 22 09:45:24 [conn3] build index done. scanned 0 total records. 0 secs |
| m30999| Wed Aug 22 09:45:24 [conn1] couldn't find database [admin] in config db |
| m30999| Wed Aug 22 09:45:24 [conn1] put [admin] on: config:localhost:30000 |
| m30999| Wed Aug 22 09:45:24 [conn1] going to add shard: { _id: "shard0000", host: "localhost:30000" } |
| { "shardAdded" : "shard0000", "ok" : 1 } |
| ShardingTest undefined going to add shard : localhost:30001 |
| m30999| Wed Aug 22 09:45:24 [conn1] creating new connection to:localhost:30001 |
| m30999| Wed Aug 22 09:45:24 BackgroundJob starting: ConnectBG |
| m30001| Wed Aug 22 09:45:24 [initandlisten] connection accepted from 127.0.0.1:64906 #2 (2 connections now open) |
| m30999| Wed Aug 22 09:45:24 [conn1] connected connection! |
| m30999| Wed Aug 22 09:45:24 [conn1] going to add shard: { _id: "shard0001", host: "localhost:30001" } |
| { "shardAdded" : "shard0001", "ok" : 1 } |
| m30999| Wed Aug 22 09:45:24 [conn1] couldn't find database [test] in config db |
| m30999| Wed Aug 22 09:45:24 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 0 writeLock: 0 |
| m30999| Wed Aug 22 09:45:24 [conn1] put [test] on: shard0001:localhost:30001 |
| m30999| Wed Aug 22 09:45:24 [conn1] enabling sharding on: test |
| m30999| Wed Aug 22 09:45:24 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: -1, options: 0, query: { _id: "test" }, fields: {} } and CInfo { v_ns: "", filter: {} } |
| m30999| Wed Aug 22 09:45:24 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] |
| m30999| Wed Aug 22 09:45:24 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } |
| m30999| Wed Aug 22 09:45:24 [conn1] creating new connection to:localhost:30000 |
| m30999| Wed Aug 22 09:45:24 BackgroundJob starting: ConnectBG |
| m30000| Wed Aug 22 09:45:24 [initandlisten] connection accepted from 127.0.0.1:64907 #7 (7 connections now open) |
| m30999| Wed Aug 22 09:45:24 [conn1] connected connection! |
| m30999| Wed Aug 22 09:45:24 [conn1] creating WriteBackListener for: localhost:30000 serverID: 5034e27434a70f9f6800f33e |
| m30999| Wed Aug 22 09:45:24 BackgroundJob starting: WriteBackListener-localhost:30000 |
| m30999| Wed Aug 22 09:45:24 [conn1] initializing shard connection to localhost:30000 |
| m30999| Wed Aug 22 09:45:24 [conn1] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('5034e27434a70f9f6800f33e'), authoritative: true } |
| m30999| Wed Aug 22 09:45:24 [conn1] creating new connection to:localhost:30001 |
| m30999| Wed Aug 22 09:45:24 BackgroundJob starting: ConnectBG |
| m30001| Wed Aug 22 09:45:24 [initandlisten] connection accepted from 127.0.0.1:64908 #3 (3 connections now open) |
| m30999| Wed Aug 22 09:45:24 [conn1] connected connection! |
| m30999| Wed Aug 22 09:45:24 [conn1] creating WriteBackListener for: localhost:30001 serverID: 5034e27434a70f9f6800f33e |
| m30999| Wed Aug 22 09:45:24 BackgroundJob starting: WriteBackListener-localhost:30001 |
| m30999| Wed Aug 22 09:45:24 [conn1] initializing shard connection to localhost:30001 |
| m30999| Wed Aug 22 09:45:24 [conn1] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('5034e27434a70f9f6800f33e'), authoritative: true } |
| m30999| Wed Aug 22 09:45:24 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } |
| m30999| Wed Aug 22 09:45:24 [conn1] [pcursor] finishing over 1 shards |
| m30999| Wed Aug 22 09:45:24 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } |
| m30999| Wed Aug 22 09:45:24 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test", partitioned: true, primary: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } |
| m30999| Wed Aug 22 09:45:24 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: -1, options: 0, query: { _id: "shard0001" }, fields: {} } and CInfo { v_ns: "", filter: {} } |
| m30999| Wed Aug 22 09:45:24 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] |
| m30999| Wed Aug 22 09:45:24 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } |
| m30999| Wed Aug 22 09:45:24 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } |
| m30999| Wed Aug 22 09:45:24 [conn1] [pcursor] finishing over 1 shards |
| m30999| Wed Aug 22 09:45:24 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } |
| m30999| Wed Aug 22 09:45:24 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0001", host: "localhost:30001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } |
| ---------- Creating large payload... |
| ---------- Done. |
2012-08-22 09:45:30 EDT | m30999| Wed Aug 22 09:45:24 [conn1] DROP: test.stuff_col_update |
| m30001| Wed Aug 22 09:45:24 [conn3] CMD: drop test.stuff_col_update |
| m30999| Wed Aug 22 09:45:24 [conn1] DROP: test.stuff_col_update_upsert |
| m30001| Wed Aug 22 09:45:24 [conn3] CMD: drop test.stuff_col_update_upsert |
| m30999| Wed Aug 22 09:45:24 [conn1] DROP: test.stuff_col_fam |
| m30001| Wed Aug 22 09:45:24 [conn3] CMD: drop test.stuff_col_fam |
| m30999| Wed Aug 22 09:45:24 [conn1] DROP: test.stuff_col_fam_upsert |
| m30001| Wed Aug 22 09:45:24 [conn3] CMD: drop test.stuff_col_fam_upsert |
| m30999| Wed Aug 22 09:45:24 [conn1] creating new connection to:localhost:30001 |
| m30999| Wed Aug 22 09:45:24 BackgroundJob starting: ConnectBG |
| m30001| Wed Aug 22 09:45:24 [initandlisten] connection accepted from 127.0.0.1:64909 #4 (4 connections now open) |
| m30999| Wed Aug 22 09:45:24 [conn1] connected connection! |
| m30999| Wed Aug 22 09:45:24 [conn1] CMD: shardcollection: { shardcollection: "test.stuff_col_update", key: { _id: 1.0 } } |
| m30999| Wed Aug 22 09:45:24 [conn1] enable sharding on: test.stuff_col_update with shard key: { _id: 1.0 } |
| m30001| Wed Aug 22 09:45:24 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded_21/test.ns, filling with zeroes... |
| m30001| Wed Aug 22 09:45:24 [FileAllocator] creating directory /data/db/find_and_modify_sharded_21/_tmp |
| m30001| Wed Aug 22 09:45:25 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded_21/test.0, filling with zeroes... |
| m30001| Wed Aug 22 09:45:25 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded_21/test.ns, size: 16MB, took 0.943 secs |
| m30001| Wed Aug 22 09:45:29 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded_21/test.0, size: 64MB, took 4.185 secs |
| m30001| Wed Aug 22 09:45:29 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded_21/test.1, filling with zeroes... |
| m30001| Wed Aug 22 09:45:29 [conn4] build index test.stuff_col_update { _id: 1 } |
| m30001| Wed Aug 22 09:45:30 [conn4] build index done. scanned 0 total records. 0.474 secs |
| m30001| Wed Aug 22 09:45:30 [conn4] info: creating collection test.stuff_col_update on add index |
| m30001| Wed Aug 22 09:45:30 [conn4] insert test.system.indexes keyUpdates:0 locks(micros) w:5730999 5731ms |
| m30999| Wed Aug 22 09:45:30 [conn1] going to create 1 chunk(s) for: test.stuff_col_update using new epoch 5034e27a34a70f9f6800f340 |
| m30999| Wed Aug 22 09:45:30 [conn1] ChunkManager: time to load chunks for test.stuff_col_update: 0ms sequenceNumber: 2 version: 1|0||5034e27a34a70f9f6800f340 based on: (empty) |
| m30999| Wed Aug 22 09:45:30 [conn1] loaded 1 chunks into new chunk manager for test.stuff_col_update with version 1|0||5034e27a34a70f9f6800f340 |
| m30999| Wed Aug 22 09:45:30 [conn1] have to set shard version for conn: localhost:30000 ns:test.stuff_col_update my last seq: 0 current: 2 version: 0|0||000000000000000000000000 manager: 0x100b05510 |
| m30999| Wed Aug 22 09:45:30 [conn1] resetting shard version of test.stuff_col_update on localhost:30000, version is zero |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion shard0000 localhost:30000 test.stuff_col_update { setShardVersion: "test.stuff_col_update", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0000", shardHost: "localhost:30000" } 0x100b05ab0 |
| m30000| Wed Aug 22 09:45:30 [conn3] build index config.collections { _id: 1 } |
| m30000| Wed Aug 22 09:45:30 [conn3] build index done. scanned 0 total records. 0.001 secs |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } |
| m30999| Wed Aug 22 09:45:30 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_update my last seq: 0 current: 2 version: 1|0||5034e27a34a70f9f6800f340 manager: 0x100b05510 |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_update { setShardVersion: "test.stuff_col_update", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('5034e27a34a70f9f6800f340'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion failed! |
| m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.stuff_col_update", need_authoritative: true, errmsg: "first time for collection 'test.stuff_col_update'", ok: 0.0 } |
| m30999| Wed Aug 22 09:45:30 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_update my last seq: 0 current: 2 version: 1|0||5034e27a34a70f9f6800f340 manager: 0x100b05510 |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_update { setShardVersion: "test.stuff_col_update", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('5034e27a34a70f9f6800f340'), serverID: ObjectId('5034e27434a70f9f6800f33e'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30000| Wed Aug 22 09:45:30 [initandlisten] connection accepted from 127.0.0.1:64911 #8 (8 connections now open) |
| m30001| Wed Aug 22 09:45:30 [conn3] no current chunk manager found for this shard, will initialize |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } |
| m30001| Wed Aug 22 09:45:30 [conn4] build index test.stuff_col_update_upsert { _id: 1 } |
| m30999| Wed Aug 22 09:45:30 [conn1] CMD: shardcollection: { shardcollection: "test.stuff_col_update_upsert", key: { _id: 1.0 } } |
| m30001| Wed Aug 22 09:45:30 [conn4] build index done. scanned 0 total records. 0.089 secs |
| m30999| Wed Aug 22 09:45:30 [conn1] enable sharding on: test.stuff_col_update_upsert with shard key: { _id: 1.0 } |
| m30001| Wed Aug 22 09:45:30 [conn4] info: creating collection test.stuff_col_update_upsert on add index |
| m30999| Wed Aug 22 09:45:30 [conn1] going to create 1 chunk(s) for: test.stuff_col_update_upsert using new epoch 5034e27a34a70f9f6800f341 |
| m30999| Wed Aug 22 09:45:30 [conn1] ChunkManager: time to load chunks for test.stuff_col_update_upsert: 0ms sequenceNumber: 3 version: 1|0||5034e27a34a70f9f6800f341 based on: (empty) |
| m30999| Wed Aug 22 09:45:30 [conn1] loaded 1 chunks into new chunk manager for test.stuff_col_update_upsert with version 1|0||5034e27a34a70f9f6800f341 |
| m30999| Wed Aug 22 09:45:30 [conn1] have to set shard version for conn: localhost:30000 ns:test.stuff_col_update_upsert my last seq: 0 current: 3 version: 0|0||000000000000000000000000 manager: 0x1009109e0 |
| m30999| Wed Aug 22 09:45:30 [conn1] resetting shard version of test.stuff_col_update_upsert on localhost:30000, version is zero |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion shard0000 localhost:30000 test.stuff_col_update_upsert { setShardVersion: "test.stuff_col_update_upsert", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0000", shardHost: "localhost:30000" } 0x100b05ab0 |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } |
| m30999| Wed Aug 22 09:45:30 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_update_upsert my last seq: 0 current: 3 version: 1|0||5034e27a34a70f9f6800f341 manager: 0x1009109e0 |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_update_upsert { setShardVersion: "test.stuff_col_update_upsert", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('5034e27a34a70f9f6800f341'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion failed! |
| m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.stuff_col_update_upsert", need_authoritative: true, errmsg: "first time for collection 'test.stuff_col_update_upsert'", ok: 0.0 } |
| m30999| Wed Aug 22 09:45:30 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_update_upsert my last seq: 0 current: 3 version: 1|0||5034e27a34a70f9f6800f341 manager: 0x1009109e0 |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_update_upsert { setShardVersion: "test.stuff_col_update_upsert", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('5034e27a34a70f9f6800f341'), serverID: ObjectId('5034e27434a70f9f6800f33e'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30001| Wed Aug 22 09:45:30 [conn3] no current chunk manager found for this shard, will initialize |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } |
| m30999| Wed Aug 22 09:45:30 [conn1] CMD: shardcollection: { shardcollection: "test.stuff_col_fam", key: { _id: 1.0 } } |
| m30001| Wed Aug 22 09:45:30 [conn4] build index test.stuff_col_fam { _id: 1 } |
| m30001| Wed Aug 22 09:45:30 [conn4] build index done. scanned 0 total records. 0 secs |
| m30001| Wed Aug 22 09:45:30 [conn4] info: creating collection test.stuff_col_fam on add index |
| m30999| Wed Aug 22 09:45:30 [conn1] enable sharding on: test.stuff_col_fam with shard key: { _id: 1.0 } |
| m30999| Wed Aug 22 09:45:30 [conn1] going to create 1 chunk(s) for: test.stuff_col_fam using new epoch 5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:30 [conn1] ChunkManager: time to load chunks for test.stuff_col_fam: 0ms sequenceNumber: 4 version: 1|0||5034e27a34a70f9f6800f342 based on: (empty) |
| m30999| Wed Aug 22 09:45:30 [conn1] loaded 1 chunks into new chunk manager for test.stuff_col_fam with version 1|0||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:30 [conn1] have to set shard version for conn: localhost:30000 ns:test.stuff_col_fam my last seq: 0 current: 4 version: 0|0||000000000000000000000000 manager: 0x100b07c80 |
| m30999| Wed Aug 22 09:45:30 [conn1] resetting shard version of test.stuff_col_fam on localhost:30000, version is zero |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion shard0000 localhost:30000 test.stuff_col_fam { setShardVersion: "test.stuff_col_fam", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0000", shardHost: "localhost:30000" } 0x100b05ab0 |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } |
| m30999| Wed Aug 22 09:45:30 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam my last seq: 0 current: 4 version: 1|0||5034e27a34a70f9f6800f342 manager: 0x100b07c80 |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_fam { setShardVersion: "test.stuff_col_fam", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('5034e27a34a70f9f6800f342'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion failed! |
| m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.stuff_col_fam", need_authoritative: true, errmsg: "first time for collection 'test.stuff_col_fam'", ok: 0.0 } |
| m30999| Wed Aug 22 09:45:30 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam my last seq: 0 current: 4 version: 1|0||5034e27a34a70f9f6800f342 manager: 0x100b07c80 |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_fam { setShardVersion: "test.stuff_col_fam", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('5034e27a34a70f9f6800f342'), serverID: ObjectId('5034e27434a70f9f6800f33e'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30001| Wed Aug 22 09:45:30 [conn3] no current chunk manager found for this shard, will initialize |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } |
| m30999| Wed Aug 22 09:45:30 [conn1] CMD: shardcollection: { shardcollection: "test.stuff_col_fam_upsert", key: { _id: 1.0 } } |
| m30001| Wed Aug 22 09:45:30 [conn4] build index test.stuff_col_fam_upsert { _id: 1 } |
| m30001| Wed Aug 22 09:45:30 [conn4] build index done. scanned 0 total records. 0 secs |
| m30001| Wed Aug 22 09:45:30 [conn4] info: creating collection test.stuff_col_fam_upsert on add index |
| m30999| Wed Aug 22 09:45:30 [conn1] enable sharding on: test.stuff_col_fam_upsert with shard key: { _id: 1.0 } |
| m30999| Wed Aug 22 09:45:30 [conn1] going to create 1 chunk(s) for: test.stuff_col_fam_upsert using new epoch 5034e27a34a70f9f6800f343 |
| m30999| Wed Aug 22 09:45:30 [conn1] ChunkManager: time to load chunks for test.stuff_col_fam_upsert: 0ms sequenceNumber: 5 version: 1|0||5034e27a34a70f9f6800f343 based on: (empty) |
| m30999| Wed Aug 22 09:45:30 [conn1] loaded 1 chunks into new chunk manager for test.stuff_col_fam_upsert with version 1|0||5034e27a34a70f9f6800f343 |
| m30999| Wed Aug 22 09:45:30 [conn1] have to set shard version for conn: localhost:30000 ns:test.stuff_col_fam_upsert my last seq: 0 current: 5 version: 0|0||000000000000000000000000 manager: 0x100b08440 |
| m30999| Wed Aug 22 09:45:30 [conn1] resetting shard version of test.stuff_col_fam_upsert on localhost:30000, version is zero |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion shard0000 localhost:30000 test.stuff_col_fam_upsert { setShardVersion: "test.stuff_col_fam_upsert", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0000", shardHost: "localhost:30000" } 0x100b05ab0 |
| m30999| Wed Aug 22 09:45:30 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam_upsert my last seq: 0 current: 5 version: 1|0||5034e27a34a70f9f6800f343 manager: 0x100b08440 |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion failed! |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_fam_upsert { setShardVersion: "test.stuff_col_fam_upsert", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('5034e27a34a70f9f6800f343'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30999| Wed Aug 22 09:45:30 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam_upsert my last seq: 0 current: 5 version: 1|0||5034e27a34a70f9f6800f343 manager: 0x100b08440 |
| m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.stuff_col_fam_upsert", need_authoritative: true, errmsg: "first time for collection 'test.stuff_col_fam_upsert'", ok: 0.0 } |
| m30001| Wed Aug 22 09:45:30 [conn3] no current chunk manager found for this shard, will initialize |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_fam_upsert { setShardVersion: "test.stuff_col_fam_upsert", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('5034e27a34a70f9f6800f343'), serverID: ObjectId('5034e27434a70f9f6800f33e'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| ---------- Update via findAndModify... |
| m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 13698 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:30 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:30 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:30 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:30 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:30 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:30 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:30 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:30 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:30 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:30 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:30 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:30 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:30 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:30 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:30 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:30 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 189 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:30 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30001| Wed Aug 22 09:45:30 [conn4] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : MinKey } -->> { : MaxKey } |
| m30000| Wed Aug 22 09:45:30 [initandlisten] connection accepted from 127.0.0.1:64913 #9 (9 connections now open) |
| m30001| Wed Aug 22 09:45:30 [conn4] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 0.0 } ], shardId: "test.stuff_col_fam-_id_MinKey", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:30 [conn4] request split points lookup for chunk test.stuff_col_fam { : MinKey } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:30 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-osx-106-i386-1.local:30001:1345643130:1286748362 (sleeping for 30000ms) |
| m30001| Wed Aug 22 09:45:30 [conn4] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:30 [conn4] distributed lock 'test.stuff_col_fam/bs-osx-106-i386-1.local:30001:1345643130:1286748362' acquired, ts : 5034e27af1ab96b7480e02e7 |
| m30001| Wed Aug 22 09:45:30 [conn4] splitChunk accepted at version 1|0||5034e27a34a70f9f6800f342 |
| m30000| Wed Aug 22 09:45:30 [conn8] build index config.changelog { _id: 1 } |
| m30000| Wed Aug 22 09:45:30 [conn8] build index done. scanned 0 total records. 0 secs |
| m30001| Wed Aug 22 09:45:30 [conn4] about to log metadata event: { _id: "bs-osx-106-i386-1.local-2012-08-22T13:45:30-0", server: "bs-osx-106-i386-1.local", clientAddr: "127.0.0.1:64909", time: new Date(1345643130375), what: "split", ns: "test.stuff_col_fam", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f342') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f342') } } } |
| m30001| Wed Aug 22 09:45:30 [conn4] distributed lock 'test.stuff_col_fam/bs-osx-106-i386-1.local:30001:1345643130:1286748362' unlocked. |
| m30999| Wed Aug 22 09:45:30 [Balancer] creating new connection to:localhost:30000 |
| m30999| Wed Aug 22 09:45:30 [conn1] loaded 2 chunks into new chunk manager for test.stuff_col_fam with version 1|2||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:30 [conn1] ChunkManager: time to load chunks for test.stuff_col_fam: 0ms sequenceNumber: 6 version: 1|2||5034e27a34a70f9f6800f342 based on: 1|0||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:30 BackgroundJob starting: ConnectBG |
| m30999| Wed Aug 22 09:45:30 [Balancer] connected connection! |
| m30999| Wed Aug 22 09:45:30 [conn1] loading chunk manager for collection test.stuff_col_fam using old chunk manager w/ version 1|0||5034e27a34a70f9f6800f342 and 1 chunks |
| m30000| Wed Aug 22 09:45:30 [initandlisten] connection accepted from 127.0.0.1:64914 #10 (10 connections now open) |
| m30999| Wed Aug 22 09:45:30 [Balancer] creating new connection to:localhost:30000 |
| m30999| Wed Aug 22 09:45:30 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam my last seq: 4 current: 6 version: 1|2||5034e27a34a70f9f6800f342 manager: 0x100b09320 |
| m30999| Wed Aug 22 09:45:30 BackgroundJob starting: ConnectBG |
| m30999| Wed Aug 22 09:45:30 [conn1] autosplitted test.stuff_col_fam shard: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } on: { _id: 0.0 } (splitThreshold 921) |
| m30000| Wed Aug 22 09:45:30 [initandlisten] connection accepted from 127.0.0.1:64915 #11 (11 connections now open) |
| m30999| Wed Aug 22 09:45:30 [Balancer] connected connection! |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5034e27a34a70f9f6800f342'), ok: 1.0 } |
| m30999| Wed Aug 22 09:45:30 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_fam { setShardVersion: "test.stuff_col_fam", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('5034e27a34a70f9f6800f342'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30999| Wed Aug 22 09:45:30 [Balancer] Refreshing MaxChunkSize: 1 |
| m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 120910 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:30 [Balancer] about to acquire distributed lock 'balancer/bs-osx-106-i386-1.local:30999:1345643124:16807: |
| m30999| Wed Aug 22 09:45:30 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| "who" : "bs-osx-106-i386-1.local:30999:1345643124:16807:Balancer:282475249", |
| m30999| "process" : "bs-osx-106-i386-1.local:30999:1345643124:16807", |
| m30999| "when" : { "$date" : "Wed Aug 22 09:45:30 2012" }, |
| m30999| { "state" : 1, |
| m30999| "ts" : { "$oid" : "5034e27a34a70f9f6800f344" } } |
| m30999| "why" : "doing balance round", |
| m30999| "state" : 0, |
| m30999| "ts" : { "$oid" : "5034e27434a70f9f6800f33f" } } |
| m30999| Wed Aug 22 09:45:30 [Balancer] distributed lock 'balancer/bs-osx-106-i386-1.local:30999:1345643124:16807' acquired, ts : 5034e27a34a70f9f6800f344 |
| m30999| Wed Aug 22 09:45:30 [Balancer] *** start balancing round |
| m30000| Wed Aug 22 09:45:30 [conn10] build index config.tags { _id: 1 } |
| m30000| Wed Aug 22 09:45:30 [conn10] build index done. scanned 0 total records. 0.001 secs |
| m30999| { "_id" : "balancer", |
| m30000| Wed Aug 22 09:45:30 [conn10] build index config.tags { ns: 1, min: 1 } |
| m30000| Wed Aug 22 09:45:30 [conn10] build index done. scanned 0 total records. 0 secs |
| m30000| Wed Aug 22 09:45:30 [conn10] info: creating collection config.tags on add index |
| m30999| Wed Aug 22 09:45:30 [Balancer] collection : test.stuff_col_update |
| m30999| Wed Aug 22 09:45:30 [Balancer] donor : shard0001 chunks on 1 |
| m30999| Wed Aug 22 09:45:30 [Balancer] receiver : shard0000 chunks on 0 |
| m30999| Wed Aug 22 09:45:30 [Balancer] shard0001 has more chunks me:1 best: shard0000:0 |
| m30999| Wed Aug 22 09:45:30 [Balancer] threshold : 2 |
| m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 101343 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:30 [Balancer] shard0001 has more chunks me:1 best: shard0000:0 |
| m30999| Wed Aug 22 09:45:30 [Balancer] collection : test.stuff_col_update_upsert |
| m30999| Wed Aug 22 09:45:30 [Balancer] donor : shard0001 chunks on 1 |
| m30999| Wed Aug 22 09:45:30 [Balancer] receiver : shard0000 chunks on 0 |
| m30999| Wed Aug 22 09:45:30 [Balancer] threshold : 2 |
| m30999| Wed Aug 22 09:45:30 [Balancer] shard0001 has more chunks me:2 best: shard0000:0 |
| m30999| Wed Aug 22 09:45:30 [Balancer] collection : test.stuff_col_fam |
| m30999| Wed Aug 22 09:45:30 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:30 [Balancer] donor : shard0001 chunks on 2 |
| m30999| Wed Aug 22 09:45:30 [Balancer] receiver : shard0000 chunks on 0 |
2012-08-22 09:45:31 EDT | m30999| Wed Aug 22 09:45:30 [Balancer] threshold : 2 |
| m30999| Wed Aug 22 09:45:30 [Balancer] ns: test.stuff_col_fam going to move { _id: "test.stuff_col_fam-_id_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f342'), ns: "test.stuff_col_fam", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] |
| m30999| Wed Aug 22 09:45:30 [Balancer] collection : test.stuff_col_fam_upsert |
| m30999| Wed Aug 22 09:45:30 [Balancer] donor : shard0001 chunks on 1 |
| m30999| Wed Aug 22 09:45:30 [Balancer] receiver : shard0000 chunks on 0 |
| m30999| Wed Aug 22 09:45:30 [Balancer] shard0001 has more chunks me:1 best: shard0000:0 |
| m30999| Wed Aug 22 09:45:30 [Balancer] threshold : 2 |
| m30001| Wed Aug 22 09:45:30 [conn4] received moveChunk request: { moveChunk: "test.stuff_col_fam", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.stuff_col_fam-_id_MinKey", configdb: "localhost:30000", secondaryThrottle: false } |
| m30001| Wed Aug 22 09:45:30 [conn4] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30999| Wed Aug 22 09:45:30 [Balancer] moving chunk ns: test.stuff_col_fam moving ( ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|1||000000000000000000000000 min: { _id: MinKey } max: { _id: 0.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 |
| m30001| Wed Aug 22 09:45:30 [conn4] about to log metadata event: { _id: "bs-osx-106-i386-1.local-2012-08-22T13:45:30-1", server: "bs-osx-106-i386-1.local", clientAddr: "127.0.0.1:64909", time: new Date(1345643130390), what: "moveChunk.start", ns: "test.stuff_col_fam", details: { min: { _id: MinKey }, max: { _id: 0.0 }, from: "shard0001", to: "shard0000" } } |
| m30001| Wed Aug 22 09:45:30 [conn4] distributed lock 'test.stuff_col_fam/bs-osx-106-i386-1.local:30001:1345643130:1286748362' acquired, ts : 5034e27af1ab96b7480e02e8 |
| m30001| Wed Aug 22 09:45:30 [conn4] moveChunk number of documents: 0 |
| m30001| Wed Aug 22 09:45:30 [conn4] moveChunk request accepted at version 1|2||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:30 BackgroundJob starting: ConnectBG |
| m30999| Wed Aug 22 09:45:30 [conn1] connected connection! |
| m30001| Wed Aug 22 09:45:30 [initandlisten] connection accepted from 127.0.0.1:64916 #5 (5 connections now open) |
| m30001| Wed Aug 22 09:45:30 [initandlisten] connection accepted from 127.0.0.1:64917 #6 (6 connections now open) |
| m30999| Wed Aug 22 09:45:30 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:30 [conn1] creating new connection to:localhost:30001 |
2012-08-22 09:45:37 EDT | m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:30 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:30 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:30 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30001| Wed Aug 22 09:45:30 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:30 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:30 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30999| Wed Aug 22 09:45:30 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30000| Wed Aug 22 09:45:30 [initandlisten] connection accepted from 127.0.0.1:64918 #12 (12 connections now open) |
| m30000| Wed Aug 22 09:45:31 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded_20/config.1, size: 128MB, took 6.996 secs |
| m30000| Wed Aug 22 09:45:31 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded_20/test.ns, filling with zeroes... |
| m30000| Wed Aug 22 09:45:31 [initandlisten] connection accepted from 127.0.0.1:64921 #13 (13 connections now open) |
| m30001| Wed Aug 22 09:45:30 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30000| Wed Aug 22 09:45:32 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded_20/test.ns, size: 16MB, took 0.847 secs |
| m30000| Wed Aug 22 09:45:32 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded_20/test.0, filling with zeroes... |
| m30001| Wed Aug 22 09:45:31 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff_col_fam", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 |
| m30001| Wed Aug 22 09:45:32 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff_col_fam", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 |
| m30001| Wed Aug 22 09:45:33 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff_col_fam", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 |
| m30001| Wed Aug 22 09:45:34 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff_col_fam", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 |
| m30001| Wed Aug 22 09:45:35 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff_col_fam", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 |
| m30000| Wed Aug 22 09:45:36 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded_20/test.0, size: 64MB, took 4.602 secs |
| m30001| Wed Aug 22 09:45:36 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff_col_fam", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 |
| m30000| Wed Aug 22 09:45:37 [FileAllocator] allocating new datafile /data/db/find_and_modify_sharded_20/test.1, filling with zeroes... |
| m30000| Wed Aug 22 09:45:37 [migrateThread] build index done. scanned 0 total records. 0 secs |
| m30000| Wed Aug 22 09:45:37 [migrateThread] build index test.stuff_col_fam { _id: 1 } |
| m30000| Wed Aug 22 09:45:37 [migrateThread] info: creating collection test.stuff_col_fam on add index |
| m30001| Wed Aug 22 09:45:37 [conn5] command admin.$cmd command: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 reslen:344 6721ms |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30000| Wed Aug 22 09:45:37 [conn12] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:33 reslen:2370 6719ms |
| m30000| Wed Aug 22 09:45:37 [conn12] serverStatus was very slow: { after basic: 0, middle of mem: 0, after mem: 0, after connections: 0, after extra info: 0, after counters: 0, after repl: 0, after asserts: 0, after dur: 6641, at end: 6641 } |
| m30000| Wed Aug 22 09:45:37 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.stuff_col_fam' { _id: MinKey } -> { _id: 0.0 } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30999| Wed Aug 22 09:45:37 [conn1] loading chunk manager for collection test.stuff_col_fam using old chunk manager w/ version 1|2||5034e27a34a70f9f6800f342 and 2 chunks |
| m30999| Wed Aug 22 09:45:37 [conn1] loaded 1 chunks into new chunk manager for test.stuff_col_fam with version 1|2||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:37 [conn1] ChunkManager: time to load chunks for test.stuff_col_fam: 1ms sequenceNumber: 7 version: 1|2||5034e27a34a70f9f6800f342 based on: 1|2||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam my last seq: 6 current: 7 version: 1|2||5034e27a34a70f9f6800f342 manager: 0x100912180 |
| m30999| Wed Aug 22 09:45:37 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_fam { setShardVersion: "test.stuff_col_fam", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('5034e27a34a70f9f6800f342'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: chunk manager reload forced for collection 'test.stuff_col_fam', config version is 1|2||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:37 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5034e27a34a70f9f6800f342'), ok: 1.0 } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 130346 splitThreshold: 471859 |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30999| Wed Aug 22 09:45:37 [conn1] loading chunk manager for collection test.stuff_col_fam using old chunk manager w/ version 1|2||5034e27a34a70f9f6800f342 and 2 chunks |
| m30999| Wed Aug 22 09:45:37 [conn1] loaded 1 chunks into new chunk manager for test.stuff_col_fam with version 1|2||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:37 [conn1] ChunkManager: time to load chunks for test.stuff_col_fam: 0ms sequenceNumber: 8 version: 1|2||5034e27a34a70f9f6800f342 based on: 1|2||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam my last seq: 7 current: 8 version: 1|2||5034e27a34a70f9f6800f342 manager: 0x100b081d0 |
| m30999| Wed Aug 22 09:45:37 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_fam { setShardVersion: "test.stuff_col_fam", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('5034e27a34a70f9f6800f342'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: chunk manager reload forced for collection 'test.stuff_col_fam', config version is 1|2||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:37 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5034e27a34a70f9f6800f342'), ok: 1.0 } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 124926 splitThreshold: 471859 |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30000| Wed Aug 22 09:45:37 [migrateThread] migrate commit flushed to journal for 'test.stuff_col_fam' { _id: MinKey } -> { _id: 0.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30999| Wed Aug 22 09:45:37 [conn1] loading chunk manager for collection test.stuff_col_fam using old chunk manager w/ version 1|2||5034e27a34a70f9f6800f342 and 2 chunks |
| m30999| Wed Aug 22 09:45:37 [conn1] loaded 1 chunks into new chunk manager for test.stuff_col_fam with version 1|2||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:37 [conn1] ChunkManager: time to load chunks for test.stuff_col_fam: 0ms sequenceNumber: 9 version: 1|2||5034e27a34a70f9f6800f342 based on: 1|2||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam my last seq: 8 current: 9 version: 1|2||5034e27a34a70f9f6800f342 manager: 0x100b0a1a0 |
| m30999| Wed Aug 22 09:45:37 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_fam { setShardVersion: "test.stuff_col_fam", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('5034e27a34a70f9f6800f342'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: chunk manager reload forced for collection 'test.stuff_col_fam', config version is 1|2||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:37 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5034e27a34a70f9f6800f342'), ok: 1.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 191302 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] loaded 1 chunks into new chunk manager for test.stuff_col_fam with version 1|2||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:37 [conn1] ChunkManager: time to load chunks for test.stuff_col_fam: 0ms sequenceNumber: 10 version: 1|2||5034e27a34a70f9f6800f342 based on: 1|2||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:37 [conn1] loading chunk manager for collection test.stuff_col_fam using old chunk manager w/ version 1|2||5034e27a34a70f9f6800f342 and 2 chunks |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: chunk manager reload forced for collection 'test.stuff_col_fam', config version is 1|2||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:37 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_fam { setShardVersion: "test.stuff_col_fam", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('5034e27a34a70f9f6800f342'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30999| Wed Aug 22 09:45:37 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5034e27a34a70f9f6800f342'), ok: 1.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam my last seq: 9 current: 10 version: 1|2||5034e27a34a70f9f6800f342 manager: 0x100b081d0 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 208425 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] loading chunk manager for collection test.stuff_col_fam using old chunk manager w/ version 1|2||5034e27a34a70f9f6800f342 and 2 chunks |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] ChunkManager: time to load chunks for test.stuff_col_fam: 0ms sequenceNumber: 11 version: 1|2||5034e27a34a70f9f6800f342 based on: 1|2||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:37 [conn1] loaded 1 chunks into new chunk manager for test.stuff_col_fam with version 1|2||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:37 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam my last seq: 10 current: 11 version: 1|2||5034e27a34a70f9f6800f342 manager: 0x100b0a570 |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: chunk manager reload forced for collection 'test.stuff_col_fam', config version is 1|2||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:37 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5034e27a34a70f9f6800f342'), ok: 1.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_fam { setShardVersion: "test.stuff_col_fam", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('5034e27a34a70f9f6800f342'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 224760 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
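The mongos lines above repeat a simple pattern: each write adds to a per-chunk `dataWritten` counter, and once it crosses `splitThreshold` mongos initiates an autosplit attempt (which here keeps failing, see below). A minimal sketch of that bookkeeping, using the byte counts from the log, with hypothetical names and logic much simpler than MongoDB's actual implementation:

```python
# Hypothetical sketch of mongos per-chunk autosplit bookkeeping.
# Numbers mirror the log: splitThreshold 471859 bytes, writes of ~98 KB.

class ChunkWriteTracker:
    def __init__(self, split_threshold):
        self.split_threshold = split_threshold
        self.data_written = 0

    def record_write(self, nbytes):
        """Accumulate bytes; return True when an autosplit should be tried."""
        self.data_written += nbytes
        if self.data_written >= self.split_threshold:
            self.data_written = 0  # reset whether or not the split succeeds
            return True
        return False

tracker = ChunkWriteTracker(split_threshold=471859)
attempts = sum(tracker.record_write(98382) for _ in range(20))
print(attempts)  # 20 writes of 98382 bytes -> 4 autosplit attempts
```

Note that the counter resets on every attempt, which is why the failed splits above are retried again and again as more data arrives rather than once.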
| m30999| Wed Aug 22 09:45:37 [conn1] loading chunk manager for collection test.stuff_col_fam using old chunk manager w/ version 1|2||5034e27a34a70f9f6800f342 and 2 chunks |
| m30999| Wed Aug 22 09:45:37 [conn1] loaded 1 chunks into new chunk manager for test.stuff_col_fam with version 1|2||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:37 [conn1] ChunkManager: time to load chunks for test.stuff_col_fam: 0ms sequenceNumber: 12 version: 1|2||5034e27a34a70f9f6800f342 based on: 1|2||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: chunk manager reload forced for collection 'test.stuff_col_fam', config version is 1|2||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_fam { setShardVersion: "test.stuff_col_fam", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('5034e27a34a70f9f6800f342'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30999| Wed Aug 22 09:45:37 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5034e27a34a70f9f6800f342'), ok: 1.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam my last seq: 11 current: 12 version: 1|2||5034e27a34a70f9f6800f342 manager: 0x100b09ec0 |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam { : 0.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 134649 splitThreshold: 471859 |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:37 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff_col_fam", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 |
| m30001| Wed Aug 22 09:45:37 [conn4] moveChunk setting version to: 2|0||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:37 [conn1] warning: splitChunk failed - cmd: { splitChunk: "test.stuff_col_fam", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_fam-_id_0.0", configdb: "localhost:30000" } result: { who: { _id: "test.stuff_col_fam", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", state: 2, ts: ObjectId('5034e27af1ab96b7480e02e8'), when: new Date(1345643130389), who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn4:1219394844", why: "migrate-{ _id: MinKey }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
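Every `splitChunk` attempt above fails with "the collection's metadata lock is taken" because the in-flight `moveChunk` migration of `{ _id: MinKey } -> { _id: 0.0 }` holds the collection's distributed lock (the `who` document names `migrate-{ _id: MinKey }` as the holder) until the migration commits. A toy model of that contention, far simpler than MongoDB's config-server-backed distributed lock:

```python
# Toy model: one lock per collection; splitChunk is rejected while a
# migration holds it, mirroring the "lock is taken" warnings above.

class CollectionLock:
    def __init__(self):
        self.holder = None  # e.g. "migrate-{ _id: MinKey }" during a move

    def try_acquire(self, who):
        if self.holder is None:
            self.holder = who
            return True
        return False

    def release(self, who):
        if self.holder == who:
            self.holder = None

def split_chunk(lock):
    if not lock.try_acquire("splitChunk"):
        return {"ok": 0.0, "errmsg": "the collection's metadata lock is taken"}
    lock.release("splitChunk")
    return {"ok": 1.0}

lock = CollectionLock()
lock.try_acquire("migrate-{ _id: MinKey }")   # migration starts first
print(split_chunk(lock)["errmsg"])            # split fails while migrating
lock.release("migrate-{ _id: MinKey }")       # migration commits, unlocks
print(split_chunk(lock)["ok"])                # now a split can proceed
```

This is why the retry storm ends right after the `distributed lock ... unlocked.` line below: once the migration releases the lock, splits on the collection stop being rejected.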
| m30000| Wed Aug 22 09:45:37 [migrateThread] migrate commit flushed to journal for 'test.stuff_col_fam' { _id: MinKey } -> { _id: 0.0 } |
| m30000| Wed Aug 22 09:45:37 [migrateThread] about to log metadata event: { _id: "bs-osx-106-i386-1.local-2012-08-22T13:45:37-0", server: "bs-osx-106-i386-1.local", clientAddr: ":27017", time: new Date(1345643137396), what: "moveChunk.to", ns: "test.stuff_col_fam", details: { min: { _id: MinKey }, max: { _id: 0.0 }, step1 of 5: 6734, step2 of 5: 0, step3 of 5: 0, step4 of 5: 2, step5 of 5: 266 } } |
| m30000| Wed Aug 22 09:45:37 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.stuff_col_fam' { _id: MinKey } -> { _id: 0.0 } |
| m30000| Wed Aug 22 09:45:37 [initandlisten] connection accepted from 127.0.0.1:64925 #14 (14 connections now open) |
| ---------- Done. |
| ---------- Upsert via findAndModify... |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 36726 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 32794 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split { _id: 1.0 } |
| m30001| Wed Aug 22 09:45:37 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.stuff_col_fam", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } |
| m30001| Wed Aug 22 09:45:37 [conn4] moveChunk updating self version to: 2|1||5034e27a34a70f9f6800f342 through { _id: 0.0 } -> { _id: MaxKey } for collection 'test.stuff_col_fam' |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 32794 splitThreshold: 921 |
| m30001| Wed Aug 22 09:45:37 [conn4] about to log metadata event: { _id: "bs-osx-106-i386-1.local-2012-08-22T13:45:37-2", server: "bs-osx-106-i386-1.local", clientAddr: "127.0.0.1:64909", time: new Date(1345643137397), what: "moveChunk.commit", ns: "test.stuff_col_fam", details: { min: { _id: MinKey }, max: { _id: 0.0 }, from: "shard0001", to: "shard0000" } } |
| m30001| Wed Aug 22 09:45:37 [conn4] doing delete inline |
| m30001| Wed Aug 22 09:45:37 [conn4] distributed lock 'test.stuff_col_fam/bs-osx-106-i386-1.local:30001:1345643130:1286748362' unlocked. |
| m30001| Wed Aug 22 09:45:37 [conn4] moveChunk deleted: 0 |
| m30001| Wed Aug 22 09:45:37 [conn4] about to log metadata event: { _id: "bs-osx-106-i386-1.local-2012-08-22T13:45:37-3", server: "bs-osx-106-i386-1.local", clientAddr: "127.0.0.1:64909", time: new Date(1345643137398), what: "moveChunk.from", ns: "test.stuff_col_fam", details: { min: { _id: MinKey }, max: { _id: 0.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 7003, step5 of 6: 2, step6 of 6: 0 } } |
| m30001| Wed Aug 22 09:45:37 [conn4] command admin.$cmd command: { moveChunk: "test.stuff_col_fam", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.stuff_col_fam-_id_MinKey", configdb: "localhost:30000", secondaryThrottle: false } ntoreturn:1 keyUpdates:0 locks(micros) r:54 w:32 reslen:37 7009ms |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam_upsert { : MinKey } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam_upsert { : MinKey } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] warning: chunk is larger than 1024 bytes because of key { _id: 0.0 } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam_upsert { : MinKey } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam_upsert { : MinKey } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] warning: chunk is larger than 1024 bytes because of key { _id: 0.0 } |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam_upsert", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 0.0 } ], shardId: "test.stuff_col_fam_upsert-_id_MinKey", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam_upsert on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] distributed lock 'test.stuff_col_fam_upsert/bs-osx-106-i386-1.local:30001:1345643130:1286748362' acquired, ts : 5034e281f1ab96b7480e02e9 |
| m30001| Wed Aug 22 09:45:37 [conn5] warning: chunk is larger than 1024 bytes because of key { _id: 0.0 } |
| m30001| Wed Aug 22 09:45:37 [conn5] about to log metadata event: { _id: "bs-osx-106-i386-1.local-2012-08-22T13:45:37-4", server: "bs-osx-106-i386-1.local", clientAddr: "127.0.0.1:64916", time: new Date(1345643137406), what: "split", ns: "test.stuff_col_fam_upsert", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f343') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f343') } } } |
| m30001| Wed Aug 22 09:45:37 [conn5] splitChunk accepted at version 1|0||5034e27a34a70f9f6800f343 |
| m30999| Wed Aug 22 09:45:37 [Balancer] loading chunk manager for collection test.stuff_col_fam using old chunk manager w/ version 1|2||5034e27a34a70f9f6800f342 and 2 chunks |
| m30001| Wed Aug 22 09:45:37 [conn5] distributed lock 'test.stuff_col_fam_upsert/bs-osx-106-i386-1.local:30001:1345643130:1286748362' unlocked. |
| m30999| Wed Aug 22 09:45:37 [Balancer] loaded 2 chunks into new chunk manager for test.stuff_col_fam with version 2|1||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:37 [Balancer] moveChunk result: { ok: 1.0 } |
| m30999| Wed Aug 22 09:45:37 [Balancer] *** end of balancing round |
| m30999| Wed Aug 22 09:45:37 [Balancer] ChunkManager: time to load chunks for test.stuff_col_fam: 1ms sequenceNumber: 13 version: 2|1||5034e27a34a70f9f6800f342 based on: 1|2||5034e27a34a70f9f6800f342 |
| m30999| Wed Aug 22 09:45:37 [Balancer] distributed lock 'balancer/bs-osx-106-i386-1.local:30999:1345643124:16807' unlocked. |
| m30999| Wed Aug 22 09:45:37 [conn1] loaded 2 chunks into new chunk manager for test.stuff_col_fam_upsert with version 1|2||5034e27a34a70f9f6800f343 |
| m30999| Wed Aug 22 09:45:37 [conn1] loading chunk manager for collection test.stuff_col_fam_upsert using old chunk manager w/ version 1|0||5034e27a34a70f9f6800f343 and 1 chunks |
| m30999| Wed Aug 22 09:45:37 [conn1] ChunkManager: time to load chunks for test.stuff_col_fam_upsert: 0ms sequenceNumber: 14 version: 1|2||5034e27a34a70f9f6800f343 based on: 1|0||5034e27a34a70f9f6800f343 |
| m30999| Wed Aug 22 09:45:37 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam_upsert my last seq: 5 current: 14 version: 1|2||5034e27a34a70f9f6800f343 manager: 0x100b09cc0 |
| m30999| Wed Aug 22 09:45:37 [conn1] autosplitted test.stuff_col_fam_upsert shard: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } on: { _id: 0.0 } (splitThreshold 921) |
| m30999| Wed Aug 22 09:45:37 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5034e27a34a70f9f6800f343'), ok: 1.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_fam_upsert { setShardVersion: "test.stuff_col_fam_upsert", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('5034e27a34a70f9f6800f343'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 100851 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98382 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
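On the shard side, each "request split points lookup" walks the chunk's documents in key order, emits a candidate split key roughly every half-chunk of accumulated data, and stops early once the requested maximum number of points is found, which is what the repeated "max number of requested split points reached (2)" messages record. A rough sketch of that selection, with hypothetical names, not the server's actual code:

```python
# Rough sketch of a split-points lookup: walk documents in key order,
# emit a split key each time ~half a chunk of data accumulates, and
# stop once max_points keys are found ("max number of requested split
# points reached" in the log).

def find_split_points(docs_with_sizes, half_chunk_bytes, max_points):
    points, acc = [], 0
    for key, size in docs_with_sizes:
        acc += size
        if acc >= half_chunk_bytes:
            points.append(key)
            acc = 0
            if len(points) >= max_points:
                break  # reached the cap before the end of the chunk
    return points

# 200 docs of ~5 KB with _id 0..199, half-chunk of 256 KB:
docs = [(float(i), 5120) for i in range(200)]
print(find_split_points(docs, half_chunk_bytes=256 * 1024, max_points=2))
# -> [51.0, 103.0]
```

When no candidate accumulates enough data, the lookup returns nothing and mongos logs the "chunk not full enough to trigger auto-split" lines seen above.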
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam_upsert { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam_upsert", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 15.0 } ], shardId: "test.stuff_col_fam_upsert-_id_0.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam_upsert { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam_upsert on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] distributed lock 'test.stuff_col_fam_upsert/bs-osx-106-i386-1.local:30001:1345643130:1286748362' acquired, ts : 5034e281f1ab96b7480e02ea |
| m30001| Wed Aug 22 09:45:37 [conn5] splitChunk accepted at version 1|2||5034e27a34a70f9f6800f343 |
| m30001| Wed Aug 22 09:45:37 [conn5] about to log metadata event: { _id: "bs-osx-106-i386-1.local-2012-08-22T13:45:37-5", server: "bs-osx-106-i386-1.local", clientAddr: "127.0.0.1:64916", time: new Date(1345643137428), what: "split", ns: "test.stuff_col_fam_upsert", details: { before: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 15.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f343') }, right: { min: { _id: 15.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f343') } } } |
| m30001| Wed Aug 22 09:45:37 [conn5] distributed lock 'test.stuff_col_fam_upsert/bs-osx-106-i386-1.local:30001:1345643130:1286748362' unlocked. |
| m30999| Wed Aug 22 09:45:37 [conn1] loaded 2 chunks into new chunk manager for test.stuff_col_fam_upsert with version 1|4||5034e27a34a70f9f6800f343 |
| m30999| Wed Aug 22 09:45:37 [conn1] loading chunk manager for collection test.stuff_col_fam_upsert using old chunk manager w/ version 1|2||5034e27a34a70f9f6800f343 and 2 chunks |
| m30999| Wed Aug 22 09:45:37 [conn1] ChunkManager: time to load chunks for test.stuff_col_fam_upsert: 0ms sequenceNumber: 15 version: 1|4||5034e27a34a70f9f6800f343 based on: 1|2||5034e27a34a70f9f6800f343 |
| m30999| Wed Aug 22 09:45:37 [conn1] autosplitted test.stuff_col_fam_upsert shard: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } on: { _id: 15.0 } (splitThreshold 471859) (migrate suggested) |
| m30999| Wed Aug 22 09:45:37 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 80 writeLock: 0 |
| m30999| Wed Aug 22 09:45:37 [conn1] recently split chunk: { min: { _id: 15.0 }, max: { _id: MaxKey } } already in the best shard: shard0001:localhost:30001 |
| m30999| Wed Aug 22 09:45:37 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_fam_upsert { setShardVersion: "test.stuff_col_fam_upsert", configdb: "localhost:30000", version: Timestamp 1000|4, versionEpoch: ObjectId('5034e27a34a70f9f6800f343'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30999| Wed Aug 22 09:45:37 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5034e27a34a70f9f6800f343'), ok: 1.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam_upsert my last seq: 14 current: 15 version: 1|4||5034e27a34a70f9f6800f343 manager: 0x100911d20 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } dataWritten: 194961 splitThreshold: 943718 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } dataWritten: 196764 splitThreshold: 943718 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam_upsert { : 15.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } dataWritten: 196764 splitThreshold: 943718 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } dataWritten: 196764 splitThreshold: 943718 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split { _id: 30.0 } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam_upsert { : 15.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } dataWritten: 196764 splitThreshold: 943718 |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam_upsert { : 15.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split { _id: 30.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split { _id: 30.0 } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam_upsert { : 15.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam_upsert { : 15.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } dataWritten: 196764 splitThreshold: 943718 |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam_upsert on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam_upsert", keyPattern: { _id: 1.0 }, min: { _id: 15.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 49.0 } ], shardId: "test.stuff_col_fam_upsert-_id_15.0", configdb: "localhost:30000" } |
| m30000| Wed Aug 22 09:45:37 [conn9] update config.locks query: { _id: "test.stuff_col_fam_upsert", state: 0, ts: ObjectId('5034e281f1ab96b7480e02ea') } update: { $set: { state: 1, who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn5:1418874726", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", when: new Date(1345643137468), why: "split-{ _id: 15.0 }", ts: ObjectId('5034e281f1ab96b7480e02eb') } } nscanned:1 nupdated:1 keyUpdates:0 locks(micros) w:211 279ms |
| m30001| Wed Aug 22 09:45:37 [conn5] distributed lock 'test.stuff_col_fam_upsert/bs-osx-106-i386-1.local:30001:1345643130:1286748362' acquired, ts : 5034e281f1ab96b7480e02eb |
| m30001| Wed Aug 22 09:45:37 [conn5] splitChunk accepted at version 1|4||5034e27a34a70f9f6800f343 |
| m30001| Wed Aug 22 09:45:37 [conn5] about to log metadata event: { _id: "bs-osx-106-i386-1.local-2012-08-22T13:45:37-6", server: "bs-osx-106-i386-1.local", clientAddr: "127.0.0.1:64916", time: new Date(1345643137750), what: "split", ns: "test.stuff_col_fam_upsert", details: { before: { min: { _id: 15.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 15.0 }, max: { _id: 49.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f343') }, right: { min: { _id: 49.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f343') } } } |
| m30001| Wed Aug 22 09:45:37 [conn5] distributed lock 'test.stuff_col_fam_upsert/bs-osx-106-i386-1.local:30001:1345643130:1286748362' unlocked. |
| m30001| Wed Aug 22 09:45:37 [conn5] command admin.$cmd command: { splitChunk: "test.stuff_col_fam_upsert", keyPattern: { _id: 1.0 }, min: { _id: 15.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 49.0 } ], shardId: "test.stuff_col_fam_upsert-_id_15.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 reslen:95 282ms |
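The "split" metadata events above show how chunk versions advance: the parent chunk at Timestamp 1000|4 yields a left half at 1000|5 and a right half at 1000|6, keeping the same major version and epoch. A toy Python sketch of that numbering rule (illustrative only, not mongod's implementation):

```python
# Toy model of the chunk-version bump recorded in the split metadata
# events: a split keeps the major version (and epoch) and assigns the
# two halves the next two minor versions, e.g. 1|4 -> left 1|5, right 1|6.

def split_versions(major, minor):
    left = (major, minor + 1)
    right = (major, minor + 2)
    return left, right

left, right = split_versions(1, 4)
# matches the log: left lastmod 1000|5, right lastmod 1000|6
```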
| m30999| Wed Aug 22 09:45:37 [conn1] loaded 2 chunks into new chunk manager for test.stuff_col_fam_upsert with version 1|6||5034e27a34a70f9f6800f343 |
| m30999| Wed Aug 22 09:45:37 [conn1] loading chunk manager for collection test.stuff_col_fam_upsert using old chunk manager w/ version 1|4||5034e27a34a70f9f6800f343 and 3 chunks |
| m30999| Wed Aug 22 09:45:37 [conn1] ChunkManager: time to load chunks for test.stuff_col_fam_upsert: 0ms sequenceNumber: 16 version: 1|6||5034e27a34a70f9f6800f343 based on: 1|4||5034e27a34a70f9f6800f343 |
| m30999| Wed Aug 22 09:45:37 [conn1] autosplitted test.stuff_col_fam_upsert shard: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } on: { _id: 49.0 } (splitThreshold 943718) (migrate suggested) |
| m30999| Wed Aug 22 09:45:37 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 80 writeLock: 0 |
| m30999| Wed Aug 22 09:45:37 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam_upsert my last seq: 15 current: 16 version: 1|6||5034e27a34a70f9f6800f343 manager: 0x100912ba0 |
| m30999| Wed Aug 22 09:45:37 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_fam_upsert { setShardVersion: "test.stuff_col_fam_upsert", configdb: "localhost:30000", version: Timestamp 1000|6, versionEpoch: ObjectId('5034e27a34a70f9f6800f343'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30999| Wed Aug 22 09:45:37 [conn1] recently split chunk: { min: { _id: 49.0 }, max: { _id: MaxKey } } already in the best shard: shard0001:localhost:30001 |
| m30999| Wed Aug 22 09:45:37 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5034e27a34a70f9f6800f343'), ok: 1.0 } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam_upsert { : 49.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 49.0 } max: { _id: MaxKey } dataWritten: 224475 splitThreshold: 943718 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam_upsert { : 49.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 49.0 } max: { _id: MaxKey } dataWritten: 196764 splitThreshold: 943718 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 49.0 } max: { _id: MaxKey } dataWritten: 196764 splitThreshold: 943718 |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam_upsert { : 49.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 49.0 } max: { _id: MaxKey } dataWritten: 196764 splitThreshold: 943718 |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam_upsert { : 49.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split { _id: 64.0 } |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam_upsert { : 49.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 49.0 } max: { _id: MaxKey } dataWritten: 196764 splitThreshold: 943718 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 49.0 } max: { _id: MaxKey } dataWritten: 196764 splitThreshold: 943718 |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam_upsert { : 49.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split { _id: 64.0 } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_fam_upsert { : 49.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_fam_upsert", keyPattern: { _id: 1.0 }, min: { _id: 49.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 80.0 } ], shardId: "test.stuff_col_fam_upsert-_id_49.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_fam_upsert on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] distributed lock 'test.stuff_col_fam_upsert/bs-osx-106-i386-1.local:30001:1345643130:1286748362' acquired, ts : 5034e281f1ab96b7480e02ec |
| m30001| Wed Aug 22 09:45:37 [conn5] splitChunk accepted at version 1|6||5034e27a34a70f9f6800f343 |
| m30001| Wed Aug 22 09:45:37 [conn5] about to log metadata event: { _id: "bs-osx-106-i386-1.local-2012-08-22T13:45:37-7", server: "bs-osx-106-i386-1.local", clientAddr: "127.0.0.1:64916", time: new Date(1345643137795), what: "split", ns: "test.stuff_col_fam_upsert", details: { before: { min: { _id: 49.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 49.0 }, max: { _id: 80.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f343') }, right: { min: { _id: 80.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f343') } } } |
| m30001| Wed Aug 22 09:45:37 [conn5] distributed lock 'test.stuff_col_fam_upsert/bs-osx-106-i386-1.local:30001:1345643130:1286748362' unlocked. |
| m30999| Wed Aug 22 09:45:37 [conn1] loaded 2 chunks into new chunk manager for test.stuff_col_fam_upsert with version 1|8||5034e27a34a70f9f6800f343 |
| m30999| Wed Aug 22 09:45:37 [conn1] loading chunk manager for collection test.stuff_col_fam_upsert using old chunk manager w/ version 1|6||5034e27a34a70f9f6800f343 and 4 chunks |
| m30999| Wed Aug 22 09:45:37 [conn1] ChunkManager: time to load chunks for test.stuff_col_fam_upsert: 0ms sequenceNumber: 17 version: 1|8||5034e27a34a70f9f6800f343 based on: 1|6||5034e27a34a70f9f6800f343 |
| m30999| Wed Aug 22 09:45:37 [conn1] autosplitted test.stuff_col_fam_upsert shard: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 49.0 } max: { _id: MaxKey } on: { _id: 80.0 } (splitThreshold 943718) (migrate suggested) |
| m30999| Wed Aug 22 09:45:37 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 80 writeLock: 0 |
| m30999| Wed Aug 22 09:45:37 [conn1] recently split chunk: { min: { _id: 80.0 }, max: { _id: MaxKey } } already in the best shard: shard0001:localhost:30001 |
| m30999| Wed Aug 22 09:45:37 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_fam_upsert { setShardVersion: "test.stuff_col_fam_upsert", configdb: "localhost:30000", version: Timestamp 1000|8, versionEpoch: ObjectId('5034e27a34a70f9f6800f343'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30999| Wed Aug 22 09:45:37 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5034e27a34a70f9f6800f343'), ok: 1.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_fam_upsert my last seq: 16 current: 17 version: 1|8||5034e27a34a70f9f6800f343 manager: 0x100912800 |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam_upsert { : 80.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 80.0 } max: { _id: MaxKey } dataWritten: 230503 splitThreshold: 943718 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 80.0 } max: { _id: MaxKey } dataWritten: 196764 splitThreshold: 943718 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam_upsert { : 80.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam_upsert { : 80.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 80.0 } max: { _id: MaxKey } dataWritten: 196764 splitThreshold: 943718 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_fam_upsert { : 80.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_fam_upsert at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 80.0 } max: { _id: MaxKey } dataWritten: 196764 splitThreshold: 943718 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split { _id: 95.0 } |
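The alternating "about to initiate autosplit" / "chunk not full enough" lines trace mongos's per-chunk write tracking: an approximate dataWritten counter accumulates until it crosses a threshold, at which point the shard is asked for split points; if none are found, no split occurs. The following is a toy model of that bookkeeping only (the real mongos logic and thresholds differ; names here are illustrative):

```python
# Toy model of per-chunk autosplit bookkeeping: accumulate an
# approximate byte counter and signal a split-point lookup once the
# threshold is crossed, resetting the counter after each attempt.

class ChunkTracker:
    def __init__(self, split_threshold):
        self.split_threshold = split_threshold
        self.data_written = 0

    def record_write(self, nbytes):
        """Return True when a split-point lookup should be initiated."""
        self.data_written += nbytes
        if self.data_written >= self.split_threshold:
            self.data_written = 0  # reset after attempting a split
            return True
        return False

# With writes of ~98 KB against the 471859-byte threshold seen above,
# only every fifth write triggers a lookup.
tracker = ChunkTracker(split_threshold=471859)
attempts = sum(tracker.record_write(98382) for _ in range(10))
```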
| ---------- Basic update... |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 113671 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| ---------- Done. |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_update { : MinKey } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_update { : MinKey } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_update on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] distributed lock 'test.stuff_col_update/bs-osx-106-i386-1.local:30001:1345643130:1286748362' acquired, ts : 5034e281f1ab96b7480e02ed |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_update", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 0.0 } ], shardId: "test.stuff_col_update-_id_MinKey", configdb: "localhost:30000" } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 198 splitThreshold: 921 |
| m30001| Wed Aug 22 09:45:37 [conn5] splitChunk accepted at version 1|0||5034e27a34a70f9f6800f340 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30001| Wed Aug 22 09:45:37 [conn5] about to log metadata event: { _id: "bs-osx-106-i386-1.local-2012-08-22T13:45:37-8", server: "bs-osx-106-i386-1.local", clientAddr: "127.0.0.1:64916", time: new Date(1345643137841), what: "split", ns: "test.stuff_col_update", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f340') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f340') } } } |
| m30001| Wed Aug 22 09:45:37 [conn5] distributed lock 'test.stuff_col_update/bs-osx-106-i386-1.local:30001:1345643130:1286748362' unlocked. |
| m30999| Wed Aug 22 09:45:37 [conn1] loaded 2 chunks into new chunk manager for test.stuff_col_update with version 1|2||5034e27a34a70f9f6800f340 |
| m30999| Wed Aug 22 09:45:37 [conn1] ChunkManager: time to load chunks for test.stuff_col_update: 0ms sequenceNumber: 18 version: 1|2||5034e27a34a70f9f6800f340 based on: 1|0||5034e27a34a70f9f6800f340 |
| m30999| Wed Aug 22 09:45:37 [conn1] autosplitted test.stuff_col_update shard: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } on: { _id: 0.0 } (splitThreshold 921) |
| m30999| Wed Aug 22 09:45:37 [conn1] loading chunk manager for collection test.stuff_col_update using old chunk manager w/ version 1|0||5034e27a34a70f9f6800f340 and 1 chunks |
| m30999| Wed Aug 22 09:45:37 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_update { setShardVersion: "test.stuff_col_update", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('5034e27a34a70f9f6800f340'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30999| Wed Aug 22 09:45:37 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5034e27a34a70f9f6800f340'), ok: 1.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_update my last seq: 2 current: 18 version: 1|2||5034e27a34a70f9f6800f340 manager: 0x100b07f30 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 106455 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98526 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98526 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98526 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98526 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98526 splitThreshold: 471859 |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_update { : 0.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_update { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_update on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_update", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_update-_id_0.0", configdb: "localhost:30000" } |
| m30000| Wed Aug 22 09:45:37 [conn9] update config.locks query: { _id: "test.stuff_col_update", state: 0, ts: ObjectId('5034e281f1ab96b7480e02ed') } update: { $set: { state: 1, who: "bs-osx-106-i386-1.local:30001:1345643130:1286748362:conn5:1418874726", process: "bs-osx-106-i386-1.local:30001:1345643130:1286748362", when: new Date(1345643137854), why: "split-{ _id: 0.0 }", ts: ObjectId('5034e281f1ab96b7480e02ee') } } nscanned:1 nupdated:1 keyUpdates:0 locks(micros) w:200 126ms |
| m30001| Wed Aug 22 09:45:37 [conn5] distributed lock 'test.stuff_col_update/bs-osx-106-i386-1.local:30001:1345643130:1286748362' acquired, ts : 5034e281f1ab96b7480e02ee |
| m30001| Wed Aug 22 09:45:37 [conn5] splitChunk accepted at version 1|2||5034e27a34a70f9f6800f340 |
| m30001| Wed Aug 22 09:45:37 [conn5] about to log metadata event: { _id: "bs-osx-106-i386-1.local-2012-08-22T13:45:37-9", server: "bs-osx-106-i386-1.local", clientAddr: "127.0.0.1:64916", time: new Date(1345643137985), what: "split", ns: "test.stuff_col_update", details: { before: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 99.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f340') }, right: { min: { _id: 99.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f340') } } } |
| m30001| Wed Aug 22 09:45:37 [conn5] distributed lock 'test.stuff_col_update/bs-osx-106-i386-1.local:30001:1345643130:1286748362' unlocked. |
| m30001| Wed Aug 22 09:45:37 [conn5] command admin.$cmd command: { splitChunk: "test.stuff_col_update", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 99.0 } ], shardId: "test.stuff_col_update-_id_0.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 reslen:95 132ms |
| m30999| Wed Aug 22 09:45:37 [conn1] loaded 2 chunks into new chunk manager for test.stuff_col_update with version 1|4||5034e27a34a70f9f6800f340 |
| m30999| Wed Aug 22 09:45:37 [conn1] ChunkManager: time to load chunks for test.stuff_col_update: 0ms sequenceNumber: 19 version: 1|4||5034e27a34a70f9f6800f340 based on: 1|2||5034e27a34a70f9f6800f340 |
| m30999| Wed Aug 22 09:45:37 [conn1] loading chunk manager for collection test.stuff_col_update using old chunk manager w/ version 1|2||5034e27a34a70f9f6800f340 and 2 chunks |
| m30999| Wed Aug 22 09:45:37 [conn1] autosplitted test.stuff_col_update shard: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } on: { _id: 99.0 } (splitThreshold 471859) (migrate suggested) |
| m30999| Wed Aug 22 09:45:37 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 80 writeLock: 0 |
| m30999| Wed Aug 22 09:45:37 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_update my last seq: 18 current: 19 version: 1|4||5034e27a34a70f9f6800f340 manager: 0x100912b60 |
| m30999| Wed Aug 22 09:45:37 [conn1] recently split chunk: { min: { _id: 99.0 }, max: { _id: MaxKey } } already in the best shard: shard0001:localhost:30001 |
| m30999| Wed Aug 22 09:45:37 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5034e27a34a70f9f6800f340'), ok: 1.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_update { setShardVersion: "test.stuff_col_update", configdb: "localhost:30000", version: Timestamp 1000|4, versionEpoch: ObjectId('5034e27a34a70f9f6800f340'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { _id: 0.0 } max: { _id: 99.0 } dataWritten: 240743 splitThreshold: 1048576 |
| m30001| Wed Aug 22 09:45:37 [conn5] request split points lookup for chunk test.stuff_col_update { : 0.0 } -->> { : 99.0 } |
| m30001| Wed Aug 22 09:45:37 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_update { : 0.0 } -->> { : 99.0 } |
| m30001| Wed Aug 22 09:45:37 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_update", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: 99.0 }, from: "shard0001", splitKeys: [ { _id: 47.0 } ], shardId: "test.stuff_col_update-_id_0.0", configdb: "localhost:30000" } |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| ---------- Done. |
| m30001| Wed Aug 22 09:45:37 [conn5] created new distributed lock for test.stuff_col_update on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:37 [conn5] distributed lock 'test.stuff_col_update/bs-osx-106-i386-1.local:30001:1345643130:1286748362' acquired, ts : 5034e281f1ab96b7480e02ef |
| m30001| Wed Aug 22 09:45:37 [conn5] splitChunk accepted at version 1|4||5034e27a34a70f9f6800f340 |
| ---------- Basic update with upsert... |
| m30001| Wed Aug 22 09:45:37 [conn5] about to log metadata event: { _id: "bs-osx-106-i386-1.local-2012-08-22T13:45:37-10", server: "bs-osx-106-i386-1.local", clientAddr: "127.0.0.1:64916", time: new Date(1345643137998), what: "split", ns: "test.stuff_col_update", details: { before: { min: { _id: 0.0 }, max: { _id: 99.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 47.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f340') }, right: { min: { _id: 47.0 }, max: { _id: 99.0 }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f340') } } } |
| m30001| Wed Aug 22 09:45:37 [conn5] distributed lock 'test.stuff_col_update/bs-osx-106-i386-1.local:30001:1345643130:1286748362' unlocked. |
| m30001| Wed Aug 22 09:45:38 [conn5] request split points lookup for chunk test.stuff_col_update { : 0.0 } -->> { : 47.0 } |
| m30001| Wed Aug 22 09:45:38 [conn5] request split points lookup for chunk test.stuff_col_update { : 0.0 } -->> { : 47.0 } |
| m30001| Wed Aug 22 09:45:38 [conn5] request split points lookup for chunk test.stuff_col_update { : 47.0 } -->> { : 99.0 } |
| m30001| Wed Aug 22 09:45:38 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_update { : 47.0 } -->> { : 99.0 } |
| m30001| Wed Aug 22 09:45:38 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_update", keyPattern: { _id: 1.0 }, min: { _id: 47.0 }, max: { _id: 99.0 }, from: "shard0001", splitKeys: [ { _id: 72.0 } ], shardId: "test.stuff_col_update-_id_47.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:38 [conn5] request split points lookup for chunk test.stuff_col_update { : 47.0 } -->> { : 99.0 } |
| m30001| Wed Aug 22 09:45:38 [conn5] created new distributed lock for test.stuff_col_update on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:38 [conn5] distributed lock 'test.stuff_col_update/bs-osx-106-i386-1.local:30001:1345643130:1286748362' acquired, ts : 5034e282f1ab96b7480e02f0 |
| m30001| Wed Aug 22 09:45:38 [conn5] splitChunk accepted at version 1|6||5034e27a34a70f9f6800f340 |
| m30001| Wed Aug 22 09:45:38 [conn5] about to log metadata event: { _id: "bs-osx-106-i386-1.local-2012-08-22T13:45:38-11", server: "bs-osx-106-i386-1.local", clientAddr: "127.0.0.1:64916", time: new Date(1345643138012), what: "split", ns: "test.stuff_col_update", details: { before: { min: { _id: 47.0 }, max: { _id: 99.0 }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 47.0 }, max: { _id: 72.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f340') }, right: { min: { _id: 72.0 }, max: { _id: 99.0 }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f340') } } } |
| m30001| Wed Aug 22 09:45:38 [conn5] distributed lock 'test.stuff_col_update/bs-osx-106-i386-1.local:30001:1345643130:1286748362' unlocked. |
| m30001| Wed Aug 22 09:45:38 [conn5] request split points lookup for chunk test.stuff_col_update { : 47.0 } -->> { : 72.0 } |
| m30001| Wed Aug 22 09:45:38 [conn5] request split points lookup for chunk test.stuff_col_update { : 47.0 } -->> { : 72.0 } |
| m30001| Wed Aug 22 09:45:38 [conn5] request split points lookup for chunk test.stuff_col_update { : 72.0 } -->> { : 99.0 } |
| m30001| Wed Aug 22 09:45:38 [conn5] request split points lookup for chunk test.stuff_col_update { : 72.0 } -->> { : 99.0 } |
| m30001| Wed Aug 22 09:45:38 [conn5] request split points lookup for chunk test.stuff_col_update { : 72.0 } -->> { : 99.0 } |
| m30001| Wed Aug 22 09:45:38 [conn5] request split points lookup for chunk test.stuff_col_update { : 72.0 } -->> { : 99.0 } |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { _id: 0.0 } max: { _id: 99.0 } dataWritten: 229894 splitThreshold: 1048576 |
| m30999| Wed Aug 22 09:45:37 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:37 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { _id: 0.0 } max: { _id: 99.0 } dataWritten: 229894 splitThreshold: 1048576 |
| m30999| Wed Aug 22 09:45:37 [conn1] loaded 3 chunks into new chunk manager for test.stuff_col_update with version 1|6||5034e27a34a70f9f6800f340 |
| m30999| Wed Aug 22 09:45:37 [conn1] ChunkManager: time to load chunks for test.stuff_col_update: 0ms sequenceNumber: 20 version: 1|6||5034e27a34a70f9f6800f340 based on: 1|4||5034e27a34a70f9f6800f340 |
| m30999| Wed Aug 22 09:45:37 [conn1] loading chunk manager for collection test.stuff_col_update using old chunk manager w/ version 1|4||5034e27a34a70f9f6800f340 and 3 chunks |
| m30999| Wed Aug 22 09:45:38 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_update my last seq: 19 current: 20 version: 1|6||5034e27a34a70f9f6800f340 manager: 0x100b05510 |
| m30999| Wed Aug 22 09:45:38 [conn1] autosplitted test.stuff_col_update shard: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { _id: 0.0 } max: { _id: 99.0 } on: { _id: 47.0 } (splitThreshold 1048576) |
| m30999| Wed Aug 22 09:45:38 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5034e27a34a70f9f6800f340'), ok: 1.0 } |
| m30999| Wed Aug 22 09:45:38 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_update { setShardVersion: "test.stuff_col_update", configdb: "localhost:30000", version: Timestamp 1000|6, versionEpoch: ObjectId('5034e27a34a70f9f6800f340'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { _id: 0.0 } max: { _id: 47.0 } dataWritten: 221205 splitThreshold: 1048576 |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split { _id: 40.0 } |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { _id: 0.0 } max: { _id: 47.0 } dataWritten: 229894 splitThreshold: 1048576 |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 47.0 } max: { _id: 99.0 } dataWritten: 237160 splitThreshold: 1048576 |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split { _id: 33.0 } |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split { _id: 76.0 } |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 47.0 } max: { _id: 99.0 } dataWritten: 229894 splitThreshold: 1048576 |
| m30999| Wed Aug 22 09:45:38 [conn1] loaded 2 chunks into new chunk manager for test.stuff_col_update with version 1|8||5034e27a34a70f9f6800f340 |
| m30999| Wed Aug 22 09:45:38 [conn1] ChunkManager: time to load chunks for test.stuff_col_update: 0ms sequenceNumber: 21 version: 1|8||5034e27a34a70f9f6800f340 based on: 1|6||5034e27a34a70f9f6800f340 |
| m30999| Wed Aug 22 09:45:38 [conn1] loading chunk manager for collection test.stuff_col_update using old chunk manager w/ version 1|6||5034e27a34a70f9f6800f340 and 4 chunks |
| m30999| Wed Aug 22 09:45:38 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_update my last seq: 20 current: 21 version: 1|8||5034e27a34a70f9f6800f340 manager: 0x100b07f30 |
| m30999| Wed Aug 22 09:45:38 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_update { setShardVersion: "test.stuff_col_update", configdb: "localhost:30000", version: Timestamp 1000|8, versionEpoch: ObjectId('5034e27a34a70f9f6800f340'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30999| Wed Aug 22 09:45:38 [conn1] autosplitted test.stuff_col_update shard: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 47.0 } max: { _id: 99.0 } on: { _id: 72.0 } (splitThreshold 1048576) |
| m30999| Wed Aug 22 09:45:38 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5034e27a34a70f9f6800f340'), ok: 1.0 } |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|7||000000000000000000000000 min: { _id: 47.0 } max: { _id: 72.0 } dataWritten: 228231 splitThreshold: 1048576 |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|7||000000000000000000000000 min: { _id: 47.0 } max: { _id: 72.0 } dataWritten: 229894 splitThreshold: 1048576 |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split { _id: 69.0 } |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 72.0 } max: { _id: 99.0 } dataWritten: 227831 splitThreshold: 1048576 |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split { _id: 71.0 } |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 72.0 } max: { _id: 99.0 } dataWritten: 229894 splitThreshold: 1048576 |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split { _id: 91.0 } |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split { _id: 93.0 } |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 72.0 } max: { _id: 99.0 } dataWritten: 229894 splitThreshold: 1048576 |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split { _id: 89.0 } |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 72.0 } max: { _id: 99.0 } dataWritten: 229894 splitThreshold: 1048576 |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split { _id: 88.0 } |
| m30001| Wed Aug 22 09:45:38 [FileAllocator] done allocating datafile /data/db/find_and_modify_sharded_21/test.1, size: 128MB, took 7.948 secs |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 154762 splitThreshold: 921 |
| m30001| Wed Aug 22 09:45:38 [conn5] request split points lookup for chunk test.stuff_col_update_upsert { : MinKey } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:38 [conn5] warning: chunk is larger than 1024 bytes because of key { _id: 0.0 } |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 32849 splitThreshold: 921 |
| m30001| Wed Aug 22 09:45:38 [conn5] request split points lookup for chunk test.stuff_col_update_upsert { : MinKey } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:38 [conn5] warning: chunk is larger than 1024 bytes because of key { _id: 0.0 } |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split { _id: 1.0 } |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 32849 splitThreshold: 921 |
| m30001| Wed Aug 22 09:45:38 [conn5] request split points lookup for chunk test.stuff_col_update_upsert { : MinKey } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:38 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_update_upsert { : MinKey } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:38 [conn5] warning: chunk is larger than 1024 bytes because of key { _id: 0.0 } |
| m30001| Wed Aug 22 09:45:38 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_update_upsert", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 0.0 } ], shardId: "test.stuff_col_update_upsert-_id_MinKey", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:38 [conn5] created new distributed lock for test.stuff_col_update_upsert on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:38 [conn5] distributed lock 'test.stuff_col_update_upsert/bs-osx-106-i386-1.local:30001:1345643130:1286748362' acquired, ts : 5034e282f1ab96b7480e02f1 |
| m30001| Wed Aug 22 09:45:38 [conn5] splitChunk accepted at version 1|0||5034e27a34a70f9f6800f341 |
| m30001| Wed Aug 22 09:45:38 [conn5] about to log metadata event: { _id: "bs-osx-106-i386-1.local-2012-08-22T13:45:38-12", server: "bs-osx-106-i386-1.local", clientAddr: "127.0.0.1:64916", time: new Date(1345643138269), what: "split", ns: "test.stuff_col_update_upsert", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f341') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f341') } } } |
| m30001| Wed Aug 22 09:45:38 [conn5] distributed lock 'test.stuff_col_update_upsert/bs-osx-106-i386-1.local:30001:1345643130:1286748362' unlocked. |
| m30999| Wed Aug 22 09:45:38 [conn1] loading chunk manager for collection test.stuff_col_update_upsert using old chunk manager w/ version 1|0||5034e27a34a70f9f6800f341 and 1 chunks |
| m30999| Wed Aug 22 09:45:38 [conn1] ChunkManager: time to load chunks for test.stuff_col_update_upsert: 0ms sequenceNumber: 22 version: 1|2||5034e27a34a70f9f6800f341 based on: 1|0||5034e27a34a70f9f6800f341 |
| m30999| Wed Aug 22 09:45:38 [conn1] autosplitted test.stuff_col_update_upsert shard: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } on: { _id: 0.0 } (splitThreshold 921) |
| m30999| Wed Aug 22 09:45:38 [conn1] loaded 2 chunks into new chunk manager for test.stuff_col_update_upsert with version 1|2||5034e27a34a70f9f6800f341 |
| m30999| Wed Aug 22 09:45:38 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_update_upsert { setShardVersion: "test.stuff_col_update_upsert", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('5034e27a34a70f9f6800f341'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30999| Wed Aug 22 09:45:38 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5034e27a34a70f9f6800f341'), ok: 1.0 } |
| m30999| Wed Aug 22 09:45:38 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_update_upsert my last seq: 3 current: 22 version: 1|2||5034e27a34a70f9f6800f341 manager: 0x100b05510 |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 224419 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98547 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98547 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98547 splitThreshold: 471859 |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 98547 splitThreshold: 471859 |
| m30001| Wed Aug 22 09:45:38 [conn5] request split points lookup for chunk test.stuff_col_update_upsert { : 0.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30001| Wed Aug 22 09:45:38 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_update_upsert { : 0.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:38 [conn5] created new distributed lock for test.stuff_col_update_upsert on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:38 [conn5] distributed lock 'test.stuff_col_update_upsert/bs-osx-106-i386-1.local:30001:1345643130:1286748362' acquired, ts : 5034e282f1ab96b7480e02f2 |
| m30001| Wed Aug 22 09:45:38 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_update_upsert", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 15.0 } ], shardId: "test.stuff_col_update_upsert-_id_0.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:38 [conn5] about to log metadata event: { _id: "bs-osx-106-i386-1.local-2012-08-22T13:45:38-13", server: "bs-osx-106-i386-1.local", clientAddr: "127.0.0.1:64916", time: new Date(1345643138283), what: "split", ns: "test.stuff_col_update_upsert", details: { before: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 15.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f341') }, right: { min: { _id: 15.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f341') } } } |
| m30001| Wed Aug 22 09:45:38 [conn5] splitChunk accepted at version 1|2||5034e27a34a70f9f6800f341 |
| m30001| Wed Aug 22 09:45:38 [conn5] distributed lock 'test.stuff_col_update_upsert/bs-osx-106-i386-1.local:30001:1345643130:1286748362' unlocked. |
| m30999| Wed Aug 22 09:45:38 [conn1] loaded 2 chunks into new chunk manager for test.stuff_col_update_upsert with version 1|4||5034e27a34a70f9f6800f341 |
| m30999| Wed Aug 22 09:45:38 [conn1] ChunkManager: time to load chunks for test.stuff_col_update_upsert: 0ms sequenceNumber: 23 version: 1|4||5034e27a34a70f9f6800f341 based on: 1|2||5034e27a34a70f9f6800f341 |
| m30999| Wed Aug 22 09:45:38 [conn1] loading chunk manager for collection test.stuff_col_update_upsert using old chunk manager w/ version 1|2||5034e27a34a70f9f6800f341 and 2 chunks |
| m30999| Wed Aug 22 09:45:38 [conn1] autosplitted test.stuff_col_update_upsert shard: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } on: { _id: 15.0 } (splitThreshold 471859) (migrate suggested) |
| m30999| Wed Aug 22 09:45:38 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 80 writeLock: 0 |
| m30999| Wed Aug 22 09:45:38 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_update_upsert my last seq: 22 current: 23 version: 1|4||5034e27a34a70f9f6800f341 manager: 0x1009109e0 |
| m30999| Wed Aug 22 09:45:38 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_update_upsert { setShardVersion: "test.stuff_col_update_upsert", configdb: "localhost:30000", version: Timestamp 1000|4, versionEpoch: ObjectId('5034e27a34a70f9f6800f341'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30999| Wed Aug 22 09:45:38 [conn1] recently split chunk: { min: { _id: 15.0 }, max: { _id: MaxKey } } already in the best shard: shard0001:localhost:30001 |
| m30999| Wed Aug 22 09:45:38 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5034e27a34a70f9f6800f341'), ok: 1.0 } |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } dataWritten: 200346 splitThreshold: 943718 |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } dataWritten: 197094 splitThreshold: 943718 |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } dataWritten: 197094 splitThreshold: 943718 |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } dataWritten: 197094 splitThreshold: 943718 |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30001| Wed Aug 22 09:45:38 [conn5] request split points lookup for chunk test.stuff_col_update_upsert { : 15.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } dataWritten: 197094 splitThreshold: 943718 |
| m30001| Wed Aug 22 09:45:38 [conn5] request split points lookup for chunk test.stuff_col_update_upsert { : 15.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split { _id: 30.0 } |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split { _id: 30.0 } |
| m30001| Wed Aug 22 09:45:38 [conn5] request split points lookup for chunk test.stuff_col_update_upsert { : 15.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:38 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_update_upsert { : 15.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } dataWritten: 197094 splitThreshold: 943718 |
| m30001| Wed Aug 22 09:45:38 [conn5] created new distributed lock for test.stuff_col_update_upsert on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:38 [conn5] distributed lock 'test.stuff_col_update_upsert/bs-osx-106-i386-1.local:30001:1345643130:1286748362' acquired, ts : 5034e282f1ab96b7480e02f3 |
| m30001| Wed Aug 22 09:45:38 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_update_upsert", keyPattern: { _id: 1.0 }, min: { _id: 15.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 48.0 } ], shardId: "test.stuff_col_update_upsert-_id_15.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:38 [conn5] splitChunk accepted at version 1|4||5034e27a34a70f9f6800f341 |
| m30001| Wed Aug 22 09:45:38 [conn5] about to log metadata event: { _id: "bs-osx-106-i386-1.local-2012-08-22T13:45:38-14", server: "bs-osx-106-i386-1.local", clientAddr: "127.0.0.1:64916", time: new Date(1345643138304), what: "split", ns: "test.stuff_col_update_upsert", details: { before: { min: { _id: 15.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 15.0 }, max: { _id: 48.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f341') }, right: { min: { _id: 48.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f341') } } } |
| m30001| Wed Aug 22 09:45:38 [conn5] distributed lock 'test.stuff_col_update_upsert/bs-osx-106-i386-1.local:30001:1345643130:1286748362' unlocked. |
| m30999| Wed Aug 22 09:45:38 [conn1] loading chunk manager for collection test.stuff_col_update_upsert using old chunk manager w/ version 1|4||5034e27a34a70f9f6800f341 and 3 chunks |
| m30999| Wed Aug 22 09:45:38 [conn1] ChunkManager: time to load chunks for test.stuff_col_update_upsert: 0ms sequenceNumber: 24 version: 1|6||5034e27a34a70f9f6800f341 based on: 1|4||5034e27a34a70f9f6800f341 |
| m30999| Wed Aug 22 09:45:38 [conn1] autosplitted test.stuff_col_update_upsert shard: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|4||000000000000000000000000 min: { _id: 15.0 } max: { _id: MaxKey } on: { _id: 48.0 } (splitThreshold 943718) (migrate suggested) |
| m30999| Wed Aug 22 09:45:38 [conn1] loaded 2 chunks into new chunk manager for test.stuff_col_update_upsert with version 1|6||5034e27a34a70f9f6800f341 |
| m30999| Wed Aug 22 09:45:38 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 80 writeLock: 0 |
| m30999| Wed Aug 22 09:45:38 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_update_upsert my last seq: 23 current: 24 version: 1|6||5034e27a34a70f9f6800f341 manager: 0x100b08440 |
| m30999| Wed Aug 22 09:45:38 [conn1] recently split chunk: { min: { _id: 48.0 }, max: { _id: MaxKey } } already in the best shard: shard0001:localhost:30001 |
| m30999| Wed Aug 22 09:45:38 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5034e27a34a70f9f6800f341'), ok: 1.0 } |
| m30999| Wed Aug 22 09:45:38 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_update_upsert { setShardVersion: "test.stuff_col_update_upsert", configdb: "localhost:30000", version: Timestamp 1000|6, versionEpoch: ObjectId('5034e27a34a70f9f6800f341'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30001| Wed Aug 22 09:45:38 [conn5] request split points lookup for chunk test.stuff_col_update_upsert { : 48.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 48.0 } max: { _id: MaxKey } dataWritten: 211629 splitThreshold: 943718 |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30001| Wed Aug 22 09:45:38 [conn5] request split points lookup for chunk test.stuff_col_update_upsert { : 48.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 48.0 } max: { _id: MaxKey } dataWritten: 197094 splitThreshold: 943718 |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 48.0 } max: { _id: MaxKey } dataWritten: 197094 splitThreshold: 943718 |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30001| Wed Aug 22 09:45:38 [conn5] request split points lookup for chunk test.stuff_col_update_upsert { : 48.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 48.0 } max: { _id: MaxKey } dataWritten: 197094 splitThreshold: 943718 |
| m30001| Wed Aug 22 09:45:38 [conn5] request split points lookup for chunk test.stuff_col_update_upsert { : 48.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split { _id: 63.0 } |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split { _id: 63.0 } |
| m30001| Wed Aug 22 09:45:38 [conn5] request split points lookup for chunk test.stuff_col_update_upsert { : 48.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 48.0 } max: { _id: MaxKey } dataWritten: 197094 splitThreshold: 943718 |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 48.0 } max: { _id: MaxKey } dataWritten: 197094 splitThreshold: 943718 |
| m30001| Wed Aug 22 09:45:38 [conn5] request split points lookup for chunk test.stuff_col_update_upsert { : 48.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split { _id: 63.0 } |
| m30001| Wed Aug 22 09:45:38 [conn5] max number of requested split points reached (2) before the end of chunk test.stuff_col_update_upsert { : 48.0 } -->> { : MaxKey } |
| m30001| Wed Aug 22 09:45:38 [conn5] created new distributed lock for test.stuff_col_update_upsert on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:38 [conn5] received splitChunk request: { splitChunk: "test.stuff_col_update_upsert", keyPattern: { _id: 1.0 }, min: { _id: 48.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 83.0 } ], shardId: "test.stuff_col_update_upsert-_id_48.0", configdb: "localhost:30000" } |
| m30001| Wed Aug 22 09:45:38 [conn5] distributed lock 'test.stuff_col_update_upsert/bs-osx-106-i386-1.local:30001:1345643130:1286748362' acquired, ts : 5034e282f1ab96b7480e02f4 |
| m30001| Wed Aug 22 09:45:38 [conn5] splitChunk accepted at version 1|6||5034e27a34a70f9f6800f341 |
| m30001| Wed Aug 22 09:45:38 [conn5] about to log metadata event: { _id: "bs-osx-106-i386-1.local-2012-08-22T13:45:38-15", server: "bs-osx-106-i386-1.local", clientAddr: "127.0.0.1:64916", time: new Date(1345643138411), what: "split", ns: "test.stuff_col_update_upsert", details: { before: { min: { _id: 48.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 48.0 }, max: { _id: 83.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f341') }, right: { min: { _id: 83.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f341') } } } |
| m30999| Wed Aug 22 09:45:38 [Balancer] Refreshing MaxChunkSize: 1 |
| m30999| Wed Aug 22 09:45:38 [Balancer] about to acquire distributed lock 'balancer/bs-osx-106-i386-1.local:30999:1345643124:16807: |
| m30999| { "state" : 1, |
| m30999| "who" : "bs-osx-106-i386-1.local:30999:1345643124:16807:Balancer:282475249", |
| m30999| "process" : "bs-osx-106-i386-1.local:30999:1345643124:16807", |
| m30999| "when" : { "$date" : "Wed Aug 22 09:45:38 2012" }, |
| m30999| "why" : "doing balance round", |
| m30999| "ts" : { "$oid" : "5034e28234a70f9f6800f345" } } |
| m30999| { "_id" : "balancer", |
| m30999| "state" : 0, |
| m30999| "ts" : { "$oid" : "5034e27a34a70f9f6800f344" } } |
| m30001| Wed Aug 22 09:45:38 [conn5] distributed lock 'test.stuff_col_update_upsert/bs-osx-106-i386-1.local:30001:1345643130:1286748362' unlocked. |
| m30001| Wed Aug 22 09:45:38 [conn5] command admin.$cmd command: { splitChunk: "test.stuff_col_update_upsert", keyPattern: { _id: 1.0 }, min: { _id: 48.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 83.0 } ], shardId: "test.stuff_col_update_upsert-_id_48.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 reslen:95 116ms |
| m30999| Wed Aug 22 09:45:38 [Balancer] distributed lock 'balancer/bs-osx-106-i386-1.local:30999:1345643124:16807' acquired, ts : 5034e28234a70f9f6800f345 |
| m30999| Wed Aug 22 09:45:38 [Balancer] *** start balancing round |
| m30999| Wed Aug 22 09:45:38 [conn1] loading chunk manager for collection test.stuff_col_update_upsert using old chunk manager w/ version 1|6||5034e27a34a70f9f6800f341 and 4 chunks |
| m30999| Wed Aug 22 09:45:38 [conn1] loaded 2 chunks into new chunk manager for test.stuff_col_update_upsert with version 1|8||5034e27a34a70f9f6800f341 |
| m30999| Wed Aug 22 09:45:38 [conn1] ChunkManager: time to load chunks for test.stuff_col_update_upsert: 0ms sequenceNumber: 25 version: 1|8||5034e27a34a70f9f6800f341 based on: 1|6||5034e27a34a70f9f6800f341 |
| m30999| Wed Aug 22 09:45:38 [Balancer] shard0001 has more chunks me:5 best: shard0000:0 |
| m30999| Wed Aug 22 09:45:38 [Balancer] collection : test.stuff_col_update |
| m30999| Wed Aug 22 09:45:38 [Balancer] donor : shard0001 chunks on 5 |
| m30999| Wed Aug 22 09:45:38 [Balancer] receiver : shard0000 chunks on 0 |
| m30999| Wed Aug 22 09:45:38 [Balancer] threshold : 2 |
| m30999| Wed Aug 22 09:45:38 [conn1] autosplitted test.stuff_col_update_upsert shard: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { _id: 48.0 } max: { _id: MaxKey } on: { _id: 83.0 } (splitThreshold 943718) (migrate suggested) |
| m30999| Wed Aug 22 09:45:38 [Balancer] ns: test.stuff_col_update going to move { _id: "test.stuff_col_update-_id_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f340'), ns: "test.stuff_col_update", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] |
| m30999| Wed Aug 22 09:45:38 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 80 writeLock: 0 |
| m30999| Wed Aug 22 09:45:38 [conn1] recently split chunk: { min: { _id: 83.0 }, max: { _id: MaxKey } } already in the best shard: shard0001:localhost:30001 |
| m30999| Wed Aug 22 09:45:38 [conn1] have to set shard version for conn: localhost:30001 ns:test.stuff_col_update_upsert my last seq: 24 current: 25 version: 1|8||5034e27a34a70f9f6800f341 manager: 0x100b0a010 |
| m30999| Wed Aug 22 09:45:38 [conn1] setShardVersion shard0001 localhost:30001 test.stuff_col_update_upsert { setShardVersion: "test.stuff_col_update_upsert", configdb: "localhost:30000", version: Timestamp 1000|8, versionEpoch: ObjectId('5034e27a34a70f9f6800f341'), serverID: ObjectId('5034e27434a70f9f6800f33e'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b066b0 |
| m30999| Wed Aug 22 09:45:38 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5034e27a34a70f9f6800f341'), ok: 1.0 } |
| m30999| Wed Aug 22 09:45:38 [Balancer] shard0001 has more chunks me:5 best: shard0000:0 |
| m30999| Wed Aug 22 09:45:38 [Balancer] collection : test.stuff_col_update_upsert |
| m30999| Wed Aug 22 09:45:38 [Balancer] donor : shard0001 chunks on 5 |
| m30999| Wed Aug 22 09:45:38 [Balancer] receiver : shard0000 chunks on 0 |
| m30999| Wed Aug 22 09:45:38 [Balancer] threshold : 2 |
| m30999| Wed Aug 22 09:45:38 [Balancer] ns: test.stuff_col_update_upsert going to move { _id: "test.stuff_col_update_upsert-_id_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f341'), ns: "test.stuff_col_update_upsert", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] |
| m30999| Wed Aug 22 09:45:38 [Balancer] shard0001 has more chunks me:1 best: shard0000:1 |
| m30999| Wed Aug 22 09:45:38 [Balancer] collection : test.stuff_col_fam |
| m30999| Wed Aug 22 09:45:38 [Balancer] donor : shard0000 chunks on 1 |
| m30999| Wed Aug 22 09:45:38 [Balancer] receiver : shard0000 chunks on 1 |
| m30999| Wed Aug 22 09:45:38 [Balancer] threshold : 2 |
| m30999| Wed Aug 22 09:45:38 [Balancer] shard0001 has more chunks me:5 best: shard0000:0 |
| m30999| Wed Aug 22 09:45:38 [Balancer] collection : test.stuff_col_fam_upsert |
| m30999| Wed Aug 22 09:45:38 [Balancer] donor : shard0001 chunks on 5 |
| m30999| Wed Aug 22 09:45:38 [Balancer] receiver : shard0000 chunks on 0 |
| m30999| Wed Aug 22 09:45:38 [Balancer] threshold : 2 |
| m30999| Wed Aug 22 09:45:38 [Balancer] ns: test.stuff_col_fam_upsert going to move { _id: "test.stuff_col_fam_upsert-_id_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f343'), ns: "test.stuff_col_fam_upsert", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] |
| m30999| Wed Aug 22 09:45:38 [Balancer] moving chunk ns: test.stuff_col_update moving ( ns:test.stuff_col_update at: shard0001:localhost:30001 lastmod: 1|1||000000000000000000000000 min: { _id: MinKey } max: { _id: 0.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 |
| m30001| Wed Aug 22 09:45:38 [conn5] created new distributed lock for test.stuff_col_update on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Wed Aug 22 09:45:38 [conn5] distributed lock 'test.stuff_col_update/bs-osx-106-i386-1.local:30001:1345643130:1286748362' acquired, ts : 5034e282f1ab96b7480e02f5 |
| m30001| Wed Aug 22 09:45:38 [conn5] received moveChunk request: { moveChunk: "test.stuff_col_update", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.stuff_col_update-_id_MinKey", configdb: "localhost:30000", secondaryThrottle: false } |
| m30001| Wed Aug 22 09:45:38 [conn5] about to log metadata event: { _id: "bs-osx-106-i386-1.local-2012-08-22T13:45:38-16", server: "bs-osx-106-i386-1.local", clientAddr: "127.0.0.1:64916", time: new Date(1345643138446), what: "moveChunk.start", ns: "test.stuff_col_update", details: { min: { _id: MinKey }, max: { _id: 0.0 }, from: "shard0001", to: "shard0000" } } |
| m30001| Wed Aug 22 09:45:38 [conn5] moveChunk number of documents: 0 |
| m30000| Wed Aug 22 09:45:38 [migrateThread] build index test.stuff_col_update { _id: 1 } |
| m30000| Wed Aug 22 09:45:38 [migrateThread] build index done. scanned 0 total records. 0 secs |
| m30000| Wed Aug 22 09:45:38 [migrateThread] info: creating collection test.stuff_col_update on add index |
| m30000| Wed Aug 22 09:45:38 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.stuff_col_update' { _id: MinKey } -> { _id: 0.0 } |
| m30001| Wed Aug 22 09:45:38 [conn5] moveChunk request accepted at version 1|8||5034e27a34a70f9f6800f340 |
| m30000| Wed Aug 22 09:45:38 [migrateThread] migrate commit flushed to journal for 'test.stuff_col_update' { _id: MinKey } -> { _id: 0.0 } |
| m30001| Wed Aug 22 09:45:38 [conn4] request split points lookup for chunk test.stuff_col_update_upsert { : 83.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 83.0 } max: { _id: MaxKey } dataWritten: 227406 splitThreshold: 943718 |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30001| Wed Aug 22 09:45:38 [conn4] request split points lookup for chunk test.stuff_col_update_upsert { : 83.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 83.0 } max: { _id: MaxKey } dataWritten: 197094 splitThreshold: 943718 |
| m30999| Wed Aug 22 09:45:38 [conn1] about to initiate autosplit: ns:test.stuff_col_update_upsert at: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 83.0 } max: { _id: MaxKey } dataWritten: 197094 splitThreshold: 943718 |
| m30001| Wed Aug 22 09:45:38 [conn4] request split points lookup for chunk test.stuff_col_update_upsert { : 83.0 } -->> { : MaxKey } |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Wed Aug 22 09:45:38 [conn1] chunk not full enough to trigger auto-split no split entry |
| ---------- Done. |
| ---------- Printing chunks: |
| m30999| Wed Aug 22 09:45:38 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { ns: 1.0, min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} } |
| m30999| Wed Aug 22 09:45:38 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] |
| m30999| Wed Aug 22 09:45:38 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } |
| m30999| Wed Aug 22 09:45:38 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } |
| m30999| Wed Aug 22 09:45:38 [conn1] [pcursor] finishing over 1 shards |
| m30999| Wed Aug 22 09:45:38 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } |
| m30999| Wed Aug 22 09:45:38 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.stuff_col_fam-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('5034e27a34a70f9f6800f342'), ns: "test.stuff_col_fam", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } |
| ShardingTest test.stuff_col_fam-_id_MinKey 2000|0 { "_id" : { $minKey : 1 } } -> { "_id" : 0 } shard0000 test.stuff_col_fam |
| test.stuff_col_fam-_id_0.0 2000|1 { "_id" : 0 } -> { "_id" : { $maxKey : 1 } } shard0001 test.stuff_col_fam |
| test.stuff_col_fam_upsert-_id_MinKey 1000|1 { "_id" : { $minKey : 1 } } -> { "_id" : 0 } shard0001 test.stuff_col_fam_upsert |
| test.stuff_col_fam_upsert-_id_0.0 1000|3 { "_id" : 0 } -> { "_id" : 15 } shard0001 test.stuff_col_fam_upsert |
| test.stuff_col_fam_upsert-_id_15.0 1000|5 { "_id" : 15 } -> { "_id" : 49 } shard0001 test.stuff_col_fam_upsert |
| test.stuff_col_fam_upsert-_id_49.0 1000|7 { "_id" : 49 } -> { "_id" : 80 } shard0001 test.stuff_col_fam_upsert |
| test.stuff_col_fam_upsert-_id_80.0 1000|8 { "_id" : 80 } -> { "_id" : { $maxKey : 1 } } shard0001 test.stuff_col_fam_upsert |
| test.stuff_col_update-_id_MinKey 1000|1 { "_id" : { $minKey : 1 } } -> { "_id" : 0 } shard0001 test.stuff_col_update |
| test.stuff_col_update-_id_0.0 1000|5 { "_id" : 0 } -> { "_id" : 47 } shard0001 test.stuff_col_update |
| test.stuff_col_update-_id_47.0 1000|7 { "_id" : 47 } -> { "_id" : 72 } shard0001 test.stuff_col_update |
| test.stuff_col_update-_id_72.0 1000|8 { "_id" : 72 } -> { "_id" : 99 } shard0001 test.stuff_col_update |
| test.stuff_col_update-_id_99.0 1000|4 { "_id" : 99 } -> { "_id" : { $maxKey : 1 } } shard0001 test.stuff_col_update |
| test.stuff_col_update_upsert-_id_MinKey 1000|1 { "_id" : { $minKey : 1 } } -> { "_id" : 0 } shard0001 test.stuff_col_update_upsert |
| test.stuff_col_update_upsert-_id_0.0 1000|3 { "_id" : 0 } -> { "_id" : 15 } shard0001 test.stuff_col_update_upsert |
| test.stuff_col_update_upsert-_id_15.0 1000|5 { "_id" : 15 } -> { "_id" : 48 } shard0001 test.stuff_col_update_upsert |
| test.stuff_col_update_upsert-_id_48.0 1000|7 { "_id" : 48 } -> { "_id" : 83 } shard0001 test.stuff_col_update_upsert |
| test.stuff_col_update_upsert-_id_83.0 1000|8 { "_id" : 83 } -> { "_id" : { $maxKey : 1 } } shard0001 test.stuff_col_update_upsert |
| |
| ---------- Verifying that both codepaths resulted in splits... |
| m30999| Wed Aug 22 09:45:38 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { ns: "test.stuff_col_fam" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { ns: "test.stuff_col_fam" } } |
| m30999| Wed Aug 22 09:45:38 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] |
| m30999| Wed Aug 22 09:45:38 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } |
| m30999| Wed Aug 22 09:45:38 [conn1] [pcursor] initialized command (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } |
| m30999| Wed Aug 22 09:45:38 [conn1] [pcursor] finishing over 1 shards |
| m30999| Wed Aug 22 09:45:38 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } |
| m30999| Wed Aug 22 09:45:38 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { n: 2.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } |
| assert: 2 is not greater than 2 : findAndModify update code path didn't result in splits |
| ()@src/mongo/shell/utils.js:37 |
| ("2 is not greater than 2 : findAndModify update code path didn't result in splits")@src/mongo/shell/utils.js:58 |
| (2,2,"findAndModify update code path didn't result in splits")@src/mongo/shell/utils.js:251 |
| @/data/buildslaves/OS_X_105_32bit_V2.2/mongo/jstests/sharding/findandmodify2.js:106 |
| |
| Error("Printing Stack Trace")@:0 |
| Wed Aug 22 09:45:38 uncaught exception: 2 is not greater than 2 : findAndModify update code path didn't result in splits |
| failed to load: /data/buildslaves/OS_X_105_32bit_V2.2/mongo/jstests/sharding/findandmodify2.js |
2012-08-22 09:45:44 EDT | |
2012-08-22 09:45:46 EDT | Wed Aug 22 09:45:45 got signal 15 (Terminated), will terminate after current cmd ends |
| Wed Aug 22 09:45:45 [interruptThread] now exiting |
| Wed Aug 22 09:45:45 [interruptThread] shutdown: going to close listening sockets... |
| Wed Aug 22 09:45:45 [interruptThread] closing listening socket: 14 |
| Wed Aug 22 09:45:45 [interruptThread] closing listening socket: 15 |
| Wed Aug 22 09:45:45 [interruptThread] closing listening socket: 16 |
| Wed Aug 22 09:45:45 [interruptThread] removing socket file: /tmp/mongodb-27999.sock |
| Wed Aug 22 09:45:45 [interruptThread] shutdown: going to flush diaglog... |
| Wed Aug 22 09:45:45 [interruptThread] shutdown: going to close sockets... |
| Wed Aug 22 09:45:45 [interruptThread] shutdown: waiting for fs preallocator... |
| Wed Aug 22 09:45:45 [interruptThread] shutdown: lock for final commit... |
| Wed Aug 22 09:45:45 [interruptThread] shutdown: final commit... |
| Wed Aug 22 09:45:45 [interruptThread] shutdown: closing all files... |
| Wed Aug 22 09:45:45 [interruptThread] closeAllFiles() finished |
| Wed Aug 22 09:45:45 [interruptThread] journalCleanup... |
| Wed Aug 22 09:45:45 [interruptThread] removeJournalFiles |
| Wed Aug 22 09:45:45 dbexit: |
| Wed Aug 22 09:45:46 [interruptThread] shutdown: removing fs lock... |
| Wed Aug 22 09:45:46 dbexit: really exiting now |