Type: Bug
Resolution: Duplicate
Priority: Minor - P4
Affects Version/s: 2.0.2
Component/s: None
Operating System: ALL
I enabled sharding on a collection with roughly 3 million documents, but because the balancing seemed to have a performance impact, I dropped the collection. I then re-created the shard key and index, and as data was fed back in, errors started to appear, including the following from the shell:
mongos> db.cta.count()
Wed Feb 29 16:58:21 uncaught exception: count failed: {
"assertion" : "setShardVersion failed host: mongos3a:27018
",
"assertionCode" : 10429,
"errmsg" : "db assertion failure",
"ok" : 0
}
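For reference, the command sequence described above would look roughly like this from the mongo shell, run against mongos on 2.0.x. This is a sketch only: the database name "mydb" and the shard key { _id: 1 } are assumptions, since the report names only the collection "cta".

use admin
db.runCommand({ enablesharding: "mydb" })
db.runCommand({ shardcollection: "mydb.cta", key: { _id: 1 } })
// Balancing showed a performance impact, so the collection was dropped:
use mydb
db.cta.drop()
// Re-create the index and shard key, then reload the data. After this,
// db.cta.count() through mongos fails with assertion 10429 as shown above.
db.cta.ensureIndex({ _id: 1 })
use admin
db.runCommand({ shardcollection: "mydb.cta", key: { _id: 1 } })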
From the mongos log, I see the following:
Wed Feb 29 16:58:05 [Balancer] distributed lock 'balancer/my07apl01.cityofchicago.org:27017:1330355727:1804289383' acquired, ts : 4f4ead7d2d46e699744598a6
Wed Feb 29 16:58:05 [Balancer] distributed lock 'balancer/my07apl01.cityofchicago.org:27017:1330355727:1804289383' unlocked.
Wed Feb 29 16:58:15 [Balancer] distributed lock 'balancer/my07apl01.cityofchicago.org:27017:1330355727:1804289383' acquired, ts : 4f4ead872d46e699744598a7
Wed Feb 29 16:58:15 [Balancer] distributed lock 'balancer/my07apl01.cityofchicago.org:27017:1330355727:1804289383' unlocked.
Wed Feb 29 16:58:21 [conn1869] going to retry checkShardVersion host: mongos3a:27018
Wed Feb 29 16:58:21 [conn1869] going to retry checkShardVersion host: mongos3a:27018
{ oldVersion: Timestamp 9000|0, assertion: "assertion s/d_state.cpp:529", errmsg: "db assertion failure", ok: 0.0 }
Wed Feb 29 16:58:21 [conn1869] going to retry checkShardVersion host: mongos3a:27018
{ oldVersion: Timestamp 9000|0, assertion: "assertion s/d_state.cpp:529", errmsg: "db assertion failure", ok: 0.0 }
Wed Feb 29 16:58:21 [conn1869] going to retry checkShardVersion host: mongos3a:27018
{ oldVersion: Timestamp 9000|0, assertion: "assertion s/d_state.cpp:529", errmsg: "db assertion failure", ok: 0.0 }
Wed Feb 29 16:58:21 [conn1869] setShardVersion failed host: mongos3a:27018
{ oldVersion: Timestamp 9000|0, assertion: "assertion s/d_state.cpp:529", errmsg: "db assertion failure", ok: 0.0 }
Wed Feb 29 16:58:21 [conn1869] Assertion: 10429:setShardVersion failed host: mongos3a:27018
{ oldVersion: Timestamp 9000|0, assertion: "assertion s/d_state.cpp:529", errmsg: "db assertion failure", ok: 0.0 }
0x535572 0x7f4556 0x7f3d2d 0x7f3d2d 0x7f3d2d 0x7f3d2d 0x7f3d2d 0x7f3d2d 0x5c1e06 0x5bffe7 0x79879b 0x793175 0x76cf2b 0x7b4927 0x7c6cf1 0x5e6a07 0x3144a0673d 0x3143ed44bd
./mongodb-linux-x86_64-2.0.2/bin/mongos(_ZN5mongo11msgassertedEiPKc+0x112) [0x535572]
./mongodb-linux-x86_64-2.0.2/bin/mongos [0x7f4556]
./mongodb-linux-x86_64-2.0.2/bin/mongos [0x7f3d2d]
./mongodb-linux-x86_64-2.0.2/bin/mongos [0x7f3d2d]
./mongodb-linux-x86_64-2.0.2/bin/mongos [0x7f3d2d]
./mongodb-linux-x86_64-2.0.2/bin/mongos [0x7f3d2d]
./mongodb-linux-x86_64-2.0.2/bin/mongos [0x7f3d2d]
./mongodb-linux-x86_64-2.0.2/bin/mongos [0x7f3d2d]
./mongodb-linux-x86_64-2.0.2/bin/mongos(_ZN5boost6detail8function17function_invoker4IPFbRN5mongo12DBClientBaseERKSsbiEbS5_S7_biE6invokeERNS1_15function_bufferES5_S7_bi+0x16) [0x5c1e06]
./mongodb-linux-x86_64-2.0.2/bin/mongos(_ZN5mongo15ShardConnection11_finishInitEv+0x137) [0x5bffe7]
./mongodb-linux-x86_64-2.0.2/bin/mongos(_ZN5mongo15dbgrid_pub_cmds8CountCmd3runERKSsRNS_7BSONObjEiRSsRNS_14BSONObjBuilderEb+0x6ab) [0x79879b]
./mongodb-linux-x86_64-2.0.2/bin/mongos(_ZN5mongo7Command20runAgainstRegisteredEPKcRNS_7BSONObjERNS_14BSONObjBuilderEi+0x8b5) [0x793175]
./mongodb-linux-x86_64-2.0.2/bin/mongos(_ZN5mongo14SingleStrategy7queryOpERNS_7RequestE+0x5cb) [0x76cf2b]
./mongodb-linux-x86_64-2.0.2/bin/mongos(_ZN5mongo7Request7processEi+0x187) [0x7b4927]
./mongodb-linux-x86_64-2.0.2/bin/mongos(_ZN5mongo21ShardedMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x71) [0x7c6cf1]
./mongodb-linux-x86_64-2.0.2/bin/mongos(_ZN5mongo3pms9threadRunEPNS_13MessagingPortE+0x287) [0x5e6a07]
/lib64/libpthread.so.0 [0x3144a0673d]
/lib64/libc.so.6(clone+0x6d) [0x3143ed44bd]
Wed Feb 29 16:58:21 [conn1869] ~ScopedDBConnection: _conn != null
Wed Feb 29 16:58:25 [Balancer] distributed lock 'balancer/my07apl01.cityofchicago.org:27017:1330355727:1804289383' acquired, ts : 4f4ead912d46e699744598a8
Wed Feb 29 16:58:25 [Balancer] distributed lock 'balancer/my07apl01.cityofchicago.org:27017:1330355727:1804289383' unlocked.
Wed Feb 29 16:58:35 [Balancer] distributed lock 'balancer/my07apl01.cityofchicago.org:27017:1330355727:1804289383' acquired, ts : 4f4ead9b2d46e699744598a9
Wed Feb 29 16:58:35 [Balancer] distributed lock 'balancer/my07apl01.cityofchicago.org:27017:1330355727:1804289383' unlocked.
Wed Feb 29 16:58:45 [Balancer] distributed lock 'balancer/my07apl01.cityofchicago.org:27017:1330355727:1804289383' acquired, ts : 4f4eada52d46e699744598aa
Wed Feb 29 16:58:46 [Balancer] distributed lock 'balancer/my07apl01.cityofchicago.org:27017:1330355727:1804289383' unlocked.
Wed Feb 29 16:58:56 [Balancer] distributed lock 'balancer/my07apl01.cityofchicago.org:27017:1330355727:1804289383' acquired, ts : 4f4eadb02d46e699744598ab
Wed Feb 29 16:58:56 [Balancer] distributed lock 'balancer/my07apl01.cityofchicago.org:27017:1330355727:1804289383' unlocked.
Any ideas?
Thanks.
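One way to see the mismatch the log complains about ("oldVersion: Timestamp 9000|0") is to compare the shard version that mongos has cached with the one the shard itself holds. A minimal sketch, assuming the full namespace is "mydb.cta" (the database name is an assumption):

// From a shell connected to mongos: the router's cached version.
db.adminCommand({ getShardVersion: "mydb.cta" })
// From a shell connected to the shard's primary mongod (here mongos3a:27018):
// the version the shard believes it has for the same namespace.
db.adminCommand({ getShardVersion: "mydb.cta" })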
Depends on: SERVER-4262 "when dropping collections need to invalidate all conn sharding state" (Closed)
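Since this was closed as a duplicate of SERVER-4262 (per-connection sharding state is not invalidated when a collection is dropped), the usual mitigation at the time was to restart the mongos routers after dropping a sharded collection, or to force them to reload their cached routing metadata. A minimal sketch, to be run against each mongos:

// flushRouterConfig marks the mongos's cached routing table as stale, so it
// is reloaded from the config servers on the next operation; restarting the
// mongos process has the same effect.
use admin
db.runCommand({ flushRouterConfig: 1 })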