[SERVER-2404] Probable protocol problem Created: 25/Jan/11 Updated: 30/Mar/12 Resolved: 01/Feb/11 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Internal Client, Sharding |
| Affects Version/s: | 1.7.5 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Aristarkh Zagorodnikov | Assignee: | Greg Studer |
| Resolution: | Cannot Reproduce | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Operating System: | ALL |
| Participants: |
| Description |
|
While trying to configure fresh sharding:

xm@celestine-3:~$ /opt/mongodb/bin/mongo celestine-3
shards:
> db.runCommand( { enablesharding : "test1" } )
{ "ok" : 1 } |
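For context, a fresh sharding setup of this era would roughly follow the mongo-shell sequence below. This is a sketch only: the shard name `testRS` and database name `test1` come from the report, while the seed host list and the use of `getSiblingDB("admin")` are assumptions, not details from the ticket.

```javascript
// Run against a mongos router (sketch; hostnames other than celestine-3 are hypothetical).
var admin = db.getSiblingDB("admin");

// Register the replica set as a shard, then enable sharding on the database.
admin.runCommand({ addshard: "testRS/celestine-1,celestine-2,celestine-3" });
// reply shown later in this ticket: { "shardAdded" : "testRS", "ok" : 1 }

admin.runCommand({ enablesharding: "test1" });
// reply shown in the description: { "ok" : 1 }
```

Both commands must be issued against the `admin` database on a mongos, not directly against a shard member.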
| Comments |
| Comment by Aristarkh Zagorodnikov [ 01/Feb/11 ] |
|
Can't repro after installing today's nightly build and starting from scratch on all three machines. Either I didn't properly update the binaries on one of the machines, or it got fixed as a side effect of other fixes. I think this can be closed since it now works fine right after configuring:

);
{ "shardAdded" : "testRS", "ok" : 1 }
> db.runCommand( { enablesharding : "test1" } )
{ "ok" : 1 } |
| Comment by Greg Studer [ 31/Jan/11 ] |
|
Can't seem to reproduce, using either the newest version or 1.6.5. Do you have any more information about how the replica set and sharding configuration was started/restarted in order to cause the BSONElement error? Any stack traces from the db logs when the error occurs would also be helpful. |
| Comment by Eliot Horowitz (Inactive) [ 31/Jan/11 ] |
|
@greg - can you see if you can reproduce? |
| Comment by Aristarkh Zagorodnikov [ 26/Jan/11 ] |
|
There's only one shard, comprising a 3-server replica set (named testRS), with all members at the same oplog timestamp. |
| Comment by Eliot Horowitz (Inactive) [ 25/Jan/11 ] |
|
Is it possible one of the shards you were talking to was older? |
| Comment by Aristarkh Zagorodnikov [ 25/Jan/11 ] |
|
The "no primary" is fixed by:

) |
| Comment by Aristarkh Zagorodnikov [ 25/Jan/11 ] |
|
It (maybe it's by design) also prevents sharding a database that does not exist:

> db.printShardingStatus()
shards:
> db.runCommand( { moveprimary : "test1", to : "testRS" } ); |
| Comment by Aristarkh Zagorodnikov [ 25/Jan/11 ] |
|
Looks like this happens when the subject database does not exist on the replica set. |
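If that diagnosis is right, a possible workaround (assumed here, not stated in the ticket) follows from the fact that MongoDB creates a database lazily on its first write: inserting any document materializes the database before `moveprimary` is run against it. The collection name `placeholder` is hypothetical.

```javascript
// mongo-shell sketch against a mongos (assumed workaround):
// a write creates the "test1" database on its primary shard,
// so movePrimary then has an existing database to operate on.
var test1 = db.getSiblingDB("test1");
test1.placeholder.insert({ created: new Date() });   // first write creates the database

var admin = db.getSiblingDB("admin");
admin.runCommand({ moveprimary: "test1", to: "testRS" });
```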
| Comment by Aristarkh Zagorodnikov [ 25/Jan/11 ] |
|
Please note that I cannot repeat this problem after starting from scratch (stopping all servers, removing config server data, starting over with sharding configuration). |