Type: Question
Resolution: Done
Priority: Critical - P2
Affects Version/s: 2.2.3
Component/s: Replication, Shell
Environment: CentOS release 6.4 (Final)
I'm seeing odd behavior. I have a shard with a member tagged for ETL work; it is _id: 5 in the rs.conf() output below:
s9:SECONDARY> rs.conf()
{
    "_id" : "s9",
    "version" : 16,
    "members" : [
        {
            "_id" : 3,
            "host" : "ec2-184-169-144-16.us-west-1.compute.amazonaws.com:27017",
            "priority" : 0,
            "hidden" : true,
            "buildIndexes" : false
        },
        {
            "_id" : 4,
            "host" : "50.23.75.133:28009",
            "priority" : 2
        },
        {
            "_id" : 5,
            "host" : "198.23.68.151:28009",
            "votes" : 0,
            "priority" : 0,
            "tags" : {
                "etlstafe" : "true"
            }
        },
        {
            "_id" : 6,
            "host" : "50.23.100.8:28009",
            "priority" : 3
        }
    ]
}
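For reference, a tag like the one on member _id: 5 is normally applied with a replica set reconfig along these lines (a sketch only; the array index 2 corresponds to the member with _id: 5 in the configuration above):

cfg = rs.conf()
// members[2] is the entry with _id: 5 in this configuration
cfg.members[2].tags = { "etlstafe" : "true" }
rs.reconfig(cfg)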
When I connect to this secondary directly (while logged on to the host's console) with:
mongo --port 28009
I get the following:
MongoDB shell version: 2.2.3
connecting to: 127.0.0.1:28009/test
s9:SECONDARY> db.getMongo().setSlaveOk()
s9:SECONDARY> db.blocks.getIndexes()
[ ]
s9:SECONDARY>
when in fact I should see:
[
    {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "ns" : "blocks.blocks",
        "name" : "_id_"
    },
    {
        "v" : 1,
        "key" : {
            "user_id" : 1,
            "visible_ts" : 1,
            "state" : 1,
            "private" : 1,
            "created" : 1
        },
        "ns" : "blocks.blocks",
        "name" : "user_id_1_visible_ts_1_state_1_private_1_created_1"
    },
    {
        "v" : 1,
        "key" : {
            "user_id" : 1
        },
        "ns" : "blocks.blocks",
        "name" : "user_id_1"
    },
    {
        "v" : 1,
        "key" : {
            "custom_id" : 1
        },
        "ns" : "blocks.blocks",
        "name" : "custom_id_1"
    },
    {
        "v" : 1,
        "key" : {
            "short_hash" : 1
        },
        "ns" : "blocks.blocks",
        "name" : "short_hash_1"
    },
    {
        "v" : 1,
        "key" : {
            "user_id" : 1,
            "private" : 1,
            "created" : -1
        },
        "ns" : "blocks.blocks",
        "name" : "user_id_1_private_1_created_-1"
    }
]
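As a cross-check: in 2.2 the getIndexes() helper reads index definitions from the system.indexes collection, so the same information can be queried directly on that member (a sketch, assuming the same direct connection on port 28009):

s9:SECONDARY> db.getMongo().setSlaveOk()
s9:SECONDARY> db.getSiblingDB("blocks").system.indexes.find({ "ns" : "blocks.blocks" })

If this query also returns nothing, the index definitions themselves are absent from that member's copy of the data, rather than the getIndexes() helper misbehaving.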
I'm only able to get the full list of indexes when I restart the member in standalone mode (outside the replica set and on a unique port, as though I'm doing maintenance).
Is this because of the tag in place on that member?
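For completeness, the maintenance-style restart described above typically looks like this (a sketch; the dbpath shown is a placeholder, and the --replSet option is deliberately omitted so the member starts standalone):

# start the member standalone on a unique port, without --replSet
mongod --dbpath /data/s9 --port 28010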