[SERVER-1535] don't let you add SYNC as a shard (WAS: Failed to insert data into shard, which is a replica set) Created: 02/Aug/10  Updated: 12/Jul/16  Resolved: 03/Aug/10

Status: Closed
Project: Core Server
Component/s: Replication, Sharding
Affects Version/s: 1.5.7
Fix Version/s: 1.5.8

Type: Bug Priority: Major - P3
Reporter: Che-Ching Wu Assignee: Eliot Horowitz (Inactive)
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: Text File console.txt     Text File mongod2.log     Text File mongos.log     Text File mongos2.log    
Operating System: Linux
Participants:

 Description   

I built up a system like this:
2 replica set shard servers
1. [shard11, shard12]
2. [shard21, shard22, arbiter2]
1 config server and 1 aggregator (mongos) on the same machine

Then I created one database and one collection, and enabled sharding on them.
After inserting some data via the Python driver, I got an error. Please check the attachments.



 Comments   
Comment by Che-Ching Wu [ 04/Aug/10 ]

MongoDB shell version: 1.5.7
connecting to: test
> db.printShardingStatus()
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
      { "_id" : "shard0000", "host" : "shard1/vm-shard11:27018,vm-shard12:27018" }
      { "_id" : "shard0001", "host" : "shard2/vm-shard21:27018,vm-shard22:27018,vm-arbiter2:27018" }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "mytest", "partitioned" : true, "primary" : "shard0000" }
          mytest.repl chunks:
              { "_id" : { $minKey : 1 } } -->> { "_id" : ObjectId("4c578d083a4f3f1b2b000000") } on : shard0001 { "t" : 2000, "i" : 2 }
              { "_id" : ObjectId("4c578d083a4f3f1b2b000000") } -->> { "_id" : ObjectId("4c578d0f3a4f3f1b2b000268") } on : shard0001 { "t" : 38000, "i" : 7 }
              { "_id" : ObjectId("4c578d0f3a4f3f1b2b000268") } -->> { "_id" : ObjectId("4c578d143a4f3f1b2b000443") } on : shard0001 { "t" : 38000, "i" : 9 }
              ......
              ......
              { "_id" : ObjectId("4c578f423a4f3f1b2b00b1d1") } -->> { "_id" : ObjectId("4c578f553a4f3f1b2b00b933") } on : shard0000 { "t" : 38000, "i" : 5 }
              { "_id" : ObjectId("4c578f553a4f3f1b2b00b933") } -->> { "_id" : ObjectId("4c578f733a4f3f1b2b00c1b2") } on : shard0000 { "t" : 42000, "i" : 0 }
              { "_id" : ObjectId("4c578f733a4f3f1b2b00c1b2") } -->> { "_id" : ObjectId("4c578f883a4f3f1b2b00c914") } on : shard0000 { "t" : 42000, "i" : 1 }
              { "_id" : ObjectId("4c578f883a4f3f1b2b00c914") } -->> { "_id" : ObjectId("4c578f983a4f3f1b2b00d076") } on : shard0000 { "t" : 42000, "i" : 2 }
              { "_id" : ObjectId("4c578f983a4f3f1b2b00d076") } -->> { "_id" : ObjectId("4c578fa93a4f3f1b2b00d7d8") } on : shard0000 { "t" : 42000, "i" : 3 }
              { "_id" : ObjectId("4c578fa93a4f3f1b2b00d7d8") } -->> { "_id" : ObjectId("4c578fe33a4f3f1b2b00e058") } on : shard0000 { "t" : 43000, "i" : 0 }
              { "_id" : ObjectId("4c578fe33a4f3f1b2b00e058") } -->> { "_id" : { $maxKey : 1 } } on : shard0000 { "t" : 43000, "i" : 1 }
      { "_id" : "test", "partitioned" : false, "primary" : "shard0001" }

> bye

Comment by Eliot Horowitz (Inactive) [ 03/Aug/10 ]

Can you send the output of db.printShardingStatus?

Comment by Che-Ching Wu [ 03/Aug/10 ]

Please check mongod2.log out.

Comment by Eliot Horowitz (Inactive) [ 03/Aug/10 ]

Can you attach mongod logs as well?

Comment by Che-Ching Wu [ 03/Aug/10 ]

Yes, I stopped all services and cleaned all data. Then restarted all of them. The version I use is still 1.5.7.

Comment by Eliot Horowitz (Inactive) [ 03/Aug/10 ]

Did you start from scratch?

Comment by Che-Ching Wu [ 03/Aug/10 ]

This time I got another error after following your instructions to start. Here they are:

*mongos* (log file: see mongos2.log)

0x507de1 0x5e6b9e 0x5ef895 0x5efa87 0x61096f 0x6137f5 0x63ca8b 0x647a29 0x55ad12 0x66be20 0x337f6064a7 0x337ead3c2d
/opt/mongodb/bin/mongos(_ZN5mongo11msgassertedEiPKc+0x1e1) [0x507de1]
/opt/mongodb/bin/mongos(_ZN5mongo5Chunk12moveIfShouldEN5boost10shared_ptrIS0_EE+0x5ee) [0x5e6b9e]
/opt/mongodb/bin/mongos(_ZN5mongo5Chunk14_splitIfShouldEl+0xa05) [0x5ef895]
/opt/mongodb/bin/mongos(_ZN5mongo5Chunk13splitIfShouldEl+0x27) [0x5efa87]
/opt/mongodb/bin/mongos(_ZN5mongo13ShardStrategy7_insertERNS_7RequestERNS_9DbMessageEN5boost10shared_ptrINS_12ChunkManagerEEE+0x24f) [0x61096f]
/opt/mongodb/bin/mongos(_ZN5mongo13ShardStrategy7writeOpEiRNS_7RequestE+0x295) [0x6137f5]
/opt/mongodb/bin/mongos(_ZN5mongo7Request7processEi+0x16b) [0x63ca8b]
/opt/mongodb/bin/mongos(_ZN5mongo21ShardedMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortE+0x149) [0x647a29]
/opt/mongodb/bin/mongos(_ZN5mongo3pms9threadRunEPNS_13MessagingPortE+0x252) [0x55ad12]
/opt/mongodb/bin/mongos(thread_proxy+0x80) [0x66be20]
/lib64/libpthread.so.0 [0x337f6064a7]
/lib64/libc.so.6(clone+0x6d) [0x337ead3c2d]

*mongod*

0x5313c3 0x53db81 0x7ebc21 0x77be14 0x77d268 0x5f77d5 0x5fcb76 0x6eec1a 0x6f2344 0x80e532 0x818340 0x379ce064a7 0x379c2d3c2d
/opt/mongodb/bin/mongod(_ZN5mongo12sayDbContextEPKc+0xb3) [0x5313c3]
/opt/mongodb/bin/mongod(_ZN5mongo8assertedEPKcS1_j+0x111) [0x53db81]
/opt/mongodb/bin/mongod(_ZN5mongo16MoveChunkCommand3runERKSsRNS_7BSONObjERSsRNS_14BSONObjBuilderEb+0x29b1) [0x7ebc21]
/opt/mongodb/bin/mongod(_ZN5mongo11execCommandEPNS_7CommandERNS_6ClientEiPKcRNS_7BSONObjERNS_14BSONObjBuilderEb+0x584) [0x77be14]
/opt/mongodb/bin/mongod(_ZN5mongo12_runCommandsEPKcRNS_7BSONObjERNS_10BufBuilderERNS_14BSONObjBuilderEbi+0x7a8) [0x77d268]
/opt/mongodb/bin/mongod(_ZN5mongo11runCommandsEPKcRNS_7BSONObjERNS_5CurOpERNS_10BufBuilderERNS_14BSONObjBuilderEbi+0x35) [0x5f77d5]
/opt/mongodb/bin/mongod(_ZN5mongo8runQueryERNS_7MessageERNS_12QueryMessageERNS_5CurOpES1_+0x29d6) [0x5fcb76]
/opt/mongodb/bin/mongod [0x6eec1a]
/opt/mongodb/bin/mongod(_ZN5mongo16assembleResponseERNS_7MessageERNS_10DbResponseERKNS_8SockAddrE+0x14b4) [0x6f2344]
/opt/mongodb/bin/mongod(_ZN5mongo10connThreadEPNS_13MessagingPortE+0x312) [0x80e532]
/opt/mongodb/bin/mongod(thread_proxy+0x80) [0x818340]
/lib64/libpthread.so.0 [0x379ce064a7]
/lib64/libc.so.6(clone+0x6d) [0x379c2d3c2d]
*python client*

Traceback (most recent call last):
File "bench.py", line 19, in ?
id = conn[product][coll].insert(doc, safe=True)
File "build/bdist.linux-x86_64/egg/pymongo/collection.py", line 232, in insert
File "/usr/lib64/python2.4/site-packages/pymongo-1.7_-py2.4-linux-x86_64.egg/pymongo/connection.py", line 596, in _send_message
return self.__check_response_to_last_error(response)
File "/usr/lib64/python2.4/site-packages/pymongo-1.7_-py2.4-linux-x86_64.egg/pymongo/connection.py", line 565, in __check_response_to_last_error
raise OperationFailure(error["err"], error["code"])
pymongo.errors.OperationFailure: moveAndCommit failed: db assertion failure { assertion: "assertion s/d_migrate.cpp:239", errmsg: "db assertion failure", ok: 0.0 }
Comment by auto [ 03/Aug/10 ]

Author:

{'login': 'erh', 'name': 'Eliot Horowitz', 'email': 'eliot@10gen.com'}

Message: can't use SYNC cluster a shard SERVER-1535
http://github.com/mongodb/mongo/commit/fd600969e0eccdfce48bb93fadaecc5daed64129

Comment by Eliot Horowitz (Inactive) [ 03/Aug/10 ]

You need to add a replica set shard as:

name/vm-shard21:27018,vm-shard22:27018,vm-arbiter2:27018

The fix will enforce this.
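The distinction Eliot describes is in the host string passed to addShard: a bare comma-separated host list is treated as a SYNC cluster (now rejected as a shard), while a replica set must be given as `setname/host1,host2,...`. A minimal sketch of building that string (the helper name is illustrative, not part of MongoDB or pymongo):

```python
def replica_set_shard_host(set_name, members):
    """Build the host string expected when adding a replica set as a shard.

    A bare "host1,host2" list (no "setname/" prefix) would be interpreted
    as a SYNC cluster, which SERVER-1535 disallows as a shard.
    """
    # Strip whitespace so "host1, host2" doesn't yield bad seed entries.
    seeds = ",".join(h.strip() for h in members)
    return "%s/%s" % (set_name, seeds)

print(replica_set_shard_host(
    "shard2", ["vm-shard21:27018", "vm-shard22:27018", "vm-arbiter2:27018"]))
# -> shard2/vm-shard21:27018,vm-shard22:27018,vm-arbiter2:27018
```

Note that the commands in the report below omit the `shard1/` and `shard2/` prefixes, which is why mongos treated the shards as SYNC clusters.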

Comment by Che-Ching Wu [ 03/Aug/10 ]

I use 1.5.7

Here are the commands I ran.

/opt/mongodb/bin/mongod --fork --dbpath /var/lib/mongo/ --logpath /var/log/mongo/mongod.log --logappend --rest -shardsvr --replSet shard1/vm-shard11:27018,vm-shard12:27018

echo 'cfg = {_id: "shard1", members:[{_id: 0, host:"vm-shard11:27018"},{_id: 1, host:"vm-shard12:27018"}]}; rs.initiate(cfg);' | /opt/mongodb/bin/mongo localhost:27018

/opt/mongodb/bin/mongod --fork --dbpath /var/lib/mongo/ --logpath /var/log/mongo/mongod.log --logappend --rest -shardsvr --replSet shard2/vm-shard21:27018,vm-shard22:27018,vm-arbiter2:27018

echo 'cfg = {_id: "shard2", members:[{_id: 0, host:"vm-shard21:27018"},{_id: 1, host:"vm-shard22:27018"},{_id: 2, host:"vm-arbiter2:27018", "arbiterOnly": true}]}; rs.initiate(cfg);' | /opt/mongodb/bin/mongo localhost:27018

/opt/mongodb/bin/mongod --fork --dbpath /var/lib/mongo/ --logpath /var/log/mongo/mongod.log --logappend --rest -configsvr

/opt/mongodb/bin/mongos --fork --configdb vm-config1:27019 --logpath /var/log/mongo/mongos.log --logappend

echo 'use admin; db.runCommand({ addshard : "vm-shard11:27018,vm-shard12:27018" }); db.runCommand({ addShard : "vm-shard21:27018, vm-shard22:27018, vm-arbiter2:27018" });'

And the sharding status:
> db.printShardingStatus()
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
      { "_id" : "shard0000", "host" : "vm-shard11:27018,vm-shard12:27018" }
      { "_id" : "shard0001", "host" : "vm-shard21:27018, vm-shard22:27018, vm-arbiter2:27018" }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "wfbsh", "partitioned" : true, "primary" : "shard0000" }
          wfbsh.repl chunks:
              { "_id" : { $minKey : 1 } } -->> { "_id" : ObjectId("4c568b063a4f3f1313000000") } on : shard0000 { "t" : 4000, "i" : 0 }
              { "_id" : ObjectId("4c568b063a4f3f1313000000") } -->> { "_id" : ObjectId("4c568be13a4f3f134d00005e") } on : shard0000 { "t" : 4000, "i" : 1 }
              { "_id" : ObjectId("4c568be13a4f3f134d00005e") } -->> { "_id" : ObjectId("4c5694e83a4f3f139f0001da") } on : shard0000 { "t" : 4000, "i" : 2 }
              { "_id" : ObjectId("4c5694e83a4f3f139f0001da") } -->> { "_id" : { $maxKey : 1 } } on : shard0000 { "t" : 4000, "i" : 3 }
Comment by Eliot Horowitz (Inactive) [ 02/Aug/10 ]

Can you provide all the startup lines and the output of db.printShardingStatus?
Also - what version are you running?

Generated at Thu Feb 08 02:57:17 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.