Details
- Type: Bug
- Resolution: Incomplete
- Priority: Major - P3
- Operating System: Solaris
Description
We just had failed upgrades on two of our MongoDB replica set clusters.
Both are set up the same way: 1 arbiter, 2 replica set members.
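For reference, a topology like the one above (2 data-bearing members plus 1 arbiter) could be initiated with a replica set config along these lines; the host names ("db1", "db2", "arb1") and set name ("rs0") are hypothetical, and the syntax is for the mongo shell of that era:

```shell
# Hypothetical sketch: initiate a replica set with 2 data-bearing members
# and 1 arbiter. Host names and the set name are made up for illustration.
mongo --eval '
  rs.initiate({
    _id: "rs0",
    members: [
      { _id: 0, host: "db1:27017" },
      { _id: 1, host: "db2:27017" },
      { _id: 2, host: "arb1:27017", arbiterOnly: true }
    ]
  })
'
```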
Virtual/mapped memory went from 3-7GB to an immediate 34GB on one replica set, and 16GB on the other. Both sets of boxes have 4GB of memory and 8GB of swap.
On version 1.8.1, same data, with active traffic:
PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
18180 root 4 45 0 3522M 1999M cpu/13 4:02 6.25% mongod
On version 2.0.3, same data, same node, with active traffic:
PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
11752 root 92 59 0 34G 1883M cpu/11 13:19 7.25% mongod
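The gap between the SIZE and RES columns in the two listings above is the core symptom here: virtual size balloons while resident memory stays flat. A minimal sketch of that comparison, using awk over prstat-style lines (the sample lines are copied from this report; the ratio threshold and output format are my own):

```shell
# Flag processes whose virtual size (SIZE, field 6) dwarfs their
# resident set (RES, field 7). Sample lines mimic the prstat output above.
report="$(printf '%s\n' \
  '18180 root 4 45 0 3522M 1999M cpu/13 4:02 6.25% mongod' \
  '11752 root 92 59 0 34G 1883M cpu/11 13:19 7.25% mongod' |
awk '
  # Convert a size like 3522M or 34G to megabytes.
  function mb(s) {
    if (s ~ /G$/) return substr(s, 1, length(s) - 1) * 1024
    if (s ~ /M$/) return substr(s, 1, length(s) - 1) + 0
    return s + 0
  }
  {
    size = mb($6); res = mb($7)
    printf "%s pid=%s virtual=%dMB resident=%dMB ratio=%.1f\n",
           $NF, $1, size, res, size / res
  }
')"
printf '%s\n' "$report"
```

On the two samples this reports a virtual/resident ratio of roughly 1.8 for the 1.8.1 process versus roughly 18.5 for the 2.0.3 process, which is the regression being described.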
All are running under Solaris zones (Joyent), but are just regular Joyent slices.
After about 30 minutes of traffic, the mongod processes would crash, and replica set failover would continue back and forth. We were upgrading to 2.0.3 to try to address the constant failover issue. Performance is fine, which is why we've kept the boxes at 4GB; our active data set is small (on-disk data is about 10GB on both).
This is on 64-bit.