[SERVER-4417] Replication stops in one data centre when reIndex issued on one secondary in that datacentre Created: 02/Dec/11  Updated: 29/Feb/12  Resolved: 07/Dec/11

Status: Closed
Project: Core Server
Component/s: Replication
Affects Version/s: 2.0.1
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Pierre Dane Assignee: Kristina Chodorow (Inactive)
Resolution: Done Votes: 3
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

windows 64 Mutex build


Operating System: Windows
Participants:

 Description   

We have three data centres. The master is in one, and we have a pair of secondaries in each of the other two; these have priority = 0 and sometimes votes = 0. Only one of the secondaries in each data centre is read from. Today I issued a reIndex on the secondary that was not being read from, and replication to the one that was being read from halted. As soon as the reIndex completed, the other secondary started syncing again. Servers in the other data centres were not affected and continued syncing. Here is a snippet of the mongostat (a sketch of the set configuration follows the output):

localhost:3003 0 1239 301 0 96 7 0 140g 141g 12g 3 0 0|0 0|0 254k 4m
mongo03.lon.mongo:3003 *9 *0 *335 *0 11 3|0 0 140g 140g 12g 5.7 0 0|0 1|0 745b 240k 12
mongo04.lon.mongo:3003 *14 1677 *495 *0 0 3|0 0 140g 141g 11.7g 30 0 0|0 0|0 223k 6m 685
mongo09.wdc.mongo:3003 0 1197 297 0 93 7 0 140g 141g 12g 6.1 0 0|0 0|0 248k 4m 929
mongo10.wdc.mongo:3003 *6 2 *275 *0 11 7|0 0 142g 142g 12g 0 0 0|1 1|1 1k 138k 16
mongo74.ams01.mongo:3003 *0 *0 *0 *0 0 3|0 0 140g 140g 6.12g 0 0 0|0 1|0 275b 3k 13
mongo75.ams01.mongo:3003 *0 1640 *0 *0 0 4|0 0 140g 140g 7.81g 0 0 0|0 0|0 189k 5m 120

localhost:3003 0 1205 307 0 107 6 0 140g 141g 12g 4.6 0 0|0 0|0 253k 3m
mongo03.lon.mongo:3003 *9 *0 *337 *0 12 6|0 0 140g 140g 12g 8.5 0 0|0 1|0 1k 168k 12
mongo04.lon.mongo:3003 *8 1738 *318 *0 0 6|0 0 140g 141g 11.7g 7.1 0 1|0 1|0 200k 5m 685
mongo09.wdc.mongo:3003 0 1267 332 0 110 6 0 140g 141g 12g 1.5 0 0|0 0|0 269k 3m 929
mongo10.wdc.mongo:3003 *9 *0 *325 *0 11 7|0 0 142g 142g 12g 0 0 0|1 1|1 1k 166k 16
mongo74.ams01.mongo:3003 *0 *0 *0 *0 0 6|0 0 140g 140g 6.12g 0 0 0|0 1|0 730b 3k 13
mongo75.ams01.mongo:3003 *0 1775 *0 *0 0 6|0 0 140g 140g 7.81g 0 0 0|0 0|0 204k 5m 120

localhost:3003 0 1389 355 0 102 8 0 140g 141g 12g 0 0 0|0 0|0 290k 4m
mongo03.lon.mongo:3003 *9 *0 *338 *0 11 3|0 0 140g 140g 12g 14.2 0 0|0 1|0 792b 159k 12
mongo04.lon.mongo:3003 *10 1695 *341 *0 0 3|0 0 140g 141g 11.7g 11.2 0 0|0 0|0 195k 5m 685
mongo09.wdc.mongo:3003 0 1360 348 0 100 8 0 140g 141g 12g 3 0 0|0 1|0 284k 4m 929
mongo10.wdc.mongo:3003 *10 *0 *319 *0 10 6|0 0 142g 142g 12g 0 0 0|1 1|1 1k 147k 16
mongo74.ams01.mongo:3003 *0 *0 *0 *0 1 7|0 0 140g 140g 6.12g 0 0 0|0 1|0 591b 7k 14
mongo75.ams01.mongo:3003 *0 1621 *0 *0 0 3|0 0 140g 140g 7.81g 0 0 0|0 0|0 186k 5m 120

localhost:3003 0 1327 360 0 109 5 0 140g 141g 12g 3 0 1|0 1|0 285k 4m
mongo03.lon.mongo:3003 *14 *0 *375 *0 13 6|0 0 140g 140g 12g 11.4 0 0|0 1|0 1k 195k 12
mongo04.lon.mongo:3003 *14 1689 *391 *0 0 6|0 0 140g 141g 11.7g 13 0 0|0 0|0 195k 5m 685
mongo09.wdc.mongo:3003 0 1343 359 0 110 6 0 140g 141g 12g 3 0 0|0 0|0 286k 4m 929
mongo10.wdc.mongo:3003 *13 *0 *355 *0 11 7|0 0 142g 142g 12g 0 0 0|0 1|0 1k 179k 16
mongo74.ams01.mongo:3003 *0 *0 *0 *0 0 6|0 0 140g 140g 6.12g 0 0 0|0 1|0 730b 3k 14
mongo75.ams01.mongo:3003 *0 1621 *0 *0 0 3|0 0 140g 140g 7.81g 0 0 0|0 0|0 186k 5m 120

localhost:3003 0 1231 311 0 101 8 0 140g 141g 12g 3 0 0|0 0|0 255k 4m
mongo03.lon.mongo:3003 *12 *0 *318 *0 12 4|0 0 140g 140g 12g 11.4 0 0|0 1|0 990b 157k 12
mongo04.lon.mongo:3003 *12 1740 *312 *0 0 3|0 0 140g 141g 11.7g 14 0 8|0 8|0 199k 5m 685
mongo09.wdc.mongo:3003 0 1242 299 0 99 8 0 140g 141g 12g 0 0 0|0 1|0 252k 5m 930
mongo10.wdc.mongo:3003 *11 *0 *277 *0 11 5|0 0 142g 142g 12g 0 0 0|0 0|0 1k 197k 16
mongo74.ams01.mongo:3003 *5 *0 *62 *0 0 6|0 0 140g 140g 6.12g 11.4 0 1|0 1|0 730b 3k 14
mongo75.ams01.mongo:3003 *0 1595 *0 *0 0 6|0 0 140g 140g 7.81g 0 0 0|0 0|0 184k 5m 120

insert query update delete getmore command flushes mapped vsize res locked % idx miss % qr|qw ar|aw netIn netOut c
localhost:3003 0 1332 320 0 110 6 0 140g 141g 12g 6.1 0 0|0 1|0 270k 5m
mongo03.lon.mongo:3003 *8 *0 *316 *0 9 5|0 0 140g 140g 12g 19.9 0 1|0 1|0 1k 136k 12
mongo04.lon.mongo:3003 *8 1643 *281 *0 0 6|0 0 140g 141g 11.7g 10.1 0 0|0 0|0 190k 5m 685
mongo09.wdc.mongo:3003 0 1310 352 0 112 6 0 140g 141g 12g 7.6 0 0|0 1|0 280k 4m 930
mongo10.wdc.mongo:3003 *9 *0 3|330 *0 12 8|0 0 142g 142g 12g 15.3 0 0|0 0|0 1k 4m 16
mongo74.ams01.mongo:3003 *4 *0 *29 *0 1 3|0 0 140g 140g 6.13g 5.7 0 0|0 1|0 322b 49k 14
mongo75.ams01.mongo:3003 *9 1709 *91 *0 0 3|0 0 140g 140g 7.81g 4.2 0 0|0 0|0 196k 6m 120

localhost:3003 0 1185 350 0 98 8 0 140g 141g 12g 3 0 0|0 1|0 264k 4m
mongo03.lon.mongo:3003 *9 *0 *370 *0 7 5|0 0 140g 140g 12g 44.2 0 1|0 1|0 755b 196k 12
mongo04.lon.mongo:3003 *9 1541 *404 *0 0 3|0 0 140g 141g 11.7g 12.8 0 0|0 0|0 177k 4m 685
mongo09.wdc.mongo:3003 0 1142 322 0 98 7 0 140g 141g 12g 3 0 0|0 1|0 250k 4m 930
mongo10.wdc.mongo:3003 *9 *0 *343 *0 5 5|0 0 142g 142g 12g 0 0 0|0 0|0 725b 165k 16
mongo74.ams01.mongo:3003 *52 *0 *1493 *0 0 6|0 0 140g 140g 6.14g 78.5 0 1|0 1|0 730b 3k 14
mongo75.ams01.mongo:3003 *0 1600 *0 *0 0 7|0 0 140g 140g 7.81g 0 0 0|0 0|0 184k 6m 120

localhost:3003 0 1207 328 0 102 5 0 140g 141g 12g 3 0 0|0 0|0 259k 4m
mongo03.lon.mongo:3003 *8 *0 *358 *0 12 4|0 0 140g 140g 12g 15.7 0 0|0 1|0 1k 179k 12
mongo04.lon.mongo:3003 *8 1671 *365 *0 0 6|0 0 140g 141g 11.7g 12.8 0 0|0 0|0 192k 5m 685
mongo09.wdc.mongo:3003 0 1301 336 0 105 6 0 140g 141g 12g 3 0 0|0 0|0 273k 5m 930
mongo10.wdc.mongo:3003 *6 *0 *301 *0 10 7|0 0 142g 142g 12g 0 0 0|0 0|0 1k 152k 16
mongo74.ams01.mongo:3003 *82 *0 *2220 *0 0 3|0 0 140g 140g 6.15g 100 0 1|0 1|0 275b 3k 14
mongo75.ams01.mongo:3003 *0 1603 *0 *0 0 2|0 0 140g 140g 7.81g 0 0 0|0 0|0 184k 6m 120

localhost:3003 0 1356 323 0 106 8 0 140g 141g 12g 4.6 0 0|0 0|0 275k 5m
mongo03.lon.mongo:3003 *10 *0 *343 *0 12 5|0 0 140g 140g 12g 11.4 0 0|0 1|0 1k 169k 12
mongo04.lon.mongo:3003 *10 1744 *343 *0 0 4|0 0 140g 141g 11.7g 8.5 0 0|0 0|0 200k 5m 684
mongo09.wdc.mongo:3003 0 1325 327 0 110 7 0 140g 141g 12g 6.1 0 0|0 0|0 273k 5m 930
mongo10.wdc.mongo:3003 *7 *0 *326 *0 11 5|0 0 142g 142g 12g 0 0 0|0 0|0 1k 162k 16
mongo74.ams01.mongo:3003 *99 *0 *2330 *0 0 6|0 0 140g 140g 6.16g 98.5 0 1|0 1|0 730b 3k 14
mongo75.ams01.mongo:3003 *0 1600 *0 *0 0 7|0 0 140g 140g 7.81g 0 0 0|0 0|0 184k 5m 120

localhost:3003 0 1265 333 0 116 6 0 140g 141g 12g 7.6 0 0|0 0|0 268k 5m
mongo03.lon.mongo:3003 *10 *0 *343 *0 12 5|0 0 140g 140g 12g 11.4 0 0|0 1|0 1k 169k 12
mongo04.lon.mongo:3003 *10 1744 *343 *0 0 4|0 0 140g 141g 11.7g 8.5 0 0|0 0|0 200k 5m 684
mongo09.wdc.mongo:3003 0 1205 327 0 112 6 0 140g 141g 12g 7.6 0 0|0 0|0 258k 4m 930
mongo10.wdc.mongo:3003 *7 *0 *337 *0 11 8|0 0 142g 142g 12g 0 0 0|0 0|0 1k 169k 16
mongo74.ams01.mongo:3003 *92 *0 *2444 *0 0 3|0 0 140g 140g 6.18g 98.5 0 2|0 1|0 275b 3k 14
mongo75.ams01.mongo:3003 *0 1611 *0 *0 0 3|0 0 140g 140g 7.81g 0 0 0|0 0|0 185k 5m 120

insert query update delete getmore command flushes mapped vsize res locked % idx miss % qr|qw ar|aw netIn netOut c
localhost:3003 0 1188 356 0 109 6 0 140g 141g 12g 6.1 0 0|0 1|0 266k 4m
mongo03.lon.mongo:3003 *3 *0 *365 *0 12 4|0 0 140g 140g 12g 11.4 0 0|0 1|0 990b 177k 12
mongo04.lon.mongo:3003 *3 1669 *345 *0 0 5|0 0 140g 141g 11.7g 15.4 0 3|0 3|0 192k 5m 684
mongo09.wdc.mongo:3003 0 1217 360 0 116 7 0 140g 141g 12g 9.2 0 0|0 1|1 272k 4m 930
mongo10.wdc.mongo:3003 *3 *0 *343 *0 12 5|0 0 142g 142g 12g 1.5 0 0|0 0|0 1k 4m 16
mongo74.ams01.mongo:3003 *92 *0 *2444 *0 0 3|0 0 140g 140g 6.18g 98.5 0 2|0 1|0 275b 3k 14
mongo75.ams01.mongo:3003 *41 1720 *1102 *0 0 6|0 0 140g 140g 7.81g 42.2 0 4|1 5|1 197k 6m 120

localhost:3003 0 1286 337 0 117 19 0 140g 141g 12g 12.3 0 0|0 0|0 273k 9m
mongo03.lon.mongo:3003 *8 *0 *350 *0 11 5|0 0 140g 140g 12g 11.4 0 0|0 1|0 1k 171k 12
mongo04.lon.mongo:3003 *8 1712 1|370 *0 0 6|0 0 140g 141g 11.7g 14.4 0 0|0 0|0 197k 5m 684
mongo09.wdc.mongo:3003 0 1323 271 0 90 18 0 140g 141g 12g 6.1 0 0|0 1|0 253k 9m 930
mongo10.wdc.mongo:3003 *9 *0 *312 *0 10 9|0 0 142g 142g 12g 0 0 0|0 0|0 1k 142k 16
mongo74.ams01.mongo:3003 *62 *0 *1835 *0 0 14|0 0 140g 140g 6.19g 72.8 0 0|0 0|0 1k 4m 14
mongo75.ams01.mongo:3003 *74 1616 *2200 *0 0 3|0 0 140g 140g 7.81g 87.3 0 5|0 5|0 186k 5m 122

localhost:3003 0 1119 52 0 18 6 0 140g 141g 12g 0 0 0|0 20|0 146k 4m
mongo03.lon.mongo:3003 *5 *0 *248 *0 9 4|0 0 140g 140g 12g 12.8 0 0|0 1|0 849b 123k 12
mongo04.lon.mongo:3003 *5 1698 *248 *0 0 4|0 0 140g 141g 11.7g 11.4 0 0|0 1|0 195k 4m 683
mongo09.wdc.mongo:3003 0 922 33 0 1 7 0 140g 141g 12g 0 0 64|30 106|30 94k 2m 930
mongo10.wdc.mongo:3003 *2 *0 *108 *0 6 4|0 0 142g 142g 12g 0 0 0|0 1|0 572b 71k 16
mongo74.ams01.mongo:3003 *80 *0 *2626 *0 0 5|0 0 140g 140g 6.2g 100 0 0|0 0|0 407b 5k 14
mongo75.ams01.mongo:3003 *100 1654 *2318 *0 0 14|0 0 140g 140g 7.81g 92.9 0 7|1 8|1 191k 5m 122



 Comments   
Comment by Pierre Dane [ 07/Dec/11 ]

Well, I see that my secondaries are syncing from the other secondary in that data centre, which means no double sync across the Atlantic. Woop woop. Thanks

Comment by Kristina Chodorow (Inactive) [ 07/Dec/11 ]

Yes, 2.0 members will choose the member with the lowest ping time (who's ahead of them in operations) to sync from.

Check rs.status() to see ping times and who members are syncing from.
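For example, something like this in the shell will print each member's sync source and ping time (field names as reported by rs.status() in 2.0; the exact output shape may vary by version):

rs.status().members.forEach(function (m) {
    // "syncingTo" is the member this node is pulling the oplog from;
    // "pingMs" is the round-trip time used when picking that sync source.
    print(m.name + "  " + m.stateStr +
          "  syncingTo: " + (m.syncingTo || "-") +
          "  pingMs: " + (m.pingMs === undefined ? "-" : m.pingMs));
});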

Comment by Pierre Dane [ 07/Dec/11 ]

Thanks, Kristina. I thought that after the initial sync completed, all secondaries synced off the master, and that sync chaining was still something being worked on. Did I miss this new functionality? Great if slave chaining is implemented.

Comment by Kristina Chodorow (Inactive) [ 07/Dec/11 ]

I'm guessing that the other server in the data center was syncing from the one that was being reIndexed. Did you happen to run rs.status() on the stuck member during the reIndex?

Secondaries try to sync from the nearest member (in terms of ping time) so it's likely for members in a data center to sync from other members in the same data center. In the future, I'd recommend running rs.status() on other members before kicking off a reIndex. Check the "syncingTo" field to make sure no one is syncing from the member being reindexed, as reindexing will block replication.

If someone (A) is syncing from a member you want to reindex (B), one option is to take B offline and start it without the --replSet option on a different port. This makes it a stand-alone server that the replica set can't find (because it's listening on a different port) and then you can do the reIndex without affecting the set at all. Meanwhile, A will choose someone else to sync from.
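A rough sketch of that procedure from the command line (ports, dbpath, and set name are placeholders; the essential points are the ones described above: drop --replSet and move to a different port):

# 1. Shut B down cleanly.
mongo --port 3003 admin --eval "db.shutdownServer()"

# 2. Restart B stand-alone on a different port so the rest of the set can't reach it.
mongod --dbpath /data/b --port 3103

# 3. Rebuild the indexes while B is out of the set.
mongo --port 3103 attributestore --eval "db.attributestore.reIndex()"

# 4. Shut it down again and restart with its original options; it will catch up
#    from the oplog, provided the oplog window still covers the downtime.
mongo --port 3103 admin --eval "db.shutdownServer()"
mongod --dbpath /data/b --port 3003 --replSet <setName>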

Comment by Pierre Dane [ 07/Dec/11 ]

This happened again. I tried killing the operation with no success, and could not shut down the server either. I had to wait until the index build completed, and so was operating on stale data for a while. I am afraid to reIndex any more servers now. Any updates on this, please? Thanks

Comment by Pierre Dane [ 02/Dec/11 ]

Further problems. The problem above was occurring on another instance on that server. All of a sudden, both servers started syncing, including the one that was reindexing. I restarted the server being queried (not the one reindexing) shortly afterwards.

Fri Dec 02 14:41:11 [initandlisten] connection accepted from 127.0.0.1:63062 #25733
79814900/106124022 75%
83388700/106124022 78%
86872400/106124022 81%
90375500/106124022 85%
93831100/106124022 88%
97273300/106124022 91%
100635700/106124022 94%
103504900/106124022 97%
Fri Dec 02 14:42:32 [conn14413] external sort used : 0 files in 268 secs
Fri Dec 02 14:42:32 [conn14413] done building bottom layer, going to commit
Fri Dec 02 14:42:32 [conn14413] build index done 106124022 records 268.678 secs
Fri Dec 02 14:42:32 [conn14413] build index attributestore.attributestore { ct.4.uid: 1.0 } background
Fri Dec 02 14:42:32 [conn25730] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62970]
Fri Dec 02 14:42:32 [conn25728] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62920]
Fri Dec 02 14:42:32 [conn25727] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62902]
Fri Dec 02 14:42:32 [conn25700] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62006]
Fri Dec 02 14:42:32 [conn25707] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62216]
Fri Dec 02 14:42:32 [conn25701] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62030]
Fri Dec 02 14:42:32 [conn25703] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62080]
Fri Dec 02 14:42:32 [conn25713] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62425]
Fri Dec 02 14:42:32 [conn25722] query attributestore.storehealth nscanned:1 nreturned:1 reslen:68 109ms
Fri Dec 02 14:42:32 [conn25715] command attributestore.$cmd command: { collstats: "attributestore" } ntoreturn:1 reslen:325 124ms
Fri Dec 02 14:42:32 [conn25722] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62716]
Fri Dec 02 14:42:32 [conn25715] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62491]
Fri Dec 02 14:42:32 [conn25723] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62767]
Fri Dec 02 14:42:32 [conn25718] query attributestore.storehealth nscanned:1 nreturned:1 reslen:68 124ms
Fri Dec 02 14:42:32 [conn25718] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62575]
Fri Dec 02 14:42:32 [conn25708] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62233]
Fri Dec 02 14:42:32 [conn25719] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62626]
Fri Dec 02 14:42:32 [conn25709] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62283]
Fri Dec 02 14:42:32 [conn25705] command attributestore.$cmd command: { collstats: "attributestore" } ntoreturn:1 reslen:325 202ms
Fri Dec 02 14:42:32 [conn25705] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62146]
Fri Dec 02 14:42:32 [conn25711] command attributestore.$cmd command: { collstats: "attributestore" } ntoreturn:1 reslen:325 202ms
Fri Dec 02 14:42:32 [conn25717] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62560]
Fri Dec 02 14:42:32 [conn25711] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62357]
Fri Dec 02 14:42:32 [conn25721] command attributestore.$cmd command: { collstats: "attributestore" } ntoreturn:1 reslen:325 202ms
Fri Dec 02 14:42:32 [conn25724] query attributestore.storehealth nscanned:1 nreturned:1 reslen:68 202ms
Fri Dec 02 14:42:32 [conn25724] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62785]
Fri Dec 02 14:42:32 [conn25725] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62836]
Fri Dec 02 14:42:32 [conn25721] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62701]
Fri Dec 02 14:42:32 [conn25716] query attributestore.storehealth nscanned:1 nreturned:1 reslen:68 218ms
Fri Dec 02 14:42:32 [conn25710] query attributestore.storehealth nscanned:1 nreturned:1 reslen:68 202ms
Fri Dec 02 14:42:32 [conn25716] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62510]
Fri Dec 02 14:42:32 [conn25710] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62299]
Fri Dec 02 14:42:32 [conn25706] query attributestore.storehealth nscanned:1 nreturned:1 reslen:68 249ms
Fri Dec 02 14:42:32 [conn25706] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62164]
Fri Dec 02 14:42:32 [conn25731] query attributestore.storehealth nscanned:1 nreturned:1 reslen:68 436ms
Fri Dec 02 14:42:32 [conn25712] query attributestore.storehealth nscanned:1 nreturned:1 reslen:68 234ms
Fri Dec 02 14:42:32 [conn25714] query attributestore.storehealth nscanned:1 nreturned:1 reslen:68 249ms
Fri Dec 02 14:42:32 [conn25704] query attributestore.storehealth nscanned:1 nreturned:1 reslen:68 124ms
Fri Dec 02 14:42:32 [conn25426] getmore local.oplog.rs query: { ts: { $gte: new Date(5681465443039051814) } } cursorid:80337332639598 nreturned:3 reslen:665 6036794ms
Fri Dec 02 14:42:32 [conn25731] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62985]
Fri Dec 02 14:42:32 [conn25712] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62375]
Fri Dec 02 14:42:32 [conn25704] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62098]
Fri Dec 02 14:42:32 [conn25714] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62440]
Fri Dec 02 14:42:32 [conn25726] query attributestore.storehealth nscanned:1 nreturned:1 reslen:68 234ms
Fri Dec 02 14:42:32 [conn25726] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62851]
Fri Dec 02 14:42:32 [conn25720] query attributestore.storehealth nscanned:1 nreturned:1 reslen:68 249ms
Fri Dec 02 14:42:32 [conn25720] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62644]
Fri Dec 02 14:42:32 [rsSync] repl: old cursor isDead, will initiate a new one
Fri Dec 02 14:42:34 [rsSync] replSet syncing to: mongo10.wdc01.struq.mongo:3006
320700/106124026 0%
Fri Dec 02 14:42:36 [conn25426] getMore: cursorid not found local.oplog.rs 80337332639598
Fri Dec 02 14:42:36 [conn25426] end connection 10.70.6.86:60858
Fri Dec 02 14:42:37 [initandlisten] connection accepted from 10.70.6.86:55118 #25734
803900/106124261 0%
1252800/106125067 1%
1729300/106125889 1%
Fri Dec 02 14:42:45 [initandlisten] connection accepted from 127.0.0.1:63113 #25735
Fri Dec 02 14:42:46 [initandlisten] connection accepted from 127.0.0.1:63116 #25736
Fri Dec 02 14:42:46 [conn25736] end connection 127.0.0.1:63116
2185200/106126745 2%
2729000/106127497 2%
3175100/106128424 2%
Fri Dec 02 14:42:53 [rsSync] warning: ns: attributestore.attributestore couldn't unindex key: { : "ade685f4-35a4-4eb4-b9e5-71b710caf015" } for doc: _id: "beb2837d-72c2-43fa-a82d-4faa0b122466"
3640700/106129321 3%
4067200/106130250 3%
Fri Dec 02 14:43:00 [clientcursormon] mem (MB) res:4096 virt:96520 mapped:96322
4535000/106131034 4%
4990700/106131894 4%
5397900/106132790 5%
Fri Dec 02 14:43:09 [rsSync] warning: ns: attributestore.attributestore couldn't unindex key: { : "f4affffe-59f6-46c2-8c98-ad3376bb9fec" } for doc: _id: "3c0c2ed1-f452-465b-9e6f-39e508ad0058"
5836700/106133558 5%
6286000/106134331 5%
6670200/106135256 6%
7059400/106136140 6%
7462900/106136949 7%
7732000/106137336 7%
Fri Dec 02 14:43:28 [rsSync] warning: ns: attributestore.attributestore couldn't unindex key: { : "ccef31f5-8ab9-4cb2-a33f-54ae0e093462" } for doc: _id: "ee468bfa-724f-4c59-8bd4-ac8de8700e50"
7871300/106137586 7%
8255200/106138454 7%
8656200/106139341 8%
9127800/106140267 8%
9649500/106141189 9%
Fri Dec 02 14:43:43 [rsSync] warning: ns: attributestore.attributestore couldn't unindex key: { : "b2706af2-4890-4ec9-8bbe-7c006429b149" } for doc: _id: "b2e6bf3a-15cf-4281-8f9a-a85bbb3cb319"
10137500/106142105 9%
10629100/106142992 10%
11172100/106143831 10%
Fri Dec 02 14:43:51 [initandlisten] connection accepted from 127.0.0.1:63155 #25737
Fri Dec 02 14:43:51 [conn25737] end connection 127.0.0.1:63155
11665600/106144800 10%

Comment by Pierre Dane [ 02/Dec/11 ]

Could you please make this ticket private if possible? Thanks
