[SERVER-16588] [rsSync] warning: DR102 too much data written uncommitted 314.577MB
Created: 18/Dec/14 | Updated: 30/May/16 | Resolved: 02/Mar/15
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Replication |
| Affects Version/s: | 2.6.4 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Minor - P4 |
| Reporter: | Craig Genner | Assignee: | Ramon Fernandez Marina |
| Resolution: | Incomplete | Votes: | 0 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Attachments: | |
| Issue Links: | |
| Operating System: | ALL |
| Steps To Reproduce: | Install 2.6.6 onto Debian 7.7 and set up a 3-node replica set (one node is an arbiter). Watch the mongo log on the secondary. |
| Participants: | |
| Description |
|
I'm seeing the same problem as (or one very similar to) the one reported in https://jira.mongodb.org/browse/SERVER-6925.
Given that it was reported as fixed in the 2.2.x and 2.3.x series, I'm a little unsure whether this is a new bug or another manifestation of the same one. The data inserted into mongo is very small (on the order of kilobytes), averaging a couple of MB per hour, so these are not exactly large data volumes to replicate.
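A quick way to sanity-check the claimed write volume against the oplog is with standard mongo shell helpers (a sketch; run against the primary, nothing here is specific to this ticket):

    // Standard shell helpers for inspecting replication volume.
    db.printReplicationInfo()                    // configured oplog size and the time window it covers
    db.getSiblingDB("local").oplog.rs.stats()    // data size vs. on-disk size of the oplog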
|
| Comments |
| Comment by Daniel Pasette (Inactive) [ 30/May/16 ] |
|
We believe this is related to the physical data layout in the oplog. In a degenerate case this layout could result in this warning, though it is rare. A machine with a different oplog size would indeed change whether this message appears or not. |
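For what it's worth, one reading of the 2.6 mmapv1 journal code (an assumption on my part, not something confirmed in this ticket) is that DR102 fires once uncommitted bytes exceed three times a 100 MiB limit on 64-bit builds, and the message prints bytes divided by 1e6, which would explain why the reported figure is always just past 314.57MB:

    // Back-of-the-envelope arithmetic (assumes UncommittedBytesLimit is
    // 100 MiB on 64-bit builds and the warning threshold is 3x that limit).
    var thresholdBytes = 3 * 100 * 1024 * 1024;  // 314572800 bytes
    print(thresholdBytes / 1e6);                 // 314.5728 -- "314.577MB" is just over this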
| Comment by Yoni Levy [ 29/May/16 ] |
|
Hi, I'm seeing the exact same error, also reporting 314.577MB, on mongo version 2.6.3. Is there any new information on this? |
| Comment by Craig Genner [ 02/Mar/15 ] |
|
I've been waiting for ideas on resolving this since I attached the logs. I'll get you the additional information you requested on the indexes. Thanks, Craig |
| Comment by Ramon Fernandez Marina [ 02/Mar/15 ] |
|
craiggenner, we haven't heard from you for a while, so I'm going to resolve this ticket. If this is still an issue for you, feel free to reopen it and let us know whether you have any indexes on the collections where these operations are being performed (kglue.user, kglue.issue). Regards, |
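For reference, the index listing requested here can be pulled with the standard getIndexes() shell helper (a sketch; the database and collection names are taken from the comment above):

    // List the indexes on the two collections named in the comment.
    var kglue = db.getSiblingDB("kglue");
    printjson(kglue.user.getIndexes());
    printjson(kglue.issue.getIndexes());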
| Comment by Craig Genner [ 22/Dec/14 ] |
|
mongo log attachment |
| Comment by Daniel Pasette (Inactive) [ 22/Dec/14 ] |
|
If you can compress and attach the full mongodb log, that would be helpful. |
| Comment by Craig Genner [ 22/Dec/14 ] |
|
Hi Dan, there aren't a great number of documents in this mongo installation; it sees maybe a couple of hundred new and updated documents each day. I would certainly not expect the database traffic over an hour to be more than 100MB, so why it's replicating more than 256MB is a source of confusion. We are also seeing this at all times of the day, and it's always 314.577MB. This is an example of the largest document that would be updated/inserted:

    shard001:PRIMARY> db.issue.find({ 'ticketId': 'INC000000453819' }).pretty();
    ...
    {
        "timestamp" : ISODate("2014-12-22T10:04:36.916Z"),
        "notificationType" : "ACKNOWLEDGEMENT",
        "alertType" : "SMS",
        "status" : "RECEIVED"
    }
    ...
    {
        "summary" : "Acknowledgement received from Md",
        "detail" : null,
        "eventTime" : ISODate("2014-12-22T10:04:36.916Z"),
        "recordedInTicket" : true,
        "type" : "INTERNAL",
        "dataType" : "INFO"
    }
    ...
    {
        "status" : "ACCEPTED",
        "timestamp" : ISODate("2014-12-22T10:04:36.916Z")
    }
    ...

Anything else I can provide? |
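One way to confirm the quoted document really is only kilobytes is the standard Object.bsonsize() shell helper (a sketch; the ticketId is the one quoted above):

    // BSON size, in bytes, of the document quoted above.
    var doc = db.issue.findOne({ ticketId: "INC000000453819" });
    print(Object.bsonsize(doc));  // expected to be on the order of kilobytes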
| Comment by Daniel Pasette (Inactive) [ 21/Dec/14 ] |
|
Hi craiggenner, the message you're seeing indicates that you are writing more than a couple of KB. Can you describe the nature of the writes you're performing? |
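A quick way to capture the nature of those writes is the database profiler (a sketch; level 2 records every operation and adds overhead, so enable it only briefly):

    // Record all operations for a short window, then inspect recent writes.
    db.setProfilingLevel(2);
    // ...let the workload run for a minute, then:
    db.system.profile.find({ op: { $in: ["insert", "update"] } })
                     .sort({ ts: -1 }).limit(5).pretty();
    db.setProfilingLevel(0);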