[SERVER-23328] mongodb read error: failed to read 8589938688 bytes at offset 3739193344: WT_ERROR: non-specific WiredTiger error Created: 24/Mar/16 Updated: 23/Apr/16 Resolved: 22/Apr/16 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Admin, WiredTiger |
| Affects Version/s: | 3.2.0 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Critical - P2 |
| Reporter: | Laurent Eon | Assignee: | Kelsey Schubert |
| Resolution: | Done | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Attachments: | |
| Issue Links: | |
| Operating System: | ALL |
| Participants: | |
| Description |
|
Hello. The collection is pretty big, with 26,000,000 documents and 8 GB of data. Do you have an idea? I use MongoDB 3.2 on a Windows server.
|
| Comments |
| Comment by Ramon Fernandez Marina [ 22/Apr/16 ] | |
|
Thanks for the update kamaileon, and glad to hear MongoDB is working well. Please let us know if the issue resurfaces. | |
| Comment by Laurent Eon [ 20/Apr/16 ] | |
|
OK, so since I moved mongodb to Debian there are no more problems, and no need to restart mongodb each day. So it's all good for me now. | |
| Comment by Laurent Eon [ 14/Apr/16 ] | |
|
I'm going to try to move mongodb to a Linux Debian virtual machine (still under Hyper-V). | |
| Comment by Laurent Eon [ 14/Apr/16 ] | |
|
Mongo is installed on a virtual server under Hyper-V, and E: is a virtual drive. | |
| Comment by Michael Cahill (Inactive) [ 14/Apr/16 ] | |
|
kamaileon, I was reviewing this issue today and wanted to ask a question. I note that MongoDB data is stored on your E: drive – is that a locally connected disk or a network mount? If a network mount, can you please give details of the NAS? | |
| Comment by Laurent Eon [ 13/Apr/16 ] | |
|
In the log files you can see this error: 2016-04-13T06:30:53.478+0200 E STORAGE [thread2] WiredTiger (-28967) [1460521853:478862][26636:2000696320], log-server: log server error: The process cannot access the file because another process has locked a portion of the file. | |
| Comment by Laurent Eon [ 13/Apr/16 ] | |
|
I discovered a new issue this morning. I have attached the mongo log files. | |
| Comment by Laurent Eon [ 06/Apr/16 ] | |
|
3.3 screenshot-graissage is another script, which doesn't use mongodb. | |
| Comment by Laurent Eon [ 06/Apr/16 ] | |
|
1. No, and each time, restarting mongodb solves the issue.
3.2 The second script (called serfifi) opens a file and parses it every five minutes (it doesn't use the mongo database).
4. Done. One question: is mongodb more stable on a Linux distribution? | |
| Comment by Kelsey Schubert [ 05/Apr/16 ] | |
|
Hi kamaileon,
Thank you, | |
| Comment by Laurent Eon [ 05/Apr/16 ] | |
|
I don't know if there is a link or if it's another problem, but since the mongodb Windows service wasn't restarted this morning, my RAM was fully used (26 GB). | |
| Comment by Laurent Eon [ 05/Apr/16 ] | |
|
Yesterday I disabled the restart of mongodb to try to reproduce the problem. | |
| Comment by Kelsey Schubert [ 31/Mar/16 ] | |
|
Hi kamaileon,

Thank you for uploading the additional files. We have investigated the files and diagnostic data that you have uploaded. I'd like to summarize the behavior you are observing: each morning, updates are imported to your MongoDB server using mongoimport. If the server has not been restarted recently, a query executed during this process may encounter a WiredTiger read error. However, if the server has been restarted recently, queries executed during the mongoimport process do not encounter any errors. My understanding is that both of these behaviors are readily reproducible.

It would greatly help our investigation if you could reproduce this issue and upload the files before restarting the server. We would need a copy of the files you have uploaded (all WiredTiger files, journal files, diagnostic.data, and the collection mentioned in the error message) before the server is restarted. This will allow us to see the state of the files and metadata before recovery is run.

Thank you again for your help, | |
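For illustration, a daily import of the kind summarized above is typically a single mongoimport invocation like the following sketch; the host, database, collection, and file names here are placeholders, not values taken from this ticket:

    # hypothetical sketch: import the morning's updates, upserting existing documents
    mongoimport --host localhost --db mydb --collection mycollection --file daily_updates.json --upsert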
| Comment by Laurent Eon [ 30/Mar/16 ] | |
|
And here is the result with the {full: true} option. | |
| Comment by Laurent Eon [ 30/Mar/16 ] | |
|
Hello, I can't send you all the files in dbpath because there are other big collections not related to our problem, but I have uploaded all the files I guessed were necessary. Tell me if you need any other specific files. I temporarily solved the issue by restarting mongodb daily at 3:00 am. Here is the result of the validate command; it ends with the advice to use the {full: true} option to do a more thorough scan. | |
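A nightly restart of this kind can be scheduled on Windows with a task like the following; this is only a hedged sketch, assuming mongod is registered as a service under the default name MongoDB:

    rem hypothetical sketch; assumes the default "MongoDB" service name
    schtasks /create /tn "Restart MongoDB" /tr "cmd /c net stop MongoDB & net start MongoDB" /sc daily /st 03:00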
| Comment by Kelsey Schubert [ 29/Mar/16 ] | |
|
Hi kamaileon, thank you for uploading the logs to the portal. In order to reproduce this behavior on our side, we would need the complete contents of dbpath in addition to the collection you have previously uploaded. This error may indicate some kind of corruption in your data files. To investigate, can you please execute db.collection.validate() on the affected collection and post the output? Thank you, | |
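For reference, a minimal sketch of running this validation from the command line, assuming the affected collection is mycollection in database mydb (both placeholder names):

    # hypothetical sketch: full validation of one collection; names are placeholders
    mongo mydb --quiet --eval 'printjson(db.runCommand({validate: "mycollection", full: true}))'

Passing full: true asks the server to scan the documents themselves rather than only the collection metadata, which is slower but catches more kinds of corruption.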
| Comment by Laurent Eon [ 25/Mar/16 ] | |
|
Hello, thank you for your quick response. I have also attached the two mongo log files for yesterday and today. You can see at 6:30 the start of my import script, and the mongo crash at 06:32. Then the mongo service restarts at 07:32, and from 08:42 to 09:11 you can see my import script working fine on the database. | |
| Comment by Ramon Fernandez Marina [ 24/Mar/16 ] | |
|
kamaileon, the scenario you describe should work well. Would you be able to share the dataset with us so we can try to reproduce the issue on our end? I've created a private, secure upload portal you can use to send us your data. It will only be accessible to MongoDB staff investigating this ticket. Note that the portal has a 5GB upload limit, but on your linux box you can use split to break down the dataset into smaller files:
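for example, assuming the dataset is archived as dataset.tar.gz (a placeholder name), with 4 GB chunks to stay under the 5 GB limit:

    # hypothetical sketch; dataset.tar.gz is a placeholder archive name
    split -b 4096m dataset.tar.gz part.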
and then upload all the part.* files. Thanks, |