[SERVER-54016] WiredTigerHS.wt: handle-read: pread: failed to read 4096 bytes at offset 36864: WT_ERROR: non-specific WiredTiger error Created: 25/Jan/21  Updated: 27/Oct/23  Resolved: 01/Feb/21

Status: Closed
Project: Core Server
Component/s: Logging
Affects Version/s: 4.4.1
Fix Version/s: None

Type: Question Priority: Major - P3
Reporter: teng zhuofei Assignee: Dmitry Agranat
Resolution: Works as Designed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: File WiredTiger.wt     File WiredTigerHS.wt     File mongodb.conf     Text File mongodb.log    
Participants:

 Description   

 I backed up the data by hard-linking the data directory of the primary node of the config server and data pods and copying the files, and then ran into a problem when restoring the data.

The new cluster uses the same replica set keys.

Running mongod --repair still does not fix the problem.
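
For reference, a repair attempt of this kind is typically run with the mongod stopped, roughly as follows (the data path is illustrative, not the exact path from this deployment):

    # stop the mongod that owns this data directory, then run the repair against it
    mongod --dbpath /path/to/dbpath --repair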

 



 Comments   
Comment by Dmitry Agranat [ 01/Feb/21 ]

Hi 15319725080@163.com,

I backed up the data by hard-linking the data directory of the primary node of the config server and data pods and copying the files, and then ran into a problem when restoring the data.

We do not support "hard linking the data directory" as a backup method, so the issue you reported is not unexpected. As such, I will go ahead and close this case.

Please refer to our supported backup methods: https://docs.mongodb.com/manual/core/backups/index.html. For questions about backup methods, we encourage you to ask our community for help by posting on the MongoDB Developer Community Forums.
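
For smaller deployments, one of the documented options is a mongodump/mongorestore cycle; a minimal sketch, assuming dumps are taken through a mongos (host names and paths below are illustrative):

    # dump the sharded cluster through a mongos
    mongodump --host <mongos-host> --port 27017 --out /backups/dump

    # restore into the new cluster, again through a mongos
    mongorestore --host <new-mongos-host> --port 27017 /backups/dump

For larger data sets, the same page also describes filesystem snapshot approaches, which for a sharded cluster require stopping the balancer and quiescing writes while the snapshot is taken.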

Regards,
Dima

Comment by teng zhuofei [ 29/Jan/21 ]

I use https://github.com/bitnami/charts/tree/master/bitnami/mongodb-sharded, running 3 config servers, 2 mongos, and 3 shards, each shard being a replica set with 2 data-bearing members and 1 arbiter.
I backed up the data by hard-linking the data directory of the primary node of the config server and data pods and copying the files, and then ran into a problem when restoring the data.
Because my secondary has journaling enabled and its journal and data files are on the same volume, I did not stop all write operations, but I did stop the balancer.
The uncompressed size of the data is 170 GiB. I copied the entire /bitnami/mongodb-sharded directory for each pod, using the same replica set keys in the new cluster.
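
For reference, stopping and re-enabling the balancer is typically done from a mongos; a minimal sketch (host name is illustrative):

    # disable the balancer and confirm it is off before copying any files
    mongo --host <mongos-host> --eval 'sh.stopBalancer(); print(sh.getBalancerState())'

    # re-enable it once the backup is complete
    mongo --host <mongos-host> --eval 'sh.startBalancer()'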

Comment by Dmitry Agranat [ 28/Jan/21 ]

Hi 15319725080@163.com, just to clarify my previous question, could you provide steps, as detailed as possible, about the backup procedure you used, including the state of the source when the backup was executed? When describing the source state during the backup, did you stop all write operations? What is the uncompressed size of the data? As for the source topology, is it a standalone, a replica set, or a sharded cluster?

The more details you can provide, the better we'll be equipped to help you.

Thanks,
Dima

Comment by teng zhuofei [ 28/Jan/21 ]

I backed up the data by hard-linking the data directory of the primary node of the config server and data pods and copying the files, and then ran into a problem when restoring the data.

Comment by Dmitry Agranat [ 28/Jan/21 ]

Hi 15319725080@163.com, it sounds like there might have been an issue with your backup process that resulted in the metadata corruption. Could you please link the MongoDB documentation page you followed and the exact steps you executed to back up your data?
