Description
While troubleshooting an issue uploading debug symbols for one of the variants of 5.2.0-rc1, we found that it was caused by the file size:
S3 copy failed for task mongo_release_enterprise_rhel_72_s390x_push_7bf8bcff9dd539233a5747b02bcfa087690a01e5_21_12_14_19_53_24, retrying: InvalidRequest: The specified copy source is larger than the maximum allowable size for a copy source: 5368709120
status code: 400, request id: ZCTCNHNH6ZMXWQWQ, host id: /iFfWmbVbOU4P4gLyfSTAhu2N6481lBL7icqMIkCNvrmXvrx6AIn0LDmCrGqYwiQRLdyLm5jY6Y=
Debug symbols are large, and this time the file went over 5 GiB (the 5368709120 in the error is exactly 5 GiB).
The S3 FAQ says 5 GB is the limit for objects uploaded in a single PUT, while multipart upload supports objects up to 5 TB.
The step that fails in the push task is s3copy, which copies files from one S3 bucket to another; s3put itself worked fine.
I believe one of the reasons to use s3copy for push tasks in server releases is that it tracks uploaded/copied files in the Evergreen DB (the pushes collection), so restarting a release won't overwrite the published artifacts. annie.black looked into this and thinks only s3copy does this, not s3.put, so we can't easily swap the commands.
With file sizes growing, I expect us to run into this problem more often, so we need to find a way to fix it.
Some options:
- Find a way for s3copy to support objects larger than 5 GB
- Have a way to track files uploaded to S3 with the s3.put command
- Create/use a service outside of Evergreen that can publish artifacts and track them in a similar way
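For context on the first option: S3's CopyObject API rejects copy sources over 5 GiB, but the UploadPartCopy API accepts a CopySourceRange per part, so a multipart-aware s3copy could copy objects up to the 5 TB limit in chunks. A minimal sketch of the range arithmetic such a copy would use, assuming an illustrative 1 GiB part size (the function name is hypothetical, not part of any existing tool):

```python
def copy_part_ranges(object_size, part_size=1024**3):
    """Return (part_number, 'bytes=start-end') pairs for S3 UploadPartCopy.

    Each pair would become one UploadPartCopy call with that
    CopySourceRange; part numbers are 1-based and the end offset of
    each range is inclusive, per the HTTP Range header convention.
    """
    ranges = []
    part_number = 1
    for start in range(0, object_size, part_size):
        end = min(start + part_size, object_size) - 1
        ranges.append((part_number, f"bytes={start}-{end}"))
        part_number += 1
    return ranges

# A 6 GiB debug-symbols archive would split into six 1 GiB parts:
parts = copy_part_ranges(6 * 1024**3)
```

The actual implementation would wrap these ranges in CreateMultipartUpload / UploadPartCopy / CompleteMultipartUpload calls; the sketch only shows how the single oversized copy source decomposes into parts that are each under the 5 GiB ceiling.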
Issue Links
- is related to SERVER-63432: Transferring large file to repo (Closed)