Core Server / SERVER-82776

fast_archive errors when there is not enough disk space


Details

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major - P3
    • ALL
    • Build and Correctness OnDeck

    Description

      Here and here are examples of tasks that fail because there is no space left on the device to extract the core dumps to.

       

      We need to do one of the following:
      1. Delete files to save space (the task is already over at this point; are there any useless files on disk?)

      2. Move these tasks to a large distro

      3. Compress the files to another location that does have space

      Currently we rely on the unextracted core dumps remaining on the machine so that a later step can verify that they come from a "known binary"; this might need to change if we delete them as we go to save space.
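
      As a rough sketch of the kind of guard the extraction step could grow (the helper names, the 2x headroom factor, and the assumption that the dumps arrive as tar/zip archives are all illustrative, not taken from the current code):

      import shutil
      from pathlib import Path

      # Hypothetical guard: only unpack a core dump archive when the device has
      # comfortably more free space than the archive itself takes up.
      def can_extract(archive: Path, headroom: float = 2.0) -> bool:
          free_bytes = shutil.disk_usage(archive.parent).free
          return free_bytes > archive.stat().st_size * headroom

      def extract_core_dumps(archives: list[Path], dest: Path) -> None:
          for archive in archives:
              if not can_extract(archive):
                  # Keep the compressed archive instead of failing the task, so the
                  # later "known binary" verification still has something to read.
                  print(f"not enough free disk space, leaving {archive} compressed")
                  continue
              shutil.unpack_archive(str(archive), extract_dir=str(dest))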

       

      It also might be good to add a limit on the number of core dumps that can get uploaded. I can't find the task link anymore, but I have seen a task that tried to upload 300+ core dumps because it was a suite with lots of tests and every one of them failed, producing core dumps. Adding an arbitrary limit of 50 or so core dumps seems weird, but it is probably "good enough" for developers to get the information they need. I am not sure if there is a good way to prioritize which core dumps should get uploaded in this case.
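
      One possible shape for such a cap, purely as a sketch (the 50-dump limit, the file name pattern, and the "newest first" priority are placeholders, not an agreed-on policy):

      from pathlib import Path

      MAX_CORE_DUMPS = 50  # arbitrary cap discussed above

      def select_core_dumps_for_upload(core_dir: Path, limit: int = MAX_CORE_DUMPS) -> list[Path]:
          # Newest-first is only a guess at a useful priority; smallest-first or
          # one-dump-per-test would be equally defensible.
          dumps = sorted(core_dir.glob("*.core*"), key=lambda p: p.stat().st_mtime, reverse=True)
          if len(dumps) > limit:
              print(f"upload limit reached, skipping {len(dumps) - limit} of {len(dumps)} core dumps")
          return dumps[:limit]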

    Attachments

    Activity

    People

    Assignee: Trevor Guidry (trevor.guidry@mongodb.com)
    Reporter: Trevor Guidry (trevor.guidry@mongodb.com)
    Votes: 0
    Watchers: 5
