  Core Server / SERVER-225

Server fails on dataset - database corruption


Details

    • Type: Bug
    • Resolution: Incomplete
    • Priority: Critical - P2
    • Component: Usability
    • Environment: 32-bit linux

    Description

      Tue Aug 11 01:03:19 lindex.newsites Caught Assertion insert, continuing
      Tue Aug 11 01:01:52 insert lindex.newsites exception userassert:can't map file memory - mongo requires 64 bit build for larger datasets 0ms
      mmap() failed for /mnt/lindex/mongodb/lindex.6 len:536870912 errno:12
      mmap() failed for /mnt/lindex/mongodb/lindex.6 len:536870912 errno:12

      My collection is called "newsites"

      In the shell, when I view the collections it shows many duplicates of the same collection (at least 20). When I try to drop the collection it says:

      > db.newsites.drop()

      {"errmsg" : "ns not found" , "ok" : 0}

      The server was consuming ~ 1.3 GB of memory (before it was restarted). The node has 1.7 GB of memory.

      I restarted the server and the duplicate collections are still there. After the restart, the process was able to run about 2/3 of the way through the dataset and then crashed again.

      The raw disk usage is ~ 3,027 MB (mongo data dir).

      From a code standpoint, I am using the Java driver and iterating over 1,884,105 rows using a DBCursor. I am inspecting a long value in the document and inserting a record into a new collection if it matches (in this case... all 1.8 M documents match).
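
      For reference, a minimal sketch of the loop described above, using the legacy Java driver API of that era. The source collection name ("sites"), the field being inspected ("siteId"), and the match condition are placeholders, not taken from this report; only the "lindex" database and the "newsites" target collection come from the log lines above.

      import com.mongodb.BasicDBObject;
      import com.mongodb.DB;
      import com.mongodb.DBCollection;
      import com.mongodb.DBCursor;
      import com.mongodb.DBObject;
      import com.mongodb.Mongo;

      public class NewSitesLoader {
          public static void main(String[] args) throws Exception {
              Mongo mongo = new Mongo("localhost");               // connection details assumed
              DB db = mongo.getDB("lindex");                      // database name from the mmap/log lines

              DBCollection source = db.getCollection("sites");    // placeholder source collection
              DBCollection target = db.getCollection("newsites"); // target collection from the report

              // Iterate over every document with a DBCursor (~1,884,105 rows in the report).
              DBCursor cursor = source.find();
              while (cursor.hasNext()) {
                  DBObject doc = cursor.next();

                  // Inspect a long value in the document; field name and condition are assumptions.
                  Object value = doc.get("siteId");
                  if (value instanceof Long && ((Long) value).longValue() > 0L) {
                      // Matching documents get inserted into the new collection.
                      target.insert(new BasicDBObject("siteId", value));
                  }
              }
          }
      }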

      This is a problem on a live/production server.


          People

            Assignee: Eliot Horowitz (Inactive)
            Reporter: Ryan Nitz (rn@deftlabs.com)
            Votes: 0
            Watchers: 0
