WiredTiger / WT-4801

It seems that MongoDB compression does not work properly

    • Type: Technical Debt
    • Resolution: Gone away
    • Priority: Major - P3
    • Fix Version/s: None
    • Affects Version/s: 4.0.9
    • Component/s: Verify
    • Environment:
      MongoDB 4.0.9 + Nginx + PHP 7.0-FPM + PHP MongoDB driver + PHP MongoDB library
      Ubuntu 16.04, 20 GB RAM, 512 GB SSD

      Setup: Nginx + PHP-FPM + MongoDB (with the MongoDB PHP library)

      I am trying to compare the compression rates of MongoDB, but the results are not as expected. Here are my experiments.
       /etc/mongod.conf

      # mongod.conf // default setting

      # for documentation of all options, see:
      #   http://docs.mongodb.org/manual/reference/configuration-options/

      # Where and how to store data.
      storage:
        dbPath: /var/lib/mongodb
        journal:
          enabled: true
      #  engine:
      #  mmapv1:
      #  wiredTiger:

      # where to write logging data.
      systemLog:
        destination: file
        logAppend: true
        path: /var/log/mongodb/mongod.log

      # network interfaces
      net:
        port: 27017
        bindIp: 127.0.0.1

      # how the process runs
      processManagement:
        timeZoneInfo: /usr/share/zoneinfo

      #security:
      #operationProfiling:
      #replication:
      #sharding:

      ## Enterprise-Only Options:
      #auditLog:
      #snmp:
           

      Setting compression when creating a collection in the MongoDB shell:

      MongoDB shell> db.createCollection("test", {storageEngine: {wiredTiger: {configString: "block_compressor=<none|snappy|zlib>,prefix_compression=<false|true>"}}})

      Six compression-option combinations were tested in total:

      block_compressor = none, snappy, or zlib // prefix_compression = false or true
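
      As a sketch only, the six creation calls could be scripted in the mongo shell as below; the test_<compressor>_<prefix> collection names are hypothetical (the experiment above reused a single collection named "test"):

        // Create one collection per compressor/prefix combination.
        ["none", "snappy", "zlib"].forEach(function (comp) {
          [false, true].forEach(function (prefix) {
            db.createCollection("test_" + comp + "_" + prefix, {
              storageEngine: {
                wiredTiger: {
                  configString: "block_compressor=" + comp +
                                ",prefix_compression=" + prefix
                }
              }
            });
          });
        });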

          When checked with db.printCollectionStats(), the options were applied correctly.
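
      For a shorter check than the full db.printCollectionStats() dump, the two settings can be pulled straight out of the per-collection stats; a minimal sketch:

        // Print only the compressor-related fields of the creation string.
        var cs = db.test.stats().wiredTiger.creationString;
        print(cs.match(/block_compressor=[^,]*/)[0]);
        print(cs.match(/prefix_compression=[^,]*/)[0]);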

      The inserted data size is 100 KB × 100,000 documents = 10,240,000,000 bytes, about 9.5 GiB.

      But here are the db.test.storageSize() results:

      block_compressor = none: 10,653,536,256 bytes

      block_compressor = snappy: 10,653,405,184 bytes

      block_compressor = zlib: 6,690,177,024 bytes

      zlib is about 37% smaller than none, but none and snappy are essentially identical.

      (prefix_compression likewise made no difference.)
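
      As a quick sanity check on those numbers (plain shell arithmetic on the values quoted above):

        // Compression ratios relative to the uncompressed collection.
        var none = 10653536256, snappy = 10653405184, zlib = 6690177024;
        print("snappy/none = " + (snappy / none).toFixed(4)); // ~1.0000 (no gain)
        print("zlib/none   = " + (zlib / none).toFixed(4));   // ~0.6280 (~37% smaller)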

      What settings should I add?

      +UPDATE

      snappy + prefix_compression=false:

      "compression" : {
          "compressed pages read" : 0,
          "compressed pages written" : 0,
          "page written failed to compress" : 100007,
          "page written was too small to compress" : 1025
      }

      zlib + prefix_compression=false:

      "compression" : {
          "compressed pages read" : 0,
          "compressed pages written" : 98881,
          "page written failed to compress" : 0,
          "page written was too small to compress" : 924
      }
       
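      For reference, these counters live under the per-collection WiredTiger statistics, so the blocks above can be reproduced with:

        // Print only the "compression" section of the collection stats.
        printjson(db.test.stats().wiredTiger.compression);
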

      what is "page written failed to compress" mean? What is the solution?

      +update2

      MongoDB server version used: 4.0.9

      Insert code (one document per loop iteration):

      $result = $collection->insertOne( [
          'num'   => (int)$i,
          'title' => "$i",
          'main'  => "$i",
          'img'   => "$t",
          'user'  => "$users",
          'like'  => 0,
          'time'  => "$date",
      ] );

      --Variable Description--
      $i = 1 to 100,000 (incremented by 1)
      $t = 100 KB (102,400 bytes) random string
      $users = 10 characters drawn at random from "12134567890abcdefghij"
      $date = real-time server date (e.g. 2019:05:18 xx.xx.xx)
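
      For anyone without the PHP stack, here is a rough mongo-shell equivalent of the insert loop; the randomString helper and the img alphabet are assumptions made only to mirror the variable description above:

        // Hypothetical helper: build a random string over a given alphabet.
        function randomString(len, alphabet) {
          var out = "";
          for (var i = 0; i < len; i++) {
            out += alphabet.charAt(Math.floor(Math.random() * alphabet.length));
          }
          return out;
        }

        // Mirror of the PHP loop: 100,000 documents, 100 KB payload each.
        for (var i = 1; i <= 100000; i++) {
          db.test.insertOne({
            num:   i,
            title: "" + i,
            main:  "" + i,
            img:   randomString(102400, "abcdefghijklmnopqrstuvwxyz0123456789"), // alphabet assumed
            user:  randomString(10, "12134567890abcdefghij"),
            like:  0,
            time:  new Date().toISOString()
          });
        }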

      Indexes:
      db.test.createIndex( { "num":1 } )
      db.test.createIndex( { "title":1 } )
      db.test.createIndex( { "user":1 } )
      db.test.createIndex( { "like":1 } )
      db.test.createIndex( { "time":1 } )
       
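      A quick way to confirm that the five secondary indexes (plus the default _id index) were built:

        // List the key pattern of every index on the collection.
        db.test.getIndexes().forEach(function (ix) { printjson(ix.key); });
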

      The collection stats output is too long, so I will include only two:

      snappy + prefix_compression=false:

      "creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",

      snappy + prefix_compression=true:

      "creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=true,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",

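      The two creationStrings above differ only in the prefix_compression field, which can be confirmed mechanically; this throwaway diff assumes the hypothetical per-combination collections sketched earlier:

        // Field-by-field diff of two creation strings; with identical
        // structure, only the differing fields are printed.
        var a = db.getCollection("test_snappy_false").stats().wiredTiger.creationString.split(",");
        var b = db.getCollection("test_snappy_true").stats().wiredTiger.creationString.split(",");
        for (var i = 0; i < a.length; i++) {
          if (a[i] !== b[i]) print(a[i] + "  vs  " + b[i]);
        }
        // Expected: prefix_compression=false  vs  prefix_compression=true
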
      Thank you for your interest.

            Assignee: backlog-server-storage-engines [DO NOT USE] Backlog - Storage Engines Team
            Reporter: null-fiy J
            Votes: 0
            Watchers: 5
