Rust Driver / RUST-1028

After running find in multiple sub-threads, the memory isn't released?

    • Type: Task
    • Resolution: Works as Designed
    • Priority: Unknown
    • Affects Version/s: None
    • Component/s: None

      Hi, when I try to run a parallel find against one collection (which contains 10190890 records, but I limit the find to 509545 items), I am sad to find that
      after the find completes and everything is dropped, the memory usage still stays high.

      During the concurrent read the memory is 2.9G, which I think is expected.
      But after everything is done, the memory is still 391M, which surprises me.

      I have no clue about what's going on here or why this happens. Can you please help?

      My server has a 40-core CPU and 120G of memory in total.
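
      For taking the before/after numbers programmatically, here is one way to read the process's resident memory (VmRSS) from inside the program on Linux; this is only a measurement sketch, not part of the repro:

      use std::fs;

      /// Return the process's resident set size in kB, read from
      /// /proc/self/status (Linux only). Returns None if the field is missing.
      fn rss_kb() -> Option<u64> {
          let status = fs::read_to_string("/proc/self/status").ok()?;
          status
              .lines()
              .find(|line| line.starts_with("VmRSS:"))
              .and_then(|line| line.split_whitespace().nth(1))
              .and_then(|kb| kb.parse().ok())
      }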

      Here is an example record from MongoDB:

      { "_id" : ObjectId("6137150c554447359d6141a7"), "beta" : 22, "momentum" : 32, "size" : 11, "earnings_yield" : 21, "residual_volatility" : 1, "growth" : 15, "book_to_price" : 14, "leverage" : 13, "liquidity" : 21, "non_linear_size" : 129, "industry" : "adf", "comovement" : 1, "date" : ISODate("2005-01-06T00:00:00.000Z"), "order_book_id" : "fdas" }

      Here is a code example that reproduces the issue:

      use bson::{doc, Document};
      use mongodb::options::FindOptions;
      use mongodb::sync::Client;
      use std::sync::mpsc;
      use std::thread;
      use std::io::{self, BufRead};
      
      enum TaskResult {
          Success,
          Fail,
      }
      
      fn main() {
          // Inner scope so that the client and the channel are dropped
          // before the final pause below.
          {
              let source =
                  Client::with_uri_str("mongodb://localhost:27017/?authSource=admin")
                      .unwrap();
              let (sender, receiver) = mpsc::channel();
      
              // Spawn 40 reader threads, each running its own limited find.
              for _ in 0..40 {
                  let source_coll = source.database("betahub").collection::<Document>("factor_exposure");
                  let sender = sender.clone();
                  thread::spawn(move || {
                      // Drain the cursor in chunks, clearing the buffer every
                      // 10k documents so it never holds more than one batch.
                      let res = source_coll
                          .find(doc! {}, FindOptions::builder().batch_size(10000).limit(509545).build())
                          .and_then(|cursor| {
                              let mut buf = vec![];
                              for doc in cursor {
                                  buf.push(doc);
                                  if buf.len() >= 10000 {
                                      buf.clear();
                                  }
                              }
                              Ok(())
                          });
                      match res {
                          Err(_e) => {
                              let _ = sender.send(TaskResult::Fail);
                          }
                          Ok(_) => {
                              let _ = sender.send(TaskResult::Success);
                          }
                      }
                  });
              }
      
              let mut count = 0;
              while let Ok(event) = receiver.recv() {
                  match event {
                      TaskResult::Fail => {
                          println!("get fail");
                          break;
                      },
                      TaskResult::Success => {
                          count += 1;
                          if count == 40 {
                              break;
                          }
                      }
                  }
              }
          }
      
          println!("Every thing is done, press enter to exit.");
          let _line1 = io::stdin().lock().lines().next().unwrap().unwrap();
      }
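
      If the leftover ~391M is the allocator caching freed pages rather than a real leak, repeating the read workload inside the same process should not keep growing it. Another quick check is to swap the global allocator and compare; a sketch assuming the tikv-jemallocator crate (my choice, nothing the driver requires):

      // Use jemalloc as the global allocator for the whole binary, then rerun
      // the repro and compare how much memory stays resident afterwards.
      use tikv_jemallocator::Jemalloc;

      #[global_allocator]
      static GLOBAL: Jemalloc = Jemalloc;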
      

            Assignee: Abraham Egnor (abraham.egnor@mongodb.com)
            Reporter: Chen Zero (yonghengzero@gmail.com)
            Votes: 0
            Watchers: 2
