Python Driver / PYTHON-1005

Have to close cursors explicitly to get correct documents in multithreaded, high-load Django app

    • Type: Task
    • Resolution: Cannot Reproduce
    • Priority: Major - P3
    • Fix Version/s: None
    • Affects Version/s: 3.0.3
    • Component/s: None
    • Labels: None
    • Environment:
      Ubuntu v14.04
      MongoDB v2.6.11
      PyMongo v3.0.3

      12 core CPU
      16 GB RAM

    Description:

      Hi,

      We are using MongoDB in one of our Django applications running under uwsgi. When we run it on a weak system (2-core CPU, 2 GB RAM) under high load, handling requests on six parallel processes (2 requests each), nothing unusual happens.
      But when we run it on a more powerful system (12-core CPU, 16 GB RAM), the returned cursor contains fewer results than expected. For example, if 8,000 documents in our collection match a query, the cursor sometimes contains far fewer documents, or even none at all.
      This does not happen right after startup: the application runs normally for a while, and only after processing roughly 100,000 requests does it start returning wrong responses.
      Respawning the uwsgi processes reduces the problem, but even then it still happens occasionally.
      In any case, we changed our code to close every cursor explicitly as soon as we no longer need it, and the problem disappeared completely. What is the reason for this behavior? Since Python is garbage-collected, it should not be necessary to close cursors explicitly.
      And why, without explicit closing, does PyMongo reuse old cursors instead of creating new ones?
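
      For reference, a minimal sketch of the pattern we switched to (the host, database, and collection names here are placeholders):

          from pymongo import MongoClient

          client = MongoClient("localhost", 27017)
          collection = client.mydb.mycollection  # placeholder names

          def fetch_matching(query):
              # Consume the cursor fully and close it explicitly in a finally
              # block so the server-side cursor is freed even if iteration fails.
              cursor = collection.find(query)
              try:
                  return list(cursor)
              finally:
                  cursor.close()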

      Thank you all.

            Assignee:
            bernie@mongodb.com Bernie Hackett
            Reporter:
            5hahinism Shahin Azad
            Votes:
            1
            Watchers:
            4

              Created:
              Updated:
              Resolved: