• Type: Bug
    • Resolution: Duplicate
    • Priority: Critical - P2
    • Fix Version/s: None
    • Affects Version/s: 8.0.3
    • Component/s: None
    • Environment:
      Distributor ID: Debian
      Description: Debian GNU/Linux 12 (bookworm)
      Release: 12
      Codename: bookworm
      Linux v2202406226603273352 6.1.0-27-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.115-1 (2024-11-01) x86_64 GNU/Linux
    • Operating System: ALL

      const { MongoClient } = require("mongodb");

      // Replace the uri string with your connection string.
      const uri =
        "mongodb://cluster_url?retryWrites=true&writeConcern=majority";

      const client = new MongoClient(uri);

      async function run() {
        try {
          await client.connect();
          const database = client.db("test");
          const movies = database.collection("users");

          console.log("Starting batched async stress test...");

          const batchSize = 1000; // Number of documents per batch
          const totalDocs = 380000; // Total number of documents to insert

          // Create all documents
          const documents = Array.from({ length: totalDocs }, (_, i) => ({
            name: `movie ${i}`,
            rando_long_data: "a".repeat(1000),
          }));

          // Split documents into batches
          const batches = [];
          for (let i = 0; i < documents.length; i += batchSize) {
            batches.push(documents.slice(i, i + batchSize));
          }

          // Insert batches concurrently
          await Promise.all(
            batches.map(async (batch, index) => {
              await movies.insertMany(batch);
              console.log(`Batch ${index + 1}/${batches.length} inserted successfully.`);
            })
          );

          console.log("All batches inserted successfully!");
        } catch (error) {
          console.error("Error during stress test:", error);
        } finally {
          await client.close();
        }
      }

      run().catch(console.dir);
       
      Run a primary shard node without forking and with no log path, e.g.:
      mongod --shardsvr --replSet server15 --dbpath /data/sharddb --bind_ip_all --port 27018


      I have set up a MongoDB cluster (the primary shard being server 15), which is receiving all the segfaults. I do a lot of bulk inserts as part of my stress testing. The payload is also big, so it could be me overloading it, but in my experience it shouldn't segfault. The data is chunked, and the rest of the shards do not segfault, so I have no idea what I am running into here.
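      On the overload theory: the repro fires all ~380 `insertMany` calls at once via `Promise.all`. A bounded-concurrency variant would keep only a few batches in flight at a time. Below is a minimal sketch; `mapWithLimit` and the fake `insertBatch` are illustrative helpers, not part of the driver, and the demo runs without a cluster.

      ```javascript
      // Split an array into batches of `size` (same logic as the repro's loop).
      function chunk(arr, size) {
        const out = [];
        for (let i = 0; i < arr.length; i += size) out.push(arr.slice(i, i + size));
        return out;
      }

      // Run `worker` over `items` with at most `limit` calls in flight at once.
      async function mapWithLimit(items, limit, worker) {
        const results = new Array(items.length);
        let next = 0;
        async function lane() {
          while (next < items.length) {
            const i = next++;
            results[i] = await worker(items[i], i);
          }
        }
        await Promise.all(Array.from({ length: Math.min(limit, items.length) }, lane));
        return results;
      }

      // Demo with a fake insert (a stand-in for movies.insertMany) so it runs standalone.
      async function main() {
        const batches = chunk(Array.from({ length: 100 }, (_, i) => i), 10);
        let inFlight = 0;
        let peak = 0;
        const insertBatch = async (batch) => {
          inFlight++;
          peak = Math.max(peak, inFlight);
          await new Promise((r) => setTimeout(r, 10)); // pretend network round trip
          inFlight--;
          return batch.length;
        };
        const counts = await mapWithLimit(batches, 4, insertBatch);
        console.log(counts.length, peak <= 4); // prints: 10 true
      }

      main();
      ```

      Swapping `insertBatch` for the real `movies.insertMany(batch)` would cap concurrent bulk writes at 4 instead of 380, which may be a useful data point for separating driver overload from a server bug.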

       

      Whether that code actually reproduces the issue is unknown. I set up a system of 1 mongos router, 3 config servers, and 5 shards, with server 15 being the second shard, and it keeps segfaulting on a lot of inserts. I also raised the soft and hard ulimits, which didn't fix it. I even re-sourced my bash profile and restarted sshd just to be sure.
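      For context, the described topology (1 mongos, 3 config servers, 5 shards) can be sketched roughly as below. Hostnames, ports, dbpaths, and the `cfgRS` replica-set name are hypothetical placeholders, not values from the report.

      ```shell
      # Config server replica set (one of three; the others use their own ports/dbpaths)
      mongod --configsvr --replSet cfgRS --dbpath /data/cfg0 --port 27019

      # A shard node, as in the repro step above (server 15, the second shard)
      mongod --shardsvr --replSet server15 --dbpath /data/sharddb --bind_ip_all --port 27018

      # Router, pointed at the config server replica set
      mongos --configdb cfgRS/cfg-host:27019 --port 27017
      ```

      Each shard is then registered through the router, e.g. `sh.addShard("server15/shard-host:27018")` in mongosh.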

            Assignee:
            chris.kelly@mongodb.com Chris Kelly
            Reporter:
            demykromhof@gmail.com Demy K
            Votes:
            0
            Watchers:
            5