MongoDB Database Tools / TOOLS-944

write concern mongos tests are flaky

    • Type: Bug
    • Resolution: Done
    • Priority: Major - P3
    • Fix Version/s: 3.2.1, 3.3.0
    • Affects Version/s: None
    • Component/s: None
    • Labels: None
    • Sprint: Server Tools C (11/23/15), Server Tools D (12/11/15)
    • Documentation: Not Needed
    • Backport Requested: v3.2

      Sometimes syncing appears to continue working even after it has been switched off, e.g.:

      ----
      stopping the other member
      ----
       m31100| 2015-10-15T19:07:46.500+0000 I JOURNAL  [repl writer worker 15] journalCleanup...
       m31100| 2015-10-15T19:07:46.500+0000 I JOURNAL  [repl writer worker 15] removeJournalFiles
       m31100| 2015-10-15T19:07:46.505+0000 I JOURNAL  [repl writer worker 15] journalCleanup...
       m31100| 2015-10-15T19:07:46.505+0000 I JOURNAL  [repl writer worker 15] removeJournalFiles
       m31100| 2015-10-15T19:07:46.507+0000 I COMMAND  [repl writer worker 15] dropDatabase foo finished
      2015-10-15 15:07:51 EDT	
      ----
      mongoimport with majority with no working nodes should fail
      ----
      2015-10-15T19:07:51.508+0000 I -        shell: started program (sh896):  /data/mci/src/mongoimport --file wc.csv -d foo -c bar --writeConcern={w:"majority",wtimeout:2000} --host 127.0.0.1:30999
       m30999| 2015-10-15T19:07:51.520+0000 I NETWORK  [mongosMain] connection accepted from 127.0.0.1:45596 #10 (2 connections now open)
      sh896| 2015-10-15T19:07:51.522+0000	connected to: 127.0.0.1:30999
       m30999| 2015-10-15T19:07:51.524+0000 I SHARDING [conn10] couldn't find database [foo] in config db
       m29000| 2015-10-15T19:07:51.526+0000 I STORAGE  [conn6] CMD fsync: sync:1 lock:0
       m30999| 2015-10-15T19:07:51.529+0000 I SHARDING [conn10] 	 put [foo] on: test-rs0:test-rs0/ip-10-216-200-39:31100,ip-10-216-200-39:31101,ip-10-216-200-39:31102
       m31102| 2015-10-15T19:07:51.529+0000 I INDEX    [conn14] allocating new ns file /data/db/test-rs0-2/foo.ns, filling with zeroes...
       m31102| 2015-10-15T19:07:51.580+0000 I STORAGE  [FileAllocator] allocating new datafile /data/db/test-rs0-2/foo.0, filling with zeroes...
       m31102| 2015-10-15T19:07:51.581+0000 I STORAGE  [FileAllocator] done allocating datafile /data/db/test-rs0-2/foo.0, size: 16MB,  took 0 secs
       m31100| 2015-10-15T19:07:51.593+0000 I INDEX    [repl writer worker 15] allocating new ns file /data/db/test-rs0-0/foo.ns, filling with zeroes...
       m31100| 2015-10-15T19:07:51.648+0000 I STORAGE  [FileAllocator] allocating new datafile /data/db/test-rs0-0/foo.0, filling with zeroes...
       m31100| 2015-10-15T19:07:51.648+0000 I STORAGE  [FileAllocator] done allocating datafile /data/db/test-rs0-0/foo.0, size: 16MB,  took 0 secs
       m31102| 2015-10-15T19:07:51.666+0000 I COMMAND  [conn14] command foo.$cmd command: insert { insert: "bar", documents: 101, writeConcern: { getLastError: 1, w: "majority", wtimeout: 2000 }, ordered: false, metadata: { shardName: "test-rs0", shardVersion: [ Timestamp 0|0, ObjectId('000000000000000000000000') ], session: 0 } } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:140 locks:{ Global: { acquireCount: { w: 105 } }, MMAPV1Journal: { acquireCount: { w: 109 } }, Database: { acquireCount: { w: 104, W: 1 } }, Collection: { acquireCount: { W: 2 } }, Metadata: { acquireCount: { W: 4 } }, oplog: { acquireCount: { w: 102 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 85 } } } 137ms
      sh896| 2015-10-15T19:07:51.666+0000	imported 101 documents
       m30999| 2015-10-15T19:07:51.668+0000 I NETWORK  [conn10] end connection 127.0.0.1:45596 (1 connection now open)
      assert: [1] != [0] are not equal : mongoimport with majority with no working nodes should fail
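
      The failing check follows the usual shell-test pattern: stop enough members that w:"majority" cannot be satisfied, run mongoimport through the mongos, and assert a non-zero exit code. Below is a minimal sketch of that pattern using the shell's test helpers (ShardingTest, ReplSetTest.stop, runMongoProgram); the topology, file name, and exact stop sequence are illustrative assumptions, not the actual test's code.

      // Sketch only: assumes a single 3-node replica-set shard behind a mongos,
      // matching the m31100/m31101/m31102 + m30999 processes in the log above.
      var st = new ShardingTest({shards: 1, rs: {nodes: 3}});
      var rst = st.rs0;

      // Stop two of the three members so a majority write concern cannot be met.
      // The race: a member that is still shutting down can briefly keep
      // acknowledging replication, letting the import below succeed anyway.
      rst.stop(1);
      rst.stop(2);

      jsTest.log("mongoimport with majority with no working nodes should fail");
      var ret = runMongoProgram("mongoimport",
                                "--file", "wc.csv",
                                "-d", "foo", "-c", "bar",
                                "--writeConcern={w:\"majority\",wtimeout:2000}",
                                "--host", st.s.host);

      // The flaky assertion from the log: expects a failing exit code,
      // but occasionally sees 0 because the write concern was satisfied
      // before the stops fully took effect.
      assert.neq(0, ret, "mongoimport with majority with no working nodes should fail");

      st.stop();

      One way to make such a test deterministic, under the assumptions above, would be to poll the primary's replSetGetStatus until the stopped members are reported as unhealthy before launching mongoimport.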
      

            Assignee: Kyle Erf (kyle.erf)
            Reporter: Kyle Erf (kyle.erf)
            Votes: 0
            Watchers: 1
