Core Server / SERVER-17812

LockPinger has audit-related GLE failure


    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major - P3
    • Resolution: Fixed
    • Affects Version/s: 2.6.8, 3.0.1, 3.1.0
    • Fix Version/s: 2.6.10, 3.0.3, 3.1.1
    • Component/s: Security, Sharding
    • Labels:
      None
    • Backwards Compatibility:
      Fully Compatible
    • Operating System:
      ALL
    • Backport Completed:
    • Steps To Reproduce:
      1. Start a sharded cluster with auditing enabled, i.e., the mongods and mongoses all have something like

        --auditFormat JSON --auditDestination file --auditPath /path/to/audit.log
        

        in addition to normal authentication.

      2. Shard a collection and then do something which causes a distlock to be taken, e.g., a bunch of splits:

        db.auth(...)
        sh.stopBalancer()
        db.test.insert({})
        sh.enableSharding("test")
        sh.shardCollection("test.test", {_id:1})
        for (var i = 0; i < 100; i++) sh.splitAt("test.test", {_id: i})
        

    • Sprint:
      Security 1 04/03/15
    • Case:

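      The audit command-line flags in the reproduction steps can also be expressed in config-file form; a sketch, assuming MongoDB Enterprise's `auditLog` settings, with the path as a placeholder:

        ```yaml
        # Sketch only: config-file equivalent of
        # --auditFormat JSON --auditDestination file --auditPath /path/to/audit.log
        auditLog:
          destination: file
          format: JSON
          path: /path/to/audit.log
        ```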
      Description

      In a sharded cluster that has auditing enabled, the following error occurs every 30 seconds when a distributed lock is pinged:

      2015-03-26T08:00:32.332+1100 W SHARDING [LockPinger] distributed lock pinger 'genique:22025,genique:22026,genique:22027/genique:22023:1427317232:2093216429' detected an exception while pinging. :: caused by :: SyncClusterConnection::update prepare failed:  genique:22025 (127.0.1.1):getLastError command failed: Audit metadata does not include both user and role information. genique:22026 (127.0.1.1):getLastError command failed: Audit metadata does not include both user and role information. genique:22027 (127.0.1.1):getLastError command failed: Audit metadata does not include both user and role information.
      2015-03-26T08:01:02.385+1100 W SHARDING [LockPinger] distributed lock pinger 'genique:22025,genique:22026,genique:22027/genique:22023:1427317232:2093216429' detected an exception while pinging. :: caused by :: SyncClusterConnection::update prepare failed:  genique:22025 (127.0.1.1):getLastError command failed: Audit metadata does not include both user and role information. genique:22026 (127.0.1.1):getLastError command failed: Audit metadata does not include both user and role information. genique:22027 (127.0.1.1):getLastError command failed: Audit metadata does not include both user and role information.
      2015-03-26T08:01:32.386+1100 W SHARDING [LockPinger] distributed lock pinger 'genique:22025,genique:22026,genique:22027/genique:22023:1427317232:2093216429' detected an exception while pinging. :: caused by :: SyncClusterConnection::update prepare failed:  genique:22025 (127.0.1.1):getLastError command failed: Audit metadata does not include both user and role information. genique:22026 (127.0.1.1):getLastError command failed: Audit metadata does not include both user and role information. genique:22027 (127.0.1.1):getLastError command failed: Audit metadata does not include both user and role information.
      

      I've only seen this on shard primaries, but it might happen on mongoses too (haven't tried/checked).
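      The ~30-second cadence matches the LockPinger's ping interval; a quick Python sketch confirming this from the timestamps copied out of the excerpt above:

      ```python
      # Sketch: the three warnings in the log excerpt are ~30 seconds apart,
      # matching the LockPinger's ping interval.
      from datetime import datetime

      # Timestamps copied from the log lines above.
      timestamps = [
          "2015-03-26T08:00:32.332+1100",
          "2015-03-26T08:01:02.385+1100",
          "2015-03-26T08:01:32.386+1100",
      ]

      def parse(ts):
          # ISO-8601 with a +HHMM offset, as printed by mongod.
          return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%f%z")

      gaps = [
          (parse(b) - parse(a)).total_seconds()
          for a, b in zip(timestamps, timestamps[1:])
      ]
      print(gaps)  # each gap is ~30 seconds
      ```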

    Attachments

    Issue Links

    Activity

    People

    • Votes:
      0
    • Watchers:
      5

    Dates

    • Created:
    • Updated:
    • Resolved: