Core Server / SERVER-50436

MongoDB split horizons doesn't seem to work properly

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Major - P3
    • Affects Version/s: 4.4.0
    • Component/s: Replication, Shell

      1. docker-compose.yml
      version: '2.4'

      services:
        mongo:
          image: mongo:4.4
          command: ["mongod", "--config", "/etc/mongo/mongod.conf"]
          volumes:
            - './mongod.conf:/etc/mongo/mongod.conf'
            - './mongo.pem:/etc/mongo/mongo.pem'
          ports:
            - '27017:27017'
       
       
      2. mongod.conf
      net:
        tls:
          mode: requireTLS
          certificateKeyFile: /etc/mongo/mongo.pem
      replication:
        replSetName: horizons
       
      3. Create certs:
      bin/openssl req -new -x509 -newkey rsa:2048 -keyout mongo.key -nodes -out mongo.crt -subj "/CN=mongo" -addext "subjectAltName=DNS:mongo,DNS:localhost,IP:127.0.0.1"
      cat mongo.crt mongo.key > mongo.pem
       
      4. Run in mongo shell (mongo --tls --tlsCAFile mongo.crt):
      rs.initiate({
        _id: "horizons",
        members: [
          {_id: 0, host: "mongo", horizons: {"localhost": "localhost"}}
        ],
      })
       
      5. Try to connect with: mongo --tls --tlsCAFile mongo.crt mongodb://localhost/?replicaSet=horizons
       
      You can add --verbose to mongo to see the log of course.
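One reason both "mongo" and "localhost" appear in the SAN list in step 3: the shell first dials the seed from the connection string ("localhost"), then redials whatever host the isMaster response advertises ("mongo"), and TLS hostname verification must pass against the same certificate both times. A simplified model of that SAN check (my own sketch, not the shell's actual verification code):

```python
# Hypothetical sketch: simplified TLS hostname-vs-SAN matching, to show why
# the self-signed cert from step 3 needs SAN entries for BOTH names.

def hostname_matches_san(hostname: str, san_entries: list) -> bool:
    """Simplified SAN check: exact match or single-label wildcard."""
    for entry in san_entries:
        if entry == hostname:
            return True
        # "*.example.com" matches "host.example.com" (one label only)
        if entry.startswith("*.") and hostname.split(".", 1)[-1] == entry[2:]:
            return True
    return False

# DNS SANs from step 3: subjectAltName=DNS:mongo,DNS:localhost,IP:127.0.0.1
sans = ["mongo", "localhost"]

assert hostname_matches_san("localhost", sans)   # seed host from the URI
assert hostname_matches_san("mongo", sans)       # host advertised by isMaster
assert not hostname_matches_san("example.com", sans)
```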


      I'm trying to evaluate the split horizons feature using just a basic configuration:

      replicaSetHorizons:
        - "localhost": "localhost:<nodePort>"
      

      With minikube. It doesn't seem to work with the mongo shell:

      mongo --tls --tlsCAFile mongo.crt --verbose "mongodb://localhost/?replicaSet=horizons"
      
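For context on why the horizon mapping matters here at all, a minimal model of replica-set discovery (my own sketch, not actual driver code): the connection-string host is only a seed, and the driver switches to the host names the server advertises in its isMaster response. Without a working horizon, the server advertises its internal name, which a client outside Docker/minikube cannot resolve.

```python
# Toy model of driver-side replica-set discovery. The seed from the URI is
# discarded once the first isMaster response arrives; only advertised hosts
# are used from then on, so they must be resolvable by the CLIENT.

def discover(seed, ismaster_hosts, resolvable):
    """Return the advertised members the client can actually reach."""
    return [h for h in ismaster_hosts if h.split(":")[0] in resolvable]

client_can_resolve = {"localhost"}

# No horizon applied: the server advertises its internal host name "mongo",
# which the client outside the cluster cannot resolve -> discovery fails.
assert discover("localhost:27017", ["mongo:27017"], client_can_resolve) == []

# Horizon applied: the response is rewritten to the client-facing address.
assert discover("localhost:27017", ["localhost:27017"], client_can_resolve) \
    == ["localhost:27017"]
```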

       
      I was curious about the implementation, so I tried to isolate this further. It looks to me like this works via an undocumented property of the replica set members named horizons, so I tried setting it myself using a single mongod in Docker:

      rs.initiate({
          _id: "horizons",
          members: [
              {_id: 0, host: "mongo", horizons: {"localhost": "localhost"}}
          ],
      })
      

      And got the same result. Looking at the mongo shell log and trying the isMaster command myself, it looks like the first response does indeed respect the horizons setting, but subsequent isMaster responses on the same connection fail to respect the horizons config (they don't seem to persist the SNI name for awaitable/exhaust isMaster, or whatever mechanism is used today).
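The symptom described above can be sketched as a toy model (my naming throughout, not the server's implementation): the horizon view returned in isMaster is selected by the TLS SNI name the client presented. If that name is only consulted for the first response and not carried over to later awaitable/exhaust responses on the same connection, those responses fall back to the default horizon.

```python
from typing import Optional

# Hypothetical model of server-side horizon selection keyed by SNI.
HORIZONS = {
    # horizon name -> host advertised to clients connecting via that horizon
    "__default": "mongo:27017",
    "localhost": "localhost:27017",
}

def ismaster_host(sni_name: Optional[str]) -> str:
    """Pick the advertised host for the horizon matching the client's SNI."""
    return HORIZONS.get(sni_name or "__default", HORIZONS["__default"])

# First response on the connection: SNI is known, horizon is respected.
assert ismaster_host("localhost") == "localhost:27017"
# Symptom: if later responses lose the SNI, the default horizon comes back
# and the client is handed the internal "mongo" name again.
assert ismaster_host(None) == "mongo:27017"
```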

      Using mongo from the mongo:4.4 official Docker image on Docker 19.03.12 on macOS.

        1. log.txt
          31 kB
          Someone Somebody

            Assignee:
            backlog-server-repl [DO NOT USE] Backlog - Replication Team
            Reporter:
            Someone Somebody (temp4746@gmail.com)
            Votes:
            0
            Watchers:
            10