|
Author:
Divjot Arora (divjotarora) <divjot.arora@10gen.com>
Message: Add support for GSSAPI ServiceHost
GODRIVER-698
Change-Id: I6888326e272ab63ea2594181d09fcf0b9d5c17aa
Branch: master
https://github.com/mongodb/mongo-go-driver/commit/60e1c818f7389f0ced1a35d6904794c0f2ebdf8d
|
|
Hi kris.brandow,
Could we get an update on this? This one is blocking our upgrade to the Go driver.
Thanks!
|
|
Evergreen run looks good.
What's the next step? Does my fork go into code review?
|
|
Ok. I got all of our Kerberos tests to pass in my VM with this: https://github.com/tolsen/mongo-go-driver/commit/e3a858d3fa37b7d7683dd066ced7fbade8ffa403
I'll now submit this to Evergreen to run our full test suite.
|
|
Yes.
|
|
Still some compilation problems but I'll help get it to compile on my own fork.
Should the following line use newAuthError() instead of fmt.Errorf()?
https://github.com/divjotarora/mongo-go-driver/blob/ca0f5169f1b144bb6407733f188697b03665aa78/x/mongo/driver/auth/gssapi.go#L49
|
|
divjot.arora pushed a fix. Trying again
|
|
It doesn't compile:
../../../go.mongodb.org/mongo-driver/x/mongo/driver/auth/internal/gssapi/gss.go:22: imported and not used: "net"
|
|
|
Ok. Thank you. I'm testing it now
|
|
tim.olsen The port isn't relevant here because the line that calls net.SplitHostPort ignores the port anyway. The latest commit on https://github.com/divjotarora/mongo-go-driver/tree/godriver698 should have a fix for the host/port issue.
|
|
I think that, in this implementation, "target" should be constructed from SERVICE_HOST and the port from the passed-in value of target. I'm not super familiar with how this code works, but does that sound right?
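Sketching that idea as a standalone helper (hypothetical names, not the driver's actual code): keep the port from the dialed target, but swap in the SERVICE_HOST hostname for the SPN.

```go
package main

import (
	"fmt"
	"net"
)

// gssapiTarget builds the host used for the GSSAPI service principal.
// If serviceHost (from the proposed SERVICE_HOST property) is set, it
// replaces the hostname from the dialed target while keeping the
// target's port. This is an illustrative sketch, not driver code.
func gssapiTarget(target, serviceHost string) string {
	host, port, err := net.SplitHostPort(target)
	if err != nil {
		// target had no port component; use the raw value as the host
		host, port = target, ""
	}
	if serviceHost != "" {
		host = serviceHost
	}
	if port == "" {
		return host
	}
	return net.JoinHostPort(host, port)
}

func main() {
	fmt.Println(gssapiTarget("localhost:27017", "mongo.example.com"))
}
```

With a portless SERVICE_HOST this would also degrade gracefully, since the port only comes from the dialed target.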
|
|
In this context there is no port required. It's only the host name that matters.
|
|
Agreed. Would it just default to 27017 or is there a way to specify the port separately? I'm not familiar with the shell implementation.
|
|
It shouldn't require a port. divjot.arora do you agree?
|
|
It looks like the implementation requires a port as part of SERVICE_HOST:
error creating gssapi: invalid endpoint (localhost) specified: address localhost: missing port in address
|
This is inconsistent with how the feature is used in other libraries/tools. For example, ServiceHost in mgo is just a hostname. And --gssapiHostName in the mongo shell is just a hostname. What do you guys think?
|
|
Nevermind. divjot.arora has pointed out that his new branch should be sufficient. I'll try that out.
|
|
divjot.arora has provided me with a fix for the last comment's issue at https://github.com/divjotarora/mongo-go-driver/tree/godriver698. However, that didn't include the prior change in the PR (which is also needed). I have combined the two here: https://github.com/tolsen/mongo-go-driver/tree/GODRIVER-698. That's what I will be testing.
|
|
I'm getting:
"error creating gssapi: unknown mechanism property SERVICE_HOST"
I think we need to add SERVICE_HOST here: https://github.com/mongodb/mongo-go-driver/blob/master/x/mongo/driver/auth/internal/gssapi/gss.go#L32-L39
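For illustration, a sketch of what accepting the new property might look like, rejecting unknown keys the way the error message above suggests the driver does (all names here are illustrative, not the driver's actual code):

```go
package main

import "fmt"

// parseProps mirrors the shape of GSSAPI mechanism-property handling:
// unknown keys are rejected, which is why SERVICE_HOST must be listed
// explicitly. Illustrative sketch only, not the driver's actual code.
func parseProps(props map[string]string) (serviceName, realm, serviceHost string, canonicalize bool, err error) {
	serviceName = "mongodb" // default GSSAPI service name
	for key, value := range props {
		switch key {
		case "SERVICE_NAME":
			serviceName = value
		case "SERVICE_REALM":
			realm = value
		case "SERVICE_HOST": // the newly accepted property
			serviceHost = value
		case "CANONICALIZE_HOST_NAME":
			canonicalize = value == "true"
		default:
			err = fmt.Errorf("unknown mechanism property %s", key)
			return
		}
	}
	return
}

func main() {
	_, _, host, _, err := parseProps(map[string]string{"SERVICE_HOST": "mongo.example.com"})
	fmt.Println(host, err)
}
```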
|
|
Thank you. I'll test the pull request.
I've filed the DRIVERS ticket: DRIVERS-613
|
|
We've decided that it's necessary for drivers to support this feature, and a DRIVERS ticket and associated auth spec change will be forthcoming.
|
|
The pull request at https://review.gerrithub.io/c/mongodb/mongo-go-driver/+/442585 has been rebased on master, tim.olsen
|
|
Here's where the server plumbs this information down into Cyrus SASL: https://github.com/mongodb/mongo/blob/e241839a6a3f0d9249ef735d27be9cd6d797003a/src/mongo/client/cyrus_sasl_client_session.cpp#L261
|
|
behackett, we currently use the address that was originally provided. We do have the me field as the CanonicalAddr, but we don't use that when we create the pool, so the original address is used (in this case the one provided by the connection string).
|
|
behackett, Tim is correct. There are no users in MongoDB. But you can still authenticate as users from LDAP. However, authenticating as a user doesn't guarantee that the user is an administrator. That's why we require the use of the localhost auth bypass to create the administrative roles for it.
|
|
behackett With regard to creating the first role: With LDAP Authorization no users actually exist in MongoDB. They're all in LDAP. User -> permission mapping is done via LDAP group membership. LDAP groups map to MongoDB custom roles (if they exist). Since no MongoDB custom roles exist yet, no users have any permissions yet. Therefore the localhost exception must be provided in order to create the first custom role (which would presumably inherit real privileges).
|
|
kris.brandow, when an application makes a direct connection to a single mongod/s does the Go driver continue to use the hostname specified in the URI when creating new connections, or does it replace the provided hostname with the value of the "me" field from ismaster? That is, in tim.olsen's example will the driver use "localhost" for connections after the initial handshake?
|
|
spencer.jackson or sara.golemon, can one of you comment on the above PLAIN auth examples? Why would connecting over the loopback address be required to create a new role for a user that already exists and has already been authenticated? Since the user already exists and has already been authenticated, the localhost exception seems like a red herring. Is that a server bug?
|
|
Here is the mongos log around the time that the mgo automation agent creates the initial role:
2019-02-19T15:26:48.020+0000 I ACCESS [conn28] Unauthorized: not authorized on admin to execute command { serverStatus: 1, locks: false, recordStats: false, $db: "admin" }
|
2019-02-19T15:26:48.021+0000 I NETWORK [conn28] end connection 10.122.95.11:45316 (2 connections now open)
|
2019-02-19T15:26:48.021+0000 I NETWORK [listener] connection accepted from 127.0.0.1:57938 #33 (3 connections now open)
|
2019-02-19T15:26:48.022+0000 I ACCESS [conn33] note: no users configured in admin.system.users, allowing localhost access
|
2019-02-19T15:26:48.027+0000 I ACCESS [conn33] Successfully authenticated as principal automation-agent on $external
|
2019-02-19T15:26:48.028+0000 I NETWORK [conn33] end connection 127.0.0.1:57938 (2 connections now open)
|
2019-02-19T15:26:48.029+0000 I NETWORK [listener] connection accepted from 127.0.0.1:57942 #34 (3 connections now open)
|
2019-02-19T15:26:48.034+0000 I ACCESS [conn34] Successfully authenticated as principal automation-agent on $external
|
2019-02-19T15:26:48.036+0000 I NETWORK [listener] connection accepted from 127.0.0.1:57946 #35 (4 connections now open)
|
2019-02-19T15:26:48.041+0000 I ACCESS [conn35] Successfully authenticated as principal automation-agent on $external
|
2019-02-19T15:26:48.045+0000 I ACCESS [conn34] Unauthorized: not authorized on admin to execute command { find: "system.version", filter: { _id: "featureCompatibilityVersion" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $db: "admin" }
|
2019-02-19T15:26:48.045+0000 I NETWORK [conn34] end connection 127.0.0.1:57942 (3 connections now open)
|
2019-02-19T15:26:48.046+0000 I NETWORK [listener] connection accepted from 127.0.0.1:57952 #36 (4 connections now open)
|
2019-02-19T15:26:48.051+0000 I ACCESS [conn36] Successfully authenticated as principal automation-agent on $external
|
2019-02-19T15:26:48.052+0000 I ACCESS [conn36] Unauthorized: not authorized on admin to execute command { find: "system.roles", filter: { db: "admin", role: "cn=automation-agent-group,ou=Groups,dc=10gen,dc=cc" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $db: "admin" }
|
2019-02-19T15:26:48.053+0000 I NETWORK [conn36] end connection 127.0.0.1:57952 (3 connections now open)
|
2019-02-19T15:26:48.053+0000 I NETWORK [listener] connection accepted from 127.0.0.1:57956 #37 (4 connections now open)
|
2019-02-19T15:26:48.058+0000 I ACCESS [conn37] Successfully authenticated as principal automation-agent on $external
|
2019-02-19T15:26:48.063+0000 I NETWORK [listener] connection accepted from 10.122.95.11:45360 #38 (5 connections now open)
|
2019-02-19T15:26:48.068+0000 I ACCESS [conn38] Successfully authenticated as principal automation-agent on $external
|
2019-02-19T15:26:48.068+0000 I ACCESS [conn38] Unauthorized: not authorized on admin to execute command { balancerStatus: 1, $db: "admin" }
|
2019-02-19T15:26:48.068+0000 I NETWORK [conn38] end connection 10.122.95.11:45360 (4 connections now open)
|
2019-02-19T15:26:48.102+0000 I ACCESS [conn22] Unauthorized: not authorized on admin to execute command { serverStatus: 1, locks: false, recordStats: false, $db: "admin" }
|
2019-02-19T15:26:48.102+0000 I NETWORK [conn22] end connection 10.122.95.11:45282 (3 connections now open)
|
2019-02-19T15:26:48.661+0000 I NETWORK [listener] connection accepted from 10.122.95.11:45382 #39 (4 connections now open)
|
2019-02-19T15:26:48.666+0000 I ACCESS [conn39] Successfully authenticated as principal automation-agent on $external
|
2019-02-19T15:26:48.666+0000 I ACCESS [conn39] Unauthorized: not authorized on admin to execute command { getCmdLineOpts: 1, $db: "admin" }
|
2019-02-19T15:26:48.666+0000 I NETWORK [conn39] end connection 10.122.95.11:45382 (3 connections now open)
|
2019-02-19T15:26:48.716+0000 I NETWORK [listener] connection accepted from 10.122.95.11:45398 #40 (4 connections now open)
|
2019-02-19T15:26:48.721+0000 I ACCESS [conn40] Successfully authenticated as principal automation-agent on $external
|
2019-02-19T15:26:48.721+0000 I ACCESS [conn40] Unauthorized: not authorized on admin to execute command { balancerStatus: 1, $db: "admin" }
|
2019-02-19T15:26:48.721+0000 I NETWORK [conn40] end connection 10.122.95.11:45398 (3 connections now open)
|
2019-02-19T15:26:48.738+0000 I NETWORK [listener] connection accepted from 10.122.95.11:45402 #41 (4 connections now open)
|
2019-02-19T15:26:48.742+0000 I ACCESS [conn41] Successfully authenticated as principal automation-agent on $external
|
2019-02-19T15:26:48.743+0000 I ACCESS [conn41] Unauthorized: not authorized on admin to execute command { balancerStatus: 1, $db: "admin" }
|
2019-02-19T15:26:48.743+0000 I NETWORK [conn41] end connection 10.122.95.11:45402 (3 connections now open)
|
2019-02-19T15:26:48.822+0000 I NETWORK [listener] connection accepted from 10.122.95.11:45406 #42 (4 connections now open)
|
2019-02-19T15:26:48.830+0000 I ACCESS [conn42] Successfully authenticated as principal automation-agent on $external
|
2019-02-19T15:26:48.831+0000 I ACCESS [conn42] Unauthorized: not authorized on admin to execute command { getCmdLineOpts: 1, $db: "admin" }
|
2019-02-19T15:26:48.831+0000 I NETWORK [conn42] end connection 10.122.95.11:45406 (3 connections now open)
|
2019-02-19T15:26:48.838+0000 I NETWORK [listener] connection accepted from 10.122.95.11:45410 #43 (4 connections now open)
|
2019-02-19T15:26:48.843+0000 I ACCESS [conn43] Successfully authenticated as principal automation-agent on $external
|
2019-02-19T15:26:48.843+0000 I ACCESS [conn43] Unauthorized: not authorized on admin to execute command { balancerStatus: 1, $db: "admin" }
|
2019-02-19T15:26:48.844+0000 I NETWORK [conn43] end connection 10.122.95.11:45410 (3 connections now open)
|
2019-02-19T15:26:48.940+0000 I NETWORK [listener] connection accepted from 10.122.95.11:45414 #44 (4 connections now open)
|
2019-02-19T15:26:48.945+0000 I ACCESS [conn44] Successfully authenticated as principal automation-agent on $external
|
2019-02-19T15:26:48.945+0000 I ACCESS [conn44] Unauthorized: not authorized on admin to execute command { getCmdLineOpts: 1, $db: "admin" }
|
2019-02-19T15:26:48.945+0000 I NETWORK [conn44] end connection 10.122.95.11:45414 (3 connections now open)
|
2019-02-19T15:26:48.949+0000 I NETWORK [listener] connection accepted from 10.122.95.11:45418 #45 (4 connections now open)
|
2019-02-19T15:26:48.953+0000 I ACCESS [conn45] Successfully authenticated as principal automation-agent on $external
|
2019-02-19T15:26:48.953+0000 I ACCESS [conn45] Unauthorized: not authorized on admin to execute command { balancerStatus: 1, $db: "admin" }
|
2019-02-19T15:26:48.954+0000 I NETWORK [conn45] end connection 10.122.95.11:45418 (3 connections now open)
|
|
The corresponding automation agent log:
[2019/02/19 15:26:48.053] [.error] [cm/auth/rolemgmt.go:ReadRole:155] <s+l_mongos1> [15:26:48.053] Error finding role=cn=automation-agent-group,ou=Groups,dc=10gen,dc=cc db=admin in admin.system.roles : <s+l_mongos1> [15:26:48.053] Error calling FindOneWithSort : <s+l_mongos1> [15:26:48.053] Error calling FindOne in coll (admin.system.roles). res:[&map[]] : not authorized on admin to execute command { find: "system.roles", filter: { db: "admin", role: "cn=automation-agent-group,ou=Groups,dc=10gen,dc=cc" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, $db: "admin" }
|
[2019/02/19 15:26:48.053] [.debug] [cm/atmcreds/atmcreds.go:upsertAtmLdapRole:227] <s+l_mongos1> [15:26:48.053] Received auth error while trying to find automation role. Assuming we need to create it.
|
[2019/02/19 15:26:48.053] [.debug] [cm/mongoctl/processctl.go:RunCommandWithTimeout:863] <s+l_mongos1> [15:26:48.053] Starting RunCommand(dbName=admin, cmd=[{createRole cn=automation-agent-group,ou=Groups,dc=10gen,dc=cc} {roles [[{role clusterAdmin} {db admin}] [{role readWriteAnyDatabase} {db admin}] [{role userAdminAnyDatabase} {db admin}] [{role dbAdminAnyDatabase} {db admin}] [{role restore} {db admin}] [{role backup} {db admin}]]} {privileges []}]) to ip-10-122-95-11.ec2.internal:9010 (local=true) ...
|
[2019/02/19 15:26:48.053] [.debug] [cm/connectionpool/connectionpool.go:dialSessionWithTimeout:548] <s+l_mongos1> [15:26:48.053] Attempting to dial a session for cp = ip-10-122-95-11.ec2.internal:9010 (local=true), direct = false, consistency = 2, timeout = 40s, failFast = true, identitiesToTry: [automation-agent@$external[[PLAIN]][12] __system@local[[MONGODB-CR/SCRAM-SHA-1 SCRAM-SHA-256]][19] ]
|
[2019/02/19 15:26:48.053] [.debug] [cm/connectionpool/connectionpool.go:getPreferredClientCert:386] <s+l_mongos1> [15:26:48.053] Finding preferred client cert for automation-agent@$external[[PLAIN]][12]
|
[2019/02/19 15:26:48.053] [.debug] [cm/connectionpool/connectionpool.go:getPreferredClientCert:395] <s+l_mongos1> [15:26:48.053] Did not find a preferred client cert for automation-agent@$external[[PLAIN]][12] and there is no default client cert
|
[2019/02/19 15:26:48.053] [.debug] [cm/connectionpool/connectionpool.go:handleDialAttempt:614] <s+l_mongos1> [15:26:48.053] Attempting to dial ip-10-122-95-11.ec2.internal:9010 (local=true) with identity = automation-agent@$external[[PLAIN]][12], dialinfo = &mgo.DialInfo{Addrs:[]string{"127.0.0.1:9010"}, Direct:false, Timeout:40000000000, FailFast:true, Database:"$external", ReplicaSetName:"", Source:"", Service:"", ServiceHost:"", Mechanism:"PLAIN", Username:"automation-agent", Password:"(omitted)", PoolLimit:0, ReadPreference:(*mgo.ReadPreference)(nil), WriteConcern:(*mgo.Safe)(nil), DialServer:(func(*mgo.ServerAddr) (net.Conn, error))(nil), Dial:(func(net.Addr) (net.Conn, error))(nil)} ssl=false clientCert=<nil>
|
[2019/02/19 15:26:48.059] [.debug] [cm/connectionpool/connectionpool.go:handleDialAttempt:621] <s+l_mongos1> [15:26:48.059] Successfully dialed ip-10-122-95-11.ec2.internal:9010 (local=true) with identity = automation-agent@$external[[PLAIN]][12] clientCert=<nil>
|
[2019/02/19 15:26:48.100] [.debug] [cm/mongoctl/processctl.go:func1:897] <s+l_mongos1> [15:26:48.100] ...Finished with runCommandWithTimeout(dbName=admin, cmd=[{createRole cn=automation-agent-group,ou=Groups,dc=10gen,dc=cc} {roles [[{role clusterAdmin} {db admin}] [{role readWriteAnyDatabase} {db admin}] [{role userAdminAnyDatabase} {db admin}] [{role dbAdminAnyDatabase} {db admin}] [{role restore} {db admin}] [{role backup} {db admin}]]} {privileges []}]) to ip-10-122-95-11.ec2.internal:9010 (local=true) with result={"$clusterTime":{"clusterTime":6659733373864378372,"signature":{"hash":"YsksjtMRUWFhUpClo+GCUQk8iH0=","keyId":6659733159116013597}},"ok":1,"operationTime":6659733373864378372}
|
[2019/02/19 15:26:48.100] [.debug] [cm/atmcreds/atmcreds.go:upsertAtmLdapRole:247] <s+l_mongos1> [15:26:48.100] Result of executing cmd=[{createRole cn=automation-agent-group,ou=Groups,dc=10gen,dc=cc} {roles [[{role clusterAdmin} {db admin}] [{role readWriteAnyDatabase} {db admin}] [{role userAdminAnyDatabase} {db admin}] [{role dbAdminAnyDatabase} {db admin}] [{role restore} {db admin}] [{role backup} {db admin}]]} {privileges []}] to create role cn=automation-agent-group,ou=Groups,dc=10gen,dc=cc@admin was result=map[ok:1 operationTime:6659733373864378372 $clusterTime:map[signature:map[hash:[98 201 44 142 211 17 81 97 97 82 144 165 163 225 130 81 9 60 136 125] keyId:6659733159116013597] clusterTime:6659733373864378372]]
|
[2019/02/19 15:26:48.100] [.info] [cm/atmcreds/atmcreds.go:upsertAtmLdapRole:248] <s+l_mongos1> [15:26:48.100] <DB_WRITE> Created roles in db admin using cmd [{createRole cn=automation-agent-group,ou=Groups,dc=10gen,dc=cc} {roles [[{role clusterAdmin} {db admin}] [{role readWriteAnyDatabase} {db admin}] [{role userAdminAnyDatabase} {db admin}] [{role dbAdminAnyDatabase} {db admin}] [{role restore} {db admin}] [{role backup} {db admin}]]} {privileges []}]
|
[2019/02/19 15:26:48.100] [.debug] [state/stateutil/stateutil.go:upsertAutomationCredentials:140] <s+l_mongos1> [15:26:48.100] Successfully added automation credentials
|
The "local = true" in "Successfully dialed ip-10-122-95-11.ec2.internal:9010 (local=true) with identity = automation-agent@$external[[PLAIN]][12] clientCert=<nil>" means that the agent dialed 127.0.0.1.
|
|
tim.olsen please post the mongos logs from when you run the same test with mgo.
|
|
I am unable to test the Kerberos + LDAP Authorization combination because there are no entries on the Evergreen test LDAP server that correspond to entries on the test Kerberos server.
But I believe the example illustrates my point. In an LDAP Authorization setup, you must connect via localhost AND authenticate as a user that has the role you are looking to create. To do that in a Kerberos + LDAP Authorization setup, I believe you need ServiceHost support.
|
|
Here's the mongos log:
2019-02-19T19:36:34.791+0000 I ACCESS [conn7059] Successfully authenticated as principal automation-agent on $external
|
2019-02-19T19:36:46.183+0000 I ACCESS [UserCacheInvalidator] User cache generation changed from 5c6c5aa2ce2c42bed2e458bb to 5c6c5ac0ce2c42bed2e45940; invalidating user cache
|
2019-02-19T19:36:53.997+0000 I ACCESS [conn7059] Unauthorized: not authorized on admin to execute command { createRole: "cn=automation-agent-group,ou=Groups,dc=10gen,dc=cc", roles: [ "root" ], privileges: [], writeConcern: { w: "majority", wtimeout: 600000.0 }, lsid: { id: UUID("198c682e-5c9b-4510-8dd1-e22e19fc569d") }, $clusterTime: { clusterTime: Timestamp(1550604992, 1), signature: { hash: BinData(0, 4B06E5F200B29D2FE9570237D5985B2D2F65D341), keyId: 6659794044572401693 } }, $db: "admin" }
|
2019-02-19T19:36:58.058+0000 I NETWORK [conn7059] end connection 10.122.3.75:36692 (0 connections now open)
|
2019-02-19T19:37:03.387+0000 I NETWORK [listener] connection accepted from 127.0.0.1:35744 #7060 (1 connection now open)
|
2019-02-19T19:37:03.388+0000 I NETWORK [conn7060] received client metadata from 127.0.0.1:35744 conn7060: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.6" }, os: { type: "Linux", name: "Amazon Linux release 2.0 (2017.12) LTS Release Candidate", architecture: "x86_64", version: "Kernel 4.9.76-38.79.amzn2.x86_64" } }
|
2019-02-19T19:37:03.390+0000 I ACCESS [conn7060] Unauthorized: not authorized on admin to execute command { getLog: "startupWarnings", lsid: { id: UUID("dbd281c1-13e1-450e-999f-94a237cf7df6") }, $clusterTime: { clusterTime: Timestamp(1550605022, 1), signature: { hash: BinData(0, C93B2B820C03AB3DF3B4E1EBFE7E2ED92FAF695B), keyId: 6659794044572401693 } }, $db: "admin" }
|
2019-02-19T19:37:13.241+0000 I ACCESS [conn7060] Not authorized to create the first role in the system 'cn=automation-agent-group,ou=Groups,dc=10gen,dc=cc@admin' using the localhost exception. The user needs to acquire the role through external authentication first.
|
2019-02-19T19:37:13.241+0000 I ACCESS [conn7060] Unauthorized: not authorized on admin to execute command { createRole: "cn=automation-agent-group,ou=Groups,dc=10gen,dc=cc", roles: [ "root" ], privileges: [], writeConcern: { w: "majority", wtimeout: 600000.0 }, lsid: { id: UUID("dbd281c1-13e1-450e-999f-94a237cf7df6") }, $clusterTime: { clusterTime: Timestamp(1550605028, 1), signature: { hash: BinData(0, 54E2C0E048178B413812FA5AFA918CD3A89FF8C9), keyId: 6659794044572401693 } }, $db: "admin" }
|
2019-02-19T19:37:14.041+0000 I ACCESS [conn7060] Unauthorized: not authorized on admin to execute command { endSessions: [ { id: UUID("dbd281c1-13e1-450e-999f-94a237cf7df6") } ], $db: "admin" }
|
2019-02-19T19:37:14.042+0000 I NETWORK [conn7060] end connection 127.0.0.1:35744 (0 connections now open)
|
2019-02-19T19:37:16.009+0000 I NETWORK [listener] connection accepted from 127.0.0.1:35746 #7061 (1 connection now open)
|
2019-02-19T19:37:16.010+0000 I NETWORK [conn7061] received client metadata from 127.0.0.1:35746 conn7061: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.6" }, os: { type: "Linux", name: "Amazon Linux release 2.0 (2017.12) LTS Release Candidate", architecture: "x86_64", version: "Kernel 4.9.76-38.79.amzn2.x86_64" } }
|
2019-02-19T19:37:16.013+0000 I ACCESS [conn7061] Unauthorized: not authorized on admin to execute command { getLog: "startupWarnings", lsid: { id: UUID("185a84c8-0cf0-4f04-802b-5363d658c5aa") }, $clusterTime: { clusterTime: Timestamp(1550605035, 2), signature: { hash: BinData(0, 8A18E2EF03A8E7DF802008E45DB7B17E2BC52311), keyId: 6659794044572401693 } }, $db: "admin" }
|
2019-02-19T19:37:16.183+0000 I ACCESS [UserCacheInvalidator] User cache generation changed from 5c6c5ac0ce2c42bed2e45940 to 5c6c5adece2c42bed2e459c8; invalidating user cache
|
2019-02-19T19:37:16.183+0000 I SH_REFR [ConfigServerCatalogCacheLoader-3] Refresh for collection config.system.sessions took 0 ms and found the collection is not sharded
|
2019-02-19T19:37:16.183+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Collection config.system.sessions is not sharded.
|
2019-02-19T19:37:24.016+0000 I ACCESS [conn7061] Successfully authenticated as principal automation-agent on $external
|
|
|
I now believe ServiceHost support will be necessary.
I am unable to test Kerberos + LDAP authorization because we do not have entries in the Evergreen test LDAP server that correspond with the test Kerberos server. However, I can test LDAP authentication with LDAP authorization. My testing shows that the only way to create the initial role on a sharded cluster is to connect to localhost AND authenticate as a member of that role.
[ec2-user@ip-10-122-3-75 fulltest]$ /tmp/mms-automation/test/versions/mongodb-linux-x86_64-enterprise-amazon2-4.0.6/bin/mongo `hostname -f`:9010
|
MongoDB shell version v4.0.6
|
connecting to: mongodb://ip-10-122-3-75.ec2.internal:9010/test?gssapiServiceName=mongodb
|
Implicit session: session { "id" : UUID("198c682e-5c9b-4510-8dd1-e22e19fc569d") }
|
MongoDB server version: 4.0.6
|
MongoDB Enterprise mongos> use $external
|
switched to db $external
|
MongoDB Enterprise mongos> db.auth({mechanism: "PLAIN", user: "automation-agent", pwd: "r3Itd41khkRV", digestPassword: false})
|
1
|
MongoDB Enterprise mongos> use admin
|
switched to db admin
|
MongoDB Enterprise mongos> db.createRole({role: "cn=automation-agent-group,ou=Groups,dc=10gen,dc=cc", roles: ["root"], privileges: []})
|
2019-02-19T19:36:53.997+0000 E QUERY [js] Error: not authorized on admin to execute command { createRole: "cn=automation-agent-group,ou=Groups,dc=10gen,dc=cc", roles: [ "root" ], privileges: [], writeConcern: { w: "majority", wtimeout: 600000.0 }, lsid: { id: UUID("198c682e-5c9b-4510-8dd1-e22e19fc569d") }, $clusterTime: { clusterTime: Timestamp(1550604992, 1), signature: { hash: BinData(0, 4B06E5F200B29D2FE9570237D5985B2D2F65D341), keyId: 6659794044572401693 } }, $db: "admin" } :
|
_getErrorWithCode@src/mongo/shell/utils.js:25:13
|
DB.prototype.createRole@src/mongo/shell/db.js:1779:1
|
@(shell):1:1
|
MongoDB Enterprise mongos>
|
bye
|
[ec2-user@ip-10-122-3-75 fulltest]$ /tmp/mms-automation/test/versions/mongodb-linux-x86_64-enterprise-amazon2-4.0.6/bin/mongo localhost:9010
|
MongoDB shell version v4.0.6
|
connecting to: mongodb://localhost:9010/test?gssapiServiceName=mongodb
|
Implicit session: session { "id" : UUID("dbd281c1-13e1-450e-999f-94a237cf7df6") }
|
MongoDB server version: 4.0.6
|
MongoDB Enterprise mongos> use admin
|
switched to db admin
|
MongoDB Enterprise mongos> db.createRole({role: "cn=automation-agent-group,ou=Groups,dc=10gen,dc=cc", roles: ["root"], privileges: []})
|
2019-02-19T19:37:13.241+0000 E QUERY [js] Error: not authorized on admin to execute command { createRole: "cn=automation-agent-group,ou=Groups,dc=10gen,dc=cc", roles: [ "root" ], privileges: [], writeConcern: { w: "majority", wtimeout: 600000.0 }, lsid: { id: UUID("dbd281c1-13e1-450e-999f-94a237cf7df6") }, $clusterTime: { clusterTime: Timestamp(1550605028, 1), signature: { hash: BinData(0, 54E2C0E048178B413812FA5AFA918CD3A89FF8C9), keyId: 6659794044572401693 } }, $db: "admin" } :
|
_getErrorWithCode@src/mongo/shell/utils.js:25:13
|
DB.prototype.createRole@src/mongo/shell/db.js:1779:1
|
@(shell):1:1
|
MongoDB Enterprise mongos>
|
bye
|
[ec2-user@ip-10-122-3-75 fulltest]$ /tmp/mms-automation/test/versions/mongodb-linux-x86_64-enterprise-amazon2-4.0.6/bin/mongo localhost:9010
|
MongoDB shell version v4.0.6
|
connecting to: mongodb://localhost:9010/test?gssapiServiceName=mongodb
|
Implicit session: session { "id" : UUID("185a84c8-0cf0-4f04-802b-5363d658c5aa") }
|
MongoDB server version: 4.0.6
|
MongoDB Enterprise mongos> use $external
|
switched to db $external
|
MongoDB Enterprise mongos> db.auth({mechanism: "PLAIN", user: "automation-agent", pwd: "r3Itd41khkRV", digestPassword: false})
|
1
|
MongoDB Enterprise mongos> use admin
|
switched to db admin
|
MongoDB Enterprise mongos> db.createRole({role: "cn=automation-agent-group,ou=Groups,dc=10gen,dc=cc", roles: ["root"], privileges: []})
|
{
|
"role" : "cn=automation-agent-group,ou=Groups,dc=10gen,dc=cc",
|
"roles" : [
|
"root"
|
],
|
"privileges" : [ ]
|
}
|
MongoDB Enterprise mongos>
|
I believe connecting to localhost while authenticating via Kerberos requires ServiceHost support. jeff.yemin, what do you think?
|
|
I may have spoken too soon. When I run the test on Evergreen it fails. My VirtualBox VM, where it worked, has the hostname mapped to 127.0.1.1 (sic). Maybe MongoDB considers that to also be localhost. I will investigate further to see if it is possible to create an initial role on an LDAP Authz/Kerberos sharded cluster deployment on an Evergreen machine.
|
|
Yep. That worked.
I am gaining confidence that we will be able to do without ServiceHost.
But still, I am unable to run some of our auth tests until GODRIVER-803 is resolved, so I think we can continue to hold off on ServiceHost. Once GODRIVER-803 is resolved, I will be able to run our remaining auth tests and see if they uncover any problems caused by the lack of ServiceHost support.
|
|
Hmm, maybe in the case of LDAP Authz I need to not use the localhost exception. I'll try that.
|
|
jeff.yemin I am having trouble creating the first role in a system using LDAP Native Authorization while not being authenticated (but using the localhost exception) on a mongos. Here are the relevant log lines from the mongos:
2019-02-16T21:35:45.717+0000 I NETWORK [thread2] connection accepted from 127.0.0.1:49876 #730 (343 connections now open)
|
2019-02-16T21:35:45.718+0000 I NETWORK [conn730] received client metadata from 127.0.0.1:49876 conn730: { driver: { name: "mongo-go-driver", version: "v1.0.0-rc1+prerelease" }, os: { type: "linux", architecture: "amd64" }, platform: "go1.10.7", application: { name: "MongoDB Automation Agent v6.4.0 (git: DEV)" } }
|
2019-02-16T21:35:45.720+0000 I ACCESS [conn730] Not authorized to create the first role in the system 'cn=automation-agent-group,ou=Groups,dc=10gen,dc=cc@admin' using the localhost exception. The user needs to acquire the role through external authentication first.
|
2019-02-16T21:35:45.720+0000 I ACCESS [conn730] Unauthorized: not authorized on admin to execute command { createRole: "cn=automation-agent-group,ou=Groups,dc=10gen,dc=cc", roles: [ { role: "clusterAdmin", db: "admin" }, { role: "readWriteAnyDatabase", db: "admin" }, { role: "userAdminAnyDatabase", db: "admin" }, { role: "dbAdminAnyDatabase", db: "admin" }, { role: "restore", db: "admin" }, { role: "backup", db: "admin" } ], privileges: [], writeConcern: { w: "majority" } }
|
This log appears to imply that I need to authenticate in order to create the first role. What do you think? Does this necessitate the support of ServiceHost in the Go driver?
|
|
Good point. I'll have to do some research into this. We may be able to get around this in the way you describe.
|
|
Sure, but the point of the localhost exception is you don't have any users yet so you can't actually auth. You can connect using localhost and add the first user, drop the connection, then authenticate using whatever the defined auth mechanism is for the cluster.
|
|
I believe we connect to 127.0.0.1 sometimes in order to access the localhost exception
|
|
tim.olsen, if I understand this request correctly, you need this because automation is connecting to a local mongod/s using "localhost" as the hostname, and there may or may not be a way for the Go driver to translate localhost into the server's canonical name for Kerberos auth. So the proposed solution is to add a SERVICE_HOST option where you pass the canonical name. Since you already know the canonical name (otherwise how would you pass it as SERVICE_HOST?), why not just connect to mongod/s using the canonical name?
|
|
We sometimes connect to a mongod or mongos using 127.0.0.1. If you believe hostname canonicalization will work for the agent, then I am willing to try it.
|
|
tim.olsen, given CLOUDP-33451, can you add some context for how the automation agent uses this feature and why hostname canonicalization isn't sufficient?
|
|
I'm fine with adding the new option. I just want a spec ticket about it, with a discussion of why it's necessary, because no other driver has it. I don't want this research and discussion to get lost in the Go driver ticket tracker, since this will almost certainly come up again in the future.
|
|
behackett This might be possible in Go, but I wasn't able to find a good way to do this. We use net.LookupAddr and the documentation says "LookupAddr performs a reverse lookup for the given address, returning a list of names mapping to that address." Running net.LookupAddr("127.0.0.1") resulted in "localhost" and I didn't see any way of getting the FQDN from there.
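To make the dead end concrete, here is a small sketch of the reverse-lookup approach with the lookup function injected so the example is deterministic (the real code would pass net.LookupAddr; all other names are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// reverseCanonical tries to recover a canonical hostname from an IP via
// reverse lookup (net.LookupAddr in real code; injected here so the
// example is deterministic). As described above, a loopback address
// typically reverses to "localhost", so no FQDN can be recovered.
func reverseCanonical(addr string, lookupAddr func(string) ([]string, error)) (string, bool) {
	names, err := lookupAddr(addr)
	if err != nil || len(names) == 0 {
		return "", false
	}
	name := strings.TrimSuffix(names[0], ".") // strip trailing dot from PTR records
	if name == "localhost" {
		return "", false // the reverse lookup gave us nothing useful
	}
	return name, true
}

func main() {
	// Stand-in for net.LookupAddr, returning what /etc/hosts usually does
	// for the loopback address.
	fake := func(addr string) ([]string, error) { return []string{"localhost"}, nil }
	name, ok := reverseCanonical("127.0.0.1", fake)
	fmt.Println(name, ok)
}
```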
|
|
craiggwilson to talk to jeff.yemin about what's next here.
|
|
The previous research can be found here: https://jira.mongodb.org/browse/SPEC-1040
libkrb5's test suite has a Python implementation that's easy to read.
|
|
I'm worried that getaddrinfo isn't going to be useful with 127.0.0.1/localhost, or at least not portably so.
|
|
I did some research on how libkrb5 does it a while ago. They do two things:
- They look up the canonical name using getaddrinfo.
- They do a reverse DNS lookup on, I think, the canonical name (I'd have to read the source again to be sure about this)
https://web.mit.edu/kerberos/krb5-latest/doc/admin/princ_dns.html#service-principal-canonicalization
|
|
Ah, good point. Yes, it very much is hostname canonicalization. I wonder if it's possible to get the right hostname here with proper canonicalization.
|
|
Specifically, this is used to connect to localhost (127.0.0.1) but auth as if connecting via the FQDN.
|
|
That sounds more like hostname canonicalization, which I still don't think we do right in drivers.
|
|
No, this is any environment. SERVICE_REALM is the "realm" or domain name, so "MONGODB.COM" for instance. SERVICE_NAME is the service name, which defaults to "mongodb".
What this ticket is about is that the hostname used in the connection string, e.g. "localhost", is not the FQDN that the server responds to during Kerberos negotiation. Some tools need to be able to specify the FQDN separately from the hostname in the URI.
Note that this will not work with any form of discovery that occurs with a replica set member unless all replica set members are set up to respond to the same FQDN. Hence, this really only applies when a driver is talking directly to a single member of whatever topology is set up.
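For illustration, a connection string with all three properties might look like the following, assuming the new property is exposed via authMechanismProperties under the SERVICE_HOST name used in this ticket (the exact option name and spelling are an assumption; %40 and %24 are the URL encodings of "@" and "$"):

```
mongodb://user%40EXAMPLE.COM@localhost:27017/?authMechanism=GSSAPI&authSource=%24external&authMechanismProperties=SERVICE_NAME:mongodb,SERVICE_REALM:EXAMPLE.COM,SERVICE_HOST:mongo.example.com
```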
|
|
Is this needed specifically for Windows environments? How is this different from SERVICE_REALM? We added that to support situations where the service and server are in two different realms, which seems to be a common situation in AD deployments, IIRC.
FYI craig.wilson@mongodb.com
|