[KAFKA-322] External secrets via ConfigProvider does not work Created: 22/Jul/22 Updated: 28/Oct/23 Resolved: 02/Mar/23 |
|
| Status: | Closed |
| Project: | Kafka Connector |
| Component/s: | Configuration |
| Affects Version/s: | None |
| Fix Version/s: | 1.10.0 |
| Type: | Question | Priority: | Critical - P2 |
| Reporter: | Florin Bunău | Assignee: | Ross Lawley |
| Resolution: | Fixed | Votes: | 1 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Issue Links: |
|
||||||||
| Quarter: | FY24Q1 | ||||||||
| Case: | (copied to CRM) | ||||||||
| Description |
|
We packaged a ConfigProvider together with the connector jar (the standard way external secrets are supplied to Kafka Connect plugins), both to retrieve external secrets and to configure the config provider itself (in this case a VaultConfigProvider), e.g.

// vault host, token and path redacted

Invoking the secret in the plugin results in the secret engine being created and invoked, but then Kafka Connect, or perhaps more specifically the mongo plugin, throws this error:

// vault host, token and path redacted

This has been tested both in a local docker-compose environment with confluentinc/cp-kafka-connect:7.1.0 and in a real AWS MSK Connect environment. It looks like even though the secret engine is invoked, the plugin incorrectly uses the secret invocation/interpolation syntax itself as the value, instead of using the retrieved secret. Could you please look into this? If this is indeed a known bug or gap, it renders the plugin unusable in a production environment, hence the priority. Thank you, Florin @ LeadIQ |
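The symptom described above can be illustrated with a minimal sketch of ConfigProvider-style placeholder interpolation. All names here (`fake_vault`, `resolve_placeholders`) are hypothetical stand-ins, not part of Kafka Connect or the connector; the point is that when the named provider is not registered, the literal `${provider:path:key}` string survives and gets used as the value.

```python
import re

# Hypothetical sketch of Kafka Connect-style placeholder interpolation.
# Placeholders look like ${provider:path:key}.
PLACEHOLDER = re.compile(
    r"\$\{(?P<provider>[^:}]+):(?P<path>[^:}]+):(?P<key>[^}]+)\}"
)

# Stand-in for a secret store that a ConfigProvider would query.
fake_vault = {("secret/mongo", "uri"): "mongodb://user:pw@host:27017/db"}

def resolve_placeholders(value, providers):
    """Replace ${provider:path:key} with the looked-up secret.

    If the named provider is not registered (e.g. it was configured on the
    connector instead of the worker), the placeholder passes through
    verbatim, which is exactly the symptom reported in this ticket.
    """
    def repl(match):
        store = providers.get(match.group("provider"))
        if store is None:
            return match.group(0)  # unknown provider: literal placeholder survives
        return store[(match.group("path"), match.group("key"))]
    return PLACEHOLDER.sub(repl, value)

raw_uri = "${vault:secret/mongo:uri}"
print(resolve_placeholders(raw_uri, {"vault": fake_vault}))  # resolved URI
print(resolve_placeholders(raw_uri, {}))                     # placeholder leaks through
```

With the provider registered, the connector sees the real URI; without it, the connector receives the raw interpolation string, matching the error described in the report.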
| Comments |
| Comment by Ross Lawley [ 02/Mar/23 ] | |
|
Thanks bdesert@gmail.com for the PR. | |
| Comment by Githook User [ 02/Mar/23 ] | |
|
Author: {'name': 'Ed B', 'email': 'bdesert@gmail.com', 'username': 'bdesert'}

Message: Ensure configs are parsed and evaluated (#133)

When using config providers, `connection.uri` might not be processed. The reason is that in the validate function of either the Sink or Source Connector, the config values are VALIDATED but not PARSED. This fix ensures the values are parsed and then validated.
| |
| Comment by Ed Berezitsky [ 01/Mar/23 ] | |
|
I have a fix for this issue, if you have some cycles to review.
| |
| Comment by Ross Lawley [ 26/Jan/23 ] | |
|
I have moved this into the backlog and will look to get it scheduled. In the meantime, if you are able to provide a PR just for your validation fix, that would be great and would help speed up the development. All the best, Ross |
| Comment by Ed Berezitsky [ 19/Jan/23 ] | |
|
@Ross Lawley, it doesn't work for me. I use multiple different config providers, and `connection.uri` is not being processed. The reason is that in the validate function of either the Sink or Source Connector, the config values are VALIDATED but not PARSED. For example, in the Sink Connector, the config is created from the validator. Since this config is used for validation of the connection, `connection.uri` is used as is, without parsing. To work around it, I override the value of `connection.uri` in `connectorConfigs` with the parsed value from sinkConfig (which is the parsed config). To properly fix it, the validation functionality should use `sinkConfig` instead of `config` (or in addition to it).
| |
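The parse-vs-validate ordering described in the comments above can be sketched in miniature. This is not the connector's actual code; `transform`, `is_valid_uri`, and the two `validate_*` functions are hypothetical stand-ins showing why validating the raw config map fails when `connection.uri` still holds an unresolved placeholder.

```python
# Hypothetical sketch of the reported bug: validation runs on the raw config
# map, where `connection.uri` may still be an unresolved ${provider:path:key}
# placeholder, instead of on the parsed map in which the ConfigProvider has
# already substituted the real value.

def transform(config, resolved):
    """Stand-in for ConfigProvider placeholder resolution ("parsing")."""
    return {k: resolved.get(v, v) for k, v in config.items()}

def is_valid_uri(uri):
    """Toy stand-in for MongoDB connection-string validation."""
    return uri.startswith("mongodb://") or uri.startswith("mongodb+srv://")

def validate_buggy(config, resolved):
    # BUG: validates the raw value, so an unresolved placeholder fails.
    return is_valid_uri(config["connection.uri"])

def validate_fixed(config, resolved):
    # FIX (the approach taken in PR #133): parse first, then validate.
    return is_valid_uri(transform(config, resolved)["connection.uri"])

resolved = {"${vault:secret/mongo:uri}": "mongodb://u:p@host:27017/db"}
raw = {"connection.uri": "${vault:secret/mongo:uri}"}

print(validate_buggy(raw, resolved))  # the placeholder fails URI validation
print(validate_fixed(raw, resolved))  # the parsed value validates
```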
| Comment by Florin Bunău [ 29/Jul/22 ] | |
|
I can confirm the plugin works both with Vault and AWS Secret Manager ConfigProviders | |
| Comment by Florin Bunău [ 29/Jul/22 ] | |
|
Thank you @Ross Lawley. Indeed this was the problem. I don't know how I misread the initial docs; I was looking at this for a couple of days and couldn't figure out what was wrong. On careful inspection, even the AWS docs tie the config provider to the worker configuration. Sorry for the false alarm, this was very helpful |
| Comment by Ross Lawley [ 27/Jul/22 ] | |
|
ConfigProviders should be opaque to the connectors: the connectors know nothing of them, and the configurations provided to the connector should have already been transformed by the config provider. Please ensure that the config provider is configured at the Kafka workers / executors level and not alongside the connector. Ross |
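A sketch of what worker-level registration might look like, assuming the standard `config.providers` convention from the Kafka Connect worker configuration (KIP-297). The class name and the Vault-specific parameter names below are placeholders that depend on the provider implementation you use:

```properties
# In the Kafka Connect *worker* configuration (e.g. connect-distributed.properties),
# not in the connector's own config:
config.providers=vault
config.providers.vault.class=<your ConfigProvider implementation class>
# Provider-specific settings use the config.providers.<name>.param. prefix;
# the key names here are illustrative only:
config.providers.vault.param.vault.address=<redacted>
config.providers.vault.param.vault.token=<redacted>
```

The connector config then references secrets only through placeholders such as `${vault:secret/mongo:uri}`, and the worker resolves them before the connector ever sees the values.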
| Comment by Florin Bunău [ 26/Jul/22 ] | |
|
Hi! An update on this:

I also tried the AWS Secret Manager on a real AWS MSK Connect deployment, as described by the official documentation https://docs.aws.amazon.com/msk/latest/developerguide/msk-connect-config-provider.html, packaging the Mongo plugin together with https://github.com/jcustenborder/kafka-config-provider-aws (note: Guava 31.1-jre https://github.com/google/guava/releases must be packaged too for it to work),

and it gives the same error:

After the secret is retrieved, the mongo plugin just continues to use the string that invokes the secret instead of the actual retrieved secret value.
|