[DOCS-8198] Hadoop Connector: Connection String required is not really the same as a connection string Created: 01/Jul/16 Updated: 27/Sep/18 Resolved: 27/Sep/18 |
|
| Status: | Closed |
| Project: | Documentation |
| Component/s: | drivers |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Steven Hand | Assignee: | Unassigned |
| Resolution: | Won't Fix | Votes: | 0 |
| Labels: | groom |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Issue Links: | |
| Participants: | |
| Days since reply: | 5 years, 19 weeks, 6 days |
| Description |
|
The documentation for the Hive portion of the Hadoop Connector states that the administrator should use a connection string as the format for the value of the "mongo.uri" property. However, the "/database" component of that value must actually be a namespace, i.e. <database>.<collection>, not just a database name. Furthermore, I understand that the Java driver in general expects the "/database" component to be a namespace as well, implying that the "mongo.input.uri" and "mongo.output.uri" properties also require a namespace in that component. |
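To illustrate the distinction the ticket draws, here is a minimal sketch of a Hive table definition using the MongoDB Connector for Hadoop's storage handler. The host, database, table, and collection names are placeholders, and the connector itself is no longer supported; the point is that the path component of "mongo.uri" is a `<database>.<collection>` namespace, not a bare database name:

```sql
-- Hypothetical Hive external table backed by a MongoDB collection.
-- Note the URI path: "mydb.mycollection" (a namespace), NOT just "mydb".
CREATE EXTERNAL TABLE mongo_users (
  id STRING,
  name STRING
)
STORED BY 'com.mongodb.hadoop.hive.MongoStorageHandler'
TBLPROPERTIES (
  'mongo.uri' = 'mongodb://localhost:27017/mydb.mycollection'
);
```

A value such as `mongodb://localhost:27017/mydb` (database only) looks like an ordinary connection string but would not identify the collection the connector needs, which is the mismatch this ticket asks the documentation to clarify.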
| Comments |
| Comment by Jonathan DeStefano [ 27/Sep/18 ] |
|
Thanks for filing a DOCS ticket. The MongoDB Connector for Hadoop is no longer supported. If you would like to access MongoDB databases from the Apache Spark libraries, use the MongoDB Connector for Spark instead. |