[DOCS-4712] Getting Started with Hadoop Connector - More prescriptive with Git clone step Created: 21/Jan/15  Updated: 27/Sep/18  Resolved: 27/Sep/18

Status: Closed
Project: Documentation
Component/s: drivers
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Minor - P4
Reporter: Matt Kalan Assignee: Unassigned
Resolution: Won't Fix Votes: 0
Labels: groom
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Related
related to DOCS-12076 Add banners for Spark Connector on th... Closed
Participants:
Time since last reply: 5 years, 19 weeks, 6 days

 Description   

On this page: http://docs.mongodb.org/ecosystem/tutorial/getting-started-with-hadoop/,
a couple of tips would be helpful for readers who do not use the Git command line often. In my environment, git clone placed the repository under etc/profile.d by default, so I had to look around a bit to find it. The clone also left the directories owned by root, which caused problems later when running the example with sudo. We could direct the user to clone into a specific directory and make sure it is owned by that user. Running the example with sudo also meant JAVA_HOME was not visible, so the build failed.
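A minimal sketch of the guidance the page could add, assuming the reader clones the mongodb/mongo-hadoop repository into their home directory and builds with the repository's Gradle wrapper (the exact clone path and build command are illustrative; the tutorial's own steps may differ):

    # Clone into a directory the current user owns; ~/mongo-hadoop is just an example path.
    cd ~
    git clone https://github.com/mongodb/mongo-hadoop.git
    cd mongo-hadoop

    # If an earlier clone was made as root, hand ownership back to the user
    # so the later build steps do not need sudo.
    sudo chown -R "$USER" ~/mongo-hadoop

    # Build as the normal user; running under sudo resets the environment and
    # hides JAVA_HOME, which is what made the reported build fail.
    echo "$JAVA_HOME"    # verify JAVA_HOME is set before building
    ./gradlew jar        # assumed build command; check the tutorial for the actual step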



 Comments   
Comment by Jonathan DeStefano [ 27/Sep/18 ]

Thanks for filing a DOCS ticket. The MongoDB Connector for Hadoop is no longer supported. If you would like to access MongoDB databases using the Apache Spark libraries, use the MongoDB Connector for Spark.
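For readers who land here looking for the Spark path instead, a minimal sketch of pulling the Spark connector into an interactive session from the command line; the artifact coordinate, version, and connection URIs below are only illustrative, so check the MongoDB Connector for Spark documentation for the current ones:

    # Launch spark-shell with the MongoDB Connector for Spark resolved from Maven Central.
    # Coordinate and version are illustrative; confirm the current release first.
    spark-shell \
      --packages org.mongodb.spark:mongo-spark-connector_2.12:3.0.1 \
      --conf "spark.mongodb.input.uri=mongodb://localhost:27017/test.myCollection" \
      --conf "spark.mongodb.output.uri=mongodb://localhost:27017/test.myCollection"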

Generated at Thu Feb 08 07:48:40 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.