[SERVER-37758] assert fail on fresh install Created: 25/Oct/18 Updated: 31/Oct/18 Resolved: 30/Oct/18 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Admin |
| Affects Version/s: | 4.0.3 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Jeff Meredith | Assignee: | Danny Hatcher (Inactive) |
| Resolution: | Done | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Operating System: | ALL |
| Participants: |
| Description |
|
I'm deploying a single-node MongoDB (4.0.3) via Kubernetes/Helm (chart 4.6.2) and Docker. I often, but not always, see the system initialization fail because it does not find the `featureCompatibilityVersion` file in the container. This is similar to an issue reported earlier this year: https://jira.mongodb.org/browse/SERVER-33362
It seems impossible that a freshly created install would need repair. Further, when I run the deploy in isolation (nothing else deploying at the same time), it works reliably.
|
| Comments |
| Comment by Jeff Meredith [ 31/Oct/18 ] | |
|
Hi Danny, thanks again for your help. I changed my K8s Helm chart to use the library/mongo image on Docker Hub and that works great. My environment only seems to trip up when using the Bitnami image. | |
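For anyone hitting the same problem, the workaround Jeff describes amounts to pointing the chart at the official library/mongo image instead of the Bitnami one. A minimal sketch of such an override, assuming a bitnami-style chart that exposes `image.registry`/`image.repository`/`image.tag` values (the release name and exact keys here are illustrative and depend on the chart version in use):

```bash
# Hypothetical override: swap the chart's image for the official library/mongo.
# The --set keys below assume bitnami-style chart values; verify against
# your chart's values.yaml before using.
helm upgrade --install my-mongo stable/mongodb \
  --set image.registry=docker.io \
  --set image.repository=library/mongo \
  --set image.tag=4.0.3
```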
| Comment by Danny Hatcher (Inactive) [ 30/Oct/18 ] | |
|
Hello Jeff,

I've taken a quick look through the linked GitHub pages. You are correct that this script is starting up MongoDB, seeding a root user, and then restarting the node. However, without digging into the Docker container itself, I am unsure how it is actually seeding the root user.

In a normal MongoDB installation, a fresh install will come with the correct featureCompatibilityVersion document. If you are unaware, this document was introduced in MongoDB 3.4 to ensure backwards compatibility with previous versions when going through the upgrade process. If you had a MongoDB 3.4 installation and were upgrading to MongoDB 3.6, there were some changes in 3.6 that would not be downgradable back to 3.4. Thus, you would run 3.6 binaries but with a featureCompatibilityVersion of 3.4 in case you wanted to downgrade. When you decided that 3.6 was safe to run and you wanted the new features, you would upgrade this document to 3.6. This process continued in 4.0 and will likely continue for the foreseeable future. For more information, please see https://docs.mongodb.com/manual/reference/command/setFeatureCompatibilityVersion/index.html.

My guess is that the container is starting up an old version of MongoDB (3.2 or earlier) to seed the user and then starting up again with a new version. Because it sees data files already existing, it does not automatically create the document. But because MongoDB relies on the presence of the document to determine which features to enable, it fatally asserts.

Unfortunately, as I am unable to reproduce this and we have not had reports of this circumstance happening recently, I believe that it is almost certainly a problem with the container. I see you opened a GitHub issue with the project and I encourage you to keep following up with them.

Thanks,
Danny | |
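As a concrete illustration of the FCV mechanics Danny describes, checking and raising the featureCompatibilityVersion uses the documented admin commands. A minimal sketch, assuming a locally reachable mongod and the 4.0-era `mongo` shell:

```bash
# Read the current featureCompatibilityVersion via getParameter.
mongo --quiet --eval 'db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })' admin

# Raise the FCV to 4.0 once you are confident you will not need to
# downgrade the binaries back to 3.6.
mongo --quiet --eval 'db.adminCommand({ setFeatureCompatibilityVersion: "4.0" })' admin
```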
| Comment by Jeff Meredith [ 26/Oct/18 ] | |
|
Yes, it seems creating the root user initializes the data/db directory. I updated my script to show the filesystem contents before and after:
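The script excerpt itself did not survive here; a minimal sketch of the kind of before/after check being described, assuming the Bitnami data path mentioned later in the thread (/opt/bitnami/mongodb/data/db):

```bash
# Hypothetical debugging additions: dump the data directory before and
# after the root user is seeded, to see which step creates the files.
echo "=== data/db before root user creation ==="
ls -la /opt/bitnami/mongodb/data/db 2>/dev/null || echo "(data/db does not exist yet)"

# ... existing init logic seeds the root user here ...

echo "=== data/db after root user creation ==="
ls -la /opt/bitnami/mongodb/data/db
```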
The log shows the difference:
It seems something isn't always squaring between the Bitnami initialization and MongoDB. | |
| Comment by Jeff Meredith [ 26/Oct/18 ] | |
|
Hi Danny, thank you for your response. I did not notice that log message previously. I did a little more digging by adding some bash script to the top of the app-entrypoint.sh file in the MongoDB image I'm deploying:
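The exact commands Jeff added did not survive here; a hedged sketch of the kind of debugging block described, which just dumps the volume state at container startup:

```bash
# Hypothetical snippet added to the top of app-entrypoint.sh:
# log the state of the Bitnami volume before any init logic runs.
echo "=== startup: contents of /opt/bitnami/mongodb ==="
ls -laR /opt/bitnami/mongodb 2>/dev/null || echo "(/opt/bitnami/mongodb not present)"
```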
I found that I get the log message you reference every time I run, regardless of the contents of /opt/bitnami/mongodb, even when there is no data directory present:
Further, I'm deploying lots of other things (PostgreSQL, MySQL, InfluxDB, telegraf-ds, my app, nginx, etc.). I would suspect that if K8s, Docker, and OpenStack were not providing clean volumes on the first run, I would see issues in at least one of these systems. I don't; they all start cleanly every time. Anyway, is there an explanation for why the STORAGE system is detecting previously existing data files when they do not exist at container startup? I don't believe that message is indicative of an issue with the container. Wouldn't the creation of the root user, which seems to happen earlier in the init process, need to add files to mongodb/data/db?
| |
| Comment by Danny Hatcher (Inactive) [ 26/Oct/18 ] | |
|
Hello Jeff, the following line indicates that data files already existed and that this is not a "freshly created install".
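The quoted log excerpt did not survive here; based on the follow-up discussion of the STORAGE component, the line in question is most likely mongod's storage-engine detection message, which reads along these lines (the dbpath varies by image; the Bitnami path is assumed here):

```
[initandlisten] Detected data files in /opt/bitnami/mongodb/data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
```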
This is most likely an issue with the container you are using and not with MongoDB itself. Unfortunately, it is not a MongoDB-provided product, so I do not have further information on what the problem could be. I highly encourage you to reach out to the maintainers of that container to see if they can help you troubleshoot the problem. Thank you, Danny |