[SERVER-524] Encryption of wire protocol with SSL Created: 07/Jan/10 Updated: 12/Jul/16 Resolved: 29/Oct/15 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Networking, Security |
| Affects Version/s: | None |
| Fix Version/s: | 3.0.0 |
| Type: | New Feature | Priority: | Major - P3 |
| Reporter: | Robert Vanderwall | Assignee: | Andreas Nilsson |
| Resolution: | Done | Votes: | 139 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Issue Links: |
|
||||||||||||||||||||||||||||||||||||||||
| Participants: |
Andreas Nilsson, Andy, Andy Schwerin, BrandonM, crsmp, Daniel Doubrovkine, David Fogel, Eliot Horowitz, Eric Milkie, Flavien, James Page, Jan Stanzel, John Crenshaw, Jon Gorrono, julien, Justin Dearing, Karoly Negyesi, Laradji nacer, Levi Corcoran, Matic, Max Aller, Nadav Wiener, Ramon Fernandez Marina, Robert Vanderwall, Rob Giardina, Scott Hernandez, Thijs Cadier, Tyler Brock, uwe schaefer, Zakariya Dehlawi
|
||||||||||||||||||||||||||||||||||||||||
| Description |
|
Currently, the Mongo wire protocol sends data essentially in clear text. This has two implications for my user scenario. First, queries generate a lot of network traffic: when reports are run and many fields of data are retrieved, the same field names are sent over and over, so some compression here would speed up delivery of the data. The query itself is lightning fast, but the transaction is slowed down by the movement of a massive amount of data. Second, the clear text has security implications. Running SSL or some similar secure wire protocol could potentially solve both of these issues. Thanks! Edit: Need to support auto-negotiation, plus optional (both SSL/non-SSL connections), preferred, and forced modes. |
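To make the redundancy concrete, here is a minimal, illustrative Python sketch (standard-library zlib only; the document shape is invented) showing how much of a result set made of repeated field names is compressible. The actual wire-compression mechanism, if any, is left open by this ticket.

```python
# Illustrative only: repetitive field names in query results compress well.
# zlib is used as a stand-in for whatever wire compression might be adopted;
# the documents below are invented for this sketch.
import json
import zlib

docs = [
    {"customer_first_name": "Ada", "customer_last_name": "Lovelace",
     "customer_account_balance": i}
    for i in range(1000)
]

raw = json.dumps(docs).encode("utf-8")
compressed = zlib.compress(raw)

print(len(raw), len(compressed))  # repeated key names shrink dramatically
```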
| Comments |
| Comment by Ramon Fernandez Marina [ 29/Oct/15 ] | |
|
As of MongoDB version 3.0.0, community binaries for Windows and Linux platforms are compiled with SSL enabled and dynamically linked. Regards, | |
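For anyone picking up these binaries, a client connection over SSL might look like the following PyMongo sketch; the hostname and CA file path are placeholders, and the `ssl`/`ssl_ca_certs` keyword names follow the PyMongo 3.x driver API rather than anything specified in this ticket.

```python
# Sketch only: connecting to an SSL-enabled mongod with PyMongo 3.x.
# Host and certificate path are placeholders for your own deployment.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://db.example.com:27017/",
    ssl=True,                                 # require an encrypted connection
    ssl_ca_certs="/etc/ssl/mongodb-ca.pem",   # CA used to verify the server
)
print(client.admin.command("ping"))
```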
| Comment by Thijs Cadier [ 01/Jun/14 ] | |
|
Thanks Andy, that makes a lot of sense. I guess we'll stick to our own compiled versions for the foreseeable future. | |
| Comment by Andy Schwerin [ 30/May/14 ] | |
|
thijs, the remaining work on this ticket is really about binary distribution. The 2.6 series source code on GitHub supports SSL, but the OpenSSL library version varies a lot by OS vendor and distro version. That means you would need a version of the MongoDB binaries for nearly every supported distro, unlike the current community binaries, which run pretty much anywhere. When we've worked out a distribution solution, we'll be ready to resolve this ticket. Until then, on Linux you can try using your distribution's provided version of MongoDB, and on OS X you can try Brew as tyler.brock@gmail.com described above. You may, of course, also compile the code yourself, passing the --ssl option to the build scripts. If you happen to have a MongoDB subscription, you may be entitled to use the MongoDB Enterprise release for your platform. Since these builds are already targeted at specific OS distributions, we build them with SSL support. | |
| Comment by Thijs Cadier [ 30/May/14 ] | |
|
This doesn't seem to be in the 2.7 development release yet; will it be part of the 2.7 series? | |
| Comment by Tyler Brock [ 10/Mar/14 ] | |
|
Anyone on a Mac:
| |
| Comment by Eric Milkie [ 10/Mar/14 ] | |
|
Unfortunately, it's unlikely to be backported, as this is not a bug fix but a new feature, and the work to complete this will not be easily backportable. | |
| Comment by Jon Gorrono [ 08/Mar/14 ] | |
|
The fix version is still the 2.7.x branch. Does your comment (Andy) mean that this might make it into a 2.6.x minor release? | |
| Comment by Andy Schwerin [ 07/Mar/14 ] | |
|
javacruft, we checked on licensing and updated the copyright files to allow linking with openssl. Further, as of 2.6, support for SSL is pretty complete, and it would be fine for distro packages to have SSL enabled. This ticket remains open until we resolve some internal technical issues around distributing binaries, rather than around licensing or functionality in the code itself. | |
| Comment by Thijs Cadier [ 07/Nov/13 ] | |
|
It would be great if using SSL on Mongo connections was trivial to set up. Any progress on this? | |
| Comment by James Page [ 27/Feb/13 ] | |
|
Eliot - how are the licensing/export issues progressing? I'd really like to enable the SSL option in the Ubuntu packages for raring. | |
| Comment by Eliot Horowitz (Inactive) [ 28/Mar/12 ] | |
|
The SSL code in mongo is quite tiny: it forces all sockets to use SSL, and it uses OpenSSL. Overall, we do hope to get the licensing/export issues worked out in the not-too-distant future, but obviously we need to be careful. | |
| Comment by Levi Corcoran [ 27/Mar/12 ] | |
|
Eliot - I understand the licensing/export complications, but I'm wondering if you could clarify the nature of the current 'compile it yourself' option. This is a critical blocking issue for using Mongo in some security-sensitive areas (dealing with PCI, HIPAA, or other compliance requirements).
Any insight you could provide would be greatly appreciated! | |
| Comment by Scott Hernandez (Inactive) [ 22/Mar/12 ] | |
|
We need to support both encrypted and non-encrypted connections at the same time, via auto-negotiation and/or alternative ports for SSL/non-SSL. Currently it is all on or all off, which provides no way to transition a live system from one level to another without taking everything down. In addition, a force flag that can be set after the migration is needed to lock a system into one of the two modes and enforce that behavior. Negotiation should prefer SSL but be able to fall back, so that clients can do the same. This would allow interconnects (replication, internal sharded connections) to be secured without requiring users to do so as well. | |
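A hypothetical sketch of the mode handling described above; the `SSLMode` names and `negotiate` helper are invented for illustration and are not the server's actual implementation.

```python
# Hypothetical sketch of allow/prefer/require SSL modes during a rolling
# migration; none of these names come from the MongoDB code base.
from enum import Enum

class SSLMode(Enum):
    DISABLED = "disabled"   # plain TCP only
    ALLOW = "allow"         # accept SSL inbound, initiate plain TCP
    PREFER = "prefer"       # try SSL first, fall back to plain TCP
    REQUIRE = "require"     # force flag: refuse unencrypted peers

def negotiate(local: SSLMode, peer_supports_ssl: bool) -> bool:
    """Decide whether an outgoing connection should be encrypted."""
    if local is SSLMode.DISABLED:
        return False
    if local is SSLMode.REQUIRE:
        if not peer_supports_ssl:
            raise ConnectionError("peer does not support SSL")
        return True
    if local is SSLMode.PREFER:
        return peer_supports_ssl   # fall back to plain TCP if needed
    return False                   # ALLOW: don't initiate SSL ourselves
```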
| Comment by Eliot Horowitz (Inactive) [ 08/Feb/12 ] | |
|
The code for ssl is basically done - the problems are figuring out packaging/licensing/export issues. | |
| Comment by Jan Stanzel [ 08/Feb/12 ] | |
|
Is there any update on this you could give us? Can we still hope for 2.2 or has it been postponed? | |
| Comment by Eliot Horowitz (Inactive) [ 11/Nov/11 ] | |
|
We're hoping for 2.2. Compiling it in yourself is definitely an option though (scons --ssl). | |
| Comment by Flavien [ 11/Nov/11 ] | |
|
So, is there an ETA for having SSL officially supported? | |
| Comment by Laradji nacer [ 23/Aug/11 ] | |
|
SSL is needed for data transfer within a replica set; if you run a replica set across multiple datacenters, your data will travel over the net in the clear unless you have a complex installation of VPN routers. | |
| Comment by Eliot Horowitz (Inactive) [ 04/Aug/11 ] | |
|
@flavien - no. features are never backported. | |
| Comment by Flavien [ 04/Aug/11 ] | |
|
This is good news. Does it mean SSL will be supported in 1.8.4? | |
| Comment by Eliot Horowitz (Inactive) [ 01/Aug/11 ] | |
|
Just an update. There is rudimentary support for SSL coded currently - but not linked in. | |
| Comment by julien [ 30/Jul/11 ] | |
|
Waiting for SSL encryption before using MongoDB. I need to store confidential data and cannot afford to have my password sniffed. | |
| Comment by Flavien [ 26/Jul/11 ] | |
|
Regarding encryption, I think it is very important to have this in MongoDB. If you want to use a MongoDB cloud provider (like MongoHQ), you have to connect to the server through the internet, and there is a high risk of the traffic being sniffed. Adding an SSL option is almost a prerequisite to making those MongoDB cloud providers relevant. | |
| Comment by Eliot Horowitz (Inactive) [ 29/Apr/11 ] | |
|
new ticket for compression | |
| Comment by Eliot Horowitz (Inactive) [ 29/Apr/11 ] | |
|
These were originally the same | |
| Comment by Max Aller [ 19/Apr/11 ] | |
|
If gzip uses too much CPU (plausible), perhaps Google's Snappy would be a better fit? http://code.google.com/p/snappy/ | |
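As a rough illustration of the trade-off, the sketch below compares zlib against Snappy via the python-snappy binding (assumed to be installed; the payload is invented for the example).

```python
# Rough comparison of zlib vs. Snappy on a repetitive JSON-ish payload.
# Requires the python-snappy package (pip install python-snappy).
import json
import time
import zlib
import snappy

payload = json.dumps(
    [{"field_name_one": i, "field_name_two": "value"} for i in range(50000)]
).encode("utf-8")

start = time.perf_counter()
z = zlib.compress(payload)
z_time = time.perf_counter() - start

start = time.perf_counter()
s = snappy.compress(payload)
s_time = time.perf_counter() - start

print(f"zlib:   {len(z)} bytes in {z_time:.4f}s")
print(f"snappy: {len(s)} bytes in {s_time:.4f}s")

assert snappy.uncompress(s) == payload  # round-trip check
```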
| Comment by Daniel Doubrovkine [ 30/Mar/11 ] | |
|
Please consider that in the future there will be protocols other than TCP/IP (named pipes, shared memory, air pigeon) and that SSL is a layer on top of those. In addition, for SSL it's important to be able to swap libraries and to both know and adjust the encryption levels used, for SSL and for the other places in MongoDB where encryption happens. This is because a) FIPS compliance is required in the US for both encryption generally and secure communication specifically in all common criteria/gov contracts, and b) export laws will force the use of different OpenSSL versions in various countries. | |
| Comment by Zakariya Dehlawi [ 30/Mar/11 ] | |
|
I'm currently looking at the MongoDB architecture and trying to determine an elegant way of using OpenSSL APIs to handle ALL communication between client, server, mongos, mongoconf, shards, and replicas. The biggest problem is that it's going to require some digging and working in Mongo's guts. | |
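For what it's worth, the general shape of layering TLS over a TCP connection looks like the standard-library sketch below; the host, port, and CA path are placeholders, and this is generic TLS-over-TCP rather than MongoDB's internal networking code.

```python
# Generic illustration of TLS as a layer over a TCP socket, using Python's
# standard ssl module; hostname and certificate path are placeholders.
import socket
import ssl

context = ssl.create_default_context(cafile="/etc/ssl/mongodb-ca.pem")

with socket.create_connection(("db.example.com", 27017)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="db.example.com") as tls_sock:
        # Everything written to tls_sock is encrypted before hitting the wire;
        # the application-level protocol on top is unchanged.
        print(tls_sock.version())  # e.g. 'TLSv1.2'
```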
| Comment by Matic [ 30/Mar/11 ] | |
|
@John Crenshaw | |
| Comment by John Crenshaw [ 30/Mar/11 ] | |
|
I thought that the SSL issue could be worked around using stunnel, but it appears that this may not actually be practical (requiring each node to maintain a separate active local port for every other node that it may connect with). The problem is not the DB<->application exchange, but rather, the communication between mongos, mongoconf, shards, and replicas. If you keep everything in one data center you could secure it at the physical level, but if you locate your servers in multiple locations this won't work. Unencrypted replication between geographically separate datacenters would be reckless. Since "what if your datacenter burns down" was used extensively as an argument in favor of replication over journaling in the past, this seems to be a pretty serious oversight. If anyone else has found a good workaround for this problem, I'd love to know what it was. | |
| Comment by BrandonM [ 09/Mar/11 ] | |
|
I don't see this issue in the roadmap. What's the expected date for work to begin on SSL support? I also would second the motion for mutual SSL authentication support. As already commented above, SSL is a must in the federal space. Without SSL support, MongoDB will remain unused. | |
| Comment by uwe schaefer [ 26/Feb/11 ] | |
|
+1 for optional compression on the wire. | |
| Comment by Daniel Doubrovkine [ 14/Jan/11 ] | |
|
I would split this issue into encryption and compression. For encryption, consider sticking as close as possible to SSL; it's a known entity. A related issue | |
| Comment by Justin Dearing [ 06/Dec/10 ] | |
|
It seems there is a new game in town when it comes to SSL libraries: http://axtls.sourceforge.net/ It is supposed to be nicer to work with than OpenSSL. I've never programmed against either; I'm just reporting what I've been told. | |
| Comment by David Fogel [ 01/Nov/10 ] | |
|
Regarding compression: gzip can be CPU-intensive. One alternative I've read about recently is LZF compression, a variation of Lempel-Ziv compression optimized for very fast compression and decompression. The main implementation is written in portable C: http://oldhome.schmorp.de/marc/liblzf.html There's a recent Java port of that library here: http://github.com/ning/compress This doesn't solve the encryption feature, obviously, but maybe encryption should be broken out into a separate issue? | |
| Comment by Justin Dearing [ 16/Oct/10 ] | |
|
I'd like an SSL-encrypted port. It should be different from the standard port, and you should be able to turn the two ports on and off independently. Using SSL also means you could offload encryption to a hardware SSL accelerator. | |
| Comment by Nadav Wiener [ 22/Jul/10 ] | |
|
Regarding compression: transport-level data de-duplication could fit the bill nicely. While I'm at it, a few words on storage de-duplication: data de-duplication introduces a level of indirection, but other than that the data is still laid out the same; OTOH, the main disadvantage of gzip is that your data becomes opaque if you go beyond compressing values. Caching a limited number of commonly duplicated chunks of data in RAM, and using copy-on-write once they change, can be applicable to slow-changing, low-cardinality data. And as opposed to the highly effective RLE compression schemes used in relational column-oriented databases, data de-duplication works very well on semi-structured data. This could be implemented using specially annotated DBRefs, which for all intents and purposes would be resolved transparently by MongoDB for both querying and indexing. If the referenced documents are all cached, there should be no performance problem. Kind of like "hard references" that would be transparent to the user, as opposed to the "soft references" MongoDB has today. [1] http://www.cs.washington.edu/homes/franzi/pdf/net_project.pdf | |
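For reference, today's "soft references" look roughly like the PyMongo sketch below: dereferencing is explicit and client-side, not transparent to queries or indexes. The collection names and documents are invented for illustration.

```python
# Sketch of today's manual, client-side DBRef indirection in PyMongo.
from bson.dbref import DBRef
from pymongo import MongoClient

db = MongoClient()["example"]

# Store one shared, slow-changing document once...
plan_id = db.plans.insert_one({"name": "gold", "quota_gb": 100}).inserted_id

# ...and reference it from many user documents instead of embedding a copy.
db.users.insert_one({"email": "a@example.com", "plan": DBRef("plans", plan_id)})

user = db.users.find_one({"email": "a@example.com"})
plan = db.dereference(user["plan"])  # explicit, client-side resolution
print(plan["quota_gb"])
```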
| Comment by crsmp [ 29/May/10 ] | |
|
My vote is for encryption of all MongoDB traffic. This is a major requirement in many federal spaces, and should the 10gen owners see a possible business case there, this feature and others would be a step in the right direction. | |
| Comment by Karoly Negyesi [ 12/May/10 ] | |
|
I have offered http://jira.mongodb.org/browse/SERVER-863 as one way to save space on disk / wire / memory. | |
| Comment by Andy [ 07/May/10 ] | |
|
While compressing data before sending it is good, I think it's better to compress the data before storing it. That way the data size could be minimized to fit into RAM or SSD (SSDs are still expensive and limited in capacity; a 64GB X25-E costs $800). | |
| Comment by Eliot Horowitz (Inactive) [ 02/Apr/10 ] | |
|
trying to keep 1.5/1.6 very focused on sharding + replica sets. | |
| Comment by Eliot Horowitz (Inactive) [ 22/Jan/10 ] | |
|
If you have a moment - could you try compressing a single one of those 2200 objects? | |
| Comment by Rob Giardina [ 21/Jan/10 ] | |
|
I find this very important for large data requests. Particularly given that most documents have a repeated schema (and often large key names), the compression factor should be large. Anecdotally, I have a collection of 2200 documents that is 3.6 MB uncompressed and 364 KB compressed with basic gzip.
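A rough way to reproduce that kind of measurement (sketch only; the collection name is a placeholder and the bson.BSON.encode call assumes a PyMongo 3.x environment):

```python
# Sketch: measure how well a collection's BSON compresses with gzip.
import gzip
import bson
from pymongo import MongoClient

coll = MongoClient()["reports"]["results"]  # placeholder collection

raw = b"".join(bson.BSON.encode(doc) for doc in coll.find())
compressed = gzip.compress(raw)

print(f"uncompressed: {len(raw)} bytes")
print(f"gzip:         {len(compressed)} bytes")
```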