[SERVER-28435] Implement KeysCollectionCacheReader Created: 22/Mar/17 Updated: 06/Dec/17 Resolved: 19/Apr/17
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Sharding |
| Affects Version/s: | None |
| Fix Version/s: | 3.5.7 |
| Type: | Task | Priority: | Major - P3 |
| Reporter: | Misha Tyulenev | Assignee: | Randolph Tan |
| Resolution: | Done | Votes: | 0 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Issue Links: | |
| Backwards Compatibility: | Fully Compatible |
| Sprint: | Sharding 2017-04-17, Sharding 2017-05-08 |
| Participants: | |
| Description |
The KeysCollectionCacheReader is responsible for reading the keys in the admin.system.keys collection.

1. Implement the KeysCollectionCache abstract class.
2. The KeysCollectionCacheReader must implement the KeysCollectionCache API.
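The ticket does not spell the API out, so the following is only a minimal sketch of what a KeysCollectionCache interface and its reader implementation might look like; all type names, fields, and signatures here (KeysCollectionDocument, getKey, insert) are illustrative assumptions, not the actual server code.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

// Hypothetical key document, loosely modeled on admin.system.keys entries.
struct KeysCollectionDocument {
    long long keyId;
    std::string purpose;
    std::uint64_t expiresAt;  // logical (cluster) time at which the key expires
};

// Hypothetical abstract cache API the reader would implement.
class KeysCollectionCache {
public:
    virtual ~KeysCollectionCache() = default;
    // Return a key that is still valid for signing at the given cluster time.
    virtual KeysCollectionDocument getKey(std::uint64_t forThisTime) = 0;
};

// Reader sketch: refreshes from admin.system.keys (refresh elided here) and
// serves lookups from an in-memory map ordered by expiration time.
class KeysCollectionCacheReader : public KeysCollectionCache {
public:
    KeysCollectionDocument getKey(std::uint64_t forThisTime) override {
        // First key whose expiresAt is strictly greater than forThisTime.
        auto it = _cache.upper_bound(forThisTime);
        assert(it != _cache.end() && "no valid key cached; caller should refresh");
        return it->second;
    }
    void insert(const KeysCollectionDocument& doc) { _cache[doc.expiresAt] = doc; }

private:
    std::map<std::uint64_t, KeysCollectionDocument> _cache;  // expiresAt -> key
};
```

The map-ordered-by-expiration layout makes "find a key valid at time T" a single `upper_bound` lookup.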
| Comments |
| Comment by Githook User [ 19/Apr/17 ] |

Author: Randolph Tan (renctan, randolph@10gen.com)
Message:

| Comment by Githook User [ 17/Apr/17 ] |

Author: Randolph Tan (renctan, randolph@10gen.com)
Message:

| Comment by Githook User [ 11/Apr/17 ] |

Author: Randolph Tan (renctan, randolph@10gen.com)
Message:
| Comment by Randolph Tan [ 04/Apr/17 ] |

misha.tyulenev This will come from the caller when it needs to sign the time "forThisTime".
| Comment by Misha Tyulenev [ 30/Mar/17 ] |

renctan Please clarify where the forThisTime argument comes from.
| Comment by Randolph Tan [ 30/Mar/17 ] |

I think it's confusing that this method even references its own clusterTime. For example, when a node gets a new clusterTime with a new key generation, this method will fail this condition. I propose instead that we have something like:

#2 sounds good to me.
| Comment by Misha Tyulenev [ 30/Mar/17 ] |

Thanks for the feedback:

2. Good point. I think it's OK to use afterClusterTime with an inaccurate value (say, the max uint64), which can never be reached, as long as there is a timeout built into the wait.

So I think the correct behavior is to query the config server with the afterClusterTime argument (which can be sent directly by users anyway, so we need enough protection there against DoS attacks), but the query must time out.
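The timeout-bounded wait described above can be sketched as follows. This is only an illustration of the shape of the logic, not server code: `refreshKeysUpTo` and `queryConfigServer` are hypothetical names, and a real implementation would issue a find with readConcern afterClusterTime rather than poll.

```cpp
#include <cassert>
#include <chrono>
#include <cstdint>
#include <functional>
#include <optional>

// Wait (with a deadline) until the config server's cluster time reaches
// afterClusterTime. A bounded wait means an arbitrarily large, possibly
// attacker-supplied clusterTime cannot block the caller forever.
std::optional<std::uint64_t> refreshKeysUpTo(
        std::uint64_t afterClusterTime,
        std::chrono::milliseconds maxTime,
        const std::function<std::uint64_t()>& queryConfigServer) {
    const auto deadline = std::chrono::steady_clock::now() + maxTime;
    // Poll until the requested time becomes visible or the deadline passes.
    while (std::chrono::steady_clock::now() < deadline) {
        const std::uint64_t latest = queryConfigServer();
        if (latest >= afterClusterTime) {
            return latest;  // keys as of the requested time are now readable
        }
    }
    return std::nullopt;  // timed out: the requested time never became visible
}
```

A caller that gets `nullopt` back would reject the request instead of waiting indefinitely, which is the DoS protection the comment asks for.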
| Comment by Randolph Tan [ 30/Mar/17 ] |

Why do we have "_expiresAt < <current logical time>" here? I think this should not throw an error:

It is possible to hit this even with a legitimate signature if the mongos talked to a secondary config server that is behind. In other cases, mongos could use the clusterTime of the request to pass as the readAfterClusterTime when it talks to the config server, but in this case it is still checking the validity of the clusterTime, so it can't just blindly use it. So this either has to wait until it gets the requested keyId (with a timeout), or temporarily use (as opposed to setting the global config opTime) the unvalidated clusterTime as the readAfter argument to the query for reading the keys.
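The "refresh instead of fail" behavior argued for above might look like the following sketch: on a cache miss the lookup re-reads the keys collection once before giving up, since the incoming clusterTime may belong to a newer key generation the cache has not seen. All names here (`findSigningKeyId`, `readKeys`) are hypothetical.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <map>
#include <optional>

// Look up the key valid at clusterTime; refresh the cache once on a miss
// rather than throwing immediately, because a legitimate signature may come
// from a key generation newer than anything currently cached.
std::optional<long long> findSigningKeyId(
        std::map<std::uint64_t, long long>& cache,  // expiresAt -> keyId
        std::uint64_t clusterTime,
        const std::function<std::map<std::uint64_t, long long>()>& readKeys) {
    auto it = cache.upper_bound(clusterTime);
    if (it == cache.end()) {
        cache = readKeys();  // single refresh from admin.system.keys
        it = cache.upper_bound(clusterTime);
        if (it == cache.end()) {
            return std::nullopt;  // still unknown after refresh: reject
        }
    }
    return it->second;
}
```

Only after the refresh also fails would the caller treat the clusterTime as invalid, which covers the lagging-secondary case in the comment.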
| Comment by Misha Tyulenev [ 30/Mar/17 ] |

renctan Please review the proposal.