Type: Improvement
Resolution: Unresolved
Priority: Major - P3
Affects Version/s: None
Component/s: None
Context
Can we improve the API to help customers avoid inserting data with duplicate keys? I am aware that this is technically allowed by the BSON spec, and the server does not enforce it (we cannot, because of backwards compatibility and performance).
The Go driver warns about it here. For us on the server side, this is a headache: when customers have this kind of data it is always unintentional, and it results in query incorrectness with no way for us to remediate automatically.
bson.D offers an ordered representation, which is where this issue arises. For time series collections we recommend using ordered representations for performance and compression, but this risks undefined server behavior.
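To illustrate the gap: nothing in the current API stops a caller from repeating a key in an ordered document. The sketch below uses hypothetical stand-in types `E` and `D` mirroring the shape of the driver's `bson.E`/`bson.D` (the real types live in `go.mongodb.org/mongo-driver/bson`), plus an assumed helper `hasDuplicateKeys` that a caller would have to write and remember to call today.

```go
package main

import "fmt"

// E and D are stand-ins mirroring the shape of the driver's bson.E and bson.D.
type E struct {
	Key   string
	Value interface{}
}
type D []E

// hasDuplicateKeys reports whether any key appears more than once in doc.
func hasDuplicateKeys(doc D) bool {
	seen := make(map[string]struct{}, len(doc))
	for _, e := range doc {
		if _, ok := seen[e.Key]; ok {
			return true
		}
		seen[e.Key] = struct{}{}
	}
	return false
}

func main() {
	// Nothing stops a caller from repeating a key; the document
	// marshals fine and the server accepts it.
	doc := D{{"status", "ok"}, {"status", "failed"}}
	fmt.Println(hasDuplicateKeys(doc)) // true
}
```

The burden of running such a check is entirely on the caller, which is exactly the failure mode this ticket wants to close.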
Definition of done
I would like bson.D to be a higher-level construct that keeps track of inserted keys and applies some well-defined behavior when a duplicate occurs (return an error, overwrite, or ignore). Any solution is acceptable to me over allowing duplicate keys.
We should make it easy for customers to do the right thing and difficult for them to do the wrong thing.
Pitfalls
What should the implementer watch out for? What are the risks?
is related to: CDRIVER-6155 Improve API to help customers avoiding duplicate key names
Backlog