|
Jesse, I think %.20g seems reasonable. Even using %g would not solve the problem Jeroen sees, since more precise numbers would still be truncated. It seems a better policy to emit more digits than strictly necessary.
FWIW, the Python interpreter does something more sophisticated in repr() for floats: it prints the fewest possible digits such that parsing the string back yields exactly the original value, i.e. float(repr(some_float)) == some_float. This is very convenient, but I don't think it's necessary for everyone to implement. See https://bugs.python.org/issue1580 if interested.
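To make the difference concrete, here is a minimal C sketch (illustrative only, not libbson code): %g defaults to 6 significant digits and loses information on round-trip, while %.20g emits enough digits that strtod() recovers the original value.

/* Illustrative sketch, not libbson code: compare the precision kept by
 * "%g" (6 significant digits by default) and "%.20g". */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double value = 1.0 / 3.0;
    char with_g[64], with_g20[64];

    snprintf(with_g, sizeof with_g, "%g", value);
    snprintf(with_g20, sizeof with_g20, "%.20g", value);

    printf("%%g    -> %s  (parses back equal: %s)\n", with_g,
           strtod(with_g, NULL) == value ? "yes" : "no");
    printf("%%.20g -> %s  (parses back equal: %s)\n", with_g20,
           strtod(with_g20, NULL) == value ? "yes" : "no");
    return 0;
}

On a typical IEEE-754 system the %g string parses back to a different double, while the %.20g string parses back to the original value.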
|
|
Hey Jeroen, I asked @llvt to help answer, since he's the author of our Extended JSON Spec and understands format specifiers better than I do. What I want to say is this: we've somewhat standardized on printf("%.20g") as our definitive string encoding of doubles, as you can see in this test:
https://github.com/mongodb/specifications/blob/deff67d64b61861fee9e02dc5f38f574dc8b0513/source/bson-corpus/tests/double.json#L17-L17
I changed the format string in libbson 1.6 from "%g" to "%.20g", which caused the change you see, but it matches the standard tests that all MongoDB drivers must pass, so I don't think we should revert it.
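As an aside, 17 significant digits are already enough to round-trip any IEEE-754 double, so "%.20g" is more than sufficient. A quick self-contained check (my own sketch, not the drivers' corpus test) could look like this:

/* Illustrative check, not the drivers' corpus test: "%.20g" prints more than
 * the 17 significant digits needed to round-trip any IEEE-754 double, so
 * strtod() on the output recovers the original value exactly. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int round_trips(double d)
{
    char buf[64];
    double parsed;

    snprintf(buf, sizeof buf, "%.20g", d);
    parsed = strtod(buf, NULL);
    /* memcmp on the bit patterns also distinguishes -0.0 from +0.0. */
    return memcmp(&parsed, &d, sizeof d) == 0;
}

int main(void)
{
    double samples[] = { 0.1, 1.0 / 3.0, 1e-300, 1.7976931348623157e308, -0.0 };
    size_t i;

    for (i = 0; i < sizeof samples / sizeof samples[0]; i++)
        printf("%.20g -> round-trips: %s\n", samples[i],
               round_trips(samples[i]) ? "yes" : "no");
    return 0;
}

Comparing bit patterns rather than using == makes the check strict about signed zero as well.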
|