[CSHARP-3562] BsonDocument.Parse failure on large double Created: 09/Apr/21 Updated: 27/Oct/23 Resolved: 26/Apr/21
| Status: | Closed |
| Project: | C# Driver |
| Component/s: | None |
| Affects Version/s: | 2.12.2 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Alex Bevilacqua | Assignee: | Robert Stam |
| Resolution: | Gone away | Votes: | 0 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Case: | (copied to CRM) |
| Description |
Note that the same JSON string will parse using JObject.Parse:
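Presumably the comparison was along these lines (a sketch; the field name "value" and the specific oversized literal are assumptions, not taken from the original report):

    using MongoDB.Bson;
    using Newtonsoft.Json.Linq;

    // An integer literal larger than UInt64.MaxValue (2^64 here).
    var json = "{ \"value\" : 18446744073709551616 }";

    var jObject = JObject.Parse(json);    // Json.NET parses the same string without error
    var bson = BsonDocument.Parse(json);  // fails with an exception on 2.12.2, as reported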
| Comments |
| Comment by Backlog - Core Eng Program Management Team [ 26/Apr/21 ] |
There hasn't been any recent activity on this ticket, so we're resolving it. Thanks for reaching out! Please feel free to comment on this if you're able to provide more information.
| Comment by Robert Stam [ 10/Apr/21 ] |
Whether or not it is a breaking change depends on how you look at it... A principal property of integers is that they have exact precision. They might overflow, but they never lose precision. The C# driver implementation is based on that principle, and if the integer doesn't fit in either a 32-bit or 64-bit integer then we throw an exception rather than allow precision to be lost. Applications might be relying on that behavior to ensure that no precision is ever inadvertently lost. To change the behavior now to allow precision to be lost would be a breaking change (a behavioral breaking change).

You are right that we would like all drivers to handle this situation in the same way. Apparently they don't. We are discussing internally what the correct behavior should be, and based on what we decide some drivers might have to be changed.

Floating point arithmetic is tricky, with lots of idiosyncrasies. In my view we should never silently default to floating point. An application should only use floating point when it does so deliberately, with full awareness of all the weirdness of floating point arithmetic and its frequent loss of precision.
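A minimal sketch of the silent precision loss being described (the value, 2^53 + 1, is assumed purely for illustration):

    // Converting a large Int64 to double silently drops low-order bits,
    // which is the loss the driver avoids by throwing instead.
    long exact = 9007199254740993;                   // 2^53 + 1, fits exactly in Int64
    double rounded = exact;                          // implicit long -> double conversion
    Console.WriteLine(rounded == 9007199254740992);  // True: the trailing 1 is gone
    Console.WriteLine((long)rounded == exact);       // False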
| Comment by Graham Gearing [ 09/Apr/21 ] |
This would not be a breaking change. You are already separating int32 and int64 based on the scale of the number; this would just be another such break, at the point where the scale exceeds Int64's range. It really is very important that Mongo behaves consistently across all interfaces. The fact that one interface is Java under the covers and another is C# should not change the result: a valid piece of JSON data is submitted, one interface says the data is good, and another interface says the same data is bad.
| Comment by Robert Stam [ 09/Apr/21 ] |
I understand what you are saying... but the shell handles numbers differently because it is a JavaScript-based shell. In the shell ALL numbers are floating point numbers because in JavaScript ALL numbers are floating point numbers.

While your request is reasonable, we actually parse numbers into different BSON types (int32, int64, double or decimal) depending on what they look like. Parsing numbers that look like integers as doubles instead just because the value is large would be a breaking change for us, so I don't see that we would make such a change. Fortunately there are simple workarounds you can use to express the intent clearly.
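For instance, a minimal sketch of that classification (the field names and literals are assumptions; types are checked via BsonValue.BsonType):

    using MongoDB.Bson;

    var doc = BsonDocument.Parse(
        "{ a : 1, b : 123456789012, c : 1.5, d : { \"$numberDecimal\" : \"1.5\" } }");
    Console.WriteLine(doc["a"].BsonType);  // Int32  - fits in 32 bits
    Console.WriteLine(doc["b"].BsonType);  // Int64  - integer too large for Int32
    Console.WriteLine(doc["c"].BsonType);  // Double - written with a decimal point
    Console.WriteLine(doc["d"].BsonType);  // Decimal128 - explicit extended JSON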
| Comment by Alex Bevilacqua [ 09/Apr/21 ] |
rstam I believe the desired intent here is for the behavior to be the same as if this were inserted using the shell:
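Presumably something like the following (a sketch of the shell-equivalent outcome expressed in C#; the field name and value are assumptions). Because the shell is JavaScript-based, it reads the oversized literal as a double, so the stored document corresponds to:

    using MongoDB.Bson;

    // What the shell would store for { value: 18446744073709551616 } (~2^64 as a double).
    var shellEquivalent = new BsonDocument("value", new BsonDouble(18446744073709551616.0));
    Console.WriteLine(shellEquivalent["value"].BsonType);  // Double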
| Comment by Robert Stam [ 09/Apr/21 ] |
This behavior is by design. If a number "looks" like an integer we verify that it can be stored as an integer. If you want to parse that constant as a double you should use:
or:
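Presumably the two suggestions were along these lines (a sketch; the field name x and the value are assumptions):

    using MongoDB.Bson;

    // Option 1: give the literal a decimal point so it is lexed as a double.
    var doc1 = BsonDocument.Parse("{ x : 18446744073709551616.0 }");
    Console.WriteLine(doc1["x"].BsonType);  // Double

    // Option 2: spell the type out with canonical Extended JSON.
    var doc2 = BsonDocument.Parse("{ x : { \"$numberDouble\" : \"1.8446744073709552E19\" } }");
    Console.WriteLine(doc2["x"].BsonType);  // Double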
| Comment by Alex Bevilacqua [ 09/Apr/21 ] |
It appears the BsonDocument.Parse method is failing to parse an integer literal greater than UInt64.MaxValue, when in fact it should be treating it as a Double, which would allow values up to Double.PositiveInfinity (or down to Double.NegativeInfinity).