When converting a decimal to Decimal128 and back, the result is not as expected when the original decimal is a "zero with decimals" value, e.g. 0.000M.
Code to reproduce:
var originals = new[] { 1M, 1.000M, 0M, 0.000M };
foreach (var original in originals)
{
    var dec128 = new Decimal128(original);
    var newDecimal = Decimal128.ToDecimal(dec128);
    if (!decimal.GetBits(original).SequenceEqual(decimal.GetBits(newDecimal)))
        Console.Write("!");
    Console.WriteLine($"{original} = {dec128} = {newDecimal}");
}
The last value, 0.000M, is converted back to a plain decimal.Zero, so the scale (the three decimal places) is lost and the GetBits comparison above fails.
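For reference, 0M and 0.000M differ only in the scale stored in the fourth element returned by decimal.GetBits, which is why the comparison flags the round trip (this illustration is not part of the original repro):

// Illustration only: the scale lives in bits 16-23 of the fourth GetBits element.
var zeroBits = decimal.GetBits(0M);           // [0, 0, 0, 0x00000000]
var scaledZeroBits = decimal.GetBits(0.000M); // [0, 0, 0, 0x00030000] (scale = 3)
Console.WriteLine(zeroBits[3] == scaledZeroBits[3]); // False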
I think this is due to line 719 in Decimal128.cs:
if (Decimal128.IsZero(d)) { return decimal.Zero; }
I don't believe this is the correct behavior; the round trip should preserve the scale of the original value.
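As an illustration only (not the driver's actual code), the zero case could instead preserve the scale implied by the Decimal128 exponent; GetExponent below is a hypothetical accessor for the stored exponent:

// Hypothetical sketch, not the existing Decimal128.ToDecimal implementation.
// Instead of collapsing every zero to decimal.Zero, keep the scale implied
// by the Decimal128 exponent so 0.000M round-trips with its trailing zeros.
if (Decimal128.IsZero(d))
{
    var exponent = GetExponent(d);                           // hypothetical accessor for the stored exponent
    var scale = (byte)Math.Min(Math.Max(-exponent, 0), 28);  // decimal supports scales 0..28
    return new decimal(0, 0, 0, false, scale);               // zero with the original number of decimal places
}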