I understand this is not a bug so much as intended behavior, but I think it's the wrong behavior:
Right now, when deserializing a float into a float32, the decoder silently returns 0 if the value is too precise to represent in 32 bits, without reporting an error. I think the ideal solution would be to return an error, but that would require major changes. If we have to fail silently, the rounded (imprecise) value seems better than 0.
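To illustrate what "the imprecise number" would look like (the decoder itself isn't named here, so this is just a sketch of ordinary Go conversion semantics, not the library's API):

```go
package main

import "fmt"

func main() {
	// A float64 value with more precision than float32 can hold.
	precise := 0.1234567890123456789

	// A plain Go conversion rounds to the nearest float32 (~0.12345679)
	// rather than dropping to 0, which is the behavior suggested above.
	rounded := float32(precise)

	fmt.Println(precise) // 0.12345678901234568
	fmt.Println(rounded) // 0.12345679
}
```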