I was reading through some code the other day and was surprised to find that it was using doubles to represent dollar amounts. The reason for the alarm bells is that doubles and floats cannot accurately represent many decimal fractions (e.g., 0.1), since they internally work with powers of 2 rather than powers of 10. These inaccuracies are likely to lead to significant errors, especially when performing arithmetic (e.g., adding up a table of dollar amounts). See this IBM article for a more in-depth explanation and examples. The solution is to use types that work with powers of ten internally. In C#, you can use ‘decimal’, and in Java or Ruby, ‘BigDecimal’, to avoid these problems.
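A quick sketch of the problem in Java (one of the languages mentioned above): summing 0.1 ten times with doubles does not give exactly 1.0, while BigDecimal gets it right.

```java
import java.math.BigDecimal;

public class MoneyDemo {
    public static void main(String[] args) {
        // With doubles, ten 0.1s do not sum to exactly 1.0,
        // because 0.1 has no exact binary representation.
        double doubleTotal = 0.0;
        for (int i = 0; i < 10; i++) {
            doubleTotal += 0.1;
        }
        System.out.println(doubleTotal);   // prints 0.9999999999999999

        // BigDecimal works in powers of ten, so the sum is exact.
        BigDecimal decimalTotal = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) {
            decimalTotal = decimalTotal.add(new BigDecimal("0.1"));
        }
        System.out.println(decimalTotal);  // prints 1.0
    }
}
```

Note that BigDecimal is constructed from the string "0.1" rather than the double literal 0.1, which would bake the binary inaccuracy in before BigDecimal ever saw the value.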
Using C#: You wouldn’t happen to know a nice way to truncate from 7 decimal places down to 4, for example?
All the Decimal members (in .NET 2.0) either apply rounding to their return values, consider only the integer component, or round up.
We see the options as:
1. A small amount of arithmetic to ‘construct’ the correct output. (E.g. maybe rounding and then ‘if’ either subtract or add a delta.)
2. Converting to a string, manipulating it, and then converting substrings back.
Neither of the above feels elegant though.
P.S. Great blog, I’ll be back to have a bit more of a read, especially the book reviews.
What is the namespace for the currency variable?
You could multiply the number by 10 to the power of the number of decimal places you are interested in, truncate the result, and then divide it by the same power of ten. E.g., (x * 10000).truncate() / 10000.
The .NET decimal type is still a floating-point numeric, as it uses a variable scaling factor. It is the fact that the implied base of decimal’s scaling factor is 10 (not 2) that exempts it from the accuracy limitations of the other floating-point data types in representing “real” numbers.
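The same structure is visible in Java's BigDecimal, which exposes its unscaled integer and its base-10 scale directly. A small illustration:

```java
import java.math.BigDecimal;

public class ScaleDemo {
    public static void main(String[] args) {
        // A decimal value is stored as unscaledValue x 10^(-scale),
        // i.e. a floating-point number whose implied base is 10.
        BigDecimal price = new BigDecimal("19.99");
        System.out.println(price.unscaledValue()); // prints 1999
        System.out.println(price.scale());         // prints 2
    }
}
```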
Thanks Michael, the title of this post should really be “Using _base 2_ floating point variables to represent money => not a good idea!”