I was reading through some code the other day and was surprised to find that it was using doubles to represent dollar amounts. The alarm bells went off because doubles and floats cannot accurately represent many decimal fractions (e.g., 0.1), since they work internally with powers of 2 rather than powers of 10. These inaccuracies are likely to lead to significant errors, especially when performing arithmetic (e.g., adding up a table of dollar amounts). See this IBM article for a more in-depth explanation and examples.

The solution is to use types that work with powers of ten internally: in C#, 'decimal'; in Java or Ruby, 'BigDecimal'.
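
To make the difference concrete, here is a small Java sketch (the class name MoneyDemo and the loop of ten 0.10 amounts are just for illustration) that sums the same dollar amount with double and with BigDecimal:

    import java.math.BigDecimal;

    public class MoneyDemo {
        public static void main(String[] args) {
            // Summing ten 0.10 amounts with double accumulates binary rounding error.
            double doubleTotal = 0.0;
            for (int i = 0; i < 10; i++) {
                doubleTotal += 0.10;
            }
            System.out.println(doubleTotal);   // prints 0.9999999999999999, not 1.0

            // BigDecimal constructed from a String represents 0.10 exactly,
            // so the sum comes out as exactly 1.00.
            BigDecimal decimalTotal = BigDecimal.ZERO;
            for (int i = 0; i < 10; i++) {
                decimalTotal = decimalTotal.add(new BigDecimal("0.10"));
            }
            System.out.println(decimalTotal);  // prints 1.00
        }
    }

Note that the BigDecimal is built from the string "0.10" rather than the double literal 0.10; passing a double would bake the binary rounding error into the BigDecimal before the addition even starts.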