Truncating a number to a specific number of decimal places
Today I encountered a strange problem with a code snippet I wrote, and it took a while to figure out what the issue was: using the type Double for fractional calculations. Our application needs to truncate a number to a specified number of decimal places. Math.Truncate(...) alone only truncates to an integer, so it has to be extended to control the number of decimal places.
The initial version of the custom Truncate function was:
public static double Truncate(double value, int decimals)
{
    double factor = Math.Pow(10, decimals);
    double result = Math.Truncate(factor * value) / factor;
    return result;
}
When this was unit tested through a console application, LINQPad, and MSTest, all the tests passed. But after it was published to production, we occasionally noticed that the decimal values were being rounded down, deviating slightly from the expected results.
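The problem is easy to reproduce once you know where to look. The sketch below (my own illustrative values, not from our production data) uses 4.35, a classic case where the nearest representable double is slightly below the decimal value, so multiplying by 100 lands just under 435 and Math.Truncate drops a whole cent:

```csharp
using System;

public static class TruncateDemo
{
    // The original double-based implementation.
    public static double Truncate(double value, int decimals)
    {
        double factor = Math.Pow(10, decimals);
        return Math.Truncate(factor * value) / factor;
    }

    public static void Main()
    {
        // 4.35 is not exactly representable in binary floating point;
        // the nearest double is slightly below 4.35, so 4.35 * 100
        // evaluates to 434.99999999999994 rather than 435.
        Console.WriteLine(4.35 * 100);          // 434.99999999999994
        Console.WriteLine(Truncate(4.35, 2));   // 4.34, not the expected 4.35
    }
}
```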
After troubleshooting all the other application functions related to the calculations, I found that the extended Truncate function was the culprit.
Four or five years ago I wrote a similar function in VB.NET, for a different project at a different company, using the Decimal type, and it worked without issues. I used Double this time because all the other variables in the application that use this function were of type Double, and I wanted to avoid casting.
So, I changed the type to Decimal and tested the application again. All the results that had previously shown wrong values now showed the correct values as expected.
The final version of the function is:
public static decimal Truncate(decimal value, int decimals)
{
    decimal factor = (decimal)Math.Pow(10, decimals);
    decimal result = Math.Truncate(factor * value) / factor;
    return result;
}
… and I changed the overloaded function to delegate to it, so the existing Double-based code keeps working without casting all over the place.
public static double Truncate(double value, int decimals)
{
    return (double)Truncate((decimal)value, decimals);
}
So, whenever you are dealing with calculations where decimal precision is important, Decimal is the better type to use.
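The underlying difference shows up even without the Truncate function. A quick sanity check (textbook values, nothing from our application) contrasts binary floating point, which cannot represent most decimal fractions exactly, with System.Decimal, which stores base-10 digits:

```csharp
using System;

class Program
{
    static void Main()
    {
        // Binary floating point: 0.1 and 0.2 are not exactly
        // representable, so the sum carries a tiny error.
        Console.WriteLine(0.1 + 0.2 == 0.3);    // False
        Console.WriteLine(0.1 + 0.2);           // 0.30000000000000004

        // Decimal stores base-10 digits, so the same sum is exact.
        Console.WriteLine(0.1m + 0.2m == 0.3m); // True
    }
}
```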