JavaScript and Floating-Point Arithmetic

We (i.e. anyone who has played with JavaScript) have all heard that floating-point numbers are tricky and borderline impossible to deal with. While this is not exclusive to JS, it is worth knowing a thing or two about the limitations of floating-point arithmetic.

Let’s start with a well-known example:

var a = 0.1 + 0.2;
a === 0.3;        // false
console.log(a);   // 0.30000000000000004

The usual workarounds are to round the result with the toFixed() method on Number.prototype, or to scale everything into integers, perform the calculations, and then convert back into decimals. Neither approach is guaranteed to produce the correct result, especially in complex calculations involving many floating-point variables.
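Both workarounds can be sketched in a few lines. The helper names below (nearlyEqual, addDecimals) are my own for illustration, not standard APIs:

```javascript
// 1. Compare with a tolerance instead of strict equality.
function nearlyEqual(a, b, epsilon = Number.EPSILON) {
  return Math.abs(a - b) < epsilon;
}

// 2. Scale to integers, compute, then scale back.
//    Works for a known, fixed number of decimal places (e.g. currency in cents).
function addDecimals(a, b, places = 2) {
  const factor = 10 ** places;
  return (Math.round(a * factor) + Math.round(b * factor)) / factor;
}

console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
console.log(addDecimals(0.1, 0.2));       // 0.3
```

Note that the integer-scaling trick only pushes the problem around: it works here because the scaled values round to exact integers, but it breaks down once the numbers exceed the safe integer range or the decimal places aren’t fixed.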

I’ve found that the best way to understand floating-point problems is to use the decimal system most humans are so used to. Try expressing 1/3 in decimal as precisely as you can. There is literally no way to write it exactly. There are workarounds, like 0.333… repeating, but these only confirm our inability to express 1/3 in decimal. Something similar is happening with JavaScript and floating-point numbers.

Anyone who has taken an intro class in calculus will be familiar with Zeno’s paradox. To summarize it: the partial sums of 1 + 1/2 + 1/4 + 1/8 + … approach 2 but never equal 2, because each step only halves our remaining distance from 2. That is exactly what is going on when JavaScript tries to express certain floating-point values.
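You can watch the halving happen in code. Each term here is an exact power of two, so the partial sum is computed exactly, and it still never reaches 2:

```javascript
// Sum the series 1 + 1/2 + 1/4 + ... for 20 terms.
let sum = 0;
let term = 1;
for (let i = 0; i < 20; i++) {
  sum += term;  // every term is a power of 2, so this addition is exact
  term /= 2;
}

console.log(sum);       // 1.9999980926513672 — close to 2, but short by 2^-19
console.log(sum === 2); // false
```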

Consider these binary-to-decimal conversions:

Integers:
Binary: 1 => Decimal: 1
Binary: 10 => Decimal: 2
Binary: 1101 => Decimal: 13

Floating points:
Binary: 0.1 => Decimal: 0.5
Binary: 0.0101 => Decimal: 0.3125
Binary: 0.00011001 => Decimal: 0.09765625
Binary: 0.00011001100110011 => Decimal: 0.09999847412109375

As you can see from above, the binary value is getting closer and closer to 0.1 (in decimal) but never actually equals it. It is a shortcoming of expressing certain floating points in binary, in the same way we can never fully express certain fractions (e.g. 1/3) in decimal. You can try this with pretty much any base system (try expressing 0.1 (decimal) in base 3).
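You can even see the stored approximation directly from JavaScript. toFixed() with a large precision reveals digits beyond what console.log normally shows:

```javascript
// The double that actually stores 0.1 is very slightly above 0.1:
console.log((0.1).toFixed(20)); // 0.10000000000000000555
```

That tiny excess is invisible in normal output (JS prints the shortest decimal that round-trips to the same double), but it is exactly the kind of residue that accumulates in arithmetic.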

To answer our original issue (i.e. 0.1 + 0.2): the operands are converted into binary, the sum is evaluated, and the result is converted back into decimal. Because the binary expansions of 0.1 and 0.2 repeat forever, they must be truncated to a finite number of bits. (JavaScript actually uses 64-bit IEEE 754 doubles with 53 significant bits; the 32-bit expansions below are a simplification to keep the example readable.) The addition then becomes:

0.00011001100110011001100110011001 //approx. of 0.1
+
0.00110011001100110011001100110011 //approx. of 0.2
__________________________________

0.01001100110011001100110011001100 //the actual result in binary to be converted into decimal
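You can verify this addition yourself. Here is a small illustrative helper (binFracToNumber is my own name, not a built-in) that converts a binary-fraction string to a Number, applied to the two truncated expansions above:

```javascript
// Convert a binary-fraction string like "0.0101" to a Number
// by summing 2^-(i+1) for each '1' bit after the point.
function binFracToNumber(bits) {
  const frac = bits.split(".")[1] || "";
  let value = 0;
  for (let i = 0; i < frac.length; i++) {
    if (frac[i] === "1") value += 2 ** -(i + 1);
  }
  return value;
}

const approxA = binFracToNumber("0.00011001100110011001100110011001"); // ≈ 0.1
const approxB = binFracToNumber("0.00110011001100110011001100110011"); // ≈ 0.2

console.log(approxA + approxB); // very close to 0.3, but not exactly 0.3
```

Each 32-bit truncation is short of its true value by less than 2^-32, so the sum lands within about a billionth of 0.3 without ever hitting it.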

Want to try something even more fun?

for (var i = 0, x = 0; i < 10; i++) {
  x += 0.1;  // increment x by 0.1, ten times
}

console.log(x); // 0.9999999999999999
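One way to keep a loop like this exact is to count in integers and divide once at the end, rather than repeatedly adding an inexact 0.1. A minimal sketch:

```javascript
// Count whole tenths with exact integer arithmetic,
// and perform the single inexact division at the very end.
let tenths = 0;
for (let i = 0; i < 10; i++) {
  tenths += 1;
}

console.log(tenths / 10); // 1
```

The error can no longer accumulate because there is only one floating-point operation instead of ten.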

PS: I should emphasize that this isn’t unique to JavaScript; most languages have this issue by default. I just used JavaScript because it’s the language I’m most comfortable expressing the idea in.