0.1 + 0.2 != 0.3

Jul 16 2024

Recently someone shared a link to a Javascript quiz that asks various questions about Javascript's infamous quirks. It was a fresh remix of a greatest hit that we've all been jamming to since the early 2000s. And within that quiz was a question that seems to appear in most of these quizzes about Javascript:

What will be the output of this code?
console.log(0.1 + 0.2 == 0.3);

This is a real fun question and it never ceases to amaze at least one React developer each time it's shared in Slack (React, after all, is just the new jQuery, right? That's a joke, guys). This question, though, doesn't have a thing in the world to do with Javascript and, like a curmudgeon, I've been well-actuallying this question since these quizzes first started appearing. In case you haven't already guessed, in Javascript (node) this produces the following output …

$ node math.js
false

Not a Javascript thing at all

Like I said, this is not a problem with Javascript. To prove it, here's similar code in all your favorite (and least favorite) languages. Each of these will print false or an equivalent.

Python
print(0.1 + 0.2 == 0.3)
# False
Ruby
puts(0.1 + 0.2 == 0.3)
# false
PHP
<?php echo (0.1 + 0.2 == 0.3) ? 'true' : 'false';
// false
Java
class Main {
  public static void main(String[] args) {
    System.out.println(0.1 + 0.2 == 0.3);
  }
}
// false
C
#include <stdio.h>
int main() {
  printf("%d\n", 0.1 + 0.2 == 0.3);
}
// 0
Go
package main
import "fmt"
func main() {
  a := 0.1 // force a float64; otherwise the compiler folds the constants at compile time with exact arithmetic
  fmt.Println(a+0.2 == 0.3)
}
// false
Et tu, Rust?
fn main() {
  println!("{}", 0.1 + 0.2 == 0.3);
}
// false

As you can see, while this question/quirk is regularly attributed to Javascript, this is not behavior that is unique to Javascript. In fact, you can see this behavior in any language that uses IEEE 754 to represent floating point values.

What's going on here?

First, I'm going to try my best to avoid a lot of IEEE 754 details in this post. If you want exhaustive coverage, I highly recommend Computer Systems: A Programmer's Perspective [Section 2.4]. For our purposes, a double precision floating point value (which each of the languages above uses) consists of 53 bits [1] for the fractional part (F) of the number and 11 bits for the exponent (E). There is also a sign bit, but we can ignore it entirely here since everything is positive. So we can think of floating point numbers as scientific notation using base 2 instead of base 10.

$F \times 2^{E}$
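
If you want to poke at these fields yourself, here's a quick sketch for node that pulls the raw bits of a double apart with a DataView (bitsOf is my own helper, not a built-in):

// Reinterpret the 8 bytes of a float64 as a 64-bit unsigned integer.
const bitsOf = (x) => {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);
  return view.getBigUint64(0);
};

const bits = bitsOf(0.1);
const sign = bits >> 63n;                                // 1 bit
const exponent = Number((bits >> 52n) & 0x7ffn) - 1023;  // 11 bits, stored with a bias of 1023
const fraction = bits & 0xfffffffffffffn;                // 52 bits (the leading 1 is implicit)

console.log(sign, exponent, fraction.toString(2));
// 0n -4 1001100110011001100110011001100110011001100110011010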

If we look at how 0.1 is represented in binary, we end up with this:

$0.1_{10} = 1.10011001100110011010_2 \times 2^{-4}$

You will notice that the fractional part is repeating. In fact, much like ⅓ in base 10 (0.3333…), the fractional part repeats indefinitely. (I'm abbreviating the significands to 20 bits of fraction here to keep things readable; the real thing has 52, but the rounding works exactly the same way.) In order to fit this infinitely repeating fraction into 53 bits, we have to round it. In this case we round up, since the bits we cut off start with 1 (the repeating group is 1001). That's how we end up with 1010 at the end, instead of 1001. You may be surprised that 0.2 has the exact same fractional component as 0.1, but 0.2 is just 2 × 0.1, so it makes sense that only the exponent changes.

$0.2_{10} = 1.10011001100110011010_2 \times 2^{-3}$

And finally, here's 0.3. You'll notice that the repeating part of 0.3 is 0011 (3), which lands so that the bits we cut off start with 0, and so it is actually rounded down.

$0.3_{10} = 1.00110011001100110011_2 \times 2^{-2}$
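
You don't have to take my word for any of this, by the way. node will print the binary expansion that actually gets stored, rounding and all (I've abbreviated the middles of the outputs here):

console.log((0.1).toString(2));
// 0.000110011001100110011…001101   (tail rounded up)
console.log((0.2).toString(2));
// 0.00110011001100110011…001101    (same digits, shifted one place)
console.log((0.3).toString(2));
// 0.010011001100110011…0011        (tail rounded down)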

0.1 + 0.2

Now, let's add 0.1 and 0.2 together and see what happens. To add them, we first have to rewrite the number with the smaller exponent (0.1) so that it has the same exponent as the larger one. Then we can simply add the fractional parts using base 2 addition. Here's what that looks like:

$0.110011001100110011010_2 \times 2^{-3} + 1.10011001100110011010_2 \times 2^{-3} = 10.011001100110011001110_2 \times 2^{-3}$

Because of the shifting and the carry, we now have two more bits than we can keep (55 in the real 53-bit case), so we're going to need to round 2 bits away. Normalizing the sum gives us:

$1.0011001100110011001110_2 \times 2^{-2}$

The bits that we need to discard are 10, so we have to round up, giving us:

$0.1_{10} + 0.2_{10} = 1.00110011001100110100_2 \times 2^{-2}$

This value is approximately 0.30000000000000004, which is what we would have seen if we printed the result of 0.1 + 0.2. And since 0.3 itself was rounded down while our sum was rounded up, the two values are ever so slightly different.
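
As one last sanity check, here's a node sketch (reusing the bitsOf helper from earlier) showing that 0.1 + 0.2 and 0.3 are adjacent doubles, exactly one bit apart:

const bitsOf = (x) => {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);
  return view.getBigUint64(0);
};

console.log((0.1 + 0.2).toPrecision(17)); // 0.30000000000000004
console.log((0.3).toPrecision(17));       // 0.29999999999999999
console.log(bitsOf(0.1 + 0.2) - bitsOf(0.3)); // 1n — one bit apart, just as the math above predicts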

1. The fractional part of a double precision floating point number is actually only 52 bits. In normalized form, there is an implicit 1 in front of the fractional part. So while it is 52 bits in memory, it represents a 53-bit value.