5.6 — Relational operators and floating point comparisons

Relational operators are operators that let you compare two values. There are 6 relational operators:

Operator                Symbol  Form     Operation
Greater than            >       x > y    true if x is greater than y, false otherwise
Less than               <       x < y    true if x is less than y, false otherwise
Greater than or equals  >=      x >= y   true if x is greater than or equal to y, false otherwise
Less than or equals     <=      x <= y   true if x is less than or equal to y, false otherwise
Equality                ==      x == y   true if x equals y, false otherwise
Inequality              !=      x != y   true if x does not equal y, false otherwise

You have already seen how most of these work, and they are pretty intuitive. Each of these operators evaluates to the Boolean value true (1) or false (0).

Here’s some sample code using these operators with integers:

And the results from a sample run:

Enter an integer: 4
Enter another integer: 5
4 does not equal 5
4 is less than 5
4 is less than or equal to 5

These operators are extremely straightforward to use when comparing integers.

Boolean conditional values

By default, conditions in an if statement or conditional operator (and a few other places) evaluate as Boolean values.

Many new programmers will write statements like this one:

This is redundant, as the == true doesn’t actually add any value to the condition. Instead, we should write:

Similarly, the following:

is better written as:

Best practice

Don’t add unnecessary == or != to conditions. It makes them harder to read without offering any additional value.

Comparison of floating point values

Consider the following program:

Variables d1 and d2 should both have value 0.01. But this program prints an unexpected result:

d1 > d2

If you inspect the value of d1 and d2 in a debugger, you’d likely see that d1 = 0.0100000000000005116 and d2 = 0.0099999999999997868. Both numbers are close to 0.01, but d1 is slightly greater than 0.01, and d2 is slightly less.

If a high level of precision is required, comparing floating point values using any of the relational operators can be dangerous. This is because floating point values are not precise, and small rounding errors in the floating point operands may cause unexpected results. We discussed rounding errors in lesson 4.8 -- Floating point numbers if you need a refresher.

When the less than and greater than operators (<, <=, >, and >=) are used with floating point values, they will usually produce the correct answer (only potentially failing when the operands are almost identical). Because of this, use of these operators with floating point operands can be acceptable, so long as the consequence of getting a wrong answer when the operands are similar is slight.

For example, consider a game (such as Space Invaders) where you want to determine whether two moving objects (such as a missile and an alien) intersect. If the objects are still far apart, these operators will return the correct answer. If the two objects are extremely close together, you might get an answer either way. In such cases, the wrong answer probably wouldn’t even be noticed (it would just look like a near miss, or near hit) and the game would continue.

Floating point equality

The equality operators (== and !=) are much more troublesome. Consider operator==, which returns true only if its operands are exactly equal. Because even the smallest rounding error will cause two floating point numbers to not be equal, operator== is at high risk for returning false when a true might be expected. Operator!= has the same kind of problem.

For this reason, use of these operators with floating point operands should be avoided.

Best practice

Avoid using operator== and operator!= with floating point operands.

So how can we reasonably compare two floating point operands to see if they are equal?

The most common method of doing floating point equality involves using a function that looks to see if two numbers are almost the same. If they are “close enough”, then we call them equal. The value used to represent “close enough” is traditionally called epsilon. Epsilon is generally defined as a small positive number (e.g. 0.00000001, sometimes written 1e-8).

New developers often try to write their own “close enough” function like this:
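The listing is elided; a minimal sketch of such a naive function (the name approximatelyEqualAbs is an assumption):

```cpp
#include <cmath> // for std::abs

// Return true if a and b are within absEpsilon of each other (absolute comparison)
bool approximatelyEqualAbs(double a, double b, double absEpsilon)
{
    return std::abs(a - b) <= absEpsilon;
}
```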

std::abs() is a function in the <cmath> header that returns the absolute value of its argument. So std::abs(a - b) <= epsilon checks whether the distance between a and b is within whatever epsilon value representing "close enough" was passed in. If a and b are close enough, the function returns true to indicate they're equal. Otherwise, it returns false.

While this function can work, it's not great. An epsilon of 0.00001 is good for inputs around 1.0, too big for inputs around 0.0000001, and too small for inputs like 10,000. This means every time we call this function, we have to pick an epsilon that's appropriate for our inputs. If we know we're going to have to scale epsilon in proportion to our inputs, we might as well modify the function to do that for us.

Donald Knuth, a famous computer scientist, suggested the following method in his book “The Art of Computer Programming, Volume II: Seminumerical Algorithms” (Addison-Wesley, 1969):
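The listing is elided; a sketch of Knuth's method as described in the following paragraphs (the name approximatelyEqual matches later references in the text):

```cpp
#include <algorithm> // for std::max
#include <cmath>     // for std::abs

// Return true if the difference between a and b is within epsilon percent
// of the larger of a and b
bool approximatelyEqual(double a, double b, double relEpsilon)
{
    return (std::abs(a - b) <= (std::max(std::abs(a), std::abs(b)) * relEpsilon));
}
```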

In this case, instead of epsilon being an absolute number, epsilon is now relative to the magnitude of a or b.

Let's examine in more detail how this crazy looking function works. On the left side of the <= operator, std::abs(a - b) tells us the distance between a and b as a positive number.

On the right side of the <= operator, we need to calculate the largest value of "close enough" we're willing to accept. To do this, the algorithm chooses the larger of a and b (as a rough indicator of the overall magnitude of the numbers), and then multiplies it by epsilon. In this function, epsilon represents a percentage. For example, if we want to say "close enough" means a and b are within 1% of the larger of a and b, we pass in an epsilon of 0.01 (1% = 1/100 = 0.01). The value for epsilon can be adjusted to whatever is most appropriate for the circumstances (e.g. an epsilon of 0.002 means within 0.2%).

To do inequality (!=) instead of equality, simply call this function and use the logical NOT operator (!) to flip the result:

Note that while the approximatelyEqual() function will work for most cases, it is not perfect, especially as the numbers approach zero:

Perhaps surprisingly, this returns:

1
0

The second call didn't perform as expected. The math simply breaks down close to zero.

One way to avoid this is to use both an absolute epsilon (as we did in the first approach) and a relative epsilon (as we did in Knuth's approach):
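The listing is elided; a sketch matching the description that follows (the name approximatelyEqualAbsRel matches later references in the text):

```cpp
#include <algorithm> // for std::max
#include <cmath>     // for std::abs

// Return true if the difference between a and b is within absEpsilon,
// or within relEpsilon percent of the larger of a and b
bool approximatelyEqualAbsRel(double a, double b, double absEpsilon, double relEpsilon)
{
    // Check if the numbers are really close -- needed when comparing numbers near zero
    if (std::abs(a - b) <= absEpsilon)
        return true;

    // Otherwise fall back to Knuth's algorithm
    return (std::abs(a - b) <= (std::max(std::abs(a), std::abs(b)) * relEpsilon));
}
```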

In this algorithm, we first check if a and b are close together in absolute terms, which handles the case where a and b are both close to zero. The absEpsilon parameter should be set to something very small (e.g. 1e-12). If that fails, then we fall back to Knuth's algorithm, using the relative epsilon.

Here's our previous code testing both algorithms:


1
0
1
1

You can see that with an appropriately picked absEpsilon, approximatelyEqualAbsRel() handles the small inputs correctly.

Comparison of floating point numbers is a difficult topic, and there's no "one size fits all" algorithm that works for every case. However, the approximatelyEqualAbsRel() function should be good enough to handle most cases you'll encounter.

5.7 -- Logical operators
5.5 -- Comma and conditional operators

175 comments to 5.6 — Relational operators and floating point comparisons

  • It is useful to note that there is still some problem when comparing two large numbers. Consider the following case:

    Formally the algorithm is consistent, `1e15 + 1.0` and `1e15` are equal up to a relative precision of `1e-12`. I was thinking about "renormalizing" the numbers in the following way:

    This way you're always comparing "small" numbers. But I might be missing some obvious downsides of this (other than the performance cost obviously).

    • Muhammad Ali

      So you are essentially using Knuth's algorithm for making the comparison at the last step, but I think that breaks down for numbers close to zero which you end up doing when you renormalize them.

  • json

    So how can we reasonably compare two floating point operands to see if they are equal?

    Does converting the float to an integer first, comparing, and then converting back to a float solve this problem?

    100.00 - 99.99 => 10000.00 - 9999.00 => 1 => .01
    10.00 - 9.99 => 1000.00 - 999.00 => 1 => .01

    You can adjust the precision when converting to an integer; say, for 2 decimal places, just multiply by 10^2.

    Is there any problem with this?

  • Tufa

    What are std::abs and std::max? What do they do?

  • Tufa

    oh yes, my head exploded, I understand the concept but I don't think I will know how to use it right now

  • ChebCheb

    Hmm, seems like one should avoid equality comparisons of floating point numbers when possible.

    For example, instead of writing a code that's comparing let's say X Kilograms and Y Kilograms, where both may have decimals like if X was 5.750 Kilograms.
    better just write a code that compares X Grams and Y Grams, so X would e.g. be 5750 Grams. That way there are no floating point numbers in the first place so those problems just won't show up.

    So, avoiding doubles and instead use ints WHEN equality comparison will take place AND there's no other reason speaking against usage of ints (like the numbers are too big/small or whatever) would be good, right?

    Still trying to figure out if I understand all this :D

  • Mateusz Kacperski

    My brain exploded after this lesson :D

  • SuperNoob

    Damn! I would rather write the function that compares floating point values in python and then just interface with c++ :') Python has some rounding errors too but they are easily manageable with a very slight workaround.

    Also, isn't Space Invaders originally written in assembly? I wrote both Space Invaders and the chrome dinosaur game using python. But running speed is worrisome. The c++ version would be killer I guess!

    This works too.

  • nav

    #include <algorithm>
    #include <iostream>
    #include <cmath>

    // Knuth's relative comparison
    bool approximatelyEqual(long double a, long double b, long double epsilon)
    {
        return (std::abs(a - b) <= (std::max(std::abs(a), std::abs(b)) * epsilon));
    }

    int main()
    {
        // std::cout << std::setprecision(17);
        long double epsilon{ 1e-9L };      //  Scientific notation ( 1e-9 )
        long double millisecond{ 0.001L };
        long double second{ 1.0L };

        long double total{ 0.0L };
        for (int i{ 0 }; i < 1000; ++i)    // add up 1000 milliseconds
            total += millisecond;

        if (approximatelyEqual(total, second, epsilon))
            std::cout << "1000 milliseconds and 1 second are approximately equal\n";
        else
            std::cout << "result is : 0 ( false ), 1000 milliseconds and 1 second aren't equal\n";

        return 0;
    }
    • nav

  • Pavel


    In your example:

    There is still room for a wrong result in case the value of variable "diff" falls between the absolute epsilon and the relative epsilon (e.g. 1e-9 to 1e-11), so the IF statement will evaluate the comparison as false and move on to Knuth's algorithm, which will be evaluated as false too. If I am not missing anything, to avoid this issue both absolute and relative epsilon values have to be equal.
    Another point is that this issue only appears when the biggest compared number is less than 1, so as an option we can use the code below. If you could kindly check this out, that would be perfect:

  • Pavel


    I apologize in advance if i missed the explanation to my question, but is there any particular reason to complicate the code with absolute/relative epsilon approach instead of just adding more accuracy to relative epsilon value (e.g. not 1e-8, but 1e-12 or 1e-15 straight away)? As from your example here "std::cout << approximatelyEqualAbsRel(a-1.0, 0.0, 1e-12, 1e-8)" in theory, the difference between "a" and "b" can be higher than absolute epsilon value as well, which will consequently print "false" while it is actually not. Thanks.

  • Pavel

    I am a bit confused, which chapter should i move on next after i complete chapter#? To chapter# L as it continues 5... numbering? Or i dont need to jump and can just follow as per the current setup in contents?

  • Jacob

    I was quite amused to see the "close enough" scheme used here.  In my Perl package Math::Yapp (Yet Another Polynomial Package) I had to do the same thing, except I was dealing with complex numbers.  In that case I needed to see how close a complex number is to being a solution to a polynomial equation so I was almost always comparing it to 0.0.

    And yes, at 9th degree polynomials, the rounding errors suddenly got so severe that I had put a disclaimer in the documentation.  And I didn't even bother with the Gram-Schmidt orthogonalization process; it was a losing proposition even at low degree.

  • TavonCpp

    holy shit, is comparing floating points this complicated in all programming languages?

    • nascardriver

      I've honestly never needed to do this. If you're using floating point numbers, you'll almost exclusively use <, <=, >, >= and not ==, so there's almost no need for the functions shown here.
      If you do find yourself in a situation where you need to compare floating point numbers for equality, you should try to reformulate the problem in integers instead.

  • Chayim

    What does it mean that 'max' multiplies the greatest 'abs' by epsilon?

    • samivagyok

      First, the abs function takes the absolute value of the parameter in the parentheses (e.g. abs(-6) = 6, abs(2) = 2), and after that the max function returns the bigger value out of the parameters (e.g. max(2, 6), will return 6, because 6>2). And after that the returned value from max() is multiplied by the epsilon value, which is user given. Hope the explanation helped.

  • Chayim

    Why is an - used (a - b) and not a comma (a, b) and how does - work?

    • nascardriver

      `std::abs` has a single parameter, you can only give it 1 value.
      If you do 5-3, you get 2. That's the distance between 3 and 5.
      But if you swap them around, 3-5, you get -2. Not what you want, so you use `std::abs` to convert it to positive 2.

  • Chayim

    Can you complete this code you mentioned:

    to show how to compile and execute it?

    • nascardriver

  • Chayim

    Why can’t the compiler use precise double floating point? It’s weird and awkward that computing should make such an error and why? Can the compiler not calculate? A computer does exactly what it’s programmed. Maybe it’s because of how the transistors operate and calculate, I’m not familiar 100% on hardware computer science.

  • If you're including windows.h, then you'll need to parenthesize std::max, something like this

    to prevent an "Illegal token on right side of '::'" compile error, which is caused by the preprocessor confusing the function call with windows.h's max macro.

  • It's+Me

    Yeah, This time my mind really got f****

  • swaraj

    in the last example wouldn't it be a better choice to define real-epsilon as a const variable rather than a function parameter

  • notaduck448

    Would rounding (to the 4th or 5th decimal place) be an option for comparing floating-point numbers? The float comparison algorithm just seems unnecessarily complicated.

  • John

    Hello, do I need to understand how the floating point equality functions work? I understand what is going on there.

  • Timothy Quintaro

    I'm so confused. Shouldn't it print out true or false rather than 1 or 0? Cuz the method's return type is a bool, not an int.

  • Raffaello

    did I just learn my first algorithm? :) a question for @alex and @nascardriver, have you read the whole art of computer programming?
    also, I wanted to see outputs of the last code in decimal form, without going through it with a debugger, so I added the header file iomanip, and used in main std::setprecision to 16 digits, but it didn't work :\

    • nascardriver


      Everything you learned so far is an algorithm, at least according to Wikipedia's definition.
      I don't think anyone can know everything about computer programming. I'm a passionate C++ developer, I just like the language and try to learn more about it. C++ is huge, even after years of use, you'll learn something new every now and then.

      Setting the precision to 16 should be enough to see a change in the output. 16 is a magic number, avoid magic numbers. You can obtain the precision necessary to represent every double through `std::numeric_limits` from the <limits> header:
