3.5 — Relational operators (comparisons)

There are 6 relational operators:

Operator               | Symbol | Form   | Operation
Greater than           | >      | x > y  | true if x is greater than y, false otherwise
Less than              | <      | x < y  | true if x is less than y, false otherwise
Greater than or equals | >=     | x >= y | true if x is greater than or equal to y, false otherwise
Less than or equals    | <=     | x <= y | true if x is less than or equal to y, false otherwise
Equality               | ==     | x == y | true if x equals y, false otherwise
Inequality             | !=     | x != y | true if x does not equal y, false otherwise

You have already seen how all of these work, and they are pretty intuitive. Each of these operators evaluates to the boolean value true (1) or false (0).

Here’s some sample code using these operators with integers:
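
For example (a minimal sketch; the lesson's original listing may differ in details):

#include <iostream>

int main()
{
    std::cout << "Enter an integer: ";
    int x{};
    std::cin >> x;

    std::cout << "Enter another integer: ";
    int y{};
    std::cin >> y;

    if (x == y)
        std::cout << x << " equals " << y << '\n';
    if (x != y)
        std::cout << x << " does not equal " << y << '\n';
    if (x > y)
        std::cout << x << " is greater than " << y << '\n';
    if (x < y)
        std::cout << x << " is less than " << y << '\n';
    if (x >= y)
        std::cout << x << " is greater than or equal to " << y << '\n';
    if (x <= y)
        std::cout << x << " is less than or equal to " << y << '\n';

    return 0;
}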

And the results from a sample run:

Enter an integer: 4
Enter another integer: 5
4 does not equal 5
4 is less than 5
4 is less than or equal to 5

These operators are extremely straightforward to use when comparing integers.

Comparison of floating point values

Directly comparing floating point values using any of these operators is dangerous. This is because small rounding errors in the floating point operands may cause unexpected results. We discussed rounding errors in detail in section 2.5 -- floating point numbers.

Here’s an example of rounding errors causing unexpected results:
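
A sketch of such a program (the exact expressions used to compute d1 and d2 are illustrative; any two calculations that should both yield 0.01 will do):

#include <iostream>

int main()
{
    double d1{ 100.0 - 99.99 }; // should equal 0.01
    double d2{ 10.0 - 9.99 };   // should also equal 0.01

    if (d1 == d2)
        std::cout << "d1 == d2" << '\n';
    else if (d1 > d2)
        std::cout << "d1 > d2" << '\n';
    else
        std::cout << "d1 < d2" << '\n';

    return 0;
}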

This program prints an unexpected result:

d1 > d2

In the above program, d1 = 0.0100000000000005116 and d2 = 0.0099999999999997868. Both numbers are close to 0.01, but d1 is slightly greater than 0.01 and d2 is slightly less than 0.01, so the two are not equal to each other.

Sometimes the need to do floating point comparisons is unavoidable. In such cases, the less than and greater than operators (<, <=, >, and >=) are typically used with floating point values as normal. They will produce the correct result most of the time, only potentially failing when the two operands are nearly identical. Because of the way these operators tend to be used, a wrong result usually has only slight consequences.

The equality operators are much more troublesome, since even the smallest rounding error makes them completely unreliable. Consequently, using operator== or operator!= on floating point numbers is not advised.

The most common method of testing floating point equality involves a function that calculates how close the two values are to each other. If the two numbers are "close enough", we call them equal. The value used to represent "close enough" is traditionally called epsilon. Epsilon is generally defined as a small number (e.g. 0.0000001).

New developers often try to write their own “close enough” function like this:
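
Something like this (a sketch; the function name isAlmostEqual is illustrative):

#include <cmath> // for std::fabs()

// epsilon is an absolute value
bool isAlmostEqual(double a, double b, double epsilon)
{
    // if the distance between a and b is within epsilon, then a and b are "close enough"
    return std::fabs(a - b) <= epsilon;
}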

fabs() is a function in the <cmath> library that returns the absolute value of its parameter. fabs(a - b) returns the distance between a and b as a positive number. This function checks if the distance between a and b is less than whatever epsilon value representing “close enough” was passed in. If a and b are close enough, the function returns true.

While this works, it’s not great. An epsilon of 0.00001 is good for inputs around 1.0, too big for numbers around 0.0000001, and too small for numbers like 10,000. This means every time we call this function, we have to pick an epsilon that’s appropriate for our inputs. If we know we’re going to have to scale epsilon in proportion to our inputs, we might as well modify the function to do that for us.

Donald Knuth, a famous computer scientist, suggested the following method in his book “The Art of Computer Programming, Volume II: Seminumerical Algorithms” (Addison-Wesley, 1969):
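
A sketch of that method:

#include <cmath> // for std::fabs()

// return true if the difference between a and b is within epsilon percent of the larger of a and b
bool approximatelyEqual(double a, double b, double epsilon)
{
    return std::fabs(a - b) <= (std::fabs(a) < std::fabs(b) ? std::fabs(b) : std::fabs(a)) * epsilon;
}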

In this case, instead of using epsilon as an absolute number, we’re using epsilon as a multiplier, so its effect is relative to our inputs.

Let’s examine in more detail how the approximatelyEqual() function works. On the left side of the <= operator, the absolute value of a - b tells us the distance between a and b as a positive number. On the right side, we calculate the largest value of "close enough" we're willing to accept. To do this, the algorithm chooses the larger of a and b (as a rough indicator of the overall magnitude of the numbers) and multiplies it by epsilon.

In this function, epsilon represents a percentage. For example, if we want "close enough" to mean that a and b are within 1% of the larger of the two, we pass in an epsilon of 1% (1% = 1/100 = 0.01). The value for epsilon can be adjusted to whatever is most appropriate for the circumstances (e.g. 0.01% = an epsilon of 0.0001).

To do inequality (!=) instead of equality, simply call this function and use the logical NOT operator (!) to flip the result:
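
For example (assuming the approximatelyEqual() function sketched above; the values are illustrative):

#include <iostream>

int main()
{
    double a{ 0.1 + 0.1 + 0.1 };
    double b{ 0.3 };

    // logical NOT flips "approximately equal" into "not approximately equal"
    if (!approximatelyEqual(a, b, 1e-8))
        std::cout << "a and b are not approximately equal\n";
    else
        std::cout << "a and b are approximately equal\n";

    return 0;
}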

Note that while the approximatelyEqual() function will work for many cases, it is not perfect, especially as the numbers approach zero:
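
A sketch of such a test (using the approximatelyEqual() above; summing ten 0.1s produces a value slightly less than 1.0 due to rounding error):

#include <iostream>

int main()
{
    // a is very close to 1.0, but rounding errors make it slightly smaller
    double a{ 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 };

    // compare "almost 1.0" to 1.0
    std::cout << approximatelyEqual(a, 1.0, 1e-8) << '\n';

    // compare "almost 0.0" to 0.0
    std::cout << approximatelyEqual(a - 1.0, 0.0, 1e-8) << '\n';

    return 0;
}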

Perhaps surprisingly, this returns:

1
0

The second call didn’t perform as expected. The math simply breaks down close to zero.

One way to avoid this is to use both an absolute epsilon (as we did in the first approach) and a relative epsilon (as we did in Knuth’s approach):
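
A sketch of the combined function:

#include <cmath> // for std::fabs()

// return true if the difference between a and b is within absEpsilon,
// or within relEpsilon percent of the larger of a and b
bool approximatelyEqualAbsRel(double a, double b, double absEpsilon, double relEpsilon)
{
    // Check if the numbers are really close -- needed when comparing numbers near zero
    double diff{ std::fabs(a - b) };
    if (diff <= absEpsilon)
        return true;

    // Otherwise fall back to Knuth's algorithm
    return diff <= (std::fabs(a) < std::fabs(b) ? std::fabs(b) : std::fabs(a)) * relEpsilon;
}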

In this algorithm, we’ve added a new parameter: absEpsilon. First, we check to see if the distance between a and b is less than our absEpsilon, which should be set at something very small (e.g. 1e-12). This handles the case where a and b are both close to zero. If that fails, then we fall back to Knuth’s algorithm.

Here’s our previous code testing both algorithms:
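
(A sketch assuming both functions above are defined; absEpsilon is set to 1e-12 and relEpsilon to 1e-8.)

#include <iostream>

int main()
{
    // almost 1.0, but with a small rounding error
    double a{ 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 };

    std::cout << approximatelyEqual(a, 1.0, 1e-8) << '\n';                    // compare "almost 1.0" to 1.0
    std::cout << approximatelyEqual(a - 1.0, 0.0, 1e-8) << '\n';              // compare "almost 0.0" to 0.0
    std::cout << approximatelyEqualAbsRel(a - 1.0, 0.0, 1e-12, 1e-8) << '\n'; // compare "almost 0.0" to 0.0

    return 0;
}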

1
0
1

You can see that with an appropriately picked absEpsilon, approximatelyEqualAbsRel() handles the small inputs correctly.

Comparison of floating point numbers is a difficult topic, and there’s no “one size fits all” algorithm that works for every case. However, the approximatelyEqualAbsRel() function should be good enough to handle most cases you’ll encounter.

108 comments to 3.5 — Relational operators (comparisons)

  • Hue Saki

    We discuss rounding errors in detail in section 2.5 -- floating point numbers.

    discuss should be discussed.

  • Benur21

    Why in the last code, the second function call returns 0?

    The browser javascript console returns 0.9999999999999999 for (0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1), and returns -1.1102230246251565e-16 for (0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1)-1.0.

    According to https://www.calculatorsoup.com/calculators/math/scientific-notation-converter.php, -1.1102230246251565e-16 is -0.00000000000000011102230246251565.

    Isn't it close enough to 0?

    • Alex

      Yes, but as noted in the lesson, the math for approximatelyEqual() breaks down close to 0. If you're curious as to why, you can assign fabs(a - b) and ( (fabs(a) < fabs(b) ? fabs(b) : fabs(a)) * epsilon) to doubles and inspect them in a debugger.

      That's the impetus for the improved version.

  • jason guo

    I am really confused with the ? and : alternative to if and else. I think that if and else is soooooooo much easier to read. Could you change the approximatelyEqual function to if and else format please in this comment section?

    • Alex

      If/else might be easier to read in isolation, but sometimes it can be much less concise, which can actually make it harder to read.

      Here's the requested function using an if/else. It went from 1 line to 6!
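
      (The originally posted listing isn't preserved here; one way it might have looked:)

      #include <cmath> // for std::fabs()

      bool approximatelyEqual(double a, double b, double epsilon)
      {
          double larger{};
          if (std::fabs(a) < std::fabs(b))
              larger = std::fabs(b);
          else
              larger = std::fabs(a);

          return std::fabs(a - b) <= larger * epsilon;
      }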

  • hero

    hi,
    can you please explain this function here

    I understand that if b is smaller than a we do fabs(a) * relEpsilon, but if a is smaller than b then we just return b?
    shouldn't it be like this?

    (also, where did your website get this pic from? I can't remember which site I gave it to and would like to deactivate it, thx)

    • > shouldn't it be like this?
      Without knowing what the code is supposed to achieve, there's no way to answer this.

      > where did your website get this pic from?
      gravatar

      • hero

        thx for the reply, I was just confused by the function that I posted, it's from this site. I thought the idea behind Donald Knuth's function was that it first checks which number (a or b) is smaller and then multiplies that by relEpsilon. With how the function is written,

        I don't see how it would do a*relEpsilon, I only see that function doing b*relEpsilon if b is smaller than a.

  • DecSco

    Wouldn't it be possible to use the floating point standard to just compare a specified amount of significant digits (bitwise)? While that may be less intuitive, and possibly not as precise, it could be a much quicker way. Or am I wrong?

    • nascardriver

      Hi DecSco!

      In order for this to be more efficient it'd need to be implemented on CPU-level.
      If you're doing this through software the overhead by extracting the bits would be higher than the performance gain.

      • DecSco

        I thought if you use bitwise operators, it is in fact on that level - I mean, that's why you'd use C/C++ rather than say Python. Or do you mean you'd need specific hardware for that like FPUs?

        • nascardriver

          Hardware is what I meant. You wouldn't need a new component; it could be added to the FPU. Shrinking data types isn't usually done for performance gain but for network traffic reduction. For example, ARMA, or PUBG I think, is using a custom data type to transmit radar information.

  • Dear Teacher, please let me add to your comment "The math simply breaks down close to zero" that the machines break down after the 64th binary digit, and sometimes before that. Regards.

  • Samira Ferdi

    I tried the floating point comparison problem. I made some changes to the code to figure out what the values of d1 and d2 are. I'm using the Code::Blocks 17.12 IDE set to the C++14 compiler. It prints d1 > d2, but the output of d1 is 0.01 and d2 is 0.01 too. Why does this happen? The code is below.

    • nascardriver

      Hi Samira!

      @std::cout by default rounds floating point numbers. You can change this with @std::setprecision.
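
      (The posted code isn't preserved; a sketch along these lines reproduces the output below:)

      #include <iomanip>  // for std::setprecision
      #include <iostream>

      int main()
      {
          double d1{ 100.0 - 99.99 }; // should equal 0.01
          double d2{ 10.0 - 9.99 };   // should also equal 0.01

          if (d1 > d2)
              std::cout << "d1 > d2" << '\n';

          std::cout << std::setprecision(15); // show more significant digits
          std::cout << "d1 = " << d1 << '\n';
          std::cout << "d2 = " << d2 << '\n';

          return 0;
      }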

      Output

      d1 > d2
      d1 = 0.0100000000000051
      d2 = 0.00999999999999979

  • john smith

    "First, we check to see if the distance between a and b is less than our absEpsilon, which should be set at something very small (e.g. 1e-12). This handles the case where a and b are both close to zero."

    I'm not sure I understand this. Isn't it possible for a and b to be very close to each other and still be both large? Shouldn't we rather check e.g. "((a>b)?a:b)<absEpsilon"?

    If I understand correctly, the problem here is that for very small numbers, the relative epsilon turns out to be exactly zero - very small a or b multiplied by very small percentage exceeds float/double precision (smaller than smallest possible float/double value). Do I get it right?

  • RryanT

    Hi Alex!,
    Thank you so much for the update, it's helping me a lot!

  • Cosmin

    Hi, Alex! Thank you very much for all of the effort! Writing the last function in order to test if it works, I wondered if it is possible to create my own library (I think that's how it is called), something similar to the libraries C++ uses. In this library I can put some useful functions, such as proxEquelAbsRel() function, and use the library when I need those functions, instead of writing them every time.

    • nascardriver

      Hi Cosmin!
      You can have classes/functions that you know you'll be using in other projects in a separate directory and tell your compiler to make those files available in your project.
      You could also write those functions/classes and compile them into a library; this way you won't need to compile them every time you start a new project.
      How this is done depends on the compiler/IDE you're using, so I won't go into detail.
      In my experience option 1 is better to work with, because you'll find yourself making changes to the shared code while working on projects. This is easier without pre-compiled libraries.

    • Alex

      Yes! There are two options:
      1) Put the code you want to share in a central directory somewhere (e.g. c:\cpplibrary\). This includes both the .cpp files and the .h files. Then you'll need to do two things. First, your compiler settings should have an "include path" that will instruct the compiler where to look for #included files. Add the location of this central directory to the include path. That way, when you #include "somefile.h", the compiler will know where to find it. Second, make sure you add any .cpp files you use directly into your project. This will cause the compiler to compile the function definitions into your program.
      2) Create a precompiled library. I don't recommend this because it's more work and not particularly necessary if you're not distributing your code. But it is an option.

  • Vitor Finotti

    In the approximatelyEqual function you used:

    You say "On the right side of the <= operator, we need to calculate the largest value of "close enough" we're willing to accept. To do this, the algorithm chooses the larger of a and b (as a rough indicator of the overall magnitude of the numbers)..." Doesn't the algorithm return the smaller one?

    • nascardriver

      Hi Vitor!
      The syntax of the conditional operator is as follows:
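
      (The posted snippet isn't preserved; the general form is:)

      condition ? expression_if_true : expression_if_false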

      Let's look at an example similar to the code in approximatelyEqual:
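
      (A sketch of the kind of example that was likely shown:)

      #include <cmath>
      #include <iostream>

      int main()
      {
          double a{ -2.0 };
          double b{ 5.0 };

          // if |a| is smaller than |b|, pick |b|, otherwise pick |a|
          double larger{ (std::fabs(a) < std::fabs(b)) ? std::fabs(b) : std::fabs(a) };

          std::cout << larger << '\n'; // prints 5

          return 0;
      }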

      If you're still not quite certain how this works, you might want to have another look at lesson 3.4 (Sizeof, comma, and conditional operators).

  • Hi!

    Thanks for these great updated tutorials :).

    Why, instead of Knuth's solution, don't you just round to a specific decimal place? For example, I updated your code to:

  • Katrina

    Hi, I tried this example with floats instead of doubles, and I got d1 == d2. I thought that floats had less precision than doubles, so I am confused. Thanks!

    • Alex

      Yes, floats have less precision. This means they may represent numbers slightly differently than doubles (e.g. round at a different place). As a result, something like this may express differently with floats and doubles. Which is yet another reason to avoid direct comparisons!

  • Luhan

    Here:

    Shouldn't the place I pointed out have an 'else'?

    • Alex

      It could, but in this case, it's not necessary. If the if statement's conditional is true, then the function returns true immediately. That means the bottom code will only execute when the conditional was false, which is what we want anyway.

  • Angel

    "In this algorithm, we’ve added a new parameter: absEpsilon. First, we check to see if the a and b are less than our absEpsilon, which should be set at something very small (e.g. 1e-12). This handles the case where a and b are both close to zero. If that fails, then we fall back to Knuth’s algorithm."

    Hello, Alex, I'm a bit confused with the line "First, we check to see if the a and b are less than our absEpsilon", do you mean to first check if the "difference" between a and b are less than our absEpsilon instead?

  • Stephane

    Hi Alex,
    1) I read this whole part, but I didn't understand Knuth's method. Can you please explain it to me in detail?

    2) I searched the web for the best way of comparing floating point numbers, and I saw relative error, absolute error, and percentage error mentioned on StackOverflow. Can you explain to me what they mean?

    • Alex

      I'm not sure I can explain it in any more detail than what's already in the article. In short:
      1) In order to determine if two floating point numbers are equal, we take the approach that they're equal if they're "close enough" (to account for precision issues).
      2) We get to determine what "close enough" means. We do this by defining a value called epsilon.
      3) If the two numbers are within epsilon of each other, we consider them equal.

      Then the question becomes, how do we pick epsilon?
      1) An absolute epsilon just uses a number, like 0.01, which means the numbers are considered the same if they're within 0.01 of each other. This is simple, but doesn't work well for both small and large numbers (0.01 is huge if you're trying to compare two very small numbers).
      2) A relative epsilon scales your epsilon by one of the input numbers, so instead of comparing against some absolute value, you're comparing against a number that is scaled appropriately for your inputs. In this context, the epsilon functions as a percentage of your input rather than an absolute number. This is typically done by multiplying your epsilon by one of the input numbers. Knuth's method multiplies it by the larger of the two numbers.

      • Caroline

        Can the relative epsilon value be chosen the same as the absolute epsilon value? In case not, why?

        • Alex

          Sure. In our example, we pick 1e-12 for absolute and 1e-8 for relative, but you could use 1e-8 for both if you wanted. It just depends on what your tolerance for considering two numbers "close enough" is.

  • Matt

    why don't we use an approxEqual function which checks whether or not the two numbers are within a certain percentage of each other?

    for example something like:

    This is how we calculate percent difference in intro physics classes. It will produce a reasonable result regardless of how close a or b is to zero, and it requires fewer lines of code than the suggested isAlmostEqual functions. I suppose I'm not sure why I would use the longer code you suggest rather than the simpler (to me) percent difference function used in the sciences.

  • Justinas

    There is a small misprint:
    "Both numbers are close to 0.1 ..." -> should be "close to 0.01"
