Unsafe optimizations

I guess I’ll break from my standard practice and blog on something
non-Gnome. 🙂

I really need to find some kind of documentation on what a compiler is
allowed to optimize away and what it isn’t. Apparently, my
assumptions were totally wrong. If anyone has some good pointers on
this, I’d love to hear them. In particular…

I have some code where I need to know whether

i + a > i

Now, gcc (using the “safe”, or so I thought, level of optimization of
“-O2 -funroll-loops”) appears to optimize this to

a > 0

which is not the same. (i may be an int, but a is a double and I’m
not working with an infinite precision arithmetic package or anything
like that.) In fact, the latter can cause my program to fail horribly
(luckily it results in a situation that I can catch with an assert,
though aborting the program is pretty sucky behavior too).

I tried a couple different variations to try to trick the compiler out
of incorrectly optimizing things away: (i+a)-i > 0 had the same
problem. I tried the trickier i+a > i+1e-50 (not that I know whether
it is safe to assume |a|>1e-50, but it at least seemed fairly
reasonable). The compiler apparently optimized this to a > 1e-50 and
thus also failed. I tried sticking both i+a and i into variables and
then comparing the variables. That failed too, unless I used the
variables elsewhere, such as in a printf statement (i.e., if I’m
trying to debug the code it works; otherwise it doesn’t). To fix this, I had
to make a function:

#include <stdbool.h>

/* Not inlined, so the arguments get stored as plain 64-bit doubles
   when they are passed, and the comparison happens at double precision. */
bool
double_compare (double val1, double val2)
{
   return val1 > val2;
}

Then, calling double_compare(i_plus_a,i) would work (yes, i_plus_a is
a variable equal to i+a). Finally, something that works. However,
this isn’t very safe. It only works because gcc doesn’t yet do
aggressive inlining of functions (and, being so short, double_compare
would be an obvious candidate for aggressive inlining).
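
One thing I haven’t tried yet, but which might survive inlining: routing the values through volatile doubles, as in the sketch below. My understanding is that a volatile store forces the value out to memory as a genuine 64-bit double, discarding any extended-precision bits, and gcc’s documented -ffloat-store option is supposed to address the same excess-precision problem for ordinary assignments (I haven’t verified either of these against my setup):

#include <stdbool.h>

bool
double_compare_volatile (double val1, double val2)
{
   /* The volatile stores force real memory writes, rounding away
      any extended-precision bits before the comparison. */
   volatile double v1 = val1;
   volatile double v2 = val2;
   return v1 > v2;
}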

I would have thought that such unsafe optimizations would have only
been done with -O3 or -ffast-math or something similar. Can anyone
tell me why these optimizations are considered okay for the compiler
to do at -O2, when they obviously produce incorrect results? What do
I do? Depend on the assert to warn the person running the program that
they need to fix my code to outwit the “smart” compiler?

Update: Yes, I know how floating point arithmetic works. I am
aware that the compiler is transforming what I have to:

(double)i + (double)a > (double)i

and is then probably transforming this to the equivalent expression of

((double)i + (double)a) - ((double)i) > 0

The compiler is then probably either using extended precision
arithmetic to evaluate this (I want it to round after adding i and a
before moving on), or else assuming that addition and subtraction are
associative (which is NOT true for floating point numbers) to change
this to:

((double)i - (double)i) + (double)a > 0

If that rearrangement were valid, the subtraction of i from itself
could just be removed, leaving exactly the a > 0 comparison I’m
seeing. The problem is that I have situations where i is about 10 or
so, and a is 4e-16, and in these circumstances i+a is identically
equal to i in floating point arithmetic. I need to know whether i+a
and i are considered to be different floating point values (and, if
they are, whether their difference is positive).
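
To put sizes on that: the gap between 10.0 and the next representable double is 2^-49, about 1.8e-15, so an a of 4e-16 is less than half an ulp and the rounded sum is exactly 10.0. A little C99 check of the arithmetic (nextafter and INFINITY come from math.h, so link with -lm):

#include <math.h>
#include <stdio.h>

int
main (void)
{
   /* Distance from 10.0 to the next representable double: */
   double ulp = nextafter (10.0, INFINITY) - 10.0;   /* 2^-49, about 1.78e-15 */
   printf ("ulp(10.0) = %g\n", ulp);

   /* 4e-16 is under half an ulp; the volatile store forces the sum
      to be rounded to a 64-bit double before the comparison. */
   volatile double sum = 10.0 + 4e-16;
   printf ("10.0 + 4e-16 == 10.0 : %d\n", sum == 10.0);   /* prints 1 */
   return 0;
}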