[Interval] Help debugging compiler optimization


[Interval] Help debugging compiler optimization

Dear all,

I am trying to understand why I am getting different numerical results with the interval library depending on the optimization level of the compiler.

I am attaching the smallest example I have been able to create:
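(The exact code is in the attached foo.cpp, which is not reproduced here; the sketch below is only a reconstruction from the printed names, assuming boost::numeric::interval<double> with its default policies, so it may differ from the real file.)

// Hypothetical reconstruction, not the attached foo.cpp.
#include <boost/numeric/interval.hpp>
#include <cstdio>

typedef boost::numeric::interval<double> ival;

int main() {
    double third1 = 1.0 / 3.0;      // a constant expression the optimizer can see through
    volatile double t = 1.0 / 3.0;  // volatile only to make folding harder
    double third2 = t;

    // Each interval must enclose the exact product 3*third1 (resp. 3*third2),
    // which is strictly below 1, so the lower endpoints should come out below 1.
    ival v1 = 3.0 * ival(third1);
    ival v2 = 3.0 * ival(third2);

    std::printf("third1 = %.64f\n", third1);
    std::printf("third2 = %.64f\n", third2);
    std::printf("v1 = (%.64f,%.64f)\n", v1.lower(), v1.upper());
    std::printf("v2 = (%.64f,%.64f)\n", v2.lower(), v2.upper());
}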

# On my Mac laptop
Apple LLVM version 9.0.0 (clang-900.0.39.2)
boost 1.66 (installed via homebrew)
$ g++ foo.cpp -o foo
$ ./foo
third1 = 0.3333333333333333148296162562473909929394721984863281250000000000
third2 = 0.3333333333333333148296162562473909929394721984863281250000000000
v1 = (0.9999999999999998889776975374843459576368331909179687500000000000,1.0000000000000000000000000000000000000000000000000000000000000000)
v2 = (0.9999999999999998889776975374843459576368331909179687500000000000,1.0000000000000000000000000000000000000000000000000000000000000000)

$ g++ -O2 foo.cpp -o foo
$ ./foo
third1 = 0.3333333333333333148296162562473909929394721984863281250000000000
third2 = 0.3333333333333333148296162562473909929394721984863281250000000000
v1 = (1.0000000000000000000000000000000000000000000000000000000000000000,1.0000000000000000000000000000000000000000000000000000000000000000)
v2 = (1.0000000000000000000000000000000000000000000000000000000000000000,1.0000000000000000000000000000000000000000000000000000000000000000)

I would expect to get the same output in both cases, but the lower endpoints are different in the second case, and they seem wrong to me, since third2 * 3.0 < 1.0.

# On my Linux machine the effect is different:
gcc version 5.4.0 20160609
boost 1.58 on Ubuntu 16.04.9

$ g++ foo.cpp -o foo
$ ./foo
third1 = 0.3333333333333333148296162562473909929394721984863281250000000000
third2 = 0.3333333333333333148296162562473909929394721984863281250000000000
v1 = (0.9999999999999998889776975374843459576368331909179687500000000000,1.0000000000000000000000000000000000000000000000000000000000000000)
v2 = (0.9999999999999998889776975374843459576368331909179687500000000000,1.0000000000000000000000000000000000000000000000000000000000000000)

$ g++ -O2 foo.cpp -o foo
$ ./foo
third1 = 0.3333333333333333148296162562473909929394721984863281250000000000
third2 = 0.3333333333333333148296162562473909929394721984863281250000000000
v1 = (0.9999999999999998889776975374843459576368331909179687500000000000,1.0000000000000000000000000000000000000000000000000000000000000000)
v2 = (1.0000000000000000000000000000000000000000000000000000000000000000,1.0000000000000000000000000000000000000000000000000000000000000000)

Can anyone explain what is going on?

Thanks in advance,

Tim

-- 
Tim van Erven [hidden email]
www.timvanerven.nl


Attachment: foo.cpp (826 bytes)

Re: [Interval] Help debugging compiler optimization

I'm not sure what's going on (it looks like it could be fast but error-prone floating-point rounding), but comparing the optimization flags enabled at -O0 and -O2 might help shed some light on it.

You can get the explicit optimization flags enabled for your compiler via:
 g++ -Q -O2 --help=optimizers
 g++ -Q -O0 --help=optimizers

Regards,
Nate


Re: [Interval] Help debugging compiler optimization

Hi Nate,

Thanks for your help.

Below is the output from gcc 7.2.0 on macOS. Apparently the difference already appears between -O0 and -O1.
To summarize:
    - gcc on Linux with Boost 1.58
    - clang and gcc 7.2.0 on macOS with Boost 1.66
all give wrong (or at least inconsistent) output, depending on the optimization options.
(Since third1 and third2 are both < 1/3, multiplying them by 3 should give an interval with lower endpoint < 1, but it sometimes doesn't.)

I am attaching the outputs of
$ g++-7 -Q -O0 --help=optimizers > o0.txt
$ g++-7 -Q -O1 --help=optimizers > o1.txt

$ g++-7 foo.cpp -o foo
$ ./foo
third1 = 0.3333333333333333148296162562473909929394721984863281250000000000
third2 = 0.3333333333333333148296162562473909929394721984863281250000000000
v1 = (0.9999999999999998889776975374843459576368331909179687500000000000,1.0000000000000000000000000000000000000000000000000000000000000000)
v2 = (0.9999999999999998889776975374843459576368331909179687500000000000,1.0000000000000000000000000000000000000000000000000000000000000000)

$ g++-7 -O1 foo.cpp -o foo
$ ./foo
third1 = 0.3333333333333333148296162562473909929394721984863281250000000000
third2 = 0.3333333333333333148296162562473909929394721984863281250000000000
v1 = (0.9999999999999998889776975374843459576368331909179687500000000000,1.0000000000000000000000000000000000000000000000000000000000000000)
v2 = (1.0000000000000000000000000000000000000000000000000000000000000000,1.0000000000000000000000000000000000000000000000000000000000000000)

Best,
  Tim





-- 
Tim van Erven [hidden email]
www.timvanerven.nl


Attachment: o0.txt (13K)
Attachment: o1.txt (13K)

Re: [Interval] Help debugging compiler optimization

I'm not really an expert in this space, but one (rather tedious) exercise I could suggest: for each option that differs between -O0 and -O2, add the corresponding explicit option until you reproduce the deviation you're seeing.

I did do a diff on your settings, and I didn't really see anything that stood out to me, hence the switch-by-switch suggestion.

I'm not a Boost contributor, just a user and occasional bug reporter. This does feel like a floating-point optimization switch, though, at least to me.

Regards,
Nate


Re: [Interval] Help debugging compiler optimization

On 6 March 2018 at 20:43, Nathan Ernst via Boost-users <[hidden email]> wrote:
This does feel like a floating-point optimization switch, though, at least to me.

Additionally, the optimization level itself could change the order of evaluation, which can also lead to different results.

degski


Re: [Interval] Help debugging compiler optimization

Not sure this "random" answer explains it:

1/3 is a recurring decimal (https://en.wikipedia.org/wiki/Repeating_decimal), which means it cannot be represented exactly in a limited number of bits (not even in 64 bits).

So for your "bug" example, I would say it's not a bug. With optimization, in one of the cases the compiler can figure out that the value is 1, by analyzing the constant values assigned to v2 and expanding the expression at compile time (template expansion, etc.), which is the mathematically correct value.
The other one, v1, goes through memory storage, so the expression cannot be expanded at the call site; and because 1/3 cannot be stored exactly in 64 bits, multiplying it back out always results in a number lower than 1.


Re: [Interval] Help debugging compiler optimization

Hi Degski,

But if I assign 1/3 to a double and then print it to the console, then I would expect the compiler to be committed to the value shown in the console; apparently it isn't, though, and still optimizes.

Best,
  Tim


-- 
Tim van Erven [hidden email]
www.timvanerven.nl


Re: [Interval] Help debugging compiler optimization

I don't really understand what you mean.

Let's say you have

double v1 = 1.0/3.0;

double computation = v1 * 3;

Using -O2, the compiler might determine, by analyzing the expression at compile time, that you always have the constant value 1/3 * 3 = 1.

Now using -O0, it might not optimize, and instead compute the value at runtime (just as an example).

Because 1/3 does not fit into any number of bits without losing precision (unless you were using some symbolic library), multiplying v1 * 3 will result in a value different from 1; that will be the rule.

Another issue, which degski mentions, is the error propagation associated with the order in which the expressions are evaluated/executed.
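As a concrete illustration for this thread (my own sketch, not the code from foo.cpp): the interval library switches the FPU rounding mode to compute each endpoint, and an optimizer that folds the arithmetic at compile time does so under the default round-to-nearest and never sees that switch. Whether the generated code respects the fesetround() calls below is exactly what options such as GCC's -frounding-math are about.

#include <cfenv>
#include <cstdio>

// Not all compilers honour this pragma, which is part of why the
// optimization level can change the result.
#pragma STDC FENV_ACCESS ON

int main() {
    volatile double third = 1.0 / 3.0;  // volatile to discourage constant folding
    std::fesetround(FE_DOWNWARD);       // how an interval lower bound is computed
    double lo = third * 3.0;            // 0.99999999999999988898... when evaluated at runtime
    std::fesetround(FE_TONEAREST);
    double nearest = third * 3.0;       // rounds to exactly 1.0
    std::printf("lo = %.17g\nnearest = %.17g\n", lo, nearest);
}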




Re: [Interval] Help debugging compiler optimization

Hi Mário,

My (somewhat naive) view of compilers is that they can only do optimization that has no visible effects on the behavior of the program.

What I meant is:

double v1 = 1.0/3.0;
printf("%f", v1);
double computation = v1 * 3;

Then the print statement will show the user some value for v1 that fits in a double. So I would expect that it is then no longer free to treat v1 as exactly 1/3, because the user has already observed a different value.

Best,
  TIm


-- 
Tim van Erven [hidden email]
www.timvanerven.nl


Re: [Interval] Help debugging compiler optimization

Hello
> My (somewhat naive) view of compilers is that they can only do
> optimization that has no visible effects on the behavior of the program.

I wouldn't be surprised if IEEE floating point is an exception to that,
especially with optimizations enabled.

> What I meant is:
> double v1 = 1.0/3.0;
> printf("%f", v1);
> double computation = v1 * 3;
> Then the print statement will show the user some value for v1 that fits
> in a double. So I would expect that it is then no longer free to treat
> v1 as exactly 1/3, because the user has already observed a different value.

I think that in order to show the number to the user it would have to be
copied to memory, or at least to a regular register, but further computation
could continue to use the more accurate floating-point register. And the
compiler might still optimize the computation away and just show you the
result of printf(..., 1.0/3.0). I don't know whether writing constexpr double
v1 = 1.0/3.0 would change anything, at least depending on which optimizations
are enabled and/or disabled.

Ilja

Re: [Interval] Help debugging compiler optimization


Would it help to use

constexpr double v1 and constexpr computation,

to see what the compiler can compute at compile time, and to use a better print method

std::cout.precision(std::numeric_limits<double>::max_digits10);
std::cout << v1 << " " << computation << std::endl;

to see the 'true' results of both your doubles?
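Something along these lines (my sketch; v1 and computation are just the names used earlier in the thread):

#include <iostream>
#include <limits>

int main() {
    constexpr double v1 = 1.0 / 3.0;        // forced to be a compile-time constant
    constexpr double computation = v1 * 3;  // also folded at compile time, under round-to-nearest
    std::cout.precision(std::numeric_limits<double>::max_digits10);
    std::cout << v1 << " " << computation << std::endl;  // prints 0.33333333333333331 1
}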

 

godbolt.org may also help you see what assembler is produced.

Paul

 

---
Paul A. Bristow
Prizet Farmhouse
Kendal UK LA8 8AB
+44 (0) 1539 561830


Re: [Interval] Help debugging compiler optimization

On 8 March 2018 at 02:36, Tim van Erven via Boost-users <[hidden email]> wrote:
My (somewhat naive) view of compilers is that they can only do optimization that has no visible effects on the behavior of the program.

Here is what the different floating-point behaviors (strict/fast/precise) mean with VC (I realize you're on Linux, but the same options must exist there, as these are CPU features). As you can see, it's intricate.

In general, what you are trying to do is not possible (that way); you'll need to allow for some delta between values, such that if the difference is smaller than that delta the values qualify as equal. So far so good. We have our friends FLT_EPSILON and DBL_EPSILON from the standard library, but these don't help you much: the delta they define is the difference between 2 consecutive floats on the interval from 1.0 to 2.0. Towards 0.0 the spacing gets smaller, and above 2.0 it gets bigger, suggesting that different approaches are needed for different parts of the real number line.

Daniel Lemire has written about this subject and has suggested solutions in the past (search his blog for more articles on the subject), only to (partially) revert those opinions later. It's quite a problem, really. In case you are really working with fractions (as in your example 3 * (1/3)), it would be better to use a fractions library.
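For what it's worth, the usual workaround for the fixed-epsilon problem is a tolerance that scales with the operands, along the lines of this sketch (my own illustration, not code from the thread):

#include <algorithm>
#include <cfloat>
#include <cmath>

// Relative comparison: the allowed delta grows and shrinks with the magnitude
// of the inputs. Comparisons against exactly 0.0 still need a separate
// absolute tolerance.
bool almost_equal(double a, double b, double rel_tol = 8 * DBL_EPSILON) {
    return std::fabs(a - b) <= rel_tol * std::max(std::fabs(a), std::fabs(b));
}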


degski


Re: [Interval] Help debugging compiler optimization


 

 

The Boost.Math function float_distance

http://www.boost.org/doc/libs/1_66_0/libs/math/doc/html/math_toolkit/next_float/float_distance.html

will show you the (signed) 'number of bits different' between two values (of any floating-point type), or more formally:

"Returns the distance between a and b: the result is always a signed integer value (stored in floating-point type FPT) representing the number of distinct representations between a and b."

This is more helpful than std::numeric_limits<FPT>::epsilon().

You can also use the related functions float_next and float_prior (and nextafter) to see how neighbouring values appear in decimal.
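For example (my sketch, assuming Boost.Math as shipped with 1.66):

#include <boost/math/special_functions/next.hpp>
#include <iostream>

int main() {
    double one = 1.0;
    double below = boost::math::float_prior(one);  // 0.99999999999999988898..., the lower endpoint seen earlier
    std::cout << boost::math::float_distance(below, one) << '\n';  // prints 1: one representation apart
}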

 

But if you really, really want rationals, then you probably should use the Rational library?

http://www.boost.org/doc/libs/1_66_0/libs/rational/index.html

HTH

Paul

 

---
Paul A. Bristow
Prizet Farmhouse
Kendal UK LA8 8AB
+44 (0) 1539 561830



Re: [Interval] Help debugging compiler optimization

Ilja, Paul, Degski,

Thanks for your help. I think that Ilja's comment below explains what's
going on: compilers don't strictly respect IEEE floating point when they
have optimizations turned on (see also the explanation here:
https://stackoverflow.com/questions/7517588/different-floating-point-result-with-optimization-enabled-compiler-bug#7517877).

@Degski: I am not actually interested in fractions. The background is
that I am using the interval library to compare the numerical accuracy
of various statistical algorithms in a setting where we know that some
of the methods will run out of precision for large data sets, and I am
getting incorrect intervals (that do not contain the correct answer)
when compiling with clang, but everything works when compiling with gcc.
When trying to debug this I found that the numerical results diverged
right from the first interval calculations and I could not understand
why. I think I have a better handle on this now.

Best,
   Tim

On 08/03/2018 09:49, Ilja Honkonen wrote:

> Hello
>> My (somewhat naive) view of compilers is that they can only do
>> optimization that has no visible effects on the behavior of the program.
>
> I wouldn't be surprised if ieee floating point is an exception to
> that, especially with optimizations enabled.
>
>> What I meant is:
>> double v1 = 1.0/3.0;
>> printf("%f", v1);
>> double computation = v1 * 3;
>> Then the print statement will show the user some value for v1 that
>> fits in a double. So I would expect that it is then no longer free to
>> treat v1 as exactly 1/3, because the use has already observed a
>> different value.
>
> I think in order to show the number to the user it would have to be
> copied to memory or at least a regular register but further
> computation could continue to use the more accurate floating point
> register. And the compiler might still optimize the computation away
> and just show you the result of printf(..., 1.0/3.0). I don't know if
> writing constexpr double v1 = 1.0/3.0 would change anything, at least
> if particular optimizations are enabled and/or disabled.
>
> Ilja

--
Tim van Erven <[hidden email]>
www.timvanerven.nl


Re: [Interval] Help debugging compiler optimization


Indeed, getting deterministic results using floating-point numbers is
hard (and even more so if you move between compilers, processors,
architectures, etc.). If you want to achieve it, you will want a very,
very, very thorough test suite to feel confident about the results :-)
There are some good articles/books on the topic -- for instance, you
can look for advice from some multiplayer games which implemented
deterministic physics simulations between machines. However, note that
those don't care about the error, as long as it is the same error
everywhere.

If you need proper results, then you can save yourself a lot of
trouble by using a library that gives you arbitrary-precision
numbers (especially if later on you need higher precision for something
else). See for instance the GMP (Multiple Precision Arithmetic), MPFR
(Multiple Precision Floating-Point Reliable) and MPC (Multiple
Precision Complex) libraries, used by GCC itself.
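Since this is a Boost list: Boost.Multiprecision (my addition here, it is not mentioned above) gives easy access to such types, either with its own backends or as wrappers around GMP/MPFR. A quick sketch:

#include <boost/multiprecision/cpp_dec_float.hpp>
#include <iomanip>
#include <iostream>
#include <limits>

int main() {
    using boost::multiprecision::cpp_dec_float_50;  // 50 decimal digits of precision
    cpp_dec_float_50 third = cpp_dec_float_50(1) / 3;
    std::cout << std::setprecision(std::numeric_limits<cpp_dec_float_50>::digits10)
              << third << '\n'
              << third * 3 << '\n';  // representation error around 1e-50 instead of 1e-16
}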

For an example of how to implement complex algorithms ensuring *exact*
results by increasing the precision as needed, see the CGAL
(Computational Geometry Algorithms Library).

Hope that helps!
Miguel