OK, this was confusing. I was getting repeatable five-minute builds, i.e. "make clean; reboot; time make -j3" (more or less :-) was reliably showing five minutes of wall-clock time.
Then I did "make distclean; ./configure; make depend".
After that, I got repeatable nine-minute builds, i.e. "make clean; reboot; time make -j3" was showing nine minutes of wall-clock time.
Turns out, the difference was... I had been building without optimization. So configuring with CFLAGS="-g -O0" gives almost a 2x speedup!
This is with gcc-4.2.3.
Might be handy to know when doing regression testing.
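For anyone who wants to see the effect without a full Wine tree, here is a minimal self-contained sketch (assuming gcc and GNU date; the generated source file is purely illustrative, and the absolute numbers are machine-dependent — only the ratio is interesting):

```shell
# Generate a C file with many small functions, then compare how long
# gcc takes to compile it with and without optimization.
tmp=$(mktemp -d)
{
  i=1
  while [ "$i" -le 2000 ]; do
    printf 'int f%d(int x) { return x * %d + x / 3; }\n' "$i" "$i"
    i=$((i + 1))
  done
  echo 'int main(void) { return 0; }'
} > "$tmp/gen.c"

t0=$(date +%s%N); gcc -O2 -c "$tmp/gen.c" -o "$tmp/o2.o"; t1=$(date +%s%N)
t2=$(date +%s%N); gcc -g -O0 -c "$tmp/gen.c" -o "$tmp/o0.o"; t3=$(date +%s%N)
ms_o2=$(( (t1 - t0) / 1000000 ))
ms_o0=$(( (t3 - t2) / 1000000 ))
echo "gcc -O2:    ${ms_o2} ms"
echo "gcc -g -O0: ${ms_o0} ms"
rm -r "$tmp"
```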
> OK, this was confusing. I was getting repeatable five minute builds, i.e. "make clean; reboot; time make -j3" (more or less :-) was reliably showing five minutes wall clock time.
> Then I did "make distclean; ./configure; make depend".
> After that, I got repeatable nine minute builds, i.e. make clean; reboot; time make -j3 was showing nine minute wall clock time.
> Turns out, the difference was... I had been building without optimization. So configuring with CFLAGS="-g -O0" is almost a 2x speedup!
Hi! Yes, you are right, turning optimization off speeds up the compilation substantially. HOWEVER, it changes the generated code, and because various compiler features (such as inlining) are present or absent, the code can, in rare cases, behave differently. I have seen many cases where a program crashed repeatedly when compiled the default way, i.e. with optimization, but when I compiled it without optimization and with -g for debugging, it never crashed and worked perfectly under the debugger. So I had to debug the optimized version, which is harder, because the generated code no longer tracks the source exactly.
With regards, Pavel Troller
Pavel Troller wrote:
> Hi! Yes, you are right, turning optimization off speeds up the compilation substantially. HOWEVER, it changes the generated code and due to various features of the compiler (like inlining or another) being present/absent, the code can, in rare cases, behave differently. I have many experiences that for example a program was repeatedly crashing, when compiled by default way, i.e. with optimalization, and when I compiled it without optimalization and with -g for debugging, it never crashed and worked perfectly under the debugger. I had to debug the optimized version, which is harder, because the generated code doesn't track the source exactly anymore. With regards, Pavel Troller
Is that a GCC bug then? And, more importantly, was that with a recent GCC version?
Thanks, Scott Ritchie
Pavel Troller wrote:
> Is that a GCC bug then? And, more importantly, was that with a recent GCC version?
Hi! That cannot be said for certain. Some nuances of the C language are "implementation-defined", and it is perfectly legal for the compiler to handle them differently with and without optimization. Sometimes the programmer incorrectly relies on such a nuance, and then a different set of options can make his program behave incorrectly. That said, there were, and maybe still are, GCC bugs related to optimization, but in recent versions they are very rare. And my experiences above don't apply only to Wine; many other programs behave the same way.
With regards, Pavel Troller
On Friday, 13.06.2008, at 08:08 +0200, Pavel Troller wrote:
> Hi! It cannot be clearly said. Some nuances of the C language are "implementation dependent" and it's perfectly OK to compile them differently with or without optimization. Sometimes the programmer incorrectly relies on such a nuance and then using the different set of options can cause his program to behave incorrectly.
Even more likely than encountering "implementation-defined" behavior is encountering "undefined" behavior. In that case, the same program compiled with the same options might crash one day and not the next. A typical case of undefined behavior is using an uninitialized pointer: due to address space randomization, the value in the uninitialized pointer may sometimes point to valid memory and sometimes fault.
In the case of optimization-dependent behavior, the most common source is a buffer overflow on a local (stack-allocated) array. gcc -O0 puts all declared variables on the stack, in the order they were declared. gcc -O2 sometimes elides variables completely, or reuses the same space for different variables, so you get a totally different stack layout. This means you overwrite different data depending on optimization, and the chance of hitting a location that is currently unused is higher without optimization. This is of course not a gcc bug: the program always performs a forbidden access outside the array bounds; you just don't notice it without optimization. Put another way, without optimization the program is just as wrong as with optimization; it merely happens to do the right thing anyway.
Regards, Michael Karcher
On Thu, Jun 12, 2008 at 10:43 PM, Pavel Troller <patrol@sinus.cz> wrote:
> Yes, you are right, turning optimization off speeds up the compilation substantially. HOWEVER, it changes the generated code and due to various features of the compiler (like inlining or another) being present/absent, the code can, in rare cases, behave differently. I have many experiences that for example a program was repeatedly crashing, when compiled by default way, i.e. with optimalization, and when I compiled it without optimalization and with -g for debugging, it never crashed and worked perfectly under the debugger.
Indeed. I think I've also had a case where it worked properly with optimization but crashed with -O0. Every once in a while we should make sure that Wine passes its tests when compiled without optimization.
(Incidentally, Valgrind gives somewhat better info with -O0, and it catches some of the problems you're describing.)
But the main point of my post was to suggest a way to speed up regression testing. I've added a note about it to http://wiki.winehq.org/RegressionTesting
- Dan
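For what it's worth, a sketch of the Valgrind point above (assuming gcc; the snippet skips gracefully if valgrind isn't installed):

```shell
tmp=$(mktemp -d)
cat > "$tmp/uninit.c" <<'EOF'
#include <stdio.h>
int main(void)
{
    int x;        /* deliberately left uninitialized */
    if (x > 0)    /* undefined behavior; Memcheck flags this branch */
        puts("positive");
    return 0;
}
EOF
gcc -g -O0 "$tmp/uninit.c" -o "$tmp/uninit"
if command -v valgrind >/dev/null 2>&1; then
    # --error-exitcode surfaces detected errors in the exit status.
    if valgrind -q --error-exitcode=1 "$tmp/uninit" >/dev/null 2>&1; then
        vg="clean"
    else
        vg="flagged"
    fi
else
    vg="not installed"
fi
echo "valgrind: $vg"
rm -r "$tmp"
```

Building with -g -O0 keeps the variable in memory and the line info intact, which is why Memcheck's report points straight at the offending source line.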