On 02/05/2014 01:04 PM, Peter De Wachter wrote:
> I was testing a C++ project of mine with older versions of GCC, and
> mistakenly used the following command:
>
>   $ ./configure CXX=gcc-4.7
>
> Autoconf didn't spot any problem:
>
>> checking whether the C++ compiler works... yes
>
> But of course gcc-4.7 doesn't know how to link a C++ program, so my
> build failed.
>
> The attached patch fixes this problem by changing the C++ null program
> into:
>
>   int main() { (void) new int; return 0; }
>
> Any C++ compiler, even an ancient one, should be able to pass that
> test, but C compilers will fail. In GCC's case, the program will
> compile but the link will fail because libc doesn't contain an
> 'operator new'.
Actually, in gcc's case, your proposed program is now a syntax error:

  $ gcc -o foo foo.c
  foo.c: In function ‘main’:
  foo.c:1:21: error: ‘new’ undeclared (first use in this function)
   int main() { (void) new int; return 0; }
                       ^
  foo.c:1:21: note: each undeclared identifier is reported only once for each function it appears in
  foo.c:1:25: error: expected ‘;’ before ‘int’
   int main() { (void) new int; return 0; }

But that's equally useful for weeding out invalid C++ compilers :)

I'm wondering if we should avoid the memory leak by using 'delete new
int' instead of '(void) new int', just so we are less likely to trip up
on a compiler warning causing a false negative. Or maybe even some
other construct that doesn't involve memory allocation but is truly a
no-op C++ program that fails to compile under C (such as
AC_LANG_PROGRAM([class foo]), exploiting 'class' rather than 'new').
Any opinions?

Otherwise, the idea for your patch looks good to me.

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org