On Dec 2, 2002, Daniel Jacobowitz <[EMAIL PROTECTED]> wrote:

> You may already know this, but just to be careful I'll mention it
> anyway... it is not necessarily safe to configure more than one
> subdirectory at a time.  One possible gotcha is the updating of
> $cache_file.  It isn't done atomically, and so if you happen to have
> two subdirectories with (essentially) the same configure.in, and they
> happen to get started on an SMP system by make -j2 at (essentially)
> the same time, you can blow out the cache file.
> This isn't entirely hypothetical.  A similar thing happened in
> libiberty with two invocations of config.status.

This was a bug in autoconf 2.13 that's fixed in autoconf 2.5x.

As for config.cache, the update isn't atomic but, in general, you'll
probably just lose part of the cache.  Unfortunately, this is not
guaranteed by the latest version of autoconf, but it shouldn't be too
hard to fix in the common case by replacing `cat confcache >$cache_file'
with, say, `cat confcache >$cache_file.$$ && mv $cache_file.$$
$cache_file || cat confcache >$cache_file' in _AC_CACHE_DUMP.  Unless
we want to preserve the semantics that $cache_file retains the same
inode, so that, if there are hard links to it, they're also updated,
which might be a reasonable thing to do in case a global cache file is
maintained.

Without atomic updates, it's indeed not safe to run multiple
configures in parallel.  But I believe you'll most often just lose
some updates to the cache.  You can't really help that anyway, unless
you serialize configure runs, or acquire a lock on the cache, read it
back in (in case it was updated in the meantime), and write it out
before releasing the lock; I don't think we want to do that.

-- 
Alexandre Oliva   Enjoy Guarana', see http://www.ic.unicamp.br/~oliva/
Red Hat GCC Developer                 aoliva@{redhat.com, gcc.gnu.org}
CS PhD student at IC-Unicamp        oliva@{lsd.ic.unicamp.br, gnu.org}
Free Software Evangelist                Professional serial bug killer
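[For readers of the archive: the write-to-temp-then-rename idea
suggested above can be sketched as a standalone shell fragment.  This
is an illustration only, not the actual _AC_CACHE_DUMP code; the file
names `confcache` and `config.cache` stand in for configure's
internals.]

```shell
#!/bin/sh
# Sketch of the atomic cache update discussed above (hypothetical
# stand-ins for configure's internal files).
cache_file=config.cache
echo 'sample_cv_var=yes' > confcache   # stand-in for the computed cache

# Write to a temporary file named after our PID, then rename it into
# place.  rename(2) is atomic on POSIX filesystems, so a concurrent
# reader sees either the old cache or the new one, never a torn mix.
# Note this gives $cache_file a new inode, so hard links are NOT
# updated -- the trade-off mentioned above.
if cat confcache > "$cache_file.$$" &&
   mv "$cache_file.$$" "$cache_file"; then
  :
else
  # Fall back to the plain (non-atomic) write if the rename fails,
  # e.g. when the directory is unwritable but the file itself isn't.
  cat confcache > "$cache_file"
fi
```

The `$$` suffix keeps two configure processes from clobbering each
other's temporary file; the loser of the final `mv` race simply has
its cache contents replaced wholesale, which is the "lose some
updates" outcome described above rather than a corrupted file.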