Re: Best means to override CXXFLAGS locally

2006-10-23 Thread Sander Niemeijer

Hi Ralf,

I have been struggling with the 'single file disable optimization' issue
for a while and until now only had a somewhat ugly solution.
The recent fixes in automake, which ensure that target_??FLAGS always
overrides AM_??FLAGS, make your suggestion a valid solution now.


In our project we have a single file (part of a libtool library  
target) that is too big to be compiled with optimization turned on.

Based on your suggestion I now use the following approach:

In configure.ac I have the following (at the bottom of the file, just
before AC_OUTPUT):


# here are some rules that set the value of DISOPT_FLAG for certain
# compilers; for gcc this is -O0
# ...
AC_SUBST(DISOPT_FLAG)

# store CFLAGS in CFG_CFLAGS and clear CFLAGS so we can override the
# flags (e.g. DISOPT_FLAG) in our Makefile(s)

CFG_CFLAGS=$CFLAGS
CFLAGS=
AC_SUBST(CFG_CFLAGS)
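
To make the '# ...' part above concrete, here is a minimal sketch of how
DISOPT_FLAG could be filled in; it only covers the GCC case (detected via
the GCC variable that AC_PROG_CC sets) and is an illustration, not the
exact rules from our configure.ac:

---
# illustrative only: pick a 'disable optimization' flag per compiler
if test "$GCC" = yes; then
  DISOPT_FLAG=-O0
else
  DISOPT_FLAG=
fi
AC_SUBST(DISOPT_FLAG)
---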


Then in each Makefile.am I have at the top:
---
CFG_CFLAGS = @CFG_CFLAGS@
AM_CFLAGS = $(CFG_CFLAGS)
---
and to each target_CFLAGS setting in a Makefile.am I add '$(CFG_CFLAGS)'.


The single file that has to be built with optimization turned off is
now built as a libtool 'convenience library':

---
noinst_LTLIBRARIES = libbigfile.la

libbigfile_la_SOURCES = bigfile.c
libbigfile_la_CFLAGS = $(CFG_CFLAGS) $(DISOPT_FLAG)
---
and this convenience library is added to the original library by listing
'libbigfile.la' in the _la_LIBADD setting of the original library.
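
Put together, the relevant Makefile.am fragment might look roughly like
this (libmain.la and its sources are made-up names; our real Makefile.am
differs):

---
CFG_CFLAGS = @CFG_CFLAGS@
AM_CFLAGS = $(CFG_CFLAGS)

lib_LTLIBRARIES = libmain.la
noinst_LTLIBRARIES = libbigfile.la

# the problematic file, compiled without optimization
libbigfile_la_SOURCES = bigfile.c
libbigfile_la_CFLAGS = $(CFG_CFLAGS) $(DISOPT_FLAG)

# the installed library pulls the convenience library in via LIBADD
libmain_la_SOURCES = main.c
libmain_la_CFLAGS = $(CFG_CFLAGS)
libmain_la_LIBADD = libbigfile.la
---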


The catch with this approach is that running a 'make CFLAGS=<other
flags>' (if you ever do such a thing) will no longer replace the initial
CFLAGS settings that were provided to the configure script (or set by
AC_PROG_CC). It will just add them at the end (and also override the
DISOPT_FLAG!).
With the way I have set up the CFG_CFLAGS setting in Makefile.am you can
still replace the original CFLAGS by using 'make CFG_CFLAGS=<other
flags>'.


Best regards,
Sander

On 22-Oct-2006, at 13:46, Ralf Wildenhues wrote:


Hello Akim,

* Akim Demaille wrote on Sun, Oct 22, 2006 at 01:10:02PM CEST:

What would be the cleanest means to handle this exception?


Here is what I did:


I think it's much better to do the munging in configure.ac.  Store the
CFLAGS set by the user and/or AC_PROG_CC in AM_CFLAGS, and override
CFLAGS.  Depending upon whether the changed flags settings apply to only
a few, non-portability-problematic files in your project, the rest of
the configure tests should run with the flags set by the user, and you
should only override at the end, or the converse.

Much cleaner, and it involves far fewer changes.  If you have something
that works nicely and is reasonably generally useful, post a summary, or
even better, a FAQ addition to the manual.  ;-)

Cheers,
Ralf








Re: Per-library CFLAGS to appear after user-defined CFLAGS

2006-09-18 Thread Sander Niemeijer
I'd like to force a compiler flag on a certain library in my Makefile.am
and ensure that the user cannot override its behaviour. I want to do
something like:

mylib_la_CFLAGS = -O0

to disable optimization.

However, automake puts mylib_la_CFLAGS *before* the user-defined CFLAGS,
so it ends up using:

-O0 -some-user-flag -O2 -other-user-flags

and this isn't what I want, since my flag is overridden.

To get around this, I am forced to do:

override CFLAGS := `echo @CFLAGS@ | sed 's/-O[0-9]//g'`
mylib_la_CFLAGS = -O0

(and then if I want any other libraries in the same Makefile.am
optimized I must explicitly set an optimization level for each one..)

Is there a nicer way?


AFAIK there still isn't a nice solution to this problem. There were  
several e-mail discussions about this in the past.
One of those was in September 2005, but unfortunately
http://sources.redhat.com/ml/automake/2005-09/ seems to come up empty :(


Below is a copy of an e-mail I sent to the list back then.

On 26-sep-2005, at 11:09, Sander Niemeijer wrote:


On Saturday, Sep 24, 2005, at 20:05 Europe/Amsterdam, Brian wrote:

I have a need to force three files to not be optimized. I've followed
the instructions in the manual for setting them up in their own library,
and then using LIBADD to combine it with the original library.

If I use AM_CXXFLAGS, the -O0 is superseded by a -O2. The same occurs if
I use libx_la_CXXFLAGS. I am not allowed to override CXXFLAGS (and don't
want to).



The 'convenience library' solution indeed does not work because CXXFLAGS
is always put after AM_CXXFLAGS/libx_la_CXXFLAGS. This is especially
problematic if you want to let the user provide both the default
optimization compiler flag _and_ the specific compiler flag that disables
optimization for those few files that need to be compiled without
optimization (e.g. for the Sun Workshop compiler this would be -xO4 and
-xO0).


There was also a thread about this issue on the mailing list in December
2004. My final mail in that thread stated the same problem:
http://sources.redhat.com/ml/automake/2004-12/msg00075.html


I have since found a solution (it is more of a hack, but at least  
it works) that handles the specific override of the optimization  
flag for a single file. The trick I used is based on a recursive  
call to make (I will give the example for C, but for C++ it should  
work the same):


Suppose we want to have the file foo.c compiled without optimization and
all other files with the default optimization. Because we want the user
of our package to provide his own compiler-specific option to disable
optimization, we use an AC_SUBST variable called DISOPT_FLAG (which will
be set to -O0 for gcc).

Now add the following to your Makefile.am:
---
FOO_O = foo.o
$(FOO_O): foo.c
	$(MAKE) foo.o CFLAGS="$(CFLAGS) $(DISOPT_FLAG)" FOO_O=dummy-foo.o
---
If you use foo.c in a libtool library you should also add the same rule
for the .lo file:

---
FOO_LO = foo.lo
$(FOO_LO): foo.c
	$(MAKE) foo.lo CFLAGS="$(CFLAGS) $(DISOPT_FLAG)" FOO_LO=dummy-foo.lo
---
The way this works is as follows. First FOO_O is set to foo.o, so our
$(FOO_O) rule overrides the default .c.o/.c.lo rule. This rule
recursively calls make, asking it to build foo.o again, but now CFLAGS is
extended with our DISOPT_FLAG (at the end, so it really overrides any
compiler optimization flags that were already in CFLAGS) and in addition
FOO_O is set to some dummy value, so our own build rule is disabled and
the default .c.o suffix rule from automake is used.


The nice thing about this approach is that the dependency  
generation rules still work and all settings such as CPPFLAGS, CC,  
etc. are nicely preserved.
There is however also a downside to this approach (and there may be  
more that I haven't encountered yet): your foo.c and your generated  
foo.(l)o should be in the same directory as your Makefile (my  
approach depends on the default .c.o suffix rule generated by  
automake, but this rule does not support alternate source/object  
directories).
In our project the source files are in a different location, but the .o
files do end up in the directory where the Makefile is located, so I
added the following additional rule to copy foo.c from the source
directory to the Makefile directory (this rule also works when you use
separate source and build directories):

---
foo.c: $(top_builddir)//foo.c
	cp `test -f '$(top_builddir)//foo.c' || echo '$(srcdir)/'`$(top_builddir)//foo.c foo.c

---


Best regards,
Sander Niemeijer





Re: using FC *and* F77, or FC *instead of* F77?

2006-01-09 Thread Sander Niemeijer

Hi,

I do not know whether it is recommended behavior, but you could add  
the following to the top of your Makefile.am

---
F77=$(FC)
FFLAGS=$(FCFLAGS)
---
and only use an AC_PROG_FC in your configure.ac.

This is what I usually do when I have mixed F77/F90 code.
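
A minimal sketch of that setup (the program and file names are invented);
as far as I know, defining F77 in the Makefile.am is enough to satisfy
automake's check for Fortran 77 sources:

---
# configure.ac
AC_PROG_FC

# Makefile.am
F77 = $(FC)
FFLAGS = $(FCFLAGS)

bin_PROGRAMS = prog
prog_SOURCES = main.f90 legacy.f
---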

Best regards,
Sander

On 8-jan-2006, at 0:09, Ed Hartnett wrote:


Howdy all!

I compile fortran code on a bunch of platforms. Most f90 compilers
seem to support compiling F77 code, so I thought, in configure.ac, I
could just use:

AC_PROG_FC

But when I do an autoreconf, I get the following automake error:

nf_test/Makefile.am: Preprocessed Fortran 77 source seen but `F77' is undefined
nf_test/Makefile.am:
nf_test/Makefile.am: The usual way to define `F77' is to add `AC_PROG_F77'
nf_test/Makefile.am: to `configure.ac' and run `autoconf' again.
autoreconf: automake failed with exit status: 1

I suspect that's because the files in nf_test are *.F files, instead
of *.F90 files.

Is there any way to just tell autoconf to use the FC compiler for
everything?

Otherwise I have to also call AC_PROG_F77, and then there are issues
because they might find different compilers.

I also have a firm requirement that commercial compilers are to be
preferred to GNU. Of course, everything would be much easier if gcc was
used everywhere, because then I wouldn't have to mess with compilers
from Sun, AIX, Irix, Intel, etc. But that's the way the cookie
crumbles.

In fact, what I end up doing is this:

AC_PROG_F77([mpxlf_r xlf f95 fort xlf95 ifort ifc efc pgf95 lf95
gfortran frt pgf77 f77 fort77 fl32 af77 f90 xlf90 pgf90 epcf90 g77])


AC_PROG_FC([mpxlf_r xlf90 f95 fort xlf95 ifort ifc efc pgf95 lf95
gfortran frt pgf77 f77 fort77 fl32 af77 f90 xlf90 pgf90 epcf90 g77])

If I were Harry Potter, I would wave a magic wand, and make AC_PROG_FC
do everything, including telling me somehow whether it can handle F90
code.

Is there a better way? Or is it required that I have AC_PROG_F77 and
AC_PROG_FC?


Thanks!

Ed
--
Ed Hartnett  -- [EMAIL PROTECTED]









Re: Incorrect directory creation with 'make dist' and EXTRA_DIST

2006-01-05 Thread Sander Niemeijer

Hi,

On 4-jan-2006, at 21:00, Stepan Kasal wrote:


Hello,

On Wed, Jan 04, 2006 at 06:12:11PM +0100, Sander Niemeijer wrote:

distdir: $(DISTFILES)
$(am__remove_distdir)
mkdir $(distdir)
--->$(mkdir_p) $(distdir)/$(top_srcdir)/data


The problem is caused by the following line in Makefile.am:

EXTRA_DIST = $(top_srcdir)/data/foo.txt

There are two possible answers:

1)  The patch attached to this mail fixes it.


The patch works. thanks!


2)  Please use
EXTRA_DIST = data/foo.txt
Automake finds the file in the src tree.

If you are using this in a Makefile.am, then use
EXTRA_DIST = ../../data/foo.txt
or
EXTRA_DIST = $(top_builddir)/data/foo.txt
It is a bit counter-intuitive: top_builddir always expands to
a relative path, so the above mkdir works.


In the example I provided this would indeed work, but in the package
where I first encountered the problem we use the top_srcdir prefix for
files that we need as input to a custom build step. Since I do not know
how to handcraft the nifty first-look-in-builddir-then-in-srcdir
reference to these files in my Makefile.am, I just reference these files
from the source location (i.e. using top_srcdir). This way, separate
builddir builds still work.
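
For concreteness, the kind of rule I mean looks roughly like this
(data/foo.txt and gen-table are invented names):

---
EXTRA_DIST = $(top_srcdir)/data/foo.txt

# custom build step that reads the data file straight from the source tree
table.c: $(top_srcdir)/data/foo.txt
	./gen-table $(top_srcdir)/data/foo.txt > table.c
---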


I have thought about using different references to the files for  
EXTRA_DIST (using top_builddir) and our custom build step (using  
top_srcdir), but this would introduce some nasty inconsistency if by  
accident one of the files ended up in the build dir (we would then be  
building using the one in top_srcdir, but shipping the one in  
top_builddir).


Best regards,
Sander







Incorrect directory creation with 'make dist' and EXTRA_DIST

2006-01-04 Thread Sander Niemeijer

Hi all,

I think I have found a bug in automake.

Attached is an example that reproduces the problem.

The problem is triggered by configuring the foo-1.0 package using a full
path to configure (or by using a build directory that differs from the
source directory) and then running 'make dist'.


If I do e.g.

$ tar -zxf foo-1.0.tar.gz
$ cd foo-1.0
$ /Users/sander/foo-1.0/configure
$ make dist

I will end up with an empty directory 'Users/sander/foo-1.0/data' in  
my newly created foo-1.0 package.


The problem seems to come from the creation of directories for each  
of the EXTRA_DIST entries.

From Makefile.in:
---
distdir: $(DISTFILES)
	$(am__remove_distdir)
	mkdir $(distdir)
--->$(mkdir_p) $(distdir)/$(top_srcdir)/data
	@srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
	topsrcdirstrip=`echo "$(top_srcdir)" | sed 's|.|.|g'`; \
	list='$(DISTFILES)'; for file in $$list; do \
	  case $$file in \
	    $(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \
	    $(top_srcdir)/*) file=`echo "$$file" | sed "s|^$$topsrcdirstrip/|$(top_builddir)/|"`;; \
	  esac; \
	  if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
	  dir=`echo "$$file" | sed -e 's,/[^/]*$$,,'`; \
	  if test "$$dir" != "$$file" && test "$$dir" != "."; then \
	    dir="/$$dir"; \
	    $(mkdir_p) "$(distdir)$$dir"; \
	  else \
	    dir=''; \
	  fi; \
	..
---

Since srcdir (and thus also top_srcdir) is derived from the directory
component of the call to configure (i.e. the '/Users/sander/foo-1.0/'
part of '/Users/sander/foo-1.0/configure'), this will translate into

---
$(mkdir_p) $(distdir)/Users/sander/foo-1.0/data
---
which is clearly wrong.

Furthermore, I wonder why this directory creation for EXTRA_DIST  
entries is included in the first place, since a couple of lines below  
in the Makefile.in we have:

---
  dir=`echo "$$file" | sed -e 's,/[^/]*$$,,'`; \
  if test "$$dir" != "$$file" && test "$$dir" != "."; then \
    dir="/$$dir"; \
    $(mkdir_p) "$(distdir)$$dir"; \
  else \
    dir=''; \
  fi; \
---
which seems to take care of creating the directory if it does not
already exist.





foo-1.0.tar.gz
Description: GNU Zip compressed data


Best regards,
Sander Niemeijer



Re: Force -O0 flags, inhibit the default -O2 flags

2005-09-28 Thread Sander Niemeijer


On Wednesday, Sep 28, 2005, at 17:04 Europe/Amsterdam, Harald Dunkel
wrote:



autoconf sets CFLAGS/CXXFLAGS to "reasonable defaults", that's all. If
these defaults cause problems on your platforms, you have to override
them.



They cannot be called "defaults", if they get a higher priority
than the flags set in my Makefile.am. And I do not think that
setting CXXFLAGS='-g -O2' is a reasonable default, unless autoconf/
automake's assumption is that the user is supposed to debug the
developer's code.

IMHO the priorities for setting build flags should be (highest
first):
1)  user
2)  developer
3)  autoconf/automake

Surely it is OK that autoconf/automake can provide default build
flags somehow, but the flags set by the developer (e.g. AM_CXXFLAGS)
should get a higher priority, if they are set. And automake/autoconf
should provide just the bare minimum.


As a developer you have full control over both the AM_xxxFLAGS and
xxxFLAGS variables. There is no ownership difference between these two
types of flags from a developer/autoconf point of view. If you don't like
the default for CXXFLAGS that autoconf chooses, then just replace it in
the way Ralf explained:


CXXFLAGS=${CXXFLAGS-""}

There is however an important difference between the AM_xxxFLAGS and
xxxFLAGS variables in the sense that a user should be able to override
the xxxFLAGS variables by using './configure xxxFLAGS=<value>', which is
not true for AM_xxxFLAGS.


For this reason you should be careful to only 'set' xxxFLAGS in the way
mentioned above. When you as a developer define your CXXFLAGS, these
should function as a _default_ for users of your package (just as
autoconf allows its default of '-g -O2' to be overridden). A user should
always be able to provide their own CXXFLAGS when calling configure, and
these settings should override any default value for CXXFLAGS that a
developer may have specified.
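
In configure.ac that boils down to something like the following (the
chosen default is only an example):

---
# provide a package default, but keep any CXXFLAGS the user passed to configure
: ${CXXFLAGS="-O2"}
AC_PROG_CXX
---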


The way I see it, the place where AM_xxxFLAGS comes into play is when
you want to _extend_ xxxFLAGS. Because of the way automake orders
AM_xxxFLAGS and xxxFLAGS, it cannot be used to override flags (except for
path specification flags, where the leftmost entry takes precedence).


Suppose we need to extend CPPFLAGS with an additional -I entry. If this
extension should be applied globally to the package, you can add
something like 'CPPFLAGS="-I<dir> $CPPFLAGS"' to configure.ac. However,
if an extension only makes sense within the scope of a single Makefile,
then you can also add this -I option to the AM_CPPFLAGS of that
Makefile.am file.
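
For example (the include directory is hypothetical):

---
# configure.ac: extend CPPFLAGS for the whole package
CPPFLAGS="-I/opt/foo/include $CPPFLAGS"

# Makefile.am: extend it for a single directory only
AM_CPPFLAGS = -I$(top_srcdir)/libfoo
---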






According to the documentation you (as a developer) are not allowed 
to

set CFLAGS/CXXFLAGS (Automake manual, 2.5, or GNU Coding Standards).


Yes, you as a package developer, are supposed to let them pass through
unchanged, if a user specifies them.



The documentation says that these flags are reserved for the user.
It does not say that these variables are reserved for the user and
for Automake. As a development tool, autoconf/automake has to follow
this rule, too. CXXFLAGS is off-limits. Or the documentation should
mention that autoconf/automake might predefine these flags in an
unpredictable manner (e.g. by adding -g to the compiler flags), and
that the developer has no chance to override this without violating
the GNU coding standard, or redefining Automake's internals.


You are perfectly free to provide your own _defaults_. What you
shouldn't do is explicitly set these flags and thus override any
xxxFLAGS settings that the user may have provided at configure time.





The GNU coding standard talks about developers and users only:

http://www.gnu.org/prep/standards/standards.html#Command-Variables

We have a 3rd party, namely autoconf/automake. IMHO the relation between
autoconf/automake and developer should be similar to the relation between
developer and user. The flags set by the developer (e.g. AM_CXXFLAGS) can
be extended or overridden by the user (by setting CXXFLAGS). Similarly,
it should be possible for the developer to extend or override the flags
set by autoconf/automake (_CXXFLAGS) by setting "his" flags
(AM_CXXFLAGS).

This would mean that next to CXXFLAGS and AM_CXXFLAGS there should
be a 3rd variable to be set by autoconf/automake, e.g. AM_AM_CXXFLAGS.
The compile rules should be modified accordingly, e.g.

CXXCOMPILE = $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) \
$(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS)

   ||
  \||/
   \/

CXXCOMPILE = $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) \
$(AM_AM_CPPFLAGS) $(AM_CPPFLAGS) $(CPPFLAGS) \
$(AM_AM_CXXFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS)


This is not necessary; just override the defaults from autoconf using
the approach mentioned above.


Best regards,
Sander





Re: Force -O0 flags, inhibit the default -O2 flags

2005-09-28 Thread Sander Niemeijer

Hi All,

On Wednesday, Sep 28, 2005, at 06:24 Europe/Amsterdam, Ralf Corsepius
wrote:



On Tue, 2005-09-27 at 19:38 -0600, Brian wrote:
We have several files which are not able to be optimized, and when our
mac mini tries to build the project it chokes up when attempting to do
so. It seems incorrect to say that the package developer is the least
qualified to judge compiler flags,

Sorry, but experience tells he is - a developer's view and knowledge are
very limited.

Normally, a developer only knows about the problems he is facing on his
development platforms. Additionally, he might have received a couple of
reports from people using other development platforms. So he might have
some ideas on particular problems on a set of platforms he doesn't have
access to.


and it also seems to avoid the point. The package developer should be
able to override the autotools, and the user should be able to override
the developer.

The point is: Technically, package developers can override the
autotools, if they really like to, however in most of all such cases
they will not be able to do so, because though developers think they
know what they are doing, they do not actually know it.


From my experience, in 99% of all such cases, developers trying to set
up CFLAGS/CXXFLAGS end up outsmarting themselves.

For example:
Harald wants to override -O2
a) to work around a compiler bug
b) because the compilation speed/optimization ratio doesn't seem worth
it.

wrt. a):
He would have to detect if the compiler is affected by the bug and how
to work around it.

What he actually knows is that "i386-redhat-gcc-4.0.1-20050731 -O2" as
shipped with "Fedora Core 3 updates as of 20050920" causes a
segmentation fault with a particular code example and that the same
example doesn't segfault without -O2.
He probably doesn't know how the same compiler behaves on other
architectures, nor does he know how other compilers behave on other
hosts/OSes. I.e. he doesn't know the actual origin of the problem and
therefore can't actually provide a work-around. All he can try is to
reduce the likelihood of hitting this issue.

wrt. b)
His decision is based on personal observation on his development
machines and targets, but he has no chance to know about how this
situation is on other platforms.
* Compilation speed on an i386 w/128MB RAM will be completely different
from that on an N-processor SMP sparc development host w/XX GB RAM.
* Compiler memory requirements will be different on different
development hosts.
* The object sizes for i386 targets will be different from those on
sparcs.
* The "optimal compiler flags" for GCC will be different from those for
a commercial compiler.

So all autoconf and the developer do here, must be somehow
wrong/sub-optimal somewhere. The only person to answer these questions
for a particular host is the "system integrator", i.e. the package's
user.


I agree that the 'system integrator' should have the final say. But my
problem is that, with the current way automake handles the flags
ordering, I can't let the user of my package have the final say! At
least not in an elegant way, that is (the solution I provided in my
previous post in this thread is the best I could come up with).


In our case there is just one file that takes an extremely long time to 
compile if we leave optimization on. We could disable optimization for 
the whole project, but then the runtime performance of our software 
would be too slow.


For the users of our software it will probably not be a problem to wait 
for the compiler to finish, because they will do the compilation only 
one time (so they can just compile everything using their preferred 
optimization level).
But for our developer team it is important to have both fast 
compilations _and_ fast runtime performance (for running tests). So 
what I want is a way to just disable optimization for this single file 
so the developers of our package (who work on different platforms by 
the way) don't have to wait unnecessarily long.


As a side note, there are other considerations (which take some time to
explain) why other solutions, such as splitting this file up into
smaller files, are not viable.


Mind that there are no platform specific tests involved in what I want. 
All I want is a way to have someone that runs configure be able to say: 
'disable optimization for this one file' and 'here are my default 
CFLAGS' and 'here is my specific flag to disable optimization for a 
file'. By using such an approach the one that runs configure has full 
say over everything (no possible faulty enabling/setting of options by 
the configure script).


I implemented this in configure.ac by introducing an AC_ARG_VAR variable
called DISOPT_FLAG. If the caller of configure does not specify a value
for it, then no optimization disabling takes place. But if it is set,
then the specified flag is used for the compilation of the large file
(and this file only). This means

Re: Force -O0 flags, inhibit the default -O2 flags

2005-09-26 Thread Sander Niemeijer

Hi Brian,

I had the exact same problem.

On Saturday, Sep 24, 2005, at 20:05 Europe/Amsterdam, Brian wrote:

I have a need to force three files to not be optimized. I've followed
the instructions in the manual for setting them up in their own library,
and then using LIBADD to combine it with the original library.

If I use AM_CXXFLAGS, the -O0 is superseded by a -O2. The same occurs if
I use libx_la_CXXFLAGS. I am not allowed to override CXXFLAGS (and don't
want to).


The 'convenience library' solution indeed does not work because CXXFLAGS
is always put after AM_CXXFLAGS/libx_la_CXXFLAGS. This is especially
problematic if you want to let the user provide both the default
optimization compiler flag _and_ the specific compiler flag that disables
optimization for those few files that need to be compiled without
optimization (e.g. for the Sun Workshop compiler this would be -xO4 and
-xO0).


There was also a thread about this issue on the mailing list in December
2004. My final mail in that thread stated the same problem:
http://sources.redhat.com/ml/automake/2004-12/msg00075.html


I have since found a solution (it is more of a hack, but at least it 
works) that handles the specific override of the optimization flag for 
a single file. The trick I used is based on a recursive call to make (I 
will give the example for C, but for C++ it should work the same):


Suppose we want to have the file foo.c compiled without optimization and
all other files with the default optimization. Because we want the user
of our package to provide his own compiler-specific option to disable
optimization, we use an AC_SUBST variable called DISOPT_FLAG (which will
be set to -O0 for gcc).

Now add the following to your Makefile.am:
---
FOO_O = foo.o
$(FOO_O): foo.c
	$(MAKE) foo.o CFLAGS="$(CFLAGS) $(DISOPT_FLAG)" FOO_O=dummy-foo.o
---
If you use foo.c in a libtool library you should also add the same rule
for the .lo file:

---
FOO_LO = foo.lo
$(FOO_LO): foo.c
	$(MAKE) foo.lo CFLAGS="$(CFLAGS) $(DISOPT_FLAG)" FOO_LO=dummy-foo.lo
---
The way this works is as follows. First FOO_O is set to foo.o, so our
$(FOO_O) rule overrides the default .c.o/.c.lo rule. This rule
recursively calls make, asking it to build foo.o again, but now CFLAGS is
extended with our DISOPT_FLAG (at the end, so it really overrides any
compiler optimization flags that were already in CFLAGS) and in addition
FOO_O is set to some dummy value, so our own build rule is disabled and
the default .c.o suffix rule from automake is used.


The nice thing about this approach is that the dependency generation 
rules still work and all settings such as CPPFLAGS, CC, etc. are nicely 
preserved.
There is however also a downside to this approach (and there may be 
more that I haven't encountered yet): your foo.c and your generated 
foo.(l)o should be in the same directory as your Makefile (my approach 
depends on the default .c.o suffix rule generated by automake, but this 
rule does not support alternate source/object directories).
In our project the source files are in a different location, but the .o
files do end up in the directory where the Makefile is located, so I
added the following additional rule to copy foo.c from the source
directory to the Makefile directory (this rule also works when you use
separate source and build directories):

---
foo.c: $(top_builddir)//foo.c
	cp `test -f '$(top_builddir)//foo.c' || echo '$(srcdir)/'`$(top_builddir)//foo.c foo.c

---

Best regards,
Sander Niemeijer





Re: adding specific C flags for a SINGLE source file

2004-12-16 Thread Sander Niemeijer
On Friday, Dec 10, 2004, at 21:35 Europe/Amsterdam, Alexandre
Duret-Lutz wrote:

user_CFLAGS=$CFLAGS
AC_PROG_CC
if test "x$user_CFLAGS" = x; then
  # If the user didn't specify CFLAGS, then CFLAGS contains
  # a subset of -g -O2 selected by AC_PROG_CC.  This is not
  # a user setting, and we want to be able to override this
  # locally in our rules, so put these flags in a separate
  # variable and empty CFLAGS.
  AC_SUBST([DEFAULTFLAGS], [$CFLAGS])
  CFLAGS=
fi
and in Makefile.am use
  foo_CFLAGS = $(DEFAULTFLAGS)
and
  libfoo_a_CFLAGS = $(DEFAULTFLAGS) $(O3)
as appropriate.
($(O3) being the Makefile variable that contains -O3 if the compiler
supports it).
Does that sound sensible?

This is indeed a solution I hadn't thought about yet. But unfortunately
it has some downsides. If you have a project with a lot of targets (as
we have) and only one target needs the $(O3) override, then this
approach would force you to create per-target CFLAGS entries for all
other targets (e.g. 'bar_CFLAGS = $(DEFAULTFLAGS)'). Furthermore, using
a 'make CFLAGS=<flags>' will not work anymore (you will now need to call
'make DEFAULTFLAGS=<flags>').

By the way, I totally agree with the argument that the user should 
always have the last say, which is indeed possible by using a variable 
for the flags you are appending (in your case the $(O3) and in my case 
I have called it $(DISOPT_FLAG)).

After some more pondering I think I might know an even cleaner solution 
to the problem (which would require a change to automake however): How 
about omitting CFLAGS when target_CFLAGS are provided?

This means one would be able to write:
libfoo_a_CFLAGS = $(AM_CFLAGS) $(CFLAGS) $(O3)
and if libfoo_a_CFLAGS is defined empty no CFLAGS will be used at all.
Of course, introducing this would break backward compatibility. So how 
about enabling this behavior by using some automake flag in 
AM_INIT_AUTOMAKE? By default a target_CFLAGS COMPILE target will 
contain '$(target_CFLAGS) $(CFLAGS)', but if this special automake flag 
is provided it will only be '$(target_CFLAGS)' (but only when using 
target specific flags, the '$(AM_CFLAGS) $(CFLAGS)' combination for the 
default COMPILE rule will _not_ be modified). Does this sound sensible? 
The implementation effort should at least be quite low as far as I can 
guess.

Best regards,
Sander




Re: adding specific C flags for a SINGLE source file

2004-12-16 Thread Sander Niemeijer
user_CFLAGS=$CFLAGS
AC_PROG_CC
if test "x$user_CFLAGS" = x; then
  # If the user didn't specify CFLAGS, then CFLAGS contains
  # a subset of -g -O2 selected by AC_PROG_CC.  This is not
  # a user setting, and we want to be able to override this
  # locally in our rules, so put these flags in a separate
  # variable and empty CFLAGS.
  AC_SUBST([DEFAULTFLAGS], [$CFLAGS])
  CFLAGS=
fi
and in Makefile.am use
  foo_CFLAGS = $(DEFAULTFLAGS)
and
  libfoo_a_CFLAGS = $(DEFAULTFLAGS) $(O3)
as appropriate.
($(O3) being the Makefile variable that contains -O3 if the compiler
supports it).

I just noticed something else I don't like about this approach. If you
do it this way a user will never be able to change CFLAGS and still be 
able to only provide specific optimization flags for libfoo. For 
example, it won't be possible to have CFLAGS='-g -O0' but still have 
libfoo compiled with e.g. '-O2'.

Although I agree that users should always be able to have the last say,
I do not think that a user-provided CFLAGS setting should override
_all_. If it were possible to have '$(CFLAGS) $(O3)' for libfoo (which
is not the case in the code example above), then a user would still be
able to have the last say if the O3 variable were an AC_ARG_VAR. He
would be able to change the global default flags by providing his own
CFLAGS, but in addition would also still be able to tune the
optimization of _just_ libfoo by supplying his own value for O3.
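
A minimal configure.ac sketch of that idea (the default value is just an
example; AC_ARG_VAR also takes care of the AC_SUBST):

---
AC_ARG_VAR([O3], [extra optimization flag applied to libfoo only])
# fall back to -O3 if the user did not set O3 at configure time
test "x$O3" = x && O3=-O3
---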

With the example above, a user can only tune both global CFLAGS _and_ 
libfoo optimization flags if he used a 'make DEFAULTFLAGS=... O3=...', 
which is in my opinion not an ideal approach (it would defeat the 
purpose of having a CFLAGS variable).

Best regards,
Sander



Re: adding specific C flags for a SINGLE source file

2004-12-13 Thread Sander Niemeijer
On Friday, Dec 10, 2004, at 21:39 Europe/Amsterdam, Alexandre
Duret-Lutz wrote:

That documentation was delivered to your mailbox 10 days ago.
It will be in 1.9.4.
  From: Alexandre Duret-Lutz <[EMAIL PROTECTED]>
  Subject: RFC for new FAQ entry: Flag Variables Ordering
  To: [EMAIL PROTECTED]
  Date: Tue, 30 Nov 2004 01:49:07 +0100

Totally missed that one. Sorry for that.
I just read it, and it contains exactly the information I was hoping to 
find. Very nice!

Best regards,
Sander




Re: svn copy conflicts with autotools ?

2004-12-02 Thread Sander Niemeijer
Hi all,
We had the same problems, so for the (re)bootstrapping of our project 
we now use our own 'bootstrap' script (which is included in both CVS 
and the source package):

---
#!/bin/sh
echo "---removing generated files---"
# products from ./configure and make
if test -f Makefile ; then
# touch all automatically generated targets that we do not want to
# rebuild
  touch aclocal.m4
  touch configure
  touch Makefile.in
  touch config.status
  touch Makefile
# make maintainer-clean
  make -k maintainer-clean
fi

# products from autoreconf
rm -Rf autom4te.cache
rm -f Makefile.in aclocal.m4 compile config.guess config.sub configure \
  config.h.in depcomp install-sh ltmain.sh missing py-compile ylwrap
if test "$1" != "clean" ; then
  # bootstrap
  echo "---autoreconf---"
  autoreconf -v -i -f
fi
---
Feel free to use this as a template for your own project(s).
Best regards,
Sander Niemeijer
On Thursday, Dec 2, 2004, at 10:31 Europe/Amsterdam, Eric PAIRE wrote:
Bob Friesenhahn wrote:
On Wed, 1 Dec 2004, Eric PAIRE wrote:
If this solution is so obvious, I don't understand why the autotools
developers have not already set up a tool which automatically removes
the files generated by the autotools (perhaps this tool exists and I
just don't know about it).

It is called 'make maintainer-clean'.  Unfortunately, if time stamps 
are wrong, the Makefile may try to rebuild itself before executing 
the target.

I tried this command, but it does not clean up all of the configure,
stamp*, aclocal.m4, ... files that were generated by the autotools.
I guess that these must be removed by hand (or better, ignored in the
versioning system, if it is possible to get an exhaustive list of the
generated files).

Eric





Re: [Fwd: Can nobase_pkgdata_DATA take directories? Does not seam so.]

2004-03-12 Thread Sander Niemeijer
On Tuesday, Mar 9, 2004, at 23:04 Europe/Amsterdam, Bob Friesenhahn
wrote:

Wildcards are for lazy programmers who are willing to allow wrong
files to be built or distributed by accident.
I don't agree when it comes to a bunch of files (>100) that get 
generated automatically by a tool (such as documentation files by 
doxygen). You don't want to update such a list manually each time the 
list changes.
I'm not saying a simple wildcard system would be the solution, but some 
automation would definitely be in order.

The existing Automake -hook targets provide package authors with
considerable control over what gets distributed, installed, and
uninstalled.  Since they can execute arbitrary shell code, wildcards
can be supported for any target which supports -hook.
Using run-time Makefile includes is not portable.
I hope you are referring to using 'include' within a Makefile and not 
the automake 'include' directive that I use within my Makefile.am (BTW, 
I'm aware that changing my include file will require a rerun of 
automake). The automake 'include' directive /is/ portable, right?

Regards,
Sander




Re: [Fwd: Can nobase_pkgdata_DATA take directories? Does not seam so.]

2004-03-11 Thread Sander Niemeijer
On Friday, Mar 5, 2004, at 19:54 Europe/Amsterdam, Hans Deragon wrote:

Sander Niemeijer wrote:
Hi Hans,
Automake does not support wildcard specification of files (e.g. whole 
directories). This is to make sure that:
- a 'make dist' only includes those files from your subdirectories 
that really should go in a distribution
- a 'make distcheck' is able to check whether any generated files in 
your subdirectories are properly cleaned by a 'make distclean'
- a 'make uninstall' does not remove any files a user may have put in 
one of your subdirectories
Mmm... the dist target takes directories via EXTRA_DIST, which in a
sense is a wildcard.  As for the installation, why shouldn't the *_DATA
variables allow wildcards?  It's up to the author to check whether any
unwanted files are being installed.

The only place where wildcards should not be allowed is in the
uninstall target.  If the uninstall target uses *_DATA to determine
which files to uninstall, maybe the code should be changed.  A
dynamically created file should record which files have actually been
installed.  The uninstall target would then use the content of this
dynamically created file to uninstall only the relevant files and yes,
directories. :)
There are however also some technical problems involved. If all your 
data files are already there then using some wildcard feature could be 
an option I guess. However, how would one deal with the case where the 
files you specify through wildcards need to be generated first?
This is the case we had to solve for our BEAT package. The wildcard 
files are documentation files generated by doxygen. In our package 
these files are treated similar to other generated files (such as .c 
files generated by tools such as lex and yacc) in the sense that they 
are included with a distribution (so the user does not have to create 
the documentation himself and so doesn't need to have doxygen 
installed). But if the user does have the proper documentation 
generation tools, he is always able to make changes to the sources and 
recreate the documentation if he wants.
The problem now comes when you want to create a dependency for the 
install/dist target on the generated files. We can distinguish two 
cases:

1) The generated files are not yet there and we want a dependency rule 
to create the files.
This is of course a chicken-egg problem. Because the generated files 
are not there yet, a wildcard specification would turn up empty and the 
files would never be generated. Unless you can specify some suffix 
rules to create the generated files (which we can't for our doxygen 
generated files) you are forced to enumerate each and every file 
somewhere.
In our BEAT package I have tried to create a proper solution for this.
When the generation tool is run, I keep track of the files it generates
and create a Makefile include for this (but the include file only gets
updated if any change to the list of files was found, which prevents
unnecessary reruns of automake). This Makefile include is stored in our
CVS and included in our distribution. There are some bootstrapping
issues involved (which you rightly pointed out) that I'll describe
below.
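
A rough sketch of such a generation step (all file and directory names
here are invented; the real implementation in BEAT differs):

---
#!/bin/sh
# rebuild the automake include only when the list of generated files
# changed, so automake/configure are not rerun unnecessarily
( echo "nobase_pkgdata_DATA = \\"
  find html -type f | sort | sed -e 's/$/ \\/' -e '$s/ \\$//'
) > doc-files.lst.new
if cmp -s doc-files.lst.new doc-files.lst; then
  rm -f doc-files.lst.new
else
  mv doc-files.lst.new doc-files.lst
fi
---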

2) The generated files are already there but the sources from which 
they were generated have changed, so we want a dependency rule to 
automatically update our generated files.
In this case the wildcard specification would give the proper list of 
files, but the problems we may now face are that each file within the 
wildcard specification could have different dependencies, or, which is 
the case for our doxygen rules, that the generated files are dependent 
on a wildcard specification of source files.
Because my head was already cracking when trying to solve issue 1) and 
because 2) was not really important for us, I have not implemented any 
automatic regeneration rules for our documentation in our package :-)


And probably even more reasons...
For a project I am working on we were faced with a similar problem. I 
ended up writing my own script that creates a 'makefile include' 
containing rules for all my 'wildcard files' (in our case generated 
documentation files). If you want to see how we did this, just have a 
look at our open-source BEAT package (downloadable from 
http://www.science-and-technology.nl/beat/).
I tried your trick and it works.  Thanks, I have two questions though.
I have the following line:

  include file.lst

Now, if file.lst does not exist, make complains with an error and 
aborts, even though I created a target to specify how to build it.  Do 
you have a suggestion?
This is the Makefile include bootstrapping problem I mentioned. The way
I do this is to start with an empty include file ('touch file.lst') and
then perform the generation (this will update file.lst and include rules
for all generated files).
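
In shell terms the bootstrapping order is roughly the following (a sketch
only; your file name and the target that runs the generation will differ):

---
touch file.lst      # empty placeholder so automake's 'include' succeeds
autoreconf -v -i -f
./configure
make                # the generation step then rewrites file.lst with the real list
---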

Re: [Fwd: Can nobase_pkgdata_DATA take directories? Does not seam so.]

2004-03-08 Thread Sander Niemeijer
Hi Hans,

Automake does not support wildcard specification of files (e.g. whole 
directories). This is to make sure that:
- a 'make dist' only includes those files from your subdirectories that 
really should go in a distribution
- a 'make distcheck' is able to check whether any generated files in 
your subdirectories are properly cleaned by a 'make distclean'
- a 'make uninstall' does not remove any files a user may have put in 
one of your subdirectories

And probably even more reasons...

For a project I am working on we were faced with a similar problem. I 
ended up writing my own script that creates a 'makefile include' 
containing rules for all my 'wildcard files' (in our case generated 
documentation files). If you want to see how we did this, just have a 
look at our open-source BEAT package (downloadable from 
http://www.science-and-technology.nl/beat/).

Regards,
Sander
On Wednesday, Mar 3, 2004, at 15:24 Europe/Amsterdam, Hans Deragon wrote:

Mmm...  Posted this message two days ago but have yet to receive any 
answer.  I believe my question is pretty basic, so anybody care to 
give me an answer? Surely someone with years of experience can answer 
this newbie's question?

Thanks in advance,
Hans Deragon
--
Consultant en informatique/Software Consultant
Deragon Informatique inc. Open source:
http://www.deragon.biz  http://autopoweroff.sourceforge.net
mailto://[EMAIL PROTECTED] (Automatically poweroff home servers)
 Original Message 
Greetings.
nobase_pkgdata_DATA = \
  engine \
  smarty/lib \
  smarty/plugins \
  smarty/themes/examples
Above, all of the entries in nobase_pkgdata_DATA are directories.  
However, when
I am performing "make install", I get the error (Same error for the 
other
directories):

cp: omitting directory `./smarty/themes/examples'

This is because install.sh calls the cp command without the -r 
parameter.

Is it possible to define directories in nobase_pkgdata_DATA?  I want to
avoid listing the files because they change a lot during development and
can become numerous.

In the Automake documentation, all the examples use files, but nowhere
have I found any warning that directories do not work.

Best regards,
Hans Deragon
--
Consultant en informatique/Software Consultant
Deragon Informatique inc. Open source:
http://www.deragon.biz  http://autopoweroff.sourceforge.net
mailto://[EMAIL PROTECTED] (Automatically poweroff home servers)






Re: Library dependencies & make install

2004-01-19 Thread Sander Niemeijer
I second this. With our toolbox containing both modules and shared
libraries we have stumbled upon this exact same problem. The modules
needed to be linked against a shared library, but automake was
installing them before the shared library (which resulted in either
linking against old libraries that were already installed, or in a
linking failure). Our current solution is to link the .o files of the
shared libraries into the modules via an internal library (we had to
create an extra noinst_LTLIBRARIES target for this), but this results
in a lot of big module files. I would therefore really welcome it if
automake gained support for installing libtool libraries in the
appropriate order.
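
A rough sketch of the workaround described above (library and module
names are invented):

---
lib_LTLIBRARIES = libcore.la
noinst_LTLIBRARIES = libcore_internal.la
pkglib_LTLIBRARIES = module.la

libcore_la_SOURCES = core.c

# the same objects again, as a convenience library
libcore_internal_la_SOURCES = core.c

# the module links the objects directly instead of the (possibly not yet
# installed) libcore.la, which is what makes the module files so big
module_la_SOURCES = module.c
module_la_LIBADD = libcore_internal.la
module_la_LDFLAGS = -module -avoid-version
---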

Regards,
Sander Niemeijer
On Thursday, Jan 15, 2004, at 21:03 Europe/Amsterdam, Bob Friesenhahn
wrote:

I am using Automake 1.8 with libtool.  Automake is doing a good job at
building libraries in a correct order based on dependencies (or I
could just be lucky) but 'make install' is not paying any attention to
library dependencies.  It appears that libraries are installed in the
same order as specified by lib_LTLIBRARIES.  If the ordering of
lib_LTLIBRARIES does not jive with the library dependency order, then
libtool fails to re-link the library because some libraries it depends
are not installed yet.  Even worse, it may appear that installation is
successful, but some of the libraries are accidentally linked with
older versions of the libraries which were already installed.
It seems to me that Automake should compute an optimum library
installation order based on the specified libtool library (.la)
dependencies.  This would help ensure that installation errors do not
occur due to some hap-hazard lib_LTLIBRARIES list order (e.g. they
could be in alphabetical order).
Bob
==
Bob Friesenhahn
[EMAIL PROTECTED]
http://www.simplesystems.org/users/bfriesen







Re: Fortran 9x support

2003-10-10 Thread Sander Niemeijer
Hmmm... Seems I failed to read the right postings in the autoconf 
archive.
I did some catching up and it all makes sense now. Sorry for the 
noise...

Regards,
Sander
On Thursday, Oct 9, 2003, at 20:29 Europe/Amsterdam, Steven G. Johnson
wrote:

On Thu, 9 Oct 2003, Sander Niemeijer wrote:
I can understand keeping F77/FFLAGS/FLIBS/AC_PROG_F77 for backwards
compatibility. However, the current approach is to keep using these
macros and variables for f77 and use the new FC* ones only for f90 and
upwards. My question is whether it is not possible to also include f77
in the FC approach?
It depends on what you mean.  If you have F77 code that is compatible 
with
the latest Fortran standards, sure, you can compile it with $FC.

  - Only allow the user to use either AC_PROG_F77 or AC_PROG_FC (the 
old
or the new way of doing fortran).
I think you are missing the point here.  The reason for keeping F77,
FFLAGS, etcetera, is not just for compatibility with old Makefile.in
files.  The reason is that Fortran 77 and Fortran 9x are essentially
different languages, and a number of people need to compile both in the
same project, using separate compilers and flags.  (This issue came up
whenever Fortran 9x support was discussed on the autoconf mailing 
list.)

So, it is essential that a user be able to call both AC_PROG_F77 and
AC_PROG_FC simultaneously.  This was the design goal (otherwise,
AC_PROG_FC would just be an alias for AC_PROG_F77.)
Steven






Re: Fortran 9x support

2003-10-09 Thread Sander Niemeijer
Hi,

I can understand keeping F77/FFLAGS/FLIBS/AC_PROG_F77 for backwards 
compatibility.
However, the current approach is to keep using these macros and 
variables for f77 and use the new FC* ones only for f90 and upwards. My 
question is whether it is not possible to also include f77 in the FC 
approach?

If so, this might open up a different way to go forward:
 - Only allow the user to use either AC_PROG_F77 or AC_PROG_FC (the old 
or the new way of doing fortran).
 - Use FFLAGS and FLIBS for both F77 and FC. This would make the FC 
approach compatible with GNU make which uses the FC/FFLAGS combination 
for compiling fortran. The fact that we only allow F77 or FC will 
prevent conflicts with these variables.
 - After some time the F77 macro/variable set might be phased out in 
favor of using the FC approach to do f77 compilation.

Just my 2c.

Regards,
Sander
On Friday, Oct 3, 2003, at 20:12 Europe/Amsterdam, Steven G. Johnson
wrote:

Dear Automake folks,

Recently, we've committed a set of patches to autoconf CVS to support
newer revisions to the Fortran language standard.  The current plan is 
to
not document these (yet) in the next autoconf release, but it might be 
a
good idea to start thinking about how to support these in automake,
especially if revisions to the autoconf macros are required.

I've attached a draft of the documentation for the new macros.  The 
basic
idea is to divide Fortran support into two categories: legacy Fortran 
77,
and modern Fortran:

* For legacy F77 code, there are the current AC_PROG_F77 etc. macros to
find $F77, $FFLAGS, $FLIBS, and so on.
* For newer Fortran dialects, the idea is to treat them more like C and
C++, in that we don't attempt to have separate macros and variables for
separate language versions going forward.  Instead, there are a new 
set of
macros AC_PROG_FC, etcetera (just like the F77 macros with s/F77/FC/) 
to
find output variables $FC, $FCFLAGS, $FCLIBS, etcetera.

Basically, automake should just compile .f90, .f95, etcetera files with
these new output variables.  There is one additional twist: some 
Fortran
compilers (xlf, ifc) expect all source files to end with .f, and 
require a
special flag for other extensions (even "standard" ones like .f90).  
So,
there is a new autoconf macro AC_FC_SRCEXT to figure out how to do
this.  For each source-file extension required, the user should call 
it,
e.g.:

AC_FC_SRCEXT(f90)
AC_FC_SRCEXT(f95)
...
This defines output variables $FCFLAGS_f90, $FCFLAGS_f95, ... that you 
can
use to compile .f90 and .f95 files.  Due to compiler quirks, however,
these flags must appear immediately before the source file to be 
compiled
(and only one source file can be compiled at a time).  For example, you
might do:

foo.o: foo.f90
	$(FC) -c $(FCFLAGS) $(FCFLAGS_f90) foo.f90
Cordially,
Steven G. Johnson






Re: convenience binaries

2003-09-22 Thread Sander Niemeijer
Yes. At least for libraries. For libtool you use noinst_LTLIBRARIES to 
create convenience libraries. These are often used as intermediate 
libraries for a series of object files that are later on included in a 
final executable or library which /will/ be installed.

Regards,
Sander
On Monday, Sep 22, 2003, at 14:31 Europe/Amsterdam, Andrew Suffield
wrote:

On Mon, Sep 22, 2003 at 10:01:24PM +1000, Robert Collins wrote:
On Mon, 2003-09-22 at 21:22, Warren Turkal wrote:
Robert Collins wrote:
yes,
noinst_PROGRAMS = convenience_binaries
Can these convenience programs be built for the host arch in a
cross-compiled environment?
probably, you'll likely need to override the default build recipe
though.. I haven't tried, perhaps someone else here has more details.
Can it ever be correct for a noinst object to be built for the target
environment? By definition, they should only exist on the build
system.
--
  .''`.  ** Debian GNU/Linux ** | Andrew Suffield
 : :' :  http://www.debian.org/ |
 `. `'  |
   `- -><-  |






Re: Problem using same .y file in multiple libs and AM_YFLAGS=-d

2003-09-18 Thread Sander Niemeijer
After some further thinking I managed to come up with a solution myself.

It seems that it is possible to use (the generated) foo.c for 
libfoo_internal_la_SOURCES instead of foo.y. The rule to create foo.c 
from foo.y is already part of the rules for libfoo.la, so the 
dependencies are correct. And of course the biggest advantage is that 
automake now only creates one rule to generate foo.h.
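
Applied to the example Makefile.am from my original mail, the change
amounts to something like:

---
AM_YFLAGS = -d
lib_LTLIBRARIES = libfoo.la
noinst_LTLIBRARIES = libfoo_internal.la
BUILT_SOURCES = foo.h

libfoo_la_SOURCES = foo.y
libfoo_la_LDFLAGS = -version-info 0:0:0
# use the generated foo.c here, so only one rule for foo.h is emitted
libfoo_internal_la_SOURCES = foo.c
---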

Regards,
Sander
On Wednesday, Sep 17, 2003, at 12:12 Europe/Amsterdam, Sander Niemeijer
wrote:

Hi all,

I am using a yacc/bison source file in a libtool library, but I create 
both an installable static/shared version of the library as well as an 
internal convenience library (so I can later on link the PIC objects 
directly into another shared library). Since I need the header file 
with the parser definitions I use the "AM_YFLAGS = -d" option to have 
automake do this for me.

The problem is that automake (1.7.7) doesn't see that the .y source 
files in the _SOURCES definitions of two libraries (the installable 
one and the internal convenience one) are actually the same and thus, 
because of the AM_YFLAGS definition, creates two identical rules in 
the final Makefile for the .h file. This is something the make program 
obviously doesn't like and I therefore receive a 'warning: overriding 
commands for target' warning.

A simple Makefile.am example that shows this behavior is:
---
AM_YFLAGS = -d
lib_LTLIBRARIES = libfoo.la
noinst_LTLIBRARIES = libfoo_internal.la
BUILT_SOURCES = foo.h

libfoo_la_SOURCES = foo.y
libfoo_la_LDFLAGS = -version-info 0:0:0
libfoo_internal_la_SOURCES = foo.y
---
Does anybody know how I can get rid of the make warnings (and, just 
out of interest, will these warnings stay warnings when I use a 
different make program instead of GNU make, or might other make 
programs see the double .h rule definition as an error)?

P.S. I also have, for anyone who is interested, a small example 
package available with just a single .y file that reproduces the 
problem.

Regards,
Sander Niemeijer







Problem using same .y file in multiple libs and AM_YFLAGS=-d

2003-09-17 Thread Sander Niemeijer
Hi all,

I am using a yacc/bison source file in a libtool library, but I create 
both an installable static/shared version of the library as well as an 
internal convenience library (so I can later on link the PIC objects 
directly into another shared library). Since I need the header file 
with the parser definitions I use the "AM_YFLAGS = -d" option to have 
automake do this for me.

The problem is that automake (1.7.7) doesn't see that the .y source 
files in the _SOURCES definitions of two libraries (the installable one 
and the internal convenience one) are actually the same and thus, 
because of the AM_YFLAGS definition, creates two identical rules in the 
final Makefile for the .h file. This is something the make program 
obviously doesn't like and I therefore receive a 'warning: overriding 
commands for target' warning.

A simple Makefile.am example that shows this behavior is:
---
AM_YFLAGS = -d
lib_LTLIBRARIES = libfoo.la
noinst_LTLIBRARIES = libfoo_internal.la
BUILT_SOURCES = foo.h

libfoo_la_SOURCES = foo.y
libfoo_la_LDFLAGS = -version-info 0:0:0
libfoo_internal_la_SOURCES = foo.y
---
Does anybody know how I can get rid of the make warnings (and, just out 
of interest, will these warnings stay warnings when I use a different 
make program instead of GNU make, or might other make programs see the 
double .h rule definition as an error)?

P.S. I also have, for anyone who is interested, a small example package 
available with just a single .y file that reproduces the problem.

Regards,
Sander Niemeijer




Re: RFC: Building a Shared Library (take 2)

2003-08-01 Thread Sander Niemeijer
FYI: It is indeed possible to have libtool create static libraries even 
when you provided --enable-shared and --disable-static to ./configure. 
This happens for instance if you use options like:

libfoo_la_CFLAGS = -static
libfoo_la_LDFLAGS = -static


Regards,
Sander Niemeijer
On Friday, Aug 1, 2003, at 08:02 Europe/Amsterdam, Tim Van Holder
wrote:

"Tim" == Tim Van Holder <[EMAIL PROTECTED]> writes:
[...]

 Tim> It currently isn't; --enable-shared --disable-static
still builds
 Tim> static libraries.  There's a thread on the libtool
mailing list about
 Tim> this; seems some people want this behaviour changed.
Are you sure?? AFAICT --disable-static correctly disables the
building of static libraries for the whole package (I'm using it
every day).
Hmm - I'm reasonably certain that I saw that behaviour recently
on an AIX - but I could be wrong.  Just ignore my comments then :-)








Multiple makefiles and rules for CONFIG_HEADER and CONFIG_FILE entries

2003-02-13 Thread Sander Niemeijer
Hi,

I have a package that uses multiple makefiles: one toplevel makefile
and some makefiles in subdirectories.
The build in several of these subdirectories requires the use of a
single include file (let's call it foo.h) that is created from foo.h.in
by configure. Now I recently discovered that automake places the rules
to rebuild foo.h from foo.h.in (with the use of a stamp-h* file) in the
Makefile of the directory where foo.h will appear if that directory has
a Makefile, and otherwise puts them in the toplevel Makefile. But it
won't put these rules in any of the other Makefiles.
Of course this leaves me with a problem if I change foo.h.in and
perform a make, since for targets in my subdirectories that depend on
foo.h, make won't try to rebuild foo.h and thus these targets won't be
rebuilt either.

I would like to know whether anybody knows a way to have the foo.h.in 
-> foo.h (with appropriate stamp-h* usage) dependencies included in all 
my makefiles that contain targets with dependencies on foo.h.

I'm currently considering to let configure create a separate foo.h in 
each of the directories that need foo.h (through a 
AC_CONFIG_HEADERS([dir1/foo.h:inc/foo.h.in dir2/foo.h:inc/foo.h.in]), 
but I would rather generate only one version of foo.h in the directory 
where foo.h.in resides.

And I already know that using only one Makefile for my project would 
also solve the problem, but I would rather like to know whether my 
problem would also be solvable within a multiple Makefile project.

Regards,
Sander





Re: making script executable?

2003-02-04 Thread Sander Niemeijer
I assume the script is part of your distribution and only needs to have 
executable permission there, right?
In order to give it these executable permissions when you build your 
distribution use a
---
dist-hook:
	chmod 755 your_script
---
in your top-level Makefile.am.

Hope this helps.

Regards,
Sander Niemeijer

On Monday, Feb 3, 2003, at 23:20 Europe/Amsterdam, [EMAIL PROTECTED] 
wrote:



I have a shell script which I want to run as part of a testsuite.  
However
when I do a 'make distcheck' this script (which does not get 
configured or
anything at build time) ends up with execute permissions turned off.
Since I want to be able to properly deal with a read only source tree,
what should I do?

Thanks
-Dan










Fix for _AC_AM_CONFIG_HEADER_HOOK (CONFIG_HEADER stamp file creation) bug

2003-02-03 Thread Sander Niemeijer
This replacement for the _AC_AM_CONFIG_HEADER_HOOK macro (located in
m4/init.m4) fixes an incorrect naming of the stamp-h* files when only
specific headers are recreated through config.status.

The problem can be reproduced by creating a configure.ac file 
containing:
---
AC_CONFIG_HEADERS([config.h])
AC_CONFIG_HEADERS([someotherfile.h])
---
When configure first creates the .h files from the .h.in files, the
proper stamp-h1 and stamp-h2 files are created. Now modify
someotherfile.h.in and call 'make' again. This results in a call to
'$(SHELL) ./config.status someotherfile.h' to recreate someotherfile.h.
However, config.status now incorrectly creates a stamp-h1 file instead
of a stamp-h2 file.

The following proposed replacement for _AC_AM_CONFIG_HEADER_HOOK fixes 
this issue:
---
AC_DEFUN([_AC_AM_CONFIG_HEADER_HOOK],
[_am_stamp_count=0
_am_local_stamp_count=0
for ac_stamped_file in $config_headers : ; do
  _am_local_stamp_count=`expr ${_am_local_stamp_count} + 1`
  test $ac_stamped_file = $1 && _am_stamp_count=$_am_local_stamp_count
done
echo "timestamp for $1" >`AS_DIRNAME([$1])`/stamp-h[]$_am_stamp_count])
---

P.S. A better replacement would be to have the $config_headers list
passed along as a variable to the _AC_AM_CONFIG_HEADER_HOOK macro. This
would keep the macro independent of any outside variables. But on the
other hand this would mean that every invocation of this macro would
have to be changed, and I don't really know whether that could break any
other packages (like e.g. autoconf) that might depend on this macro.
Does anybody have any idea about that?

Regards,
Sander Niemeijer