Re: [PATCH] Fix a typo on secondary expansion example

2012-10-28 Thread Paul Smith
On Sat, 2012-10-27 at 01:22 +0900, Namhyung Kim wrote:
> * doc/make.texi: Fix a typo

Thanks; this fix is already present in the latest version of the
document.


___
Bug-make mailing list
Bug-make@gnu.org
https://lists.gnu.org/mailman/listinfo/bug-make


Re: make: eval template example bug?

2012-11-07 Thread Paul Smith
On Wed, 2012-11-07 at 12:33 +0100, Daniel Borkmann wrote:
> Hi together,
> 
> I have make 3.81 on a Debian stable machine. I tried out the example
> from the eval function given in
> http://www.gnu.org/software/make/manual/make.html#Eval-Function . The
> example *only* works for me if I remove the trailing '=' character, as
> provided in the patch below. If the '=' is present, it only tries to
> link non-existent object files. Did this behavior change in make 3.82?

Exactly.

You should always install and read the documentation that came with your
distribution, on your local system.  That way you'll have the right
version of the documentation for whatever version of the program your
distro is using.




Re: Feature request, with implementation, test and rationale.

2012-11-07 Thread Paul Smith
On Wed, 2012-11-07 at 14:39 +0100, Fredrik Öhrström wrote:
> To get around this particular problem I implemented a workaround
> macro called ListPathsSafely that writes the contents of a variable
> to disk.

There is already a new $(file ...) function in the current CVS version
of GNU make, which writes to a file.

The description in the manual reads:

-
The `file' function allows the makefile to write to a file.  Two modes
of writing are supported: overwrite, where the text is written to the
beginning of the file and any existing content is lost, and append,
where the text is written to the end of the file, preserving the
existing content.  In all cases the file is created if it does not
exist.

   The syntax of the `file' function is:

 $(file OP FILENAME,TEXT)

   The operator OP can be either `>' which indicates overwrite mode, or
`>>' which indicates append mode.  The FILENAME indicates the file to
be written to.  There may optionally be whitespace between the operator
and the file name.

   When the `file' function is expanded all its arguments are expanded
first, then the file indicated by FILENAME will be opened in the mode
described by OP.  Finally TEXT will be written to the file.  If TEXT
does not already end in a newline, a final newline will be written.
The result of evaluating the `file' function is always the empty string.
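
The description above is easy to try out.  A minimal sketch (assuming GNU
make 4.0 or later, where the $(file ...) function first shipped; the
makefile and file names here are arbitrary):

```shell
# Sketch: exercise both modes of $(file ...); requires GNU make >= 4.0.
dir=$(mktemp -d)
cat > "$dir/file-demo.mk" <<'EOF'
# Overwrite mode: creates (or truncates) out.txt.
$(file > out.txt,first line)
# Append mode: writes to the end, preserving existing content.
$(file >> out.txt,second line)
all: ; @cat out.txt
EOF
( cd "$dir" && make -f file-demo.mk )   # prints the two lines written above
```

Note that each call appends a final newline, since the TEXT argument does
not already end in one.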






Re: Variable Assignment Consistency

2012-12-06 Thread Paul Smith
On Thu, 2012-12-06 at 14:12 -0800, Chris Penev wrote:
> I would expect there to be no difference between the two hashes.

That expectation would not be correct, clearly :-).  Make has its own
escaping and expanding procedures that it follows, in addition to /
aside from what the shell does.

> If instead of piping the output of make to md5sum I convert the output
> to hex, I see that ...
> 
> In one case (the first one) 
>   * make converts the newline character to a space.
> In the other case (the second one) 
>   * make does not convert the newline character to a space 
>   * make deletes the characters \044 \045 
>   * leading me to think make tried to expand $% as
> a variable. 

You have perfectly captured the differences.  They are easily
explainable.

From the GNU make manual description of the $(shell ...) function:

   The `shell' function performs the same function that backquotes
(``') perform in most shells: it does "command expansion".  This means
that it takes as an argument a shell command and evaluates to the
output of the command.  The only processing `make' does on the result
is to convert each newline (or carriage-return / newline pair) to a
single space.  If there is a trailing (carriage-return and) newline it
will simply be removed.

In the second case, you are assigning a value to a variable (that is,
make sees just an assignment like "E=abcdefg$%hijk" or whatever), and
variable values are always expanded.
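
The newline-to-space conversion is easy to observe (a sketch; the
variable and file names are arbitrary):

```shell
# Sketch: $(shell ...) converts each newline in the command's output
# to a single space before the result reaches the makefile.
dir=$(mktemp -d)
cat > "$dir/shell-demo.mk" <<'EOF'
A := $(shell printf 'one\ntwo\nthree')
all: ; @echo "[$(A)]"
EOF
result=$(make -s -f "$dir/shell-demo.mk")
echo "$result"   # → [one two three]
```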




Re: need help on "make -j" parameter, it will let the system hung easily.

2012-12-14 Thread Paul Smith
On Fri, 2012-12-14 at 12:45 +, Wang, Warner wrote:
> Hello everyone, make experts,
> 
> when I use "make -j" (without specifying a number after it) to compile
> the Linux kernel, it always hangs my machine: it gets no response at
> all, and the kernel's watchdog (khungtaskd) complains because there
> are processes in TASK_UNINTERRUPTIBLE status for 120 seconds.  Whether
> I use an HP mainframe with 160 CPU cores or a dual-core desktop PC, it
> always hangs. (my OS is Red Hat Enterprise Linux 6)

That seems like an issue with the configuration of your system.  Are you
trying to build the code as root?  Maybe you need to add some per-user
limits.

However, the main issue is your use of -j.

> I just want to know: is this expected behavior? Or is this a
> problem?  Or is it incorrect usage?  Based on the man page of make
> I don't get any suggestions about the job number for make's -j
> parameter:
>-j [jobs], --jobs[=jobs]
> Specifies the number of jobs (commands) to run simultaneously.
> If there is more than one -j option, the last one is effective.
> If the -j option is given without an argument, make will not limit
> the number of jobs that can run simultaneously.

This basically says that if you use "-j" with no arguments, make will
run as many jobs as the _makefile_ allows (defined by your prerequisite
rules).  It pays no attention to the limits of your system.

So in an environment (like the Linux kernel) where there are tons of
source files that need to be compiled and they do not depend on each
other, using "-j" with no limit means make will attempt to fork all of
them at the same time.  In a large codebase, that could be hundreds of
compiles all trying to run at the same time.

Typically you would provide an argument with "-j"; usually (assuming you
want to use up the entire system) the number of cores on your system
plus a few (since compiles are mostly CPU but there is some disk I/O
where the CPU will be idle).  The exact number that's optimal for your
build environment (makefiles + hardware) can only be determined by trial
and error.
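
The rule of thumb above can be scripted (a sketch; `nproc` is from GNU
coreutils and may be missing on non-Linux systems, hence the fallback):

```shell
# Sketch: derive a -j value from the core count plus a little headroom.
cores=$(nproc 2>/dev/null || echo 1)
jobs=$((cores + 2))
echo "suggested invocation: make -j$jobs"
```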




Re: need help on "make -j" parameter, it will let the system hung easily.

2012-12-15 Thread Paul Smith
On Fri, 2012-12-14 at 17:07 +0200, Eli Zaretskii wrote:
> Does it even make sense to use -j with no arguments?  Should we
> perhaps remove that possibility, or have some internal sane limit,
> like twice the number of cores, say?

In general I'd say no, the current behavior is not ideal.  However I
don't want to remove the behavior.  I'd rather have the default, if
given no argument, choose a "sane" limit.  However, how does one detect
the number of cores on a system in a portable way?  It's easy enough on
Linux but...
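
There is no single portable call, but at the shell level a fallback chain
covers the common systems (a sketch; getconf handles most POSIX systems,
sysctl the BSDs and macOS, with a conservative default of 1):

```shell
# Sketch: best-effort core detection with a sanity check on the result.
detect_cores() {
    n=$(getconf _NPROCESSORS_ONLN 2>/dev/null \
        || sysctl -n hw.ncpu 2>/dev/null \
        || nproc 2>/dev/null)
    case "$n" in ''|*[!0-9]*) n=1 ;; esac   # fall back if non-numeric
    echo "$n"
}
cores=$(detect_cores)
echo "detected $cores core(s)"
```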




Re: NLS-related failure when building make from CVS

2013-01-06 Thread Paul Smith
On Sun, 2013-01-06 at 19:04 +0100, Stefano Lattarini wrote:
> Here is the error:
> 
>   make[2]: Entering directory `/devel/bleeding/src/make/po'
>   make[2]: `be.gmo' is up to date.
>   make[3]: Entering directory `/devel/bleeding/src/make/po'
>   File cs.po does not exist. If you are a translator, you can create it 
> through 'msginit'.

That's odd; I just did this earlier today, from scratch, and it worked
fine.  It seems like for some reason your "make update" step failed to
download the cs.po file from translationproject.org (but it obviously
got at least some of the other .po files).

Do you have a log of the "make update" step?  Can you see if there was
any error using wget to get cs.po?





Re: Bug report: Make cannot handle the word or path that contains space

2013-01-29 Thread Paul Smith
On Tue, 2013-01-29 at 14:42 +0100, Liang, Jian wrote:
> Make cannot handle the word or path that contains space.

You're right.  The syntax of makefiles precludes this, and no version of
make supports it.

https://savannah.gnu.org/bugs/?712





Re: Infinite loop bug with parallel make

2013-02-22 Thread Paul Smith
On Sat, 2013-02-23 at 02:32 +, Ian Lynagh wrote:
> The problem was that our compiler generates 2 output files (foo.o and
> foo.hi) when compiling one source file, and we had thus ended up with
> a bunch of rules like
> %.hi: %.o ;

The right way to declare a rule that generates multiple targets is:

   %.o %.hi : %.c
...

In particular this won't break things when parallel builds are involved.
Is there some reason that doesn't work for you?




Re: Infinite loop bug with parallel make

2013-02-23 Thread Paul Smith
On Sat, 2013-02-23 at 17:28 +, Ian Lynagh wrote:
> On Sat, Feb 23, 2013 at 06:57:27AM +0200, Shachar Shemesh wrote:
> > 
> > What I'm also interested in is why .SECONDARY made everything slow.
> 
> I've put a cut-down makefile demonstrating this here:
> http://urchin.earth.li/~ian/tmp/Makefile

I haven't looked deeply into this; however handling an intermediate
prerequisite is more expensive than handling a normal prerequisite.  By
adding the .SECONDARY: target with no prerequisites you're essentially
declaring every single file in the makefile as an intermediate file.
That's causing your slowdowns.

There is probably an opportunity here to add some kind of
short-circuiting (I think make is doing the same work multiple times in
this situation).

I've never really been clear on the purpose and use of .SECONDARY; the
comments in both the GNU make manual and in the code seem odd to me.  I
would really appreciate anyone out there who is using this (either for
specific targets or all by itself as in this example) to explain why
they use it / what they need it for.




Re: Infinite loop bug with parallel make

2013-02-23 Thread Paul Smith
On Sat, 2013-02-23 at 21:02 +, Ian Lynagh wrote:
> On Sat, Feb 23, 2013 at 03:45:59PM -0500, Paul Smith wrote:
> > 
> > I've never really been clear on the purpose and use of .SECONDARY; the
> > comments in both the GNU make manual and in the code seem odd to me.  I
> > would really appreciate anyone out there who is using this (either for
> > specific targets or all by itself as in this example) to explain why
> > they use it / what they need it for.
> 
> When I use .SECONDARY:, what I really want to say is "don't delete any
> files you make".

Hm.

I'm getting this very deja vu feeling when I consider the .PRECIOUS,
.INTERMEDIATE, and .SECONDARY special targets.  I think these targets
all overlap in weird ways that yield unexpected behavior and don't let
people specify exactly what they really want, and I'm pretty sure this
issue has come up before.

I'd still like to know if anyone uses .SECONDARY for some OTHER purpose
than keeping intermediate files from being deleted.  Speak up if you're
out there.




Re: [bug #38420] $(realpath ...) doesn't recover from signals

2013-02-27 Thread Paul Smith
On Wed, 2013-02-27 at 12:47 -0700, Brian Vandenberg wrote:
> What it doesn't make clear is, if it's configured with 'nointr' will
> that just cause the system function to block?  That seems the most
> plausible.

Correct.  With nointr, you won't be able to (for example) ^C a program
that is hung waiting for the NFS server to respond.




Re: [bug #38433] Example for "eval" in documentation contains error with "define"

2013-02-27 Thread Paul Smith
On Wed, 2013-02-27 at 13:56 -0800, Daniel Wagenaar wrote:
> I appreciate your correction, but I still feel that the documentation
> on the website would be more helpful if it at least mentioned that
> older versions of make fail quietly when there is a "=" at the end of
> the line.

GNU make has been around for 25+ years.  I don't think it's reasonable
to include information in the manual describing every change, which
version of GNU make it appeared in, and what the consequences might be
if you are using older versions.

We do publish an extensive, and easy-to-read, NEWS file with every
release.  If you go to the Git repository and check the latest version
of that file you'll be able to verify which version of GNU make various
features were added in.

http://git.savannah.gnu.org/cgit/make.git/tree/NEWS

>  The reason is that make v. 3.81 is still in very wide use. For
> instance, it is part of Ubuntu 12.04-LTS as well as Mint 14.

That's a decision taken by the various GNU/Linux distributions.  The GNU
project and the FSF don't control this or have any input into it, and we
can't base our release or documentation efforts on what downstream
distributors do, or don't do, or how long it takes them to do it.

Philip is correct: your GNU/Linux distribution provides to you the
documentation for GNU make for the version of GNU make that it ships,
that you can view on your own system.  The best way to ensure you're
reading the appropriate version of documentation is to read that version
rather than the web version.


Cheers!




Re: [bug #38433] Example for "eval" in documentation contains error with "define"

2013-02-27 Thread Paul Smith
On Wed, 2013-02-27 at 19:04 -0800, David Boyce wrote:


> I think you and the others in the "nay" camp may be being a bit
> unfair. As far as I can see, nobody has proposed (in this thread) that
> the entire manual be reworked to note the version in which each
> feature appeared. You're absolutely right that that would stink but
> it's a straw man. The proposal, as I understand it, is to add a note
> about this particular incompatibility because of the mysterious,
> silent way it fails.

I'm not excited about it but if someone produces a patch for this aspect
alone, I'll look at it.  If it's not too ugly I'll add it in.

My opinion remains that the only way to be sure you're not being misled
by the documentation is to use what is provided by your GNU/Linux
distribution.  That documentation will always be the correct version for
the version of GNU make that you're using in your distribution.

> There may even be a couple of other cases that could get similar
> treatment. It doesn't have to become a slippery slope.

Hm.  Maybe it's just me but this seems somewhat ironic :-p :-).

> If the GNU website were to require you to select the version of make
> you wanted to see the documentation for, I think that would be a
> reasonable 'solution'.  Perhaps a layout like
> http://gcc.gnu.org/onlinedocs/ could be done without too much
> complexity.

>  Another nice example is the Python docs
> (http://docs.python.org/2/index.html). Notice the dropdown at the top
> left where you get to pick the version you care about.

I would love this and it would solve all the problems, however it's not
something I can do.  The FSF/GNU website is maintained by a completely
separate group and I have virtually no say in how it's presented, not
even the GNU make area.  I control only the "front page" for GNU make.

I can ask the maintainers to see what they think.  I have a vague
recollection that something like this has been discussed in the past
however.




Re: 'no rule' warning not precise enough language

2013-03-29 Thread Paul Smith
On Fri, 2013-03-29 at 19:20 +0800, jida...@jidanni.org wrote:
> using
> %.kml:%.html %.xq; basex $*.xq < $< | ./postprocessor
> 
> make q.kml
> will say 'no rule to make q.kml'
> until one creates a q.xq file
> 
> So it should say something different than 'no rule'
> because there indeed is a rule.

We've already had this conversation at least once:

http://lists.gnu.org/archive/html/bug-make/2008-06/msg00013.html

I found this easily:

http://lmgtfy.com/?q=jidanni+%22no+rule+to+make%22

You'll note even there I thought we'd discussed it before then as well.

Cheers!





Re: Intermediate files and multiple target rules

2013-04-01 Thread Paul Smith
On Mon, 2013-04-01 at 12:56 -0400, Andriy Sen wrote:
> I have found inconsistent behavior of make in regard to a chain of
> implicit rules with multiple targets.
> 
> For example, let's suppose we have a rule that generates multiple C++ files:
> 
> %1.cpp %2.cpp: %.ext
> 	[command]
> 
> The C++ files are subsequently compiled and linked. GNU make treats the
> file that triggered the above rule as intermediate and removes it at the
> end but leaves the other one alone.

I believe this is a known bug:

https://savannah.gnu.org/bugs/index.php?32042

Cheers!




Re: Quirk with rules producing multiple output files

2013-04-04 Thread Paul Smith
On Wed, 2013-04-03 at 21:24 -0500, Roger Pepitone wrote:

> 
> TEST_TEXTS := test1.txt test2.txt test3.txt
> $(TEST_TEXTS) : xtest.txt
> 	echo "Rebuilding $@"
> 	touch $(TEST_TEXTS)
> xtest: $(TEST_TEXTS)
> ##
> 
> make clean-xtest
> make xtest
> touch xtest.txt
> make xtest

> The first call to "make xtest" runs the rule 3 times, even
> though it should only need to do it once.
> The second call correctly only runs it once.

This is expected behavior.  A rule like:

foo bar:
	@echo $@

is exactly the same thing, to make, as writing:

foo:
	@echo $@
bar:
	@echo $@

It's just a shorthand for writing a lot of identical rules; it does NOT
mean that a single invocation of the rule will generate all the
targets, which is what you are expecting.

There is no way, in make, to get the behavior you want with explicit
rules other than using a separate "sentinel" target, like this:

.build_test_texts : xtest.txt
	echo "Rebuilding $@"
	touch $(TEST_TEXTS)
	touch .build_test_texts
xtest: .build_test_texts

Of course this has its own issues, because make is not linking the
targets directly.  So for example, if you run make then delete one of
the targets then run make again, it won't be recreated.

If you have a naming convention that lets you write a pattern rule, then
you can do this more directly because pattern rules, unlike explicit
rules, DO allow multiple targets to be created from a single invocation
of the command script.  For instance, given your example this:

%1.txt %2.txt %3.txt : x%.txt
	echo "Rebuilding $@"
	touch $(TEST_TEXTS)
xtest: $(TEST_TEXTS)

will work as you expect.
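
The difference is easy to demonstrate (a sketch with hypothetical file
names; the semicolon-style recipe avoids tab sensitivity in the here-doc):

```shell
# Sketch: a multi-target pattern rule runs its recipe once per stem,
# creating all of its targets in that single invocation.
dir=$(mktemp -d)
cat > "$dir/pat.mk" <<'EOF'
a%.out b%.out: %.src ; @echo "building from $<" && touch a$*.out b$*.out
all: a1.out b1.out
EOF
touch "$dir/1.src"
out=$(cd "$dir" && make -f pat.mk)
count=$(printf '%s\n' "$out" | grep -c 'building from')
echo "recipe ran $count time(s)"   # → recipe ran 1 time(s)
```

An explicit rule "a1.out b1.out: 1.src" with the same recipe would have
run it twice, once per target.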




Re: Behaviour of $(shell command args) is dangerously different from `command args`

2013-04-10 Thread Paul Smith
On Wed, 2013-04-10 at 18:18 +0200, Vincent de Phily wrote:

> > SHELL := /bin/bash
> > date := $(shell date -R)
> > prevtag := $(shell git describe --tags|cut -d- -f1)

This is not good: what if your tag contains a "-" in it?

I think you want: $(shell git describe --tags --abbrev=0)

> > release:
> > #   sed -i "1s#^#$(VERSION) ($(date))\n$(shell git log HEAD...$(prevtag)
> > #   '--pretty=format:\\t* %s\\n'|tr -d '\n')\n#1" Changelog
> > sed -i "1s#^#$(VERSION) ($(date))\n`git log HEAD...$(prevtag)
> > '--pretty=format:\\t* %s\\n'|tr -d '\n'`\n#1" Changelog
> 
> The problem is that with the first (commented) version of the command, if I 
> have a commit message containing something between backquotes, that something 
> gets executed by make. In my case I was lucky and just ran into an infinite 
> loop executing `make release`, but I would have been in bigger trouble if I 
> had a commit to, say, "Protect against unintentional `rm -rf /`."
> 
> If the behaviour is expected (why?), it would be useful to explain the
> difference between `command` and $(shell command) in the info pages.

There is no difference between `command` and $(shell command), except
the order in which they're executed.

Remember that make will expand all make functions and variables in the
recipe FIRST, then pass the resulting text string to the shell for the
shell to execute.  Make doesn't interpret the results of the functions
or variables itself.

If you want to understand how make works, then you can emulate it from a
shell command line by doing this:

First, run the command you provided to $(shell ...) from the prompt:

  $ git log HEAD...`git describe --tags --abbrev=0` '--pretty=format:\\t* %s\\n'|tr -d '\n'

You'll get a bunch of output.

Now enter your sed command, but CUT AND PASTE the output of the git log
command into the right spot:

  $ sed -i "1s#^# (`date -R`)\n\n#1" Changelog

You'll see the same behavior here as you get with make, because that's
what make is doing: it's not interpreting the output of the $(shell ...)
command at all: it's just taking that output and pasting it directly
into the command string, then passing the whole thing to the shell.

When you use `` in the shell, the shell is invoking the `` command and
the shell will treat the output of that command differently (for
example, the shell does not recursively expand the output of the ``
command so if that output contains more `` commands, they are not
interpreted).

In general you should never, and never need to, use $(shell ...) inside
a recipe command.  You're already running a shell, so why?




Re: Behaviour of $(shell command args) is dangerously different from `command args`

2013-04-10 Thread Paul Smith
On Wed, 2013-04-10 at 19:56 +0200, Vincent de Phily wrote:
> On Wednesday 10 April 2013 13:28:38 Paul Smith wrote:
> > On Wed, 2013-04-10 at 18:18 +0200, Vincent de Phily wrote:
> > > If the behaviour is expected (why?), it would be useful to explain the
> > > difference between `command` and $(shell command) in the info pages.
> > 
> > There is no difference between `command` and $(shell command), except
> > the order in which they're executed.
> > 
> > Remember that make will expand all make functions and variables in the
> > recipe FIRST, then pass the resulting text string to the shell for the
> > shell to execute.  Make doesn't interpret the results of the functions
> > or variables itself.
> > 
> > (...)
> 
> Thanks, this makes sense.
> 
> Still an easy trap to fall into because Makefile syntax looks so much like 
> shell that you quickly forget it isn't.
> 
> The info pages probably point out that double-evaluation gotcha somewhere 
> (I'll check tomorrow), but it's the kind of detail that you easily miss 
> because it sounds obvious so you'll skip the section. Sigh...

I doubt this particular side-effect is discussed explicitly in the
manual, although the statement of how variables and functions are
expanded before the shell is invoked is definitely described... you're
left to connect the dots in this respect yourself, I believe.

> > In general you should never, and never need to, use $(shell ...) inside
> > a recipe command.  You're already running a shell, so why?
> 
> Probably because I'm not a fan of the `` syntax: I prefer the shell's $() 
> syntax which is more readable and can be nested. Make's $(shell) syntax 
> seemed 
> like a drop-in replacement.

There's no reason you can't use the shell's $() if you want, just double
the "$" to escape it from make: "$$(git log HEAD...)"

It's slightly annoying to need the extra "$", but overall it's less
typing than "$(shell ...)" after all :-).
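
A quick sketch of the escaping (the recipe text and names are arbitrary):

```shell
# Sketch: "$$" survives make's expansion as a single "$", so the shell
# performs its own $(...) command substitution inside the recipe.
dir=$(mktemp -d)
cat > "$dir/esc.mk" <<'EOF'
all: ; @echo "tag-$$(printf 'demo')"
EOF
result=$(make -s -f "$dir/esc.mk")
echo "$result"   # → tag-demo
```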




Re: Quirk with rules producing multiple output files

2013-04-11 Thread Paul Smith
On Thu, 2013-04-11 at 12:14 +0200, Reinier Post wrote:
> > It's just a shorthand for writing a lot of identical rules; it does NOT
> > mean that a single invocation of the rule will generate all three
> > targets, which is what you are expecting.
> 
> Incidentally: other workflow/inference languages can express this
> distinction perfectly and still allow the resulting specifications to
> be analyzed for proper termination (e.g. safe Petri nets, Datalog);
> I'd love to know of an alternative to make that is based on such a
> language, but it seems too much to ask for make to be extended
> in this way.

I'm not sure exactly what you mean by "this distinction", but GNU make
already supports multi-target generators with pattern rules, as
mentioned in the part of the email you clipped.  So the basic
infrastructure exists.  There were proof-of-concept patches floating
around to support it for explicit rules as well.

Really the trickiest part is the user interface (makefile syntax): it
must be backward-compatible with existing makefiles, or at least be sure
to break virtually none of them.




Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-16 Thread Paul Smith
On Tue, 2013-04-16 at 11:30 +0300, Eli Zaretskii wrote:
> > Date: Tue, 16 Apr 2013 05:54:13 +
> > From: "Paul D. Smith" 
> > 
> > I did a little bit of code rearrangement, but I still think this code will 
> > not
> > work on Windows and might possibly not compile on Windows.
> 
> Indeed, it will not.  Some cursory comments below.

I was hoping that if OUTPUT_SYNC is not #defined, the Windows code would
compile OK (although obviously without this feature), until we get it
working.

> > Hopefully we can fix that.
> 
> We shall see...
> 
> Here's what I see in the changes that is not friendly to Windows:
> 
>  . STREAM_OK uses fcntl and F_GETFD which both don't exist on Windows.

This is to test if somehow make's stdout or stderr has been closed (not
just redirected to /dev/null, but closed).

>  . FD_NOT_EMPTY will only work if its argument descriptor is connected
>to a file, not the console.  is this condition always true?

Yes, because here we're testing the temporary file that we're saving
output in.  Either the fd will be -1 (not redirected to a temp file), or
it will be a temp file.

>  . open_tmpfd will need close examination on Windows, especially since
>it closes the stream (the issue is whether the file will still be
>automatically deleted when the dup'ed file descriptor is closed).

Yes, I suspected this would not work well.  On UNIX the file is actually
deleted FIRST, by tmpfile(), because on UNIX a file is not actually
deleted until the last file descriptor using it is closed, even if it's
not visible in the filesystem anymore.  This is a very nice way to get a
truly anonymous temporary file that cannot be accessed by anything other
than the process that created it.
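
The same trick can be sketched from the shell (an illustration of the
unlink-while-open semantics, not make's actual implementation):

```shell
# Sketch: after rm, the name is gone but the open descriptor still
# refers to the file's storage until it is closed.
tmp=$(mktemp)
exec 3<>"$tmp"              # open a read/write descriptor on the file
rm -f "$tmp"                # unlink: no process can open it by name now
status=present
[ -e "$tmp" ] || status=unlinked
echo "anonymous data" >&3   # still writable through fd 3
echo "file is $status"      # → file is unlinked
exec 3>&-                   # last reference closed; storage is freed
```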

I'm not sure what the semantics of tmpfile() are on Windows.

>  . pump_from_tmp_fd will need to switch to_fd to binary mode, since
>this is how tmpfile opens the temporary file; we need the output mode
>to match the input mode.

That's a good point.

>  . acquire_semaphore uses fcntl and F_SETLKW, which don't exist on
>Windows.  the commentary to that function is not revealing its
>purpose, so I'm unsure how to implement its equivalent on Windows.
>can someone explain why is this needed and what should it do?  the
>name seems to imply that using fcntl is an implementation detail,
>as a crude semaphore -- is that right?  similarly for
>release_semaphore.  (see also the next item.)

Yes, this is the guts of the feature.  It ensures that only one make
process is writing output at a time.  On other systems like Windows a
different method might be more appropriate.  Since the resource we're
locking on is the output, on UNIX we lock the output fd.  This saves us
from having to create a separate lock file, etc.

I'm pretty convinced that it works fine, even if stdout/stderr are
redirected.  For example, a recursive make which is redirected to a
different file will work OK; the locking for the sub-make will happen on
that file which is different than the locking for other make instances,
but that's OK because they're writing to different places anyway.  The
sub-make's output will be internally consistent, and not interfere with
the parent make's output which is what you want.

Windows has LockFileEx() but we'd need to examine the semantics to
verify it will do what we want.

>  . is there any significance to the fact that sync_handle is either
>stdout or stderr?  is that important for the synchronization to
>work, or was that just a convenient handle around?  also, the
>documentation I have indicates that locking works only on files;
>is it guaranteed that these handles are redirected to files before
>acquire_semaphore is called?

They are definitely NOT guaranteed to be redirected to files.  The lock
is taken on stdout (if open) before any redirection happens, so normally
it would be taken on stdout going to the console.  On Linux it works
fine.  I'll need to read the standard more closely and maybe do some
testing on other systems.  It's just a convenient handle... but if it
doesn't always work and we have to create our own handle then that's
some extra work as we have to communicate the handle info to the child
make processes.

>  . calculation of combined_output in start_job_command will need to be
>reimplemented for Windows, since the reliance on st_dev and st_ino
>makes assumptions that are false on Windows.

Yes, not surprising.

> Other notes:
> 
>  . outputting stdout and stderr separately is IMO a misfeature: it
>breaks the temporal proximity of messages written to these streams,
>and this makes it harder to understand failures.

I agree 100%; that's what the combined output test above is supposed to
handle.  If stdout and stderr are going to the same place then we
redirect them to the same temporary file so they will ultimately appear
in the same order as they would have on the terminal.  Only if they are
not going to the same place anyway do we keep them separate.

Re: feature request: parallel builds feature

2013-04-16 Thread Paul Smith
On Tue, 2013-04-16 at 01:34 -0700, Jim Michaels wrote:
> I have been toying with this idea of parallel builds to gain project
> compile speed (reducing time to a fraction) for quite a while.

Can you explain the difference between what you're suggesting and the
existing --jobs (-j) feature available in GNU make?





Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-16 Thread Paul Smith
On Tue, 2013-04-16 at 09:57 +0100, Tim Murphy wrote:
> What would be super cool is being able to get make to expand some sort
> of variable at the start and another one at the end of the output so
> that there was a way to see where one rule ended and the next one
> began.

Well, the new feature adds enter/leave notations around the output for
each target (or sub-make if you run in that mode).  That may be more
heavy-weight than you had in mind but it should give you the delineation
you're seeking.

As I mentioned a few times, though, I think it needs a bit of
examination.




Re: [PATCH 3/4] Compile fix for when not using output-sync

2013-04-16 Thread Paul Smith
On Tue, 2013-04-16 at 13:40 +0100, Ray Donnelly wrote:
> Pretty simple, needs little explanation.

Maybe not but a patch would be nice :-) :-p





Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-16 Thread Paul Smith
On Tue, 2013-04-16 at 09:57 +0100, Tim Murphy wrote:
> When most rules are a single job this doesn't seem important but when
> you're doing anything non trivial it becomes hard to see what is
> where.

Just to be clear: in this implementation the output from all individual
commands in a recipe are collected and printed at once.  It's not
actually a per-job thing, but rather a per-target thing.

Don't know if that helps at all (I think it's better this way).




Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-16 Thread Paul Smith
On Tue, 2013-04-16 at 16:43 +0300, Eli Zaretskii wrote:
> > I'm not sure what the semantics of tmpfile() are on Windows.
> 
> The file is automatically deleted when closed.  But the documentation
> doesn't say what happens if it is open on more than one descriptor, or
> what happens if the original descriptor is dup'ed.  I will need to
> test that, and perhaps provide a work-around.

It might be that we have to allow use of a file handle on Windows,
rather than a descriptor.  The original code actually didn't close the
file if dup() failed, but this left the file open forever so I changed
it to fail.  Some portability glue could be added for this.

> Do we even need to lock a file?  If all that's needed is a semaphore
> (actually, a mutex, AFAICS), Windows has good support for that which
> doesn't involve files at all.

Yes a system-wide mutex would be fine.  That's not so easy to do
portably on UNIX systems.  File locking is the most straightforward,
widely-supported means of handling this there.
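A minimal sketch of the descriptor-based mutex described here, assuming POSIX fcntl() record locks; the function names are illustrative, not GNU make's actual internals:

```c
#include <fcntl.h>
#include <unistd.h>

/* Take an advisory record lock on a shared descriptor (make uses the
   stdout fd).  F_SETLKW blocks until the lock can be acquired, giving
   mutex-like behavior across cooperating processes. */
static int acquire_output_lock (int fd)
{
  struct flock fl;
  fl.l_type = F_WRLCK;      /* exclusive lock */
  fl.l_whence = SEEK_SET;
  fl.l_start = 0;
  fl.l_len = 0;             /* 0 length = lock the whole file */
  return fcntl (fd, F_SETLKW, &fl);
}

static int release_output_lock (int fd)
{
  struct flock fl;
  fl.l_type = F_UNLCK;
  fl.l_whence = SEEK_SET;
  fl.l_start = 0;
  fl.l_len = 0;
  return fcntl (fd, F_SETLK, &fl);
}
```

Note the caveat discussed below: POSIX only guarantees record locking for regular files, so locking a descriptor that refers to a terminal may not work everywhere.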

The descriptor-based mutex has the very slight advantage over a
system-wide mutex in that if a sub-make's output is redirected it now
has its own lock domain.  However I imagine this would happen very
rarely (how many makefiles run sub-makes with stdout/stderr redirected?)
and probably won't have much performance impact anyway.

> This page:
> 
>http://pubs.opengroup.org/onlinepubs/009695399/functions/fcntl.html
> 
> says, immediately prior to describing F_SETLKW and its friends:
> 
>The following values for 'cmd' are available for advisory record
>locking. Record locking shall be supported for regular files, and
>may be supported for other files.
> 
> I don't know what is the de-facto situation in this regard on Posix
> systems.

Yeah, I saw that too.  I'll try to run some tests on different systems.
If this is not portable enough we'll have to pick a real file, then
communicate the information (file name or descriptor) to sub-makes.

> But this redirection can be changed several times by the commands run
> _after_ the initial decision described above was made: the shell
> commands run by the job can do anything with these two handles, right?
> So it could easily be the case that the output and error streams get
> separated under OUTPUT_SYNC, where they originally appeared together,
> interspersed.

I'm not sure I'm seeing the issue.  Sure, commands in a shell can
redirect their output however they want.  I don't see a situation where
we'd get different behavior than expected.  Can you give a scenario?




Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-16 Thread Paul Smith
On Tue, 2013-04-16 at 15:31 +0100, Tim Murphy wrote:

> So this is great and you can see that there are 4 targets in my
> makefile and that each one is a "start X" followed by an "end X".  I
> don't see any enter/exit delimitation - have I missed out some option?

Um.  Yes, the enter/leave only happens inside make recursion (just like
the normal entering/leaving output).

As I say, this aspect needs some love :-)


> Is that 3 rules, 2 rules or 1?  What I'd like to be able to do is
> demarcate them.

Yep, I get it.

There's the new --trace option as well but this probably doesn't do
everything you'd want.




Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-16 Thread Paul Smith
On Tue, 2013-04-16 at 19:20 +0300, Eli Zaretskii wrote:
> > From: Paul Smith 
> > Cc: bo...@kolpackov.net, bug-make@gnu.org, f.heckenb...@fh-soft.de
> > Date: Tue, 16 Apr 2013 10:44:39 -0400
> > 
> > On Tue, 2013-04-16 at 16:43 +0300, Eli Zaretskii wrote:
> > > > I'm not sure what the semantics of tmpfile() are on Windows.
> > > 
> > > The file is automatically deleted when closed.  But the documentation
> > > doesn't say what happens if it is open on more than one descriptor, or
> > > what happens if the original descriptor is dup'ed.  I will need to
> > > test that, and perhaps provide a work-around.
> > 
> > It might be that we have to allow use of a file handle on Windows,
> > rather than a descriptor.
> 
> That doesn't matter, really.  One can get one from the other on
> Windows.

Ah interesting.  In UNIX you can get _A_ file handle back from a file
descriptor (using fdopen()), but it's not guaranteed to be the SAME file
handle you had originally.  That is, if you run:

   FILE* f1 = fopen(...);
   int fd = fileno(f1);
   FILE* f2 = fdopen(fd, ...);
   fclose(f2);

you don't get back f2 == f1.  And although fd will be closed here, I'm
pretty sure not all the resources associated with f1 are freed, which is
a resource leak that will eventually lead to running out of file
handles.

> > The descriptor-based mutex has the very slight advantage over a
> > system-wide mutex in that if a sub-make's output is redirected it now
> > has its own lock domain.
> 
> I didn't mean a system-wide mutex, I meant a process-wide mutex.  Will
> this be OK?

I don't think so: especially now that we support full jobserver
capabilities in Windows we can have recursive make invocations all
running jobs in parallel, and we'd want them to synchronize their output
across multiple processes.  If we were only concerned about a single
process we really wouldn't need even mutexes since make is
single-threaded.

On the other hand I guess "system-wide" is not really right either.
Ideally what we want is a mutex shared between all the recursive make
instances and the root make, but we would still want multiple completely
independent make instances to not interfere with each other.

I guess this points out a potential issue with the current UNIX
implementation as well: if you had some utility you were running as part
of your build that was also trying to lock stdout, then it would
interfere with make.  That seems unlikely, but maybe to avoid this and
to deal with the potential issue of locking not supported on non-files,
we should just create a temporary file descriptor and pass it around,
like we do with the jobserver FDs.

> E.g., Make sees that both are connected to the same device and
> redirects them to the same file, but then the job redirects stderr to
> a separate file using shell redirection ("2> foo").  Or vice versa.

Sure... but I don't see the problem.  Maybe I've lost the thread.  When
the command starts, both stdout and stderr are writing to the same
destination.  If the command does nothing special then the output will
be a combination of stdout and stderr in the order in which the command
generated them, which is good.  If the command script redirects one or
both of stdout/stderr, then they won't be conjoined anymore, yes, but
that's what the user asked for...?




Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-17 Thread Paul Smith
On Wed, 2013-04-17 at 19:10 +0300, Eli Zaretskii wrote:
> That could be a misunderstanding on my part: I didn't realize that by
> "handle" you mean a FILE object.  I thought you meant Windows specific
> HANDLE objects (which underly every open file).

I'm not very familiar with Windows terminology.  Is a HANDLE equivalent
to a UNIX file descriptor?  Or is it a third thing, different from UNIX
fd's or C standard FILE*'s?

> Anyway, I'm not sure why the current code calls tmpfile, which
> produces a FILE object, but then only uses its file descriptor and
> read/write functions.  Why not keep the FILE object in the child
> struct, and use fread/fwrite instead?

I believe the thinking is that some implementations may allow a much
smaller number of open streams (FILE*) than open file descriptors.  The
POSIX standard, for example, allows this:

> Some implementations may limit {STREAM_MAX} to 20 but allow {OPEN_MAX}
> to be considerably larger.

Also, a stream is much more resource-heavy than a file descriptor, as it
implies buffering etc. in addition to the open file.  We wouldn't use
the buffering, but it's still there.  We might need two different temp
files per running job, and for high values of -j (people are doing -j
builds on very large systems these days) that may be significant.

However, I don't know if it's worth it on real systems.

> As a nice benefit, you get to avoid leaking the resources due to the
> fact that no one calls fclose on those FILE objects, or so it seems.

They are closed in open_tmpfd(), that's why we dup() the file descriptor
first (so when we close the FILE* we don't lose the underlying file).
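A sketch of the dup()-then-fclose() trick described here (the real open_tmpfd() lives in make's source; this is just an illustration of the pattern):

```c
#include <stdio.h>
#include <unistd.h>

/* tmpfile() gives a FILE* whose file is deleted when the last
   reference goes away.  dup() its descriptor first, then fclose()
   the stream: the cheap fd keeps the open file alive while the
   heavier stdio stream (and its buffering) is released. */
static int open_tmpfd (void)
{
  FILE *tfile = tmpfile ();
  int fd;

  if (tfile == NULL)
    return -1;

  fd = dup (fileno (tfile));   /* keep the underlying open file... */
  fclose (tfile);              /* ...while freeing the FILE* stream */
  return fd;
}
```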

But you're right, for Windows this may not make sense and we should
simply use the FILE*: this is what I was referring to by some kind of
portability layer.

> > Sure... but I don't see the problem.
> 
> Maybe there's no problem, I don't know.

OK.  I think it will behave just as I want, but I'm the one who
suggested this behavior so I'm probably biased.  Let me know if you
think of something that doesn't work about it.




Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-17 Thread Paul Smith
On Wed, 2013-04-17 at 23:00 +0300, Eli Zaretskii wrote:
> I'd be surprised if this were a real problem nowadays.  E.g., the
> Windows C runtime is documented to allow up to 512 FILE streams, which
> can be enlarged to 2048 by calling a function.  The max number of file
> descriptors is also 2048.

GNU make is still used on some pretty ancient UNIX versions, but of
course they probably aren't using -j512 either.  I don't know if it's a
problem in reality.

> > Also, a stream is much more resource-heavy than a file descriptor, as it
> > implies buffering etc. in addition to the open file.  We wouldn't use
> > the buffering, but it's still there.
> 
> What's wrong with using the buffering?

Nothing, really, but we just don't need it.  We don't write, ourselves,
to the temporary file: the jobs we invoke write to them.  What we do is
after the job exits we seek to the beginning of the file, then pump the
exact contents out of the temporary file and into our stdout (and/or
stderr) as quickly and efficiently as possible (because this is done
while holding the lock and thus is potentially blocking other jobs from
finishing).  Because of this we're using read(2) and write(2) with a big
buffer.
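That pump step can be sketched as follows; this is an illustration of the approach, not make's actual code, and the buffer size and error handling are simplified:

```c
#include <unistd.h>

/* Rewind the job's temp file and copy its contents verbatim to the
   real output descriptor with read(2)/write(2) and a big buffer,
   as quickly as possible since the output lock is held meanwhile. */
static void pump_from_tmp (int from_fd, int to_fd)
{
  static char buffer[8192];
  ssize_t len;

  lseek (from_fd, 0, SEEK_SET);
  while ((len = read (from_fd, buffer, sizeof buffer)) > 0)
    {
      ssize_t off = 0;
      while (off < len)        /* handle short writes */
        {
          ssize_t n = write (to_fd, buffer + off, len - off);
          if (n < 0)
            return;            /* real code would report the error */
          off += n;
        }
    }
}
```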

There's no particular reason I know of that we couldn't use, for
example, fread()/fwrite() instead, other than efficiency.  One assumes
that using a stream interface introduces an extra copy operation on both
the read and write side (instead of kernel->buffer->kernel, we would
have kernel->stream->buffer->stream->kernel), but I don't have any
particular opinion on the difference it would make: it would require
some testing.

Of course there's no reason we have to use fread()/fwrite() even if we
keep FILE*; that can be transformed into a file descriptor (for POSIX)
or HANDLE (for Windows) for more efficiency.




Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-18 Thread Paul Smith
On Thu, 2013-04-18 at 19:09 +0200, Frank Heckenbach wrote:
> This mechanism was unaffected by my output-sync patch, and I
> expected your change broke it.

I was reading your email with interest, waiting for the punch-line, but
then after all that description you just said that the change broke it,
with no explanation :-).  My question is, what about the new behavior
does not work for you?

The change I made was to have all the jobs in a target recipe write
(appending of course) to the same temp file, then after the last job in
the recipe is completed the output for the entire recipe is generated.
So for example, if you have:

  foo bar:
  : $@ one
  : $@ two
  : $@ three

and you run "make -j -O", the way you had it the output from each job
would be mixed something like this (the actual result order will be
slightly random):

  foo one
  bar one
  foo two
  bar two
  bar three
  foo three

with my change you still get the same results but only after all jobs
are complete, and they will be collected:

  foo one
  foo two
  foo three
  bar one
  bar two
  bar three

> So I'd plead to revert this bit (since one can still use .ONESHELL
> if wanted). Or we could add another mode like "--output-sync=job".
> Shouldn't be too hard now (if you like, I can implement it).

I'd prefer to not add another option here unless there's a compelling
case for it, even though you're correct that we have the flexibility
now.

> > I think we're
> > doing too much here.  I would have left it alone but I think some tools
> > that parse make results expect to see that output to help them figure
> > out what's going on.
> 
> As I described in https://savannah.gnu.org/bugs/?33138#comment3, the
> problem is how to interpret messages that contain file names when
> different jobs run in different subdirectories.

I do get it.  I just think we might need to do more drastic surgery on
this to REALLY get it right.  Probably we'll want to allow the user to
have more control over it, as well.  Maybe a similar flag that lets you
choose whether to trace on a per-target or per-make basis.  And, if you
choose per-target you don't need to ALSO generate output per-make.

We do need to be careful about tracing non-target output, for example
output generated by $(info ...), $(warning ...), and $(error ...).

> > I guess this points out a potential issue with the current UNIX
> > implementation as well: if you had some utility you were running as part
> > of your build that was also trying to lock stdout, then it would
> > interfere with make.
> 
> I don't think so, since its stdout is now our temp file which it may
> lock as it pleases. (In fact, that's just what recursive makes do
> with --output-sync=make.)

True.  And if they were trying to use stdout like we are, to lock
between different recipes, they just can't use -O at all since by
definition each target will be writing to a different file.

> PS: In assign_child_tempfiles(), the following declarations are no more
> needed and can be removed:
> 
>   FILE *outstrm = NULL, *errstrm = NULL;
>   const char *suppressed = "output-sync suppressed: ";
>   char *failmode = NULL;

Yeah, I have a commit in my local repo that fixes this.

Thanks!




Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-18 Thread Paul Smith
On Thu, 2013-04-18 at 20:36 +0200, Frank Heckenbach wrote:
> And with my progress mechanism, that's exactly what I want. In my
> case it'd look like this:
> 
> [Start] Compiling foo.c
> [Start] Compiling bar.c
> # time passes
> foo.c: some error
> # time passes
> bar.c: some error
> # time passes
> [End] Compiling bar.c
> # time passes
> [End] Compiling foo.c
> 
> This is useful (to me) because at any time, I know what's running.
> ("[Start]" messages minus "[End]" messages.)

Thanks, this is the reason I was looking for; that use-case wasn't clear
to me based on the previous email.

> > Probably we'll want to allow the user to
> > have more control over it, as well.  Maybe a similar flag that lets you
> > choose whether to trace on a per-target or per-make basis.
> 
> I think it should in principle be possible without requiring the
> user to specify any more options.

I was thinking more that the user may not want all the enter/leave
output even if it is ambiguous in a programmatic sense: I know where my
code lives, so if I see that my_foo.o fails, I know that the my_foo.c
file lives in src/my/my_foo.c.  I might prefer to see cleaner output
from make and rely on my innate knowledge of the codebase to navigate.

But maybe you'd still like to see the per-make enter/leave, even if
you're running with -Otarget.

> But it would be some work, requiring make to keep track of which
> directory message was output last, delay the "leaving" message in case
> the next one will be "entering" the same directory etc., and
> synchronize this among recursive makes in the different modes.

Synchronization between recursive makes is not something I want to get
into.  As long as the messages are coherent within a single make
instance, either before/after everything the make instance does or
before/after each target (or job, if that is needed) that's enough for
me.


___
Bug-make mailing list
Bug-make@gnu.org
https://lists.gnu.org/mailman/listinfo/bug-make


Re: Building Make out of Git: Gettext requirements

2013-04-20 Thread Paul Smith
On Sat, 2013-04-20 at 13:50 +0300, Eli Zaretskii wrote:
> Do we really need to require 0.18.1 or can this restriction be lifted?
> I hacked configure.ac to require 0.17, and didn't see any problems
> afterwards.

You can see this bug:

http://savannah.gnu.org/bugs/?37307

I confess I didn't get a satisfactory answer to my question, of why the
minimum version in configure.ac must be changed.  It seems to me that if
I build the make distribution tarball with a newer version of gettext,
regardless of the minimum version specified in configure.ac, it should
be good enough.

However, Brad was clear that he believed that the minimum version MUST
be increased in configure.ac in order to get the benefits.  I was going
to build with the new version anyway so I just changed it.

If this is a problem we can get back into it and ask Brad for more
clarification, and maybe check on the OpenBSD lists for details.




Re: Example use of findstring in documentation can be problematic

2013-04-20 Thread Paul Smith
On Fri, 2013-04-19 at 18:07 -0600, David Sankel wrote:
> In section 7.3 and 8.2 the function 'findstring' is recommended as a
> means to search a space separated list for a given value. This
> suggestion is problematic as findstring really searches for
> substrings. So, for example $(findstring car,bicycle airplane
> carriage) will return a non-empty value. Instead it seems that
> 'filter' should be used for this kind of problem.

> Should the documentation in those sections be modified to note the
> problem and suggest using filter as an alternative?

The use in 7.3 specifically requires the use of findstring.  It won't
work to use filter, because the flags in MAKEFLAGS are condensed
together into a single string.

The example in the findstring description could be made more clear, yes,
by showing specifically that the match is on a substring and not the
whole word.





Re: Building Make out of Git: Gettext requirements

2013-04-20 Thread Paul Smith
On Sat, 2013-04-20 at 19:38 +0300, Eli Zaretskii wrote:
> > From: Paul Smith 
> > Cc: bug-make@gnu.org
> > Date: Sat, 20 Apr 2013 11:44:02 -0400
> > 
> > On Sat, 2013-04-20 at 13:50 +0300, Eli Zaretskii wrote:
> > > Do we really need to require 0.18.1 or can this restriction be lifted?
> > > I hacked configure.ac to require 0.17, and didn't see any problems
> > > afterwards.
> > 
> > You can see this bug:
> > 
> > http://savannah.gnu.org/bugs/?37307
> 
> That just says you must use Gettext 0.18 to be able to avoid static
> linking.  It doesn't say the build won't work

I didn't say that it said the build wouldn't work.

> But even if building the tarball is not enough, it is IMO wrong to
> solve the problem like this.  For starters, it punishes OpenBSD users
> themselves, because previously they could build Make, albeit
> statically linked with gettext -- now they won't be able to do that at
> all, unless they upgrade Gettext!

I believe this is only a problem building from git.  If you build from a
distribution tarball then you don't need to have any particular version
of gettext installed.  For developers who build from git, they do need
to have certain versions of tools installed.

However, if we build our distribution tarball with an older version of
gettext, then the resulting tarball distribution doesn't work correctly
in this situation.

Upgrading gettext is usually pretty simple, and this release is almost 3
years old.

> > However, Brad was clear that he believed that the minimum version MUST
> > be increased in configure.ac in order to get the benefits.
> 
> I don't see how he could be right in that.

I investigated how this works and he is right, actually.

The GETTEXT_VERSION macro is not used by gettext at all.  It's used by
the autopoint tool, which is run to build distributions from SCM.  It
grabs the gettext M4 files and includes them in the source directory so
autoconf can find them.

The gettext distribution contains an archived version of the entire
source code control repository (!), e.g.
/usr/share/gettext/archive.git.tar.gz which is the contents of a git
repository (.git directory for example).

When autopoint runs it unpacks the archive tarball, then checks out the
tagged version of the M4 files based on the value of GETTEXT_VERSION and
includes those versions of the gettext m4 files in the target.

So, whatever version you put there is the version of the gettext m4
files you'll get, regardless of which version of gettext is installed on
your system.




Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-21 Thread Paul Smith
On Fri, 2013-04-19 at 14:09 +0300, Eli Zaretskii wrote:
> > Date: Fri, 19 Apr 2013 11:54:05 +0200
> > Cc: bo...@kolpackov.net, bug-make@gnu.org
> > From: Frank Heckenbach 
> > 
> > > Is there a simple enough Makefile somewhere that could be used to test
> > > this feature, once implemented?
> > 
> > We have a test in the test suite (output-sync). Can you use that?
> 
> I hoped for something simpler and not involving Perl or Unixy shell
> features, because I'd like to use this in the native Windows
> environment, where the Windows port of Make runs.  However, if that's
> the best possibility, I guess I'd craft something based on that test.

The basic feature can be tested trivially like this:

  all: one two

  one two:
  @echo start $@
  @sleep 1
  @echo stop $@

Now if you run this using "make -j" you'll get:

  start one
  start two
  stop one
  stop two

If you run this using "make -j -O" you should get:

  start one
  stop one
  start two
  stop two

There's more to test than that: before it's done we need to test
recursive make invocations for example.  But the above is simple.




Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-21 Thread Paul Smith
On Fri, 2013-04-19 at 12:36 +0300, Eli Zaretskii wrote:
> Also, where is the best place to put the emulated Posix functions?
> Some new file in w32/compat/? 

I'd like to see it there.  I'm thinking I want to move the new stuff out
of job.c even for POSIX systems.  The ifdefs are really getting to me.
I was thinking of creating a "posix.c" file for example.  The problem is
there's a lot of variation even in POSIX, and there's a lot of POSIX-y
stuff we actually do use in Windows too.  We might need to break it down
further than just "posix.c".

For this feature I probably won't have a clear feel for how to split it
until I see what the Windows version looks like.

Anyway I'll probably wait until after the next release to do major
renovations.




Re: feature request: parallel builds feature

2013-04-22 Thread Paul Smith
On Mon, 2013-04-22 at 00:42 -0700, Jim Michaels wrote:
> it currently has a problem with stdin, because at this point there is
> only one of those, only 1 of them gets it, and the others starve. so
> if your build needs stdin or creates files from the commandline using
> heredocs, you can't use it (check first!). you will get an error. gnu
> has not yet figured out a solution yet (I have, multiple shells IF you
> can control them... probably can't without some work creating batch
> files for the jobs). so there is a solution. even with a batch file,
> make would need some sort of way of reporting back error conditions. I
> think there are ways of managing that with files via presence-detect,
> sort of like semaphores. they should get cleared when the job ends, or
> when a switch is given to clear the state for that session if the
> session was broken with ctrl-c. well, I suppose a ctrl-c handler
> should still kill those terminals or cmd shells and clear up those
> files.
> what do you think?
> if a terminal is opened, it should be created without a window. some
> OS's have that option. some don't, like freeDOS, which would not have
> the ability to do terminals, parallel shell windows, or even the
> --jobs feature (but that's implementation-dependent).

Please keep the mailing list CC'd.  Thanks.

I'm afraid I still don't understand what you're asking for here.  You'll
need to back up and provide a description of your needs in a clear,
orderly way without digressions.

Yes, it's true that GNU make only provides its stdin to one job at a
time and which job gets it is essentially random.  In order to address
this we'd need to see a specific use-case or requirement, but my
suspicion is that all such possible use-cases are better solved by a
change of process at a level above what make can provide.





Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-23 Thread Paul Smith
On Tue, 2013-04-23 at 21:40 +0300, Eli Zaretskii wrote:
> > Date: Tue, 23 Apr 2013 11:29:35 -0700
> > From: David Boyce 
> > Cc: Frank Heckenbach , bug-make 
> > 
> > Since you asked basic questions I'm going to start this at a basic level.
> > Apologies if it covers some stuff you already know or if I misinterpreted
> > the questions. Note that I haven't actually looked at the patch that went
> > in so this is generally wrt the original.
> > [...]
> 
> Thanks, I will dwell on this.

When thinking about this, remember that it's not enough to consider how
a single make invocation will work.  If you run with a single make
instance under -j, then redirecting each job's output to a temp file and
then when make reaps each job, copying the contents of that temp file to
stdout, is a sufficient solution.  You just need to be able to redirect
stdout/stderr of a given job to temporary files.  In UNIX of course this
is done by dup()'ing the file descriptors after the fork and before the
exec.  Presumably on Windows there's some other way to do this.
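For the POSIX side, the fork/dup2/exec sequence described above can be sketched like this; the function and parameter names are illustrative, not make's actual API:

```c
#include <unistd.h>
#include <sys/wait.h>

/* After fork() and before exec(), point the child's stdout and stderr
   at the job's temp file descriptor.  tmp_fd would come from something
   like the open_tmpfd() pattern; argv is the job's command line. */
static pid_t spawn_job_redirected (char **argv, int tmp_fd)
{
  pid_t pid = fork ();
  if (pid == 0)
    {
      dup2 (tmp_fd, STDOUT_FILENO);   /* job's stdout -> temp file */
      dup2 (tmp_fd, STDERR_FILENO);   /* job's stderr -> same file */
      execvp (argv[0], argv);
      _exit (127);                    /* exec failed */
    }
  return pid;
}
```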

However, in a situation where we have recursive make instances all
running at the same time under -j then each one of those make instances
is also running one or more jobs in parallel.  In this case it's not
good enough for each make to synchronize its own jobs' output.

So in addition to the temp file change above, you ALSO need a way to
synchronize the use of the single resource (stdout) that is being shared
by all instances of recursive make.  On UNIX we have chosen to use an
advisory lock on the stdout file descriptor: it's handy, and it's the
resource being contended for, so it makes sense.

You asked, what if someone redirected the stdout of a sub-make.  In that
case things still work: that sub-make will not participate in the same
locking as the other sub-makes, it's true, but that's OK because the
output is going to a different location from the other sub-makes so
there's no need to synchronize them.  Meanwhile any sub-sub-makes using
the same output file will still be synchronous with each other.


I'm not sure if the lock locks the FD (so that if you dup'd the FD but
it still pointed to the same output, you could take exclusive locks on
both), or if it locks the underlying resource.  If the former I guess
it's possible to break the synchrony if you worked at it.
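For what it's worth, my reading of POSIX is that fcntl() record locks are owned by the process and attached to the file itself, not to a particular descriptor: a single process "locking" both a descriptor and its dup() just merges the two locks rather than blocking, and conflicts only arise between distinct processes.  A small demo of that reading (the function name is made up):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Returns 0 if taking the "same" exclusive lock through a dup'd fd
   succeeds without blocking, i.e. no self-conflict is observed. */
static int locks_conflict_across_dup (void)
{
  FILE *f = tmpfile ();
  int fd1 = fileno (f);
  int fd2 = dup (fd1);
  struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };

  if (fcntl (fd1, F_SETLKW, &fl) != 0)   /* lock via the original fd */
    return -1;
  /* Same process, same open file: this request merges with the first
     lock instead of deadlocking against it. */
  if (fcntl (fd2, F_SETLKW, &fl) != 0)
    return -1;
  close (fd2);
  fclose (f);
  return 0;
}
```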




Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-23 Thread Paul Smith
On Tue, 2013-04-23 at 23:16 +0300, Eli Zaretskii wrote:
> > All it requires is inheriting the redirected stdout/stderr to child
> > processes. This was already possible under Dos (with the exception
> > that since there was no fork, you had to redirect in the parent
> > process, call the child, then undirect in the parent, IIRC).
> 
> Inheritance is not the problem; _finding_ the inherited object or its
> handle is.  With stdout, its name is global, but not so with a handle
> to some mutex I can create or its name (unless we indeed make the name
> fixed and thus system-global, which I don't like, if only because it's
> not how Make works on Unix).

You're right: we cannot use a fixed name.  That would allow only one
(top-level) invocation of make per system to use -O.  That's not
acceptable.

> > They can have their own stdout, in particular with the
> > "--output-sync=make" option. But that's actually the harmless case:
> > Each sub-make runs with its stdout already redirected to a temp file
> > by the main make. In turn, it redirects the stdout of its children
> > to separate temp files, and when they are done, collects the data to
> > its stdout, i.e. the temp file from the main make. When the sub make
> > is finished, the main make collects its output to the original
> > stdout. So unless I'm mistaken, no locking is actually required in
> > this case.
> 
> But we still acquire the semaphore in this case, don't we?  Perhaps we
> shouldn't.

We do acquire it but it's on a different resource.  We still do need the
semaphore because we want the output directed to the other file to be
synchronized as well, at least within itself.  If you run "make -O" and
your output to stdout is synchronized but you run "make -O >out" and the
contents of the "out" file are not synchronized, that would be bogus.

However, I'm not worried about losing this ability to change the "lock
domain" for sub-makes based on redirection.  If all the sub-makes were
synchronized together even though some redirected to a different file
that is fine.

First, I sincerely doubt there are many makefiles which run sub-makes
with IO redirected like that.  And second, the slight loss in
parallelization is not worth a huge increase in complexity.  It's a nice
thing that we get it for free with stdout but I wouldn't expend days
trying to implement it if we didn't... in fact without stdout I just
don't see that it's even possible.  I guess if the child could somehow
detect that it was using a different stdout it could be done, but that
would require the parent passing information about stdout to the
child... ugh.  Not worth it.

> IOW, the top-level design is indeed quite general, but the low-level
> algorithmic details are not, and therefore just replacing these
> functions will not necessarily do the job.

Right.  The locking is the part that's not portable, and we need a lock
that can be shared between all the make instances for one "top-level"
make, but not shared with another "top-level" make.

Without knowing what kind of resource Windows can take locks on, we
can't really know how to help with that.  There must be precedent for
this kind of "shared between a subset of processes" lock on Windows.
What handle do we need to know, and how can that handle be communicated?
We can pass a filename through the environment, we can add a
command-line flag to the sub-makes, we can do whatever the equivalent of
inheriting an open file descriptor is on Windows...

> If we really want to make this reasonably portable (and without that,
> I cannot see how Paul's dream about less ifdef's is going to
> materialize), this code needs IMO to be refactored to make the
> algorithm know less about the implementation details.

Personally I've never had any luck trying to create portable code from
scratch, at least not unless I'm very familiar with all the platforms
which is certainly not the case here.

Once we see the second implementation it will be a lot more obvious what
needs to be done to make this more portable.


___
Bug-make mailing list
Bug-make@gnu.org
https://lists.gnu.org/mailman/listinfo/bug-make


Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-24 Thread Paul Smith
On Wed, 2013-04-24 at 21:17 +0300, Eli Zaretskii wrote:
> There's one issue that perhaps needs discussing.  A mutex is
> identified by a handle, which on Windows is actually a pointer to an
> opaque object (maintained by the kernel).  As such, using just 'int'
> for sync_handle is not wide enough, certainly not in 64-bit builds.
> Is it OK to use intptr_t instead?  Doing this cleanly might require to
> have a macro (I called it FD_FROM_HANDLE) to extract a file descriptor
> that is passed to a Posix fcntl; on Windows that macro will be a
> no-op, and on Posix platforms it's a simple cast.
> 
> Is this OK?

I think the "lock" or "lock handle" or whatever can be encapsulated into
a typedef, which will be different between the different platforms.
That sounds like a good abstraction to me.

We can even generalize the way in which we communicate the handle to
sub-makes; for example by calling a function with the handle that
returns a char*: if that value is non-NULL it's added to the command
line (maybe as a third element to the current jobs-fd argument).  If
it's null, nothing is added.  Then the submake can parse it out and hand
it back to a function that returns a handle again
(serialize/deserialize).  I'm not sure if this is necessary; it depends
on the details of the Windows model.
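A minimal sketch of the serialize/deserialize idea suggested above, assuming the handle is an int file descriptor on POSIX and a pointer-sized HANDLE on Windows; the names (sync_handle_t, serialize_handle, deserialize_handle) are illustrative, not make's actual code:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#ifdef _WIN32
typedef void *sync_handle_t;          /* a Windows HANDLE is a pointer */
#else
typedef int sync_handle_t;            /* POSIX: a plain file descriptor */
#endif

/* Render the handle as text so it can be appended to a sub-make's
   command line (e.g. as a third element of the jobs-fd argument).  */
static char *
serialize_handle (sync_handle_t h, char *buf, size_t len)
{
#ifdef _WIN32
  snprintf (buf, len, "%llu", (unsigned long long) (uintptr_t) h);
#else
  snprintf (buf, len, "%d", h);
#endif
  return buf;
}

/* Recover the handle in the child from the command-line text.  */
static sync_handle_t
deserialize_handle (const char *s)
{
#ifdef _WIN32
  return (sync_handle_t) (uintptr_t) strtoull (s, NULL, 10);
#else
  return (sync_handle_t) atoi (s);
#endif
}
```

On POSIX the round trip is trivial; the point of the typedef is that the Windows build can swap in a different representation without the calling code changing.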

> > +/* Test whether a file contains any data. */
> > +static int
> > +fd_not_empty (int fd)
> > +{
> > +  return fd >= 0 && lseek (fd, 0, SEEK_END) > 0;
> > +}
> 
> Isn't this unnecessarily expensive (with large output volumes)?  Why
> not use fstat?

This lseek() doesn't actually move the file reference: SEEK_END plus an
offset of 0 is a no-op so it doesn't matter how large the file is.  This
is just seeing if the position has moved since we opened the file (still
at 0 or not); it just returns the current position in the file, which is
known to the system directly without having to go ask anyone (it has to
be so, since each file descriptor has its own position).

I would be greatly surprised if fstat(), which has to go to the
directory (probably) to look up all the information on the file such as
ownership, permissions, etc., is faster.
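The two approaches being compared can be sketched as follows; fd_not_empty_lseek mirrors the patch above (note that SEEK_END does move the file position to the end as a side effect), and fd_not_empty_fstat is the proposed alternative:

```c
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* The patch's approach: seek to the end and see if that offset is
   nonzero.  This moves the descriptor's position to end-of-file. */
static int
fd_not_empty_lseek (int fd)
{
  return fd >= 0 && lseek (fd, 0, SEEK_END) > 0;
}

/* The suggested alternative: ask the kernel for the size directly,
   leaving the file position untouched. */
static int
fd_not_empty_fstat (int fd)
{
  struct stat st;
  return fd >= 0 && fstat (fd, &st) == 0 && st.st_size > 0;
}
```

Both report the same answer; they differ only in cost and in whether the file position is disturbed.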




Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-24 Thread Paul Smith
I'm fully prepared to accept the blame for not doing the best job
getting buy-in etc. on this effort.  Can we leave the discussion on the
process behind?  I'd prefer that, unless there are real constructive
comments on how to do better next time rather than rehashing what was
done wrong.  I think we have discussed that enough, and I'd prefer to
avoid going further along that road.

Thanks.


On Wed, 2013-04-24 at 20:46 +0200, Frank Heckenbach wrote:
> That's true about SEEK_CUR which was there originally. I actually
> changed it to SEEK_END, which does move the position to the end.

Oh right.  My head cold is keeping me foggy.  What was the reason to
change to SEEK_END, again?

I'm not so sure fstat() is that cheap.  struct stat contains a lot of
information.  Although I guess since we are only ever talking about temp
files, not NFS files or something, it's probably not too bad.




Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-24 Thread Paul Smith
On Wed, 2013-04-24 at 22:25 +0300, Eli Zaretskii wrote:
> > From: Paul Smith 
> > Cc: e...@gnu.org, bug-make@gnu.org
> > Date: Wed, 24 Apr 2013 15:07:21 -0400
> > 
> > I'm not so sure fstat() is that cheap.  struct stat contains a lot of
> > information.  Although I guess since we are only ever talking about temp
> > files, not NFS files or something, it's probably not too bad.
> 
> We could time it if we are afraid of the cost, but I'd be surprised if
> 'fstat' wasn't extremely fast on Posix platforms.  Most of the
> information you get in struct stat is already available when the file
> is open.  In particular, the OS tracks the file's size as it is being
> written.

True.  It's probably all right there and not measurably different if you
have a file descriptor already.




Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-24 Thread Paul Smith
On Wed, 2013-04-24 at 22:39 +0300, Eli Zaretskii wrote:
> > Nothing is actually read by lseek (and even if it were, it would
> > only need to look at the first and last part of the file, not read
> > all the content, if that was the worry).
> 
> Are you sure?  How can lseek "jump" to the last byte of the file, if
> the file is not contiguous on disk, except by reading some of it?

If fstat() can get the size from an internal structure then lseek() can
do the same, then just update the file descriptor's position.  I don't
think there's more to it than setting that value, but it could be.
Certainly at the filesystem layer we don't know, and we don't care,
about things like whether the file is kept contiguously at the block
layer.

As you say, we should just measure.

> Or maybe we should abandon this optimization and take the lock
> regardless.  How bad can that be?

Well, we want to know if the file has any content anyway: for example we
don't want to output the enter/leave notifications if there's nothing to
print.  So there's no extra cost to avoiding the lock here.





Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-24 Thread Paul Smith
On Wed, 2013-04-24 at 22:55 +0100, Tim Murphy wrote:
> why not use a named semaphore wherever possible (windows and linux)
> and lock a file where not instead of trying to pass kernel object
> handles around (seems a bit nasty to me)?

Hi Tim; I think you're late to the party :-).  Let me summarize a lot of
discussion then we can use this as a reference for other questions.

Named semaphores on POSIX are kind of sucky.  They don't get
automatically cleaned up when the process exits, for one thing.  And my
suspicion is that they're not very portable across different variations
of UNIX especially older ones.

File locking, on the other hand, is very portable, very simple, very
fast, and any held locks are automatically released when the descriptor
is closed.  No muss, no fuss.

If you agree file locking is the right way to go on a POSIX system, then
we just have to decide what to lock.  It can be anything as long as all
children of a given top-level make can find it.  We could use a
temporary file, sure.  If we did that we'd have to pass around either
the name of the file or, more likely, use tmpfile() and send along the
file descriptor, handle close-on-exec, etc. like we do with the
jobserver pipe.  Which we could certainly do: we do it today with
jobserver.

However we already have a descriptor available to us: stdout.  If we use
that we don't have to pass around anything because our children are
already inheriting it and it's on a well-known FD.  We don't have to
worry about using "+" before sub-make rules like we do with jobserver.

There's one other advantage of using stdout: if a sub-make redirects its
output to a separate file, that magically starts a new "lock domain" and
that sub-make and its children don't contend with the other sub-makes
and their children.  This is not a big deal and if we didn't get it for
free, we'd never go to the effort of implementing it.  But it's nice.
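The POSIX side of this can be sketched with fcntl() record locks taken on the inherited descriptor; this is illustrative only (function names are hypothetical, and the real code handles errors and signals more carefully):

```c
#include <fcntl.h>
#include <unistd.h>

/* Block until we hold an exclusive lock on the whole file underlying
   FD (stdout, in the scheme described above).  The lock is released
   automatically if the process exits with the descriptor open. */
static int
acquire_output_lock (int fd)
{
  struct flock fl;
  fl.l_type = F_WRLCK;
  fl.l_whence = SEEK_SET;
  fl.l_start = 0;
  fl.l_len = 0;                 /* 0 means "to end of file", i.e. all of it */
  return fcntl (fd, F_SETLKW, &fl);   /* F_SETLKW waits for the lock */
}

static int
release_output_lock (int fd)
{
  struct flock fl;
  fl.l_type = F_UNLCK;
  fl.l_whence = SEEK_SET;
  fl.l_start = 0;
  fl.l_len = 0;
  return fcntl (fd, F_SETLK, &fl);
}
```

Because record locks are per-process and tied to the open file description, every sub-make inheriting the same stdout contends on the same lock, which is exactly the "lock domain" behavior described above.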


Eli is working on the Windows port; I have no idea how he's decided to
implement this there yet.




Re: Fwd: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-25 Thread Paul Smith
On Thu, 2013-04-25 at 07:14 +0100, Tim Murphy wrote:
> To be honest, I have done all this before with named semaphores
> including the "file that gets left over" problem and it's all solvable
> quite nicely.  You pass the build id in the environment which is,
> after all, what it's for.

Sure.  Given enough effort all programming problems are solvable.  The
question is what is the effort, and what is the benefit?

At the moment to me it looks like the effort is not insignificant and I
don't see the benefit.  However, that may well just mean I haven't
thought of something.  Can you give a quick outline of how the cleanup /
error handling might work, and what advantages using named semaphores
has?

Cheers!





Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-27 Thread Paul Smith
On Sat, 2013-04-27 at 13:09 +0300, Eli Zaretskii wrote:
> Is this intended behavior that these two messages:
> 
>   mkfsync:6: recipe for target 'two' failed
>   gnumake[1]: [two] Error 1 (ignored)
> 
> are separated and wrapped by separate "Entering...Leaving" blocks,
> instead of being produced together?  They are both produced by a call
> to 'message', which outputs the message to stdout, so it's not like
> they went to two different streams.  Am I missing something?

Frank mentioned this as well and it's a bug that needs to be fixed.
I'll look into it this weekend.  I need to check the algorithm; one
simple fix, if it's too complex to do it another way, would be to have
make write (append) the error message to the temp file rather than
printing it directly.  That way we know it will come out in the right
order when the temp file is dumped to stdout.

However that may not be necessary.

> If this is intended behavior, can we somehow cut down on these
> "Entering...Leaving" pairs?  They add a lot of clutter and make it
> hard to read the output of a Make run.

Yes, this was also discussed: I've been thinking about a separate way to
allow the user to choose.  We also need to ensure, as best as we can,
that extra unneeded lines are not being generated.




Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-27 Thread Paul Smith
On Sat, 2013-04-27 at 14:39 +0300, Eli Zaretskii wrote:
> The changes needed to make -O work on MS-Windows are now committed to
> the master repository, see commits da7df54 and 049f8e8.  Please review
> and comment.

Thanks Eli!!

I'll take a look over the next few days.




Re: .ONESEHLL not working as expected in 3.82

2013-04-27 Thread Paul Smith
On Sat, 2013-04-27 at 20:55 +0300, Eli Zaretskii wrote:
> Note: there's one more major feature in current git repo that needs to
> be made available on Windows: dynamic loading of extensions.  That is
> my highest priority for Make todo list.

Yes.  I wonder if there are features of gnulib which make this work in a
cross-platform way?  I've often wondered if we might get some win out of
utilizing gnulib more in GNU make code.

However, I think that investigation and reworking will need to wait
until after the next release.  I don't want to open that can of worms
yet... at least not widely.  But if you see something there that would
make your work for "load" simpler let me know and we'll look into it.


One other thing: I want to make a format cleanup commit at some point to
deal with a number of annoying issues related to TABs, EOL syntax, etc.,
as well as update copyright dates, etc.  I don't want to do that when it
will conflict with major work anyone else is doing, so let me know when
a good time is for you.




Re: .ONESEHLL not working as expected in 3.82

2013-04-27 Thread Paul Smith
I took the make-w32 list off.

On Sat, 2013-04-27 at 22:18 +0300, Eli Zaretskii wrote:
> I added a similar facility to Gawk, but there a problem was much
> simpler, because Gawk itself was tracking the loaded extensions in
> platform-independent code.  So my emulation of dlopen didn't need to
> support NULL as its first argument, and didn't care about RTLD_GLOBAL.
> 
> By contrast, the way Make's code in load.c is written, it relies on
> dlopen to track the shared libraries already loaded.  This is a PITA
> on Windows, because there's no similar functionality built in, and the
> emulation of dlopen will have to record each loaded extension in some
> internal data structure, and search that when dlopen is called with
> its 1st argument NULL.  Not rocket science, granted, but it does make
> the job larger and the required testing more extensive.

Well, we already maintain a list of modules that are loaded in
the .LOADED variable.  Although it's not written like that today, I have
no problem changing the code to check that variable to see whether the
module is loaded or not.  We already make that check, to ensure we don't
call the user's init function twice.




Re: .ONESEHLL not working as expected in 3.82

2013-04-27 Thread Paul Smith
On Sat, 2013-04-27 at 23:00 +0300, Eli Zaretskii wrote:
> That would be nice, indeed.

OK, pushed.  You should be able to simply write a new load_objects()
function and drop it in.  Or put it into a w32 file or whatever.





Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-27 Thread Paul Smith
On Thu, 2013-04-18 at 22:36 +0200, Frank Heckenbach wrote:
> > > This is useful (to me) because at any time, I know what's running.
> > > ("[Start]" messages minus "[End]" messages.)
> > 
> > Thanks, this is the reason I was looking for; that use-case wasn't clear
> > to me based on the previous email.
> 
> OK, so what are we going to do about it? Leave, revert, new option?

I've pushed a change to add a new argument to the -O/--output-sync
option, "job", to write output after each line of the recipe.  Please
give it a try and make sure it works for your situation.  It worked OK
in my more limited testing.


I'm not excited about that term ("job"); it's kind of accurate, but in
the documentation for example we're really mushy about exactly what a
"job" is, vs. a "recipe" or a "command line" etc.  I'd like to pick some
terms for this, define them in a solid way, then clean up the
references.  It would be best to do this before the release to avoid
changing things later.

For example, we currently use "target" as the name; maybe "recipe" is
better?

If anyone has opinions I'm listening.




Re: Quirk with rules producing multiple output files

2013-04-27 Thread Paul Smith
On Fri, 2013-04-12 at 13:41 +0200, Reinier Post wrote:
> Hmm, indeed:
> 
> | /tmp % cat Makefile
> | %.1:; echo $*.1 for $@ > $@
> | %.e.1 %.f.1:; echo $*.1 for $@ > $@
> | %.c.1 %.d.1:; for f in $*.c.1 $*.d.1; do echo $$f for $@ > $$f; done
> | %.ab.2: %.a.1 %.b.1; cat $+ > $@
> | %.cd.2: %.c.1 %.d.1; cat $+ > $@
> | %.ef.2: %.e.1 %.f.1; cat $+ > $@
> | /tmp % make -rR stage.ab.2 stage.cd.2 stage.ef.2
> | echo stage.a.1 for stage.a.1 > stage.a.1
> | echo stage.b.1 for stage.b.1 > stage.b.1
> | cat stage.a.1 stage.b.1 > stage.ab.2
> | for f in stage.c.1 stage.d.1; do echo $f for stage.d.1 > $f; done
> | cat stage.c.1 stage.d.1 > stage.cd.2
> | echo stage.1 for stage.f.1 > stage.f.1
> | cat stage.e.1 stage.f.1 > stage.ef.2
> | cat: stage.e.1: No such file or directory
> | make: *** [stage.ef.2] Error 1
> | rm stage.a.1 stage.b.1 stage.c.1
> | /tmp % 

It would be faster/easier for me to read and respond if you stripped out
the non-essential parts.  The interesting make line is:

> %.e.1 %.f.1:; echo $*.1 for $@ > $@

And the error:

> > | echo stage.1 for stage.f.1 > stage.f.1
> > | cat: stage.e.1: No such file or directory
> > | make: *** [stage.ef.2] Error 1

> I didn't realise make would skip the creation of stage.e.1;
> yet this is documented very clearly:
> 
>   http://www.gnu.org/software/make/manual/html_node/Pattern-Intro.html
> 
> I thought this was just describing an optimization;
> I didn't realize it would actually affect the build process.

It's not an optimization: it's a fundamental difference in what rules
with multiple targets mean.  Explicit rules with multiple targets are
treated as if you'd defined the same rule once per target for that
target.  That is, this rule:

  foo bar : ; touch $@

is absolutely identical to this:

  foo : ; touch $@
  bar : ; touch $@ 

Pattern rules with multiple targets means that a single invocation of
the recipe will create all the targets.  That means for this rule:

  %.f %.b : ; ...recipe...

make will expect ...recipe... to build BOTH the two targets when it is
run ONE time.

> Why does it work this way?  What is a possible use case?

There are a large number of situations where a single invocation of a
rule builds multiple output files.  Consider the yacc (bison) tool for
example, where it outputs both a C source and header file.
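That yacc/bison case is the canonical use of a multi-target pattern rule: one run of the recipe produces both files, so make runs it once.  A sketch (file names here are hypothetical, not taken from the thread):

```make
# One invocation of bison -d produces both %.tab.c and %.tab.h,
# so a multi-target pattern rule matches the tool's behavior.
%.tab.c %.tab.h: %.y
	bison -d $<

# Either product can then be a prerequisite; the recipe still runs once.
parser.o: parser.tab.c parser.tab.h
```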

> In any case, it may be useful to also be able to state that
> a multi-target pattern rule only makes its actual target.

Yes, just as being able to specify multiple targets from a single
invocation of an explicit rule would be useful, so might be being able
to specify multiple pattern rules in a single statement.




Re: .ONESEHLL not working as expected in 3.82

2013-04-28 Thread Paul Smith
On Sun, 2013-04-28 at 20:19 +0300, Eli Zaretskii wrote:
> > From: Paul Smith 
> > Cc: bug-make@gnu.org
> > Date: Sat, 27 Apr 2013 16:58:54 -0400
> > 
> > On Sat, 2013-04-27 at 23:00 +0300, Eli Zaretskii wrote:
> > > That would be nice, indeed.
> > 
> > OK, pushed.
> 
> Thanks!  But I see you kept global_dl and the call to dlopen with the
> 1st argument NULL.  What is the purpose of these now?

Basically it's there to handle Guile being already compiled into make,
via direct linking at build time, in a generic way.  If it was linked
then the global symbols would be found without needing to load anything.
However this particular issue could just be handled by the #define that
links it.

> > You should be able to simply write a new load_objects() function and
> > drop it in.  Or put it into a w32 file or whatever.
> 
> My plan was to write dlopen and dlsym, and add them to
> w32/compat/posixfcn.c.  But I need to understand the semantics of
> global_dl in order to do that correctly.

It's up to you how you think it best to implement, whether it makes more
sense to try to reimplement POSIX functions, or do it at a higher level.
Whatever is simpler.




Re: .ONESEHLL not working as expected in 3.82

2013-04-28 Thread Paul Smith
On Sun, 2013-04-28 at 21:14 +0300, Eli Zaretskii wrote:
> > From: Paul Smith 
> > Cc: make-...@gnu.org, bug-make@gnu.org
> > Date: Sat, 27 Apr 2013 12:54:10 -0400
> > 
> > On Sat, 2013-04-27 at 19:17 +0300, Eli Zaretskii wrote:
> > > The .ONESHELL feature is now supported on MS-Windows, for the default
> > > Windows shell (cmd.exe) or compatible replacements, in the development
> > > version (commit e56aad4).
> > 
> > Nice!
> 
> I see you followed this up with a commit (30843de) which moved code
> around that deals with the one_shell use case.  What was the reason
> for that change?  I think it breaks the use case where a Unixy shell
> is used on MS-Windows under .ONESHELL; this used to work before.

I moved it because the way you had it caused one of the test cases to
fail: you had changed that block of code to be inside the if-statement,
when before it was outside/after the if-statement.

> The code block that you moved is needed on MS-Windows as well, when
> the shell passes the is_bourne_compatible_shell test.

I think you're right, sorry.  I thought that code was inside a unix-only
section but I see now it wasn't.

The code used to look like this:

  if (is_bourne_compatible_shell())
{
   ... modify the script to remove @-+ ...
   *t = '\0';
}

  /* create an argv list ... */
  {
...
new_argv[n++] = NULL;
  }
  return new_argv;

So, the setup of the new_argv[] was AFTER and outside the if-statement.
In the change you made, it was moved to be INSIDE the if-statement.
That caused a test case to fail because in the situation where we do
have one-shell and we do not have a POSIX shell, new_argv[] was empty
and no command was invoked.

I moved it back, but accidentally made it not work for Windows.

The goal of this code in the if-statement is to implement a special case
allowing ONESHELL to be easier to add in the case where you DO have a
standard shell.  In that case, and ONLY in that case, we remove the
internal @-+ characters.  This allows you to have something like:

  foo:
  @echo hi
  @echo there
  @echo how are you

And have it continue to work if you add ONESHELL (for performance
reasons) without rewriting all the recipes.

However, if you do NOT have a POSIX shell, then we do NOT remove these
internal characters: we simply provide the script as-is and only the
first line is checked for special characters.  This lets you use
something like Perl, where @ is a special character, for example:

  SHELL = /usr/bin/perl
  foo:
 @print "hi";
 @array = qw(there how are you);
 print "@array\n";

I think the implementation you have is not quite right.  I think the
parsing of the @-+ stuff is common across all platforms if we have a
shell, so you don't need the "else /* non-posix shell */".

In pseudo-code I think it would look something like this:

  if (posix-shell)
{
  ...strip out @-+ from LINE...
}
#ifdef WINDOWS32
  if (need a batch file)
{
  ...write LINE to the batch file & setup new_argv for batch...
}
  else
#endif
{
  ...chop LINE up into new_argv...
}
  return new_argv;

Or something.  Also, I'm not sure about adding things like @echo off to
the batch file.  That assumes that we'll always be using command.com to
run the batch file, but what if the user specified C:/perl/bin/perl.exe
or something as their SHELL?

I'm probably missing something about the implementation though.




Re: .ONESEHLL not working as expected in 3.82

2013-04-28 Thread Paul Smith
On Sun, 2013-04-28 at 22:41 +0300, Eli Zaretskii wrote:
> > I think the implementation you have is not quite right.  I think the
> > parsing of the @-+ stuff is common across all platforms if we have a
> > shell, so you don't need the "else /* non-posix shell */".
> 
> I do need a separate code, because it doesn't just remove the @-+
> stuff, it also removes escaped newlines, so that this:
> 
>   foo && \
>  bar && \
>  baz
> 
> is transformed into a single line
> 
> foo && bar && baz
> 
> That's because stock Windows shells don't know about escaped
> newlines.  I also remove leading whitespace from each logical line,
> while at that, because I don't want to rely on Windows shells too much
> (some of their internal commands are quite weird).

Ah, OK.

I feel like we probably do this parsing more than once.  The entirety of
job.c needs a good cleaning.  But that's for another day.

> > Also, I'm not sure about adding things like @echo off to the batch
> > file.  That assumes that we'll always be using command.com to run
> > the batch file, but what if the user specified C:/perl/bin/perl.exe
> > or something as their SHELL?
> 
> This is not supported yet; if the user tries that, @echo off will be
> the least of their problems ;-)

Oh, OK.  Sounds fine.  I guess I thought you were writing a batch file,
then invoking the shell with the batch file name as the command to run.
E.g., "command.com " vs. "perl " etc.  I am naive
but it seems like that should work OK even in Windows :-).

I guess there are some tricky bits if you sometimes use the command line
and sometimes a batch file (different shell options might be needed for
example).




Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-28 Thread Paul Smith
On Sun, 2013-04-28 at 20:00 +0300, Eli Zaretskii wrote:
> > I've pushed a change to add a new argument to the -O/--output-sync
> > option, "job", to write output after each line of the recipe.
> 
> What is its purpose?  To avoid mixing in the same screen line
> characters from several parallel sub-makes?  (That does happen, albeit
> rarely.)  Or is it something else?

I'm not sure exactly what you mean by your second sentence.  However I
asked the same question you did about this feature.

Frank had a use-case: he was tracking which jobs were active/still
running by making all his recipes look like this:

  target:
  @echo start: $@
  ... recipe ...
  @echo end: $@

This allows a higher-level, dynamic interface to track which jobs are
running, when they started, etc. and track the build.

Although I implemented this because it was simple, I'm not so sure this
is a real use-case.  Or to be more accurate, I agree that it's a real
use-case but I don't think this is a good solution to the problem.

I suspect that a better solution might be to create a "machine
interface" mode for make, as some other GNU CLI tools like GDB, etc.
have.  This interface would be well-defined and unchanging and easily
machine-parseable, and allow people to write front-ends to more
accurately examine make's output.

However, for now this new output-sync mode doesn't seem to be harmful.




Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-04-28 Thread Paul Smith
On Thu, 2013-04-18 at 22:36 +0200, Frank Heckenbach wrote:
> % make -Omake # same with -Otarget
> m:2: recipe for target 'foo' failed
> make: *** [foo] Error 1
> foo:error
> 
> This seems at least strange to me: The conclusion "recipe failed" is
> printed before the reason (the messages from the job).
> 
> Reverting this part (i.e., moving the sync_output() call to where it
> was before) corrects this:
> 
> % make -Otarget # same with -Otarget
> foo:error
> m:2: recipe for target 'foo' failed
> make: *** [foo] Error 1
> 
> To fix it without reverting the behaviour seems to require calling
> sync_output() before child_error() (2 places).

Unfortunately that isn't sufficient either.  It's possible for
child_error() to be invoked, but then we don't stop but continue on with
the recipe (if the error is ignored).  If we ran child_error() here we
would show half the recipe output, then the error message, then the rest
later, in a separate grouping.

I went a different way: I modified the child_error() function so that if
there was an output-sync file, the error message would be written to the
end of that file instead of printed directly.  This way when the output
is shown to the user she sees the entire thing, including error messages
from make, in order.

This also allowed me to drop the layer of enter/leave notices around
error messages.

Please check/verify this change in your own environments.


PS. Some may be tsking to themselves noting this change would have been
a lot simpler if we'd kept a FILE* handle instead of a file descriptor.
Unfortunately it's not clear that would work.  I created a test program
to verify that this method of having the parent append messages to the
file after the child exits, and with a FILE* handle I couldn't make it
work right.  My fseek() after the child exited didn't skip past the
child's output and my message printed by the parent ended up overwriting
that output.  I'm sure it would have worked if I'd closed the file in
the parent and re-opened it, but since I'm using tmpfile() that is not
possible.  I might have done something wrong, but anyway it wasn't a
slam-dunk.
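The descriptor-based variant that does work can be sketched like this, assuming the temp file comes from tmpfile() and the child inherits its fd; the function name is illustrative:

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* The child writes its output through the raw fd; after it exits the
   parent seeks to the end and appends its own message.  With a plain
   descriptor there is no stdio buffer whose position can go stale. */
static int
append_after_child (int fd, const char *msg)
{
  pid_t pid = fork ();
  if (pid == 0)
    {
      static const char out[] = "child output\n";
      write (fd, out, sizeof out - 1);
      _exit (0);
    }
  waitpid (pid, NULL, 0);
  /* Re-sync to whatever the child wrote, then append. */
  lseek (fd, 0, SEEK_END);
  write (fd, msg, strlen (msg));
  return 0;
}
```

When the file is later dumped to stdout, the parent's message appears after the child's output, which is the ordering the child_error() change above is after.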




Default output-sync setting (was: Re: [bug #33138] .PARLLELSYNC enhancement with patch)

2013-04-28 Thread Paul Smith
Now that we seem to have a workable solution for output synchronization
for both POSIX and Windows systems, I wonder if we shouldn't consider
enabling it as the default mode when parallel builds are running.

I understand that this will be a change that could be visible (beyond
the collection of output) due to using a temp file instead of a
terminal.  Of course people can still use -Onone if they want old
behavior.

However assuming the new mode works and is reliable, and is not a
performance bottleneck, I'm hard-pressed to see why a well-ordered
output would not be preferable to just about everyone, and hence
shouldn't be the default.

Of course it's possible that writing to a file, rather than spewing to
stdout, WOULD be noticeably performance impacting at least in some
situations/systems.


Comments?





Re: Dynamic objects (was: .ONESEHLL not working as expected in 3.82)

2013-04-29 Thread Paul Smith
On Mon, 2013-04-29 at 19:33 +0300, Eli Zaretskii wrote:
> > From: Paul Smith 
> > Cc: bug-make@gnu.org
> > Date: Sat, 27 Apr 2013 16:58:54 -0400
> > 
> > On Sat, 2013-04-27 at 23:00 +0300, Eli Zaretskii wrote:
> > > That would be nice, indeed.
> > 
> > OK, pushed.  You should be able to simply write a new load_objects()
> > function and drop it in.  Or put it into a w32 file or whatever.

> 1. Doesn't the FSF frown upon capability to load _any_ dynamic
>objects?  I think they like the GCC method whereby each extension
>is required to define a symbol with a certain name
>(plugin_is_GPL_compatible in GCC), which is tested at load time,
>before the dynamic object is allowed to be loaded.

Hm.  I guess the concern is that someone will introduce a proprietary
"plug-in"?  My position on this is that it would be a violation of the
GPL.  I don't believe there is any useful way to utilize this feature
without becoming a derived work of GNU make, since the only way to do
anything with this feature is to invoke GNU make functions.  As GNU make
is GPL'd there's no dynamic linking exception.  I'll check with the
legal folks.

> 2. The fact that the dynamic object's file extension (.so) is exposed
>to the Makefile is unfortunate, because it will hurt portability of
>Makefiles: the extension on Windows is .dll.  Can we omit the
>extension and append it internally?

Yes, that should be possible.  My concern is that, at least on UNIX, the
rules for this are complex and I don't want to reimplement the runtime
linker :-).  Maybe something like, first try the path as given and if
that fails, try adding arch-specific extensions?

The other problem here is that we want to allow rebuilding of dynamic
objects then re-exec'ing make... if we're trying different extensions
THAT can be a real problem... what order do we do this in?

I'm not sure I can come up with a reliable algorithm for this that's
understandable.

> 3. I suggest to extend the search for dynamic object to a
>Make-specific directory (${prefix}/share/make/), before falling
>back to the "system-specific path".  Or maybe even not fall back to
>any system-specific defaults, because those are generally set for
>shared libraries, not for plugins.  (You do NOT want to know where
>Windows will look for shared libraries.)

I'm not sure about having a make-specific directory.  It's not so easy
to do in UNIX--we'd have to modify the LD_LIBRARY_PATH env. var. I
suppose.  Also we don't really have a precedent of a "make-specific"
directory like that.

On UNIX there's no way to avoid looking in the system-specific locations
except by forcing the object path to contain a "/".  I suppose that if
the object didn't contain a "/" we could prefix it with "./" to force
the avoiding of system paths.  On the other hand we DO have precedence
for searching system paths; for example make's "include" will search for
included makefiles in places like /usr/include, /usr/local/include, etc.
even though I can't see how THAT makes sense.

> 4. It would be good to have at least a single simple example of a
>dynamic extension, either in the tarball or in the manual.  The
>only ones I found are in the test suite; did I miss something?

No.  The documentation does need to be enhanced.

> 5. Is the following a valid 'load' directive?
> 
>  load /foo/bar/
> 
>If it is valid, what is its semantics?  If it is invalid, the code
>in load_object should detect it and give a diagnostics; currently
>it will happily use this, and will try to call a symbol _gmk_setup.

Hm.  That's odd.  It shouldn't try to call an init function unless the
load of the dynamic object succeeds, and I would think that trying to
dlopen("/foo/bar/") would fail.  I'll check it out.

> 6. The diagnostics in read.c:
> 
>   if (! load_file (&ebuf->floc, &name, noerror) && ! noerror)
> fatal (&ebuf->floc, _("%s: failed to load"), name);
> 
>is IMO misleading: it says "failed to load" also in the case that
>the dynamic object was successfully loaded, but the function called
>from it returned zero.  It would be better to make a more precise
>message in that case.

Yes, good point.

> 7. API:
> 
>. I suggest to request that the buffers for expansions and
>  evaluation by gmk_expand and gmk_eval be provided by the caller.
>  It is not safe (and not very convenient) to return buffers
>  allocated internally by these functions, because the dynamic
>  object might be compiled/linked with an inc

memory allocation (was: Re: Dynamic objects)

2013-04-29 Thread Paul Smith
On Mon, 2013-04-29 at 19:30 +0100, Tim Murphy wrote:
> I must clarify - I think that make should provide plugins with an
> allocation mechanism. Not the other way around.

It's probably a good idea for make to provide a "gmk_free()" function
that will free memory returned to the plugin when it calls gmk_*()
functions such as gmk_expand().  Is that sufficient to deal with this
problem?

> the snprintf model for dealing with expansion is not so bad - I mean
> the problem is that nobody knows how big an expansion is going to be
> in the end, right?  So how does make deal with this already? The same
> way would be fine for the plugin and it would be nice to not simply
> push that problem on to all plugin writers.

make calls malloc() and if the buffer is not big enough it calls
realloc().  But the problem is that while make is expanding and
allocating, it's also interpreting.  And that interpretation might have
side-effects.

Suppose, for example, we enhanced gmk_expand() to take a buffer and a
plugin invoked:

gmk_expand(buf, buflen, "$(info expanding) $(FOO)");

This will return the expansion of the FOO variable, but it will ALSO
print "expanding" as the result of calling the info function.  Now
suppose that the result of expanding $(FOO) was too large to fit into
buf, so the function returns the length needed and the plugin
reallocates the buffer to be large enough and re-invokes gmk_expand()...
now it will print "expanding" AGAIN.

Of course the side-effect might not be so innocuous as double-printing.

On Mon, 2013-04-29 at 22:34 +0300, Eli Zaretskii wrote:
> At least on Windows, it can be a real problem, because the libraries
> with which a shared object was linked are hardcoded into it, and
> there's more than one way of allocating memory.
> 
> How about a callback for allocating memory?  Then Make could call that
> callback and get memory that the extension could free.

This could work, at the cost of an extra allocation and buffer copy for
each invocation of gmk_expand() (etc.)  We would basically call our
normal expand and get back a buffer allocated by us, then if there was a
separate allocator registered we'd call that to get enough memory to
hold the buffer, then copy over from our buffer to theirs, then free our
buffer and pass back theirs.

If the method at the beginning of this email (providing a gmk_free()
function that will free memory returned from gmk_expand()) is
sufficient, though, wouldn't that be a better/more efficient solution?

As long as we don't have one side (between make and the object)
allocating memory and the other side freeing it, is that enough?




dynamic object searching (was: Re: Dynamic objects)

2013-04-29 Thread Paul Smith
On Mon, 2013-04-29 at 22:34 +0300, Eli Zaretskii wrote:

> > Yes, that should be possible.  My concern is that, at least on UNIX, the
> > rules for this are complex and I don't want to reimplement the runtime
> > linker :-).  Maybe something like, first try the path as given and if
> > that fails, try adding arch-specific extensions?
> 
> No, much simpler: _always_ append _a_single_ arch-specific extension,
> and try loading that.  We should document that extension; using the
> one that is used by default by the compiler for producing shared
> libraries should be good enough, I think.

It's not so simple, though, as just .so vs. .dll.  MacOS for example
uses .dylib.  And I think AIX does something else weird that I've
forgotten about.  Others probably do as well.

Plus on UNIX any extension is acceptable since we're using dlopen()
(even with the normal linker you can give any library name you want,
it's only the -l flag that makes assumptions).  Maybe someone wants to
write pattern rules to build their GNU make loadable objects with a
suffix ".gmkso" to distinguish it (and use a different rule) from
building normal .so shared objects.

I want to be sure the benefits outweigh the loss of flexibility before
we go down that path.

On Mon, 2013-04-29 at 23:13 +0300, Eli Zaretskii wrote:
> > the same way one creates 1 makefile that can build the same code for 2
> > operating systems - something done every day.  You make it up. You run
> > uname with $(shell) or you pass in an argument from a top level script that
> > does know the platform or whatever.   In the end you have a bit of makefile
> > that says:
> 
> First, there's no uname on Windows.  You are in fact saying that in
> order to run a Makefile one would need something similar to autoconf.

It's probably a good idea to have make predefine a variable containing
the "host" architecture, to avoid the need for uname.  We currently have
an internal variable "make_host" which is the GNU autoconf --host value
on systems where configure runs, and the various config.h templates have
hardcoded values.  Maybe we could do something with this (just using the
--host value might be too arbitrary, I'd have to look at the options).

Which is kind of beside the point, but just a thought :-).




Re: dynamic object searching (was: Re: Dynamic objects)

2013-04-29 Thread Paul Smith
On Mon, 2013-04-29 at 17:00 -0400, David Boyce wrote:
> On Mon, Apr 29, 2013 at 4:34 PM, Paul Smith  wrote:
> > Plus on UNIX any extension is acceptable since we're using dlopen()
> > (even with the normal linker you can give any library name you want,
> > it's only the -l flag that makes assumptions).  Maybe someone wants to
> > write pattern rules to build their GNU make loadable objects with a
> > suffix ".gmkso" to distinguish it (and use a different rule) from
> > building normal .so shared objects.
> >
> > I want to be sure the benefits outweigh the loss of flexibility before
> > we go down that path.
> 
> Why not try opening the pathname as given, and if that fails append
> the platform-standard extension and try again?

The issue is that we rebuild loaded objects if they are out of date.
That means we need to do the whole "try to remake" thing.  That gets
more complex if there are multiple possible names for the loaded object.

We can do the search and if we find one then it's easy: we just try to
rebuild it.

But if we don't find one, how do we choose what to build?  We could use
the same algorithm and try to build each one in turn (although that
would be a pretty big change to the code; we don't really have a way to
say "build A or B or C... stop after the first one that succeeds or fail
if none do").  Although starting with an unadorned file like "foo" is
not going to work so well since it will match lots of try-anything
rules.

It's just not pretty.

> > It's probably a good idea to have make predefine a variable containing
> > the "host" architecture, to avoid the need for uname.  We currently have
> > an internal variable "make_host" which is the GNU autoconf --host value
> > on systems where configure runs, and the various config.h templates have
> > hardcoded values.  Maybe we could do something with this (just using the
> > --host value might be too arbitrary, I'd have to look at the options).
> 
> I've twice submitted a patch to provide make_host as a make variable,
> but they've gone into the ether. It's a one-line patch and hugely
> useful IMHO considering the number of people who've had to reinvent
> the $(if $(shell uname)) etc wheel. It's impossible to determine how a
> given user or community will define "platform", so no variable will
> work for everybody, but I think make_host is pretty hard to improve
> upon.

Well, David, when you suggested it I wasn't so sure.  But now that I've
thought of it myself... brilliant!! :-p :-)

I'm not saying make_host is wrong.  I do wish there was something more
generic available (maybe in addition) that let people know "posix" vs
"windows" vs. "vms" vs. "amiga" vs. whatever, and avoid a lot of
makefiles with "if linux or freebsd or openbsd or aix or hpux or ...".

But I guess you get into sticky areas: is MacOS "posix" even though it
has so many differences from stock BSD?  Are "windows" and "msdos" the
same?  What about "msys" vs. "cygwin" etc.?

From the standpoint of writing a makefile I guess what you really want
to know is: does this environment have a POSIX shell environment, or
Windows command.com, or a VMS shell (whatever that is)?  Or something else
(I'm not so familiar with all the variations on the Windows side).




Re: Duplicated "Entering/Leaving directory" when new option -O is used

2013-04-30 Thread Paul Smith
On Tue, 2013-04-30 at 11:19 +0200, Stefano Lattarini wrote:
> The above has been obtained with GNU make built from latest
> git version (commit 'moved-to-git-46-g19a69ba').

Yes.  I know the email lately has been daunting but if you wade through
it you'll see a number of emails discussing this issue; it definitely
needs to be addressed before release.

Thanks for testing Stefano!





Re: Some serious issues with the new -O option

2013-04-30 Thread Paul Smith
Just to be clear, you're saying that the testsuite runs as one long
operation, updating one target, and the recipe invokes one test script,
right?  I can see how that environment might be problematic for this new
feature.  It works much better for lots of smaller targets.

However, you could avoid this issue by marking the recipe that runs the
testsuite as a sub-make, for example by prefixing its command line with
the "+" operator.  If you do this and use -Otarget or -Ojob, make will
not capture the output from that job.

Of course this also has the unfortunate side-effect that running "make
-n" will still try to run the test scripts.  Hm.

On Tue, 2013-04-30 at 11:48 +0200, Stefano Lattarini wrote:
> So please don't ever make that option the default; if you really
> really want to, at least put in place some smart checks that only
> enable '-O' when the output is *not* a tty, so that we can have the
> best of both worlds (useful feedback for interactive runs, more
> readable logs for batch runs).

This is a possibility for sure.

I have to say that my experience with parallel builds hasn't been as
wonderful as others here.  I often get output which is corrupted, and
not just by intermixing whole lines but also by having individual lines
intermixed (that is the beginning of the line is one job's output, then
the end of the line is another job's output, etc.)

This kind of corruption is often completely unrecoverable and I simply
re-run the build without parallelism enabled.

I think it depends on how much output jobs emit and how many you are
running at the same time.  It could also be that Windows is better about
avoiding interrupted writes to the same device.


Tim Murphy  writes:
> What I mean is that:
> ./make -Otarget
> might be a good interactive default rather than -Omake.

I always intended that and never suggested -Omake would be the default.
I think -Omake is only really useful for completely automated,
background builds.  It wouldn't ever be something someone would use if
they wanted to watch the build interactively.

> I haven't tested to see if this is how the new feature works or not. I
> don't think it's completely necessary to keep all output from one
> submake together. so turning that off might make things more
> interactive,  Per-target syncing is a valid compromise.

This is the default.  If you use -Otarget (or -Ojob) then what is
supposed to happen is that make uses the same sub-make detection
algorithms used for jobserver, etc. and if it determines that a job it's
going to run is a submake it does NOT collect its output.

However, I have suspicions that this is not working properly.  I have a
make-based cross-compilation environment (building gcc + tools) I've
been using for years, and I tried it with the latest make and -O and saw
similar problems (sub-make output collected into one large log instead
of per-target).

Thinking about it right now I think I might know what the problem is,
actually.  I'll look at this later.


Stefano Lattarini  writes:
> I wasn't even aware of those differences; as of latest Git commit
> 'moved-to-git-46-g19a69ba', I don't see them documented in either
> the help screen, the manpage, the texinfo manual, nor the NEWS file.

I don't see where that code comes from?  There is no Git commit in the
standard GNU make archive with a SHA g19a69ba.  The current HEAD on
master is:

  19a69ba Support dynamic object loading on MS-Windows.

At any rate, the new option and option arguments are documented in the
manual and there is an overview including the option arguments in the
man page.  The NEWS file doesn't discuss the individual option
arguments, only the option.  Ditto the --help output.




Re: Some serious issues with the new -O option

2013-04-30 Thread Paul Smith
On Tue, 2013-04-30 at 16:04 +0200, Stefano Lattarini wrote:
> On 04/30/2013 03:37 PM, Paul Smith wrote:
> > Just to be clear, you're saying that the testsuite runs as one long
> > operation, updating one target, and the recipe invokes one test script,
> > right?
> >
> No; the testsuite runs as a recursive make invocation (yes, this is
> sadly truly needed in order to support all the features offered by the
> Automake parallel testsuite harness --- sorry), but each test script
> (and there can be hundreds of them, as is the case for GNU coreutils
> or GNU automake itself) is run as a separate target, explicit for
> tests which have no extension and pattern-based for tests that have an
> extension.

This should work very well with -Otarget then, except for the
colorization/highlighting issue... once it works as expected.  I'll look
into this issue later and I would be interested to see your experience
with it once it's resolved.




Re: dynamic object searching (was: Re: Dynamic objects)

2013-04-30 Thread Paul Smith
On Tue, 2013-04-30 at 17:48 +0100, Tim Murphy wrote:
> i.e. I don't just have 
> load X.dll

> I have to supply the recipe to build it on windows:

> X.dll:
>   cl.exe  /Fdo$@   # use microsoft's compiler

> and on Linux
> 
> X.so:
>gcc -o $@  ... # using gcc

Actually this supports Eli's point perfectly.  This is no problem.  You
can just put both of those rules into your makefile, and if make defines
an extension EXT for the current platform you can use "load X.$(EXT)"
and when you're on Windows it will build one way and when you're on
Linux it will build the other way.

However, I'm still undecided on how to handle this.  I'll look at it
again shortly.




Re: feature request: parallel builds feature

2013-04-30 Thread Paul Smith
On Tue, 2013-04-30 at 17:20 -0700, Jim Michaels wrote:
> I wasn't digressing.  I was explaining the point.  the concept I am
> trying to present as a solution to the problem of making parallel
> stdin for --jobs in GNU make (which currently doesn't work and is I
> guess single-threaded) is to make a separate terminal or command shell
> for each job, such as via a generated batch file or shell script.
> 
> this is as simple as I can make it.

You need to give a concrete example of the problem you're trying to
solve.  When the manual discusses stdin it means that only one job at a
time can read from the make program's stdin.

Multithreading won't help because there is only one input to read from,
regardless of how many threads do the reading.

What this limitation is discussing is if there were a makefile like
this:

read: read1 read2
read1 read2: ; @echo $@: enter a word: ; read word ; echo $@: $$word

Both of these two targets read from stdin.  If you run them serially, it
works:

$ make
   read1: enter a word:
   fooblatz
   read1: fooblatz
   read2: enter a word:
   barflatz
   read2: barflatz

If you run in parallel then both the read1 and read2 targets run at the
same time and both want input from stdin, at the same time.  There's no
way this can work: when you typed a word how could you know or specify
which read operation got it?  So make arbitrarily chooses one of the
jobs to get the input and the others have their stdin closed.

But of course, this doesn't in any way impact rules like this:

read: read1 read2

read1 read2: ; @word=`cat $@.input`; echo $@: $$word

Now if you have files like read1.input and read2.input, those will be
read inside these rules and behave properly.

> I have learned that on a machine with 12 threads and 64GB of memory,
> you can have 50+ jobs running.

This depends very much on what those jobs are doing.  Obviously you CAN
run as many jobs as you want.  However I've never heard of being able to
get more than #CPUs plus a few jobs running at the same time without
making the build _slower_.  At some point the kernel will simply thrash
trying to keep all those jobs running at the same time, if they
seriously outnumber the cores available to run them on.




Re: Change in $(MFLAGS) format breaks automake-generated rules

2013-04-30 Thread Paul Smith
On Wed, 2013-05-01 at 00:59 +0200, Stefano Lattarini wrote:
># With make 3.82, compiled from official tarball:
>$ make -f- <<<'all:; @echo $(MFLAGS)' -I none
>-I none
> 
># With development version of make:
>$ make -f- <<<'all:; @echo $(MFLAGS)' -I none
>-Inone

I think MFLAGS is deprecated.  Is there a reason you use this instead of
MAKEFLAGS?

> Is there a reason behind this change?  If not, could it be reverted?
> No big deal if the change is intended, as I can certainly and easily
> improve the Automake recipes instead.

I did make the change on purpose, because the new -O flag with an
optional argument wasn't getting parsed correctly with the space (it was
being parsed as -O, which defaults to target sync mode, plus a goal
"none").

I didn't think it would affect anyone so I used the simplest solution,
of removing the space.

However, I can probably make it work the old way as well and revert that
change.





Re: Some serious issues with the new -O option

2013-05-01 Thread Paul Smith
On Tue, 2013-04-30 at 10:39 -0400, Paul Smith wrote:
> On Tue, 2013-04-30 at 16:04 +0200, Stefano Lattarini wrote:
> > On 04/30/2013 03:37 PM, Paul Smith wrote:
> > > Just to be clear, you're saying that the testsuite runs as one long
> > > operation, updating one target, and the recipe invokes one test script,
> > > right?
> > >
> > No; the testsuite runs as a recursive make invocation (yes, this is
> > sadly truly needed in order to support all the features offered by the
> > Automake parallel testsuite harness --- sorry), but each test script
> > (and there can be hundreds of them, as is the case for GNU coreutils
> > or GNU automake itself) is run as a separate target, explicit for
> > tests which have no extension and pattern-based for tests that have an
> > extension.
> 
> This should work very well with -Otarget then, except for the
> colorization/highlighting issue... once it works as expected.  I'll look
> into this issue later and I would be interested to see your experience
> with it once it's resolved.

OK, I found this bug.  Definitely make recursion was not being handled
properly with -Otarget and -Ojob in some situations; this broke as a
side effect of my cleanup to reuse the same temporary file for the
entire target, regardless of the output mode.

This should be fixed now.  Those who use recursive makefiles and were
seeing annoying delays in output with -O, please try again with the
latest commit and see if it works any better for you now.




Re: Some serious issues with the new -O option

2013-05-01 Thread Paul Smith
On Wed, 2013-05-01 at 18:26 +0300, Eli Zaretskii wrote:
> You forgot to make the same change in the WINDOWS32 branch.  I did
> that in commit a87ff20.

Sorry, I missed that.

> > This should be fixed now.  Those who use recursive makefiles and were
> > seeing annoying delays in output with -O, please try again with the
> > latest commit and see if it works any better for you now.
> 
> Unfortunately, the delays are still here.

Very odd.  This is the test program I used; can you verify:

  recurse: ; $(MAKE) -f $(firstword $(MAKEFILE_LIST)) all
  all: 2 4 6
  .RECIPEPREFIX := |
  2 4 6 :
  |@ echo start $@
  |@ sleep $@
  |@ echo end $@

Now running:

  $ make -Omake --no-print-directory -j -f ...

should wait for 6 seconds, then print everything at the end.  Running
this on the other hand:

  $ make -O --no-print-directory -j -f ...

should show "start 2"/"end 2" after 2 seconds, "start 4"/"end 4" after 2
more seconds (4 total), etc.

And, just for completeness, running this:

  $ make -Ojob --no-print-directory -j -f ...

should show all the "start 2", "start 4", "start 6" right away (possibly
out of order), then after 2 seconds "end 2", then 2 seconds later "end
4", etc.

Is that what you see?  If so then the feature is working as expected.
I'd be interested to know more details of the makefiles where you're
still seeing stuttering behavior.  Are the targets in this environment
printing a lot of output per recipe?

> Moreover, it looks like
> this change introduced some kind of regression.  With the following
> Makefile:

I don't think this is a regression.  It's unfortunate but I don't see
any alternative.

BTW, you might find it simpler to use --no-print-directory as above to
get rid of the enter/leave stuff until I work out how to do it better.

> Notice in particular how start rec1..stop rec1 occludes its
> sub-targets, and the same for rec2.

"Occludes"?  I don't sense anything occluded here.  Maybe a terminology
error... did you mean something like "brackets"?

Yes, that's the behavior we'd expect to see if make was not treating the
sub-make properly: it saves up the sub-make output and prints it once
the sub-make is completed.  The goal of this fix is to change that...

> After the change I see this:

>   gnumake[1]: Leaving directory 'D:/gnu/make-3.82.90_GIT_2013-05-01'
>   gnumake[1]: Leaving directory 'D:/gnu/make-3.82.90_GIT_2013-05-01'
>   start rec1
>   stop rec1
>   start rec2
>   stop rec2
> 
> And now rec1 and rec2 are announced only at the end.

This isn't a bug; it's expected behavior.  It's slightly unfortunate,
I agree.  Consider your rule:

> rec1 rec2:
> @echo start $@
> $(MAKE) -f mkfsync simple
> @echo stop $@

Here we have a 3-line recipe: the first and third are not considered by
make to be recursive, while the second one is (due to the presence of
$(MAKE)).

When we run with -O (-Otarget), make will save up the output from the
entire recipe and print it at the end, EXCEPT that output from a
recursive line is not saved: that's the entire point of this feature, to
allow sub-makes to show their output as they go.

So, the results of lines one and three (the echo's) are saved until the
"rec1" target is completely built, and printed at the end, while the
results of the make invocation in between are not saved, which is just
what you see.

If you want different behavior you can change your rule to use "+" on
the two echo lines, so that they're also considered recursive and not
saved up.  Now this recipe is basically run in -Ojob mode, BTW.  "+" has
other side-effects though (running even in -n mode) which might be
unpleasant.

The only alternative to this would be for make to try to figure out if
ANY of the lines in the recipe will be recursive, and if so treat ALL
the lines as if -Ojob was enabled (show output as soon as the line is
complete), while not treating them as recursive.  However since today we
don't parse lines completely until we're about to execute them, this
would be a not-insignificant change I think.




Re: Some serious issues with the new -O option

2013-05-01 Thread Paul Smith
On Wed, 2013-05-01 at 22:08 +0300, Eli Zaretskii wrote:
> Yes.  But I thought the change was about -Otarget, not -Ojob.  Stefano
> was complaining about a plain -O, so -Ojob is not what was his
> problem.

Yes, it is about -Otarget.  As I said, I added -Ojob output "just for
completeness".  The important distinctions (for this thread) are between
-Otarget and -Omake.  If you see the behavior I describe for those two
flags then things are working as expected.

If you still see choppiness in the output with -Otarget and you see the
behavior I describe, I'd be interested to know more about the targets
you're building.  I'd also be interested to know if using -Otarget vs.
-Ojob makes any difference in the choppiness.

If your recipe normally runs for 5 seconds (say) and it continually
generates output during that time, then yes, certainly the -O feature
will result in choppiness because instead of a sequence of continuous
output over 5 seconds you get 5 seconds of silence, followed by all the
output.  That is the nature of the beast... if this bothers you (more
than having interleaved parallel output) best to not use -O.

I suggest that for MOST makefile targets, this is not the normal
behavior.  Most makefile targets (building a .o for example) are built
relatively quickly and, more importantly, they don't expect to see much
output except on failure.

I would also point out that the more output your targets generate the
more likely you are to get output corruption when running with high -j,
and the more you'd be likely to benefit from -O.

> That is completely unexpected for me: I thought that -Otarget meant
> that all the output from making a target, e.g. rec1, will be output as
> a single unit.  If that's not the intent, then why is it called
> "target"?

That IS the intent.

However, in the presence of makefile recursion this model fails.
Consider if you have a makefile, like every single automake makefile for
example!, where the "top-level" target is nothing more than a recursive
invocation of a sub-make with some extra arguments, and the sub-make
actually does all the work:

   all: config.h
   $(MAKE) $(AM_MAKEFLAGS) all-recursive

Now, if you do nothing special for recursive make, you'll get no output
from the entire build until it is completely done, because all the
output from the recursive make command is going to the temporary file
for that target, then it all gets dumped at the same time.

I think this makes the output sync feature much less useful: it's only
appropriate for building in the background where no one looks at the
output until it's done.  If you want that behavior, though, that's
exactly what the -Omake option selects so it's available.

For -Otarget we introduce an exception to deal with this problem: if we
detect we're about to invoke a recursive make, we do NOT redirect the
output to the temp file.  We let each target of that make print its
output immediately.  This way you don't have to wait for an entire
recursive make to finish before you see any of its output.

> Why is it important to make that exception?  And shouldn't we have an
> option _not_ to make such an exception, but without -Omake,
> i.e. without waiting for the whole session to end?  Whenever any
> top-level recipe finishes, it is flushed to the screen as a single
> unit.  Does this make sense?

I don't understand the change that you're suggesting.  That's exactly
what -Omake does today: whenever any recipe finishes it is flushed to
the screen as a single unit, and no special handling is given to
recursive makes.

If we can improve on this I'm very interested to hear the details.




Re: feature request: parallel builds feature

2013-05-02 Thread Paul Smith
On Wed, 2013-05-01 at 20:38 -0700, Jim Michaels wrote:

>  again, problem solved with what I proposed. think. separate shell
> window for each job.

You can do that today by just writing your recipes such that they start
a screen session or xterm or whatever.  Those tools allocate and manage
their own PTY's and so each has its own "stdin".

You haven't provided any use case description, at a level _above_ the
implementation.  Sure, the manual documents a restriction but not every
restriction needs to be lifted.  We only do that work if there's a real
need for it that can't be met more easily a different way.

So we need to understand at a higher level what problem you're trying to
solve.  Then maybe there's a good way to do it with existing make
capabilities, maybe the best way is using capabilities available outside
of make in conjunction with make, and maybe the best way is to enhance
make.  We'll have to see.





Re: Some serious issues with the new -O option

2013-05-02 Thread Paul Smith
Eli Zaretskii  writes:
> > If you want different behavior you can change your rule to use "+" on
> > the two echo lines, so that they're also considered recursive and not
> > saved up.
> 
> If I do that, the echo from rec1 and rec2 mix up:
> 
>   D:\gnu\make-3.82.90_GIT_2013-05-01>gnumake -f mkfsync3 -j -O
>   start rec2start 
>rec1 <<
>   gnumake -f mkfsync simple
>   gnumake -f mkfsync simple
> 
> Is this also expected?

You're right, I'm wrong.  Using "+" is not really like -Ojob, because
with -Ojob we still sync the output of each line.  Using "+" the job
just sends the output directly to the terminal with no sync.  Thus this
mixing up is what I'd expect (same as not using -O at all).

I begin to wonder if this really requires a new per-line special prefix
character like "@", "+", "-", that controls the syncing of that
particular line.  I'm very reluctant to go there as it is a BIG change
and a backward-compat issue.  Also it seems that unlike the existing
prefix characters, for this one we'd have as much need for a way to turn
OFF sync as to turn it ON... bleah.


On Thu, 2013-05-02 at 05:53 +0300, Eli Zaretskii wrote:
> > Now, if you do nothing special for recursive make, you'll get no output
> > from the entire build until it is completely done, because all the
> > output from the recursive make command is going to the temporary file
> > for that target, then it all gets dumped at the same time.
> 
> Not every Makefile looks like that on its top level.  I agree that we
> should cater to the above, but perhaps we could do that without
> "punishing" the other use cases.  For example, perhaps we should have
> a -Osub-make option that will _not_ activate sync-output on the top
> level, only in the sub-make's.  This should produce the desired
> results, I think.

Can you clarify what the "desired results" are?  I seem to have lost the
thread.  What is the behavior you see now that you are trying to avoid
(or you don't see that you're trying to achieve)?

Capture of the sub-make will mean that the entire output of that
sub-make, and all of its recipes including ITS sub-sub-makes, will be
sent to a temporary file and not displayed on the screen until the
entire sub-make is completed.  In what situation would we want to choose
this, regardless of level of sub-make?

In general I see no benefit in trying to special-case any particular
level of make.  For some builds the top level does all the work.  For
some the second level.  For some the third.  For many, different levels
for different parts of the same build.

> > I don't understand the change that you're suggesting.  That's exactly
> > what -Omake does today: whenever any recipe finishes it is flushed to
> > the screen as a single unit, and no special handling is given to
> > recursive makes.
> 
> In my case, I see all the output at once.  Maybe I misunderstand what
> -Omake is supposed to do, too.

I think you and I said the same thing: the output from recursive makes
is saved up and flushed all at once...?

Tim Murphy  writes:
> One optimisation I have thought of in the past for this situation
> would be to allow a single "job"  to hold onto the lock when it
> obtained it.
> 
> This way it could output directly to the console while all other jobs
> would have to buffer. When it released, the next job lucky enough to
> grab the lock might have a full buffer already.
> 
> It might appear to be less choppy.  Not sure how it would perform.

It might be less choppy (or it might not: it depends on whether your
targets are all more-or-less equally chatty), but we discussed this
possibility and decided that it would be too costly in terms of
performance.

All you need is for one long-running job to get the terminal and pretty
soon it's the only job running as all others have finished but can't
continue with the next one until they can grab the terminal and dump
their output.  Personally I think this is a serious enough problem that
it's probably not even worth doing the work in GNU make to provide this
as an option.  I suspect people would probably much rather just not use
-O and live with interleaved output, or use -O and live with some choppy
output, than suffer essentially random increases in build times.  I
could be wrong though.




Re: Another issue with -O?

2013-05-02 Thread Paul Smith
On Thu, 2013-05-02 at 20:30 +0300, Eli Zaretskii wrote:
> With this simple Makefile:
> 
> all:
> @echo foobar!
> true

Yes this is a bug.  I thought of this while we were having our
discussion yesterday.  Unfortunately in all our tests we were using "@"
to silence make's output of the command line, so we didn't notice that
the command line make prints before it runs the command is not printed
using the child's output context, it's just printed directly to stdout.
This causes the mis-ordering you see.  I have a solution
partly implemented; I think it will actually make things a
little cleaner.




Re: Some serious issues with the new -O option

2013-05-02 Thread Paul Smith
On Thu, 2013-05-02 at 20:24 +0300, Eli Zaretskii wrote:
> The "desired results" in my original use case are that the output of
> remaking each target is shown as one chunk in the order in which it is
> expected, i.e. from the first command to the last.  "Remaking a
> target", a.k.a. "recipe" in this context are all the commands that
> rebuild a single target.  E.g., in this snippet:
> 
> foo: bar
>  cmd1
>  cmd2
>  $(MAKE) -C foo something-else
>  cmd3
> 
> all the 4 commands and whatever they emit should be output in one go,
> and in the order they are in the Makefile.

This is the way -Omake works.  If you want that, you should set -Omake.

> The "desired results" in the example you showed, i.e.
> 
> >all: config.h
> >$(MAKE) $(AM_MAKEFLAGS) all-recursive
> 
> are that -O is in effect only for the sub-make which runs
> all-recursive.

I'm not sure what you mean by "ONLY for the sub-make" but if I
understand correctly this is the way -Otarget works.

So, you can have either method.  Maybe what you're suggesting is that
you want BOTH methods, but for different targets, in the same build?
Today we don't support that.  This is what I was talking about when I
referred to a new prefix character on a per line basis to turn on/off
ordered mode.  I think trying to control this through the command line,
by choosing different MAKELEVEL values to behave differently, is not a
good solution.

However I don't want to go there yet.  It's a big, disruptive change and
I'm not convinced that (a) we can't do something to obviate it and (b)
even if we can't that the use-case is important enough to justify it.
The build still behaves just as it did before, it's just that in your
original case the output is shown in a strange order.

> > In general I see no benefit in trying to special-case any particular
> > level of make.  For some builds the top level does all the work.  For
> > some the second level.  For some the third.  For many, different levels
> > for different parts of the same build.
> 
> The user always knows what she is going to run, or at least ought to
> know.  I think we already established that blindly appending -O to the
> Make command might cause surprising and even disappointing or annoying
> results, if one does it for a use case that does not play well with
> this feature.  So some degree of adaptation between -O and its
> sub-modes to the depth and "breadth" (in parallelism sense) of the
> build is necessary anyway.

I don't think I agree with much of the above.  But it's a matter of
opinion so we'll just have to wait and see.

> > > > I don't understand the change that you're suggesting.  That's exactly
> > > > what -Omake does today: whenever any recipe finishes it is flushed to
> > > > the screen as a single unit, and no special handling is given to
> > > > recursive makes.
> > > 
> > > In my case, I see all the output at once.  Maybe I misunderstand what
> > > -Omake is supposed to do, too.
> > 
> > I think you and I said the same thing: the output from recursive makes
> > is saved up and flushed all at once...?
> 
> No, that's not what I said.  I said whenever a _recipe_ finishes, not
> whenever the entire Make run finishes.

-Omake only has any relevance when doing recursive makes.  If you run
"make -Otarget" and "make -Omake" in a non-recursive make environment,
you will get identical behavior.

The one and only difference between them is that when running a
recursive make, -Otarget WILL NOT capture the output of the sub-make,
leaving whatever it prints going to the same stdout as the parent make,
and -Omake WILL capture the output of the sub-make in the temporary
file, to be dumped after the recipe is complete.

There's no -O mode which will save up the entire output of a
non-recursive make and dump it all at the end.
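That "one and only difference" can be condensed into a small decision function.  This is a sketch only — the enum and function names (`sync_mode`, `capture_output`) are invented here and are not the real option-handling code:

```c
#include <stdbool.h>

/* Hypothetical model of the -O modes as discussed in this thread. */
enum sync_mode { SYNC_NONE, SYNC_JOB, SYNC_TARGET, SYNC_MAKE };

/* Decide whether a recipe line's output is captured to a temp file. */
static bool capture_output(enum sync_mode mode, bool line_is_recursive)
{
    if (mode == SYNC_NONE)
        return false;               /* no -O: everything goes direct */
    if (line_is_recursive)
        return mode == SYNC_MAKE;   /* only -Omake captures sub-makes */
    return true;                    /* ordinary lines are always captured */
}
```

For non-recursive lines every -O mode captures, which is why -Otarget and -Omake behave identically in a non-recursive build.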




Re: Some serious issues with the new -O option

2013-05-03 Thread Paul Smith
On Fri, 2013-05-03 at 09:50 +0300, Eli Zaretskii wrote:
> > From: Paul Smith 
> > Cc: stefano.lattar...@gmail.com, bug-make@gnu.org
> > Date: Thu, 02 May 2013 16:21:36 -0400
> > 
> > The one and only difference between them is that when running a
> > recursive make, -Otarget WILL NOT capture the output of the sub-make,
> > leaving whatever it prints going to the same stdout as the parent make,
> > and -Omake WILL capture the output of the sub-make in the temporary
> > file, to be dumped after the recipe is complete.
> 
> Thanks for explaining that.  I will have to try a few things to make
> sure I really get it this time, but one thing I can already say is
> that 'target' and 'make' are not very good names for these modes,
> since their semantics is quite different from the literal meaning of
> these two words.  That difference creates a semantic dissonance that
> we should try to avoid, I think.

It's good that we're having this discussion: I want to use it to try to
inform my editing of the GNU make manual to be sure it's as clear as
possible.

I don't think the names are so inaccurate.  I don't want to name things
based solely on the details of how they differ from one another, which
is all I was describing above.  I prefer to name them based on how their
most salient behavior manifests to the user.

The way the user experiences the -Ojob option's results is that the
output of every line of each recipe is dumped as soon as that line is
complete.

The way the user experiences the -Otarget option's results is that the
output of all the lines of a given target are dumped at the same time
once the target is completely built.

The way the user experiences the -Omake option's results is that the
output of an entire recursive make invocation is dumped, together, once
the recursive make has completed.

The issue of how -Otarget handles recursive make is, IMO, a detail
necessitated by the architecture of recursive make invocations.  I don't
know that it's feasible to reflect that detail in the name.

On the other hand I'm certainly not married to the current terms and I'm
quite happy to change them if better ones can be found.  It has already
been suggested that -Oline would be better than -Ojob, and -Orecipe
would be better than -Otarget, and -Omakefile would be better than
-Omake.  The current names are based more around _actions_ while the new
suggestions are based more around semantic elements of make.

To me -Omake is the most problematic.  -Omakefile is not much better; in
fact it might be worse (after all you can and often do invoke a
recursive make on the same makefile).  It would be nice to be more clear
about the fact it applies only to recursive make invocations.  Something
like -Osubmake might be more accurate, except that I don't think we use
the term "sub-make" in the documentation: we use "recursive make".  Is
-Orecursive better?




Re: Another issue with -O?

2013-05-03 Thread Paul Smith
On Fri, 2013-05-03 at 14:02 +0200, Reinier Post wrote:
> Reading this discussion, as a bystander I can't help wondering whether
> the addition of -O is worthwhile.  Unix tools are supposed to be
> small and dedicated. Using a separate utility seems to be a clean
> solution here, and that is fact how it was originally done:
> 
>   http://lists.gnu.org/archive/html/bug-make/2011-04/msg00018.html
> 
> Using a separate utility is less performant and more cumbersome,
> but it is more modular.  The semantics are clear.  The utility can
> be documented and developed separately from GNU Make.  As a GNU Make
> user I worry that with the -O option I won't be sure how it works.

Adding -O in no way precludes you from using a separate utility if you
prefer.  And anyway, even with a separate program you'll still have all
the same problems dealing with recursive builds (David, the original
author, doesn't use recursive make IIRC so he doesn't notice these
things :-)).

And I see no possible way of supporting today's -Otarget option using
the external program method with no modifications to make.  I believe
David uses .ONESHELL, in which case there's no difference between
multi-line and single-line recipes.

I think having this facility built into make is a win, especially as
parallel builds become predominant.  I would be even more happy about it
if we can get it to the point where it can be enabled by default, and
users don't even have to worry about it.




Re: Some serious issues with the new -O option

2013-05-03 Thread Paul Smith
On Fri, 2013-05-03 at 15:22 +0300, Eli Zaretskii wrote:
> > The issue of how -Otarget handles recursive make is, IMO, a detail
> > necessitated by the architecture of recursive make invocations.  I don't
> > know that it's feasible to reflect that detail in the name.
> 
> It is a detail that IMO significantly qualifies the "target" part.  In
> particular, targets that include little or nothing except a recursive
> invocations will be entirely exempt from this "target" scope.

Personally I think it would be MORE confusing if you requested -Otarget
and the output for many or even all the targets in your build was not
printed until the build finished.  You're concentrating on the one
recursive make target and saying "this doesn't follow the rule", while
I'm concentrating on all targets in the sub-make and saying "let's make
sure all of these follow the rule" (that their output is shown as soon
as that target is complete).  Recursive make targets are merely
artifacts of the build.  Users don't care about them; they're just used
by makefile authors to organize things.  If the makefile author rewrote
the makefiles to be non-recursive, users wouldn't notice at all.

Anyway that's how I look at it.

Anyway.  I'm happy to entertain naming suggestions that try to capture
this exceptional treatment of recursive make but I have no ideas myself.




Re: Loading dynamic objects on Windows

2013-05-03 Thread Paul Smith

On Fri, 2013-05-03 at 16:13 +0300, Eli Zaretskii wrote:
> > The two problems I discovered and fixed are:
> 
> No comments, so I went ahead and pushed these changes (see commit
> a66469e).

Thanks Eli.  I hadn't gotten around to examining those comments in
detail.  In general I was OK with what I read with one exception: I'm
not so excited about adding a new pointer to the file structure, which
is an extra 8 bytes (on 64bit hardware) for every file struct for the
rare situation that the file represents a loaded shared object.  Make
already uses far too much memory, IMO.

I would prefer to do something like use a boolean bit in the file object
to specify that it refers to a shared object, and keep the pointers in a
different structure.  Then if the bit is set we can take the hit of
looking up the pointer; this will be very rare anyway.

I can look into this more this weekend.





possible solution for -Otarget recurse (was: Re: Some serious issues with the new -O option)

2013-05-03 Thread Paul Smith
I have a solution for this problem that I think will work well, and will
be simple to implement.

Suppose we do this: if we're about to invoke a line marked recursive and
we're in -Otarget mode, then before we run it we'll show the current
contents of the temp file (using the normal synchronized output
function).

This will ensure that output from lines before the recursive make will
be shown before the targets in the recursive make.  It's not 100%
identical but I can't see any way to do better.
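A minimal sketch of that idea, assuming POSIX stdio and `ftruncate` — the function name `flush_before_recursion` is invented here, and the real code would of course do this under make's output lock via the normal synchronized output function:

```c
#include <stdio.h>
#include <unistd.h>

/* Before running a recursive line under -Otarget: show whatever the
   target has already written to its temp file, then empty the file so
   that nothing is printed twice when the target finally completes. */
static void flush_before_recursion(FILE *capture, FILE *dest)
{
    char buf[256];

    fflush(capture);
    rewind(capture);
    while (fgets(buf, sizeof buf, capture))
        fputs(buf, dest);           /* emit the accumulated output now */

    rewind(capture);
    ftruncate(fileno(capture), 0);  /* discard it; later lines start fresh */
}
```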

Thoughts?

On Fri, 2013-05-03 at 16:39 +0300, Eli Zaretskii wrote:
> then how about if this exemption will only be applied if the recipe
> has a single command?

In this case, if a recipe consisted of only one line then every target
in the submake will be output immediately when it's finished, but as
soon as you add another line to the recipe, like '@echo "Build is
done!"', all of a sudden you get no output from the entire sub-make
until the end.  That would be too confusing.

> If the single-command-in-recursive-invocation is _not_ the use case
> which -Otarget is optimized for, then what use case is?

-Otarget is not really about recursive invocations at all.  It's there
for the non-recursive targets.  It would be nice if it worked well for
the recursive targets, too, obviously.

Consider every target in the entirety of build, including all submakes
and all their targets as well, as one long list of targets (for example
the results of a serial build).  The fraction of those targets that are
invoking sub-makes will be, at most, very small.

-Otarget wants to collect the output of each individual target, then
show the output for each target as soon as that target finishes.  That's
what users (should) expect when using this option.

In the case of recursive make targets, this presents an unsolvable
contradiction: how can you both show the output of EVERY target
(including the targets run by the submake) as soon as it completes, and
still not show the output of any target (including the targets that
invoke a submake) before it's complete?  You can't do both!

The -Omake option chooses the latter as more important: it will delay
the output until the submake is complete.

The -Otarget option chooses the former as more important: it wants to
behave properly for the large majority of targets which are not invoking
a recursive make.




Re: possible solution for -Otarget recurse (was: Re: Some serious issues with the new -O option)

2013-05-03 Thread Paul Smith
On Fri, 2013-05-03 at 21:15 +0300, Eli Zaretskii wrote:
> > This will ensure that output from lines before the recursive make will
> > be shown before the targets in the recursive make.  It's not 100%
> > identical but I can't see any way to do better.
> 
> Why isn't it identical?

It's not identical in two ways: first it's not identical to -Otarget
because the output before and after the recursion are not shown in a
continuous block.  In:

  all:
  @echo start
  $(MAKE) -C subdir
  @echo end

the "start" and "end" will have other stuff (not just the other targets
in that sub-make, but ANY other targets that happen to finish during
that time) between them.  Obviously that's kind of the point;
nevertheless it's a difference.

Second, there's a difference from some ideal solution if the recursive
line generates output; suppose instead of the above you wrote it like
this:

  all: ; @echo one ; echo two ; $(MAKE) -C subdir ; echo end

Since the output of the entire recursive line is not synchronized but
just left to go to stdout, the output generated by the "echo" commands
can (a) interrupt other synchronized output, (b) have other output
interrupt it.

But, it will still be shown in order! :-)

> > Consider every target in the entirety of build, including all submakes
> > and all their targets as well, as one long list of targets (for example
> > the results of a serial build).  The fraction of those targets that are
> > invoking sub-makes will be, at most, very small.
> 
> That depends.  I think you will find a different fraction in Emacs,
> especially when it bootstraps itself (which involves byte-compiling
> all the Lisp files in a 3-level deep recursion, the last of which
> passes to Make a very long list of files which all can be compiled in
> parallel; see lisp/Makefile.in in Emacs).

Sure, but that's just one invocation of recursive make, and then that
one recursive make builds all those targets.  So that counts as one
recursive make target and lots of individual byte-compile targets.

Of course it's possible to write your makefile such that you run a
separate recursive make command for each target, but that seems pretty
wasteful of resources.

> > -Otarget wants to collect the output of each individual target, then
> > show the output for each target as soon as that target finishes.  That's
> > what users (should) expect when using this option.
> > 
> > In the case of recursive make targets, this presents an unsolveable
> > contradiction: how can you both show the output of EVERY target
> > (including the targets run by the submake) as soon as it completes, and
> > still not show the output of any target (including the targets that
> > invoke a submake) before it's complete?  You can't do both!
> 
> But I don't think there's a requirement to avoid showing incomplete
> output.  The only requirement is not to mix output from two or more
> parallel jobs, that's all.

That's the lowest-level requirement.  But if that were the ONLY
requirement we'd simply implement -Ojob and we'd be done.  That's all we
need to ensure that two or more parallel job outputs are not mixed:
-Ojob says that the output of each line of a recipe will be printed
without any interruption from any other lines running in parallel.

The -Otarget option makes a stronger statement: it says that the output
from all of the lines of a recipe will be printed together without any
interruption from any other recipe.  In the case of targets that
generate output AND invoke recursive make, we can't achieve this
stronger promise, at least not completely.




Re: Another issue with -O?

2013-05-03 Thread Paul Smith
On Fri, 2013-05-03 at 16:16 +0300, Eli Zaretskii wrote:
> > From: Paul Smith 
> > Date: Fri, 03 May 2013 08:57:57 -0400
> > Cc: bug-make@gnu.org
> > 
> > I think having this facility built into make is a win, especially as
> > parallel builds become predominant.  I would be even more happy about it
> > if we can get it to the point where it can be enabled by default, and
> > users don't even have to worry about it.
> 
> I think enabling it by default will be a no-brainer if/when we come up
> with a way to get it to produce the same output as without -j.  IOW,
> run a parallel build, but output its results as if it were a
> non-parallel build.

If you mean literally exactly the same as a non-parallel build, that is
enormously difficult.

Doing that would require that instead of dumping output in the order
that jobs finish, make would have to remember the order in which they
were started and dump the output in that order.  This ordering would
have to be maintained across recursive make invocations.  This also
means that either we have to divorce the handling of the temporary files
from the job (so we can maintain these temporary files after the job
completes, until we have permission to dump them), or else we will lose
parallel capability as jobs finish, but new ones can't be started until
the old ones can dump their output.

I'm not going to commit to implementing that before enabling some kind
of sync output by default (actually I don't think it's solvable at
all).  And I don't see any justification for making such a requirement,
since parallel builds today don't make that requirement.

My position is if we can get output sync to a level where it is no worse
than today's parallel build behavior (excepting the issue of output to a
TTY vs. file which can never be resolved), we should enable it by
default.




Re: possible solution for -Otarget recurse (was: Re: Some serious issues with the new -O option)

2013-05-03 Thread Paul Smith
On Fri, 2013-05-03 at 23:12 +0300, Eli Zaretskii wrote:
> > the "start" and "end" will have other stuff (not just the other targets
> > in that sub-make, but ANY other targets that happen to finish during
> > that time) between them.
> 
> This last part (about ANY other targets) is not what I thought you had
> in mind.

No.  With -Otarget and -Ojob it's never the case that an entire sub-make
will "own" the output lock such that no other jobs running in parallel
from other sub-makes can display output.

This is something someone else mentioned the other day.

Doing this would seriously compromise the parallelization.  Given that
today people are more-or-less satisfied to have garbled output rather
than slow down their parallel builds, I find it impossible to believe
they'd rather have ordered output if it reduced parallelization.

When running in parallel it's always been the case, and is still the
case with -O, that you must consider all the targets that could possibly
be started by any make (at any level of recursion) as a big grab-bag of
targets that could be run at any time (subject to prerequisite
relationships).  Recursion is not a "sequence point" in your build, when
parallelization is enabled.

-O in no way changes that behavior, all it does is ensure that output
from any individual line or target of the recipe will not interfere with
any other individual line or target.




Re: Another issue with -O?

2013-05-04 Thread Paul Smith
On Sat, 2013-05-04 at 09:57 +0300, Eli Zaretskii wrote:
> > From: Paul Smith 
> > Cc: reinp...@win.tue.nl, bug-make@gnu.org
> > Date: Fri, 03 May 2013 16:51:47 -0400
> > 
> > > I think enabling [-O] by default will be a no-brainer if/when we come up
> > > with a way to get it to produce the same output as without -j.  IOW,
> > > run a parallel build, but output its results as if it were a
> > > non-parallel build.
> > 
> > If you mean literally exactly the same as a non-parallel build, that is
> > enormously difficult.
> 
> Yes, literally, but with one exception: the order of producing on the
> screen output from targets that are remade in parallel (i.e. are
> independent of each other), and are on the same level of recursion,
> does not have to be preserved.

Well that's not "literally exactly the same" then :-).

But I won't agree to the caveats here, in particular the phrase "on the
same level of recursion".  Today with parallel builds and recursion, we
don't guarantee anything about the order of execution between different
sub-makes and as I mentioned, I see no justification for adding that new
requirement to synchronized output.

> Example:
> 
> all: a b
> a: xa
>   @echo $@
> xa xb xc:
>   @echo $@
> b: xb xc xd
>   @echo $@
> xd:
>   $(MAKE) foo
>   @echo $@

> By contrast, I _would_ mind to see this, for example:
> 
>   xa
>   xb
>   a
>   xc
>   xd
>   $(MAKE)
>   b

I agree that is less than ideal and is one of the two issues I mention
below.  This will be fixed.  However, you may see this:

  xa
  xb
  a
  $(MAKE) foo
  xc
  xd
  b

There's nothing that can be done about that (and that's true of today's
parallel build as well).

> > My position is if we can get output sync to a level where it is no worse
> > than today's parallel build behavior
> 
> If we want it to be "no worse", then why do we need it at all, let
> alone have it turned on by default?  I thought -O should actually
> improve something, so "no worse" is too weak to describe that, IMO.

Obviously we gain synchronized output.  I mean, "no worse" in terms of
other behavior.

> And what are our criteria for deciding whether it's no worse?  The
> current default behavior might be confusing and hard to interpret in
> some cases, but at least it's familiar.  -O changes that to a
> different behavior which is confusing (at least to me) in different
> situations, and is unfamiliar.

I believe we can get to the point where anyone who can read and
understand parallel output can even more easily read and understand the
output from -O.  Conceptually all we're doing is ensuring the output
comes out at the same time, uninterrupted, instead of interleaved.  It
should not be hard to understand that.

There are two known issues right now that are causing confusing output.
Hopefully once I fix those the output will make more sense than normal
parallel output.




Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-05-04 Thread Paul Smith
On Sat, 2013-05-04 at 08:57 +0200, Frank Heckenbach wrote:
> I shouldn't have written that. :-( Shortly afterwards, I found a bug
> or perhaps two:
> 
> foo:
> @echo foo
> +@echo bar
> 
> (a)
> % make -Ojob
> foo
> bar
> foo
> 
> (b)
> % make -Otarget
> bar
> foo
> 
> As you see, (a) "-Ojob" writes "foo" twice and (b) "-Otarget" writes
> the messages in the wrong order.

The second one is known and I mentioned it the other day (hard to keep
up with all the messages, I know).  I'm working on a fix.

The first one I've seen but hadn't had time to debug.  I'll look at your
patch.  I left the truncate where it was rather than doing it after the
sync_output() because I was hoping to avoid truncating a file that we'll
never use again anyway, but I guess it isn't a big deal.


COMMANDS_RECURSE _does_ mean to recurse.  The reason for the '+'
prerequisite is to tell make that this line, even though it may not look
like it, will run a recursive make.

Since make doesn't parse the command line it can't know for sure which
ones actually recurse.  It uses a heuristic, by looking for $(MAKE) or
${MAKE} in the unexpanded line.  But this is easily defeated if your
sub-make invocation doesn't use that variable for some reason.  Hence,
using "+" to force it.
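That heuristic might look roughly like the following — an illustrative sketch, not the actual make source; `line_looks_recursive` is an invented name:

```c
#include <stdbool.h>
#include <string.h>

/* A recipe line is treated as recursive if it starts with '+' (after
   any other prefix characters) or if its unexpanded text mentions
   $(MAKE) or ${MAKE}.  A sub-make run as, say, "gmake" defeats the
   substring check, which is exactly why '+' exists as an override. */
static bool line_looks_recursive(const char *line)
{
    while (*line == ' ' || *line == '\t' ||
           *line == '@' || *line == '-')
        ++line;                     /* skip blanks and other prefixes */

    if (*line == '+')
        return true;                /* explicit override */

    return strstr(line, "$(MAKE)") != NULL ||
           strstr(line, "${MAKE}") != NULL;
}
```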




Re: possible solution for -Otarget recurse (was: Re: Some serious issues with the new -O option)

2013-05-04 Thread Paul Smith
On Fri, 2013-05-03 at 12:55 -0400, Paul Smith wrote:
> Suppose we do this: if we're about to invoke a line marked recursive
> and we're in -Otarget mode, then before we run it we'll show the
> current contents of the temp file (using the normal synchronized
> output function).

I've implemented this feature and it seems to work as expected.  I also
implemented the fix to the duplicate output being shown in some cases.

I have two open issues I want to address before calling this feature
done: first, fix make's writing of the command it's going to run (for
rules that don't start with "@") as that's not working right.  Second,
fix the enter/leave issue.  It turns out that these are currently
somewhat bound together so I may have to solve the second one first.

Oh, and a renaming as well :-)

Eli, I did some cleanup in job.c to try to make things simpler and
reduce duplication.  I tried to be careful but it's quite possible I did
something to disrupt the Windows version again.  It's up to you if you
want to fix any problems now or wait until I solve the above two issues
and look at it all at the same time.  There will be more disruption I
think.

One other thing: I changed the pump function to read from a FD but write
to a FILE*, because all our other uses of stdout/stderr use FILE* and
it's not wise to mix them.  It works fine.  While I was in there I
noticed the handling of the text/binary mode.  I wonder if this is not
quite right.  It seems to me that since we're going to be writing to
stdout/stderr anyway, if we're going to set the mode at all we should be
setting the mode on the temporary file to match the mode on
stdout/stderr, before we write to it, rather than setting the mode on
stdout/stderr to match the temporary file.

What do you think?





Re: [bug #33138] .PARLLELSYNC enhancement with patch

2013-05-04 Thread Paul Smith
On Sun, 2013-05-05 at 04:37 +0200, Frank Heckenbach wrote:
> > COMMANDS_RECURSE _does_ mean to recurse.  The reason for the '+'
> > prerequisite is to tell make that this line, even though it may not look
> > like it, will run a recursive make.
> 
> OK, let me just say that the meaning of "recursive" may not be
> perfectly clear. Though the manual says: "The @samp{+} prefix marks
> those recipe lines as ``recursive'' so that they will be executed
> despite use of the @samp{-t} flag.", the example immediately
> preceding this sentence has:
> 
> +touch archive.a
> +ranlib -t archive.a
> 
> which are clearly not recursive make invocations.
> 
> I gather that make uses recursive in a wider sense as "anything to
> be run regardless (because it probably arranges by itself not to do
> anything serious in a dry run or so)", while the current
> implementation of output-sync uses it in the more specific meaning
> of a recursive invocation of GNU make (which will do its own
> syncing).

It's not just this new feature that relies on this meaning.  The
jobserver feature, which also wants to know which commands are running
recursive make, also does.

If people misuse it then they'll get odd behavior.  I don't see that
there's anything we can or should do about that.

You're right, though, that this example in the make manual might not be
the best.




Re: possible solution for -Otarget recurse

2013-05-04 Thread Paul Smith
On Sun, 2013-05-05 at 00:44 +0200, Stefano Lattarini wrote:
> The test 'features/output-sync' now fails for me:
> 
>   Test timed out after 6 seconds
>   Error running /storage/home/stefano/src/gnu/make/tests/../make \
> (expected 0; got 14): /storage/home/stefano/src/gnu/make/tests/../make \
> -f work/features/output-sync.mk.1 -j --output-sync=target
>   Caught signal 14!
>   FAILED (4/6 passed)
> 
> Can you reproduce the failure?  If not, let me know which information you
> need to debug this, and I'll try to provide them.

It didn't fail for me.  However, it's possible that the 6 second timeout
is just a little too short for reliability.

Look at line 178 in the test/scripts/features/output-sync file.  It will
look like this:

  #MAKE#[1]: Leaving directory '#PWD#/bar'\n", 0, 6);

The "6" at the end is the timeout.  Try changing that to 10 just to see
if it helps.  If not then it's a real problem.  I'll need details about
your system.  Also please send the contents of the work subdirectory
after the failure.




Re: possible solution for -Otarget recurse

2013-05-05 Thread Paul Smith
On Sun, 2013-05-05 at 11:11 +0200, Stefano Lattarini wrote:
> Sorry to add this only now, but I realized the failure is only
> reproducible if I run the testsuite with "make -j", as in "make -j8
> check"; and even in that case, the failure is racy.  With a bare "make
> check", things work for me as well.  On the other hand, increasing the
> parallelism even more, other tests start to fail as well:

The test suite definitely cannot be run in parallel.  However this
should not happen (and does not happen in my environment when I run the
commands above) because the test harness cleans out the environment,
which will remove any MAKEFLAGS or MFLAGS variables that might
tell make to run in parallel when it's not expected.

Can you examine your shell configuration files etc. to see if they're
setting MAKEFLAGS or MFLAGS?  Although if that's true then the tests
should fail all the time.

Can you verify that there don't seem to be any leftover test files in
the tests directory?  Sometimes if something doesn't get cleaned up
correctly that can cause future builds to fail.  However if that were
the case then "make check" without -j would fail as well.

I don't have an explanation for this.





Re: possible solution for -Otarget recurse (was: Re: Some serious issues with the new -O option)

2013-05-05 Thread Paul Smith
On Sun, 2013-05-05 at 19:36 +0300, Eli Zaretskii wrote:
> However, I wonder what was the reason for splitting the definition of
> GMK_EXPORT in two, and putting each part in a different file.

Well, the gnumake.h file is intended to be installed into the user's
/usr/include or similar, shipped in a binary package build of make such
as RPM or DEB, and included by the user's code.  When it's included
there it should NEVER (IIUC) use the in-make variant.  I wanted to
separate that in-make variant out so that users would never see it or
have the possibility of accidentally using it, so I moved it into our
internal headers, which are never installed anywhere outside our source
tree and would not be included in a GNU make binary package.

This is slightly more potential maintenance on our part internally, but
it is much safer for the user, which is a tradeoff I'll almost always
choose.

However, if you really want it back the way it was please do choose a
more unique name than "MAIN".  Something prefixed with "GMK_" at least.

> > One other thing: I changed the pump function to read from a FD but write
> > to a FILE*, because all our other uses of stdout/stderr use FILE* and
> > it's not wise to mix them.  It works fine.  While I was in there I
> > noticed the handling of the text/binary mode.  I wonder if this is not
> > quite right.  It seems to me that since we're going to be writing to
> > stdout/stderr anyway, if we're going to set the mode at all we should be
> > setting the mode on the temporary file to match the mode on
> > stdout/stderr, before we write to it, rather than setting the mode on
> > stdout/stderr to match the temporary file.
> > 
> > What do you think?
> 
> Make never changes the I/O mode of its standard streams.  And it
> shouldn't, since everything Make itself writes or reads is always
> plain text.  Since the standard streams always start in text mode,
> your suggestion boils down to make input from the temporary file be
> always in text mode.

Well, I assumed that something that invoked make could set the mode and
then make could inherit that mode.  I don't know if that's how it works
or not in Windows.  And of course I doubt anyone does that.

I understand your point.  I just wonder if this difference might end up
being visible to the user in some way.  But, we'll leave it as-is.




Output sync completed (?)

2013-05-05 Thread Paul Smith
Hi all.  I've recently pushed changes to solve the last open issues that
I'm aware of with the --output-sync feature:

  * If command line printing is not suppressed ("@" is not used) the
command line is attached to the output.
  * Extraneous enter/leave lines are not printed any longer.
  * I renamed the options to -Oline, -Otarget (not changed) and
-Orecurse.
  * I took another stab at documentation; please check the "Parallel
Output" node and see if it is more understandable now.  Some
picky details of -Otarget I didn't spell out but hopefully it's
more clear.

Please test the heck out of this with the different modes in your most
difficult build environments, and let me know if you see any anomalies.
I'm also interested to know if the enter/leave stuff is really correct
now; it is in my testing but it sort of happened more by accident than
real planning.

We are right on the cusp of a release candidate for the next GNU make
version.  If you have the ability to build and test from Git feel free
to start your testing early.




Re: Output sync completed (?)

2013-05-06 Thread Paul Smith
On Mon, 2013-05-06 at 20:31 +0300, Eli Zaretskii wrote:
> If the next release is close, then what about the following 2 issues,
> which were discussed, but AFAIK not finalized:
> 
>  . the plugin_is_GPL_compatible symbol in the dynamic objects as a
>prerequisite for their loading into Make

I talked to RMS about this and he agreed this was needed.  It's on my
todo list.

>  . a more portable way of loading dynamic objects, e.g. by providing a
>platform-dependent $SOEXT variable that could be used instead of a
>literal .so or .dll extensions

This one is on my "think about" list.

Plus there are other cleanups needed.  And, even after the first
candidate is posted experience tells me it will be several weeks of
testing before the release itself.

I just want people to start paying attention, if they haven't been so
far...




enter/leave messages (was: Re: Output sync completed (?))

2013-05-07 Thread Paul Smith

On Mon, 2013-05-06 at 10:26 +0200, Stefano Lattarini wrote:
> >   * Extraneous enter/leave lines are not printed any longer.
> >
> I still see them actually, if I try to build the latest coreutils

I guess it depends on your definition of "extraneous" :-).

I was considering "extraneous" to mean "multiple identical enter/leave
lines in a row", which we used to get and are clearly useless.  I don't
see those anymore, even with the coreutils build.

If you consider "extraneous" to mean "more than there used to be", then
they're still there: there's one surrounding each synchronized sequence
of output with -O enabled.

After thinking about it, the output-sync changes introduced two separate
features bundled together: not only the sync changes but also changes to
the enter/leave behavior.  One doesn't require the other, really.

I understand the impetus for the change: tools that read make output and
try to track the build can be confused during parallel builds, since
targets are built across different recursive make invocations at the
same time.  However this is no worse than it was before output-sync.

I think you want to be able to select the granularity of enter/leave
separately from whether or not you're using -O.

But that brings us to the next issue: there's no portable way to control
this.  It has to be added to MAKEFLAGS but automake, for example, which
wants to generate a portable makefile, is out of luck: it can't add this
non-portable flag to MAKEFLAGS in a portable makefile.  You pretty much
have to leave it up to the person invoking make to remember to do it.

Given all this I think the best thing would be to not force the new
enter/leave behavior by default, and come up with a different way to
enable/disable it.




Re: "*** INTERNAL: readdir: Invalid argument" error

2013-05-15 Thread Paul Smith
On Wed, 2013-05-15 at 12:29 +0300, Eli Zaretskii wrote:
> Question: Why do we force a fatal error when 'readdir' fails in that
> way?  Would it be better to display a warning, perhaps even under some
> debug command-line option, and otherwise simply treat that as the end
> of the directory stream?

Looking at the history of the file, I made this change in 2006.  Prior
to that, readdir() was not checked for error returns; we simply broke
out of the loop (if (d == 0) break;).

There's no indication in the changelog as to _why_ this change was made.
Probably it seemed like a good thing to do, to show some sort of message
if readdir() failed and, in POSIX anyway, there's really no "nice" error
you can get back from readdir().

I guess the idea is that if make is trying to find a target or
prerequisite file during a build and readdir() stops working, that is a
pretty serious error.

However I have no problem making this something non-fatal.  I think it
would be better to keep some kind of message, since otherwise it may be
very confusing to users as to why make is silently ignoring some files.
Obviously the current message is not appropriate.  Something that
mentions the filename at least, and maybe the directory (on Windows; on
other systems we don't have that information available)?




Re: "*** INTERNAL: readdir: Invalid argument" error

2013-05-15 Thread Paul Smith
On Wed, 2013-05-15 at 17:45 +, Edward Welbourne wrote:
> > Exactly.  Right now, the code already walks along PATH, and constructs
> > /sh.exe, but instead of calling 'stat' or 'access', it calls
> > 'file_exists_p', and the side effect of that is that Make reads the
> > entire directory  into its internal storage, hashes it, etc.
> 
> Am I the only one finding it hard to believe that's anything but
> hugely inefficient ?  Testing for the existence of a file should not
> involve iterating the whole of its directory !  Well, at least, not in
> userspace code - if the O/S's implementation of the system function
> does that, that's the O/S's affair.

This function was not specifically designed for this use-case.  It's
mainly used to look up targets.  Part of what it does is cache the
results of the directory read so that subsequent lookups don't have to
use a system call, they can just check the cached contents of the
directory and determine whether a target exists or not.  In order to
make that cache it obviously has to use readdir() so it takes over some
aspect of the work that the OS would do anyway to look up the file.

Because of the amount of file and directory lookups make does (remember,
for every target that make might want to build, if it uses an implicit
rule there are a LOT of stat()-type calls to see if some variation of
that file exists, its prerequisites, etc.) this caching has been
proven to be a big win in the past.

It's unlikely that the caching will be helpful when looking through your
%PATH%; in fact it may be counter-productive (adding useless directories
to make's internal cache).  Make will likely not be trying to build
targets in directories in %PATH%.  So really it's probably not the best
idea to use the caching readdir functionality to search for sh.exe.




GNU make release candidate 3.99.90 available

2013-05-17 Thread Paul Smith
Hi all.  The first release candidate for the next release of GNU make,
GNU make 4.0, is now available for download:

http://alpha.gnu.org/gnu/make/make-3.99.90.tar.gz   37c2d65196a233a8166d323f5173cdee
http://alpha.gnu.org/gnu/make/make-3.99.90.tar.bz2  40c0a62e1f4e0165d51bc4d7f93a023c

There are many bug fixes and new features.  Please see the NEWS file for
full details.

http://git.savannah.gnu.org/cgit/make.git/tree/NEWS

Note I will be away this weekend and not reading email so please don't
expect a response from me until next week.

Cheers, and happy making!





Re: GNU make release candidate 3.99.90 available

2013-05-20 Thread Paul Smith
On Sat, 2013-05-18 at 14:00 -0400, Boris Kolpackov wrote:
> I've also tested the up-to-date check time compared to 3.81
> and the new version is significantly faster (5.63s vs 8.15s).
> That's very welcome.

There's still a serious regression in the code due to the change in
pattern rule searching added in 3.82.  In some (not that unusual)
circumstance GNU make will chew _enormous_ amounts of memory, compared
to what it used to use in 3.81 and below.

This is because in the current algorithm, every single time we do an
implicit rule search and compute possible target and dependency names
they are all added to the string cache, even if they are deemed to be
useless and not needed because that implicit rule is not chosen.  In
cases where there are lots of futile implicit rule searches the string
cache gets bloated with these useless strings.
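The mechanism behind the bloat can be sketched with a toy intern table (Python for brevity; GNU make's real string cache is C, and the names here are invented):

```python
class StringCache:
    """Toy intern table.  Interning every candidate file name computed
    during implicit-rule search -- even for rules that end up being
    rejected -- is what bloats the cache described above, because
    entries are never evicted."""
    def __init__(self):
        self._table = {}

    def add(self, s):
        # Return the canonical stored copy, inserting it if new.
        return self._table.setdefault(s, s)

    def __len__(self):
        return len(self._table)
```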

> I would like to use the RC in more every day development, however,
> I am currently travelling and don't do much of that. So it would
> be great if you could give a couple of months for everyone to
> have a chance to test the new release thoroughly. It will also
> allow the distributions to pick it up (e.g., Debian experimental).
> This will give you essentially free testing with a wide range of
> packages.

I can't promise to wait months.  People who care should be picking this
up ASAP.  I'm looking into adding it to various automated build farms,
etc.

Also I've not seen Debian experimental pick up release candidates like
this in the past; is that something they do?

Personally I think getting into something like Gentoo may be more
beneficial since their package management tool is running make quite a
bit.




Re: GNU make release candidate 3.99.90 available

2013-05-20 Thread Paul Smith
On Sat, 2013-05-18 at 14:20 +0300, Eli Zaretskii wrote:
> > From: Paul Smith 
> > Date: Fri, 17 May 2013 04:12:15 -0400
> > 
> > Hi all.  The first release candidate for the next release of GNU make,
> > GNU make 4.0, is now available for download:
> 
> Paul, can you please add 4.0 to the list of versions accepted by the
> Savannah bug tracking UI, so that bugs fixed before the release could
> be marked as fixed in that version?

Typically what I do is have all issues resolved before the release
marked with "SCM", then at release time I change all the bugs marked as
fixed in SCM to be marked fixed in 4.0.




Re: GNU make release candidate 3.99.90 available

2013-05-20 Thread Paul Smith
On Fri, 2013-05-17 at 19:42 +0200, Denis Excoffier wrote:
> Compared with make-3.82, the new make-3.99.90 breaks those Makefiles,
> like in tiff-v3.6.1 (rather old i know, before 2003 at least), that
> use the construction:
> 
> make -${MAKEFLAGS}

Hrm.  This is actually specifically discouraged by the documentation.
However reading the POSIX standard shows that make is required to accept
this format, at least for standard arguments.

The problem is that the new flags we're adding are causing some pain; I
may need to tweak the algorithm that generates the MAKEFLAGS values.

I'll take another look at this.
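For illustration (the directory name sub is invented), the discouraged construction next to the portable spelling:

```make
# Discouraged idiom seen in old makefiles such as the one mentioned
# above -- it re-spells the inherited flags by hand, which breaks when
# MAKEFLAGS contains long options:
old-style:
	cd sub && $(MAKE) -$(MAKEFLAGS)

# Portable spelling: MAKEFLAGS is passed to sub-makes automatically
# through the environment, so no flags need to be repeated:
new-style:
	cd sub && $(MAKE)
```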



