Re: [bug #65908] Make fails with 'Makefile:3857: *** missing 'endif'. Stop.

2024-06-22 Thread Henrik Carlqvist
On Sat, 22 Jun 2024 15:49:26 +
Martin Dorey  wrote:
> Um, Henrik... 4.4.90 is the latest in git:

Yes, I now see that you are right and that has been the "version number" of
all commits in git since version 4.4.1.

> ...the de facto OS/2 maintainer...

Whoops, my bad... I did remember when support for OS/2 was dropped, but I did
not remember that eventually someone picked it up again.

regards Henrik



Re: [bug #65908] Make fails with 'Makefile:3857: *** missing 'endif'. Stop.

2024-06-22 Thread Henrik Carlqvist
On Sat, 22 Jun 2024 16:57:17 +0200
Henrik Carlqvist  wrote:

> > Used make is 'GNU Make v4.4.90' from git repo whose head is commit e3f938,
> > and I'm working on OS/2.
> > 
> > v4.4.1 works fine.
> 
> For lack of an OS/2 maintainer, official GNU Make dropped support for OS/2
> somewhere around version 4.4. It seems as if an unofficial fork was made at
> https://github.com/komh/make-os2, but that fork does not seem to have kept up
> with continued work on GNU Make.

Whoops, I now see that version 4.4.1 is the latest official release. I am not
aware of any 4.4.90 version; there was a 4.4.0.90 version, which is older than
4.4.1. Commit e3f938 is the latest commit, but not an official release. That
commit was made to the official GNU Make repo, which has dropped OS/2 support.

regards Henrik



Re: [bug #65908] Make fails with 'Makefile:3857: *** missing 'endif'. Stop.

2024-06-22 Thread Henrik Carlqvist
> Used make is 'GNU Make v4.4.90' from git repo whose head is commit e3f938,
> and I'm working on OS/2.
> 
> v4.4.1 works fine.

For lack of an OS/2 maintainer, official GNU Make dropped support for OS/2
somewhere around version 4.4. It seems as if an unofficial fork was made at
https://github.com/komh/make-os2, but that fork does not seem to have kept up
with continued work on GNU Make.

regards Henrik



Re: set $*, $@ upon reaching the colon

2024-06-12 Thread Henrik Carlqvist
On Thu, 13 Jun 2024 07:55:13 +0800
Dan Jacobson  wrote:
> Or, document that they are purposely not expanded, to maintain
> compatibility. But in fact there is no compatibility to maintain, as in
> the past nobody would have used $* or $@ after the colon, because they
> were useless.

Did you read chapter 3.9 in the documentation about secondary expansion?
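
For illustration, a minimal sketch of what secondary expansion already allows
(as far as I know, .SECONDEXPANSION: is available in GNU Make 3.81 and later):

-8<---
# With .SECONDEXPANSION: enabled, the prerequisite list is expanded a
# second time, so escaped automatic variables like $$@ become usable
# after the colon.
.SECONDEXPANSION:

hello: $$@.c
	$(CC) -o $@ $<
-8<---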

regards Henrik



Re: [bug #65759] handling of "-" and "--" on command line

2024-05-19 Thread Henrik Carlqvist
On Sun, 19 May 2024 18:02:45 -0400 (EDT)
> Many programs use a single dash to mean "read from stdin" but make doesn't
> do this.

Yes, it does:

-8<
nazgul:/tmp> make -f -
all: ; echo hello
** pressing ctrl-d on this next line **
echo hello
hello
nazgul:/tmp> make --version
GNU Make 4.1
Built for x86_64-slackware-linux-gnu
Copyright (C) 1988-2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
nazgul:/tmp> 
-8<

regards Henrik



Re: [PATCH] src/main.c: Add -J to detect number of job slots based on nproc.

2024-04-12 Thread Henrik Carlqvist
> Isn't nproc or nproc+1 too much?  On systems with hyper-threading,
> this will try using too many jobs, and might cause the system be
> significantly less responsive.  Maybe nproc/2 is a better default?

There have been many good suggestions for the choice of make -j value, and the
right choice depends upon your situation.

In your example, where you are concerned about responsiveness, you are probably
considering a system being used interactively by some user. For such a
situation you might want a low number of parallel jobs and/or to start make
with a nice priority.

The other extreme is a system not being used interactively, with only a few
low-priority, low-CPU-usage background processes. If you want make to finish as
quickly as possible on such a system, you want it to fully utilize all the
resources of the machine. For such situations I usually use nproc+1, with the
idea that the machine has nproc CPU threads and 1 disk (local or remote) for
the project. The disk will be fully utilized if 1 or more processes are waiting
for disk, and the CPU threads will be fully utilized if 1 or more processes are
waiting for CPU. In either case you will have a bottleneck. Having more than
one process waiting for disk will not increase IO performance; most likely the
IO performance will instead be degraded. The same applies to processes waiting
for CPU, which will lose a little performance to context switching between
unfinished processes.
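
As a small sketch, one way to do this from the shell (assuming the coreutils
nproc command is available on the system):

-8<---
# pass "number of CPU threads + 1" as the job limit
make -j"$(( $(nproc) + 1 ))"
-8<---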

On a system with a directory like /tmp on a separate disk the performance
might benefit from nproc+2. The same applies if you are reading sources and
writing compiled binaries to different disks.

As always, your mileage may vary. If the bottleneck during compilation
repeatedly switches between disk and CPU, you might benefit from more processes
to fully utilize whichever resource is not currently the bottleneck.

Another thing worth considering is whether your CPU(s) are configured to adjust
core frequencies depending on load. On one hand, you might want to keep the
cores busy so that they do not keep stepping up and down in frequency. On the
other hand, depending on your CPU, a few working cores might be able to reach a
higher turbo frequency than the maximum allowed frequency when all cores are
working.

regards Henrik



Re: [PATCH] src/main.c: Add -J to detect number of job slots based on nproc.

2024-04-11 Thread Henrik Carlqvist
On Fri, 12 Apr 2024 02:13:36 +0100
Matt Staveley-Taylor  wrote:
> Browsing the mailing list I can see that the behaviour of -j with no
> arguments has been discussed a few times. Given it would affect
> backwards compatability to change how -j works, introducing a new
> flag seems appropriate.

Yes, it has been discussed. I would not mind if the default for -j without an
argument were to limit the jobs to nproc or nproc+1, instead of, as now,
creating a fork bomb when compiling a big project.

Slightly related to this is a patch contributed by me at
https://savannah.gnu.org/bugs/index.php?51200 which makes it possible to
adjust the number of jobs of an existing make process with SIGUSR2 and SIGUSR1
signals. As there does not seem to have been much interest in those patches, I
have instead created a fork of GNU Make at
https://github.com/henca/Henriks-make where I have done some continued work.
In the latest version I have limited the default number of processes to 3, but
nproc or nproc+1 would have been a better choice.

I will wait and see how your patch is received. If it gets accepted, I will
simply pull from upstream into my repo at the next release. Regardless of
whether it gets accepted, I might, inspired by your code, limit the default
number of jobs in my fork to nproc instead of the current 3.

regards Henrik



Re: Please check about the GIT problem.

2024-04-06 Thread Henrik Carlqvist
On Sat, 6 Apr 2024 16:50:33 +0900 (KST)
12zz12 <12z...@kakao.com> wrote:
> root@uk91-Korea:/home/u/다운로드# tar -zxf git-2.38.5.tar.gz 
> root@uk91-Korea:/home/u/다운로드# ls
> git-2.38.5  git-2.38.5.tar.gz  google-chrome-stable_current_amd64.deb
> root@uk91-Korea:/home/u/다운로드# cd git-2.38.5
> root@uk91-Korea:/home/u/다운로드/git-2.38.5# ls

The "#" in your prompt indicate that you are running the commands as user
root. Usually it is considered best practice to build software as a normal
unprivilged user and switch to the root account only for the installation
step.
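
As a generic sketch of that pattern (the prefix and targets here are only
illustrative; git's INSTALL file describes the exact ones):

-8<---
$ make prefix=/usr/local all                # build as your normal user
$ su -c 'make prefix=/usr/local install'    # become root only to install
-8<---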

> make[2]: 디렉터리 '/home/uk91/다운로드/git-2.38.5' 나감
> * new asciidoc flags
> ASCIIDOC git-add.html
> /bin/sh: 1: asciidoc: not found

This is why the build fails: you don't seem to have the asciidoc program
installed.

> root@uk91-Korea:/home/u/다운로드/git-2.38.5# sudo make install ... -y

There is no need to use sudo if you are already root.

> install-doc install-html install-info -y make: 부적절한 옵션 -- 'y'

Make complains about the "-y"; what was your intention with that? If it was
intended as an option for make, such options are usually placed before the
targets. However, GNU Make does not have any "-y" option.

Even with a correct invocation of make, the installation would fail if you
previously were unable to build.

regards Henrik



Re: Segmentation Fault on Exported Recursively Expanded Variable

2024-01-21 Thread Henrik Carlqvist
On Sun, 21 Jan 2024 14:45:00 -0500
Dmitry Goncharov  wrote:
> i bet, the purpose of having (*ep)[nl] == '=' check before strncmp was
> to relieve make from running strncmp unless the variables have the
> same length.

Yes, that is also my guess. Unfortunately, it could give a segfault if
(*ep)[nl] is not a valid memory location.

> For every char, strncmp does two checks, n and the character. strlen
> does only one check.
> Without doing any measurements, i expect, strlen do better than
> strncmp when strlen (*ep) is shorter than nl.
> On the other hand, when v->name is half the length of *ep, we'd prefer
> strncmp.

I haven't done any measurements either, but my guess is that strlen is faster
in the cases where strlen(*ep) is shorter than about double the position of the
first differing character in *ep and v->name. So strncmp might be faster if the
strings differ early on, as strncmp will probably stop at the first difference
while strlen will not stop until the end of the string. On the other hand,
strlen should be roughly twice as fast per position, as it only looks at a
single string and does not have to track any maximum length.

regards Henrik



Re: Segmentation Fault on Exported Recursively Expanded Variable

2024-01-16 Thread Henrik Carlqvist
On Tue, 16 Jan 2024 20:53:19 +0200
Eli Zaretskii  wrote:
> From: Henrik Carlqvist 
> > On Tue, 16 Jan 2024 06:59:30 +
> > MIAOW Miao  wrote:
> > > if ((*ep)[nl] == '=' && strncmp (*ep, v->name, nl) == 0)
> > 
> > Looking at that line, the rather obvious fix would be to change it to:
> > 
> > if (strncmp (*ep, v->name, nl) == 0 && (*ep)[nl] == '=')
> > 
> > That way, *ep can be no shorter than having \0 at position nl and
> > accessing that position should not cause any segfault.
> 
> But it's less efficient when the (*ep)[nl] == '=' test fails.

Yes, that is true, but to avoid a possible segfault it is necessary to somehow
ensure that (*ep)[nl] is a valid address. The current fix at
https://savannah.gnu.org/bugs/index.php?65172 also works fine, but that fix
might be even less efficient, as strlen will read every char up to and
including the terminating \0 in *ep.

regards Henrik



Re: Segmentation Fault on Exported Recursively Expanded Variable

2024-01-16 Thread Henrik Carlqvist
On Tue, 16 Jan 2024 06:59:30 +
MIAOW Miao  wrote:
> if ((*ep)[nl] == '=' && strncmp (*ep, v->name, nl) == 0)

Looking at that line, the rather obvious fix would be to change it to:

if (strncmp (*ep, v->name, nl) == 0 && (*ep)[nl] == '=')

That way, strncmp only succeeds if *ep has at least nl non-NUL characters, so
position nl is within the string (at worst its terminating \0) and accessing
that position should not cause any segfault.

regards Henrik



Re: Segmentation Fault on Exported Recursively Expanded Variable

2024-01-15 Thread Henrik Carlqvist
I was not able to reproduce it with Make 4.1 or Make 4.3. 

I don't have the original Make 4.4.1, but I had a binary of my fork 4.4.1h2
from https://github.com/henca/Henriks-make and got a segfault in that one
which probably means that I have been able to reproduce the report from Miao.

Opening a core dump in ddd, it seems as if the segfault was at:
src/expand.c:119
called from variable.c:1143
called from function.c:1878
called from function.c:2693
called from expand.c:282
called from expand.c:441
called from expand.c:448
called from expand.c:590
...

Or copy paste from the debugger:

Core was generated by `./make'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  recursively_expand_for_file (v=v@entry=0x13e5990, file=file@entry=0x0)
at src/expand.c:119
(gdb) bt
#0  recursively_expand_for_file (v=v@entry=0x13e5990, file=file@entry=0x0) at src/expand.c:119
#1  0x00429bad in target_environment (file=file@entry=0x0, recursive=recursive@entry=0) at src/variable.c:1143
#2  0x004133dc in func_shell_base (o=0x13f2680 "", argv=, trim_newlines=1) at src/function.c:1878
#3  0x00413923 in handle_function (op=op@entry=0x7ffd1edc1428, stringp=stringp@entry=0x7ffd1edc1420) at src/function.c:2693
#4  0x0040d3c2 in variable_expand_string (line=, line@entry=0x0, string=, length=length@entry=18446744073709551615) at src/expand.c:282
#5  0x0040db5e in variable_expand (line=) at src/expand.c:441
#6  variable_expand_for_file (file=0x0, line=) at src/expand.c:488
#7  allocated_variable_expand_for_file (file=0x0, line=) at src/expand.c:590
#8  recursively_expand_for_file (v=v@entry=0x13e5990, file=file@entry=0x13e5a30) at src/expand.c:164
#9  0x00429bad in target_environment (file=, recursive=) at src/variable.c:1143
#10 0x004188d2 in start_job_command (child=child@entry=0x13f25c0) at src/job.c:1431
#11 0x004192d4 in start_waiting_job (c=c@entry=0x13f25c0) at src/job.c:1646
#12 0x00419cf2 in new_job (file=0x13e5a30) at src/job.c:1960
#13 0x00425a95 in remake_file (file=0x13e5a30) at src/remake.c:1313
#14 update_file_1 (depth=, file=) at src/remake.c:905
#15 update_file (file=file@entry=0x13e5a30, depth=) at src/remake.c:367
#16 0x004263e1 in update_goal_chain (goaldeps=) at src/remake.c:184
#17 0x00409045 in main (argc=, argv=, envp=) at src/main.c:2951
(gdb)

Looking at the variables in ddd, I see that *ep is the string "VDPAU_LOG=0",
probably taken from my environment. The variable nl is 41, which is well beyond
the end of that string and probably what causes the segfault. The nl variable
comes from strlen(v->name), where v->name is
"THIS_LONG_VARIABLE_NAME_PRODUCE_THE_ERROR".

So it seems to me that such a long variable name can cause a segfault if an
environment variable is short enough.

I realize a report like this would be more valuable if I had the original GNU
Make. In my fork I have modified main.c and job.c (and posixos.c), making any
line numbers from those files untrustworthy. However, I still hope that my
analysis will be useful.

regards Henrik

On Mon, 15 Jan 2024 14:37:31 -0500
Paul Smith  wrote:

> On Mon, 2024-01-15 at 11:21 +, MIAOW Miao wrote:
> > I found name of exported resursively expanded variable with $(shell
> > ...) cannot be too long in gnu make version >= 4.4, otherwise a
> > segmentation fault is triggled. I'm not sure if variable-name-too-
> > long is a bug. However, make is
> > supposed to tell me what's going wrong.
> > 
> > Here is a Makefile that can reproduce the segmentation fault:
> > > THIS_LONG_VARIABLE_NAME_PREDUCE_THE_ERROR= $(shell echo hello)
> > > export THIS_LONG_VARIABLE_NAME_PREDUCE_THE_ERROR
> > > 
> > > all: ; echo "abc"
> 
> I was not able to reproduce this problem, either with my own build of
> GNU Make 4.4.1 or with a binary extracted from the RPM from the link
> you provided.  I tried running under valgrind and with a binary
> compiled with ASAN, with and without debugging enabled, ran the test
> 1000 times.  I also tried GNU Make 4.4.
> 
> If you can generate a coredump please examine it with GDB and send
> along at least the backtrace.
> 
> -- 
> Paul D. Smith Find some GNU make tips at:
> https://www.gnu.org   http://make.mad-scientist.net
> "Please remain calm...I may be mad, but I am a professional." --Mad
> Scientist
> 



Re: [bug #64472] $(CP) is an empty string

2023-07-26 Thread Henrik Carlqvist
On Wed, 26 Jul 2023 01:37:11 -0400 (EDT)
> ... I see that rm, which was on the list of directly invokable utilities
> with cp, nonetheless has an RM namesake, which contains the very much
> conventional, but non-obvious and misleading, -f.  AR is there alright but,
> had the OP chosen INSTALL as their example, there'd be more of a case to
> answer.  A cross-reference to
> https://www.gnu.org/software/make/manual/html_node/Implicit-Variables.html
> might help to explain what can be relied upon without Autotools or such.

If you want to use $(CP) or $(INSTALL) in your Makefile you will need to
assign those variables yourself. Doing so might be a good idea the day someone
wants to port your Makefile to another system where "cp" is perhaps called
something like "copy". If so, it is a lot easier to change only the assignment
of $(CP) on a single line in the Makefile than to replace "cp" in several
rules.
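
A minimal sketch of the idea (file and program names are hypothetical):

-8<---
CP      = cp
INSTALL = install -c -m 644

install: myprog.conf
	$(CP) myprog.conf myprog.conf.bak
	$(INSTALL) myprog.conf /etc/myprog.conf
-8<---

Porting then only means changing the CP and INSTALL assignments at the top.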

regards Henrik



Re: Order-only prerequisites

2023-06-07 Thread Henrik Carlqvist
On Wed, 07 Jun 2023 08:29:15 +0200
> As I said, a way to specify in which order recipes are invoked
> (here, a before b) if they are invoked, without influencing whether
> they are invoked (only a, only b or a and b, as given on the command
> line).

So you really don't want any target to depend upon another target? Then why do
you care about the order? If the order is important, what would happen if make
is run with -j and starts several jobs in parallel?

In practice, targets are built in the order they are given on the command line
or as dependencies in a rule. However, there is no guarantee that every Make is
implemented that way. If you really need to specify the order of things, you
probably have that need because of some dependency between the targets.

regards Henrik



Re: Order-only prerequisites

2023-06-06 Thread Henrik Carlqvist
> Consider this makefile:
> 
> .PHONY: a b
> a:; @echo a
> b:; @echo b
> b: | a

Your problem with this Makefile is that it never creates any files a or b.
This means that your order-only prerequisite a always has to be made.

> % make b a
> a
> b
> make: Nothing to be done for 'a'.
> 
> Correct order, but the last message seems strange.

First, as an (order-only) prerequisite of b, a is made. Then b is made. Then
make is asked to make a, but it remembers that a has already been made and
prints that there is nothing to be done.

If your Makefile had created the file a, the order-only prerequisite would have
made sense. It would then mean that b depends upon the existence of a but does
not care about the timestamp of a. So if a exists and is newer than b, there is
no need to remake b when a is an order-only prerequisite.
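
For illustration, a minimal sketch where the files really are created
(hypothetical names):

-8<---
a:
	touch $@

# b needs a to exist, but a newer a does not force b to be rebuilt
b: | a
	touch $@
-8<---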

> If so, is there another way to achieve what I want?

This totally depends upon what you want.

regards Henrik



Re: new feature idea: ingesting processed rulesets

2023-05-23 Thread Henrik Carlqvist
> > On May 23, 2023, at 5:13 AM, Zoltán Turányi
> >  wrote:
> > I use make with autotools in multiple directories and have observed that
> > parallel builds are limited to each directory, as autotools invoked make
> > separately for each directory.

In my experience, if make is called recursively and correctly with "$(MAKE)",
rather than with a new non-parallel invocation of "make", this is a
non-problem, as all parallel jobs will be handed to the recursive invocation
of make that needs them.

On Tue, 23 May 2023 12:07:51 -0400
"David A. Wheeler"  wrote: 
> The solution is to *NOT* use recursive make. Have *ONE* process run the
> makefile, with the correct data. Now you can enable parallel jobs, and have
> it run really quickly, because the make process has the correct information.
> Using this approach you can routinely run large make jobs in a fraction of a
> second.

If you prefer to do it that way, you can have your top Makefile include all
sub.mk files in the directory structure that it is able to find.
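
A minimal sketch of such a top-level Makefile (the sub.mk naming is just an
example):

-8<---
# pull in every sub.mk found below the current directory
SUBMAKEFILES := $(shell find . -name 'sub.mk')
-include $(SUBMAKEFILES)
-8<---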

regards Henrik



Re: [PATCH] Use UTF-8 active code page for Windows host.

2023-03-21 Thread Henrik Carlqvist
On Tue, 21 Mar 2023 15:08:52 +
Costas Argyris  wrote:
> I am simply re-attaching the patch I originally sent in
> this thread, because that was already developed and built on 4.4.1
> tarball which is still the latest AFAICT.

Yes, at the time of this writing version 4.4.1 is the latest release of make,
and the commit tagged 4.4.1 is also the latest commit on master in the git
repo.

regards Henrik



[bug #51200] Improvement suggestion: listen to signals to adjust number of jobs

2023-03-20 Thread Henrik Carlqvist
Follow-up Comment #3, bug #51200 (project make):

My latest patch signal_num_jobs6.patch is almost a complete rewrite to work
better with recursive calls to make. Jobs are simply added by putting tokens
into the pipe from the signal routine for SIGUSR2. Revoking jobs is slightly
more tricky: a separate process is spawned with fork which tries to remove a
token from the pipe. However, some jobs might be able to start before that
process manages to revoke a token. Making that process busy-wait on the pipe
would much increase the probability that it becomes the first process to get a
token, but I didn't want to do that. Another option might have been to make the
ordinary make processes sleep some ms each time before they pick a token, but I
didn't want to do that either. My choice was to live with the fact that it
might take a long time before the number of jobs is decreased.

A slightly intrusive change in this patch is that the job server is always set
up initially. This is because it cannot be set up in a signal-safe way from the
signal handlers; it has to be ready when the signal handlers need it. This
also means that the job server is set up when -j is given without a value.
That was supposed to give an infinite number of jobs, but the job server has
to have a limit. My choice for now was to set that limit to 1 jobs. In
practice 1 jobs are enough to be backwards compatible with the old default
behavior of make becoming kind of a fork bomb on a big source tree when no
value is given to -j. By definition 1 jobs are not an infinite number of jobs,
but in practice I see no big difference. Maybe I would prefer to limit the
number of jobs to a low number by default and let the people who really know
what they are doing specify higher numbers if they want to, but that would
break backwards compatibility.

(file #54516)

___

Additional Item Attachment:

File name: signal_num_jobs6.patch Size:5 KB








[bug #51200] Improvement suggestion: listen to signals to adjust number of jobs

2023-03-20 Thread Henrik Carlqvist
Additional Item Attachment, bug #51200 (project make):

File name: signal_num_jobs6.patch Size:5 KB








Re: please keep up supporting AmigaOS

2022-11-03 Thread Henrik Carlqvist
Thank you for joining the group of Amiga users who are now spamming all the
subscribers of the GNU mailing list.

May I humbly suggest that any contribution to the GNU Make sources making it
work again on Amiga be rejected until 2 cents have been paid, for each of those
spams, to every GNU Make mailing list subscriber.

regards Henrik

On Thu, 3 Nov 2022 17:39:21 +
Michael Bergmann  wrote:

> Sad to hear that the plan is to stop supporting AmigaOS. The Amiga actually
> has a hype, not only because of retro gamers, but due to serious developers.
> You just have to look towards AmigaOS 3.2.1 and its companion NDK. As long
> as I can think back, there was always Amiga support in GNU. As the founder
> of probably the biggest Amiga developer group on Facebook, I noticed lots
> of people being unhappy with your decision. Please reconsider!
> 
> Kind regards,
> 
> 
> With kind regards,
> 
> Michael Bergmann
> Caregiver
> Healthcare management clerk (IHK)
> Registered nurse
> 
> Mühlstraße 29
> 63543 Neuberg
> Tel.: 06185 - 647 869 1
> Mobil: 01525 - 2859 893
> 
> 



Re: Implicit rule for linking multiple object files

2022-08-11 Thread Henrik Carlqvist
On Thu, 11 Aug 2022 14:18:29 +0800
"ljh"  wrote:

> main : c.o b.o a.o d.o main.o

> A: note: imports must be built before being imported
> A: fatal error: returning to the gate for a mechanical issue
> compilation terminated.
> make: *** [ 3. ok: with target.o and recipe
> 
> $ rm -fr *.o gcm.cache main
> $ cat Makefile
> CXXFLAGS = -Wall -Wextra -std=c++2a -fmodules-ts -g # -O3 -fPIC
> main : c.o b.o a.o d.o main.o
> $(CXX) $^ -o $@
> $ make
> g++ -Wall -Wextra -std=c++2a -fmodules-ts -g  -c -o c.o c.cpp
> g++ -Wall -Wextra -std=c++2a -fmodules-ts -g  -c -o b.o b.cpp
> g++ -Wall -Wextra -std=c++2a -fmodules-ts -g  -c -o a.o a.cpp
> g++ -Wall -Wextra -std=c++2a -fmodules-ts -g  -c -o d.o d.cpp
> g++ -Wall -Wextra -std=c++2a -fmodules-ts -g  -c -o main.o
> main.cpp g++ c.o b.o a.o d.o main.o -o main

That Makefile might seem fine and appear to work, but what would happen if you
ran "make -j 5"? Then, again, the other object files would not be guaranteed
to have been built before main.o. So to work correctly with parallel builds,
that Makefile would probably also need a rule like:

main.o: a.o b.o c.o d.o

regards Henrik



Re: [bug #62840] make --version in pipe return SIGPIPE

2022-07-30 Thread Henrik Carlqvist
On Sat, 30 Jul 2022 13:26:46 -0400 (EDT)
Martin Dorey  wrote:
> Follow-up Comment #2, bug #62840 (project make):
> 
> Just for completeness or academic interest, then, this makes it happen
> reliably for me:
> 
> 
> set -o pipefail; { ruby -we '$stdout.write("x" * 4096)'; make --version; } |
> head -n1; echo $?

It was not a reliable way to reproduce the "bug" here:

-8<---
bash-4.3$ set -o pipefail; { ruby -we '$stdout.write("x" * 4096)'; make
--version; } | head -n1; echo $?
  
...
GNU Make 4.1
0
-8<---

But I would agree that it is not a bug. A program writing to a pipe is
expected to get a SIGPIPE and possibly return an error code if the pipe closes
before the program is finished.

However, the program is not guaranteed to get a SIGPIPE; there are also
buffers which the program might be able to write to, even though those buffers
might never be emptied by the receiving process.

regards Henrik



Re: Goodbye to GNU make's "build.sh" ... ?

2022-06-25 Thread Henrik Carlqvist
On Sat, 25 Jun 2022 15:28:52 -0800
Britton Kerin  wrote:

> On Sat, Jun 25, 2022, 1:48 PM Paul Smith  wrote:
> > If #2 is chosen, then a bootstrap process would involve first obtaining
> > an older version of make, such as GNU make 4.3 or lower, and building
> > that with its build.sh, then using the resulting make to build the
> > newer version.

> I realize it's easy to say and pretty obvious but option 2 from a user
> perspective is sort of horrific.

Would this "old version of make" have to be GNU Make? If the Makefiles used to
build GNU Make did not rely on any GNU extensions, those future systems might
be able to bootstrap with some other version of make, like BSD pmake.

The scary part of relying on an old version of some software is that old
versions sometimes suffer from bit rot caused by changes in language standards
or API changes in libraries.

Maybe we would have to rely on cross compiling GNU make for new systems?

regards Henrik



Re: [bug #62441] Recursive make & PHONY = targets being rebuilt [feature reques]

2022-05-11 Thread Henrik Carlqvist
> I have two makefiles:
> 
> *Makefile*
> 
> .PHONY: other_file
> other_file:
>   $(MAKE) -f Makefile2 other_file
> 
> my_file: other_file
>   touch $@
> 
> 
> *Makefile2*
> 
> other_file:
>   touch $@
> 
> 
> 
> When I first run the command, it builds my_file and other_file as expected.
> 
> > make my_file --trace --no-print-directory
> Makefile:6: target 'other_file' does not exist
> make -f Makefile2 other_file
> Makefile2:3: target 'other_file' does not exist
> touch other_file
> Makefile:9: update target 'my_file' due to: other_file
> touch my_file
> 
> 
> However, the next time I run the same command, it doesn't rebuild
> other_file, but it does rebuild my_file.
> 
> make my_file --trace --no-print-directory
> Makefile:6: target 'other_file' does not exist
> make -f Makefile2 other_file
> make[1]: 'other_file' is up to date.
> Makefile:9: update target 'my_file' due to: other_file
> touch my_file
> 
> 
> This is obviously because other_file is marked as PHONY.
> 
> What I would like is this functionality for recursive make support: When it
> is time to consider such a target, make will run its recipe unconditionally.
> After the recipe is run, make will use the target's timestamp to determine
> if other targets are out of date.
> 
> It would look something like: 
> *Makefile*
> 
> .RECURSIVE_MAKE: other_file
> other_file:
>   $(MAKE) -f Makefile2 other_file
> 
> my_file: other_file
>   touch $@
> 
> 
> *Makefile2*
> 
> other_file:
>   touch $@
> 
> 
> 
> > make my_file --trace --no-print-directory
> Makefile:6: target 'other_file' does not exist
> make -f Makefile2 other_file
> Makefile2:3: target 'other_file' does not exist
> touch other_file
> Makefile:9: update target 'my_file' due to: other_file
> touch my_file
> 
> > make my_file --trace --no-print-directory
> Makefile:6: target 'other_file' requires recursive make.
> make -f Makefile2 other_file
> make[1]: 'other_file' is up to date.
> make: 'my_file' is up to date.

Wouldn't the easy solution be to not make other_file a phony target and
instead force it to be rebuilt?

-8<-
other_file: FORCE
	$(MAKE) -f Makefile2 other_file

my_file: other_file
	touch $@

FORCE:
-8<-

regards Henrik



Re: Archive Members as Targets

2022-05-10 Thread Henrik Carlqvist
On Tue, 10 May 2022 22:05:22 -0400
Dmitry Goncharov  wrote:

> > But not on the Linux boxes there make always rebuild everything. On all
> > machines I am using GNU Make.
> ...
> > Can anyone confirm that?
> 
> i can confirm that for me on linux the latest make from git as well as
> make-4.3 correctly detect that libfoo.a is up to date.

I can confirm that it works fine also on Slackware Linux 14.2 using make
version 4.1:

nazgul:/tmp> make
cc -c -o foo.o foo.c
ar cr libfoo.a foo.o
ranlib libfoo.a
rm foo.o
nazgul:/tmp> make
make: Nothing to be done for 'all'.
nazgul:/tmp> make --version
GNU Make 4.1
Built for x86_64-slackware-linux-gnu
Copyright (C) 1988-2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
nazgul:/tmp> cat /etc/slackware-version
Slackware 14.2

regards Henrik



Re: Invalid use of const pointer?

2022-01-09 Thread Henrik Carlqvist
On Sun, 09 Jan 2022 10:17:10 -0500
Paul Smith  wrote:
> In any event, the bug still exists whether you say the argument is
> const or not: the expectation when this function is called is that
> after it returns the string passed to it has the same content as before
> it was called.

If so I agree that the best quick and easy fix would be to restore the
original value. A fix involving more work would be to copy the const
input string to some temporary string that the function is free to alter as it
wants.

Best regards Henrik



Re: Invalid use of const pointer?

2022-01-09 Thread Henrik Carlqvist
On Sat, 08 Jan 2022 17:29:33 -0500
Paul Smith  wrote:
> It turns out to be innocuous because none of the callers care that the
> value of the input string is modified if we return a different string,
> but it's still wrong and should be fixed.

If so, the easy and more correct fix might be to remove const from the
function parameter declarations rather than to restore the value.

> Thanks for noticing!

This thanks mostly goes to Joe Filion who initially noticed this.

Best regards Henrik



Re: Invalid use of const pointer?

2022-01-08 Thread Henrik Carlqvist
On Sat, 08 Jan 2022 15:36:46 -0500
Paul Smith  wrote:

> On Sat, 2022-01-08 at 19:47 +0100, Henrik Carlqvist wrote:
> >  But now, with both userend and pwent set it seems as if the calling
> > function will have its const string modified. If this final case were
> > fixed at least no calling function would suffer from a modified const
> > string.
 
> I'm not sure what you mean here.  It is never the case that the
> incoming string (name) is ever modified under any circumstances, as far
> as the calling function is concerned.

I mean this code, where name is a const char * which was an input variable to
the function:

# if !defined(_AMIGA) && !defined(WINDOWS32)
  else
{
  struct passwd *pwent;
  char *userend = strchr (name + 1, '/');
  if (userend != 0)
*userend = '\0';  <---  ** Here userend modifies the content
  pwent = getpwnam (name + 1);
  if (pwent != 0)
{
  if (userend == 0)
return xstrdup (pwent->pw_dir);
  else
return xstrdup (concat (3, pwent->pw_dir, "/", userend + 1)); <--X
}  
  else if (userend != 0)
*userend = '/';  <--- ** Here userend restores the content
}
# endif /* !AMIGA && !WINDOWS32 */

> If the incoming string needs to expanded then a new string is allocated
> and returned from this function containing the expansion.  If the
> incoming string is not expanded, then no new string is allocated and 0
> (null pointer) is returned.

But what about the case marked with <--X above? To me it seems as if the
function returns after having modified const char *name by using userend.

Best regards Henrik



Re: Invalid use of const pointer?

2022-01-08 Thread Henrik Carlqvist
On Sat, 08 Jan 2022 10:37:17 -0500
Paul Smith  wrote:
> The const-correct way to write this code would be to allocate a new
> string and modify that (or, rework the entire program to use the
> equivalent of C++'s std::string_view), but the author of this code
> (might be me, might be someone else: I didn't investigate) decided that
> the quick and dirty way had enough benefits to outweigh the gross-ness.

Another correct way to do this would be to not declare the input variable
*name as const, but that change would need to propagate up to the calling
functions.

There are cases in tilde_expand where *userend is restored to '/' after having
been altered to '\0'. In those cases at least no permanent change has been made
to the const string from the calling function's point of view. But now, with
both userend and pwent set, it seems as if the calling function will have its
const string modified. If this final case were fixed, at least no calling
function would suffer from a modified const string.

Best regards Henrik



Re: make -j does not work on RedHat7.7 VM

2021-12-23 Thread Henrik Carlqvist
On Thu, 23 Dec 2021 12:49:51 +
"Zhu, Mason"  wrote:
> I checked our project make file. Yes, we are using recursive Make, but does
> not explicitly set -j options in MAKEFLAGS.
> 
> In GNU Make 3.82, it seems that -j option will be finally added if Make
> determines my VM has the parallel build capability. However in GNU Make
> 4.2.1, there is no parallel build if I does not explicitly set -jN option.
> 
> Does it mean that I have to explicitly set -jN in MAKEFLAGS  for GNU Make
> 4.2.1 now?

My guess is that your Makefile is making recursive calls to "make" instead of
recursive calls to $(MAKE).

If you failed to call $(MAKE) and instead called "make", the "make" you called
will be the first one in your path, probably not the "make" you intended to
call in this case.

Having one version of make call another version of make may work less well for
parallel jobs, as those versions might not be able to communicate with each
other about the parallel jobs.
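
For illustration, a minimal sketch of a recursive rule (the directory name is
just an example); using $(MAKE) lets the parent pass its jobserver information
down to the sub-make:

-8<---
subdir:
	$(MAKE) -C subdir all
-8<---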

regards Henrik



Re: [bug #57751] Improve POSIX support for SCCS

2020-11-08 Thread Henrik Carlqvist
On Sat,  7 Nov 2020 20:56:37 -0500 (EST)
Bruce Lilly  wrote:

> I've seen it used, e.g. where there are many executables, each built from a
> single source file.  So, for example:
> 
> 
> cat date echo ls pwd rm sleep sync test : $@.o
> 
> 
> suffices to specify (with default rules) everything needed to build those
> executables.  Otherwise, you'd have to have many separate dependency lines,

I agree that allowing $@ in prerequisites would make Makefiles easy to read and
write, but the lack of that functionality does not mean that the same build
method for multiple targets needs to be split up into different dependency
lines. With a static pattern rule you can do the same with a single line:

cat date echo ls pwd rm sleep sync test : % : %.o

regards Henrik



Re: GNU Make 4.3: Makefile rule spooky action at a distance

2020-10-05 Thread Henrik Carlqvist
On Mon, 5 Oct 2020 22:48:56 +0200
Danny Milosavljevic  wrote:

> Hi,
> 
> On Mon, 05 Oct 2020 15:41:52 -0400
> Paul Smith  wrote:
> 
> > It would be interesting to know if adding an explicit export solves the
> > problem.  
> 
> I tried adding an explicit export at the toplevel makefile right before the
> invocation of the submake--it does not solve the problem.  Behavior is
> exactly the same.
 
I have also tried exporting the variable in the shell; it does not solve the
problem:

bash-4.3$ export CFLAGS=ok  
bash-4.3$ echo $CFLAGS
ok
bash-4.3$ make
make -C foo all
make[1]: Entering directory '/tmp/mk_test/foo'
echo internal
internal
make[1]: Leaving directory '/tmp/mk_test/foo'
bash-4.3$ 

However, setting the variable among the make options seems to work as
expected:

bash-4.3$ make CFLAGS=ok
make -C foo all
make[1]: Entering directory '/tmp/mk_test/foo'
echo ok
ok
make[1]: Leaving directory '/tmp/mk_test/foo'
bash-4.3$ 

The above tests were done with make version 4.1 on a Slackware 14.2 system.

regards Henrik



Re: "make -jN" requires mechanical changes to a Makefile [SOLVED]

2020-09-14 Thread Henrik Carlqvist
On Mon, 14 Sep 2020 12:15:58 +0200
Bruno Haible  wrote:

> Henrik Carlqvist wrote:
> > 2) Don't mention some of the extra targets:
> > ===
> > all : copy1
> >  
> > copy1: Makefile
> > install -c -m 644 Makefile copy1
> > install -c -m 644 Makefile copy2
> > install -c -m 644 Makefile copy3
> > install -c -m 644 Makefile copy4
> > ===
> 
> Fails (D) and (E). => Not a solution to the problem.
> 
> Bruno
> 

OK, assuming that solution 1 did not meet your requirements, let's give it
another shot:

===
COPIES=copy1 copy2 copy3 copy4

.INTERMEDIATE: dummy

all: $(COPIES)

$(COPIES): %: dummy
	$(RM) $<

dummy: Makefile
	install -c -m 644 Makefile copy1
	install -c -m 644 Makefile copy2
	install -c -m 644 Makefile copy3
	install -c -m 644 Makefile copy4
	touch $@
===

I create an extra file "dummy" for a short time in an attempt to live up to
all your requirements.

regards Henrik



Re: "make -jN" requires mechanical changes to a Makefile [SOLVED]

2020-09-13 Thread Henrik Carlqvist
On Sun, 13 Sep 2020 20:07:27 +0100
Bruno Haible wrote:
> Continuing this thread from May 2019
> :
> The problem was:
> 
>   How can a rule that generates multiple files be formulated so
>   that it works with parallel make?
> 
> For example, a rule that invokes bison, or a rule that invokes
> a different Makefile. For simplicity, here, use a rule that
> creates 4 files copy1, copy2, copy3, copy4.
> 
> ===
> all : copy1 copy2 copy3 copy4
> 
> copy1 copy2 copy3 copy4: Makefile
>   install -c -m 644 Makefile copy1
>   install -c -m 644 Makefile copy2
>   install -c -m 644 Makefile copy3
>   install -c -m 644 Makefile copy4
> ===

I would say there are two obvious solutions to make this work with parallel
make:


1) Allow the targets to be built in parallel:
===
all : copy1 copy2 copy3 copy4
 
copy1 copy2 copy3 copy4: Makefile
	install -c -m 644 Makefile $@
===

However, this solution 1 will not work for a single command that generates
multiple files, so we might need a solution 2:

2) Don't mention some of the extra targets:
===
all : copy1
 
copy1: Makefile
install -c -m 644 Makefile copy1
install -c -m 644 Makefile copy2
install -c -m 644 Makefile copy3
install -c -m 644 Makefile copy4
===

regards Henrik



Re: Adjusting jobserver size (was: Re: No follow up on patches to support newer glibc ?)

2020-07-13 Thread Henrik Carlqvist
I have added an updated patch to bug #51200 and hope that you will reconsider
adding the functionality in the next release. It is true that SIGUSR1 is
already used for debug toggling, but the behavior of SIGUSR1 is not changed to
decreasing the number of jobs until a SIGUSR2 signal has been received. So make
will be fully compatible with its current behavior unless it receives a
SIGUSR2, at which point the new feature kicks in and also replaces debug
toggling.

The changes in my latest patch are:

* Adapted to the new directory structure

* Making sure that no signal-unsafe functions are called from the signal
  handlers. To really make sure of this, no other functions at all are called
  from the signal handlers. This means that both increasing and decreasing the
  number of jobs will not happen until a job finishes.

Best regards Henrik

On Sat, 07 Apr 2018 16:46:00 -0400
Paul Smith  wrote:

> On Thu, 2018-04-05 at 00:26 +0200, Henrik Carlqvist wrote:
> > On Wed, 04 Apr 2018 15:42:51 -0400
> > Paul Smith  wrote:
> > > It does look like we need to make a new release soon.
> > 
> > If so, is there anything I can do to get the functionality of my
> > contributed patch in bug #51200 into the upcoming new release?
> 
> Thanks for the reminder Henrik.  For those interested, the link is:
> https://savannah.gnu.org/bugs/index.php?51200
> 
> I'll confess I'm on the fence about this.  On the one hand I could
> imagine where it would be useful.
> 
> On the other hand, it's a complex change (I'm not convinced that your
> implementation is complete: for example, it's not immediately clear to
> me how the decrement handles the "free token" concept of the job server
> implementation... also, it's not a good idea to use fputs() in a signal
> handler, and I haven't traced down what other possible issues the other
> calls in increase_job_signal_handler() might have); it doesn't have
> testing with it to make sure it continues to work, and while useful in
> specific situations this feature likely won't be widely needed and so
> get less testing.  And it is limited to only working on POSIX systems
> as others don't support SIGUSRx (IIRC).  I get that we already use
> SIGUSR1 for debug toggling so there is precedent.
> 
> When I realize I started make with a jobserver value I don't like, I
> typically just kill the make and restart it.
> 



[bug #51200] Improvement suggestion: listen to signals to adjust number of jobs

2020-07-04 Thread Henrik Carlqvist
Additional Item Attachment, bug #51200 (project make):

File name: signal_num_jobs5.patch Size:3 KB








Re: [bug #58056] Forced prerequisite order is not honored with pattern rules

2020-03-30 Thread Henrik Carlqvist
On Mon, 30 Mar 2020 15:35:01 -0400 (EDT)
anonymous  wrote:

> If this behavior is allowed, I think the documentation should clarify
> what the order-only prerequisites actually means.

Maybe a better name than "order-only-prerequisites" would have been
"exist-only-prerequisites".

> I still haven't figured out their purpose or how they operate based on
> the documentation or what has been said here. 

I think the example in the documentation is rather clear and shows exactly
the need for order-only prerequisites. You have a Makefile creating object
files from source files, you want those object files in a directory of their
own, and that directory should also be created by the Makefile.

Before the Makefile creates any object file it will have to create the
directory; therefore the object files depend upon the directory.

But even though the object files depend upon the directory, you do not want
to rebuild the object files just because the timestamp of the directory
changes, and it will change whenever some file is added to or removed from the
directory.

That is exactly what order-only prerequisites are for: if the order-only
prerequisite does not exist, it will be created. If the order-only
prerequisite does exist and has a newer timestamp than the target, it will
still not cause the target to be rebuilt.
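
Roughly the example from the manual, as a minimal sketch (names are
illustrative):

-8<---
OBJDIR := objdir

# the directory after the | is an order-only prerequisite
$(OBJDIR)/%.o: %.c | $(OBJDIR)
	$(CC) $(CFLAGS) -c -o $@ $<

$(OBJDIR):
	mkdir -p $@
-8<---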

>To me, the documentation basically implies that everything after the '|'
>is serialized.

What part of
https://www.gnu.org/software/make/manual/html_node/Prerequisite-Types.html
made you think that? Was it that you read only half of that sentence, stopping
before the emphasized important part? I don't think the sentence could have
been any clearer; the important part is even emphasized in bold italics.

> I usually never assume any serialization of the prerequisites, but I
> needed to enforce it in a particular scenario and though the '|' was for
> this.

Instead you might want to look at what the existence of a .NOTPARALLEL
target in a Makefile does.

regards Henrik



Re: [bug #58056] Forced prerequisite order is not honored with pattern rules

2020-03-30 Thread Henrik Carlqvist
On Mon, 30 Mar 2020 12:17:24 -0400 (EDT)
anonymous  wrote:

> Wait, so order-only prerequisites does NOT mean serialized make for
> these prerequisites?

Nope.

> "a situation where you want to impose a specific
> ordering on the rules to be invoked" ← That's not what this means?

Please read the rest of that sentence, and pay special attention to the
bold italic *without*:

"...*without* forcing the target to be updated if one of those rules is
executed".

That page has a good example explaining that order-only prerequisites will
be created if needed, but their updates will not cause your targets to be
rebuilt.

regards Henrik



Re: Error In Installing FreeBayes

2020-03-05 Thread Henrik Carlqvist
On Fri, 6 Mar 2020 09:38:19 +0530
Dr Priyanka Jain  wrote:
>   I am trying to clone freebayes from following link :

> /usr/bin/gmake: unrecognized option '--jobserver-auth=3,4'

It seems as if your project "freebayes" requires a newer version of gnu
make than you have installed. You can check which version of gnu make you
have with:

gmake --version

Example of output:

gmake --version
GNU Make 4.1
Built for x86_64-slackware-linux-gnu
Copyright (C) 1988-2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

In the above example I have Gnu Make version 4.1.

According to
https://lists.gnu.org/archive/html/info-gnu/2016-05/msg00013.html it
seems as if the --jobserver-auth option was introduced with version 4.2 of
Gnu Make.

So you will need to update your installed version of Gnu Make or adjust
the build scripts to comply with your older version.

regards Henrik



Re: What about the name of the second prerequisite?

2019-06-15 Thread Henrik Carlqvist
On Sat, 15 Jun 2019 08:35:13 +0800
Dan Jacobson  wrote:

> (info "(make) Automatic Variables") has
> 
> '$<' The name of the first prerequisite...
> '$?' The names of all the prerequisites that are newer than the
> target...'$^' The names of all the prerequisites, with spaces between
> them...'$+' This is like '$^', but prerequisites listed more than once
> are...'$|' The names of all the order-only prerequisites...
> 
> OK, OK, OK, OK, OK!
> 
> But it really should also mention the official recommended way to (drum
> roll)...
> 
> Get the name of the second prerequisite.

I would use the $(word) function for that:

-8<--
all: dummy

dummy: dummy1 dummy2 dummy3
	echo first: $< second: $(word 2, $^) third: $(word 3, $^)

dummy%:
	touch $@
-8<--

regards Henrik



Re: append assignment operator in target specific variable

2019-05-19 Thread Henrik Carlqvist
> 1. "foo" is simple variable.
>so result have to be 100 but is 200
> 
> foo :=
> val := 100
> 
> all : foo += $(val)
> all :
>   @echo foo : $(foo)
> 
> val := 200
> 
> result is : 200

Yes, the result becomes 200 because foo is not expanded until it is used at
the line "@echo foo : $(foo)". By that time, before building "all", the entire
Makefile has been read and val has been assigned 200 on the last line of the
Makefile.

> ---
> 
> 2. If i change '+=' operator to ':=' then result is 100
> 
> foo :=
> val := 100
> 
> all : foo := $(val)
> all :
>   @echo foo : $(foo)
> 
> val := 200
> 
> result is : 100
> --

Yes, because ":=" unlike "=" and "+=" is expanded at that very line. 

regards Henrik



Re: "make -jN" requires mechanical changes to a Makefile

2019-05-12 Thread Henrik Carlqvist
On Mon, 13 May 2019 00:05:59 +0200
Bruno Haible  wrote:

> Howard Chu wrote:
> > >> Example with one rule creating 4 files:
> > >>
> > >> all : copy1 
> > >>
> > >> copy1: Makefile
> > >> install -c -m 644 Makefile copy1
> > >> install -c -m 644 Makefile copy2
> > >> install -c -m 644 Makefile copy3
> > >> install -c -m 644 Makefile copy4
> > > 
> > > I think the "representative" file should be copy4 here, because it's
> > > the one that gets created last.
> > 
> > That sort of thing is only true in serial make, you can't rely on it
> > in parallel make.
> 
> The sequence of lines of the recipe of a rule gets executed in order,
> even in parallel make, no?

Yes, they will be run in sequence even with parallel make, and copy4 might
be a better choice for the known target than copy1.

regards Henrik



Re: "make -jN" requires mechanical changes to a Makefile

2019-05-12 Thread Henrik Carlqvist
On Sun, 12 May 2019 22:23:12 +0200
Bruno Haible  wrote:
> Now, when my use-case is:
>   - one rule that produces N files (N > 1),
>   - I want "make" to execute the rule only once, not N times,
> even with parallel make.
> What is the solution?

I think that the only good solution is to make sure that only 1 of the N
created files is a known target for the Makefile. If you write single rules
that create multiple targets in one invocation, your Makefile will not be
compatible with parallel make.

Example with one rule creating 4 files:

all : copy1 

copy1: Makefile
	install -c -m 644 Makefile copy1
	install -c -m 644 Makefile copy2
	install -c -m 644 Makefile copy3
	install -c -m 644 Makefile copy4

A better way would be to have one rule that creates multiple targets, but
only one target per invocation, for example:

all: copy1 copy2 copy3 copy4

copy%: Makefile
	install -c -m 644 Makefile $@

The above simple Makefile would be fully compatible with parallel make.

regards Henrik



Re: "make -jN" requires mechanical changes to a Makefile

2019-05-10 Thread Henrik Carlqvist
> In the current state, supporting parallel make requires extra work
> for the maintainer.
> 
> Or would you recommend that I add this snippet to the top-level
> Makefile of all my projects?
> 
> # This package does not support parallel make.
> # So, turn off parallel execution (at least in GNU make >= 4.0).
> GNUMAKEFLAGS = -j1

If you really prefer to write rules which generate more than one target,
the "right" way to avoid parallel make would be to add the .NOTPARALLEL
target to the Makefile.
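
A minimal sketch (the mere presence of the special target is enough to turn
off parallel execution for the whole makefile, even under "make -jN"):

-8<---
.NOTPARALLEL:

all: copy1 copy2 copy3 copy4
-8<---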

regards Henrik



Re: GNU Make for Java projects

2019-01-18 Thread Henrik Carlqvist
On Fri, 18 Jan 2019 05:42:44 -0600
Blake McBride  wrote:
> I have found Make very easy to understand and use.  Given the amount of
> work they do in the background, I have found build tools such as Maven
> and Gradle to be very confusing.

When I wrote Android apps some years ago I found Eclipse too buggy to be
usable. Point and click might be nice, but far too often Eclipse crashed.

I resorted to editing the source files in emacs and building the Android
projects with GNU Make, just as I was used to doing with C projects.

> It is my incomplete understanding that certain aspects of Make don't
> lend themselves well to the building of Java projects.  This is what has
> driven the quest for better Java fitting build tools.

GNU Make is very simple: it runs some command to create something, and does so
only if something else indicates that it is needed. This simplicity makes Make
useful for far more things than just compiling programs.

> I wondered whether it might be possible to enhance certain
> aspects of GNU Make to better accommodate the needs of a Java
> environment.

What kind of need would that be? The hardest thing for me when migrating from
Eclipse to GNU Make was finding out everything that Eclipse did under the
hood. In the end it was really just about running different commands with
arguments to generate files.

> Anyway, I thought I would raise the issue and possibly spark an
> interesting conversation.

My Android app projects are abandoned today, but if you want to study my
Makefile that generates the .apk files from Java code, you can download it
as described at http://halttimer.cvs.sourceforge.net/

regards Henrik



[bug #54529] [Makefile:5: foobar] Segmentation fault

2018-08-17 Thread Henrik Carlqvist
Follow-up Comment #2, bug #54529 (project make):

"Segmentation fault (core dumped)" means that you got a core file to analyze.
A first simple step might be to see what generated that core file, it could be
done with something like "file core". Next you might want to open the core
file in a debugger to see a stack trace.
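
A minimal sketch of those steps (assuming the core file is literally named
"core" and the crashing binary was make; gdb is just one debugger choice):

-8<---
$ file core        # shows which program produced the core dump
$ gdb ./make core  # open the core file together with the binary
(gdb) bt           # print the stack trace
-8<---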

regards Henrik



Re: Adjusting jobserver size (was: Re: No follow up on patches to support newer glibc ?)

2018-04-07 Thread Henrik Carlqvist
Thanks for your feedback!

> On the other hand, it's a complex change (I'm not convinced that your
> implementation is complete: for example, it's not immediately clear to
> me how the decrement handles the "free token" concept of the job server
> implementation...

The variable decrease_jobs is set to 0 unless a SIGUSR1 has been received.

With decrease_jobs set to 0, the code in free_child behaves as it does without
my patch applied: if we have a job server and jobserver_tokens > 1, a job is
released back to the job server and our own remaining jobserver_tokens is
decreased by 1.

If SIGUSR1 has been received one or more times, decrease_jobs will have a value
bigger than 0. Then the code in free_child will still decrease
jobserver_tokens by 1, but instead of releasing the job back to the job
server, the variable decrease_jobs will be decreased by 1. That way the
function free_child makes sure that from now on one less parallel job is run
by make, as was intended by the received SIGUSR1.

> also, it's not a good idea to use fputs() in a signal
> handler, 

I can remove those printouts like "Decreased number of jobs to %d", as they
are not necessary for the functionality; they are only there to help the user
see what is going on.

> and I haven't traced down what other possible issues the other
> calls in increase_job_signal_handler() might have); it doesn't have
> testing with it to make sure it continues to work,

If you so wish, I can also remove those initializing jobserver function
calls from the signal handler. Without those calls it will no longer be
possible to increase the number of jobs unless make was initially started
with the -j flag. The call to jobserver_release could also be moved out of
the signal handler, but at the cost of having to wait for the next finished
job before an extra new job is spawned. For that reason I would prefer to
keep the call to jobserver_release and add a big fat comment in (at least
the POSIX version of) jobserver_release that no non-reentrant code may be
added to that function.

> And it is limited to only working on POSIX systems as others don't
> support SIGUSRx (IIRC). 

This is true; if you have any idea for a more portable solution I am
willing to rewrite my patch. I am also willing to implement different
solutions for different environments, but I am only able to test on Linux
(POSIX) myself.

> When I realize I started make with a jobserver value I don't like, I
> typically just kill the make and restart it.

This usually works fine for a normal compile. But make can be used for so
much more than compiling programs. Some processes started by make might
need a long time to finish, and restarting them might be costly for
different reasons.

I understand that my patch as it looks now is not going to make it into
the next release. However, as I find its functionality useful, I am willing
to put more work into the patch if that will help. I am also willing to
sacrifice the ability to add jobs when make was started without "-j", as
well as any helpful output from the signal handlers.

regards Henrik



Re: No follow up on patches to support newer glibc ?

2018-04-04 Thread Henrik Carlqvist
On Wed, 04 Apr 2018 15:42:51 -0400
Paul Smith  wrote:
> It does look like we need to make a new release soon.

If so, is there anything I can do to get the functionality of my
contributed patch in bug #51200 into the upcoming new release?

Best regards Henrik



[bug #51200] Improvement suggestion: listen to signals to adjust number of jobs

2017-11-19 Thread Henrik Carlqvist
Follow-up Comment #1, bug #51200 (project make):

As the directory structure recently changed in git, would you like me to send
an updated patch?

regards Henrik



Re: Target-specific variable in subdirectory problem

2017-08-02 Thread Henrik Carlqvist
> The example in his question makes very clear what he wants: he wants a
> pattern-specific variable assignment.

Most likely yes. If you look at the subject of this thread it says
"Target-specific variable in subdirectory problem" so he probably only
wants the variable to be set when the target matches the pattern.
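
Just to make the terminology concrete, a minimal sketch of such a
pattern-specific variable assignment limited to one subdirectory could look
like this (the directory name subdir and the extra define are only made-up
examples):

-8<--
# Only objects built below subdir/ get the extra define
subdir/%.o: CFLAGS += -DSUBDIR_SPECIAL

%.o: %.c
	gcc -c $(CFLAGS) -o $@ $<
-8<--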

regards Henrik



[bug #51495] Notice when a rule changed, so target needs rebuilding

2017-07-18 Thread Henrik Carlqvist
Follow-up Comment #1, bug #51495 (project make):

The quick and easy way to accomplish this today is of course to also add the
Makefile to the prerequisites of targets. If you don't want every target to be
rebuilt when only one rule has changed, it is also possible to split the
Makefile up into several files with only one rule in each file. Example:

Makefile:
-8<--
all: myprog

include $(wildcard *.mk)
-8<--

link.mk:
-8<--
OBJFILES=file1.o file2.o
LDFLAGS=-lm

myprog: $(OBJFILES) link.mk
	gcc -o $@ $(OBJFILES) $(LDFLAGS)
-8<--

compile.mk:
-8<--
CFLAGS=-ansi -pedantic

%.o: %.c compile.mk
	gcc -c $(CFLAGS) -o $@ $<
-8<--
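
With this split, editing only LDFLAGS in link.mk makes only the link step
rerun, while touching compile.mk recompiles the objects and then relinks:

-8<--
touch link.mk && make    # relinks myprog, does not recompile
touch compile.mk && make # recompiles file1.o and file2.o, then relinks
-8<--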





Re: [bug #51309] Determination of a file list from a single folder without changing the working directory

2017-07-02 Thread Henrik Carlqvist
On Sun,  2 Jul 2017 16:14:31 -0400 (EDT)
Markus Elfring  wrote:
> It might matter more under other circumstances.

Are you able to provide any example showing how it matters under some more
or less rare circumstances?

> (The current software can be fast enough to some degree.)

Or do you find the current software good enough and no changes needed?

regards Henrik



Re: Error while running make command

2017-06-21 Thread Henrik Carlqvist
On Wed, 21 Jun 2017 19:48:57 +0530
"PARIMEY DNYANESHWAR PATIL 4-Yr B.Tech. Electrical Engg."
 wrote:
> I am getting this error after running above command in Ubuntu 16.04 lts.
> Please tell me what to do to solve?

You need to carefully read the instructions that you are following to
compile something. Most of all, you must make sure that you change into the
right directory before running make. That directory has probably just been
unpacked from some kind of archive and, most importantly, it contains a
Makefile.
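
As a made-up example, if the instructions told you to download something
like foo-1.0.tar.gz (the name here is only an illustration), the steps in
the shell would typically look something like:

-8<--
tar xzf foo-1.0.tar.gz
cd foo-1.0
ls          # check that a Makefile (or a configure script) is here
make
-8<--

Some packages want you to run a ./configure script before make; the
instructions that come with the package will tell you.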

If all this talk about files and directories seems like strange nonsense,
you might first need to read up on some basic unix usage in the shell.
Chapter 4 in the Linux Users Guide might be a good start; you can find it
here: http://downloads.tuxpuc.pucp.edu.pe/manuales/otros/user-guide.pdf

On the other hand, if you are familiar with how to navigate in a shell and
how to provide a Makefile for gnu make you might instead want to read
something like http://www.catb.org/esr/faqs/smart-questions.html

regards Henrik



Re: [bug #51267] Improve error handling after a special command

2017-06-19 Thread Henrik Carlqvist
On Mon, 19 Jun 2017 12:08:35 -0400 (EDT)
Markus Elfring  wrote:
> The semicolon indicates at the end that the return value is ignored
> there. I imagine that further data processing should usually only be
> performed if this command succeeded.
> Would you like to improve the exception handling for such a situation?

If you really think that is a bug you should report it to the writers of
the shell. Make runs its commands in a shell, and the result is the same as
in the shell.

Try the following at your shell prompt:

true && echo everything is fine
echo $?
false && echo everything is fine
echo $?
false ; echo everything is fine
echo $?
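
The same behaviour can be seen from a makefile. A minimal sketch, assuming
the default /bin/sh and no special .SHELLFLAGS:

-8<--
stops:
	false && echo everything is fine

continues:
	false ; echo everything is fine
-8<--

"make stops" fails because the exit status of the && list is the failure
from false, while "make continues" goes on happily because the exit status
of the line is that of the last command after the semicolon. If you want
make to stop you have to write the recipe so that the failure is not thrown
away, for example by using && or set -e.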

regards Henrik



Re: Checking file generation for a test script

2017-06-19 Thread Henrik Carlqvist
On Mon, 19 Jun 2017 13:20:23 +0200
SF Markus Elfring  wrote:

> > It would have been generated if you would have called make with a
> > command like:
> 
> elfring@Sonne:~/Projekte/Bau> LANG=C make --no-builtin-rules -f
> ../rule-check2.make MOTD.log make: *** No rule to make target
> 'MOTD.log'.  Stop.
> 
> My pattern example does not work with the current make software in the
> way I hoped would be occasionally convenient.

No, it does not. Did you read my entire previous answer?

>>> and if you had a rule to build MOTD.log
...
>>> In fact, there is no rule at all for MOTD.log as your rule for
>>> MOTD%.log has a pattern which must match a non-empty string.

regards Henrik



Re: Checking application of dependencies from make rules without recipes

2017-06-18 Thread Henrik Carlqvist
On Sun, 18 Jun 2017 19:45:34 +0200
SF Markus Elfring  wrote:
> A rough approximation for further discussion:
> 
> i_compilation?=echo
> o_compilation?=echo
> a_generation?=$(o_compilation) 'Checked modules: '
> 
> parsing_c.cma: ast_c.cmo token_annot.cmo
>   $(a_generation) '$<' > $@
> 
> %.cmi: %.mli
>   $(i_compilation) '$<' > $@
> 
> %.cmo: %.ml %.cmi
>   $(o_compilation) '$<' > $@
> 
> includes.cmi: ast_c.cmo
> 
> 
> elfring@Sonne:~/Projekte/Coccinelle/20160205/parsing_c> LANG=C make
> --no-builtin-rules -f parsing-rule-check1.make make: *** No rule to make
> target 'ast_c.cmo', needed by 'parsing_c.cma'.  Stop.
> 
> 
> How do you think about such a test result?

I think the test shows that even though you have a pattern rule that could
be applied to build ast_c.cmo, that rule fails because ast_c.ml and/or
(ast_c.cmi or ast_c.mli) is missing. But that is only my guess; the true
cause could be found by running make -d.
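
Something along these lines (with your makefile and file names) usually
narrows it down, as the -d output shows which prerequisites make considered
for ast_c.cmo and which of them it could not find or build:

-8<--
LANG=C make --no-builtin-rules -f parsing-rule-check1.make -d 2>&1 | grep ast_c
-8<--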

I also think this might not be the right place to ask for support on how
to write Makefiles. This list is more intended to report found bugs in
make or suggest improvements.

regards Henrik



Re: Checking file generation for a test script

2017-06-18 Thread Henrik Carlqvist
On Sun, 18 Jun 2017 13:03:10 +0200
SF Markus Elfring  wrote:

> I have tried the following small script out together with the program
> “GNU Make 4.2.1-1.7” on my openSUSE Tumbleweed system.

That "script" seems like a makefile to me.
 
> my_compilation?=echo
> my_preparation?=cat
> footer?=MOTD.txt
> prepared_file?=MOTD.in
> 
> MOTD%.log: MOTD%.txt MOTD%.in
>   ${my_compilation} "$<: $$(cat ${prepared_file} ${footer})" > $@
> 
> ${prepared_file}: MOTD.draft
>   ${my_preparation} $< > $@
> 
> 
> elfring@Sonne:~/Projekte/Bau> my_message=MOTD.log && rm -f
> ${my_message}; touch MOTD.draft MOTD.txt && LANG=C make
> --no-builtin-rules -f ../rule-check2.make && LANG=C ls -l MOTD.in
> ${my_message} cat MOTD.draft > MOTD.in
> ls: cannot access 'MOTD.log': No such file or directory
> -rw-r--r-- 1 elfring users 6 Jun 18 12:56 MOTD.in
> 
> 
> Now I wonder why the log file is not generated by this build approach at
> the end. Where is my knowledge and understanding incomplete for this
> software situation?

Your MOTD.log is not generated because you did not tell make to generate
it. It would have been generated if you had called make with a command
like:
LANG=C make --no-builtin-rules -f ../rule-check2.make MOTD.log
and if you had a rule to build MOTD.log.

It would also have been generated if your makefile had contained a default
goal which caused that file to be generated.

The default goal in your makefile is ${prepared_file}, which in this case
is MOTD.in, so that is what gets generated.

There is no default rule for MOTD.log; your makefile knows how to generate
every MOTD?*.log but it does not know if any of them should be generated
by default. In the documentation at
https://www.gnu.org/software/make/manual/html_node/Rules.html it says:

"a target that defines a pattern rule has no effect on the default goal."

In fact, there is no rule at all for MOTD.log as your rule for MOTD%.log
has a pattern which must match a non-empty string.
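
If you want a plain "make" call to build MOTD.log, one possible sketch is
to add an explicit rule for it and let a new first target become the
default goal (this reuses your recipe unchanged):

-8<--
all: MOTD.log

MOTD.log: MOTD.txt ${prepared_file}
	${my_compilation} "$<: $$(cat ${prepared_file} ${footer})" > $@
-8<--

An explicit rule also avoids the problem that the MOTD%.log pattern cannot
match MOTD.log with an empty stem.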

regards Henrik



Re: Unlink failure on abort

2017-06-16 Thread Henrik Carlqvist
On Fri, 16 Jun 2017 01:16:09 +0300
Orgad Shaneh  wrote:

> > On Thu, Jun 15, 2017 at 10:33 PM, Orgad Shaneh 
> > wrote:
> >> mingw32-make[1]: *** Deleting file 'obj/main.o'
> >> mingw32-make[1]: unlink: obj/main.o: Permission denied

> Another thing I've noticed is that make (on Windows/MinGW) leaves behind
> suspended processes when it is aborted. Maybe one of these processes
> holds the file and prevents it from being deleted?
> 
> If you can suggest ways to debug and fix this problem, I'll be thankful.

I have seen similar problems when running make in mingw in a virtualized
qemu environment, compiling on network drives shared by some versions of
samba. For me, possible workarounds have been to switch the working
directory to a local drive of the virtualized machine or to switch to an
older version of samba. I kept an old samba version 2.2.10 around only for
the purpose of working together with qemu and compilations. These problems
seem to have disappeared about a year ago with the latest versions of qemu
and/or samba.

regards Henrik



Re: Are prerequisites made in deterministic order when parallelism is disabled?

2017-06-14 Thread Henrik Carlqvist
On Wed, 14 Jun 2017 11:25:35 -0400
Kyle Rose  wrote:

> The right answer is always to write your makefiles so the rules execute
> in the required order whether run in parallel or not. Relying on
> whichever arbitrary ordering GNU make employs, even if that behavior is
> stable(either historically or by design), is likely to result in sadness
> at some point, such as when someone calls into your makefile recursively
> from their own -j build.

Sometimes I write Makefiles with the performance of parallel builds in
mind, and those Makefiles get their prerequisites ordered by approximately
how much time each prerequisite needs to be built. Let me give you an
example of such a rule:

final_target: 3_hour_prerequisite 2_hour_prerequisite 1_hour_prerequisite
	do_something_quickly

With the above example, calling make without -j will take about 6 hours
for all prerequisites to build. On a machine with 3 or more cores, calling
make with "-j 3" will finish all prerequisites in 3 hours, when the most
time consuming one is done.

But what if we have a build machine with only 2 CPU cores? In that case,
calling make with "-j 2" will be the fastest way to compile, and it will
still be done in 3 hours. First the 3 hour job and the 2 hour job will be
started in parallel; after 2 hours one job will be finished and the
remaining 1 hour job will be started. After a total of 3 hours the 3 hour
job and the 1 hour job will finish at about the same time.

What would happen with "-j 2" if GNU Make changed the order in which
prerequisites are compiled? It might then start with the 1 hour job and the
2 hour job. The 3 hour job would not start until the 1 hour job was
finished, and the total build time would be 4 hours instead.

Even though the compile would complete successfully, I think that not
being able to depend on the job start order would cause a significant loss
of performance.
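
The effect is easy to try out on a small scale with a toy makefile where
the hours are replaced by seconds:

-8<--
final_target: 3_sec_prerequisite 2_sec_prerequisite 1_sec_prerequisite
	@echo all prerequisites done

3_sec_prerequisite: ; sleep 3
2_sec_prerequisite: ; sleep 2
1_sec_prerequisite: ; sleep 1
-8<--

Here "time make -j 3" takes about 3 seconds, and "time make -j 2" also
takes about 3 seconds as long as make starts the prerequisites in the
listed order; with the order reversed, "-j 2" would need about 4 seconds.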

I can also think of examples where non-parallel builds would benefit from
a deterministic build order of prerequisites even though none of the
prerequisites depend on each other.

regards Henrik



Re: Improvement suggestion: listen to signals to adjust number of jobs

2017-05-23 Thread Henrik Carlqvist
On Tue, 23 May 2017 22:31:14 +0300
Eli Zaretskii <e...@gnu.org> wrote:

> > From: Henrik Carlqvist <hc...@poolhem.se>
> > My intention has been to make the patch work with
> > both Unix and Windows, but unfortunately I have no Windows machine to
> > test with. I'm not even sure if Windows supports bsd signals. If not,
> > my changes to w32/w32os.c should better be undone.

> There are no signals on Windows, not in the Posix sense.  Certainly
> there is nothing similar to SIGUSR2 there.

Thanks for the information! To this message I attach an updated patch
where no attempt is made to modify w32os.c.
 
> If we want to support this feature on Windows, we need to use some
> other mechanism, like maybe the Ctrl-BREAK handler?

Unfortunately I will not be able to give much help in suggesting how this
should be implemented on Windows, but please feel free to improve my patch
so that it also becomes useful on Windows! If Windows lacks SIGUSR* I
suppose that make does not support toggling of debug output on Windows
either?

regards Henrik


(attachment: signal_num_jobs3.patch)


Improvement suggestion: listen to signals to adjust number of jobs

2017-05-23 Thread Henrik Carlqvist
Many times after starting make I have wished that I had started it with
more parallel jobs than I did. A few times I have also wished that I had
started make with fewer parallel jobs.

The attached patch allows you to send SIGUSR2 to your top level make
process to add one more parallel job.

Once a SIGUSR2 has been received, the functionality of SIGUSR1 also
changes from toggling debug output to decreasing the number of jobs by 1.
No running job will be killed, but when the next target is finished one
less job will be started.

If make was started with parallel jobs, SIGUSR2 will immediately increase
the number of jobs by one. If make was started without -j, SIGUSR2 will not
cause make to work in parallel until the current target is finished.
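
As a usage sketch, assuming the patch is applied and that the oldest make
process found by pgrep is the top level one:

-8<--
make -j 4 &
kill -USR2 $(pgrep -o make)   # now up to 5 parallel jobs
kill -USR2 $(pgrep -o make)   # now up to 6 parallel jobs
kill -USR1 $(pgrep -o make)   # back to 5 when the next target finishes
-8<--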

The patch was written against the latest from git, but it also works with
the latest stable version 4.2.1. My intention has been to make the patch
work with both Unix and Windows, but unfortunately I have no Windows
machine to test with. I'm not even sure if Windows supports BSD signals. If
not, my changes to w32/w32os.c had better be undone.

I hope that you will find my patch useful and that it somehow will make it
into upcoming stable releases of make.

Please feel free to discuss improvements of the patch, I have subscribed
to this mailing list.

regards Henrik


(attachment: signal_num_jobs2.patch)