RE: New conditional assignment facility

2024-01-29 Thread rsbecker
On Monday, January 29, 2024 5:18 AM, Edward Welbourne wrote:
>rsbec...@nexbridge.com (27 January 2024 23:45) wrote:
>> My take on it is that +:= (because of the : ) means that you have to
>> resolve everything at that point.
>
>Surely it could equally mean: fully expand the right-hand side immediately,
>append to the left-hand variable, preserving its type if set, else making it
>immediate.  Then, if it was previously deferred-evaluation, its prior value
>remains deferred-evaluated - only the part appended is evaluated right away.

I think we are saying the same thing.




RE: New append operators (was: Re: New conditional assignment facility)

2024-01-28 Thread rsbecker
On Sunday, January 28, 2024 5:36 PM, Paul Smith wrote:
>On Sat, 2024-01-27 at 17:45 -0500, rsbec...@nexbridge.com wrote:
>> My take on it is that +:= (because of the : ) means that you have to
>> resolve everything at that point.
>
>Yes, I understand what you are saying.  The question is, is that the right
>conception?  Here's another way to look at it:
>
>FOO +:= bar
>
>can be interpreted as working like this:
>
>FOO := $(FOO) bar
>
>which is what you and others are arguing for.  Or it can be interpreted as
>working like this:
>
>__FOO := bar
>FOO += $(__FOO)
>
>(where the value of __FOO is immutable).  This is what I was thinking.

I do not think the above two are equivalent. FOO += $(__FOO) will not be
resolved until FOO is referenced. __FOO is immutable on the basis of the :=,
but += is a lazy instantiation, by definition. Changing that semantic would
have fairly broad impacts.
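
For illustration, a minimal sketch of that point (mine, not from the thread):
+= into a recursive FOO stores the reference to __FOO unexpanded, so the :=
snapshot only helps if __FOO is never touched again.

  FOO = start
  __FOO := bar          # snapshot taken here
  FOO += $(__FOO)       # FOO is recursive, so the literal text "$(__FOO)" is appended
  __FOO := changed      # a later := still changes what FOO will expand to
  $(info $(FOO))        # prints "start changed", not "start bar"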

>My argument is, if you want to write "FOO := $(FOO) bar" you can just write
>that today: you don't need "+:=" (you will have an extra leading space if FOO
>was not set already but that's unlikely to matter much).
>
>But if you want the second form it's tricky to do.
>
>It can also be argued that the second form is closer to the behavior of "+="
>since "+=" keeps the pre-existing type of the variable rather than changing
>it.  Although of course it's still special in some respects.




RE: New conditional assignment facility

2024-01-27 Thread rsbecker
On Saturday, January 27, 2024 4:14 PM, Paul Smith wrote:
>On Sat, 2024-01-27 at 15:52 -0500, rsbec...@nexbridge.com wrote:
>> > I'm interested in peoples' opinions about which of these two
>> > implementations they would feel to be more "intuitive" or "correct".
>> > Also please consider issues of "action at a distance"
>> > where a variable is assigned in one makefile and appended to in some
>> > other makefile, potentially far away.
>>
>> Intuitive reading of this would seem that "bar=2 2 3" is better, as
>> +:= should force complete resolution of the string applied to bar,
>> not partial resolution of foo keeping an instance of $(foo) for
>> resolution later.
>
>Hm, maybe I'm just weird.  Or maybe I chose a poor example.
>
>What if the example was like this:
>
>  foo_ARGS = -a
>  bar_ARGS = -b
>  ARGS = $($@_ARGS) -x
>
>  all: foo bar
>  foo bar: ; cmd $(ARGS) $@
>
>  ARGS +:= $(shell gen-args)
>
>where the "gen-args" program outputs "-z".
>
>and now the value of ARGS would either be "$($@_ARGS) -x -z" using the method I
>was suggesting, or it would be " -x -z" using the alternative method where ARGS
>was converted into a simple variable.
>
>So using the first method make would run:
>
>  cmd -a -x -z foo
>  cmd -b -x -z bar
>
>and using the alternative method make would run:
>
>  cmd  -x -z foo
>  cmd  -x -z bar
>
>Is it still more intuitive?  Maybe the answer is "well, just don't do that"
>:).  Or maybe "+!=" could be used in this specific situation, but you get the
>idea.
>
>The problem is that when you write the assignment +:= it's not always so
>simple to know what the side-effects might be because you don't know what the
>current behavior of the variable is, and it might change in the future.  Of
>course, that's always a problem with makefiles.

My take on it is that +:= (because of the : ) means that you have to resolve
everything at that point. So whatever ARGS might be when +:= is encountered, it
is resolved completely. The result of ARGS prior to applying +:= could be
anything, but +:= appends -z to whatever its current value is. The problem is
that "foo bar: ; cmd $(ARGS) $@" is not resolved until the end of parsing, so
the value ARGS has at that later point (after the +:=) is what gets resolved
and applied to the command. If that is not correct, maybe do not do this.
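
As a sketch with today's operators (gen-args is hypothetical, as in Paul's
example, and is assumed to print -z), fully resolving ARGS at the assignment is
exactly what drops the per-target $($@_ARGS) piece:

  foo_ARGS = -a
  bar_ARGS = -b
  ARGS = $($@_ARGS) -x
  ARGS := $(ARGS) $(shell gen-args)   # $@ is empty here, so -a/-b can never reappear

  all: foo bar
  foo bar: ; cmd $(ARGS) $@           # runs "cmd  -x -z foo" and "cmd  -x -z bar"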




RE: New conditional assignment facility

2024-01-27 Thread rsbecker
On Saturday, January 27, 2024 3:33 PM, Paul Smith wrote:
>On Mon, 2024-01-22 at 08:15 -0500, Paul Smith wrote:
>> Let's step back and I'll try to think more clearly about this.
>
>Sorry for the delay in replying.
>
>I can see that I was thinking about this one way but there's another way to
>look at it that I didn't think of.  We are talking only about (a) append
>operators _other than_ +=, and (b) situations where the variable already has
>a value when the append operator is parsed.
>
>In all cases we would expand the right-hand side of the variable according to
>the assignment operator: e.g., if it were +:= we would immediately expand the
>RHS.
>
>My proposal was to keep the type of variable (recursive vs. simple) the same
>and then "fix up" the result of the RHS so it could be appended in a correct
>way.  In this conception the operator applies ONLY to the RHS value, and will
>set the type of the variable only if the variable doesn't already exist, as a
>side-effect.
>
>The other way to think about it is that the assignment operator overrides the
>type of the variable as well: we would re-evaluate the LHS value then append
>the RHS.  E.g., if it were +:= we would immediately expand the RHS as well and
>change the type to simple.  In this conception the operator resets the type of
>the variable as its primary function, not just as a side-effect, and modifies
>not just the RHS value but also (possibly) the LHS value as well.
>
>An example to make this clearer:
>
>Given:
>
>  foo = 1
>  bar   = $(foo)
>  foo = 2
>  bar +:= $(foo)
>  foo = 3
>  bar  += $(foo)
>
>$(info bar=$(bar))
>
>In my original version, the result would be that "bar" is a recursive variable
>with the value "$(foo) 2 $(foo)" and the output of the info function would be
>"bar=3 2 3".
>
>In the alternative version, the result would be that "bar" is a simple
>variable with the value "2 2 3" and the output of the info function would
>obviously be "bar=2 2 3".
>
>I'm interested in peoples' opinions about which of these two implementations
>they would feel to be more "intuitive" or "correct".
>Also please consider issues of "action at a distance" where a variable is
>assigned in one makefile and appended to in some other makefile, potentially
>far away.
>
>
>This discussion has really helped me crystallize the differences and should
>make the resulting documentation, if/when it's written, much more clear so I
>definitely appreciate it!

Intuitive reading of this would seem that "bar=2 2 3" is better, as +:= should 
force complete resolution of the string applied to bar, not partial resolution 
of foo keeping an instance of $(foo) for resolution later.

Just my $0.02.
Randall




RE: Handling references to invalid variables

2023-02-20 Thread rsbecker
On Monday, February 20, 2023 2:50 PM, Paul Smith wrote:
>On Mon, 2023-02-20 at 14:20 -0500, rsbec...@nexbridge.com wrote:
>> I think you need to be able to return to a compatible mode for some
>> users. Having an option like --undefined-variables=warn or
>> --undefined-variables=error (the default) or --undefined-variables=ignore
>> would be prudent.
>
>Hm.  I'm not sure about the "ignore" option.  I'm not a fan of adding lots of
>options, it just makes things that much more complicated to use and test.  Is
>it important to allow these checks to be completely ignored?  If the default
>is to leave them as warnings they won't cause make to fail or change what it
>will build.

I added =ignore as an afterthought. But build engineers tend to like things to
stay the way they are for presentation purposes. If new messages show up in
build streams, there will be questions and inquiries for centuries (cryptic Q
reference). =ignore would allow the output to stay the same as it is now.




RE: Handling references to invalid variables

2023-02-20 Thread rsbecker
On February 20, 2023 2:11 PM, Paul Smith wrote:
>In the next major release (not the upcoming 4.4.1 release but the one after
>that) I plan to implement notifying users of invalid variable references; for
>example variable names containing whitespace.
>
>So, a makefile like this for example:
>
>  all: ; echo $(cat foo)
>
>will notify the user about the illegal variable reference "cat foo", instead
>of silently expanding to the empty string.
>
>My intent is that this is always enabled, not requiring an extra option like
>--warn-undefined-variables, since it's never legal to have a variable name
>containing whitespace.
>
>The question is, should this notification be a warning?  Or should it be a
>fatal error?  Originally I thought it should be fatal but now I'm leaning
>towards making it a warning, at least for a release or two, because I worry
>that makefiles that have these references silently and innocuously expanding
>to empty strings might suddenly stop working completely.

Having worked on (insert large number) of build engines, there are likely users 
who depend on bad or empty strings behaving in a particular way (prior to this 
feature). I like your idea of turning this detection on by default from a 
personal standpoint; however, I think you need to be able to return to a 
compatible mode for some users. Having an option like 
--undefined-variables=warn or --undefined-variables=error (the default) or 
--undefined-variables=ignore would be prudent.
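
As a sketch (not from the thread) of the kind of makefile that silently
"works" today and would start producing new diagnostics - the case an =ignore
mode would keep quiet:

  # Today the invalid reference expands silently to nothing and the log is unchanged:
  all: ; @echo building $(cat extra.lst)   # author meant $(shell cat extra.lst)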

Just my $0.02
--Randall

--
Brief whoami: NonStop developer since approximately
UNIX(421664400)
NonStop(2112884442)
-- In real life, I talk too much.






RE: shell function: confusing error when shebang incorrect

2022-10-09 Thread rsbecker
On October 9, 2022 11:16 AM, Kirill Elagin wrote:
>There is a bit of unexpected behaviour in the `shell` function (due to the
>undocumented fact that it sometimes avoids actually calling the
>shell):
>
>```
>$ cat Makefile
>FOO:=$(shell ./foo.sh)
>
>$ cat foo.sh
>#!/bin/ohno
>echo hi
>
>$ make
>make: ./foo.sh: No such file or directory
>make: *** No targets.  Stop.
>
>$ ./foo.sh
>zsh: ./foo.sh: bad interpreter: /bin/ohno: no such file or directory
>```
>
>The “no such file or directory” error from Make is very confusing and
>unexpected in this situation, especially given that it is not the error that
>the shell would return.
>
>The reason for it is that, while undocumented, the `shell` function will try
>to avoid calling the shell in simple cases like this one and will directly
>exec the command.  However, the error returned by `execve` is ambiguous:
>
>> ENOENT  The file pathname or a script or ELF interpreter does not exist.
>
>Shells (bash, zsh) disambiguate it themselves, i.e. there is extra logic for
>the case of ENOENT, while Make simply fails with what it sees, resulting in a
>puzzling error message.

The interpretation of a bad shebang is platform-specific and has no single 
consistent interpretation. Some platforms will report EPERM, EACCES, or
ENOENT. The error is not necessarily under bash or zsh control but could come 
from exec[vpe] depending on the platform. I am not sure a good fix is practical 
for this situation. A similarly ambiguous problem happens when the shebang is 
delegated to /bin/env for resolution instead of bash/zsh.
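
One workaround sketch (untested beyond the common cases): include a shell
metacharacter in the command so make's fast path is skipped and a real shell
runs the script, producing the clearer "bad interpreter" message:

  # The trailing ';' prevents make from exec'ing ./foo.sh directly.
  FOO := $(shell ./foo.sh ;)
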
-Randall




RE: [PATCH] Port to 32-bit long + 64-bit time_t

2022-10-03 Thread rsbecker
On October 3, 2022 3:56 PM, Paul Eggert wrote:
>On 10/3/22 09:12, rsbec...@nexbridge.com wrote:
>> This happens in AR and TAR also, which appear to be limited to 32-bit
>> time_t on some platforms. It's a struggle but we have some time to deal
>> with it.
>
>Yes, I've been part of an ongoing effort to make GNU apps Y2038-safe, even on
>32-bit platforms. This is why I've been submitting patches to GNU Make. Many
>other GNU apps are also affected; core apps are mostly fixed already.
>
>It's not just the year 2038, though that's the most pressing.
>Traditional tar format uses an unsigned 33-bit timestamp and stops working
>after 2242-03-16 12:56:31 UTC. ar format uses a 12-digit timestamp and stops
>working after 318857-05-20 17:46:39 UTC. Of course these are far-future
>problems - except that people or programs might use 'touch' to create
>far-future problems today.
>GNU tar has long had format fixes (gnu and pax formats) for this. GNU ar
>doesn't have a fix but that's less important. Etc.
>
>In looking through old dev histories it appears Paul pushes changes every now
>and then, so I'll wait until he's pushed his next batch of changes, which
>will presumably include some timestamp-related fixes, before looking into
>this again.

Thanks. I'll be keeping an eye out for those.




RE: [PATCH] Port to 32-bit long + 64-bit time_t

2022-10-03 Thread rsbecker
On October 2, 2022 8:07 PM Paul Eggert wrote:
>On 10/2/22 14:09, Paul Smith wrote:
>
>> I applied these changes but made a few mods:
>
>Thanks. I assume you'll push this to savannah at some point? I had been working
>on merging with your more-recent changes to GNU Make, and it wouldn't hurt to
>have another pair of eyes look at this finicky business once you've published
>your mods.
>
>
>> Is there ever a system anywhere that can't represent any remotely
>> useful year value using an int (even if you add
>> 1900 to it :) )?
>
>My use case was if someone sets a file timestamp to something oddball and then
>'make' mishandles the result. This sort of thing happens more often than one
>would like.
>
>As it happens today I fixed a glitch in an oddball part of the GNU Emacs build
>process that uses "TZ=UTC0 touch -t 19700101" to set file timestamps to 0
>and thus fool 'make'. (This was not my idea! and doesn't GNU 'make' treat a
>zero file timestamp specially? but I digress.) It's not a stretch to think of
>someone using 'touch' to set file timestamps in the far future, for a similar
>reason.
>
>For example, here's what happens on a filesystem that supports 64-bit
>timestamps:
>
>   $ TZ=UTC0 touch -d @67767976233532800 foo
>   $ TZ=UTC0 ls -l foo
>   -rw-rw-r-- 1 eggert faculty 0 Jan  1  2147483648 foo
>
>Here the year is 2**31, which works because tm_year is 2**31 - 1900 which
>fits in 32-bit int even though 2**31 does not fit. This works with GNU ls,
>which uses strftime which does things correctly. It won't work with GNU
>Make's C code that simply adds 1900 to tm_year.
>
>Come to think of it, if file_timestamp_sprintf simply used strftime instead
>of sprintf that would be a more-straightforward fix (this is part of the
>"finicky business" I was talking about earlier...).

At least you get bigger numbers with an unsigned time_t. On my platform, I get 
stuff like May 1954. This happens in AR and TAR also, which appear to be 
limited to 32-bit time_t on some platforms. It's a struggle but we have some 
time to deal with it.

-Randall




RE: [PATCH] Port to 32-bit long + 64-bit time_t

2022-10-02 Thread rsbecker
On October 2, 2022 6:13 PM, Paul Smith wrote:
>On Sun, 2022-10-02 at 17:48 -0400, rsbec...@nexbridge.com wrote:
>> > I understand that this type of reuse makes things easier for the
>> > gnulib folks, but for GNU make I'm not ready to drop support for
>> > platforms that are not POSIX enough to run configure, and that don't
>> > already have "make" available.  So gnulib modules that require them
>> > aren't available to GNU make (at least, not without modifications).
>>
>> Thank you for this comment. Gnulib is not available on the platform I
>> maintain because of its high number of dependencies (including gcc
>> itself, which cannot build on HPE NonStop). Keeping dependencies down
>> is helpful for those outside of the explicit gcc support base.
>
>Really?  I'd be pretty surprised if gnulib modules require GCC.  In my
>experience gnulib-enabled software can be used with lots of compilers
>including MSVC and Clang, plus others that are less well-known of course.
>One of the main points of gnulib is to hide compiler differences (the other
>being to hide OS system differences).
>
>Gnulib does require a C compiler which is at least notionally C99 conforming,
>though.

I was thinking of glibc, not gnulib. My bad. Still, recent versions of gnulib
have been problematic because of configure.




RE: [PATCH] Port to 32-bit long + 64-bit time_t

2022-10-02 Thread rsbecker
On October 2, 2022 5:24 PM, Paul Smith wrote:
>On Thu, 2022-09-22 at 11:00 -0700, Paul Eggert wrote:
>> (This would not be needed if 'make' used Gnulib's inttypes module.)
>
>I would be happy to use it, if using it didn't import a ton of other things
>that require POSIX tools AND an already-working make program.
>
>I understand that this type of reuse makes things easier for the gnulib folks,
>but for GNU make I'm not ready to drop support for platforms that are not
>POSIX enough to run configure, and that don't already have "make" available.
>So gnulib modules that require them aren't available to GNU make (at least,
>not without modifications).

Thank you for this comment. Gnulib is not available on the platform I maintain 
because of its high number of dependencies (including gcc itself, which cannot 
build on HPE NonStop). Keeping dependencies down is helpful for those outside 
of the explicit gcc support base.

-Randall




RE: Deprecating OS support

2022-10-01 Thread rsbecker
On October 1, 2022 3:01 PM, Paul Smith wrote:
>On Sat, 2022-10-01 at 14:02 -0400, rsbec...@nexbridge.com wrote:
>> The ITUGLIB project team maintains a port of GNU Make for currently
>> supported HPE NonStop Guardian platforms. We do intend to port 4.4
>> when it is released. I am the official maintainer on the team, at
>> present. The HPE NonStop OSS (POSIX-compatible) platform port is
>> maintained by HPE Development as part of their "coreutils" project.
>> The latter's port tends to lag behind our port.
>
>I'm not familiar with the project names so can you be clear about which
>port(s) are used by HPE NonStop?  Do both of these efforts refer to the VMS
>port?

The NonStop ports would use __TANDEM not VMS.

>In specific, I'm looking at the "VMS" preprocessor variable and all the code
>ifdef'd using that, plus the extra source files such as vmsify.c, vmsjobs.c,
>etc.  Is this what is used by the ITUGLIB project?
>
>Does the NonStop OSS port use these as well, or does it configure itself as a
>standard POSIX-style application and not use the VMS-specific code?
>
>> We do intend to port 4.4 when it is released.
>
>A release candidate for 4.4 was made a week or so ago and a second one will be
>made most likely this weekend.  It would be great if someone could try those,
>that way we could resolve any issues found before the release, rather than
>after.
>
>However, I realize schedules don't always align so if it doesn't work that's
>OK.

Timing is a big part of it yes. We cannot use the configure process used by GNU 
Make for various resources, so must manually merge changes into our code base 
and manually modify config.h and Makefiles - at least for the Guardian ports. I 
used to maintain the OSS port but that moved over to a different group some 
years ago. Making it all work is a real pain and takes loads of time. We prefer 
to deal with the official release and make it work afterwards. Our changes are
likely not of general interest (handling weird non-POSIX file names, strange
system variables, process launch differences), so we have not submitted them,
but they are available on GitHub if anyone wants to look.




RE: Deprecating OS support

2022-10-01 Thread rsbecker
On October 1, 2022 1:03 PM, Paul Smith wrote:
>With the upcoming release (4.4) I intend to announce that I'll be removing
>support for the following platforms in the next, post-4.4 release:
>
>  - OS/2 (EMX)
>  - Amiga
>  - Native MS-DOS
>
>For the first two, I suspect that whatever support we currently have is broken
>and it's not really possible to build GNU make, even today, for those
>platforms.  I haven't received any input from anyone using those platforms in
>years and it's hard for me to believe that, with all the changes, everything
>still "just works".
>
>I'm less sure about native MS-DOS.  Maybe someone still uses this?
>
>If anyone thinks that any of these should be preserved and I should not
>announce deprecation for them, let me know.
>
>Obviously, versions of GNU make up to and including the upcoming 4.4 would
>still be available for those platforms (if they work).
>
>This would leave us with the following supported platforms, post 4.4:
>
>  - POSIX-based systems
>  - Windows
>  - VMS
>
>For VMS I haven't heard from anyone about it this release cycle so I don't
>really know what the status is, but it was actively supported in GNU make 4.3.

The ITUGLIB project team maintains a port of GNU Make for currently supported 
HPE NonStop Guardian platforms. We do intend to port 4.4 when it is released. I 
am the official maintainer on the team, at present. The HPE NonStop OSS 
(POSIX-compatible) platform port is maintained by HPE Development as part of 
their "coreutils" project. The latter's port tends to lag behind our port.

Regards,
Randall Becker





RE: [PATCH] Port to 32-bit long + 64-bit time_t

2022-09-21 Thread rsbecker
On September 21, 2022 8:58 PM, Paul Eggert wrote:
>On 9/20/22 18:48, rsbec...@nexbridge.com wrote:
>> I am sorry to say that the %j prefix is not safe or portable. There
>> are major production platforms where this is not supported. I work on
>> one of them.
>
>Which platform and version? I'd like to document this in Gnulib. Some other GNU
>programs are using %j now so it might make sense for you to file an enhancement
>request at some point, assuming %j is not supported even in the latest version
>of the platform.
>
>Anyway, thanks, revised GNU Make patch attached; it does not assume %j.

I'm going to back away from complaining on this. The platform I thought did not 
support it was HPE NonStop but it appears that I was looking at an old OS 
version. The current set of supported operating system releases support %j.

Sincerely,
Randall




RE: [PATCH] Port to 32-bit long + 64-bit time_t

2022-09-20 Thread rsbecker
On September 20, 2022 5:22 PM Paul Eggert wrote:
>Don't assume that time_t fits in long, as some hosts (e.g., glibc x86
>-D_TIME_BITS=64) have 32-bit long and 64-bit time_t.
>This fix uses C99 sprintf/scanf %jd and %ju, which is safe to assume
>nowadays.

I am sorry to say that the %j prefix is not safe or portable. There are
major production platforms where this is not supported. I work on one of
them. %l and %ll are supportable and can be selected with a configuration
knob that would be safer.
-Randall




RE: [bug #62654] Add z/OS support

2022-07-03 Thread rsbecker
On July 3, 2022 7:33 PM, Paul Smith wrote:
>I prefer to do the review via email rather than in the Savannah bug tracker
>which has pretty annoying markup.
>
>I would appreciate a somewhat comprehensive commit message or ChangeLog for
>this set of patches, at least explaining some of the less obvious 
>modifications.
>
>> +set -x
>> +if [ ! ${PLATFORM} = "OS/390" ]; then $CC $CFLAGS $LDFLAGS
>> +-L"$OUTLIB" -o "$OUTDIR/makenew$EXEEXT" $objs -
>> lgnu $LOADLIBES
>> +else
>>  $CC $CFLAGS $LDFLAGS -L"$OUTLIB" $objs -lgnu $LOADLIBES -o
>> "$OUTDIR/makenew$EXEEXT"
>> +fi
>
>We don't want set -x here.
>
>Is the point of this that the compiler on OS/390 doesn't allow the -o option to
>come after the objects?  If so we should just change the command line order on
>all systems; no need to check for platforms here.
>Other compilers don't care about the order in which -o comes so it can just
>come early for all of them.

I encountered the issue that the z/OS xlc compiler needs -o file ahead of all 
other objects on the command line. Definitely non-standard.

>> -# define __stat stat
>> +# define __gnustat stat
>
>I suppose OS/390 already defines __stat to something else?  All this code in
>glob.c and fnmatch.c is not really owned by GNU make, we import it from
>elsewhere.
>But it looks like we'll have to do something about this.

stat also actually needs stat64 to get past the UNIX 2038 rollover.

Regards,
Randall




RE: [bug #61594] suggest new $(hash ...) function

2021-12-02 Thread rsbecker
On December 2, 2021 4:20 AM, Boris Kolpackov wrote:
> rsbec...@nexbridge.com  writes:
> 
> > Sadly, the import restrictions do not distinguish between message
> > digests and cryptography [...]
> 
> You seem to be quite knowledgeable on the matter so can you provide one
> concrete example of where one jurisdiction restricts export to another of,
> say, an SHA-1 implementation?

FWIW: My experience comes from a few areas: my company sells software
requiring an Export Control Classification Number because of the set of
message digests we use, and I am a platform maintainer for OpenSSL and git on
two heritage platforms.

Google is a wonderful thing:
https://en.wikipedia.org/wiki/Restrictions_on_the_import_of_cryptography

The link is non-specific on which cyphers are prohibited, probably because
that changes without notice and I'm not sure Wikipedia is accessible from
all of those locations. Looking at the page history is actually informative
and perhaps relevant to the question.




RE: [bug #61594] suggest new $(hash ...) function

2021-12-02 Thread rsbecker
On December 2, 2021 3:44 AM, Edward Welbourne wrote:
> > My first counter-argument comes from the "$(shell git hash-object obj)"
> > suggestion which begs the question: if git, which relies heavily upon
> > SHA-1, is available, doesn't that mean SHA-1 is also natively
> > available? I'm not aware of git being restricted in any jurisdictions.
> 
> Seems unlikely, given that it's considered insecure.
> FIPS doesn't allow use of it.

Again, as I previously responded, using git was an example, not a required
solution. You can use openssl, openssh, or gpg to do the same thing. As of a
few versions ago, git can use SHA256 instead of SHA1.




RE: GPL Interpretation on load [Was: [bug #61594] suggest new $(hash ...) function]

2021-12-01 Thread rsbecker
On December 1, 2021 9:41 AM, Eli Zaretskii wrote:
> To: rsbec...@nexbridge.com
> Cc: bug-make@gnu.org; bo...@kolpackov.net
> Subject: Re: GPL Interpretation on load [Was: [bug #61594] suggest new
> $(hash ...) function]
> 
> > Date: Wed, 1 Dec 2021 09:33:22 -0500
> >
> > > The test doesn't check that the library is under GPL, it tests that
> > > it's "GPL-compatible", which means it's Free Software.  GNU Make
> > > doesn't want to load non-free modules.
> >
> > That is understood. Is this an official GNU Make policy? It is not
> > specified that way in GPL. Has the GNU Make team modified their copy of
> > the GPL license? It is not indicated as a modified version.
> 
> That's the official policy of the whole GNU project.  GCC does the same, as
> does Emacs and others.

It might be policy, but it is not explicitly quantified in the GPLv3. I was
looking for clarification. Thanks.




RE: [bug #61594] suggest new $(hash ...) function

2021-12-01 Thread rsbecker
On December 1, 2021 9:42 AM, anonymous wrote:
> These are all good and useful points, thanks. However, some counter-
> arguments:
> I tried to be careful to distinguish "cryptographic" from "low-collision-rate-
> hash" in the original description because I absolutely do not want to
> "introduce cryptography to GNU make". It's unfortunate that the two
> concepts tend to be implemented by the same algorithms and subject to the
> same license and export rules but they remain logically different. Thus, while
> accepting that things like export restrictions might need to be considered, I
> suggest we elide terms like 'cryptography' from this discussion because it's
> just about a hash.
> 
> My first counter-argument comes from the "$(shell git hash-object obj)"
> suggestion which begs the question: if git, which relies heavily upon SHA-1,
> is available, doesn't that mean SHA-1 is also natively available? I'm not
> aware of git being restricted in any jurisdictions. It's certainly not my
> area of expertise but I've never heard of export restrictions being an issue
> in git which seems to have spread everywhere.
>
> Second counter-argument: hashing algorithms cover a spectrum from original
> checksum to the latest super-secure hash whatever that is today. GNU make
> itself must have a hashing function built in - after all, you can't have a
> hash table without a hash function. So it seems likely that there's a sweet
> spot on this spectrum, some function which is both unrestricted and has a
> sufficiently low collision rate, to do the job.
>
> However, having made the suggestion I'm happy to let Paul dispose of it as
> he likes. For my own purposes I'll probably look at the loadable module
> since the shell is creaky and slow.

I provided git as an example, not as a requirement. Anything could be used,
even something that the local team may have written to perform their hash.
Even a very old SWID might work, which I think is not subject to restrictions
anymore. If git is available (somehow), it is not the problem of the GNU Make
team but of those importing it. I'm sorry for mentioning git, but it is only
an example, and a red herring, apparently.

Sadly, the import restrictions do not distinguish between message digests and 
cryptography since MDs are used for one-directional password encryption. Hash 
functions are different than message digests. Some message digests are hashes, 
but not all hash functions are message digests. f(x) = x % 113 is (arguably I 
suppose) not a message digest. Similarly, not all message digests are 
bi-directional cryptographic functions - but I don't think governments have all 
caught up with that point.




RE: GPL Interpretation on load [Was: [bug #61594] suggest new $(hash ...) function]

2021-12-01 Thread rsbecker
On December 1, 2021 9:25 AM, Eli Zaretskii wrote:
> > Date: Wed, 1 Dec 2021 09:09:55 -0500
> > Cc: bug-make@gnu.org, bo...@kolpackov.net
> >
> > On December 1, 2021 9:06 AM, Tim Murphy wrote:
> >
> > 
> > > -load $(XTRA_OUTPUTDIR)/hash$(XTRA_EXT)
> >
> > This thread brings up a question. The load function checks for GPL
> > compatibility.
> >
> >   /* Assert that the GPL license symbol is defined.  */
> >   symp = (load_func_t) dlsym (dlp, "plugin_is_GPL_compatible");
> >   if (! symp)
> >     OS (fatal, flocp,
> >         _("Loaded object %s is not declared to be GPL compatible"),
> >         ldname);
> >
> > I am wondering why that is the case. A DLL that is loaded by GNU Make is
> > not necessarily subject to GPLv2 or GPLv3. GPLvx makes it clear that you
> > are subject to GPLvx if you include portions of the code from the project
> > under license. However, an external DLL that is loaded by GNU Make via
> > dlopen does not have to use any code from the code base. Using a published
> > API, which would be the function interface, has precedent for being
> > excluded from license enforcement - the UNIX kernel API is an example that
> > is purely public domain itself, while the individual header files are
> > subject to licenses.
> 
> The test doesn't check that the library is under GPL, it tests that it's
> "GPL-compatible", which means it's Free Software.  GNU Make doesn't want to
> load non-free modules.

That is understood. Is this an official GNU Make policy? It is not specified
that way in GPL. Has the GNU Make team modified their copy of the GPL license?
It is not indicated as a modified version.




GPL Interpretation on load [Was: [bug #61594] suggest new $(hash ...) function]

2021-12-01 Thread rsbecker
On December 1, 2021 9:06 AM, Tim Murphy wrote:


> -load $(XTRA_OUTPUTDIR)/hash$(XTRA_EXT)

This thread brings up a question. The load function checks for GPL 
compatibility.

  /* Assert that the GPL license symbol is defined.  */
  symp = (load_func_t) dlsym (dlp, "plugin_is_GPL_compatible");
  if (! symp)
    OS (fatal, flocp,
        _("Loaded object %s is not declared to be GPL compatible"),
        ldname);

I am wondering why that is the case. A DLL that is loaded by GNU Make is not
necessarily subject to GPLv2 or GPLv3. GPLvx makes it clear that you are
subject to GPLvx if you include portions of the code from the project under
license. However, an external DLL that is loaded by GNU Make via dlopen does
not have to use any code from the code base. Using a published API, which
would be the function interface, has precedent for being excluded from license
enforcement - the UNIX kernel API is an example that is purely public domain
itself, while the individual header files are subject to licenses.

So, my question is: why enforce the GPL beyond what would be a usual
interpretation of GPLv3, for external integrations that may have nothing
specifically to do with the Make code? Sure, if you include function.h then
yes you are subject to it, but you do not have to. This seems overly
aggressive and not really necessary. Is there an answer back in history that
might explain it?

Thanks,
Randall




RE: [bug #61594] suggest new $(hash ...) function

2021-12-01 Thread rsbecker
On December 1, 2021 9:06 AM, Tim Murphy wrote:
> On Wed, 1 Dec 2021 at 12:37, Edward Welbourne wrote:
>> rsbec...@nexbridge.com (1 December 2021 13:08) wrote:
>>> I would suggest that adding cryptography to GNU Make would limit its
>>> reach. There are jurisdictions where it is questionable to import
>>> software containing any cryptography. In addition, there are numerous
>>> tools for doing what you want. Something along the lines, for example,
>>> of:
>>>
>>> $(shell git hash-object obj)
>>>
>>> Is a simple function that is already supported by GNU Make without
>>> having to introduce cryptography. This would make a lot more sense to
>> me to keep hashing out of GNU Make.

>> +1.  It also leaves it to the make file author to decide which hash
>> function to use.  If make took charge of that decision, it would be
>> stuck with the hash selected for all time, since it would have no way of
>> knowing how the resulting hashes have been used by diverse different
>> make files.  An individual project's make files can make the transition
>> from using one hash to another, since it knows how it's been using the
>> hashes and how to ensure a clean transition when it comes to change the
>> hash function used.

> I have added such a function as a loadable library before - you might 
> consider that if you can't get it done another way.
> https://github.com/tnmurphy/extramake
>
> look at hash.c.  To try it :
>
> cd example && make -f example.mk
>
> I called the function siphash24 because that's what I used - and it's
> definitely not cryptographic.  I had similar needs such as generating unique
> target names from many inputs where I didn't want the target name to be of
> unlimited length or to have illegal characters.
>
> $(shell) was useless as it is far too slow.
>
> I would suggest having $(hash) and $()  - because then people who 
> care about the algorithm will be able to choose it.
>
>
> FYI loadable modules are quite cool:
> SOURCES:=proga.c progb.c
> OBJECTS:=$(SOURCES:.c=.o)
>
>
> include ../xtra.mk
> XTRA_OUTPUTDIR:=.
> XTRA_SOURCE:=..
> -load $(XTRA_OUTPUTDIR)/hash$(XTRA_EXT)
>
> LIBVERSION:=$(siphash24 $(SOURCES))
> LIBNAME:=prog_$(LIBVERSION).so

> $(LIBNAME): $(OBJECTS)
> 	cc -o $@ $^ -shared
>
> $(OBJECTS) : %.o : %.c
> 	cc -c -o $@ -fPIC $^
> include ../hash.mk
>
> The above will build the $(siphash24) function (if it's not there already) 
> and then use it.  

I am fine with using  loadable modules to add functionality to GNU Make. I will 
probably contribute a change here so that it works on other platform(s) at some 
point. I also get that $(shell) can be very slow on some platforms.

Randall




RE: [bug #61594] suggest new $(hash ...) function

2021-12-01 Thread rsbecker
On November 30, 2021 11:37 PM, anonymous wrote:
> To: psm...@gnu.org; bo...@kolpackov.net; bug-make@gnu.org
> Subject: [bug #61594] suggest new $(hash ...) function
> 
> URL:
>   
> 
>  Summary: suggest new $(hash ...) function
>  Project: make
> Submitted by: None
> Submitted on: Wed 01 Dec 2021 04:36:41 AM UTC
> Severity: 3 - Normal
>   Item Group: Enhancement
>   Status: None
>  Privacy: Public
>  Assigned to: None
>  Open/Closed: Open
>  Discussion Lock: Any
>Component Version: SCM
> Operating System: None
>Fixed Release: None
>Triage Status: None
> 
> ___
> 
> Details:
> 
> Historically there's been a certain amount of resistance to adding new native
> functions to GNU make but a few have come in lately so ... I wonder if the
> idea of providing a hashing function has been considered?
> 
> I'm not thinking of a cryptographically secure hash, just something that could
> be used as a convenient digital signature. The use case I'm thinking of in
> particular is the advanced topic of forcing rebuild on command-line changes,
> which of course requires stashing the previous command line for
> comparison.
> Unfortunately, in some cases command lines can be very long and ugly; in
> our case they're >2K chars apiece. Stashing these can be done, and in fact we
> do it, but it would be much nicer and simpler if a SHA-1 or similar signature
> could be stashed instead.
> 
> It seems relatively easy to implement and document, assuming the GNU
> hashing functions are available somehow. The question in my mind, if it was
> to go forward, would be whether to give it a specific name and nail down the
> algorithm, such as $(sha-3,$(data)), or a generic name like $(hash,$(data)).
> Historically every hashing algorithm has been superseded by a better one
> every few years, which argues for $(hash ...), but on the other hand it's not
> intended for security anyway so anything with a sufficiently infinitesimal
> collision rate would be fine and there might be value in being able to
> generate a known hash like SHA-1.
> 
> Just a thought. Close with prejudice if not interested.

I would suggest that adding cryptography to GNU Make would limit its reach. 
There are jurisdictions where it is questionable to import software containing 
any cryptography. In addition, there are numerous tools for doing what you 
want. Something along the lines, for example, of:

$(shell git hash-object obj)

is a simple expression that is already supported by GNU Make without having to
introduce cryptography. It would make a lot more sense to me to keep hashing
out of GNU Make.
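
A sketch of the bug report's stash-a-hash use case built on that approach (git
used purely as an example digest tool; the file names are hypothetical):

  COMPILE = $(CC) $(CFLAGS) -c -o foo.o foo.c
  CMDHASH := $(shell printf '%s' '$(COMPILE)' | git hash-object --stdin)

  # foo.o is rebuilt whenever the hash of its command line changes.
  foo.o: foo.c cmd-$(CMDHASH).stamp ; $(COMPILE)
  cmd-$(CMDHASH).stamp: ; rm -f cmd-*.stamp && touch $@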

Respectfully,
Randall




RE: [PATCH 3/3] Introduce $(compare ...) for numerical comparison

2021-11-10 Thread rsbecker
On November 10, 2021 12:19 PM, Jouke Witteveen wrote:
> To: psm...@gnu.org
> Cc: bug-make 
> Subject: Re: [PATCH 3/3] Introduce $(compare ...) for numerical comparison
> 
> On Mon, Nov 8, 2021 at 4:08 PM Paul Smith  wrote:
> >
> > On Fri, 2021-07-16 at 14:04 +0200, Jouke Witteveen wrote:
> > > +@item $(compare
> > > +@var{lhs},@var{rhs},@var{lt-part}[,@var{eq-part}[,@var{gt-part}]])
> >
> > Let me ask this: would it be better to use a format of:
> >
> >   $(compare <lhs>, <rhs>, <eq>[, <lt>[, <gt>]])
> >
> > Then the rule is, if the values are equal we get the <eq> part, if lhs
> > is less than rhs we get <lt>, and if lhs is greater than rhs we get
> > <gt>.
> >
> > If <gt> is not present then the invocation devolves to:
> >
> >   $(compare <lhs>, <rhs>, <eq>, <ne>)
> >
> > that is, the fourth arg is used for not equal.
> >
> > If <ne> is also not present then <eq> is used if the value is equal,
> > else it expands to the empty string.
> >
> 
> Cool, an alternative design that has something going for it! Here is why I
> like the original design better.
>
> The only real difference is in four-argument usage. This alternative trades
> the 'ordered' nature of the arguments for favoring equality testing. To some
> degree, equality testing is already possible through string-based equality
> testing, and even if you really want to do a numerical equality check, the
> original design allows you to do
>
>   $(if $(compare <lhs>,<rhs>,,they_are_equal,),<then-part>,<else-part>)
> 
> In fact, in the original design you could get away with just the three-
> argument version of $(compare) in combination with $(if), $(and) and $(or).
> This is not the case for the alternative design.
> 
> I also took a look at other languages. Nearly everywhere I could find three-
> way comparisons, the possible outcomes are listed as LT, EQ, GT, in that
> order. Fortran apparently had an "Arithmetic IF", that was also ordered as in
> the original design of $(compare).

I have a function in my own fork called $(vcompare), which is similar but does
comparison of version-like strings. It does not have the additional arguments,
which I think are better separated off into a control function like $(if)
instead of having then and else built into $(compare).
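
For example, with the proposed three-argument form only (the function is a
proposal, not in any released make), equality handling can be layered on with
$(if) rather than extra arguments:

  # Non-empty only when A equals B numerically, using the lt,eq,gt ordering.
  equal := $(compare $(A),$(B),,yes,)
  result := $(if $(equal),they are equal,they differ)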

Randall