On Thu, 13 Jun 2002, Richard Levitte - VMS Whacker wrote:

> The maintenance problem comes if there is anything special that needs
> to be done, because the change will be done in the above code as well
> as somewhere in the chain of calls to point.sh and mklink.pl.  And
> honestly, I prefer the control available through the various
> Makefile.ssl rather than the various $cryptn variables above (they're
> missing some header files, like pem2.h...).

I think I see the point. Let me look again. Perhaps there is a way to
simplify this.

> dkaufman> Links to these files are made by "make links" which is run by the
> dkaufman> Configure script. This, in turn, depends on the Makefile in each
> dkaufman> subdirectory. When an algorithm is excluded at configure time, the
> dkaufman> Makefile in that algorithm's directory is not created, and 
> dkaufman> "Make links" is not run.
> 
> I'm not sure how that is a problem.  This means (in your case) that
> you'll have a few zero-length files that you'll never use.  And?  And
> since you added changes to mklink.pl to remove the goal file before
> symlinking or copying it, it means that the files you will actually
> use will be there in the way they're meant, or?

The above code is _only_ for the algorithms that were excluded, so
mklink.pl is never run. Let me see if there is an alternative, where
we change the directories from which mklink.pl is run, rather than
duplicate its function by separate code.
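Something like the following is what I have in mind. It is only a
sketch: the directory names are examples, and the mklink.pl invocation
is modelled on what the per-directory Makefiles do, not taken from the
actual Configure code.

    # Run mklink.pl over the excluded algorithm directories as well,
    # instead of duplicating its copy/symlink logic in separate code.
    for dir in rc5 idea mdc2; do
      (cd "crypto/$dir" && perl ../../util/mklink.pl ../../include/openssl *.h)
    done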

> dkaufman> The header files for the excluded algorithms are not required by any
> dkaufman> of the compiled files, but the compilation stops because they are
> dkaufman> listed as dependencies (at least in crypto/evp/Makefile). I see three
> dkaufman> ways that the "include" problem could be solved. Running "make depend"
> dkaufman> takes care of this. Removing the header files as dependencies in
> dkaufman> the Makefile should work. I chose to copy the header files for the
> dkaufman> excluded directories into include/openssl. That way any changes in
> dkaufman> the header files will cause the ".c" file to be recompiled, but the
> dkaufman> overhead of running "make depend" is avoided.
> 
> You tell us that you end up initially with all header files in
> include/openssl/ as zero-length files.  That means that they are all
> present even if the contents are not.  How can the dependencies for
> available files fail (that's what you suggest, isn't it)?

The zero-length files were not always present. This may depend on
whether it was a release version or a snapshot. It may also depend on
the untar utility used. There are several different ones for DOS; I am
not sure if they all behave the same in regard to symbolic links.

> Oh, wait, you mention "recompiling".  Does that mean that you've
> configured it one way, compiled, and then reconfigured with some
> algorithms removed, and get bit because some .c files do not get
> recompiled properly?  I'm not surprised, I thought that it was common

No. I know to do a "make clean" or "make distclean" before compiling.
Perhaps I used the word "recompiling" in error.

> dkaufman> The test files for each excluded algorithm are still called by
> dkaufman> test/Makefile. Running "make depend" does not fix this problem. The
> dkaufman> test files are all designed to print "No [algorithm] support" if the
> dkaufman> algorithm was excluded at configure time. I see two possible fixes.
> dkaufman> One would be to alter test/Makefile in the Configure script, so that
> dkaufman> it no longer calls the testfiles for excluded algorithms. With this
> dkaufman> solution, "make test" would no longer give the warning that the
> dkaufman> algorithms were excluded. The warning seemed like a good safety check
> dkaufman> to me. I chose to copy the testfiles for the excluded algorithms to
> dkaufman> the test directory.
> 
> Hmm, wait, are you saying that such a warning is disturbing to the
> eye, or did the test sequence stop with some kind of error?  If it's
> cosmetic, perhaps you could suggest a better message that doesn't
> freak people out (if that's the trouble you anticipate)?

I didn't mean "warning" in terms of a compiler warning. I meant that it
was an alert to the person running "make test" that an algorithm was not
included in the build. If you don't put the testfile for the algorithm
in the test directory, then the output of "make test" won't indicate
that the library is missing one of the default algorithms.
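Concretely, the workaround amounts to something like this (the
algorithm names are only examples; the real list is whatever was
excluded at configure time):

    # "make links" never runs for the excluded directories, so copy
    # their test programs into test/ by hand.  Each stub then prints
    # its "No <alg> support" message when "make test" runs.
    for alg in rc5 idea; do
      cp "crypto/$alg/${alg}test.c" test/
    done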

> 
> dkaufman> > However, I did ask on openssl-dev, just a moment ago, if the 
> dkaufman> > automagic 'make depend' should be reenabled.  Could you test that 
> dkaufman> > such an option works properly?
> dkaufman> 
> dkaufman> "Make depend" works on DJGPP.
> 
> I assume it solves most problems that you've encountered with disabled
> algorithms, or did I misunderstand?

It solves the problem with the header files, but not with the testfiles.

> dkaufman> > I don't remember our conversation on this, so bear with me: does the 
> dkaufman> > call to cp not work?  If it does, is this really necessary?
> dkaufman> 
> dkaufman> The call to "cp" does work. Unarchiving the tar.gz file can cause zero
> dkaufman> byte files to be created. I don't see that with the current snapshot,
> dkaufman> but it was a problem with previous releases, which had symbolic links
> dkaufman> in the tar.gz file.
> 
> I may be confused by what you're saying...  it seems like you're
> saying that cp may have problems because of zero-length files
> generated when untarring the distribution.  I'm sure that's a
> misunderstanding on my part.

I meant that you might need to do "cp -f" rather than "cp", or else
make sure that the destination file is removed first.
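In shell terms, that is one of these two (the $from/$file naming
follows mklink.pl, where $from is the directory the links end up in;
the details are illustrative):

    # Either force the copy over a stale target ...
    cp -f "$file" "$from/$file"

    # ... or, more portably, remove the old target first (it may be a
    # dangling symlink or a zero-length file left over from untarring).
    # This has the same effect as moving the unlink up in mklink.pl.
    rm -f "$from/$file"
    cp "$file" "$from/$file"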

> dkaufman> I think it would be safest to include the line to 'unlink
> dkaufman> "$from/$file"',
> 
> I've no problem with that :-).
> 
> dkaufman> and if you want to use a call to "cp" do "cp -f".
> 
> Have you encountered cases where -f is needed?  I do not believe it's
> portable.  However, if the unlink statement is moved up (as you
> suggest), that problem should be solved.

That should take care of it.

> dkaufman> The one problem I still see is with point.sh. It works for
> dkaufman> DJGPP, but won't work for other systems without symbolic
> dkaufman> links. I don't know how to test for symbolic links in a
> dkaufman> shell script.
> 
> Is it '-fR' that you're referring to?  I agree...  Would it be possible
> to reimplement the recursive part in terms of sh (removes the need for
> -R), and then use rm to remove each present goal file before it's
> copied anew (removes the need for -f)?  It would probably be easier to
> implement in perl (meaning we get a point.pl and remove point.sh), and
> in that case, the arguments could be used properly so tricks like
> `cd ../../perlasm;pwd` aren't needed any more (I prefer to have
> relative paths, so I can move a tree at will, see).

Actually, now that I look at it again, it might be easier to just
change point.sh to use "ln" rather than "ln -s". This will work fine
on DJGPP, whose implementation of "ln" makes a copy. Will hard links
instead of symbolic links cause a problem on other platforms?
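
To answer my own question about testing for symbolic links in a shell
script: point.sh could also probe at run time. A rough sketch, with an
arbitrary temporary file name:

    # Try to create a symlink; fall back to plain "ln" (which on DJGPP
    # copies the file) if the result is not a real symbolic link.
    rm -f .sltest
    if ln -s Makefile.ssl .sltest 2>/dev/null && test -h .sltest; then
      LN="ln -s"
    else
      LN="ln"
    fi
    rm -f .sltest
    # ... then use $LN where point.sh currently hard-codes "ln -s".

The usual caveat with hard links is that both names share one inode, so
writing through either name changes the other; whether that is a problem
depends on how the linked files are treated afterwards.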

-- 
Doug Kaufman
Internet: [EMAIL PROTECTED]
