Hi all,

This is an interesting discussion, because there are definitely cases where you 
just want to give people a single module to run a particular executable. In our 
case there is a learning curve for our modules environment because of the 
hierarchy and our use of different toolchains. For some users this is already 
too much, and they do things that mess up their environment (loading X, Y, Z 
simultaneously and ignoring the Lmod warnings this generates). We've had a few 
tickets on this already, and the system is not even fully up yet!

As I understand it, the use case is really only for end-product applications 
(NAMD, GROMACS, OpenFOAM, git, ...), but for libraries the LD_LIBRARY_PATH case 
still stands because of people's linking requirements (netcdf, hdf5, 
openblas, ...). I think it's interesting to consider a hybrid module space, 
with one fork for end-product cases and one for a build environment. I can see 
the value of that at our site, particularly since we get to choose the 
toolchain with the best performance for an app rather than maintain multiple 
instances.

Alan

On 4 July 2015 at 01:42, Todd Gamblin <tgamb...@llnl.gov> wrote:
Fotis,

On 6/30/15, 4:04 AM, "Fotis Georgatos" <fo...@mail.cern.ch> wrote:

>
>Hi,
>
>Although static linking and rpath have their good use-cases, I'm pretty
>much glad that default eb/modules behaviour is reliance on
>LD_LIBRARY_PATH.

Meh.  LD_LIBRARY_PATH issues are one of the main problems we get from our
users.  Modules help because they simplify environment management, and I'm
not particularly opposed to modules.  They seem like a great way to get
things in your PATH, MANPATH, etc.

However, I am sick of having to load particular modules to run particular
programs.  If the program doesn't run when you invoke it, regardless of
the environment, I think it's broken.  Why do you expect the user to
remember whether it was linked with OpenMPI or mvapich, and load the right
one?

The HMNS concept is designed to help with this by loading all the libs you
don't want users to remember, but you can't make a hierarchy for ALL the
dependencies or it becomes unmanageable.  People get around this by
keeping the toolchain consistent and forcing particular codes to build
with particular versions of the ones that *aren't* in the HMNS.  You might
have the freedom to choose your MPI, compiler, etc., but what if you want a
new combination of MPI, compiler, fftw, and some dependency version?  Too
bad.  You still have to recompile, and in EB you would have to make a new
toolchain and tweak the versions in a lot of config files.

RPATH solves the launch issue by recording the dependencies that a program
needs *in the actual binary*.  You run it, it works the way you built it.
Done.  Doesn't matter how you ran it or what the user's environment state
looked like.

>Let me suggest an example here.
>
>On Jun 26, 2015, at 6:22 PM, Todd Gamblin <tgamb...@llnl.gov> wrote:
>> the HPC world, I find that to be a pretty rare case.  MPI doesn't even
>> have an ABI, so you're asking for trouble by trying to swap it in.  You
>> need something like the hierarchical modules to do that -- where you
>>swap
>> in/out entire trees.
>
>I am not sure I fully understand you, but I recall that since about v1.4
>of openmpi
>there is documented backwards compatibility and you can freely swap
>forward versions:
>https://www.open-mpi.org/software/ompi/versions/
>http://icl.cs.utk.edu/open-mpi/software/ompi/versions/ ## check this
>about 1.4.0, too!

Yes.  The MPI implementors are trying to do better, and I commend them for
trying to ensure sanity within a particular implementation.  Again,
though, you can't just swap in mpich for OpenMPI, even though their API is
*the same*.  Things will explode.  Yuck.  Again, having an HMNS helps with
this, but you have to rebuild a lot of stuff, so you still have to
recompile the world if you're testing whether a bug occurs in a particular
implementation, not just a version of one implementation.

MPI is not the only place where ABIs are a problem.  C++ libs like Boost
have ABI problems all over the place.  C++11 has a different ABI from
C++98.  There are two versions of libelf (RedHat's internal one and the
German one) that have different ABIs but the same API.  I think the
ParMetis guy changes not just his ABI but his API on a regular basis
(sigh).

All of this combined means that there is no "one true" software stack for
HPC.  With RPATH you have more flexibility -- you don't care that two
programs depend on different versions of some dependency, because they can
find their particular version just fine and no one has to worry about it.
But yes, you have to rebuild or use patchelf to change it.  That is the
tradeoff.  The question is really which set of problems you like better.
With LD_LIBRARY_PATH, you're enforcing a *specific* version of some .so
that must be loaded by all programs.  When you run a different program,
you may need to change it.  There is a discussion of this here:

        http://www.eyrie.org/~eagle/notes/rpath.html

My claim is that HPC software is not sufficiently robust or well tested to
support this model all the time. I would rather have my programs work
without remembering how I built them.

Obviously, ABI problems arise more with bleeding edge stuff than with more
mainstream libraries.  EB is a good example -- you guys have a nice
consistent stack within each toolchain, and it's tested.  That's great.
We have 5+ production code teams pushing out new versions constantly, and
each of them has a separate software stack that depends on specific
versions of 50+ libraries.  The libraries are maintained by small teams
and can't fix all the bugs fast, so there is a lot of patching that goes
on to ensure that things get rolled out smoothly.  The "toolchain" for the
prior code version might be rather different than the one for the new code
version.  At our site, the app teams maintain their own installations, and
it's easy for them to put RPATH'd binaries out there to avoid issues like
this.

I have been told that Google often builds things statically for similar
reasons to these.  RPATH gets you a nice compromise -- you can share
sub-DAGs.  LD_LIBRARY_PATH allows you to share whatever you want, with the
caveat that whoever runs the binary is responsible for ensuring its
consistency.

>AFAIK, there is *no* other way to resolve a bug like this one, sanely,
>http://www.open-mpi.org/community/lists/users/2015/05/26913.php
>without having to recompile the world, if you do not rely upon
>LD_LIBRARY_PATH.
>Thanks to the latter, we can now do this without recompiling tens of
>builds:
>
>```
>module load HPCBIOS_LifeSciences ## includes buggy historic openmpi
>module swap OpenMPI ## if you have built 1.8.6 this works like a breeze
>```
>
>comments?

I like that use case, as it is a very good way to debug that problem.
FWIW, you CAN actually do it with RPATH'd binaries if you LD_PRELOAD the
particular OpenMPI libraries you want to test with.  This gets
unmanageable quickly, though, because LD_PRELOAD takes a .so, not a path.
You would need to write a script to do it right and that's a pain.

However, I think the example misses the point.  My point is that it's good
for executables to know where their dependencies live, at least the
default ones.  Your point is that LD_LIBRARY_PATH allows you to override
the libraries that a program uses. There is a compromise: build with
RUNPATH instead.

RUNPATH is exactly like RPATH, but its precedence is BELOW that of
LD_LIBRARY_PATH.  For those who don't remember, here's the link precedence
for ld.so:

1. Any libs specified in LD_PRELOAD
2. RPATHs in the binary (starting with the loading object, then its
loader, etc.)
3. LD_LIBRARY_PATH
4. RUNPATHs in the binary (same as RPATH but lower precedence than
LD_LIBRARY_PATH)
5. ld.so.cache
6. default system directories.

Weird caveat: if RUNPATH is *present* in the binary, RPATH options are
ignored.  Just to make things interesting :).

I haven't done it yet, but we could add an option to Spack to build with
RUNPATH instead of RPATH, in which case you would have libraries that know
where the dependencies they were built with live.  Yay!  If you wanted to
debug your use case, you could use LD_LIBRARY_PATH to do it.  If users
have an issue with LD_LIBRARY_PATH, you could tell them to unset it, and
their binaries would still find their default dependencies.

This is still not perfect, because you would still have the problem that
most users have no idea how to manage their LD_LIBRARY_PATH, but at least
you can tell them to just unset it.  Sometimes it's hard for them to
figure out that it's getting set in their .bashrc and other places,
though, so YMMV.  Another downside is that RUNPATH is an ELF-specific
thing.  AFAIK there is no equivalent on platforms that do not use ELF.
Other platforms do have an equivalent of RPATH (often with the same
name), so in that sense RPATH is a bit more portable.

Anyway, I don't think I'm going to convince people that there is one true
way to build, and there are pros and cons to both.  My main point is that
it is nice to have binaries that work fine without a magic module
incantation.  Modules and LD_LIBRARY_PATH are more dynamic, so you get
some flexibility, but you can also shoot yourself in the foot, and users
are good at that.  I think it's worth having systems that support both of
these models.

-Todd

--
Dr. Alan O'Cais
Application Support
Juelich Supercomputing Centre
Forschungszentrum Juelich GmbH
52425 Juelich, Germany

Phone: +49 2461 61 5213
Fax: +49 2461 61 6656
E-mail: a.oc...@fz-juelich.de
WWW:    http://www.fz-juelich.de/ias/jsc/EN

