PETSc developers,

    As many of you know, I have been cleaning up the PETSc handling of external 
packages, including the support for --download-xxxx. I am close to being done; 
the major changes are:

1) There is no distinction between "PETSc" and BuildSystem packages; they are 
now all in config/BuildSystem/config/packages
2) The PETSc.package.NewPackage class is gone
3) All external packages that configure with GNU configure are now derived from 
the GNUPackage class, and those that configure with CMake are derived from the 
CMakePackage class; this eliminated a bunch of redundant cut-and-pasted code 
(see the sketch after this list)
4) I simplified the GNUPackage class, removing a bunch of unneeded methods
5) I removed a bunch of dead packages
6) When installing to a --prefix location, the external packages install directly 
there instead of first to PETSC_ARCH and then getting moved there when PETSc 
is installed; this means --prefix now works for all packages, including MPICH and 
Open MPI. When --prefix is in a system location, the person installing will be 
prompted for the sudo password for the installs; this is not ideal but seems 
the lesser evil
7) The consistency checking is not as good as before, but it can be improved per 
package as we find issues.
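
  To give a flavor of what 3) means in practice, here is a minimal sketch of 
what a package module now looks like (the package name, URL, and symbols below 
are made up for illustration; see config/BuildSystem/config/packages for the 
real ones):

    import config.package

    class Configure(config.package.GNUPackage):
      def __init__(self, framework):
        config.package.GNUPackage.__init__(self, framework)
        # illustrative values only, not a real package
        self.download  = ['http://example.org/foo-1.0.tar.gz']
        self.functions = ['foo_init']   # symbol checked for in the library
        self.includes  = ['foo.h']      # header checked for
        self.liblist   = [['libfoo.a']]

      def formGNUConfigureArgs(self):
        # GNUPackage supplies the common arguments (--prefix, compilers, ...);
        # a package only appends its own extras
        args = config.package.GNUPackage.formGNUConfigureArgs(self)
        args.append('--enable-shared')
        return args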

  If all goes well in next for a while, this will go into master and be more 
or less transparent, except for bugs that come up. I am hoping these changes 
will make maintenance and support easier in the future.

  Barry

  For fun, I've listed below the "configure" options needed by the various 
packages for just two items: MPI and 64-bit integers. This is a little peephole 
into why the HPC software ecosystem is so dang frustrating.

  ----------------------------
  Configuring packages for MPI
  ----------------------------

  GNU configure packages

PETSc                      --with-mpi-dir    OR  --with-mpi-lib --with-mpi-include
Sundials                   --with-mpi-root   OR  --with-mpi-incdir --with-mpi-libdir --with-mpi-libs
Zoltan                     same
MOAB                       --with-mpi=directory
hypre                      --with-MPI-include --with-MPI-lib-dirs --with-MPI-libs
fftw                       MPICC=
hdf5                       --enable-parallel
NetCDF                     Just knows?
ml                         --enable-mpi --with-mpi-libs, and pass -I/MPI includes through --with-cflags and --with-cxxflags
mpe                        MPI_CFLAGS=    MPI_CC=

  CMake packages

ParMetis                  just assumes the compiler is mpicc?
Elemental                 -DMPI_C_COMPILER= -DMPI_CXX_COMPILER=

  Other packages

SuperLU_dist              IMPI=      MPILIB=
MUMPS                     INCPAR=-I/MPI includes    LIBPAR=MPI libraries
PTScotch                  CFLAGS=-I/MPI includes    LDFLAGS=MPI libraries
PaStiX                    CCFOPT=-I/MPI includes    MPCCPROG=mpicc compiler
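
  The mapping from PETSc's MPI information onto each package's private spelling 
all happens in that package's Python module. A sketch of roughly what the hypre 
module ends up doing (the real module massages the library paths a bit more):

    import config.package

    class Configure(config.package.GNUPackage):
      def setupDependencies(self, framework):
        config.package.GNUPackage.setupDependencies(self, framework)
        self.mpi  = framework.require('config.packages.MPI', self)
        self.deps = [self.mpi]

      def formGNUConfigureArgs(self):
        args = config.package.GNUPackage.formGNUConfigureArgs(self)
        # translate PETSc's idea of MPI into hypre's spellings
        args.append('--with-MPI-include="' + ' '.join(self.mpi.include) + '"')
        args.append('--with-MPI-libs="'    + ' '.join(self.mpi.lib)     + '"')
        return args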


  ----------------------------------------
  Configuring packages for 64-bit integers
  ----------------------------------------

  GNU configure packages 

PETSc                      --with-64-bit-indices
Sundials                   NO SUPPORT
Zoltan                     --with-id-type=ulonglong
MOAB                       ???
hypre                      --enable-bigint
fftw                       ???
hdf5                       Hardwired to be size_t
NetCDF      
ml                         NO SUPPORT

  CMake packages

ParMetis                  -DMETIS_USE_LONGINDEX=1
Elemental                 -DUSE_64BIT_INTS=ON

  Other packages

SuperLU_dist              -D_LONGINT
MUMPS                      NO PROPER SUPPORT
PTScotch                  -DINTSIZE64
PaStiX                    VERSIONINT=_int64 CCTYPES=-DFORCE_INT64
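
  On our side this all reduces to each package module branching on PETSc's one 
option. Roughly, the relevant method in such a module (using hypre's flag; the 
indexTypes helper, which carries the --with-64-bit-indices setting, is named as 
in my tree):

    def formGNUConfigureArgs(self):
      args = config.package.GNUPackage.formGNUConfigureArgs(self)
      # PETSc has exactly one option, --with-64-bit-indices; each package
      # module translates it into that package's own spelling (or errors
      # out for the NO SUPPORT packages above)
      if self.indexTypes.integerSize == 64:
        args.append('--enable-bigint')   # hypre's spelling
      return args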



