Dear Chen,

Below you can see how I compiled the parallel version of GetFEM with
MUMPS on our cluster (Scientific Linux 7). Depending on the specific
versions of the compilers and MPI, some of these steps will need to be
modified. The basic idea that worked for me is to use the three environment
variables LDFLAGS, CPPFLAGS and LIBS to pass any external dependencies not
detected by the configure script (library directories, header directories
and libraries, respectively). I also used the --with-blas= configure option
to link against my own compiled OpenBLAS (which also includes LAPACK). (The
use of the PYTHONPATH variable is specific to a bug in our cluster's mpi4py
module.)
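
As a rough illustration of the pattern (the paths and library names below are
placeholders, not the ones I actually used; the complete command for our
cluster is given further down):

./configure \
  CPPFLAGS="-I/path/to/dep/include" \
  LDFLAGS="-L/path/to/dep/lib" \
  LIBS="-ldep1 -ldep2" \
  --with-blas="/path/to/libopenblas.a"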

I hope this helps. You will probably need one more fix, not included in my
instructions, when you come to the final step of building the GetFEM Python
interface, but let's deal with that once you get there.

Best regards
Kostas

module load python3/3.8.11
module load mpi/4.1.1-gcc-10.3.0-binutils-2.36.1
module load mpi4py/3.0.3-python-3.8.11-openmpi-4.1.1
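
# Optional sanity check: the module names above are specific to our cluster,
# so confirm that the loaded toolchain matches what the steps below assume.
gcc --version          # the steps below assume gcc 10.3.0
mpicc --version        # should report the same gcc (Open MPI compiler wrapper)
python3 -c "import mpi4py; print(mpi4py.__version__)"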

wget https://github.com/xianyi/OpenBLAS/releases/download/v0.3.18/OpenBLAS-0.3.18.tar.gz
wget http://deb.debian.org/debian/pool/main/s/scalapack/scalapack_2.1.0.orig.tar.gz
wget http://deb.debian.org/debian/pool/main/s/scotch/scotch_6.1.1.orig.tar.xz
wget http://deb.debian.org/debian/pool/non-free/p/parmetis/parmetis_4.0.3.orig.tar.gz
wget http://deb.debian.org/debian/pool/main/m/mumps/mumps_5.4.1.orig.tar.gz
wget http://deb.debian.org/debian/pool/main/q/qhull/qhull_2020.2.orig.tar.gz
wget http://download-mirror.savannah.gnu.org/releases/getfem/stable/getfem-5.4.1.tar.gz

tar -zxf OpenBLAS-0.3.18.tar.gz
tar -zxf scalapack_2.1.0.orig.tar.gz
tar -xf scotch_6.1.1.orig.tar.xz
tar -zxf parmetis_4.0.3.orig.tar.gz
tar -zxf mumps_5.4.1.orig.tar.gz
tar -zxf qhull_2020.2.orig.tar.gz
tar -zxf getfem-5.4.1.tar.gz

cd OpenBLAS-0.3.18/
CC=gcc make -j8
CC=gcc make PREFIX=$HOME/tmp_opt/openblas install
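
# Quick optional check: make sure the static library was installed and that it
# really contains the LAPACK routines GetFEM and MUMPS need; dgetrf_ is just
# one example symbol.
ls $HOME/tmp_opt/openblas/lib/
nm -g $HOME/tmp_opt/openblas/lib/libopenblas.a | grep -i ' T dgetrf_'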

cd ../scalapack-2.1.0/
sed '/^\(BLASLIB.*=\).*/ s//\1 $${HOME}\/tmp_opt\/openblas\/lib\/libopenblas.a/' SLmake.inc.example | sed '/^\(LAPACKLIB.*=\).*/ s//\1 /' > SLmake.inc
sed -i '/^\(NOOPT.*=.*-O0\)\(\>.*\)/s//\1 -fallow-argument-mismatch -fPIC\2/' SLmake.inc
sed -i '/^\(FCFLAGS.*=.*-O3\)\(\>.*\)/s//\1 -fallow-argument-mismatch -fPIC\2/' SLmake.inc
sed -i '/^\(CCFLAGS.*=.*-O3\)\(\>.*\)/s//\1 -fPIC\2/' SLmake.inc
sed -i '/^\(RANLIB.*=\).*/s//\1 echo/' SLmake.inc
make -j4
mkdir -p $HOME/tmp_opt/scalapack/lib
cp libscalapack.a $HOME/tmp_opt/scalapack/lib/
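
# Optional check that the ScaLAPACK routines MUMPS calls are in the archive
# (pdgetrf_ is one example symbol).
nm -g $HOME/tmp_opt/scalapack/lib/libscalapack.a | grep -i ' T pdgetrf_'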

cd ../parmetis-4.0.3/
make config prefix=$HOME/tmp_opt/parmetis
make -j4
make install
cp build/Linux-x86_64/libmetis/libmetis.a $HOME/tmp_opt/parmetis/lib/
cp metis/include/metis.h $HOME/tmp_opt/parmetis/include/
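
# Note: ParMETIS' "make install" only installs the ParMETIS library and header,
# which is why libmetis.a and metis.h are copied by hand above. Afterwards:
ls $HOME/tmp_opt/parmetis/lib      # expect libparmetis.a and libmetis.a
ls $HOME/tmp_opt/parmetis/include  # expect parmetis.h and metis.h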

cd ../scotch-6.1.1/src
cp Make.inc/Makefile.inc.x86-64_pc_linux2 Makefile.inc
sed -i '/^\(CC[SD].*=\).*/s//\1 mpicc/' Makefile.inc
make -j4 scotch
make -j4 esmumps
make -j4 ptscotch
make -j4 ptesmumps
mkdir -p $HOME/tmp_opt/scotch
cp -r ../lib $HOME/tmp_opt/scotch/
cp -r ../include $HOME/tmp_opt/scotch/
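
# If all four make targets succeeded, the copied lib directory should contain,
# among others, libscotch.a, libesmumps.a, libptscotch.a and libptesmumps.a,
# which are referenced later in the MUMPS Makefile.inc and in the GetFEM LIBS.
ls $HOME/tmp_opt/scotch/lib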


cd ../../MUMPS_5.4.1/
cp Make.inc/Makefile.inc.generic Makefile.inc
sed -i '/^#\(SCOTCHDIR.*\)=.*/s//\1 = $${HOME}\/tmp_opt\/scotch/' Makefile.inc
sed -i '/^#\(ISCOTCH.*\)=.*/s//\1 = -I$(SCOTCHDIR)\/include/' Makefile.inc
sed -i '/^#\(LSCOTCH.*-lpt.*\)/s//\1 -lscotch/' Makefile.inc
sed -i '/^#\(LMETISDIR.*\)=.*/s//\1 = $${HOME}\/tmp_opt\/parmetis\/lib/' Makefile.inc
sed -i '/^#\(IMETIS.*\)=.*/s//\1 = -I$${HOME}\/tmp_opt\/parmetis\/include/' Makefile.inc
sed -i '/^#\(LMETIS .*-lparmetis.*\)/s//\1/' Makefile.inc
sed -i '/^#\(ORDERINGSF .*-Dparmetis.*\)/s//\1/' Makefile.inc
sed -i '/^\(ORDERINGSF .*-Dpord$\)/s//#\1/' Makefile.inc
sed -i '/^\(CC.*= \)\(cc\)/s//\1mpi\2/' Makefile.inc
sed -i '/^\(FC.*= \)\(f90\)/s//\1mpi\2/' Makefile.inc
sed -i '/^\(FL.*= \)\(f90\)/s//\1mpi\2/' Makefile.inc
sed -i '/^\(RANLIB .*ranlib.*\)/s//#\1/' Makefile.inc
sed -i '/^#\(RANLIB .*echo.*\)/s//\1/' Makefile.inc
sed -i '/^\(LAPACK.*=\).*/s//\1/' Makefile.inc
sed -i '/^\(SCALAP.*=\).*/s//\1 $${HOME}\/tmp_opt\/scalapack\/lib\/libscalapack.a/' Makefile.inc
sed -i '/^\(INCPAR.*=\).*/s//\1/' Makefile.inc
sed -i '/^\(LIBPAR.*=\).*/s//\1 $(SCALAP)/' Makefile.inc
sed -i '/^\(LIBBLAS.*=\).*/s//\1 $${HOME}\/tmp_opt\/openblas\/lib\/libopenblas.a/' Makefile.inc
sed -i '/^\(OPTF.*=.*\)-O\(\>.*\)/s//\1-fallow-argument-mismatch -O3\2/' Makefile.inc
sed -i '/^\(OPT[CL].*=.*-O\)\(\>.*\)/s//\13\2/' Makefile.inc
make all
mkdir -p $HOME/tmp_opt/mumps
cp -r lib $HOME/tmp_opt/mumps/
cp -r include $HOME/tmp_opt/mumps/
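
# Optional smoke test: MUMPS ships small test programs in its examples
# directory; if they were built by "make all", a quick parallel run looks like
# this (exact example names may differ between MUMPS versions):
(cd examples && mpirun -np 2 ./dsimpletest < input_simpletest_real)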


cd ../qhull_2020.2/
make -j4
mkdir -p $HOME/tmp_opt/qhull
mkdir -p $HOME/tmp_opt/qhull/lib
mkdir -p $HOME/tmp_opt/qhull/include
mkdir -p $HOME/tmp_opt/qhull/include/libqhull
cp lib/libqhullstatic.a $HOME/tmp_opt/qhull/lib/libqhull.a
cp src/libqhull/*.h $HOME/tmp_opt/qhull/include/libqhull/
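
# As far as I remember, the GetFEM configure script looks for -lqhull and the
# libqhull/ headers, hence the renamed copy of libqhullstatic.a above.
ls $HOME/tmp_opt/qhull/lib $HOME/tmp_opt/qhull/include/libqhull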

cd ../getfem-5.4.1
sed -i '/^\(.*\/env \)python\($\)/s//\1python3\2/' bin/extract_doc
PYTHONPATH=/appl/mpi4py/3.0.3-openmpi-4.1.1-gcc-10.3.0-binutils-2.36.1-python-3.8.11/lib/python3.8/site-packages/:$PYTHONPATH \
./configure --prefix="$HOME/tmp_opt" \
  LDFLAGS="-L$HOME/tmp_opt/mumps/lib -L$HOME/tmp_opt/parmetis/lib -L$HOME/tmp_opt/scalapack/lib -L$HOME/tmp_opt/scotch/lib -L$HOME/tmp_opt/qhull/lib" \
  CPPFLAGS="-I$HOME/tmp_opt/mumps/include -I$HOME/tmp_opt/parmetis/include -I$HOME/tmp_opt/qhull/include" \
  LIBS="-lmumps_common -lpord -lgfortran -lscalapack -lptesmumps -lptscotch -lptscotcherr -lscotch -lparmetis -lmetis -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh" \
  --with-blas="$HOME/tmp_opt/openblas/lib/libopenblas.a" \
  --with-pic --with-optimization=-O3 --disable-matlab --enable-python --enable-paralevel=2
make -j4
make install
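
# A quick way to check the parallel build afterwards; the exact site-packages
# path under $HOME/tmp_opt depends on the Python version used at configure time.
export PYTHONPATH=$HOME/tmp_opt/lib/python3.8/site-packages:$PYTHONPATH
mpirun -np 4 python3 -c "import getfem; print(getfem.__file__)"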


On Tue, Nov 30, 2021 at 8:51 PM Chen,Jinzhen <jche...@mdanderson.org> wrote:

> Dear Dr. Poulios,
>
>
>
> Thank you so much for your prompt response and information. They are very
> helpful. I cc'ed this email to getfem-users@nongnu.org so that I don't need
> to send to your personal email in the next one. After reading your reference
> configure command, I started from scratch but still got compile errors.
>
>
>
> I installed metis and MUMPS on the system from the EPEL repository.
>
>
>
> [root@ ~]# rpm -qa |grep metis
>
> metis-devel-5.1.0-12.el7.x86_64
>
> metis-5.1.0-12.el7.x86_64
>
> [root@ ~]# rpm -qa |grep MUMPS
>
> MUMPS-srpm-macros-5.3.5-1.el7.noarch
>
> MUMPS-5.3.5-1.el7.x86_64
>
> MUMPS-devel-5.3.5-1.el7.x86_64
>
> MUMPS-common-5.3.5-1.el7.noarch
>
>
>
> I used the openmpi/4.1.1, gcc/9.3.0, blas/3.8.0 and zlib/1.2.11 modules
> for this build. Below is the module list command output.
>
>
>
> [ris_hpc_apps@r1drpswdev3 getfem-5.4.1]$ module list
>
>
>
> Currently Loaded Modules:
>
>   1) openmpi/4.1.1   2) python/3.7.3-anaconda   3) blas/3.8.0   4) zlib/1.2.11   5) gcc/9.3.0
>
>
>
> Here is my configure command, run in the getfem-5.4.1 directory
>
>
>
> ./configure CXX="/risapps/rhel7/openmpi/4.1.1/bin/mpic++"
> CC="/risapps/rhel7/openmpi/4.1.1/bin/mpicc"
> FC="/risapps/rhel7/openmpi/4.1.1/bin/mpifort"
> LIBS="-L/risapps/rhel7/gcc/9.3.0/lib64 -L/risapps/rhel7/openmpi/4.1.1/lib
> -L/risapps/rhel7/blas/3.8.0 -L/risapps/rhel7/zlib/1.2.11/lib  -L/usr/lib64
> -lmetis -lzmumps -ldmumps -lcmumps -lsmumps -lmumps_common"
> CXXFLAGES="-I/risapps/rhel7/gcc/9.3.0/include
> -I/risapps/rhel7/openmpi/4.1.1/include -I/risapps/rhel7/zlib/1.2.11/include
> -I/usr/include -I/usr/include/MUMPS"
> CPPFLAGES="-I/risapps/rhel7/gcc/9.3.0/include
> -I/risapps/rhel7/openmpi/4.1.1/include -I/risapps/rhel7/zlib/1.2.11/include
> -I/usr/include -I/usr/include/MUMPS"
> CFLAGS="-I/risapps/rhel7/gcc/9.3.0/include
> -I/risapps/rhel7/openmpi/4.1.1/include -I/risapps/rhel7/zlib/1.2.11/include
> -I/usr/include -I/usr/include/MUMPS"
> PYTHON="/risapps/rhel7/python/3.7.3/bin/python" PYTHON_VERSION=3.7.3
> --with-mumps="-L/usr/lib64"
> --with-mumps-include-dir="-I/usr/include/MUMPS"
> --with-blas="-L/risapps/rhel7/blas/3.8.0"
> --prefix=/risapps/rhel7/getfem-mpi/5.4.1 --enable-shared --enable-metis
> --enable-par-mumps -enable-paralevel=2
>
>
>
> The current errors on the make command are the following:
>
> ….
>
> libtool: compile:  /risapps/rhel7/openmpi/4.1.1/bin/mpic++ -DHAVE_CONFIG_H
> -I. -I.. -I../src -I../src -I.. -I/usr/local/include -DGETFEM_PARA_LEVEL=2
> -DGMM_USES_MPI -DGMM_USES_BLAS -DGMM_USES_BLAS_INTERFACE
> -I/usr/include/MUMPS -O3 -std=c++14 -MT dal_bit_vector.lo -MD -MP -MF
> .deps/dal_bit_vector.Tpo -c dal_bit_vector.cc  -fPIC -DPIC -o
> .libs/dal_bit_vector.o
>
> In file included from ./gmm/gmm_kernel.h:49,
>
>                  from getfem/bgeot_config.h:50,
>
>                  from getfem/getfem_omp.h:46,
>
>                  from getfem/dal_basic.h:42,
>
>                  from getfem/dal_bit_vector.h:51,
>
>                  from dal_bit_vector.cc:23:
>
> ./gmm/gmm_matrix.h:956:32: error: ‘MPI_Datatype’ does not name a type
>
>   956 |   template <typename T> inline MPI_Datatype mpi_type(T)
>
>       |                                ^~~~~~~~~~~~
>
> ./gmm/gmm_matrix.h:958:10: error: ‘MPI_Datatype’ does not name a type
>
>   958 |   inline MPI_Datatype mpi_type(double) { return MPI_DOUBLE; }
>
>       |          ^~~~~~~~~~~~
>
> ./gmm/gmm_matrix.h:959:10: error: ‘MPI_Datatype’ does not name a type
>
>   959 |   inline MPI_Datatype mpi_type(float) { return MPI_FLOAT; }
>
>       |          ^~~~~~~~~~~~
>
> ./gmm/gmm_matrix.h:960:10: error: ‘MPI_Datatype’ does not name a type
>
>   960 |   inline MPI_Datatype mpi_type(long double) { return
> MPI_LONG_DOUBLE; }
>
>       |          ^~~~~~~~~~~~
>
> ./gmm/gmm_matrix.h:962:10: error: ‘MPI_Datatype’ does not name a type
>
>   962 |   inline MPI_Datatype mpi_type(std::complex<float>) { return
> MPI_COMPLEX; }
>
>       |          ^~~~~~~~~~~~
>
> ./gmm/gmm_matrix.h:963:10: error: ‘MPI_Datatype’ does not name a type
>
>   963 |   inline MPI_Datatype mpi_type(std::complex<double>) { return
> MPI_DOUBLE_COMPLEX; }
>
>       |          ^~~~~~~~~~~~
>
> ./gmm/gmm_matrix.h:965:10: error: ‘MPI_Datatype’ does not name a type
>
>   965 |   inline MPI_Datatype mpi_type(int) { return MPI_INT; }
>
>       |          ^~~~~~~~~~~~
>
> ./gmm/gmm_matrix.h:966:10: error: ‘MPI_Datatype’ does not name a type
>
>   966 |   inline MPI_Datatype mpi_type(unsigned int) { return
> MPI_UNSIGNED; }
>
>       |          ^~~~~~~~~~~~
>
> ./gmm/gmm_matrix.h:967:10: error: ‘MPI_Datatype’ does not name a type
>
>   967 |   inline MPI_Datatype mpi_type(long) { return MPI_LONG; }
>
>       |          ^~~~~~~~~~~~
>
> ./gmm/gmm_matrix.h:968:10: error: ‘MPI_Datatype’ does not name a type
>
>   968 |   inline MPI_Datatype mpi_type(unsigned long) { return
> MPI_UNSIGNED_LONG; }
>
>       |          ^~~~~~~~~~~~
>
> ./gmm/gmm_matrix.h: In function ‘typename gmm::strongest_value_type3<V1,
> V2, MATSP>::value_type gmm::vect_sp(const
> gmm::mpi_distributed_matrix<MAT>&, const V1&, const V2&)’:
>
> ./gmm/gmm_matrix.h:1012:50: error: ‘MPI_SUM’ was not declared in this scope
>
> 1012 |     MPI_Allreduce(&res, &rest, 1, mpi_type(T()),
> MPI_SUM,MPI_COMM_WORLD);
>
>       |                                                  ^~~~~~~
>
> ./gmm/gmm_matrix.h: In function ‘void gmm::mult_add(const
> gmm::mpi_distributed_matrix<MAT>&, const V1&, V2&)’:
>
> ./gmm/gmm_matrix.h:1023:20: error: there are no arguments to ‘MPI_Wtime’
> that depend on a template parameter, so a declaration of ‘MPI_Wtime’ must
> be available [-fpermissive]
>
> 1023 |     double t_ref = MPI_Wtime();
>
>       |                    ^~~~~~~~~
>
> ./gmm/gmm_matrix.h:1023:20: note: (if you use ‘-fpermissive’, G++ will
> accept your code, but allowing the use of an undeclared name is deprecated)
>
> ./gmm/gmm_matrix.h:1026:21: error: there are no arguments to ‘MPI_Wtime’
> that depend on a template parameter, so a declaration of ‘MPI_Wtime’ must
> be available [-fpermissive]
>
> 1026 |     double t_ref2 = MPI_Wtime();
>
>       |                     ^~~~~~~~~
>
> ./gmm/gmm_matrix.h:1028:19: error: ‘MPI_SUM’ was not declared in this scope
>
> 1028 |                   MPI_SUM,MPI_COMM_WORLD);
>
>       |                   ^~~~~~~
>
> ./gmm/gmm_matrix.h:1029:18: error: there are no arguments to ‘MPI_Wtime’
> that depend on a template parameter, so a declaration of ‘MPI_Wtime’ must
> be available [-fpermissive]
>
> 1029 |     tmult_tot2 = MPI_Wtime()-t_ref2;
>
>       |                  ^~~~~~~~~
>
> ./gmm/gmm_matrix.h:1032:17: error: there are no arguments to ‘MPI_Wtime’
> that depend on a template parameter, so a declaration of ‘MPI_Wtime’ must
> be available [-fpermissive]
>
> 1032 |     tmult_tot = MPI_Wtime()-t_ref;
>
>       |                 ^~~~~~~~~
>
> make[2]: *** [Makefile:941: dal_bit_vector.lo] Error 1
>
> make[2]: Leaving directory '/risapps/build7/getfem-5.4.1/src'
>
> make[1]: *** [Makefile:577: all-recursive] Error 1
>
> make[1]: Leaving directory '/risapps/build7/getfem-5.4.1'
>
> make: *** [Makefile:466: all] Error 2
>
>
>
> I really appreciate your help. Thank you again !
>
>
>
> Best Regards
>
> Jinzhen Chen
>
>
>
>
>
> *From: *Konstantinos Poulios <k...@mek.dtu.dk>
> *Date: *Tuesday, November 30, 2021 at 1:53 AM
> *To: *"Chen,Jinzhen" <jche...@mdanderson.org>
> *Subject: *[EXT] Re: getfem installation
>
>
>
>
>
> Dear Jinzhen Chen,
>
>
>
> Thanks for your question. Yes, you should be able to compile GetFEM,
> including the parallel version, on Red Hat. I haven't tried it, but there
> isn't anything distribution-specific in the GetFEM code.
>
>
>
> Having said that, the parallel version has not been tested very recently,
> and it might need some performance fixes from our side to get good scaling
> for Anne Cecile's problem. In any case, having the parallel version of
> GetFEM compiled on your cluster is a good starting point for detecting
> bottlenecks.
>
>
>
> The tricky part of building GetFEM is normally how to link to MUMPS, METIS
> and the other dependencies. If you send me the compilation errors you get,
> either to this address or to the GetFEM mailing list getfem-users@nongnu.org,
> I can try to help you resolve the issues.
>
>
>
> As an additional reference, some time ago, when I asked our cluster
> administrators to compile GetFEM they used the following configure options:
>
>
>
> $ ../configure CXX=mpicxx CC=mpicc FC=mpifort LIBS=
> -L/zdata/groups/common/nicpa/2018-feb/generic/build-tools/1.0/lib
> -L/zdata/groups/common/nicpa/2018-feb/generic/build-tools/1.0/lib64
> -L/zdata/groups/common/nicpa/2018-feb/generic/gcc/7.3.0/lib
> -L/zdata/groups/common/nicpa/2018-feb/generic/gcc/7.3.0/lib64
> -L/zdata/groups/common/nicpa/2018-feb/XeonX5550/zlib/1.2.11/gnu-7.3.0/lib
> -L/zdata/groups/common/nicpa/2018-feb/generic/numactl/2.0.11/lib
> -L/zdata/groups/common/nicpa/2018-feb/XeonX5550/libxml2/2.9.7/gnu-7.3.0/lib
> -L/zdata/groups/common/nicpa/2018-feb/XeonX5550/hwloc/1.11.9/gnu-7.3.0/lib
> -L/zdata/groups/common/nicpa/2018-feb/XeonX5550/openmpi/3.0.0/gnu-7.3.0/lib
> -L/zdata/groups/common/nicpa/2018-feb/XeonX5550/parmetis/4.0.3/gnu-7.3.0/lib
> -L/zdata/groups/common/nicpa/2018-feb/XeonX5550/scalapack/204/gnu-7.3.0/lib
> -L/zdata/groups/common/nicpa/2018-feb/XeonX5550/openblas/0.2.20/gnu-7.3.0/lib
> -L/zdata/groups/common/nicpa/2018-feb/XeonX5550/scotch/6.0.4/gnu-7.3.0/lib
> -L/zdata/groups/common/nicpa/2018-feb/XeonX5550/mumps/5.1.2/gnu-7.3.0/lib
> -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/generic/build-tools/1.0/lib
> -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/generic/build-tools/1.0/lib64
> -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/generic/gcc/7.3.0/lib
> -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/generic/gcc/7.3.0/lib64
> -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/XeonX5550/zlib/1.2.11/gnu-7.3.0/lib
> -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/generic/numactl/2.0.11/lib
> -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/XeonX5550/libxml2/2.9.7/gnu-7.3.0/lib
> -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/XeonX5550/hwloc/1.11.9/gnu-7.3.0/lib
> -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/XeonX5550/openmpi/3.0.0/gnu-7.3.0/lib
> -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/XeonX5550/parmetis/4.0.3/gnu-7.3.0/lib
> -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/XeonX5550/scalapack/204/gnu-7.3.0/lib
> -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/XeonX5550/openblas/0.2.20/gnu-7.3.0/lib
> -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/XeonX5550/scotch/6.0.4/gnu-7.3.0/lib
> -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/XeonX5550/mumps/5.1.2/gnu-7.3.0/lib
> -lzmumps -ldmumps -lcmumps -lsmumps -lmumps_common -lpord -lesmumps
> -lscotch -lscotcherr -lparmetis -lmetis
> -L/zdata/groups/common/nicpa/2018-feb/XeonX5550/openblas/0.2.20/gnu-7.3.0/lib
> -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/XeonX5550/openblas/0.2.20/gnu-7.3.0/lib
> -lopenblas --disable-openmp --enable-paralevel --enable-metis
> --enable-par-mumps --enable-python --disable-boost --disable-matlab
> --disable-scilab --with-blas=
> -L/zdata/groups/common/nicpa/2018-feb/XeonX5550/openblas/0.2.20/gnu-7.3.0/lib
> -Wl,-rpath=/zdata/groups/common/nicpa/2018-feb/XeonX5550/openblas/0.2.20/gnu-7.3.0/lib
> -lopenblas --prefix=<change-here-for-your-installation-directory>
>
>
>
> Best regards
>
> Konstantinos
>
>
>
> On Tue, 2021-11-30 at 00:24 +0000, Chen,Jinzhen wrote:
>
> Dear Dr Konstantinos Poulios,
>
>
>
> My name is Jinzhen Chen, an HPC administrator at MD Anderson Cancer Center,
> Houston, USA. I am helping a user (Dr. Lesage, Anne Cecile J) to install
> GetFEM on our HPC cluster, whose OS is RHEL 7.9. I got the basic version
> installed. However, when I tried to use the --enable-paralevel=2 configure
> option based on https://getfem.org/userdoc/parallel.html, I kept getting
> compile errors. It looks like some libraries are missing, not visible or
> not compatible. I have metis and MUMPS installed on the system and used the
> openmpi/4.1.1 module.
>
>
>
> My question is: is it possible to install the parallel version of GetFEM on
> RHEL 7? If so, how can I contact a developer or someone else about these
> issues? I really appreciate your help and look forward to hearing from you.
>
>
>
> Thank you very much !
>
>
>
> *Regards*
>
> *Jinzhen  Chen – HPC Team*
>
> *MD Anderson Cancer Center <http://www.mdanderson.org/>*
>
> *inside:Information Services
> <http://inside.mdanderson.org/departments/information-services/> *
>
> *inside:**HPC Request <http://hpcrequest.mdanderson.edu/>*
>
> *Email: jche...@mdanderson.org <jche...@mdanderson.org> | Tel:
> 713-745-6226*
>
>
>
>
>