Re: [OMPI users] --mca btl params

2018-10-09 Thread Noam Bernstein
> On Oct 9, 2018, at 7:02 PM, Noam Bernstein wrote:
> 
>> On Oct 9, 2018, at 6:01 PM, Jeffrey A Cummings wrote:
>> 
>> What are the allowable values for the --mca btl parameter on the mpirun
>> command line?
> 
> That's basically what the output of 
> ompi_info -a
> says.

Oops - managed to fail to paste in the actual result.  I can get that tomorrow.

Noam

Re: [OMPI users] --mca btl params

2018-10-09 Thread Noam Bernstein
> On Oct 9, 2018, at 6:01 PM, Jeffrey A Cummings wrote:
> 
> What are the allowable values for the --mca btl parameter on the mpirun
> command line?

That's basically what the output of 
ompi_info -a
says.
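
If you only want the btl values, something like this narrows it down (the
exact component names depend on how your Open MPI was built and what hardware
support it found):

  ompi_info | grep "MCA btl"
  ompi_info --param btl all --level 9

The first line just lists the compiled-in btl components (self, vader, tcp,
openib, ...); the second also dumps their parameters.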

So it appears, for the moment at least, like things are magically better.  In 
the process of organizing all the information that's requested on the web site, 
I caught some (I thought innocuous, but apparently not) mismatches in 
kernel-related rpms on the nodes.  Once those were cleared up things started 
working.  I don't really know why, but the point is moot.  Thanks.
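
In case it helps anyone else hitting something similar: a quick way to spot
that kind of mismatch is to compare the relevant package versions across the
nodes, e.g. with ClusterShell (the node set and package names below are just
placeholders; pdsh or a plain ssh loop works just as well):

  clush -w compute-0-[0-3] 'uname -r; rpm -q rdma-core libibverbs' | sort

Any node whose lines differ from the rest is a candidate for the kind of
mismatch I hit.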


Noam


[OMPI users] --mca btl params

2018-10-09 Thread Jeffrey A Cummings
What are the allowable values for the --mca btl parameter on the mpirun command
line?

– Jeff

Jeffrey A. Cummings
Engineering Specialist
Mission Analysis and Operations Department
Systems Analysis and Simulation Subdivision
Systems Engineering Division
Engineering and Technology Group
The Aerospace Corporation
571-304-7548
jeffrey.a.cummi...@aero.org

From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Andy Riebs
Sent: Tuesday, October 09, 2018 2:34 PM
To: users@lists.open-mpi.org
Subject: Re: [OMPI users] no openmpi over IB on new CentOS 7 system

Noam,

Start with the FAQ, etc., under "Getting Help/Support" in the left-column menu 
at https://www.open-mpi.org/

Andy

From: Noam Bernstein 

Sent: Tuesday, October 09, 2018 2:26PM
To: Open Mpi Users 
Cc:
Subject: [OMPI users] no openmpi over IB on new CentOS 7 system
Hi - I’m trying to get OpenMPI working on a newly configured CentOS 7 system, 
and I’m not even sure what information would be useful to provide.  I’m using 
the CentOS built in libibverbs and/or libfabric, and I configure openmpi with 
just
  --with-verbs --with-ofi --prefix=$DEST
also tried --without-ofi, no change.  Basically, I can run with "--mca btl
self,vader", but if I try "--mca btl,openib" I get an error from each process:
[compute-0-0][[24658,1],5][connect/btl_openib_connect_udcm.c:1245:udcm_rc_qp_to_rtr]
 error modifing QP to RTR errno says Invalid argument
If I don’t specify the btl it appears to try to set up openib with the same 
errors, then crashes on some free() related segfault, presumably when it tries 
to actually use vader.

The machine seems to be able to see its IB interface, as reported by things 
like ibstatus or ibv_devinfo.  I’m not sure what else to look for.  I also 
confirmed that “ulimit -l” reports unlimited.

Does anyone have any suggestions as to how to diagnose this issue?


   thanks,

   Noam





Re: [OMPI users] no openmpi over IB on new CentOS 7 system

2018-10-09 Thread Andy Riebs

Noam,

Start with the FAQ, etc., under "Getting Help/Support" in the 
left-column menu at https://www.open-mpi.org/
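
Beyond the FAQ, turning up the BTL verbosity usually shows where openib gives
up; roughly (the application name and process count below are just
placeholders):

  mpirun --mca btl self,vader,openib --mca btl_base_verbose 100 -np 2 ./your_app

The openib entries in the output of "ompi_info --all" are worth a look too.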


Andy


*From:* Noam Bernstein 
*Sent:* Tuesday, October 09, 2018 2:26PM
*To:* Open Mpi Users 
*Cc:*
*Subject:* [OMPI users] no openmpi over IB on new CentOS 7 system

Hi - I’m trying to get OpenMPI working on a newly configured CentOS 7 
system, and I’m not even sure what information would be useful to 
provide.  I’m using the CentOS built in libibverbs and/or libfabric, and 
I configure openmpi with just

--with-verbs --with-ofi --prefix=$DEST
also tried --without-ofi, no change.  Basically, I can run with "--mca btl
self,vader", but if I try "--mca btl,openib" I get an error from each
process:


   
[compute-0-0][[24658,1],5][connect/btl_openib_connect_udcm.c:1245:udcm_rc_qp_to_rtr]
   error modifing QP to RTR errno says Invalid argument

If I don’t specify the btl it appears to try to set up openib with the 
same errors, then crashes on some free() related segfault, presumably 
when it tries to actually use vader.


The machine seems to be able to see its IB interface, as reported by 
things like ibstatus or ibv_devinfo.  I’m not sure what else to look 
for.  I also confirmed that “ulimit -l” reports unlimited.


Does anyone have any suggestions as to how to diagnose this issue?

thanks,
Noam



[OMPI users] no openmpi over IB on new CentOS 7 system

2018-10-09 Thread Noam Bernstein
Hi - I’m trying to get OpenMPI working on a newly configured CentOS 7 system, 
and I’m not even sure what information would be useful to provide.  I’m using 
the CentOS built in libibverbs and/or libfabric, and I configure openmpi with 
just
--with-verbs --with-ofi --prefix=$DEST
also tried --without-ofi, no change.  Basically, I can run with "--mca btl
self,vader", but if I try "--mca btl,openib" I get an error from each process:
[compute-0-0][[24658,1],5][connect/btl_openib_connect_udcm.c:1245:udcm_rc_qp_to_rtr]
 error modifing QP to RTR errno says Invalid argument
If I don’t specify the btl it appears to try to set up openib with the same 
errors, then crashes on some free() related segfault, presumably when it tries 
to actually use vader.

The machine seems to be able to see its IB interface, as reported by things 
like ibstatus or ibv_devinfo.  I’m not sure what else to look for.  I also 
confirmed that “ulimit -l” reports unlimited.
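
Concretely, what I've checked so far amounts to (device output omitted, since
nothing in it looked obviously wrong to me):

  ibstatus
  ibv_devinfo
  ulimit -l      # reports "unlimited"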

Does anyone have any suggestions as to how to diagnose this issue?

thanks,
Noam

Re: [OMPI users] Cannot run MPI code on multiple cores with PBS

2018-10-09 Thread John Hearns via users
Michele, as others have said, libibverbs.so.1 is not in your library path.
Can you ask the person who manages your cluster where libibverbs is
located on the compute nodes?
Also try to run ibv_devinfo.
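
A couple of quick ways to check on a compute node:

  ldconfig -p | grep libibverbs
  rpm -q libibverbs

If the library is installed but lives somewhere non-standard, prepending that
directory to LD_LIBRARY_PATH in your job script should be enough
(/path/to/libibverbs/lib below is just a placeholder):

  export LD_LIBRARY_PATH=/path/to/libibverbs/lib:$LD_LIBRARY_PATH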

On Tue, 9 Oct 2018 at 16:03, Castellana Michele wrote:
>
> Dear John,
> Thank you for your reply. Here is the output of ldd
>
> $ ldd ./code.io
> linux-vdso.so.1 =>  (0x7ffcc759f000)
> liblapack.so.3 => /usr/lib64/liblapack.so.3 (0x7fbc1c613000)
> libgsl.so.0 => /usr/lib64/libgsl.so.0 (0x7fbc1c1ea000)
> libgslcblas.so.0 => /usr/lib64/libgslcblas.so.0 (0x7fbc1bfad000)
> libmpi.so.40 => /data/users/xx/openmpi/lib/libmpi.so.40 (0x7fbc1bcad000)
> libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x7fbc1b9a6000)
> libm.so.6 => /usr/lib64/libm.so.6 (0x7fbc1b6a4000)
> libgcc_s.so.1 => /usr/lib64/libgcc_s.so.1 (0x7fbc1b48e000)
> libpthread.so.0 => /usr/lib64/libpthread.so.0 (0x7fbc1b272000)
> libc.so.6 => /usr/lib64/libc.so.6 (0x7fbc1aea5000)
> libblas.so.3 => /usr/lib64/libblas.so.3 (0x7fbc1ac4c000)
> libgfortran.so.3 => /usr/lib64/libgfortran.so.3 (0x7fbc1a92a000)
> libsatlas.so.3 => /usr/lib64/atlas/libsatlas.so.3 (0x7fbc19cdd000)
> libopen-rte.so.40 => /data/users/xx/openmpi/lib/libopen-rte.so.40 
> (0x7fbc19a2d000)
> libopen-pal.so.40 => /data/users/xx/openmpi/lib/libopen-pal.so.40 
> (0x7fbc19733000)
> libdl.so.2 => /usr/lib64/libdl.so.2 (0x7fbc1952f000)
> librt.so.1 => /usr/lib64/librt.so.1 (0x7fbc19327000)
> libutil.so.1 => /usr/lib64/libutil.so.1 (0x7fbc19124000)
> libz.so.1 => /usr/lib64/libz.so.1 (0x7fbc18f0e000)
> /lib64/ld-linux-x86-64.so.2 (0x7fbc1cd7)
> libquadmath.so.0 => /usr/lib64/libquadmath.so.0 (0x7fbc18cd2000)
>
> and the one for the PBS version
>
> $   qstat --version
> Version: 6.1.2
> Commit: 661e092552de43a785c15d39a3634a541d86898e
>
> After I created the symbolic links libcrypto.so.0.9.8 and libssl.so.0.9.8, I
> still have one error message left from MPI:
>
> mca_base_component_repository_open: unable to open mca_btl_openib: 
> libibverbs.so.1: cannot open shared object file: No such file or directory 
> (ignored)
>
> Please let me know if you have any suggestions.
>
> Best,
>
>
> On Oct 4, 2018, at 3:12 PM, John Hearns via users  
> wrote:
>
> Michele, the command is   ldd ./code.io
> I just Googled - ldd  means List dynamic Dependencies
>
> To find out the PBS batch system type - that is a good question!
> Try this: qstat --version
>
>
>
> On Thu, 4 Oct 2018 at 10:12, Castellana Michele
>  wrote:
>
>
> Dear John,
> Thank you for your reply. I have tried
>
> ldd mpirun ./code.o
>
> but I get an error message; I do not know the proper syntax for the ldd
> command. Here is the information about the Linux version
>
> $ cat /etc/os-release
> NAME="CentOS Linux"
> VERSION="7 (Core)"
> ID="centos"
> ID_LIKE="rhel fedora"
> VERSION_ID="7"
> PRETTY_NAME="CentOS Linux 7 (Core)"
> ANSI_COLOR="0;31"
> CPE_NAME="cpe:/o:centos:centos:7"
> HOME_URL="https://www.centos.org/"
> BUG_REPORT_URL="https://bugs.centos.org/"
>
> CENTOS_MANTISBT_PROJECT="CentOS-7"
> CENTOS_MANTISBT_PROJECT_VERSION="7"
> REDHAT_SUPPORT_PRODUCT="centos"
> REDHAT_SUPPORT_PRODUCT_VERSION="7"
>
> Could you please tell me how to check whether the batch system is PBSPro or
> OpenPBS?
>
> Best,
>
>
>
>
> On Oct 4, 2018, at 10:30 AM, John Hearns via users  
> wrote:
>
> Michele  one tip:   log into a compute node using ssh and as your own 
> username.
> If you use the Modules environment then load the modules you use in
> the job script
> then use the  ldd  utility to check if you can load all the libraries
> in the code.io executable
>
> Actually, you are better off submitting a short batch job which does not use
> mpirun but uses ldd.
> A proper batch job will duplicate the environment you wish to run in.
>
>   ldd ./code.io
>
> By the way, is the batch system PBSPro or OpenPBS?  Version 6 seems a bit old.
> Can you say what version of Redhat or CentOS this cluster is installed with?
>
>
>
> On Thu, 4 Oct 2018 at 00:02, Castellana Michele
>  wrote:
>
> I fixed it, the correct file was in /lib64, not in /lib.
>
> Thank you for your help.
>
> On Oct 3, 2018, at 11:30 PM, Castellana Michele  
> wrote:
>
> Thank you, I found some libcrypto files in /usr/lib indeed:
>
> $ ls libcry*
> libcrypt-2.17.so  libcrypto.so.10  libcrypto.so.1.0.2k  libcrypt.so.1
>
> but I could not find libcrypto.so.0.9.8. Here they suggest creating a
> symbolic link, but if I do I still get an error from MPI. Is there another way
> around this?
>
> Best,
>
> On Oct 3, 2018, at 11:00 PM, Jeff Squyres (jsquyres) via users 
>  wrote:
>
> It's probably in your Linux distro somewhere -- I'd guess you're missing a 
> package (e.g., an RPM or a deb) out on your compute nodes...?
>
>
> On Oct 3, 2018, at 4:24 PM, Castellana Michele  
> wrote:
>
> Dear Ralph,
> Thank you for your reply. Do you know where I could find libcrypto.so.0.9.8 ?
>
> Best,
>
> On Oct 3, 2018, at 9:41 PM, Ralph 

Re: [OMPI users] Cannot run MPI code on multiple cores with PBS

2018-10-09 Thread Castellana Michele
Dear John,
Thank you for your reply. Here is the output of ldd

$ ldd ./code.io
linux-vdso.so.1 =>  (0x7ffcc759f000)
liblapack.so.3 => /usr/lib64/liblapack.so.3 (0x7fbc1c613000)
libgsl.so.0 => /usr/lib64/libgsl.so.0 (0x7fbc1c1ea000)
libgslcblas.so.0 => /usr/lib64/libgslcblas.so.0 (0x7fbc1bfad000)
libmpi.so.40 => /data/users/xx/openmpi/lib/libmpi.so.40 (0x7fbc1bcad000)
libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x7fbc1b9a6000)
libm.so.6 => /usr/lib64/libm.so.6 (0x7fbc1b6a4000)
libgcc_s.so.1 => /usr/lib64/libgcc_s.so.1 (0x7fbc1b48e000)
libpthread.so.0 => /usr/lib64/libpthread.so.0 (0x7fbc1b272000)
libc.so.6 => /usr/lib64/libc.so.6 (0x7fbc1aea5000)
libblas.so.3 => /usr/lib64/libblas.so.3 (0x7fbc1ac4c000)
libgfortran.so.3 => /usr/lib64/libgfortran.so.3 (0x7fbc1a92a000)
libsatlas.so.3 => /usr/lib64/atlas/libsatlas.so.3 (0x7fbc19cdd000)
libopen-rte.so.40 => /data/users/xx/openmpi/lib/libopen-rte.so.40 
(0x7fbc19a2d000)
libopen-pal.so.40 => /data/users/xx/openmpi/lib/libopen-pal.so.40 
(0x7fbc19733000)
libdl.so.2 => /usr/lib64/libdl.so.2 (0x7fbc1952f000)
librt.so.1 => /usr/lib64/librt.so.1 (0x7fbc19327000)
libutil.so.1 => /usr/lib64/libutil.so.1 (0x7fbc19124000)
libz.so.1 => /usr/lib64/libz.so.1 (0x7fbc18f0e000)
/lib64/ld-linux-x86-64.so.2 (0x7fbc1cd7)
libquadmath.so.0 => /usr/lib64/libquadmath.so.0 (0x7fbc18cd2000)

and the one for the PBS version

$   qstat --version
Version: 6.1.2
Commit: 661e092552de43a785c15d39a3634a541d86898e

After I created the symbolic links libcrypto.so.0.9.8 and libssl.so.0.9.8, I still
have one error message left from MPI:

mca_base_component_repository_open: unable to open mca_btl_openib: 
libibverbs.so.1: cannot open shared object file: No such file or directory 
(ignored)

Please let me know if you have any suggestions.

Best,


On Oct 4, 2018, at 3:12 PM, John Hearns via users <users@lists.open-mpi.org> wrote:

Michele, the command is   ldd ./code.io
I just Googled - ldd  means List dynamic Dependencies

To find out the PBS batch system type - that is a good question!
Try this: qstat --version



On Thu, 4 Oct 2018 at 10:12, Castellana Michele <michele.castell...@curie.fr> wrote:

Dear John,
Thank you for your reply. I have tried

ldd mpirun ./code.o

but I get an error message; I do not know the proper syntax for the ldd
command. Here is the information about the Linux version

$ cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

Could you please tell me how to check whether the batch system is PBSPro or
OpenPBS?

Best,




On Oct 4, 2018, at 10:30 AM, John Hearns via users <users@lists.open-mpi.org> wrote:

Michele  one tip:   log into a compute node using ssh and as your own username.
If you use the Modules environment then load the modules you use in
the job script
then use the  ldd  utility to check if you can load all the libraries
in the code.io executable

Actually, you are better off submitting a short batch job which does not use
mpirun but uses ldd.
A proper batch job will duplicate the environment you wish to run in.

  ldd ./code.io

By the way, is the batch system PBSPro or OpenPBS?  Version 6 seems a bit old.
Can you say what version of Redhat or CentOS this cluster is installed with?



On Thu, 4 Oct 2018 at 00:02, Castellana Michele <michele.castell...@curie.fr> wrote:

I fixed it, the correct file was in /lib64, not in /lib.

Thank you for your help.

On Oct 3, 2018, at 11:30 PM, Castellana Michele <michele.castell...@curie.fr> wrote:

Thank you, I found some libcrypto files in /usr/lib indeed:

$ ls libcry*
libcrypt-2.17.so  libcrypto.so.10  libcrypto.so.1.0.2k  libcrypt.so.1

but I could not find libcrypto.so.0.9.8. Here they suggest creating a
symbolic link, but if I do I still get an error from MPI. Is there another way
around this?

Best,

On Oct 3, 2018, at 11:00 PM, Jeff Squyres (jsquyres) via users <users@lists.open-mpi.org> wrote:

It's probably in your Linux distro somewhere -- I'd guess you're missing a 
package (e.g., an RPM or a deb) out on your compute nodes...?


On Oct 3, 2018, at 4:24 PM, Castellana Michele <michele.castell...@curie.fr> wrote:

Dear Ralph,
Thank you for your reply. Do you know where I could find libcrypto.so.0.9.8 ?

Best,

On Oct 3, 2018, at 9:41 PM, Ralph H Castain <r...@open-mpi.org> wrote:

Actually, I see that you do have the tm components built, but they cannot be 
loaded because you are missing libcrypto from your LD_LIBRARY_PATH


On Oct 3, 2018, at 

Re: [OMPI users] ompio on Lustre

2018-10-09 Thread Gabriel, Edgar
Ok, thanks. I usually run these tests with 4 or 8 processes, but the major item is that
atomicity is one of the areas that are not well supported in ompio (along with
data representations), so a failure in those tests is not entirely surprising.
Most of the work to support atomicity properly is actually in place, but we
didn't have the manpower (and, to be honest, the requests) to finish that work.
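
If the atomicity semantics do matter for a particular run, you can steer the
io framework back to ROMIO for that job; something along these lines should
work with the 3.1.x series (the executable and process count are just
placeholders):

  mpirun --mca io romio314 -np 24 ./your_test

Which component is selected by default depends on the file system and the
release, so forcing it explicitly avoids surprises.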

Thanks
Edgar 


> -Original Message-
> From: Dave Love [mailto:dave.l...@manchester.ac.uk]
> Sent: Tuesday, October 9, 2018 7:05 AM
> To: Gabriel, Edgar 
> Cc: Open MPI Users 
> Subject: Re: [OMPI users] ompio on Lustre
> 
> "Gabriel, Edgar"  writes:
> 
> > Hm, thanks for the report, I will look into this. I did not run the
> > romio tests, but the hdf5 tests are run regularly and with 3.1.2 you
> > should not have any problems on a regular unix fs. How many processes
> > did you use, and which tests did you run specifically? The main tests
> > that I execute from their parallel testsuite are testphdf5 and
> > t_shapesame.
> 
> Using OMPI 3.1.2, in the hdf5 testpar directory I ran this as a 24-core SMP 
> job
> (so 24 processes), where $TMPDIR is on ext4:
> 
>   export HDF5_PARAPREFIX=$TMPDIR
>   make check RUNPARALLEL='mpirun'
> 
> It stopped after testphdf5 spewed "Atomicity Test Failed" errors.


Re: [OMPI users] ompio on Lustre

2018-10-09 Thread Dave Love
"Gabriel, Edgar"  writes:

> Hm, thanks for the report, I will look into this. I did not run the
> romio tests, but the hdf5 tests are run regularly and with 3.1.2 you
> should not have any problems on a regular unix fs. How many processes
> did you use, and which tests did you run specifically? The main tests
> that I execute from their parallel testsuite are testphdf5 and
> t_shapesame.

Using OMPI 3.1.2, in the hdf5 testpar directory I ran this as a 24-core
SMP job (so 24 processes), where $TMPDIR is on ext4:

  export HDF5_PARAPREFIX=$TMPDIR
  make check RUNPARALLEL='mpirun'

It stopped after testphdf5 spewed "Atomicity Test Failed" errors.


Re: [OMPI users] error building Java api for openmpi-v4.0.x-201810090241-2124192 and openmpi-master-201810090329-e9e4d2a

2018-10-09 Thread Kawashima, Takahiro
I confirmed the patch resolved the issue with OpenJDK 11.
I've created a PR for the master branch.
I'll also create PRs for release branches.
Thanks to your bug report!

  https://github.com/open-mpi/ompi/pull/5870

Takahiro Kawashima,
Fujitsu

> Siegmar,
> 
> I think you are using Java 11, which changed the default HTML version of the javadoc output.
> I'll take a look. But downloading OpenJDK 11 takes time...
> Probably the following patch will resolve the issue.
> 
> 
> diff --git ompi/mpi/java/java/Comm.java ompi/mpi/java/java/Comm.java
> index 7d11db6601..f51c28c798 100644
> --- a/ompi/mpi/java/java/Comm.java
> +++ b/ompi/mpi/java/java/Comm.java
> @@ -653,7 +653,7 @@ public class Comm implements Freeable, Cloneable
>  
> /**
>  * Start a buffered mode, nonblocking send.
> -* Java binding of the MPI operation <tt>MPI_IBSEND</tt>.
> +* Java binding of the MPI operation {@code MPI_IBSEND}.
>  * @param buf   send buffer
>  * @param count number of items to send
>  * @param type  datatype of each item in send buffer
> 
> 
> Takahiro Kawashima,
> MPI development team,
> Fujitsu
> 
> > What version of Java are you using?
> > Could you type "java -version" and show the output?
> > 
> > Takahiro Kawashima,
> > Fujitsu
> > 
> > > today I've tried to build openmpi-v4.0.x-201810090241-2124192 and
> > > openmpi-master-201810090329-e9e4d2a on my "SUSE Linux Enterprise Server
> > > 12.3 (x86_64)" with Sun C 5.15, gcc 6.4.0, Intel icc 18.0.3, and Portland
> > > Group pgcc 18.4-0. Unfortunately, I get the following error for all seven
> > > versions (Sun C still cannot build master due to undefined references as
> > > I mentioned some days ago).
> > > 
> > > loki openmpi-v4.0.x-201810090241-2124192-Linux.x86_64.64_gcc 129 tail -20 
> > > log.make.Linux.x86_64.64_gcc
> > > Making all in java
> > > make[3]: Entering directory 
> > > '/export2/src/openmpi-4.0.0/openmpi-v4.0.x-201810090241-2124192-Linux.x86_64.64_gcc/ompi/mpi/java/java'
> > >   JAVAC    MPI.class
> > >   JAVADOC  doc
> > > Creating destination directory: "doc/"
> > > ../../../../../openmpi-v4.0.x-201810090241-2124192/ompi/mpi/java/java/Comm.java:656:
> > >  
> > > error: tag not supported in the generated HTML version: tt
> > >   * Java binding of the MPI operation <tt>MPI_IBSEND</tt>.
> > >  ^
> > > 1 error
> > > Makefile:2224: recipe for target 'doc' failed
> > > make[3]: *** [doc] Error 1
> > > make[3]: Leaving directory 
> > > '/export2/src/openmpi-4.0.0/openmpi-v4.0.x-201810090241-2124192-Linux.x86_64.64_gcc/ompi/mpi/java/java'
> > > Makefile:1743: recipe for target 'all-recursive' failed
> > > make[2]: *** [all-recursive] Error 1
> > > make[2]: Leaving directory 
> > > '/export2/src/openmpi-4.0.0/openmpi-v4.0.x-201810090241-2124192-Linux.x86_64.64_gcc/ompi/mpi/java'
> > > Makefile:3521: recipe for target 'all-recursive' failed
> > > make[1]: *** [all-recursive] Error 1
> > > make[1]: Leaving directory 
> > > '/export2/src/openmpi-4.0.0/openmpi-v4.0.x-201810090241-2124192-Linux.x86_64.64_gcc/ompi'
> > > Makefile:1896: recipe for target 'all-recursive' failed
> > > make: *** [all-recursive] Error 1
> > > loki openmpi-v4.0.x-201810090241-2124192-Linux.x86_64.64_gcc 130
> > > 
> > > 
> > > I would be grateful if somebody could fix the problem. Do you need anything
> > > else? Thank you very much for any help in advance.



Re: [OMPI users] error building Java api for openmpi-v4.0.x-201810090241-2124192 and openmpi-master-201810090329-e9e4d2a

2018-10-09 Thread Kawashima, Takahiro
Siegmar,

I think you are using Java 11, which changed the default HTML version of the javadoc output.
I'll take a look. But downloading OpenJDK 11 takes time...
Probably the following patch will resolve the issue.


diff --git ompi/mpi/java/java/Comm.java ompi/mpi/java/java/Comm.java
index 7d11db6601..f51c28c798 100644
--- a/ompi/mpi/java/java/Comm.java
+++ b/ompi/mpi/java/java/Comm.java
@@ -653,7 +653,7 @@ public class Comm implements Freeable, Cloneable
 
/**
 * Start a buffered mode, nonblocking send.
-* Java binding of the MPI operation <tt>MPI_IBSEND</tt>.
+* Java binding of the MPI operation {@code MPI_IBSEND}.
 * @param buf   send buffer
 * @param count number of items to send
 * @param type  datatype of each item in send buffer
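
If you want to try it before the PRs land, saving the diff above to a file
(the name comm-tt.patch is arbitrary) and applying it from the top of the
source tree should work:

  cd openmpi-v4.0.x-201810090241-2124192
  patch -p1 < comm-tt.patch

and then re-run make.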


Takahiro Kawashima,
MPI development team,
Fujitsu

> What version of Java are you using?
> Could you type "java -version" and show the output?
> 
> Takahiro Kawashima,
> Fujitsu
> 
> > today I've tried to build openmpi-v4.0.x-201810090241-2124192 and
> > openmpi-master-201810090329-e9e4d2a on my "SUSE Linux Enterprise Server
> > 12.3 (x86_64)" with Sun C 5.15, gcc 6.4.0, Intel icc 18.0.3, and Portland
> > Group pgcc 18.4-0. Unfortunately, I get the following error for all seven
> > versions (Sun C still cannot build master due to undefined references as
> > I mentioned some days ago).
> > 
> > loki openmpi-v4.0.x-201810090241-2124192-Linux.x86_64.64_gcc 129 tail -20 
> > log.make.Linux.x86_64.64_gcc
> > Making all in java
> > make[3]: Entering directory 
> > '/export2/src/openmpi-4.0.0/openmpi-v4.0.x-201810090241-2124192-Linux.x86_64.64_gcc/ompi/mpi/java/java'
> >   JAVAC    MPI.class
> >   JAVADOC  doc
> > Creating destination directory: "doc/"
> > ../../../../../openmpi-v4.0.x-201810090241-2124192/ompi/mpi/java/java/Comm.java:656:
> >  
> > error: tag not supported in the generated HTML version: tt
> >   * Java binding of the MPI operation <tt>MPI_IBSEND</tt>.
> >  ^
> > 1 error
> > Makefile:2224: recipe for target 'doc' failed
> > make[3]: *** [doc] Error 1
> > make[3]: Leaving directory 
> > '/export2/src/openmpi-4.0.0/openmpi-v4.0.x-201810090241-2124192-Linux.x86_64.64_gcc/ompi/mpi/java/java'
> > Makefile:1743: recipe for target 'all-recursive' failed
> > make[2]: *** [all-recursive] Error 1
> > make[2]: Leaving directory 
> > '/export2/src/openmpi-4.0.0/openmpi-v4.0.x-201810090241-2124192-Linux.x86_64.64_gcc/ompi/mpi/java'
> > Makefile:3521: recipe for target 'all-recursive' failed
> > make[1]: *** [all-recursive] Error 1
> > make[1]: Leaving directory 
> > '/export2/src/openmpi-4.0.0/openmpi-v4.0.x-201810090241-2124192-Linux.x86_64.64_gcc/ompi'
> > Makefile:1896: recipe for target 'all-recursive' failed
> > make: *** [all-recursive] Error 1
> > loki openmpi-v4.0.x-201810090241-2124192-Linux.x86_64.64_gcc 130
> > 
> > 
> > I would be grateful if somebody could fix the problem. Do you need anything
> > else? Thank you very much for any help in advance.



Re: [OMPI users] error building Java api for openmpi-v4.0.x-201810090241-2124192 and openmpi-master-201810090329-e9e4d2a

2018-10-09 Thread Kawashima, Takahiro
What version of Java are you using?
Could you type "java -version" and show the output?

Takahiro Kawashima,
Fujitsu

> today I've tried to build openmpi-v4.0.x-201810090241-2124192 and
> openmpi-master-201810090329-e9e4d2a on my "SUSE Linux Enterprise Server
> 12.3 (x86_64)" with Sun C 5.15, gcc 6.4.0, Intel icc 18.0.3, and Portland
> Group pgcc 18.4-0. Unfortunately, I get the following error for all seven
> versions (Sun C still cannot build master due to undefined references as
> I mentioned some days ago).
> 
> loki openmpi-v4.0.x-201810090241-2124192-Linux.x86_64.64_gcc 129 tail -20 
> log.make.Linux.x86_64.64_gcc
> Making all in java
> make[3]: Entering directory 
> '/export2/src/openmpi-4.0.0/openmpi-v4.0.x-201810090241-2124192-Linux.x86_64.64_gcc/ompi/mpi/java/java'
>   JAVAC    MPI.class
>   JAVADOC  doc
> Creating destination directory: "doc/"
> ../../../../../openmpi-v4.0.x-201810090241-2124192/ompi/mpi/java/java/Comm.java:656:
>  
> error: tag not supported in the generated HTML version: tt
>   * Java binding of the MPI operation <tt>MPI_IBSEND</tt>.
>  ^
> 1 error
> Makefile:2224: recipe for target 'doc' failed
> make[3]: *** [doc] Error 1
> make[3]: Leaving directory 
> '/export2/src/openmpi-4.0.0/openmpi-v4.0.x-201810090241-2124192-Linux.x86_64.64_gcc/ompi/mpi/java/java'
> Makefile:1743: recipe for target 'all-recursive' failed
> make[2]: *** [all-recursive] Error 1
> make[2]: Leaving directory 
> '/export2/src/openmpi-4.0.0/openmpi-v4.0.x-201810090241-2124192-Linux.x86_64.64_gcc/ompi/mpi/java'
> Makefile:3521: recipe for target 'all-recursive' failed
> make[1]: *** [all-recursive] Error 1
> make[1]: Leaving directory 
> '/export2/src/openmpi-4.0.0/openmpi-v4.0.x-201810090241-2124192-Linux.x86_64.64_gcc/ompi'
> Makefile:1896: recipe for target 'all-recursive' failed
> make: *** [all-recursive] Error 1
> loki openmpi-v4.0.x-201810090241-2124192-Linux.x86_64.64_gcc 130
> 
> 
> I would be grateful if somebody could fix the problem. Do you need anything
> else? Thank you very much for any help in advance.



[OMPI users] error building Java api for openmpi-v4.0.x-201810090241-2124192 and openmpi-master-201810090329-e9e4d2a

2018-10-09 Thread Siegmar Gross

Hi,

today I've tried to build openmpi-v4.0.x-201810090241-2124192 and
openmpi-master-201810090329-e9e4d2a on my "SUSE Linux Enterprise Server
12.3 (x86_64)" with Sun C 5.15, gcc 6.4.0, Intel icc 18.0.3, and Portland
Group pgcc 18.4-0. Unfortunately, I get the following error for all seven
versions (Sun C still cannot build master due to undefined references as
I mentioned some days ago).

loki openmpi-v4.0.x-201810090241-2124192-Linux.x86_64.64_gcc 129 tail -20 
log.make.Linux.x86_64.64_gcc

Making all in java
make[3]: Entering directory 
'/export2/src/openmpi-4.0.0/openmpi-v4.0.x-201810090241-2124192-Linux.x86_64.64_gcc/ompi/mpi/java/java'

  JAVAC    MPI.class
  JAVADOC  doc
Creating destination directory: "doc/"
../../../../../openmpi-v4.0.x-201810090241-2124192/ompi/mpi/java/java/Comm.java:656: 
error: tag not supported in the generated HTML version: tt

 * Java binding of the MPI operation <tt>MPI_IBSEND</tt>.
^
1 error
Makefile:2224: recipe for target 'doc' failed
make[3]: *** [doc] Error 1
make[3]: Leaving directory 
'/export2/src/openmpi-4.0.0/openmpi-v4.0.x-201810090241-2124192-Linux.x86_64.64_gcc/ompi/mpi/java/java'

Makefile:1743: recipe for target 'all-recursive' failed
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory 
'/export2/src/openmpi-4.0.0/openmpi-v4.0.x-201810090241-2124192-Linux.x86_64.64_gcc/ompi/mpi/java'

Makefile:3521: recipe for target 'all-recursive' failed
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory 
'/export2/src/openmpi-4.0.0/openmpi-v4.0.x-201810090241-2124192-Linux.x86_64.64_gcc/ompi'

Makefile:1896: recipe for target 'all-recursive' failed
make: *** [all-recursive] Error 1
loki openmpi-v4.0.x-201810090241-2124192-Linux.x86_64.64_gcc 130


I would be grateful if somebody could fix the problem. Do you need anything
else? Thank you very much for any help in advance.


Kind regards

Siegmar