The message is not coming from Open MPI but from the PMIx component.
Adding the following line to pmix-mca-params.conf should do the trick:
mca_base_component_show_load_errors=0
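For completeness, the same parameter can usually also be exported in the environment, assuming the standard MCA variable prefixes (OMPI_MCA_ for Open MPI, PMIX_MCA_ for PMIx):
export OMPI_MCA_mca_base_component_show_load_errors=0
export PMIX_MCA_mca_base_component_show_load_errors=0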
Cheers,
Gilles
On 2/12/2019 7:31 AM, Jingchao Zhang wrote:
>
> Hi,
>
>
> We have both psm and psm2 in
Hi,
We have both psm and psm2 interfaces on our cluster. Because show_load_errors has
defaulted to true since the v2.* series, we have been setting
"mca_base_component_show_load_errors=0" in the openmpi-mca-params.conf file to
suppress the load errors. But in version v3.1.3, this only works for the
Jingchao
From: users on behalf of Jeff Squyres
(jsquyres)
Sent: Thursday, March 16, 2017 8:46:30 AM
To: Open MPI User's List
Subject: Re: [OMPI users] openib/mpi_alloc_mem pathology
On Mar 16, 2017, at 10:37 AM, Jingchao Zhang wrote:
>
> One of my earlier replies includes the ba
Hi Jeff,
One of my earlier replies includes the backtraces of cp2k.popt process and the
problem points to MPI_ALLOC_MEM/MPI_FREE_MEM.
https://mail-archive.com/users@lists.open-mpi.org/msg30587.html
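For reference, a minimal sketch of the MPI_ALLOC_MEM/MPI_FREE_MEM pattern those backtraces point to (an illustrative reproducer, not code taken from cp2k):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    /* Repeatedly allocate and free memory through MPI; on RDMA networks
     * (e.g. openib) this exercises the registration path under discussion. */
    for (int i = 0; i < 1000; ++i) {
        void *buf = NULL;
        MPI_Alloc_mem(1 << 20, MPI_INFO_NULL, &buf);  /* 1 MiB per iteration */
        MPI_Free_mem(buf);
    }
    printf("done\n");
    MPI_Finalize();
    return 0;
}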
If that part of the code is commented out, is there another way for openmpi to
find that backt
Hi Hristo,
We have a similar problem here and I started a thread a few days ago.
https://mail-archive.com/users@lists.open-mpi.org/msg30581.html
Regards,
Jingchao
From: users on behalf of Iliev, Hristo
Sent: Wednesday, February 8, 2017 10:43:54 AM
To: users
last week with
a bunch of bug fixes.
> On Feb 7, 2017, at 3:07 PM, Jingchao Zhang wrote:
>
> Hi Tobias,
>
> Thanks for the reply. I tried both "export OMPI_MCA_mpi_leave_pinned=0" and
> "mpirun -mca mpi_leave_pinned 0" but still got the same behavior. Our O
bs.
kind regards,
Tobias Klöffel
On 02/06/2017 09:38 PM, Jingchao Zhang wrote:
Hi,
We recently noticed openmpi is using btl openib over self,sm for single node
jobs, which has caused performance degradation for some applications, e.g.
'cp2k'. For openmpi version 2.0.1, our test shows s
Hi,
We recently noticed openmpi is using btl openib over self,sm for single node
jobs, which has caused performance degradation for some applications, e.g.
'cp2k'. For openmpi version 2.0.1, our test shows a single-node 'cp2k' job using
openib is ~25% slower than using self,sm. We advise users do
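As an illustration of the workaround being discussed (not a recommendation from this thread), the openib BTL can be excluded for a single-node run on the command line, e.g.:
mpirun --mca btl ^openib ./cp2k.popt
or, equivalently for a single node, restricted to shared memory and self:
mpirun --mca btl self,sm ./cp2k.popt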
Thank you! The patch fixed the problem. I did multiple tests with your program
and another application. No more process hangs!
Cheers,
Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400
From: users on behalf of r...@open
!! It is indeed a race condition on
the backend orted. I’ll try to fix it - probably have to send you a patch to
test?
On Aug 30, 2016, at 1:04 PM, Jingchao Zhang <zh...@unl.edu> wrote:
$mpirun -mca state_base_verbose 5 ./a.out < test.in
Please see attached for the outputs.
On Aug 30, 2016, at 11:40 AM, Jingchao Zhang <zh...@unl.edu> wrote:
I checked again and as far as I can tell, everything was setup correctly. I
added "HCC debug" to the output message to make sure it's the correct plugin.
The updated outputs:
$ mp
at the
front of the output message to ensure we are using the correct plugin?
This looks to me like you must be picking up a stale library somewhere.
On Aug 29, 2016, at 10:29 AM, Jingchao Zhang <zh...@unl.edu> wrote:
Hi Ralph,
I used the tarball from Aug 26 and added the pat
has cleared MPI_Init
Thanks,
Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400
From: users on behalf of r...@open-mpi.org
Sent: Saturday, August 27, 2016 12:31:53 PM
To: Open MPI Users
Subject: Re: [OMPI users] stdin issue w
bug_info2.txt
The new debug_info2.txt file is attached.
Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400
From: users on behalf of r...@open-mpi.org
Sent: Thursday, August 25, 2016 8:59:23 AM
To: Open MPI Users
Subject: Re
Hi Ralph,
I saw the pull request and did a test with v2.0.1rc1, but the problem persists.
Any ideas?
Thanks,
Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400
From: users on behalf of r...@open-mpi.org
Sent: Wednesday
iof_base_verbose 100 ./a.out < test.in &>
debug_info.txt
The debug_info.txt is attached.
Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400
From: users on behalf of r...@open-mpi.org
Sent: Wednesday, August 24, 201
x86_64 GNU/Linux
Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400
From: users on behalf of r...@open-mpi.org
Sent: Tuesday, August 23, 2016 8:14:48 PM
To: Open MPI Users
Subject: Re: [OMPI users] stdin issue with openmpi
has cleared MPI_Init
Rank 17 has cleared MPI_Init
Rank 18 has cleared MPI_Init
Rank 3 has cleared MPI_Init
then it just hung.
--Jingchao
Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400
From: users on behalf of r...@o
and each node with more than
10 cores.
Thank you for looking into this.
Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400
From: users on behalf of r...@open-mpi.org
Sent: Monday, August 22, 2016 10:23:42 PM
To: Open MPI
Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400
From: users on behalf of r...@open-mpi.org
Sent: Monday, August 22, 2016 3:04:50 PM
To: Open MPI Users
Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0
Well, I can try to find
.0.0/gcc/6.1.0/lib/libmpi.so.20
#8 0x005c5b5d in LAMMPS_NS::Input::file() () at ../input.cpp:203
#9 0x005d4236 in main () at ../main.cpp:31
Thanks,
Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400
From: u
policy=core
rmaps_base_mapping_policy=core
opal_cuda_support=0
btl_openib_use_eager_rdma=0
btl_openib_max_eager_rdma=0
btl_openib_flags=1
Thanks,
Jingchao
Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400
MCA rmaps: parameter "rmaps_rank_file_path" (current value:
"", data source: default, level: 5 tuner/detail, type: string, synonyms:
orte_rankfile)
759: MCA rmaps: parameter "rmaps_rank_file_physical" (current
value: "false", data source: default, level: 5 tuner/deta
"openmpi-mca-params.conf" file:
orte_hetero_nodes=1
hwloc_base_binding_policy=core
rmaps_base_mapping_policy=core
The above changes in v1.8.8 work great for other applications but break GAMESS. Does
anyone know how to resolve the conflict? Any suggestion is appreciated.
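One way to narrow the conflict down (an illustration, not something suggested in this thread) is to override the conf-file settings for a single job on the mpirun command line, which takes precedence over openmpi-mca-params.conf, e.g.:
mpirun --bind-to none ./your_gamess_job
where ./your_gamess_job is a placeholder for however GAMESS is launched.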
Thanks,
Dr. Jingchao
Thank you all. That's my oversight. I got a similar error with
"hwloc_base_binding_policy=core" so I thought it was the same.
Cheers,
Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400
From: user
will disappear in a future version of Open MPI.
Please update to the new syntax.
--
Did I miss something in the "openmpi-mca-params.conf" file?
Thanks,
Dr. Jingchao Zhang
Holland Computing Center
University
ns to resolve it?
Thanks,
Jingchao
Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400
From: users on behalf of Ralph Castain
Sent: Wednesday, December 16, 2015 1:52 PM
To: Open MPI Users
Subject: Re: [OMPI users] perfor
ith the cma and vader parameters but with no
luck.
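For reference, the cma/vader knobs presumably meant here can be set per run like this (the parameter name btl_vader_single_copy_mechanism is an assumption for this Open MPI version):
mpirun --mca btl self,vader --mca btl_vader_single_copy_mechanism cma ./a.out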
Thanks,
Jingchao
Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400
From: users on behalf of Gilles Gouaillardet
Sent: Tuesday, December 15, 2015 12:11 AM
To: Open MPI Us
ped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:45601859442 (42.4 GiB) TX bytes:45601859442 (42.4 GiB)
6. ulimit -l
unlimited
Please kindly let me know if more information is needed.
Thanks,
Jingchao
Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400