Hi John
1) As a diagnostic, you could check the actual per-process limits on
the nodes while that big job is running:
cat /proc/$PID/limits
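For example (just a sketch; "IMB-MPI1" below is an assumption, substitute
whatever your ranks' process name actually is), on one of the compute nodes:
# show the limits the running MPI ranks actually received
for pid in $(pgrep -f IMB-MPI1); do
    echo "== PID $pid =="
    grep -E 'locked memory|stack size|open files' /proc/$pid/limits
done
"Max locked memory" should come back as unlimited (or a suitably large
value) if the limits propagated correctly.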
2) If you're using a resource manager to launch the job,
the resource manager daemons (local to the nodes) may have to
set the memlock and other limits, so that the Open MPI processes
inherit them.
I use Torque, so I put these lines in the pbs_mom (Torque local daemon)
initialization script:
# pbs_mom system limits
# max file descriptors
ulimit -n 32768
# locked memory
ulimit -l unlimited
# stacksize
ulimit -s unlimited
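After restarting pbs_mom, a quick sanity check (sketch only, assuming
Torque's qsub) is to submit a trivial job that prints the limits the
batch environment actually hands out:
echo 'ulimit -l; ulimit -s; ulimit -n' | qsub -l nodes=1
and then look at the job's output file; memlock should show up as
unlimited.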
3) See also this FAQ entry about registered memory:
https://www.open-mpi.org/faq/?category=openfabrics#ib-low-reg-mem
I set those parameters in /etc/modprobe.d/mlx4_core.conf,
but where they're set may depend on the Linux distro/release and the
OFED version you're using.
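For what it's worth, the entry in /etc/modprobe.d/mlx4_core.conf is a
single "options" line; the values below are only an example (size them
per the FAQ, roughly max_reg_mem = (2^log_num_mtt) * (2^log_mtts_per_seg)
* PAGE_SIZE, aiming for about twice the node's physical RAM):
# example only -- pick values appropriate for your nodes
options mlx4_core log_num_mtt=24 log_mtts_per_seg=3
You can confirm what the running driver actually picked up with:
cat /sys/module/mlx4_core/parameters/log_num_mtt
cat /sys/module/mlx4_core/parameters/log_mtts_per_seg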
I hope this helps,
Gus Correa
On 06/15/2016 11:05 AM, Sasso, John (GE Power, Non-GE) wrote:
In testing with IMB, I find that a 4200+ core run of the Alltoall
test, with message lengths of 16..1024 bytes (per the -msglog 4:10
IMB option), fails with:
--------------------------------------------------------------------------
A process failed to create a queue pair. This usually means either
the device has run out of queue pairs (too many connections) or
there are insufficient resources available to allocate a queue pair
(out of memory). The latter can happen if either 1) insufficient
memory is available, or 2) no more physical memory can be registered
with the device.
For more information on memory registration see the Open MPI FAQs at:
http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages
Local host: node7106
Local device: mlx4_0
Queue pair type: Reliable connected (RC)
--------------------------------------------------------------------------
[node7106][[51922,1],0][connect/btl_openib_connect_oob.c:867:rml_recv_cb]
error in endpoint reply start connect
[node7106:06503] [[51922,0],0]-[[51922,1],0] mca_oob_tcp_msg_recv:
readv failed: Connection reset by peer (104)
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 6504 on
node node7106 exiting improperly. There are two reasons this could occur:
1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.
2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"
This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
Yes, these are ALL of the error messages. I did not get a message
about not being able to register enough memory. I verified that
log_num_mtt = 24 and log_mtts_per_seg = 0 (by catting their files
in /sys/module/mlx4_core/parameters and checking what is set in
/etc/modprobe.d/mlx4_core.conf). While such a large-scale job runs, I
run ‘vmstat 10’ to examine memory usage, but there appears to be a
good amount of memory still available and swap is never used. In
terms of settings in /etc/security/limits.conf:
* soft memlock unlimited
* hard memlock unlimited
* soft stack 300000
* hard stack unlimited
I don’t know if btl_openib_connect_oob.c or mca_oob_tcp_msg_recv are
clues, but I am now at a loss as to where the problem lies.
This is for an application using Open MPI 1.6.5, and the systems have
Mellanox OFED 3.1.1 installed.
--john