Dear Mark Abraham & all,

We tried other benchmark systems, such as d.dppc on 4 processors, but we see the same problem (one process uses about 100% CPU, the others 0%).
After a while we receive the following error:

Working directory is /localuser/armen/d.dppc
Running on host wn1.ysu-cluster.grid.am
Time is Fri Apr 22 13:55:47 AMST 2011
Directory is /localuser/armen/d.dppc
____START____
Start: Fri Apr 22 13:55:47 AMST 2011
p2_487:  p4_error: Timeout in establishing connection to remote process: 0
rm_l_2_500: (301.160156) net_send: could not write to fd=5, errno = 32
p2_487: (301.160156) net_send: could not write to fd=5, errno = 32
p0_32738:  p4_error: net_recv read:  probable EOF on socket: 1
p3_490: (301.160156) net_send: could not write to fd=6, errno = 104
p3_490:  p4_error: net_send write: -1
p3_490: (305.167969) net_send: could not write to fd=5, errno = 32
p0_32738: (305.371094) net_send: could not write to fd=4, errno = 32
p1_483:  p4_error: net_recv read:  probable EOF on socket: 1
rm_l_1_499: (305.167969) net_send: could not write to fd=5, errno = 32
p1_483: (311.171875) net_send: could not write to fd=5, errno = 32
Fri Apr 22 14:00:59 AMST 2011
End: Fri Apr 22 14:00:59 AMST 2011
____END____
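
The p4_error and net_send/net_recv lines above come from the MPICH p4 communication layer, so it may be worth checking whether MPI itself can start and connect processes across the nodes, independently of GROMACS. A minimal sketch, assuming an MPICH-style mpirun; the machinefile name and the path to the cpi test program are placeholders for your installation:

# Run the cpi example that ships with MPICH on 4 processes:
mpirun -np 4 -machinefile ./machines /path/to/mpich/examples/cpi
# If this also hangs or times out, the problem is in the MPI setup
# (node-to-node rsh/ssh access, firewall rules, the machinefile),
# not in GROMACS.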

We tried a newer version of GROMACS, but received the same error.
Please help us to overcome the problem.


With regards,
Hrach

On 4/22/11 1:41 PM, Mark Abraham wrote:
On 4/22/2011 5:40 PM, Hrachya Astsatryan wrote:
Dear all,

I would like to inform you that I have installed the GROMACS 4.0.7 package on the cluster (the nodes are 8-core Intel machines, OS: Scientific Linux based on RHEL4) with the following steps:

yum install fftw3 fftw3-devel
./configure --prefix=/localuser/armen/gromacs --enable-mpi
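For completeness, the configure step above is normally followed by the build and install. A minimal sketch, assuming the same prefix; adding --program-suffix=_mpi to the configure line is optional and only renames the MPI-enabled binary to mdrun_mpi:

make
make install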

I have also downloaded the gmxbench-3.0 package and tried to run d.villin to test it.
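
For reference, a minimal sketch of how such a benchmark is typically run on 4 processes, assuming the archive unpacks into a d.villin directory containing the usual grompp.mdp, conf.gro and topol.top inputs, and that the MPI-enabled mdrun is installed under the prefix above (shown here with an _mpi suffix; adjust the name to your install):

cd d.villin
/localuser/armen/gromacs/bin/grompp
mpirun -np 4 /localuser/armen/gromacs/bin/mdrun_mpi -v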

Unfortunately it only works fine for np = 1, 2 or 3; if I use more than 3 processes the CPU load balance is poor and the run hangs.

Could you please help me to overcome the problem?

Probably you have only four physical cores (hyperthreading is not normally useful), or your MPI is configured to use only four cores, or these benchmarks are too small to scale usefully.
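
A quick way to check the first two possibilities, as a sketch (the location of the machines file depends on how your MPI was installed):

grep -c ^processor /proc/cpuinfo   # logical CPUs the kernel reports on a node (includes hyperthreads)
# Also inspect the machinefile (or the MPICH share/machines.* file) that mpirun
# uses, to see how many processes per node it allows.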

Choosing to do a new installation of a GROMACS version that is several years old is normally less productive than installing the latest version.

Mark



