Hi Tiago,
if you switch off PME and suddenly your system scales, then the problems
are likely to result from bad MPI_Alltoall performance. Maybe this is
worth a check. If this is the case, there is a lot more information about
this in the paper "Speeding up parallel GROMACS on high-latency networks".
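
A quick way to run that check is to time MPI_Alltoall directly on the nodes in
question, outside of GROMACS. Below is a minimal sketch of such a
microbenchmark (not from this thread; the message size and repeat count are
arbitrary assumptions):

    /* Minimal MPI_Alltoall timing sketch; message size and repeat
     * count are assumed values, adjust to taste. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        const int count = 1024;   /* doubles sent to each rank */
        const int nrep  = 100;    /* repetitions to average over */
        double *sendbuf = malloc(nprocs * count * sizeof(double));
        double *recvbuf = malloc(nprocs * count * sizeof(double));
        for (int i = 0; i < nprocs * count; i++)
            sendbuf[i] = (double) rank;

        /* Synchronize, then time nrep all-to-all exchanges. */
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int r = 0; r < nrep; r++)
            MPI_Alltoall(sendbuf, count, MPI_DOUBLE,
                         recvbuf, count, MPI_DOUBLE, MPI_COMM_WORLD);
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("%d ranks: average MPI_Alltoall time %.3f ms\n",
                   nprocs, 1e3 * (t1 - t0) / nrep);

        free(sendbuf);
        free(recvbuf);
        MPI_Finalize();
        return 0;
    }

Compile with mpicc and run with, e.g., mpirun -np 32 across the GbE nodes.
If the per-call time jumps sharply once the run spans more than one node,
the all-to-all communication used by PME's parallel FFT is the likely
scaling bottleneck.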
Hi Carsten and Justin,
I am interrupting here because I tried the option you suggested.
I tried cut-off instead of PME as the coulombtype option; it runs well on
24 processors. I then tried 60 processors, and the following is the result
I am getting.
Result1: When tried for 50 ps of run on 24
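
For reference, the switch described above is essentially a one-line change in
the .mdp file. A minimal sketch of the relevant electrostatics settings follows
(the cut-off radii are assumed values, not ones taken from this thread):

    ; PME run
    coulombtype    = PME
    rcoulomb       = 1.0
    fourierspacing = 0.12

    ; plain cut-off run, for the scaling comparison only
    coulombtype    = Cut-off
    rcoulomb       = 1.4
    rvdw           = 1.4

Note that plain cut-off electrostatics is generally not an acceptable
replacement for PME in production runs; it is only useful here to isolate
whether the PME all-to-all communication is what limits scaling.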
We currently have no funds available to migrate to InfiniBand, but we will
in the future.
I thought about doing interface bonding, but I don't think that is really
the problem here; there must be something I'm missing, since most
applications scale well to 32 cores on GbE. I can't scale any