Re: Fwd: [gmx-users] Performance problems with more than one node

2008-09-26 Thread Carsten Kutzner
Hi Tiago, if you switch off PME and suddenly your system scales, then the problems are likely to result from bad MPI_Alltoall performance. Maybe this is worth a check. If this is the case, there's a lot more information about this in the paper Speeding up parallel GROMACS on high-latency networks.
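As a concrete illustration (not part of the original mail), the test only requires changing the electrostatics setting in the .mdp file and regenerating the .tpr; the radii below are placeholders, not recommended values:

    ; baseline: Particle-Mesh Ewald (its parallel 3D FFT relies on MPI_Alltoall)
    coulombtype     = PME
    fourierspacing  = 0.12
    rcoulomb        = 1.0

    ; scaling test: plain cut-off electrostatics (no parallel FFT, so no MPI_Alltoall)
    ; coulombtype   = Cut-off
    ; rcoulomb      = 1.4

If the cut-off run scales and the PME run does not, the all-to-all communication of the parallel FFT is the likely bottleneck on Gigabit Ethernet.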

Re: Fwd: [gmx-users] Performance problems with more than one node

2008-09-26 Thread vivek sharma
Hi Carsten and Justin, I am interrupting here as I tried the option you suggested. I used cut-off instead of PME as the coulombtype option and it runs well on 24 processors; then I tried 60 processors, and the following is the result I am getting. Result 1: when tried for a 50 ps run on 24
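For context, runs like these typically differ only in the processor count given to the MPI launcher; a rough sketch, with binary and file names as placeholders for whatever the local installation provides (GROMACS 4.x style; 3.3.x additionally needs grompp -np <nprocs>):

    # generate the run input once
    grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr

    # same .tpr, different core counts; compare the ns/day reported in each md.log
    mpirun -np 24 mdrun_mpi -s topol.tpr -deffnm run24
    mpirun -np 60 mdrun_mpi -s topol.tpr -deffnm run60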

Re: Fwd: [gmx-users] Performance problems with more than one node

2008-09-26 Thread Carsten Kutzner
vivek sharma wrote: Hi Carsten and Justin, I am interrupting here as I tried the option you suggested. I used cut-off instead of PME as the coulombtype option and it runs well on 24 processors; then I tried 60 processors, and the following is the result I am getting. Result 1: when tried for

Fwd: [gmx-users] Performance problems with more than one node

2008-09-25 Thread Tiago Marques
We currently have no funds available to migrate to InfiniBand, but we will in the future. I thought about doing interface bonding, but I don't think that is really the problem here; there must be something I'm missing, since most applications scale well to 32 cores on GbE. I can't scale any