Re: [gmx-users] GROMACS Parallel Runs

2006-10-06 Thread David van der Spoel

Re: [gmx-users] GROMACS Parallel Runs

2006-10-06 Thread Sunny

Re: [gmx-users] GROMACS Parallel Runs

2006-10-06 Thread David van der Spoel
Sunny wrote: [quoting Dallas B. Warren, Fri, 06 Oct 2006 09:35:31 +1000] > I have successfully run gmx on up to 128 cpus…

RE: [gmx-users] GROMACS Parallel Runs

2006-10-06 Thread Sunny
[quoting Dallas B. Warren, Fri, 06 Oct 2006 09:35:31 +1000] > I have successfully run gmx on up to 128 cpus. When I scale…

RE: [gmx-users] GROMACS Parallel Runs

2006-10-05 Thread Dallas B. Warren
> I have successfully run gmx on up to 128 cpus. When I scale to 256 cpus, the following error occurs. Does it mean that gmx can't be run on 256 nodes?
>
> Fatal error:
> could not find a grid spacing with nx and ny divisible by the number of nodes (256)

Isn't that just due to the…
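The divisibility requirement behind that fatal error can be sketched in a few lines. This is an illustrative reconstruction of the stated constraint, not code from GROMACS, and the function names are invented for this sketch:

```python
def grid_divisible(nx, ny, nnodes):
    """The condition behind the fatal error: both the x and y
    dimensions of the PME grid must be divisible by the node count."""
    return nx % nnodes == 0 and ny % nnodes == 0

def next_divisible(nx, nnodes):
    """Smallest grid dimension >= nx that nnodes divides evenly."""
    return ((nx + nnodes - 1) // nnodes) * nnodes

print(grid_divisible(64, 64, 256))   # a 64-point grid cannot be split 256 ways: False
print(next_divisible(64, 256))       # the next size that would pass: 256
```

So a run that works on 128 CPUs can still trip this check on 256 if the automatically chosen grid dimensions happen not to be multiples of 256.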

Re: [gmx-users] GROMACS Parallel Runs

2006-10-05 Thread Sunny
Hi Carsten, setting fourier_nx to a larger number does work. Thanks. Sunny…

Re: [gmx-users] GROMACS Parallel Runs

2006-10-03 Thread David van der Spoel
Sunny wrote: Hi Carsten, setting fourier_nx to a larger number does work. Thanks. Sunny…

Re: [gmx-users] GROMACS Parallel Runs

2006-10-03 Thread Sunny
Hi Carsten, setting fourier_nx to a larger number does work. Thanks. Sunny…
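The fix Sunny describes amounts to overriding the automatically chosen PME grid size in the .mdp file. The values below are hypothetical; the thread does not record the exact numbers Sunny used:

```
; illustrative .mdp excerpt (values invented for this example)
pme_order    = 4
fourier_nx   = 128   ; chosen so the node count divides it evenly,
fourier_ny   = 128   ; with at least pme_order/2 points per node
fourier_nz   = 128
```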

Re: [gmx-users] GROMACS Parallel Runs

2006-10-02 Thread David van der Spoel
Sunny wrote: Hi David, [quoting David van der Spoel, Mon, 02 Oct 2006 16:04:45 +0200, who quoted:] Hi all, Thanks for your proposed solutions…

Re: [gmx-users] GROMACS Parallel Runs

2006-10-02 Thread Sunny
Hi David, [quoting David van der Spoel, Mon, 02 Oct 2006 16:04:45 +0200, who quoted:] Sunny wrote: Hi all, Thanks for your proposed solutions…

Re: [gmx-users] GROMACS Parallel Runs

2006-10-02 Thread David van der Spoel
[quoting Carsten Kutzner, Mon, 02 Oct 2006 11:06:49 +0200] Hi, the current version of gmx requires at least pme_order/2 grid points per processor for the x-dimension of the pme grid. With pme_order=4 and fourier_nx=64 you end up with only one grid…
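Carsten's rule of thumb can be written out as a quick check. Again, this is a sketch of the constraint as stated in the thread, not GROMACS source code:

```python
def enough_pme_points(fourier_nx, nnodes, pme_order=4):
    """Each processor needs at least pme_order/2 grid points
    along x, per the requirement described in the thread."""
    return fourier_nx // nnodes >= pme_order // 2

print(enough_pme_points(64, 64))    # one point per CPU: too few -> False
print(enough_pme_points(128, 64))   # two points per CPU: enough  -> True
```

With pme_order=4 this means fourier_nx must be at least twice the number of processors, which is why fourier_nx=64 breaks at 64 CPUs and above.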

Re: [gmx-users] GROMACS Parallel Runs

2006-10-02 Thread Sunny
…in short time. I will contact the system support to see if they would do the update. Thanks, Sunny…

Re: [gmx-users] GROMACS Parallel Runs

2006-10-02 Thread Carsten Kutzner
Sunny wrote (Sun, 01 Oct 2006 19:58:48 +0200): Hi, I am running GROMACS 3.3.1 in parallel on an AIX supercomputing system. My simulation runs successfully on 16 and 32 CPUs (as well as fewer than 16). When running…

Re: [gmx-users] GROMACS Parallel Runs

2006-10-02 Thread David van der Spoel
Sunny wrote: [quoting David van der Spoel, Sun, 01 Oct 2006 19:58:48 +0200, who quoted:] Hi, I am using GROMACS 3.3.1 parallel runs on…

Re: [gmx-users] GROMACS Parallel Runs

2006-10-02 Thread Sunny
[quoting David van der Spoel, Sun, 01 Oct 2006 19:58:48 +0200, who quoted:] Sunny wrote: Hi, I am using GROMACS 3.3.1 parallel runs on…

Re: [gmx-users] GROMACS Parallel Runs

2006-10-01 Thread Mark Abraham
Sunny wrote: Hi, I am running GROMACS 3.3.1 in parallel on an AIX supercomputing system. My simulation runs successfully on 16 and 32 CPUs (as well as fewer than 16). When running on 64 CPUs, however, segmentation faults occur in multiple tasks from the very beginning of the simulation. I'd like…

Re: [gmx-users] GROMACS Parallel Runs

2006-10-01 Thread David van der Spoel
Sunny wrote: Hi, I am running GROMACS 3.3.1 in parallel on an AIX supercomputing system. My simulation runs successfully on 16 and 32 CPUs (as well as fewer than 16). When running on 64 CPUs, however, segmentation faults occur in multiple tasks from the very beginning of the simulation. I'd like…

[gmx-users] GROMACS Parallel Runs

2006-10-01 Thread Sunny
Hi, I am running GROMACS 3.3.1 in parallel on an AIX supercomputing system. My simulation runs successfully on 16 and 32 CPUs (as well as fewer than 16). When running on 64 CPUs, however, segmentation faults occur in multiple tasks from the very beginning of the simulation. I'd like to know what…