On Oct 25, 2013, at 4:07 PM, aixintiankong wrote:
> Dear prof.,
> I want to install GROMACS on a multi-core workstation with a GPU (Tesla C2075).
> Should I install OpenMPI or MPICH2?
If you want to run Gromacs on just one workstation with a single GPU, you do
not need to install an MPI library.
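As a sketch of what such a single-node run looks like (the .tpr name, thread counts, and GPU id below are placeholders, not from the original message), GROMACS 4.6's built-in thread-MPI covers this case without any external MPI library:

```shell
# Single workstation, one GPU: no external MPI library required.
# GROMACS builds its internal thread-MPI by default (cmake -DGMX_MPI=OFF).
# topol.tpr, the thread counts, and the GPU id are placeholders.
mdrun -s topol.tpr -ntmpi 1 -ntomp 8 -gpu_id 0
```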
--
gmx-users mailing list gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at
http://www.gromacs.org/Support/Ma
On 8/19/13 5:38 AM, grita wrote:
Hey guys,
Is it possible to run an SD simulation using the pull code in the GPU
version of Gromacs?
Have you tried it?
-Justin
--
==
Justin A. Lemkul, Ph.D.
Postdoctoral Fellow
Department of Pharmaceut
On 08/15/2013 11:21 AM, Jacopo Sgrignani wrote:
Dear Albert
to run parallel jobs on multiple GPUs you should use something like this:
mpirun -np (number of parallel sessions on CPU) mdrun_mpi .. -gpu_id
so you will have 4 calculations, one per GPU.
Jacopo
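A concrete form of Jacopo's command, as a hedged sketch: the file name and the 4-rank/4-GPU mapping below are illustrative, not from the original message:

```shell
# Four MPI ranks, one per GPU: the -gpu_id string "0123" assigns
# GPU 0 to rank 0, GPU 1 to rank 1, and so on (ids are per node).
mpirun -np 4 mdrun_mpi -s md.tpr -gpu_id 0123
```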
Hello:
I've got two GTX 690 GPUs in a workstation, and I compiled gromacs-4.6.3
with PLUMED and MPI support. I am trying to run some metadynamics with
mdrun with the command:
mdrun_mpi -s md.tpr -v -g md.log -o md.trr -x md.xtc -plumed
plumed2.dat -e md.edr
but mdrun can only use 1 GPU as indi
Hello. Have you removed periodicity? You may only be seeing
traversal of water molecules among copies of the periodic system.
Lucio Montero
Ph. D. student
Instituto de Biotecnologia, UNAM
Mexico
On 08/08/13 07:39, Ondrej Kroutil wrote:
Dear GMX users.
I have done a simulation of ions and water near a quartz surface
(ClayFF) using a GPU (GTX 580) and Gromacs (4.6.1, single precision, 64
bit, SSE4.1, fftw-3.3.3) and have observed strange behavior of the water
and ions. It's an NVT simulation with frozen surface atoms (see .mdp
below) and negat
Hi Richard,
Thank you for the help, and sorry for the delay in my reply.
I tried some test runs changing some parameters (e.g. removing PME) and I
was able to reach 20 ns/day, so I think that 9-11 ns/day is the max
that I can obtain for my setup.
Thank you again for your help.
cheers,
Fra
Hi all,
I'm working with a 200K-atom system (protein + explicit water), and
after a while using a CPU cluster I had to switch to a GPU cluster.
I read both the Acceleration and parallelization and the Gromacs-gpu
documentation pages
(http://www.gromacs.org/Documentation/Acceleration_and_parallelization
and
Hello:
I've installed Gromacs-4.6.2 on a GPU cluster with the following configuration:
CC=icc FC=ifort F77=ifort CXX=icpc
CMAKE_PREFIX_PATH=/export/intel/cmkl/include/fftw:/export/mpi/mvapich2-1.8-rhes6
cmake .. -DGMX_MPI=ON
-DCMAKE_INSTALL_PREFIX=/home/albert/install/gromacs -DGMX_GPU=ON
-DBUIL
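For comparison, a complete configure line of the same shape might look as follows; every path here is a placeholder for your own machine, and -DCUDA_TOOLKIT_ROOT_DIR is only needed when CUDA is in a non-default location:

```shell
# Sketch of a GROMACS 4.6 MPI+GPU build; adjust all paths to your system.
CC=icc CXX=icpc FC=ifort \
CMAKE_PREFIX_PATH=/opt/fftw:/opt/mvapich2 \
cmake .. \
    -DGMX_MPI=ON \
    -DGMX_GPU=ON \
    -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda \
    -DCMAKE_INSTALL_PREFIX=$HOME/software/gromacs-4.6.2
```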
On 6/25/13 6:33 PM, Dwey wrote:
Hi gmx-users,
I used an 8-core AMD CPU with a GTX 680 GPU (1536 CUDA cores) to
run an example of umbrella sampling provided by Justin.
I am happy that GPU acceleration indeed helped me reduce the computation
time significantly (from 34 hours to 7 hours) in this example.
However, I found th
On Sat, Jun 8, 2013 at 9:21 PM, Albert wrote:
Hello:
Recently I found a strange issue with Gromacs-4.6.2 on a GPU
workstation. On my GTX 690 machine, when I run MD production I found that
ECC is on. However, on my other GTX 590 machine, ECC was
off:
4 GPUs detected:
#0: NVIDIA GeForce GTX 590, compute cap.: 2.0, ECC:
Thanks, that's exactly what I was looking for.
Stephan
Sent: Tuesday, 4 June 2013, 22:28
From: "Justin Lemkul"
To: "Discussion list for GROMACS users"
Subject: Re: [gmx-users] GPU problem
On 6/4/13 3:52 PM, lloyd riggs wrote:
Dear All or anyone,
A stupid question. Is there a script anyone knows of to convert a 53a6ff from .top redirects to the gromacs/top directory to something like a ligand .itp? This would be useful at the moment. Example:
[bond]
6 7 2 gb_5
to
[bonds]
; ai aj fu
"-nt" is mostly a backward-compatibility option and sets the total
number of threads (per rank). Instead, you should set both "-ntmpi"
(or -np with MPI) and "-ntomp". However, note that unless a single
mdrun uses *all* cores/hardware threads on a node, it won't pin the
threads to cores. Failing to
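As an illustration of that advice, sketched for a hypothetical 16-core, 2-GPU node (file name, counts, and ids are not from the original message):

```shell
# Two thread-MPI ranks x 8 OpenMP threads = all 16 cores, so mdrun
# will pin threads; -pin on forces pinning even for partial-node runs.
mdrun -s topol.tpr -ntmpi 2 -ntomp 8 -pin on -gpu_id 01
```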
On 06/04/2013 11:22 AM, Chandan Choudhury wrote:
Hi Albert,
I think using -nt flag (-nt=16) with mdrun would solve your problem.
Chandan
thank you so much.
it works well now.
ALBERT
On Tue, Jun 4, 2013 at 12:56 PM, Albert wrote:
Dear:
I've got four GPUs in one workstation. I am trying to run two GPU jobs
with the commands:
mdrun -s md.tpr -gpu_id 01
mdrun -s md.tpr -gpu_id 23
There are 32 CPU cores in this workstation. I found that each job tries to
use the whole CPU, and there are 64 sub-jobs when these two GPU mdrun
submitte
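One way to keep two concurrent jobs off each other's cores, sketched for a 32-core, 4-GPU box like this one (file names are placeholders; -pinoffset is a GROMACS 4.6 mdrun option):

```shell
# Job 1: GPUs 0,1 on cores 0-15; job 2: GPUs 2,3 on cores 16-31.
# Two ranks per job because -gpu_id lists one GPU per PP rank.
mdrun -s md1.tpr -ntmpi 2 -ntomp 8 -pin on -pinoffset 0  -gpu_id 01 &
mdrun -s md2.tpr -ntmpi 2 -ntomp 8 -pin on -pinoffset 16 -gpu_id 23 &
wait
```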
the MB should I consider for such system ?
James
Dear Dr. Pali,
Thank you,
Stephan Watkins
Sent: Tuesday, 28 May 2013, 19:50
From: "Szilárd Páll"
To: "Discussion list for GROMACS users"
Subject: Re: Re: [gmx-users] GPU-based workstation
Dear all,
As far as I understand, the OP is interested in h
for 1 ns sim, but tried simple large 800 amino, 25,000 solvent
> eq (NVT or NPT) runs and they clock at around 1 hour real for say 50 ps
> eq's
>
> Stephan
>
> Sent: Saturday, 25 May 2013, 07:54
> From: "James Starlight"
> To: "Discussion
o the exact same performance the other person had.
Stephan
From: "James Starlight"
To: "Discussion list for GROMACS users"
Subject: Re: Aw: Re: [gmx-users] GPU-based workstation
Richard,
thanks for the suggestion!
Assuming that I'm using 2 high-end GeForces, which would give better performance:
1) one i7 (4 or 6 nodes)?
2) in case of 8 core
> Reply-To: Discussion users <gmx-users@gromacs.org>
> Date: Saturday, 25 May 2013 12:02
> To: Discussion users <gmx-users@gromacs.org>
> Subject: Aw: Re: [gmx-users] GPU-based workstation
>
> More RAM the better, and the best I have seen is 4 GPU work statio
E5
or core i7 would be a good choice.
Richard
From: lloyd riggs <lloyd.ri...@gmx.ch>
Reply-To: Discussion users <gmx-users@gromacs.org>
Date: Saturday, 25 May 2013 12:02
To: Discussion users <gmx-users@gromacs.org>
Subject: Aw: Re: [gmx-users] GPU-based works
Sent: Saturday, 25 May 2013, 07:54
From: "James Starlight"
To: "Discussion list for GROMACS users"
Subject: Re: [gmx-users] GPU-based workstation
Dear Dr. Watkins!
Thank you for the suggestions!
In the local shops I
wise they're supposedly supposed to have all the same CUDA-like libraries by the end of the summer (Portland Group, OpenCL), but I have heard the same for 2 years now.
Sincerely,
Stephan Watkins
Dear Gromacs Users!
I'd like to build a new workstation for performing simulations on GPU with
Gromacs 4.6 native CUDA support.
Recently I used such a setup with a Core i5 CPU and an NVIDIA GTX 670 card
and obtained good performance (~20 ns/day for a typical 60,000-atom system
with the SD integrator).
Now
the problem is still there...
:-(
On 04/29/2013 03:47 PM, Szilárd Páll wrote:
In that case, while it isn't very likely, the issue could be caused by
some implementation detail which aims to avoid performance loss caused
by an issue in the NVIDIA drivers.
Try running with the GMX_CUDA_STREAMSYNC environment variable set.
Btw, were there any other processes using the GPU whi
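Setting that variable is just an environment export before launching mdrun; a minimal sketch (the mdrun invocation itself is whatever you normally run):

```shell
# With GMX_CUDA_STREAMSYNC set, mdrun synchronizes with the GPU via
# cudaStreamSynchronize instead of the polling-based workaround wait.
export GMX_CUDA_STREAMSYNC=1
# ...then launch as usual, e.g.: mdrun -s topol.tpr
echo "GMX_CUDA_STREAMSYNC=$GMX_CUDA_STREAMSYNC"
```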
On 04/29/2013 03:31 PM, Szilárd Páll wrote:
The segv indicates that mdrun crashed and not that the machine was
restarted. The GPU detection output (both on stderr and log) should
show whether ECC is "on" (and so does the nvidia-smi tool).
Cheers,
--
Szilárd
yes it was on:
Reading file heavy.
On 04/28/2013 05:45 PM, Justin Lemkul wrote:
Frequent failures suggest instability in the simulated system. Check
your .log file or stderr for informative Gromacs diagnostic information.
-Justin
my log file didn't have any errors; the end of the log file shows something
like:
DD step 225
Hello:
Yes, I tried the CPU-only version; it runs well and didn't stop. I am
not sure whether I have ECC on or not. There are 4 Tesla K20s and one
GTX 650 in the workstation; after compilation, I simply submit the jobs
with the command:
mdrun -s md.tpr -gpu_id 0234
I submit the same system in a
Have you tried running on CPUs only just to see if the issue persists?
Unless the issue goes away with the same binary on the same
hardware running on CPUs only, I doubt it's a problem in the code.
Do you have ECC on?
--
Szilárd
On 4/28/13 11:27 AM, Albert wrote:
Dear:
I am running MD jobs on a workstation with 4 K20 GPUs and I found that
the jobs always fail from time to time with the following messages:
[tesla:03432] *** Process received signal ***
[tesla:03432] Signal: Segmentation fault (11)
[tesla:03432] Signal code: Address not mapped (1)
[tesla:0
Probably the part of the calculation done on the GPU is not rate limiting.
There's no point having four chefs to make one dish...
Look at the beginning and end of your .log files for diagnostic
information. If this is a single node, you should be using threadMPI, not
real MPI. Generally four CPU c
Dear:
I've got two GTX 690s in a workstation and I found that when I run the
MD production with either of the following two commands:
mpirun -np 4 mdrun_mpi
or
mpirun -np 2 mdrun_mpi
the efficiency is the same. I notice that GROMACS can detect 4 GPUs
(probably because the GTX 690 has two GPUs per card):
4
Szilárd -
First, many thanks for the reply.
Second, I am glad that I am not crazy.
OK, so based on your suggestions, I think I know what the problem is/was.
There was a sander process running on 1 of the CPUs. Clearly GROMACS was
trying to use 4 with "Using 4 OpenMP threads". I just did not catch
Hi Ben,
That performance is not reasonable at all - neither for a CPU-only run on
your quad-core Sandy Bridge, nor for the CPU+GPU run. For the latter you
should be getting more like 50 ns/day or so.
What's strange about your run is that the CPU-GPU load balancing is picking
a *very* long cut-off w
Good afternoon -
I recently installed gromacs-4.6 on CentOS6.3 and the installation went
just fine.
I have a Tesla C2075 GPU.
I then downloaded the benchmark directories and ran a benchmark on the
GPU/dhfr-solv-PME.bench set.
This is what I got:
Using 1 MPI thread
Using 4 OpenMP threads
1 GPU de
Hi Szilárd
Thanks for this tip; it was extremely useful. The problem was indeed the
incompatibility between the installed NVIDIA driver and the CUDA 5.0
runtime library. Installation of an older driver solved the problem. The
programs deviceQuery etc. can now detect the GPU.
GROMACS can also detec
The easiest solution is to kill Mac OS and switch to Linux.
;-)
Albert
On 03/01/2013 06:03 PM, Szilárd Páll wrote:
Hi George,
As I said before, that just means that most probably the GPU driver is not
compatible with the CUDA runtime (libcudart) that you installed with the
CUDA toolkit. I've no clue about the Mac OS installers and releases, you'll
have to do the research on that. Let us know if you have furthe
Hi Szilárd
Thanks for your reply. I have run the deviceQuery utility and what I got
back is
/deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
cudaGetDeviceCount returned 38
-> no CUDA-capable device is detected
Should I understand from this that the CUDA
Hi,
That looks like the driver does not work or is incompatible with the
runtime. Please get the SDK, compile a simple program, e.g. deviceQuery and
see if that works (I suspect that it won't).
Regarding your machines, just FYI, the Quadro 4000 is a pretty slow card
(somewhat slower than a GTX 46
Hello
We are trying to install the GPU version of GROMACS 4.6 on our own
Mac OS cluster. For the cluster nodes that have the NVIDIA Quadro 4000
cards:
- We have downloaded and installed the Mac OS X CUDA 5.0 Production Release
from here: https://developer.nvidia.com/cuda-downloads
placing the l
On 12/17/2012 08:06 PM, Justin Lemkul wrote:
It seems to me that the system is simply crashing like any other that
becomes unstable. Does the simulation run at all on plain CPU?
-Justin
Thank you very much Justin, it's really helpful. I've checked that the
structure after minimization and f
On 12/17/12 2:03 PM, Albert wrote:
well, that's one of the log files.
I've tried
VERSION 4.6-dev-20121004-5d6c49d
VERSION 4.6-beta1
VERSION 4.6-beta2
and the latest 5.0 from git.
The problems are the same. :-(
On Mon, Dec 17, 2012 at 6:01 PM, Albert wrote:
> hello:
>
> I reduced the GPU to two, and it said:
>
> Back Off! I just backed up nvt.log to ./#nvt.log.1#
> Reading file nvt.tpr, VERSION 4.6-dev-20121004-5d6c49d (single precision)
>
This is a development version from October 1. Please use the m
Hi Albert,
Thanks for the testing.
Last questions:
- What version are you using? Is it the beta2 release or the latest git? If it's
the former, getting the latest git might help if...
- Do you happen to be using GMX_GPU_ACCELERATION=None (you shouldn't!)?
A bug triggered only with this setting has bee
On 12/17/2012 06:08 PM, Szilárd Páll wrote:
Hi,
How about GPU emulation or CPU-only runs? Also, please try setting the
number of threads to 1 (-ntomp 1).
--
Szilárd
hello:
I am running in GPU emulation mode with the GMX_EMULATE_GPU=1 env. var
set (and to match closer the GPU setup with -nt
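The emulation setup described above can be sketched like this (the run-input name is a placeholder):

```shell
# GMX_EMULATE_GPU makes mdrun use the CPU reference non-bonded kernels
# as if a GPU were present; -ntomp 1 matches the GPU setup more closely.
export GMX_EMULATE_GPU=1
# ...then e.g.: mdrun -s topol.tpr -ntomp 1
echo "GMX_EMULATE_GPU=$GMX_EMULATE_GPU"
```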
hello:
I reduced the GPU to two, and it said:
Back Off! I just backed up nvt.log to ./#nvt.log.1#
Reading file nvt.tpr, VERSION 4.6-dev-20121004-5d6c49d (single precision)
NOTE: GPU(s) found, but the current simulation can not use GPUs
To use a GPU, set the mdp option: cutoff-scheme = Ve
Hi,
That unfortunately doesn't tell us exactly why mdrun is stuck. Can
you reproduce the issue on other machines or with different launch
configurations? At which step does it get stuck (-stepout 1 can help)?
Please try the following:
- try running on a single GPU;
- try running on CPUs o
hello:
I am running a GMX-4.6 beta2 GPU job on a 24-core workstation with
two GTX 590s; it got stuck there without any output, i.e. the .xtc file size
is always 0 after hours of running. Here is the md.log file I found:
Using CUDA 8x8x8 non-bonded kernels
Potential shift: LJ r^-12: 0.112 r^-6
Hi Thomas,
It looks like some gcc 4.7-s don't work with CUDA, although I've been using
various Ubuntu/Linaro versions, most recently 4.7.2 and had no
issues whatsoever. Some people seem to have bumped into the same problem
(see http://goo.gl/1onBz or http://goo.gl/JEnuk) and the suggested fix is
t
Correct, the C1060 does not have the compute capability 2.0 required for
GROMACS 4.6. We will not have the ability to support GPU cards of lower
capability in the future. Unfortunately, your only GROMACS options are
probably to use the OpenMM functionality in 4.5.x (which is still present
in 4.6,
Hi,
We've got a GPU cluster in our group and have really been looking forward to
running gromacs on it with full functionality. Unfortunately, it looks like our
NVIDIA Tesla C1060 cards aren't supported by the 4.6 beta. I was just wondering
if there was any chance that they would be support
> > gcc 4.7.2 is not supported by any CUDA version.
> >
>
> I suggest that you just fix it by editing include/host_config.h and
> changing the version check macro (line 82 AFAIK). I've never had real
> problems with using new and officially unsupported gcc versions; the version
> check is more of a
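A sketch of that edit. The version-check lines below are reproduced from memory of CUDA 5.0's host_config.h and may differ in your toolkit, so the sed is run here on a scratch copy; on a real system you would back up and edit the file under your CUDA install directory instead.

```shell
# Scratch copy of (roughly) what the CUDA 5.0 gcc version check looks like.
cat > host_config_demo.h <<'EOF'
#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ > 6)
#error -- unsupported GNU version! gcc 4.7 and up are not supported!
#endif
EOF
# Relax the minor-version bound so gcc 4.7 passes the check.
sed -i 's/__GNUC_MINOR__ > 6/__GNUC_MINOR__ > 7/' host_config_demo.h
grep -n '__GNUC_MINOR__' host_config_demo.h
```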
Hi Szilárd,
I was able to run code compiled with icc 13 on Fedora 17, but as I don't
> have Intel Compiler v13 on this machine I can't check it now.
>
> Please check if it works for you with gcc 4.7.2 (which is the default) and
> let me know if you succeed. The performance difference between icc a
On Mon, Nov 19, 2012 at 4:09 PM, Thomas Evangelidis wrote:
> Hi Szilárd,
>
> I compiled with the Intel compilers, not gcc. In case I am missing
> something, these are the versions I have:
>
Indeed, I see it now in the log file. Let me try with icc 13 and will get
back to you.
Hi Szilárd,
I compiled with the Intel compilers, not gcc. In case I am missing
something, these are the versions I have:
glibc.i686           2.15-57.fc17   @updates
glibc.x86_64         2.15-57.fc17   @updates
glibc-common.x86_64  2.15-57.fc17   @upda
Thomas & Albert,
We are unable to reproduce the issue on FC 17 with glibc 2.15-58 and gcc
4.7.2.
Please try to update your packages (you should have updates available for
glibc), try recompiling with the latest 4.6 code and report back whether
you succeed.
Cheers,
--
Szilárd
On Fri, Nov 16, 2
Hi Albert,
Apologies for hijacking your thread. Do you happen to have Fedora 17 as
well?
--
Szilárd
On Sun, Nov 4, 2012 at 10:55 AM, Albert wrote:
> hello:
>
> I am running Gromacs 4.6 GPU on a workstation with two GTX 660 Ti (2 x
> 1344 CUDA cores), and I got the following warnings:
>
> tha
Hi Thomas,
The output you get means that you don't have any of the macros we try to
use although your man pages seem to be referring to them. Hence, I'm really
clueless why is this happening. Could you please file a bug report on
redmine.gromacs.org and add both the initial output as well as my pa
On 11/15/12 9:53 AM, Thomas Evangelidis wrote:
Hi Szilárd,
This is the warning message I get this time:
WARNING: Oversubscribing the available -66 logical CPU cores with 1
thread-MPI threads.
This will cause considerable performance loss!
I have also attached the md.log file.
At
Hi Thomas,
Could you please try applying the attached patch (git apply
hardware_detect.patch in the 4.6 source root) and let me know what the
output is?
This should show which sysconf macro is used and what its return value is
as well as indicate if none of the macros are in fact defined by your
On 10 November 2012 03:21, Szilárd Páll wrote:
Hi,
You must have an odd sysconf version! Could you please check what is the
sysconf system variable's name in the sysconf man page (man sysconf) where
it says something like:
_SC_NPROCESSORS_ONLN
The number of processors currently online.
The first line should be one of the
fol
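The same count that GROMACS asks sysconf() for can be inspected from the shell, which is a quick way to see whether the macro behaves sanely on an affected machine (getconf is POSIX; the variable name matches the man-page entry above):

```shell
# Prints the number of processors currently online, i.e. the value of
# sysconf(_SC_NPROCESSORS_ONLN) that GROMACS uses for CPU detection.
getconf _NPROCESSORS_ONLN
```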
Hi,
I get these two warnings when I run the dhfr/GPU/dhfr-solv-PME.bench
benchmark with the following command line:
mdrun_intel_cuda5 -v -s topol.tpr -testverlet
"WARNING: Oversubscribing the available 0 logical CPU cores with 1
thread-MPI threads."
0 logical CPU cores? Isn't this bizarre? My C
The first warning indicates that you are starting more threads than the
hardware supports which would explain the poor performance.
Could you share a log file of the suspiciously slow run as well as the command
line you used to start mdrun?
Cheers,
--
Szilárd
On Sun, Nov 4, 2012 at 5:32 PM, Albert
well, I see.
The performance is rather poor compared to the GTX 590: 32 ns/day vs 4 ns/day.
Probably that's also related to the warnings?
Thanks
I also get the first warning ("oversubscribing the available...") and
see no obvious performance gain. Do you know how to avoid that?
thanks,
Thomas
On 11/4/12 4:55 AM, Albert wrote:
hello:
I am running Gromacs 4.6 GPU on a workstation with two GTX 660 Ti (2 x 1344
CUDA cores), and I got the following warnings:
thank you very much.
---messages---
WARNING: On node 0: oversubscribi