[gmx-users] Re: Gromacs-GPU benchmark test killed after exhausting the memory

2012-03-04 Thread Efrat Exlrod
Hi Szilard, 

Thanks for your reply. 
I used your script, and I think it does look like a memory leak. Please take a
look at the attached runchkmem.out.
Is it possible that this problem exists in version 4.5.5 and was fixed in the
version 4.6 you are using?
When will version 4.6 be released?

Thanks, Efrat

>Date: Fri, 2 Mar 2012 14:46:25 +0100
>From: Szilárd Páll
>Subject: Re: [gmx-users] Gromacs-GPU benchmark test killed after
>exhausting  the memory
>To: Discussion list for GROMACS users 
>
>Hi,
>
>First I thought that there might be a memory leak, which could have
>caused this if you ran for a really long time. However, I've just run
>the very same benchmark (dhfr with PME) for an hour, monitored the
>memory usage, and couldn't see any change whatsoever (see the attached
>plot).
>
>I've attached the script I used to monitor the memory usage; feel free
>to use it if you want to check again.
>
>Cheers,
>--
>Szilárd
>
>
>
>On Wed, Feb 29, 2012 at 9:50 AM, Efrat Exlrod  wrote:
>> Hi,
>>
>> I have Gromacs-GPU version 4.5.5 and a GTX 580.
>> I ran the dhfr-solv-PME benchmark test (see below) and my run was killed
>> after a couple of hours, exhausting all the computer memory, including the
>> swap (2 GB RAM + 4 GB swap).
>> Has anyone encountered this problem? What is wrong?
>>
>> Thanks, Efrat

[Attachment: runchkmem.out]
[Attachment: runchkmem_c.sh]
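
For reference, a minimal polling loop in the spirit of runchkmem_c.sh could
look like the following; this is a sketch only (the actual attachment is not
reproduced here, and the process name and sampling interval are assumptions):

  #!/bin/bash
  # Sample resident (RSS) and virtual (VSZ) memory of a running mdrun-gpu
  # process every 5 seconds until it exits; results go to runchkmem.out.
  PID=$(pidof mdrun-gpu)
  while kill -0 "$PID" 2>/dev/null; do
      echo "$(date +%s) $(ps -o rss=,vsz= -p "$PID")"
      sleep 5
  done > runchkmem.out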

[gmx-users] Gromacs-GPU benchmark test killed after exhausting the memory

2012-02-29 Thread Efrat Exlrod
Hi,

I have Gromacs-GPU version 4.5.5 and a GTX 580.
I ran the dhfr-solv-PME benchmark test (see below) and my run was killed after
a couple of hours, exhausting all the computer memory, including the swap
(2 GB RAM + 4 GB swap).
Has anyone encountered this problem? What is wrong?

Thanks, Efrat

> mdrun-gpu -device 
> "OpenMM:platform=Cuda,memtest=15,deviceid=0,force-device=yes" -deffnm md
 :-)  G  R  O  M  A  C  S  (-:

   Great Red Oystrich Makes All Chemists Sane

:-)  VERSION 4.5.5  (-:

Written by Emile Apol, Rossen Apostolov, Herman J.C. Berendsen,
  Aldert van Buuren, Pär Bjelkmar, Rudi van Drunen, Anton Feenstra,
Gerrit Groenhof, Peter Kasson, Per Larsson, Pieter Meulenhoff,
   Teemu Murtola, Szilard Pall, Sander Pronk, Roland Schulz,
Michael Shirts, Alfons Sijbers, Peter Tieleman,

   Berk Hess, David van der Spoel, and Erik Lindahl.

   Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2010, The GROMACS development team at
Uppsala University & The Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

 This program is free software; you can redistribute it and/or
  modify it under the terms of the GNU General Public License
 as published by the Free Software Foundation; either version 2
 of the License, or (at your option) any later version.

  :-)  mdrun-gpu  (-:

Option Filename  Type Description

  -s md.tpr  InputRun input file: tpr tpb tpa
  -o md.trr  Output   Full precision trajectory: trr trj cpt
  -x md.xtc  Output, Opt. Compressed trajectory (portable xdr format)
-cpi md.cpt  Input, Opt.  Checkpoint file
-cpo md.cpt  Output, Opt. Checkpoint file
  -c md.gro  Output   Structure file: gro g96 pdb etc.
  -e md.edr  Output   Energy file
  -g md.log  Output   Log file
-dhdl    md.xvg  Output, Opt. xvgr/xmgr file
-field   md.xvg  Output, Opt. xvgr/xmgr file
-table   md.xvg  Input, Opt.  xvgr/xmgr file
-tablep  md.xvg  Input, Opt.  xvgr/xmgr file
-tableb  md.xvg  Input, Opt.  xvgr/xmgr file
-rerun   md.xtc  Input, Opt.  Trajectory: xtc trr trj gro g96 pdb cpt
-tpi md.xvg  Output, Opt. xvgr/xmgr file
-tpid    md.xvg  Output, Opt. xvgr/xmgr file
 -ei md.edi  Input, Opt.  ED sampling input
 -eo md.edo  Output, Opt. ED sampling output
  -j md.gct  Input, Opt.  General coupling stuff
 -jo md.gct  Output, Opt. General coupling stuff
-ffout   md.xvg  Output, Opt. xvgr/xmgr file
-devout  md.xvg  Output, Opt. xvgr/xmgr file
-runav   md.xvg  Output, Opt. xvgr/xmgr file
 -px md.xvg  Output, Opt. xvgr/xmgr file
 -pf md.xvg  Output, Opt. xvgr/xmgr file
-mtx md.mtx  Output, Opt. Hessian matrix
 -dn md.ndx  Output, Opt. Index file
-multidir md     Input, Opt., Mult. Run directory

Option   Type   Value   Description
--
-[no]h   bool   no  Print help info and quit
-[no]version bool   no  Print version info and quit
-nice    int    0   Set the nicelevel
-deffnm  string md  Set the default filename for all file options
-xvg enum   xmgrace  xvg plot formatting: xmgrace, xmgr or none
-[no]pd  bool   no  Use particle decompostion
-dd  vector 0 0 0   Domain decomposition grid, 0 is optimize
-npme    int    -1  Number of separate nodes to be used for PME, -1
is guess
-ddorder enum   interleave  DD node order: interleave, pp_pme or cartesian
-[no]ddcheck bool   yes Check for all bonded interactions with DD
-rdd real   0   The maximum distance for bonded interactions with
DD (nm), 0 is determine from initial coordinates
-rcon    real   0   Maximum distance for P-LINCS (nm), 0 is estimate
-dlb enum   autoDynamic load balancing (with DD): auto, no or yes
-dds real   0.8 Minimum allowed dlb scaling of the DD cell size
-gcom    int    -1  Global communication frequency
-[no]v   bool   no  Be loud and noisy
-[no]compact bool   yes Write a compact log file
-[no]seppot  bool   no  Write separate V and dVdl terms for each
interaction type and node to the log file(s)
-pforce  real   -1  Print all forces larger than this (kJ/mol nm)
-[no]reprod  bool   no  Try to avoid optimizations that affect binary
reproducibility
-cpt real   15  Checkpoint interval (minutes)
-[no]cpnum   bool   no  Keep and number checkpoint files
-[no]append  bool   yes Appen

[gmx-users] FW: Gromacs-GPU benchmark test killed after exhausting the memory

2012-02-20 Thread Efrat Exlrod
-maxh    real   -1  Terminate after 0.99 times this time (hours)
-multi   int0   Do multiple simulations in parallel
-replex  int0   Attempt replica exchange every # steps
-reseed  int-1  Seed for replica exchange, -1 is generate a seed
-[no]ionize  bool   no  Do a simulation including the effect of an X-Ray
bombardment on your system
-device  string OpenMM:platform=Cuda,memtest=15,deviceid=0,force-device=yes 
 Device option
string


Back Off! I just backed up md.log to ./#md.log.1#
Reading file md.tpr, VERSION 4.5.5 (double precision)

Back Off! I just backed up md.edr to ./#md.edr.1#

WARNING: OpenMM does not support leap-frog, will use velocity-verlet integrator.


WARNING: OpenMM supports only Andersen thermostat with the md/md-vv/md-vv-avek 
integrators.


WARNING: OpenMM provides contraints as a combination of SHAKE, SETTLE and CCMA. 
Accuracy is based on the SHAKE tolerance set by the "shake_tol" option.


WARNING: Non-supported GPU selected (#0, GeForce GTX 580), forced 
continuing.Note, that the simulation can be slow or it migth even crash.


Pre-simulation ~15s memtest in progress...done, no errors detected
starting mdrun 'Protein in water'
-1 steps, infinite ps.
Killed

____
From: Efrat Exlrod
Sent: Monday, February 20, 2012 11:33 AM
To: gmx-users@gromacs.org
Subject: Gromacs-GPU benchmark test killed after exhausting the memory

Hi,

I have Gromacs-GPU version 4.5.5 and a GTX 580.
I ran the dhfr-solv-PME benchmark test (see below) and my run was killed after
a couple of hours, when it exhausted all the computer memory, including the
swap (2 GB + 4 GB swap).
Has anyone encountered this problem? What am I doing wrong?

Thanks, Efrat


[gmx-users] Installing GMX-GPU 4.5.5

2012-01-31 Thread Efrat Exlrod
Thanks for your reply.



I looked at this guide, but I still wonder: couldn't deleting the instances of
"-fexcess-precision=fast" affect performance? And why is this flag unrecognized?



Thanks, Efrat



>Date: Tue, 31 Jan 2012 21:50:53 +0300
>From: ?? ?? 
>Subject: Re: [gmx-users] Installing GMX-GPU 4.5.5
>To: Discussion list for GROMACS users 

>You should try this:
>http://verahill.blogspot.com/2012/01/debian-testing-64-wheezy-compiling_20.html

>2012/1/31 Efrat Exlrod :
>> Hi,
>>
>> I'm compiling GROMACS 4.5.5 with the gcc compiler (v4.5.3), CMake (2.8.7), and
>> OpenMM 3.1.1 on Linux (Red Hat release 5.7). I have followed the
>> installation instructions.
>>
>> The configuration seems to work well.
>>
>>> ~/progs/cmake-2.8.7/bin/cmake -DGMX_OPENMM=ON
>>> -DCUDA_TOOLKIT_ROOT_DIR:PATH=/opt/cuda
>>> -DCMAKE_C_COMPILER:FILEPATH=/private/gnss/local/bin/gcc
>>> -DCMAKE_INSTALL_PREFIX=/private/gnss/Gromacs_455
>>
>> But when I run make mdrun, I get the following error:
>>
>>>make mdrun
>> [  0%] Building NVCC (Device) object
>> src/kernel/gmx_gpu_utils/./gmx_gpu_utils_generated_memtestG80_core.cu.o
>> cc1plus: error: unrecognized command line option "-fexcess-precision=fast"
>> CMake Error at
>> CMakeFiles/gmx_gpu_utils_generated_memtestG80_core.cu.o.cmake:198 (message):
>>   Error generating
>>
>> /private/gnss/Gromacs_Install_455/gromacs-4.5.5/src/kernel/gmx_gpu_utils/./gmx_gpu_utils_generated_memtestG80_core.cu.o
>>
>>
>> make[3]: ***
>> [src/kernel/gmx_gpu_utils/./gmx_gpu_utils_generated_memtestG80_core.cu.o]
>> Error 1
>> make[2]: *** [src/kernel/gmx_gpu_utils/CMakeFiles/gmx_gpu_utils.dir/all]
>> Error 2
>> make[1]: *** [src/kernel/CMakeFiles/mdrun.dir/rule] Error 2
>> make: *** [mdrun] Error 2
>>
>> When I run make mdrun after deleting the two occurrences of
>> "-fexcess-precision=fast" from CMakeCache.txt, the compilation works.
>>
>> What could be the problem?
>>
>> Thanks, Efrat

[gmx-users] Installing GMX-GPU 4.5.5

2012-01-31 Thread Efrat Exlrod
Hi,

I'm compiling GROMACS 4.5.5 with the gcc compiler (v4.5.3), CMake (2.8.7), and
OpenMM 3.1.1 on Linux (Red Hat release 5.7). I have followed the installation
instructions.

The configuration seems to work well.

> ~/progs/cmake-2.8.7/bin/cmake -DGMX_OPENMM=ON 
> -DCUDA_TOOLKIT_ROOT_DIR:PATH=/opt/cuda 
> -DCMAKE_C_COMPILER:FILEPATH=/private/gnss/local/bin/gcc 
> -DCMAKE_INSTALL_PREFIX=/private/gnss/Gromacs_455

But when I run make mdrun, I get the following error:

>make mdrun
[  0%] Building NVCC (Device) object 
src/kernel/gmx_gpu_utils/./gmx_gpu_utils_generated_memtestG80_core.cu.o
cc1plus: error: unrecognized command line option "-fexcess-precision=fast"
CMake Error at 
CMakeFiles/gmx_gpu_utils_generated_memtestG80_core.cu.o.cmake:198 (message):
  Error generating
  
/private/gnss/Gromacs_Install_455/gromacs-4.5.5/src/kernel/gmx_gpu_utils/./gmx_gpu_utils_generated_memtestG80_core.cu.o


make[3]: *** 
[src/kernel/gmx_gpu_utils/./gmx_gpu_utils_generated_memtestG80_core.cu.o] Error 
1
make[2]: *** [src/kernel/gmx_gpu_utils/CMakeFiles/gmx_gpu_utils.dir/all] Error 2
make[1]: *** [src/kernel/CMakeFiles/mdrun.dir/rule] Error 2
make: *** [mdrun] Error 2

When I run make mdrun after deleting the two occurrences of
"-fexcess-precision=fast" from CMakeCache.txt, the compilation works.

What could be the problem?

Thanks, Efrat

[gmx-users] average pressure using g_energy

2011-12-26 Thread Efrat Exlrod
Hi,



We have run an NPT equilibration on a protein embedded in a membrane and
calculated the average pressure using g_energy. We then imported the resulting
xvg file into Excel and recalculated the average. The two numbers differed:
the GROMACS average pressure was 0.942858, while the one calculated by Excel
was -0.726106. Does g_energy compute its averages from a different set of
numbers than the ones it prints to the xvg file? If not, can anyone comment on
a potential source of this difference?
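
For what it's worth, the same recalculation can be done without Excel; a
minimal cross-check (a sketch, assuming the pressure values sit in column 2 of
an exported file named pressure.xvg):

  # Mean of column 2, skipping xmgrace header/comment lines (# and @)
  awk '!/^[#@]/ { sum += $2; n++ } END { if (n) printf "%.6f over %d frames\n", sum/n, n }' pressure.xvg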



Thanks, Efrat

[gmx-users] trjconv -pbc mol/nojump

2011-12-11 Thread Efrat Exlrod
Hi,



I have run a simulation of a large solute in a box of water. To look at the
simulation output, I used trjconv with and without the -pbc nojump option.



For example:

(1)  trjconv_d -s md_100ns.tpr -f md_100ns.xtc -o md_100ns_noPBC_pbcmol.pdb 
-pbc mol -ur compact

(2)  trjconv_d -s md_100ns.tpr -f md_100ns.xtc -o md_100ns_noPBC_nojump.pdb 
-pbc nojump



With -pbc mol the solute is broken into a few pieces, while the water
molecules are placed in a box with a few holes. With -pbc nojump the solute
looks reasonable, but the water molecules are scattered over a large radius
around the solute, and some of them are now broken (into OH or lone H).



I have tried many trjconv options and read the 'suggested trjconv workflow',
but I still can't obtain a reasonable, complete system after the simulation.
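
For concreteness, that workflow is usually run as separate passes; a sketch of
one common ordering (the intermediate file names are arbitrary, and trjconv
will prompt for groups, e.g. the solute for centering and System for output):

  # 1. make molecules whole across box boundaries
  trjconv_d -s md_100ns.tpr -f md_100ns.xtc -o md_whole.xtc -pbc whole
  # 2. remove jumps of molecules between frames
  trjconv_d -s md_100ns.tpr -f md_whole.xtc -o md_nojump.xtc -pbc nojump
  # 3. center on the solute and wrap into a compact unit cell
  trjconv_d -s md_100ns.tpr -f md_nojump.xtc -o md_final.pdb -center -pbc mol -ur compact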



How can I solve this problem?



Thanks, Efrat



[gmx-users] problem installing gromacs

2011-12-06 Thread Efrat Exlrod
Hi everyone,

I have installed GROMACS 4.5.5 following the online installation guide, both
without and with MPI.
My configure lines (after a few tries) were:

./configure CC=/private/gnss/local/bin/gcc CXX=/private/gnss/local/bin/g++ 
--prefix=/private/gnss/Gromacs_455 --disable-float --with-fft=fftw3 
--disable-shared --without-pic --program-suffix=_d


./configure CC=/private/gnss/local/bin/gcc CXX=/private/gnss/local/bin/g++ 
--prefix=/private/gnss/Gromacs_455 --disable-float --with-fft=fftw3 
--disable-shared --without-pic --program-suffix=_d_mpi --enable-mpi


The binary files were created for both configurations, but I get error
messages in the config.log files (see below), and I was wondering whether
these errors are critical.

error messages for the installation without mpi:
conftest.c:20:28: fatal error: ac_nonexistent.h: No such file or directory
conftest.c:20:28: fatal error: ac_nonexistent.h: No such file or directory
conftest.c:40:13: error: expected '=', ',', ';', 'asm' or '__attribute__' 
before 'a'
conftest.c:54:4: error: 'not' undeclared (first use in this function)
conftest.c:54:8: error: expected ';' before 'big'
conftest.cpp:43:28: fatal error: ac_nonexistent.h: No such file or directory
conftest.cpp:43:28: fatal error: ac_nonexistent.h: No such file or directory
conftest.c:86:20: fatal error: direct.h: No such file or directory
conftest.c:53:20: fatal error: direct.h: No such file or directory
conftest.c:71:1: error: void value not ignored as it ought to be
conftest.c:102:20: error: expected expression before ')' token
conftest.c:110:13: error: 'bool' undeclared (first use in this function)
conftest.c:119:21: error: expected expression before ')' token
conftest.c:89:20: error: expected expression before ')' token
conftest.c:90:20: error: expected expression before ')' token
conftest.c:91:27: error: expected expression before ')' token


error messages for the installation with mpi:
conftest.c:24:2: error: #error not catamount
| #error not catamount
conftest.c:23:28: fatal error: ac_nonexistent.h: No such file or directory
conftest.c:23:28: fatal error: ac_nonexistent.h: No such file or directory
conftest.c:24:13: error: expected '=', ',', ';', 'asm' or '__attribute__' 
before 'a'
conftest.c:38:4: error: 'not' undeclared (first use in this function)
conftest.c:38:8: error: expected ';' before 'big'
/private/gnss/local/bin/gcc: error trying to exec 
'/private/gnss/local/bin/x86_64-unknown-linux-gnu-gcc--I/opt/intel/impi/3.2.2.006/include64':
 execvp: No such file or directory
conftest.cpp:27:28: fatal error: ac_nonexistent.h: No such file or directory
conftest.cpp:27:28: fatal error: ac_nonexistent.h: No such file or directory
conftest.c:70:20: fatal error: direct.h: No such file or directory
conftest.c:37:20: fatal error: direct.h: No such file or directory
conftest.c:55:1: error: void value not ignored as it ought to be
conftest.c:86:20: error: expected expression before ')' token
conftest.c:94:13: error: 'bool' undeclared (first use in this function)
conftest.c:103:21: error: expected expression before ')' token
conftest.c:73:20: error: expected expression before ')' token
conftest.c:74:20: error: expected expression before ')' token
conftest.c:75:27: error: expected expression before ')' token

I saw some of these errors in the GROMACS mailing list archives but couldn't
tell whether they were eventually solved, and how.
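
In the meantime, a quick way to see whether the builds are usable despite the
config.log noise (a sketch, assuming make install completed under the prefix
above; -version is a standard mdrun option):

  # Both builds should at least start and report their version
  /private/gnss/Gromacs_455/bin/mdrun_d -version
  /private/gnss/Gromacs_455/bin/mdrun_d_mpi -version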

Thanks, Efrat


[gmx-users] gromacs 4.5.3 and GCC 4.1.x

2011-11-23 Thread Efrat Exlrod
Hi,



I want to install standard GROMACS 4.5.5 on a new Linux machine, prior to
installing the GPU-accelerated GROMACS. Looking at the installation
instructions, I see that you recommend against using the GCC 4.1.x series of
compilers.



A year ago I installed GROMACS 4.5.3, and I don't recall seeing the
recommendation against the GCC 4.1.x compilers. The gcc version on the
computer on which I installed it is:

/usr/bin/gcc -v ==> gcc version 4.1.2 20080704 (Red Hat 4.1.2-51).



Is there also a problem with GROMACS 4.5.3 and GCC 4.1.x? Should I have had
trouble installing or running?

Is it possible that I was able to compile and run, but cannot rely on the
results I got?



Thanks, Efrat

[gmx-users] setting working directory

2011-11-01 Thread Efrat Exlrod
Hi There!



Is it possible to run mdrun from a shared directory and set the working 
directory to a local directory on the computer on which it runs, in order to 
decrease NFS load?
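
As far as I know there is no mdrun option for this, so a common pattern is to
cd into node-local scratch before running and copy the results back afterwards;
a sketch (assuming a local /scratch filesystem exists on the compute node):

  #!/bin/bash
  # Stage input to node-local scratch, run there, copy results back to NFS.
  SHARED=$PWD                      # shared (NFS) job directory
  WORK=/scratch/$USER/md_$$        # node-local working directory (assumed path)
  mkdir -p "$WORK"
  cp "$SHARED"/md.tpr "$WORK"/
  cd "$WORK" || exit 1
  mdrun -deffnm md
  cp md.* "$SHARED"/               # trajectory, energies, log, checkpoint
  cd "$SHARED" && rm -rf "$WORK"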



Thanks, Efrat

[gmx-users] RE: Gromacs on GPU: GTX or Tesla?

2011-08-07 Thread Efrat Exlrod
Hi Szilard,



Thank you very much for your input.

At the moment we want to buy a single card (a GTX 580?) in order to gain
experience working with GPUs. In the future we will probably want a cluster,
especially once multiple GPU cards are supported by GROMACS.



Thanks, Efrat



Date: Thu, 4 Aug 2011 21:41:29 +0200
From: Szilárd Páll
Subject: Re: [gmx-users] RE: Gromacs on GPU: GTX or Tesla?
To: Discussion list for GROMACS users 

Hi,

Tesla cards won't give you much benefit when it comes to running the
current Gromacs, and I can tell you this much: that won't change in the
future either. The only advantages of the C20x0s are ECC and double
precision, which at the moment is not supported in Gromacs on GPUs anyway.

Gromacs in general doesn't use much memory, so unless you want to run
gigantic systems on a single card, the C2070 will not give you any added
benefit at all; in fact, 1-1.5 GB per card should be more than enough.

What kind of machine are you planning to buy/how many GPUs?

--
Szilárd


[gmx-users] Gromacs on GPU: GTX or Tesla?

2011-08-03 Thread Efrat Exlrod
Hi,



We want to start using GROMACS on GPUs, and we are debating whether to
purchase a GeForce GTX card or a Tesla platform. Looking at the published
data, it seems the GTX 580 has much more impressive performance than the
Tesla C2050, and of course it is much cheaper. Is there a reason to prefer
Tesla over GTX for running GROMACS in the short or long term?



Thanks, Efrat