[gmx-users] PLUMED2 official release
Dear GROMACS users and developers,

We are very pleased to announce that PLUMED 2 is available at www.plumed-code.org. Version 2.0 is a complete rewrite, so a complete list of differences with respect to PLUMED 1 is not practical. Here is a summary of the major ones:

- The input is simpler and more robust: many checks are now performed so that common errors are caught early.
- The units are now the same for all MD codes. If you want units different from the defaults, you can set them in the input file.
- The analysis tools are much more flexible; for example, it is now possible to write different collective variables with different frequencies.
- Many complex collective variables are considerably faster than in PLUMED 1, in particular all variables based on RMSD distances.
- Centers of mass can be used as if they were atoms. Hence, unlike PLUMED 1, you can use center-of-mass positions in ALL collective variables.
- The virial contribution is now computed and passed to the MD code, so PLUMED can be used to perform biased NPT simulations.

In addition, it is now much easier to contribute new functionality to the code because:

- There is a much simpler interface between PLUMED and the MD codes, which makes it much easier to add PLUMED to a new MD code.
- PLUMED 2 is written in object-oriented C++ and is fully compatible with the C++ standard library.
- PLUMED 2 has a modular structure.
- Extensive developer and user documentation is provided with the code.

While PLUMED 2 includes many more features than PLUMED 1, some others are still missing and will be implemented shortly. See the "How to" section of the user manual for guidance on moving from PLUMED 1 to PLUMED 2.
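As a concrete illustration of several points above (overriding the default units, using a center of mass as if it were an atom, and printing different variables at different frequencies), a minimal PLUMED 2 input might look like the following sketch; the keywords follow PLUMED 2 syntax, while the atom ranges, labels, and file names are invented for illustration:

```
# plumed.dat -- illustrative sketch only; atom indices and labels are made up
UNITS LENGTH=A ENERGY=kcal/mol       # override the default units
c1: COM ATOMS=1-22                   # a center of mass, usable like an atom
d1: DISTANCE ATOMS=c1,100            # COM positions work in any collective variable
g1: GYRATION ATOMS=1-22
PRINT ARG=d1 STRIDE=100 FILE=COLVAR_D   # two variables written at different strides
PRINT ARG=g1 STRIDE=500 FILE=COLVAR_G
```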
More information can be found in the PLUMED 2 paper, available on arXiv (http://arxiv.org/abs/1310.0980) and on the Computer Physics Communications website (http://www.sciencedirect.com/science/article/pii/S0010465513003196).

The PLUMED developers team

--
gmx-users mailing list gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
[gmx-users] Re: meta-dynamics in gromacs-4.6
Hi, yes, I can confirm that PLUMED will be available for gromacs-4.6! We are currently testing it.

Carlo

> Message: 6
> Date: Wed, 16 Jan 2013 12:54:11 -0500
> From: Michael Shirts
> Subject: Re: [gmx-users] meta-dynamics in gromacs-4.6
> To: Discussion list for GROMACS users
>
> I assume PLUMED will be implemented for Gromacs 4.6, as many PLUMED
> developers use Gromacs. Perhaps any PLUMED lurkers on the list can
> speak up . . .
>
> On Wed, Jan 16, 2013 at 9:20 AM, Mark Abraham wrote:
>> The GROMACS team has no plans for that. The usual problem here is that
>> everybody would like every algorithm included, but developers with
>> time and experience are scarce :-) It's an open source project though, so
>> anyone can do whatever they like. We're prepared to consider inclusions
>> into the main project.
>>
>> Note that we're doing a major rework of the code base to C++ in the next
>> year (or more), so people implementing new features may wish to consider
>> that in their choice of how to write code :-)
>>
>> Mark
>>
>> On Tue, Jan 15, 2013 at 2:26 PM, James Starlight wrote:
>>> Dear Gromacs developers!
>>>
>>> There is the well-known plugin PLUMED, which can be used to run
>>> metadynamics simulations in Gromacs-4.5. I wonder whether it would be
>>> possible to include some metadynamics options in the new Gromacs
>>> release (similar to the inclusion of essential dynamics sampling in
>>> previous Gromacs versions)?
>>>
>>> Thanks for attention,
>>> James
[gmx-users] Re: Build on OSX with 4.6beta1
Dear Szilárd,

My cmake version is 2.8.9. The error is:

Linking C shared library libgmx.dylib
Undefined symbols for architecture x86_64:
  "std::terminate()", referenced from:
      do_memtest(unsigned int, int, int) in libgpu_utils.a(gpu_utils_generated_gpu_utils.cu.o)
      memtestState::allocate(unsigned int) in libgpu_utils.a(gpu_utils_generated_memtestG80_core.cu.o)
  "typeinfo for int", referenced from:
      memtestState::allocate(unsigned int) in libgpu_utils.a(gpu_utils_generated_memtestG80_core.cu.o)
  "___cxa_allocate_exception", referenced from:
      memtestState::allocate(unsigned int) in libgpu_utils.a(gpu_utils_generated_memtestG80_core.cu.o)
  "___cxa_begin_catch", referenced from:
      memtestState::allocate(unsigned int) in libgpu_utils.a(gpu_utils_generated_memtestG80_core.cu.o)
  "___cxa_end_catch", referenced from:
      memtestState::allocate(unsigned int) in libgpu_utils.a(gpu_utils_generated_memtestG80_core.cu.o)
  "___cxa_throw", referenced from:
      memtestState::allocate(unsigned int) in libgpu_utils.a(gpu_utils_generated_memtestG80_core.cu.o)
  "___gxx_personality_v0", referenced from:
      Dwarf Exception Unwind Info (__eh_frame) in libgpu_utils.a(gpu_utils_generated_gpu_utils.cu.o)
      Dwarf Exception Unwind Info (__eh_frame) in libgpu_utils.a(gpu_utils_generated_memtestG80_core.cu.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [src/gmxlib/libgmx.6.dylib] Error 1
make[1]: *** [src/gmxlib/CMakeFiles/gmx.dir/all] Error 2
make: *** [all] Error 2

It is the same whether I compile with clang/clang++, with gcc-mp-4.7/g++-mp-4.7 (in order to get the OpenMP parallelisation that is not supported by clang), or with gcc/g++ (in the simplest fashion, just with cmake ../ -DGMX_CPU_ACCELERATION=SSE4.1). The code compiled with all the above compilers (with the standard gcc for the nvcc part) works and gives reproducible results on simple systems.
Again, in all the above cases it is enough to change the C compiler to a C++ compiler in src/gmxlib/CMakeFiles/gmx.dir/link.txt.

Best,
Carlo

> Message: 1
> Date: Mon, 3 Dec 2012 16:10:11 +0100
> From: Szilárd Páll
> Subject: Re: [gmx-users] Build on OSX with 4.6beta1
> To: Discussion list for GROMACS users
>
> Hi,
>
> I think this happens either because you have cmake 2.8.10 and the
> host-compiler gets double-set, or because something gets messed up when you
> use clang/clang++ with gcc as the CUDA host-compiler. Could you provide the
> exact error output you are getting as well as the cmake invocation? As I don't
> have access to such a machine myself, I'll need some help with figuring out
> what exactly is causing the trouble.
>
> Btw, the CUDA Mac OS X user guide states that you need to use "The gcc
> compiler and toolchain installed using Xcode" (
> http://docs.nvidia.com/cuda/cuda-getting-started-guide-for-mac-os-x/index.html
> ).
>
> Note that on Intel CPUs, when running on a single node, you'll get much
> better performance (up to +30%!) if you use only OpenMP-based
> parallelization - which is the default with up to 16 threads (on Intel).
>
> Cheers,
> --
> Szilárd
>
> On Fri, Nov 30, 2012 at 11:01 AM, Carlo Camilloni wrote:
>> Dear All,
>>
>> I have successfully compiled the beta1 of gromacs 4.6 on my MacBook Pro
>> with Mountain Lion. I used the latest CUDA and the clang/clang++ compilers
>> in order to have access to the AVX instructions.
>> mdrun works with great performance! great job!
>>
>> Two things:
>>
>> 1. The compilation was easy but not straightforward:
>> cmake ../ -DGMX_GPU=ON
>> -DCMAKE_INSTALL_PREFIX=/Users/carlo/Codes/gromacs-4.6/build-gpu
>> -DCMAKE_CXX_COMPILER=/usr/bin/clang++ -DCMAKE_C_COMPILER=/usr/bin/clang
>> -DCUDA_NVCC_HOST_COMPILER=/usr/bin/gcc -DCUDA_PROPAGATE_HOST_FLAGS=OFF
>>
>> and then I had to manually edit
>> src/gmxlib/CMakeFiles/gmx.dir/link.txt
>> and change clang to clang++.
>> (I noted that in many other places it was correctly set, and without this
>> change I got an error on some C++-related stuff.)
>>
>> 2. Is there any way to have OpenMP parallelisation on OSX?
>>
>> Best,
>> Carlo
[gmx-users] Re: Build on OSX with 4.6beta1
Hi Roland,

So, the problem is not fixed by changing to -DCUDA_NVCC_HOST_COMPILER=/usr/bin/g++; I still have to change clang to clang++ by hand in that single link.txt file. Everything works fine, though, when checking out the revision you suggested.

Best,
Carlo

> Message: 3
> Date: Fri, 30 Nov 2012 08:34:24 -0500
> From: Roland Schulz
> Subject: Re: [gmx-users] Build on OSX with 4.6beta1
> To: Discussion list for GROMACS users
>
> Hi,
>
> On Fri, Nov 30, 2012 at 5:01 AM, Carlo Camilloni wrote:
>>
>> 1. the compilation was easy but not straightforward:
>> cmake ../ -DGMX_GPU=ON
>> -DCMAKE_INSTALL_PREFIX=/Users/carlo/Codes/gromacs-4.6/build-gpu
>> -DCMAKE_CXX_COMPILER=/usr/bin/clang++ -DCMAKE_C_COMPILER=/usr/bin/clang
>> -DCUDA_NVCC_HOST_COMPILER=/usr/bin/gcc -DCUDA_PROPAGATE_HOST_FLAGS=OFF
>
> One more thing: could you try whether the problem is fixed with
> -DCUDA_NVCC_HOST_COMPILER=/usr/bin/g++ ?
>
> Roland
[gmx-users] Build on OSX with 4.6beta1
Dear All,

I have successfully compiled the beta1 of gromacs 4.6 on my MacBook Pro with Mountain Lion. I used the latest CUDA and the clang/clang++ compilers in order to have access to the AVX instructions. mdrun works with great performance! great job!

Two things:

1. The compilation was easy but not straightforward:

cmake ../ -DGMX_GPU=ON -DCMAKE_INSTALL_PREFIX=/Users/carlo/Codes/gromacs-4.6/build-gpu -DCMAKE_CXX_COMPILER=/usr/bin/clang++ -DCMAKE_C_COMPILER=/usr/bin/clang -DCUDA_NVCC_HOST_COMPILER=/usr/bin/gcc -DCUDA_PROPAGATE_HOST_FLAGS=OFF

and then I had to manually edit src/gmxlib/CMakeFiles/gmx.dir/link.txt and change clang to clang++. (I noted that in many other places it was correctly set, and without this change I got an error on some C++-related stuff.)

2. Is there any way to have OpenMP parallelisation on OSX?

Best,
Carlo
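The manual fix described above (swapping the C compiler for the C++ one in a single link.txt) can be sketched with sed; the file path and contents below are mocked up for illustration, not taken from a real GROMACS build tree:

```shell
# Mock up a link.txt like the one CMake generates, then apply the fix.
mkdir -p /tmp/gmx_link_fix
cat > /tmp/gmx_link_fix/link.txt <<'EOF'
/usr/bin/clang -o libgmx.6.dylib a.o b.o libgpu_utils.a
EOF

# Linking with the C++ driver pulls in the C++ runtime, so the CUDA
# objects' symbols (std::terminate, ___cxa_throw, ...) can resolve.
sed -i.bak 's|/usr/bin/clang |/usr/bin/clang++ |' /tmp/gmx_link_fix/link.txt
cat /tmp/gmx_link_fix/link.txt
```

Adding -lstdc++ to the C link line would presumably satisfy the same symbols; either way, in the builds reported here only this one file needed editing.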
[gmx-users] rlist and rlistlong in gmx451
Dear Gromacs Users and Developers,

I am testing gromacs-4.5.1 with different running parameters, in particular switching potentials and rlistlong. I have a question about these notes given by grompp when rlist=rvdw=rcoulomb (I thought it was rlistlong that should be longer than rvdw and rcoulomb):

NOTE 1 [file qua.mdp]:
  For energy conservation with switch/shift potentials, rlist should be 0.1 to 0.3 nm larger than rcoulomb.

NOTE 2 [file qua.mdp]:
  For energy conservation with switch/shift potentials, rlist should be 0.1 to 0.3 nm larger than rvdw.

I am using:

rlist           = 1.2
rlistlong       = 1.5
coulombtype     = pme-switch
rcoulomb-switch = 1.0
rcoulomb        = 1.2
vdw-type        = switch
rvdw-switch     = 1.0
rvdw            = 1.2

If rlist should be larger than rvdw, what's the use of rlistlong?

Thanks,
Carlo
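Following the grompp suggestion literally would mean enlarging rlist itself rather than rlistlong; the fragment below is one illustrative reading of the notes (rlist 0.2 nm beyond the switched cutoffs), not a recommendation taken from this thread:

```
; qua.mdp fragment -- illustrative values only
rlist           = 1.4    ; 0.2 nm larger than rvdw/rcoulomb, per the grompp notes
rlistlong       = 1.5    ; separate cutoff for the long-range (twin-range) list
coulombtype     = pme-switch
rcoulomb-switch = 1.0
rcoulomb        = 1.2
vdw-type        = switch
rvdw-switch     = 1.0
rvdw            = 1.2
```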
[gmx-users] Re: gmx-users Digest, Vol 58, Issue 33
Thank you, but is the "conserved energy quantity" written in the energy file compatible with the use of more than one temperature group? I ask because in my tests the conserved energy quantity seems to be conserved better when using only one temperature group. (I use PME for the electrostatics and v-rescale as the thermostat.)

Thanks,
Carlo

On Feb 5, 2009, at 11:02 AM, Carlo Camilloni wrote:
> Dear Gromacs users, I am doing some tests with thermostats and I would like to know if someone has already done this. In particular:
I have not done systematic tests, but here are my impressions:
> 1) How big does a protein have to be to be coupled to a separate bath?
A few residues ...
> 2) Do you know over which time scales a flux of heat is observed between protein and solvent using only one temperature group?
A few 10-100 ps.
[gmx-users] heat exchanges
Dear Gromacs users,

I am doing some tests with thermostats and I would like to know if someone has already done this. In particular:

1) How big does a protein have to be to be coupled to a separate bath?
2) Do you know over which time scales a flux of heat is observed between protein and solvent using only one temperature group?

Thanks,
Carlo Camilloni

--
Dr Carlo Camilloni
Department of Physics, University of Milano
via Celoria, 16 - 20133 Milano, Italy
phone: +39-02-50317654
carlo.camill...@mi.infn.it
http://merlino.mi.infn.it/~carlo
[gmx-users] RE: gromacs 4-rc2 and parallel tempering
Hi,

It was a problem with my cluster, in particular with the mvapich2 settings! Now it works fine with rc2 and rc3!

Sorry,
Carlo

On 9 Oct 2008, at 09:35, [EMAIL PROTECTED] wrote:
> Message: 4
> Date: Thu, 9 Oct 2008 09:07:58 +0200
> From: Berk Hess <[EMAIL PROTECTED]>
> Subject: RE: [gmx-users] gromacs 4-rc2 and parallel tempering
> To: Discussion list for GROMACS users
>
> Hi,
>
> With RC4 this works fine for me (both command lines). Nothing seems to
> have changed in the filename part of the code since RC2. Are you sure the
> mdrun_mpi in your path is the Gromacs 4 rc2 binary? (Try mdrun_mpi -h.)
>
> Berk
>
>> From: [EMAIL PROTECTED]
>> To: gmx-users@gromacs.org
>> Date: Wed, 8 Oct 2008 17:52:22 +0200
>> Subject: [gmx-users] gromacs 4-rc2 and parallel tempering
>>
>> Dear Gromacs Users and Developers,
>>
>> I tried to perform a parallel tempering simulation with gromacs 4 rc2,
>> using two replicas. I think the way of doing so is the same as in
>> gromacs 3, but it doesn't work. I have two tpr files, topol0.tpr and
>> topol1.tpr, and I run gromacs as:
>>
>> mpirun -np 2 mdrun_mpi -s topol.tpr -multi 2 -replex 100
>>
>> but it gives the error: cannot open file: topol.tpr
>>
>> If I change the command line to
>> mpirun -np 2 mdrun_mpi -s topol0.tpr -multi 2 -replex 100
>> or
>> mpirun -np 2 mdrun_mpi -s topol0.tpr topol1.tpr -multi 2 -replex 100
>> the error becomes: Fatal error: nothing to exchange with only one replica, ...
>>
>> Could you help me?
>>
>> Thank you,
>> Carlo Camilloni
[gmx-users] gromacs 4-rc2 and parallel tempering
Dear Gromacs Users and Developers,

I tried to perform a parallel tempering simulation with gromacs 4 rc2, using two replicas. I think the way of doing so is the same as in gromacs 3, but it doesn't work. I have two tpr files, topol0.tpr and topol1.tpr, and I run gromacs as:

mpirun -np 2 mdrun_mpi -s topol.tpr -multi 2 -replex 100

but it gives the error: cannot open file: topol.tpr

If I change the command line to
mpirun -np 2 mdrun_mpi -s topol0.tpr -multi 2 -replex 100
or
mpirun -np 2 mdrun_mpi -s topol0.tpr topol1.tpr -multi 2 -replex 100
the error becomes: Fatal error: nothing to exchange with only one replica, ...

Could you help me?

Thank you,
Carlo Camilloni

Carlo Camilloni
Department of Physics, University of Milano
via Celoria, 16 - 20133 Milano, Italy
phone: +39-02-50317654
[EMAIL PROTECTED]
http://merlino.mi.infn.it/~carlo
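The naming convention implied by this thread (and by Berk's reply that -s topol.tpr works) is that mdrun's -multi flag inserts the replica index before the .tpr extension of the -s argument. A small shell sketch of that name mangling, inferred from the thread rather than from the mdrun source:

```shell
# Derive the per-replica input name the way -multi appears to:
# insert the replica index before the .tpr extension of the -s argument.
multi_name() {
  base="${1%.tpr}"          # strip the extension if present
  echo "${base}${2}.tpr"    # re-append it after the replica index
}

multi_name topol.tpr 0      # topol0.tpr
multi_name topol.tpr 1      # topol1.tpr

# So a two-replica run needs topol0.tpr and topol1.tpr on disk, started as:
# mpirun -np 2 mdrun_mpi -s topol.tpr -multi 2 -replex 100
```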
[gmx-users] GroMeta v.2.0
Dear Gromacs Users,

After many months, we are finally upgrading the GROMETA code (metadynamics with GROMACS) to version 2.0, based on Gromacs 3.3.3. Major changes include new collective variables (including path variables) and the well-tempered algorithm of Barducci and Bussi. Many bugs have been corrected, the parser has been improved (thanks to Laio's group at SISSA), and the code is clearer and easier to extend with new variables.

If you are interested in trying it, you can download it, using the same userid and password, from http://lxmi.mi.infn.it/~provasi/grometa/. If you do not have an account yet, just send us an e-mail.

If you implement new features in the code, we'd like to have them included in our version too, so let us know! Also let us know if you find bugs or errors.

Carlo Camilloni and Davide Provasi
Physics Department, University of Milano
[gmx-users] GROMETA - Gromacs with Metadynamics, and more
Dear Gromacs users,

We have developed a GROMACS release with metadynamics. If you are interested in trying it, you can download it from http://lxmi.mi.infn.it/~provasi/grometa/. If you implement new features in it, we'd like to have them included in our version too, so let us know! Also let us know if you find bugs or errors.

Carlo Camilloni and Davide Provasi
Physics Department, University of Milano
via Celoria, 16
20133 Milan, Italy
[gmx-users] Re: Counter ion and PME
Thank you all. I will do some tests!

Carlo Camilloni
[gmx-users] Counter ion and PME
Dear Gromacs users,

I have read in the CSC course material that it is possible not to add counter ions to a charged system with PME electrostatics, because gromacs (3.3.1?) adds a uniform background charge to neutralize the system. Is this neutralization correct, and is it used in the literature?

Thanks for your help,
Carlo Camilloni