Re: [gmx-users] virtual sites

2013-10-29 Thread Roland Schulz
On Tue, Oct 29, 2013 at 2:21 AM, Neha Gandhi  wrote:

> Dear Users,
>
> I have a system consisting of peptides and a linear carbohydrate.
> Initially I tried to simulate these peptides using virtual sites and
> it worked. I can use pdb2gmx for building virtual sites on the protein,
> whereas I have an itp file for the carbohydrate.

Yes, you can modify the .top file by hand to combine .itp files generated in
two different ways; a minimal sketch follows.
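For example, a hand-assembled topol.top could look like this (file, force
field, and molecule names are hypothetical; the names under [ molecules ]
must match the [ moleculetype ] names inside the included itp files):

#include "charmm27.ff/forcefield.itp"
#include "peptide.itp"        ; written by pdb2gmx
#include "carbohydrate.itp"   ; your existing itp
#include "charmm27.ff/tip3p.itp"

[ system ]
Peptide plus carbohydrate in water

[ molecules ]
; name             count
Protein_chain_A    1
CARB               1
SOL                5000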


> Is it possible to
> apply virtual sites to a carbohydrate along with the peptides?
>
If you have an .rtp file for your carbohydrate, you should be able to run
pdb2gmx and have it generate the virtual sites as well (see the example
below). Or you can generate them using your own scripts. Of course, you only
need them if you have the respective groups (e.g. CH3).
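For example, something along these lines (untested; the pdb file, force
field, and water model are placeholders, and the carbohydrate rtp must be
present in the chosen force-field directory):

pdb2gmx -f complex.pdb -ff charmm27 -water tip3p -vsite hydrogens

Here -vsite hydrogens tells pdb2gmx to construct virtual sites for the
hydrogens; see pdb2gmx -h for the other -vsite options.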

Roland


>
> --
> Regards,
> Dr. Neha S. Gandhi,
> Curtin Research Fellow,
> School of Biomedical Sciences,
> Curtin University,
> Perth GPO U1987
> Australia
> LinkedIn
> Research Gate


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309


Re: [gmx-users] CHARMM36 force field available for GROMACS

2013-10-09 Thread Roland Schulz
Hi Justin,

are you guys planning anything to make pdb2gmx understand the CHARMM patch
residues? We have some Python scripts which generate new residues based on
the patch residues, which allows us to simulate branched molecules (e.g.
glycosylation or lignin). But that approach is very suboptimal, and I think
a more general approach would be very nice.

Roland


On Tue, Oct 8, 2013 at 4:16 PM, Justin Lemkul  wrote:

>
> All,
>
> I am pleased to announce the immediate availability of the latest CHARMM36
> force field in GROMACS format.  You can obtain the archive from our lab's
> website at http://mackerell.umaryland.edu/CHARMM_ff_params.html.
>
> The present version contains up-to-date parameters for proteins, nucleic
> acids, lipids, some carbohydrates, CGenFF version 2b7, and a variety of
> other small molecules.  Please refer to forcefield.doc, which contains a
> list of citations that describe the parameters, as well as the CHARMM
> force field files that were used to generate the distribution.
>
> We have validated the parameters by comparing energies of a wide variety of
> molecules within CHARMM and GROMACS and have found excellent agreement
> between the two.  If anyone has any issues or questions, please feel free
> to post them to this list or directly to me at the email address below.
>
> Happy simulating!
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Postdoctoral Fellow
>
> Department of Pharmaceutical Sciences
> School of Pharmacy
> Health Sciences Facility II, Room 601
> University of Maryland, Baltimore
> 20 Penn St.
> Baltimore, MD 21201
>
> jalem...@outerbanks.umaryland.edu | (410) 706-7441
>
> ==


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309


Re: [gmx-users] Re: mdrun segmentation fault for new build of gromacs 4.6.1

2013-06-09 Thread Roland Schulz
Hi,

Based on Mark's idea, I would have thought that the CPU detection would
already have failed during cmake. But it seems it detected SSE4.1
correctly.
Could you post the stack trace for the crash? (See the previous mail for
instructions.)

Roland

On Sun, Jun 9, 2013 at 4:42 PM, Amil Anderson  wrote:
> Roland,
>
> I have posted the cmake output (cmake-4.6.1) and the file CMakeError.log at
> the usual
>
> https://www.dropbox.com/sh/h6867f7ivl5pcl9/j9gt9CsVdP
>
> I see there are some errors but don't know what to make of them.
>
> Thanks,
> Amil
>
>
>
> --
> View this message in context: 
> http://gromacs.5086.x6.nabble.com/mdrun-segmentation-fault-for-new-build-of-gromacs-4-6-1-tp5008873p5008944.html
> Sent from the GROMACS Users Forum mailing list archive at Nabble.com.



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309


Re: [gmx-users] Re: mdrun segmentation fault for new build of gromacs 4.6.1

2013-06-07 Thread Roland Schulz
On Fri, Jun 7, 2013 at 2:06 PM, Mark Abraham  wrote:
> Running Amil's .tpr, the next output is from our CPU detection machinery.
> While the Xeon 5500 is not exactly new, AFAIK there should be no reason for
> the detection to fail. But the stack trace will tell.
Good point. Amil, please post the cmake output and
CMakeFiles/CMakeError.log; if that is the case, the CPU detection probably
already failed there as well.

Roland

>
> Mark
>
>
> On Fri, Jun 7, 2013 at 7:23 PM, Roland Schulz  wrote:
>
>> Amil,
>>
>> would it be possible for you to compile with a different compiler?
>> Ideally gcc 4.7.3. Alternatively, could you send us the stack trace for
>> when it crashes?
>> For that you need to:
>> - compile with debug: -DCMAKE_BUILD_TYPE=Debug
>> - run under gdb: "gdb --args /path/to/mdrun {your mdrun arguments}"
>> - enter "bt" after it crashes
>>
>> Roland
>>
>> On Fri, Jun 7, 2013 at 12:19 PM, Amil Anderson 
>> wrote:
>> > Roland,
>> >
>> > I have now run the regression test on my installation and it all fails
>> for
>> > runmd (segmentation fault).  Don't see that anything else is being
>> tested.
>> >
>> > I've placed the output of the make check  (Make_check.out) at
>> >
>> > https://www.dropbox.com/sh/h6867f7ivl5pcl9/j9gt9CsVdP
>> >
>> > I'll copy the first part here to give you a taste of it:
>> >
>> >
>> ---
>> > [softwaremgmt@warp2-login build]$ more make_check.out
>> > [ 64%] Built target gmx
>> > [ 64%] Built target gmxfftw
>> > [ 76%] Built target md
>> > [ 92%] Built target gmxana
>> > [ 92%] Built target editconf
>> > [ 97%] Built target gmxpreprocess
>> > [ 97%] Built target grompp
>> > [ 98%] Built target pdb2gmx
>> > [ 98%] Built target gmxcheck
>> > [100%] Built target mdrun
>> > [100%] Built target gmxtests
>> > Test project /shared/software/temp/gromacs-4.6.1/build
>> > Start 1: regressiontests/simple
>> > 1/5 Test #1: regressiontests/simple ...***Failed0.83 sec
>> > sh: line 1: 11622 Segmentation fault  mdrun -notunepme -table
>> ../table
>> > -tabl
>> > ep ../tablep > mdrun.out 2>&1
>> >
>> > Abnormal return value for ' mdrun-notunepme -table ../table -tablep
>> > ../table
>> > p >mdrun.out 2>&1' was 139
>> > No mdrun output files.
>> > FAILED. Check mdrun.out, md.log files in angles1
>> > sh: line 1: 11628 Segmentation fault  mdrun -notunepme -table
>> ../table
>> > -tabl
>> > ep ../tablep > mdrun.out 2>&1
>> >
>> > Abnormal return value for ' mdrun-notunepme -table ../table -tablep
>> > ../table
>> > p >mdrun.out 2>&1' was 139
>> > No mdrun output files.
>> > FAILED. Check mdrun.out, md.log files in angles125
>> > sh: line 1: 11637 Segmentation fault  mdrun -notunepme -table
>> ../table
>> > -tabl
>> > ep ../tablep -pd > mdrun.out 2>&1
>> >
>> > Abnormal return value for ' mdrun-notunepme -table ../table -tablep
>> > ../table
>> > p -pd >mdrun.out 2>&1' was 139
>> > No mdrun output files.
>> > FAILED. Check mdrun.out, md.log files in bham
>> >
>> > ...
>> >
>> > 0% tests passed, 5 tests failed out of 5
>> >
>> > Total Test time (real) =  29.58 sec
>> >
>> > The following tests FAILED:
>> >   1 - regressiontests/simple (Failed)
>> >   2 - regressiontests/complex (Failed)
>> >   3 - regressiontests/kernel (Failed)
>> >   4 - regressiontests/freeenergy (Failed)
>> >   5 - regressiontests/pdb2gmx (Failed)
>> >
>> 
>> >
>> > Amil
>> >
>> >
>> >
>> >
>> >
>> > --
>> > View this message in context:
>> http://gromacs.5086.x6.nabble.com/mdrun-segmentation-fault-for-new-build-of-gromacs-4-6-1-tp5008873p5008902.html
>> > Sent from the GROMACS Users Forum mailing list archive at Nabble.com.

Re: [gmx-users] Re: mdrun segmentation fault for new build of gromacs 4.6.1

2013-06-07 Thread Roland Schulz
Amil,

would it be possible for you to compile with a different compiler?
Ideally gcc 4.7.3. Alternatively, could you send us the stack trace for
when it crashes?
For that you need to:
- compile with debug: -DCMAKE_BUILD_TYPE=Debug
- run under gdb: "gdb --args /path/to/mdrun {your mdrun arguments}"
- enter "bt" after it crashes
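A concrete sequence might look like this (an untested sketch; the build-tree
path to mdrun and the tpr name are assumptions, so adjust to your setup):

cd build
cmake .. -DCMAKE_BUILD_TYPE=Debug
make -j 4 mdrun
gdb --args src/kernel/mdrun -s topol.tpr
(gdb) run
(gdb) bt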

Roland

On Fri, Jun 7, 2013 at 12:19 PM, Amil Anderson  wrote:
> Roland,
>
> I have now run the regression test on my installation and it all fails for
> runmd (segmentation fault).  Don't see that anything else is being tested.
>
> I've placed the output of the make check  (Make_check.out) at
>
> https://www.dropbox.com/sh/h6867f7ivl5pcl9/j9gt9CsVdP
>
> I'll copy the first part here to give you a taste of it:
>
> ---
> [softwaremgmt@warp2-login build]$ more make_check.out
> [ 64%] Built target gmx
> [ 64%] Built target gmxfftw
> [ 76%] Built target md
> [ 92%] Built target gmxana
> [ 92%] Built target editconf
> [ 97%] Built target gmxpreprocess
> [ 97%] Built target grompp
> [ 98%] Built target pdb2gmx
> [ 98%] Built target gmxcheck
> [100%] Built target mdrun
> [100%] Built target gmxtests
> Test project /shared/software/temp/gromacs-4.6.1/build
> Start 1: regressiontests/simple
> 1/5 Test #1: regressiontests/simple ...***Failed    0.83 sec
> sh: line 1: 11622 Segmentation fault  mdrun -notunepme -table ../table
> -tablep ../tablep > mdrun.out 2>&1
>
> Abnormal return value for 'mdrun -notunepme -table ../table -tablep
> ../tablep > mdrun.out 2>&1' was 139
> No mdrun output files.
> FAILED. Check mdrun.out, md.log files in angles1
> sh: line 1: 11628 Segmentation fault  mdrun -notunepme -table ../table
> -tablep ../tablep > mdrun.out 2>&1
>
> Abnormal return value for 'mdrun -notunepme -table ../table -tablep
> ../tablep > mdrun.out 2>&1' was 139
> No mdrun output files.
> FAILED. Check mdrun.out, md.log files in angles125
> sh: line 1: 11637 Segmentation fault  mdrun -notunepme -table ../table
> -tablep ../tablep -pd > mdrun.out 2>&1
>
> Abnormal return value for 'mdrun -notunepme -table ../table -tablep
> ../tablep -pd > mdrun.out 2>&1' was 139
> No mdrun output files.
> FAILED. Check mdrun.out, md.log files in bham
>
> ...
>
> 0% tests passed, 5 tests failed out of 5
>
> Total Test time (real) =  29.58 sec
>
> The following tests FAILED:
>   1 - regressiontests/simple (Failed)
>   2 - regressiontests/complex (Failed)
>   3 - regressiontests/kernel (Failed)
>   4 - regressiontests/freeenergy (Failed)
>   5 - regressiontests/pdb2gmx (Failed)
> 
>
> Amil
>
>
>
>
>
> --
> View this message in context: 
> http://gromacs.5086.x6.nabble.com/mdrun-segmentation-fault-for-new-build-of-gromacs-4-6-1-tp5008873p5008902.html
> Sent from the GROMACS Users Forum mailing list archive at Nabble.com.



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309


Re: [gmx-users] mdrun segmentation fault for new build of gromacs 4.6.1

2013-06-06 Thread Roland Schulz
Hi,

I recommend running the regression tests. The simplest way is to configure
GROMACS with cmake -DREGRESSIONTEST_DOWNLOAD=ON and then run make check.
See
http://www.gromacs.org/Documentation/Installation_Instructions#.c2.a7_4.12._Testing_GROMACS_for_correctness
for more details.
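In full, something like this (an untested sketch; adjust the job count to
your machine):

mkdir build
cd build
cmake .. -DREGRESSIONTEST_DOWNLOAD=ON
make -j 4
make check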

Roland

On Thu, Jun 6, 2013 at 4:56 PM, Amil G. Anderson wrote:
> Gromacs users:
>
> I have just built and installed gromacs-4.6.1 on my Xeon 5500 compute cluster 
> running Centos 5.  The installation was done with gcc 4.7.0
>
> I have run a simple test (the old tutor/gmxdemo) which fails at the first 
> mdrun step with a segmentation fault.  The command line for this step is:
>
> mdrun -nt 1 -s cpeptide_em -o cpeptide_em -c cpeptide_b4pr -v -debug 1
>
> where I have included the debug flag and have restricted the run to one core. 
>  The files associated with this run are located at:
>
> https://www.dropbox.com/sh/h6867f7ivl5pcl9/j9gt9CsVdP
>
>
> I have done a test build of gromacs-4.5.4 (version I have been running the 
> last year) with the same build environment as the 4.6.1 build, including 
> using cmake.  The rebuild of gromacs-4.5.4 runs the demo completely.
>
> Given the limited information for the run (segmentation fault seems to occur 
> just after reading in the parameters), I'm not sure how to further pursue the 
> source of this error.  I have also tried building gromacs-4.6.2 but have the 
> same error for mdrun.
>
> Thanks for any insight that you may be able to provide.
>
> Dr. Amil Anderson
> Associate Professor of Chemistry
> Wittenberg University
>
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309


Re: [gmx-users] compile Gromacs using Cray compilers

2013-05-20 Thread Roland Schulz
Hi,

I agree with Mark that it is probably not worth it. Why are you
interested in it? Are you working on the Cray compiler, or planning to
use some specific features?

You need to apply the patch here: https://gerrit.gromacs.org/#/c/2343/
to get it to compile. But even then it will still be much slower
than GCC (and untested, so make sure to run the tests). It would need
support for at least atomics and intrinsics to reach comparable speed.
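If you want to try it, the usual gerrit workflow is roughly the following
(the trailing patch-set number is an assumption; check the download link on
the gerrit page for the exact ref):

git fetch https://gerrit.gromacs.org/gromacs refs/changes/43/2343/1
git checkout FETCH_HEAD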

Roland

On Mon, May 20, 2013 at 7:01 PM, Mark Abraham  wrote:
> Hi,
>
> Fixing or working around Cray's compilers (whether atomic support or vector
> intrinsics) is really not worth any of our time if gcc+mpi is available.
>
> Mark
>
>
> On Tue, May 21, 2013 at 12:40 AM, Humayun Arafat  wrote:
>
>> Hi Roland,
>>
>> I can actually try to solve some issues with the Cray compiler if you can
>> give me some idea of what is needed.
>> Currently I get an error for atomic.h inside thread_mpi. I think it needs
>> intrinsics.
>> Maybe there are some others as well.
>>
>> Please let me know the issues.
>>
>> thanks
>>
>>
>>
>> On Mon, May 20, 2013 at 5:09 PM, Roland Schulz  wrote:
>>
>> > On Mon, May 20, 2013 at 1:23 PM, Humayun Arafat wrote:
>> > > I am only interested in GPU code.
>> > > Is there anyway I can just run this using cray compilers?
>> > Why do you want to use the cray compilers? As Mark said, those aren't
>> > supported. We reported the problems with them to Cray, but AFAIK they
>> > haven't fixed them.
>> > FYI, Cray systems also have other compilers installed. You usually
>> > switch to them with something like: "module swap PrgEnv-cray PrgEnv-gnu"
>> >
>> > Roland
>> >
>> > >
>> > > thanks
>> > >
>> > >
>> > > On Mon, May 20, 2013 at 12:06 PM, Szilárd Páll wrote:
>> > >
>> > >> The thread-MPI library provides the thread affinity setting
>> > >> functionality to mdrun, hence certain parts of it will always be
>> > >> compiled in, even with GMX_MPI=ON. Apparently, the Cray compiler does
>> > >> not like some of the thread-MPI headers. Feel free to file a bug
>> > >> report on redmine.gromacs.org, but *don't* expect it to get high
>> > >> priority (reasons below).
>> > >>
>> > >> FYI, the Cray compiler does not support SIMD intrinsics and therefore
>> > >> you can't use any of the SIMD accelerated code. Hence, the overall
>> > >> performance will be at least 3x lower than with a compiler that has
>> > >> decent SIMD intrinsics support.
>> > >>
>> > >> --
>> > >> Szilárd
>> > >>
>> > >>
>> > >> On Mon, May 20, 2013 at 6:16 PM, Humayun Arafat wrote:
>> > >> > Hi,
>> > >> >
>> > >> > I enabled mpich2 and used that to compile gromacs using this command
>> > >> > cmake   -DGMX_MPI=ON ..
>> > >> >
>> > >> > But I am still getting the error.
>> > >> >
>> > >> > [ 10%] Building C object
>> > >> > src/gmxlib/CMakeFiles/gmx.dir/gmx_thread_affinity.c.o
>> > >> > CC-20 craycc: ERROR File =
>> > >> > /home/users/mmm/gromacs/include/thread_mpi/atomic.h, Line = 202
>> > >> >   The identifier "tMPI_Thread_mutex_t" is undefined.
>> > >> >
>> > >> >   static tMPI_Thread_mutex_t tMPI_Atomic_mutex =
>> > >> > TMPI_THREAD_MUTEX_INITIALIZER;
>> > >> >
>> > >> >
>> > >> > It seems like even though it is disabling threadmpi, it is still
>> > making
>> > >> > some tests
>> > >> > I added some part of the cmake output.
>> > >> >
>> > >> > -- MPI is not compatible with thread-MPI. Disabling thread-MPI.
>> > >> > -- Checking for MPI_IN_PLACE
>> > >> > -- Performing Test MPI_IN_PLACE_COMPILE_OK
>> > >> > -- Performing Test MPI_IN_PLACE_COMPILE_OK - Success
>> > >> > -- Checking for MPI_IN_PLACE - yes
>> > >> > -- Checking for CRAY XT Catamount compile
>> > >> > -- Checking for CRAY XT Catamount target - no
>> > >> > CMake Warning at cmake/ThreadMPI.cmake:52 (message):
>> > >> >   Atomic operations not found for this CPU+compil

Re: [gmx-users] compile Gromacs using Cray compilers

2013-05-20 Thread Roland Schulz
On Mon, May 20, 2013 at 1:23 PM, Humayun Arafat  wrote:
> I am only interested in GPU code.
> Is there anyway I can just run this using cray compilers?
Why do you want to use the cray compilers? As Mark said, those aren't
supported. We reported the problems with them to Cray, but AFAIK they
haven't fixed them.
FYI, Cray systems also have other compilers installed. You usually
switch to them with something like: "module swap PrgEnv-cray PrgEnv-gnu"
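On a Cray XE6 the whole switch might look like this (an untested sketch; cc
and CC are the compiler wrappers provided by the PrgEnv module):

module swap PrgEnv-cray PrgEnv-gnu
CC=cc CXX=CC cmake .. -DGMX_MPI=ON
make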

Roland

>
> thanks
>
>
> On Mon, May 20, 2013 at 12:06 PM, Szilárd Páll wrote:
>
>> The thread-MPI library provides the thread affinity setting
>> functionality to mdrun, hence certain parts of it will always be
>> compiled in, even with GMX_MPI=ON. Apparently, the Cray compiler does
>> not like some of the thread-MPI headers. Feel free to file a bug
>> report on redmine.gromacs.org, but *don't* expect it to get high
>> priority (reasons below).
>>
>> FYI, the Cray compiler does not support SIMD intrinsics and therefore
>> you can't use any of the SIMD accelerated code. Hence, the overall
>> performance will be at least 3x lower than with a compiler that has
>> decent SIMD intrinsics support.
>>
>> --
>> Szilárd
>>
>>
>> On Mon, May 20, 2013 at 6:16 PM, Humayun Arafat  wrote:
>> > Hi,
>> >
>> > I enabled mpich2 and used that to compile gromacs using this command
>> > cmake   -DGMX_MPI=ON ..
>> >
>> > But I am still getting the error.
>> >
>> > [ 10%] Building C object
>> > src/gmxlib/CMakeFiles/gmx.dir/gmx_thread_affinity.c.o
>> > CC-20 craycc: ERROR File =
>> > /home/users/mmm/gromacs/include/thread_mpi/atomic.h, Line = 202
>> >   The identifier "tMPI_Thread_mutex_t" is undefined.
>> >
>> >   static tMPI_Thread_mutex_t tMPI_Atomic_mutex =
>> > TMPI_THREAD_MUTEX_INITIALIZER;
>> >
>> >
>> > It seems like even though it is disabling threadmpi, it is still making
>> > some tests
>> > I added some part of the cmake output.
>> >
>> > -- MPI is not compatible with thread-MPI. Disabling thread-MPI.
>> > -- Checking for MPI_IN_PLACE
>> > -- Performing Test MPI_IN_PLACE_COMPILE_OK
>> > -- Performing Test MPI_IN_PLACE_COMPILE_OK - Success
>> > -- Checking for MPI_IN_PLACE - yes
>> > -- Checking for CRAY XT Catamount compile
>> > -- Checking for CRAY XT Catamount target - no
>> > CMake Warning at cmake/ThreadMPI.cmake:52 (message):
>> >   Atomic operations not found for this CPU+compiler combination.  Thread
>> >   support will be unbearably slow: disable threads.  Atomic operations
>> > should
>> >   work on all but the most obscure CPU+compiler combinations; if your
>> system
>> >   is not obscure -- like, for example, x86 with gcc -- please contact the
>> >   developers.
>> > Call Stack (most recent call first):
>> >   cmake/ThreadMPI.cmake:100 (test_tmpi_atomics)
>> >   CMakeLists.txt:558 (include)
>> >
>> >
>> > thanks a lot for your help.
>> >
>> >
>> >
>> >> On Mon, May 20, 2013 at 10:34 AM, Mark Abraham wrote:
>> >
>> >> Specify an MPI compiler and use cmake -DGMX_MPI=on by itself. That is
>> >> mutually exclusive with thread mpi.
>> >>
>> >> Mark
>> >>
>> >>
>> >> On Mon, May 20, 2013 at 5:01 PM, Humayun Arafat wrote:
>> >>
>> >> > Hi,
>> >> >
>> >> > I tried different options.
>> >> >
>> >> > cmake -DGMX_MPI=ON/OFF ..
>> >> > cmake -DGMX_THREAD_MPI=OFF/ON ..
>> >> >
>> >> > I tried both of these two flags together. But none of them worked.
>> >> > The compilation fail in either gmx_omp.c or gmx_thread_affinity.c
>> >> >
>> >> > Can you please suggest me another way to turn off the thread mpi?
>> >> >
>> >> > Thanks
>> >> > Humayun
>> >> >
>> >> > On Fri, May 17, 2013 at 6:54 PM, Mark Abraham
>> >> > <mark.j.abra...@gmail.com> wrote:
>> >> >
>> >> > > Cray's compiler is largely/wholly untested. I'd suggest you use the
>> >> > version
>> >> > > of gcc that you know works.
>> >> > >
>> >> > > For use on a big cluster, you probably don't want Thread MPI anyway.
>> >> Does
>> >> > > cmake -DGMX_MPI work?
>> >> > >
>> >> > > Mark
>> >> > >
>> >> > >
>> >> > > On Sat, May 18, 2013 at 12:01 AM, Humayun Arafat wrote:
>> >> > >
>> >> > > > Hi,
>> >> > > >
>> >> > > >
>> >> > > > I need some help for the compilation of gromacs using Cray
>> >> > > compilers(CCE).
>> >> > > >
>> >> > > > I can compile gromacs using GNU compilers but not using CCE.
>> >> > > >
>> >> > > > I am using gromacs 4.6 and cmake 2.8.4 on Cray XE6
>> >> > > >
>> >> > > >
>> >> > > >
>> >> > > > After doing cmake, when I try to do make, I am getting this error.
>> >> > > >
>> >> > > >
>> >> > > >
>> >> > > > CC-20 craycc: ERROR File =
>> >> > > > /home/users/me/gromacs/include/thread_mpi/atomic.h, Line = 202
>> >> > > >
>> >> > > > The identifier "tMPI_Thread_mutex_t" is undefined.
>> >> > > >
>> >> > > > static tMPI_Thread_mutex_t tMPI_Atomic_mutex =
>> >> > > > TMPI_THREAD_MUTEX_INITIALIZER;
>> >> > > >
>> >> > > > ^
>> >> > > >
>> >> > > >
>> >> > > >
>> >> > > > Then I checked that the cmake  configuration had errors for this
>> >> > atomic.h
>> >> > > > header file.
>> >> > > >
>>

Re: [gmx-users] xtc2dcd conversion

2013-05-15 Thread Roland Schulz
Hi,

you can use catdcd: www.ks.uiuc.edu/Development/MDTools/catdcd/
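For a whole set of trajectories, a small shell loop should work (untested;
the -xtc input-format flag is assumed here, so check catdcd's usage output
to confirm):

for f in *.xtc; do
    catdcd -o "${f%.xtc}.dcd" -xtc "$f"
done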

Roland

On Wed, May 15, 2013 at 3:14 AM, James Starlight  wrote:
>
> Dear Gromacs users!
>
> I want to find a way to convert a set of GROMACS xtc trajectories to
> the DCD format.
>
> The only way I know of for such a conversion is VMD, but that is very
> tedious for a big set of xtc inputs.
>
>
> Thanks for help,
>
>
> James



--
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309


Re: [gmx-users] gromacs 4.6.1 on win7?

2013-04-02 Thread Roland Schulz
On Tue, Apr 2, 2013 at 5:40 AM, Mark Abraham wrote:

> IIRC, the default Cygwin gcc is too old to compile GROMACS, as discussed on
> this list some time in the last few months. I don't know how easy it is to
> get a new one via the Cygwin package system.
>
Cygwin has the gcc package, which is 3.4.4, and the gcc4 package, which
offers gcc 4.7.2. Installing the gcc4 package and telling cmake to use
gcc-4 as the compiler should fix it (not tested).
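Something along these lines (untested; the gcc4 package installs the
compilers as gcc-4 and g++-4):

cmake .. -DCMAKE_C_COMPILER=gcc-4 -DCMAKE_CXX_COMPILER=g++-4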

Roland


>
> Mark
>
> On Mon, Apr 1, 2013 at 5:03 PM, Justin Lemkul  wrote:
>
> > On Mon, Apr 1, 2013 at 8:58 AM, 라지브간디  wrote:
> >
> > > Dear gmx,
> > >
> > >
> > > I tried to install 4.6.1 version through cygwin and got following error
> > by
> > > using this command :
> > >
> > >
> > > CMake Error at CMakeLists.txt:811 (message):
> > >   Cannot find immintrin.h, which is required for AVX intrinsics
> support.
> > >   Consider switching compiler.
> > >
> > >
> > >
> > > When I use SSE4.1 i got this error :
> > > CMake Error at CMakeLists.txt:750 (message):  Cannot find smmintrin.h,
> > > which is required for SSE4.1 intrinsics support.
> > >
> > > Please, I need guidance to install it. Thanks.
> > >
> >
> > What compiler (and version) are you using?  Apparently whatever is
> > installed does not support the features that Gromacs thinks you should
> have
> > available.
> >
> > -Justin
> >
> > --
> >
> > 
> >
> > Justin A. Lemkul, Ph.D.
> > Research Scientist
> > Department of Biochemistry
> > Virginia Tech
> > Blacksburg, VA
> > jalemkul[at]vt.edu | (540)
> > 231-9080http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
> >
> > 


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309


Re: [gmx-users] Thread affinity setting failed

2013-03-06 Thread Roland Schulz
Hi Reid,

I just tested Gromacs 4.6.1 compiled with ICC 13 and GCC 4.1.2 on CentOS
5.6, and I don't have any problems with pinning. So it might be useful to
open a bug and provide more details, because it should work for CentOS 5.x.

Yes, for pure water the group kernels are faster than Verlet.
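As a side note on the "-pin off" workaround discussed below: it is just an
mdrun flag, so with the command line from this thread it would be, e.g.:

mdrun -v -ntmpi 8 -deffnm em -pin off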

Roland


On Wed, Mar 6, 2013 at 10:17 PM, Reid Van Lehn  wrote:

> Hi Szilárd,
>
> Thank you very much for the detailed write up. To answer your question,
> yes, I am using an old Linux distro, specifically CentOS 5.4, though
> upgrading to 5.9 still had the same problem. I have another few machines
> with different hardware running CentOS 6.3 which do not have this issue, so
> it is likely an operating system issue based on your description. As I'm
> (unfortunately...) also the sysadmin on this cluster, I'm unlikely to find
> the time to upgrade all the nodes, so I'll probably stick with the "-pin
> off" workaround for now. Hopefully this thread might help out other users!
>
> As an aside, I found that the OpenMP + Verlet combination was slower for
> this particular system, but I suspect that it's because it's almost
> entirely water and hence probably benefits from the Group scheme
> optimizations for water described on the Gromacs website.
>
> Thanks again for the explanation,
> Reid
>
> On Mon, Mar 4, 2013 at 3:45 PM, Szilárd Páll wrote:
>
> > Hi,
> >
> > There are some clarifications needed, and as this might help you and
> > others understand what's going on, I'll take the time to explain things.
> >
> > Affinity setting is a low-level, operating-system-level operation that
> > "locks" (="pins") threads to physical cores of the CPU, preventing the OS
> > from moving them, which can cause a performance drop - especially when
> > using OpenMP multithreading on multi-socket and NUMA machines.
> >
> > Now, mdrun will by default *try* to set affinity if you use all cores
> > detected (i.e. if mdrun can be sure that it is the only application
> > running on the machine), but will by default *not* set thread affinities
> > if the number of threads/processes per compute node is less than the
> > number of cores detected. Hence, when you decrease -ntmpi to 7, you
> > implicitly end up turning off thread pinning; that's why the warnings
> > don't show up.
> >
> > The fact that affinity setting fails on your machine suggests that either
> > the system libraries don't support this or the mdrun code is not fully
> > compatible with your OS; the type of CPU AFAIK doesn't matter at all. What
> > OS are you using? Is it an old installation?
> >
> > If you are not using OpenMP - which btw you probably should be, with the
> > Verlet scheme, if you are running on a single node or at high
> > parallelization - the performance will not be affected very much by the
> > lack of thread pinning. While the warnings themselves can often be safely
> > ignored, if only some of the threads/processes can't set affinities, this
> > might indicate a problem. In your case, if you were really seeing only 5
> > cores being used with 3 warnings, this might suggest that while the
> > affinity setting failed, three threads are already using "busy" cores
> > overlapping with others, which will cause a severe performance drop.
> >
> > What you can do to avoid the performance drop is to turn off pinning by
> > passing "-pin off" to mdrun. Without OpenMP this will typically not cause
> > a large performance drop compared to having correct pinning, and it will
> > avoid the bad overlapping threads/processes case.
> >
> > I suspect that your machines might be running an old OS which could be
> > causing the failed affinity setting. If that is the case, you should talk
> > to your sysadmins and have them figure out the issue. If you have a
> > moderately new OS, you should not be seeing such issues, so I suggest
> > that you file a bug report with details like: OS + version + kernel
> > version,
> > pthread library version, standard C library version.
> >
> > Cheers,
> >
> > --
> > Szilárd
> >
> >
> > > On Mon, Mar 4, 2013 at 1:45 PM, Mark Abraham wrote:
> >
> > > > On Mon, Mar 4, 2013 at 6:02 AM, Reid Van Lehn wrote:
> > >
> > > > Hello users,
> > > >
> > > > I ran into a bug I do not understand today upon upgrading from v.
> 4.5.5
> > > to
> > > > v 4.6. I'm using older 8 core Intel Xeon E5430 machines, and when I
> > > > submitted a job for 8 cores to one of the nodes I received the
> > following
> > > > error:
> > > >
> > > > NOTE: In thread-MPI thread #3: Affinity setting failed.
> > > >   This can cause performance degradation!
> > > >
> > > > NOTE: In thread-MPI thread #2: Affinity setting failed.
> > > >   This can cause performance degradation!
> > > >
> > > > NOTE: In thread-MPI thread #1: Affinity setting failed.
> > > >   This can cause performance degradation!
> > > >
> > > > I ran mdrun simply with the flags:
> > > >
> > > > mdrun -v -ntmpi 8 -deffnm em
> > > >
> > > > Using the top command, I confirmed that no other programs were
> running
> 

Re: [gmx-users] configure gromacs 4.6

2013-02-06 Thread Roland Schulz
On Wed, Feb 6, 2013 at 6:05 PM, jeela keel  wrote:

> cmake .. -DGMX_BUILD_OWN_FFTW=ON
>

Because you already built FFTW yourself, you don't need GMX_BUILD_OWN_FFTW.
That option means that GROMACS builds FFTW for you. Instead, simply specify
where you installed FFTW with CMAKE_PREFIX_PATH.
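For example (the install prefix is a placeholder for wherever you put your
FFTW build):

cmake .. -DCMAKE_PREFIX_PATH=/path/to/your/fftw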

Roland
-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] MPI oversubscription

2013-02-06 Thread Roland Schulz
On Wed, Feb 6, 2013 at 2:35 PM, Roland Schulz  wrote:

>
> On Tue, Feb 5, 2013 at 8:52 AM, Christian H. wrote:
>
>> Head of .log:
>>
>> Gromacs version:VERSION 5.0-dev-20121213-e1fcb0a-dirty
>>
>
> Is it on purpose that you are using version 5.0 and not 4.6? Unless you
> plan to do development, I suggest using 4.6 (git checkout release-4-6).
> I can reproduce your problem with 5.0. We haven't tested 5.0 much lately
> because we were so busy with 4.6.
>

If you want to use 5.0 you can take the version from here:
https://gerrit.gromacs.org/#/c/2132/. This fixes the problems.

Roland

 2013/2/5 Berk Hess 
>>
>> >
>> > OK, then this is an unhandled case.
>> > Strange, because I am also running OpenSUSE 12.2 with the same CPU, but
>> > use gcc 4.7.1.
>> >
>> > I will file a bug report on redmine.
>> > Could you also post the header of md.log which gives all configuration
>> > information?
>> >
>> > To make it work for now, you can insert immediately after #ifdef
>> > GMX_OPENMP:
>> > if (ret <= 0)
>> > {
>> > ret = gmx_omp_get_num_procs();
>> > }
>> >
>> >
>> > Cheers,
>> >
>> > Berk
>> >
>> > 
>> > > Date: Tue, 5 Feb 2013 14:27:44 +0100
>> > > Subject: Re: [gmx-users] MPI oversubscription
>> > > From: hypo...@googlemail.com
>> > > To: gmx-users@gromacs.org
>> > >
>> > > None of the variables referenced here are set on my system, the print
>> > > statements are never executed.
>> > >
>> > > What I did:
>> > >
>> > > printf("Checking which processor variable is set");
>> > > #if defined(_SC_NPROCESSORS_ONLN)
>> > > ret = sysconf(_SC_NPROCESSORS_ONLN);
>> > > printf("case 1 ret = %d\n",ret);
>> > > #elif defined(_SC_NPROC_ONLN)
>> > > ret = sysconf(_SC_NPROC_ONLN);
>> > > printf("case 2 ret = %d\n",ret);
>> > > #elif defined(_SC_NPROCESSORS_CONF)
>> > > ret = sysconf(_SC_NPROCESSORS_CONF);
>> > > printf("case 3 ret = %d\n",ret);
>> > > #elif defined(_SC_NPROC_CONF)
>> > > ret = sysconf(_SC_NPROC_CONF);
>> > > printf("case 4 ret = %d\n",ret);
>> > > #endif /* End of check for sysconf argument values */
>> > >
>> > > >From /etc/issue:
>> > > Welcome to openSUSE 12.2 "Mantis" - Kernel \r (\l)
>> > > >From uname -a:
>> > > Linux kafka 3.4.11-2.16-desktop #1 SMP PREEMPT Wed Sep 26 17:05:00 UTC
>> > 2012
>> > > (259fc87) x86_64 x86_64 x86_64 GNU/Linux
>> > >
>> > >
>> > >
>> > > 2013/2/5 Berk Hess 
>> > >
>> > > >
>> > > > Hi,
>> > > >
>> > > > This is the same cpu I have in my workstation and this case should
>> not
>> > > > cause any problems.
>> > > >
>> > > > Which operating system and version are you using?
>> > > >
>> > > > If you know a bit about programming, could you check what goes
>> wrong in
>> > > > get_nthreads_hw_avail
>> > > > in src/gmxlib/gmx_detect_hardware.c ?
>> > > > Add after the four "ret =" at line 434, 436, 438 and 440:
>> > > > printf("case 1 ret = %d\n",ret);
>> > > > and replace 1 by different numbers.
>> > > > Thus you can check if one of the 4 cases returns 0 or none of the
>> cases
>> > > > is called.
>> > > >
>> > > > Cheers,
>> > > >
>> > > > Berk
>> > > >
>> > > >
>> > > > 
>> > > > > Date: Tue, 5 Feb 2013 13:45:02 +0100
>> > > > > Subject: Re: [gmx-users] MPI oversubscription
>> > > > > From: hypo...@googlemail.com
>> > > > > To: gmx-users@gromacs.org
>> > > > >
>> > > > > >From the .log file:
>> > > > >
>> > > > > Present hardware specification:
>> > > > > Vendor: GenuineIntel
>> > > > > Brand: Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
>> > > > > Family: 6 Model: 42 Stepping: 7
>> > > > > Features: aes apic avx clfsh cmov cx8 cx16 htt lahf_lm mmx ms

Re: [gmx-users] MPI oversubscription

2013-02-06 Thread Roland Schulz
On Tue, Feb 5, 2013 at 8:52 AM, Christian H.  wrote:

> Head of .log:
>
> Gromacs version:VERSION 5.0-dev-20121213-e1fcb0a-dirty
>

Is it on purpose that you are using version 5.0 and not 4.6? Unless you plan
to do development, I suggest using 4.6 (git checkout release-4-6).
I can reproduce your problem with 5.0. We haven't tested 5.0 much lately
because we were so busy with 4.6.

> C compiler: /home/christian/opt/bin/mpicc GNU gcc (GCC) 4.8.0
>
It seems you are not using gcc 4.7.x as you said.


Roland


>  2013/2/5 Berk Hess 
>
> >
> > OK, then this is an unhandled case.
> > Strange, because I am also running OpenSUSE 12.2 with the same CPU, but
> > use gcc 4.7.1.
> >
> > I will file a bug report on redmine.
> > Could you also post the header of md.log which gives all configuration
> > information?
> >
> > To make it work for now, you can insert immediately after #ifdef
> > GMX_OPENMP:
> > if (ret <= 0)
> > {
> > ret = gmx_omp_get_num_procs();
> > }
> >
> >
> > Cheers,
> >
> > Berk
> >
> > 
> > > Date: Tue, 5 Feb 2013 14:27:44 +0100
> > > Subject: Re: [gmx-users] MPI oversubscription
> > > From: hypo...@googlemail.com
> > > To: gmx-users@gromacs.org
> > >
> > > None of the variables referenced here are set on my system, the print
> > > statements are never executed.
> > >
> > > What I did:
> > >
> > > printf("Checking which processor variable is set");
> > > #if defined(_SC_NPROCESSORS_ONLN)
> > > ret = sysconf(_SC_NPROCESSORS_ONLN);
> > > printf("case 1 ret = %d\n",ret);
> > > #elif defined(_SC_NPROC_ONLN)
> > > ret = sysconf(_SC_NPROC_ONLN);
> > > printf("case 2 ret = %d\n",ret);
> > > #elif defined(_SC_NPROCESSORS_CONF)
> > > ret = sysconf(_SC_NPROCESSORS_CONF);
> > > printf("case 3 ret = %d\n",ret);
> > > #elif defined(_SC_NPROC_CONF)
> > > ret = sysconf(_SC_NPROC_CONF);
> > > printf("case 4 ret = %d\n",ret);
> > > #endif /* End of check for sysconf argument values */
> > >
> > > >From /etc/issue:
> > > Welcome to openSUSE 12.2 "Mantis" - Kernel \r (\l)
> > > >From uname -a:
> > > Linux kafka 3.4.11-2.16-desktop #1 SMP PREEMPT Wed Sep 26 17:05:00 UTC
> > 2012
> > > (259fc87) x86_64 x86_64 x86_64 GNU/Linux
> > >
> > >
> > >
> > > 2013/2/5 Berk Hess 
> > >
> > > >
> > > > Hi,
> > > >
> > > > This is the same cpu I have in my workstation and this case should
> not
> > > > cause any problems.
> > > >
> > > > Which operating system and version are you using?
> > > >
> > > > If you know a bit about programming, could you check what goes wrong
> in
> > > > get_nthreads_hw_avail
> > > > in src/gmxlib/gmx_detect_hardware.c ?
> > > > Add after the four "ret =" at line 434, 436, 438 and 440:
> > > > printf("case 1 ret = %d\n",ret);
> > > > and replace 1 by different numbers.
> > > > Thus you can check if one of the 4 cases returns 0 or none of the
> cases
> > > > is called.
> > > >
> > > > Cheers,
> > > >
> > > > Berk
> > > >
> > > >
> > > > 
> > > > > Date: Tue, 5 Feb 2013 13:45:02 +0100
> > > > > Subject: Re: [gmx-users] MPI oversubscription
> > > > > From: hypo...@googlemail.com
> > > > > To: gmx-users@gromacs.org
> > > > >
> > > > > >From the .log file:
> > > > >
> > > > > Present hardware specification:
> > > > > Vendor: GenuineIntel
> > > > > Brand: Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
> > > > > Family: 6 Model: 42 Stepping: 7
> > > > > Features: aes apic avx clfsh cmov cx8 cx16 htt lahf_lm mmx msr
> > > > nonstop_tsc
> > > > > pcid pclmuldq pdcm popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3
> > tdt
> > > > > Acceleration most likely to fit this hardware: AVX_256
> > > > > Acceleration selected at GROMACS compile time: AVX_256
> > > > >
> > > > > Table routines are used for coulomb: FALSE
> > > > > Table routines are used for vdw: FALSE
> > > > >
> > > > >
> > > > > >From /proc/cpuinfo (8 entries like this in total):
> > > > >
> > > > > processor : 0
> > > > > vendor_id : GenuineIntel
> > > > > cpu family : 6
> > > > > model : 42
> > > > > model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
> > > > > stepping : 7
> > > > > microcode : 0x28
> > > > > cpu MHz : 1600.000
> > > > > cache size : 8192 KB
> > > > > physical id : 0
> > > > > siblings : 8
> > > > > core id : 0
> > > > > cpu cores : 4
> > > > > apicid : 0
> > > > > initial apicid : 0
> > > > > fpu : yes
> > > > > fpu_exception : yes
> > > > > cpuid level : 13
> > > > > wp : yes
> > > > > flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
> > > > > cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe
> > syscall nx
> > > > > rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl
> xtopology
> > > > > nonstop_tsc aperfmper
> > > > > f pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr
> > pdcm
> > > > pcid
> > > > > sse4_1 sse4_2 popcnt tsc_deadline_timer aes xsave avx lahf_lm ida
> > arat
> > > > epb
> > > > > xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid
> > > > > bogomips : 6784.04
> > > > >

Re: [gmx-users] MPI oversubscription

2013-02-05 Thread Roland Schulz
On Tue, Feb 5, 2013 at 8:58 AM, Berk Hess  wrote:

>
> One last thing:
> Maybe a macro is not set, but we can actually query the number of
> processors.
> Could you replace the conditional that gets triggered on my machine:
> #if defined(_SC_NPROCESSORS_ONLN)
> to
> #if 1
>
> So we can check if the actual sysconf call works or not?
>
> My workaround won't work without OpenMP.
> Did you disable that manually?
>
> Also large file support is not turned on.
> It seems like your build setup is somehow messed up and a lot of features
> are not found.
>

Could you post your CMakeFiles/CMakeError.log? That should show why those
features are disabled.

Roland


>
> Cheers,
>
> Berk
>
>
> 
> > Date: Tue, 5 Feb 2013 14:52:17 +0100
> > Subject: Re: [gmx-users] MPI oversubscription
> > From: hypo...@googlemail.com
> > To: gmx-users@gromacs.org
> >
> > Head of .log:
> >
> > Gromacs version: VERSION 5.0-dev-20121213-e1fcb0a-dirty
> > GIT SHA1 hash: e1fcb0a3d2768a8bb28c2e4e8012123ce773e18c (dirty)
> > Precision: single
> > MPI library: MPI
> > OpenMP support: disabled
> > GPU support: disabled
> > invsqrt routine: gmx_software_invsqrt(x)
> > CPU acceleration: AVX_256
> > FFT library: fftw-3.3.2-sse2
> > Large file support: disabled
> > RDTSCP usage: enabled
> > Built on: Tue Feb 5 10:58:32 CET 2013
> > Built by: christian@k [CMAKE]
> > Build OS/arch: Linux 3.4.11-2.16-desktop x86_64
> > Build CPU vendor: GenuineIntel
> > Build CPU brand: Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
> > Build CPU family: 6 Model: 42 Stepping: 7
> > Build CPU features: aes apic avx clfsh cmov cx8 cx16 htt lahf_lm mmx msr
> > nonstop_tsc pcid pclmuldq pdcm popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2
> > ssse3 tdt
> > C compiler: /home/christian/opt/bin/mpicc GNU gcc (GCC) 4.8.0
> > 20120618 (experimental)
> > C compiler flags: -mavx -Wextra -Wno-missing-field-initializers
> > -Wno-sign-compare -Wall -Wno-unused -Wunused-value -Wno-unknown-pragmas
> > -fomit-frame-pointer -funroll-all-loops -fexcess-precision=fast -O3
> > -DNDEBUG
> > C++ compiler: /home/christian/opt/bin/mpiCC GNU g++ (GCC) 4.8.0
> > 20120618 (experimental)
> > C++ compiler flags: -mavx -std=c++0x -Wextra
> > -Wno-missing-field-initializers -Wnon-virtual-dtor -Wall -Wno-unused
> > -Wunused-value -Wno-unknown-pragmas -fomit-frame-pointer
> > -funroll-all-loops -fexcess-precision=fast -O3 -DNDEBUG
> >
> > I will try your workaround, thanks!
> >
> > 2013/2/5 Berk Hess 
> >
> > >
> > > OK, then this is an unhandled case.
> > > Strange, because I am also running OpenSUSE 12.2 with the same CPU, but
> > > use gcc 4.7.1.
> > >
> > > I will file a bug report on redmine.
> > > Could you also post the header of md.log which gives all configuration
> > > information?
> > >
> > > To make it work for now, you can insert immediately after #ifdef
> > > GMX_OPENMP:
> > > if (ret <= 0)
> > > {
> > > ret = gmx_omp_get_num_procs();
> > > }
> > >
> > >
> > > Cheers,
> > >
> > > Berk
> > >
> > > 
> > > > Date: Tue, 5 Feb 2013 14:27:44 +0100
> > > > Subject: Re: [gmx-users] MPI oversubscription
> > > > From: hypo...@googlemail.com
> > > > To: gmx-users@gromacs.org
> > > >
> > > > None of the variables referenced here are set on my system, the print
> > > > statements are never executed.
> > > >
> > > > What I did:
> > > >
> > > > printf("Checking which processor variable is set");
> > > > #if defined(_SC_NPROCESSORS_ONLN)
> > > > ret = sysconf(_SC_NPROCESSORS_ONLN);
> > > > printf("case 1 ret = %d\n",ret);
> > > > #elif defined(_SC_NPROC_ONLN)
> > > > ret = sysconf(_SC_NPROC_ONLN);
> > > > printf("case 2 ret = %d\n",ret);
> > > > #elif defined(_SC_NPROCESSORS_CONF)
> > > > ret = sysconf(_SC_NPROCESSORS_CONF);
> > > > printf("case 3 ret = %d\n",ret);
> > > > #elif defined(_SC_NPROC_CONF)
> > > > ret = sysconf(_SC_NPROC_CONF);
> > > > printf("case 4 ret = %d\n",ret);
> > > > #endif /* End of check for sysconf argument values */
> > > >
> > > > >From /etc/issue:
> > > > Welcome to openSUSE 12.2 "Mantis" - Kernel \r (\l)
> > > > >From uname -a:
> > > > Linux kafka 3.4.11-2.16-desktop #1 SMP PREEMPT Wed Sep 26 17:05:00
> UTC
> > > 2012
> > > > (259fc87) x86_64 x86_64 x86_64 GNU/Linux
> > > >
> > > >
> > > >
> > > > 2013/2/5 Berk Hess 
> > > >
> > > > >
> > > > > Hi,
> > > > >
> > > > > This is the same cpu I have in my workstation and this case should
> not
> > > > > cause any problems.
> > > > >
> > > > > Which operating system and version are you using?
> > > > >
> > > > > If you know a bit about programming, could you check what goes
> wrong in
> > > > > get_nthreads_hw_avail
> > > > > in src/gmxlib/gmx_detect_hardware.c ?
> > > > > Add after the four "ret =" at line 434, 436, 438 and 440:
> > > > > printf("case 1 ret = %d\n",ret);
> > > > > and replace 1 by different numbers.
> > > > > Thus you can check if one of the 4 cases returns 0 or none of the
> cases
> > > > > is called.
> > > > >
> > > > > Cheers,
> > > > >
> > > > >

Re: [gmx-users] gromacs 4.6 installation error

2013-02-01 Thread Roland Schulz
Hi,

make sure to run
source /opt/intel/bin/iccvars.sh intel64
before compiling.
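The full sequence from this thread would then be (untested; paths as in your
original configure line):

source /opt/intel/bin/iccvars.sh intel64
CC=icc CXX=icpc cmake ..
make -j 12

That makes libimf.so and the other Intel runtime libraries visible to the
linker, which is what the errors below complain about.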

Roland


On Fri, Feb 1, 2013 at 9:06 AM, Fernando Favela wrote:

> Hi justin,
>
> this is the other error message:
> [ 56%] Building C object src/mdlib/CMakeFiles/md.dir/genborn_allvsall.c.o
> [ 56%] [ 56%] [ 56%] Building C object
> src/mdlib/CMakeFiles/md.dir/mvxvf.c.o
> Building C object src/mdlib/CMakeFiles/md.dir/qm_mopac.c.o
> Building C object src/mdlib/CMakeFiles/md.dir/csettle.c.o
> Building C object src/mdlib/CMakeFiles/md.dir/ebin.c.o
> Building C object src/mdlib/CMakeFiles/md.dir/genborn.c.o
> [ 56%] Building C object
> src/mdlib/CMakeFiles/md.dir/genborn_allvsall_sse2_single.c.o
> ld: warning: libimf.so, needed by ../../src/gmxlib/libgmx.so.6, not found
> (try using -rpath or -rpath-link)
> ld: warning: libsvml.so, needed by ../../src/gmxlib/libgmx.so.6, not found
> (try using -rpath or -rpath-link)
> ld: warning: libirng.so, needed by ../../src/gmxlib/libgmx.so.6, not found
> (try using -rpath or -rpath-link)
> ld: warning: libintlc.so.5, needed by ../../src/gmxlib/libgmx.so.6, not
> found (try using -rpath or -rpath-link)
> [ 56%] Building C object src/mdlib/CMakeFiles/md.dir/pme.c.o
> ld: template: hidden symbol `__intel_cpu_indicator_init' in
> /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libirc.a(cpu_disp.o)
> is referenced by DSO
> ld: final link failed: Bad value
> make[2]: *** [share/template/template] Error 1
> make[1]: *** [share/template/CMakeFiles/template.dir/all] Error 2
> make[1]: *** Waiting for unfinished jobs….
>
> then
> ...
> [ 65%] Building C object src/mdlib/CMakeFiles/md.dir/ewald.c.o
> [ 65%] Building C object src/mdlib/CMakeFiles/md.dir/calcvir.c.o
> Linking CXX shared library libmd.so
> [ 65%] Built target md
> make: *** [all] Error 2
>
> That's all.
>
> Thanks,
>
> Fernando.
>
> On Feb 1, 2013, at 7:38 AM, Justin Lemkul wrote:
>
> >
> >
> > On 2/1/13 8:35 AM, Fernando Favela wrote:
> >> Dear Mark Abraham,
> >>
> >> without the sudo, the problem is still there:
> >> ...
> >> …
> >> ...
> >> [ 65%] Building C object src/mdlib/CMakeFiles/md.dir/ewald.c.o
> >> [ 65%] Building C object src/mdlib/CMakeFiles/md.dir/calcvir.c.o
> >> Linking CXX shared library libmd.so
> >> [ 65%] Built target md
> >> make: *** [all] Error 2
> >>
> >> It could be possible that after a previous gromacs installation
> (removed) there's still a linked library or something?
> >>
> >
> > Not likely, but you still haven't posted the original error.  Error 2
> says that somewhere higher up there is an Error 1.  We need that
> information.
> >
> > -Justin
> >
> >> Thanks in advance.
> >>
> >> Fernando.
> >>
> >> On Feb 1, 2013, at 2:11 AM, Mark Abraham wrote:
> >>
> >>> On Fri, Feb 1, 2013 at 2:18 AM, Fernando Favela
> >>> <ffav...@fis.cinvestav.mx> wrote:
> >>>
>  Dear Gromacs users,
> 
>  I'm trying to install GMX 4.6 in a 64 Bit Machine with Nvidia GTX 680
>  cards, I've already installed the intel compilers, cuda 5 and openmpi.
> 
>  I do the following procedure:
>  CC=/opt/intel/bin/icc CXX=/opt/intel/bin/icpc cmake ..
> 
>  then
> 
>  sudo make -j 12
> 
> >>>
> >>> Don't use sudo before make. Consider using sudo before "make install",
> per
> >>> the installation instructions.
> >>>
> >>> There's no error message in the output file you posted. I suspect you
> might
> >>> running "sudo make", and then "make" and running into file permissions
> >>> problems because of it. Remove your whole build directory (which will
> >>> probably need sudo) and start again.
> >>>
> >>> Mark
> >>>
> >>>
> 
>  and I get this error:
> 
>  Linking CXX shared library libmd.so
>  cd /home/ffavela/Downloads/gromacs-4.6/build-cmake/src/mdlib &&
>  /usr/bin/cmake -E cmake_link_script CMakeFiles/md.dir/link.txt
> --verbose=1
>  /opt/intel/bin/icpc -fPIC -mavx -Wall -ip -funroll-all-loops -O3
> -DNDEBUG
>  -shared -Wl,-soname,libmd.so.6 -o libmd.so.6
>  CMakeFiles/md.dir/nbnxn_kernels/nbnxn_kernel_gpu_ref.c.o
>  CMakeFiles/md.dir/nbnxn_kernels/nbnxn_kernel_simd_4xn.c.o
>  CMakeFiles/md.dir/nbnxn_kernels/nbnxn_kernel_common.c.o
>  CMakeFiles/md.dir/nbnxn_kernels/nbnxn_kernel_simd_2xnn.c.o
>  CMakeFiles/md.dir/nbnxn_kernels/nbnxn_kernel_ref.c.o
>  CMakeFiles/md.dir/genborn_allvsall.c.o CMakeFiles/md.dir/qm_mopac.c.o
>  CMakeFiles/md.dir/mvxvf.c.o CMakeFiles/md.dir/genborn.c.o
>  CMakeFiles/md.dir/ebin.c.o CMakeFiles/md.dir/csettle.c.o
>  CMakeFiles/md.dir/genborn_allvsall_sse2_single.c.o
>  CMakeFiles/md.dir/pme.c.o CMakeFiles/md.dir/gmx_fft.c.o
>  CMakeFiles/md.dir/force.c.o CMakeFiles/md.dir/qm_gaussian.c.o
>  CMakeFiles/md.dir/qm_orca.c.o CMakeFiles/md.dir/mdatom.c.o
>  CMakeFiles/md.dir/stat.c.o CMakeFiles/md.dir/perf_est.c.o
>  CMakeFiles/md.dir/domdec_network.c.o CMakeFiles/md.dir/pme_pp.c.o
>  CMakeFiles/md.dir/calcmu.c.o CMakeFiles/md.dir/shakef.c.o
>  CMakeFiles/md.dir/fft5d.c.o CMakeFi

Re: [gmx-users] gromacs 4.6 installation error

2013-01-31 Thread Roland Schulz
On Thu, Jan 31, 2013 at 8:18 PM, Fernando Favela wrote:

> Dear Gromacs users,
>
> I'm trying to install GMX 4.6 in a 64 Bit Machine with Nvidia GTX 680
> cards, I've already installed the intel compilers, cuda 5 and openmpi.
>
> I do the following procedure:
> CC=/opt/intel/bin/icc CXX=/opt/intel/bin/icpc cmake ..
>
> then
>
> sudo make -j 12
>
> and I get this error:
>
> Linking CXX shared library libmd.so
> cd /home/ffavela/Downloads/gromacs-4.6/build-cmake/src/mdlib &&
> /usr/bin/cmake -E cmake_link_script CMakeFiles/md.dir/link.txt --verbose=1
> /opt/intel/bin/icpc -fPIC -mavx -Wall -ip -funroll-all-loops -O3 -DNDEBUG
> -shared -Wl,-soname,libmd.so.6 -o libmd.so.6
> CMakeFiles/md.dir/nbnxn_kernels/nbnxn_kernel_gpu_ref.c.o
> CMakeFiles/md.dir/nbnxn_kernels/nbnxn_kernel_simd_4xn.c.o
> CMakeFiles/md.dir/nbnxn_kernels/nbnxn_kernel_common.c.o
> CMakeFiles/md.dir/nbnxn_kernels/nbnxn_kernel_simd_2xnn.c.o
> CMakeFiles/md.dir/nbnxn_kernels/nbnxn_kernel_ref.c.o
> CMakeFiles/md.dir/genborn_allvsall.c.o CMakeFiles/md.dir/qm_mopac.c.o
> CMakeFiles/md.dir/mvxvf.c.o CMakeFiles/md.dir/genborn.c.o
> CMakeFiles/md.dir/ebin.c.o CMakeFiles/md.dir/csettle.c.o
> CMakeFiles/md.dir/genborn_allvsall_sse2_single.c.o
> CMakeFiles/md.dir/pme.c.o CMakeFiles/md.dir/gmx_fft.c.o
> CMakeFiles/md.dir/force.c.o CMakeFiles/md.dir/qm_gaussian.c.o
> CMakeFiles/md.dir/qm_orca.c.o CMakeFiles/md.dir/mdatom.c.o
> CMakeFiles/md.dir/stat.c.o CMakeFiles/md.dir/perf_est.c.o
> CMakeFiles/md.dir/domdec_network.c.o CMakeFiles/md.dir/pme_pp.c.o
> CMakeFiles/md.dir/calcmu.c.o CMakeFiles/md.dir/shakef.c.o
> CMakeFiles/md.dir/fft5d.c.o CMakeFiles/md.dir/tables.c.o
> CMakeFiles/md.dir/qmmm.c.o CMakeFiles/md.dir/domdec_con.c.o
> CMakeFiles/md.dir/clincs.c.o CMakeFiles/md.dir/domdec_setup.c.o
> CMakeFiles/md.dir/gmx_wallcycle.c.o CMakeFiles/md.dir/gmx_fft_fftw3.c.o
> CMakeFiles/md.dir/partdec.c.o CMakeFiles/md.dir/domdec.c.o
> CMakeFiles/md.dir/genborn_allvsall_sse2_double.c.o
> CMakeFiles/md.dir/md_support.c.o CMakeFiles/md.dir/vsite.c.o
> CMakeFiles/md.dir/groupcoord.c.o CMakeFiles/md.dir/mdebin.c.o
> CMakeFiles/md.dir/gmx_fft_fftpack.c.o CMakeFiles/md.dir/tgroup.c.o
> CMakeFiles/md.dir/vcm.c.o CMakeFiles/md.dir/nsgrid.c.o
> CMakeFiles/md.dir/nbnxn_search.c.o CMakeFiles/md.dir/constr.c.o
> CMakeFiles/md.dir/shellfc.c.o CMakeFiles/md.dir/iteratedconstraints.c.o
> CMakeFiles/md.dir/rf_util.c.o CMakeFiles/md.dir/update.c.o
> CMakeFiles/md.dir/genborn_sse2_double.c.o CMakeFiles/md.dir/forcerec.c.o
> CMakeFiles/md.dir/nbnxn_atomdata.c.o
> CMakeFiles/md.dir/genborn_sse2_single.c.o CMakeFiles/md.dir/gmx_fft_mkl.c.o
> CMakeFiles/md.dir/pull.c.o CMakeFiles/md.dir/domdec_box.c.o
> CMakeFiles/md.dir/domdec_top.c.o CMakeFiles/md.dir/mdebin_bar.c.o
> CMakeFiles/md.dir/nlistheuristics.c.o CMakeFiles/md.dir/qm_gamess.c.o
> CMakeFiles/md.dir/coupling.c.o CMakeFiles/md.dir/adress.c.o
> CMakeFiles/md.dir/init.c.o CMakeFiles/md.dir/wnblist.c.o
> CMakeFiles/md.dir/expanded.c.o CMakeFiles/md.dir/wall.c.o
> CMakeFiles/md.dir/ns.c.o CMakeFiles/md.dir/minimize.c.o
> CMakeFiles/md.dir/sim_util.c.o CMakeFiles/md.dir/pullutil.c.o
> CMakeFiles/md.dir/gmx_parallel_3dfft.c.o CMakeFiles/md.dir/edsam.c.o
> CMakeFiles/md.dir/pull_rotation.c.o CMakeFiles/md.dir/gmx_fft_acml.c.o
> CMakeFiles/md.dir/tpi.c.o CMakeFiles/md.dir/ewald.c.o
> CMakeFiles/md.dir/calcvir.c.o nbnxn_cuda/libnbnxn_cuda.a
> ../gmxlib/libgmx.so.6 /usr/lib/libblas.so.3gf /usr/lib/liblapack.so.3gf
> /usr/lib/libblas.so.3gf -ldl -lm -lfftw3f -openmp /usr/lib/liblapack.so.3gf
> -ldl -lm -lfftw3f ../gmxlib/gpu_utils/libgpu_utils.a
> ../gmxlib/cuda_tools/libcuda_tools.a /usr/local/cuda/lib64/libcudart.so
> -lcuda -lpthread
> -Wl,-rpath,/home/ffavela/Downloads/gromacs-4.6/build-cmake/src/gmxlib:/usr/local/cuda/lib64:
> cd /home/ffavela/Downloads/gromacs-4.6/build-cmake/src/mdlib &&
> /usr/bin/cmake -E cmake_symlink_library libmd.so.6 libmd.so.6 libmd.so
> make[2]: Leaving directory
> `/home/ffavela/Downloads/gromacs-4.6/build-cmake'
> /usr/bin/cmake -E cmake_progress_report
> /home/ffavela/Downloads/gromacs-4.6/build-cmake/CMakeFiles 89 90 91 92 93
> 94 95 96 97
> [ 65%] Built target md
> make[1]: Leaving directory
> `/home/ffavela/Downloads/gromacs-4.6/build-cmake'
> make: *** [all] Error 2
>

This isn't the error message. It is further up. Please post it.


> NOTE: after an eventual successful installation, where can I find
> information on how to get the benefits of gromacs-gpu? --
>
see
http://www.gromacs.org/Documentation/Cut-off_schemes
http://www.gromacs.org/Documentation/Performance_checklist

Roland


> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
>
>


-- 
ORNL/UT 

Re: [gmx-users] question re: building Gromacs 4.6

2013-01-29 Thread Roland Schulz
On Tue, Jan 29, 2013 at 11:07 AM, Susan Chacko  wrote:

> Thanks for the info! Our cluster is somewhat heterogenous, with
> some 32-core GigE-connected nodes, some older 8-core Infiniband-connected
> nodes, and some GPU nodes. So we need pretty much every variation
> of mdrun :-).
>

There is no disadvantage to having GPU or MPI support compiled in, including
for the tools (the binaries other than mdrun). The tools don't use MPI or
GPUs, so they work the same with or without that support compiled in. As long
as the cuda libraries are also available on the nodes which don't have a GPU,
the GPU version works everywhere. Also, if you use OpenMPI (not sure about
MPICH) and shared libraries, you don't need different binaries for GigE and
Infiniband (only different versions of OpenMPI). Only if your CPUs are
different (if only some support SSE4.1/AVX) do you need different binaries.
So you might only need to compile Gromacs once.
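
As a sketch (the install prefix here is just a placeholder), a single build
with both enabled would look like:

cmake .. -DGMX_MPI=ON -DGMX_GPU=ON -DCMAKE_INSTALL_PREFIX=/opt/gromacs-4.6
make && make install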

Roland


> On Jan 29, 2013, at 11:00 AM, Mark Abraham wrote:
>
> > On Tue, Jan 29, 2013 at 4:39 PM, Susan Chacko 
> wrote:
> >
> >>
> >> Sorry for a newbie question -- I've built several versions of Gromacs in
> >> the
> >> past but am not very familiar with the new cmake build system.
> >>
> >> In older versions, the procedure was:
> >> - build the single-threaded version
> >> - then build the MPI version of mdrun only. No need to build the other
> >> executables with MPI.
> >>
> >> Is this still how it should be done, or should one just build everything
> >> once with MPI?
> >>
> >
> > You can still follow this workflow if you need mdrun with real MPI to run
> > on your hardware (i.e. multiple physical nodes with network connections
> > between them).
> >
> >
> >> Likewise, if I want a separate GPU version (only a few nodes on our
> >> cluster have GPUs), do I build the whole tree separately with
> -DGMX_GPU=ON,
> >> or just a GPU-enabled version of mdrun?
> >>
> >
> > Only mdrun is GPU-aware, so that's all you'd need/want. I'll update the
> > installation instructions accordingly. Thanks!
> >
> > Mark
> > --
> > gmx-users mailing listgmx-users@gromacs.org
> > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> > * Please don't post (un)subscribe requests to the list. Use the
> > www interface or send it to gmx-users-requ...@gromacs.org.
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> Susan Chacko
> Helix/Biowulf Staff
>
>
>
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] gromacs 4.6 GB/SA problem and poor performance

2013-01-21 Thread Roland Schulz
On Mon, Jan 21, 2013 at 3:55 AM, Changwon Yang wrote:

> I'm trying to run an MD or EM using an implicit solvation method with
> gromacs 4.6 but I always get incorrect results.
> ICC version : icc 11.0
>  fftw version : 3.2.2
>
> benchmark system is gromacs-gpubench
> gromacs-gpubench-dhfr.tar/CPU/dhfr-impl-inf.bench
>
> Angle,Proper Dih,Imp Dih,Nonpolar sol,LJ-14,Coulomb-14 energy are correct.
> but GB polarization energy is too low, LJ(SR),Coulomb(SR) energy are always
> zero.
>
>
>
> It seems that there is a bug in the program.
>
> Using gromacs 4.5, it works fine.
>
> gromacs 4.5.3 : 9.4 ns/day
> gromacs 4.6   : 2.4 ns/day
>

Please provide the output of "mdrun -version". Also ICC 11.0 is quite old.
Please test whether the same issue is present with a more recent ICC or GCC
compiler.
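
That is, run

mdrun -version

and paste the complete output; among other things it shows which compiler
and FFT library the binary was built with.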

Roland


>
>
>
>
> --
> View this message in context:
> http://gromacs.5086.n6.nabble.com/gromacs-4-6-GB-SA-problem-and-poor-performance-tp5004728.html
> Sent from the GROMACS Users Forum mailing list archive at Nabble.com.
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Gromacs-4.6-beta3 compile warnings intel-suite 2011 and 2013

2013-01-15 Thread Roland Schulz
Hi,

could you check if you get these warnings also with the latest version from
git? We have changed quite a bit since then.

git clone https://gerrit.gromacs.org/p/gromacs
cd gromacs
git checkout release-4-6


Roland


On Tue, Jan 15, 2013 at 9:34 AM, Richard Broadbent <
richard.broadben...@imperial.ac.uk> wrote:

> Dear All,
>
> I've just installed 4.6-beta3 on my ubuntu linux (Intel Xeon [sandy
> bridge]) box using both intel-suite/64/2011.10/319, and
> intel-suite/64/2013.0/079 with mkl
>
> Using either compiler I received several hundred warnings of type #120,
> #167, and #556 (see below for examples). I thought this might be
> vaguely related to Bug #1074, as these all appear to be casting
> warnings. A small test md simulation ran as expected so the executable
> seems to be working and it is using AVX_256 acceleration. I therefore
> don't think this justifies a bug report but I did think it might be
> worth flagging up that these warnings occur and asking if other people
> had seen them. If anyone has any recommendations for how to get rid of
> them or thinks that they are significant I would also be interested.
>
> Thanks,
>
> Richard
>
>
>
> my cmake line was:
>
> $ export CC=icc  ; export CXX=icpc ;  cmake -DGMX_MPI=OFF
> -DGMX_DOUBLE=ON -DGMX_GPU=OFF -DGMX_PREFER_STATIC_LIBS=ON
> -DGMX_FFT_LIBRARY=mkl -DMKL_INCLUDE_DIR=$MKLROOT/include
>
> -DMKL_LIBRARIES="$MKLROOT/lib/intel64/libmkl_core.so;$MKLROOT/lib/intel64/libmkl_intel_lp64.so;$MKLROOT/lib/intel64/libmkl_sequential.so"
> -DGMX_OPENMP=ON  ../
>
> and I then built it with:
>
> $ make -j8 mdrun
> $ make install-mdrun
>
> the Warnings are of the form:
>
>
> gromacs-4.6-beta3/src/gmxlib/nonbonded/nb_kernel_avx_256_double/kernelutil_x86_avx_256_double.h(80):
> warning #120: return value type does not match the function type
>return gmx_mm256_set_m128(t2,t1);
>
>
> gromacs-4.6-beta3/src/gmxlib/nonbonded/nb_kernel_avx_256_double/kernelutil_x86_avx_256_double.h(204):
> warning #167: argument of type "__m128d" is incompatible with parameter
> of type "__m128"
>t1   = gmx_mm256_set_m128(_mm_loadu_pd(p3),_mm_loadu_pd(p1)); /*
> c12c  c6c | c12a  c6a */
>
>
>
> gromacs-4.6-beta3/src/gmxlib/nonbonded/nb_kernel_avx_256_double/kernelutil_x86_avx_256_double.h(233):
> warning #556: a value of type "__m256" cannot be assigned to an entity
> of type "__m256d"
>*x1 = gmx_mm256_set_m128(tx,tx);
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] g_select error

2013-01-10 Thread Roland Schulz
Hi,

thanks for the bug report. Please let us know whether:
https://gerrit.gromacs.org/#/c/2014/

fixes it.
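
If you want to test it from git, the usual gerrit pattern would be something
like this (sketch; the trailing /1 is the patch-set number and may differ):

git fetch https://gerrit.gromacs.org/gromacs refs/changes/14/2014/1
git checkout FETCH_HEAD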

Roland


On Thu, Jan 10, 2013 at 4:08 AM, Albert  wrote:

> hello:
>
>   I am trying to use g_select to make an index file with command:
>
>
> g_select_mpi -f md.xtc -s npt3.pdb -on density.ndx
>
> but it failed with messages:
>
> WARNING: Masses and atomic (Van der Waals) radii will be guessed
>   based on residue and atom names, since they could not be
>   definitively assigned from the information in your input
>   files. These guessed numbers might deviate from the mass
>   and radius of the atom type. Please check the output
>   files if necessary.
>
> Assertion failed for "g" in file
> /home/albert/Desktop/gromacs-4.6-beta3/src/gmxlib/sel
> dump core ? (y/n)
>
>
> thank you very much
> Albert
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] gromacs on GPU

2013-01-09 Thread Roland Schulz
Hi,

is this an implicit water calculation? If so it shouldn't use PME.
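
A minimal implicit-solvent .mdp sketch would have something like

implicit_solvent = GBSA
coulombtype      = Cut-off

instead of coulombtype = PME.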

Roland


On Wed, Jan 9, 2013 at 2:27 PM, James Starlight wrote:

> Dear Szilárd, thanks for help again!
>
> 2013/1/9 Szilárd Páll :
>
> >
> > There could be, but I/we can't well without more information on what and
> > how you compiled and ran. The minimum we need is a log file.
> >
> I've compiled gromacs 4.6-beta3 simply via
>
>
> cmake CMakeLists.txt -DGMX_GPU=ON
> -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-5.0
> make
> sudo make install
>
> I have not added any special params to the grompp or mdrun.
>
> After that I ran a test simulation of calmodulin in explicit
> water (60k atoms, 100 ps) and obtained the following output
>
> Host: starlight  pid: 21028  nodeid: 0  nnodes:  1
> Gromacs version:VERSION 4.6-beta3
> Precision:  single
> MPI library:thread_mpi
> OpenMP support: enabled
> GPU support:enabled
> invsqrt routine:gmx_software_invsqrt(x)
> CPU acceleration:   AVX_256
> FFT library:fftw-3.3.2-sse2-avx
> Large file support: enabled
> RDTSCP usage:   enabled
> Built on:   Wed Jan  9 20:44:51 MSK 2013
> Built by:   own@starlight [CMAKE]
> Build OS/arch:  Linux 3.2.0-2-amd64 x86_64
> Build CPU vendor:   GenuineIntel
> Build CPU brand:Intel(R) Core(TM) i5-3570 CPU @ 3.40GHz
> Build CPU family:   6   Model: 58   Stepping: 9
> Build CPU features: aes apic avx clfsh cmov cx8 cx16 f16c htt lahf_lm
> mmx msr nonstop_tsc pcid pclmuldq pdcm popcnt pse rdrnd rdtscp sse2
> sse3 sse4.1 sse4.2 ssse3 tdt x2apic
> C compiler: /usr/bin/gcc GNU gcc (Debian 4.6.3-11) 4.6.3
> C compiler flags:   -mavx  -Wextra -Wno-missing-field-initializers
> -Wno-sign-compare -Wall -Wno-unused -Wunused-value
> -fomit-frame-pointer -funroll-all-loops -fexcess-precision=fast  -O3
> -DNDEBUG
> C++ compiler:   /usr/bin/c++ GNU c++ (Debian 4.6.3-11) 4.6.3
> C++ compiler flags: -mavx  -Wextra -Wno-missing-field-initializers
> -Wno-sign-compare -Wall -Wno-unused -Wunused-value
> -fomit-frame-pointer -funroll-all-loops -fexcess-precision=fast  -O3
> -DNDEBUG
> CUDA compiler:  nvcc: NVIDIA (R) Cuda compiler driver;Copyright
> (c) 2005-2012 NVIDIA Corporation;Built on
> Fri_Sep_21_17:28:58_PDT_2012;Cuda compilation tools, release 5.0,
> V0.2.1221
> CUDA driver:5.0
> CUDA runtime:   5.0
>
> 
>
>Core t (s)   Wall t (s)(%)
>Time: 2770.700 1051.927  263.4
>  (ns/day)(hour/ns)
> Performance:8.2142.922
>
> full log can be found here http://www.sendspace.com/file/inum84
>
>
> Finally, when I checked CPU usage I noticed that only 1 core was fully
> loaded (100%) and cores 2-4 were loaded at only 60%, and nvidia-smi
> gave me strange results suggesting the GPU is not used (although I
> monitored the temperature of the video card and noticed an increase
> up to 65 degrees)
>
> +------------------------------------------------------------------------------+
> | NVIDIA-SMI 4.304.54                                 Driver Version: 304.54    |
> |-------------------------------+----------------------+-----------------------+
> | GPU  Name                     | Bus-Id        Disp.  | Volatile Uncorr. ECC  |
> | Fan  Temp  Perf  Pwr:Usage/Cap| Memory-Usage         | GPU-Util  Compute M.  |
> |===============================+======================+=======================|
> |   0  GeForce GTX 670          | :02:00.0         N/A |                  N/A  |
> | 38%   63C  N/A    N/A /  N/A  |   9%  174MB / 2047MB |     N/A       Default |
> +-------------------------------+----------------------+-----------------------+
>
> +------------------------------------------------------------------------------+
> | Compute processes:                                                GPU Memory |
> |  GPU       PID  Process name                                      Usage      |
> |==============================================================================|
> |    0            Not Supported                                                |
> +------------------------------------------------------------------------------+
>
>
> Thanks for help again,
>
> James
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] Floating point exception with mdrun-gpu on CUDA

2013-01-09 Thread Roland Schulz
Hi,

it seems you are using OpenMM. The recommended approach is to compile with
GMX_OPENMM=off and GMX_GPU=on.
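
I.e., as a sketch:

cmake .. -DGMX_OPENMM=OFF -DGMX_GPU=ON
make mdrun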

Roland


On Wed, Jan 9, 2013 at 10:45 AM, sdlonga  wrote:

> Hi,
>
> I successfully built mdrun-gpu on a MacOS Mountain Lion machine with one
> CUDA NVIDIA GeForce GTX 660 card. When I try to run one of the GPU
> benchmarks (e.g. dhfr-impl-1nm.bench) a floating point exception occurs.
> The same happens for all the benchmarks. I have already tested the
> functionality of the CUDA GPU with the CUDA toolkit samples.
> Hope someone can help me understand what is going wrong... thanks in
> advance!
> The last part of the output of mdrun-gpu is as follows:
>
> ..
> -[no]ionize  bool   no  Do a simulation including the effect of an
> X-Ray
> bombardment on your system
> -device  string Device option string
>
> Reading file topol.tpr, VERSION 4.5.1-dev-20100917-b1d66 (single precision)
>
> WARNING: OpenMM does not support leap-frog, will use velocity-verlet
> integrator.
>
>
> WARNING: OpenMM supports only Andersen thermostat with the
> md/md-vv/md-vv-avek integrators.
>
>
> WARNING: OpenMM provides contraints as a combination of SHAKE, SETTLE and
> CCMA. Accuracy is based on the SHAKE tolerance set by the "shake_tol"
> option.
>
> Floating point exception: 8
> >
>
>
>
>
> --
> View this message in context:
> http://gromacs.5086.n6.nabble.com/Floating-point-exception-with-mdrun-gpu-on-CUDA-tp5004393.html
> Sent from the GROMACS Users Forum mailing list archive at Nabble.com.
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] gromacs on GPU

2013-01-09 Thread Roland Schulz
On Wed, Jan 9, 2013 at 3:17 AM, James Starlight wrote:

> As I understood it, that gromacs version already includes openMM, so
> installation of the external openMM sources is not needed, isn't it?
>

No, the new built-in GPU implementation and OpenMM are two different things.
The Gromacs-OpenMM interface isn't actively maintained and is thus not
recommended.


> also I wonder exactly which CUDA version is needed? For
> example I've tried the latest cuda-5.0 but with that version I obtain an
> error from mdrun-openmm that the cuda platform was not detected (gromacs
> was compiled without any errors).
>
With the native GPU implementation (GMX_GPU) cuda 5.0 works fine.


> by the way, is it possible to compile gromacs-4.6 against another
> platform (e.g. openCL)? I have no problems with the compatibility of
> openCL and openMM.
>
GMX_GPU doesn't support openCL.

Roland


>
> James
>
> 2013/1/9 Szilárd Páll :
> > On Tue, Jan 8, 2013 at 3:22 PM, James Starlight  >wrote:
> >
> >> So could someone provide me more about gpu-accelerated MD implemented
> >> in the 4.6 gromacs ? Does it require openMM (what version is supported
> >>
> >
> > FYI, if nobody can, trust G:
> > http://lmgtfy.com/?q=gromacs+4.6+gpu+acceleration
> > http://lmgtfy.com/?q=gromacs+4.6+installation+instructions
> >
> > The wiki and mailing list contains quite extensive information (indexed
> by
> > G).
> >
> > Otherwise, release notes (not final):
> > http://www.gromacs.org/About_Gromacs/Release_Notes/Versions_4.6.x
> >
> > Install guide is at the expected location:
> > http://www.gromacs.org/Documentation/Installation_Instructions
> >
> > Cheers,
> > --
> > Szilárd
> >
> >
> >> for that gromacs release ?) installed? By the way at present time I
> >> force with the problem of compilation 4.1.1 openMM (i need to compile
> >> openMM because of cuda-5.0 ). If someone have done it (openMM 4.11
> >> +cuda 5.0 + gromacs-4.5 for lattest geforces) please let me know.
> >>
> >>
> >> James
> >>
> >> 2013/1/7 James Starlight :
> >> > Hi Szilárd!
> >> >
> >> > As I understood you correctly gromacs-4.6 have specific algorithm
> >> > (independent on openMM?) for gpu-based calculations havent ? If it
> >> > true how I should compilate such new gpu-based gromacs? In the
> >> > gromacs-4.6-beta-3 folder I've found instructuon for the standart
> >> > installation via cmake
> >> >
> >> > cmake PATH_TO_SOURCE_DIRECTORY -DGMX_OPENMM=ON -DGMX_THREADS=OFF
> >> >
> >> >
> >> > James
> >> >
> >> > 2013/1/7 Szilárd Páll :
> >> >> Szilárd
> >> --
> >> gmx-users mailing listgmx-users@gromacs.org
> >> http://lists.gromacs.org/mailman/listinfo/gmx-users
> >> * Please search the archive at
> >> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> >> * Please don't post (un)subscribe requests to the list. Use the
> >> www interface or send it to gmx-users-requ...@gromacs.org.
> >> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >>
> > --
> > gmx-users mailing listgmx-users@gromacs.org
> > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> > * Please don't post (un)subscribe requests to the list. Use the
> > www interface or send it to gmx-users-requ...@gromacs.org.
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] g_sans

2012-12-18 Thread Roland Schulz
Hi,

g_sans is already in the master version of Gromacs (Justin's link is to
g_nse), but it won't be part of 4.6; instead it will be part of 5.0. You can
get this version from git (git clone git://git.gromacs.org/gromacs.git). As
an alternative you could use http://www.sassena.org/ (disclaimer: the author
of the software is a colleague in my group).

Roland


On Tue, Dec 18, 2012 at 4:10 PM, Justin Lemkul  wrote:

>
>
> On 12/18/12 3:59 PM, XUEMING TANG wrote:
> > Hi there
> >
> > I searched through the website for g_sans, which is a simple tool to
> > compute Small Angle Neutron Scattering spectra. But I cannot find it in
> > gromacs folder?
> > I found it in the following website:
> >
> >
> http://gromacs.5086.n6.nabble.com/g-kinetics-g-options-g-dos-g-dyecoupl-and-g-sans-description-missing-td4999165.html
> >
> > Is there any ready to use script for SANS in Gromacs?
> >
>
> The code is still being reviewed and has not been merged into the development
> version at this time.
>
> https://gerrit.gromacs.org/#/c/1828/
>
> -Justin
>
> --
> 
>
> Justin A. Lemkul, Ph.D.
> Research Scientist
> Department of Biochemistry
> Virginia Tech
> Blacksburg, VA
> jalemkul[at]vt.edu | (540) 231-9080
> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
>
> 
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] failed with intel compiler

2012-12-09 Thread Roland Schulz
On Sun, Dec 9, 2012 at 11:30 AM, Albert  wrote:

> Hello:
>
>    I am compiling Gromacs 4.6-beta2 with the intel compiler by command:
>
> cmake .. -DGMX_MPI=ON
> -DCMAKE_CXX_COMPILER=/soft/intel64/icc/bin/intel64/mpiCC
> -DCMAKE_C_COMPILER=/soft/intel64/icc/bin/intel64/icc
>

You shouldn't use mpiCC as the C++ compiler and icc as the C compiler;
either both should be MPI wrappers or neither.
Also, you didn't say which ICC version you have.

>
>
> /home/albert/Downloads/gromacs/gromacs-4.6-beta2/build/CMakeFiles/CMakeTmp/testCXXCompiler.cxx
>catastrophic error: Compiler configuration problem encountered. The
>expected target architecture compiler is missing (11.1-intel64 !=
>12.1-intel64)
>(0): internal error: backend signals
>compilation aborted for


Make sure your compiler environment is set up correctly (you want to source
iccvars.sh and then not use absolute paths for the icc binary). The
error message indicates something is wrong with that. Make sure you can
compile a simple test program independent of Gromacs.
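
For example (the iccvars.sh path depends on your install; this one is only a
guess based on your icc path):

source /soft/intel64/icc/bin/iccvars.sh intel64
echo 'int main(void){return 0;}' > test.c
icc test.c -o test && ./test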

Roland




>  --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] GROMACS 4.6-beta2 released

2012-12-06 Thread Roland Schulz
On Thu, Dec 6, 2012 at 9:42 PM, Yorquant Wang  wrote:

> Hi Mark:
> There is a new instruction set architecture (*Advanced Vector
> Extensions* (AVX)) that can make floating-point calculations on Intel
> CPUs up to two times faster compared with the old instruction set. I want
> to know if the Gromacs developers have a plan to make GMX support AVX. If
> GMX can support AVX, I think a two-times speed-up might be obtained
> immediately.
>
Yes, AVX is supported, but you won't see a 2x speedup.
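
The acceleration is normally picked automatically at configure time; to force
it explicitly, something like this (sketch) should work:

cmake .. -DGMX_CPU_ACCELERATION=AVX_256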

Roland


>
>   Best
>
> Yukun Wang
> PhD candidate
> Institute of Natural Sciences && College of Life Science, Shanghai Jiao
> Tong University
> Cell phone: 13621806236.
> China Shanghai
>
>
> 2012/12/7 Mark Abraham 
>
> > Hi all,
> >
> > We've updated the GROMACS beta version to fix some bugs both you and we
> > found. We've also added the Adaptive resolution scheme (adResS) to our
> list
> > of new features (though we still have yet to publish a complete list of
> > those!). adResS couples two systems with different resolutions by a force
> > interpolation, which can be used to speed-up atomistic simulations. The
> new
> > source package can be found at
> > ftp://ftp.gromacs.org/pub/gromacs/gromacs-4.6-beta2.tar.gz. Installation
> > instructions still here
> > http://www.gromacs.org/Documentation/Installation_Instructions
> >
> > Please try it out, particularly if you haven't done so already! Also, if
> > everything is smooth sailing, please drop us a line on gmx-users just to
> > say that. We can't tell whether silence is "worked great, nothing to say"
> > or "haven't tried it yet". That will help us judge when things are stable
> > enough to make a real release!
> >
> > Speaking of that, we are keen to make that final release soon. While we
> > can't pick a date yet, we promise that if you give us feedback by
> December
> > 21, then we will make a sincere effort to incorporate the results of that
> > feedback in the final release. In particular, if it's an issue on our
> > Redmine bug report database http://redmine.gromacs.org, then it will be
> > sure to get our attention and consideration. You'll need to register an
> > account to make a bug report (so that we can get back to you), but that's
> > free and easy.
> >
> > We hope to release a current version of our regression test suite next
> > week, and a benchmark set soon.
> >
> > Cheers,
> >
> > Mark Abraham
> > GROMACS development manager
> > --
> > gmx-users mailing listgmx-users@gromacs.org
> > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> > * Please don't post (un)subscribe requests to the list. Use the
> > www interface or send it to gmx-users-requ...@gromacs.org.
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
>
>
>
> --
> Yukun Wang
> PhD candidate
> Institute of Natural Sciences && College of Life Science, Shanghai Jiao
> Tong University
> Cell phone: 13621806236.
> China Shanghai
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Issue building template file for Gromacs 4.6-beta1

2012-12-06 Thread Roland Schulz
Hi,


On Thu, Dec 6, 2012 at 4:58 AM, hubert santuz wrote:

>
> The building/compilation of gromacs itself is working.
> But, when I try to compile the template file to create my own plugin, it
> fails.


Yes, I broke that for beta1. If you don't want to wait for beta2 or 3 (not
sure it'll make it into beta2), you can download a fix here:
https://gerrit.gromacs.org/#/c/1884/

Roland


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] strange lincs warning with version 4.6

2012-12-05 Thread Roland Schulz
On Wed, Dec 5, 2012 at 4:14 AM, sebastian <
sebastian.wa...@physik.uni-freiburg.de> wrote:

>
>
> The test of the nbnxn_pme setup breaks as well after 50 steps (I
> extended the run using tpbconv) with the same lincs warning. (log attached)
>
AFAIK, attachments aren't supported by the mailing list. Could you open a
redmine issue and attach it there?
You could also test the latest release-4-6 version from git. There are 2
bug fixes which might be related to the issue you are seeing.
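
E.g.:

git clone git://git.gromacs.org/gromacs.git
cd gromacs
git checkout release-4-6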

Roland
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] strange lincs warning with version 4.6

2012-12-04 Thread Roland Schulz
On Tue, Dec 4, 2012 at 11:30 AM, sebastian <
sebastian.wa...@physik.uni-freiburg.de> wrote:

> On 12/04/2012 05:09 PM, Mark Abraham wrote:
> > 2fs is normally considered too large a time step for stable integration
> > with only bonds to hydrogen constrained, so your observation of
> > non-reproducible LINCS warnings is not indicative of some other problem.
> >
>
> Sorry, but why is this whole setup running on my local desktop with
> GPUs? As far as I know this is a rather typical set of parameters.
>
I agree with you that normally a well equilibrated system should not crash
because of that. But the integration error is relatively large with that
time step, so you might want to change it for accuracy. Also, you don't want
to use a system like that for bug hunting because, as Mark said, its
stability can depend on slight numerical differences. For bug hunting you
want a system which only crashes if something is wrong, not one that
sometimes crashes even if everything is OK.


> The only difference I can think of is that gromacs was compiled
> with intel and mkl libs on the cluster whereas it was compiled with gcc
> and fftw3 libs on the local desktop.
>
Did you run the regressiontests as suggested by Szilard? Can you narrow it
down to one component? E.g. by compiling a version on the cluster with
gcc+fftw (to make sure it is not any other cluster component, e.g. the MPI
library) and a version with icc+fftw (to see whether it correlates with
either icc or mkl).
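
As a sketch, the two test builds could be configured in separate build
directories with:

CC=gcc CXX=g++  cmake .. -DGMX_FFT_LIBRARY=fftw3
CC=icc CXX=icpc cmake .. -DGMX_FFT_LIBRARY=fftw3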

Roland


>
> Sebastian
>
> > Also, if you ran your CPU-only calculations with nstlist=25 then AFAIK
> this
> > works fine, but is inefficient.
> >
> > Mark
> >
> > On Tue, Dec 4, 2012 at 3:41 PM, sebastian<
> > sebastian.wa...@physik.uni-freiburg.de>  wrote:
> >
> >
> >> On 11/23/2012 08:29 PM, Szilárd Páll wrote:
> >>
> >>
> >>> Hi,
> >>>
> >>> On Fri, Nov 23, 2012 at 9:40 AM, sebastian
> >>> <sebastian.wa...@physik.uni-freiburg.de> wrote:
> >>>
> >>>
> >>>
> >>>
>  Dear GROMCS user,
> 
>  I installed the git gromacs VERSION 4.6-dev-20121117-7a330e6-dirty on
> my
>  local desktop
> 
> 
> 
> >>> Watch out, the dirty version suffix means you have changed something in
> >>> the
> >>> source.
> >>>
> >>>
> >>>
> >>>
> >>>
>  (2*GTX 670 + i7) and everything works as smooth as possible. The
> outcomes
>  are very reasonable and match the outcome of the 4.5.5 version without
>  GPU
>  acceleration. On our
> 
> 
> 
> >>> What does "outcome" mean? If that means performance, than something is
> >>> wrong, you should see a considerable performance increase (PME,
> >>> non-bonded,
> >>> bondeds have all gotten a lot faster).
> >>>
> >>>
> >>>
> >>>
> >> With outcome I mean the trajectory not the performance.
> >>
> >>
> >>
> >>
> >>>
>  cluster (M2090+2*Xeon X5650) I installed the  VERSION
>  4.6-dev-20121120-0290409. Using the same .tpr file used for runs with
> my
>  desktop I get lincs warnings that the watermolecules can't be settled.
> 
> 
> 
> 
> >>> The group kernels have not "stabilized" yet and there have been some
> fixes
> >>> lately. Could you please the latest version and check again.
> >>>
> >>>
> >>>
> >> I installed the beta1 release and still the water can not be settled.
> >>
> >>   Additionally, you could try running our regression tests suite (
> >>
> >>> git.gromacs.org/**regressiontests<
> http://git.gromacs.org/regressiontests>)
> >>> to see if at least the tests pass with the
> >>> binaries you compiled
> >>> Cheers,
> >>> --
> >>> Szilárd
> >>>
> >>>
> >>>
> >>>
> >>>
> >> Cheers,
> >>
> >> Sebastian
> >>
> >>
> >>   My .mdp file looks like:
> >>
> ;
>  title= ttt
>  cpp =  /lib/cpp
>  include = -I../top
>  constraints =  hbonds
>  integrator  =  md
>  cutoff-scheme   =  verlet
> 
>  ;define  =  -DPOSRES; for possition restraints
> 
>  dt  =  0.002; ps !
>  nsteps  =  1  \
>  nstcomm =  25; frequency for center of mass
>  motion
>  removal
>  nstcalcenergy   =  25
>  nstxout =  10; frequency for writting the
>  trajectory
>  nstvout =  10; frequency for writting the
>  velocity
>  nstfout =  10; frequency to write forces
> to
>  output trajectory
>  nstlog  =  1; frequency to write the log
> file
>  nstenergy   =  1; frequency to write energies
> to
>  energy file
>  nstxtcout   =  1
> 
>  xtc_grps=  System
> 
>  nstlist =  25; Frequency to update the
> neighbor
>  list
>  ns_type =  grid; Make a grid in the box and
> on

Re: [gmx-users] Installing 4.6 beta1

2012-12-03 Thread Roland Schulz
Hi,

a small addition to Mark's answer. If you want/need to use an older gcc,
versions 4.4.6 and 4.5.3 have the necessary fix too. Also, you can disable
AVX with cmake -DGMX_CPU_ACCELERATION=SSE4.1, but you will get somewhat
lower performance (AFAIK only ~10% lower with the group kernels, but more
with the verlet kernels).
If you want to track the status of the this Gromacs bug you can find it
here: http://redmine.gromacs.org/issues/1058

Roland


On Mon, Dec 3, 2012 at 4:43 PM, Mark Abraham wrote:

> Looks like a compiler bug to me, and google agrees
> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=47318.
>
> Even if there were not this bug, we would strongly encourage you to get the
> better performance available from more recent versions of gcc. I was about
> to suggest 4.6 or newer, but the above URL suggests 4.6.0 has the same bug
> :-O. So go higher than that, or Intel's compiler.
>
> We will probably try detecting this at configure time in a future beta.
> Clearly our test matrix doesn't cover this case, but as Szilard has
> mentioned in another thread, the multiplicative explosion of possibilities
> for us to try to anticipate to make users' lives easier during installation
> makes our lives pretty tough :-)
>
> Mark
>
> On Mon, Dec 3, 2012 at 8:05 PM, sebastian <
> sebastian.wa...@physik.uni-freiburg.de> wrote:
>
> > Hey Dear,
> >
> > when I try to install the GROMACS-4.6-beta1 version I get an error which
> I
> > can not pin down when do make.
> >
> > My system:
> >
> > CPU: i7
> > GPU: NVIDIA GTX670
> > gcc: Debian 4.4.5-8
> > CUDA: 4.2
> > fft3w: 3.3.2 including sse2
> > Debian Linux 64-bit
> >
> > First I do:
> >
> > cmake ../ -DCMAKE_INSTALL_PREFIX=/usr/local/gromacs_4.6
> >
> > which works fine. When I do:
> >
> > make -j6
> >
> > I get a bunch of errors which look more or less the same:
> >
> > gromacs-4.6-beta1/src/gmxlib/nonbonded/nb_kernel_avx_256_single/kernelutil_x86_avx_256_single.h:600:
> > error: incompatible type for argument 2 of ‘_mm_maskload_ps’
> > /usr/lib/gcc/x86_64-linux-gnu/4.4.5/include/avxintrin.h:919: note:
> > expected ‘__m128’ but argument is of type ‘__m128i’
> >
> > or like:
> >
> > gromacs-4.6-beta1/src/gmxlib/nonbonded/nb_kernel_avx_256_single/kernelutil_x86_avx_256_single.h:607:
> > error: incompatible type for argument 2 of ‘_mm_maskstore_ps’
> > /usr/lib/gcc/x86_64-linux-gnu/4.4.5/include/avxintrin.h:926: note:
> > expected ‘__m128’ but argument is of type ‘__m128i’
> >
> > and it breaks with this output:
> >
> > make[2]: ***
> > [src/gmxlib/CMakeFiles/gmx.dir/nonbonded/nb_kernel_avx_256_single/nb_kernel_ElecCoul_VdwCSTab_GeomW4W4_avx_256_single.c.o]
> > Error 1
> > make[1]: *** [src/gmxlib/CMakeFiles/gmx.dir/all] Error 2
> > make: *** [all] Error 2
> >
> > I don't know what goes wrong and maybe I just miss a flag.
> >
> > Dears,
> >
> > Sebastian
> >
> > --
> > gmx-users mailing listgmx-users@gromacs.org
> > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> > * Please don't post (un)subscribe requests to the list. Use the www
> > interface or send it to gmx-users-requ...@gromacs.org.
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Re: Build on OSX with 4.6beta1

2012-11-30 Thread Roland Schulz
Hi Carlo,

thanks for the feedback!

Roland


On Fri, Nov 30, 2012 at 1:35 PM, Carlo Camilloni
wrote:

> Hi Roland,
>
> so, the problem is not fixed by changing to
> -DCUDA_NVCC_HOST_COMPILER=/usr/bin/g++ ,
> I still have to change by hand clang to clang++ in that single link.txt
> file,
> while everything works fine by checking out the revision you suggested.
>
> Best,
> Carlo
>
>
> > Message: 3
> > Date: Fri, 30 Nov 2012 08:34:24 -0500
> > From: Roland Schulz 
> > Subject: Re: [gmx-users] Build on OSX with 4.6beta1
> > To: Discussion list for GROMACS users 
> >
> > Hi,
> >
> > On Fri, Nov 30, 2012 at 5:01 AM, Carlo Camilloni
> > wrote:
> >>
> >>
> >> 1. the compilation was easy but not straightforward:
> >> cmake ../ -DGMX_GPU=ON
> >> -DCMAKE_INSTALL_PREFIX=/Users/carlo/Codes/gromacs-4.6/build-gpu
> >> -DCMAKE_CXX_COMPILER=/usr/bin/clang++ -DCMAKE_C_COMPILER=/usr/bin/clang
> >> -DCUDA_NVCC_HOST_COMPILER=/usr/bin/gcc -DCUDA_PROPAGATE_HOST_FLAGS=OFF
> >>
> >
> > One more thing. Could you try whether the problem is fixed with
> > -DCUDA_NVCC_HOST_COMPILER=/usr/bin/g++ ?
> >
> > Roland
> >
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Build on OSX with 4.6beta1

2012-11-30 Thread Roland Schulz
Hi,

On Fri, Nov 30, 2012 at 5:01 AM, Carlo Camilloni
wrote:
>
>
> 1. the compilation was easy but not straightforward:
> cmake ../ -DGMX_GPU=ON
> -DCMAKE_INSTALL_PREFIX=/Users/carlo/Codes/gromacs-4.6/build-gpu
> -DCMAKE_CXX_COMPILER=/usr/bin/clang++ -DCMAKE_C_COMPILER=/usr/bin/clang
> -DCUDA_NVCC_HOST_COMPILER=/usr/bin/gcc -DCUDA_PROPAGATE_HOST_FLAGS=OFF
>

One more thing. Could you try whether the problem is fixed with
 -DCUDA_NVCC_HOST_COMPILER=/usr/bin/g++ ?

Roland
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Build on OSX with 4.6beta1

2012-11-30 Thread Roland Schulz
On Fri, Nov 30, 2012 at 5:01 AM, Carlo Camilloni
wrote:

> Dear All,
>
> I have successfully compiled the beta1 of gromacs 4.6 on my macbook pro
> with mountain lion.
> I used the latest cuda and the clang/clang++ compilers in order to have
> access to the AVX instructions.
> mdrun works with great performances!! great job!
>
> two things:
>
> 1. the compilation was easy but not straightforward:
> cmake ../ -DGMX_GPU=ON
> -DCMAKE_INSTALL_PREFIX=/Users/carlo/Codes/gromacs-4.6/build-gpu
> -DCMAKE_CXX_COMPILER=/usr/bin/clang++ -DCMAKE_C_COMPILER=/usr/bin/clang
> -DCUDA_NVCC_HOST_COMPILER=/usr/bin/gcc -DCUDA_PROPAGATE_HOST_FLAGS=OFF
>
> and then I had to manually edit
> src/gmxlib/CMakeFiles/gmx.dir/link.txt
>
> and change clang to clang++
> (I noted that in many other places it was correctly set, and without this
> change I got an error on some c++ related stuff)
>

If you have git, does the version you get from:
git init gromacs.dev && cd gromacs.dev && git fetch
https://gerrit.gromacs.org/gromacs refs/changes/54/1854/1 && git checkout
FETCH_HEAD

fix this issue?


> 2. is there any way to have openmp parallelisation on osx?
>
As far as I know you have two options:
- You can get the non-free Intel ICC.
- You can use gcc (ideally 4.7). I have tried only the one from MacPorts.
But the version from MacPorts uses an assembler which doesn't support AVX.
You need to work around that as described here:
http://old.nabble.com/Re%3a-gcc,-as,-AVX,-binutils-and-MacOS-X-10.7-p32584737.html
For more details see http://redmine.gromacs.org/issues/1021

Roland


>
> Best,
> Carlo
>
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Build problem with 4.6beta1

2012-11-29 Thread Roland Schulz
On Thu, Nov 29, 2012 at 9:20 PM, Justin Lemkul  wrote:

>
> Hooray for being the first to report a problem with the beta :)
>
> We have a cluster at our university that provides us with access to some
> CPU-only nodes and some CPU-GPU nodes.  I'm having problems with getting
> 4.6beta1 to build, and I suspect the issue is related to GPU detection.
>
> Here are some specifics:
>
> FFTW 3.3.3
> CMake 2.8.10
> gcc 4.3.4
> CUDA 4.0
> 64-bit Linux on AMD hardware
> GPU nodes have Tesla C2050 cards
>
> Commands:
>
> cmake ../gromacs-4.6-beta1
> -DCMAKE_INSTALL_PREFIX=/home/jalemkul/software/gromacs-46beta1
> -DGMX_X11=OFF
> -DGMX_GPU=ON -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++
> -DCMAKE_PREFIX_PATH=/home/jalemkul/software/fftw-3.3.3/
>
> The first step runs alright, but two things to note:
>
> 1. It doesn't detect any GPU, which is correct because I'm on a head node
> and
> not a compute node:
>
> ...-- Looking for NVIDIA GPUs present in the system
> -- Could not detect NVIDIA GPUs
>
This is not a warning, just a note. Is this confusing?


> -- Found CUDA: /cm/shared/apps/cuda40/toolkit/4.0.17 (found suitable
> version
> "4.0", minimum required is "3.2")
> ...
>
> 2. It says FFTW isn't detected, but it actually is:
>
> ...
> -- checking for module 'fftw3f'
> --   package 'fftw3f' not found
>
It checks several ways. The first (pkg-config) didn't work. Again, just a note.


> -- Looking for fftwf_plan_r2r_1d in
> /home/jalemkul/ATHENA/software/fftw-3.3.3/lib/libfftw3f.so
> -- Looking for fftwf_plan_r2r_1d in
> /home/jalemkul/ATHENA/software/fftw-3.3.3/lib/libfftw3f.so - found
> -- Looking for fftwf_have_simd_avx in
> /home/jalemkul/ATHENA/software/fftw-3.3.3/lib/libfftw3f.so
> -- Looking for fftwf_have_simd_avx in
> /home/jalemkul/ATHENA/software/fftw-3.3.3/lib/libfftw3f.so - not found
> -- Looking for fftwf_have_simd_sse2 in
> /home/jalemkul/ATHENA/software/fftw-3.3.3/lib/libfftw3f.so
> -- Looking for fftwf_have_simd_sse2 in
> /home/jalemkul/ATHENA/software/fftw-3.3.3/lib/libfftw3f.so - found
> ...
>
> Upon running "make," I get an immediate failure:
>
> [  0%] Building NVCC (Device) object
>
> src/gmxlib/gpu_utils/CMakeFiles/gpu_utils.dir//./gpu_utils_generated_gpu_utils.cu.o
> nvcc fatal   : redefinition of argument 'compiler-bindir'
> CMake Error at gpu_utils_generated_gpu_utils.cu.o.cmake:206 (message):
>Error generating
>
>
> /home/jalemkul/gmxbuild/src/gmxlib/gpu_utils/CMakeFiles/gpu_utils.dir//./gpu_utils_generated_gpu_utils.cu.o
>
>
> make[2]: ***
>
> [src/gmxlib/gpu_utils/CMakeFiles/gpu_utils.dir/./gpu_utils_generated_gpu_utils.cu.o]
> Error 1
> make[1]: *** [src/gmxlib/gpu_utils/CMakeFiles/gpu_utils.dir/all] Error 2
> make: *** [all] Error 2
>
> Any ideas?  I'm guessing it's related to the GPU detection failing,
> because I
> can build on a workstation in our lab that has a C2075 card.
>

Could be this cmake bug: http://www.gccxml.org/Bug/view.php?id=13674
Could you try whether any other cmake version (e.g. 2.8.9) works?

Roland


>
> -Justin
>
> --
> 
>
> Justin A. Lemkul, Ph.D.
> Research Scientist
> Department of Biochemistry
> Virginia Tech
> Blacksburg, VA
> jalemkul[at]vt.edu | (540) 231-9080
> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
>
> 
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Gromacs 4.6 segmentation fault with mdrun

2012-11-16 Thread Roland Schulz
Hi Raf,

which version of Gromacs did you use? If you used the nbnxn_hybrid_acc
branch, please use the release-4-6 branch instead and see whether that fixes
your issue. If not, please open a bug and upload your log file and your tpr.

Roland


On Thu, Nov 15, 2012 at 5:13 PM, Raf Ponsaerts <
raf.ponsae...@med.kuleuven.be> wrote:

> Hi Szilárd,
>
> I assume I get the same segmentation fault error as Sebastian (don't
> shoot if not so). I have 2 NVIDA GTX580 cards (and 4x12-core amd64
> opteron 6174).
>
> in brief :
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0x7fffc07f8700 (LWP 32035)]
> 0x761de301 in nbnxn_make_pairlist.omp_fn.2 ()
> from /usr/local/gromacs/bin/../lib/libmd.so.6
>
> Also -nb cpu with Verlet cutoff-scheme results in this error...
>
> gcc 4.4.5 (Debian 4.4.5-8), Linux kernel 3.1.1
> CMake 2.8.7
>
> If I attach the mdrun.debug output file to this mail, the mail to the
> list gets bounced by the mailserver (because mdrun.debug > 50 Kb).
>
> Hoping this might help,
>
> regards,
>
> raf
> ===
> compiled code :
> commit 20da7188b18722adcd53088ec30e5f256af62f20
> Author: Szilard Pall 
> Date:   Tue Oct 2 00:29:33 2012 +0200
>
> ===
> (gdb) exec mdrun
> (gdb) run -debug 1 -v -s test.tpr
>
> Reading file test.tpr, VERSION 4.6-dev-20121002-20da718 (single
> precision)
> [New Thread 0x73844700 (LWP 31986)]
> [Thread 0x73844700 (LWP 31986) exited]
> [New Thread 0x73844700 (LWP 31987)]
> [Thread 0x73844700 (LWP 31987) exited]
> Changing nstlist from 10 to 50, rlist from 2 to 2.156
>
> Starting 2 tMPI threads
> [New Thread 0x73844700 (LWP 31992)]
> Using 2 MPI threads
> Using 24 OpenMP threads per tMPI thread
>
> 2 GPUs detected:
>   #0: NVIDIA GeForce GTX 580, compute cap.: 2.0, ECC:  no, stat:
> compatible
>   #1: NVIDIA GeForce GTX 580, compute cap.: 2.0, ECC:  no, stat:
> compatible
>
> 2 GPUs auto-selected to be used for this run: #0, #1
>
>
> Back Off! I just backed up ctab14.xvg to ./#ctab14.xvg.1#
> Initialized GPU ID #1: GeForce GTX 580
> [New Thread 0x73043700 (LWP 31993)]
>
> Back Off! I just backed up dtab14.xvg to ./#dtab14.xvg.1#
>
> Back Off! I just backed up rtab14.xvg to ./#rtab14.xvg.1#
> [New Thread 0x71b3c700 (LWP 31995)]
> [New Thread 0x7133b700 (LWP 31996)]
> [New Thread 0x70b3a700 (LWP 31997)]
> [New Thread 0x7fffebfff700 (LWP 31998)]
> [New Thread 0x7fffeb7fe700 (LWP 31999)]
> [New Thread 0x7fffeaffd700 (LWP 32000)]
> [New Thread 0x7fffea7fc700 (LWP 32001)]
> [New Thread 0x7fffe9ffb700 (LWP 32002)]
> [New Thread 0x7fffe97fa700 (LWP 32003)]
> [New Thread 0x7fffe8ff9700 (LWP 32004)]
> [New Thread 0x7fffe87f8700 (LWP 32005)]
> [New Thread 0x7fffe7ff7700 (LWP 32006)]
> [New Thread 0x7fffe77f6700 (LWP 32007)]
> [New Thread 0x7fffe6ff5700 (LWP 32008)]
> [New Thread 0x7fffe67f4700 (LWP 32009)]
> [New Thread 0x7fffe5ff3700 (LWP 32010)]
> [New Thread 0x7fffe57f2700 (LWP 32011)]
> [New Thread 0x7fffe4ff1700 (LWP 32012)]
> [New Thread 0x7fffe47f0700 (LWP 32013)]
> [New Thread 0x7fffe3fef700 (LWP 32014)]
> [New Thread 0x7fffe37ee700 (LWP 32015)]
> [New Thread 0x7fffe2fed700 (LWP 32016)]
> [New Thread 0x7fffe27ec700 (LWP 32017)]
> Initialized GPU ID #0: GeForce GTX 580
> Using CUDA 8x8x8 non-bonded kernels
> [New Thread 0x7fffe1feb700 (LWP 32018)]
> [New Thread 0x7fffe0ae4700 (LWP 32019)]
> [New Thread 0x7fffcbfff700 (LWP 32020)]
> [New Thread 0x7fffcb7fe700 (LWP 32021)]
> [New Thread 0x7fffcaffd700 (LWP 32022)]
> [New Thread 0x7fffca7fc700 (LWP 32023)]
> [New Thread 0x7fffc9ffb700 (LWP 32024)]
> [New Thread 0x7fffc97fa700 (LWP 32025)]
> [New Thread 0x7fffc8ff9700 (LWP 32026)]
> [New Thread 0x7fffc3fff700 (LWP 32027)]
> [New Thread 0x7fffc37fe700 (LWP 32028)]
> [New Thread 0x7fffc2ffd700 (LWP 32029)]
> [New Thread 0x7fffc27fc700 (LWP 32031)]
> [New Thread 0x7fffc1ffb700 (LWP 32032)]
> [New Thread 0x7fffc17fa700 (LWP 32033)]
> [New Thread 0x7fffc0ff9700 (LWP 32034)]
> [New Thread 0x7fffc07f8700 (LWP 32035)]
> [New Thread 0x7fffbfff7700 (LWP 32036)]
> [New Thread 0x7fffbf7f6700 (LWP 32037)]
> [New Thread 0x7fffbeff5700 (LWP 32038)]
> [New Thread 0x7fffbe7f4700 (LWP 32039)]
> [New Thread 0x7fffbdff3700 (LWP 32040)]
> [New Thread 0x7fffbd7f2700 (LWP 32042)]
> [New Thread 0x7fffbcff1700 (LWP 32043)]
> Making 1D domain decomposition 2 x 1 x 1
>
> * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING *
> We have just committed the new CPU detection code in this branch,
> and will commit new SSE/AVX kernels in a few days. However, this
> means that currently only the NxN kernels are accelerated!
> In the mean time, you might want to avoid production runs in 4.6.
>
>
> Back Off! I just backed up traj.trr to ./#traj.trr.1#
>
> Back Off! I just backed up traj.xtc to ./#traj.xtc.1#
>
> Back Off! I just backed up ener.edr to ./#ener.edr.1#
> starting mdrun 'Protein in water'
> 10 steps,200.0 ps.
>
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0x7fffc07f8700 (LWP 32035)]
> 0x

Re: [gmx-users] testing Gromacs 4.6 (git version) on GPUs

2012-10-31 Thread Roland Schulz
Hi,

you can use the tpr files from the tgz if you use one of the non-implicit
"solv" tests (preferably the PME one).
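For example (a sketch; the exact directory names inside the archive may differ):

tar xzf gromacs-gpubench-dhfr.tar.gz
cd dhfr-solv-PME   # hypothetical directory name
mdrun -s topol.tpr -testverlet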

Roland

On Wed, Oct 31, 2012 at 10:37 AM, Susan Chacko  wrote:

>
> Hi all,
>
> I wanted to test the built-in support in Gromacs 4.6 (non-OpenMM) for GPUs.
> I downloaded the latest git version and it built successfully; it appears
> to link the right CUDA libraries etc.
>
> I tried testing it with the Gromacs GPU benchmark suite
> (http://www.gromacs.org/@api/deki/files/128/=gromacs-gpubench-dhfr.tar.gz)
>
> The output md.log file reports:
> 
> […]
> NOTE: GPU(s) found, but the current simulation can not use GPUs
>  To use a GPU, set the mdp option: cutoff-scheme = Verlet
>  (for quick performance testing you can use the -testverlet option)
> 
>
> So I tried 'mdrun -s topol.tpr -testverlet' and got:
> ---
> Program mdrun, VERSION 4.6-dev
> Source code file: /usr/local/src/gromacs/gromacs/src/kernel/runner.c,
> line: 700
>
> Fatal error:
> Can only convert old tpr files to the Verlet cut-off scheme with 3D pbc
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
> ---
>
> Does anyone have a test set that I can use to check the performance of
> Gromacs 4.6 on GPUs? I'm more of a sysadmin than a computational chemist,
> so I'm sort of reliant on the provided benchmark suites.
>
> Any suggestions welcomed and appreciated,
> Susan.
>
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309


Re: [gmx-users] Version 4.6

2012-10-17 Thread Roland Schulz
Hi,

the main remaining issue is the new (faster) group cut-off kernels.
As always, volunteers are very welcome. The remaining issues can be found at
redmine.gromacs.org, and if someone wants to help but doesn't want to
program, help with the documentation at wiki.gromacs.org is always
appreciated (e.g.
http://www.gromacs.org/Documentation/Installation_Instructions/Cmake).

Roland

On Tue, Oct 16, 2012 at 4:56 AM, SebastianWaltz <
sebastian.wa...@physik.uni-freiburg.de> wrote:

> Hey together,
>
> I am wondering when the stable version of 4.6 will be released. I have
> been using the prerelease version for basic NVT and NPT all-atom
> simulations for months now with great success, and I wonder what is
> missing for the full release. I would be very grateful for some
> information about the current status of the 4.6 version.
>
> Thanks a lot
>
> Sebastian
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309


Re: [gmx-users] Problem with OMP_NUM_THREADS=12 mpirun -np 16 mdrun_mpi

2012-08-29 Thread Roland Schulz
Hi,

the OpenMP code is still under review. You can download it using
git fetch https://gerrit.gromacs.org/gromacs refs/changes/83/1283/14 && git
checkout FETCH_HEAD
You can check https://gerrit.gromacs.org/#/c/1283/ for the latest version
of it (as of time of writing the above line gives you the latest).

Roland

On Wed, Aug 29, 2012 at 10:37 AM, jesmin jahan  wrote:

> Thanks David and Szilárd.
>
> I am attaching a log file that I have got from my experiment. Please
> have a look. It says, gromacs version 4.6-dev
> I am using  :-)  VERSION 4.6-dev-20120820-87e5bcf  (-: of Gromacs.
>
> I have used the commands:
>
> git clone git://git.gromacs.org/gromacs.git
> cd gromacs
> git checkout --track -b release-4-6 origin/release-4-6 as written on
> the gromacs website.
>
> to download it and
>
> used cmake .. -DGMX_MPI=ON -DGMX_OPENMP=ON to configure it.
>
> Is it the case that a later version of 4.6 has this feature?
>
> Please let me know.
>
> Thanks,
> Jesmin
>
> On Wed, Aug 29, 2012 at 4:27 AM, Szilárd Páll 
> wrote:
> > On Wed, Aug 29, 2012 at 5:32 AM, jesmin jahan 
> wrote:
> >> Dear All,
> >>
> >> I have installed gromacs VERSION 4.6-dev-20120820-87e5bcf with
> >> -DGMX_MPI=ON . I am assuming as OPENMP is default, it will be
> >> automatically installed.
> >>
> >> My Compiler is
> >> /opt/apps/intel11_1/mvapich2/1.6/bin/mpicc Intel icc (ICC) 11.1 20101201
> >>
> >> And I am using OMP_NUM_THREADS=12 mpirun -np 16 mdrun_mpi -s imd.tpr
> >>
> >> I was hoping this would run 16 processes, each with 12 threads.
> >> However, in the log file I saw something like this:
> >>
> >>  R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G
> >>
> >>  Computing: Nodes Number G-CyclesSeconds %
> >> ---
> >>  Domain decomp.16  10.0270.0 1.8
> >>  Comm. coord.  16  10.0020.0 0.1
> >>  Neighbor search   16  10.1130.1 7.7
> >>  Force 16  11.2360.883.4
> >>  Wait + Comm. F16  10.0150.0 1.0
> >>  Update16  10.0050.0 0.4
> >>  Comm. energies16  10.0080.0 0.5
> >>  Rest  16   0.0760.0 5.1
> >> ---
> >>  Total 16   1.4810.9   100.0
> >> ---
> >>
> >>
> >> Its not clear whether each of the 16 nodes runs 12 threads internally
> or not.
> >
> > No it's not. That output is not from 4.6; in 4.6 you would have an extra
> > column with the number of threads.
> >
> > --
> > Szilárd
> >
> >
> >> If anyone knows about this, please let me know.
> >>
> >> Thanks for help.
> >>
> >> Best Regards,
> >> Jesmin
> >>
> >>
> >>
> >> --
> >> Jesmin Jahan Tithi
> >> PhD Student, CS
> >> Stony Brook University, NY-11790.
> >> --
> >> gmx-users mailing listgmx-users@gromacs.org
> >> http://lists.gromacs.org/mailman/listinfo/gmx-users
> >> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> >> * Please don't post (un)subscribe requests to the list. Use the
> >> www interface or send it to gmx-users-requ...@gromacs.org.
> >> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > --
> > gmx-users mailing listgmx-users@gromacs.org
> > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> > * Please don't post (un)subscribe requests to the list. Use the
> > www interface or send it to gmx-users-requ...@gromacs.org.
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
> --
> Jesmin Jahan Tithi
> PhD Student, CS
> Stony Brook University, NY-11790.
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309


Re: [gmx-users] g_tune_pme restart

2012-08-24 Thread Roland Schulz
On Fri, Aug 24, 2012 at 12:23 PM, Albert  wrote:
> Dear:
>
>I use g_tune_pme for MD production, but it crashed before the job
> finished because it exceeded the cluster's walltime limit. I get the
> following information from perf.out:
>
> mpirun -np 144 mdrun -npme -1 -s tuned.tpr -v -o md.trr -c md.gro -e
> md.edr -g md.log
>
> I am just wondering: is it correct to use the following command to
> continue and append the job?
>
> mpirun -np 144 mdrun -npme -1 -s tuned.tpr -v -f md.trr -e md.edr -g
> md.log -o md.trr -cpi -append

mdrun doesn't have "-f" as an option; the trajectory output flag is "-o". Otherwise the command looks OK.
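For example, the corrected restart line could look like this (a sketch,
keeping the file names above; -cpi without an argument defaults to state.cpt):

mpirun -np 144 mdrun -npme -1 -s tuned.tpr -o md.trr -c md.gro -e md.edr -g md.log -cpi -append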

Roland
>
> thank you very much
>
> best
> Albert
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Only plain text messages are allowed!
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309


Re: [gmx-users] laptop GPU support

2012-08-04 Thread Roland Schulz
Hi,

yes, they will all be supported.
For the 650M (GDDR5 model) I get 58.5 ns/day, and using only the CPU
(i7-3610QM) I get 44.4 ns/day,
for a 21k-atom water system (rcoulomb 0.8, fourier spacing 0.096, nstlist=7,
CUDA 5, development GPU driver).
So if you are planning to get a similarly fast CPU, you probably won't
see much speedup if you go with anything slower than the 650M. You might
even want to consider the 660M.
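For reference, a sketch of the benchmark settings mentioned above as .mdp
lines (everything else assumed at its default; rvdw is my assumption, since
the Verlet scheme uses matching cut-offs):

cutoff-scheme  = Verlet
rcoulomb       = 0.8
rvdw           = 0.8
fourierspacing = 0.096
nstlist        = 7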

Roland


On Wed, Aug 1, 2012 at 6:29 AM, Thomas Evangelidis  wrote:
> Dear GROMACS community,
>
> I am about to buy a new laptop so I would like to know if GROMACS
> supports, or will support in the future, any of the following GPUs:
>
> NVIDIA GeForce GT 630M, 640M or 650M
>
> Thank you in advance,
> Thomas
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Only plain text messages are allowed!
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309


[gmx-users] Re: [gmx-developers] issue on benchmarking FEP calculations in Gromacs and NAMD

2012-06-10 Thread Roland Schulz
On Fri, Jun 8, 2012 at 12:21 PM, Wang, Yuhang  wrote:

>  Dear Gromacs developers,
>
> I have benchmarked the desolvation free energy calculation of Na+ ion
> using Gromacs and NAMD (for comparison). There is a large difference in the
> electrostatic desolvation free energy (see below and the attached figure):
>
> Gromacs: 82.10(raw)+11.65(Ewald correction) = 93.75 kcal/mol
> NAMD: 93.62(raw)+11.23(Ewald correction) = 104.85 kcal/mol
>
> Both of them are FEP calculations with perturbation of electrostatic
> interactions and used the same Lennard-Jones parameters:
> sigma=0.243 nm, epsilon=0.196 kJ/mol (0.0469 kca/mol)
>
> Ewald correction was calculated by: 0.5*(2.837297/L)*331 (unit: kcal/mol),
> "L" is the cubic box length.
>
> Question: how can I explain the difference? Does Gromacs have a different
> PME implementation than NAMD?
>
Since you are using PME (and not PME-Switch) you don't have a buffer
region. That is different from NAMD. So, to make sure that this
isn't the cause of the difference, you might want to check GROMACS with
PME-Switch. If you don't mind pre-release code you can also try out pre-4.6
with Verlet-PME, which is even closer to how it is done in NAMD:
http://lists.gromacs.org/pipermail/gmx-developers/2012-March/005674.html
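A minimal sketch of the PME-Switch check in the .mdp (cut-off values
hypothetical; the point is that rlist > rcoulomb provides the buffer):

coulombtype = PME-Switch
rcoulomb    = 0.9
rlist       = 1.0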

PS: Please send user questions to gmx-users not gmx-developers.

Roland


>
> P.S. my input scripts are in the attachments.
>
>
> Steven W.
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] How to read in netcdf with GROMACS?

2012-05-26 Thread Roland Schulz
Hi,

On Sat, May 26, 2012 at 12:42 PM, a a  wrote:

>
> /usr/local/lib/vmd/plugins/LINUXAMD64/molfile/xyzplugin.so: wrong ELF
> class: ELFCLASS64
>

This error suggests that you installed VMD as 64-bit but GROMACS as 32-bit.
Reinstall one of them so that the architectures match and it should work.

Roland


>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] Problems of gmx4.5.5 on the E3-1230 V2 CPU (Ivy Bridge)

2012-05-26 Thread Roland Schulz
Hi,

it is always possible that your simulation isn't well equilibrated and that
different rounding errors make it crash with one binary/hardware but not
with another. See also
http://www.gromacs.org/Documentation/Terminology/Blowing_Up. You should
first rule this out by taking a well-equilibrated
structure from your working hardware as the starting structure for the
Ivy Bridge run and seeing whether you still observe the crash.

If you want to see whether it is really a problem with the binary on
Ivy-Bridge, you should write out the energy on all steps (nstenergy=1) and
compare the energy between the working and the non-working simulation
(gmxcheck). If the energy disagrees already for the first 40 steps it is
likely a problem (later differences are expected because of the chaotic
nature of MD). You should also write a bit more about how you installed
Gromacs (including the full cmake/configure command line).
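For example (a sketch; file names hypothetical), with nstenergy = 1 set in
the .mdp of both runs:

gmxcheck -e run_working.edr -e2 run_ivybridge.edr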

Roland

On Fri, May 25, 2012 at 8:44 AM, 石子枫  wrote:

>  Dear everyone!
>
> We found some problems running gmx 4.5.5 in parallel on the E3-1230 V2 CPU
> (Ivy Bridge).
> The compilers we used were ifort and icc (Version 12.0.3).
> Only when the value of the option "-nt" is > 2 does the mdrun program
> crash, after a few hundred MD steps.
> And we also noticed that the same .tpr file could successfully run on both
> the i7-2600 and AMD platforms.
>
> The typically error outputs are attached below. Thanks for any responses
> and suggestions.
>
> Wade Lv
>
>
> Error Outputs:
> ==
> step 222: Water molecule starting at atom 3633 can not be settled.
> Check for bad contacts and/or reduce the timestep if appropriate.
>
> step 222: Water molecule starting at atom 3882 can not be settled.
> Check for bad contacts and/or reduce the timestep if appropriate.
> Wrote pdb files with previous and current coordinates
> Wrote pdb files with previous and current coordinates
>
> ---
> Program mdrun, VERSION 4.5.5
> Source code file: pme.c, line: 538
>
> Fatal error:
> 1 particles communicated to PME node 0 are more than 2/3 times the cut-off
> out of the domain decomposition cell of their charge group in dimension x.
> This usually means that your system is not well equilibrated.
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
> =
>
> **
> **
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] Gromacs 4.6 with CUDA 4.2

2012-05-04 Thread Roland Schulz
On Wed, Apr 25, 2012 at 9:38 AM, Szilárd Páll wrote:

> On Wed, Apr 25, 2012 at 11:43 AM, SebastianWaltz
>  wrote:
> > Dear all,
> >
> > will the new version 4.6 work together with CUDA 4.2? Would be good to
> > know, since this is needed for the new NVIDIA Gemini cards with Kepler
> > technology.
>
> Yes it will. I would like to point out that the "needed" part is not
> exactly true, binaries compiled with pre-4.2 can run equally fast on
> Kepler especially if JIT compilation of PTX code is enabled.
>
> Moreover, the current 4.2 pre-release compiler produces slower code than
> CUDA 4.0, not only for Fermi but also for Kepler. We'll have to see
> what the final CUDA 4.2 release + official drivers bring.
>

Szilard, have you already had a chance to test the 4.2 final release, and
can you recommend which version to use?
Is the difference in performance significant?

Roland


>
> --
> Szilárd
>
>
>
> > Thanks,
> >
> > Basti
> > --
> > gmx-users mailing listgmx-users@gromacs.org
> > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> > Please don't post (un)subscribe requests to the list. Use the
> > www interface or send it to gmx-users-requ...@gromacs.org.
> > Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] further discussion on the mdrun -append function

2012-04-05 Thread Roland Schulz
On Fri, Apr 6, 2012 at 12:24 AM, Peter C. Lai  wrote:

> Sounds like there is a language barrier?
>
> Anyway, some cluster filesystems don't support append (e.g. lustre).
>

I often use append on lustre. No problem. Why do you think it doesn't work?


>
> So I never use append.
>
> I will use tpbconv -extend and -o to a *different* tpr file, then run
>

Even if you don't want to use append, you don't need to use tpbconv. You can
run mdrun with -noappend and the total number of steps you want for nsteps;
mdrun then automatically creates files called e.g. traj.part0002.trr.

Roland


> mdrun -deffnm new_extended which generates everything with the new_extended
> filename prefix. Then I can use trjcat or eneconv to concatenate everything
> together before running the analysis.
>
> Detailed Example:
>
> md.mdp is configured for 1ns of simulation time
>
> grompp -f md.mdp -n index.ndx -p topol.top -c minimized.gro -o md-1ns.tpr
>
> mdrun -deffnm md-1ns
>
> -deffnm write out all files starting with this parameter: md-1ns.trr,
> md-1ns.edr, md-1ns.log, etc.
>
> Note: do not use a file extension for -deffnm or else it will put that in
> the resulting filenames. If you use -deffnm md-1ns.tpr it will write out
> files called md-1ns.tpr.trr md-1ns.tpr.edr etc.
>
> ...
>
> tpbconv -s md-1ns.tpr -extend 2000 -o md-1ns-to-3ns.tpr
>
> mdrun -deffnm md-1ns-to-3ns -cpi md-1ns.cpt
>
> If you do not use -cpi here, your simulation will restart from 0, and will
> be written to files beginning with "md-1ns-to-3ns"
>
> The -cpi makes it continue from md-1ns.cpt but because of *different
> filenames*, it will not append, and therefore will write new files
> starting at 1ns.
>
> ...
>
> trjcat -f md-1ns.trr md-1ns-to-3ns.trr -o md-to-3ns.xtc
> eneconv -f md-1ns.edr md-1ns-to-3ns.edr -o md-to-3ns.edr
>
> g_energy -f md-to-3ns.edr -o total-energy.xvg will give you a continuous
> .xvg from 0 to 3ns...
>
> If you only analysed md-1ns-to-3ns.trr/.edr then your curves will only
> start
> from 1ns onwards.
>
> Thus the trick is: use different file prefixes all the time and do not use
> append. It is the least confusing workflow to use.
>
> On 2012-04-06 01:56:18PM +1000, Mark Abraham wrote:
> > On 6/04/2012 1:41 PM, Acoot Brett wrote:
> > > Dear All,
> > > Frim mdrun -h, I got the following message:
> > > /-[no]append bool yes Append to previous output files when continuing
> > > from checkpoint instead of adding the simulation art number to all
> > > file names/
> > > Thus there is the possibility that the series xvg curves can never
> > > starts from 0 ns. Do you agree,
> >
> > No. If you have your initial trajectory, then I told you about both the
> > available workflows yesterday. I'm going to stop repeating myself.
> >
> > Mark
>
> > --
> > gmx-users mailing listgmx-users@gromacs.org
> > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> > Please don't post (un)subscribe requests to the list. Use the
> > www interface or send it to gmx-users-requ...@gromacs.org.
> > Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
> --
> ==
> Peter C. Lai| University of Alabama-Birmingham
> Programmer/Analyst  | KAUL 752A
> Genetics, Div. of Research  | 705 South 20th Street
> p...@uab.edu | Birmingham AL 35294-4461
> (205) 690-0808|
> ==
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] How to set the VMD_PLUGIN_PATH for gromacs analysis?

2012-04-03 Thread Roland Schulz
Hi,

please make sure you compile with support for vmd plugins. One way to check
is that the src/config.h in your build folder contains "#define
GMX_DLOPEN". If it does not please check for warnings/errors in the
cmake/configure output and let us know how you run cmake/configure.
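For example, from the build folder (a sketch):

grep GMX_DLOPEN src/config.h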

If you have compiled with VMD plugin support, please send the full output of
the tool, and please send me a small trajectory which reproduces the
problem.

Roland

On Fri, Mar 30, 2012 at 11:47 PM, a a  wrote:

>  Dear Sir/Madam,
>
>
> I am trying to run g_covar using an mdcrd file.
>
>
> I installed VMD and GROMACS 4.5.5 on my computer. I also set the
> VMD_PLUGIN_PATH by adding a line to the .bashrc file in my home directory.
>
>
> VMD_PLUGIN_PATH=/home/cmche/AnnieSoftwares/vmd_ied/lib/vmd/plugins/LINUX/molfile
>
>
> When I ran the following command:
>
> /usr/local/gromacs/bin/g_covar_d -s average.pdb -f md0.mdcrd -o -v
>
>
>  I get the following error message.
>
>
> ---
> Program g_covar_d, VERSION 4.5.5
> Source code file: trxio.c, line: 870
>
> Fatal error:
> Not supported in read_first_frame: md0.mdcrd
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
> ---
>
>
>  Thanks to Francesco, I know we should be able to read in AMBER's mdcrd
> files. Did I do anything wrong here? Would you mind letting me know if I
> still did anything wrong?
>
>
> Best regards,
>
>
> Catherine
>
>
>
>
> Hi Catherine,
> you should install any gromacs *4.5.x*, and then you can use gromacs
> with any trajectory format supported by VMD, because gromacs is able to
> use the VMD plugins to perform trajectory reading.
> Basically:
> 1) Install the latest gromacs version
> 2) Install VMD
> 3) Set the variable VMD_PLUGIN_PATH to contain the complete path of the
> molfile vmd directory.
>In my case, for example:  VMD_PLUGIN_PATH=/apps/vmd/1.9/lib/vmd/plugins/LINUXAMD64/molfile
>
> Francesco
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] analysis tools

2012-01-07 Thread Roland Schulz
Hi,

it is planned for 5.0. Currently you need to run it in parallel manually,
e.g. by splitting the trajectory into pieces and running the analysis on each
piece in parallel.
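A minimal sketch of the manual approach, assuming a 100 ns trajectory and
shell job control (file names hypothetical):

trjconv -f traj.xtc -b 0 -e 50000 -o part1.xtc
trjconv -f traj.xtc -b 50000 -o part2.xtc
g_rdf -f part1.xtc -s topol.tpr -o rdf1.xvg &
g_rdf -f part2.xtc -s topol.tpr -o rdf2.xvg &
wait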

Roland

On Sun, Jan 8, 2012 at 1:14 AM, Juliette N.  wrote:

>  Hello all,
>
> I am just wondering if analysis tools like g_rdf..can be run in parallel
> like mdrun? I have a big system and analysis tools take a lot of time to
> finish.
>
> Thanks,
> J.
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] Possible typo in "nb_kernel_x86_64_sse.c" (4.5.5)

2011-12-20 Thread Roland Schulz
Hi,

this is on purpose. Since 4.5, GROMACS requires SSE2; it is used for, e.g.,
the GB kernels. CPUs with SSE1 support but no SSE2 are so old that we
don't try to support them anymore, even for those kernels not requiring SSE2.

Roland

On Tue, Dec 20, 2011 at 6:05 AM, Daniel Adriano Silva M
wrote:

>  Hi Devs,
>
>  Is this intended or just a typo:
>
>  #nb_kernel_x86_64_sse/nb_kernel_x86_64_sse.c#
> .:
> 211:fprintf(log,"Testing x86_64 SSE2 support...");
> .:
> ##
>
>  instead of:
>
>  .:
> 211:fprintf(log,"Testing x86_64 SSE1 support...");
> .:
>
>
>  The same thing appears in other "_sse.c" files.
>
>
>  Thanks,
> Daniel
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] Re: Restarting a crashed run

2011-11-16 Thread Roland Schulz
On Wed, Nov 16, 2011 at 7:57 PM, bharat gupta wrote:

>  Hi,
>
>  I was running a simulation of 10 ns which crashed at 1.7 ns
> due to a power failure. I used the following command to restart the
> simulation from that point:
>
>
>  mdrun -s topol.tpr -cpi state.cpt -append
>
>
>  After checking the file md_0_1.log and others, I am getting data only
> for those 1.7 ns. How can I retrieve the data for the rest of the
> simulation that I restarted??
>
Check for "Restarting from" line in log and errors warnings in the log and
the output file. Without more information we can't help.
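For example (a sketch):

grep -n "Restarting from" md_0_1.log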

Roland


>
>
>  --
> Bharat
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] MDRun -append error

2011-11-16 Thread Roland Schulz
On Wed, Nov 16, 2011 at 4:11 PM, xianqiang  wrote:

>   Hi, all
>
> I just restarted a simulation with 'mpirun -np 8 mdrun -pd yes -s md_0_1.tpr
> -cpi state.cpt -append'
>
> However, the following error appears:
>
>
> Output file appending has been requested,
> but some output files listed in the checkpoint file state.cpt
> are not present or are named differently by the current program:
> output files present: traj.xtc
> output files not present or named differently: md_0_1.log md_0_1.edr
>
> ---
> Program mdrun, VERSION 4.5.3
> Source code file: ../../../gromacs-4.5.3/src/gmxlib/checkpoint.c, line:
> 2139
>
> Fatal error:
> File appending requested, but only 1 of the 3 output files are present
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
>
>
> The two files which cannot be found are located in the same directory
> as 'traj.xtc'; why can they not be found by gromacs?
>
Maybe they are not readable? Can you look at the log file (e.g. using
"less")?

Roland


>
> Thanks and best regards,
> Xianqiang
>
>
> --
>  Xianqiang Sun
>
>  Email: xianqi...@theochem.kth.se
> Division of Theoretical Chemistry and Biology
> School of Biotechnology
> Royal Institute of Technology
> S-106 91 Stockholm, Sweden
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] No locks available.

2011-11-14 Thread Roland Schulz
Hi,

On Mon, Nov 14, 2011 at 2:15 AM, lina  wrote:

> On Mon, Nov 14, 2011 at 7:47 AM, Roland Schulz  wrote:
> > Hi,
> > what file system is this? What operating system on the compute node? In
> case
> > it is a network file system what file system is used underneath and what
> > operating system is the file server using? What version of GROMACS are
> you
> > using?
> > As a workaround you should be able to run with "mdrun -noappend".
>
> There is no problem running a mdrun without appending.


Could you please send the information about your system? I would be
interested to see why the locking fails.

Roland



>  > Roland
> >
> > On Sun, Nov 13, 2011 at 10:43 AM, lina  wrote:
> >>
> >> Hi,
> >>
> >> This is the first time I met:
> >>
> >> Fatal error:
> >> Failed to lock: md.log. No locks available
> >>
> >> the disk is not saturated,
> >>
> >> md.log is normal,
> >>
> >> The job was stopped months ago, and now I planned to resume it with
> >> all the necessary files kept intact.
> >>
> >> Thanks for pointing out which parts I should examine,
> >>
> >> Best regards,
> >> --
> >> gmx-users mailing listgmx-users@gromacs.org
> >> http://lists.gromacs.org/mailman/listinfo/gmx-users
> >> Please search the archive at
> >> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> >> Please don't post (un)subscribe requests to the list. Use the
> >> www interface or send it to gmx-users-requ...@gromacs.org.
> >> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >>
> >>
> >>
> >>
> >
> >
> >
> > --
> > ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
> > 865-241-1537, ORNL PO BOX 2008 MS6309
> >
> > --
> > gmx-users mailing listgmx-users@gromacs.org
> > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> > Please don't post (un)subscribe requests to the list. Use the
> > www interface or send it to gmx-users-requ...@gromacs.org.
> > Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] No locks available.

2011-11-13 Thread Roland Schulz
Hi,

what file system is this? What operating system on the compute node? In
case it is a network file system what file system is used underneath and
what operating system is the file server using? What version of GROMACS are
you using?

As a workaround you should be able to run with "mdrun -noappend".

Roland

On Sun, Nov 13, 2011 at 10:43 AM, lina  wrote:

> Hi,
>
> This is the first time I met:
>
> Fatal error:
> Failed to lock: md.log. No locks available
>
> the disk is not saturated,
>
> md.log is normal,
>
> The job was stopped months ago, and now I planned to resume it with
> all the necessary files kept intact.
>
> Thanks for pointing out which parts I should examine,
>
> Best regards,
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] CygWin and Gromacs 4.5.5

2011-11-10 Thread Roland Schulz
On Thu, Nov 10, 2011 at 5:37 AM, Mr Bernard Ramos wrote:

>  yes, I have also experienced the same with Gromacs 4.5.5 and Cygwin. I
> hope this issue will be addressed. Thanks
>
The slow IO performance under Cygwin is a known consequence of Cygwin's
approach; it happens with any Cygwin program and cannot be fixed by
GROMACS. If you want decent IO performance you need to use either Visual
C++ or MSYS/MinGW (see http://redmine.gromacs.org/issues/448).
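For example, configuring with MSVC through CMake could look like this (a
sketch; the generator name depends on your Visual Studio version):

cmake .. -G "Visual Studio 10"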

Roland

>
>   --
> *From:* Roland Schulz 
> *To:* Discussion list for GROMACS users 
> *Sent:* Wednesday, November 9, 2011 11:03 AM
> *Subject:* Re: [gmx-users] CygWin and Gromacs 4.5.5
>
>
>
> On Tue, Nov 8, 2011 at 5:59 PM, Mark Abraham wrote:
>
> On 8/11/2011 11:35 PM, Szilárd Páll wrote:
>  > Additionally, AFAIK you will get better performance if you compile
> > with MSVC which should be fairly easy if you use CMake - I'm not
> > entirely sure about this
>
>  I'd be surprised. Why should MSVC outperform gcc?
>
> The file performance is horrible with Cygwin (even much slower than in a
> virtual machine). But this should only matter for analysis. For
> simulation I agree that the performance should be as good (I don't know
> about NUMA).
>
>  Roland
>
>
> Mark
>
> > though.
> > Cheers,
> > --
> > Szilárd
> >
> >
> >
> > On Tue, Nov 8, 2011 at 12:41 PM,  wrote:
> >> Help me.
> >> I want to install Gromacs 4.5.5 with usage CygWin.
> >> When I execute a command "make" I receive the error report:
> >>
> >> numa_malloc.c:117: error: expected '>  ' before ' Processor'
> >> numa_malloc.c:117: error: expected '>  ' before ' ProcNumber'
> >> numa_malloc.c:117: error: expected ' = ', ', ', '; ', ' asm '
> >> or.
> >> ...
> >> make [3]: *** [numa_malloc.lo] Error 1
> >> make [3]: leaving directory '/cygdrive/.
> >> gromacs4.5.5/src/gmxlib/thread_mpi'
> >> make [3]: *** [install-recursive] Error 1
> >> make [3]: leaving directory '/cygdrive/. gromacs4.5.5/src/gmxlib'
> >> make [3]: *** [install-recursive] Error 1
> >> make [3]: leaving directory '/cygdrive/. gromacs4.5.5/src'
> >> make [3]: *** [install-recursive] Error 1
> >>
> >> Where an error?
> >>
> >>
> >> CygWin it is installed with packets:
> >> Section "Devel"
> >> - autoconf: Wrapper scripts for autoconf commands
> >> - autoconf2.1: Stable version of the automatic configure script builder
> >> - autoconf2.5: Development version of the automatic configure script
> builder
> >> - automake1.9: a tool for generating GNU-compliant Makefiles
> >> - binutils: The GNU assembler, linker and binary utilites
> >> - gcc: A C compiler upgrade helper
> >> - gcc-core: A C compiler
> >> - gcc-g ++: A C ++ compiler
> >> - gcc-g77: Fortran compiler
> >> - gcc-mingw-core: Mingw32 support headers and libraries for GCC
> >> - gcc-mingw-g ++: Mingw32 support headers and libraries for GCC A C ++
> >> - gcc-mingw-g77: Mingw32 support headers and libraries for GCC Fortran
> >> - libgcc1: GCC compiler support shared runtime
> >> - libgdbm-devel: GNU dbm database routines (development)
> >> - make: The GNU version of the ` make ` utility
> >> - mingw-runtime: MinGW Runtime
> >>
> >> Section "Interpreters"
> >> - perl: Larry Wall ` s Practical Extracting and Report Language
> >>
> >> Packet FFTW ver.3.2.2 is in addition compiled and installed
> >>
> >> Trial setting Gromacs of 4.5.3 errors does not give.
> >>
> >> The instruction on setting took here:
> >> http://lists.groma
> cs.org/pipermail/gmx-users/2009-September/044792.html
> >>
> >> The error arises only for version Gromacs 4.5.5
> >>
> >>
> >> Igor
> >>
> >>
> >> --
> >> gmx-users mailing listgmx-users@gromacs.org
> >> http://lists.gromacs.org/mailman/listinfo/gmx-users
> >> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> >> Please don't post (un)subscribe requests to the list. Use the
> >> www interface or send it to gmx-users-requ...@gromacs.org.
> >> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >>
>

Re: [gmx-users] how to do remd with different tabulated potentials

2011-11-09 Thread Roland Schulz
Hi,

for Hamiltonian RepEx you need to formulate the different states as a
function of lambda. Look at the free energy documentation to see how to
describe different tables for different lambdas.

Roland

2011/11/8 杜波 <2008d...@gmail.com>

>  dear teacher,
>
>  If I want to do REMD with different tabulated potentials,
> how can I use mdrun's -table option (-table table.xvg -tableb table.xvg)?
>
>  If it can be used like that, there is another question:
> how can I rename the tables? (table_CR1_CR1: I renamed
> them table_CR1_CR10, table_CR1_CR11, table_CR1_CR12, ...;
> I tested this and it did not work!!!)
>
>  thanks
>
>  regards,
> PHD, Bo Du
> Department of Polymer Science and Engineering,
> School of Chemical Engineering and technology,
> Tianjin University, Weijin Road 92, Nankai District 300072,
> Tianjin City P. R. China
> Tel/Fax: +86-22-27404303
> E-mail: 2008d...@gmail.com 
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: Re: [gmx-users] remd with different potential at different temperature

2011-11-09 Thread Roland Schulz
Hi,

this is currently not possible: you can only do temperature or
Hamiltonian RepEx. As far as I know, 4.6 will support both simultaneously. In
the meantime you might be able to accomplish your goal
by reformulating the Temp-RepEx as a Hamiltonian RepEx, as is done in the
newer version of REST.

Roland

2011/11/8 杜波 <2008d...@gmail.com>

>  dear teacher,
> If I want to do REMD with different tabulated potentials,
> how can I use mdrun's -table option (-table table.xvg -tableb table.xvg)?
> If it can be used like that, there is another question:
> how can I rename the tables? (table_CR1_CR1: I renamed
> them table_CR1_CR10, table_CR1_CR11, table_CR1_CR12, ...;
> I tested this and it did not work!!!)
> thanks
> > regards,
> > PHD, Bo Du
> > Department of Polymer Science and Engineering,
> > School of Chemical Engineering and technology,
> > Tianjin University, Weijin Road 92, Nankai District 300072,
> > Tianjin City P. R. China
> > Tel/Fax: +86-22-27404303
> > E-mail: 2008d...@gmail.com 
>
>
>
>
>
>  Message: 1
> Date: Tue, 08 Nov 2011 17:55:49 +1100
> From: Mark Abraham 
> Subject: Re: [gmx-users] remd with different potential at different
>temperature
> To: Discussion list for GROMACS users 
> Message-ID: <4eb8d275.2010...@anu.edu.au>
> Content-Type: text/plain; charset="iso-8859-1"
>
>
> On 8/11/2011 5:43 PM, ?? wrote:
> > dear teacher,
> > how can I do REMD with different non-bonded potentials at different
> > temperatures?
> > Put simply: can I use a different *.top at each temperature?
>
> Probably. Try a simple case and see. The REMD implementation checks only
> certain critical quantities are constant over the generalized ensemble.
> See the lines that begin "Multi-checking" in an REMD .log file. You can
> probably even use different tabulated potentials for each replica.
>
> Mark
>
> >
> > If not, can you give me some suggestions for rewriting the GROMACS code?
> > thanks!!
> >
> > regards,
> > PHD, Bo Du
> > Department of Polymer Science and Engineering,
> > School of Chemical Engineering and technology,
> > Tianjin University, Weijin Road 92, Nankai District 300072,
> > Tianjin City P. R. China
> > Tel/Fax: +86-22-27404303
> > E-mail: 2008d...@gmail.com 
> >
> >
> >
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] CygWin and Gromacs 4.5.5

2011-11-08 Thread Roland Schulz
On Tue, Nov 8, 2011 at 5:59 PM, Mark Abraham wrote:

> On 8/11/2011 11:35 PM, Szilárd Páll wrote:
> > Additionally, AFAIK you will get better performance if you compile
> > with MSVC which should be fairly easy if you use CMake - I'm not
> > entirely sure about this
>
> I'd be surprised. Why should MSVC outperform gcc?
>
The file performance is horrible with Cygwin (even much slower than in a
virtual machine). But this should only matter for analysis. For
simulation I agree that the performance should be as good (I don't know
about NUMA).

Roland

>
> Mark
>
> > though.
> > Cheers,
> > --
> > Szilárd
> >
> >
> >
> > On Tue, Nov 8, 2011 at 12:41 PM,  wrote:
> >> Help me.
> >> I want to install Gromacs 4.5.5 with usage CygWin.
> >> When I execute a command "make" I receive the error report:
> >>
> >> numa_malloc.c:117: error: expected '>  ' before ' Processor'
> >> numa_malloc.c:117: error: expected '>  ' before ' ProcNumber'
> >> numa_malloc.c:117: error: expected ' = ', ', ', '; ', ' asm '
> >> or.
> >> ...
> >> make [3]: *** [numa_malloc.lo] Error 1
> >> make [3]: leaving directory '/cygdrive/.
> >> gromacs4.5.5/src/gmxlib/thread_mpi'
> >> make [3]: *** [install-recursive] Error 1
> >> make [3]: leaving directory '/cygdrive/. gromacs4.5.5/src/gmxlib'
> >> make [3]: *** [install-recursive] Error 1
> >> make [3]: leaving directory '/cygdrive/. gromacs4.5.5/src'
> >> make [3]: *** [install-recursive] Error 1
> >>
> >> Where an error?
> >>
> >>
> >> CygWin it is installed with packets:
> >> Section "Devel"
> >> - autoconf: Wrapper scripts for autoconf commands
> >> - autoconf2.1: Stable version of the automatic configure script builder
> >> - autoconf2.5: Development version of the automatic configure script
> builder
> >> - automake1.9: a tool for generating GNU-compliant Makefiles
> >> - binutils: The GNU assembler, linker and binary utilites
> >> - gcc: A C compiler upgrade helper
> >> - gcc-core: A C compiler
> >> - gcc-g ++: A C ++ compiler
> >> - gcc-g77: Fortran compiler
> >> - gcc-mingw-core: Mingw32 support headers and libraries for GCC
> >> - gcc-mingw-g ++: Mingw32 support headers and libraries for GCC A C ++
> >> - gcc-mingw-g77: Mingw32 support headers and libraries for GCC Fortran
> >> - libgcc1: GCC compiler support shared runtime
> >> - libgdbm-devel: GNU dbm database routines (development)
> >> - make: The GNU version of the ` make ` utility
> >> - mingw-runtime: MinGW Runtime
> >>
> >> Section "Interpreters"
> >> - perl: Larry Wall ` s Practical Extracting and Report Language
> >>
> >> Packet FFTW ver.3.2.2 is in addition compiled and installed
> >>
> >> Trial setting Gromacs of 4.5.3 errors does not give.
> >>
> >> The instruction on setting took here:
> >> http://lists.groma
> cs.org/pipermail/gmx-users/2009-September/044792.html
> >>
> >> The error arises only for version Gromacs 4.5.5
> >>
> >>
> >> Igor
> >>
> >>
> >> --
> >> gmx-users mailing listgmx-users@gromacs.org
> >> http://lists.gromacs.org/mailman/listinfo/gmx-users
> >> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> >> Please don't post (un)subscribe requests to the list. Use the
> >> www interface or send it to gmx-users-requ...@gromacs.org.
> >> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >>
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] error while install GMX4.5.5

2011-09-21 Thread Roland Schulz
On Wed, Sep 21, 2011 at 9:01 PM, Mark Abraham wrote:

>  On 21/09/2011 7:27 PM, zhongjin wrote:
>
>   Dear GMX users:
> While I am installing gmx 4.5.5, an error occurred after the "make" command:
> cc -O3 -fomit-frame-pointer -finline-functions -Wall -Wno-unused -msse2
> -funroll-all-loops -std=gnu99 -pthread -I./include -o .libs/grompp grompp.o
> -L/home/hzj1000/software/fftw/lib ./.libs/libgmxpreprocess.so
> /home/hzj1000/gromacs-4.5.5/src/mdlib/.libs/libmd.so ../mdlib/.libs/libmd.so
> /home/hzj1000/gromacs-4.5.5/src/gmxlib/.libs/libgmx.so
> ../gmxlib/.libs/libgmx.so -ldl -lnsl -lm  -Wl,--rpath
> -Wl,/home/hzj1000/software/GMX/gromacs4.5.5/lib
> /home/hzj1000/gromacs-4.5.5/src/gmxlib/.libs/libgmx.so: undefined reference
> to `pthread_setaffinity_np'
> collect2: ld returned 1 exit status
> make[3]: *** [grompp] Error 1
> make[3]: Leaving directory `/home/hzj1000/gromacs-4.5.5/src/kernel'
> make[2]: *** [all-recursive] Error 1
> make[2]: Leaving directory `/home/hzj1000/gromacs-4.5.5/src'
> make[1]: *** [all] Error 2
> make[1]: Leaving directory `/home/hzj1000/gromacs-4.5.5/src'
>
>
>
> This shouldn't happen. What compiler and hardware? configure or CMake? What
> command lines?
>

Please also add: Which Unix? Is it Linux? Distribution? What libc version?

Roland


> Mark
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] extract subset from cpt?

2011-08-18 Thread Roland Schulz
On Thu, Aug 18, 2011 at 3:02 PM, Justin A. Lemkul  wrote:

>
>
> Roland Schulz wrote:
> > Hi,
> >
> > you can use trjconv for that (or editconf). You probably would want to
> > add some waters though.
> >
>
> For trjconv, you can get a coordinate file, but not a .cpt; editconf throws
> a
> fatal error (unless I'm using it wrong, but I tried -f state.cpt).  If you
> try
> to output a .cpt from trjconv, it gives you file.cpt.xtc.  Is there some
> other
> way to preserve a state that's not given in the documentation?  You can
> write a
> .trr file, but that only gives you velocities.
>

Yes, one can only get coordinates and velocities. But as you said, it
doesn't make sense to get the full state.

Roland


>
> -Justin
>
> > Roland
> >
> > On Thu, Aug 18, 2011 at 2:50 PM, Peter C. Lai  > <mailto:p...@uab.edu>> wrote:
> >
> > Is there anyway I can extract a subset of atoms from a cpt file,
> > like I can
> > with trjconv operating on a traj file? I want to remove a ligand and
> > still
> > keep all the remainder of the state information, so I can feed this
> back
> > into grompp with a modified topology and "continue" a run without
> > the ligand.
> >
> > --
> > ==
> > Peter C. Lai| University of Alabama-Birmingham
> > Programmer/Analyst  | BEC 257
> > Genetics, Div. of Research  | 1150 10th Avenue South
> > p...@uab.edu <mailto:p...@uab.edu> | Birmingham AL
> > 35294-4461
> > (205) 690-0808 |
> > ==
> >
> > --
> > gmx-users mailing listgmx-users@gromacs.org
> > <mailto:gmx-users@gromacs.org>
> > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> > Please don't post (un)subscribe requests to the list. Use the
> > www interface or send it to gmx-users-requ...@gromacs.org
> > <mailto:gmx-users-requ...@gromacs.org>.
> > Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> >
> >
> >
> >
> > --
> > ORNL/UT Center for Molecular Biophysics cmb.ornl.gov <
> http://cmb.ornl.gov>
> > 865-241-1537, ORNL PO BOX 2008 MS6309
> >
>
> --
> 
>
> Justin A. Lemkul
> Ph.D. Candidate
> ICTAS Doctoral Scholar
> MILES-IGERT Trainee
> Department of Biochemistry
> Virginia Tech
> Blacksburg, VA
> jalemkul[at]vt.edu | (540) 231-9080
> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
>
> 
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] extract subset from cpt?

2011-08-18 Thread Roland Schulz
Hi,

you can use trjconv for that (or editconf). You probably would want to add
some waters though.
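A minimal sketch with trjconv (file names hypothetical; select the group
without the ligand when prompted, or pass an index file with -n):

trjconv -s topol.tpr -f state.cpt -o stripped.gro -vel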

Roland

On Thu, Aug 18, 2011 at 2:50 PM, Peter C. Lai  wrote:

> Is there anyway I can extract a subset of atoms from a cpt file, like I can
> with trjconv operating on a traj file? I want to remove a ligand and still
> keep all the remainder of the state information, so I can feed this back
> into grompp with a modified topology and "continue" a run without the
> ligand.
>
> --
> ==
> Peter C. Lai| University of Alabama-Birmingham
> Programmer/Analyst  | BEC 257
> Genetics, Div. of Research  | 1150 10th Avenue South
> p...@uab.edu | Birmingham AL 35294-4461
> (205) 690-0808|
> ==
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] git release-4-5-patches behind proxy

2011-08-09 Thread Roland Schulz
Hi,

please try "git pull" (for repo.or.cz) and than redo the checkout. I
accidentally deleted the release-4-5-patches branch. But it is back and it
should work. If it does not work please run "git branch -r" and post the
result.
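I.e. (a sketch):

cd gromacs      # the clone from repo.or.cz
git pull
git branch -r   # origin/release-4-5-patches should be listed again
git checkout --track -b release-4-5-patches origin/release-4-5-patches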

BTW: The error message you get for git.gromacs.org suggests that you are
behind a firewall which doesn't permit this outgoing connection. Thus in
case you want to use the main git server directly you need to use a proxy
server.

Roland

On Tue, Aug 9, 2011 at 10:03 AM, César Ávila  wrote:

>  I want to calculate secondary structure propensities of residues.
>
> Following Mark's message on thread
> Re: [gmx-users] secondary structure propensities of residues
>
>
> I would like to get the release-4.5-patches from git (in order to get
> do_dssp.c). Nevertheless I found problems while trying to fetch it from git
> repository due to the proxy. (I followed the advices on
> http://www.gromacs.org/Developer_Zone/Git/Git_Tutorial#Git_behind_a_proxy
>
> $ git clone git://git.gromacs.org/gromacs.git
> Initialized empty Git repository in ~/gromacs/.git/
> 2011/08/09 10:37:46 socat[17264] E CONNECT git.gromacs.org:9418: Forbidden
> fatal: The remote end hung up unexpectedly
>
> I tried the alternative suggested by Roland
>
> git clone  http://repo.or.cz/r/gromacs.git
> Initialized empty Git repository in ~/gromacs/.git/
>
> The gromacs directory is created and populated. After this I ran
>
> $git branch
> * master
>
> $git checkout --track -b release-4-5-patches origin/release-4-5-patches
> fatal: git checkout: updating paths is incompatible with switching
> branches.
> Did you intend to checkout 'origin/release-4-5-patches' which can not be
> resolved as commit?
>
> Is this a problem of the server I am using or is it just a misuse of git?
>
> Thanks
>
>
>
>
>
>
>
>
> 2011/3/22 Roland Schulz 
>
>> you can clone from http://repo.or.cz/r/gromacs.git if you have problems
>> with a proxy.
>>
>>   On Mon, Mar 21, 2011 at 10:54 PM, Alif M Latif wrote:
>>
>>>Dear Gromacs Developers,
>>>
>>> Just dropping by to say THANK YOU for v4.5.4, surely is updated version
>>> of 4.5.3 with all the bugfixes yes?. I'm having problem getting past my
>>> university's proxy server to use git. Now hopefully I can continue my
>>> research and solve git problems later..Your hard work is greatly
>>> appreciated!
>>>
>>> MUHAMMAD ALIF MOHAMMAD LATIF
>>> Laboratory of Theoretical and Computational Chemistry
>>> Department of Chemistry
>>> Faculty of Science
>>> Universiti Putra Malaysia
>>> 43400 UPM Serdang, Selangor
>>> MALAYSIA
>>>
>>>
>>>
>>>  --
>>> gmx-users mailing listgmx-users@gromacs.org
>>> http://lists.gromacs.org/mailman/listinfo/gmx-users
>>> Please search the archive at
>>> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
>>> Please don't post (un)subscribe requests to the list. Use the
>>> www interface or send it to gmx-users-requ...@gromacs.org.
>>> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>>>
>>
>>
>>
>> --
>> ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
>> 865-241-1537, ORNL PO BOX 2008 MS6309
>>
>> --
>> gmx-users mailing listgmx-users@gromacs.org
>> http://lists.gromacs.org/mailman/listinfo/gmx-users
>> Please search the archive at
>> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
>> Please don't post (un)subscribe requests to the list. Use the
>> www interface or send it to gmx-users-requ...@gromacs.org.
>> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] is lincs used with virtual hydrogens?

2011-06-18 Thread Roland Schulz
My understanding is that the v-site algorithm is used for the virtual sites
and LINCS is used for bonds not involving v-sites (and also angles if you
choose constraints=angles).
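To check what you actually got, something like this works (a sketch; the
exact labels in the dump can differ between versions):

grep -n "virtual_sites" topol.top        # v-site sections written by pdb2gmx
gmxdump -s topol.tpr | grep -i constr    # bonds turned into constraints by grompp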

On Sat, Jun 18, 2011 at 4:40 PM, chris.ne...@utoronto.ca <
chris.ne...@utoronto.ca> wrote:

> Thank you Roland.
>
> I did use:
>
> constraints = all-bonds
> lincs-iter =  1
> lincs-order =  6
> constraint_algorithm =  lincs
>
>  From looking at the manual, I figured that angle and bond constraints
> would all be done by LINCS if I had done (A):
>
> pdb2gmx -vsite none
> constraints = h-angles
> (a combination that I have never tried)
>
> But when I use (B):
>
> pdb2gmx -vsite hydrogen
> constraints = all-bonds
>
> It seems possible to me that LINCS is not used but instead the
> position of the atom is simply built from a mathematical function.
> Perhaps this all stems from my lack of thorough understanding of
> LINCS, but it seems to me that there need be no iteration to simply
> place an atom based on virtual_sites3 (which are constructed by
> pdb2gmx -vsite hydrogens)
>
> For now, I'll simply add a line to state that I built virtual sites
> for hydrogen atoms to make it clear, but I'd still like to understand
> the difference between options A and B, above, if you have some time.
>
> Thank you again,
> Chris.
>
>
>
>
>
> On Sat, Jun 18, 2011 at 4:13 PM, chris.neale at utoronto.ca <
> chris.neale at utoronto.ca> wrote:
>
> > Dear Users:
> >
> > If I create the topology of a peptide like this:
> >
> > pdb2gmx -f protein.gro -vsite hydrogens
> >
> > And then simulate it in vacuum, is lincs used at all? I believe that
> > it is, as if I use a timestep that is too large then I get LINCS
> > warnings about angles rotating more than 30 degrees, but that warning
> > message could possibly have been written with the assumption that I
> > used LINCS and not virtual hydrogens.
> >
> Probably. To make sure, check the constraint-algorithm selected in your mdp.
> BTW: If you want to use large timesteps you should normally use
> constraints=all-bonds and lincs-order=6.
>
>
> >
> > Finally, is there a method that needs to be named or cited in relation
> > to the fact that the angles are now constrained? Is that also done
> > with P-LINCS?
> >
> This is also done with P-LINCS. Not sure whether one should cite something
> regarding the construction/usage of v-sites.
>
> Roland
>
>
> >
> > Thank you,
> > Chris.
>
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] is lincs used with virtual hydrogens?

2011-06-18 Thread Roland Schulz
On Sat, Jun 18, 2011 at 4:13 PM, chris.ne...@utoronto.ca <
chris.ne...@utoronto.ca> wrote:

> Dear Users:
>
> If I create the topology of a peptide like this:
>
> pdb2gmx -f protein.gro -vsite hydrogens
>
> And then simulate it in vacuum, is lincs used at all? I believe that
> it is, as if I use a timestep that is too large then I get LINCS
> warnings about angles rotating more than 30 degrees, but that warning
> message could possibly have been written with the assumption that I
> used LINCS and not virtual hydrogens.
>
Probably. To make sure, check the constraint-algorithm selected in your mdp.
BTW: If you want to use large timesteps you should normally use
constraints=all-bonds and lincs-order=6.


>
> Finally, is there a method that needs to be named or cited in relation
> to the fact that the angles are now constrained? Is that also done
> with P-LINCS?
>
This is also done with P-LINCS. Not sure whether one should cite something
regarding the construction/usage of v-sites.

Roland


>
> Thank you,
> Chris.
>
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] gromacs 2.1

2011-06-12 Thread Roland Schulz
Hi,

you can get it from git. "git log {file}" shows the history and git show
{version}:{file} gives you a specific version.
In this case you can use:
git show 74b20ce9:src/tools/g_order.c
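For example, to browse the history and save that revision locally:

git log --oneline -- src/tools/g_order.c
git show 74b20ce9:src/tools/g_order.c > g_order-old.c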

Roland

On Sat, Jun 11, 2011 at 10:11 PM,  wrote:

> Dear users:
>
> Does anybody have the source code for gromacs version 2.1? I would like to
> check the original source of g_order, but only versions 3.0 and above are
> available on the gromacs website.
>
> Thank you,
> Chris.
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use thewww interface
> or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] Why does the -append option exist?

2011-06-08 Thread Roland Schulz
Hi,

yes that helps a lot. One more question. What filesystem on hopper 2 are you
using for this test (home, scratch or proj, to see if it is Lustre or GPFS)
? And are you running the test on the login node or on the compute node?
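(A quick way to check, assuming a Linux shell on the machine in question,
run in the directory the output files are written to:

df -T .            # prints the file-system type, e.g. lustre or gpfs
stat -f -c %T .    # the same information from GNU stat

)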

Thanks
Roland

On Wed, Jun 8, 2011 at 1:17 PM, Dimitar Pachov  wrote:

> Hello,
>
> On Wed, Jun 8, 2011 at 4:21 AM, Sander Pronk  wrote:
>
>> Hi Dimitar,
>>
>> Thanks for the bug report. Would you mind trying the test program I
>> attached on the same file system that you get the truncated files on?
>>
>> compile it with gcc testje.c -o testio
>>
>
> Yes, but no problem:
>
> 
> [dpachov@login-0-0 NEWTEST]$ ./testio
> TEST PASSED: ftell gives: 46
> 
>
> As for the other questions:
>
> HPC OS version:
> 
> [dpachov@login-0-0 NEWTEST]$ uname -a
> Linux login-0-0.local 2.6.18-194.17.1.el5xen #1 SMP Mon Sep 20 07:20:39 EDT
> 2010 x86_64 x86_64 x86_64 GNU/Linux
> [dpachov@login-0-0 NEWTEST]$ cat /etc/redhat-release
> Red Hat Enterprise Linux Server release 5.2 (Tikanga)
> 
>
> GROMACS 4.5.4 built:
> 
> module purge
> module load INTEL/intel-12.0
> module load OPENMPI/1.4.3_INTEL_12.0
> module load FFTW/2.1.5-INTEL_12.0 # not needed
>
> #
> # GROMACS settings
>
> export CC=mpicc
> export F77=mpif77
> export CXX=mpic++
> export FC=mpif90
> export F90=mpif90
>
> make distclean
>
> echo "XXX building single prec XX"
>
> ./configure
> --prefix=/home/dpachov/mymodules/GROMACS/EXEC/4.5.4-INTEL_12.0/SINGLE \
> --enable-mpi \
>  --enable-shared \
> --program-prefix="" --program-suffix="" \
> --enable-float --disable-fortran \
> --with-fft=mkl \
> --with-external-blas \
> --with-external-lapack \
> --with-gsl \
> --without-x \
> CFLAGS="-O3 -funroll-all-loops" \
> FFLAGS="-O3 -funroll-all-loops" \
> CPPFLAGS="-I${MPI_INCLUDE} -I${MKL_INCLUDE} " \
> LDFLAGS="-L${MPI_LIB} -L${MKL_LIB} -lmkl_intel_lp64 -lmkl_core
> -lmkl_intel_thread -liomp5 "
>
> make -j 8 && make install
> 
>
> Just did the same test on Hopper 2:
> http://www.nersc.gov/users/computational-systems/hopper/
>
> with their built GROMACS 4.5.3 (gromacs/4.5.3(default)), and the result was
> the same as reported earlier. You could do the test there as well, if you
> have access, and see what you would get.
>
> Hope that helps a bit.
>
> Thanks,
> Dimitar
>
>
>
>
>
>>
>> Sander
>>
>>
>>
>>
>>
>> On Jun 7, 2011, at 23:21 , Dimitar Pachov wrote:
>>
>> Hello,
>>
>> Just a quick update after a few shorts tests we (my colleague and I)
>> quickly did. First, using
>>
>> "*You can emulate this yourself by calling "sleep 10s" before mdrun and
>> see if that's long enough to solve the latency issue in your case.*"
>>
>> doesn't work for a few reasons, mainly because it doesn't seem to be a
>> latency issue, but also because the load on a node is not affected by
>> "sleep".
>>
>> However, you can reproduce the behavior I have observed pretty easily. It
>> seems to be related to the values of the pointers to the *xtc, *trr, *edr,
>> etc files written at the end of the checkpoint file after abrupt crashes AND
>> to the frequency of access (opening) to those files. How to test:
>>
>> 1. In your input *mdp file put a high frequency of saving coordinates to,
>> say, the *xtc (10, for example) and a low frequency for the *trr file
>> (10,000, for example).
>> 2. Run GROMACS (mdrun -s run.tpr -v -cpi -deffnm run)
>> 3. Kill abruptly the run shortly after that (say, after 10-100 steps).
>> 4. You should have a few frames written in the *xtc file, and the only one
>> (the first) in the *trr file. The *cpt file should have different from zero
>> values for "file_offset_low" for all of these files (the pointers have been
>> updated).
>>
>> 5. Restart GROMACS (mdrun -s run.tpr -v -cpi -deffnm run).
>> 6. Kill abruptly the run shortly after that (say, after 10-100 steps). Pay
>> attention that the frequency for accessing/writing the *trr has not been
>> reached.
>> 7. You should have a few additional frames written in the *xtc file, while
>> the *trr will still have only 1 frame (the first). The *cpt file now has
>> updated all pointer values "file_offset_low", BUT the pointer to the *trr
>> has acquired a value of 0. Obviously, we already now what will happen if we
>> restart again from this last *cpt file.
>>
>> 8. Restart GROMACS (mdrun -s run.tpr -v -cpi -deffnm run).
>> 9. Kill it.
>> 10. File *trr has size zero.

Re: [gmx-users] Why does the -append option exist?

2011-06-07 Thread Roland Schulz
On Tue, Jun 7, 2011 at 5:21 PM, Dimitar Pachov  wrote:

> Hello,
>
> Just a quick update after a few shorts tests we (my colleague and I)
> quickly did. First, using
>
> "*You can emulate this yourself by calling "sleep 10s" before mdrun and
> see if that's long enough to solve the latency issue in your case.*"
>
> doesn't work for a few reasons, mainly because it doesn't seem to be a
> latency issue, but also because the load on a node is not affected by
> "sleep".
>
> However, you can reproduce the behavior I have observed pretty easily. It
> seems to be related to the values of the pointers to the *xtc, *trr, *edr,
> etc files written at the end of the checkpoint file after abrupt crashes AND
> to the frequency of access (opening) to those files. How to test:
>
> 1. In your input *mdp file put a high frequency of saving coordinates to,
> say, the *xtc (10, for example) and a low frequency for the *trr file
> (10,000, for example).
> 2. Run GROMACS (mdrun -s run.tpr -v -cpi -deffnm run)
> 3. Kill abruptly the run shortly after that (say, after 10-100 steps).
> 4. You should have a few frames written in the *xtc file, and the only one
> (the first) in the *trr file. The *cpt file should have different from zero
> values for "file_offset_low" for all of these files (the pointers have been
> updated).
>
> 5. Restart GROMACS (mdrun -s run.tpr -v -cpi -deffnm run).
> 6. Kill abruptly the run shortly after that (say, after 10-100 steps). Pay
> attention that the frequency for accessing/writing the *trr has not been
> reached.
> 7. You should have a few additional frames written in the *xtc file, while
> the *trr will still have only 1 frame (the first). The *cpt file now has
> updated all pointer values "file_offset_low", BUT the pointer to the *trr
> has acquired a value of 0. Obviously, we already know what will happen if we
> restart again from this last *cpt file.
>
> 8. Restart GROMACS (mdrun -s run.tpr -v -cpi -deffnm run).
> 9. Kill it.
> 10. File *trr has size zero.
>
>
> Therefore, if a run is killed before the files are accessed for writing
> (depending on the chosen frequency), the file offset values reported in the
> *cpt file don't seem to be updated accordingly, and hence a new restart
> inevitably leads to overwritten output files.
>
> Do you think this is fixable?
>

Thanks a lot for searching for a reproducible case.

What file-system and operating system are you using? If it is a
network file-system: can you reproduce it on a non-network file-system? If
not, what is the OS on the client and server, and what is the network
file-system and the underlying file-system on the server?

Thanks
Roland

>
>
>
>
>
> On Sun, Jun 5, 2011 at 6:20 PM, Roland Schulz  wrote:
>
>> Two comments about the discussion:
>>
>> 1) I agree that buffered output (Kernel buffers - not application buffers)
>> should not affect I/O. If it does, it should be filed as a bug to the OS. Maybe
>> someone can write a short test application which tries to reproduce this
>> idea: write to a file from one node and, immediately after the test program
>> is killed on that node, write to it from some other node.
>>
>> 2) We lock files but only the log file. The idea is that we only need
>> to guarantee that the set of files is only accessed by one application. This
>> seems safe, but if someone sees a way the trajectory could be opened
>> without the log file being opened, please file a bug.
>>
>> Roland
>>
>> On Sun, Jun 5, 2011 at 10:13 AM, Mark Abraham wrote:
>>
>>>  On 5/06/2011 11:08 PM, Francesco Oteri wrote:
>>>
>>> Dear Dimitar,
>>> I'm following the debate regarding:
>>>
>>>
>>>The point was not "why" I was getting the restarts, but the fact
>>> itself that I was getting restarts close in time, as I stated in my first
>>> post. I actually also don't know whether jobs are deleted or suspended. I've
>>> thought that a job returned back to the queue will basically start from the
>>> beginning when later moved to an empty slot ... so don't understand the
>>> difference from that perspective.
>>>
>>>
>>>  In the second mail you say:
>>>
>>>  Submitted by:
>>> 
>>> ii=1
>>> ifmpi="mpirun -np $NSLOTS"
>>> 
>>>if [ ! -f run${ii}-i.tpr ];then
>>>cp run${ii}.tpr run${ii}-i.tpr
>>>   tpbconv -s run${ii}-i.tpr -until 20 -o run${ii}.tpr
>>>fi
>>>
> k=`ls md-${ii}*.out | wc -l`

Re: [gmx-users] Why does the -append option exist?

2011-06-05 Thread Roland Schulz
Two comments about the discussion:

1) I agree that buffered output (Kernel buffers - not application buffers)
should not affect I/O. If it does, it should be filed as a bug to the OS. Maybe
someone can write a short test application which tries to reproduce this
idea: write to a file from one node and, immediately after the test program
is killed on that node, write to it from some other node.

2) We lock files but only the log file. The idea is that we only need
to guarantee that the set of files is only accessed by one application. This
seems safe, but if someone sees a way the trajectory could be opened
without the log file being opened, please file a bug.

Roland

On Sun, Jun 5, 2011 at 10:13 AM, Mark Abraham wrote:

>  On 5/06/2011 11:08 PM, Francesco Oteri wrote:
>
> Dear Dimitar,
> I'm following the debate regarding:
>
>
>The point was not "why" I was getting the restarts, but the fact itself
> that I was getting restarts close in time, as I stated in my first post. I
> actually also don't know whether jobs are deleted or suspended. I've thought
> that a job returned back to the queue will basically start from the
> beginning when later moved to an empty slot ... so don't understand the
> difference from that perspective.
>
>
> In the second mail you say:
>
>  Submitted by:
> 
> ii=1
> ifmpi="mpirun -np $NSLOTS"
> 
>if [ ! -f run${ii}-i.tpr ];then
>cp run${ii}.tpr run${ii}-i.tpr
>   tpbconv -s run${ii}-i.tpr -until 20 -o run${ii}.tpr
>fi
>
> k=`ls md-${ii}*.out | wc -l`
>outfile="md-${ii}-$k.out"
>if [[ -f run${ii}.cpt ]]; then
>
>   * $ifmpi `which mdrun` *-s run${ii}.tpr -cpi run${ii}.cpt -v -deffnm
> run${ii} -npme 0 > $outfile  2>&1
>
> fi
>  =
>
>
> If I understand well, you are submitting the SERIAL mdrun. This means that
> multiple instances of mdrun are running at the same time.
> Each instance of mdrun is an INDEPENDENT instance. Therefore checkpoint
> files, one for each instance (i.e. one for each CPU),  are written at the
> same time.
>
>
> Good thought, but Dimitar's stdout excerpts from early in the thread do
> indicate the presence of multiple execution threads. Dynamic load balancing
> gets turned on, and the DD is 4x2x1 for his 8 processors. Conventionally,
> and by default in the installation process, the MPI-enabled binaries get an
> "_mpi" suffix, but it isn't enforced - or enforceable :-)
>
> Mark
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] writing trajectory with water molecules within a distance from protein

2011-05-10 Thread Roland Schulz
On Mon, May 9, 2011 at 6:28 PM, Mark Abraham wrote:

> On 10/05/2011 12:50 AM, maria goranovic wrote:
>
>> Dear experts
>>
>> I have a protein simulation in a water box. I now want to write a
>> trajectory containing only the protein, and water molecules within 5
>> Angstroms of the protein, with the water list being updated each time step.
>> How can one do this? Appreciate the help
>>
>
> g_select is useful for "dynamic" selections of this type. g_select -select
> "help" can give examples and such.
>
> I'd hope it's been designed so that then using trjconv to extract such
> selections works, but I can't think how, having not ever tried.
>
g_select writes out one index group per time frame. But trjconv can't use a
different index group for each frame. Thus it can't be used to write out a
trajectory with those atoms for each frame. Part of the problem is that the
trajectory format doesn't support a different number of atoms in different
frames.
What is possible is writing a small script around trjconv to produce one
gro/trr file per frame with only those atoms.
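Roughly like this (a sketch; the selection syntax should be checked against
your version, and the per-frame group handling is the part that needs
scripting):

g_select -s topol.tpr -f traj.xtc -on shell.ndx \
  -select 'group "Protein" or (resname SOL and within 0.5 of group "Protein")'
# then call trjconv once per frame (-b/-e) with the matching index
# group from shell.ndx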

Roland


> Mark
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the www interface
> or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] writing trajectory with water molecules within a distance from protein

2011-05-09 Thread Roland Schulz
Hi,

there is no tool to do that. Trajectories are assumed to have the same number
of atoms per frame. What you can do is use trjorder (it gives you the water
sorted by distance and the number of waters within .5nm) or g_select (it can
give you an index file with the atoms within .5nm for each frame).
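For the trjorder route, something like this (a sketch; check the -r and
-nshell options in your version):

trjorder -s topol.tpr -f traj.xtc -o ordered.xtc -r 0.5 -nshell nshell.xvg

It asks for a reference group and the group of molecules to order, and
writes the waters sorted by distance plus the number within 0.5 nm per frame.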

Roland

On Mon, May 9, 2011 at 10:50 AM, maria goranovic
wrote:

> Dear experts
>
> I have a protein simulation in a water box. I now want to write a
> trajectory containing only the protein, and water molecules within 5
> Angstroms of the protein, with the water list being updated each time step.
> How can one do this? Appreciate the help
>
> --
> Maria G.
> Technical University of Denmark
> Copenhagen
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] Benchmarking gromacs over large number of cores

2011-04-28 Thread Roland Schulz
On Thu, Apr 28, 2011 at 10:05 AM, Bruno Monnet  wrote:

>  Hi,
>
> I'm not really a Gromacs user, but I'm currently benchmarking Gromacs 4.5.4
> on a large cluster. It seems that my communication (PME) is really high and
> gromacs keeps complaining for more PME nodes :
>
>Average load imbalance: 113.6 %
>  Part of the total run time spent waiting due to load imbalance: 3.3 %
>  Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 9
> % Y 9 % Z 9 %
>  Average PME mesh/force load: 3.288
>  Part of the total run time spent waiting due to PP/PME imbalance: 32.6 %
>
> NOTE: 32.6 % performance was lost because the PME nodes
>   had more work to do than the PP nodes.
>   You might want to increase the number of PME nodes
>   or increase the cut-off and the grid spacing.
>
>
> I can't modify the original dataset as I only have the TPR file. I switched
> from dlb yes -> dlb auto since it seems to have trouble with more than 6000
> / 8000 cores.
>
You can set the number of PME nodes with -npme. All other nodes are used for
the particle-particle computations (PP). There is also a tool called
g_tune_pme which optimizes it automatically for you.

On that many cores you might see a significant speed-up by using the
prerelease version of GROMACS 4.6. You can obtain that from git using the
branch "threading". It uses threading for the PME nodes. The number of
threads used by the PME nodes is set with the environment variable
GMX_PME_NTHREADS. The total number of cores should be equal to the number of
PP nodes + GMX_PME_NTHREADS * number of PME nodes. Let me know if you try it
- I would be interested in feedback.

I tried to add " -gcom " parameter. This speedup the computation. This
> parameter is not really explained in the Gromacs documentation. Could you
> give me some advice on how I could use it ?
>
It defines how often e.g. the total energy is computed, which is important
for pressure and temperature coupling. As long as you don't set it higher
than 10 it shouldn't affect the accuracy significantly. But it does affect
it minimally, thus you should document that in your results.
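Put together, something like this (a sketch; the core counts are
placeholders):

g_tune_pme -np 6144 -s bench.tpr          # finds a good -npme setting for you
mpirun -np 6144 mdrun_mpi -s bench.tpr -npme 1536 -gcom 10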

Roland


>
> Best regards,
> Bruno Monnet
>
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] Help: Gromacs Installation

2011-04-27 Thread Roland Schulz
This seems to be a problem with your MPI library. Test whether other MPI
programs have the same problem. If it is not GROMACS specific, please ask on
the mailing list of your MPI library. If it only happens with GROMACS, be
more specific about what your setup is (what MPI library, what hardware,
...).

Also you could use the latest GROMACS 4.5.x. It has built-in thread support
and doesn't need MPI as long as you only run on n cores within one SMP node.
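For example (a sketch; the tpr path is a placeholder):

mpirun -np 4 hostname       # does plain MPI start 4 ranks at all?
mdrun -nt 4 -s topol.tpr    # 4.5.x thread build, no MPI library needed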

Roland

On Wed, Apr 27, 2011 at 2:13 PM, Hrachya Astsatryan  wrote:

> Dear Mark Abraham & all,
>
> We used another benchmarking system, d.dppc, on 4 processors, but
> we have the same problem (1 proc uses about 100%, the others 0%).
> After for a while we receive the following error:
>
> Working directory is /localuser/armen/d.dppc
> Running on host wn1.ysu-cluster.grid.am
> Time is Fri Apr 22 13:55:47 AMST 2011
> Directory is /localuser/armen/d.dppc
> START
> Start: Fri Apr 22 13:55:47 AMST 2011
> p2_487:  p4_error: Timeout in establishing connection to remote process: 0
> rm_l_2_500: (301.160156) net_send: could not write to fd=5, errno = 32
> p2_487: (301.160156) net_send: could not write to fd=5, errno = 32
> p0_32738:  p4_error: net_recv read:  probable EOF on socket: 1
> p3_490: (301.160156) net_send: could not write to fd=6, errno = 104
> p3_490:  p4_error: net_send write: -1
> p3_490: (305.167969) net_send: could not write to fd=5, errno = 32
> p0_32738: (305.371094) net_send: could not write to fd=4, errno = 32
> p1_483:  p4_error: net_recv read:  probable EOF on socket: 1
> rm_l_1_499: (305.167969) net_send: could not write to fd=5, errno = 32
> p1_483: (311.171875) net_send: could not write to fd=5, errno = 32
> Fri Apr 22 14:00:59 AMST 2011
> End: Fri Apr 22 14:00:59 AMST 2011
> END
>
> We tried new version of Gromacs, but receive the same error.
> Please, help us to overcome the problem.
>
>
> With regards,
> Hrach
>
>
> On 4/22/11 1:41 PM, Mark Abraham wrote:
>
>> On 4/22/2011 5:40 PM, Hrachya Astsatryan wrote:
>>
>>> Dear all,
>>>
>>> I would like to inform you that I have installed the gromacs4.0.7 package
>>> on the cluster (nodes of the cluster are 8 core Intel, OS: RHEL4 Scientific
>>> Linux) with the following steps:
>>>
>>> yum install fftw3 fftw3-devel
>>> ./configure --prefix=/localuser/armen/gromacs --enable-mpi
>>>
>>> Also I have downloaded gmxbench-3.0 package and try to run d.villin to
>>> test it.
>>>
>>> Unfortunately it worked fine only for np of 1,2,3; if I use more than 3 procs I
>>> receive low CPU balancing and the process is hanging.
>>>
>>> Could you, please, help me to overcome the problem?
>>>
>>
>> Probably you have only four physical cores (hyperthreading is not normally
>> useful), or your MPI is configured to use only four cores, or these
>> benchmarks are too small to scale usefully.
>>
>> Choosing to do a new installation of a GROMACS version that is several
>> years old is normally less productive than the latest version.
>>
>> Mark
>>
>>
>>
>>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the www interface
> or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] Load imbalance vs accuracy

2011-04-25 Thread Roland Schulz
Hi,

load imbalance is only a performance issue.

Load balancing affects binary reproducibility (meaning the exact same value)
but not accuracy (results can differ, but only within the same accuracy).
Binary reproducibility should only be important for testing purposes, thus
you should use load balancing to improve the performance if you have
a significant imbalance.
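For example (a sketch):

mdrun -reprod -dlb no -s topol.tpr    # binary-reproducible test run
mdrun -dlb auto -s topol.tpr          # normal production run, balanced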

Roland

On Mon, Apr 25, 2011 at 2:17 PM, Sikandar Mashayak wrote:

> Hi
>
> Is load imbalance during mdrun only a performance issue or it affects
> accuracy of run as well?
>
> thanks
> sikandar
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

[gmx-users] diverging temperature with pressure coupling

2011-04-17 Thread Roland Schulz
Forwarding this email from my group colleague:


Dear Gromacs users,



I am trying to simulate a cellulose fiber in an ionic liquid solution in the
NPT ensemble.  During the simulation, the entire system is coupled to a
thermostat.  Yet, I observe an inhomogeneous temperature distribution
throughout my system (hot-solvent/cold-solute) when I use Parrinello-Rahman
pressure coupling but NOT when I employ Berendsen pressure coupling.  I have
tested velocity-rescaling and the Nose-Hoover scheme to keep the temperature
constant and in both cases Parrinello-Rahman pressure coupling seems to
cause the solute’s temperature to become significantly lower than the
solvent’s (to decompose temperatures, I am using “mdrun -rerun” with a run
input that defines tc_grps separately).



I was wondering whether there were any known algorithmic reasons for this
unphysical temperature gradient when using Parrinello-Rahman pressure
coupling.

Thank you.

Barmak


Comment from me: The effect is large. The ionic liquid is 5 degrees higher
and the cellulose is 50 degrees lower (after 50ps, after that it stays
constant). With Berendsen pressure both parts fluctuate around the same
target temperature (as one would expect). Any reason why one doesn't get the
correct temperature with rerun? Or is there a better way to get the
temperature for different groups (for a simulation with just one tc-group)?
Any reason why Parrinello-Rahman pressure coupling would have this effect on
the temperature?
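(The decomposition was done roughly like this; a sketch, where rerun.mdp is
the original mdp plus tc_grps set separately for solute and solvent:

grompp -f rerun.mdp -c conf.gro -p topol.top -o rerun.tpr
mdrun -s rerun.tpr -rerun traj.trr -e rerun.edr
g_energy -f rerun.edr     # select the T-<group> terms

)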


Roland

-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] gromacs 4.5.4 analysis tools

2011-04-03 Thread Roland Schulz
Hi,

depends on the tool. If the tool just needs the coordinates or just
velocities (dcd/vel) you don't need to convert anything because the tools
can use the VMD plugins to read dcd. If the tool needs a tpr file (e.g. for
charges) then you need to create one. In most cases the easiest solution to
do this is to use pdb2gmx. If you have a non-standard molecule for which no
rtp exists you could also use psfgen+top (http://www.benlabs.net/psfgen+top/
).
If the tool needs coordinates and velocities, let me know and I'll think of
something.
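For example (a sketch; this assumes VMD is installed so the molfile plugins
can be found, and any minimal mdp will do for grompp):

pdb2gmx -f protein.pdb -o conf.gro -p topol.top
grompp -f minimal.mdp -c conf.gro -p topol.top -o topol.tpr
g_rms -s topol.tpr -f output.dcd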

Roland

On Sun, Apr 3, 2011 at 3:06 PM, Molecular Dynamics <
moleculardynam...@yahoo.com> wrote:

> Dear gmx users,
>
> I’m a NAMD user and want to use gromacs 4.5.4 analysis tools for my NAMD
> output files. I have some NAMD output files : output.coor , output.vel ,
> output.dcd (binary coordinate trajectory output file). Can I convert these
> output files into gromacs output files and use gromacs 4.5.4 analysis
> tools ? If it’s possible to do it, could you please explain this job ?
>
> Thanks in advanceM
>
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] Installtion of gromacs-4.5.3

2011-03-31 Thread Roland Schulz
Hi,

this question has been answered often. Please check the archive before
posting.

Roland

On Thu, Mar 31, 2011 at 8:01 PM, parichita parichita <
parichitamajum...@yahoo.co.in> wrote:

> Hi...
> I am trying to install gromacs-4.5.3 on an AMD Phenom II.
> I am following the installation protocol; first I installed the
> fftw-3.2.2,
> ./configure --enable-threads --enables-floats --enables-sse
> make
> make install
>
> then I am doing the "cmake".
> cd gromacs-4.5.3
> mkdir exec
> cd exec
> cmake ../
>
> After cmake the make -j 6 gives me an error
>
> "Linking C shared library libmd.so
> /usr/bin/ld: /usr/local/lib/libfftw3f.a(tensor.o): relocation R_X86_64_32
> against `.rodata.str1.1' can not be used when making a shared object;
> recompile with -fPIC
> /usr/local/lib/libfftw3f.a: could not read symbols: Bad value
> collect2: ld returned 1 exit status
> make[2]: *** [src/mdlib/libmd.so.6] Error 1
> make[1]: *** [src/mdlib/CMakeFiles/md.dir/all] Error 2
> make: *** [all] Error 2"
>
> please suggest me what I should do next...
>
> with regards
>
>
> Parichita Mazumder
> Research Fellow
> C/O Dr. Chaitali Mukhopadhayay
> Department of Chemistry
> University of Calcutta
> 92,A P C Road
> Kolkata-79
> India.
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] install problems cygwin

2011-03-22 Thread Roland Schulz
Hi,

try to move your files into your cygwin home directory before
compiling. If that doesn't work, try to first compile a hello world C
program to see why the compiler doesn't work.
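For example (a sketch):

cd ~
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { printf("hello\n"); return 0; }
EOF
gcc hello.c -o hello && ./hello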

Roland

On Tue, Mar 22, 2011 at 12:13 PM, vijaya subramanian
 wrote:
>
> Hi
> I am trying to install gromacs on a windows laptop with rhcygwin.  I am
> doing it as administrator
> and I ran into the same problem when I tried installing fftw-3.2.2.
>
> $ ./configure --enable-sse --enable-float
> checking for a BSD-compatible install... /usr/bin/install -c
> checking whether build environment is sane... yes
> checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
> checking for gawk... gawk
> checking whether make sets $(MAKE)... yes
> checking whether to enable maintainer-specific portions of Makefiles... no
> checking build system type... i686-pc-cygwin
> checking host system type... i686-pc-cygwin
> checking for gcc... gcc
> checking for C compiler default output file name...
> configure: error: in `/cygdrive/c/fftw-3.2.2':
> configure: error: C compiler cannot create executables
> See `config.log' for more details.
>
> The config.log file is attached.
> I read the config.log file but am unable to figure out what the problem is.
> It shows
> the right version of gcc installed and configure appears to find it.
>
>
> Thanks
> Vijaya
>
> --
> gmx-users mailing list    gmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] New maintenance release: gromacs-4.5.4

2011-03-22 Thread Roland Schulz
On Tue, Mar 22, 2011 at 10:26 AM, Ye MEI  wrote:
> Thank you for the new version of gromacs.
> But the compilation of gromacs failed on my computer. The commands are as 
> follows:
> make distclean
> export CC=icc
> export F77=ifort
> export CXX=icc
> export CFLAGS="-xS -I/apps/fftw3/include"
> export FFLAGS="-xS -I/apps/fftw3/include"
> export CXXFLAGS="-I/apps/fftw3/include"
> export LDFLAGS="-L/apps/fftw3/lib -lfftw3f"
> ./configure --prefix=/apps/gromacs4.5 --with-fft=fftw3 --with-x 
> --with-qmmm-gaussian
> make
>
> and the error message is
> icc  -shared  .libs/calcmu.o .libs/calcvir.o .libs/constr.o .libs/coupling.o 
> .libs/domdec.o .libs/domdec_box.o .libs/domdec_con.o .libs/domdec_network.o 
> .libs/domdec_setup.o .libs/domdec_top.o .libs/ebin.o .libs/edsam.o 
> .libs/ewald.o .libs/force.o .libs/forcerec.o .libs/ghat.o .libs/init.o 
> .libs/mdatom.o .libs/mdebin.o .libs/minimize.o .libs/mvxvf.o .libs/ns.o 
> .libs/nsgrid.o .libs/perf_est.o .libs/genborn.o .libs/genborn_sse2_single.o 
> .libs/genborn_sse2_double.o .libs/genborn_allvsall.o 
> .libs/genborn_allvsall_sse2_single.o .libs/genborn_allvsall_sse2_double.o 
> .libs/gmx_qhop_parm.o .libs/gmx_qhop_xml.o .libs/groupcoord.o .libs/pme.o 
> .libs/pme_pp.o .libs/pppm.o .libs/partdec.o .libs/pull.o .libs/pullutil.o 
> .libs/rf_util.o .libs/shakef.o .libs/sim_util.o .libs/shellfc.o .libs/stat.o 
> .libs/tables.o .libs/tgroup.o .libs/tpi.o .libs/update.o .libs/vcm.o 
> .libs/vsite.o .libs/wall.o .libs/wnblist.o .libs/csettle.o .libs/clincs.o 
> .libs/qmmm.o .libs/gmx_fft.o .libs/gmx_parallel_3dfft.o .libs/fft5d.o 
> .libs/gmx_wallcycle.o .libs/qm_gaussian.o .libs/qm_mopac.o .libs/qm_gamess.o 
> .libs/gmx_fft_fftw2.o .libs/gmx_fft_fftw3.o .libs/gmx_fft_fftpack.o 
> .libs/gmx_fft_mkl.o .libs/qm_orca.o .libs/mdebin_bar.o  -Wl,--rpath 
> -Wl,/home/ymei/gromacs-4.5.4/src/gmxlib/.libs -Wl,--rpath 
> -Wl,/apps/gromacs4.5/lib -lxml2 -L/apps/fftw3/lib /apps/fftw3/lib/libfftw3f.a 
> ../gmxlib/.libs/libgmx.so -lnsl  -pthread -Wl,-soname -Wl,libmd.so.6 -o 
> .libs/libmd.so.6.0.0
> ld: /apps/fftw3/lib/libfftw3f.a(problem.o): relocation R_X86_64_32 against `a 
> local symbol' can not be used when making a shared object; recompile with 
> -fPIC
> /apps/fftw3/lib/libfftw3f.a: could not read symbols: Bad value

Either recompile fftw with additional flag --with-pic or recompile
GROMACS with --disable-shared.
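I.e. for the fftw route (a sketch):

cd fftw-3.2.2
./configure --enable-float --with-pic
make && make install

and then re-run the GROMACS configure/make against the rebuilt library.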

Roland
>
> However, it works fine for gromacs 4.5.3. Can anyone help?
>
> Ye MEI
>
> 2011-03-22
>
>
>
> From: Rossen Apostolov
> Date: 2011-03-22  03:24:55
> To: Discussion list for GROMACS development; Discussion list for GROMACS 
> users; gmx-announce
> CC:
> Subject: [gmx-users] New maintenance release: gromacs-4.5.4
>
> Dear Gromacs community,
> A new maintenance release of Gromacs is available for download at
> ftp://ftp.gromacs.org/pub/gromacs/gromacs-4.5.4.tar.gz.
> Some notable updates in this release:
> * Fixed pdb2gmx picking up force field from local instead of library
> directory
> * Made pdb2gmx vsite generation work again for certain His namings.
> * Fixed incorrect virial and pressure averages with certain nst...
> values (instantaneous values correct)
> * Fixed incorrect cosine viscosity output
> * New -multidir alternative for mdrun -multi option
> * Several minor fixes in analysis tools
> * Several updates to the program documentation
> Big thanks to all developers and users!
> Happy simulating!
> Rossen
> --
> gmx-users mailing list    gmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> --
> gmx-users mailing list    gmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Thank You for 4.5.4

2011-03-21 Thread Roland Schulz
you can clone from http://repo.or.cz/r/gromacs.git if you have problems with
a proxy.

On Mon, Mar 21, 2011 at 10:54 PM, Alif M Latif  wrote:

>   Dear Gromacs Developers,
>
> Just dropping by to say THANK YOU for v4.5.4, surely is updated version of
> 4.5.3 with all the bugfixes yes?. I'm having problem getting past my
> university's proxy server to use git. Now hopefully I can continue my
> research and solve git problems later..Your hard work is greatly
> appreciated!
>
> MUHAMMAD ALIF MOHAMMAD LATIF
> Laboratory of Theoretical and Computational Chemistry
> Department of Chemistry
> Faculty of Science
> Universiti Putra Malaysia
> 43400 UPM Serdang, Selangor
> MALAYSIA
>
>
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] Gromacs on other Operating Systems

2011-03-21 Thread Roland Schulz
On Mon, Mar 21, 2011 at 9:09 PM, Justin A. Lemkul  wrote:
>
>
> Nancy wrote:
>>
>> Hi All,
>>
>> I've used Gromacs under Linux, and I'm wondering whether it can be
>> used under Windows 7 and/or Snow Leopard (10.6.6).
>
> Theoretically, Gromacs is compatible with any environment provided you have
> proper compilers and libraries, etc.  Mac operating systems almost always
> play nicely.  Windows can be a challenge.
Actually with Visual Studio Express and CMake it is free and not very
difficult at all.

Roland


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] namd in gromacs

2011-03-10 Thread Roland Schulz
Hi,

in most cases the easiest way is to regenerate the topology in
pdb2gmx. If that is not possible for some reason, you can use a
version of psfgen we developed in our lab:
http://www.benlabs.net/psfgen+top/Overview.html
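For a standard protein that is just (a sketch; the force-field choice is
yours):

pdb2gmx -f system.pdb -o conf.gro -p topol.top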

BTW: Please don't cross-post to both gmx-users and gmx-developers.

Roland

On Thu, Mar 10, 2011 at 9:28 AM, Francesco Oteri
 wrote:
> Dear gromacs users,
> I need to convert namd topology (.psf file) in gromacs format (.top).
> Any suggestion?
> --
> gmx-users mailing list    gmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the www interface
> or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] divergent energy minimization results from identical starting system

2011-01-28 Thread Roland Schulz
Hi,

are you running in parallel (either MPI or threads)? Load-balancing is one
reason for different rounding errors.
You can run with "mdrun -reprod" to avoid any different rounding between
runs and should in general get the same then.
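I.e. something like (a sketch; -nt needs a thread-enabled 4.5.x build):

mdrun -reprod -nt 1 -s em.tpr -deffnm em    # serial and reproducible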

Roland

On Fri, Jan 28, 2011 at 6:17 PM, Matthew Chan wrote:

> Hi,
>
> My second question is about the divergent energy minimization results which
> I have been receiving.
>
> I've taken the 1AKI lysozyme and prepared a single em.tpr file (1AKI is in
> vacuum). Afterwards I make 10 copies of the em.tpr file and use mdrun on
> each one. If I set the stopping condition to less than 1000kJ/mol nm, my
> potential energies and final structures from each run are not identical.
>
> I've tried both steepest descent and cg methods for minimization. I've also
> checked that the Fmax is indeed less than my stopping condition, and the
> potential energy is negative. Is this problem well documented or is there
> something wrong with my system? There seem to be a few parts of the manual
> that allude to the possibility of variance between subsequent runs of EM.
>
> If this is a well documented problem, can someone try explaining the cause
> to me please? I would like to learn more about this topic.
>
> Also, the potential energy value reported seems to be several orders of
> magnitude different from what other programs are reporting (24 000 vs 500).
> What units are it expressed in?
>
> Thanks in advance for your replies,
>
> --
> 
>
> Matthew Chan
>
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] can i run .tpr using Gromacs 4.5.3(has parallel) in cluster computer; the .tpr was created using gromacs 4.0.7 in my desktop (without parallel)

2011-01-26 Thread Roland Schulz
Yes. But it is better to use the same version. Otherwise you can't set all
parameters (e.g. nstcalcenergy).

Roland

2011/1/26 gromacs 

>
>
>
> Hi guys,
>
> I created my .tpr through my own desktop (without parallel) using gromacs
> 4.0.7.
>
> can i run .tpr  parallelly using Gromacs 4.5.3(has parallel) in cluster
> computer??
>
> I want creat all my .tpr files in my own computer, and then run on HPC
> (High performance computer).
>
> Thanks
>
>
>
>
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] trjconv -fit rot+trans before a pbc? safe way to rotate a trajectory?

2011-01-10 Thread Roland Schulz
2011/1/10 Camilo Andrés Jimenez Cruz 

> Hi list!
>
> it has been discussed in other posts the inconvenience of PBCing after
> rotating (see example:
> http://oldwww.gromacs.org/pipermail/gmx-developers/2009-November/003754.html),
> I just want to know what is the state of this problem and if somebody has
> found a satisfying solution.
>
> I also wonder if it has been considered the idea of doing PBC  *at the same
> time* that the rotation is made, In my view, this would fix the problem. no?
>
No. Doing both at the same time doesn't make sense: you need one reference
frame to do PBC, and that frame can't be rotated. In most cases (I give one
counterexample in the mail you linked) you can first apply PBC and then fit.
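
For the common case, a sketch of the two-step workflow (file names are
placeholders):

  trjconv -s ref.tpr -f traj.xtc -pbc mol -o traj_pbc.xtc
  trjconv -s ref.tpr -f traj_pbc.xtc -fit rot+trans -o traj_fit.xtc

The first pass removes jumps across the box while the box is still
meaningful; the second does the fit. After the rot+trans fit the box no
longer means much, so don't apply further PBC operations to the result.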

Roland

>
>

> Thanks!
>
> --
> Camilo Andrés Jiménez Cruz
>
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] average pressure too high

2010-12-26 Thread Roland Schulz
Hi,

what are the standard deviation and drift? Are you sure this is a
significant difference from 1 bar?
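
You can read those numbers off with g_energy, e.g. (assuming your energy
file is npt.edr):

  g_energy -f npt.edr

and select "Pressure" at the prompt; it prints the average together with an
error estimate, RMSD and drift. The instantaneous pressure of a small
system fluctuates by tens to hundreds of bar, so 1.5 bar can easily be
within the noise.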

Roland

On Sun, Dec 26, 2010 at 12:28 AM, sreelakshmi ramesh <
sree.laks...@research.iiit.ac.in> wrote:

> Dear users,
>  I did nvt equil and after that npt equilbriation and i am
> using parinello rahman as the barostat but the prob is even after 200 ps of
> equil the avg pressure is 1.5 bar .can anybody hepl me out with the
> issue.Any suggestions please.
>
> sree.
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] compilation instructions, the gromacs wiki, documentation, and test suites

2010-12-20 Thread Roland Schulz
Chris,

sorry, I didn't pay attention (I was in a hurry). I know that you have
helped with the documentation, and I wouldn't have suggested putting it on
the wiki if I had recognized it was you. I thought it was a new user. And I
didn't want to criticize, only to point out (to the assumed new user) that
the wiki can be improved by everyone.

Do you have suggestions on how to improve the documentation or the test
suite? What is the barrier to the wiki?
I have mentioned this before (on the dev-list), but I think a monthly phone
conference could help to coordinate these kinds of issues.

Roland

On Mon, Dec 20, 2010 at 10:07 PM,  wrote:

> < "cmake --> relocation R_X86_64_32S against `a local symbol' can not be used
> when making a shared object;>>
>
> Dear Roland:
>
> It is not my intention to be confrontational, your assistance was very
> useful, I appreciate it very much, and I realize that it's not your job to
> comment everything (or even answer my questions on this mailing list).
> Further, I have actually contributed significantly to the gromacs wiki in
> the past, but it's not a wiki anymore and the barrier to posting is enough
> that I'm not the only person who has given up on it.
>
> Second, I would like to mention that as a user I am extremely hesitant to
> upgrade my gromacs version due to the lack of commenting and lack of a good
> test suite. Anybody who used the free energy code with TIP4P in 2008/2009 or
> used the pull code in the early versions of gromacs 4 will probably agree
> with me that testing and documentation are at least as important as new
> code.
>
> I'm not asking anybody else to add documentation or test suites. I'm simply
> pointing out that gromacs is falling behind in these areas and it is not
> necessarily a good thing. I think that there is a utility in simply noting
> this.
>
> Sincerely,
> Chris.
>
>
>
> -- original message --
>
> Please contribute your experience to the wiki pages. Good documentation
> requires many people to help. The (main) developers are often too busy to
> write detailed documentation.
>
>
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use thewww interface
> or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] cmake --> relocation R_X86_64_32S against `a local symbol' can not be used when making a shared object;

2010-12-20 Thread Roland Schulz
On Mon, Dec 20, 2010 at 8:20 PM,  wrote:
>
> Just for completeness, I could not get fftw to compile with --enable-shared
> (although I don't need to any more):
>
> $ ./configure --enable-float --enable-threads --enable-shared
> --prefix=/project/pomes/cneale/GPC/exe/intel/fftw-3.1.2/exec
> $ make
>
> ..  ...
> rm -fr  .libs/libfftw3f.a .libs/libfftw3f.la .libs/libfftw3f.lai
> icc -c99 -shared  -Wl,--whole-archive kernel/.libs/libkernel.a
> dft/.libs/libdft.a dft/codelets/.libs/libdft_codelets.a
> dft/codelets/standard/.libs/libdft_standard.a rdft/.libs/librdft.a
> rdft/codelets/.libs/librdft_codelets.a
> rdft/codelets/r2hc/.libs/librdft_codelets_r2hc.a
> rdft/codelets/hc2r/.libs/librdft_codelets_hc2r.a
> rdft/codelets/r2r/.libs/librdft_codelets_r2r.a reodft/.libs/libreodft.a
> api/.libs/libapi.a -Wl,--no-whole-archive
>  -L/scratch/cneale/GPC/exe/intel/fftw-3.1.2/exec/lib -lm  -pthread
> -Wl,-soname -Wl,libfftw3f.so.3 -o .libs/libfftw3f.so.3.1.2
> icc: command line remark #10010: option '-c99' is deprecated and will be
> removed in a future release. See '-help deprecated'
> ld: kernel/.libs/libkernel.a(alloc.o): relocation R_X86_64_32 against `a
> local symbol' can not be used when making a shared object; recompile with
> -fPIC
> kernel/.libs/libkernel.a(alloc.o): could not read symbols: Bad value
> make[2]: *** [libfftw3f.la] Error 1
> make[2]: Leaving directory
> `/project/pomes/cneale/GPC/exe/intel/fftw-3.1.2_again'
> make[1]: *** [all-recursive] Error 1
> make[1]: Leaving directory
> `/project/pomes/cneale/GPC/exe/intel/fftw-3.1.2_again'
> make: *** [all] Error 2
>

You need to run "make clean" when you change the configure options.
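
A sketch of the full cycle (your configure line from above):

  make clean
  ./configure --enable-float --enable-threads --enable-shared \
      --prefix=/project/pomes/cneale/GPC/exe/intel/fftw-3.1.2/exec
  make && make install

Only the object files carry the stale non-PIC code, so a clean rebuild with
--enable-shared should be enough.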

> It will be nice to see some example build instructions on the gromacs site
> as people have time to add them.
>
Please contribute your experience to the wiki pages. Good documentation
requires many people to help. The (main) developers are often too busy to
write detailed documentation.

Roland


 From: Roland Schulz 
>> Subject: Re: [gmx-users] cmake --> relocation R_X86_64_32S against `a
>>local   symbol' can not be used when making a shared object;
>>  recompile
>>with -fPIC
>> To: Discussion list for GROMACS users 
>> Message-ID:
>>
>> 
>> >
>> Content-Type: text/plain; charset="iso-8859-1"
>>
>> You want to compile fftw with either --enable-shared or with --with-pic.
>> Or
>> you need to compile a static version of Gromacs. As the message says you
>> can't use fftw without pic with shared libraries in GROMACS.
>>
>> Roland
>>
>> On Mon, Dec 20, 2010 at 6:31 PM, Chris Neale > >wrote:
>>
>>  Dear Gromacs users:
>>>
>>> I pulled the master version of the source code today at 2pm via:
>>> git clone git://git.gromacs.org/gromacs.git
>>> and I tried to compile it following the instructions posted here:
>>> http://www.gromacs.org/Developer_Zone/Cmake but I was unsuccessful.
>>>  This
>>> is my first attempt at using cmake, but I have been compiling gromacs
>>> successfully with autoconf since version 3.3.1
>>>
>>> I have a recent enough version of cmake:
>>> $ cmake --version
>>> cmake version 2.8.0
>>>
>>> And I have installed fftw:
>>> $ echo $FFTW_LOCATION
>>> /project/pomes/cneale/GPC/exe/intel/fftw-3.1.2_again/exec
>>> $ ls $FFTW_LOCATION/lib
>>> libfftw3f.a  libfftw3f.la  libfftw3f_threads.a   
>>> libfftw3f_threads.lapkgconfig
>>>
>>> But then when I try to compile gromacs:
>>> $ cmake ../ -DFFTW3F_INCLUDE_DIR=$FFTW_LOCATION/include
>>> -DFFTW3F_LIBRARIES=$FFTW_LOCATION/lib/libfftw3f.a
>>> -DCMAKE_INSTALL_PREFIX=$(pwd) -DGMX_X11=OFF
>>> -DCMAKE_CXX_COMPILER=/scinet/gpc/intel/Compiler/11.1/072/bin/intel64/icpc
>>> -DCMAKE_C_COMPILER=/scinet/gpc/intel/Compiler/11.1/072/bin/intel64/icc
>>> $ make -j 4
>>>
>>> I get the error:
>>> ...  ...
>>> [ 72%] Building C object src/mdlib/CMakeFiles/md.dir/tables.c.o
>>> Linking C shared library libmd.so
>>> ld:
>>>
>>> /project/pomes/cneale/GPC/exe/intel/fftw-3.1.2_again/exec/lib/libfftw3f.a(mapflags.o):
>>> relocation R_X86_64_32S against `a local symbol' can not be used when
>>> making
>>> a shared object; recompile with -fPIC
>>>
>>> /project/pomes/cneale/GPC/exe/intel/fftw-3.1.2_again/exec/lib/libfftw3f.a:
>>> could not read symbols: Bad value
>>> make[2]: *** [src/mdlib/libmd.so.6] Error 1
>>> make[1]

Re: [gmx-users] cmake --> relocation R_X86_64_32S against `a local symbol' can not be used when making a shared object; recompile with -fPIC

2010-12-20 Thread Roland Schulz
You want to compile fftw with either --enable-shared or with --with-pic, or
else build a static version of GROMACS. As the message says, a non-PIC fftw
can't be linked into GROMACS' shared libraries.
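
A sketch based on your fftw configure line, adding --with-pic:

  ./configure --enable-float --enable-threads --with-pic --prefix=$(pwd)/exec
  make clean && make && make install

--with-pic makes even the static libfftw3f.a position-independent, so the
GROMACS shared libmd.so can link against it.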

Roland

On Mon, Dec 20, 2010 at 6:31 PM, Chris Neale wrote:

> Dear Gromacs users:
>
> I pulled the master version of the source code today at 2pm via:
> git clone git://git.gromacs.org/gromacs.git
> and I tried to compile it following the instructions posted here:
> http://www.gromacs.org/Developer_Zone/Cmake but I was unsuccessful.  This
> is my first attempt at using cmake, but I have been compiling gromacs
> successfully with autoconf since version 3.3.1
>
> I have a recent enough version of cmake:
> $ cmake --version
> cmake version 2.8.0
>
> And I have installed fftw:
> $ echo $FFTW_LOCATION
> /project/pomes/cneale/GPC/exe/intel/fftw-3.1.2_again/exec
> $ ls $FFTW_LOCATION/lib
> libfftw3f.a  libfftw3f.la  libfftw3f_threads.a  libfftw3f_threads.la pkgconfig
>
> But then when I try to compile gromacs:
> $ cmake ../ -DFFTW3F_INCLUDE_DIR=$FFTW_LOCATION/include
> -DFFTW3F_LIBRARIES=$FFTW_LOCATION/lib/libfftw3f.a
> -DCMAKE_INSTALL_PREFIX=$(pwd) -DGMX_X11=OFF
> -DCMAKE_CXX_COMPILER=/scinet/gpc/intel/Compiler/11.1/072/bin/intel64/icpc
> -DCMAKE_C_COMPILER=/scinet/gpc/intel/Compiler/11.1/072/bin/intel64/icc
> $ make -j 4
>
> I get the error:
> ...  ...
> [ 72%] Building C object src/mdlib/CMakeFiles/md.dir/tables.c.o
> Linking C shared library libmd.so
> ld:
> /project/pomes/cneale/GPC/exe/intel/fftw-3.1.2_again/exec/lib/libfftw3f.a(mapflags.o):
> relocation R_X86_64_32S against `a local symbol' can not be used when making
> a shared object; recompile with -fPIC
> /project/pomes/cneale/GPC/exe/intel/fftw-3.1.2_again/exec/lib/libfftw3f.a:
> could not read symbols: Bad value
> make[2]: *** [src/mdlib/libmd.so.6] Error 1
> make[1]: *** [src/mdlib/CMakeFiles/md.dir/all] Error 2
> make: *** [all] Error 2
>
> Just to be sure, I recompiled my FFTW and also tried a few different
> options for FFTW3F_LIBRARIES, all with errors.
> This fftw was compiled the same way I did previously, but just in case our
> version of icc was updated and this is causing the problem:
> export CC=icc
> export CXX=icpc
> ./configure --enable-float --enable-threads --prefix=$(pwd)/exec
> make
> make install
>
>  USING
> -DFFTW3F_LIBRARIES=$FFTW_LOCATION/lib/libfftw3f_threads.a
> ...  ...
> ../mdlib/libmd.so.6: undefined reference to `fftwf_plan_dft_c2r_3d'
> ../mdlib/libmd.so.6: undefined reference to `fftwf_plan_guru_dft_r2c'
> ../mdlib/libmd.so.6: undefined reference to `fftwf_plan_dft_r2c_2d'
>
> ...  ...
>
>  78%] Building C object src/kernel/CMakeFiles/g_luck.dir/g_luck.c.o
> Linking C executable g_luck
> Scanning dependencies of target g_protonate
> [ 79%] Building C object
> src/kernel/CMakeFiles/g_protonate.dir/g_protonate.c.o
> [ 79%] /scinet/gpc/intel/Compiler/11.1/072/lib/intel64//libimf.so: warning:
> warning: feupdateenv is not implemented and will always fail
> Building C object src/tools/CMakeFiles/gmxana.dir/gmx_bundle.c.o
> ../mdlib/libmd.so.6: undefined reference to `fftwf_plan_dft_3d'
> ../mdlib/libmd.so.6: undefined reference to `fftwf_execute_dft_r2c'
>
> ...  ...
>
> Building C object src/tools/CMakeFiles/gmxana.dir/edittop.c.o
> [ 88%] Building C object src/tools/CMakeFiles/gmxana.dir/gmx_bar.c.o
> [ 89%] Building C object src/tools/CMakeFiles/gmxana.dir/gmx_pme_error.c.o
> Linking C shared library libgmxana.so
> [ 89%] Built target gmxana
> make: *** [all] Error 2
>
>  USING -DFFTW3F_LIBRARIES=$FFTW_LOCATION/lib/
> libfftw3f.la
> ...  ...
> [ 72%] Building C object src/mdlib/CMakeFiles/md.dir/tables.c.o
> Linking C shared library libmd.so
> /project/pomes/cneale/GPC/exe/intel/fftw-3.1.2_again/exec/lib/libfftw3f.la:
> file not recognized: File format not recognized
>
>  USING -DFFTW3F_LIBRARIES=$FFTW_LOCATION/lib/
> libfftw3f_threads.la
> ...  ...
> [ 72%] Building C object src/mdlib/CMakeFiles/md.dir/mvxvf.c.o
> [ 72%] Building C object src/mdlib/CMakeFiles/md.dir/tables.c.o
> Linking C shared library libmd.so
> /project/pomes/cneale/GPC/exe/intel/fftw-3.1.2_again/exec/lib/
> libfftw3f_threads.la: file not recognized: File format not recognized
> make[2]: *** [src/mdlib/libmd.so.6] Error 1
> make[1]: *** [src/mdlib/CMakeFiles/md.dir/all] Error 2
> make: *** [all] Error 2
>
>
> Thank you,
> Chris.
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the www interface
> or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] Seeking advice on how to build Gromacs on Teragrid resources

2010-12-09 Thread Roland Schulz
What MVAPICH version are you using?

Are you using a TPR file you know is running fine on some other machine?

Does the 4.5.2 version they installed run correctly? If so, what configure
line did they use?

Roland

On Thu, Dec 9, 2010 at 5:14 PM, J. Nathan Scott <
scot...@chemistry.montana.edu> wrote:

> Hello gmx users! I realize this may be a touch off topic, but I am
> hoping that someone out there can offer some advice on how to build
> Gromacs for parallel use on a Teragrid site. Our group is currently
> using Abe on Teragrid, and unfortunately the latest version of Gromacs
> compiled for public use on Abe is 4.0.2. Apparently installation of
> 4.5.3 is at least on the to-do list for Abe, but we would very much
> like to use 4.5.3 now if we can get this issue figured it out.
>
> I have built a parallel version of mdrun using Abe installed versions
> of fftw3 and mvapich2 using the following commands:
>
> setenv CPPFLAGS "-I/usr/apps/math/fftw/fftw-3.1.2/gcc/include/
> -I/usr/apps/mpi/marmot_mvapich2_intel/include"
> setenv LDFLAGS "-L/usr/apps/math/fftw/fftw-3.1.2/gcc/lib
> -L/usr/apps/mpi/marmot_mvapich2_intel/lib"
> ./configure --enable-mpi --enable-float --prefix=/u/ac/jnscott/gromacs
> --program-suffix=_mpi
> make -j 8 mdrun && make install-mdrun
>
> My PBS script file looks like the following:
>
> ---
> #!/bin/csh
> #PBS -l nodes=2:ppn=8
> #PBS -V
> #PBS -o pbs_nvt.out
> #PBS -e pbs_nvt.err
> #PBS -l walltime=2:00:00
> #PBS -N gmx
> cd /u/ac/jnscott/1stn/1stn_wt/oplsaa_spce
> mvapich2-start-mpd
> setenv NP `wc -l ${PBS_NODEFILE} | cut -d'/' -f1`
> setenv MV2_SRQ_SIZE 4000
> mpirun -np ${NP} mdrun_mpi -s nvt.tpr -o nvt.trr -x nvt.xtc -cpo
> nvt.cpt -c nvt.gro -e nvt.edr -g nvt.log -dlb yes
> ---
>
> Unfortunately my runs always fail in the same manner. The log file
> simply ends, as you can see below. It appears that Gromacs is picking
> up the correct number of nodes specified in the PBS script, but then
> something causes it to quit abruptly with no error message.
>
> ---
> 
> Initializing Domain Decomposition on 16 nodes
> Dynamic load balancing: yes
> Will sort the charge groups at every domain (re)decomposition
> Initial maximum inter charge-group distances:
>two-body bonded interactions: 0.526 nm, LJ-14, atoms 1735 1744
>  multi-body bonded interactions: 0.526 nm, Ryckaert-Bell., atoms 1735 1744
> Minimum cell size due to bonded interactions: 0.578 nm
> Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 0.820 nm
> Estimated maximum distance required for P-LINCS: 0.820 nm
> This distance will limit the DD cell size, you can override this with -rcon
> Guess for relative PME load: 0.27
> Will use 10 particle-particle and 6 PME only nodes
> This is a guess, check the performance at the end of the log file
> Using 6 separate PME nodes
> Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
> Optimizing the DD grid for 10 cells with a minimum initial size of 1.025 nm
> The maximum allowed number of cells is: X 5 Y 5 Z 4
> Domain decomposition grid 2 x 5 x 1, separate PME nodes 6
> PME domain decomposition: 2 x 3 x 1
> Interleaving PP and PME nodes
> This is a particle-particle only node
>
> Domain decomposition nodeid 0, coordinates 0 0 0
>
> Using two step summing over 2 groups of on average 5.0 processes
>
> Table routines are used for coulomb: TRUE
> Table routines are used for vdw: FALSE
> Will do PME sum in reciprocal space.
>
> 
>
> Will do ordinary reciprocal space Ewald sum.
> Using a Gaussian width (1/beta) of 0.320163 nm for Ewald
> Cut-off's:   NS: 1   Coulomb: 1   LJ: 1
> Long Range LJ corr.:  3.3589e-04
> System total charge: 0.000
> Generated table with 1000 data points for Ewald.
> Tabscale = 500 points/nm
> Generated table with 1000 data points for LJ6.
> Tabscale = 500 points/nm
> Generated table with 1000 data points for LJ12.
> Tabscale = 500 points/nm
> Generated table with 1000 data points for 1-4 COUL.
> Tabscale = 500 points/nm
> Generated table with 1000 data points for 1-4 LJ6.
> Tabscale = 500 points/nm
> Generated table with 1000 data points for 1-4 LJ12.
> Tabscale = 500 points/nm
>
> Enabling SPC-like water optimization for 6952 molecules.
>
> Configuring nonbonded kernels...
> Configuring standard C nonbonded kernels...
> Testing x86_64 SSE2 support... present.
>
> Removing pbc first time
>
> Initializing Parallel LINear Constraint Solver
>
> 
> Linking all bonded interactions to atoms
> There are 9778 inter charge-group exclusions,
> will use an extra communication step for exclusion forces for PME
>
> The maximum number of communication pulses is: X 1 Y 2
> The minimum size for domain decomposition cells is 0.827 nm
> The requested allowed shrink of DD cells (option -dds) is: 0.80
> The allowed shrink of domain decomposition cells is: X 0.35 Y 0.73
> The maximum allowed distance for charge groups involved in interactions is:
> non-bo

Re: [gmx-users] Can't build gromacs-4.5.3 with cmake

2010-12-08 Thread Roland Schulz
What cmake version are you using?

On Wed, Dec 8, 2010 at 9:49 AM, Keith Callenberg  wrote:

> Hello gmx-users,
>
> I am trying to build gromacs-4.5.3 with GPU support with OpenMM. I
> have followed the INSTALL-GPU instructions along with the tips given
> in this post:
> http://www.mail-archive.com/gmx-users@gromacs.org/msg34139.html
>
> However, I am receiving the following error from cmake even without
> the -DGMX_OPENMM=ON flag (and I have been making sure to rm
> CMakeCache.txt and src/CMakeCache.txt):
>
> u...@host:~/apps/gromacs-4.5.3$ cmake src/
> -- The C compiler identification is GNU
> -- The CXX compiler identification is GNU
> -- Check for working C compiler: /usr/bin/gcc
> -- Check for working C compiler: /usr/bin/gcc -- works
> -- Detecting C compiler ABI info
> -- Detecting C compiler ABI info - done
> -- Check for working CXX compiler: /usr/bin/c++
> -- Check for working CXX compiler: /usr/bin/c++ -- works
> -- Detecting CXX compiler ABI info
> -- Detecting CXX compiler ABI info - done
> -- Found assembler: /usr/bin/as
> -- Loaded CMakeASM-ATTInformation - ASM-ATT support is still
> experimental, please report issues
> CMake Error at gmxlib/CMakeLists.txt:146 (set_target_properties):
>  set_target_properties called with incorrect number of arguments.
>
>
> CMake Error at gmxlib/CMakeLists.txt:148 (install):
>  install TARGETS given no ARCHIVE DESTINATION for static library target
>  "gmx".
>
>
> CMake Error at mdlib/CMakeLists.txt:11 (set_target_properties):
>  set_target_properties called with incorrect number of arguments.
>
>
> CMake Error at mdlib/CMakeLists.txt:13 (install):
>  install TARGETS given no ARCHIVE DESTINATION for static library target
>  "md".
>
>
> CMake Error at kernel/CMakeLists.txt:42 (set_target_properties):
>  set_target_properties called with incorrect number of arguments.
>
>
> CMake Error at kernel/CMakeLists.txt:108 (install):
>  install TARGETS given no ARCHIVE DESTINATION for static library target
>  "gmxpreprocess".
>
>
> CMake Error at kernel/CMakeLists.txt:109 (install):
>  install TARGETS given no RUNTIME DESTINATION for executable target
> "mdrun".
>
>
> CMake Error at kernel/CMakeLists.txt:110 (install):
>  install TARGETS given no RUNTIME DESTINATION for executable target
>  "grompp".
>
>
> CMake Error at tools/CMakeLists.txt:36 (set_target_properties):
>  set_target_properties called with incorrect number of arguments.
>
>
> CMake Error at tools/CMakeLists.txt:65 (install):
>  install TARGETS given no ARCHIVE DESTINATION for static library target
>  "gmxana".
>
>
> CMake Error at tools/CMakeLists.txt:66 (install):
>  install TARGETS given no RUNTIME DESTINATION for executable target
>  "do_dssp".
>
>
> CMake Warning (dev) in CMakeLists.txt:
>  No cmake_minimum_required command is present.  A line of code such as
>
>   cmake_minimum_required(VERSION 2.8)
>
>  should be added at the top of the file.  The version specified may be
> lower
>  if you wish to support older CMake versions for this project.  For more
>  information run "cmake --help-policy CMP".
> This warning is for project developers.  Use -Wno-dev to suppress it.
>
> -- Configuring incomplete, errors occurred!
>
> Here are my environment variables:
>
> PATH=/usr/local/cuda/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
>
> LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda/lib:/usr/local/openmm/lib:
>
> The variable that seems to be empty in CMakeLists.txt is
> ${LIB_INSTALL_DIR}. Does anyone know what that might be caused by?
>
> Thank you,
> Keith Callenberg
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] Cut-offs using CHARMM27 ff

2010-12-01 Thread Roland Schulz
On Wed, Dec 1, 2010 at 4:30 PM, Hassan Shallal  wrote:

>  Dear Gromacs users,
>
> I am trying to use CHARMM27 taking the simulation conditions of two recent
> articles as guides in optimizing the simulation parameters. *(DOI: 
> *10.1021/ct900549r
> and *DOI:* 10.1021/jp101581h).
>
> -- Both are using PME for electrostatics and I am planning to do that too
> -- Both have rcoulomb = 1.2 (this value is optimal for CHARMM27 force field
> as mentioned)
> -- In the second paper, they have explicitly assigned rvdw = 1.2
> (this value is also optimized for CHARMM27 force field) and they assigned
> the NS rlist = 1.2 because as I understand, using PME as a coulombtype
> requires rcoulomb to be equal to rlist. I have no problem following up
> until now and it seems that (rlist = rcoulomb = rvdw = 1.2) presents the
> best combination of cut-offs for using (coulombtype = PME and vdwtype =
> cut-off) along with CHARMM27
>
> -- What I can't get is that in the first paper, they mentioned the
> following *"van der Waals interactions were switched off between 1.0 to
> 1.2 nm"*
> *what does that mean in terms of cut-offs and vdwtype?*
> Does that mean *vdwtype = switch*, *vdw_switch* *= 1*, *rvdw = 1.2* ?
>
yes.

> and if this is what is meant, then rlist has to be 0.1-0.3 larger than rvdw
> to accomodate for the size of the charge groups as mentioned in the manual
> and accordingly, we can't keep rcoulomb = 1.2 because rcoulomb must be equal
> to rlist to allow using PME.
>
The CHARMM FF doesn't use charge groups, so (as far as I understand) this
warning can be ignored. You still miss a few interactions, because the
neighbor list only contains atoms up to 1.2 nm: an atom that sits just
beyond 1.2 nm and moves slightly inwards is not included until the next
neighbor-list update. But because the interaction is already switched to
zero at that distance, this is not very critical in my opinion. If you want
to avoid it you can use PME-Switch, but that is significantly slower.
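
In .mdp terms, a sketch of what I believe the first paper's setup
corresponds to (check the option names against your GROMACS version):

  coulombtype  = PME
  rcoulomb     = 1.2
  rlist        = 1.2
  vdwtype      = switch
  rvdw_switch  = 1.0
  rvdw         = 1.2

grompp may still warn that rlist should exceed rvdw when switching; for the
reasons above I consider that acceptable with CHARMM.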

Roland

>
>
>
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] printing a trajectory file in readable format starting from a specific frame other than zero

2010-12-01 Thread Roland Schulz
Hi,

you can use trjconv to write frames in GRO format. Or you can use it to
write out a trajectory that starts at a later frame (and then run gmxdump
on that).

Roland

On Wed, Dec 1, 2010 at 4:39 PM, Silvia Crivelli wrote:

> Hello,
>
> Is there a function that allows users to read a trajectory file and print
> it in a readable format (like gmxdump) but starting from a user-specified
> frame rather than from 0?
>
> Thanks in advance,
> Silvia Crivelli
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use thewww interface
> or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] Re: Failed to lock: pre.log (Gromacs 4.5.3): SOLVED

2010-11-26 Thread Roland Schulz
Hi,

we use Lustre too and it doesn't cause any problems. I found this message
on the Lustre list:
http://lists.lustre.org/pipermail/lustre-discuss/2008-May/007366.html

And according to your mount output, Lustre on your machine is not mounted
with the flock or localflock option. This seems to be the reason for the
problem. So if you would like to run the simulation directly on Lustre, you
have to ask the sysadmin to mount it with flock or localflock (I don't
recommend localflock; it doesn't guarantee correct locking).

If you would like an option to disable the locking, then please file a bug
report on bugzilla. The reason we lock the logfile is that we want to make
sure only one simulation is appending to the same files; otherwise the
files could get corrupted. This is why the locking is on by default and
currently can't be disabled.
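
For the sysadmin, the client-side mount would look roughly like this sketch
(<mgs-nid> stands for the MGS address already in your mount line):

  mount -t lustre -o flock,noatime,nodiratime <mgs-nid>:/lprod /lustre/ws1

flock gives coherent cluster-wide locks; localflock only makes locks valid
within one node, which is why I don't recommend it.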

Roland


On Fri, Nov 26, 2010 at 3:17 PM, Baofu Qiao  wrote:

> Hi all,
>
> What Roland said is right! the lustre system causes the problem of "lock".
> Now I copy all the files to a folder of /tmp, then run the continuation. It
> works!
>
> Thanks!
>
> regards,
>
>
> $于 2010-11-26 22:53, Florian Dommert 写道:
>
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA1
>>
>> To make things short. The used file system is lustre.
>>
>> /Flo
>>
>> On 11/26/2010 05:49 PM, Baofu Qiao wrote:
>>
>>> Hi Roland,
>>>
>>> The output of "mount" is :
>>> /dev/mapper/grid01-root on / type ext3 (rw)
>>> proc on /proc type proc (rw)
>>> sysfs on /sys type sysfs (rw)
>>> devpts on /dev/pts type devpts (rw,gid=5,mode=620)
>>> /dev/md0 on /boot type ext3 (rw)
>>> tmpfs on /dev/shm type tmpfs (rw)
>>> none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
>>> sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
>>> 172.30.100.254:/home on /home type nfs
>>>
>>> (rw,tcp,nfsvers=3,actimeo=10,hard,rsize=65536,wsize=65536,timeo=600,addr=172.30.100.254)
>>> 172.30.100.210:/opt on /opt type nfs
>>>
>>> (rw,tcp,nfsvers=3,actimeo=10,hard,rsize=65536,wsize=65536,timeo=600,addr=172.30.100.210)
>>> 172.30.100.210:/var/spool/torque/server_logs on
>>> /var/spool/pbs/server_logs type nfs
>>>
>>> (ro,tcp,nfsvers=3,actimeo=10,hard,rsize=65536,wsize=65536,timeo=600,addr=172.30.100.210)
>>> none on /ipathfs type ipathfs (rw)
>>> 172.31.100@o2ib,172.30.100@tcp:172.31.100@o2ib
>>> ,172.30.100@tcp:/lprod
>>> on /lustre/ws1 type lustre (rw,noatime,nodiratime)
>>> 172.31.100@o2ib,172.30.100@tcp:172.31.100@o2ib
>>> ,172.30.100@tcp:/lbm
>>> on /lustre/lbm type lustre (rw,noatime,nodiratime)
>>> 172.30.100.219:/export/necbm on /nfs/nec type nfs
>>>
>>> (ro,bg,tcp,nfsvers=3,actimeo=10,hard,rsize=65536,wsize=65536,timeo=600,addr=172.30.100.219)
>>> 172.30.100.219:/export/necbm-home on /nfs/nec/home type nfs
>>>
>>> (rw,bg,tcp,nfsvers=3,actimeo=10,hard,rsize=65536,wsize=65536,timeo=600,addr=172.30.100.219)
>>>
>>>
>>> On 11/26/2010 05:41 PM, Roland Schulz wrote:
>>>
>>>> Hi Baofu,
>>>>
>>>> could you provide more information about the file system?
>>>> The command "mount" provides the file system used. If it is a
>>>> network-file-system than the operating system and file system used on
>>>> the
>>>> file server is also of interest.
>>>>
>>>> Roland
>>>>
>>>> On Fri, Nov 26, 2010 at 11:00 AM, Baofu Qiao  wrote:
>>>>
>>>>
>>>>  Hi Roland,
>>>>>
>>>>> Thanks a lot!
>>>>>
>>>>> OS: Scientific Linux 5.5. But the system to store data is called as
>>>>> WORKSPACE, different from the regular hardware system. Maybe this is
>>>>> the
>>>>> reason.
>>>>>
>>>>> I'll try what you suggest!
>>>>>
>>>>> regards,
>>>>> Baofu Qiao
>>>>>
>>>>>
>>>>> On 11/26/2010 04:07 PM, Roland Schulz wrote:
>>>>>
>>>>>  Baofu,
>>>>>>
>>>>>> what operating system are you using? On what file system do you try to
>>>>>>
>>>>>>  store
>>>>>
>>>>>  the log file? The error (should) mean that the file system you use
>>>>>>
>>>>>>  doesn't
>

Re: [gmx-users] Re: Failed to lock: pre.log (Gromacs 4.5.3)

2010-11-26 Thread Roland Schulz
Hi Baofu,

could you provide more information about the file system?
The command "mount" shows the file system used. If it is a network file
system, then the operating system and file system used on the file server
are also of interest.

Roland

On Fri, Nov 26, 2010 at 11:00 AM, Baofu Qiao  wrote:

> Hi Roland,
>
> Thanks a lot!
>
> OS: Scientific Linux 5.5. But the system to store data is called as
> WORKSPACE, different from the regular hardware system. Maybe this is the
> reason.
>
> I'll try what you suggest!
>
> regards,
> Baofu Qiao
>
>
> On 11/26/2010 04:07 PM, Roland Schulz wrote:
> > Baofu,
> >
> > what operating system are you using? On what file system do you try to
> store
> > the log file? The error (should) mean that the file system you use
> doesn't
> > support locking of files.
> > Try to store the log file on some other file system. If you want you can
> > still store the (large) trajectory files on the same file system.
> >
> > Roland
> >
> > On Fri, Nov 26, 2010 at 4:55 AM, Baofu Qiao  wrote:
> >
> >
> >> Hi Carsten,
> >>
> >> Thanks for your suggestion! But because my simulation will be run for
> >> about 200ns, 10ns per day(24 hours is the maximum duration for one
> >> single job on the Cluster I am using), which will generate about 20
> >> trajectories!
> >>
> >> Can anyone find the reason causing such error?
> >>
> >> regards,
> >> Baofu Qiao
> >>
> >>
> >> On 11/26/2010 09:07 AM, Carsten Kutzner wrote:
> >>
> >>> Hi,
> >>>
> >>> as a workaround you could run with -noappend and later
> >>> concatenate the output files. Then you should have no
> >>> problems with locking.
> >>>
> >>> Carsten
> >>>
> >>>
> >>> On Nov 25, 2010, at 9:43 PM, Baofu Qiao wrote:
> >>>
> >>>
> >>>
> >>>> Hi all,
> >>>>
> >>>> I just recompiled GMX4.0.7. Such error doesn't occur. But 4.0.7 is
> about
> >>>>
> >> 30% slower than 4.5.3. So I really appreciate if anyone can help me with
> it!
> >>
> >>>> best regards,
> >>>> Baofu Qiao
> >>>>
> >>>>
> >>>> 于 2010-11-25 20:17, Baofu Qiao 写道:
> >>>>
> >>>>
> >>>>> Hi all,
> >>>>>
> >>>>> I got the error message when I am extending the simulation using the
> >>>>>
> >> following command:
> >>
> >>>>> mpiexec -np 64 mdrun -deffnm pre -npme 32 -maxh 2 -table table -cpi
> >>>>>
> >> pre.cpt -append
> >>
> >>>>> The previous simuluation is succeeded. I wonder why pre.log is
> locked,
> >>>>>
> >> and the strange warning of "Function not implemented"?
> >>
> >>>>> Any suggestion is appreciated!
> >>>>>
> >>>>> *
> >>>>> Getting Loaded...
> >>>>> Reading file pre.tpr, VERSION 4.5.3 (single precision)
> >>>>>
> >>>>> Reading checkpoint file pre.cpt generated: Thu Nov 25 19:43:25 2010
> >>>>>
> >>>>> ---
> >>>>> Program mdrun, VERSION 4.5.3
> >>>>> Source code file: checkpoint.c, line: 1750
> >>>>>
> >>>>> Fatal error:
> >>>>> Failed to lock: pre.log. Function not implemented.
> >>>>> For more information and tips for troubleshooting, please check the
> >>>>>
> >> GROMACS
> >>
> >>>>> website at http://www.gromacs.org/Documentation/Errors
> >>>>> ---
> >>>>>
> >>>>> "It Doesn't Have to Be Tip Top" (Pulp Fiction)
> >>>>>
> >>>>> Error on node 0, will try to stop all the nodes
> >>>>> Halting parallel program mdrun on CPU 0 out of 64
> >>>>>
> >>>>> gcq#147: "It Doesn't Have to Be Tip Top" (Pulp Fiction)
> >>>>>
> >>>>>
> >>>>>
> >>
> --
> >&

Re: [gmx-users] Re: Failed to lock: pre.log (Gromacs 4.5.3)

2010-11-26 Thread Roland Schulz
Baofu,

what operating system are you using? On what file system are you trying to
store the log file? The error (should) mean that the file system you use
doesn't support file locking.
Try storing the log file on some other file system. If you want, you can
still keep the (large) trajectory files on the same file system.
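
A sketch of what that could look like when starting a run (-g just
redirects the log file; paths are placeholders):

  mpiexec -np 64 mdrun -s pre.tpr -g /home/user/pre.log \
      -o pre.trr -x pre.xtc -e pre.edr

so only the locked log lives on the other file system. Note that when
appending with -cpi, the file names have to match what the checkpoint
expects, so use the same names from the first run on.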

Roland

On Fri, Nov 26, 2010 at 4:55 AM, Baofu Qiao  wrote:

> Hi Carsten,
>
> Thanks for your suggestion! But because my simulation will be run for
> about 200ns, 10ns per day(24 hours is the maximum duration for one
> single job on the Cluster I am using), which will generate about 20
> trajectories!
>
> Can anyone find the reason causing such error?
>
> regards,
> Baofu Qiao
>
>
> On 11/26/2010 09:07 AM, Carsten Kutzner wrote:
> > Hi,
> >
> > as a workaround you could run with -noappend and later
> > concatenate the output files. Then you should have no
> > problems with locking.
> >
> > Carsten
> >
> >
> > On Nov 25, 2010, at 9:43 PM, Baofu Qiao wrote:
> >
> >
> >> Hi all,
> >>
> >> I just recompiled GMX4.0.7. Such error doesn't occur. But 4.0.7 is about
> 30% slower than 4.5.3. So I really appreciate if anyone can help me with it!
> >>
> >> best regards,
> >> Baofu Qiao
> >>
> >>
> >> 于 2010-11-25 20:17, Baofu Qiao 写道:
> >>
> >>> Hi all,
> >>>
> >>> I got the error message when I am extending the simulation using the
> following command:
> >>> mpiexec -np 64 mdrun -deffnm pre -npme 32 -maxh 2 -table table -cpi
> pre.cpt -append
> >>>
> >>> The previous simuluation is succeeded. I wonder why pre.log is locked,
> and the strange warning of "Function not implemented"?
> >>>
> >>> Any suggestion is appreciated!
> >>>
> >>> *
> >>> Getting Loaded...
> >>> Reading file pre.tpr, VERSION 4.5.3 (single precision)
> >>>
> >>> Reading checkpoint file pre.cpt generated: Thu Nov 25 19:43:25 2010
> >>>
> >>> ---
> >>> Program mdrun, VERSION 4.5.3
> >>> Source code file: checkpoint.c, line: 1750
> >>>
> >>> Fatal error:
> >>> Failed to lock: pre.log. Function not implemented.
> >>> For more information and tips for troubleshooting, please check the
> GROMACS
> >>> website at http://www.gromacs.org/Documentation/Errors
> >>> ---
> >>>
> >>> "It Doesn't Have to Be Tip Top" (Pulp Fiction)
> >>>
> >>> Error on node 0, will try to stop all the nodes
> >>> Halting parallel program mdrun on CPU 0 out of 64
> >>>
> >>> gcq#147: "It Doesn't Have to Be Tip Top" (Pulp Fiction)
> >>>
> >>>
> --
> >>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> >>> with errorcode -1.
> >>>
> >>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> >>> You may or may not see output from other processes, depending on
> >>> exactly when Open MPI kills them.
> >>>
> --
> >>>
> --
> >>> mpiexec has exited due to process rank 0 with PID 32758 on
> >>>
> >>>
> >> --
> >> gmx-users mailing listgmx-users@gromacs.org
> >> http://lists.gromacs.org/mailman/listinfo/gmx-users
> >> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> >> Please don't post (un)subscribe requests to the list. Use the
> >> www interface or send it to gmx-users-requ...@gromacs.org.
> >> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >>
> >
> >
> >
> >
> >
>
>
> --
> 
>  Dr. Baofu Qiao
>  Institute for Computational Physics
>  Universität Stuttgart
>  Pfaffenwaldring 27
>  70569 Stuttgart
>
>  Tel: +49(0)711 68563607
>  Fax: +49(0)711 68563658
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] two problems with gromacs 4.5.3

2010-11-22 Thread Roland Schulz
On Mon, Nov 22, 2010 at 2:55 PM, Sanku M  wrote:

> Hi,
>
> When using gromacs 4.5.3, I experienced two problems ( neither of which
> exist in gromacs 4.0.7 or older version like 3.3.3)
>   1. when issuing grompp command to start one of my  simulations, gromacs
> 4.5.3 gives me fatal error:
>   'can not find atom type SNa .'
>   However, with gromacs 4.0.7 and gromacs 3.3.3, for the  same simulation ,
> grompp runs smoothly ( as atom type SNa indeed exist in my directory).
> I am not sure what is wrong with grompp in gromacs 4.5.3
>

The folder layout for the input files has changed. Make sure you have the
files in the correct location for 4.5.3.
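
If you carry your own (modified) force field along, note that in 4.5 each
force field is a directory of its own. A sketch of the layout (names are
hypothetical):

  ~/ffs/mycharmm.ff/forcefield.itp
  ~/ffs/mycharmm.ff/atomtypes.atp
  export GMXLIB=$HOME/ffs

GMXLIB tells grompp/pdb2gmx where to look in addition to the installed
share/gromacs/top.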

>
> 2. when using g_density command , after processing data , gromacs 4.5.3
> gives an error :
> *** glibc detected *** double free or corruption (out): 0x006d8ce0
> ***
> Aborted
>
> But, the same g_density with gromacs 4.0.7 and gromacs 3.3.3 works
> smoothly.
>

That seems to be a problem in g_density. Please open a bug on
bugzilla.gromacs.org with enough detail to reproduce the error.

Roland


>
> Any idea  will be appreciated .
> Sanku
>
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] xtc corrupted during REMD

2010-11-21 Thread Roland Schulz
On Sun, Nov 21, 2010 at 2:05 PM, Spitaleri Andrea
wrote:

> Hi,
> yes sure. Basically I do:
>
> 1. mdrun -s runA_ -deffnm runA_ -replex 5000 -multi 5 -> i get the first
> 5x25ns of remd simulation (five xtc every 5ps and five trr every 20ps, for
> each replica). I check those files by gmxcheck and they are fine. no errors.
>
> 2. for i in 'seq 1 5'; tpbconv -s runA_$i -nsteps 2500 -o runB_$i ->
> extension the simulation to 50ns total
>
> 3. mdrun -s runB_ -replex 5000 -multi 5 -deffnm runB_ -cpi runA_  -> at end
> some (2 or 1 on the 5 xtc file) of the xtc are corrupted (from gmxcheck)
> whereas the trr are fine. These are from 25ns to 50ns.
>
> >From the log file I do not see any errors. Everything seems fine. I have
> free room space in the hd too :)
>
> I am just wondering whether the problem is in the xtc options (precision
> and writing step)
>

I doubt that it has anything to do with your xtc options.

Are all your xtc files corrupted, or only some? Are the corrupted ones all
corrupted at the same frame, or at different ones?
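
To see where they break, something like

  gmxcheck -f runB_3.xtc

on each file reports the last readable frame. If all corrupted files stop
at the same time stamp, that points to a single event (e.g. a node or disk
problem at that moment); random positions look more like file-system
trouble.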

Roland

>
> 
> Andrea Spitaleri PhD
> Dulbecco Telethon Institute
> Center of Genomics, BioInformatics and BioStatistics
> c/o DIBIT Scientific Institute
> Biomolecular NMR, 1B4
> Via Olgettina 58
> 20132 Milano (Italy)
> http://sites.google.com/site/andreaspitaleri/
> Tel: 0039-0226434348/5622/3497/4922
> Fax: 0039-0226434153
> 
> 
> Da: gmx-users-boun...@gromacs.org [gmx-users-boun...@gromacs.org] per
> conto di Mark Abraham [mark.abra...@anu.edu.au]
> Inviato: domenica 21 novembre 2010 17.03
> A: Discussion list for GROMACS users
> Oggetto: Re: [gmx-users] xtc corrupted during REMD
>
> On 21/11/2010 2:34 AM, Spitaleri Andrea wrote:
> > Hi there,
> > I am encountering a weird problem with a REMD simulation using 4.5.3. The
> total simulation is 50ns with 5 replica, and I do in two runs: 25ns and then
> continuing to 50ns (walltime queue). The first run is okay, the continue run
> (the last 25ns) randomly make some xtc files corrupted (from gmxcheck I get
> the Magic Number Error).
>
> I don't understand how the simulation can continue writing the .xtc
> files when you are getting magic number errors from gmxcheck. We need to
> see command lines for your workflow, please :-)
>
> Mark
>
> >   It is strange since the respective trr files are okay and the
> simulation is still going (it is not blowing up from the log, not step.pdb
> files, not crash). The only difference is that I am writing the xtc often
> respect to the trr file and just the complex not the solvent:
> >
> > nstxout = 1 ; coordinates every 20ps
> > nstvout = 0 ; velocity every 0ps
> > nstfout = 0 ; forces every 0 ps
> > nstlog  = 2500 ; energies log every 5ps
> > nstenergy   = 2500 ; energies  every 5ps
> > nstxtcout   = 2500 ; coordinates every 5ps to xtc
> > xtc-precision   = 2500 ;
> > xtc-grps= complex;
> >
> >
> > Since the error is happening only for the continuing run, I am just
> wondering if there is any reason for this.
> >
> > thanks for any help
> >
> >
> > and
> >
> > 
> > Andrea Spitaleri PhD
> > Dulbecco Telethon Institute
> > Center of Genomics, BioInformatics and BioStatistics
> > c/o DIBIT Scientific Institute
> > Biomolecular NMR, 1B4
> > Via Olgettina 58
> > 20132 Milano (Italy)
> > http://sites.google.com/site/andreaspitaleri/
> > Tel: 0039-0226434348/5622/3497/4922
> > Fax: 0039-0226434153
> > 
> >
> >
> >
> ---
> > SOSTIENI ANCHE TU LA RICERCA DEL SAN RAFFAELE.
> > NON C'E' CURA SENZA RICERCA.
> > Per donazioni: ccp 42437681 intestato a Fondazione Arete' Onlus del San
> Raffaele.
> > Per informazioni: tel. 02.2643.4461 - www.sanraffaele.org
>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
> ---
> SOSTIENI ANCHE TU LA RICERCA DEL SAN RAFFAELE.
> NON C'E' CURA SENZA RICERCA.
> Per donazioni: ccp 42437681 intestato a Fondazione Arete' Onlus del San
> Raffaele.
> Per informazioni: tel. 02.2643.4461 - www.sanraffaele.org
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
>

Re: [gmx-users] PGI link error: unknown switch --rpath & attempted static link of dynamic object fftw/lib/libfftw3.so

2010-11-15 Thread Roland Schulz
Hi,

this is a bug in the autoconf version GROMACS is using. Since we are moving
to cmake we are not updating it, so this is an issue that won't be fixed.

There are two options:
- Create a folder, copy (or symlink) the static libraries into it, and
point the linker (-L) at that folder. Because the folder only contains the
static libraries, the link will work (see the sketch below).
- Use cmake. With cmake this issue is solved.
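
A sketch of the first option (library paths from your module output):

  mkdir -p $HOME/fftw-static
  ln -s /opt/fftw/3.2.2.1/lib/libfftw3.a $HOME/fftw-static/
  ln -s /opt/fftw/3.2.2.1/lib/libfftw3f.a $HOME/fftw-static/

and then link with -L$HOME/fftw-static instead of FFTW_POST_LINK_OPTS, so
ld can only ever find the .a files.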

Roland

On Mon, Nov 15, 2010 at 7:18 AM, Yudong Sun  wrote:

> Hi,
>
> I want to build an all static library because a job running on background
> nodes may not be able to find the dynamic libraries installed on the front
> node in my system.
>
> My configure line for GCC is:
>
> ./configure --prefix=/path-to/gromacs_4.5.3 --enable-mpi --enable-double
> CC=cc CFLAGS=-O3 MPICC=cc
>
> Here I use cc for both C compiler and MPI compiler because it is the name
> of the C compiler wrapper used on my system which is also functioning for
> the MPI C compiler.
>
> The fftw library is pre-installed on my system which is managed by the
> Modules package. When loaded, the fftw module sets the environment variables
> as:
>
> prepend-path LD_LIBRARY_PATH /opt/fftw/3.2.2.1/lib
> setenv   FFTW_POST_LINK_OPTS  
> -L/opt/fftw/3.2.2.1/lib-Wl,-rpath=/opt/fftw/
> 3.2.2.1/lib -lfftw3 -lfftw3f
> setenv   FFTW_INCLUDE_OPTS  -I/opt/fftw/3.2.2.1/include
> setenv   FFTW_DIR /opt/fftw/3.2.2.1/lib
> setenv   FFTW_INC /opt/fftw/3.2.2.1/include
>
> In make, the link line causing the libfftw3 error is:
>
> cc -DHAVE_CONFIG_H -I. -I../../src -I/usr/include/libxml2 -I../../include
> -DGMXLIBDIR=\"/usr/local/packages/nag/GROMACS/phase2b_4.5.3/share/top\"
> -O3 -MT grompp.o -MD -MP -MF .deps/grompp.Tpo -c -o grompp.o grompp.c
> mv -f .deps/grompp.Tpo .deps/grompp.Po
> /bin/sh ../../libtool --tag=CC   --mode=link cc  -O3   -o grompp grompp.o
> libgmxpreprocess_mpi_d.la  ../mdlib/libmd_mpi_d.la ../gmxlib/
> libgmx_mpi_d.la  -lnsl -lm
> cc -O3 -o grompp grompp.o  ./.libs/libgmxpreprocess_mpi_d.a
> /usr/local/packages/nag/GROMACS/gromacs-4.5.3/src/mdlib/.libs/libmd_mpi_d.a
> ../mdlib/.libs/libmd_mpi_d.a 
> /opt/fftw/3.2.2.1/lib/libfftw3.so/usr/lib64/libxml2.so -lz
> /usr/local/packages/nag/GROMACS/gromacs-4.5.3/src/gmxlib/.libs/libgmx_mpi_d.a
> ../gmxlib/.libs/libgmx_mpi_d.a -ldl -lnsl -lm   -Wl,--rpath -Wl,/opt/fftw/
> 3.2.2.1/lib -Wl,--rpath -Wl,/opt/fftw/3.2.2.1/lib
> /opt/cray/xt-asyncpe/3.7.24/bin/cc: INFO: linux target is being used
>
> /usr/bin/ld: attempted static link of dynamic object `/opt/fftw/
> 3.2.2.1/lib/libfftw3.so'
> collect2: ld returned 1 exit status
>
> GCC doesn't support the -rpath flag but it seems not a problem here.
>
> The fftw library has the static and dynamic libraries provided. The linker
> picks the libfftw3.so. This may be relevant to the specification in the
> libfftw3.la:
>
> # libfftw3.la - a libtool library file
> # Generated by ltmain.sh (GNU libtool) 2.2.6 Debian-2.2.6a-4
> #
> # Please DO NOT delete this file!
> # It is necessary for linking the library.
>
> # The name that we can dlopen(3).
> dlname='libfftw3.so.3'
>
> # Names of this library.
> library_names='libfftw3.so.3.2.4 libfftw3.so.3 libfftw3.so'
>
> # The name of the static archive.
> old_library='libfftw3.a'
> # Linker flags that can not go in dependency_libs.
> inherited_linker_flags=' -pthread'
>
> # Libraries that this one depends upon.
> dependency_libs=' -lm'
>
> # Names of additional weak libraries provided by this library
> weak_library_names=''
>
> # Version information for libfftw3.
> current=5
> age=2
> revision=4
>
> # Is this an already installed library?
> installed=yes
>
> # Should we warn about portability when linking against -modules?
> shouldnotlink=no
>
> # Files to dlopen/dlpreopen
> dlopen=''
> dlpreopen=''
>
> # Directory that this library needs to be installed in:
> libdir='/opt/fftw/3.2.2.1/lib'
>
>
> Is there any workaround to instruct the linker to use the static
> libfftw3.a?
>
> Thanks,
>
> Yudong
>
> Roland Schulz wrote, On 12/11/2010 18:53:
>
>> Little bit more background/context would help.
>>
>> Do you try to compile an all static library? If so you of course need a
>> static library of fftw. If it is not all static it normally should
>> accept the dynamic fftw. Then please give us the full configure line,
>> the gcc command line of the link step and the full error message.
>>
>> Roland
>>
>> On Fri, Nov 12, 2010 at 12:17 PM, Yudong Sun > <mailto:yud...@nag.co.uk>>

Re: [gmx-users] PGI link error: unknown switch --rpath & attempted static link of dynamic object fftw/lib/libfftw3.so

2010-11-12 Thread Roland Schulz
A little more background/context would help.

Are you trying to build an all-static binary? If so, you of course need a
static fftw library. If it is not all static, it should normally accept the
dynamic fftw. In that case please give us the full configure line, the gcc
command line of the link step and the full error message.

Roland

On Fri, Nov 12, 2010 at 12:17 PM, Yudong Sun  wrote:

> Mark Abraham wrote, On 12/11/2010 17:02:
>
>  On 13/11/2010 3:15 AM, Yudong Sun wrote:
>>
>>> Hi,
>>>
>>> I have some troubles when compiling GROMACS 4.5.3 using PGI compiler
>>> on the -rpath flag and also a static link to dynamic libfftw3.so.
>>>
>>> I use the pre-installed FFTW 3.2.2.1 library on my Linux system. The
>>> FFTW library is managed by the Modules package. The fftw module
>>> automatically sets the environ variable as:
>>>
>>> FFTW_POST_LINK_OPTS = -L/opt/fftw/3.2.2.1/lib
>>> -Wl,-rpath=/opt/fftw/3.2.2.1/lib -lfftw3 -lfftw3f
>>>
>>
>> So how does configure use this information? (hint: providing the
>> configure command line is essential for us to understand any context!)
>>
>>
>>> When compiling, an error occurs on the -rpath:
>>>
>>> pgcc -fast -o grompp grompp.o ./.libs/libgmxpreprocess_mpi_d.a
>>>
>>> /usr/local/packages/nag/GROMACS/gromacs-4.5.3/src/mdlib/.libs/libmd_mpi_d.a
>>> ../mdlib/.libs/libmd_mpi_d.a /opt/fftw/3.2.2.1/lib/libfftw3.so
>>>
>>> /usr/local/packages/nag/GROMACS/gromacs-4.5.3/src/gmxlib/.libs/libgmx_mpi_d.a
>>> ../gmxlib/.libs/libgmx_mpi_d.a -ldl -lnsl -lm --rpath
>>> /opt/fftw/3.2.2.1/lib --rpath /opt/fftw/3.2.2.1/lib
>>> pgcc-Error-Unknown switch: --rpath
>>> pgcc-Error-Unknown switch: --rpath
>>>
>>> Pgcc doesn't recognize --rpath. The correct format is a single dash
>>> only -rpath.
>>>
>>
>> Sounds like configure isn't handling pgcc properly. However, GROMACS is
>> using very vanilla autoconf stuff, so I'm strongly of the opinion that
>> the problem isn't on the GROMACS side.
>>
>>
>>> If I manually remove the extra '-' (-rpath /opt/fftw/3.2.2.1/lib) and
>>> rerun the command line, a link error appears:
>>>
>>> /usr/bin/ld: attempted static link of dynamic object
>>> `/opt/fftw/3.2.2.1/lib/libfftw3.so'
>>>
>>> The command line links the dynamic fftw library. As the 'configure
>>> --help' shows the default is a static build. Why doesn't the configure
>>> pick the libfftw3.a but the libfftw3.so? The fftw library on my system
>>> contains both static and dynamic libraries.
>>>
>>
>> Don't know. Ask the autoconf list.
>>
>>
>>> I have also tried to make the old GROMACS 4.0.7 which has shown the
>>> same problems as above.
>>>
>>> Any workarounds to the problems or what options should I pass to the
>>> configure?
>>>
>>
>> Don't bother with PGI compilers. GROMACS performance is 99%
>> compiler-independent, thanks to hand-coded assembly for the inner loops.
>> Use gcc.
>>
>>
> I have tried GCC. It has the same static link problem:
>
>
> attempted static link of dynamic object `/opt/fftw/3.2.2.1/lib/libfftw3.so
> '
>
> Yudong
>
>  Mark
>>
>


-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] ngmx in windows

2010-11-10 Thread Roland Schulz
If you compile under Cygwin, then ngmx should build, provided you have all
the required X11 libraries installed. However, I recommend using VMD instead.
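
For example, assuming your run produced conf.gro and traj.xtc (substitute
your own file names), VMD reads both GROMACS formats directly:

  vmd conf.gro traj.xtc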

On Wed, Nov 10, 2010 at 11:03 PM, bharat gupta wrote:

> I am trying to view my simulation trajectory using ngmx on Windows, but
> it seems that the ngmx.exe file does not exist. Can anybody tell me what
> I should do to view the movie of my simulation?
>
>
>
> --
> Bharat
> Ph.D. Candidate
> Room No. : 7202A, 2nd Floor
> Biomolecular Engineering Laboratory
> Division of Chemical Engineering and Polymer Science
> Pusan National University
> Busan -609735
> South Korea
> Lab phone no. - +82-51-510-3680, +82-51-583-8343
> Mobile no. - 010-5818-3680
> E-mail : monu46...@yahoo.com
>
>



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

Re: [gmx-users] Gromacs 4.5 build problems

2010-11-10 Thread Roland Schulz
Hi,

compiling with PGI doesn't currently work. It would not be difficult to get
GROMACS to compile with PGI, but it is currently not supported. Please let
us know if there is a compelling reason why you need to use PGI.

There should be no performance advantage to PGI, so I suggest using the
Intel or GNU compilers instead.
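
If mpicc on your cluster wraps pgcc, one sketch of switching it to GCC (the
environment variable is MPI-implementation specific, so treat this as an
assumption about your setup) is:

  export OMPI_CC=gcc      # Open MPI; for MPICH use MPICH_CC=gcc instead
  ./configure --enable-shared --enable-double --enable-mpi \
      --program-suffix=_mpi_d --prefix=/cvos/apps/gromacs-4.5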

Roland

On Wed, Nov 10, 2010 at 8:26 AM, Miah Wadud Dr (ITCS) wrote:

> Hello Gromacs users,
>
> I am trying to build Gromacs 4.5 using PGI compiler 10.9 and am getting the
> following error message:
>
> mpicc -DHAVE_CONFIG_H -I. -I../../src -I/usr/include/libxml2
> -I../../include
> -DGMXLIBDIR=\"/cvos/apps/gromacs-4.5/share/top\"
> -I/cvos/shared/apps/fftw/pgi/64/3.1.2/include -Mm128 -c main.c -o main.o
> >/dev/null 2>&1
> source='maths.c' object='maths.lo' libtool=yes \
>DEPDIR=.deps depmode=none /bin/sh ../../config/depcomp \
>/bin/sh ../../libtool --tag=CC   --mode=compile mpicc
> -DHAVE_CONFIG_H -I. -I../../src -I/usr/include/libxml2
> -I../../include -DGMXLIBDIR=\"/cvos/apps/gromacs-4.5/share/top\"
> -I/cvos/shared/apps/fftw/pgi/64/3.1.2/include  -Mm128 -c -o maths.lo
> maths.c
>  mpicc -DHAVE_CONFIG_H -I. -I../../src -I/usr/include/libxml2
> -I../../include -DGMXLIBDIR=\"/cvos/apps/gromacs-4.5/share/top\"
> -I/cvos/shared/apps/fftw/pgi/64/3.1.2/include -Mm128 -c maths.c  -DPIC -o
> .libs/maths.o
> PGC-F-0249-#error --  ERROR: No 32 bit wide integer type found! (maths.c:
> 99)
> PGC/x86-64 Linux 10.9-0: compilation aborted
> make[4]: *** [maths.lo] Error 1
> make[4]: Leaving directory
> `/gpfs/ueasystem/escluster/gromacs-4.5/src/gmxlib'
> make[3]: *** [all-recursive] Error 1
> make[3]: Leaving directory
> `/gpfs/ueasystem/escluster/gromacs-4.5/src/gmxlib'
> make[2]: *** [all-recursive] Error 1
> make[2]: Leaving directory `/gpfs/ueasystem/escluster/gromacs-4.5/src'
> make[1]: *** [all] Error 2
> make[1]: Leaving directory `/gpfs/ueasystem/escluster/gromacs-4.5/src'
> make: *** [all-recursive] Error 1
> [r...@master1 gromacs-4.5]#
>
> The configure command used is:
>
> [r...@master1 gromacs-4.5]# ./configure --enable-shared --enable-double
> --enable-mpi --program-suffix=_mpi_d \
> --prefix=/cvos/apps/gromacs-4.5
> checking build system type... x86_64-unknown-linux-gnu
> checking host system type... x86_64-unknown-linux-gnu
> checking for a BSD-compatible install... /usr/bin/install -c
> checking whether build environment is sane... yes
> checking for a thread-safe mkdir -p... /bin/mkdir -p
> checking for gawk... gawk
> checking whether make sets $(MAKE)... yes
> checking how to create a ustar tar archive... gnutar
> checking for C compiler default output file name... a.out
> checking whether the C compiler works... yes
> checking whether we are cross compiling... no
> checking for suffix of executables...
> checking for suffix of object files... o
> checking whether we are using the GNU C compiler... no
> checking whether /cvos/shared/apps/pgi/10.9/linux86-64/10.9/bin/pgcc
> accepts
> -g... yes
> checking for /cvos/shared/apps/pgi/10.9/linux86-64/10.9/bin/pgcc option to
> accept ISO C89... none needed
> checking for style of include used by make... GNU
> checking dependency style of
> /cvos/shared/apps/pgi/10.9/linux86-64/10.9/bin/pgcc... none
> checking dependency style of
> /cvos/shared/apps/pgi/10.9/linux86-64/10.9/bin/pgcc... none
> checking for mpxlc... no
> checking for mpicc... mpicc
> checking whether the MPI cc command works... yes
> checking for MPI_IN_PLACE in collective operations... no
> checking for catamount... no
> checking how to run the C preprocessor... mpicc -E
> checking for grep that handles long lines and -e... /bin/grep
> checking for egrep... /bin/grep -E
> checking whether ln -s works... yes
> **
> * Using CFLAGS from environment variable *
> **
> checking whether byte ordering is bigendian... no
> checking that size_t can hold pointers... yes
> checking for SIGUSR1... yes
> checking for pipes... yes
> checking floating-point format... IEEE754 (little-endian byte and word
> order)
> checking whether ln -s works... yes
> checking whether make sets $(MAKE)... (cached) yes
> checking for a sed that does not truncate output... /bin/sed
> checking for non-GNU ld... /usr/bin/ld
> checking if the linker (/usr/bin/ld) is GNU ld... yes
> checking for /usr/bin/ld option to reload object files... -r
> checking for BSD-compatible nm... /usr/bin/nm -B
> checking how to recognise dependent libraries... pass_all
> checking dlfcn.h usability... yes
> checking dlfcn.h presence... yes
> checking for dlfcn.h... yes
> checking whether we are using the GNU C++ compiler... no
> checking whether mpicc accepts -g... no
> checking dependency style of mpicc... none
> checking how to run the C++ preprocessor... /lib/cpp
> checking the maximum length of command line arguments... 32768
> checking command to parse /usr/bin/nm -B output from mpicc object... failed
> checking for objdir... .libs
> checking for ar... ar
>

  1   2   >