[gmx-users] trjorder

2013-03-12 Thread Nidhi Katyal
Dear all
I would like to know the number of oxygen atoms of my co-solvent
molecules that are within 0.3 nm of the protein during the last few ns. I
have read the manual and found that trjorder could serve this purpose.
So I first created an index file containing all the oxygen atoms of
my co-solvent molecules. Then I used the following command:
trjorder -f *.xtc -s *.tpr -b 18000 -e 2 -nshell nshell.xvg -na 1
Here I have used -na 1 since I am interested in the number of atoms
rather than molecules (although each co-solvent molecule contains 11
oxygen atoms).
Please let me know whether I am proceeding in the right direction and,
if not, what -na value I should use for this purpose.
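
In other words, what I have in mind is something like the following (file
names are placeholders; I am assuming that -r sets the 0.3 nm shell radius
and that -n passes the index file of co-solvent oxygens):

trjorder -f traj.xtc -s topol.tpr -n oxygen.ndx -b 18000 -r 0.3 -na 1 -nshell nshell.xvg
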
Thanks in advance.


[gmx-users] Re: combine two gro files

2013-03-12 Thread shika
Thank you for your reply, Dr. Dallas.

But my friend told me that the extended run is only 2 ns, not the total 22 ns.

So the .gro from the extended run is not affected at all?

I thought that the .gro file produced by the extended run covers only those 2 ns.

Thanks, Doc!

On Wed, Mar 13, 2013 at 11:19 AM, Dallas Warren [via GROMACS]
 wrote:
> Why is it that you want to combine the two coordinate files?
>
> md.gro is the coordinates of the system at the end of 20ns.
>
> md_extend.gro is the coordinates of the system at the end of 22ns.
>
> So combining them will not make much sense.
>
> Catch ya,
>
> Dr. Dallas Warren
> Drug Discovery Biology
> Monash Institute of Pharmaceutical Sciences, Monash University
> 381 Royal Parade, Parkville VIC 3052
> [hidden email]
> +61 3 9903 9304
> -
> When the only tool you own is a hammer, every problem begins to resemble a
> nail.
>
>
>> -Original Message-
>> From: [hidden email] [mailto:gmx-users-
>> [hidden email]] On Behalf Of Nur Syafiqah Abdul Ghani
>> Sent: Wednesday, 13 March 2013 2:10 PM
>> To: [hidden email]
>> Subject: [gmx-users] combine two gro files
>>
>> Hi all,
>>
>> I just finished my simulation of a protein in mixed solvent, with the
>> final file named md.gro, but when I analyzed the protein it did not seem
>> stable yet, so I had to extend the run by about 2 ns. Previously I had
>> run 20 ns, so the extended output is named md_extend.gro, along with the
>> other files (trr, tpr).
>>
>> The commands I used to extend the simulation are below:
>> tpbconv -f md.trr -s md.tpr -o md_extend.tpr -extend 2000
>> mdrun -v -cpi md.cpt -deffnm md_extend
>>
>> From what I understand, the new files created after extending cover only
>> the additional 2 ns, right?
>> How can I combine the .gro files from the original and the extended run?
>> I combined the xtc, trr and edr files using trjcat and eneconv.
>>
>> trjcat cannot combine the .gro files; it gives the error:
>> Fatal error:
>> Can not write a gro file without atom names
>>
>> Has anyone faced this problem? I already searched previous threads; they
>> suggested converting the .gro to .pdb and then using cat. Can I do that?
>>
>> --
>> Best Regards,
>>
>> Nur Syafiqah Abdul Ghani,
>> Theoretical and Computational Chemistry Laboratory,
>> Department of Chemistry,
>> Faculty of Science,
>> Universiti Putra Malaysia,
>> 43400 Serdang,
>> Selangor.
>> alternative email : [hidden email]



--
Best Regards,

Nur Syafiqah Abdul Ghani,
Theoretical and Computational Chemistry Laboratory,
Department of Chemistry,
Faculty of Science,
Universiti Putra Malaysia,
43400 Serdang,
Selangor.
alternative email : syafiqahabdulgh...@gmail.com






RE: [gmx-users] combine two gro files

2013-03-12 Thread Dallas Warren
Why is it that you want to combine the two coordinate files?

md.gro is the coordinates of the system at the end of 20ns.

md_extend.gro is the coordinates of the system at the end of 22ns.

So combining them will not make much sense.
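
For the trajectory and energy files, what you have already done with trjcat
and eneconv is the right approach, i.e. something along these lines (the
output names are just placeholders, assuming your files follow the -deffnm
naming):

trjcat -f md.xtc md_extend.xtc -o md_full.xtc
eneconv -f md.edr md_extend.edr -o md_full.edr

Each .gro file holds only a single frame, so there is nothing to concatenate
there.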

Catch ya,

Dr. Dallas Warren
Drug Discovery Biology
Monash Institute of Pharmaceutical Sciences, Monash University
381 Royal Parade, Parkville VIC 3052
dallas.war...@monash.edu
+61 3 9903 9304
-
When the only tool you own is a hammer, every problem begins to resemble a 
nail. 


> -Original Message-
> From: gmx-users-boun...@gromacs.org [mailto:gmx-users-
> boun...@gromacs.org] On Behalf Of Nur Syafiqah Abdul Ghani
> Sent: Wednesday, 13 March 2013 2:10 PM
> To: gmx-users@gromacs.org
> Subject: [gmx-users] combine two gro files
> 
> Hi all,
> 
> I just finished my simulation of a protein in mixed solvent, with the
> final file named md.gro, but when I analyzed the protein it did not seem
> stable yet, so I had to extend the run by about 2 ns. Previously I had
> run 20 ns, so the extended output is named md_extend.gro, along with the
> other files (trr, tpr).
> 
> The commands I used to extend the simulation are below:
> tpbconv -f md.trr -s md.tpr -o md_extend.tpr -extend 2000
> mdrun -v -cpi md.cpt -deffnm md_extend
> 
> From what I understand, the new files created after extending cover only
> the additional 2 ns, right?
> How can I combine the .gro files from the original and the extended run?
> I combined the xtc, trr and edr files using trjcat and eneconv.
> 
> trjcat cannot combine the .gro files; it gives the error:
> Fatal error:
> Can not write a gro file without atom names
> 
> Has anyone faced this problem? I already searched previous threads; they
> suggested converting the .gro to .pdb and then using cat. Can I do that?
> 
> --
> Best Regards,
> 
> Nur Syafiqah Abdul Ghani,
> Theoretical and Computational Chemistry Laboratory,
> Department of Chemistry,
> Faculty of Science,
> Universiti Putra Malaysia,
> 43400 Serdang,
> Selangor.
> alternative email : syafiqahabdulgh...@gmail.com


[gmx-users] Postdoc jobs developing gromacs etc.

2013-03-12 Thread David van der Spoel
If you are interested in a gromacs-related development position at the 
postdoc level, please have a look at our ad below. Please spread to 
interested colleagues.


http://www.uu.se/jobb/others/annonsvisning?languageId=1&tarContentId=235221

Regards,
--
David van der Spoel, Ph.D., Professor of Biology
Dept. of Cell & Molec. Biol., Uppsala University.
Box 596, 75124 Uppsala, Sweden. Phone:  +46184714205.
sp...@xray.bmc.uu.se    http://folding.bmc.uu.se
--




RE: [gmx-users] mdrun WARNING and crash

2013-03-12 Thread L.Liu
Hello Justin,

One update on the weird snapshot mentioned in my previous email.
I checked the output coordinates and plotted them in xmgrace in the x and y
directions, finding that the system is not a crystal; instead it is a normal
homogeneous box. All this suggests that something may be wrong with the
trajectory file, because we calculated the RDF and MSD and viewed the system
in VMD all through traj.xtc.
I am checking the way I output the trajectory. Could you please give me your
suggestions if something is wrong in my .mdp file (included in an earlier
email)?

It seems that the system is now working, though still with some output issues
and perhaps further unexpected problems. So far we have made progress, and I
want to thank you very much for all your help.

Kind regards,
Li

From: gmx-users-boun...@gromacs.org [gmx-users-boun...@gromacs.org] on behalf 
of Justin Lemkul [jalem...@vt.edu]
Sent: Monday, March 11, 2013 10:06 PM
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] mdrun WARNING and crash

On Monday, March 11, 2013, wrote:

> Hallo Justin,
>
> Thank you for your comments.
> Taking your suggestions, I set nrexcl=1 and commented out the [pairs] section,
> because there is no special case of non-bonded interactions to declare,
> and then tried to see what happens.
> We minimized first with steep and then with cg; both finish very quickly,
> because after around 7000 steps the energy cannot go down any further.
> Then we ran mdrun, and the energy output looks like:
>
>Step   Time Lambda
>   1   10.00.0
>
>Energies (kJ/mol)
> LJ (SR)   Coulomb (SR)  PotentialKinetic En.   Total Energy
> 1.49468e+050.0e+001.49468e+058.73890e+031.58207e+05
> Temperature Pressure (bar)
> 4.38208e+029.93028e+04
>
> Although there is no run-time error this time, we find the output extremely
> weird. For example, viewing it with
> VMD conf.gro traj.xtc
> we see that frame 0 is a homogeneous box, but starting from the first step
> the box becomes a lattice, which is far from what we expect a polymer melt
> system to look like.
>
> The force parameters are taken from the literature, PRL 85(5), 1128 (2000).
> I am still very worried about the format of my input files. Could you please
> help a complete beginner?
>
>
Please provide links to images. This is probably not a big deal as long as
the simulation is actually running, since a triclinic representation of the
unit cell is used.
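
If the lattice only shows up when you look at the trajectory, it may just be
the periodic representation; re-wrapping before visualizing often makes it
look sensible again, e.g. something like this (the output name is arbitrary):

trjconv -f traj.xtc -s topol.tpr -pbc mol -ur compact -o traj_vis.xtc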

-Justin



> Thanks a lot.
> Kind regards,
> Li
> 
> From: gmx-users-boun...@gromacs.org  [
> gmx-users-boun...@gromacs.org ] on behalf of Justin Lemkul [
> jalem...@vt.edu ]
> Sent: Thursday, February 28, 2013 3:02 PM
> To: Discussion list for GROMACS users
> Subject: Re: [gmx-users] mdrun WARNING and crash
>
> On 2/28/13 6:59 AM, l@utwente.nl wrote:
> > Hallo Justin,
> >
> > Thank you for you help. I have read the previous discussions on this
> topic, which is very helpful.
> > The link is:
> http://gromacs.5086.n6.nabble.com/What-is-the-purpose-of-the-pairs-section-td5003598.html
> > Well, there are still something I want to make sure, which might be the
> reason of mdrun crash of my system.
> >
> > ###Introduction of system##
> > Linear Polyethylene melt:  each chain contains 16 beads, each bead
> coarse grained 3 monomers. Number of  chain in the box is 64.
> >
> > Force Field##
> > ffbonded.itp
> > [ bondtypes ]
> > ; FENE, Ro = 1.5 sigma and kb = 30 epsilon/sigma^2
> > ;   ij funcb0 (nm) kb (kJ/mol nm^2)
> >   CH2   CH27   0.795   393.
> >
> > ffnonbonded.itp
> > [ atomtypes ]
> > ; epsilon / kB = 443K
> > ;name  at.num  mass (au)   charge   ptype sigma (nm)
>  epsilon (kJ/mol)
> > CH2  142.3   0.000   A   0.5300
>  3.68133
> >
> > [ nonbond_params ]
> >; i  jfunc  sigma   epsilon
> > CH2   CH210.5303.68133
> >
> > [ pairtypes ]
> >;  i  jfunc  sigma   epsilon
> > CH2   CH21  0.533.68133
> >
> > topology##
> > [ defaults ]
> > ; nbfunccomb-rule   gen-pairs   fudgeLJ   fudgeQQ
> >  1  2no  1.0  1.0
> >
> > ; The force field files to be included
> > #include "../forcefield/forcefield.itp"
> >
> > [ moleculetype ]
> > ; name  nrexcl
> > PE  0
> > [atoms]
> > ;   nrtype   resnr  residuatomcgnr  charge
> >   1 CH2   1PE  C   1  0.0
> >   2 CH2   1PE  C   2  0.0
> >   3 CH2   1PE  C   3  0.0
> >   4 CH2   1PE  C   4  0.0
> >  ..
> >   15CH2   1PE 

Re: [gmx-users] Mismatching number of PP MPI processes and GPUs per node

2013-03-12 Thread George Patargias
Hi Carsten

Thanks a lot for this tip. It worked!

George
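
For the archive, the launch line following Carsten's suggestion is something
along these lines (keeping the file names from my original script):

mpirun -np 1 mdrun_mpi -ntomp 12 -s test.tpr -deffnm test_out -nb gpu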

> Hi,
>
> On Mar 11, 2013, at 10:50 AM, George Patargias  wrote:
>
>> Hello
>>
>> Sorry for posting this again.
>>
>> I am trying to run GROMACS 4.6 compiled with MPI and GPU acceleration
>> (CUDA 5.0 lib) using the following SGE batch script.
>>
>> #!/bin/sh
>> #$ -V
>> #$ -S /bin/sh
>> #$ -N test-gpus
>> #$ -l h="xgrid-node02"
>> #$ -pe mpi_fill_up 12
>> #$ -cwd
>>
>> source /opt/NetUsers/pgkeka/gromacs-4.6_gpu_mpi/bin/GMXRC
>> export
>> DYLD_LIBRARY_PATH=/Developer/NVIDIA/CUDA-5.0/lib:$DYLD_LIBRARY_PATH
>>
>> mpirun -np 12 mdrun_mpi -s test.tpr -deffnm test_out -nb gpu
>>
>> After detection of the installed GPU card
>>
>> 1 GPU detected on host xgrid-node02.xgrid:
>>  #0: NVIDIA Quadro 4000, compute cap.: 2.0, ECC:  no, stat: compatible
>>
>> GROMACS issues the following error
>>
>> Incorrect launch configuration: mismatching number of PP MPI processes
>> and
>> GPUs per node. mdrun_mpi was started with 12 PP MPI processes per node,
>> but only 1 GPU were detected.
>>
>> It can't be that we need to run GROMACS only on a single core so that it
>> matches the single GPU card.
> Have you compiled mdrun_mpi with OpenMP threads support? Then, if you
> do
>
> mpirun -np 1 mdrun_mpi ?
>
> it should start one MPI process with 12 OpenMP threads, which should give
> you what you want. You can also manually specify the number of OpenMP
> threads
> by adding
>
> -ntomp 12
>
> Carsten
>
>>
>
>>
>> Do you have any idea what has to be done?
>>
>> Many thanks.
>>
>> Dr. George Patargias
>> Postdoctoral Researcher
>> Biomedical Research Foundation
>> Academy of Athens
>> 4, Soranou Ephessiou
>> 115 27
>> Athens
>> Greece
>>
>> Office: +302106597568
>>
>
>
> --
> Dr. Carsten Kutzner
> Max Planck Institute for Biophysical Chemistry
> Theoretical and Computational Biophysics
> Am Fassberg 11, 37077 Goettingen, Germany
> Tel. +49-551-2012313, Fax: +49-551-2012302
> http://www.mpibpc.mpg.de/grubmueller/kutzner
> http://www.mpibpc.mpg.de/grubmueller/sppexa
>


Dr. George Patargias
Postdoctoral Researcher
Biomedical Research Foundation
Academy of Athens
4, Soranou Ephessiou
115 27
Athens
Greece

Office: +302106597568



Re: [gmx-users] Gromacs with Intel Xeon Phi coprocessors ?

2013-03-12 Thread Szilárd Páll
Hi Chris,

You should be able to run on MIC/Xeon Phi as these accelerators, when used
in symmetric mode, behave just like a compute node. However, for two main
reasons the performance will be quite bad:
- no SIMD accelerated kernels for MIC;
- no accelerator-specific parallelization implemented (asymmetric/"offload"
mode).

Note that OpenMP alone won't help much.

Both are work in progress (see redmine #1181 and #1187), but there is no
target date for the availability of an efficient MIC/Phi-optimized GROMACS
version. I personally hope that we will have something in the form of a
preview release based on 4.6 (but probably not included in it) and, if it
works well enough, perhaps included in 5.0.


Let me take the opportunity to say that if anybody is interested in
contributing to the MIC acceleration (or other HPC, computing, or
scientific aspects of GROMACS or MD in general), we are in the process of
defining more or less independent projects for new
contribution/collaboration:
http://www.gromacs.org/Projects_available_for_new_contributors#Intel_MIC_support.3a.c2.a0implementing_asymmetric_offload_mechanism

Cheers,

--
Szilárd


On Sat, Mar 9, 2013 at 2:43 AM, Christopher Neale <
chris.ne...@mail.utoronto.ca> wrote:

> Dear users:
>
> does anybody have any experience with gromacs on a cluster in which each
> node is composed of 1 or 2 x86 processors plus an Intel Xeon Phi
> coprocessor? Can gromacs make use of the xeon phi coprocessor? If not, does
> anybody know if that is in the pipeline?
>
> Thank you,
> Chris.


[gmx-users] No default U-B types

2013-03-12 Thread 라지브간디
Dear gmx users,


I have specified the bond between the heme and the CO ligand in specbond.dat,
and the topology was created with this special bond included in the bonds
section. However, when I come to the step of generating the atomic-level
description of the system in the binary file ions.tpr, I get the following
error.




ERROR 1 [file topol_Other_chain_A2.itp, line 265]:
  No default U-B types




ERROR 2 [file topol_Other_chain_A2.itp, line 343]:
  No default U-B types

How do I avoid this error? Thanks in advance.
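
If I understand the error correctly, the new angle created through the
special bond has no Urey-Bradley parameters in the force field, so an entry
of roughly this form would be needed in [ angletypes ]; the atom type names
and constants below are placeholders only, not real parameters:

[ angletypes ]
;     i      j      k   func   theta0     ktheta       ub0        kub
     NR     FE      C      5   90.000    400.000     0.000      0.000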

[gmx-users] constant velocity pulling/Umbrella sampling

2013-03-12 Thread raghav singh
Hello Fellow Users,

I have a question about the .mdp options for a constant-velocity pulling /
umbrella pulling simulation.

pull            = umbrella
pull_geometry   = distance      ; simple distance increase
pull_dim        = N N Y
pull_start      = yes           ; define initial COM distance > 0
pull_ngroups    = 1
pull_group0     = Chain_B
pull_group1     = Chain_A
pull_rate1      = 0.01          ; 0.01 nm per ps = 10 nm per ns
pull_k1         = 1000          ; kJ mol^-1 nm^-2

pull_rate1 = the velocity for constant-velocity pulling.
pull_k1 = is this option for constant-force pulling, or is it the spring
constant of the virtual spring?

I am trying to follow Justin's umbrella sampling tutorial, and when I remove
the pull_k1 option the COM pull energy does not change and the ligand never
gets pulled away.

Please help me out here.

Thank you in Advance.

cheers
Raghav


Re: [gmx-users] setting the gromacs 4.6.1 path

2013-03-12 Thread Mark Abraham
On Tue, Mar 12, 2013 at 11:14 AM, 라지브간디  wrote:

> dear gmx.
>
>
> i am having a problem of setting the path as mentioned in gromacs manual
> 4.6.1 version.
>
>
> I used
>
>
> source /usr/local/gromacs/bin/GMXRC
>
>
> bash: goto: command not found..
>

What is your terminal? What is its version? You might need to update that
if it's truly ancient. Otherwise, your bash shell is probably erroneously
setting the variable "shell", which it should not do. This foils the
GROMACS detection of which shell you are running. If you can't fix that,
you should source whichever of GMXRC.csh, GMXRC.bash or GMXRC.ksh is
applicable.

when i use
>
>
> echo "source /usr/local/gromacs/bin/GMXRC" >> ~/.bash_profile
>
>
> it work on that terminal only. cant access the gromacs from other
> terminal. Could you tell me what is the problem of ? Thanks
>

Source only ever works on the shell in which it is run. Setting up your
profile to source automatically solves the problem for each new shell, and
not any existing ones.
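
For bash, for example, something like this covers every new terminal (note
that many terminal windows read ~/.bashrc rather than ~/.bash_profile):

source /usr/local/gromacs/bin/GMXRC.bash
echo "source /usr/local/gromacs/bin/GMXRC.bash" >> ~/.bashrc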

Mark


[gmx-users] setting the gromacs 4.6.1 path

2013-03-12 Thread 라지브간디
Dear gmx users,


I am having a problem setting the path as described in the GROMACS manual,
version 4.6.1.


I used


source /usr/local/gromacs/bin/GMXRC


and got


bash: goto: command not found


When I use


echo "source /usr/local/gromacs/bin/GMXRC" >> ~/.bash_profile


it works in that terminal only; I cannot access GROMACS from other terminals.
Could you tell me what the problem is? Thanks


Re: [gmx-users] query for gromacs-4.5.4

2013-03-12 Thread Mark Abraham
It could be anything. But until we see some GROMACS diagnostic messages,
nobody can tell.

Mark

On Tue, Mar 12, 2013 at 10:08 AM, Chaitali Chandratre  wrote:

> Sir,
>
> Thanks for your reply
> But the same script runs on some other cluster with apprx same
> configuration but not on cluster on which I am doing set up.
>
> Also job hangs after some 16000 steps but not come out immediately.
> It might be problem with configuration or what?
>
> Thanks...
>
> Chaitali
>
> On Tue, Mar 12, 2013 at 2:18 PM, Mark Abraham wrote:
>
> > They're just MPI error messages and don't provide any useful GROMACS
> > diagnostics. Look in the end of the .log file, stderr and stdout for
> clues.
> >
> > One possibility is that your user's system is too small to scale
> > effectively. Below about 1000 atoms/core you're wasting your time unless
> > you've balanced the load really well. There is a
> > simulation-system-dependent point below which fatal GROMACS errors are
> > assured.
> >
> > Mark
> >
> > On Tue, Mar 12, 2013 at 6:17 AM, Chaitali Chandratre
> > wrote:
> >
> > > Hello Sir,
> > >
> > > Actually I have been given work to setup gromacs-4.5.4 in our cluster
> > which
> > > is being used
> > > by users.I am not gromacs user and not aware of its internal details.
> > > I have got only .tpr file from user and I need to test my installation
> > > using that .tpr file.
> > >
> > > It works fine for 2 nodes 8 processes , 1 node 8 processes.
> > >  But when number of processes are equal to 16 it gives segmentation
> fault
> > > and
> > >  if number of processes are equal to 32 it gives
> > > error message like
> > > " HYD_pmcd_pmiserv_send_signal (./pm/pmiserv/pmiserv_cb.c:221): assert
> > > (!closed) failed
> > >  ui_cmd_cb (./pm/pmiserv/pmiserv_pmci.c:128): unable to send SIGUSR1
> > > downstream
> > >  HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77):
> callback
> > > returned error status
> > >  HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:388): error
> > > waiting for event
> > > [ main (./ui/mpich/mpiexec.c:718): process manager error waiting for
> > > completion"
> > >
> > > I am not clear like whether problem is there in my installation or
> what?
> > >
> > > Thanks and Regards,
> > >Chaitalij
> > >
> > > On Wed, Mar 6, 2013 at 5:41 PM, Justin Lemkul  wrote:
> > >
> > > >
> > > >
> > > > On 3/6/13 4:20 AM, Chaitali Chandratre wrote:
> > > >
> > > >> Dear Sir ,
> > > >>
> > > >> I am new to this installation and setup area. I need some
> information
> > > for
> > > >> -stepout option for
> > > >>
> > > >
> > > > What more information do you need?
> > > >
> > > >
> > > >  mdrun_mpi and also probable causes for segmentation fault in
> > > >>  gromacs-4.5.4.
> > > >> (my node having 64 GB mem running with 16 processes, nsteps =
> > 2000)
> > > >>
> > > >>
> > > There are too many causes to name.  Please consult
> > > http://www.gromacs.org/Documentation/Terminology/Blowing_Up.
> > > >  If you need further help, please be more specific, including a
> > > description
> > > > of the system, steps taken to minimize and/or equilibrate it, and any
> > > > complete .mdp file(s) that you are using.
> > > >
> > > > -Justin
> > > >
> > > > --
> > > > ==**==
> > > >
> > > > Justin A. Lemkul, Ph.D.
> > > > Research Scientist
> > > > Department of Biochemistry
> > > > Virginia Tech
> > > > Blacksburg, VA
> > > > jalemkul[at]vt.edu | (540) 231-9080
> > > > http://www.bevanlab.biochem.**vt.edu/Pages/Personal/justin<
> > > http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin>
> > > >
> > > > ==**==
> > > > --
> > > > gmx-users mailing listgmx-users@gromacs.org
> > > > http://lists.gromacs.org/**mailman/listinfo/gmx-users<
> > > http://lists.gromacs.org/mailman/listinfo/gmx-users>
> > > > * Please search the archive at http://www.gromacs.org/**
> > > > Support/Mailing_Lists/Search<
> > > http://www.gromacs.org/Support/Mailing_Lists/Search>before posting!
> > > > * Please don't post (un)subscribe requests to the list. Use the www
> > > > interface or send it to gmx-users-requ...@gromacs.org.
> > > > * Can't post? Read http://www.gromacs.org/**Support/Mailing_Lists<
> > > http://www.gromacs.org/Support/Mailing_Lists>
> > > >
> > > --
> > > gmx-users mailing listgmx-users@gromacs.org
> > > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > > * Please search the archive at
> > > http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> > > * Please don't post (un)subscribe requests to the list. Use the
> > > www interface or send it to gmx-users-requ...@gromacs.org.
> > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > >
> > --
> > gmx-users mailing listgmx-users@gromacs.org
> > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > * Please search the archive at
> > http://www.gromacs.org/Support/

RE: [gmx-users] mdrun WARNING and crash

2013-03-12 Thread L.Liu
Hello Justin,

Thank you for your reply. I uploaded the images; please find the links
below.

start box:
http://s1279.beta.photobucket.com/user/Li_Liu/media/image0_zpsf95d10fe.jpeg.html?filters[user]=134822327&filters[recent]=1&filters[publicOnly]=1&sort=1&o=1

and a snapshot of the first step:
http://s1279.beta.photobucket.com/user/Li_Liu/media/image1_zps06f589e6.jpeg.html?filters[user]=134822327&filters[recent]=1&filters[publicOnly]=1&sort=1&o=0

Thanks a lot, and have a nice day.

Kind regards,
Li

From: gmx-users-boun...@gromacs.org [gmx-users-boun...@gromacs.org] on behalf 
of Justin Lemkul [jalem...@vt.edu]
Sent: Monday, March 11, 2013 10:06 PM
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] mdrun WARNING and crash

On Monday, March 11, 2013, wrote:

> Hallo Justin,
>
> Thank you for your comments.
> Taking your suggestions, I set nrexcl=1 and commented out the [pairs] section,
> because there is no special case of non-bonded interactions to declare,
> and then tried to see what happens.
> We minimized first with steep and then with cg; both finish very quickly,
> because after around 7000 steps the energy cannot go down any further.
> Then we ran mdrun, and the energy output looks like:
>
>Step   Time Lambda
>   1   10.00.0
>
>Energies (kJ/mol)
> LJ (SR)   Coulomb (SR)  PotentialKinetic En.   Total Energy
> 1.49468e+050.0e+001.49468e+058.73890e+031.58207e+05
> Temperature Pressure (bar)
> 4.38208e+029.93028e+04
>
> Although there is no run-time error this time, we find the output extremely
> weird. For example, viewing it with
> VMD conf.gro traj.xtc
> we see that frame 0 is a homogeneous box, but starting from the first step
> the box becomes a lattice, which is far from what we expect a polymer melt
> system to look like.
>
> The force parameters are taken from the literature, PRL 85(5), 1128 (2000).
> I am still very worried about the format of my input files. Could you please
> help a complete beginner?
>
>
Please provide links to images. This is probably not a big deal as long as
the simulation is actually running, since a triclinic representation of the
unit cell is used.

-Justin



> Thanks a lot.
> Kind regards,
> Li
> 
> From: gmx-users-boun...@gromacs.org  [
> gmx-users-boun...@gromacs.org ] on behalf of Justin Lemkul [
> jalem...@vt.edu ]
> Sent: Thursday, February 28, 2013 3:02 PM
> To: Discussion list for GROMACS users
> Subject: Re: [gmx-users] mdrun WARNING and crash
>
> On 2/28/13 6:59 AM, l@utwente.nl wrote:
> > Hallo Justin,
> >
> > Thank you for you help. I have read the previous discussions on this
> topic, which is very helpful.
> > The link is:
> http://gromacs.5086.n6.nabble.com/What-is-the-purpose-of-the-pairs-section-td5003598.html
> > Well, there are still something I want to make sure, which might be the
> reason of mdrun crash of my system.
> >
> > ###Introduction of system##
> > Linear Polyethylene melt:  each chain contains 16 beads, each bead
> coarse grained 3 monomers. Number of  chain in the box is 64.
> >
> > Force Field##
> > ffbonded.itp
> > [ bondtypes ]
> > ; FENE, Ro = 1.5 sigma and kb = 30 epsilon/sigma^2
> > ;   ij funcb0 (nm) kb (kJ/mol nm^2)
> >   CH2   CH27   0.795   393.
> >
> > ffnonbonded.itp
> > [ atomtypes ]
> > ; epsilon / kB = 443K
> > ;name  at.num  mass (au)   charge   ptype sigma (nm)
>  epsilon (kJ/mol)
> > CH2  142.3   0.000   A   0.5300
>  3.68133
> >
> > [ nonbond_params ]
> >; i  jfunc  sigma   epsilon
> > CH2   CH210.5303.68133
> >
> > [ pairtypes ]
> >;  i  jfunc  sigma   epsilon
> > CH2   CH21  0.533.68133
> >
> > topology##
> > [ defaults ]
> > ; nbfunccomb-rule   gen-pairs   fudgeLJ   fudgeQQ
> >  1  2no  1.0  1.0
> >
> > ; The force field files to be included
> > #include "../forcefield/forcefield.itp"
> >
> > [ moleculetype ]
> > ; name  nrexcl
> > PE  0
> > [atoms]
> > ;   nrtype   resnr  residuatomcgnr  charge
> >   1 CH2   1PE  C   1  0.0
> >   2 CH2   1PE  C   2  0.0
> >   3 CH2   1PE  C   3  0.0
> >   4 CH2   1PE  C   4  0.0
> >  ..
> >   15CH2   1PE  C   15 0.0
> >   16CH2   1PE  C   16 0.0
> >
> > [ bonds ]
> > ;  aiaj  funct   c0   c1
> >  1 2  7  0.795 393.
> >  2 3 7  0.795 

Re: [gmx-users] query for gromacs-4.5.4

2013-03-12 Thread Chaitali Chandratre
Sir,

Thanks for your reply.
But the same script runs on another cluster with approximately the same
configuration, just not on the cluster that I am setting up.

Also, the job hangs after some 16000 steps rather than failing immediately.
Could it be a problem with the configuration, or something else?

Thanks...

Chaitali

On Tue, Mar 12, 2013 at 2:18 PM, Mark Abraham wrote:

> They're just MPI error messages and don't provide any useful GROMACS
> diagnostics. Look in the end of the .log file, stderr and stdout for clues.
>
> One possibility is that your user's system is too small to scale
> effectively. Below about 1000 atoms/core you're wasting your time unless
> you've balanced the load really well. There is a
> simulation-system-dependent point below which fatal GROMACS errors are
> assured.
>
> Mark
>
> On Tue, Mar 12, 2013 at 6:17 AM, Chaitali Chandratre
> wrote:
>
> > Hello Sir,
> >
> > Actually I have been given work to setup gromacs-4.5.4 in our cluster
> which
> > is being used
> > by users.I am not gromacs user and not aware of its internal details.
> > I have got only .tpr file from user and I need to test my installation
> > using that .tpr file.
> >
> > It works fine for 2 nodes 8 processes , 1 node 8 processes.
> >  But when number of processes are equal to 16 it gives segmentation fault
> > and
> >  if number of processes are equal to 32 it gives
> > error message like
> > " HYD_pmcd_pmiserv_send_signal (./pm/pmiserv/pmiserv_cb.c:221): assert
> > (!closed) failed
> >  ui_cmd_cb (./pm/pmiserv/pmiserv_pmci.c:128): unable to send SIGUSR1
> > downstream
> >  HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback
> > returned error status
> >  HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:388): error
> > waiting for event
> > [ main (./ui/mpich/mpiexec.c:718): process manager error waiting for
> > completion"
> >
> > I am not clear like whether problem is there in my installation or what?
> >
> > Thanks and Regards,
> >Chaitalij
> >
> > On Wed, Mar 6, 2013 at 5:41 PM, Justin Lemkul  wrote:
> >
> > >
> > >
> > > On 3/6/13 4:20 AM, Chaitali Chandratre wrote:
> > >
> > >> Dear Sir ,
> > >>
> > >> I am new to this installation and setup area. I need some information
> > for
> > >> -stepout option for
> > >>
> > >
> > > What more information do you need?
> > >
> > >
> > >  mdrun_mpi and also probable causes for segmentation fault in
> > >>  gromacs-4.5.4.
> > >> (my node having 64 GB mem running with 16 processes, nsteps =
> 2000)
> > >>
> > >>
> > > There are too many causes to name.  Please consult
> > > http://www.gromacs.org/Documentation/Terminology/Blowing_Up.
> > >  If you need further help, please be more specific, including a
> > description
> > > of the system, steps taken to minimize and/or equilibrate it, and any
> > > complete .mdp file(s) that you are using.
> > >
> > > -Justin
> > >
> > > --
> > > ========================================
> > >
> > > Justin A. Lemkul, Ph.D.
> > > Research Scientist
> > > Department of Biochemistry
> > > Virginia Tech
> > > Blacksburg, VA
> > > jalemkul[at]vt.edu | (540) 231-9080
> > > http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
> > >
> > > ========================================

Re: [gmx-users] query for gromacs-4.5.4

2013-03-12 Thread Mark Abraham
They're just MPI error messages and don't provide any useful GROMACS
diagnostics. Look in the end of the .log file, stderr and stdout for clues.

One possibility is that your user's system is too small to scale
effectively. Below about 1000 atoms/core you're wasting your time unless
you've balanced the load really well. There is a
simulation-system-dependent point below which fatal GROMACS errors are
assured.
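
For example (file names here are only placeholders), check the tail of the
log from the failing run and compare with a run on fewer processes:

tail -n 60 md_np16.log
mpirun -np 8 mdrun_mpi -s user.tpr -deffnm md_np8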

Mark

On Tue, Mar 12, 2013 at 6:17 AM, Chaitali Chandratre
wrote:

> Hello Sir,
>
> Actually I have been given work to setup gromacs-4.5.4 in our cluster which
> is being used
> by users.I am not gromacs user and not aware of its internal details.
> I have got only .tpr file from user and I need to test my installation
> using that .tpr file.
>
> It works fine for 2 nodes 8 processes , 1 node 8 processes.
>  But when number of processes are equal to 16 it gives segmentation fault
> and
>  if number of processes are equal to 32 it gives
> error message like
> " HYD_pmcd_pmiserv_send_signal (./pm/pmiserv/pmiserv_cb.c:221): assert
> (!closed) failed
>  ui_cmd_cb (./pm/pmiserv/pmiserv_pmci.c:128): unable to send SIGUSR1
> downstream
>  HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback
> returned error status
>  HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:388): error
> waiting for event
> [ main (./ui/mpich/mpiexec.c:718): process manager error waiting for
> completion"
>
> I am not clear like whether problem is there in my installation or what?
>
> Thanks and Regards,
>Chaitalij
>
> On Wed, Mar 6, 2013 at 5:41 PM, Justin Lemkul  wrote:
>
> >
> >
> > On 3/6/13 4:20 AM, Chaitali Chandratre wrote:
> >
> >> Dear Sir ,
> >>
> >> I am new to this installation and setup area. I need some information
> for
> >> -stepout option for
> >>
> >
> > What more information do you need?
> >
> >
> >  mdrun_mpi and also probable causes for segmentation fault in
> >>  gromacs-4.5.4.
> >> (my node having 64 GB mem running with 16 processes, nsteps = 2000)
> >>
> >>
> > There are too many causes to name.  Please consult
> > http://www.gromacs.org/Documentation/Terminology/Blowing_Up.
> >  If you need further help, please be more specific, including a
> description
> > of the system, steps taken to minimize and/or equilibrate it, and any
> > complete .mdp file(s) that you are using.
> >
> > -Justin
> >
> > --
> > ========================================
> >
> > Justin A. Lemkul, Ph.D.
> > Research Scientist
> > Department of Biochemistry
> > Virginia Tech
> > Blacksburg, VA
> > jalemkul[at]vt.edu | (540) 231-9080
> > http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
> >
> > ========================================