Re: [gmx-users] Installation GROMACS UBUNTU
On Fri, Dec 24, 2010 at 3:45 PM, Sergio Manzetti <sergio.manze...@vestforsk.no> wrote:
> Hi, thanks, but if I use make it says no Makefile found...

Yes, the Makefile should be generated by configure. Check the last few lines that configure printed to your screen, and the configure log file, to see what went wrong. If you don't want to work out how configure and make operate, I suggest you ask an experienced Linux user for help.

Terry

> On Thu, Dec 23, 2010 at 10:46 PM, Mark Abraham wrote:
>> On 23/12/2010 9:12 PM, Sergio Manzetti wrote:
>>> Dear Users, I am unable to get past the first step of ./configure. This
>>> step works, but when typing "make" it says:
>>>
>>> no targets specified and no makefile found.
>>
>> That suggests configure did not work.
>>
>>> Should it be like this, or is it compulsory to use cmake?
>>
>> cmake is not compulsory.
>>
>> Mark
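For anyone hitting the same problem, a minimal sketch of how to confirm whether configure actually succeeded before running make (the install prefix is illustrative):

  # re-run configure and keep a transcript; a successful run ends by writing Makefile
  ./configure --prefix=$HOME/gromacs 2>&1 | tee configure.out

  # inspect the end of the transcript and of config.log for the first real error
  tail -n 30 configure.out
  tail -n 50 config.log

  # only if a Makefile now exists does make have anything to build
  ls Makefile && make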
[gmx-users] reading tpx file version 73 with version 58 program
Hi Mr. Justin,

Thanks a lot for your help! I am now able to generate system_inflated.gro, but while running the energy minimization according to the tutorial I get this error:

grompp -f minim.mdp -c pope.gro -p topol_pope.top -o em1.tpr
mdrun -v -s em1 -o em1 -c after_em -g emlog

Fatal error:
reading tpx file (em1.tpr) version 73 with version 58 program

with regards:
shikha
IIIT-A
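For context, this error typically means that grompp and mdrun are being taken from two different GROMACS installations (a newer grompp wrote the .tpr, an older mdrun is trying to read it). A sketch of how to check, with an illustrative installation path:

  # both tools should resolve to the same GROMACS installation
  which grompp
  which mdrun

  # if they differ, call both from the same installation explicitly
  /usr/local/gromacs/bin/grompp -f minim.mdp -c pope.gro -p topol_pope.top -o em1.tpr
  /usr/local/gromacs/bin/mdrun -v -s em1 -o em1 -c after_em -g emlog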
Re: [gmx-users] Installation GROMACS UBUNTU
Hi, thanks, but if I use make it says no Makefile found...

On Thu, Dec 23, 2010 at 10:46 PM, Mark Abraham wrote:
> On 23/12/2010 9:12 PM, Sergio Manzetti wrote:
>> Dear Users, I am unable to get past the first step of ./configure. This
>> step works, but when typing "make" it says:
>>
>> no targets specified and no makefile found.
>
> That suggests configure did not work.
>
>> Should it be like this, or is it compulsory to use cmake?
>
> cmake is not compulsory.
>
> Mark
Re: [gmx-users] Fatal error (g_polystat)
--
Chandan kumar Choudhury
NCL, Pune
INDIA

On Fri, Dec 24, 2010 at 3:13 AM, Mark Abraham wrote:
> On 24/12/2010 4:58 AM, Chandan Choudhury wrote:
>> Hello all!
>>
>> I have a 30 ns trajectory in two parts (0-20 and 20-30 ns). I am using
>> Gromacs 4.0.7. I concatenated using trjcat (echo 2 | trjconv -s md10-30.tpr
>> -f 0-30.trr -o pbc_whole.trr -n index_grdf.ndx -nice 0).
>
> That looks like subset creation, not concatenation. Perhaps you should
> clarify what you think should be in all these files, and confirm with
> gmxcheck.

Sorry, I copied the wrong command. The correct command was

trjcat -f 0-20.trr 20-30/20-30.trr -o analysis/0-30/0-30.trr -nice 0

gmxcheck -f pbc_whole.trr outputs:

Checking file pbc_whole.trr
trn version: GMX_trn_file (single precision)
Reading frame       0 time    0.000
# Atoms  162
Last frame         30 time    3.002

Item        #frames Timestep (ps)
Step             31    0.1
Time             31    0.1
Lambda           31    0.1
Coords           31    0.1
Velocities       31    0.1
Forces            0
Box              31    0.1

>> Then I converted the concatenated trajectory into a PBC-whole trajectory
>> using trjconv (echo 2 | trjconv -f 0-30.trr -s md10-30.tpr -o pbc_whole.trr
>> -n index_grdf.ndx -pbc whole). The problem comes when I try to use the
>> g_polystat utility (echo 2 | g_polystat -s md10-30.tpr -f pbc_whole.trr
>> -n index_grdf.ndx -o polystat_pbc.dat -xvgr). The error message it
>> produces is:
>>
>> trn version: GMX_trn_file (single precision)
>> Reading frame       0 time    0.000
>> ---
>> Program g_dist, VERSION 4.0.7
>> Source code file: mshift.c, line: 103
>>
>> Fatal error:
>> Molecule in topology has atom numbers below and above natoms (162).
>> You are probably trying to use a trajectory which does not match the
>> first 162 atoms of the run input file.
>> You can make a matching run input file with tpbconv.
>> ---
>
> This means the contents of at least two of the .trr, .tpr and .ndx aren't
> describing the same thing.
>
> Mark
>
>> The same error message erupts when I try to use g_dist (echo "5 24" |
>> g_dist -f pbc_whole.trr -s md10-30.tpr -n index_grdf.ndx -o dist_N-N.xvg).
>>
>> But when I execute g_mindist (echo "5 24" | g_mindist -f pbc_whole.trr
>> -s md10-30.tpr -n index_grdf.ndx -o mindist_N-N.xvg), it works without any
>> error message.
>>
>> I can't figure out the probable cause. Please help.
>>
>> --
>> Chandan kumar Choudhury
>> NCL, Pune
>> INDIA
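A sketch of the route the error message itself suggests, assuming tpbconv in 4.0.7 accepts the index file for subsetting and that group 2 is the group written to pbc_whole.trr (names follow the commands above):

  # make a run input file that contains only the atoms present in the trajectory
  echo 2 | tpbconv -s md10-30.tpr -n index_grdf.ndx -o subset.tpr

  # analyse the subset trajectory against the matching .tpr
  echo 2 | g_polystat -s subset.tpr -f pbc_whole.trr -o polystat_pbc.dat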
Re: [gmx-users] Optimization of the box size during an energy minimization?
On Fri, Dec 24, 2010 at 5:36 AM, Mark Abraham wrote:
> On 24/12/2010 5:17 AM, MyLinkka wrote:
>> Does anybody know if it is possible to optimize the box size during an
>> energy minimization in Gromacs?
>
> Optimize for what criterion?
>
>> Can I use pressure coupling, if that is possible?

Do you mean energy minimization with pressure coupling?

Terry

> Sure, that's in the manual and covered in tutorials.
>
> Mark
>
>> Is there a workaround if there is no direct way?
>>
>> Thanks!
>> Ting
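For readers looking for the workaround Ting asked about: pressure coupling is applied by the MD integrators rather than by the minimizers, so the box is usually relaxed with a short NPT MD run after minimization. A minimal sketch of the relevant .mdp lines (all values are illustrative, not recommendations):

  integrator       = md
  dt               = 0.002
  nsteps           = 50000        ; ~100 ps of box relaxation
  tcoupl           = berendsen
  tc-grps          = System
  tau_t            = 0.1
  ref_t            = 300
  pcoupl           = berendsen    ; isotropic coupling rescales the box vectors
  pcoupltype       = isotropic
  tau_p            = 1.0
  ref_p            = 1.0
  compressibility  = 4.5e-5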
Re: [gmx-users] dssp problem
On Fri, Dec 24, 2010 at 11:38 AM, mustafa bilsel wrote:
> Hi,
>
> I have a problem with the dssp programme. I have the programme in /home/m/DSSP.
> When I write
>
> export DSSP=/usr/local/bin
>
> and then check /usr/local/bin, I can't see dssp. How can I handle this problem?
> I have checked the gromacs website, where the commands
>
> export DSSP=/path/to/dssp
> setenv DSSP /path/to/dssp
>
> are given. Could you write the export DSSP=/path/to/dssp command for my case?

You should issue:

export DSSP=/home/m/DSSP

See, "/home/m/DSSP" is actually the "/path/to/dssp" in your case.

Terry

> Also, I use Ubuntu 10.04; setenv is not available, and if I try to install it
> I get a "package not found" message. What should I do about it?
>
> Best wishes
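A short sketch of the whole setup, assuming /home/m/DSSP is the dssp executable itself (if it is a directory, append the binary name, e.g. /home/m/DSSP/dssp); topol.tpr and traj.xtc are placeholder file names. Note that export is the bash syntax (bash is the default shell on Ubuntu), while setenv is the csh/tcsh equivalent, so there is no "setenv" package to install:

  export DSSP=/home/m/DSSP      # point at the dssp executable
  ls -l "$DSSP"                 # sanity check: this should list one executable file

  # then run the GROMACS secondary-structure tool as usual
  do_dssp -s topol.tpr -f traj.xtc -o ss.xpm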
Re: [gmx-users] box size & continuing the previous energy minimization
On Fri, Dec 24, 2010 at 8:12 AM, Amit Choubey wrote:
> On Thu, Dec 23, 2010 at 2:41 PM, mustafa bilsel wrote:
>> Hi,
>>
>> 1. How can I find out the box shape and size of a completed simulation?
>
> Check the end of the resulting gro file.
>
>> 2. I want to continue the previously completed energy minimization by
>> increasing nsteps. How can I do this?
>
> Increase nsteps, and maybe the tolerance too.

Actually, you need to start from the output .gro file, and *decrease* the
"emtol" value, if your previous minimization converged.

Terry

>> Best wishes
>> Mustafa
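A minimal sketch of both steps (file names and values are illustrative):

  # 1. the box vectors are the last line of the .gro file
  tail -n 1 confout.gro

  # 2. restart minimization from the minimized structure with a tighter emtol
  #    (and/or more steps) in the .mdp, e.g.
  #       emtol  = 100      ; was e.g. 1000 kJ/mol/nm
  #       nsteps = 50000
  grompp -f minim.mdp -c confout.gro -p topol.top -o em2.tpr
  mdrun -v -deffnm em2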
[gmx-users] dssp problem
Hi,

I have a problem with the dssp programme. I have the programme in /home/m/DSSP.
When I write

export DSSP=/usr/local/bin

and then check /usr/local/bin, I can't see dssp. How can I handle this problem?
I have checked the gromacs website, where the commands

export DSSP=/path/to/dssp
setenv DSSP /path/to/dssp

are given. Could you write the export DSSP=/path/to/dssp command for my case?

Also, I use Ubuntu 10.04; setenv is not available, and if I try to install it I
get a "package not found" message. What should I do about it?

Best wishes
[gmx-users] RE: "moleculetype DRG is redefined"
Hey - I need to see "rrg.itp" and "drg.itp" to understand the situation.

Dr. Vitaly V. Chaban
Department of Chemistry, University of Rochester
Rochester, NY 14627-0216, United States of America
v.cha...@rochester.edu | vvcha...@gmail.com

> Respected Sir,
>
> I am HARESH AJANI from CADILA HEALTHCARE LTD, INDIA. I am using gromacs at my
> Department of Bioinformatics. I am going to simulate an enzyme complex with
> one ligand at the active site and another ligand at the allosteric site.
>
> When I ran the MD simulation with the ligand at the active site only, it was
> fine. But when I run the simulation with both ligands, I get "moleculetype
> DRG is redefined".
>
> My topology looks like this:
>
> #include "ffG43a1.itp"
> #include "rrg.itp"
> #include "drg.itp"
>
> and at the end it is like this:
>
> Protein   1
> RRG       1
> DRG       1
> SOL       10
> SOL       17662
>
> I cannot understand why GROMACS complains, because the two ligands are far
> apart. Or is there something wrong with my pdb file? I am generating the
> coordinates and topology using the PRODRG server.
>
> Please help me to find out the problem.
>
> -
> HARESH AJANI
> ajani_har...@yahoo.co.in
Re: [gmx-users] box size & continuing the previous energy minimization
On Thu, Dec 23, 2010 at 2:41 PM, mustafa bilsel wrote:
> Hi,
>
> 1. How can I find out the box shape and size of a completed simulation?

Check the end of the resulting gro file.

> 2. I want to continue the previously completed energy minimization by
> increasing nsteps. How can I do this?

Increase nsteps, and maybe the tolerance too.

> Best wishes
> Mustafa
[gmx-users] box size & continuing the previous energy minimization
Hi,

1. How can I find out the box shape and size of a completed simulation?

2. I want to continue the previously completed energy minimization by
increasing nsteps. How can I do this?

Best wishes
Mustafa
Re: [gmx-users] Installation GROMACS UBUNTU
On 23/12/2010 9:12 PM, Sergio Manzetti wrote:
> Dear Users, I am unable to get past the first step of ./configure. This step
> works, but when typing "make" it says:
>
> no targets specified and no makefile found.

That suggests configure did not work.

> Should it be like this, or is it compulsory to use cmake?

cmake is not compulsory.

Mark
Re: AW: [gmx-users] mdrun mpi segmentation fault in high load situation
On 24/12/2010 8:34 AM, Mark Abraham wrote:
> On 24/12/2010 3:28 AM, Wojtyczka, André wrote:
>>> On 23/12/2010 10:01 PM, Wojtyczka, André wrote:
>>>> Dear Gromacs Enthusiasts.
>>>>
>>>> I am experiencing problems with mdrun_mpi (4.5.3) on a Nehalem cluster.
>>>>
>>>> Problem:
>>>> This runs fine:
>>>> mpiexec -np 72 /../mdrun_mpi -pd -s full031K_mdrun_ions.tpr
>>>>
>>>> This produces a segmentation fault:
>>>> mpiexec -np 128 /../mdrun_mpi -pd -s full031K_mdrun_ions.tpr
>>>
>>> Unless you know you need it, don't use -pd. DD will be faster and is
>>> probably better bug-tested too.
>>>
>>> Mark
>>
>> Hi Mark,
>>
>> thanks for the push in that direction, but I am in the unfortunate situation
>> where I really need -pd, because I have long bonds, which is why my large
>> system can be decomposed into only a small number of domains.
>
> I'm not sure that PD has any advantage here. From memory it has to create a
> 128x1x1 grid, and you can direct that with DD also.

See mdrun -h -hidden for -dd.

Mark

> The contents of your .log file will be far more helpful than stdout in
> diagnosing what condition led to the problem.
>
> Mark
>
>> [...]
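A minimal sketch of the DD alternative Mark refers to (binary path and .tpr name as used in the thread; the explicit grid mirrors the 128x1x1 layout mentioned above):

  # let domain decomposition choose the grid itself
  mpiexec -np 128 /../mdrun_mpi -s full031K_mdrun_ions.tpr

  # or force a 128x1x1 decomposition explicitly, mimicking what -pd would do
  mpiexec -np 128 /../mdrun_mpi -dd 128 1 1 -s full031K_mdrun_ions.tpr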
Re: [gmx-users] Fatal error (g_polystat)
On 24/12/2010 4:58 AM, Chandan Choudhury wrote:
> Hello all!
>
> I have a 30 ns trajectory in two parts (0-20 and 20-30 ns). I am using Gromacs
> 4.0.7. I concatenated using trjcat (echo 2 | trjconv -s md10-30.tpr -f
> 0-30.trr -o pbc_whole.trr -n index_grdf.ndx -nice 0).

That looks like subset creation, not concatenation. Perhaps you should clarify
what you think should be in all these files, and confirm with gmxcheck.

> Then I converted the concatenated trajectory into a PBC-whole trajectory using
> trjconv (echo 2 | trjconv -f 0-30.trr -s md10-30.tpr -o pbc_whole.trr -n
> index_grdf.ndx -pbc whole). The problem comes when I try to use the
> g_polystat utility (echo 2 | g_polystat -s md10-30.tpr -f pbc_whole.trr
> -n index_grdf.ndx -o polystat_pbc.dat -xvgr). The error message it produces is:
>
> trn version: GMX_trn_file (single precision)
> Reading frame       0 time    0.000
> ---
> Program g_dist, VERSION 4.0.7
> Source code file: mshift.c, line: 103
>
> Fatal error:
> Molecule in topology has atom numbers below and above natoms (162).
> You are probably trying to use a trajectory which does not match the
> first 162 atoms of the run input file.
> You can make a matching run input file with tpbconv.
> ---

This means the contents of at least two of the .trr, .tpr and .ndx aren't
describing the same thing.

Mark

> The same error message erupts when I try to use g_dist (echo "5 24" |
> g_dist -f pbc_whole.trr -s md10-30.tpr -n index_grdf.ndx -o dist_N-N.xvg).
>
> But when I execute g_mindist (echo "5 24" | g_mindist -f pbc_whole.trr
> -s md10-30.tpr -n index_grdf.ndx -o mindist_N-N.xvg), it works without any
> error message.
>
> I can't figure out the probable cause. Please help.
>
> --
> Chandan kumar Choudhury
> NCL, Pune
> INDIA
Re: [gmx-users] Optimization of the box size during an energy minimization?
On 24/12/2010 5:17 AM, MyLinkka wrote:
> Does anybody know if it is possible to optimize the box size during an energy
> minimization in Gromacs?

Optimize for what criterion?

> Can I use pressure coupling, if that is possible?

Sure, that's in the manual and covered in tutorials.

Mark

> Is there a workaround if there is no direct way?
>
> Thanks!
> Ting
Re: AW: [gmx-users] mdrun mpi segmentation fault in high load situation
On 24/12/2010 3:28 AM, Wojtyczka, André wrote:
>> On 23/12/2010 10:01 PM, Wojtyczka, André wrote:
>>> Dear Gromacs Enthusiasts.
>>>
>>> I am experiencing problems with mdrun_mpi (4.5.3) on a Nehalem cluster.
>>>
>>> Problem:
>>> This runs fine:
>>> mpiexec -np 72 /../mdrun_mpi -pd -s full031K_mdrun_ions.tpr
>>>
>>> This produces a segmentation fault:
>>> mpiexec -np 128 /../mdrun_mpi -pd -s full031K_mdrun_ions.tpr
>>
>> Unless you know you need it, don't use -pd. DD will be faster and is
>> probably better bug-tested too.
>>
>> Mark
>
> Hi Mark,
>
> thanks for the push in that direction, but I am in the unfortunate situation
> where I really need -pd, because I have long bonds, which is why my large
> system can be decomposed into only a small number of domains.

I'm not sure that PD has any advantage here. From memory it has to create a
128x1x1 grid, and you can direct that with DD also.

The contents of your .log file will be far more helpful than stdout in
diagnosing what condition led to the problem.

Mark

> [...]
Re: [gmx-users] amber convert gromacs input files
Hi,

Not sure exactly what you plan to simulate, but here are a couple of potential
pitfalls:

Does acpype call amb2gmx.pl, or is it new code that does the conversion? If it
is an amb2gmx.pl call, I'd check the torsions on the NAc group, if you have
one; they didn't get translated when I used it.

When using the Amber ports, be careful about using default index groups like
"Protein" or "C-alpha", as they won't contain atoms from residues like LYP that
are named differently in the ports.

Also, you'll want to set fudge to 1.0 in amber99sb.itp, or wherever it is set
(can't check this atm), if simulating the sugar alone. This is due to
differences in the way AMBER and GLYCAM are parametrized (if you are
interested, it is differences in 1-4 scaling).

There may be other issues I'm not aware of yet. :)

All the best,
Oliver

On 23 December 2010 18:13, Alan Wilter Sousa da Silva wrote:
> Have a look at acpype.googlecode.com
>
> Alan
>
> 2010/12/23 gromacs564
>> Hi,
>>
>> I have obtained some files (.top, .crd, .pdb) for a disaccharide via the
>> GLYCAM web server (GLYCAM06 force field, included in AMBER), but I cannot
>> convert these AMBER files to GROMACS file formats.
>>
>> Can anyone help me to convert these (AMBER) files to GROMACS input files
>> (top or itp, gro)?
>>
>> Many thanks!
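For reference, the "fudge" Oliver mentions lives in the [ defaults ] directive of the force-field .itp. A sketch of the change, assuming the file uses the standard GROMACS [ defaults ] layout (the 0.5/0.8333 values are the usual Amber 1-4 scaling factors):

  [ defaults ]
  ; nbfunc   comb-rule   gen-pairs   fudgeLJ   fudgeQQ
  ; Amber-style 1-4 scaling (original):
  ;   1        2           yes         0.5       0.8333
  ; GLYCAM-style scaling for the sugar alone, as suggested above:
      1        2           yes         1.0       1.0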
Re: [gmx-users] Grompp error message
On 24/12/2010 7:01 AM, Sergio Manzetti wrote:
> Dear users,
>
> I made a topology, but when I grompp for EM I get this weird message:
>
> Fatal error:
> Syntax error - File forcefield.itp, line 12
> Last line read:
> 'Buckingham 1 no 1.0 1.0'
> Found a second defaults directive.
>
> I attached the topology here along with the gro file. Does anybody recognize
> this message?

Yes. See http://www.gromacs.org/Documentation/Errors

Mark
[gmx-users] Grompp error message
Dear users,

I made a topology, but when I grompp for EM I get this weird message:

Fatal error:
Syntax error - File forcefield.itp, line 12
Last line read:
'Buckingham 1 no 1.0 1.0'
Found a second defaults directive.

I attached the topology here along with the gro file. Does anybody recognize
this message?

Best wishes
Sergio

bzp.itp  Description: Binary data
bzp.gro  Description: Binary data
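For context, grompp raises "Found a second defaults directive" when it reads a [ defaults ] directive more than once, typically because an included molecule .itp carries its own [ defaults ] block or a force-field file is included twice. A sketch of a layout that avoids it, using the file names from this thread (whether the attached bzp.itp actually contains a second [ defaults ] block is an assumption, since the attachment is not reproduced here, and the molecule name BZP is illustrative):

  ; topol.top -- only the force-field include may define [ defaults ]
  #include "forcefield.itp"     ; contains the single [ defaults ] directive

  ; molecule-specific .itp files should start at [ moleculetype ];
  ; remove any [ defaults ] (and duplicate [ atomtypes ]) block from them
  #include "bzp.itp"

  [ system ]
  BZP in vacuum                 ; illustrative title

  [ molecules ]
  BZP    1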
[gmx-users] Optimization of the box size during an energy minimization?
Does anybody know if it is possible to optimize the box size during an energy
minimization in Gromacs? Can I use pressure coupling, if that is possible? Is
there a workaround if there is no direct way?

Thanks!
Ting
Re: [gmx-users] amber convert gromacs input files
Have a look at acpype.googlecode.com

Alan

2010/12/23 gromacs564
> Hi,
>
> I have obtained some files (.top, .crd, .pdb) for a disaccharide via the
> GLYCAM web server (GLYCAM06 force field, included in AMBER), but I cannot
> convert these AMBER files to GROMACS file formats.
>
> Can anyone help me to convert these (AMBER) files to GROMACS input files
> (top or itp, gro)?
>
> Many thanks!

--
Alan Wilter SOUSA da SILVA, D.Sc.
Bioinformatician, UniProt - PANDA, EBI-EMBL
CB10 1SD, Hinxton, Cambridge, UK
+44 1223 49 4588
[gmx-users] Fatal error (g_polystat)
Hello all!

I have a 30 ns trajectory in two parts (0-20 and 20-30 ns). I am using Gromacs
4.0.7. I concatenated using trjcat (echo 2 | trjconv -s md10-30.tpr -f 0-30.trr
-o pbc_whole.trr -n index_grdf.ndx -nice 0). Then I converted the concatenated
trajectory into a PBC-whole trajectory using trjconv (echo 2 | trjconv -f
0-30.trr -s md10-30.tpr -o pbc_whole.trr -n index_grdf.ndx -pbc whole). The
problem comes when I try to use the g_polystat utility (echo 2 | g_polystat -s
md10-30.tpr -f pbc_whole.trr -n index_grdf.ndx -o polystat_pbc.dat -xvgr). The
error message it produces is:

trn version: GMX_trn_file (single precision)
Reading frame       0 time    0.000
---
Program g_dist, VERSION 4.0.7
Source code file: mshift.c, line: 103

Fatal error:
Molecule in topology has atom numbers below and above natoms (162).
You are probably trying to use a trajectory which does not match the first 162
atoms of the run input file.
You can make a matching run input file with tpbconv.
---

The same error message erupts when I try to use g_dist (echo "5 24" | g_dist -f
pbc_whole.trr -s md10-30.tpr -n index_grdf.ndx -o dist_N-N.xvg).

But when I execute g_mindist (echo "5 24" | g_mindist -f pbc_whole.trr -s
md10-30.tpr -n index_grdf.ndx -o mindist_N-N.xvg), it works without any error
message.

I can't figure out the probable cause. Please help.

--
Chandan kumar Choudhury
NCL, Pune
INDIA
AW: [gmx-users] mdrun mpi segmentation fault in high load situation
> On 23/12/2010 10:01 PM, Wojtyczka, André wrote:
>> Dear Gromacs Enthusiasts.
>>
>> I am experiencing problems with mdrun_mpi (4.5.3) on a Nehalem cluster.
>>
>> Problem:
>> This runs fine:
>> mpiexec -np 72 /../mdrun_mpi -pd -s full031K_mdrun_ions.tpr
>>
>> This produces a segmentation fault:
>> mpiexec -np 128 /../mdrun_mpi -pd -s full031K_mdrun_ions.tpr
>
> Unless you know you need it, don't use -pd. DD will be faster and is probably
> better bug-tested too.
>
> Mark

Hi Mark,

thanks for the push in that direction, but I am in the unfortunate situation
where I really need -pd, because I have long bonds, which is why my large
system can be decomposed into only a small number of domains.

>> [...]
[gmx-users] Re: "moleculetype DRG is redefined"
Please keep all Gromacs-related correspondence on the gmx-users list. I am not
a private help service. I have CC'ed this message to the list and I would ask
that all further correspondence be sent to the list. See comments below.

Quoting AJANI HARESH:

> Respected Sir,
>
> I am HARESH AJANI from CADILA HEALTHCARE LTD, INDIA. I am using gromacs at my
> Department of Bioinformatics. I am going to simulate an enzyme complex with
> one ligand at the active site and another ligand at the allosteric site.
>
> When I ran the MD simulation with the ligand at the active site only, it was
> fine. But when I run the simulation with both ligands, I get "moleculetype
> DRG is redefined".
>
> My topology looks like this:
>
> #include "ffG43a1.itp"
> #include "rrg.itp"
> #include "drg.itp"
>
> and at the end it is like this:
>
> Protein   1
> RRG       1
> DRG       1
> SOL       10
> SOL       17662
>
> I cannot understand why GROMACS complains, because the two ligands are far
> apart. Or is there something wrong with my pdb file? I am generating the
> coordinates and topology using the PRODRG server.

The problem has nothing to do with your coordinate file. PRODRG, by default,
names all of its [moleculetypes] "DRG," so you need to assign a proper name in
this directive. Presumably, you want one of them to be RRG, but it is not set
as such in rrg.itp.

As always, beware the quality of PRODRG topologies:
http://www.gromacs.org/Downloads/Related_Software/PRODRG#Tips

-Justin

> Please help me to find out the problem.
>
> -
> HARESH AJANI
> ajani_har...@yahoo.co.in

--
Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
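A sketch of the fix Justin describes, assuming rrg.itp was also produced by PRODRG; only the [ moleculetype ] name changes, and the nrexcl value shown is illustrative. The new name must match the entry used under [ molecules ] in topol.top:

  ; in rrg.itp -- before (as written by PRODRG):
  [ moleculetype ]
  ; Name     nrexcl
  DRG        3

  ; after:
  [ moleculetype ]
  ; Name     nrexcl
  RRG        3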
Re: [gmx-users] Unknown bond_atomtype MNH3
Quoting shikha agarwal:

> Hello Justin,
>
> Thank you for your reply.
>
> I modified ffG53a6bn_lipid.itp

This is not ffG53a6nb_lipid.itp; this is simply a modified lipid.itp to which
it looks like you've added a line for OW (which contains the wrong mass). The
instructions in the tutorial direct you to copy the relevant information from
lipid.itp into ffG53a6nb.itp to create ffG53a6nb_lipid.itp. So copy the
[atomtypes] from lipid.itp into the [atomtypes] from ffG53a6nb.itp, etc.

-Justin

> [ atomtypes ]
> ;name   mass      charge  ptype  c6            c12
>  LO     15.9994    0.000   A     2.36400e-03   1.59000e-06  ; carbonyl O, OPLS
>  LOM    15.9994    0.000   A     2.36400e-03   1.59000e-06  ; carboxyl O, OPLS
>  LNL    14.0067    0.000   A     3.35300e-03   3.95100e-06  ; Nitrogen, OPLS
>  LC     12.0110    0.000   A     4.88800e-03   1.35900e-05  ; Carbonyl C, OPLS
>  LH1    13.0190    0.000   A     4.03100e-03   1.21400e-05  ; CH1, OPLS
>  LH2    14.0270    0.000   A     7.00200e-03   2.48300e-05  ; CH2, OPLS
>  LP     30.9738    0.000   A     9.16000e-03   2.50700e-05  ; phosphor, OPLS
>  LOS    15.9994    0.000   A     2.56300e-03   1.86800e-06  ; ester oxygen, OPLS
>  LP2    14.0270    0.000   A     5.87400e-03   2.26500e-05  ; RB CH2, Bergers LJ
>  LP3    15.0350    0.000   A     8.77700e-03   3.38500e-05  ; RB CH3, Bergers LJ
>  LC3    15.0350    0.000   A     9.35700e-03   3.60900e-05  ; CH3, OPLS
>  LC2    14.0270    0.000   A     5.94700e-03   1.79000e-05  ; CH2, OPLS
>  OW      0.000     0.000   A     2.617345-03   2.63412e-06
>
> [ nonbond_params ]
> ; i     j     func  c6            c12
>  LO     LO     1    2.36400e-03   1.59000e-06
>  LO     LOM    1    2.36400e-03   1.59000e-06
>  LO     LNL    1    2.81600e-03   2.50600e-06
>  LO     LC     1    3.39900e-03   4.64800e-06
>  LO     LH1    1    3.08700e-03   4.39300e-06
>  LO     LH2    1    4.06900e-03   6.28300e-06
>  LO     LP     1    4.65300e-03   6.31300e-06
>  LO     LOS    1    2.46100e-03   1.72300e-06
>  LO     LP2    1    3.72700e-03   6.0e-06
>  LO     LP3    1    4.55500e-03   7.33500e-06
>  LO     LC3    1    4.70300e-03   7.57400e-06
>  LO     LC2    1    3.74900e-03   5.33500e-06
>  LOM    LOM    1    2.36400e-03   1.59000e-06
>  LOM    LNL    1    2.81600e-03   2.50600e-06
>  LOM    LC     1    3.39900e-03   4.64800e-06
>  LOM    LH1    1    3.08700e-03   4.39300e-06
>  LOM    LH2    1    4.06900e-03   6.28300e-06
>  LOM    LP     1    4.65300e-03   6.31300e-06
>  LOM    LOS    1    2.46100e-03   1.72300e-06
>  LOM    LP2    1    3.72700e-03   6.0e-06
>  LOM    LP3    1    4.55500e-03   7.33500e-06
>  LOM    LC3    1    4.70300e-03   7.57400e-06
>  LOM    LC2    1    3.74900e-03   5.33500e-06
>  LNL    LNL    1    3.35300e-03   3.95100e-06
>  LNL    LC     1    4.04900e-03   7.32800e-06
>  LNL    LH1    1    3.67700e-03   6.92500e-06
>  LNL    LH2    1    4.84600e-03   9.90500e-06
>  LNL    LP     1    5.54200e-03   9.95300e-06
>  LNL    LOS    1    2.93200e-03   2.71700e-06
>  LNL    LP2    1    4.43800e-03   9.46000e-06
>  LNL    LP3    1    5.42500e-03   1.15600e-05
>  LNL    LC3    1    5.60100e-03   1.19400e-05
>  LNL    LC2    1    4.46600e-03   8.41100e-06
>  LC     LC     1    4.88800e-03   1.35900e-05
>  LC     LH1    1    4.43900e-03   1.28400e-05
>  LC     LH2    1    5.85100e-03   1.83700e-05
>  LC     LP     1    6.69100e-03   1.84600e-05
>  LC     LOS    1    3.53900e-03   5.03800e-06
>  LC     LP2    1    5.35900e-03   1.75400e-05
>  LC     LP3    1    6.55000e-03   2.14500e-05
>  LC     LC3    1    6.76300e-03   2.21500e-05
>  LC     LC2    1    5.39100e-03   1.56000e-05
>  LH1    LH1    1    4.03100e-03   1.21400e-05
>  LH1    LH2    1    5.31300e-03   1.73600e-05
>  LH1    LP     1    6.07700e-03   1.74400e-05
>  LH1    LOS    1    3.21400e-03   4.76100e-06
>  LH1    LP2    1    4.86600e-03   1.65800e-05
>  LH1    LP3    1    5.94800e-03   2.02700e-05
>  LH1    LC3    1    6.14200e-03   2.09300e-05
>  LH1    LC2    1    4.89600e-03   1.47400e-05
>  LH2    LH2    1    7.00200e-03   2.48300e-05
>  LH2    LP     1    8.00900e-03   2.49500e-05
>  LH2    LOS    1    4.23600e-03   6.81000e-06
>  LH2    LP2    1    6.41400e-03   2.37100e-05
>  LH2    LP3    1    7.83900e-03   2.89900e-05
>  LH2    LC3    1    8.09500e-03   2.99300e-05
>  LH2    LC2    1    6.45300e-03   2.10800e-05
>  LP     LP     1    9.16000e-03   2.50700e-05
>  LP     LOS    1    4.84500e-03   6.84200e-06
>  LP     LP2    1    7.33500e-03   2.38300e-05
>  LP     LP3    1    8.96600e-03   2.91300e-05
>  LP     LC3    1    9.25800e-03   3.00800e-05
>  LP     LC2    1    7.38100e-03   2.11900e-05
>  LOS    LOS    1    2.56300e-03   1.86800e-06
>  LOS    LP2    1    3.88000e-03   6.50400e-06
>  LOS    LP3    1    4.74300e-03   7.95100e-06
>  LOS    LC3    1    4.89700e-03   8.21000e-06
>  LOS    LC2    1    3.90400e-03   5.78200e-06
>  LP2    LP2    1    5.87400e-03   2.26500e-05
>  LP2    LP3    1    7.18000e-03   2.76900e-05
>  LP2    LC3    1    7.41400e-03   2.85900e-05
>  LP2    LC2    1    5.91000e-03   2.01400e-05
>  LP3    LP3    1    8.77700e-03   3.38500e-05
>  LP3    LC3    1    9.06200e-03   3.49500e-05
>  LP3    LC2    1    7.22400e-03   2.46200e-05
>  LC3    LC3    1    9.35700e-03   3.60900e-05
>  LC3    LC2    1    7.45900e-03   2.5420
[gmx-users] 25th Molecular Modelling Workshop 2011 Announcement
Dear All,

This year, the 25th Molecular Modelling Workshop
(http://www.chemie.uni-erlangen.de/ccc/conference/mmws11/) will take place on
April 4th to 6th. For the ninth time, the workshop will be hosted by the
University of Erlangen-Nuremberg. The research group of Professor Tim Clark at
the Computer Chemistry Center will be responsible for the technical
organization; Dr. Christian Kramer, Novartis, Basel, will be responsible for
the scientific organization.

This workshop encourages young scientists - especially graduate students - to
present and discuss their research topics. Young scientists at the beginning of
their academic careers will be able to meet new colleagues from academia and
gain feedback from industrial colleagues. Contributions are welcome from all
areas of molecular modeling - from the life sciences, computational biology and
computational chemistry to materials sciences.

Plenary lectures

We are pleased to announce the following three confirmed plenary speakers:

Kenneth M. Merz, Jr.    University of Florida at Gainesville
Emad Tajkhorshid        Beckman Institute, University of Illinois at Urbana-Champaign
Michele Parrinello      ETH, Zurich

Additionally, Jürgen Brickmann (Molcad GmbH, Darmstadt) will give a historical
overview of previous Molecular Modelling Workshops.

Oral and poster presentations

Oral presentations should not exceed 20 minutes (including discussion); posters
should be prepared in portrait format (90x140 cm). Talks with solely commercial
content (i.e., that aim at advertising a product) will no longer be accepted.

The deadline for registration and submission of abstracts for oral and poster
presentations is March 18th, 2011. Registration is only possible online. Please
submit abstracts for oral and poster presentations (one page, DIN A4 format) no
later than March 18th by e-mail to (mmws2011 @ chemie.uni-erlangen.de).

Conference location

All talks, coffee breaks, the poster session and the buffet dinner will take
place at the Institute of Organic Chemistry, Henkestraße 42.

MGMS-DS Travel Bursaries

Participants must organize travel and accommodation themselves. However, a
limited number of travel bursaries is available:

- Participants from Germany can receive 200 Euro.
- Participants from other countries can receive 350 Euro as a travel bursary.

These will be available only to undergraduate or graduate research students.
Written applications for a bursary must include a supporting reference letter
from the research project supervisor. Please address bursary applications to
mmws2011 @ chemie.uni-erlangen.de

Miscellaneous

For further information, and a list of abstracts already received, please visit
the workshop website, http://www.chemie.uni-erlangen.de/ccc/conference/mmws11/.
Please address additional questions regarding the organization of the workshop
to mmws2011 @ chemie.uni-erlangen.de. The general meeting of the MGMS (German
Section) will be held during the workshop. The conference fee amounts to 50
Euro (students: 25 Euro); this fee includes the annual membership fee for the
MGMS-DS.

We are looking forward to meeting you in Erlangen!

Tim Clark
Christian Kramer
Harald Lanig
Tatyana Shubina

--
Dr. Tatyana Shubina
Computer-Chemie-Centrum and Interdisciplinary Center for Molecular Materials
Universitaet Erlangen/Nuernberg
Naegelsbachstr. 25, D-91052 Erlangen
Phone +49(0)9131-85 26580  Fax +49(0)9131-85 26565
email: tatyana.shubina AT chemie.uni-erlangen.de
http://www.chemie.uni-erlangen.de/clark/shubina/
Re: [gmx-users] Installation GROMACS UBUNTU
I would suggest that you read [1]. Furthermore, you can read the INSTALL.cmake
file, which explains the steps for compiling Gromacs with the cmake command.

[1] http://www.gromacs.org/Developer_Zone/Cmake

--
Rodrigo Antonio Faccioli
Ph.D Student in Electrical Engineering
University of Sao Paulo - USP
Engineering School of Sao Carlos - EESC
Department of Electrical Engineering - SEL
Intelligent System in Structure Bioinformatics
http://laips.sel.eesc.usp.br
Phone: 55 (16) 3373-9366 Ext 229
Curriculum Lattes - http://lattes.cnpq.br/1025157978990218
Public Profile - http://br.linkedin.com/pub/rodrigo-faccioli/7/589/a5

On Thu, Dec 23, 2010 at 8:12 AM, Sergio Manzetti <sergio.manze...@vestforsk.no> wrote:
> Dear Users, I am unable to get past the first step of ./configure. This step
> works, but when typing "make" it says:
>
> no targets specified and no makefile found.
>
> Should it be like this, or is it compulsory to use cmake?
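For completeness, a minimal sketch of an out-of-source cmake build of the kind INSTALL.cmake describes (the source directory name and install prefix are illustrative):

  # build in a separate directory so the source tree stays clean
  cd gromacs-4.5.3
  mkdir build && cd build

  # configure with cmake, then compile and install
  cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/gromacs
  make
  make install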
Re: [gmx-users] amber convert gromacs input files
On 23/12/2010 6:02 PM, gromacs564 wrote:
> Hi,
>
> I have obtained some files (.top, .crd, .pdb) for a disaccharide via the
> GLYCAM web server (GLYCAM06 force field, included in AMBER), but I cannot
> convert these AMBER files to GROMACS file formats.
>
> Can anyone help me to convert these (AMBER) files to GROMACS input files
> (top or itp, gro)?
>
> Many thanks!

Not directly. You will need to read the underlying force field literature, and
the relevant parts of the AMBER and GROMACS manuals, so that you understand the
content and the file formats.

Mark
Re: [gmx-users] mdrun mpi segmentation fault in high load situation
On 23/12/2010 10:01 PM, Wojtyczka, André wrote:
> Dear Gromacs Enthusiasts.
>
> I am experiencing problems with mdrun_mpi (4.5.3) on a Nehalem cluster.
>
> Problem:
> This runs fine:
> mpiexec -np 72 /../mdrun_mpi -pd -s full031K_mdrun_ions.tpr
>
> This produces a segmentation fault:
> mpiexec -np 128 /../mdrun_mpi -pd -s full031K_mdrun_ions.tpr

Unless you know you need it, don't use -pd. DD will be faster and is probably
better bug-tested too.

Mark

> So the only difference is the number of cores I am using.
>
> mdrun_mpi was compiled using the intel compiler 11.1.072 with my own fftw3
> installation. While configuring and running make mdrun / make install-mdrun,
> no errors came up.
>
> Is there some issue with threading or mpi? If someone has a clue, please
> give me a hint.
>
> [...]
Re: [gmx-users] HPC mpi how to run batch system
On 23/12/2010 6:58 PM, gromacs wrote:
> I think the batch system will be more efficient. The interactive mode is not
> good, because we have to wait for the job. So is there a program for this, or
> do we have to write some small program to create the batch jobs?

You normally need to write a script. The administrators of your system will
have to provide you with an example. We can't give help for that.

Mark

> Thanks
>
> Forwarding messages
> From: gromacs
> Date: 2010-12-15 09:42:04
> To: gmx-users@gromacs.org
> Subject: HPC mpi how to run
>
> Hi,
>
> I have installed PSFTP and PuTTY. I have seen Gromacs-4.0.7 in /opt on
> wivenhoe (the cluster), and I know how to upload my files to my folder using
> PSFTP. However, I do not know how to run it on the HPC system. When I use
> Gromacs-4.0.7 on my desktop, a line is printed when I type the command 'luck'
> if Gromacs is installed properly. How can I check whether gromacs is
> installed properly on the cluster? How can I run simulations on the HPC
> system?
>
> Thanks!
RE: [gmx-users] Illegal division by zero at inflategro
Try POP instead of POPE :)

From: gmx-users-boun...@gromacs.org on behalf of shikha agarwal
Sent: Thu 12/23/2010 4:21 PM
To: gmx-users@gromacs.org
Subject: [gmx-users] Illegal division by zero at inflategro

> Hi,
>
> I am having trouble with inflategro scaling my system.gro:
>
> perl inflategro system.gro 4 POPE 14 system_inflated.gro 5 area.dat
>
> Reading.
> Scaling lipids
> There are 0 lipids...
> Illegal division by zero at inflategro line 300.
>
> regards:
> shikha
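A quick way to see which lipid residue name InflateGRO should be given, under the assumption that system.gro follows the fixed-column .gro format (residue names occupy columns 6-10):

  # list the distinct residue names present in the coordinate file;
  # the title, atom-count and box lines also show up, but the lipid name
  # will dominate the counts
  cut -c6-10 system.gro | sort | uniq -c | sort -rn | head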
[gmx-users] mdrun mpi segmentation fault in high load situation
Dear Gromacs Enthusiasts.

I am experiencing problems with mdrun_mpi (4.5.3) on a Nehalem cluster.

Problem:
This runs fine:
mpiexec -np 72 /../mdrun_mpi -pd -s full031K_mdrun_ions.tpr

This produces a segmentation fault:
mpiexec -np 128 /../mdrun_mpi -pd -s full031K_mdrun_ions.tpr

So the only difference is the number of cores I am using.

mdrun_mpi was compiled using the intel compiler 11.1.072 with my own fftw3
installation. While configuring and running make mdrun / make install-mdrun, no
errors came up.

Is there some issue with threading or mpi? If someone has a clue, please give
me a hint.

integrator           = md
dt                   = 0.004
nsteps               = 2500
nstxout              = 0
nstvout              = 0
nstlog               = 25
nstenergy            = 25
nstxtcout            = 12500
xtc_grps             = protein
energygrps           = protein non-protein
nstlist              = 2
ns_type              = grid
rlist                = 0.9
coulombtype          = PME
rcoulomb             = 0.9
fourierspacing       = 0.12
pme_order            = 4
ewald_rtol           = 1e-5
rvdw                 = 0.9
pbc                  = xyz
periodic_molecules   = yes
tcoupl               = nose-hoover
nsttcouple           = 1
tc-grps              = protein non-protein
tau_t                = 0.1 0.1
ref_t                = 310 310
Pcoupl               = no
gen_vel              = yes
gen_temp             = 310
gen_seed             = 173529
constraints          = all-bonds

Error:
Getting Loaded...
Reading file full031K_mdrun_ions.tpr, VERSION 4.5.3 (single precision)
Loaded with Money

NOTE: The load imbalance in PME FFT and solve is 48%.
      For optimal PME load balancing
      PME grid_x (144) and grid_y (144) should be divisible by #PME_nodes_x (128)
      and PME grid_y (144) and grid_z (144) should be divisible by #PME_nodes_y (1)

Step 0, time 0 (ps)
PSIlogger: Child with rank 82 exited on signal 11: Segmentation fault
PSIlogger: Child with rank 79 exited on signal 11: Segmentation fault
PSIlogger: Child with rank 2 exited on signal 11: Segmentation fault
PSIlogger: Child with rank 1 exited on signal 11: Segmentation fault
PSIlogger: Child with rank 100 exited on signal 11: Segmentation fault
PSIlogger: Child with rank 97 exited on signal 11: Segmentation fault
PSIlogger: Child with rank 98 exited on signal 11: Segmentation fault
PSIlogger: Child with rank 96 exited on signal 6: Aborted
...

Ps, for now I don't care about the imbalanced PME load unless it's independent
from my problem.

Cheers
André
[gmx-users] Installation GROMACS UBUNTU
Dear Users, I am unable to get past the first step of ./configure. This step
works, but when typing "make" it says:

no targets specified and no makefile found.

Should it be like this, or is it compulsory to use cmake?
[gmx-users] Illegal division by zero at inflategro
Hi,

I am having trouble with inflategro scaling my system.gro:

perl inflategro system.gro 4 POPE 14 system_inflated.gro 5 area.dat

Reading.
Scaling lipids
There are 0 lipids...
Illegal division by zero at inflategro line 300.

regards:
shikha