----- Original Message -----
From: jayant james <jayant.ja...@gmail.com>
Date: Thursday, September 30, 2010 9:36
Subject: [gmx-users] distance restrained MD simulations
To: Discussion list for GROMACS users <gmx-users@gromacs.org>

> Hi!
> I am trying to perform distance-restrained MD simulations of a protein with 
> GROMACS 4.0.5.
> I have a set of FRET distances ranging from 10 Å to 40 Å that I am 
> incorporating in a manner similar to NOE distance restraints in NMR.
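(For context, NOE-style restraints in GROMACS are normally entered as a
[ distance_restraints ] section in the topology plus the disre options in the
.mdp file. The sketch below uses placeholder atom indices, bounds and force
constant, not values from this system; GROMACS works in nm, so 10-40 Å
becomes 1.0-4.0 nm.)

  ; topology (.top/.itp) -- one line per restrained atom pair
  [ distance_restraints ]
  ;  ai    aj  type  index  type'   low   up1   up2   fac
     10   150     1      0      1   0.9   1.0   1.2   1.0   ; ~10 Å FRET pair
     25   310     1      1      1   3.8   4.0   4.2   1.0   ; ~40 Å FRET pair

  ; .mdp additions
  disre           = simple        ; single-simulation restraining
  disre_weighting = equal
  disre_fc        = 1000          ; kJ mol^-1 nm^-2, placeholder value
  nstdisreout     = 100           ; how often to write restraint data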

The details of how you tried to do this are important. See 
http://www.gromacs.org/Documentation/Errors#There_is_no_domain_decomposition_for_n_nodes_that_is_compatible_with_the_given_box_and_a_minimum_cell_size_of_x_nm
 and consider section 5.4 of the manual. There might be a solution that does 
not have this issue.
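
For illustration, the two workarounds that mdrun's own messages point at look
roughly like this (file names and the -rdd value are placeholders; restrained
pairs that end up farther apart than the -rdd distance will still make the
run fail):

  # Option 1: fall back to particle decomposition, which has no
  # minimum-cell-size restriction but scales less well
  mpirun -np 4 mdrun_mpi -pd -s topol.tpr -deffnm disre_run

  # Option 2: keep domain decomposition but tell mdrun the maximum
  # distance (in nm) it must allow for restrained/bonded pairs
  mpirun -np 4 mdrun_mpi -rdd 4.0 -s topol.tpr -deffnm disre_run

The 8.89 nm minimum cell size reported below presumably comes from the most
widely separated restrained pair in the starting structure; every domain
decomposition cell must accommodate it, so even a 2x2x1 grid would need about
2 x 8.9 ≈ 17.8 nm in two box dimensions. Note also the warning that distance
restraint data cannot be written to the energy file under domain decomposition
in this version.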

Mark

>  When I use one processor the simulations are all fine, but when I use 
> multiple processors I get a bunch of errors.
> Let me start with the "NOTE" found below. I do not want to increase the 
> cut-off distance, but I do want the program to use multiple processors. How 
> can I overcome this problem?
> I would appreciate your input.
> Thanks
> JJ
> 
> NOTE: atoms involved in distance restraints should be within the longest 
> cut-off distance, if this is not the case mdrun generates a fatal error, 
> in that case use particle decomposition (mdrun option -pd)
> 
> WARNING: Can not write distance restraint data to energy file with domain 
> decomposition
> 
> Loaded with Money
> 
> -------------------------------------------------------
> Program mdrun_mpi, VERSION 4.0.5
> Source code file: ../../../src/mdlib/domdec.c, line: 5873
> 
> Fatal error:
> There is no domain decomposition for 4 nodes that is compatible with the 
> given box and a minimum cell size of 8.89355 nm
> Change the number of nodes or mdrun option -rdd or -dds
> Look in the log file for details on the domain decomposition
> -------------------------------------------------------
> 
> "What Kind Of Guru are You, Anyway ?" (F. Zappa)
> 
> Error on node 0, will try to stop all the nodes
> Halting parallel program mdrun_mpi on CPU 0 out of 4
> 
> gcq#21: "What Kind Of Guru are You, Anyway ?" (F. Zappa)
> 
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 2 with PID 28700 on node 
> compute-3-73.local exiting without calling "finalize". This may have caused 
> other processes in the application to be terminated by signals sent by 
> mpirun (as reported here).
> --------------------------------------------------------------------------
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with 
> errorcode -1.
> 
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on exactly 
> when Open MPI kills them.
> --------------------------------------------------------------------------
> 
> -- 
> Jayasundar Jayant James
> 
> www.chick.com/reading/tracts/0096/0096_01.asp
> 
-- 
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
