Dear David, 


Thanks for the reply! Have you run into the same issue with this force field, i.e. being limited to a single domain for all of your SWM4-NDP calculations?

Given that I can only use one MPI rank, is there any way to get the calculation to run on two nodes? One of my systems has 10k water molecules :(
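Just to make sure I have the single-node part right: with my externally compiled MPI build, I take Justin's advice to mean launching a single rank and letting OpenMP threads fill the node, something like the line below (binary name, file name, and thread count are only guesses based on a default double-precision install and the 12 hardware threads reported in my md.log):

    # placeholder file name; -ntomp set to the cores available on the node
    mpirun -np 1 gmx_d mdrun -deffnm swm4ndp -ntomp 12

(or, for a thread-MPI build, gmx mdrun -ntmpi 1 -ntomp 12). Is that the intended single-node usage?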

Regards,

Kester


--------- Original Message ---------
From: David van der Spoel <sp...@xray.bmc.uu.se>
To: <gmx-us...@gromacs.org>
Date: Fri, 19 Sep 2014 02:11:26
Subject: Re: [gmx-users] Problem with: Shell particles are not implemented with domain decomposition
On 2014-09-18 14:26, Justin Lemkul wrote:
>
>
> On 9/18/14 8:01 AM, Kester Wong wrote:
>> Dear gromacs users,
>>
>>
>> Has anyone experienced a problem with running polarisable water model
>> SWM4-NDP
>> with the following warning: Shell particles are not implemented with
>> domain
>> decomposition?
>>
>
> Out of curiosity, what is the source of your SWM4-NDP topology?
http://virtualchemistry.org/pol.php

>
>> The md.log also stated the following: Number of hardware threads
>> detected (12)
>> does not match the number reported by OpenMP (1). I don't think this
>> is the
>> cause, as this message was also found in my other "working" calculations.
>>
>>
>> I have tried using OpenMP and also tried a variety of -ntmpi and
>> -ntomp settings.
>>
>
> The only valid option here is -ntmpi 1 and -ntomp equal to whatever
> number of cores you're using.  Until I finish the DD implementation for
> shells/Drudes, only OpenMP is supported here.
>
> -Justin
>
>> The same calculation did not work in GROMACS versions 5.0 and 5.0.1,
>> in the
>> following cluster:
>>
>>
>> Gromacs version:    VERSION 5.0.1
>>
>> Precision:          double
>>
>> Memory model:       64 bit
>>
>> MPI library:        MPI
>>
>> OpenMP support:     enabled
>>
>> GPU support:        disabled
>>
>> invsqrt routine:    gmx_software_invsqrt(x)
>>
>> SIMD instructions:  NONE
>>
>> FFT library:        fftw-3.3.3
>>
>> RDTSCP usage:       disabled
>>
>> C++11 compilation:  disabled
>>
>> TNG support:        enabled
>>
>> Tracing support:    disabled
>>
>> Built on:           Thu Sep 18 20:14:41 KST 2014
>>
>> Built by:           r...@master.hpc [CMAKE]
>>
>> Build OS/arch:      Linux 2.6.18-274.7.1.el5 x86_64
>>
>> Build CPU vendor:   GenuineIntel
>>
>> Build CPU brand:    Intel(R) Xeon(R) CPU           X3220  @ 2.40GHz
>>
>> Build CPU family:   6   Model: 15   Stepping: 11
>>
>> Build CPU features: apic clfsh cmov cx8 cx16 lahf_lm mmx msr pdcm pse
>> sse2 sse3
>> ssse3
>>
>> C compiler:         /usr/bin/cc GNU 4.1.2
>>
>> C compiler flags:      -Wextra -Wno-missing-field-initializers
>> -Wno-sign-compare
>> -Wpointer-arith -Wall -Wno-unused -Wunused-value -Wunused-parameter
>> -fomit-frame-pointer -funroll-all-loops   -O3 -DNDEBUG
>>
>> C++ compiler:       /usr/bin/c++ GNU 4.1.2
>>
>> C++ compiler flags:    -Wextra -Wno-missing-field-initializers
>> -Wpointer-arith
>> -Wall -Wno-unused-function   -fomit-frame-pointer -funroll-all-loops
>> -O3 -DNDEBUG
>>
>> Boost version:      1.55.0 (internal)
>>
>>
>> Using 24 MPI processes
>> Using 1 OpenMP thread per MPI process
>>
>>
>> However, the same input files worked in another cluster (albeit very
>> slowly,
>> ~0.3ns/day).
>>
>>
>> Gromacs version:    VERSION 5.0
>>
>> Precision:          single
>>
>> Memory model:       64 bit
>>
>> MPI library:        MPI
>>
>> OpenMP support:     enabled
>>
>> GPU support:        disabled
>>
>> invsqrt routine:    gmx_software_invsqrt(x)
>>
>> SIMD instructions:  AVX_256
>>
>> FFT library:        fftpack (built-in)
>>
>> RDTSCP usage:       enabled
>>
>> C++11 compilation:  disabled
>>
>> TNG support:        enabled
>>
>> Tracing support:    disabled
>>
>> Built on:           Thu Aug 28 16:44:08 KST 2014
>>
>> Built by:           root@kant [CMAKE]
>>
>> Build OS/arch:      Linux 2.6.32-431.23.3.el6.x86_64 x86_64
>>
>> Build CPU vendor:   GenuineIntel
>>
>> Build CPU brand:    Intel(R) Core(TM) i5-4670 CPU @ 3.40GHz
>>
>> Build CPU family:   6   Model: 60   Stepping: 3
>>
>> Build CPU features: aes apic avx avx2 clfsh cmov cx8 cx16 f16c fma htt
>> lahf_lm
>> mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp
>> sse2 sse3
>> sse4.1 sse4.2 ssse3 tdt x2apic
>>
>> C compiler:         /usr/bin/gcc GNU 4.4.7
>>
>> C compiler flags:    -mavx   -Wno-maybe-uninitialized -Wextra
>> -Wno-missing-field-initializers -Wno-sign-compare -Wpointer-arith -Wall
>> -Wno-unused -Wunused-value -Wunused-parameter   -fomit-frame-pointer
>> -funroll-all-loops  -Wno-array-bounds  -O3 -DNDEBUG
>>
>> C++ compiler:       /usr/bin/g++ GNU 4.4.7
>>
>> C++ compiler flags:  -mavx   -Wextra -Wno-missing-field-initializers
>> -Wpointer-arith -Wall -Wno-unused-function   -fomit-frame-pointer
>> -funroll-all-loops  -Wno-array-bounds  -O3 -DNDEBUG
>>
>> Boost version:      1.55.0 (internal)
>>
>>
>>
>> Using 1 MPI process
>>
>> Using 16 OpenMP threads
>>
>>
>> Regards,
>> Kester
>>
>>
>>
>>
>


-- 
David van der Spoel, Ph.D., Professor of Biology
Dept. of Cell & Molec. Biol., Uppsala University.
Box 596, 75124 Uppsala, Sweden. Phone:	+46184714205.
sp...@xray.bmc.uu.se    http://folding.bmc.uu.se