[gmx-users] About gpu

2013-04-11 Thread
I have tested GROMACS 4.6.1 with a K20, but when I run mdrun I run into some
problems.

1. Configure options: -DGMX_MPI=ON -DGMX_DOUBLE=ON -DGMX_GPU=OFF.
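
For reference, the full cmake command with these options would look roughly like this (the build directory and install prefix are just placeholders):

   cmake .. -DGMX_MPI=ON -DGMX_DOUBLE=ON -DGMX_GPU=OFF -DCMAKE_INSTALL_PREFIX=/path/to/gromacs-4.6.1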

But when I run in parallel with mpirun, it fails with:

Note: file tpx version 58, software tpx version 83

Fatal error in PMPI_Bcast: Invalid buffer pointer, error stack:

PMPI_Bcast(2011): MPI_Bcast(buf=(nil), count=56, MPI_BYTE, root=0,
MPI_COMM_WORLD) failed

PMPI_Bcast(1919): Null buffer pointer

APPLICATION TERMINATED WITH THE EXIT STRING: Hangup (signal 1)
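
(Side note: the "file tpx version 58, software tpx version 83" line suggests the topol.tpr was written by a much older GROMACS version. I assume it could be regenerated with the 4.6.1 grompp, roughly like this, using my actual input files:

   grompp -f grompp.mdp -c conf.gro -p topol.top -o topol.tpr )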



2. Configure options: -DGMX_MPI=ON -DGMX_GPU=ON -DGMX_DOUBLE=OFF. But when I
run on the GPU, the program fails.

Running one MPI process with the GPU:

Reading file topol.tpr, VERSION 4.5.1-dev-20100917-b1d66 (single precision)

Note: file tpx version 73, software tpx version 83

NOTE: GPU(s) found, but the current simulation can not use GPUs

To use a GPU, set the mdp option: cutoff-scheme = Verlet

(for quick performance testing you can use the -testverlet option)

Using 1 MPI process

1 GPU detected on host node11:

#0: NVIDIA Tesla K20c, compute cap.: 3.5, ECC: yes, stat: compatible

Back Off! I just backed up ener.edr to ./#ener.edr.4#

starting mdrun 'Protein'

-1 steps, infinite ps.

Segmentation Fault (core dumped)

Running eight MPI processes with the GPU:

Reading file topol.tpr, VERSION 4.5.1-dev-20100917-b1d66 (single precision)

Note: file tpx version 73, software tpx version 83

NOTE: GPU(s) found, but the current simulation can not use GPUs

To use a GPU, set the mdp option: cutoff-scheme = Verlet

(for quick performance testing you can use the -testverlet option)

Non-default thread affinity set, disabling internal thread affinity

Using 8 MPI processes

1 GPU detected on host node11:

#0: NVIDIA Tesla K20c, compute cap.: 3.5, ECC: yes, stat: compatible

Back Off! I just backed up ener.edr to ./#ener.edr.6#

starting mdrun 'Protein'

-1 steps, infinite ps.

APPLICATION TERMINATED WITH THE EXIT STRING: Hangup (signal 1)
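
Following the NOTE in the output, I understand the GPU path needs the Verlet cut-off scheme. If so, the relevant .mdp change would be just:

   cutoff-scheme = Verlet

or, for a quick test without editing the .mdp, adding -testverlet to the mdrun command line. Is that the correct way to enable the GPU here?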

Thanks for your help!


[gmx-users] About parallelization

2013-04-11 Thread
How do I run combined MPI/OpenMP parallelization with 2 or 4 OpenMP threads
per MPI process?
Can -DGMX_MPI and -DGMX_THREAD_MPI be used at the same time?
How should such a run be launched?
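
For example, would something like this be the right way to get 4 OpenMP threads per MPI process (the binary name and process count are only my guesses)?

   mpirun -np 4 mdrun_mpi -ntomp 4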

Thanks for your help!


[gmx-users] K20 test

2013-04-11 Thread
Hi!
When I run GROMACS 4.6.1 with K20s, I have a question.
I have 6 nodes, and each node has one K20; I use one MPI process per node,
each with one GPU. But the test results show that the runtime on one node is
less than on six nodes. Is the GPU scalability poor?
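
For reference, the six-node run is launched roughly like this (the per-node placement flag depends on the MPI launcher, and mdrun_mpi is just what I call the binary):

   mpirun -np 6 -npernode 1 mdrun_mpi -gpu_id 0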
Thanks!


[gmx-users] About 4.6.1

2013-04-10 Thread
I have tested GROMACS 4.6.1 with a K20, but when I run mdrun I met some problems.
1. Does the GPU acceleration only support single precision (float)?
2. Configure options: -DGMX_MPI, -DGMX_DOUBLE. But when I run in parallel with
mpirun, it fails in PMPI_Bcast.
3. Configure options: -DGMX_MPI, -DGMX_GPU. But when I run on the GPU, the
program fails with a 'Segmentation Fault'.

Thanks for your help!


[gmx-users] About k20

2013-04-08 Thread
Hi!
I want to improve my mdrun performance with a K20, but something goes wrong.
My GROMACS version is 4.6.1 and my OpenMM version is 5.0.1. The error message
is: include could not find load file: ../contrib/BuildMdrunOpenMM
Can this OpenMM version work with the K20?
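
For reference, I configure roughly like this; the OpenMM-related names are only my best guess at what the build expects:

   export OPENMM_ROOT_DIR=/path/to/openmm-5.0.1
   cmake .. -DGMX_OPENMM=ON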
  Thanks!


[gmx-users] Improve performance

2013-04-06 Thread
Hi!
I have 6 nodes. Each node has two CPUs, 12 cores in total.
How should I set options such as -rdd, -rcon, -dds, and -gcom to improve
performance?
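
For example, would something like this be a sensible starting point (the numbers are only guesses on my part, not values I know to be good)?

   mpirun -np 72 mdrun_mpi -rdd 1.2 -rcon 0 -dds 0.8 -gcom 10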
Thanks!


[gmx-users] About PP/PME

2013-04-05 Thread
I'm running GROMACS in parallel. I have 6 nodes with 96 cores in total. How
can I reduce the load imbalance and improve the performance?
There are 16 PME nodes and 80 PP nodes. How should the division be chosen?
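
For example, is setting the split explicitly like this the intended way (binary name assumed), or is it better to let mdrun or g_tune_pme choose it?

   mpirun -np 96 mdrun_mpi -npme 16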


[gmx-users] Fwd: Improve performance

2013-04-05 Thread
Hi!
I have some questions.
1. Is there any way to improve my mdrun performance?
I have read the manual and tested some mdrun options, such as -rdd, -rcon,
-dds, and -gcom, but they don't help. I have no idea what else to try.
2. Can GROMACS 4.6.1 support the K20?
I want to use CPU+GPU, but I met a problem when compiling mdrun for the K20:
it cannot find libopenmm, even though I have installed OpenMM 5.1. I have no
idea what is wrong.
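
For what it's worth, I have been pointing the build at OpenMM roughly like this; the variable names are my assumption about what the build looks for:

   export OPENMM_ROOT_DIR=/path/to/openmm-5.1
   export LD_LIBRARY_PATH=$OPENMM_ROOT_DIR/lib:$LD_LIBRARY_PATH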

Thanks!