On Thu, Mar 28, 2013 at 4:09 PM, Szilárd Páll <szilard.p...@cbr.su.se> wrote:

> Hi,
>
> If mdrun says that it could not detect GPUs it simply means that the GPU
> enumeration found no GPUs, otherwise it would have printed what was found.
> This is rather strange because mdrun uses the same mechanism as the
> deviceQuery SDK example. I really don't have a good idea what the issue
> could be, but you could try recompiling, or compiling with CUDA 4.2, to see
> if either makes a difference.
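>
> (If you want to test the CUDA 4.2 route, installing it alongside 5.0 and
> pointing cmake at it, e.g. -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-4.2,
> then rebuilding should be enough; that path is just an example.)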
>
> Let us know if you figured out something.
>
> Cheers,
>

Thanks, Szilárd, for the eye-opening comment.

I just tried running GROMACS as root; I recalled that I had executed deviceQuery
as root earlier. When executed as an ordinary user it produces the same error:

/root/NVIDIA_CUDA-5.0_Samples/1_Utilities/deviceQuery/deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

cudaGetDeviceCount returned 38
-> no CUDA-capable device is detected

Now, running GROMACS as root, it runs successfully (as far as I can tell).

Output of nvidia-smi

+------------------------------------------------------+
| NVIDIA-SMI 4.310.40   Driver Version: 310.40         |
|-------------------------------+----------------------+----------------------+
| GPU  Name                     | Bus-Id        Disp.  | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap| Memory-Usage         | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  NVS 300                  | 0000:03:00.0     N/A |                  N/A |
| N/A   48C  N/A     N/A /  N/A |   3%   17MB /  511MB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla K20c               | 0000:04:00.0     Off |                  Off |
| 50%   62C    P0   106W / 225W |   2%   87MB / 5119MB |     76%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Compute processes:                                               GPU Memory |
|  GPU       PID  Process name                                     Usage      |
|=============================================================================|
|    0            Not Supported                                               |
|    1      9127  mdrun_461                                             72MB  |
+-----------------------------------------------------------------------------+

Output of md.log

2 GPUs detected:
  #0: NVIDIA Tesla K20c, compute cap.: 3.5, ECC:  no, stat: compatible
  #1: NVIDIA NVS 300, compute cap.: 1.2, ECC:  no, stat: incompatible

1 GPU auto-selected for this run: #0


I think this is related to permissions. Although nvcc has 755 permissions,
something else might require additional permissions.
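
My guess (only a guess, I have not verified it on this machine) is that the
culprit is the /dev/nvidia* device nodes rather than nvcc: the CUDA runtime
opens /dev/nvidiactl and /dev/nvidia0, /dev/nvidia1, ... at run time, and if
those nodes are accessible only to root, cudaGetDeviceCount fails for ordinary
users with exactly this "no CUDA-capable device is detected" error. Something
like the following, run as root, would show and, if necessary, relax the
permissions (adjust to your site's policy):

  ls -l /dev/nvidia*        # working setups usually show crw-rw-rw-
  chmod a+rw /dev/nvidia*   # only if the nodes turn out to be root-only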

Chandan


> --
> Szilárd
>
>
> On Thu, Mar 28, 2013 at 2:39 AM, Berk Hess <g...@hotmail.com> wrote:
>
> >
> > Hi,
> >
> > I am not the expert on GPU detection, so we'll need to wait until an
> > expert replies.
> > Maybe GPU 0 is ignored and the GPUs are renumbered; could you try:
> > mdrun -ntmpi 1 -gpu_id 0
> >
> > Also your tpr file is from an older version. It will not run on a GPU.
> > You need to set the mdp option:
> > cutoff-scheme = Verlet
> > and run grompp to get a new tpr file.
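> >
> > For example (the input file names here are just placeholders for your own
> > files), something like
> >
> >   grompp_461 -f md.mdp -c conf.gro -p topol.top -o md0-25.tpr
> >
> > will produce a tpr file that can use the GPU.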
> >
> > Cheers,
> >
> > Berk
> >
> > > From: iitd...@gmail.com
> > > Date: Thu, 28 Mar 2013 14:57:16 +0530
> > > Subject: Re: [gmx-users] no CUDA-capable device is detected
> > > To: gmx-users@gromacs.org
> > >
> > > On Thu, Mar 28, 2013 at 2:41 PM, Berk Hess <g...@hotmail.com> wrote:
> > >
> > > >
> > > > Hi,
> > > >
> > > > The code compiled, so the compiler is not the issue.
> > > >
> > > > I guess mdrun picked up GPU 0, which it should have ignored. You only
> > > > want to use GPU 1.
> > > >
> > > > Could you try running:
> > > > mdrun -ntmpi 1 -gpu_id 1
> > > >
> > >
> > > $mdrun_461 -ntmpi 1 -gpu_id 1 -s md0-25.tpr
> > > Note: file tpx version 73, software tpx version 83
> > >
> > > NOTE: Error occurred during GPU detection:
> > >       no CUDA-capable device is detected
> > >       Can not use GPU acceleration, will fall back to CPU kernels.
> > >
> > >
> > > No GPUs detected
> > >
> > >
> > > -------------------------------------------------------
> > > Program mdrun_461, VERSION 4.6.1
> > > Source code file:
> > > /home/sudip/RPMs/gromacs-4.6.1/src/gmxlib/gmx_detect_hardware.c, line: 580
> > >
> > > Fatal error:
> > > Some of the requested GPUs do not exist, behave strangely, or are not
> > > compatible:
> > >     GPU #1: inexistent
> > >
> > > >
> > > > Cheers,
> > > >
> > > > berk
> > > >
> > > > > Date: Thu, 28 Mar 2013 10:51:58 +0200
> > > > > Subject: Re: [gmx-users] no CUDA-capable device is detected
> > > > > From: g...@bioacademy.gr
> > > > > To: gmx-users@gromacs.org
> > > > >
> > > > > Hi Chandan
> > > > >
> > > > > Are you using the same version of GCC compiler that you used to
> > > > > compile CUDA 5.0? In my hands, gcc 4.7.2 could not compile CUDA 5.0
> > > > > (I think there was some kind of incompatibility between the two).
> > > >
> > >
> > > There is a workaround for gcc 4.7.2. Please see
> > > http://svshift.blogspot.in/2013/03/running-nvidai-cuda-sdk-50-on-opensuse.html
> > >
> > > >
> > > > > Can you try compiling both CUDA 5.0 and GROMACS with gcc 4.6.1? This
> > > > > worked on my system (MacOS/Darwin).
> > > > >
> > > > > Just make sure to set the variables CC and CXX to point to the right
> > > > > compiler version when you run cmake.
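> > > > >
> > > > > For example (the paths are only illustrative; point them at wherever
> > > > > gcc 4.6 actually lives on your system):
> > > > >
> > > > >   export CC=/usr/bin/gcc-4.6
> > > > >   export CXX=/usr/bin/g++-4.6
> > > > >   cmake .. -DGMX_GPU=ON ...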
> > > > >
> > > > > George
> > > > >
> > > > >
> > > > > > Dear GMX Users,
> > > > > >
> > > > > > I am trying to execute GROMACS 4.6.1 on one of the GPU servers:
> > > > > > *OS*: OpenSuse 12.3 x86_64 3.7.10-1.1-desktop (Kernel Release)
> > > > > > *gcc*: 4.7.2
> > > > > >
> > > > > > CUDA Library paths
> > > > > > #CUDA-5.0
> > > > > > export CUDA_HOME=/usr/local/cuda-5.0
> > > > > > export PATH=$CUDA_HOME/bin:$PATH
> > > > > > export LD_LIBRARY_PATH=$CUDA_HOME/lib64:/lib:$LD_LIBRARY_PATH
> > > > > >
> > > > > > GROMACS was compiled with:
> > > > > >
> > > > > > CMAKE_PREFIX_PATH=/opt/apps/fftw-3.3.3/single:/usr/local/cuda-5.0
> > > > cmake ..
> > > > > > -DGMX_GPU=ON -DCMAKE_INSTALL_PREFIX=/opt/apps/gromacs/461/single
> > > > > > -DGMX_DEFAULT_SUFFIX=OFF -DGMX_BINARY_SUFFIX=_461
> > > > -DGMX_LIBS_SUFFIX=_461
> > > > > > -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda
> > > > > >
> > > > > > Error on executing mdrun:
> > > > > >
> > > > > > NOTE: Error occurred during GPU detection:
> > > > > > no CUDA-capable device is detected
> > > > > > Can not use GPU acceleration, will fall back to CPU kernels.
> > > > > >
> > > > > >
> > > > > > Will use 24 particle-particle and 8 PME only nodes
> > > > > > This is a guess, check the performance at the end of the log file
> > > > > > Using 32 MPI threads
> > > > > >
> > > > > > No GPUs detected
> > > > > >
> > > > > > I checked my CUDA installation; I am able to compile and execute
> > > > > > the sample programs, e.g., deviceQuery.
> > > > > >
> > > > > > Also executed nvidia-smi:
> > > > > > +------------------------------------------------------+
> > > > > > | NVIDIA-SMI 4.310.40   Driver Version: 310.40         |
> > > > > > |-------------------------------+----------------------+----------------------+
> > > > > > | GPU  Name                     | Bus-Id        Disp.  | Volatile Uncorr. ECC |
> > > > > > | Fan  Temp  Perf  Pwr:Usage/Cap| Memory-Usage         | GPU-Util  Compute M. |
> > > > > > |===============================+======================+======================|
> > > > > > |   0  NVS 300                  | 0000:03:00.0     N/A |                  N/A |
> > > > > > | N/A   49C  N/A     N/A /  N/A |   3%   16MB /  511MB |     N/A      Default |
> > > > > > +-------------------------------+----------------------+----------------------+
> > > > > > |   1  Tesla K20c               | 0000:04:00.0     Off |                  Off |
> > > > > > | 30%   38C    P8    16W / 225W |   0%   13MB / 5119MB |      0%      Default |
> > > > > > +-------------------------------+----------------------+----------------------+
> > > > > >
> > > > > > +-----------------------------------------------------------------------------+
> > > > > > | Compute processes:                                               GPU Memory |
> > > > > > |  GPU       PID  Process name                                     Usage      |
> > > > > > |=============================================================================|
> > > > > > |    0            Not Supported                                               |
> > > > > > +-----------------------------------------------------------------------------+
> > > > > >
> > > > > > What am I missing, given that GROMACS is not detecting the GPUs?
> > > > > >
> > > > > > Chandan
> > > > > >
> > > > > > --
> > > > > > Chandan kumar Choudhury
> > > > > > NCL, Pune
> > > > > > INDIA
> > > > >
> > > > >
> > > > > Dr. George Patargias
> > > > > Postdoctoral Researcher
> > > > > Biomedical Research Foundation
> > > > > Academy of Athens
> > > > > 4, Soranou Ephessiou
> > > > > 115 27
> > > > > Athens
> > > > > Greece
> > > > >
> > > > > Office: +302106597568
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > Chandan kumar Choudhury
> > > NCL, Pune
> > > INDIA
> >
>


--
Chandan kumar Choudhury
NCL, Pune
INDIA
--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
