Received from Lev Givon on Wed, Aug 03, 2011 at 02:57:35PM EDT:
> Using PyCUDA 2011.1.2 built against CUDA 4.0.17 on a 64-bit Linux
> system containing two GPU devices with compute capability >= 2.0, this
> code
> 
> import pycuda.driver as drv
> import atexit
> 
> drv.init()
> dev0 = drv.Device(0)
> dev1 = drv.Device(1)
> 
> ctx0 = dev0.make_context()
> ctx1 = dev1.make_context()
> 
> atexit.register(ctx0.pop)
> atexit.register(ctx1.pop)
> 
> ctx1.enable_peer_access(ctx0)
> 
> results in the following exception:
> 
> Traceback (most recent call last):
>   File "peer_memcpy.py", line 22, in <module>
>     ctx1.enable_peer_access(ctx0)
> pycuda._driver.LogicError: cuCtxEnablePeerAccess failed: invalid device
> 
> Why is this happening?

Answering my own question: although neither the GPUDirect info page on
NVIDIA's site nor the CUDA 4.0 readiness guide says so explicitly,
support for peer-to-peer access is apparently limited to
Fermi-architecture Tesla devices, not Fermi-based GPUs in general.
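One way to avoid the LogicError regardless of device model is to ask the
driver whether peer access is possible before enabling it. A minimal sketch,
assuming PyCUDA 2011.1+ built against CUDA 4.0, where Device.can_access_peer
wraps cuDeviceCanAccessPeer; it is guarded so it degrades gracefully on
machines without two CUDA-capable GPUs:

```python
# Hedged sketch: query peer-access capability before calling
# Context.enable_peer_access, instead of letting the driver raise
# LogicError: cuCtxEnablePeerAccess failed: invalid device.
try:
    import pycuda.driver as drv
    drv.init()
    have_two_gpus = drv.Device.count() >= 2
except Exception:
    # No CUDA driver / no devices on this machine; skip the GPU path.
    have_two_gpus = False

if have_two_gpus:
    dev0 = drv.Device(0)
    dev1 = drv.Device(1)
    # cuDeviceCanAccessPeer reports whether dev1 can map dev0's memory.
    if dev1.can_access_peer(dev0):
        ctx0 = dev0.make_context()
        ctx1 = dev1.make_context()   # ctx1 is now the current context
        ctx1.enable_peer_access(ctx0)  # safe: capability confirmed above
        ctx1.pop()
        ctx0.pop()
    else:
        print("peer access between devices 1 and 0 is not supported")
```

On hardware where peer access is unsupported (e.g. non-Tesla Fermi boards,
or GPUs on different PCIe root complexes), the capability check simply
returns False rather than raising an exception.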

                                           L.G.

_______________________________________________
PyCUDA mailing list
PyCUDA@tiker.net
http://lists.tiker.net/listinfo/pycuda
