Hey Bryan,

On Monday 23 November 2009, Bryan Catanzaro wrote:
> I built 64-bit versions of Boost and PyCUDA on Mac OS X Snow Leopard, as
>  well as the 64-bit Python interpreter supplied by Apple, as well as the
>  CUDA 3.0 beta.  Everything built fine, but when I ran pycuda.autoinit, I
>  got an interesting CUDA error, which PyCUDA reported as "pointer is
>  64-bit". I'm wondering - is it impossible to use a 64-bit host program
>  with a 32-bit GPU program under CUDA 3.0?

First, I'm not sure I fully understand what's going on. You can indeed compile 
GPU code to match a 32-bit ABI on a 64-bit machine (nvcc --machine 32 ...). Is 
that what you're doing? If so, why? (Normally, nvcc will default to your 
host's ABI. By and large, this changes struct alignment rules and pointer 
widths.)
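For concreteness, forcing the device ABI one way or the other would look something like this (a sketch; the file name kernel.cu is just a placeholder, and --machine is the documented nvcc flag, also spelled -m):

```shell
# Force 32-bit device code (32-bit pointers, 32-bit struct alignment):
nvcc --machine 32 -c kernel.cu -o kernel32.o

# Force 64-bit device code (the default when the host toolchain is 64-bit):
nvcc --machine 64 -c kernel.cu -o kernel64.o
```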

If you're not doing anything special to get 32-bit GPU code, then your GPU 
code should end up matching your host ABI. Or maybe nvcc is drawing the wrong 
conclusion (or producing a fat binary, or some such), in which case we'd need 
to pass the --machine flag explicitly.
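One quick sanity check on the host side: you can at least confirm which ABI your Python interpreter is actually running under (a minimal sketch; this only inspects the host process, not the compiled GPU code):

```python
import struct

# struct.calcsize("P") gives the size of a C pointer (void *) in the
# running interpreter, in bytes; multiply by 8 for bits.  If this says
# 64 while the GPU code was built with --machine 32, the two ABIs
# disagree on pointer width.
host_bits = struct.calcsize("P") * 8
print(host_bits)  # 64 on a 64-bit interpreter, 32 on a 32-bit one
```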

I also remember wondering what that error message referred to when I added it. 
I'm honestly not sure. Which routine throws it?

Andreas


_______________________________________________
PyCUDA mailing list
[email protected]
http://tiker.net/mailman/listinfo/pycuda_tiker.net
