Sorry for not describing my problem more clearly. This error happens without even invoking nvcc. Here's a little more detail:
Using CUDA 3.0 Beta, Python 2.6, Mac OS X 10.6:
1. Python running in 32-bit mode, Boost & PyCUDA compiled for 32-bit
architecture => no problems.
2. (Default on OS X 10.6) Python running in 64-bit mode, Boost & PyCUDA
compiled for 64-bit architecture => the following error:
import pycuda._driver # dynamically loaded from
/tmp/pycuda/build/lib.macosx-10.6-i386-2.6/pycuda/_driver.so
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/tmp/pycuda/build/lib.macosx-10.6-i386-2.6/pycuda/autoinit.py", line 4,
in <module>
cuda.init()
pycuda._driver.LogicError: cuInit failed: pointer is 64-bit
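(For anyone reproducing this, a quick sanity check that has nothing to do with CUDA itself: you can ask the running interpreter for its pointer width with the standard struct module, which is exactly the property the "pointer is 64-bit" message is complaining about.)

```python
# Report whether the running Python interpreter is 32- or 64-bit.
# Pure stdlib; does not touch CUDA or PyCUDA.
import struct

pointer_bits = struct.calcsize("P") * 8  # size of a native C pointer, in bits
print("Python is running in %d-bit mode" % pointer_bits)
```

On the setup described above this prints 64 unless Python has been forced into 32-bit mode.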
It's not a super big problem right now, since I can just force 32-bit mode when
I build everything. But because everything on OS X 10.6 is 64-bit by default,
it's an extra snag for new people: they have to compile everything
differently, and remember to set a special environment variable
(VERSIONER_PYTHON_PREFER_32_BIT=yes) to get Python to run in 32-bit mode.
Or maybe there's a quirk of my own particular installation that's causing this
and it's not going to be a problem for anyone else... ;)
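For reference, here's the workaround spelled out as a shell sketch. The VERSIONER_PYTHON_PREFER_32_BIT variable is the one mentioned above; the arch(1) alternative is something I believe also works for Apple's universal Python builds on 10.6, but treat it as a suggestion rather than a tested recipe.

```shell
# Make Apple's Python prefer the 32-bit slice of its universal binary:
export VERSIONER_PYTHON_PREFER_32_BIT=yes
python -c "import struct; print(struct.calcsize('P') * 8)"   # expect 32 once this takes effect

# Alternatively (untested by me), force a single invocation via arch(1):
# arch -i386 python myscript.py
```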
- bryan
On Nov 23, 2009, at 8:31 PM, Andreas Klöckner wrote:
> Hey Bryan,
>
> On Montag 23 November 2009, Bryan Catanzaro wrote:
>> I built 64-bit versions of Boost and PyCUDA on Mac OS X Snow Leopard, as
>> well as the 64-bit Python interpreter supplied by Apple, as well as the
>> CUDA 3.0 beta. Everything built fine, but when I ran pycuda.autoinit, I
>> got an interesting CUDA error, which PyCUDA reported as "pointer is
>> 64-bit". I'm wondering - is it impossible to use a 64-bit host program
>> with a 32-bit GPU program under CUDA 3.0?
>
> First, I'm not sure I fully understand what's going on. You can indeed
> compile GPU code to match a 32-bit ABI on a 64-bit machine
> (nvcc --machine 32 ...). Is that what you're doing? If so, why? (Normally,
> nvcc will default to your host's ABI. By and large, this changes struct
> alignment rules and pointer widths.)
>
> If you're not doing anything special to get 32-bit GPU code, then your GPU
> code should end up matching your host ABI. Or maybe nvcc draws the wrong
> conclusions, or produces a fat binary or something, and we need to actually
> specify the --machine flag.
>
> I also remember wondering what the error message referred to when I added it.
> I'm totally not sure. Which routine throws it?
>
> Andreas
> _______________________________________________
> PyCUDA mailing list
> [email protected]
> http://tiker.net/mailman/listinfo/pycuda_tiker.net
