Alan,

I'm forwarding your message to the PyCUDA mailing list, maybe someone there 
has an idea.

Andreas
----------  Forwarded Message  ----------

Subject: Re: about pyopencl
Date: Monday, 27 July 2009
From: Alan <[email protected]>
To: Andreas Klöckner <[email protected]>

Hi Andreas,
Thank you very much for your comments.

I did try PyCUDA first. After some struggle to get Boost to work (I use
Fink x86_64), I did this for PyCUDA:

./configure.py --boost-inc-dir=/sw/include --boost-lib-dir=/sw/lib \
  --boost-thread-libname=boost_thread-mt \
  --boost-python-libname=boost_python-mt

(and edited my siteconf.py:
BOOST_INC_DIR = ['/sw/include']
BOOST_LIB_DIR = ['/sw/lib']
BOOST_COMPILER = 'gcc-4.2'
BOOST_PYTHON_LIBNAME = ['boost_python-mt']
BOOST_THREAD_LIBNAME = ['boost_thread-mt']
CUDA_TRACE = False
CUDA_ENABLE_GL = False
CUDADRV_LIB_DIR = []
CUDADRV_LIBNAME = ['cuda']
CXXFLAGS = ['-m64']
LDFLAGS = []
)

make

[snip]
g++ -L/sw/lib -bundle -L/sw/lib/python2.6/config -lpython2.6
build/temp.macosx-10.5-i386-2.6/src/cpp/cuda.o
build/temp.macosx-10.5-i386-2.6/src/cpp/bitlog.o
build/temp.macosx-10.5-i386-2.6/src/wrapper/wrap_cudadrv.o
build/temp.macosx-10.5-i386-2.6/src/wrapper/mempool.o -L/sw/lib
-L/usr/local/cuda/lib -lboost_python-mt -lboost_thread-mt -lcuda -o
build/lib.macosx-10.5-i386-2.6/pycuda/_driver.so -arch i386
ld warning: in /sw/lib/python2.6/config/libpython2.6.dylib, file is not of
required architecture
ld warning: in build/temp.macosx-10.5-i386-2.6/src/cpp/cuda.o, file is not
of required architecture
ld warning: in build/temp.macosx-10.5-i386-2.6/src/cpp/bitlog.o, file is not
of required architecture
ld warning: in build/temp.macosx-10.5-i386-2.6/src/wrapper/wrap_cudadrv.o,
file is not of required architecture
ld warning: in build/temp.macosx-10.5-i386-2.6/src/wrapper/mempool.o, file
is not of required architecture
ld warning: in /sw/lib/libboost_python-mt.dylib, file is not of required
architecture
ld warning: in /sw/lib/libboost_thread-mt.dylib, file is not of required
architecture

I know the warnings above quite well, and hence "make tests" failed. Here
is why:

otool -L build/lib.macosx-10.5-i386-2.6/pycuda/_driver.so
build/lib.macosx-10.5-i386-2.6/pycuda/_driver.so:
 /usr/local/cuda/lib/libcuda.dylib (compatibility version 1.1.0, current
version 2.3.0)
/usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version
7.4.0)
 /usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version
1.0.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version
111.1.4)

file /usr/local/cuda/lib/libcuda.dylib
/usr/local/cuda/lib/libcuda.dylib: Mach-O dynamically linked shared library
i386

All the other libraries are x86_64, so basically my problem is that libcuda
is 32-bit.
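
As a quick sanity check (not part of the PyCUDA build itself), a tiny Python
snippet run in the Fink interpreter should confirm that it really is 64-bit;
on an x86_64 Python it prints 64 and '64bit':

import platform
import struct

# Pointer size of the running interpreter: 64 means an x86_64 Python,
# 32 means an i386 one.
print(struct.calcsize("P") * 8)
print(platform.architecture()[0])

So the interpreter side (and everything built against it) is 64-bit, while
libcuda.dylib is i386.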

And I guess I will be stuck in this dead end until Nvidia releases CUDA in
64-bit form for the Mac. Maybe that will only happen once Mac OS X itself
goes 64-bit only...
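
The only workaround I can think of (I have NOT tried it) would be to build
the whole stack 32-bit to match libcuda. Assuming 32-bit (i386) builds of
Python and Boost were available (which my Fink x86_64 tree does not provide,
as the ld warnings above show), siteconf.py would look roughly like this:

# Untested sketch: BOOST_INC_DIR/BOOST_LIB_DIR would have to point at i386
# builds of Boost, and the build would have to use a 32-bit Python as well.
BOOST_INC_DIR = ['/sw/include']
BOOST_LIB_DIR = ['/sw/lib']
BOOST_PYTHON_LIBNAME = ['boost_python-mt']
BOOST_THREAD_LIBNAME = ['boost_thread-mt']
CUDADRV_LIB_DIR = ['/usr/local/cuda/lib']
CUDADRV_LIBNAME = ['cuda']
# Build and link the wrapper as i386 so it matches the 32-bit libcuda.dylib.
CXXFLAGS = ['-arch', 'i386']
LDFLAGS = ['-arch', 'i386']

With only 64-bit Python and Boost available from Fink, though, that is not
really an option for me.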

Cheers,
Alan

On Fri, Jul 24, 2009 at 10:50, Andreas Klöckner
<[email protected]> wrote:

> Hi Alan,
>
> On Friday, 24 July 2009, you wrote:
> > I am about to start playing with PyCUDA, but I am keen to consider
> > OpenCL, and I was trying to find any information about PyOpenCL and got
> > nothing. Since your name is behind both projects, I was wondering
> > whether PyOpenCL is no longer being developed.
>
> It's not that PyOpenCL isn't developed any more--I just haven't found time
> to grow it beyond its embryonic stage just yet, but it'll happen. Funnily,
> a different project called PyOpenCL has cropped up [1], which is a minimal,
> but functional, ctypes-based wrapper. I've gotten in touch with its author
> regarding working together or at least changing one of the names, but so
> far haven't gotten any response. Since I personally don't like ctypes for
> big-ish wrappers, I'll continue to develop "my" PyOpenCL as-is.
> Contributions to the code would be more than welcome.
>
> [1] http://pyopencl.next-touch.com/
>
> > I see CUDA has been ahead, but for the future and the good of GPU
> > applications I believe OpenCL has to be the way, and I am afraid of
> > developing my applications in CUDA only.
>
> While OpenCL promises vendor independence, I believe that switching
> devices will still involve significant code changes, so in that sense
> you're committing yourself to a device (and hence a vendor) anyway.
> Relatedly, it appears that Nvidia is the only vendor with a credible CL
> story right now, and it's plausible that they'll keep CUDA 'ahead' in
> some sense on their own hardware--be that in terms of features or
> performance. If "just wait for a year or two and see what happens" is a
> valid choice for you, that may be what you want to do, but if you want
> to do GPU computing right now, CUDA is probably the best choice.
>
> HTH
> Andreas
>
> --
> Andreas Kloeckner
> Applied Mathematics, Brown University
> http://www.dam.brown.edu/people/kloeckner
> +1-401-648-0599
>



-- 
Alan Wilter S. da Silva, D.Sc. - CCPN Research Associate
Department of Biochemistry, University of Cambridge.
80 Tennis Court Road, Cambridge CB2 1GA, UK.
>>http://www.bio.cam.ac.uk/~awd28<<

-------------------------------------------------------


_______________________________________________
PyCUDA mailing list
[email protected]
http://tiker.net/mailman/listinfo/pycuda_tiker.net
