Hello Andreas, Frederic,

2011/6/21 Andreas Kloeckner <[email protected]>:
> On Mon, 20 Jun 2011 09:40:02 -0400, Frédéric Bastien <[email protected]> wrote:

>> Currently there is not a good compilation system for this project as
>> you saw. What I currently have in mind is that it should
>> compile/install itself and not ask the other project to do so. But
>> this is not done yet. If you want and have time to do it, it would be
>> a good contribution.
>
> I actually disagree with that--I don't particularly see why compyte
> shouldn't use the surrounding project's (i.e. PyCUDA/PyOpenCL's)
> distutils scripts. No need to waste time maintaining another build
> infrastructure if the package is likely only installed as part of a
> surrounding project.

Yes, that's what I meant.

I took the liberty of trimming the "visions" of compyte from Andreas and
Frederic, to make them more readable:

> I personally used to see compyte as a common infrastructure from which
> PyCUDA and PyOpenCL can derive actually working array classes. In any
> case, array functionality will not be removed from either
> package. Whatever you do with the existing array types will continue to
> work. More functionality will become available with time.

> I actually do see value in providing something like Numpy's buffer
> interface, but aimed at CL/CUDA.

> Beyond data access and localized speed hotspots, I don't see much of a
> need for C.

>> What problem do you see to have the base object and function in C?
>> One of the goal of compyte is to be usable by people who don't use
>> python.
>
>> So we need something in C. The first phase that I'm doing is to port
>> Theano code that is in C. But we don't plan to do all in C. In fact,
>> functionality not in Theano will probably be in Python-generated code
>> to ease development.

And now I see where my problem with understanding the current
architecture lies: I had a somewhat different vision in mind. Compyte
was supposed to contain Python-wrapped Cuda/CL kernels (pyfft, randoms,
scan, reduce, etc.) and an (ndarray-like) array class. PyCUDA and
PyOpenCL provide low-level access to Cuda/CL functions (well, that's
what they do now); GPUArray is moved from PyCUDA to compyte and
generalized to work with PyOpenCL too. Compyte is a separate package
and has its own setup script; all code that needs to be compiled
against the Cuda/CL libraries lives in PyCUDA/PyOpenCL (most of it is
already there). Compyte itself contains only Python code and (if really,
really necessary) C code which uses Cuda/CL functions passed from
PyCUDA/PyOpenCL in capsules.
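To make the capsule idea concrete, here is a minimal sketch of the
mechanism: the surrounding project wraps a raw pointer in a PyCapsule,
and the consumer (compyte's C code) retrieves it by name via
PyCapsule_GetPointer. The capsule name "compyte.cuMemAlloc" and the use
of ctypes rather than real C code are purely illustrative assumptions,
not part of any existing compyte API:

```python
import ctypes

# Bind the CPython capsule API through ctypes (in real life the
# consumer side would be C code calling these functions directly).
capsule_new = ctypes.pythonapi.PyCapsule_New
capsule_new.restype = ctypes.py_object
capsule_new.argtypes = [ctypes.c_void_p, ctypes.c_char_p, ctypes.c_void_p]

capsule_get = ctypes.pythonapi.PyCapsule_GetPointer
capsule_get.restype = ctypes.c_void_p
capsule_get.argtypes = [ctypes.py_object, ctypes.c_char_p]

# Stand-in for a driver-API function pointer that PyCUDA would supply.
buf = ctypes.c_int(0)
fake_ptr = ctypes.addressof(buf)

# Producer side (PyCUDA/PyOpenCL): wrap the pointer in a named capsule.
cap = capsule_new(fake_ptr, b"compyte.cuMemAlloc", None)

# Consumer side (compyte): unwrap it by the agreed-upon name.
ptr = capsule_get(cap, b"compyte.cuMemAlloc")
assert ptr == fake_ptr
```

The name acts as a lightweight type check: unwrapping with a different
name raises an error, so mismatched capsules fail loudly instead of
handing over the wrong pointer.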

I do not really understand why we would want to provide any
functionality for non-Python programs; they have their own Cuda/CL
wrappers which make things much easier. Moreover, although the kernels
are general and can, in theory, be used by code in any language, they
are not standalone: they have accompanying Python code which renders
templates, sets launch parameters, provides a convenient API, and so
on. Are you planning to translate all of this to C too, or to depend
on the Python interpreter?

Best regards,
Bogdan

_______________________________________________
PyOpenCL mailing list
[email protected]
http://lists.tiker.net/listinfo/pyopencl
