Hi,

compyte is not ready for public use. The file structure will change
later, when we isolate the dependency on the Python lib.

See more comments inline in your post.

2011/6/18 Bogdan Opanchuk <manti...@gmail.com>:
> Hello,
>
> I finally have the time to contribute something to compyte, so I had a
> look at its sources. As far as I understand, at the moment it has:
> - sources for GPU platform-dependent memory operations (malloc()/free()/...)
> - sources for array class, which uses abstract API of these operations
> - some high-level Python code like scan.py with generalized kernels
>
> So I have a few questions about this layout:
> 1. It does not have its own setup script; is it supposed to be a part
> of PyCuda/PyOpenCL and get compiled with them or is it just a
> temporary solution?

Currently there is no good build system for this project, as you saw.
What I currently have in mind is that compyte should compile/install
itself and not ask the other projects to do so. But this is not done
yet. If you want and have the time to do it, it would be a good
contribution.

> In the former case, the second question:
> 2. Why was it decided to keep low-level memory operations in compyte?
> They require platform-specific makefiles (and the one currently
> committed to repo is quite specific and belongs to Frederic, as I
> understand from the paths inside). The only reason I can see is to
> keep memory operations API inside the single module, but in this case
> we will have to copy specialized building code from setup scripts of
> PyCuda/PyOpenCL, which, I think, is a more serious violation of DRY.
> Memory API is small and unlikely to change much; we can create
> separate modules in PyCuda/PyOpenCL and pass pointers to memory
> functions to compyte using capsules.

compyte should be usable by tools other than PyCUDA and PyOpenCL, so
it needs to have its own memory operations. I suppose that Andreas
will remove them from PyCUDA/PyOpenCL and call the compyte version
when it is ready.

> 3. Moreover, we can export some simple memory API in each of
> PyCuda/PyOpenCL (something like an opaque Buffer object and memory
> functions that use it, like it's done in PyOpenCL) for people who want
> some fine tuning and do not want to use our general ndarray-like
> object. In fact, compyte developers are such people too. There can be
> some problems, of course, if you are inclined to write the ndarray
> module in C (is it really necessary?), but they are solvable.

Someone who just wants a buffer could allocate a simple vector and use
it however they want. Do you see a problem with that? Did I miss
something? They won't need to use the functions that we provide.
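
For example (just a sketch using today's PyCUDA API, nothing
compyte-specific), someone who only needs a raw buffer can already do
this and never touch our array class:

    import numpy as np
    import pycuda.autoinit          # creates a context on the default device
    import pycuda.driver as cuda

    a = np.random.randn(1024).astype(np.float32)

    buf = cuda.mem_alloc(a.nbytes)  # plain device allocation, no array object
    cuda.memcpy_htod(buf, a)        # host -> device
    # ... pass `buf` to any kernel you like ...
    cuda.memcpy_dtoh(a, buf)        # device -> host
    buf.free()

The same kind of thing is possible with pyopencl.Buffer on the OpenCL
side.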

What problem do you see with having the base object and functions in
C? One of the goals of compyte is to be usable by people who don't use
Python, so we need something in C. The first phase I'm working on is
porting the Theano code that is in C. But we don't plan to do
everything in C; in fact, functionality that is not in Theano will
probably be Python-generated code, to ease development.
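
To give an idea of what I mean by "Python-generated code" (a toy
sketch using plain PyCUDA, not the actual compyte design): the Python
side builds the kernel source as a string and compiles it at run time,
which is much faster to iterate on than hand-written C.

    import numpy as np
    import pycuda.autoinit
    import pycuda.driver as cuda
    from pycuda.compiler import SourceModule

    def make_axpy(ctype="float"):
        # Build the kernel source in Python, then compile it at run time.
        src = """
        __global__ void axpy(%(t)s *y, const %(t)s *x, %(t)s alpha, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n)
                y[i] += alpha * x[i];
        }
        """ % {"t": ctype}
        return SourceModule(src).get_function("axpy")

    n = 1024
    x = np.random.randn(n).astype(np.float32)
    y = np.random.randn(n).astype(np.float32)

    x_gpu = cuda.mem_alloc(x.nbytes)
    y_gpu = cuda.mem_alloc(y.nbytes)
    cuda.memcpy_htod(x_gpu, x)
    cuda.memcpy_htod(y_gpu, y)

    axpy = make_axpy("float")
    axpy(y_gpu, x_gpu, np.float32(2.0), np.int32(n),
         block=(256, 1, 1), grid=(n // 256, 1))

    cuda.memcpy_dtoh(y, y_gpu)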

>
> Hope this makes sense. In any case, at the moment I am mostly
> interested in the answer to the first question, because it will remove
> some uncertainty in my current understanding.

If you have any other questions/comments, don't hesitate to ask.


Fred
