Hi Karli,

Thanks for that great e-mail.

I'd be glad if you could send me the randomized SVD paper; I'll take a look
at it this weekend. As for the 'standard' SVD: yes, I'll take a look at the
OpenCL part. For the generalized SVD, is there an implementation of the N=2
case in ViennaCL?
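In the meantime, here is roughly what I understand a randomized SVD to look
like, as a NumPy sketch in the style of Halko, Martinsson and Tropp. The
function name and defaults are my own guesses, not anything from the slides
or from ViennaCL:

```python
import numpy as np

def randomized_svd(A, k, n_oversample=10, n_iter=2, seed=0):
    """Approximate the top-k singular triplets of A via random projection.

    Sketch only -- parameter choices are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    p = min(n, k + n_oversample)
    # Sketch the range of A with a Gaussian test matrix.
    Y = A @ rng.standard_normal((n, p))
    # A few power iterations sharpen the approximation for slowly
    # decaying spectra.
    for _ in range(n_iter):
        Y = A @ (A.T @ Y)
    # Orthonormal basis of the sketched range, then a small dense SVD.
    Q, _ = np.linalg.qr(Y)
    B = Q.T @ A                                   # p x n
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k, :]

# On an exactly rank-5 matrix the rank-5 approximation is near-exact.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
U, s, Vt = randomized_svd(A, k=5)
err = np.linalg.norm(A - U @ (s[:, None] * Vt)) / np.linalg.norm(A)
```

If this is the right algorithm, the backends would only need a dense
matrix-matrix product, a QR factorization and a small host-side SVD, which
is presumably why it maps well onto existing ViennaCL functionality.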

I already forked both viennacl-dev and pyviennacl-dev. I will re-fork them
on Monday and start from there. Also, I'm comfortable with git, so pull
requests through GitHub should not be a problem.
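On the N=2 generalized SVD question above: in the meantime I have been
sanity-checking generalized singular values on the CPU through the
equivalent pencil A^T A v = lambda B^T B v. This is just a NumPy sketch
under the assumption that B has full column rank, not ViennaCL code:

```python
import numpy as np

def gsvd_values(A, B):
    """Generalized singular values of the pair (A, B) (the N=2 GSVD).

    They are the square roots of the eigenvalues of the pencil
    A^T A v = lambda B^T B v; assumes B^T B is invertible.
    """
    lam = np.linalg.eigvals(np.linalg.solve(B.T @ B, A.T @ A))
    lam = np.sort(np.real(lam))[::-1]   # descending; drop rounding noise
    return np.sqrt(np.maximum(lam, 0.0))

# Check against a pair with known values: A = diag(c) X, B = diag(s) X
# has generalized singular values c_i / s_i.
rng = np.random.default_rng(0)
n = 4
X = rng.standard_normal((n, n)) + 2 * np.eye(n)  # well-conditioned mixing
c = np.array([0.9, 0.8, 0.6, 0.3])
s = np.sqrt(1.0 - c**2)
vals = gsvd_values(np.diag(c) @ X, np.diag(s) @ X)
```

This only recovers the values, not the factor matrices, but it should be
enough to validate a GPU implementation against.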

Regards,
Aanchan

On Fri, Oct 31, 2014 at 5:07 PM, Karl Rupp <r...@iue.tuwien.ac.at> wrote:

> Hi Aanchan,
>
>> Just to give you an intro:
>
>> I work in speech recognition, with C/C++ toolkits like HTK and Kaldi. We
>> have added some of our own libraries to these codebases, maintained
>> internally at McGill. I am also familiar with Python, Perl, Bash and
>> CUDA. I routinely use a Python wrapper called Gnumpy (wrapped around a
>> CUDA matrix library called CUDAMat) to train neural networks on GPU
>> boards, and linear algebra is central to my work.
>>
>
> Ah, I see, thanks for the information, all good stuff :-)
>
>
>> I had a very nice conversation with Philippe on IRC, who mentioned that
>> there are some issues with GEMV in PyViennaCL. Since I am familiar with
>> Gnumpy I could look into that.
>>
>
> Hmm, that might be pretty hard to debug for a start, since it connects the
> most high-level layer with the most low-level functionality. Also, it seems
> like Philippe found a fix for this shortly after your chat.
>
>
>> Philippe also mentioned that the current
>
>> implementation of the SVD is a bit slow. I could profile that and start
>> from there; if necessary, I could start on a newer implementation and see
>> if I can do any better.
>>
>
> This is actually a very good starting point. We get a few user requests on
> a faster SVD, and I think there is quite a bit of optimization potential
> for the existing OpenCL implementation. A bachelor student will also port
> this to CUDA and OpenMP in the next months, so if you want to look at the
> OpenCL part now, it would certainly help.
>
>
>> I was also recently working with the Generalized SVD. The GSVD was
>> developed in the 1970s for the N=2 case. Recently, a group at Utah has
>> extended it to three or more matrices: http://www.alterlab.org/HO_GSVD/.
>> The N=2 case is based on the QR decomposition; for N>2, a QR-based method
>> is not as straightforward. But given their current (non-QR) solution, I
>> could try to implement that.
>>
>
> This also sounds like a great project to start with. Although I only
> looked at the paper briefly, the algorithm looks sufficiently compact. More
> importantly, you also seem to have direct use for the implementation, which
> is a good thing maintenance-wise :-)
>
>
>> Do these sound like good things to start with?
>>
>
> Generalized SVD and 'standard' SVD are certainly good things to look at if
> you're familiar with the linear algebra behind them. I can also send you
> slides about a randomized SVD algorithm, which I know is fairly easy to
> implement for all three compute backends with the existing functionality in
> ViennaCL.
>
> Are you fine with issuing pull requests through GitHub for a start? If you
> want to start off with a stable state, I suggest you start (i.e. fork)
> early next week when the 1.6.0 release is out. :-)
>
> Best regards,
> Karli
>
>
------------------------------------------------------------------------------
_______________________________________________
ViennaCL-devel mailing list
ViennaCL-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/viennacl-devel
