Karl,

I am aware that Eigen can do it for its matrices, and I am also aware that
VCL cannot do it natively. My question was this: in your example of
interfacing with Eigen, you show a VCL dense matrix interfacing with an
Eigen dense matrix. Do you have any example of interfacing an Eigen sparse
matrix with a VCL dense matrix?

Thanks and regards,
Sumit
From: Karl Rupp <[email protected]>
To: Sumit Kumar <[email protected]>
Cc: "[email protected]"
<[email protected]>
Sent: Friday, July 31, 2015 6:20 PM
Subject: Re: [ViennaCL-devel] ViennaCL reductions
Hi Sumit,
> I know that we can copy a dense Eigen matrix to a dense VCL matrix, and
> the same for sparse matrices.
> Is it possible to copy a sparse Eigen matrix to a dense Eigen matrix,
> and vice versa?
I don't know about Eigen; you would have to ask the Eigen developers.
It is not possible out-of-the-box in ViennaCL to assign a ViennaCL
sparse matrix to a ViennaCL dense matrix (and vice versa).
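One workaround is to go through a host-side sparse format and densify by hand: to my knowledge, viennacl::copy can transfer a compressed_matrix into a std::vector<std::map<unsigned int, double> > (please check the manual to confirm), and the expansion into a dense buffer is then plain C++. A minimal sketch of the host-side step:

```cpp
#include <cstddef>
#include <map>
#include <vector>

// Expand a host-side sparse matrix (row -> {column -> value}) into a
// dense row-major buffer. In a real program the sparse input would be
// filled via viennacl::copy(vcl_sparse, host_sparse) -- my reading of
// the API, so verify against the ViennaCL manual.
std::vector<double> sparse_to_dense(
    const std::vector<std::map<unsigned int, double> > &sparse,
    std::size_t cols)
{
  std::vector<double> dense(sparse.size() * cols, 0.0);
  for (std::size_t row = 0; row < sparse.size(); ++row)
    for (std::map<unsigned int, double>::const_iterator it = sparse[row].begin();
         it != sparse[row].end(); ++it)
      dense[row * cols + it->first] = it->second;  // place each nonzero
  return dense;
}
```

The reverse direction (dense to sparse) is the same loop with a threshold test deciding which entries to keep.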
Best regards,
Karli
> ------------------------------------------------------------------------
> *From:* Karl Rupp <[email protected]>
> *To:* Sumit Kumar <[email protected]>
> *Cc:* "[email protected]"
> <[email protected]>
> *Sent:* Tuesday, July 28, 2015 4:01 PM
> *Subject:* Re: [ViennaCL-devel] ViennaCL reductions
>
> Hi,
>
> > That worked. So, the final question I have is as follows:
> > viennacl::copy(vcl_C, eigen_mat);
> > is a valid statement.
> > However, if I want to copy vcl_C to some particular region of a host
> > matrix, do I have to first create a temporary host matrix and then
> > copy that host matrix into the larger matrix, or is there anything
> > direct? Something like this:
> > viennacl::copy(vcl_C, prod.block(0, j, source.rows(), bTemp.cols()));
> >
> > Otherwise the only way I can think of is this:
> > RMMatrix_Float temp(rows, cols);
> > viennacl::copy(vcl_C, temp);
> > biggerMatrix.block(0, j, rows, cols) = temp;
>
> you can either use temporaries, or create an Eigen::Map<> that wraps the
> relevant part of your existing Eigen matrix:
> http://eigen.tuxfamily.org/dox/classEigen_1_1Map.html
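The Map approach amounts to writing into a strided view of the larger matrix's storage. A stdlib-only sketch of that indexing for a row-major matrix follows; in Eigen the same view would be expressed with Eigen::Map plus an outer stride, but treat the exact template spelling as an assumption and check the linked docs:

```cpp
#include <cstddef>
#include <vector>

// Copy a rows x cols tile into the block starting at (row0, col0) of a
// larger row-major matrix with 'big_cols' columns -- the same indexing an
// Eigen::Map with an outer stride performs under the hood.
void copy_into_block(const std::vector<double> &tile,
                     std::size_t rows, std::size_t cols,
                     std::vector<double> &big, std::size_t big_cols,
                     std::size_t row0, std::size_t col0)
{
  for (std::size_t r = 0; r < rows; ++r)
    for (std::size_t c = 0; c < cols; ++c)
      big[(row0 + r) * big_cols + (col0 + c)] = tile[r * cols + c];
}
```

Whether viennacl::copy accepts such a mapped view directly depends on the Eigen/ViennaCL versions in use, so test it on a small case first.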
>
> Best regards,
> Karli
>
>
> > ------------------------------------------------------------------------
> > *From:* Karl Rupp <[email protected]>
> > *To:* Sumit Kumar <[email protected]>
> > *Cc:* "[email protected]"
> > <[email protected]>
> > *Sent:* Tuesday, July 28, 2015 2:06 AM
> > *Subject:* Re: [ViennaCL-devel] ViennaCL reductions
> >
> > Hi,
> >
> > > That's why I explicitly stated in my previous mail "Optional" :) I am
> > > one of those folks who likes the coziness of compile-time decisions
> > > and (if possible) would like to make it available in VCL. However, I
> > > am not sure if doing this is possible via the current API structure of
> > > VCL. I will look into this, as I first need to understand the
> > > intricacies of OpenCL.
> > >
> > > BTW, a previous question of mine was unanswered (or I may have
> > > overlooked the answer). Are there any equivalent functions in VCL to
> > > do the following:
> > > a.) Two matrices, when transferred to the GPU, should undergo
> > > element-wise multiplication?
> > > b.) A unary operation on a matrix that has been transferred to the
> > > GPU?
> >
> > a.) and b.) sound a lot like you are looking for element_*-functions
> >
> > http://viennacl.sourceforge.net/doc/manual-operations.html#manual-operations-blas1
> > which work for vectors and matrices alike.
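To make that concrete: the element_* functions operate entrywise. Below is a stdlib-only reference implementation of what a.) and b.) compute; the ViennaCL calls in the comments reflect my reading of the linked manual page, so treat the exact names as assumptions:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Host-side reference for the entrywise semantics of the element_*
// functions. On the device the calls would look roughly like
//   C = viennacl::linalg::element_prod(A, B);  // a.) entrywise product
//   C = viennacl::linalg::element_exp(A);      // b.) entrywise unary op
// (names as listed on the linked manual page).
std::vector<double> element_prod_ref(const std::vector<double> &a,
                                     const std::vector<double> &b)
{
  std::vector<double> c(a.size());
  for (std::size_t i = 0; i < a.size(); ++i)
    c[i] = a[i] * b[i];  // multiply matching entries
  return c;
}

std::vector<double> element_exp_ref(const std::vector<double> &a)
{
  std::vector<double> c(a.size());
  for (std::size_t i = 0; i < a.size(); ++i)
    c[i] = std::exp(a[i]);  // apply the unary op entrywise
  return c;
}
```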
> >
> > Best regards,
> > Karli
> >
> >
> >
> > >
> > > ------------------------------------------------------------------------
> > > *From:* Karl Rupp <[email protected]>
> > > *To:* Sumit Kumar <[email protected]>
> > > *Cc:* "[email protected]"
> > > <[email protected]>
> > > *Sent:* Monday, July 27, 2015 9:13 PM
> > > *Subject:* Re: [ViennaCL-devel] ViennaCL reductions
> > >
> > > Hi Sumit,
> > >
> > > > Agreed, the names can be returned in any order. As you are using
> > > > CMake, would it be possible to:
> > > > a.) Write a small helper script using CMake that lists what devices
> > > > the user has on his/her machine that are OpenCL compliant?
> > >
> > > Exactly this is provided by running viennacl-info:
> > > $> mkdir build && cd build
> > > $> cmake ..
> > > $> make viennacl-info
> > > $> examples/tutorial/viennacl-info
> > >
> > >
> > > > b.) Make VCL select the appropriate device so that these issues of
> > > > context selection etc can be avoided?
> > >
> > > In most cases the first OpenCL device returned is the most
> > > appropriate device for running the computations. If you know of any
> > > better strategy for picking the default device, please let us know
> > > and we are happy to adopt it. :-)
> > >
> > >
> > > > c.) Of course, this would be purely optional and only available if
> > > > the user wants to pre-select the device before writing any VCL code!
> > >
> > > How is that different from what is possible now? What is the value of
> > > making a runtime-decision (current ViennaCL code) a compile-time
> > > decision (CMake)?
> > >
> > > Imagine you are building a big application with a GUI and other
> > > bells and whistles. In such a scenario you would like the user to
> > > select the compute device through a dialog at runtime rather than
> > > asking the user to recompile the whole application just to change the
> > > default device. Our current mindset is to incrementally move away
> > > from compile-time decisions in cases where they are not needed.
> > >
> > > Best regards,
> > > Karli
> > >
> > > >
> > > > ------------------------------------------------------------------------
> > > > *From:* Karl Rupp <[email protected]>
> > > > *To:* Sumit Kumar <[email protected]>
> > > > *Cc:* "[email protected]"
> > > > <[email protected]>
> > > > *Sent:* Monday, July 27, 2015 7:03 PM
> > > > *Subject:* Re: [ViennaCL-devel] ViennaCL reductions
> > > >
> > > > Hi Sumit,
> > > >
> > > > > Thanks for the update. I will check it out. As for the second
> > > > > one, indeed that's what I ended up doing:
> > > > >
> > > > > // Get some context information for OpenCL-compatible GPU devices
> > > > > viennacl::ocl::platform pf;
> > > > > std::vector<viennacl::ocl::device> devices =
> > > > >   pf.devices(CL_DEVICE_TYPE_GPU);
> > > > > // If no GPU devices are found, we select the CPU device
> > > > > if (devices.size() > 0)
> > > > > {
> > > > >   // Often there is an integrated GPU alongside the CPU. We would
> > > > >   // like to avoid using that GPU and search for a discrete GPU
> > > > >   // instead (falling back to the first device if only one GPU
> > > > >   // exists, to avoid an out-of-range access).
> > > > >   viennacl::ocl::setup_context(0, devices[devices.size() > 1 ? 1 : 0]);
> > > > > }
> > > > > else
> > > > > {
> > > > >   devices = pf.devices(CL_DEVICE_TYPE_CPU);
> > > > >   viennacl::ocl::setup_context(0, devices[0]);
> > > > > }
> > > >
> > > > Keep in mind that devices may be returned in arbitrary order. I've
> > > > already seen cases where the discrete GPU is returned as the first
> > > > device. (Unfortunately I don't know of a robust and general way of
> > > > dealing with such cases).
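One heuristic worth sketching for the arbitrary-order problem: discrete GPUs typically report CL_DEVICE_HOST_UNIFIED_MEMORY as false, so preferring a device without host-unified memory often finds the discrete card. The sketch below uses a plain struct standing in for the real clGetDeviceInfo query, so the field name is hypothetical and this is not ViennaCL API:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical device descriptor; in real code the flag would come from
// clGetDeviceInfo(..., CL_DEVICE_HOST_UNIFIED_MEMORY, ...).
struct DeviceInfo {
  bool host_unified_memory;  // typically true for integrated GPUs
};

// Return the index of the first device without host-unified memory
// (likely a discrete GPU); fall back to device 0 otherwise.
std::size_t pick_discrete(const std::vector<DeviceInfo> &devices)
{
  for (std::size_t i = 0; i < devices.size(); ++i)
    if (!devices[i].host_unified_memory)
      return i;
  return 0;
}
```

It is only a heuristic: some runtimes report the flag inconsistently, which matches the caveat above.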
> > > >
> > > >
> > > > > Here is my kludge. Is there any "real" example of data
> > > > > partitioning and using multiple GPUs?
> > > >
> > > > No. PCI-Express latencies greatly narrow the sweet spot of real
> > > > performance gains for most algorithms in ViennaCL. Only algorithms
> > > > relying heavily on matrix-matrix multiplications are likely to show
> > > > a good benefit from multiple GPUs. As a consequence, we are
> > > > currently keeping our focus on single GPUs. There is some multi-GPU
> > > > support available through ViennaCL as a plugin for PETSc, but that
> > > > focuses on iterative solvers and does not cover any eigenvalue
> > > > routines yet.
> > > >
> > > > Best regards,
> > > > Karli
------------------------------------------------------------------------------
_______________________________________________
ViennaCL-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/viennacl-devel