Hi Charles,

On 05/26/2015 03:08 PM, Charles Determan wrote:
> Thank you Karli,
>
> The ViennaCL library does appear to be impressive and I am excited to
> begin using it. Just so I know I understand you clearly: unless I
> compile the libviennacl shared library (with its dense linear algebra
> focus) and link against it, I would need to use nvcc to compile any
> program I write with ViennaCL that intends to use the CUDA backend.
>
> Am I understanding you correctly?
Yes, this is correct.

Out of curiosity: would you prefer to have all functionality available
via libviennacl (using C functions rather than C++ template machinery)
instead?

Best regards,
Karli


> On Tue, May 26, 2015 at 7:47 AM, Karl Rupp <[email protected]
> <mailto:[email protected]>> wrote:
>
>     Hi Charles,
>
>         I am new to the ViennaCL library, and after going through some
>         of the documentation there was something I would like a bit of
>         clarification on. Could you please confirm whether the NVCC
>         compiler is indeed required to use the CUDA backend, or just
>         for the makefiles of the examples? I ask this as I have
>         compiled CUDA programs simply with g++ previously. If ViennaCL
>         does require NVCC for the CUDA backend, could someone kindly
>         explain why this is so?
>
>     ViennaCL is a C++ header-only library, which has several implications:
>       a) You can just copy & paste the source tree and get going. No
>          configuration is needed, and there is no shared library you
>          need to build first.
>       b) ViennaCL can provide neat operator overloads using expression
>          templates.
>       c) Each of your compilation units will see most (if not all) of
>          ViennaCL's sources. If you enable the CUDA backend, your
>          compiler will see all the kernels written in CUDA, hence it
>          requires you to use NVCC. This applies not only to the
>          examples, but also to your applications. We do not ship PTX
>          code and use the CUDA driver API to get around NVCC, as this
>          would be too much of a burden to maintain.
>
>     If you don't want to use NVCC, you can still interface to
>     libviennacl, which compiles part of ViennaCL's functionality into a
>     shared library. It is pretty much BLAS-like and focuses on dense
>     linear algebra for now, so sparse linear algebra is not covered yet.
>     Such an interface to a shared library is most likely how you have
>     called CUDA functionality in the past.
>
>     I hope this clarifies things. Please let me know if you have further
>     questions. :-)
>
>     Best regards,
>     Karli
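
To make the NVCC point above concrete, here is a minimal sketch (not part
of the original thread; the include path, compile line and vector sizes are
illustrative assumptions) of a translation unit with the CUDA backend
enabled. Since defining VIENNACL_WITH_CUDA causes the headers to expose
CUDA kernel code to the compiler, a file like this has to be built with
something along the lines of

    nvcc -DVIENNACL_WITH_CUDA -I/path/to/ViennaCL example.cu -o example

rather than with plain g++:

    // Minimal sketch (not from the original thread): a stand-alone program
    // that enables ViennaCL's CUDA backend. Include path and sizes are
    // illustrative assumptions.
    #define VIENNACL_WITH_CUDA   // makes the headers pull in the CUDA kernels

    #include <vector>
    #include <iostream>

    #include "viennacl/vector.hpp"
    #include "viennacl/linalg/inner_prod.hpp"

    int main()
    {
      std::vector<float> host_x(1000, 1.0f), host_y(1000, 2.0f);

      // Device-side vectors; with VIENNACL_WITH_CUDA they live in CUDA memory.
      viennacl::vector<float> x(1000), y(1000), z(1000);
      viennacl::copy(host_x, x);
      viennacl::copy(host_y, y);

      // The expression-template operator overloads dispatch to CUDA kernels,
      // which is exactly why this translation unit needs NVCC.
      z = x + 2.0f * y;

      std::cout << "inner product: "
                << viennacl::linalg::inner_prod(x, z) << std::endl;
      return 0;
    }

With the libviennacl route mentioned in the quoted reply, by contrast, one
would compile ordinary host code with g++ and link against the shared
library's BLAS-like C interface, which currently covers dense linear
algebra only.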
