Thanks for all of your responses. If I may trouble you with one further
related question: are there any examples of compiling a program that uses
the CUDA backend?
For example, a cpp file
main.cpp
#define VIENNACL_WITH_CUDA // if I read the docs correctly
#include "viennacl/vector.hpp"
// my awesome gpu code
How would I compile this program? I can see the examples in the docs that
use CMake, but what about a single file? I am trying to understand the
CMake files (I am not very familiar with CMake myself), but I can't seem to
sort out where nvcc comes into play with these headers, as opposed to
normal, brute-force CUDA, whereby nvcc would be used on the .cu file and
the result then linked to the main program with g++.
Regards,
Charles
On Tue, May 26, 2015 at 8:56 AM, Karl Rupp <r...@iue.tuwien.ac.at> wrote:
>
>> My interest comes from my development of additional programs that would
>> utilize ViennaCL. I would like them to be platform independent, so the
>> prospect of a header-only library is very exciting to me. I definitely
>> prefer the C++ template format to the C functions, so that is fine. The
>> less linking and the fewer shared objects to be created, the easier the
>> program will be for future users to install and begin using (future
>> users will likely not be heavy OpenCL or even C++ programmers). In a
>> perfect world, I would just like to include the header for the
>> respective backend (OpenCL or CUDA) and compile the program.
>>
>
> I see - our imaginations of a 'perfect world' are pretty similar when it
> comes to computing. ;-)
>
> (Note that with ViennaCL you don't have to include different headers for
> the different compute backends yourself: You just
> #include "viennacl/vector.hpp"
> and you get all the operations in a backend-agnostic way.)
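> For illustration, a minimal backend-agnostic sketch might look like this
> (assuming the ViennaCL headers are on the include path; names and sizes
> are arbitrary):

```cpp
#include "viennacl/vector.hpp"
#include "viennacl/linalg/inner_prod.hpp"

int main() {
  // The same code compiles for whichever backend is enabled via the
  // VIENNACL_WITH_* switches; nothing backend-specific appears here.
  viennacl::vector<float> x(1000), y(1000);
  // ... fill x and y, e.g. via viennacl::copy() from host vectors ...
  float s = viennacl::linalg::inner_prod(x, y);  // dispatched to the active backend
  (void)s;  // silence unused-variable warnings in this sketch
  return 0;
}
```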
>
>
>> That said, I realize this scenario is very unlikely. However, if a user
>> has an NVIDIA GPU, they would also need to have the driver and SDK
>> installed to use it appropriately. So perhaps my next best hope would be
>> to just include the header for a given CUDA function and then link to
>> the original CUDA shared library (which would/should already be
>> installed in a standard location on the given OS).
>>
>> What are your thoughts on this scenario?
>>
>
> Currently you have to specify at compile time which backends should be
> available. That is, you have to explicitly enable OpenMP, OpenCL, or CUDA
> via the respective switches. One common way to do this is to pass e.g.
> "-DVIENNACL_WITH_OPENCL" to your compiler.
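> For a single translation unit main.cpp, the invocations might look
> roughly as follows (include paths are placeholders; note that the CUDA
> case must be compiled by nvcc, because the unit then contains CUDA
> kernels):

```shell
# OpenMP backend: plain host compiler with OpenMP enabled
g++ -DVIENNACL_WITH_OPENMP -fopenmp -I/path/to/ViennaCL main.cpp -o my_app

# OpenCL backend: any C++ compiler, linked against the OpenCL library
g++ -DVIENNACL_WITH_OPENCL -I/path/to/ViennaCL main.cpp -lOpenCL -o my_app

# CUDA backend: nvcc compiles the unit; -x cu treats the .cpp file as CUDA
nvcc -DVIENNACL_WITH_CUDA -x cu -I/path/to/ViennaCL main.cpp -o my_app
```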
>
> While such a static selection at compile time is common practice in
> scientific computing, it doesn't follow the 'perfect world' idea: Depending
> on the enabled computing backends, different libraries (like libOpenCL.so
> or the CUDA runtime) need to be available at the client machine. This
> imposes limitations when deploying binaries, for example if you want to
> provide ready-to-run binaries for your application built on top of ViennaCL.
>
> A possible improvement for ViennaCL would be to switch to a plugin
> architecture similar to web browsers: ViennaCL could then be shipped and
> run with a minimal set of dependencies and dynamically load additional
> compute backends (OpenCL, CUDA, etc.) at runtime (either from a shared
> library on the file system, or by simply enabling/disabling internal
> backends). This could even be made interactive through a plugin loader.
> Such a plugin system might be too 'radical' with respect to the way
> libraries are used in scientific computing, so I'm still hesitant about
> whether it would be worth the effort. Any input on this is of course
> appreciated ;-)
>
> Best regards,
> Karli
>
>
>
>> On Tue, May 26, 2015 at 8:14 AM, Karl Rupp <r...@iue.tuwien.ac.at> wrote:
>>
>> Hi Charles,
>>
>> On 05/26/2015 03:08 PM, Charles Determan wrote:
>>
>> Thank you Karli,
>>
>> The ViennaCL library does appear to be impressive, and I am excited to
>> begin using it. Just so I know I understand you clearly: unless I
>> compile the libviennacl shared library (with its dense linear algebra
>> focus) and link against it, I would need to use nvcc to compile any
>> program I write with ViennaCL that intends to use the CUDA backend.
>>
>> Am I understanding you correctly?
>>
>>
>> Yes, this is correct.
>>
>> Out of curiosity: Would you prefer to have all functionality
>> available via libviennacl (using C-functions rather than C++
>> template stuff) instead?
>>
>> Best regards,
>> Karli
>>
>>
>> On Tue, May 26, 2015 at 7:47 AM, Karl Rupp <r...@iue.tuwien.ac.at>
>> wrote:
>>
>> Hi Charles,
>>
>>
>> > I am new to the ViennaCL library. After going through some of the
>> > documentation, there is something on which I would like a bit of
>> > clarification: could you please confirm whether the NVCC compiler is
>> > indeed required to use the CUDA backend, or only for the makefiles of
>> > the examples? I ask this as I have previously compiled CUDA programs
>> > simply with g++. If ViennaCL does require NVCC for the CUDA backend,
>> > could someone kindly explain why this is so?
>>
>>
>> ViennaCL is a C++ header-only library, which has several implications:
>> a) you can just copy & paste the source tree and get going. No
>>    configuration is needed; there is no shared library you need to
>>    build first.
>> b) ViennaCL can provide neat operator overloads using expression
>>    templates.
>> c) each of your compilation units will see most (if not all) of
>>    ViennaCL's sources. If you enable the CUDA backend, your compiler
>>    will see all the kernels written in CUDA, hence it requires you to
>>    use NVCC. This applies not only to the examples, but also to your
>>    applications. We do not ship PTX code and use the CUDA driver API to
>>    get around NVCC, as this would be too much of a burden to maintain.
>>
>> If you don't want to use NVCC, you can still interface with
>> libviennacl, which compiles part of ViennaCL's functionality into a
>> shared library. It is pretty much BLAS-like and focuses on dense linear
>> algebra for now, so sparse linear algebra is not covered yet. Such an
>> interface to a shared library is most likely how you've called CUDA
>> functionality in the past.
>>
>> I hope this clarifies things. Please let me know if you
>> have further
>> questions. :-)
>>
>> Best regards,
>> Karli
>>
>>
>>
>>
>>
>
_______________________________________________
ViennaCL-devel mailing list
ViennaCL-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/viennacl-devel