Karl,

At the moment, the only 'request' has been in the issue I linked to
previously, where a user has a matrix too large to fit on one GPU.  That
case could be addressed by explicit matrix chunking, which I could perhaps
handle more simply from the R end.  It just starts to get more complex with
the solvers.  I'm not as familiar with those algorithms, so I'm not sure,
at least right now, how I would approach that.  If you have any insight
there, I would certainly appreciate it.
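
In case it helps to be concrete, below is a rough, untested sketch of the
kind of chunking I have in mind on the ViennaCL/C++ side: split C = A * B
row-wise, with each row block of A (plus its own copy of B) living in a
separate context.  The 'chunked_gemm' name is just a placeholder, the
host<->device copies are left as comments, and it assumes contexts 0 and 1
have already been bound to separate GPUs via setup_context.

// Untested sketch: row-wise chunking of C = A * B across two GPUs.
// Assumes viennacl::ocl::setup_context() has already bound contexts 0
// and 1 to different devices, and that each row block of A, a full copy
// of B, and the matching block of C fit on each device.
#define VIENNACL_WITH_OPENCL   // build with the OpenCL backend enabled
#include <cstddef>
#include "viennacl/context.hpp"
#include "viennacl/matrix.hpp"
#include "viennacl/linalg/prod.hpp"
#include "viennacl/ocl/backend.hpp"

void chunked_gemm(std::size_t rows_A, std::size_t cols, std::size_t cols_B)
{
  std::size_t rows_top = rows_A / 2;
  std::size_t rows_bot = rows_A - rows_top;

  viennacl::context ctx0(viennacl::ocl::get_context(0));
  viennacl::context ctx1(viennacl::ocl::get_context(1));

  // One row block of A plus a copy of B per device:
  viennacl::matrix<double> A0(rows_top, cols,   ctx0), B0(cols, cols_B, ctx0);
  viennacl::matrix<double> A1(rows_bot, cols,   ctx1), B1(cols, cols_B, ctx1);
  viennacl::matrix<double> C0(rows_top, cols_B, ctx0), C1(rows_bot, cols_B, ctx1);

  // ... host -> device copies of the A blocks and of B would go here ...

  C0 = viennacl::linalg::prod(A0, B0);   // runs on the device behind context 0
  C1 = viennacl::linalg::prod(A1, B1);   // runs on the device behind context 1

  // ... copy C0 and C1 back to the host and stack them to form C ...
}

Whether that actually pays off for GEMM depends on the cost of shipping B
to both devices; it is the solver side where I do not yet see how to
decompose things cleanly.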

Regards,
Charles

On Thu, Apr 28, 2016 at 11:59 AM, Karl Rupp <r...@iue.tuwien.ac.at> wrote:

> Hi Charles,
>
>> Thanks for getting back, there isn't anything else at the moment.  The
>> issue was needing to use setup_context to explicitly assign the devices.
>>
>
> ok, thanks for reporting.
>
>> However, since I have your attention, is there any way to use multiple
>> GPUs on a single job (e.g. GEMM)?
>>
>> Let's say I have two matrices that are too large for one GPU but will
>> fit on both my GPUs.  If I create ONE context with BOTH GPUs and I
>> assign each matrix to one GPU, would the normal GEMM workflow function
>> normally?
>>
>
> No, currently this is not implemented. You would have to take care of all
> the data decomposition yourself. The OpenCL semantics of how memory buffers
> are managed don't help either; in the end, one is left with all the data
> juggling.
>
> I don't have a long-term view of whether it is worth implementing such
> multi-GPU support. The (imho) best way to deal with multiple GPUs is via
> MPI, where one has to deal with distributed memory anyway. As such, using
> ViennaCL via PETSc gives you just that for sparse matrices, solvers, etc.
> At the same time, I understand that MPI is not always an option, for
> example in your context. It would be very helpful to know how important
> (vs. just "nice to have") such 'native' multi-GPU support is for you and
> users of gpuR.
>
> Best regards,
> Karli
>
>
>
>
>> On Thu, Apr 28, 2016 at 11:12 AM, Karl Rupp <r...@iue.tuwien.ac.at> wrote:
>>
>>     Hi Charles,
>>
>>         I am trying to set up a list of contexts whereby each context
>>         represents one platform and one device.  I was thinking the
>>         function found here
>>         (https://github.com/cdeterman/gpuR/blob/develop/src/context.cpp),
>>         starting at line 108 (listContexts), would work.
>>
>>         However, as seen in this issue
>>         (https://github.com/cdeterman/gpuR/issues/9), this is not the
>>         case.  For some reason, for a user with two AMD GPUs, iterating
>>         through the platforms and devices with one context id per
>>         iteration (using 'set_context_platform_index' and
>>         'get_context(id).switch_device(gpu_idx)') results in only the
>>         first GPU being recognized.
>>
>>         Perhaps I have made some mistake in how these context functions
>>         work?  I can confirm access to the second GPU when referring
>>         directly to the 'platforms' object, but not with respect to a
>>         context.  Any insight would be appreciated.
>>
>>
>>     I see that the referenced issue on GitHub seems to be resolved
>>     (sorry for not being able to answer sooner). Is there anything left
>>     that I should look into?
>>
>>     Best regards,
>>     Karli
>>
>>
>>
>
