On Thu, Sep 13, 2018 at 2:33 PM Manuel Valera wrote:
> So, from what I get here, the round-robin assignment to each GPU device is
> done automatically by PETSc from mapping the system? Or do I have to pass
> a command-line argument to do that?
>
It is automatic.
Thanks,
Matt
> Thanks,
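For reference, a minimal sketch of what this looks like in practice (the application name `./myapp` is hypothetical; `-vec_type cuda` and `-mat_type aijcusparse` are the standard PETSc runtime options for CUDA-backed vectors and matrices):

```shell
# Launch 4 MPI ranks on a node with 4 GPUs; PETSc assigns a
# GPU to each rank automatically (round robin over the visible
# devices), so no extra device-selection flag is needed.
mpirun -n 4 ./myapp -vec_type cuda -mat_type aijcusparse -log_view
```

The `-log_view` output can help confirm the GPUs are actually being exercised.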
So, from what I get here, the round-robin assignment to each GPU device is
done automatically by PETSc, from mapping the system? Or do I have to pass
a command-line argument to do that?
Thanks,
On Wed, Sep 12, 2018 at 2:38 PM, Matthew Knepley wrote:
On Wed, Sep 12, 2018 at 5:31 PM Manuel Valera wrote:
> Ok then, how can I try getting more than one GPU with the same number of
> MPI processes?
>
I do not believe we handle more than one GPU per MPI process. Is that what
you are asking?
Thanks,
Matt
> Thanks,
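In other words, since PETSc uses one GPU per MPI process, the way to engage more GPUs is to launch more ranks. A sketch, assuming one process per GPU (the binary name `./myapp` is hypothetical):

```shell
# 1 rank  -> 1 GPU
mpirun -n 1 ./myapp -vec_type cuda -mat_type aijcusparse

# 2 ranks -> 2 GPUs (one per rank, assigned automatically)
mpirun -n 2 ./myapp -vec_type cuda -mat_type aijcusparse
```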
Ok then, how can I try getting more than one GPU with the same number of
MPI processes?
Thanks,
On Wed, Sep 12, 2018 at 2:20 PM, Matthew Knepley wrote:
On Wed, Sep 12, 2018 at 5:13 PM Manuel Valera wrote:
> Hello guys,
>
> I am working on a multi-GPU cluster and I want to request 2 or more GPUs.
> How can I do that from PETSc? Evidently mpirun -n # is for requesting
> processors, but what if I want to use one MPI process but several GPUs
> instead?
Hello guys,
I am working on a multi-GPU cluster and I want to request 2 or more GPUs.
How can I do that from PETSc? Evidently mpirun -n # is for requesting
processors, but what if I want to use one MPI process but several GPUs
instead?
Also, I understand the GPU handles the linear system solver