We have three problems here (at least).
1) I use PETSc's random vectors to compute an eigen estimate.
2) I use srand() to mix up the graph ordering for the MIS.
I could write a hash-function-like thing that takes the global ID and
generates a (bad) random number. These random numbers do not have
Massimiliano,
You should not be getting slower times on the GPU. I tried with hardware
similar to what you mention, running SVD on a dense square matrix stored as
aij, and also with sparse rectangular matrices. In all cases, executions on the
GPU were roughly 2x faster than on the CPU. Are yo
Hi Matt,
On 08/10/2015 08:39 AM, Matthew Knepley wrote:
On Mon, Aug 10, 2015 at 9:55 AM, Dominic Meiser mailto:dmei...@txcorp.com>> wrote:
Hi Matt,
On 06/23/2015 11:24 AM, Matthew Knepley wrote:
There could be a bug with the calculation of chunksize. I will run your
On Mon, Aug 10, 2015 at 9:55 AM, Dominic Meiser wrote:
> Hi Matt,
>
>
> On 06/23/2015 11:24 AM, Matthew Knepley wrote:
>
>> There could be a bug with the calculation of chunksize. I will run your
>> example as soon as I can.
>>
>
> Have you had a chance to run the example? Thanks.
Unfortunately
Hi Matt,
On 06/23/2015 11:24 AM, Matthew Knepley wrote:
There could be a bug with the calculation of chunksize. I will run your
example as soon as I can.
Have you had a chance to run the example? Thanks.
Dominic
--
Dominic Meiser
Tech-X Corporation
5621 Arapahoe Avenue
Boulder, CO 80303
USA
> -Original Message-
> From: Karl Rupp [mailto:r...@iue.tuwien.ac.at]
> Sent: 10 August 2015 14:13
> To: Leoni, Massimiliano
> Cc: slepc-ma...@upv.es; petsc-dev@mcs.anl.gov
> Subject: Re: [petsc-dev] [GPU - slepc] Hands-on exercise 4 (SVD) not working
> with GPU and default configurations
On Mon, Aug 10, 2015 at 7:47 AM, Leoni, Massimiliano <
massimiliano.le...@rolls-royce.com> wrote:
> > -Original Message-
> > From: Karl Rupp [mailto:r...@iue.tuwien.ac.at]
> > Sent: 10 August 2015 11:54
> > To: Leoni, Massimiliano
> > Cc: slepc-ma...@upv.es; petsc-dev@mcs.anl.gov
> > Subject: Re: [petsc-dev] [GPU - slepc] Hands-on exercise 4 (SVD) not working
> > with GPU and default configurations
Hi,
>> The use of aijcusp instead of a dense matrix type certainly adds to
>> the issue.
I know, but I couldn't find a dense GPU type in the PETSc manual; please
correct me if there is one.
There is indeed no dense GPU matrix type in PETSc (yet).
Please send the output of -log_summary so tha
> -Original Message-
> From: Karl Rupp [mailto:r...@iue.tuwien.ac.at]
> Sent: 10 August 2015 11:54
> To: Leoni, Massimiliano
> Cc: slepc-ma...@upv.es; petsc-dev@mcs.anl.gov
> Subject: Re: [petsc-dev] [GPU - slepc] Hands-on exercise 4 (SVD) not working
> with GPU and default configurations
Hi Dominic,
> With the current implementation the following can happen (v is of type
VECCUSP):
- Originally data on GPU, v.valid_GPU_array == PETSC_CUSP_GPU
- a call to VecPlaceArray(v, arr) replaces the data on the host and sets
v.valid_GPU_array = PETSC_CUSP_CPU. Note that the GPU data does not get stashed
Hi Massimiliano,
On 08/10/2015 12:45 PM, Leoni, Massimiliano wrote:
Good, it is running now, but performance is really poor: I tried on 3
nodes with 2 GPUs and 12 CPU threads each, and MPI+CUDA performs much worse
than pure MPI.
I have a few thoughts on why this might be happening:
·My problem h
Good, it is running now, but performance is really poor: I tried on 3 nodes
with 2 GPUs and 12 CPU threads each, and MPI+CUDA performs much worse than pure MPI.
I have a few thoughts on why this might be happening:
· My problem has dense matrices but on the GPU I use -mat_type aijcusp
·