I found a CUDAVersion.cu of STREAMS and tried to build it. I got it to
compile manually with:
nvcc -o CUDAVersion.o -ccbin pgc++
-I/autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/pgi-19.4/spectrum-mpi-10.3.0.1-20190611-4ymaahbai7ehhw4rves5jjiwon2laz3a/include
-Wn
> in your link line. I know you will at least need the CUDA runtime "-lcudart".
> Look at something like PETSC_WITH_EXTERNAL_LIB for one of your CUDA-enabled
> PETSc builds in $PETSC_ARCH/lib/petsc/conf/petscvariables to see what else
> you might need.
>
> --Richard
FWIW, I've heard that CUSPARSE is going to provide integer matrix-matrix
products for indexing applications, and that it should be easy to extend
that to double, etc.
On Wed, Oct 2, 2019 at 6:00 PM Mills, Richard Tran via petsc-dev <petsc-dev@mcs.anl.gov> wrote:
> Fellow PETSc developers,
>
> I
My MR mark/gamg-eigest-sa-cheby seems to have vanished. It does not seem to
be in master. Anyone know where it is?
Thanks,
Mark
ght MR?
>
> --Richard
>
> On 10/10/19 3:51 PM, Mark Adams via petsc-dev wrote:
>
> My MR mark/gamg-eigest-sa-cheby seems to have vanished. It does not seem
> to be in master. Anyone know where it is?
> Thanks,
> Mark
>
>
>
What is the status of supporting SuperLU_DIST with GPUs?
Thanks,
Mark
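For reference on how SuperLU_DIST is selected from PETSc at all (this does not answer the GPU-status question), here is a sketch at the options level. The option names are as in recent PETSc releases; the application name `my_app` is a placeholder, and whether `--with-cuda` actually enables SuperLU_DIST's GPU path is exactly the open question above:

```shell
# configure PETSc with SuperLU_DIST downloaded and built for you
./configure --download-superlu_dist --with-cuda

# then pick SuperLU_DIST as the direct-solver backend at run time
./my_app -ksp_type preonly -pc_type lu -pc_factor_mat_solver_type superlu_dist
```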
> If one just wants to run a fixed number of iterations, not checking for
> convergence, why would one set ksp->errorifnotconverged to true?
>
>
Good question. I can see not worrying too much about convergence on the
coarse grids, but to not allow it ... and now that I think about it, it
seems like
I am puzzled.
I am running AMGx now, and I am getting flop counts/rates. How does that
happen? Does PETSc use hardware counters to get flops?
--Junchao Zhang
>
>
> On Wed, Nov 6, 2019 at 8:44 AM Mark Adams via petsc-dev <petsc-dev@mcs.anl.gov> wrote:
>
>> I am puzzled.
>>
>> I am running AMGx now, and I am getting flop counts/rates. How does that
>> happen? Does PETSc use hardware counters to get flops?
>>
>
snes/ex13 is getting a ParMetis segv with GAMG and coarse grid
repartitioning. Below shows the branch and how to run it.
I've tried valgrind on Cori but it gives a lot of false positives. I've
seen this error in DDT but I have not had a chance to dig and try to fix
it. At least I know it has somet
On Sat, Nov 9, 2019 at 10:51 PM Fande Kong wrote:
> Hi Mark,
>
> Thanks for reporting this bug. I was surprised because we have sufficient
> heavy tests in moose using partition weights and do not have any issue so
> far.
>
>
I have been pounding on this code with elasticity and have not seen thi
Fande, the problem is that k below seems to index beyond the end of htable,
resulting in a crazy m and a segv on the last line below.
I don't have a clean valgrind machine now, that is what is needed if no one
has seen anything like this. I could add a test in a MR and get the
pipeline to do it.
Fande, it looks to me like this branch in ParMetis must be taken to trigger
this error: first *Match_SHEM* and then CreateCoarseGraphNoMask.
/* determine which matching scheme you will use */
switch (ctrl->ctype) {
  case METIS_CTYPE_RM:
    Match_RM(ctrl, graph);
    break;
  case METIS_CTYPE_SHEM:
    Match_SHEM(ctrl, graph);
    break;
}