On Fri, 20 Sep 2019, Smith, Barry F. via petsc-dev wrote:

This should be reported on gitlab, not in email.
Anyways, my interpretation is that the machine runs low on swap space so the
OS is killing things. Once Satish and I sat down and checked the system logs on
one machine that had little swap and we saw system messages about low swap at
All the failed tests just said "application called MPI_Abort" and had no stack
trace. They are not CUDA tests. I updated SF to avoid CUDA-related
initialization when it is not needed. Let's see the new test result.
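For context, a minimal sketch of the "skip CUDA setup unless a GPU path is
actually taken" idea; the helper name EnsureCUDAInitialized and the cuda_ready
flag are illustrative only, not the actual SF change:

    #include <stdbool.h>
    #include <cuda_runtime.h>

    /* Hypothetical guard: touch the CUDA runtime only on first GPU use, so
       CPU-only tests never trigger device setup (or device-setup failures). */
    static bool cuda_ready = false;

    static int EnsureCUDAInitialized(void)
    {
      int devCount = 0;
      if (cuda_ready) return 0;                    /* already initialized */
      if (cudaGetDeviceCount(&devCount) != cudaSuccess || devCount == 0)
        return -1;                                 /* no usable device; caller stays on the CPU path */
      cuda_ready = true;
      return 0;
    }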
not ok dm_impls_stag_tests-ex13_none_none_none_3d_par_stag_stencil_width-1
Failed? That means nothing by itself; send a link or cut and paste the error.
It could be that, since we have multiple separate tests running at the same
time, they overload the GPU or cause some inconsistent behavior that doesn't
appear every time the tests are run.
Barry
Maybe we need to sequentialize all
On Sep 19, 2019, at 2:50 PM, Zhang, Junchao <jczh...@mcs.anl.gov> wrote:

I saw your update. In PetscCUDAInitialize we have

    /* First get the device count */
    err = cudaGetDeviceCount(&devCount);

    /* next determine the rank and then set the device via a mod */
    ierr = MPI_Comm_rank(comm,&rank);CHKERRQ(ierr);
    device = rank % devCount;
    }
    err =
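As a standalone illustration of the rank-modulo assignment in that excerpt (not
PETSc's actual PetscCUDAInitialize), a sketch assuming every rank on a node
sees the same device count:

    #include <stdio.h>
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
      int rank, devCount, device;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      /* First get the device count */
      if (cudaGetDeviceCount(&devCount) != cudaSuccess || devCount == 0) {
        fprintf(stderr, "rank %d: no usable CUDA device\n", rank);
        MPI_Abort(MPI_COMM_WORLD, 1);
      }

      /* Next determine the rank and then set the device via a mod */
      device = rank % devCount;
      if (cudaSetDevice(device) != cudaSuccess) {
        fprintf(stderr, "rank %d: cudaSetDevice(%d) failed\n", rank, device);
        MPI_Abort(MPI_COMM_WORLD, 1);
      }

      printf("rank %d -> device %d of %d\n", rank, device, devCount);
      MPI_Finalize();
      return 0;
    }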
Fixed the docs. Thanks for pointing out the lack of clarity.
On Sep 18, 2019, at 11:25 PM, Zhang, Junchao via petsc-dev wrote:

Barry,

I saw you added these in init.c:

+  -cuda_initialize - do the initialization in PetscInitialize()

   Notes:
   Initializing cuBLAS takes about 1/2 second, therefore it is done by default in
   PetscInitialize() before logging begins.
But I did not get otherwise with -cuda_initialize 0, when will
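For reference, a minimal sketch of what paying the cuBLAS startup cost up front
looks like: creating the first handle is what loads and initializes the
library (roughly the 1/2 second mentioned in the Notes), so doing it during
PetscInitialize() keeps that cost out of the logged solve time. This is only an
illustration, not the PETSc code:

    #include <stdio.h>
    #include <cublas_v2.h>

    int main(void)
    {
      cublasHandle_t handle;

      /* The first cublasCreate() initializes the cuBLAS library; this is the
         expensive step that -cuda_initialize moves into PetscInitialize(). */
      if (cublasCreate(&handle) != CUBLAS_STATUS_SUCCESS) {
        fprintf(stderr, "cublasCreate failed\n");
        return 1;
      }

      /* ... later BLAS calls reuse the already-initialized handle ... */

      cublasDestroy(handle);
      return 0;
    }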