Fande Kong writes:
> On Wed, Jan 19, 2022 at 11:39 AM Jacob Faibussowitsch
> wrote:
>
>> Are you running on login nodes or compute nodes (I can’t seem to tell from
>> the configure.log)?
>>
>
> I was compiling the code on login nodes and running it on compute nodes.
> Login nodes do not have GPUs, but compute nodes do.
Hi All,
It seems that mpiaijkok does not support 64-bit integers at this time. Is
there a reason for this, or is it just a bug?
Thanks,
Fande
petsc/src/mat/impls/aij/mpi/kokkos/mpiaijkok.kokkos.cxx(306): error: a
value of type "MatColumnIndexType *" cannot be assigned to an entity of
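For reference, "64-bit integers" here means a build configured with 64-bit PetscInt
together with the Kokkos backend, along the lines of (the exact options are a guess,
not copied from the actual configure.log):

  ./configure --with-64-bit-indices=1 --download-kokkos --download-kokkos-kernels --with-cuda=1

With such a build PetscInt is 64-bit, while the Kokkos CSR storage in mpiaijkok
apparently still assumes a 32-bit column index type, hence the pointer-assignment
error above.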
Varun,
This feature has been merged into petsc main:
https://gitlab.com/petsc/petsc/-/merge_requests/4727
Hong
From: petsc-users on behalf of Zhang, Hong
via petsc-users
Sent: Wednesday, January 19, 2022 9:37 AM
To: Varun Hiremath
Cc: Peder Jørgensgaard Olesen via
On Wed, Jan 19, 2022 at 11:39 AM Jacob Faibussowitsch
wrote:
> Are you running on login nodes or compute nodes (I can’t seem to tell from
> the configure.log)?
>
I was compiling the code on login nodes and running it on compute nodes.
Login nodes do not have GPUs, but compute nodes do.
Are you running on login nodes or compute nodes (I can’t seem to tell from the
configure.log)? If running from login nodes, do they support running with
GPUs? Some clusters will install stub versions of the CUDA runtime on login nodes
(such that configuration can find them), but that won’t mean GPU code can
actually run there.
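A quick sanity check (the scheduler syntax is just a guess, assuming Slurm here) is
to run a small PETSc GPU example such as src/snes/tutorials/ex19 on an actual
compute node and inspect -log_view:

  srun -N 1 -n 1 --gres=gpu:1 ./ex19 -dm_vec_type cuda -dm_mat_type aijcusparse -log_view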
On Wed, Jan 19, 2022 at 12:54 PM Marco Cisternino <
marco.cistern...@optimad.it> wrote:
> Thank you, Matthew.
>
> I’m going to pay attention to our non-dimensionalization; avoiding
> division by the cell volume helps a lot.
>
>
>
> Sorry, Mark, I don’t quite get your point: which 1D problem are you
Thank you, Matthew.
I’m going to pay attention to our non-dimensionalization; avoiding division by
the cell volume helps a lot.
Sorry, Mark, I don’t quite get your point: which 1D problem are you referring to?
The case I’m talking about is based on a 3D octree mesh.
Thank you all for your support.
Hi Fande,
What machine are you running this on? Please attach configure.log so I can
troubleshoot this.
Best regards,
Jacob Faibussowitsch
(Jacob Fai - booss - oh - vitch)
> On Jan 19, 2022, at 10:04, Fande Kong wrote:
>
> Hi All,
>
> Upgraded PETSc from 3.16.1 to the current main branch.
Hi All,
Upgraded PETSc from 3.16.1 to the current main branch. I suddenly got the
following error message:
2d_diffusion]$ ../../../moose_test-dbg -i 2d_diffusion_test.i
-use_gpu_aware_mpi 0 -gpu_mat_type aijcusparse -gpu_vec_type cuda
-log_view
[0]PETSC ERROR: - Error
Varun,
Good to know it works. The FactorSymbolic function is still being called twice, but
the second call is a no-op, so it still appears in '-log_view'. I made the changes
at the low level of the mumps routines, not within PCSetUp(), because I feel your
use case is limited to mumps, not other matrix packages.
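One way to confirm that the second call costs essentially nothing is to look at the
MatLUFactorSym event in the -log_view summary and check that its count is 2 while
the time stays essentially unchanged, e.g. something like:

  ./ex52.o -use_mumps_lu -log_view | grep MatLUFactorSym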
Thanks, Sherry and Satish,
I will try your suggestion, and report back to you as soon as possible.
Thanks,
Fande
On Tue, Jan 18, 2022 at 10:48 PM Satish Balay wrote:
> Sherry,
>
> This is with superlu-dist-7.1.1 [not master branch]
>
>
> Fande,
>
> >>
> Executing: mpifort -o
ILU is LU for this 1D problem and it's singular.
ILU might have some logic to deal with a singular system; I'm not sure. LU
should fail.
You might try -ksp_type preonly and -pc_type lu.
And -pc_type jacobi should not have any numerical problems. Try that.
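For example (the executable name below is just a placeholder for your toy code):

  ./toy_laplacian -ksp_type preonly -pc_type lu -ksp_converged_reason
  ./toy_laplacian -ksp_type gmres -pc_type jacobi -ksp_monitor -ksp_converged_reason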
On Wed, Jan 19, 2022 at 7:19 AM Matthew Knepley
On Wed, Jan 19, 2022 at 4:52 AM Marco Cisternino <
marco.cistern...@optimad.it> wrote:
> Thank you Matthew.
>
> But I still don’t quite get the point. I understood the point about the test, but to try to
> explain my doubt I’m going to prepare another toy code.
>
>
>
> In words…
>
> I usually have a finite volume
Thank you Matthew.
But I still don’t quite get the point. I understood the point about the test, but to try to
explain my doubt I’m going to prepare another toy code.
In words…
I usually have a finite volume discretization of the Laplace operator with
homogeneous Neumann BC on an octree mesh and it reads
Aij * xj
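Since the pure-Neumann Laplacian has the constant vector in its null space, the
usual way to tell PETSc about it is something along the lines of the sketch below
(generic use of the standard API, not the actual code in question; the helper name
is made up):

  #include <petscmat.h>

  /* Attach the constant null space of a pure-Neumann Laplacian to A, so the
     Krylov solver knows the operator is singular with a constant null space. */
  PetscErrorCode AttachConstantNullSpace(Mat A)
  {
    MatNullSpace   nullsp;
    PetscErrorCode ierr;

    ierr = MatNullSpaceCreate(PetscObjectComm((PetscObject)A), PETSC_TRUE, 0, NULL, &nullsp);CHKERRQ(ierr);
    ierr = MatSetNullSpace(A, nullsp);CHKERRQ(ierr);
    ierr = MatNullSpaceDestroy(&nullsp);CHKERRQ(ierr);
    return 0;
  }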
Hi Hong,
Thanks, I tested your branch and I think it is working fine. I don't see
any increase in runtime; however, with -log_view I see that the
MatLUFactorSymbolic function is still being called twice. Is this
expected? Is the second call a no-op?
$ ./ex52.o -use_mumps_lu -print_mumps_memory