Hi

Thanks so much for all your help. I've gotten all the core tech working in
C++ and am working on the Fortran integration. Before I do that, I've been
doing some memory checks using Valgrind to ensure everything is acceptable,
since I've been seeing random memory corruption errors for specific me…
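A typical Valgrind invocation for an MPI-launched PETSc program looks like
the line below; this is an illustrative sketch, and my_app stands in for
the actual executable:

  mpiexec -n 2 valgrind --tool=memcheck --leak-check=full ./my_app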
Oh it's not a worry. I'm debugging this first in C++, and once it's working
I don't actually need to view what's happening in Fortran when I move over.
In my C debugging code, after I create the distribution vector and
distribute the field based on your input, I'm adding
VecSetOperation(state_dist…
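The call being set up is presumably along the lines of the sketch below;
this is an illustrative guess, not the exact code from the thread.
VecView_Plex lives inside PETSc's Plex implementation and is declared in a
private header, so the extern prototype here is written out by hand.

#include <petscvec.h>

/* Assumed prototype: the real declaration is in a private PETSc header. */
extern PetscErrorCode VecView_Plex(Vec, PetscViewer);

/* Route VecView on the distributed state vector through the DMPlex-aware
   viewer instead of the default Vec implementation. */
static PetscErrorCode SetPlexView(Vec state_dist)
{
  PetscFunctionBeginUser;
  PetscCall(VecSetOperation(state_dist, VECOP_VIEW, (void (*)(void))VecView_Plex));
  PetscFunctionReturn(0);
}

Since VecSetOperation takes a C function pointer, this is also why the same
trick is awkward from Fortran, as noted below.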
On Mon, Dec 26, 2022 at 10:40 AM Nicholas Arnold-Medabalimi <
narno...@umich.edu> wrote:
> Hi Matt
>
> 1) I'm not sure I follow how to call this. If I insert the VecSetOperation
> call, I'm not exactly sure what VecView_Plex is or where it is defined.
>
Shoot, this cannot be done in Fortran. I…
Hi Matt

1) I'm not sure I follow how to call this. If I insert the VecSetOperation
call, I'm not exactly sure what VecView_Plex is or where it is defined.

2) Otherwise, I've solved this problem with the insight you provided into
the local section. Things look good in the ASCII output, but if we…
Thank you for your answer. Can you provide the full path of the example
you have in mind? The one I found does not seem to exploit the algebraic
multigrid, but just the geometric one.

Thanks,
Edoardo

On Mon, Dec 26, 2022 at 3:39 PM Matthew Knepley wrote:

> On Mon, Dec 26, 2022…
Hi PETSc Users,

I am experiencing some issues with the GAMG preconditioner when used with
the GPU. In particular, it seems to run out of memory very easily (around
5000 dofs are enough to make it throw the "[0]PETSC ERROR: cuda error 2
(cudaErrorMemoryAllocation) : out of memory" error).
I have these i…
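For context, a typical way to run GAMG on the GPU with a CUDA build of
PETSc (an illustrative sketch, not necessarily the configuration in use
here) is with options such as:

  -pc_type gamg -mat_type aijcusparse -vec_type cuda

so that the matrices and vectors are allocated on the device.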
Hi Matt
I was able to get this all squared away. It turns out I was initializing
the viewer incorrectly—my mistake. However, there is a follow-up question.
A while back, we discussed distributing a vector field from an initial DM
to a new distributed DM. The way you said to do this was
// Dis…
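A sketch of what that distribution step usually looks like, written from
the DMPlexDistribute / DMPlexDistributeField man pages rather than
recovered from the original message (all names here are illustrative;
check the argument details against your PETSc version):

#include <petscdmplex.h>

/* Distribute a serial DM and migrate a local field vector along with it.
   sec/state describe the field on the original dm; dmDist/stateDist
   return the distributed mesh and field. */
static PetscErrorCode DistributeState(DM dm, PetscSection sec, Vec state,
                                      DM *dmDist, Vec *stateDist)
{
  PetscSF      distSF;
  PetscSection secDist;

  PetscFunctionBeginUser;
  /* Distribute the mesh; distSF records where each mesh point migrated. */
  PetscCall(DMPlexDistribute(dm, 0, &distSF, dmDist));
  /* Empty section and vector for DMPlexDistributeField to populate. */
  PetscCall(PetscSectionCreate(PetscObjectComm((PetscObject)dm), &secDist));
  PetscCall(VecCreate(PetscObjectComm((PetscObject)dm), stateDist));
  PetscCall(DMPlexDistributeField(dm, distSF, sec, state, secDist, *stateDist));
  PetscCall(PetscSectionDestroy(&secDist));
  PetscCall(PetscSFDestroy(&distSF));
  PetscFunctionReturn(0);
}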