possible to do?
Here the COMPACT matrix is a 4x4 matrix and the WIDE matrix is an 8x8 matrix, for a
basic test.
Thanks, Frank Bramkamp
program petsc_matrix_example
#include <petsc/finclude/petsc.h>
  use petsc
  implicit none

  PetscErrorCode :: ierr
  Mat            :: COMPACT, WIDE
  PetscInt       :: m, n, MM, NN, d_nz
  PetscInt       :: i, j
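The listing breaks off here in the archive. A possible continuation (a sketch only, not from the original mail; sequential AIJ and the preallocation value d_nz are my assumptions) would be:

  ! Sketch of a continuation: create the 4x4 COMPACT and 8x8 WIDE test matrices.
  call PetscInitialize(PETSC_NULL_CHARACTER, ierr)

  m = 4; n = 4
  MM = 8; NN = 8
  d_nz = 4   ! assumed nonzeros per row for preallocation

  call MatCreateSeqAIJ(PETSC_COMM_SELF, m, n, d_nz, PETSC_NULL_INTEGER, COMPACT, ierr)
  call MatCreateSeqAIJ(PETSC_COMM_SELF, MM, NN, d_nz, PETSC_NULL_INTEGER, WIDE, ierr)

  ! ... insert values with MatSetValues(), then assemble ...
  call MatAssemblyBegin(COMPACT, MAT_FINAL_ASSEMBLY, ierr)
  call MatAssemblyEnd(COMPACT, MAT_FINAL_ASSEMBLY, ierr)
  call MatAssemblyBegin(WIDE, MAT_FINAL_ASSEMBLY, ierr)
  call MatAssemblyEnd(WIDE, MAT_FINAL_ASSEMBLY, ierr)

  call MatDestroy(COMPACT, ierr)
  call MatDestroy(WIDE, ierr)
  call PetscFinalize(ierr)
end program petsc_matrix_example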
Ah ok,
Then I will have a look at MatConvert().
And then maybe later switch to AIJ as well.
Thanks for the help, Frank
> On 29 May 2024, at 16:57, Barry Smith wrote:
>
>
> You can use MatConvert()
>
>
>> On May 29, 2024, at 10:53 AM, Frank Bramkamp wrote:
Hello Hong, Thank you for the clarification. If I already have a matrix in BAIJ format, can I then convert it later into AIJ format as well? In that case I would have two matrices, but that would be ok for testing. I think that you sometimes
have a symbolic phase that is only defined for the standard point-wise matrix format but not for a blocked format?
In the documentation, I could not see a hint about supported matrix formats or any limitations.
The examples also just use a point-wise format (AIJ), as far as I can see.
Greetings, Frank Bramkamp
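For reference, a minimal sketch of the BAIJ-to-AIJ conversion Barry suggested above (the matrix names are illustrative; A is assumed to be an assembled BAIJ matrix):

  Mat            :: A      ! existing, assembled BAIJ matrix (assumed)
  Mat            :: B      ! new AIJ copy created by the conversion
  PetscErrorCode :: ierr

  ! MAT_INITIAL_MATRIX allocates B and leaves the original BAIJ matrix intact
  call MatConvert(A, MATAIJ, MAT_INITIAL_MATRIX, B, ierr)

  ! ... use B wherever an AIJ matrix is required ...
  call MatDestroy(B, ierr)

Passing MAT_INPLACE_MATRIX (with A as the output argument) would instead convert A in place and avoid keeping two copies.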
compile it with CUDA again as well.
We are just starting to get PETSc on GPUs with the CUDA backend, and I am starting with OpenACC for our Fortran code to get some first experience of how everything works with GPU porting.
Good that you could fix the issue.
Thanks for the great help. Have a nice weekend, Frank
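As an aside on the GPU experiments mentioned above: a common first step (a sketch, assuming a PETSc build configured --with-cuda) is to leave the code unchanged and select CUDA-backed types at runtime with -mat_type aijcusparse -vec_type cuda, or to set them explicitly:

  Mat            :: A      ! assumed: created and sized elsewhere
  Vec            :: x      ! assumed: created and sized elsewhere
  PetscErrorCode :: ierr

  ! Sketch: request CUDA-backed implementations for existing objects
  call MatSetType(A, MATAIJCUSPARSE, ierr)
  call VecSetType(x, VECCUDA, ierr)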
as a bug in my attempted fix so it actually did not skip the option.
>
> Try git pull and then run configure again.
>
>
>> On Apr 5, 2024, at 6:30 AM, Frank Bramkamp wrote:
>>
>> Dear Barry,
>>
>> I tried your fix for -lnvc. Unfortunately it did not work.
Thanks for the response. My code is in Fortran. I will try to explicitly set LIBS=... as you suggested. At the moment I skip CUDA, but later I want to use CUDA as well. Barry also tried to skip the "-lnvc", but that did not work yet. Thanks
Ok, I will have a look. It is already evening here in Sweden, so it might take until tomorrow. Thanks, Frank
Ok, I will look for the configure.log file. Frank
Thanks for the reply,
Do you know if you actively include the libnvc library?
Or is it somehow included automatically?
Greetings, Frank
> On 4 Apr 2024, at 15:56, Satish Balay wrote:
>
>
> On Thu, 4 Apr 2024, Frank Bramkamp wrote:
>
>> Dear PETSc Team,
>>
library is in $CUDA_ROOT/lib64.
I am not sure where this library is on your system.
Thanks a lot, Frank Bramkamp
be also useful to have one day.
Greetings, Frank Bramkamp
I would first have to set up a small test example for the parallel case.
I think there is also an include file where one can check the Fortran interfaces?
I forgot where to look this up.
Greetings, Frank Bramkamp
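In case it helps with the parallel test mentioned above, a minimal sketch of a parallel matrix setup (all names illustrative, global sizes assumed set elsewhere):

  Mat            :: A
  PetscErrorCode :: ierr
  PetscInt       :: MM, NN   ! global sizes, assumed set elsewhere

  ! Let PETSc decide the per-process row distribution from the global sizes;
  ! the concrete matrix type can then be picked on the command line (-mat_type).
  call MatCreate(PETSC_COMM_WORLD, A, ierr)
  call MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, MM, NN, ierr)
  call MatSetFromOptions(A, ierr)
  call MatSetUp(A, ierr)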
another option for AGMRES?
The standard GMRES has the problem that MPI_Allreduce gets expensive at 2048 cores.
Therefore I wanted to see if AGMRES has a bit less communication, as is mentioned in the description of the method.
Greetings, Frank Bramkamp
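For completeness, selecting AGMRES is a one-line change (a sketch, assuming a KSP object ksp created elsewhere); the same can be done at runtime with -ksp_type agmres:

  KSP            :: ksp    ! assumed: created with KSPCreate() elsewhere
  PetscErrorCode :: ierr

  ! Sketch: switch the Krylov method to AGMRES
  call KSPSetType(ksp, KSPAGMRES, ierr)

PETSc's pipelined variants (e.g. KSPPGMRES) also aim to hide the cost of the global reductions and may be worth comparing.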