possible to do ?!
Here the COMPACT matrix is a 4x4 matrix and the WIDE matrix 8x8, for a basic
test.
Thanks, Frank Bramkamp
program petsc_matrix_example
#include <petsc/finclude/petsc.h>
use petsc
implicit none
PetscErrorCode :: ierr
Mat :: COMPACT, WIDE
PetscInt :: m, n, MM, NN, d_nz
PetscInt :: i, j
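The listing above is cut off here. A minimal continuation, purely as a sketch of what the setup could look like (the sizes follow the 4x4 / 8x8 description above; everything else is assumed and not taken from the original mail):

call PetscInitialize(PETSC_NULL_CHARACTER, ierr)

m = 4;  n = 4        ! COMPACT is 4x4
MM = 8; NN = 8       ! WIDE is 8x8

call MatCreate(PETSC_COMM_WORLD, COMPACT, ierr)
call MatSetSizes(COMPACT, PETSC_DECIDE, PETSC_DECIDE, m, n, ierr)
call MatSetType(COMPACT, MATBAIJ, ierr)   ! block size left at the default; MatSetBlockSize() would set a real one
call MatSetUp(COMPACT, ierr)

call MatCreate(PETSC_COMM_WORLD, WIDE, ierr)
call MatSetSizes(WIDE, PETSC_DECIDE, PETSC_DECIDE, MM, NN, ierr)
call MatSetType(WIDE, MATBAIJ, ierr)
call MatSetUp(WIDE, ierr)

! ... fill both matrices with MatSetValues / MatSetValuesBlocked ...

call MatAssemblyBegin(COMPACT, MAT_FINAL_ASSEMBLY, ierr)
call MatAssemblyEnd(COMPACT, MAT_FINAL_ASSEMBLY, ierr)
call MatAssemblyBegin(WIDE, MAT_FINAL_ASSEMBLY, ierr)
call MatAssemblyEnd(WIDE, MAT_FINAL_ASSEMBLY, ierr)

call MatDestroy(COMPACT, ierr)
call MatDestroy(WIDE, ierr)
call PetscFinalize(ierr)
end program petsc_matrix_example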
Ah ok,
Then I will have a look at MatConvert.
And then maybe later switch to AIJ as well.
Thanks for the help, Frank
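For reference, a minimal sketch of the MatConvert() call mentioned below, converting an assembled BAIJ matrix into a new AIJ matrix (the names Abaij and Aaij are placeholders):

Mat :: Abaij, Aaij
! ... after Abaij has been assembled in BAIJ format ...
call MatConvert(Abaij, MATAIJ, MAT_INITIAL_MATRIX, Aaij, ierr)
! MAT_INPLACE_MATRIX instead of MAT_INITIAL_MATRIX converts Abaij in place.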
> On 29 May 2024, at 16:57, Barry Smith wrote:
>
>
> You can use MatConvert()
>
>
>> On May 29, 2024, at 10:53 AM, Frank Bramkamp wrote:
the BAIJ format.
Otherwise, I have to change it into an AIJ format from the beginning.
Thanks for the quick help,
Frank
only defined for the standard point-wise matrix format but not for a blocked format ?!
In the documentation, I could not see a hint on supported matrix formats or any limitations.
The examples also just use a point-wise format (AIJ), as far as I can see.
Greetings, Frank Bramkamp
compile it with CUDA again as well.
We are just starting to get PETSc onto GPUs with the CUDA backend, and I am starting with OpenACC for our Fortran code to get first experience of how everything works with GPU
porting.
Good that you could fix the issue.
Thanks for the great help. Have a nice weekend, Frank
Thanks for the effort, Barry. I will get it and give it another try. Thanks a lot, Frank
> On 5 Apr 2024, at 15:56, Barry Smith wrote:
>
>
> There was a bug in my attempted fix so it actually did not skip the
also want to use cuda as well.
Barry also tried to skip the “-lnvc”, but that did not work yet.
Thanks a lot for the suggestions, Frank
Ok, I will have a look. It is already evening here in Sweden, so it might take until tomorrow. Thanks Frank
Ok, I will look for the config.log file. Frank
Thanks for the reply,
Do you know if you actively include the libnvc library ?!
Or is this somehow automatically included ?!
Greetings, Frank
> On 4 Apr 2024, at 15:56, Satish Balay wrote:
>
>
> On Thu, 4 Apr 2024, Frank Bramkamp wrote:
>
>> Dear PETSC Team,
>>
library is in $CUDA_ROOT/lib64
I am not sure where this library is on your system ?!
Thanks a lot, Frank Bramkamp
be also useful to have one day.
Greetings, Frank Bramkamp
I would first have to set up a small test example for the parallel case.
I think there is also an include file where one can check the Fortran
interfaces ?!
I forgot where to look this up.
Greetings, Frank Bramkamp
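As far as I know, the Fortran interface definitions come with the PETSc Fortran modules and the headers under include/petsc/finclude/ in the PETSc tree; in code they are pulled in roughly like this (sketch):

#include <petsc/finclude/petscksp.h>
use petscksp
implicit none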
another option for AGMRES ?!
The standard GMRES has the problem that MPI_Allreduce gets expensive for 2048
cores.
Therefore I wanted to see if AGMRES has a bit less communication, as is
mentioned in the description
of the method.
Greetings, Frank Bramkamp
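If it helps, switching the Krylov method is just a matter of the KSP type; a sketch, assuming a KSP object named ksp and that the agmres implementation is available in the installed PETSc version:

call KSPSetType(ksp, 'agmres', ierr)   ! equivalent to -ksp_type agmres on the command line
call KSPSetFromOptions(ksp, ierr)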
in the above form
(b). The latter provides a generic interface for both (a) and (b).
I am not sure if this relates to the error I get.
Thank you.
Frank
Do I have to set the interpolation first? How can I just print the
default interpolation matrix?
I attached the option file.
Thank you.
Frank
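One way to print the interpolation that PCMG actually uses is to query it after the solver has been set up; a rough sketch with placeholder names (ksp, pc, interp, level):

call KSPSetUp(ksp, ierr)                              ! the MG hierarchy must exist first
call KSPGetPC(ksp, pc, ierr)
call PCMGGetInterpolation(pc, level, interp, ierr)    ! interpolation from level-1 to level
call MatView(interp, PETSC_VIEWER_STDOUT_WORLD, ierr)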
On 12/06/2016 02:31 PM, Jed Brown wrote:
frank writes:
Dear
exactly how the full MG proceeds?
Also in the above example, I want to know what interpolation or
prolongation method is used from level 1 to level 2.
Can I get that info by adding some options? (not using PCMGGetInterpolation)
I attached the ksp_view info and my petsc options file.
Thank you.
Frank
CALL DMCreateGlobalVector( ... )
CALL DMDAVecGetArrayF90( ... )
... each process computes its part of rhs...
CALL DMDAVecRestoreArrayF90(...)
CALL VecAssemblyBegin( ... )
CALL VecAssemblyEnd( ... )
Thank you
Regards,
Frank
On 10/04/2016 12:56 PM, Dave May wrote:
On Tuesday, 4 October 2
2016-10-05 10:56:19 -0500
[0]PETSC ERROR: [2]PETSC ERROR: ./test_ksp.exe on a gnu-dbg-32idx
named kolmog1 by frank Wed Oct 5 17:40:07 2016
[0]PETSC ERROR: Configure options --known-mpi-shared="0 "
--known-memcmp-ok --with-debugging="1 " --with-shared-libraries=0
--with
Hi Dave,
Thank you for the reply.
What do you mean by the "nested calls to KSPSolve"?
I tried to call KSPSolve twice, but the second solve converged in 0
iterations. KSPSolve seems to remember the solution. How can I force both
solves to start from the same initial guess?
Thank you.
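For what it is worth, a sketch of forcing the second solve to start from the same zero initial guess (ksp, x, b are placeholder names):

call KSPSetInitialGuessNonzero(ksp, PETSC_FALSE, ierr)  ! default: KSP zeros x before solving
call VecZeroEntries(x, ierr)                            ! or reset x by hand before each solve
call KSPSolve(ksp, b, x, ierr)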
communicator
should I use to improve the performance?
I attached the test code and the petsc options file for the 1024^3 cube
with 32768 cores.
Thank you.
Regards,
Frank
On 09/15/2016 03:35 AM, Dave May wrote:
Hi all,
The only unexpected memory usage I can see is associated with the
also run the code on a 512^3 mesh with 16 * 16 * 16 processes. The ksp
solver works fine.
I attached the code, ksp_view_pre's output and my petsc option file.
Thank you.
Frank
On 09/09/2016 06:38 PM, Hengjie Wang wrote:
Hi Barry,
I checked. On the supercomputer, I had the option "-ks
Hi Barry,
I think the first KSP view output is from -ksp_view_pre. Before I
submitted the test, I was not sure whether there would be an OOM error or
not. So I added both -ksp_view_pre and -ksp_view.
Frank
On 09/09/2016 12:38 PM, Barry Smith wrote:
Why does ksp_view2.txt have two KSP
coarse_pc_type value: bjacobi
-mg_coarse_telescope_mg_levels_ksp_max_it value: 1
-mg_coarse_telescope_mg_levels_ksp_type value: richardson
Regards,
Frank
On 07/13/2016 05:47 PM, Dave May wrote:
On 14 July 2016 at 01:07, frank <hengj...@uci.edu> wrote:
Hi Dave,
Sorry for the late reply.
settings mean each core can access only 2G of memory on average
instead of the 8G which I mentioned in the previous email. I re-ran the job with
8G of memory per core on average and there is no "Out Of Memory" error. I
will do more tests to see if there is still some memory issue.
Regards,
Frank
On
ksp_view info is attached for comparison.
Thank you.
Frank
On 07/08/2016 10:38 PM, Dave May wrote:
On Saturday, 9 July 2016, frank wrote:
Hi Barry and Dave,
Thank both of you for the advice.
@Barry
I made a mistake in the file names in the last email. I attached the
corrected files.
I still got the OOM error. The detailed petsc option file is attached.
Thank you so much.
Frank
On 07/06/2016 02:51 PM, Barry Smith wrote:
On Jul 6, 2016, at 4:19 PM, frank wrote:
Hi Barry,
Thank you for your advice.
I tried three tests. In the 1st test, the grid is 3072*256*768 and
e the 'telescope' preconditioner that
allocated a lot of memory and caused the error in the 1st test.
Is there a way to show how much memory it allocated?
Frank
On 07/05/2016 03:37 PM, Barry Smith wrote:
Frank,
You can run with -ksp_view_pre to have it "view" the K
able to reproduce the error
with a smaller problem either.
In addition, I tried to use block Jacobi as the preconditioner with
the same grid and the same decomposition. The linear solver runs extremely
slowly but there is no memory error.
How can I diagnose what exactly causes the error?
Thank you
any way I can use PETSc to implement a 3D decomposed FFT?
Thank you,
Frank
Hi Barry,
Thank you for your prompt reply.
Which executable or library should I run ldd on to check?
Thank you,
Frank.
On 05/26/2015 02:41 PM, Barry Smith wrote:
On May 26, 2015, at 4:18 PM, frank wrote:
Hi
I am trying to use multigrid to solve a large sparse linear system. I use Hypre
BoomerAMG as
OpenMP under PETSc?
Is there a way I can know explicitly whether Hypre is using OpenMP
under PETSc or not?
Thank you so much
Frank
Hi,
I am thinking of solving a linear equation whose coefficient matrix
has 27 nonzero diagonal bands (diagonally dominant). Does anybody have any
idea how this will perform? Do you have any recommendation about
which solver to choose?
I have solved an 11 nonzero diagonal band matrix equatio
Hi,
Currently, I am solving a nonlinear equation with some linearization
method. I am thinking of switching to a nonlinear solver. With the PETSc
library, I am confident I can do it. I just want to ask those who have
experience with nonlinear solvers whether a matrix-free method will be faster.
Thank you
Hi,
I am solving Ax=b repeatedly, and A does not change all the time. I did
things like this:
%1. Set up entries for matrix A%%%
CALL MATASSEMBLYBEGIN(A,MAT_FINAL_ASSEMBLY,IERR)
CALL MATASSEMBLYEND(A,MAT_FINAL_ASSEMBLY,IERR)
CALL MATSETOPTION(A,MAT_NEW_NONZERO_LOCATIONS,PETSC_FALSE,IERR)
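When A stays the same between solves, a common pattern is to create the KSP once, set the operators once, and just call KSPSolve inside the loop; a rough sketch with placeholder names (not from the original mail):

CALL KSPCREATE(PETSC_COMM_WORLD,KSP_SOLVER,IERR)
CALL KSPSETOPERATORS(KSP_SOLVER,A,A,IERR)
CALL KSPSETFROMOPTIONS(KSP_SOLVER,IERR)
DO ISTEP = 1, NSTEPS
   ! ... update the entries of B for this step ...
   CALL KSPSOLVE(KSP_SOLVER,B,X,IERR)
END DO
CALL KSPDESTROY(KSP_SOLVER,IERR)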
Hi,
I have a very weird problem here.
I am using Fortran to call PETSc to solve a Poisson equation.
When I run my code with 8 cores, it works fine, and the consumed memory
does not increase. However, when it is run with 64 cores, first of all
it gives lots of errors like this:
[n310:18951] [[62652,
Hi,
I am using PETSc to iterate a problem, that is to say I call KSPSolve
repeatedly.
Firstly, I put all the PETSc components in one subroutine, including
MatCreate, VecCreateMPI, etc. Everything works fine.
Then, I want to initialize the KSP only once outside the loop, and the
matrix and rh