Re: [petsc-users] Code performance for solving multiple RHS

2016-08-11 Thread Barry Smith
> On Aug 11, 2016, at 10:14 PM, Harshad Ranadive wrote: > Hi Barry, > Thanks for this recommendation. > As you mention, the matrix factorization should be on a single processor. > If the factored matrix A is available on all processors can I then use MatMatSolve(A,B,X) in parallel? ...

Re: [petsc-users] Code performance for solving multiple RHS

2016-08-11 Thread Harshad Ranadive
Hi Barry, Thanks for this recommendation. As you mention, the matrix factorization should be on a single processor. If the factored matrix A is available on all processors, can I then use MatMatSolve(A,B,X) in parallel? That is, could the RHS block matrix 'B' and the solution matrix 'X' be distributed ...

Re: [petsc-users] Changing DM domain from default [0,1]

2016-08-11 Thread Scott Dossa
Thanks Mohammad. That is exactly what I was searching for. -Scott Dossa On Thu, Aug 11, 2016 at 7:44 PM, Mohammad Mirzadeh wrote: > Have you tried DMDASetUniformCoordinates? > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMDASetUniformCoordinates.html > On Thu, Aug 11, 2016 ...

Re: [petsc-users] Changing DM domain from default [0,1]

2016-08-11 Thread Mohammad Mirzadeh
Have you tried DMDASetUniformCoordinates? http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMDASetUniformCoordinates.html On Thu, Aug 11, 2016 at 8:41 PM, Scott Dossa wrote: > Hi All, > Basic Question: > When one creates a DMDA to handle objects, it sets the domain to [0,1] by default. ...
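
A minimal sketch of the call Mohammad points to, assuming a 2D DMDA named da and example bounds [-1,1] x [0,2] (the bounds and the function name SetDomainBounds are placeholders, not values from the thread); for a 2D DMDA the z arguments are ignored:

#include <petscdmda.h>

/* Reset the physical domain of an existing DMDA from the default [0,1]^d. */
PetscErrorCode SetDomainBounds(DM da)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = DMDASetUniformCoordinates(da,
                                   -1.0, 1.0,  /* xmin, xmax */
                                    0.0, 2.0,  /* ymin, ymax */
                                    0.0, 0.0); /* zmin, zmax: ignored for a 2D DMDA */
  CHKERRQ(ierr);
  PetscFunctionReturn(0);
}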

[petsc-users] Changing DM domain from default [0,1]

2016-08-11 Thread Scott Dossa
Hi All, Basic Question: When one creates a DMDA to handle objects, it sets the domain to [0,1] by default. Is there a call/function to change this? All the examples seem to be over the default domain. Thank you for the help! Best, Scott Dossa

Re: [petsc-users] different convergence behaviour

2016-08-11 Thread Hoang Giang Bui
Hi all, I'm a bit embarrassed that, after careful investigation, I found that I had made a wrong configuration in my problem settings. This is also why the problem did not converge with MUMPS in the first place. After fixing that problem, the NR iteration converges normally with both MUMPS and Hypre. Although ...

Re: [petsc-users] Code performance for solving multiple RHS

2016-08-11 Thread Barry Smith
If it is sequential, which it probably should be, then you can use MatLUFactorSymbolic(), MatLUFactorNumeric() and MatMatSolve(), where you put a bunch of your right-hand-side vectors into a dense array; not all million of them, but maybe 10 to 100 at a time. Barry > On Aug 10, 2016, at ...
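
A minimal sketch of the workflow Barry outlines, assuming a sequential AIJ matrix A and a block of right-hand sides already packed as columns of a dense matrix B with a matching dense X (the function name, the PETSc LU solver, and the nested-dissection ordering are assumptions, not taken from the thread):

#include <petscmat.h>

/* Factor A once, then solve for a block of right-hand-side columns at a time. */
PetscErrorCode FactorAndSolveBlock(Mat A, Mat B, Mat X)
{
  PetscErrorCode ierr;
  Mat            F;                /* holds the LU factors of A */
  IS             rowperm, colperm; /* orderings used by the symbolic factorization */
  MatFactorInfo  info;

  PetscFunctionBeginUser;
  ierr = MatGetFactor(A, MATSOLVERPETSC, MAT_FACTOR_LU, &F);CHKERRQ(ierr);
  ierr = MatGetOrdering(A, MATORDERINGND, &rowperm, &colperm);CHKERRQ(ierr);
  ierr = MatFactorInfoInitialize(&info);CHKERRQ(ierr);
  ierr = MatLUFactorSymbolic(F, A, rowperm, colperm, &info);CHKERRQ(ierr);
  ierr = MatLUFactorNumeric(F, A, &info);CHKERRQ(ierr);
  /* B and X are dense matrices whose columns are the right-hand sides and the
     solutions; solve, say, 10-100 columns per call as suggested above. */
  ierr = MatMatSolve(F, B, X);CHKERRQ(ierr);
  ierr = ISDestroy(&rowperm);CHKERRQ(ierr);
  ierr = ISDestroy(&colperm);CHKERRQ(ierr);
  ierr = MatDestroy(&F);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

If the matrix does not change between blocks, keep the factored matrix F around and call MatMatSolve() once per block of columns rather than refactoring each time.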

Re: [petsc-users] High order finite volume method in unstructured grid using PETSc

2016-08-11 Thread Matthew Knepley
On Thu, Aug 11, 2016 at 8:16 AM, leejearl wrote: > Hi, all: > I want to build a high-order finite volume method on an unstructured grid using PETSc. > The first issue is to partition the grid. I use DMPlex to manage the data structure. > The procedure is as follows: > 1> DMPlexCreateFromFile() ...

Re: [petsc-users] A question about DMPlexDistribute

2016-08-11 Thread Matthew Knepley
On Thu, Aug 11, 2016 at 3:14 AM, leejearl wrote: > Hi, > Thank you for your reply. It helps me very much. > But for "/petsc-3.7.2/src/ts/examples/tutorials/ex11.c", when I set the overlap to 2 levels, the command is "mpirun -n 3 ./ex11 -f annulus-20.exo -ufv_mesh_overlap 2 -physics sw" ...

Re: [petsc-users] mat option producing error for stash

2016-08-11 Thread Norihiro Watanabe
Thanks! On Thu, Aug 11, 2016 at 4:35 PM, Satish Balay wrote: > On Thu, 11 Aug 2016, Norihiro Watanabe wrote: >> Hi, >> I would like to check if my program assembles a matrix without generating a stash. To help check this, I wonder if there is a Mat option that produces errors if entries destined for other processors are added/set. ...

Re: [petsc-users] mat option producing error for stash

2016-08-11 Thread Satish Balay
On Thu, 11 Aug 2016, Norihiro Watanabe wrote: > Hi, > I would like to check if my program assembles a matrix without generating a stash. To help check this, I wonder if there is a Mat option that produces errors if entries destined for other processors are added/set. I mean something like MAT_NEW_NONZERO_LOCATION_ERR for stashing. ...

[petsc-users] High order finite volume method in unstructured grid using PETSc

2016-08-11 Thread leejearl
Hi, all: I want to build a high-order finite volume method on an unstructured grid using PETSc. The first issue is to partition the grid. I use DMPlex to manage the data structure. The procedure is as follows: 1> DMPlexCreateFromFile(), to load a grid into DMPlex; 2> DMPlexDistribute(), ...
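
A minimal sketch of steps 1 and 2 above, with the file name, the interpolation flag, and the overlap value chosen only for illustration (signatures as in the PETSc 3.7 release discussed in this thread):

#include <petscdmplex.h>

/* Load a mesh from a file and distribute it over the processes in comm. */
PetscErrorCode LoadAndDistribute(MPI_Comm comm, const char filename[], DM *dmOut)
{
  PetscErrorCode ierr;
  DM             dm, dmDist = NULL;

  PetscFunctionBeginUser;
  ierr = DMPlexCreateFromFile(comm, filename, PETSC_TRUE /* interpolate */, &dm);CHKERRQ(ierr);
  /* overlap = 1 requests one layer of ghost cells around each partition */
  ierr = DMPlexDistribute(dm, 1, NULL, &dmDist);CHKERRQ(ierr);
  if (dmDist) {                 /* dmDist is NULL when running on a single process */
    ierr = DMDestroy(&dm);CHKERRQ(ierr);
    dm   = dmDist;
  }
  *dmOut = dm;
  PetscFunctionReturn(0);
}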

Re: [petsc-users] A question about DMPlexDistribute

2016-08-11 Thread leejearl
Hi, Thank you for your reply. It helps me very much. But for "/petsc-3.7.2/src/ts/examples/tutorials/ex11.c", when I set the overlap to 2 levels, the command is "mpirun -n 3 ./ex11 -f annulus-20.exo -ufv_mesh_overlap 2 -physics sw", it reports an error. It seems to me that setting the overlap ...

[petsc-users] mat option producing error for stash

2016-08-11 Thread Norihiro Watanabe
Hi, I would like to check if my program assembles a matrix without generating a stash. To help check this, I wonder if there is a Mat option that produces errors if entries destined for other processors are added/set. I mean something like MAT_NEW_NONZERO_LOCATION_ERR for stashing. Best, Norihiro
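
The reply above is cut off, so whether such an option exists is not settled here. As a sketch of one way to catch off-process insertions by hand (an assumption on my part, not the answer from the thread), the ownership range can be checked before each MatSetValues() call:

#include <petscmat.h>

/* Insert a single value, but error out instead of letting PETSc stash an
   entry whose row is owned by another process (sketch only). */
PetscErrorCode MatSetValueLocalOnly(Mat A, PetscInt row, PetscInt col, PetscScalar v)
{
  PetscErrorCode ierr;
  PetscInt       rstart, rend;

  PetscFunctionBeginUser;
  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  if (row < rstart || row >= rend) {
    SETERRQ1(PETSC_COMM_SELF, PETSC_ERR_ARG_OUTOFRANGE,
             "Row %D is owned by another process; this entry would go to the stash", row);
  }
  ierr = MatSetValues(A, 1, &row, 1, &col, &v, ADD_VALUES);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

There is also MatSetOption() with MAT_NO_OFF_PROC_ENTRIES, which tells PETSc that no off-process entries will be generated; as far as I know it is meant to let assembly skip the stash communication rather than to diagnose stray insertions, so the manual check above may be the more direct test.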

[petsc-users] [petsc4py] a problem with computeRHSFunctionLinear interface?

2016-08-11 Thread Francesco Caimmi
Dear all, I was trying to reproduce /ts/examples/tutorials/ex4.c in Python to learn how to use the TS solvers; the example uses the function TSComputeRHSFunctionLinear. However, I get an error when running my code (attached in case you want to look at it) when I call ts.solve. Here is the trace: [...
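
For reference, a minimal sketch of the C-side pattern that linear problems of this kind use with TSComputeRHSFunctionLinear, which the Python code is presumably trying to mirror through petsc4py; the RHS matrix A (for u_t = A u) is assumed to be assembled elsewhere, and the function name SetupLinearTS is a placeholder:

#include <petscts.h>

/* Register the built-in linear RHS helpers instead of a hand-written
   RHS function, for a constant-coefficient problem u_t = A u. */
PetscErrorCode SetupLinearTS(TS ts, Mat A)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = TSSetProblemType(ts, TS_LINEAR);CHKERRQ(ierr);
  ierr = TSSetRHSFunction(ts, NULL, TSComputeRHSFunctionLinear, NULL);CHKERRQ(ierr);
  ierr = TSSetRHSJacobian(ts, A, A, TSComputeRHSJacobianConstant, NULL);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}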