Re: [petsc-users] petsc4py: parallel matrix-vector multiplication

2018-05-06 Thread Jed Brown
See formJacobian() in bratu3d.py for matrix assembly using DMDA.
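
A minimal sketch of that assembly pattern, using a 2D 5-point Laplacian
rather than the Bratu Jacobian (grid size and entries are illustrative, and
it assumes a petsc4py where DMDA.createMat() is available; some older
releases spell it createMatrix()):

    from petsc4py import PETSc

    # DMDA for an 8x8 structured grid with a one-cell stencil/ghost width
    da = PETSc.DMDA().create([8, 8], stencil_width=1)
    A = da.createMat()  # operator with the DMDA's parallel layout

    row = PETSc.Mat.Stencil()
    col = PETSc.Mat.Stencil()
    mx, my = da.getSizes()
    (xs, xe), (ys, ye) = da.getRanges()  # locally owned index range
    for j in range(ys, ye):
        for i in range(xs, xe):
            row.index = (i, j)
            if i == 0 or j == 0 or i == mx - 1 or j == my - 1:
                A.setValueStencil(row, row, 1.0)  # Dirichlet boundary row
            else:
                A.setValueStencil(row, row, 4.0)
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    col.index = (i + di, j + dj)
                    A.setValueStencil(row, col, -1.0)
    A.assemblyBegin()
    A.assemblyEnd()

Each rank loops only over its own getRanges() block, so the same code runs
unchanged on any number of processes.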

Robert Speck  writes:

> OK, thanks, I see. This is basically what happens in the poisson2d.py
> example, too, right?
>
> I tried it with the shell matrix (?) used in the poisson2d example and
> it works right away, but then I fail to see how to make use of the
> preconditioners for KSP (see my original message).
>
> Thanks again!
> -Robert-

Re: [petsc-users] petsc4py: parallel matrix-vector multiplication

2018-05-06 Thread Dave May
On Sun, 6 May 2018 at 17:52, Robert Speck  wrote:

> OK, thanks, I see. This is basically what happens in the poisson2d.py
> example, too, right?
>
> I tried it with the shell matrix (?) used in the poisson2d example and
> it works right away, but then I fail to see how to make use of the
> preconditioners for KSP (see my original message).


If you want to assemble something within the operator created by
DMCreateMatrix(), take a look at this function:

http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatSetValuesStencil.html

Study some of the examples listed at the bottom of the page.

(But you will have to concern yourself with the spatial decomposition, just
like in the poisson2d and Bratu examples; see the sketch below.)
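
Once the operator is assembled this way (rather than applied through a
shell), matrix-based preconditioners work as usual. A minimal sketch,
assuming A is the matrix assembled in the earlier sketch; 'gamg' is just an
illustrative choice:

    from petsc4py import PETSc

    ksp = PETSc.KSP().create(comm=A.getComm())
    ksp.setOperators(A)          # assembled AIJ matrix from the DMDA
    ksp.setType('cg')
    ksp.getPC().setType('gamg')  # not possible with a pure shell matrix
    ksp.setFromOptions()         # allow -ksp_type/-pc_type overrides

    x, b = A.createVecs()
    b.set(1.0)
    ksp.solve(b, x)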

Thanks,
  Dave

Re: [petsc-users] petsc4py: parallel matrix-vector multiplication

2018-05-06 Thread Robert Speck
OK, thanks, I see. This is basically what happens in the poisson2d.py
example, too, right?

I tried it with the shell matrix (?) used in the poisson2d example and
it works right away, but then I fail to see how to make use of the
preconditioners for KSP (see my original message).

Thanks again!
-Robert-

Re: [petsc-users] petsc4py: parallel matrix-vector multiplication

2018-05-06 Thread Jed Brown
Robert Speck  writes:

> Thanks for your reply and help. Yes, this is going to be a PDE solver
> for structured grids. The first goal would be IDC (or Crank-Nicolson)
> for the heat equation, which would require both solving a linear system
> and application of the matrix.
>
> The code I wrote for testing parallel matrix-vector multiplication can
> be found here:
> https://github.com/Parallel-in-Time/pySDC/blob/petsc/pySDC/playgrounds/PETSc/playground_matmult.py
>
> Both vectors and matrix come from the DMDA, but I guess filling them is
> done the wrong way? Or do I need to convert global/natural vectors to
> local ones somewhere?

Global and Natural are not the same (see user's manual for details).
The matrix acts on a Global vector.  See
petsc4py/demo/bratu3d/bratu3d.py for examples of efficiently setting
values (and computing residuals) using Global vectors.  It should be
simpler/cleaner code than you currently have.
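
A small sketch of the distinction (names illustrative; assumes the DMDA
natural-ordering helpers createNaturalVec()/globalToNatural() as wrapped by
petsc4py):

    from petsc4py import PETSc

    da = PETSc.DMDA().create([8, 8], stencil_width=1)
    xg = da.createGlobalVec()   # what Mat.mult and KSP.solve act on
    xn = da.createNaturalVec()  # lexicographic (serial-style) ordering

    xg.set(1.0)
    da.globalToNatural(xg, xn)  # scatter only for I/O or comparison

The Global ordering is rank-contiguous and changes with the number of
processes, which is why raw entries printed per rank differ between serial
and parallel runs even when y = A x is correct.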


Re: [petsc-users] petsc4py: parallel matrix-vector multiplication

2018-05-06 Thread Robert Speck
Thanks for your reply and help. Yes, this is going to be a PDE solver
for structured grids. The first goal would be IDC (or Crank-Nicolson)
for the heat equation, which would require both solving a linear system
and applying the matrix.
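
For reference, a Crank-Nicolson step for u_t = A u makes those two
requirements explicit. A hypothetical sketch; A, ksp, dt, and u are assumed
to exist, with the KSP holding B = I - (dt/2) A:

    rhs = u.duplicate()
    A.mult(u, rhs)         # rhs = A u^n  (the matrix application)
    rhs.aypx(0.5 * dt, u)  # rhs = u^n + (dt/2) A u^n

    # ksp wraps B = I - (dt/2) A, e.g. built once via
    # B = A.copy(); B.scale(-0.5 * dt); B.shift(1.0)
    ksp.solve(rhs, u)      # the linear solve: u <- u^{n+1}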

The code I wrote for testing parallel matrix-vector multiplication can
be found here:
https://github.com/Parallel-in-Time/pySDC/blob/petsc/pySDC/playgrounds/PETSc/playground_matmult.py

Both vectors and matrix come from the DMDA, but I guess filling them is
done the wrong way? Or do I need to convert global/natural vectors to
local ones somewhere?

Best
-Robert-

Re: [petsc-users] petsc4py: parallel matrix-vector multiplication

2018-05-06 Thread Dave May
On Sun, 6 May 2018 at 10:40, Robert Speck  wrote:

> Hi!
>
> I would like to do a matrix-vector multiplication (besides using linear
> solvers and so on) with petsc4py. I took the matrix from this example
>
> (https://bitbucket.org/petsc/petsc4py/src/master/demo/kspsolve/petsc-mat.py)


This example only produces a matrix, and from the code, the matrix produced
is identical in serial and parallel.



> and applied it to a PETSc Vector. All works well in serial, but in
> parallel (in particular if ordering becomes relevant) the resulting
> vector looks very different.


Given this, the way you defined the x vector in y = A x must be different
when run on 1 versus N MPI ranks.


> Using the shell matrix of this example
> (https://bitbucket.org/petsc/petsc4py/src/master/demo/poisson2d/poisson2d.py)
> helps, but then I cannot use matrix-based preconditioners for KSP
> directly (right?). I also tried using DMDA for creating vectors and
> matrix and for taking care of their ordering (which seems to be my
> problem here), but that did not help either.
>
> So, my question is this: How do I do easy parallel matrix-vector
> multiplication with petsc4py in a way that allows me to use parallel
> linear solvers etc. later on? I want to deal with spatial decomposition
> as little as possible.


What's the context - are you solving a PDE?

Assuming you are using your own grid object (e.g. as you might have if
solving a PDE), and assuming you are not solving a 1D problem, you actually
have to "deal" with the spatial decomposition, otherwise performance could
be quite terrible - even for something simple like a 5-point Laplacian on a
structured grid in 2D.

> What data structures should I use? DMDA or
> PETSc.Vec() and PETSc.Mat() or something else?


The mat-vec product is not causing your problem. Your issue appears to be
that you do not have a way to label entries in a vector in a consistent
manner.

What's the objective? Are you solving a PDE? If yes, structured grid? If
yes again, use the DMDA. It takes care of all the local-to-global and
global-to-local mapping you need.
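
A minimal sketch of those mappings (grid size illustrative):

    from petsc4py import PETSc

    da = PETSc.DMDA().create([8, 8], stencil_width=1)
    xg = da.createGlobalVec()  # owned entries only
    xl = da.createLocalVec()   # owned entries plus the ghost layer

    da.globalToLocal(xg, xl)   # pull ghost values from neighbor ranks
    arr = da.getVecArray(xl)   # indexable by global (i, j) coordinates
    (xs, xe), (ys, ye) = da.getRanges()
    arr[xs:xe, ys:ye] = 1.0    # write only the locally owned block
    da.localToGlobal(xl, xg)   # push owned values back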

Thanks,
  Dave

Re: [petsc-users] petsc4py: parallel matrix-vector multiplication

2018-05-06 Thread Matthew Knepley
On Sun, May 6, 2018 at 4:40 AM, Robert Speck  wrote:

> Hi!
>
> I would like to do a matrix-vector multiplication (besides using linear
> solvers and so on) with petsc4py. I took the matrix from this example
> (https://bitbucket.org/petsc/petsc4py/src/master/demo/kspsolve/petsc-mat.py)
> and applied it to a PETSc Vector. All works well in serial, but in
> parallel (in particular if ordering becomes relevant) the resulting
> vector looks very different.


You will have to be more specific about "looks different". If you look
inside any of
the solver code, you can see we are just calling MatMult() everywhere.
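
A minimal sketch of that point (all names illustrative): a shell matrix
exposing nothing but mult() is already enough for an unpreconditioned KSP,
since the iteration only ever applies the operator.

    from petsc4py import PETSc

    class Identity:
        def mult(self, mat, x, y):
            x.copy(y)  # y = I x; stand-in for a real operator

    n = 8
    A = PETSc.Mat().createPython([n, n], context=Identity())
    A.setUp()

    ksp = PETSc.KSP().create()
    ksp.setOperators(A)
    ksp.getPC().setType('none')  # a shell matrix rules out matrix-based PCs
    x, b = A.createVecs()
    b.set(1.0)
    ksp.solve(b, x)  # every iteration is just a MatMult()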

   Matt


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] Refactoring an MPI code to call from another MPI code

2018-05-06 Thread Matthew Knepley
On Sat, May 5, 2018 at 8:51 PM, Navid Shervani-Tabar  wrote:

> Dear All,
>
> I have a question and I was advised that I might find people with related
> expertise here.
>
> I have two programs, Main and Solver, each of which uses MPI for parallel
> processing. I am keeping Main as the main code and turning Solver into a
> subroutine called by Main.
>
> The problem arises when both codes initialize MPI (since both were
> originally standalone programs) and then each has its own parallel
> structure, which causes the code to crash, as MPI_Init can only be called
> once in the now-combined program.
>
The easiest thing to do is call MPI_Initialized() in the programs to check
whether MPI_Init() has already been called (same for MPI_Finalized()).
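
The thread's codes are C++ and Fortran, but the same guard rendered in
Python with mpi4py looks like this (a sketch; mpi4py normally calls
MPI_Init on import, so that is disabled first to make the check explicit):

    from mpi4py import rc
    rc.initialize = False  # do not MPI_Init on import
    rc.finalize = False
    from mpi4py import MPI

    if not MPI.Is_initialized():  # safe to call before MPI_Init
        MPI.Init()

    # ... Main and Solver can both run this preamble safely ...

    if not MPI.Is_finalized():
        MPI.Finalize()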


> I thought this must have been a common problem and there should be
> discussions on how to approach this; however, to my surprise, I couldn't
> find any related topics on common websites like Stack Overflow or the
> archives of this mailing list.
>
> Thanks,
>
> Navid
>
> PS1: When connecting the two codes, I decided to have Main call Aux as a
> subroutine rather than run it as a separate executable, for a more
> optimized and robust setup and to avoid the overhead.
>
> PS2: Code Main is written in C++ and code Aux is written in Fortran.
>
> PS3: The Main code was originally using Deal.II as its solver.
>
No one answered on the Deal.II list? :)

  Matt


[petsc-users] petsc4py: parallel matrix-vector multiplication

2018-05-06 Thread Robert Speck
Hi!

I would like to do a matrix-vector multiplication (besides using linear
solvers and so on) with petsc4py. I took the matrix from this example
(https://bitbucket.org/petsc/petsc4py/src/master/demo/kspsolve/petsc-mat.py)
and applied it to a PETSc Vector. All works well in serial, but in
parallel (in particular if ordering becomes relevant) the resulting
vector looks very different. Using the shell matrix of this example
(https://bitbucket.org/petsc/petsc4py/src/master/demo/poisson2d/poisson2d.py)
helps, but then I cannot use matrix-based preconditioners for KSP
directly (right?). I also tried using DMDA for creating vectors and
matrix and for taking care of their ordering (which seems to be my
problem here), but that did not help either.

So, my question is this: How do I do easy parallel matrix-vector
multiplication with petsc4py in a way that allows me to use parallel
linear solvers etc. later on? I want to deal with spatial decomposition
as little as possible. What data structures should I use? DMDA or
PETSc.Vec() and PETSc.Mat() or something else?

Thanks!
-Robert-

--
Dr. Robert Speck
Juelich Supercomputing Centre
Institute for Advanced Simulation
Forschungszentrum Juelich GmbH
52425 Juelich, Germany

Tel: +49 2461 61 1644
Fax: +49 2461 61 6656

Email:   r.sp...@fz-juelich.de
Website: http://www.fz-juelich.de/ias/jsc/speck_r
PinT:    http://www.fz-juelich.de/ias/jsc/pint




