Re: [petsc-users] Nodes coordinates in distributed dmplex

2022-07-16 Thread Matthew Knepley
On Fri, Jul 15, 2022 at 7:29 PM David Fuentes  wrote:

> Hi, when looping over the nodal coordinates, is there a way to get the
> corresponding DOF in the index set for the DM Field(s) that are used in the
> residual vector?
> I would like to update the Jacobian/residual entries obtained from
> DMCreateFieldIS that correspond to particular nodal coordinates that are
> not on the boundary.
>

I think I just need to understand exactly what you want. DMCreateFieldIS gives
you the global indices into the residual/Jacobian. You can get these same
indices
from the PetscSection that you get from DMGetGlobalSection(). You can put
any mesh point into the Section and get back an index
using PetscSectionGetOffset().
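
For example, a rough sketch (untested; the coordinate test is left as a
comment, the variable names are placeholders, and I am assuming the field
dofs live on vertices) would be:

  Vec                coords;
  PetscSection       csec, gsec;
  const PetscScalar *a;
  PetscInt           vStart, vEnd, v;

  DMPlexGetDepthStratum(dm, 0, &vStart, &vEnd);   /* depth 0 = vertices */
  DMGetCoordinatesLocal(dm, &coords);
  DMGetCoordinateSection(dm, &csec);
  DMGetGlobalSection(dm, &gsec);
  VecGetArrayRead(coords, &a);
  for (v = vStart; v < vEnd; ++v) {
    PetscInt cdof, coff, dof, goff;

    PetscSectionGetDof(csec, v, &cdof);
    PetscSectionGetOffset(csec, v, &coff);
    /* a[coff] .. a[coff+cdof-1] are the coordinates of vertex v */
    PetscSectionGetDof(gsec, v, &dof);
    PetscSectionGetOffset(gsec, v, &goff);
    if (goff < 0) continue;  /* dofs of v are owned by another process */
    /* goff .. goff+dof-1 are the global rows in the residual/Jacobian for v;
       test the coordinates here and update those entries as needed */
  }
  VecRestoreArrayRead(coords, &a);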

Maybe give me a more specific example and I can help write some code.

  Thanks,

  Matt

On Fri, Jul 10, 2015 at 11:14 AM Matthew Knepley  wrote:
>
>> On Fri, Jul 10, 2015 at 6:48 AM, Alejandro D Otero 
>> wrote:
>>
>>> Hi Matt, thanks for your answer.
>>>
>>> I got a vector from getCoordinates(). How are its components indexed?
>>> Is it (p * dim + d), with p the node, dim the dimension of the problem, and
>>> x_d the d-th coordinate of the node?
>>> And which numbering is used for p: the local node number, the point number
>>> in the DAG of the dm, or the original node number?
>>>
>>
>> All data layouts for Plex are described by a PetscSection (which is how
>> it should be in the rest of PETSc as well). The PetscSection is a map
>> from mesh points to (# dof, offset) pairs. Thus, the way I get
>> coordinates is the following:
>>
>> DMGetCoordinatesLocal(dm, &coordinates);
>> DMGetCoordinateDM(dm, &cdm);
>> DMGetDefaultSection(cdm, &cs);
>> PetscSectionGetChart(cs, &pStart, &pEnd);
>> VecGetArrayRead(coordinates, &coords);
>> for (p = pStart; p < pEnd; ++p) {
>>   PetscInt dof, off;
>>
>>   PetscSectionGetDof(cs, p, &dof);
>>   PetscSectionGetOffset(cs, p, &off);
>>   for (d = 0; d < dof; ++d) /* compute with coords[off+d] */;
>> }
>> VecRestoreArrayRead(coordinates, &coords);
>>
>>
>>> I am trying a simple square mesh with 16 4-node square elements
>>> partitioned onto 2 processes, 25 nodes in total.
>>> The distributed dm seems all right to me. Each process gets a dm with 8
>>> elements and 15 nodes, which means that the 5 shared nodes are local to each
>>> process. But one of the processes gives negative values for the shared nodes.
>>> How do they need to be mapped to get the right numbers?
>>>
>>
>> Where do you get negative point numbers? I encode off-process point
>> numbers as -(remote point + 1).
>>
>>
>>> It seems I'm using the wrong approach to this. Maybe I need to get the
>>> coordinates in a somewhat different way. I'm working on a from-scratch
>>> implementation of a FEM code based on petsc4py. I want to code the assembly
>>> of the problem matrices from elemental matrices. I've already done this
>>> sequentially, but I got stuck when trying to compute the elemental matrices
>>> in parallel because I don't understand how to obtain the coordinates of
>>> the nodes of each element.
>>>
>>
>> The way I get coordinates for the element c is
>>
>>   PetscScalar *coords = NULL;
>>   PetscInt     csize;
>>
>>   ierr = DMPlexVecGetClosure(dm, cs, coordinates, c, &csize,
>> &coords);CHKERRQ(ierr);
>>   ierr = DMPlexVecRestoreClosure(dm, cs, coordinates, c, &csize,
>> &coords);CHKERRQ(ierr);
>>
>>   Thanks,
>>
>>  Matt
>>
>>
>>> Again, thanks for your help,
>>>
>>> A.
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Thu, Jul 9, 2015 at 5:18 PM, Matthew Knepley 
>>> wrote:
>>>
 On Thu, Jul 9, 2015 at 7:42 AM, Alejandro D Otero 
 wrote:

> Hi, sorry if this is an obvious question, but I cannot figure out how
> to recover the finite element node coordinates once I have distributed a mesh
> stored as a dmplex. I am using petsc4py as an interface to the PETSc routines.
>
> I first created a dmplex using:
> dm.createFromCellList()
>
> In a sequential run I got the coordinates with:
> Coords = dm.getCoordinates()
>
> which gave a sequential vector with the coordinates of the mesh nodes.
>
> When I distribute the mesh with:
> dm.distribute()
>
> each MPI process has its own dm, but the indexing of the vector
> resulting from getCoordinates() or getCoordinatesLocal() does not seem
> consistent with the local numbering of the cells and nodes.
>

 When the mesh is distributed, the vertices are renumbered. Thus the
 coordinates you get out are
 for reordered local vertices, but they are consistent with the local
 topology (cells still contain the
 right vertices) and the overlap mapping (SF still connects the shared
 vertices).

 What do you need it to do?

   Thanks,

 Matt


> Which is the correct way of doing this in PETSc philosophy?
>
> Thanks in advance,
> Alejandro
>



 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
 -- Norbert Wiener

>>>
>>>
>>
>>
>> --

Re: [petsc-users] Nodes coordinates in distributed dmplex

2022-07-15 Thread David Fuentes
Hi, when looping over the nodal coordinates, is there a way to get the
corresponding DOF in the index set for the DM Field(s) that are used in the
residual vector?
I would like to update the Jacobian/residual entries obtained from
DMCreateFieldIS that correspond to particular nodal coordinates that are
not on the boundary.


On Fri, Jul 10, 2015 at 11:14 AM Matthew Knepley  wrote:

> On Fri, Jul 10, 2015 at 6:48 AM, Alejandro D Otero 
> wrote:
>
>> Hi Matt, thanks for your answer.
>>
>> I got a vector from getCoordinates(). How are its components indexed?
>> Is it (p * dim + d), with p the node, dim the dimension of the problem, and
>> x_d the d-th coordinate of the node?
>> And which numbering is used for p: the local node number, the point number
>> in the DAG of the dm, or the original node number?
>>
>
> All data layouts for Plex are described by a PetscSection (which is how it
> should be in the rest of PETSc as well). The PetscSection is a map
> from mesh points to (# dof, offset) pairs. Thus, the way I get coordinates
> is the following:
>
> DMGetCoordinatesLocal(dm, &coordinates);
> DMGetCoordinateDM(dm, &cdm);
> DMGetDefaultSection(cdm, &cs);
> PetscSectionGetChart(cs, &pStart, &pEnd);
> VecGetArrayRead(coordinates, &coords);
> for (p = pStart; p < pEnd; ++p) {
>   PetscInt dof, off;
>
>   PetscSectionGetDof(cs, p, &dof);
>   PetscSectionGetOffset(cs, p, &off);
>   for (d = 0; d < dof; ++d) /* compute with coords[off+d] */;
> }
> VecRestoreArrayRead(coordinates, &coords);
>
>
>> I am trying a simple square mesh with 16 4-node square elements
>> partitioned onto 2 processes, 25 nodes in total.
>> The distributed dm seems all right to me. Each process gets a dm with 8
>> elements and 15 nodes, which means that the 5 shared nodes are local to each
>> process. But one of the processes gives negative values for the shared nodes.
>> How do they need to be mapped to get the right numbers?
>>
>
> Where do you get negative point numbers? I encode off-process point
> numbers as -(remote point + 1).
>
>
>> It seems I'm using the wrong approach to this. Maybe I need to get the
>> coordinates in a somewhat different way. I'm working on a from-scratch
>> implementation of a FEM code based on petsc4py. I want to code the assembly
>> of the problem matrices from elemental matrices. I've already done this
>> sequentially, but I got stuck when trying to compute the elemental matrices
>> in parallel because I don't understand how to obtain the coordinates of
>> the nodes of each element.
>>
>
> The way I get coordinates for the element c is
>
>   PetscScalar *coords = NULL;
>   PetscInt     csize;
>
>   ierr = DMPlexVecGetClosure(dm, cs, coordinates, c, &csize,
> &coords);CHKERRQ(ierr);
>   ierr = DMPlexVecRestoreClosure(dm, cs, coordinates, c, &csize,
> &coords);CHKERRQ(ierr);
>
>   Thanks,
>
>  Matt
>
>
>> Again, thanks for your help,
>>
>> A.
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> On Thu, Jul 9, 2015 at 5:18 PM, Matthew Knepley 
>> wrote:
>>
>>> On Thu, Jul 9, 2015 at 7:42 AM, Alejandro D Otero 
>>> wrote:
>>>
 Hi, sorry if this is an obvious question, but I cannot figure out how
 to recover the finite element node coordinates once I have distributed a mesh
 stored as a dmplex. I am using petsc4py as an interface to the PETSc routines.

 I first created a dmplex using:
 dm.createFromCellList()

 In a sequential run I got the coordinates with:
 Coords = dm.getCoordinates()

 which gave a sequential vector with the coordinates of the mesh nodes.

 When I distribute the mesh with:
 dm.distribute()

 each MPI process has its own dm, but the indexing of the vector resulting
 from getCoordinates() or getCoordinatesLocal() does not seem consistent with
 the local numbering of the cells and nodes.

>>>
>>> When the mesh is distributed, the vertices are renumbered. Thus the
>>> coordinates you get out are
>>> for reordered local vertices, but they are consistent with the local
>>> topology (cells still contain the
>>> right vertices) and the overlap mapping (SF still connects the shared
>>> vertices).
>>>
>>> What do you need it to do?
>>>
>>>   Thanks,
>>>
>>> Matt
>>>
>>>
 Which is the correct way of doing this in PETSc philosophy?

 Thanks in advance,
 Alejandro

>>>
>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>


Re: [petsc-users] Nodes coordinates in distributed dmplex

2015-07-14 Thread Jed Brown
Alejandro D Otero aot...@fi.uba.ar writes:
 But when I set the viewer to use VTK_VTU (ASCII in this case just to make
 it readable, but it is the same in the native VTK format when read with
 ParaView), I got:

The ASCII viewer is unrelated; I never use it.  The VTU
(XML/binary-appended) format writes multiple blocks into the same file.
Those blocks include ghost points (thus some redundancy) because that's
how the format is specified.




Re: [petsc-users] Nodes coordinates in distributed dmplex

2015-07-13 Thread Matthew Knepley
On Mon, Jul 13, 2015 at 7:12 AM, Alejandro D Otero aot...@fi.uba.ar wrote:

 Hi, I seem to be having problems with the PETSc vector VTK viewer. The output
 file I get has some values which are different from what I actually have.

 I have a scalar field represented by a vector. If I print it on screen I
 get the following output:

 Vec Object: 2 MPI processes
   type: mpi
 Process [0]
 0
 0
 0.25
 0.25
 0.5
 0.5
 0.75
 0.75
 1
 1
 Process [1]
 0
 0
 0
 0.25
 0.25
 0.25
 0.5
 0.5
 0.5
 0.75
 0.75
 0.75
 1
 1
 1

 But when I set the viewer to use VTK_VTU (ASCII in this case just to make
 it readable, but it is the same in the native VTK format when read with
 ParaView), I got:

 0.00e+00
 0.00e+00
 2.50e-01
 2.50e-01
 5.00e-01
 5.00e-01
 7.155608e-01
 7.198046e-01
 7.892083e-01
 1.00e+00
 0.00e+00
 2.248979e-01
 2.439858e-01
 2.788163e-01
 2.50e-01
 2.50e-01
 4.997506e-01
 5.042740e-01
 5.003098e-01
 7.50e-01
 7.50e-01
 7.50e-01
 1.00e+00
 1.00e+00
 1.00e+00

 The differences start from the 7th value on. Any clue as to what could be
 happening?


Nope. Jed wrote the VTU viewer, so I can take a look at it. However, I
think the best way to do output now is to use
HDF5. You can get human-readable output using h5dump, it's compact and
parallel, and you can use

  ./bin/petsc_gen_xdmf.py

to make XDMF files readable by Paraview. I output like this

  -dm_view hdf5:sol.h5
  -vec_view hdf5:sol.h5::append

and include PetscObjectViewFromOptions() calls in my code.
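
The corresponding calls in the code are short; a sketch (the solution vector
and its name "sol" here are only placeholders) is

  ierr = PetscObjectViewFromOptions((PetscObject) dm, NULL, "-dm_view");CHKERRQ(ierr);
  ierr = PetscObjectSetName((PetscObject) sol, "sol");CHKERRQ(ierr);
  ierr = PetscObjectViewFromOptions((PetscObject) sol, NULL, "-vec_view");CHKERRQ(ierr);

so the two options above select the HDF5 output at runtime.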

  Thanks,

 Matt


 Thanks in advance,
 Alejandro




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


Re: [petsc-users] Nodes coordinates in distributed dmplex

2015-07-10 Thread Alejandro D Otero
Hi Matt, thanks for your answer.

I got a vector from getCoordinates(). How are its components indexed? Is it
(p * dim + d), with p the node, dim the dimension of the problem, and x_d the
d-th coordinate of the node?
And which numbering is used for p: the local node number, the point number in
the DAG of the dm, or the original node number?

I am trying a simple square mesh with 16 4-node square elements partitioned
onto 2 processes, 25 nodes in total.
The distributed dm seems all right to me. Each process gets a dm with 8
elements and 15 nodes, which means that the 5 shared nodes are local to each
process. But one of the processes gives negative values for the shared nodes.
How do they need to be mapped to get the right numbers?

It seems I'm using the wrong approach to this. Maybe I need to get the
coordinates in a somewhat different way. I'm working on a from-scratch
implementation of a FEM code based on petsc4py. I want to code the assembly
of the problem matrices from elemental matrices. I've already done this
sequentially, but I got stuck when trying to compute the elemental matrices in
parallel because I don't understand how to obtain the coordinates of the nodes
of each element.

Again, thanks for your help,

A.









On Thu, Jul 9, 2015 at 5:18 PM, Matthew Knepley knep...@gmail.com wrote:

 On Thu, Jul 9, 2015 at 7:42 AM, Alejandro D Otero aot...@fi.uba.ar
 wrote:

 Hi, sorry if this is an obvious question, but I cannot figure out how to
 recover the finite element node coordinates once I have distributed a mesh
 stored as a dmplex. I am using petsc4py as an interface to the PETSc routines.

 I first created a dmplex using:
 dm.createFromCellList()

 In a sequential run I got the coordinates with:
 Coords = dm.getCoordinates()

 which gave a sequential vector with the coordinates of the mesh nodes.

 When I distribute the mesh with:
 dm.distribute()

 each MPI process has its own dm, but the indexing of the vector resulting
 from getCoordinates() or getCoordinatesLocal() does not seem consistent with
 the local numbering of the cells and nodes.


 When the mesh is distributed, the vertices are renumbered. Thus the
 coordinates you get out are
 for reordered local vertices, but they are consistent with the local
 topology (cells still contain the
 right vertices) and the overlap mapping (SF still connects the shared
 vertices).

 What do you need it to do?

   Thanks,

 Matt


 Which is the correct way of doing this in PETSc philosophy?

 Thanks in advance,
 Alejandro




 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
 -- Norbert Wiener



Re: [petsc-users] Nodes coordinates in distributed dmplex

2015-07-10 Thread Matthew Knepley
On Fri, Jul 10, 2015 at 6:48 AM, Alejandro D Otero aot...@fi.uba.ar wrote:

 Hi Matt, thanks for your answer.

 I got a vector from getCoordinates(). How are its components indexed?
 Is it (p * dim + d), with p the node, dim the dimension of the problem, and
 x_d the d-th coordinate of the node?
 And which numbering is used for p: the local node number, the point number
 in the DAG of the dm, or the original node number?


All data layouts for Plex are described by a PetscSection (which is how it
should be in the rest of PETSc as well). The PetscSection is a map
from mesh points to (# dof, offset) pairs. Thus, the way I get coordinates
is the following:

DMGetCoordinatesLocal(dm, &coordinates);
DMGetCoordinateDM(dm, &cdm);
DMGetDefaultSection(cdm, &cs);
PetscSectionGetChart(cs, &pStart, &pEnd);
VecGetArrayRead(coordinates, &coords);
for (p = pStart; p < pEnd; ++p) {
  PetscInt dof, off;

  PetscSectionGetDof(cs, p, &dof);
  PetscSectionGetOffset(cs, p, &off);
  for (d = 0; d < dof; ++d) /* compute with coords[off+d] */;
}
VecRestoreArrayRead(coordinates, &coords);
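
Note that the chart [pStart, pEnd) runs over all mesh points (cells, faces,
edges, vertices); points carrying no coordinates simply give dof == 0. If you
only want the vertices, you can restrict the loop to the vertex stratum, e.g.
(a small sketch):

DMPlexGetDepthStratum(dm, 0, &vStart, &vEnd);  /* depth 0 = vertices */
for (p = vStart; p < vEnd; ++p) { /* same body as above */ }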


 I am trying a simple square mesh with 16 4-node square elements partitioned
 onto 2 processes, 25 nodes in total.
 The distributed dm seems all right to me. Each process gets a dm with 8
 elements and 15 nodes, which means that the 5 shared nodes are local to each
 process. But one of the processes gives negative values for the shared nodes.
 How do they need to be mapped to get the right numbers?


Where do you get negative point numbers? I encode off-process point numbers
as -(remote point + 1).
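
So if you run into a negative point number q, the remote point can be
recovered with

  q = -(q + 1);  /* undo the encoding */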


 It seems I'm using the wrong approach to this. Maybe I need to get the
 coordinates in a somewhat different way. I'm working on a from-scratch
 implementation of a FEM code based on petsc4py. I want to code the assembly
 of the problem matrices from elemental matrices. I've already done this
 sequentially, but I got stuck when trying to compute the elemental matrices
 in parallel because I don't understand how to obtain the coordinates of
 the nodes of each element.


The way I get coordinates for the element c is

  PetscScalar *coords = NULL;
  PetscInt     csize;

  ierr = DMPlexVecGetClosure(dm, cs, coordinates, c, &csize, &coords);CHKERRQ(ierr);
  ierr = DMPlexVecRestoreClosure(dm, cs, coordinates, c, &csize, &coords);CHKERRQ(ierr);
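
So an element loop for your assembly could be sketched like this (untested;
building the element matrix and inserting it, e.g. with DMPlexMatSetClosure(),
are only indicated by comments):

  PetscInt cStart, cEnd, c;

  ierr = DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd);CHKERRQ(ierr);  /* height 0 = cells */
  for (c = cStart; c < cEnd; ++c) {
    PetscScalar *coords = NULL;
    PetscInt     csize;

    ierr = DMPlexVecGetClosure(dm, cs, coordinates, c, &csize, &coords);CHKERRQ(ierr);
    /* coords holds the csize coordinate values of the vertices of cell c;
       compute the element matrix/vector from them here */
    ierr = DMPlexVecRestoreClosure(dm, cs, coordinates, c, &csize, &coords);CHKERRQ(ierr);
    /* then insert the element contribution into the global matrix */
  }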

  Thanks,

 Matt


 Again, thanks for your help,

 A.









 On Thu, Jul 9, 2015 at 5:18 PM, Matthew Knepley knep...@gmail.com wrote:

 On Thu, Jul 9, 2015 at 7:42 AM, Alejandro D Otero aot...@fi.uba.ar
 wrote:

 Hi, sorry if this is an obvious question, but I cannot figure out how to
 recover the finite element node coordinates once I have distributed a mesh
 stored as a dmplex. I am using petsc4py as an interface to the PETSc routines.

 I first created a dmplex using:
 dm.createFromCellList()

 In a sequential run I got the coordinates with:
 Coords = dm.getCoordinates()

 which gave a sequential vector with the coordinates of the mesh nodes.

 When I distribute the mesh with:
 dm.distribute()

 each MPI process has its own dm, but the indexing of the vector resulting
 from getCoordinates() or getCoordinatesLocal() does not seem consistent with
 the local numbering of the cells and nodes.


 When the mesh is distributed, the vertices are renumbered. Thus the
 coordinates you get out are
 for reordered local vertices, but they are consistent with the local
 topology (cells still contain the
 right vertices) and the overlap mapping (SF still connects the shared
 vertices).

 What do you need it to do?

   Thanks,

 Matt


 Which is the correct way of doing this in PETSc philosophy?

 Thanks in advance,
 Alejandro




 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
 -- Norbert Wiener





-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


Re: [petsc-users] Nodes coordinates in distributed dmplex

2015-07-09 Thread Matthew Knepley
On Thu, Jul 9, 2015 at 7:42 AM, Alejandro D Otero aot...@fi.uba.ar wrote:

 Hi, sorry if this is an obvious question, but I cannot figure out how to
 recover the finite element node coordinates once I have distributed a mesh
 stored as a dmplex. I am using petsc4py as an interface to the PETSc routines.

 I first created a dmplex using:
 dm.createFromCellList()

 In a sequential run I got the coordinates with:
 Coords = dm.getCoordinates()

 which gave a sequential vector with the coordinates of the mesh nodes.

 When I distribute the mesh with:
 dm.distribute()

 each MPI process has its own dm, but the indexing of the vector resulting
 from getCoordinates() or getCoordinatesLocal() does not seem consistent with
 the local numbering of the cells and nodes.


When the mesh is distributed, the vertices are renumbered. Thus the
coordinates you get out are
for reordered local vertices, but they are consistent with the local
topology (cells still contain the
right vertices) and the overlap mapping (SF still connects the shared
vertices).
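
If you want to check how the shared points are connected across processes,
you can look at the point SF, e.g. (a small sketch):

  PetscSF sf;

  ierr = DMGetPointSF(dm, &sf);CHKERRQ(ierr);
  ierr = PetscSFView(sf, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);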

What do you need it to do?

  Thanks,

Matt


 Which is the correct way of doing this in PETSc philosophy?

 Thanks in advance,
 Alejandro




-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener