[petsc-users] Shell Matrix Operations required for KSP solvers?

2018-10-22 Thread Andrew Ho
I have a specialized matrix structure I'm trying to take advantage of for
solving large scale (non)linear systems. I think for this purpose using a
Shell matrix is sufficient for interfacing with PETSc's KSP linear solvers.

Looking at the examples which use shell matrices, it seems most only
require implementing MatMult, and sometimes MatMultTranspose. Is there a
list of what operations are required (or optional but good to have) for the
different KSP solver types? This is specifically for the KSP solve itself,
not constructing the actual matrix. I'd also be interested to know whether any
of the required/optional operations change when preconditioners (left and/or
right) are used.
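
For context, the kind of setup I have in mind is roughly the following sketch
(untested; MyShellMult, my_ctx, and the local/global sizes m, n, M, N are my
own placeholders, not anything PETSc provides):

/* y = A*x applied matrix-free using my specialized structure */
PetscErrorCode MyShellMult(Mat A, Vec x, Vec y)
{
  void *ctx;
  CHKERRQ(MatShellGetContext(A, &ctx));
  /* ... apply my operator to x and store the result in y ... */
  return 0;
}

/* solver setup */
Mat A;
KSP ksp;
CHKERRQ(MatCreateShell(PETSC_COMM_WORLD, m, n, M, N, &my_ctx, &A));
CHKERRQ(MatShellSetOperation(A, MATOP_MULT, (void (*)(void))MyShellMult));
/* presumably also MATOP_MULT_TRANSPOSE for methods that need A^T applied */
CHKERRQ(KSPCreate(PETSC_COMM_WORLD, &ksp));
CHKERRQ(KSPSetOperators(ksp, A, A));

The question is essentially which other MATOP_* entries a given KSP/PC
combination ends up calling.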


Re: [petsc-users] PETSc set off-diagonal matrix pre-allocation

2017-02-15 Thread Andrew Ho
I'm guessing I get less error checking because I'm using an optimized build.
But regardless, what does the "local" column size mean in terms of which MPI
rank owns what portion of the matrix?

Is that the size of the local diagonal block when specifying the non-zeros
for pre-allocation?

The documentation does not make this part clear at all to me.
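
In other words, is the following the intended usage (a sketch of my current
reading of Barry's replies below, for the 6x7 example on 2 ranks with 3 local
rows each and the nonzeros sitting in columns 3-6 of every row)?

/* explicitly split the 7 columns as 4 (rank 0) and 3 (rank 1) */
PetscInt local_cols = (rank == 0) ? 4 : 3;
CHKERRQ(MatSetSizes(A, 3, local_cols, 6, 7));
if (rank == 0)
{
  /* diagonal block covers columns [0,4): only column 3 is "diagonal" */
  PetscInt d_nnz[] = {1, 1, 1};
  PetscInt o_nnz[] = {3, 3, 3};
  CHKERRQ(MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz));
}
else
{
  /* diagonal block covers columns [4,7) */
  PetscInt d_nnz[] = {3, 3, 3};
  PetscInt o_nnz[] = {1, 1, 1};
  CHKERRQ(MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz));
}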

On Wed, Feb 15, 2017 at 11:01 AM, Barry Smith <bsm...@mcs.anl.gov> wrote:

>
>Strange. I run your example and get an error message as I expect.
>
>
> $ petscmpiexec -n 4 ./ex5
> [0]PETSC ERROR: - Error Message
> --
> [0]PETSC ERROR: Nonconforming object sizes
> [0]PETSC ERROR: Sum of local lengths 12 does not equal global length 6, my
> local length 3
>   likely a call to VecSetSizes() or MatSetSizes() is wrong.
> See http://www.mcs.anl.gov/petsc/documentation/faq.html#split
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: v3.7.5-3049-ge39721325b
> GIT Date: 2017-02-05 20:43:45 -0600
> [0]PETSC ERROR: ./ex5 on a arch-basic named visitor097-125.wl.anl-
> external.org by barrysmith Wed Feb 15 12:55:53 2017
> [0]PETSC ERROR: Configure options --download-mpich PETSC_ARCH=arch-basic
> --download-hdf5 --download-triangle
> [0]PETSC ERROR: #1 PetscSplitOwnership() line 89 in
> /Users/barrysmith/Src/petsc/src/sys/utils/psplit.c
> [0]PETSC ERROR: #2 PetscLayoutSetUp() line 137 in
> /Users/barrysmith/Src/petsc/src/vec/is/utils/pmap.c
> [0]PETSC ERROR: #3 MatMPIAIJSetPreallocation_MPIAIJ() line 2640 in
> /Users/barrysmith/Src/petsc/src/mat/impls/aij/mpi/mpiaij.c
> [0]PETSC ERROR: #4 MatMPIAIJSetPreallocation() line 3368 in
> /Users/barrysmith/Src/petsc/src/mat/impls/aij/mpi/mpiaij.c
> [0]PETSC ERROR: #5 main() line 34 in /Users/barrysmith/Src/petsc/
> test-directory/ex5.c
> [0]PETSC ERROR: PETSc Option Table entries:
> [0]PETSC ERROR: -malloc_test
> [0]PETSC ERROR: End of Error Message ---send entire
> error message to petsc-ma...@mcs.anl.gov--
> [1]PETSC ERROR: - Error Message
> --
> [1]PETSC ERROR: Nonconforming object sizes
> [1]PETSC ERROR: Sum of local lengths 12 does not equal global length 6, my
> local length 3
>
>
>   Note that the input you provide is not consistent. You claim the matrix
> has a total of 6 rows but then you try to assign 3 rows to the first
> process, 3 rows to the second, 3 rows to the third, 3 rows to the fourth
> for a total of 12 rows. Hence it should error. Similarly the sum of the
> "local sizes" for columns has to sum up to exactly the total number of
> columns.
>
>
>
>
> > On Feb 15, 2017, at 9:52 AM, Andrew Ho <andre...@uw.edu> wrote:
> >
> > I don't understand how the matrices are partitioned then. The
> documentation online all shows each mpi rank owning entire rows, with no
> partitioning along columns.
> >
> > I've attached an example where I try to partition a 6x7 matrix into 4
> ranks:
> >
> > rank 0 should own a 3x4 section
> > rank 1 should own a 3x3 section
> > rank 2 should own a 3x4 section
> > rank 3 should own a 3x3 section
> >
> > However, when I run the example and print out the ownership range and
> ownership column range, I get:
> >
> > rank 0 owns rows/cols: [0,3), [0, 4)
> > rank 1 owns rows/cols: [3,6), [4, 7)
> > rank 2 owns rows/cols: [6,9), [7, 11)
> > rank 3 owns rows/cols: [9,12), [11, 14)
> >
> > Which makes no sense since these ranges extend beyond the actual size of
> the matrix.
> >
> > On Tue, Feb 14, 2017 at 6:05 PM, Barry Smith <bsm...@mcs.anl.gov> wrote:
> >
> >1) on a single process all the columns are in the diagonal block and
> > none in the off-diagonal block (note that the column partitioning
> > corresponds to a partitioning of the vector x in the product A*x).
> >
> >2) on two processes you guessed wrong how many columns PETSc would
> put on the first process. You guessed it would put the first three on the
> first process and the last three on the second process.
> >
> > a) it cannot do that, every column has to be owned by some process
> (in the vector x above) so it cannot be 3 and 3, it has to be 4 and 3 or 3
> and 4.
> >
> > b) PETSc puts the "extra" columns on the earlier processes not the
> later processes.
> >
>   For rectangular matrices you really cannot get away with using
> "PETSC_DECIDE" for local columns when using preallocation, since it may
> decide something different than what you assume.

Re: [petsc-users] PETSc set off-diagonal matrix pre-allocation

2017-02-15 Thread Andrew Ho
I don't understand how the matrices are partitioned then. The documentation
online all shows each mpi rank owning entire rows, with no partitioning
along columns.

I've attached an example where I try to partition a 6x7 matrix into 4 ranks:

rank 0 should own a 3x4 section
rank 1 should own a 3x3 section
rank 2 should own a 3x4 section
rank 3 should own a 3x3 section

However, when I run the example and print out the ownership range and
ownership column range, I get:

rank 0 owns rows/cols: [0,3), [0, 4)
rank 1 owns rows/cols: [3,6), [4, 7)
rank 2 owns rows/cols: [6,9), [7, 11)
rank 3 owns rows/cols: [9,12), [11, 14)

Which makes no sense since these ranges extend beyond the actual size of
the matrix.

On Tue, Feb 14, 2017 at 6:05 PM, Barry Smith <bsm...@mcs.anl.gov> wrote:

>
>1) on a single process all the columns are in the diagonal block and
> none in the off-diagonal block (note that the column partitioning
> corresponds to a partitioning of the vector x in the product A*x).
>
>2) on two processes you guessed wrong how many columns PETSc would put
> on the first process. You guessed it would put the first three on the first
> process and the last three on the second process.
>
> a) it cannot do that, every column has to be owned by some process
> (in the vector x above) so it cannot be 3 and 3, it has to be 4 and 3 or 3
> and 4.
>
> b) PETSc puts the "extra" columns on the earlier processes not the
> later processes.
>
>   For rectangular matrices you really cannot get away with using
> "PETSC_DECIDE" for local columns when using preallocation
> , since it may decide something different than what you assume.
>
>I've attached the parallel code that behaves as expected.
>
>Barry
>
> > On Feb 14, 2017, at 1:06 PM, Andrew Ho <andre...@uw.edu> wrote:
> >
> > The problem isn't only with 1 process, but it seems to be with all
> non-square matrices?
> >
> > For example, here I have a two process program which tries to set a 6x7
> matrix to (each process owns 3 rows):
> >
> > 0 0 0 1 1 1 1
> > 0 0 0 1 1 1 1
> > 0 0 0 1 1 1 1
> > 0 0 0 1 1 1 1
> > 0 0 0 1 1 1 1
> > 0 0 0 1 1 1 1
> >
> > #include <petscmat.h>
> > #include <stdio.h>
> >
> > int main(int argc, char** argv)
> > {
> >   PetscErrorCode err;
> >   err = PetscInitialize(&argc, &argv, NULL, "help");
> >   CHKERRQ(err);
> >
> >   int rank, size;
> >   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
> >   MPI_Comm_size(MPI_COMM_WORLD, &size);
> >   if(size != 2)
> >   {
> > printf("must run with 2 processes");
> > MPI_Abort(MPI_COMM_WORLD, -1);
> >   }
> >
> >   // create a sparse AIJ matrix distributed across MPI
> >   Mat A;
> >   err = MatCreate(MPI_COMM_WORLD, &A);
> >   CHKERRQ(err);
> >   err = MatSetType(A, MATMPIAIJ);
> >   CHKERRQ(err);
> >   // setup pre-allocation for matrix space
> >   {
> > err =
> >   MatSetSizes(A, 3, PETSC_DECIDE, 6, 7);
> > CHKERRQ(err);
> > if(rank == 0)
> > {
> >   PetscInt d_nnz[] = {0, 0, 0};
> >   PetscInt o_nnz[] = {4, 4, 4};
> >   err = MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);
> >   CHKERRQ(err);
> > }
> > else
> > {
> >   PetscInt d_nnz[] = {3, 3, 3};
> >   PetscInt o_nnz[] = {1, 1, 1};
> >   err = MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);
> >   CHKERRQ(err);
> > }
> >   }
> >   err = MatSetUp(A);
> >   CHKERRQ(err);
> >
> >   // set values inside the matrix
> >   for (PetscInt row = 0; row < 3; ++row)
> >   {
> > for (PetscInt col = 3; col < 7; ++col)
> > {
> >   err = MatSetValue(A, 3 * rank + row, col, 1, INSERT_VALUES);
> >   CHKERRQ(err);
> > }
> >   }
> >
> >   err = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
> >   CHKERRQ(err);
> >   err = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
> >   CHKERRQ(err);
> >
> >   err = MatView(A, PETSC_VIEWER_STDOUT_WORLD);
> >   CHKERRQ(err);
> >
> >   // free memory
> >   err = MatDestroy(&A);
> >   CHKERRQ(err);
> >
> >   // cleanup any internal PETSc data at end of program
> >   err = PetscFinalize();
> >   CHKERRQ(err);
> > }
> >
> >
> > On Tue, Feb 14, 2017 at 5:26 AM, Matthew Knepley <knep...@gmail.com>
> wrote:
> > On Tue, Feb 14, 2017 at 6:41 AM, Andrew Ho <andre...@uw.edu> wrote:
> > I have a 3x6 matrix, and I want to set it to (just as an example):
> >

Re: [petsc-users] PETSc set off-diagonal matrix pre-allocation

2017-02-14 Thread Andrew Ho
The problem isn't only with 1 process, but it seems to be with all
non-square matrices?

For example, here I have a two process program which tries to set a 6x7
matrix to (each process owns 3 rows):

0 0 0 1 1 1 1
0 0 0 1 1 1 1
0 0 0 1 1 1 1
0 0 0 1 1 1 1
0 0 0 1 1 1 1
0 0 0 1 1 1 1

#include <petscmat.h>
#include <stdio.h>

int main(int argc, char** argv)
{
  PetscErrorCode err;
  err = PetscInitialize(&argc, &argv, NULL, "help");
  CHKERRQ(err);

  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  if(size != 2)
  {
printf("must run with 2 processes");
MPI_Abort(MPI_COMM_WORLD, -1);
  }

  // create a sparse AIJ matrix distributed across MPI
  Mat A;
  err = MatCreate(MPI_COMM_WORLD, &A);
  CHKERRQ(err);
  err = MatSetType(A, MATMPIAIJ);
  CHKERRQ(err);
  // setup pre-allocation for matrix space
  {
err =
  MatSetSizes(A, 3, PETSC_DECIDE, 6, 7);
CHKERRQ(err);
if(rank == 0)
{
  PetscInt d_nnz[] = {0, 0, 0};
  PetscInt o_nnz[] = {4, 4, 4};
  err = MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);
  CHKERRQ(err);
}
else
{
  PetscInt d_nnz[] = {3, 3, 3};
  PetscInt o_nnz[] = {1, 1, 1};
  err = MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);
  CHKERRQ(err);
}
  }
  err = MatSetUp(A);
  CHKERRQ(err);

  // set values inside the matrix
  for (PetscInt row = 0; row < 3; ++row)
  {
for (PetscInt col = 3; col < 7; ++col)
{
  err = MatSetValue(A, 3 * rank + row, col, 1, INSERT_VALUES);
  CHKERRQ(err);
}
  }

  err = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  CHKERRQ(err);
  err = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
  CHKERRQ(err);

  err = MatView(A, PETSC_VIEWER_STDOUT_WORLD);
  CHKERRQ(err);

  // free memory
  err = MatDestroy(&A);
  CHKERRQ(err);

  // cleanup any internal PETSc data at end of program
  err = PetscFinalize();
  CHKERRQ(err);
}


On Tue, Feb 14, 2017 at 5:26 AM, Matthew Knepley <knep...@gmail.com> wrote:

> On Tue, Feb 14, 2017 at 6:41 AM, Andrew Ho <andre...@uw.edu> wrote:
>
>> I have a 3x6 matrix, and I want to set it to (just as an example):
>>
>> 0 0 0 1 1 1
>> 0 0 0 1 1 1
>> 0 0 0 1 1 1
>>
>> From what I can tell, according to the documentation the MPIAIJ sparsity
>> of this matrix is:
>>
>
> I believe this is an inconsistency in PETSc for rectangular matrices. We
> divide matrices into diagonal and
> off-diagonal parts mainly to separate communication from computation. Thus
> on 1 proc, we never divide
> them, even if the matrix is rectangular. Therefore we misinterpret your
> preallocation.
>
> Barry, do you think we want to try and change this?
>
>   Matt
>
>
>> d_nnz = [0, 0, 0]
>> o_nnz = [3, 3, 3]
>>
>> However, when I do this, I get the following error:
>>
>> [0]PETSC ERROR: - Error Message
>>> --
>>>
>>> [0]PETSC ERROR: Argument out of range
>>> [0]PETSC ERROR: New nonzero at (0,3) caused a malloc
>>> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to
>>> turn off this check
>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
>>> for trouble shooting.
>>> [0]PETSC ERROR: Petsc Release Version 3.7.5, unknown
>>> [0]PETSC ERROR: ./ex3 on a arch-linux2-c-opt Tue Feb 14 04:23:56 2017
>>> [0]PETSC ERROR: Configure options --with-debugging=0 --COPTFLAGS="-O3
>>> -march=native" --CXXOPTFLAGS="-O3 -march=native" --FOPTFLAGS="-O3
>>> -march=native"
>>> [0]PETSC ERROR: #1 MatSetValues_MPIAIJ() line 582 in
>>> petsc/src/mat/impls/aij/mpi/mpiaij.c
>>> [0]PETSC ERROR: #2 MatSetValues() line 1190 in
>>> petsc/src/mat/interface/matrix.c
>>> [0]PETSC ERROR: #3 main() line 36 in ex3.c
>>> [0]PETSC ERROR: No PETSc Option Table entries
>>> [0]PETSC ERROR: End of Error Message ---send entire
>>> error message to petsc-ma...@mcs.anl.gov--
>>
>>
>> Here's a working test code:
>>
>> #include <petscsys.h>
>>> #include <petscmat.h>
>>> int main(int argc, char** argv)
>>> {
>>>   PetscErrorCode err;
>>>   err = PetscInitialize(&argc, &argv, NULL, "help");
>>>   CHKERRQ(err);
>>>   // create a sparse AIJ matrix distributed across MPI
>>>   PetscInt global_width = 6;
>>>   PetscInt global_height = 3;
>>>   Mat A;
>>>   err = MatCreate(MPI_COMM_WORLD, &A);
>>>   CHKERRQ(err);
>>>   err = MatSetType(A, MATMPIAIJ);
>>>   CHKERRQ(err);
>>>   // setup pre-allocation for matrix space
>>>   {
>>> err =
>

[petsc-users] PETSc set off-diagonal matrix pre-allocation

2017-02-14 Thread Andrew Ho
I have a 3x6 matrix, and I want to set it to (just as an example):

0 0 0 1 1 1
0 0 0 1 1 1
0 0 0 1 1 1

From what I can tell, according to the documentation the MPIAIJ sparsity of
this matrix is:

d_nnz = [0, 0, 0]
o_nnz = [3, 3, 3]

However, when I do this, I get the following error:

[0]PETSC ERROR: - Error Message
> --
>
> [0]PETSC ERROR: Argument out of range
> [0]PETSC ERROR: New nonzero at (0,3) caused a malloc
> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to turn
> off this check
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.7.5, unknown
> [0]PETSC ERROR: ./ex3 on a arch-linux2-c-opt Tue Feb 14 04:23:56 2017
> [0]PETSC ERROR: Configure options --with-debugging=0 --COPTFLAGS="-O3
> -march=native" --CXXOPTFLAGS="-O3 -march=native" --FOPTFLAGS="-O3
> -march=native"
> [0]PETSC ERROR: #1 MatSetValues_MPIAIJ() line 582 in
> petsc/src/mat/impls/aij/mpi/mpiaij.c
> [0]PETSC ERROR: #2 MatSetValues() line 1190 in
> petsc/src/mat/interface/matrix.c
> [0]PETSC ERROR: #3 main() line 36 in ex3.c
> [0]PETSC ERROR: No PETSc Option Table entries
> [0]PETSC ERROR: End of Error Message ---send entire
> error message to petsc-ma...@mcs.anl.gov--


Here's a working test code:

#include <petscsys.h>
> #include <petscmat.h>
> int main(int argc, char** argv)
> {
>   PetscErrorCode err;
>   err = PetscInitialize(&argc, &argv, NULL, "help");
>   CHKERRQ(err);
>   // create a sparse AIJ matrix distributed across MPI
>   PetscInt global_width = 6;
>   PetscInt global_height = 3;
>   Mat A;
>   err = MatCreate(MPI_COMM_WORLD, &A);
>   CHKERRQ(err);
>   err = MatSetType(A, MATMPIAIJ);
>   CHKERRQ(err);
>   // setup pre-allocation for matrix space
>   {
> err =
>   MatSetSizes(A, global_height, PETSC_DECIDE, global_height,
> global_width);
> CHKERRQ(err);
> PetscInt d_nnz[] = {0, 0, 0};
> PetscInt o_nnz[] = {3, 3, 3};
> err = MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);
> CHKERRQ(err);
>   }
>   err = MatSetUp(A);
>   CHKERRQ(err);
>   // set values inside the matrix
>   for (PetscInt row = 0; row < global_height; ++row)
>   {
> for (PetscInt col = global_height; col < global_width; ++col)
> {
>   err = MatSetValue(A, row, col, 1, INSERT_VALUES);
>   CHKERRQ(err);
> }
>   }
>   err = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
>   CHKERRQ(err);
>   err = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
>   CHKERRQ(err);
>   err = MatView(A, PETSC_VIEWER_STDOUT_WORLD);
>   CHKERRQ(err);
>   // free memory
>   err = MatDestroy(&A);
>   CHKERRQ(err);
>   // cleanup any internal PETSc data at end of program
>   err = PetscFinalize();
>   CHKERRQ(err);
> }


Am I misunderstanding what the d_nnz and o_nnz parameters are supposed to
mean?
-- 
Andrew Ho


Re: [petsc-users] DMPlex higher order elements

2016-08-24 Thread Andrew Ho
>
> Correspondence should not be hard since you can mark the mesh however you
> like.


True, I guess I need to do this anyways in order to apply boundary
conditions.

If you are happy with quadratic surface approximations, then you are in a
> great spot here. However, I think its easy to push to a place where they
> are insufficient (needing exact normals for conservation or balance,
> needing accurate volume conservation, ...) and you must interact with the
> CAD model. However, we have tried this before. Jed had a hard time with the
> CAD file reader, and when Jed has a hard time I usually give up immediately.


I suppose that's true. The more I look at what Cubit and other meshing
software generate at the surface, the less I think they can exactly represent
even simple conic sections such as circular arcs and ellipses, which is a
shame. Still, I'd take 3rd order accuracy with quadratic surfaces over 2nd
order accuracy, and I'm assuming that whatever is needed to support Tri6
elements in DMPlex generalizes easily to Tri10/Tri15/etc. whenever there
exists a meshing tool capable of generating those meshes.

-- 
Andrew Ho


Re: [petsc-users] DMPlex higher order elements

2016-08-24 Thread Andrew Ho
>
> Philosophically it's certainly part of the job of the mesh generator, but
> realistically I don't think there are any really viable high-order mesh
> generators or even file formats right now (correct me if I'm wrong). And as
> Matthew put, the mesh generator has no business telling us what our
> function basis should be, and by extension has no basis telling us where we
> should place our face nodes.
>

The mesh surface nodes have no correspondence to the basis set I expand my
solution in; i.e. I could use sub/super-parametric mappings. I think this is
why Exodus II doesn't support any elements higher than 6-node triangles; I
believe these are sufficient for representing conic sections (not entirely
sure about this).

This comes up a lot in high order mesh discussions, but I always wonder
> when do you *not* have access to the original geometric representation? If
> you've generated the mesh yourself you must have the geometry.


Yes, I have the original CAD files, but there are a few issues:

1. I don't have any code for reading in the CAD format (not entirely sure
how difficult this is)
2. I don't know of any easy way to correspond a given element in the mesh
with a surface in the CAD file, as in which part of the original surface my
element lies on. I could use some "geometric tolerance" based approach, but I
don't know how robust that is, especially near where two surfaces join
together.
3. Why do I need to add this complexity to all of my simulation codes? The
mesh generator already knows how to interpret the geometry and output meshes
which conform to the curved surface. I simply want to use this existing
feature and have my simulation code deal entirely with performing the
simulation, not with handling NURBS surfaces, conic sections, surface
mapping, etc.

-- 
Andrew Ho


Re: [petsc-users] DMPlex higher order elements

2016-08-24 Thread Andrew Ho
That sounds like taking part of the job of the mesh generator and putting
it into my code. I was simply stating that without access to the original
geometric representation, I don't know if my simulation domain really has
flat walls, or even how curved the walls are if they are curved, so there's
no general way for me to accurately determine the shape of the element.

On Wed, Aug 24, 2016 at 10:40 AM, Mark Lohry <mlo...@princeton.edu> wrote:

> On 08/24/2016 11:27 AM, Andrew Ho wrote:
>
> Good geometric accuracy is very important for achieving appropriate
> convergence rates in complex geometry, not just using higher order
> polynomials on flat elements.
>
> If you look at Hesthaven's book Nodal Discontinuous Galerkin Methods,
> Table 9.1 shows that without support for curved elements, higher order DG
> elements on flat elements converge at suboptimal rates due to inaccuracies
> produced by the boundary conditions.
>
> There's no way to re-construct this curved information correctly after the
> fact; it must be generated by the meshing software.
>
>
> I don't *entirely* agree with the suggestion the mesh generator has to
> provide that information. Some people reconstruct splines through the nodes
> to create higher order meshes after the fact (which also requires some
> detection or specification of sharp corners to break the spline). I
> personally take the given mesh from the generator in addition to an
> IGES/NURBS type CAD file for complex geometries, and then do a kind of
> projection for sub-cell curved boundary faces.
>
>
> On Wed, Aug 24, 2016 at 10:12 AM, Jed Brown <j...@jedbrown.org> wrote:
>
>> Matthew Knepley <knep...@gmail.com> writes:
>> >> Yes, I do not support that since I think its a crazy way to talk about
>> >> things. All the topological information is in the Tri3 mesh, and
>> >> Cubit has no business telling me about the function space.
>> >>
>> >>
>> >> Do you support / plan to support curved elements?
>> >>
>> >
>> > I had "support" in there, but there were bugs. Toby and Mark discovered
>> > these, and Toby has fixed them. I think
>> > all of the fixes are in master now.
>>
>> The context is clearly that the mesh generator needs to express the
>> curved elements.  DMPlex doesn't have a geometric model available, so it
>> doesn't know how to make the Tri3 elements curve to conform more
>> accurately to the boundary.  The mesh generator has no business telling
>> you what function space to use for your solution, but it'd be a shame to
>> prevent it from expressing element geometry.
>>
>
>
>
> --
> Andrew Ho
>
>
>
>


-- 
Andrew Ho


Re: [petsc-users] DMPlex higher order elements

2016-08-24 Thread Andrew Ho
True, Paraview does draw them in as "flat" edges, but this is less of a
concern for me; the important part is that if I want to do a simulation on a
curved geometric model, I need to be able to achieve higher order numerical
accuracy, because it reduces my simulation times by orders of magnitude.

On Wed, Aug 24, 2016 at 10:22 AM, Matthew Knepley <knep...@gmail.com> wrote:

> On Wed, Aug 24, 2016 at 12:57 AM, Andrew Ho <andre...@uw.edu> wrote:
>
>> I created an unstructured tri6 mesh in Cubit and am trying to read it
>> into PETSc using DMPlexCreateFromFile. However, when I do PETSc gives me an
>> error that it doesn't support this type of element.
>>
>> I know my PETSc install has Exodus II support because giving it a
>> different Exodus mesh file with Tri3 elements works just fine. I've
>> attached the Exodus mesh generated by Cubit. I can open up the mesh file in
>> Paraview just fine and it does appear to be valid (it has 2 elements
>> approximating a quarter circle wedge).
>>
>
> Okay, I will look at getting this to read in alright. Notice that Paraview
> does exactly the wrong thing here in that it has
> straight lines connecting the midpoints and corners of the triangles. They
> should be smooth quadratic curves, and this
> misunderstanding comes from the horrible format choice which pretends that
> there are "extra" vertices.
>
>Matt
>
>
>> Here's the error message:
>>
>> [0]PETSC ERROR: - Error Message
>>> --
>>>
>>> [0]PETSC ERROR: Argument out of range
>>> [0]PETSC ERROR: Cone size 6 not supported for dimension 2
>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
>>> for trouble shooting.
>>> [0]PETSC ERROR: Petsc Development GIT revision: v3.7.3-1257-ge04de0c
>>>  GIT Date: 2016-08-24 00:14:37 -0500
>>>
>>> [0]PETSC ERROR: bin/mesh_testing on a arch-linux2-c-opt named ahome by
>>> andrew Tue Aug 23 22:48:57 2016
>>>
>>> [0]PETSC ERROR: Configure options --with-debugging=0 --COPTFLAGS="-O3
>>> -march=native" --CXXOPTFLAGS="-O3 -march=native" --FOPTFLAGS="-O3
>>> -march=native" --download-exodusii=yes --download-hdf5=yes
>>> --download-netcdf=yes
>>> [0]PETSC ERROR: #8 DMPlexGetRawFaces_Internal() line 93 in
>>> petsc/src/dm/impls/plex/plexinterpolate.c
>>>
>>> [0]PETSC ERROR: #9 DMPlexGetFaces_Internal() line 20 in
>>> petsc/src/dm/impls/plex/plexinterpolate.c
>>>
>>> [0]PETSC ERROR: #10 DMPlexInterpolateFaces_Internal() line 172 in
>>> petsc/src/dm/impls/plex/plexinterpolate.c
>>>
>>> [0]PETSC ERROR: #11 DMPlexInterpolate() line 532 in
>>> petsc/src/dm/impls/plex/plexinterpolate.c
>>>
>>> [0]PETSC ERROR: #12 DMPlexCreateExodus() line 168 in
>>> petsc/src/dm/impls/plex/plexexodusii.c
>>>
>>> [0]PETSC ERROR: #13 DMPlexCreateExodusFromFile() line 46 in
>>> petsc/src/dm/impls/plex/plexexodusii.c
>>>
>>> [0]PETSC ERROR: #14 DMPlexCreateFromFile() line 1967 in
>>> petsc/src/dm/impls/plex/plexcreate.c
>>>
>>> [0]PETSC ERROR: #15 main() line 11 in mesh_testing.cpp
>>>
>>> [0]PETSC ERROR: No PETSc Option Table entries
>>> [0]PETSC ERROR: End of Error Message ---send entire
>>> error message to petsc-ma...@mcs.anl.gov--
>>>
>>
>>
>> --
>> Andrew Ho
>>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>



-- 
Andrew Ho


Re: [petsc-users] DMPlex higher order elements

2016-08-24 Thread Andrew Ho
Good geometric accuracy is very important for achieving appropriate
convergence rates in complex geometry, not just using higher order
polynomials on flat elements.

If you look at Hesthaven's book Nodal Discontinuous Galerkin Methods, Table
9.1 shows that without support for curved elements, higher order DG elements
on flat elements converge at suboptimal rates due to inaccuracies
produced by the boundary conditions.

There's no way to re-construct this curved information correctly after the
fact; it must be generated by the meshing software.

On Wed, Aug 24, 2016 at 10:12 AM, Jed Brown <j...@jedbrown.org> wrote:

> Matthew Knepley <knep...@gmail.com> writes:
> >> Yes, I do not support that since I think its a crazy way to talk about
> >> things. All the topological information is in the Tri3 mesh, and
> >> Cubit has no business telling me about the function space.
> >>
> >>
> >> Do you support / plan to support curved elements?
> >>
> >
> > I had "support" in there, but there were bugs. Toby and Mark discovered
> > these, and Toby has fixed them. I think
> > all of the fixes are in master now.
>
> The context is clearly that the mesh generator needs to express the
> curved elements.  DMPlex doesn't have a geometric model available, so it
> doesn't know how to make the Tri3 elements curve to conform more
> accurately to the boundary.  The mesh generator has no business telling
> you what function space to use for your solution, but it'd be a shame to
> prevent it from expressing element geometry.
>



-- 
Andrew Ho


[petsc-users] DMPlex higher order elements

2016-08-23 Thread Andrew Ho
I created an unstructured tri6 mesh in Cubit and am trying to read it into
PETSc using DMPlexCreateFromFile. However, when I do PETSc gives me an
error that it doesn't support this type of element.

I know my PETSc install has Exodus II support because giving it a different
Exodus mesh file with Tri3 elements works just fine. I've attached the
Exodus mesh generated by Cubit. I can open up the mesh file in Paraview
just fine and it does appear to be valid (it has 2 elements approximating a
quarter circle wedge).

Here's the error message:

[0]PETSC ERROR: - Error Message
> --
>
> [0]PETSC ERROR: Argument out of range
> [0]PETSC ERROR: Cone size 6 not supported for dimension 2
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: v3.7.3-1257-ge04de0c  GIT
> Date: 2016-08-24 00:14:37 -0500
>
> [0]PETSC ERROR: bin/mesh_testing on a arch-linux2-c-opt named ahome by
> andrew Tue Aug 23 22:48:57 2016
>
> [0]PETSC ERROR: Configure options --with-debugging=0 --COPTFLAGS="-O3
> -march=native" --CXXOPTFLAGS="-O3 -march=native" --FOPTFLAGS="-O3
> -march=native" --download-exodusii=yes --download-hdf5=yes
> --download-netcdf=yes
> [0]PETSC ERROR: #8 DMPlexGetRawFaces_Internal() line 93 in
> petsc/src/dm/impls/plex/plexinterpolate.c
>
> [0]PETSC ERROR: #9 DMPlexGetFaces_Internal() line 20 in
> petsc/src/dm/impls/plex/plexinterpolate.c
>
> [0]PETSC ERROR: #10 DMPlexInterpolateFaces_Internal() line 172 in
> petsc/src/dm/impls/plex/plexinterpolate.c
>
> [0]PETSC ERROR: #11 DMPlexInterpolate() line 532 in
> petsc/src/dm/impls/plex/plexinterpolate.c
>
> [0]PETSC ERROR: #12 DMPlexCreateExodus() line 168 in
> petsc/src/dm/impls/plex/plexexodusii.c
>
> [0]PETSC ERROR: #13 DMPlexCreateExodusFromFile() line 46 in
> petsc/src/dm/impls/plex/plexexodusii.c
>
> [0]PETSC ERROR: #14 DMPlexCreateFromFile() line 1967 in
> petsc/src/dm/impls/plex/plexcreate.c
>
> [0]PETSC ERROR: #15 main() line 11 in mesh_testing.cpp
>
> [0]PETSC ERROR: No PETSc Option Table entries
> [0]PETSC ERROR: End of Error Message ---send entire
> error message to petsc-ma...@mcs.anl.gov--
>


-- 
Andrew Ho


tri6_wedge.exo
Description: Binary data


Re: [petsc-users] Multi-physics meshes with PETSc DM?

2016-08-03 Thread Andrew Ho
>
> It will not be periodic since periodic meshes in Plex are topologically
> periodic, meaning that
> there is no separate aliasing array. I could possibly read in the GMsh
> thing and do surgery
> on the mesh.
>

I'm not sure what you mean by this, but being able to handle periodic domains
is important for me. However periodicity is handled, there needs to be a way
to access the correct coordinates for the edge vertices of elements on both
sides of the periodic edge, so that cell volumes/Jacobians can be computed
correctly. I don't know why an aliasing array is necessary, since I thought
that for a topologically periodic mesh *DMPlexConstructGhostCells* should
handle this correctly?
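
For reference, the call I had in mind is roughly the following sketch (I have
not verified that this interacts correctly with the periodic coordinates):

DM gdm = NULL;
PetscInt num_ghost = 0;
/* dm comes from DMPlexCreateFromFile on the periodic mesh */
CHKERRQ(DMPlexConstructGhostCells(dm, NULL, &num_ghost, &gdm));
CHKERRQ(DMDestroy(&dm));
dm = gdm;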


Re: [petsc-users] Multi-physics meshes with PETSc DM?

2016-07-30 Thread Andrew Ho
Is there a reason the physical groups aren't sufficient for handling this?
As far as I can tell, this is the only way in GMsh to have any kind of
grouping of elements.

The Gmsh file format can be found here (happens to be the ASCII version,
but binary version is below that):
http://gmsh.info/doc/texinfo/gmsh.html#MSH-ASCII-file-format

All tags are attributed to elements; there may be multiple element types
(points, lines, triangles, etc.), but at the end of the day each element
just has a list of integer tags indicating which physical group(s) it
belongs to.

From the documentation for ASCII formatted mesh files:

number-of-tags

gives the number of integer tags that follow for the n-th element. By
> default, the first tag is the number of the physical entity to which the
> element belongs; the second is the number of the elementary geometrical
> entity to which the element belongs; the third is the number of mesh
> partitions to which the element belongs, followed by the partition ids
> (negative partition ids indicate ghost cells). A zero tag is equivalent to
> no tag. Gmsh and most codes using the MSH 2 format require at least the
> first two tags (physical and elementary tags).
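
So, for example, a single entry in the $Elements section for a triangle
(hand-written here for illustration, not taken from my actual mesh) would
look like

7 2 2 1 5 3 4 8

i.e. element 7, of type 2 (3-node triangle), with 2 tags: physical group 1
and elementary geometrical entity 5, connecting nodes 3, 4, and 8.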


My understanding is that to support markers you only need to add a 4th
stratum level which has one node per physical group. It would be helpful
(though not necessary) if this subdomain marker stratum level had the
physical tag name labels properly associated with the corresponding nodes in
the graph, but this is not required since it's just as easy to refer to them
by node number, as long as the node numbering matches or is a simple
transform of the numbering scheme of the original physical group IDs.
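
If marking cells with a label is how this ends up being exposed (as Matt
suggests below), then on my side I would expect to query it with something
roughly like this sketch (the label name "Cell Sets" is just my assumption of
what the reader might create):

PetscInt cStart, cEnd, c, group;
CHKERRQ(DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd)); /* all cells */
for (c = cStart; c < cEnd; ++c)
{
  CHKERRQ(DMGetLabelValue(dm, "Cell Sets", c, &group));
  /* dispatch the "left" vs "right" physics based on group */
}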


On Sat, Jul 30, 2016 at 11:11 AM, Matthew Knepley <knep...@gmail.com> wrote:

> On Sat, Jul 30, 2016 at 1:06 PM, Andrew Ho <andre...@uw.edu> wrote:
>
>> 1) I don't use Physical Groups from GMsh since it's unclear how this would
>>> be reflected in the discretization
>>
>>
>> If I'm not using physical groups in GMsh, how do I easily denote what
>> part of the domain should be handled with which physics? I would like to be
>> able to use the same code with similar but not identical meshes (for
>> example to do a convergence study), so manually iterating through a list of
>> vertices at the element height stratum in a chart doesn't provide any hints
>> on which subdomain an element is supposed to belong in.
>>
>
> I think the right way to handle all this is to just mark pieces of the
> mesh. Mesh formats should just have a generic marking
> ability which does not differentiate between vertices, edges, faces, and
> cells. Some formats come close (ExodusII) and some
> are just crazy (GMsh). If you can point me toward the documentation for
> the GMsh format, I will put in code to translate whatever
> part marks cells to a cell label, as we do for ExodusII.
>
>   Thanks,
>
>  Matt
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>



-- 
Andrew Ho


Re: [petsc-users] Multi-physics meshes with PETSc DM?

2016-07-30 Thread Andrew Ho
>
> 1) I don't use Physical Groups from GMsh since it's unclear how this would
> be reflected in the discretization


If I'm not using physical groups in GMsh, how do I easily denote what part
of the domain should be handled with which physics? I would like to be able
to use the same code with similar but not identical meshes (for example to
do a convergence study), so manually iterating through a list of vertices
at the element height stratum in a chart doesn't provide any hints on which
subdomain an element is supposed to belong in.


[petsc-users] Multi-physics meshes with PETSc DM?

2016-07-30 Thread Andrew Ho
I am trying to solve a multi-physics problem consisting of some physics on
a rectangular domain which is split in half such that one set of physics is
solved on the left, and the other set of physics is solved on the right.

Each set has their own set of variable components, and I would like to not
allocate both variable sets across the entire domain because the physics in
one subdomain happens to have lots of components per mesh element, which
the other subdomain doesn't need except to compute boundary interactions.

For testing right now, I am using the attached gmsh file to generate a mesh
with 2 physical groups to represent each subdomain (called "left" and
"right"). It has periodic boundaries on all sides.

However, when I try to load the generated mesh into PETSc using the
*DMPlexCreateFromFile* function, PETSc complains that the mesh is not a
valid Gmsh file. I've attached the sample mesh, as well as the error
message PETSc spits out.

Here's the relevant code (should be a complete working example) which
re-creates what I'm doing:

#include <petscdmplex.h>


> int main(int argc, char** argv)
> {
>   PetscInitialize(&argc, &argv, NULL, "multi physics testing");
>   DM dm;
>   CHKERRQ(DMPlexCreateFromFile(PETSC_COMM_WORLD, "periodic_square.msh",
> PETSC_TRUE, &dm));
>   PetscFinalize();
> }


What is the correct procedure for creating a multi-physics mesh using PETSc
DM objects for mesh management?


periodic_square.geo
Description: Binary data


periodic_square.msh
Description: Mesh model
[0]PETSC ERROR: - Error Message -- 
[0]PETSC ERROR: Invalid argument
[0]PETSC ERROR: File is not a valid Gmsh file
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.7.3, unknown 
[0]PETSC ERROR: bin/mesh_testing on a arch-linux2-c-opt named ahome by andrew Sat Jul 30 09:53:37 2016 
[0]PETSC ERROR: Configure options --with-debugging=0 --prefix=/usr/local --COPTFLAGS="-O3 -march=native" --CXXOPTFLAGS="-O3 -march=native" --FOPTFLAGS="-O3 -march=native" 
[0]PETSC ERROR: #1 DMPlexCreateGmsh() line 140 in /home/andrew/tools/petsc/src/dm/impls/plex/plexgmsh.c
[0]PETSC ERROR: #2 DMPlexCreateGmshFromFile() line 59 in /home/andrew/tools/petsc/src/dm/impls/plex/plexgmsh.c 
[0]PETSC ERROR: #3 DMPlexCreateFromFile() line 1746 in /home/andrew/tools/petsc/src/dm/impls/plex/plexcreate.c 
[0]PETSC ERROR: - Error Message -- 
[0]PETSC ERROR: Invalid argument
[0]PETSC ERROR: File is not a valid Gmsh file
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.7.3, unknown 
[0]PETSC ERROR: bin/mesh_testing on a arch-linux2-c-opt named ahome by andrew Sat Jul 30 09:53:37 2016 
[0]PETSC ERROR: Configure options --with-debugging=0 --prefix=/usr/local --COPTFLAGS="-O3 -march=native" --CXXOPTFLAGS="-O3 -march=native" --FOPTFLAGS="-O3 -march=native" 
[0]PETSC ERROR: #4 DMPlexCreateGmsh() line 140 in /home/andrew/tools/petsc/src/dm/impls/plex/plexgmsh.c
[0]PETSC ERROR: #5 DMPlexCreateGmshFromFile() line 59 in /home/andrew/tools/petsc/src/dm/impls/plex/plexgmsh.c 
[0]PETSC ERROR: #6 DMPlexCreateFromFile() line 1746 in /home/andrew/tools/petsc/src/dm/impls/plex/plexcreate.c 
[0]PETSC ERROR: #7 main() line 10 in /home/andrew/code/research/dg/dg/examples/fvm/mesh_testing.cpp
[0]PETSC ERROR: No PETSc Option Table entries
[0]PETSC ERROR: End of Error Message ---send entire error message to petsc-ma...@mcs.anl.gov--
--
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 62.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.

Re: [petsc-users] How to create a quadrature object?

2016-07-29 Thread Andrew Ho
Thanks, this fix works.

On Thu, Jul 28, 2016 at 8:58 PM, Matthew Knepley <knep...@gmail.com> wrote:

> On Thu, Jul 28, 2016 at 7:08 PM, Andrew Ho <andre...@uw.edu> wrote:
>
>>
>> On Thu, Jul 28, 2016 at 4:35 PM, Matthew Knepley <knep...@gmail.com>
>> wrote:
>>
>>>
>>> This is strange. First, you should only have uninitialized classids if
>>> you build without dynamic libraries. Did you? If you
>>> include the entire error output, it would show us.
>>>
>>>
>> This is pretty much the entire error output; I trimmed off the ending
>> "MPI Abort called" message (it's the standard message printed by OpenMPI
>> when MPIAbort is called). The code snippet I had is a complete working
>> example which is able to give me this error.
>>
>>
>>> If so, then PetscInitialize should set the value of this classid, unless
>>> you built this with PETSC_HAVE_DYNAMIC_LIBRARIES,
>>> but linked against static libraries, as the error message says. Do you
>>> have multiple versions of PETSc on your machine?
>>>
>>
>> I have a single install of PETSc at any one time. I've tried builds with
>> --with-shared-libraries on and off with the same results (I made sure to
>> purge any previous install first). Here's a diff if you want to test the
>> same configs I tried running.
>>
>
> Crap. We changed the way initialization works, but this was left behind.
> You can make this change
>
> diff --git a/src/dm/dt/interface/dt.c b/src/dm/dt/interface/dt.c
> index 5d12959..d6f3454 100644
> --- a/src/dm/dt/interface/dt.c
> +++ b/src/dm/dt/interface/dt.c
> @@ -50,7 +50,7 @@ PetscErrorCode PetscQuadratureCreate(MPI_Comm comm,
> PetscQuadrature *q)
>
>PetscFunctionBegin;
>PetscValidPointer(q, 2);
> -  ierr = DMInitializePackage();CHKERRQ(ierr);
> +  ierr = PetscSysInitializePackage();CHKERRQ(ierr);
>ierr =
> PetscHeaderCreate(*q,PETSC_OBJECT_CLASSID,"PetscQuadrature","Quadrature","DT",comm,PetscQuadratureDestroy,PetscQuadratureView);CHKERRQ(ierr);
>(*q)->dim   = -1;
>(*q)->order = -1;
>
> and then rebuild
>
>   cd $PETSC_DIR
>   make -f ./gmakefile
>
> and try running your example again.
>
> I will get this change in.
>
>   Thanks,
>
>  Matt
>
>
>> Just run *make ex4* in the src/dm/dt/examples/tests folder
>>
>> Configure command:
>>
>> ./configure --with-debugging=0 COPTFLAGS="-O3 -march=native"
>> CXXOPTFLAGS="-O3 -march=native" FOPTFLAGS="-O3 -march=native"
>>
>> --
>> Andrew Ho
>>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>



-- 
Andrew Ho


Re: [petsc-users] How to create a quadrature object?

2016-07-28 Thread Andrew Ho
On Thu, Jul 28, 2016 at 4:35 PM, Matthew Knepley <knep...@gmail.com> wrote:

>
> This is strange. First, you should only have uninitialized classids if you
> build without dynamic libraries. Did you? If you
> include the entire error output, it would show us.
>
>
This is pretty much the entire error output; I trimmed off the ending "MPI
Abort called" message (it's the standard message printed by OpenMPI when
MPIAbort is called). The code snippet I had is a complete working example
which is able to give me this error.


> If so, then PetscInitialize should set the value of this classid, unless
> you built this with PETSC_HAVE_DYNAMIC_LIBRARIES,
> but linked against static libraries, as the error message says. Do you
> have multiple versions of PETSc on your machine?
>

I have a single install of PETSc at any one time. I've tried builds with
--with-shared-libraries on and off with the same results (I made sure to
purge any previous install first). Here's a diff if you want to test the
same configs I tried running.

Just run *make ex4* in the src/dm/dt/examples/tests folder

Configure command:

./configure --with-debugging=0 COPTFLAGS="-O3 -march=native"
CXXOPTFLAGS="-O3 -march=native" FOPTFLAGS="-O3 -march=native"

-- 
Andrew Ho
2 files changed, 18 insertions(+), 1 deletion(-)
src/dm/dt/examples/tests/ex4.c| 14 ++
src/dm/dt/examples/tests/makefile |  5 -

new file   src/dm/dt/examples/tests/ex4.c
@@ -0,0 +1,14 @@
+static char help[] = "Tests quadrature.\n\n";
+
+#include <petscdt.h>
+
+int main(int argc, char** argv)
+{
+  CHKERRQ(PetscInitialize(&argc, &argv, PETSC_NULL, help));
+  PetscQuadrature quad;
+  CHKERRQ(PetscQuadratureCreate(PETSC_COMM_SELF, &quad));
+  CHKERRQ(PetscQuadratureDestroy(&quad));
+  CHKERRQ(PetscFinalize());
+  return 0;
+}
+
modified   src/dm/dt/examples/tests/makefile
@@ -4,7 +4,7 @@ FFLAGS	=
 CPPFLAGS=
 FPPFLAGS=
 LOCDIR  = src/dm/dt/examples/tests/
-EXAMPLESC   = ex1.c ex2.c ex3.c
+EXAMPLESC   = ex1.c ex2.c ex3.c ex4.c
 EXAMPLESF   =
 MANSEC  = DM
 
@@ -20,6 +20,9 @@ ex2: ex2.o   chkopts
 ex3: ex3.o   chkopts
 	-${CLINKER} -o ex3 ex3.o  ${PETSC_DM_LIB}
 	${RM} -f ex3.o
+ex4: ex4.o   chkopts
+	-${CLINKER} -o ex4 ex4.o  ${PETSC_DM_LIB}
+	${RM} -f ex4.o
 
 #---
 runex1:



[petsc-users] How to create a quadrature object?

2016-07-28 Thread Andrew Ho
I am trying to create a very simple quadrature object, but for some reason
PETSc keeps giving me an "invalid argument" error.

Relevant code:

#include <petscdt.h>
> int main(int argc, char** argv)
> {
>   CHKERRQ(PetscInitialize(&argc, &argv, nullptr, "quadrature testing"));
>   PetscQuadrature quad;
>   CHKERRQ(PetscQuadratureCreate(PETSC_COMM_SELF, &quad));
>
  CHKERRQ(PetscQuadratureDestroy(&quad));

  CHKERRQ(PetscFinalize());
> }


Error message:

>
> [0]PETSC ERROR: - Error Message
> 
> --
>
> [0]PETSC ERROR: Invalid argument
> [0]PETSC ERROR: Invalid object classid 0
> This could happen if you compile with PETSC_HAVE_DYNAMIC_LIBRARIES, but
> link with static libraries.
>
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for troubleshooting.
> [0]PETSC ERROR: Petsc Release Version 3.7.2, unknown
> [0]PETSC ERROR: Configure options --with-debugging=0 COPTFLAGS="-O3
> -march=native" CXXOPTFLAGS="-O3 -march=native" FOPTFLAGS="-O3
> -march=native" --prefix=/usr/local
> [0]PETSC ERROR: #1 PetscClassRegLogGetClass() line 290 in
> /home/andrew/tools/petsc/petsc/src/sys/logging/utils/classlog.c
> [0]PETSC ERROR: #2 PetscLogObjCreateDefault() line 317 in
> /home/andrew/tools/petsc/petsc/src/sys/logging/utils/classlog.c
> [0]PETSC ERROR: #3 PetscQuadratureCreate() line 54 in
> /home/andrew/tools/petsc/petsc/src/dm/dt/interface/dt.c
>
> [0]PETSC ERROR: - Error Message
> --
>
> [0]PETSC ERROR: Invalid argument
> [0]PETSC ERROR: Invalid object classid 0
> This could happen if you compile with PETSC_HAVE_DYNAMIC_LIBRARIES, but
> link with static libraries.
>
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for troubleshooting.
> [0]PETSC ERROR: Petsc Release Version 3.7.2, unknown
> [0]PETSC ERROR: Configure options --with-debugging=0 COPTFLAGS="-O3
> -march=native" CXXOPTFLAGS="
> -O3 -march=native" FOPTFLAGS="-O3 -march=native" --prefix=/usr/local
> [0]PETSC ERROR: #4 PetscClassRegLogGetClass() line 290 in
> /home/andrew/tools/petsc/petsc/src/sys/logging/utils/classlog.c
>
> [0]PETSC ERROR: #5 PetscLogObjCreateDefault() line 317 in
> /home/andrew/tools/petsc/petsc/src/sys/logging/utils/classlog.c
>
> [0]PETSC ERROR: #6 PetscQuadratureCreate() line 54 in
> /home/andrew/tools/petsc/petsc/src/dm/dt/interface/dt.c
>
> [0]PETSC ERROR: #7 main() line 5 in /home/user/tests/pquad.cpp
> [0]PETSC ERROR: No PETSc Option Table entries
> [0]PETSC ERROR: End of Error Message ---send entire
> error message to petsc-ma...@mcs.anl.gov--


I've had no problems using other parts of PETSc with the exact same build
options/install (SNES, linear solvers), so I suspect this is likely just
user error. I have tried building PETSc both as a shared and static
library, and both methods fail in the same way.

As a related question, what is the "Order" of a quadrature object supposed to
be? The documentation for "PetscQuadratureGetOrder" and
"PetscQuadratureSetOrder" says this is the highest degree polynomial that
is exactly integrated.

However, when I looked at the source code for
"PetscDTGaussTensorQuadrature", this appears to set the order to the number
of quadrature points, which for Gaussian quadrature means it can integrate
a polynomial of degree 2*npoints-1 exactly.

If I wanted to implement my own quadrature scheme (Gauss-Lobatto) which is
exact for polynomials up to 2*npoints-3, what should I set the quadrature
order to?
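
In other words, my naive reading of the documentation would lead me to
something like this sketch (npoints being the number of Gauss-Lobatto
abscissae):

PetscQuadrature quad;
CHKERRQ(PetscQuadratureCreate(PETSC_COMM_SELF, &quad));
/* if "order" really is the highest exactly-integrated degree: */
CHKERRQ(PetscQuadratureSetOrder(quad, 2*npoints - 3));
/* ... fill in the Gauss-Lobatto points and weights ... */
CHKERRQ(PetscQuadratureDestroy(&quad));

but the PetscDTGaussTensorQuadrature convention above suggests I should pass
npoints instead.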

-- 
Andrew Ho


[petsc-users] Implementing discontinuous Galerkin FEM?

2016-07-28 Thread Andrew Ho
I am trying to implement a discontinuous Galerkin discretization using the
PETSc DM features to handle most of the topology/geometry specific
functions. However, I'm not really sure which direction to approach this
from since DG is kind of a middle ground between finite volume and
traditional continuous Galerkin finite element methods.

It appears to me that if I want to implement a nodal DG method, then it
would be more practical to extend the PetscFE interface, but for a modal DG
method perhaps the PetscFV interface is better?

There are still a few questions that I don't know the answers to, though.

Questions about implementing nodal DG:

1. Does PetscFE support sub/super parametric element types? If so, how do I
express the internal node structure for a nodal DG method (say, for example
located at the abscissae of a Gauss-Lobatto quadrature scheme)?
2. How would I go about making the dataset stored discontinuous between
neighboring elements (specifically at shared nodes for a nodal DG method)?
3. Similar to 2, how would I handle boundary conditions? Specifically, I
need a layer of data space of just the boundary nodes (not a complete
"ghost" element), and these are the actual constrained points.

Questions about implementing modal DG:

A. What does specifying the quadrature object for a PetscFV object actually
do? Is it purely a surface flux integration quadrature? How does the
quadrature object handle simplex-type elements in 2D/3D?
B. How would I go about modifying the limiters to take into account these
multiple modes?

-- 
Andrew Ho


Re: [petsc-users] SNES_QN_RESTART_POWELL fails to converge?

2016-07-15 Thread Andrew Ho
I've attached two modified versions of ex1:

ex1_powell.c uses the Powell restart
ex1_none.c uses no restart

For the default initial guess (x0 = [0.5,0.5]), both converge just fine.
However, for the initial guess x0 = [3.,3.], the Powell solution fails to
converge, while None and Periodic both still converge. This is with the
"easy" equation set (run without -hard).

Interestingly enough, the Powell restart still "finishes" in a reasonable
number of iterations (7 iterations), but the residual is very large (on the
order of 1e254).
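
For reference, the restart type being compared here is the one normally
selected with the command-line options below (assuming I have the option
names right); in the attached files it is set in the source, along with the
x0 = [3,3] initial guess:

-snes_type qn -snes_qn_type lbfgs -snes_qn_restart_type powell    (fails, residual ~1e254)
-snes_type qn -snes_qn_type lbfgs -snes_qn_restart_type none      (converges)
-snes_type qn -snes_qn_type lbfgs -snes_qn_restart_type periodic  (converges)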

On Thu, Jul 14, 2016 at 5:50 PM, Barry Smith <bsm...@mcs.anl.gov> wrote:

>
> > On Jul 14, 2016, at 6:18 PM, Andrew Ho <andre...@uw.edu> wrote:
> >
> > I am trying to solve a simple ionization/recombination ODE using PETSc's
> quasi-newton SNES.
> >
> > This is a basic non-linear coupled ODE system:
> >
> > delta = -a u^2 + b u v
> > d_t u = delta
> > d_t v = -delta
> >
> > a and b are constants.
> >
> > I wrote a backwards Euler root finding function (yes, I know the TS
> module has BE implemented, but this is more of a learning exercise).
> >
> > Here is the function evaluation:
> >
> > struct ion_rec_ctx
> > {
> >   PetscScalar rate_a, rate_b;
> >   PetscScalar dt;
> > };
> > PetscErrorCode bdf1(SNES snes, Vec x, Vec f, void *ctx)
> > {
> >   const PetscScalar *xx;
> >   PetscScalar *ff;
> >   ion_rec_ctx& params = *reinterpret_cast<ion_rec_ctx*>(ctx);
> >   CHKERRQ(VecGetArrayRead(x, &xx));
> >   CHKERRQ(VecGetArray(f, &ff));
> >   auto delta = (-params.rate_a*xx[0]*xx[0]+params.rate_b*xx[1]*xx[0]);
> >   ff[0] = xx[0]-params.dt*delta;
> >   ff[1] = xx[1]-params.dt*-delta;
> >   CHKERRQ(VecRestoreArrayRead(x, &xx));
> >   CHKERRQ(VecRestoreArray(f, &ff));
> >   return 0;
> > }
> >
> > To setup the solver and solve one time step:
> >
> > // q0, q1, and res are Vec's previously initialized
> > // initial conditions: q0 = [1e19,1e19]
> > SNES solver;
> > CHKERRQ(SNESCreate(comm, &solver));
> > CHKERRQ(SNESSetType(solver, SNESQN));
> > CHKERRQ(SNESQNSetType(solver, SNES_QN_LBFGS));
> > ion_rec_ctx params = {9.59e-16, 1.15e-19, 1.};
> > CHKERRQ(SNESSetFunction(solver, res, &bdf1, &params));
> > CHKERRQ(SNESSolve(solver, q0, q1));
> >
> > When I run this, the solver fails to converge to a solution for this
> rather large time step.
> > The solution produced when the SNES module finally gives up is:
> >
> > q1 = [-2.72647e142, 2.72647e142]
> >
> > For reference, when I disable the scale and restart types, I get these
> values:
> >
> > q1 = [1.0279e17, 1.98972e19]
> >
> > This is only a problem when I use the SNES_QN_RESTART_POWELL restart
> type (seems to be regardless of the scale type type). I get reasonable
> answers for other combinations of restart/scale type. I've tried every
> combination of restart type/scale type except for SNES_QN_SCALE_JACOBIAN
> (my ultimate application doesn't have an available Jacobian), and only
> cases using SNES_QN_RESTART_POWELL are failing.
> >
> > I'm unfamiliar with Powell's restart criterion, but is it supposed to
> work reasonably well with Quasi-Newton methods? I tried it on the simple
> problem given in this example:
> http://www.mcs.anl.gov/petsc/petsc-current/src/snes/examples/tutorials/ex1.c.html
> >
> > And Powell restarts also fail to converge to a meaningful solution
> (solving for f(x) = [1,1], for x0 = [1,1]), but the other restart methods
> do converge properly.
>
>Could you please send the exact options you are using for the ex1.c
> that both fail and work and we'll see if there is some problem with the
> Powell restart.
>
> Thanks
>
> Barry
>
> >
> > Software information:
> >
> > PETSc version 3.7.2 (built from git maint branch)
> > PETSc arch: arch-linux2-c-opt
> > OS: Ubuntu 15.04 x64
> > Compiler: gcc 4.9.2
>
>


-- 
Andrew Ho

static char help[] = "Newton's method for a two-variable system, sequential.\n\n";

/*T
   Concepts: SNES^basic example
T*/

/*
   Include "petscsnes.h" so that we can use SNES solvers.  Note that this
   file automatically includes:
 petscsys.h   - base PETSc routines   petscvec.h - vectors
 petscmat.h - matrices
 petscis.h - index setspetscksp.h - Krylov subspace methods
 petscviewer.h - viewers   petscpc.h  - preconditioners
 petscksp.h   - linear solvers
*/
/*F
This examples solves either
\begin{equation}
  F\genfrac{(}{)}{0pt}{}{x_0}{x_1} = \genfrac{(}{)}{0pt}{}{x^2_0 + x_0 x_1 - 3}{x_0 x_1 + x^2_1 - 6}
\end{equation}
o

Re: [petsc-users] SNES_QN_RESTART_POWELL fails to converge?

2016-07-14 Thread Andrew Ho
On Thu, Jul 14, 2016 at 4:22 PM, Matthew Knepley <knep...@gmail.com> wrote:

> On Thu, Jul 14, 2016 at 6:18 PM, Andrew Ho <andre...@uw.edu> wrote:
>
>> I am trying to solve a simple ionization/recombination ODE using PETSc's
>> quasi-newton SNES.
>>
>> This is a basic non-linear coupled ODE system:
>>
>> delta = -a u^2 + b u v
>> d_t u = delta
>> d_t v = -delta
>>
>> a and b are constants.
>>
>> I wrote a backwards Euler root finding function (yes, I know the TS
>> module has BE implemented, but this is more of a learning exercise).
>>
>> Here is the function evaluation:
>>
>> struct ion_rec_ctx
>>> {
>>>   PetscScalar rate_a, rate_b;
>>>   PetscScalar dt;
>>> };
>>> PetscErrorCode bdf1(SNES snes, Vec x, Vec f, void *ctx)
>>> {
>>>   const PetscScalar *xx;
>>>   PetscScalar *ff;
>>>   ion_rec_ctx& params = *reinterpret_cast<ion_rec_ctx*>(ctx);
>>>   CHKERRQ(VecGetArrayRead(x, &xx));
>>>   CHKERRQ(VecGetArray(f, &ff));
>>>   auto delta = (-params.rate_a*xx[0]*xx[0]+params.rate_b*xx[1]*xx[0]);
>>>   ff[0] = xx[0]-params.dt*delta;
>>>
>>
> I do not understand this. Shouldn't it be (xx[0] - xxold[0]) here?
>
>  Matt
>

No, the time discretization is as such:

xnew = xold + dt*f(xnew)

I re-arrange this to be

xnew - dt*f(xnew) = xold

The left hand side I am defining as g(x), which is what the bdf1 function
evaluates. The SNES module solves for g(x) = b, so I simply set b = xold.


[petsc-users] SNES_QN_RESTART_POWELL fails to converge?

2016-07-14 Thread Andrew Ho
I am trying to solve a simple ionization/recombination ODE using PETSc's
quasi-newton SNES.

This is a basic non-linear coupled ODE system:

delta = -a u^2 + b u v
d_t u = delta
d_t v = -delta

a and b are constants.

I wrote a backwards Euler root finding function (yes, I know the TS module
has BE implemented, but this is more of a learning exercise).

Here is the function evaluation:

struct ion_rec_ctx
> {
>   PetscScalar rate_a, rate_b;
>   PetscScalar dt;
> };
> PetscErrorCode bdf1(SNES snes, Vec x, Vec f, void *ctx)
> {
>   const PetscScalar *xx;
>   PetscScalar *ff;
>   ion_rec_ctx& params = *reinterpret_cast<ion_rec_ctx*>(ctx);
>   CHKERRQ(VecGetArrayRead(x, &xx));
>   CHKERRQ(VecGetArray(f, &ff));
>   auto delta = (-params.rate_a*xx[0]*xx[0]+params.rate_b*xx[1]*xx[0]);
>   ff[0] = xx[0]-params.dt*delta;
>   ff[1] = xx[1]-params.dt*-delta;
>   CHKERRQ(VecRestoreArrayRead(x, &xx));
>   CHKERRQ(VecRestoreArray(f, &ff));
>   return 0;
> }


To setup the solver and solve one time step:

// q0, q1, and res are Vec's previously initialized
> // initial conditions: q0 = [1e19,1e19]
> SNES solver;
> CHKERRQ(SNESCreate(comm, &solver));
> CHKERRQ(SNESSetType(solver, SNESQN));
> CHKERRQ(SNESQNSetType(solver, SNES_QN_LBFGS));
> ion_rec_ctx params = {9.59e-16, 1.15e-19, 1.};
> CHKERRQ(SNESSetFunction(solver, res, &bdf1, &params));
> CHKERRQ(SNESSolve(solver, q0, q1));


When I run this, the solver fails to converge to a solution for this rather
large time step.
The solution produced when the SNES module finally gives up is:

q1 = [-2.72647e142, 2.72647e142]

For reference, when I disable the scale and restart types, I get these
values:

q1 = [1.0279e17, 1.98972e19]

This is only a problem when I use the SNES_QN_RESTART_POWELL restart type
(seems to be regardless of the scale type type). I get reasonable answers
for other combinations of restart/scale type. I've tried every combination
of restart type/scale type except for SNES_QN_SCALE_JACOBIAN (my ultimate
application doesn't have an available Jacobian), and only cases using
SNES_QN_RESTART_POWELL are failing.

I'm unfamiliar with Powell's restart criterion, but is it supposed to work
reasonably well with Quasi-Newton methods? I tried it on the simple problem
given in this example:
http://www.mcs.anl.gov/petsc/petsc-current/src/snes/examples/tutorials/ex1.c.html

And Powell restarts also fail to converge to a meaningful solution
(solving for f(x) = [1,1], for x0 = [1,1]), but the other restart methods
do converge properly.

Software information:

PETSc version 3.7.2 (built from git maint branch)
PETSc arch: arch-linux2-c-opt
OS: Ubuntu 15.04 x64
Compiler: gcc 4.9.2