Re: [petsc-users] Time integrated adjoints

2015-04-15 Thread Dave Makhija
Barry,

Is the method exact if I have a time-dependent mass matrix (dF/dU_t changes
each time step)? I'm not sure that's possible if TSSetIJacobian expects a
function which computes the combination of dF/dU and a*dF/dU_t. But
perhaps there is something I'm missing.
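For concreteness, the combination TSSetIJacobian expects, dF/dU + a*dF/dU_t, can be written out for a scalar toy problem F(t, u, u_t) = m(t)*u_t - f(u) with a time-dependent mass m(t). This is a plain-Python sketch, not PETSc code; the names m, f, and df_du are invented for illustration:

```python
def m(t):
    # time-dependent "mass" coefficient (an assumed form, for illustration)
    return 1.0 + 0.5 * t

def f(u):
    # right-hand side of the toy model u' scaled by m(t)
    return -2.0 * u

def df_du(u):
    # Jacobian of f with respect to u (constant for this linear toy)
    return -2.0

def ijacobian(t, u, a):
    # The IJacobian contract: return dF/dU + a * dF/dU_t.
    # For F = m(t)*u_t - f(u): dF/dU = -df/du and dF/dU_t = m(t),
    # so the shift term picks up the current mass each time the
    # callback is re-evaluated.
    return a * m(t) - df_du(u)
```

Since the callback receives t and the shift a at every evaluation, this suggests a dF/dU_t that changes each step can be accommodated as long as the Jacobian is recomputed when the mass changes.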

Otherwise, it looks very comprehensive. I like how you added the cost
integrand. If I recall correctly, the Trilinos package Rythmos is missing
this.

Thanks!

Dave

On Tue, Apr 14, 2015 at 8:30 AM, Barry Smith bsm...@mcs.anl.gov wrote:


  On Apr 14, 2015, at 1:06 AM, Dave Makhija makhi...@colorado.edu wrote:
 
  Hello,
 
  I'm evaluating my options for computing time dependent adjoints. I did
 not think PETSc supported this, but I see TSAdjointSolve in the
 development branch. That would be fantastic news if you plan to support
 time integrated adjoints!
 
  What features are envisioned and when is the targeted release date?

   Dave,

We currently have discrete adjoint computations for explicit RK methods
 and for implicit theta methods (backward Euler and Crank-Nicolson). The
 development copy of the users manual has some discussion (
 http://www.mcs.anl.gov/petsc/petsc-master/docs/manual.pdf page 135) and
 there are several examples.   I urge you to try them out now in master and
 stay in contact with us to ensure we provide what you need as we are
 actively developing them.
 We plan a release for early June.
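As a scalar illustration of what a discrete adjoint computes, here is a backward Euler toy in plain Python (not the PETSc implementation; the model u' = c*u and the objective psi = u_N are invented for illustration). The adjoint recurrence applies the transposed step Jacobian backward in time and reproduces d(psi)/du_0 exactly for the discrete scheme:

```python
def forward(u0, c, h, N):
    # Backward Euler for u' = c*u: each step solves u_new - u - h*c*u_new = 0,
    # i.e. u_new = u / (1 - h*c).
    u = u0
    for _ in range(N):
        u = u / (1.0 - h * c)
    return u

def adjoint_gradient(c, h, N):
    # Discrete adjoint of the scheme above for psi(u) = u_N.
    # Seed lam = dpsi/du_N = 1, then apply the (here scalar) transposed
    # step Jacobian backward in time; the result is dpsi/du_0.
    lam = 1.0
    for _ in range(N):
        lam = lam / (1.0 - h * c)
    return lam
```

Because the forward map is linear in u0, the adjoint gradient equals forward(1.0, c, h, N), which makes the toy easy to check.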

   Barry

 
  Thanks!
 
  Dave




[petsc-users] Time integrated adjoints

2015-04-14 Thread Dave Makhija
Hello,

I'm evaluating my options for computing time dependent adjoints. I did not
think PETSc supported this, but I see TSAdjointSolve in the development
branch. That would be fantastic news if you plan to support time integrated
adjoints!

What features are envisioned and when is the targeted release date?

Thanks!

Dave


[petsc-users] nnz's in finite element stiffness matrix

2010-09-07 Thread Dave Makhija
I used to have a decent setup that worked as follows:

1. Build a node-element table (for a given node, which elements contain
that node; you may already have this) OR build a node connectivity
table (for a given node, which nodes are connected to it).
2. Build an element-node table (for a given element, which nodes are
contained in that element; you probably already have this).
3. Loop over nodes and get the global DOFs contained in each node. For
those DOF rows, add the number of nodal DOFs of each unique node
connected to the current node, using the node-element and
element-node tables OR the node connectivity table.
4. Loop over elements and add the number of elemental DOFs to each
nodal DOF global row contained in that element, using the element-node
table. Also add the number of elemental DOFs and the sum of the nodal
DOFs to the elemental global DOF rows.
5. Add contributions of multi-point constraints, i.e. Lagrange multiplier DOFs.
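A minimal plain-Python sketch of the nodal part of the steps above, assuming one DOF per node so row counts equal unique-neighbor counts (the two-triangle mesh in the test is invented for illustration):

```python
def row_nnz(elements, n_nodes):
    # Step 1: node-element table (which elements contain each node).
    node_elems = [[] for _ in range(n_nodes)]
    for e, nodes in enumerate(elements):
        for n in nodes:
            node_elems[n].append(e)
    # Step 3 (with element lists doubling as the element-node table):
    # for each node, count the unique nodes it couples to, including itself.
    nnz = []
    for n in range(n_nodes):
        neighbors = set()
        for e in node_elems[n]:
            neighbors.update(elements[e])
        nnz.append(len(neighbors))
    return nnz
```

With several DOFs per node, each unique neighbor would contribute its nodal DOF count instead of 1, and elemental DOFs would be added as in steps 4 and 5.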

Note that in parallel you may have to scatter values for global DOF rows
owned by other processes to get an accurate Onz. This setup as a whole
can be pretty fast, but it can scale poorly if you don't have a good way
of building the node-element table or node connectivity, since that
requires loops within loops.

Another way is to use the PETSc preallocation macros such as
MatPreallocateSet. You can essentially do a dry run of the Jacobian
matrix assembly into the preallocation macros. They can be tricky to
use, so if you have problems you can simply look at the PETSc
documentation for those macros and hand-code the equivalent yourself.
This strategy will overestimate the memory, but the linear solve will
dwarf what this wastes. I vaguely remember a PETSc mailing list thread
asking how to free the unneeded memory if that is absolutely
necessary, but I don't think anything really worked without a full
matrix copy. If someone by chance knows how the Trilinos
OptimizeStorage routine works for Epetra matrices, they could
potentially shed some light on how to do this - if it is even
possible.
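A plain-Python sketch of the dry-run idea (not the PETSc macros themselves; the mesh in the test is invented): looping over elements exactly as an assembly loop would, recording distinct columns per row gives the exact count, while a tally that skips de-duplication shows where an overestimate comes from:

```python
def dryrun_counts(elements, n_nodes):
    # One "row" per node (1 DOF per node for simplicity).
    exact = [set() for _ in range(n_nodes)]   # distinct columns per row
    over = [0] * n_nodes                      # raw insertion tally
    for nodes in elements:
        # Mimic assembling a dense element matrix: every element node
        # couples to every other node of the same element.
        for r in nodes:
            exact[r].update(nodes)   # exact: de-duplicated columns
            over[r] += len(nodes)    # overestimate: count every insert
    return [len(s) for s in exact], over
```

Rows shared by several elements are where the two counts diverge, which is exactly the double-counting a naive dry run pays for in extra preallocated memory.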


Dave Makhija



On Tue, Sep 7, 2010 at 2:24 PM, stali stali at purdue.edu wrote:
 Petsc-users

 How can I efficiently calculate the _exact_ number of non-zeros that would
 be in the global sparse (stiffness) matrix given an unstructured mesh?

 Thanks