Ah, okay. To confirm, you have a DM that you are solving for, and in its user 
context, you have several other DMs, each with a Vec, describing the "problem 
data" like coefficients, forcing terms, and internal discontinuities?
Yes, but because of the structure of my problem, the application context also 
contains a link to the original DM.
Think of solving F(u,v) = 0, where u and v don't have the same layout, or even 
the same dimensionality, without field split.
My iterate is
   u_{n+1} = argmin_w F(w, v_n)
   v_{n+1} = argmin_w F(u_{n+1}, w)
The DM associated with u is DMu and the one with v is DMv
For the application context, I can either create an AppCtx with components 
(pointer to u, pointer to v, pointer to DMu, pointer to DMv), or an AppCtxu 
with components (pointer to v, pointer to DMv) and an AppCtxv with components 
(pointer to u, pointer to DMu).
In this setting, is calling SNESSolve with Vec u and Application Context AppCtx 
legal, or do I have to use AppCtxu / AppCtxv?
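
For concreteness, a minimal sketch of the shared-context variant (FormFunction_u,
FormFunction_v, and the AppCtx fields are just placeholder names):

#include <petscsnes.h>

/* Sketch only: one shared application context for both half-solves.
   FormFunction_u/FormFunction_v are the user's residual routines (not shown). */
typedef struct {
  DM  dmu, dmv;   /* layouts of u and v */
  Vec u, v;       /* current iterates; each residual reads the *other* field */
} AppCtx;

extern PetscErrorCode FormFunction_u(SNES,Vec,Vec,void*);
extern PetscErrorCode FormFunction_v(SNES,Vec,Vec,void*);

static PetscErrorCode AlternatingSweep(SNES snesU,SNES snesV,AppCtx *ctx)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  /* u_{n+1} = argmin_w F(w, v_n): FormFunction_u reads ctx->v and ctx->dmv */
  ierr = SNESSetFunction(snesU,NULL,FormFunction_u,ctx);CHKERRQ(ierr);
  ierr = SNESSolve(snesU,NULL,ctx->u);CHKERRQ(ierr);
  /* v_{n+1} = argmin_w F(u_{n+1}, w): FormFunction_v reads ctx->u and ctx->dmu */
  ierr = SNESSetFunction(snesV,NULL,FormFunction_v,ctx);CHKERRQ(ierr);
  ierr = SNESSolve(snesV,NULL,ctx->v);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}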


That is completely fine, and not "aliasing", but it does not play well with 
geometric multigrid because coarse grids reference the same application 
context. We have a system of hooks for managing such resolution-dependent data, 
though only with a C interface so far. (We needed this to get geometric 
multigrid and FAS to work with TS. Most non-toy applications need it too.)
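
Roughly, and assuming the hooks meant here are DMCoarsenHookAdd() / 
DMRefineHookAdd(), the pattern looks like this (CoarsenProblemData is a 
placeholder name):

#include <petscdm.h>

/* Sketch: the hook fires whenever the fine DM is coarsened (e.g. during PCMG
   or FAS setup), so resolution-dependent data can be rebuilt on each coarse DM. */
static PetscErrorCode CoarsenProblemData(DM fine,DM coarse,void *ctx)
{
  PetscFunctionBegin;
  /* ... create coarse coefficient Vecs/DMs for 'coarse' here, either by
     restricting the fine-level data or by re-evaluating it ... */
  PetscFunctionReturn(0);
}

/* Registered once on the finest DM:
     ierr = DMCoarsenHookAdd(dm,CoarsenProblemData,NULL,ctx);CHKERRQ(ierr);
   The third argument takes an optional restriction-time hook; DMRefineHookAdd()
   is the analogous call for refinement. */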

Can you point me to an example? Are the interfaces C-only because nobody has 
ever needed Fortran versions, or is there a technical reason?


I'm not sure if there is a way to make this easier. We have been using 
PetscObjectCompose() to attach things to the DM on different levels. We could 
have a slightly friendlier "user" interface for that.
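
A rough sketch of that pattern; the name string "coefficients" is arbitrary:

#include <petscdm.h>

static PetscErrorCode AttachAndRecoverCoefficients(DM dm,Vec coeff)
{
  PetscObject    obj;
  Vec            coeffOnThisLevel;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  /* attach the coefficient Vec to this level's DM under a user-chosen name */
  ierr = PetscObjectCompose((PetscObject)dm,"coefficients",(PetscObject)coeff);CHKERRQ(ierr);
  /* recover it later, e.g. inside a residual evaluation on this level */
  ierr = PetscObjectQuery((PetscObject)dm,"coefficients",&obj);CHKERRQ(ierr);
  coeffOnThisLevel = (Vec)obj;
  (void)coeffOnThisLevel; /* sketch only */
  PetscFunctionReturn(0);
}
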
I debated AppCtx vs. ObjectCompose for quite a bit. As far as I understand, 
PetscObjectCompose/Query uses object names to reference objects, so I would 
have ended up passing the names of the coefficient DMs and Vecs in the appctx. 
I opted to put pointers to them in the appctx instead of names; it looked a bit 
simpler at the time. Now I have two good reasons why I should have gone the 
PetscObject way. Oh well...


So keeping those things in the app context is just fine, but if you want to use 
geometric multigrid, you'll have to take them out of the app context and put 
them in a different structure attached to the DM that is not transparently 
propagated under coarsening and refinement. If you think you might do this 
eventually, I recommend commenting/organizing your app context so that 
resolution-dependent stuff is easily identifiable.
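
Something like the following organization would do; the field names are just 
placeholders:

#include <petscdm.h>

typedef struct {
  /* resolution-independent: safe to share across multigrid levels */
  PetscReal lambda, mu;   /* e.g. per-material constants */
  /* resolution-dependent: must be rebuilt for each level's DM */
  DM        dmCoeff;
  Vec       coeff;        /* data varying at the scale of the FE space */
} AppCtx;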

I had not thought about it, but it is quite feasible. I store all coefficients 
that are constant per block of cells or per dof in PetscBags, and everything 
that varies at the scale of the finite element space in Vecs.  How would the 
creation of the coarse DMs be handled, though? The geometry part is trivial to 
propagate using DMClone, but you may need user feedback for the data layout 
part, unless you have a scheme that describes it (e.g. for this IS of cells, n 
dof at the vertices, p at the faces, etc.)
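
For example, a hypothetical description of the layout (not a PETSc interface) 
might look like:

#include <petscdm.h>
#include <petscis.h>

typedef struct {
  IS       cells;          /* the cells this data lives on */
  PetscInt dofPerVertex;   /* n dof at the vertices */
  PetscInt dofPerFace;     /* p dof at the faces */
} LayoutDescription;

static PetscErrorCode BuildCoarseDataDM(DM coarseGeom,LayoutDescription *desc,DM *coarseData)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = DMClone(coarseGeom,coarseData);CHKERRQ(ierr);   /* geometry part */
  /* data-layout part: build a PetscSection from desc (chart, per-point dof)
     and attach it to *coarseData; this is the step needing user input */
  PetscFunctionReturn(0);
}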


I don't mind that, but can't you have an index set describing the codim 0 
elements (maybe all of them) and another index set for the codim 1 elements on 
the features you care about? You can take their union (or concatenate) for your 
assembly loop if you like. Is there something wrong with this approach?
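
A minimal sketch of that idea, assuming ISExpand() is used for the union 
(names are placeholders): cellIS holds the codim 0 points, faceIS the codim 1 
points of interest.

#include <petscis.h>

static PetscErrorCode BuildAssemblyIS(IS cellIS,IS faceIS,IS *assemblyIS)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  /* union of the two index sets (duplicates removed) drives one assembly loop */
  ierr = ISExpand(cellIS,faceIS,assemblyIS);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}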

That's a very good point. In the end it doesn't really matter. As far as I 
remember, the main reason I ended up with my current scheme is that DMMesh did 
not play well with partially interpolated meshes. I don't know what the current 
status of DMComplex is.

Okay, I think it's important to eventually support partially interpolated 
meshes to avoid using a lot of memory with low-order discretizations. 
I see no reason why there can't also be a direct cache for closure. For a P1 
basis, that amounts to a point range

[cells, boundary faces, vertices]

closure: [cells -> vertices, faces -> vertices]

So cell -> face need not be stored anywhere. Presumably there is a reason why 
Matt didn't write it this way. Is it just uniformity of data structure?
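
As a purely hypothetical illustration (not a PETSc data structure), the storage 
for that layout could be as little as:

#include <petscsys.h>

typedef struct {
  PetscInt  cStart, cEnd;   /* cells */
  PetscInt  fStart, fEnd;   /* boundary faces only */
  PetscInt  vStart, vEnd;   /* vertices */
  PetscInt *cellClosure;    /* closure: cells -> vertices */
  PetscInt *faceClosure;    /* closure: faces -> vertices */
  /* no cell -> face map stored anywhere */
} P1PointRange;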

Do we mean the same thing by a partially interpolated mesh? What I mean is a 
mesh in which the only faces explicitly stored are the ones we are interested 
in (typically the boundary mesh). For P1 elements, we only need to know about 
[cells, some faces, vertices]. I tried to manually build partially interpolated 
sieves with only some of the faces, and distribution would fail. That's how I 
ended up with a mix of cells of codimension 0 and 1. If one wants access to ALL 
faces / edges, or to none of them, there is no problem in the current 
implementation.

Blaise


--
Department of Mathematics and Center for Computation & Technology
Louisiana State University, Baton Rouge, LA 70803, USA
Tel. +1 (225) 578 1612, Fax  +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin






