On Sun, Dec 13, 2009 at 6:41 PM, Jed Brown <jed at 59a2.org> wrote:

> On Sun, 13 Dec 2009 18:06:04 -0600, Matthew Knepley <knepley at gmail.com> wrote:
> > I will explain the history since I am to blame for this one. I needed
> > coarsenHierarchy because repeated coarsening can cause degradation of
> > the meshes. Thus all coarse meshes must be made at once. I added the
> > hierarchy option, but only the coarsen branch was used.
>
> I suspected something like that. So one real difference with hierarchy
> is that (in principle) DMRefine can move to a different communicator.
> Is this flexibility used (it seems like this could be quite powerful for
> additive multigrid, but I'm not aware of such a thing working now)? If
> so, DMRefineHierarchy and DMCoarsenHierarchy should also take an array
> of communicators (passing NULL would mean to use the same communicator).
>
> > > DMCoarsenHierarchy is implemented for Mesh, but the array is currently
> > > leaking (DMMGSetDM forgets a PetscFree). Is it indeed preferred that
> > > the caller does not allocate the array (despite knowing how much is
> > > needed) but is responsible for freeing it (I ask because this is clumsy
> > > for a single level of refinement). Either way, I'll document the choice
> > > and fix the leak.
> >
> > Cool. I guess we could have caller allocation, but that is harder to
> > check for correctness.
>
> The alternative (unless we want two ways to do the same thing) seems
> much worse:
>
>   DM *tmp;
>   DMRefineHierarchy(dmc,1,&tmp);
>   dmf = *tmp;
>   PetscFree(tmp);
I am not sure why this is much worse. There are many things users are
required to destroy.

  Matt

> Jed

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead. -- Norbert Wiener