Hi Luca,

Sorry for the delayed answer.

   +--------------------------------------------------+
   |                                                  |
   |                                                  |
   |                        Estimate/mark/refine      |
   |                                 ^                |
   |                                 +                |
   |       Current-level       Current-level          |
   |            +                    ^                |
   |  Pre smooth|                    | Post smooth    |
   |            v                    +                |
   |       Coarser level       Coarser level          |
   |            +                    ^                |
   |  Pre smooth|                    | Post smooth    |
   |            v                    +                |
   |       Coarsest level +--> Coarse Solve           |
   |                                                  |
   +--------------------------------------------------+
This is an interesting approach.
The issue I’m facing is the following: at each step of the adaptive loop, I’m setting up the whole
hierarchy of the MG framework just to apply one or two V-cycles, and the
setup cost is basically comparable to the cost of applying a single
V-cycle.

The thing is, once I have finished the V-cycle, estimated the error, and transferred
the solution, the whole MG hierarchy is destroyed and rebuilt from scratch.
While this is fine if you are using MG as a preconditioner and need to call it
many times, in my case the setup is the most time-consuming part (!).

I can easily imagine that. The question is: have you measured what exactly consumes the time? If you go through the whole process again, I think the heaviest parts are typically the setup of the MatrixFree objects, followed by MGTransferMatrixFree (I assume those are the data structures you are using). But the setup of the Triangulation and DoFHandler::distribute(_mg)_dofs() should also be quite expensive.


Is there a way to reuse as much as possible of the existing MG objects, i.e., 
detect what levels need to be rebuilt and what can remain the same?

There is a way, but it is not straightforward and takes a few days of work (if you already know where to look). As you suspect, the coarse hierarchy depends on the partitioning on the finest level, so the objects really need to be refreshed because the parallel distribution might change. I think the easiest way would be to hook into the global-coarsening infrastructure that Peter (on CC) has been building recently, because there we already have separate triangulations and related objects. At the same time, the transfer would probably be somewhat orthogonal to what is done right now, because you would only transfer into new unknowns on a few isolated mesh cells.

I think we could definitely identify some strategy to make this work, as I see your point of reusing a coarse AMG, for example. What is your time plan, and what resources do you have for this topic?

Best,
Martin
