Hi Wayne,

currently, the constructor of MGTransferBlockGlobalCoarsening takes only a 
single MGTransferGlobalCoarsening instance and uses it for all blocks 
(i.e., the same AffineConstraints instance is used for all 
blocks): 
https://github.com/dealii/dealii/blob/37cb4ca903175732469461227567cd22f8476dd9/include/deal.II/multigrid/mg_transfer_global_coarsening.h#L691-L692
 

However, this can easily be generalized by adding a second constructor that 
takes a vector of MGTransferGlobalCoarsening objects. This should not be 
too much work, since MGTransferBlockGlobalCoarsening is derived from 
MGTransferGlobalCoarsening, which contains all the logic. The new 
constructor would use false here: 
https://github.com/dealii/dealii/blob/37cb4ca903175732469461227567cd22f8476dd9/include/deal.II/multigrid/mg_transfer_global_coarsening.templates.h#L3379
and the function 
https://github.com/dealii/dealii/blob/37cb4ca903175732469461227567cd22f8476dd9/include/deal.II/multigrid/mg_transfer_global_coarsening.templates.h#L3385-L3393
would need to be generalized to access the entries of the vector.
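
To sketch the idea (this second constructor does not exist yet; the signature 
below is only illustrative):

  // hypothetical addition to MGTransferBlockGlobalCoarsening
  MGTransferBlockGlobalCoarsening(
    const std::vector<const MGTransferGlobalCoarsening<dim, VectorType> *>
      &transfer_operators);

Internally, the function linked above would then hand out the transfer 
operator stored for the requested block instead of the single shared one.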

Hope this helps,
Peter

On Friday, 18 November 2022 at 02:53:29 UTC+1 yy.wayne wrote:

> Hi Peter, sorry for coming back to this discussion again. 
>
> I've implemented the vmult and compute-diagonal functions for a 
> complex MatrixFree operator, and tested them. But there is one last 
> ingredient missing in the multi-component MatrixFree framework with 
> p-coarsening: the transfers.
>
> Different from your spirk article, where the two components have the same 
> constraints, in my case the Dirichlet BCs on the real and imaginary DoFs 
> are different. However, when constructing MGTransferBlockGlobalCoarsening or 
> MGTransferGlobalCoarsening, an MGTwoLevelTransfer object is needed, but it 
> only receives a single DoFHandler and a single constraint object. The problem 
> here is the single constraint, since the DoFHandler for the real and 
> imaginary parts is the same. I haven't figured out a solution yet.
>
> I have also noticed that the GMG block transfer class 
> MGTransferBlockMatrixFree receives a vector of MGConstrainedDoFs. Your 
> implementation of a complex Helmholtz MatrixFree method with p-coarsening is 
> very helpful and addresses most of my problems; the only difference is the 
> different constraints for the two components.
>
> Best,
> Wayne
> On Friday, 4 November 2022 at 09:40:50 UTC+8, yy.wayne wrote:
>
>> Thanks for your patience and wonderful answer :)
>>
>> Best,
>> Wayne
>>
>> On Friday, 4 November 2022 at 00:06:28 UTC+8, peterr...@gmail.com wrote:
>>
>>> Hi Wayne,
>>>
>>> in that case I would provide the same DoFHandler to MatrixFree twice, along 
>>> with two different AffineConstraints objects. See the version of 
>>> MatrixFree::reinit() that takes vectors: 
>>> https://www.dealii.org/developer/doxygen/deal.II/classMatrixFree.html#abb081e301fadbfd94e0169611bae3838
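>>>
>>> A minimal sketch of that overload, assuming the scalar DoFHandler is called 
>>> "dof_handler" and the two constraint objects "constraints_real" and 
>>> "constraints_imag" (all names here are placeholders):
>>>
>>>   std::vector<const DoFHandler<dim> *>           dof_handlers{&dof_handler,
>>>                                                               &dof_handler};
>>>   std::vector<const AffineConstraints<double> *> constraints{
>>>     &constraints_real, &constraints_imag};
>>>   std::vector<QGauss<1>> quadratures{QGauss<1>(fe_degree + 1)};
>>>
>>>   MatrixFree<dim, double> matrix_free;
>>>   matrix_free.reinit(MappingQ1<dim>(), dof_handlers, constraints, quadratures);
>>>
>>> The FEEvaluation for the real part would then use DoFHandler index 0 and the 
>>> one for the imaginary part index 1, each with its own constraints.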
>>>
>>> Peter
>>>
>>> On Thursday, 3 November 2022 at 11:18:58 UTC+1 yy.wayne wrote:
>>>
>>>> Hi Peter,
>>>>
>>>> Thanks for your comprehensive reply, really helpful. By "assemble" I 
>>>> actually mean the MatrixFree integrator part, which corresponds to 
>>>> assembling in matrix-based form :). But you have resolved my question, since 
>>>> my main concern was one scalar DoFHandler describing both the real and 
>>>> imaginary parts. One last question: what would you do when the real and 
>>>> imaginary parts have different boundary conditions, if only one DoFHandler 
>>>> is provided? 
>>>>
>>>> Best,
>>>> Wayne
>>>> On Thursday, 3 November 2022 at 15:08:03 UTC+8, peterr...@gmail.com wrote:
>>>>
>>>>> Hi Wayne,
>>>>>
>>>>> > The solution in HeatProblem is a non-complex one then (half the size of 
>>>>> src and dst processed in the MatrixFree operator)?
>>>>>
>>>>> We are solving equation (10) from the paper; it is the reformulated 
>>>>> version of equation (9), which is a complex system. The heat equation is 
>>>>> indeed not complex, but the diagonalization approach in the case of the 
>>>>> fully implicit, stage-parallel IRK makes the system complex (the inner term 
>>>>> of equation (2) consists of complex blocks that need to be solved).
>>>>>
>>>>> > In your github code, you actually assemble a block matrix system (in 
>>>>> MatrixFree) but didn't renumber DoFs as done in most Stokes examples? There 
>>>>> are no multiple dof_handlers.
>>>>>
>>>>> I am not sure which lines you are referring to, but if I assemble the 
>>>>> matrix, it only happens for the coarse-grid problem; also, I don't think I 
>>>>> set up the matrix for the complex system and instead simply used Chebyshev 
>>>>> iterations around point Jacobi. I have another project where we actually 
>>>>> assemble the matrix.
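>>>>>
>>>>> For context, "Chebyshev around point Jacobi" looks roughly like the 
>>>>> following (a sketch, not the exact code from that repository; OperatorType, 
>>>>> level_operator, and compute_inverse_diagonal() are placeholders):
>>>>>
>>>>>   using SmootherType = PreconditionChebyshev<OperatorType,
>>>>>                                              VectorType,
>>>>>                                              DiagonalMatrix<VectorType>>;
>>>>>
>>>>>   SmootherType::AdditionalData smoother_data;
>>>>>   smoother_data.preconditioner = std::make_shared<DiagonalMatrix<VectorType>>();
>>>>>   level_operator.compute_inverse_diagonal(
>>>>>     smoother_data.preconditioner->get_vector());
>>>>>   smoother_data.degree          = 3;
>>>>>   smoother_data.smoothing_range = 20.;
>>>>>
>>>>>   SmootherType smoother;
>>>>>   smoother.initialize(level_operator, smoother_data);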
>>>>>
>>>>> I am not sure what happens in the Stokes examples. But my guess is 
>>>>> that they use an FESystem consisting of a scalar FE for the pressure and a 
>>>>> vectorial FE for the velocity. To be able to use a block vector in this 
>>>>> case, the DoFs need to be renumbered so that the DoFs of each component are 
>>>>> contiguous, which is normally not the case.
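>>>>>
>>>>> (That renumbering is what, e.g., DoFRenumbering::component_wise() does:
>>>>>
>>>>>   // after dof_handler.distribute_dofs(fe_system):
>>>>>   DoFRenumbering::component_wise(dof_handler);
>>>>>
>>>>> after which the DoFs of each component form a contiguous range that can be 
>>>>> mapped to one block of a block vector. Just a sketch; I have not checked 
>>>>> what the Stokes examples do exactly.)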
>>>>>
>>>>> In my case, I am using a single scalar DoFHandler. This describes both 
>>>>> the real and the imaginary part. As a consequence, one only has to pass a 
>>>>> single DoFHandler/AffineConstraints to MatrixFree. This is a somewhat 
>>>>> different approach than the one used in the other tutorials and has been 
>>>>> adopted from our two-phase solver adaflo (
>>>>> https://github.com/kronbichler/adaflo), where we use it extensively to 
>>>>> reduce overhead for the level-set problem (a single DoFHandler describes 
>>>>> the level-set field, normal, and curvature). MatrixFree works quite nicely 
>>>>> (just as if you were using an FESystem) if you use it with a scalar 
>>>>> DoFHandler and pass a BlockVector to the matrix-free loops. I think we 
>>>>> don't have a tutorial for this way of working with block vectors in the 
>>>>> context of MatrixFree. However, we have made a lot of progress in improving 
>>>>> the usability in the last two years, since we use this feature in 2-3 
>>>>> projects with external collaboration partners. Unfortunately, we have not 
>>>>> moved all utility functions to deal.II yet.
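>>>>>
>>>>> A rough sketch of what a cell integrator then looks like with two blocks 
>>>>> (real and imaginary part; the actual work at the quadrature points is left 
>>>>> out and all names are placeholders):
>>>>>
>>>>>   using BlockVectorType = LinearAlgebra::distributed::BlockVector<double>;
>>>>>
>>>>>   matrix_free.cell_loop<BlockVectorType, BlockVectorType>(
>>>>>     [&](const auto &data, auto &dst, const auto &src, const auto &range) {
>>>>>       FEEvaluation<dim, -1, 0, 1, double> real(data), imag(data);
>>>>>       for (unsigned int cell = range.first; cell < range.second; ++cell)
>>>>>         {
>>>>>           real.reinit(cell);
>>>>>           imag.reinit(cell);
>>>>>           real.read_dof_values(src.block(0));
>>>>>           imag.read_dof_values(src.block(1));
>>>>>           // ... evaluate, couple real/imag at the quadrature points,
>>>>>           // integrate ...
>>>>>           real.distribute_local_to_global(dst.block(0));
>>>>>           imag.distribute_local_to_global(dst.block(1));
>>>>>         }
>>>>>     },
>>>>>     dst,
>>>>>     src,
>>>>>     /*zero_dst_vector=*/true);
>>>>>
>>>>> Both FEEvaluation objects use DoFHandler index 0; the two components are 
>>>>> distinguished only by the block of the vector they read from and write to.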
>>>>>
>>>>> > Can I implement a p-multigrid preconditioner with the given 
>>>>> MGTransferBlockGlobalCoarsening?
>>>>>
>>>>> Yes. It is used in the code.
>>>>>
>>>>> > It's a parallel version, but vectors are not initialized with 
>>>>> solution(locally_owned_dofs, locally_relevant_dofs, mpi_communicator). So 
>>>>> does matrix_free->initialize_dof_vector do the same thing?
>>>>>
>>>>> Yes, initialize_dof_vector() does the same, just smarter. A general 
>>>>> recommendation: always use initialize_dof_vector(). If MatrixFree notices 
>>>>> that you use the internal partitioner, it can take some shortcuts; 
>>>>> otherwise, it needs to do some additional checks to figure out whether the 
>>>>> vectors are compatible.
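>>>>>
>>>>> For a block vector that would be, e.g. (sketch):
>>>>>
>>>>>   LinearAlgebra::distributed::BlockVector<double> vec(2);
>>>>>   for (unsigned int b = 0; b < vec.n_blocks(); ++b)
>>>>>     matrix_free->initialize_dof_vector(vec.block(b));
>>>>>   vec.collect_sizes();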
>>>>>
>>>>> Hope this helps,
>>>>> Peter
>>>>>
>>>>> On Thursday, 3 November 2022 at 03:47:05 UTC+1 yy.wayne wrote:
>>>>>
>>>>>> Thank you, Peter.
>>>>>>
>>>>>> It looks awesome! I've got a few questions about it.
>>>>>>
>>>>>>    1. In your github code, you actually assemble a block matrix 
>>>>>>    system (in MatrixFree) but didn't renumber DoFs as done in most Stokes 
>>>>>>    examples? There are no multiple dof_handlers. The solution in 
>>>>>>    HeatProblem is a non-complex one then (half the size of src and dst 
>>>>>>    processed in the MatrixFree operator)?
>>>>>>    2. Can I implement a p-multigrid preconditioner with the given 
>>>>>>    MGTransferBlockGlobalCoarsening?
>>>>>>    3. It's a parallel version, but vectors are not initialized with 
>>>>>>    solution(locally_owned_dofs, locally_relevant_dofs, mpi_communicator). 
>>>>>>    So does matrix_free->initialize_dof_vector do the same thing?
>>>>>>
>>>>>> Best,
>>>>>> Wayne
>>>>>> On Thursday, 3 November 2022 at 04:44:53 UTC+8, peterr...@gmail.com wrote:
>>>>>>
>>>>>>> Hi
>>>>>>>
>>>>>>> The block version of MGTransferGlobalCoarsening is 
>>>>>>> MGTransferBlockGlobalCoarsening (see  
>>>>>>> https://www.dealii.org/developer/doxygen/deal.II/classMGTransferBlockGlobalCoarsening.html
>>>>>>> ).
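>>>>>>>
>>>>>>> Roughly, it just wraps the scalar transfer you already have (a sketch with 
>>>>>>> abbreviated setup; min_level/max_level and the per-level reinit calls are 
>>>>>>> placeholders):
>>>>>>>
>>>>>>>   using VectorType = LinearAlgebra::distributed::Vector<double>;
>>>>>>>
>>>>>>>   MGLevelObject<MGTwoLevelTransfer<dim, VectorType>> transfers(min_level,
>>>>>>>                                                                max_level);
>>>>>>>   // ... set up transfers[l] between each pair of consecutive levels ...
>>>>>>>
>>>>>>>   MGTransferGlobalCoarsening<dim, VectorType>      transfer(transfers);
>>>>>>>   MGTransferBlockGlobalCoarsening<dim, VectorType> block_transfer(transfer);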
>>>>>>>
>>>>>>> I don't know the details of step-29. But what I would recommend is to 
>>>>>>> take a look at 
>>>>>>> https://github.com/peterrum/dealii-spirk/blob/bb52df5d1023f9c41ca51a082be769bc0171bf75/include/operator.h#L616-L660. 
>>>>>>> Here, I am solving a complex-valued Helmholtz operator (in my case a sum 
>>>>>>> of the mass and Laplace matrices, which arises from the complex implicit 
>>>>>>> Runge-Kutta formulation; see https://arxiv.org/abs/2209.06700). This 
>>>>>>> complex code was the motivation for writing 
>>>>>>> MGTransferBlockGlobalCoarsening in the first place^^
>>>>>>>
>>>>>>> I think the code should answer most of your questions.
>>>>>>>
>>>>>>> Regarding
>>>>>>>
>>>>>>> > FEEvaluation<...,1> real(matrix_free_data, 0) and 
>>>>>>> FEEvaluation<...,1> real(matrix_free_data, 1)
>>>>>>>
>>>>>>> I guess you have to replace the "1" by a "0" in the second FEEval.
>>>>>>>
>>>>>>> Hope this helps,
>>>>>>> Peter
>>>>>>>
>>>>>>> On Wednesday, 2 November 2022 at 17:55:44 UTC+1 yy.wayne wrote:
>>>>>>>
>>>>>>>> Well, a sad thing I noticed is that a function used for p-multigrid in 
>>>>>>>> step-75, reinit() in MGTwoLevelTransfer, is implemented only for 
>>>>>>>> LinearAlgebra::distributed::Vector, not for 
>>>>>>>> LinearAlgebra::distributed::BlockVector.
>>>>>>>> [image: 202211-3.png]
>>>>>>>>
>>>>>>>> I'm fine with writing some unimplemented functions myself, but some 
>>>>>>>> guidance would be really helpful. While going through this problem I 
>>>>>>>> found different choices and am not sure what's the best way to get this 
>>>>>>>> done. (For example, LinearAlgebraTrilinos::MPI::BlockVector or 
>>>>>>>> LinearAlgebra::distributed::BlockVector? Do we need 
>>>>>>>> locally_relevant_dofs when reinitializing the solution vector? It's not 
>>>>>>>> used here: stokes_computation 
>>>>>>>> <https://github.com/dealii/dealii/blob/2376c62a4b89953b74801852983eb8556930d54d/tests/matrix_free/stokes_computation.cc>. 
>>>>>>>> Or does the non-specialized MGTransferLevel class work for p-coarsening 
>>>>>>>> as well?)
>>>>>>>>
>>>>>>>> In short, I'm solving a step-29-like problem (2 components) with 
>>>>>>>> matrix-free and p-multigrid techniques. Is current deal.II capable of 
>>>>>>>> solving it? If not, what's the fastest way to implement it? Thanks!
>>>>>>>> On Wednesday, 2 November 2022 at 21:40:36 UTC+8, yy.wayne wrote:
>>>>>>>>
>>>>>>>>> I tried to modify my code so that FEEvaluation is constructed without 
>>>>>>>>> matrix_free_data, as in the .cc from dealii/tests above, but I get the 
>>>>>>>>> following error message:
>>>>>>>>>         "FEEvaluation was initialized without a matrix-free 
>>>>>>>>> object. Integer indexing is not possible".
>>>>>>>>>
>>>>>>>>> I guess the only way to do a multi-component MatrixFree problem now is 
>>>>>>>>> through a BlockVector, as in the matrix_vector_Stokes test 
>>>>>>>>> <https://github.com/dealii/dealii/blob/master/tests/matrix_free/matrix_vector_stokes_noflux.cc>? 
>>>>>>>>> Since I have never written a deal.II project with blocked DoFs before, 
>>>>>>>>> I'm afraid code besides the MatrixFree parts would have to change as 
>>>>>>>>> well... Should I try rearranging the step-29-like problem with a block 
>>>>>>>>> vector? 
>>>>>>>>>
>>>>>>>>> On Wednesday, 2 November 2022 at 17:24:23 UTC+8, yy.wayne wrote:
>>>>>>>>>
>>>>>>>>>> In tests/matrix_free/assemble_matrix_02.cc 
>>>>>>>>>> <https://github.com/dealii/dealii/blob/d2d20fe3ca3a1390420c51420307a3ef680c503c/tests/matrix_free/assemble_matrix_02.cc> 
>>>>>>>>>> a Stokes matrix is assembled in MatrixFree form, and the DoFs 
>>>>>>>>>> corresponding to velocity and pressure are accessed via 
>>>>>>>>>> FEEvaluation<...,dim> velocity(..., 0) and FEEvaluation<...,1> 
>>>>>>>>>> pressure(..., dim), respectively.
>>>>>>>>>>
>>>>>>>>>> I have a multi-component system too, as in step-29, with real and 
>>>>>>>>>> imaginary parts. But initializing the two FEEvaluations by 
>>>>>>>>>> FEEvaluation<...,1> real(matrix_free_data, 0) and FEEvaluation<...,1> 
>>>>>>>>>> real(matrix_free_data, 1) gives an error, because 
>>>>>>>>>> matrix_free_data.n_components() is 1, not 2. Maybe it's because I'm 
>>>>>>>>>> using an fe_collection, not just an FESystem as in the 
>>>>>>>>>> assemble_matrix_02 test.
>>>>>>>>>>
>>>>>>>>>> If I have an FEEvaluation<...,dim=2> integrator including the real and 
>>>>>>>>>> imaginary parts, is there a way to decouple them like in FEValues (for 
>>>>>>>>>> example, fe.system_to_component_index, or fe_values_real = 
>>>>>>>>>> fe_values[FEValuesExtractors])?
>>>>>>>>>>
>>>>>>>>>> Best,
>>>>>>>>>> Wayne
>>>>>>>>>>
>>>>>>>>>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/ce38cec6-3968-4e12-be23-afd6f06b3719n%40googlegroups.com.
