Thank you Barry. I think this is what I want. 

Hui


________________________________________
From: Barry Smith [bsm...@mcs.anl.gov]
Sent: Wednesday, March 04, 2015 10:53 AM
To: Sun, Hui
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] DMDA with dof=4, multigrid solver

> On Mar 4, 2015, at 8:53 AM, Sun, Hui <hus...@ucsd.edu> wrote:
>
> I thought DMCreateMatrix would already specify the size of the matrix to be
> N by N, where N is the number of unknowns in the grid. And each processor 
> takes care of a submatrix of size N_i by N_i.

  Not really. Each process "owns" certain entire rows of the matrix; two
processes cannot each hold pieces of the same row.
>
> However, I wish to form a matrix of size M by N, where N is the number of 
> unknowns in the grid, and  1<<M<<N. Furthermore, I wish to divide the M by N 
> matrix A into the processors, in such a way, that each processor has a 
> submatrix of size M by N_i.

   PETSc doesn't provide a matrix format where sets of columns live on different
processes; it only provides formats where certain rows live on each process.
What you should do depends on what you want from the matrix: for example, if you
only need matrix-vector products, you could store by rows the transpose of the
matrix you want and use MatMultTranspose() to do the multiply.

   To create such a "transposed matrix" you can call DMGetGlobalVector() and
VecGetOwnershipRange(v,&start,&end), then call
MatCreateMPIAIJ(comm,end-start,PETSC_DECIDE,PETSC_DETERMINE,M,0,NULL,0,NULL,&A).
With this matrix you can then call MatMultTranspose(A,v,w).
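
   Put together, an untested sketch of the whole sequence (the grid sizes, the
value of M, and the 0,NULL preallocation counts are placeholders; fill A with
MatSetValues() before doing the multiply):

#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM             da;
  Vec            v, w;
  Mat            A;
  PetscInt       start, end, M = 8;          /* M: the small dimension */
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;

  /* 3d DMDA with dof = 4 and stencil width 1; grid sizes are placeholders */
  ierr = DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                      DM_BOUNDARY_NONE, DMDA_STENCIL_STAR, 16, 16, 16,
                      PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE, 4, 1,
                      NULL, NULL, NULL, &da);CHKERRQ(ierr);
  ierr = DMSetUp(da);CHKERRQ(ierr);
  ierr = DMGetGlobalVector(da, &v);CHKERRQ(ierr);
  ierr = VecGetOwnershipRange(v, &start, &end);CHKERRQ(ierr);

  /* the "transposed" matrix: N rows (grid unknowns, laid out like v) by M
     columns; MatCreateAIJ() is the current name of MatCreateMPIAIJ()      */
  ierr = MatCreateAIJ(PETSC_COMM_WORLD, end - start, PETSC_DECIDE,
                      PETSC_DETERMINE, M, 0, NULL, 0, NULL, &A);CHKERRQ(ierr);
  /* ... MatSetValues() calls go here ... */
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  ierr = VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, M, &w);CHKERRQ(ierr);
  ierr = MatMultTranspose(A, v, w);CHKERRQ(ierr);  /* w = A^T v, the M by N action */

  ierr = VecDestroy(&w);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = DMRestoreGlobalVector(da, &v);CHKERRQ(ierr);
  ierr = DMDestroy(&da);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}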

   Barry


>
> Hui
>
>
> ________________________________________
> From: Barry Smith [bsm...@mcs.anl.gov]
> Sent: Wednesday, March 04, 2015 5:27 AM
> To: Sun, Hui
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] DMDA with dof=4, multigrid solver
>
>  If at all possible you should use DMCreateMatrix(); it manages everything for
> you. Why can't you use it?
>
>  Barry
>
>
>
>> On Mar 4, 2015, at 1:20 AM, Sun, Hui <hus...@ucsd.edu> wrote:
>>
>> Thank you Barry for the hint about using DMDAGetLocalInfo() to determine the
>> local size. However, I'm still confused about the process of creating such a
>> matrix, whose columns follow the DMDA parallel layout and whose rows are
>> serial.
>>
>> Should I somehow use the following functions?
>> MatCreate
>> DMDAGetLocalInfo
>> ISLocalToGlobalMappingCreate
>> MatSetLocalToGlobalMapping
>> MatGetLocalSubMatrix
>>
>> However, I still need some more help on putting everything together to 
>> create such a matrix. I'd really appreciate your time.
>>
>> Best,
>> Hui
>>
>> ________________________________________
>> From: Barry Smith [bsm...@mcs.anl.gov]
>> Sent: Sunday, March 01, 2015 9:24 AM
>> To: Sun, Hui
>> Cc: petsc-users@mcs.anl.gov
>> Subject: Re: [petsc-users] DMDA with dof=4, multigrid solver
>>
>>> On Mar 1, 2015, at 12:30 AM, Sun, Hui <hus...@ucsd.edu> wrote:
>>>
>>> Thank you Barry. I have yet two more questions:
>>>
>>> 1) If I have a DMDA and I use KSPSetComputeOperators and KSPSetComputeRHS 
>>> to set up matrices and rhs, and I use geometric mg, what if I want to 
>>> change my rhs many times? Should I write many KSPSetComputeRHS, and 
>>> register them with ksp? Or is there a simple way to just register the rhs 
>>> with ksp as a vector?
>>>
>>> 2) How do I create a Mat, whose cols follow the DMDA parallelization, and 
>>> whose rows are serial?
>>
>> Normally one uses DMCreateMatrix() to get the matrices; it has the correct 
>> parallel layout and the correct nonzero pattern.   If you create the 
>> matrices yourself you need to first call DMDAGetLocalInfo() and from that 
>> information determine how many local rows you have.
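>>
>> For example, an untested fragment (assuming da is the 3d DMDA and ierr is a
>> PetscErrorCode) that computes the number of locally owned rows:
>>
>>   DMDALocalInfo info;
>>   PetscInt      mlocal;
>>
>>   ierr   = DMDAGetLocalInfo(da, &info);CHKERRQ(ierr);
>>   /* rows this process owns in a DMDA-distributed vector or matrix */
>>   mlocal = info.xm * info.ym * info.zm * info.dof;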
>>
>>>
>>> By the way, I've figured out and fixed the bugs in my code concerning using
>>> mg with a DMDA having 4 dof. They had to do with the interpolations. Now I
>>> can see mg works well with a 4-dof DMDA.
>>>
>>> Best,
>>> Hui
>>>
>>> ________________________________________
>>> From: Barry Smith [bsm...@mcs.anl.gov]
>>> Sent: Saturday, February 28, 2015 9:35 AM
>>> To: Sun, Hui
>>> Cc: petsc-users@mcs.anl.gov
>>> Subject: Re: [petsc-users] DMDA with dof=4, multigrid solver
>>>
>>>> On Feb 27, 2015, at 7:25 PM, Sun, Hui <hus...@ucsd.edu> wrote:
>>>>
>>>> Thank you Barry. Another question: I observe that in those ksp examples,
>>>> whenever multigrid is used, DMDA is also used; besides,
>>>> KSPSetComputeOperators and KSPSetComputeRHS are also used.
>>>>
>>>> Is it true that
>>>> 1) Only DMDA can use mg?
>>>
>>> No this is not true
>>>
>>>> 2) We have to set up matrices and rhs using KSPSetComputeOperators  and 
>>>> KSPSetComputeRHS?
>>>
>>> No you do not have to
>>>
>>>> We cannot create a matrix and add it to KSP if we want to use mg?
>>>
>>>  Yes you can.
>>>
>>> There are many many variants of multigrid one can do with PETSc; we don't 
>>> have the time to have examples of all the possibilities.
>>>
>>> More details
>>>
>>>> 1) Only DMDA can use mg?
>>>
>>> Because DMDA provides structured grids with easy interpolation between
>>> levels, and because it is easy for users to write Jacobians, we have many
>>> examples that use the DMDA. However, so long as YOU (or something) can provide
>>> interpolation between the multigrid levels you can use multigrid. For 
>>> example PCGAMG uses algebraic multigrid to generate the interpolations. If 
>>> you have your own interpolations you can provide them with 
>>> PCMGSetInterpolation() (when you use PCMG with DMDA PETSc essentially 
>>> handles those details automatically for you).
>>>
>>>> 2) We have to set up matrices and rhs using KSPSetComputeOperators  and 
>>>> KSPSetComputeRHS?
>>>
>>> Normally with geometric multigrid one discretizes the operator on each 
>>> level of the grid. Thus the user has to provide several matrices (one for 
>>> each level). KSPSetComputeOperators() is ONE way that the user can provide 
>>> them. You can also provide them by calling PCMGGetSmoother(pc,level,&ksp) and
>>> then calling KSPSetOperators(ksp,...) for each of the levels
>>> (KSPSetComputeOperators() essentially does the bookkeeping for you).
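>>>
>>> An untested sketch of that route (nlevels and the per-level matrices Alevel[]
>>> are yours to supply, with level 0 the coarsest):
>>>
>>>   PC       pc;
>>>   KSP      lksp;
>>>   PetscInt l;
>>>
>>>   ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
>>>   ierr = PCSetType(pc, PCMG);CHKERRQ(ierr);
>>>   ierr = PCMGSetLevels(pc, nlevels, NULL);CHKERRQ(ierr);
>>>   for (l = 0; l < nlevels; l++) {
>>>     ierr = PCMGGetSmoother(pc, l, &lksp);CHKERRQ(ierr);
>>>     ierr = KSPSetOperators(lksp, Alevel[l], Alevel[l]);CHKERRQ(ierr);
>>>     /* for l > 0, also PCMGSetInterpolation(pc, l, P[l]) if no DM supplies it */
>>>   }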
>>>
>>>> We cannot create a matrix and add it to KSP if we want to use mg?
>>>
>>>  As I said in 2, normally multigrid requires you to provide a discretized
>>> operator at each level. But with Galerkin coarse grids (which is what
>>> algebraic multigrid uses and which can also be used by geometric multigrid)
>>> the user does not provide the coarser grid operators; instead the code
>>> computes them automatically from the formula R*A*P, where R is the
>>> restriction operator used in multigrid and P is the interpolation operator
>>> (R is usually the transpose of P).
>>>
>>> If you are looking for a simple automatic multigrid then you want to use
>>> PCGAMG in PETSc; it does algebraic multigrid and doesn't require you to
>>> provide interpolations or coarser operators. However, algebraic multigrid
>>> doesn't work for all problems, though it does work for many. Try it with
>>> -pc_type gamg.
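>>>
>>> For example (myapp stands in for your executable):
>>>
>>>   mpiexec -n 4 ./myapp -ksp_type gmres -pc_type gamg -ksp_monitor -ksp_converged_reason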
>>>
>>> Barry
>>>
>>>>
>>>> Best,
>>>> Hui
>>>>
>>>> ________________________________________
>>>> From: Barry Smith [bsm...@mcs.anl.gov]
>>>> Sent: Friday, February 27, 2015 5:11 PM
>>>> To: Sun, Hui
>>>> Cc: petsc-users@mcs.anl.gov
>>>> Subject: Re: [petsc-users] DMDA with dof=4, multigrid solver
>>>>
>>>>> On Feb 27, 2015, at 6:36 PM, Sun, Hui <hus...@ucsd.edu> wrote:
>>>>>
>>>>> I'm trying to work on 4 Poisson equations defined on a DMDA grid. Hence
>>>>> the parameter dof in DMDACreate3d should be 4, and I've set the stencil
>>>>> width to 4 and the stencil type to star.
>>>>
>>>> Use a stencil width of 1, not 4. The stencil width is defined in terms of 
>>>> dof.
>>>>>
>>>>> If I run the code with -pc_type ilu and -ksp_type gmres, it works alright.
>>>>>
>>>>> However, if I run with pc_type mg, it gives me an error saying that when 
>>>>> it is doing MatSetValues, the argument is out of range, and there is a 
>>>>> new nonzero at (60,64) in the matrix. However, that new nonzero is 
>>>>> expected to be there, the row number 60 corresponds to i=15 and c=0 in x 
>>>>> direction, and the column number 64 corresponds to i=16 and c=0 in x 
>>>>> direction. So they are next to each other, and the star stencil with 
>>>>> width 1 should include that. I have also checked the memory allocations
>>>>> and found no problem.
>>>>>
>>>>> So I'm wondering if there is any problem of using multigrid on a DMDA 
>>>>> with dof greater than 1?
>>>>
>>>> No it handles dof > 1 fine.
>>>>
>>>> Send your code.
>>>>
>>>> Barry
>>>>
>>>>>
>>>>> Thank you!
