Thank you, Barry. 

In DMCreateMatrix_DA_3d_MPIAIJ, there are two lines which I think are crucial: 
ierr = MatPreallocateSetLocal(ltog,nc,rows,ltog,cnt,cols,dnz,onz);CHKERRQ(ierr);
ierr = MatSetLocalToGlobalMapping(J,ltog,ltog);CHKERRQ(ierr);

In my case I only need the ltog for the rows, and my columns are sequential; 
what should I do here? I want to get a rltog for the rows and a cltog for the 
columns. It is clear that I can get rltog from the da, but where do I get the cltog?

It is not for interpolation between levels of DMDA. The matrix is for some 
quantity (of dimension M) that doesn't live on the grid (of size n^3), but 
depends on some other quantity (of dimension N=n^3, with 1<<M<<N) which lives 
on the grid. 
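
To make the question concrete, here is a minimal sketch of what I have in mind 
(assuming the column map should just be the identity over the M sequential 
columns, with the row map taken from the da; please correct me if this is wrong):

  PetscInt               i,*cidx;
  ISLocalToGlobalMapping rltog,cltog;

  ierr = DMGetLocalToGlobalMapping(da,&rltog);CHKERRQ(ierr);
  ierr = PetscMalloc1(M,&cidx);CHKERRQ(ierr);
  for (i=0; i<M; i++) cidx[i] = i;     /* columns 0..M-1 in natural order */
  ierr = ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD,1,M,cidx,PETSC_OWN_POINTER,&cltog);CHKERRQ(ierr);
  ierr = MatSetLocalToGlobalMapping(J,rltog,cltog);CHKERRQ(ierr);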

Hui


________________________________________
From: Barry Smith [bsm...@mcs.anl.gov]
Sent: Saturday, March 07, 2015 12:17 PM
To: Sun, Hui
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] DMDA with dof=4, multigrid solver

> On Mar 7, 2015, at 1:17 PM, Sun, Hui <hus...@ucsd.edu> wrote:
>
> Here is one more problem along this line. The matrix A created in this way 
> may not have a natural global numbering. Let's say, if we are on grid point 
> (i,j,k), the element in A with row (i,j,k) and column 1 is not necessarily at 
> row position i+(j+k*my)*mx globally. I mean, how do I set the correspondence 
> between the MatStencils and the integer indices? Should I use something like 
> ISLocalToGlobalMapping?

  You can.

   Look at the routines in PETSc that form the matrices, like 
DMCreateMatrix_DA_3d_MPIAIJ(), and modify one for the matrix you need.

   BTW: what is the matrix you are creating for? PETSc also creates matrices 
that interpolate between levels of DMDA, like DMCreateInterpolation_DA_3D_Q1(); 
its structure might be useful for you.

  Barry


>
> Best,
> Hui
>
>
> ________________________________________
> From: Barry Smith [bsm...@mcs.anl.gov]
> Sent: Wednesday, March 04, 2015 10:53 AM
> To: Sun, Hui
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] DMDA with dof=4, multigrid solver
>
>> On Mar 4, 2015, at 8:53 AM, Sun, Hui <hus...@ucsd.edu> wrote:
>>
>> I thought DMCreateMatrix would already specify the size of the matrix to 
>> be N by N, where N is the number of unknowns in the grid. And each processor 
>> takes care of a submatrix of size N_i by N_i.
>
>  Not really. Each process "owns" certain entire rows of the matrix. Different 
> processors cannot contain parts of the same rows.
>>
>> However, I wish to form a matrix of size M by N, where N is the number of 
>> unknowns in the grid, and  1<<M<<N. Furthermore, I wish to divide the M by N 
>> matrix A among the processors in such a way that each processor has a 
>> submatrix of size M by N_i.
>
>   PETSc doesn't provide a matrix format where sets of columns are on 
> different processes. It only provides formats where certain rows are on each 
> process. What you should do depends on what you want to do with the matrix; 
> for example, if you only need matrix-vector products, you could store by rows 
> the transpose of the matrix you want and use MatMultTranspose() to do the 
> multiply.
>
>   To create such a "transposed matrix" you can call DMGetGlobalVector(), 
> VecGetOwnershipRange(v,&start,&end); then call MatCreateMPIAIJ(comm, 
> end-start,PETSC_DECIDE,PETSC_DETERMINE,M,0,NULL,0,NULL,&A);  With this matrix 
> you can then call MatMultTranspose(A,v,w);
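>
>   Roughly, and untested (assuming da is your DMDA, M is the small dimension,
> and w is a Vec of global size M):
>
>     Vec      v;
>     Mat      A;
>     PetscInt start,end;
>     ierr = DMGetGlobalVector(da,&v);CHKERRQ(ierr);
>     ierr = VecGetOwnershipRange(v,&start,&end);CHKERRQ(ierr);
>     ierr = MatCreateMPIAIJ(PETSC_COMM_WORLD,end-start,PETSC_DECIDE,PETSC_DETERMINE,M,0,NULL,0,NULL,&A);CHKERRQ(ierr);
>     /* fill A (the transpose of the M by N matrix you want) with MatSetValues(),
>        then MatAssemblyBegin()/MatAssemblyEnd() */
>     ierr = MatMultTranspose(A,v,w);CHKERRQ(ierr);
>     ierr = DMRestoreGlobalVector(da,&v);CHKERRQ(ierr);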
>
>   Barry
>
>
>
>
>
>>
>> Hui
>>
>>
>> ________________________________________
>> From: Barry Smith [bsm...@mcs.anl.gov]
>> Sent: Wednesday, March 04, 2015 5:27 AM
>> To: Sun, Hui
>> Cc: petsc-users@mcs.anl.gov
>> Subject: Re: [petsc-users] DMDA with dof=4, multigrid solver
>>
>> If at all possible you should use DMCreateMatrix(); it manages everything for 
>> you. Why can't you use it?
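>>
>> That is, roughly (the exact calling sequence depends on your PETSc version):
>>
>>   ierr = DMCreateMatrix(da,&J);CHKERRQ(ierr);
>>
>> and J comes back with the correct parallel layout and nonzero pattern.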
>>
>> Barry
>>
>>
>>
>>> On Mar 4, 2015, at 1:20 AM, Sun, Hui <hus...@ucsd.edu> wrote:
>>>
>>> Thank you Barry for giving me the hint of using DMDAGetLocalInfo() to 
>>> determine the local size. However, I'm still confused about the process of 
>>> creating such a matrix, which is composed of serial rows of DMDA parallel 
>>> vectors.
>>>
>>> Should I somehow use the following functions?
>>> MatCreate
>>> DMDAGetLocalInfo
>>> ISLocalToGlobalMappingCreate
>>> MatSetLocalToGlobalMapping
>>> MatGetLocalSubMatrix
>>>
>>> However, I still need some more help on putting everything together to 
>>> create such a matrix. I'd really appreciate your time.
>>>
>>> Best,
>>> Hui
>>>
>>> ________________________________________
>>> From: Barry Smith [bsm...@mcs.anl.gov]
>>> Sent: Sunday, March 01, 2015 9:24 AM
>>> To: Sun, Hui
>>> Cc: petsc-users@mcs.anl.gov
>>> Subject: Re: [petsc-users] DMDA with dof=4, multigrid solver
>>>
>>>> On Mar 1, 2015, at 12:30 AM, Sun, Hui <hus...@ucsd.edu> wrote:
>>>>
>>>> Thank you Barry. I have yet two more questions:
>>>>
>>>> 1) If I have a DMDA and I use KSPSetComputeOperators and KSPSetComputeRHS 
>>>> to set up matrices and rhs, and I use geometric mg, what if I want to 
>>>> change my rhs many times? Should I write many KSPSetComputeRHS, and 
>>>> register them with ksp? Or is there a simple way to just register the rhs 
>>>> with ksp as a vector?
>>>>
>>>> 2) How do I create a Mat, whose cols follow the DMDA parallelization, and 
>>>> whose rows are serial?
>>>
>>> Normally one uses DMCreateMatrix() to get the matrices; it has the correct 
>>> parallel layout and the correct nonzero pattern.   If you create the 
>>> matrices yourself you need to first call DMDAGetLocalInfo() and from that 
>>> information determine how many local rows you have.
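>>>
>>> A minimal sketch (assuming da is your DMDA):
>>>
>>>   DMDALocalInfo info;
>>>   PetscInt      nlocalrows;
>>>   ierr = DMDAGetLocalInfo(da,&info);CHKERRQ(ierr);
>>>   nlocalrows = info.xm*info.ym*info.zm*info.dof;   /* locally owned grid points times dof */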
>>>
>>>>
>>>> By the way, I've figured out and fixed the bugs in my code concerning 
>>>> using mg with DMDA having 4 dof. It has to do with the interpolations. Now 
>>>> I can see mg works well with 4 dof DMDA.
>>>>
>>>> Best,
>>>> Hui
>>>>
>>>> ________________________________________
>>>> From: Barry Smith [bsm...@mcs.anl.gov]
>>>> Sent: Saturday, February 28, 2015 9:35 AM
>>>> To: Sun, Hui
>>>> Cc: petsc-users@mcs.anl.gov
>>>> Subject: Re: [petsc-users] DMDA with dof=4, multigrid solver
>>>>
>>>>> On Feb 27, 2015, at 7:25 PM, Sun, Hui <hus...@ucsd.edu> wrote:
>>>>>
>>>>> Thank you Barry. Another question: I observe that in those ksp examples, 
>>>>> whenever multigrid is used, DMDA is also used, besides, 
>>>>> KSPSetComputeOperators and KSPSetComputeRHS are also used.
>>>>>
>>>>> Is it true that
>>>>> 1) Only DMDA can use mg?
>>>>
>>>> No this is not true
>>>>
>>>>> 2) We have to set up matrices and rhs using KSPSetComputeOperators  and 
>>>>> KSPSetComputeRHS?
>>>>
>>>> No you do not have to
>>>>
>>>>> We cannot create a matrix and add it to KSP if we want to use mg?
>>>>
>>>> Yes you can.
>>>>
>>>> There are many, many variants of multigrid one can do with PETSc; we don't 
>>>> have the time to provide examples of all the possibilities.
>>>>
>>>> More details
>>>>
>>>>> 1) Only DMDA can use mg?
>>>>
>>>> Because DMDA provides structured grids with easy interpolation between 
>>>> levels and it is easy for users to write Jacobians we have many examples 
>>>> that use the DMDA. However, so long as YOU (or something) can provide 
>>>> interpolation between the multigrid levels you can use multigrid. For 
>>>> example PCGAMG uses algebraic multigrid to generate the interpolations. If 
>>>> you have your own interpolations you can provide them with 
>>>> PCMGSetInterpolation() (when you use PCMG with DMDA PETSc essentially 
>>>> handles those details automatically for you).
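>>>>
>>>> A bare-bones sketch of providing your own interpolations (assuming you have
>>>> already built matrices P[1],...,P[nlevels-1], where P[l] interpolates from
>>>> level l-1 to level l):
>>>>
>>>>   ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
>>>>   ierr = PCSetType(pc,PCMG);CHKERRQ(ierr);
>>>>   ierr = PCMGSetLevels(pc,nlevels,NULL);CHKERRQ(ierr);
>>>>   for (l=1; l<nlevels; l++) {
>>>>     ierr = PCMGSetInterpolation(pc,l,P[l]);CHKERRQ(ierr);
>>>>   }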
>>>>
>>>>> 2) We have to set up matrices and rhs using KSPSetComputeOperators  and 
>>>>> KSPSetComputeRHS?
>>>>
>>>> Normally with geometric multigrid one discretizes the operator on each 
>>>> level of the grid. Thus the user has to provide several matrices (one for 
>>>> each level). KSPSetComputeOperators() is ONE way that the user can provide 
>>>> them. You can also provide them by calling PCMGGetSmoother(pc,level,&ksp) and 
>>>> then calling KSPSetOperators(ksp,...) for each of the levels 
>>>> (KSPSetComputeOperators() essentially does the bookkeeping for you).
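>>>>
>>>> For example (assuming A[l] is your matrix on level l, with pc obtained from
>>>> KSPGetPC() as above):
>>>>
>>>>   for (l=0; l<nlevels; l++) {
>>>>     KSP smoother;
>>>>     ierr = PCMGGetSmoother(pc,l,&smoother);CHKERRQ(ierr);
>>>>     ierr = KSPSetOperators(smoother,A[l],A[l]);CHKERRQ(ierr);
>>>>   }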
>>>>
>>>>> We cannot create a matrix and add it to KSP if we want to use mg?
>>>>
>>>> As I said in 2, normally multigrid requires you to provide a discretized 
>>>> operator at each level. But with Galerkin coarse grids (which is what 
>>>> algebraic multigrid uses and can also be used by geometric multigrid) the 
>>>> user does not provide coarser grid operators; instead the code computes 
>>>> them automatically from the formula R*A*P, where R is the restriction 
>>>> operator used in multigrid and P is the interpolation operator (R is 
>>>> usually the transpose of P).
>>>>
>>>> If you are looking for a simple automatic multigrid then you want to use 
>>>> PCGAMG in PETSc; it does algebraic multigrid and doesn't require you to 
>>>> provide interpolations or coarser operators. However, algebraic multigrid 
>>>> doesn't work for all problems, though it does work for many. Try it with 
>>>> -pc_type gamg.
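>>>>
>>>> Or, equivalently, in code (assuming ksp is your KSP):
>>>>
>>>>   ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
>>>>   ierr = PCSetType(pc,PCGAMG);CHKERRQ(ierr);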
>>>>
>>>> Barry
>>>>
>>>>>
>>>>> Best,
>>>>> Hui
>>>>>
>>>>> ________________________________________
>>>>> From: Barry Smith [bsm...@mcs.anl.gov]
>>>>> Sent: Friday, February 27, 2015 5:11 PM
>>>>> To: Sun, Hui
>>>>> Cc: petsc-users@mcs.anl.gov
>>>>> Subject: Re: [petsc-users] DMDA with dof=4, multigrid solver
>>>>>
>>>>>> On Feb 27, 2015, at 6:36 PM, Sun, Hui <hus...@ucsd.edu> wrote:
>>>>>>
>>>>>> I'm trying to solve 4 Poisson equations defined on a DMDA grid; hence 
>>>>>> the parameter dof in DMDACreate3d should be 4, and I've set the stencil 
>>>>>> width to 4 and the stencil type to star.
>>>>>
>>>>> Use a stencil width of 1, not 4. The stencil width is defined in terms of 
>>>>> dof.
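>>>>>
>>>>> For example, the creation call might look something like this (a sketch, not
>>>>> your actual call):
>>>>>
>>>>>   ierr = DMDACreate3d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,
>>>>>                       DMDA_STENCIL_STAR,nx,ny,nz,PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,
>>>>>                       4,1,NULL,NULL,NULL,&da);CHKERRQ(ierr);   /* dof = 4, stencil width = 1 */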
>>>>>>
>>>>>> If I run the code with -pc_type ilu and -ksp_type gmres, it works 
>>>>>> alright.
>>>>>>
>>>>>> However, if I run with -pc_type mg, it gives me an error saying that when 
>>>>>> it is doing MatSetValues, the argument is out of range, and there is a 
>>>>>> new nonzero at (60,64) in the matrix. However, that new nonzero is 
>>>>>> expected to be there: row 60 corresponds to i=15 and c=0 in the x 
>>>>>> direction, and column 64 corresponds to i=16 and c=0 in the x 
>>>>>> direction. So they are next to each other, and the star stencil with 
>>>>>> width 1 should include that. I have also checked the memory 
>>>>>> allocations and found no problem.
>>>>>>
>>>>>> So I'm wondering if there is any problem of using multigrid on a DMDA 
>>>>>> with dof greater than 1?
>>>>>
>>>>> No it handles dof > 1 fine.
>>>>>
>>>>> Send your code.
>>>>>
>>>>> Barry
>>>>>
>>>>>>
>>>>>> Thank you!
