Barry,
This is the same request Fabien Delalondre <delalf at scorec.rpi.edu> made
a few weeks ago. We have this support in petsc-dev. I'll forward you our
communications with Fabien regarding this matter in my next email.

Hong

On Thu, Jan 27, 2011 at 8:40 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
> What's up with this?
>
> Begin forwarded message:
>
>> From: "Stephen C. Jardin" <sjardin at pppl.gov>
>> Date: January 27, 2011 6:49:27 PM CST
>> To: <bsmith at mcs.anl.gov>
>> Cc: "Lois Curfman McInnes" <curfman at mcs.anl.gov>, <egng at lbl.gov>, 
>> <kd2112 at columbia.edu>
>> Subject: Request for new PETSc capability
>>
>> Dear Barry,
>>
>> The M3D-C1 project is one of the major code projects in CEMM. It is a
>> fully implicit formulation of the 3D MHD equations using high-order 3D
>> finite elements with continuous derivatives in all directions. In a
>> typical problem, the 3D domain consists of approximately 100 2D planes,
>> spread out equally around a torus. The grid we use is unstructured
>> within each 2D plane (where the coupling of elements is very strong),
>> but is structured and regular across the planes (where the coupling is
>> much weaker and is confined to nearest neighbors).
>>
>> Our plan has always been to solve the large sparse matrix equation we
>> get using GMRES with a block Jacobi preconditioner obtained by using
>> SuperLU_dist within each 2D plane. We have implemented this using PETSc
>> and find that it leads to a very efficient iterative solve that
>> converges in just a few iterations for the time step and other
>> parameters that we normally use. However, the present version of
>> PETSc/3.1 only allows a single processor per plane (block) when using
>> the block Jacobi preconditioner. This severely limits the maximum
>> problem size that we can run, as we can use only 100 processors for a
>> problem with 100 2D planes.
>>
>> Several years ago, when we were planning this project, we spoke with
>> Hong Zhang about this solver strategy, and she told us that if there
>> was a demand for it, the present limitation restricting the block
>> Jacobi preconditioner to a single processor could be lifted. We are now
>> at a point in our project where we need to request this. We have
>> demonstrated good convergence of the iterative solver, but need to be
>> able to run with 10-100 processors per plane (block) in order to use
>> 1000-10000 processors total to obtain the kind of resolution we need
>> for our applications.
>>
>> Would it be possible for your group to generalize the block Jacobi
>> preconditioner option so that the blocks could be distributed over
>> multiple processors? If so, could you give us a timeline for this to
>> appear in a PETSc release?
>>
>> Thank you, and Best Regards,
>>
>> Steve Jardin (for the CEMM team)
>>
>>
>> Cc Lois McInnes
>>    Esmond Ng
>>    David Keyes
>
>
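
For concreteness, here is a minimal sketch of the setup being requested:
GMRES around a block Jacobi preconditioner with fewer blocks than
processes, so that each block (one per 2D plane in the M3D-C1 case) spans
several processes and can be factored in parallel by SuperLU_DIST. This is
only an illustration, not code from the thread: the tridiagonal stand-in
matrix, the block count of 4, and the exact option spellings are
assumptions based on current PETSc (in the 3.1/3.2-era releases some names
differ, e.g. -sub_pc_factor_mat_solver_package rather than
-sub_pc_factor_mat_solver_type, and KSPSetOperators takes a fourth
argument); it also assumes PETSc was configured with SuperLU_DIST.

    /* Sketch: block Jacobi with multi-process blocks, each block
       factored by a parallel direct solver. Error checking omitted. */
    #include <petscksp.h>

    int main(int argc, char **argv)
    {
      Mat      A;
      Vec      x, b;
      KSP      ksp;
      PC       pc;
      PetscInt i, istart, iend, n = 1000, nblocks = 4;

      PetscInitialize(&argc, &argv, NULL, NULL);

      /* Distributed tridiagonal matrix standing in for the real operator */
      MatCreate(PETSC_COMM_WORLD, &A);
      MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
      MatSetFromOptions(A);
      MatSetUp(A);
      MatGetOwnershipRange(A, &istart, &iend);
      for (i = istart; i < iend; i++) {
        if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
        if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
        MatSetValue(A, i, i, 2.0, INSERT_VALUES);
      }
      MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
      MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

      MatCreateVecs(A, &x, &b);
      VecSet(b, 1.0);

      /* GMRES with block Jacobi; 4 blocks shared over all processes */
      KSPCreate(PETSC_COMM_WORLD, &ksp);
      KSPSetOperators(ksp, A, A);
      KSPSetType(ksp, KSPGMRES);
      KSPGetPC(ksp, &pc);
      PCSetType(pc, PCBJACOBI);
      PCBJacobiSetTotalBlocks(pc, nblocks, NULL); /* blocks may span ranks */
      KSPSetFromOptions(ksp);  /* picks up the -sub_* options shown below */

      KSPSolve(ksp, b, x);

      VecDestroy(&x);
      VecDestroy(&b);
      MatDestroy(&A);
      KSPDestroy(&ksp);
      PetscFinalize();
      return 0;
    }

Run with more processes than blocks, for example

    mpiexec -n 8 ./ex -sub_ksp_type preonly -sub_pc_type lu \
        -sub_pc_factor_mat_solver_type superlu_dist

so that each of the 4 blocks is handled by 2 processes and factored in
parallel, which is the behavior the request above asks for.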
