Dear all,

We are using PETSc in our (Fortran) CFD code, which we have recently 
parallelized.

We need to access the values stored in the solution vector to assemble the 
matrix of the linear system. In particular, we need the values on the local 
elements, plus those on their first neighbors, which may be owned by other 
processes.
Hence, we defined our solution vector as an MPI Vec with ghost values.
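
For reference, the vector is created roughly as follows (nLocal, nGhosts, 
and ghostIdx are placeholders for the sizes and global ghost indices that 
come from our mesh partitioning):

    Vec                   :: sol
    PetscInt              :: nLocal, nGhosts
    PetscInt, allocatable :: ghostIdx(:)  ! global indices of the ghost unknowns
    PetscErrorCode        :: ierr

    ! one entry per element; the ghosts are the first neighbors
    ! owned by other processes
    call VecCreateGhost(PETSC_COMM_WORLD, nLocal, PETSC_DECIDE, &
                        nGhosts, ghostIdx, sol, ierr)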

At the moment, to access all the required values, at each time step we 
update the ghost values, then call 'VecGhostGetLocalForm', and finally 
'VecGetArrayReadF90'.
Since all these procedures are collective, we copy the retrieved values 
into a local Fortran array (of length n_local + n_ghosts) and then proceed 
with the parallel matrix assembly.
The code works perfectly, so I believe we are doing things correctly.
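
Concretely, the per-time-step access pattern is sketched below (sol is the 
ghosted vector from above; solLocal and locValues are placeholders for our 
actual names):

    Vec                      :: solLocal
    PetscScalar, pointer     :: xx(:)
    PetscScalar, allocatable :: locValues(:)  ! length nLocal + nGhosts
    PetscErrorCode           :: ierr

    ! communicate the current ghost values from their owning processes
    call VecGhostUpdateBegin(sol, INSERT_VALUES, SCATTER_FORWARD, ierr)
    call VecGhostUpdateEnd(sol, INSERT_VALUES, SCATTER_FORWARD, ierr)

    ! view the local part plus the ghosts as one sequential vector
    call VecGhostGetLocalForm(sol, solLocal, ierr)
    call VecGetArrayReadF90(solLocal, xx, ierr)

    ! the copy we would like to avoid
    locValues(1:nLocal+nGhosts) = xx(1:nLocal+nGhosts)

    call VecRestoreArrayReadF90(solLocal, xx, ierr)
    call VecGhostRestoreLocalForm(sol, solLocal, ierr)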

However, I was wondering whether there is a more efficient way to access 
the local values of a ghosted vector; something non-collective, so that we 
could read the values on demand while assembling the matrix.
That way, we could avoid copying the values into a Fortran array, thus 
saving memory.

A less important issue, but it puzzles me: why is VecGetArrayReadF90 
collective, while its C counterpart VecGetArrayRead is not? (The same 
holds for VecRestoreArrayReadF90 and VecRestoreArrayRead.)


Thanks a lot for your time and help.


Best regards,
Marco Tiberga
PhD candidate
Delft University of Technology
Faculty of Applied Sciences
Radiation Science & Technology Department
Mekelweg 15, 2629 JB Delft, The Netherlands
E-Mail: [email protected]
Website: http://www.nera.rst.tudelft.nl/

