Are you certain ALL ranks in PETSC_COMM_WORLD call these function(s)? Vector
creation routines such as VecCreateMPI() and VecCreate() are collective over
their communicator, so they cannot sit inside an if statement like

  if (rank == 0) {
    VecCreateMPI(...);
  }

If any rank skips the call, the ranks that do make it block waiting for the
others, and the program hangs.
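
For example, the creation must be reached by every rank; only rank-local work
can be guarded. Here is a minimal Fortran sketch reusing your names (xp, nbdp,
ierr); the rank query and the rank-0 body are only illustrative:

     ! collective: EVERY rank in PETSC_COMM_WORLD must make this call
     call VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,nbdp,xp,ierr)
     CHKERRQ(ierr)

     ! rank-local (non-collective) work may still be guarded
     call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)
     if (rank == 0) then
        ! e.g. read input or set root-owned entries here
     end if

     ! collective again: all ranks must take part in assembly
     call VecAssemblyBegin(xp,ierr); CHKERRQ(ierr)
     call VecAssemblyEnd(xp,ierr); CHKERRQ(ierr)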


On Wed, 4 Jan 2017 at 23:34, Manuel Valera <mval...@mail.sdsu.edu> wrote:

> Thanks Dave for the quick answer, appreciate it,
>
> I just tried that and it didn't make a difference, any other suggestions ?
>
> Thanks,
> Manuel
>
> On Wed, Jan 4, 2017 at 2:29 PM, Dave May <dave.mayhe...@gmail.com> wrote:
>
> You need to swap the order of your function calls: call VecSetSizes()
> before VecSetType().
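>
> That is, keeping your three calls but in this order:
>
>      call VecCreate(PETSC_COMM_WORLD,xp,ierr); CHKERRQ(ierr)
>      call VecSetSizes(xp,PETSC_DECIDE,nbdp,ierr); CHKERRQ(ierr)
>      call VecSetType(xp,VECMPI,ierr); CHKERRQ(ierr)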
>
> Thanks,
>   Dave
>
>
> On Wed, 4 Jan 2017 at 23:21, Manuel Valera <mval...@mail.sdsu.edu> wrote:
>
> Hello all, happy new year,
>
> I'm working on parallelizing my code. It ran and produced some results
> when I simply launched it on more than one processor, but it created
> artifacts: each processor executed its own image of the whole program, and
> the copies conflicted with each other.
>
> Since the pressure solver is the main part I need in parallel, I'm using
> MPI to run everything on the root processor until it's time to solve for
> pressure. At that point I'm trying to create a distributed vector using
> either
>
>      call VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,nbdp,xp,ierr)
> or
>
>      call VecCreate(PETSC_COMM_WORLD,xp,ierr); CHKERRQ(ierr)
>
>      call VecSetType(xp,VECMPI,ierr)
>
>      call VecSetSizes(xp,PETSC_DECIDE,nbdp,ierr); CHKERRQ(ierr)
>
>
>
> In both cases the program hangs at this point, which never happened with
> the naive approach I described before. I've made sure the global size,
> nbdp, is the same on every processor. What can be wrong?
>
>
> Thanks for your kind help,
>
>
> Manuel.
