Sajid,

     The comm in the IS isn't really important; it is the comms of the Vecs that 
matter.

 In the parallel-vector-to-parallel-vector case, each process just provides 
"some" original locations (from) and their corresponding new locations (both in 
the global numbering of the vector). Each process can provide any part of the 
"some".

 In the parallel-to-sequential or sequential-to-parallel case, the indexing of 
the to (or from, depending on which one is sequential) is relative to the local 
numbering of the sequential vector, and each process can only list entries 
associated with its own sequential vector.
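As a concrete sketch of the parallel-to-parallel case: for the fftshift you 
describe, each rank can list its locally owned global indices as the "from" 
locations and their shifted positions as the "to" locations, then hand those to 
ISCreateGeneral/VecScatterCreate. The helper below is plain Python (not PETSc 
API) and assumes the usual roll-by-N//2 fftshift convention and a contiguous 
ownership range [rstart, rend) per rank, as a hedged illustration only:

```python
def fftshift_scatter_indices(N, rstart, rend):
    """For the locally owned global rows [rstart, rend) of a length-N vector,
    return (from_idx, to_idx): from_idx are the original global locations,
    to_idx their destinations after an fftshift (a circular roll by N // 2).
    Both lists are in the global numbering, as the parallel-to-parallel
    VecScatter case requires."""
    shift = N // 2
    from_idx = list(range(rstart, rend))
    to_idx = [(i + shift) % N for i in from_idx]
    return from_idx, to_idx

# Example: two ranks each owning half of an 8-element vector.
f0, t0 = fftshift_scatter_indices(8, 0, 4)   # rank 0's contribution
f1, t1 = fftshift_scatter_indices(8, 4, 8)   # rank 1's contribution
print(f0, t0)  # [0, 1, 2, 3] [4, 5, 6, 7]
print(f1, t1)  # [4, 5, 6, 7] [0, 1, 2, 3]
```

Each rank would then wrap its two lists in index sets and build one scatter; 
since the numbering is global, it does not matter which rank lists which part 
of the permutation.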

  Barry


> On Apr 30, 2019, at 4:04 PM, Matthew Knepley via petsc-users 
> <petsc-users@mcs.anl.gov> wrote:
> 
> On Tue, Apr 30, 2019 at 12:42 PM Sajid Ali via petsc-users 
> <petsc-users@mcs.anl.gov> wrote:
> Hi PETSc Developers, 
> 
> I see that in the examples for ISCreateGeneral, the index sets are created by 
> copying values from int arrays (which were created by PetscMalloc1 which is 
> not collective). 
> 
> If ISCreateGeneral is called with PETSC_COMM_WORLD and the int arrays on each 
> rank are independently created, does the resulting index set concatenate all 
> the int arrays into one? If not, what needs to be done to get such an index 
> set? 
> 
> It does not sound scalable, but you can use 
> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/IS/ISOnComm.html
> 
>    Matt
>  
> PS: For context, I want to write a fftshift convenience function (like numpy, 
> MATLAB) but for large distributed vectors. I thought that I could do this 
> with VecScatter and two index sets, one shifted and one un-shifted.  
> 
> Thank You,
> Sajid Ali
> Applied Physics
> Northwestern University
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/
