Re: [deal.II] Re: maybe a novice question about parallel vectors in deal.ii

2017-06-17 Thread itomas
Dear Timo:

After reading your comment:

"SparseMatrix::vmult(dst, src) does not require ghost entries in 
the vector src (but it won't hurt either)"

I started to check this in my own code (which uses vmult a lot) and 
realized that I always pass a ''src'' that is locally relevant. I did that 
because, from the point of view of each MPI process, a ''SparseMatrix'' is 
just a ''flat'' (rather than tall) rectangular matrix: to perform a 
matrix-vector product, the information I need from ''src'' is the 
locally relevant part, not just the locally owned part.

Does it pay off to use a ''src'' that is locally relevant rather than 
locally owned? Is this more efficient? It is clear to me that if ''src'' is 
merely locally owned, it just doesn't have enough information to compute 
''its part'', so at some point vmult will have to ship (from another MPI 
process) the missing ''relevant'' entries of the src vector, is this 
correct? So it seems as if, from the very beginning, it is better to feed 
vmult with a locally relevant ''src''?

Just trying to optimize my code …

Many thanks

Ignacio.

On Wednesday, February 26, 2014 at 4:53:33 PM UTC-6, Timo Heister wrote:
>
> > Thanks for the help! I understand better what it is I cannot do. But I 
> still 
> > need a piece of advice on how to compute and store in a parallel 
> simulation 
> > the quantity: M(u-uold) 
>
> No, SparseMatrix::vmult(dst, src) does not require ghost entries in 
> the vector src (but it won't hurt either). Because we are writing into 
> dst, dst must not have ghosts. 
>
> > I still think this is convoluted. Any (better) ideas? 
>
> Ghosted vectors are only needed if you read from individual vector 
> entries and in a couple of places in the library: 
> - DataOut 
> - error computation 
> - FEValues::get_function_values 
>
> And probably a few other things I forgot. 
>
> -- 
> Timo Heister 
> http://www.math.clemson.edu/~heister/ 
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [deal.II] Re: making a package of PETSc vectors

2017-03-24 Thread itomas
No worries, 

VectorPacket[0].reset(new PETScWrappers::MPI::Vector(locally_owned_dofs, 
mpi_communicator));

now did the job.

Thanks.


On Friday, March 24, 2017 at 8:01:18 PM UTC-5, Ignacio Tomas wrote:
>
> I wanted to do something like (in reality I would do this with a for loop)
>
> std::vector< std::shared_ptr<PETScWrappers::MPI::Vector> > VectorPacket ;
> VectorPacket.resize(3) ; 
> VectorPacket[0].reset(new PETScWrappers::MPI::Vector) ;
> VectorPacket[1].reset(new PETScWrappers::MPI::Vector) ;
> VectorPacket[2].reset(new PETScWrappers::MPI::Vector) ;
>
> and later initialize all of them (with a for loop too). Yet this does not 
> seem to work. Otherwise
>
> std::vector< std::shared_ptr<PETScWrappers::MPI::Vector> > VectorPacket ;
> VectorPacket.resize(3) ; 
> VectorPacket[0].reset() ;
> VectorPacket[1].reset() ;
> VectorPacket[2].reset() ;
>
> with Vect1, Vect2 and Vect3 defined/declared/initialized a priori works 
> fine. But it comes with the overhead of defining these vectors separately 
> rather than through the vector of shared pointers. 
>
> On Fri, Mar 24, 2017 at 1:50 AM, Jean-Paul Pelteret <jppelte...@gmail.com> 
> wrote:
>
>> Dear Itomas,
>>
>> I would recommend going the route suggested in this recent post 
>> <https://groups.google.com/forum/#!topic/dealii/kToGg5lNhFE>, 
>> namely creating a vector of shared pointers to PETSc vectors, i.e. 
>> std::vector<std::shared_ptr<PETScWrappers::MPI::Vector>>. Using shared 
>> pointers means that you safeguard against memory leaks, and the pointers 
>> can be stored as entries in a vector. That would give you the flexibility 
>> that you require with a minimal interface change to your code (apart from 
>> how you create the vectors initially, you would need to dereference these 
>> vector entries with the "->" operator instead of the "." operator). 
>>
>> I hope that this helps.
>> Best regards,
>> Jean-Paul
>>
>> On Friday, March 24, 2017 at 2:37:07 AM UTC+1, ito...@tamu.edu wrote:
>>>
>>> Dear everybody:
>>>
>>> I don't have any particular problem, just trying to streamline a code. I 
>>> have made a mess of the interfaces of my code, in particular of the 
>>> functions and classes I have created, primarily because I have to handle a 
>>> ''packet'' of vectors, say 
>>>
>>> PETScWrappers::MPI::Vector  Vect1;
>>> PETScWrappers::MPI::Vector  Vect2;
>>> PETScWrappers::MPI::Vector  Vect3;
>>> PETScWrappers::MPI::Vector  Vect4;
>>> …
>>> PETScWrappers::MPI::Vector  Vect7;
>>>
>>> and use them either as input or output arguments of some functions, 
>>> solvers, complementary computations, etc. This is quite cumbersome, and as 
>>> solution I naively thought I could do the following
>>>
>>> std::vector<PETScWrappers::MPI::Vector>  VectorPacket ;
>>> VectorPacket.resize(7) ;
>>> for (int i=0; i<7 ; ++i) 
>>>   VectorPacket[i].reinit (locally_owned_dofs, mpi_communicator ) ;
>>>
>>> So, now VectorPacket is really a packet, rather than seven isolated 
>>> vectors. It is almost needless to say that this is a bad idea: std::vector 
>>> will demand quite a few things from the class PETScWrappers::MPI::Vector 
>>> which are just not there (in particular some operators). In reality the 
>>> size of the packet of vectors is not always seven, which means that I 
>>> really need to use a proper container whose size I can change. Do you 
>>> have any suggestion (a.k.a. creative approach) on how to deal with an issue 
>>> like this one? Which container from the STL would be appropriate?
>>>
>>> Many thanks.
>>>

[deal.II] making a package of PETSc vectors

2017-03-23 Thread itomas
Dear everybody:

I don't have any particular problem, just trying to streamline a code. I 
have made a mess of the interfaces of my code, in particular of the 
functions and classes I have created, primarily because I have to handle a 
''packet'' of vectors, say 

PETScWrappers::MPI::Vector  Vect1;
PETScWrappers::MPI::Vector  Vect2;
PETScWrappers::MPI::Vector  Vect3;
PETScWrappers::MPI::Vector  Vect4;
…
PETScWrappers::MPI::Vector  Vect7;

and use them either as input or output arguments of some functions, 
solvers, complementary computations, etc. This is quite cumbersome, and as 
solution I naively thought I could do the following

std::vector<PETScWrappers::MPI::Vector>  VectorPacket ;
VectorPacket.resize(7) ;
for (int i=0; i<7 ; ++i) 
  VectorPacket[i].reinit (locally_owned_dofs, mpi_communicator ) ;

So, now VectorPacket is really a packet, rather than seven isolated 
vectors. It is almost needless to say that this is a bad idea: std::vector 
will demand quite a few things from the class PETScWrappers::MPI::Vector 
which are just not there (in particular some operators). In reality the 
size of the packet of vectors is not always seven, which means that I really 
need to use a proper container whose size I can change. Do you 
have any suggestion (a.k.a. creative approach) on how to deal with an issue 
like this one? Which container from the STL would be appropriate?

Many thanks.
