Dear Roy,
On Wed, 14 Jan 2009, Roy Stogner wrote:
> On Wed, 14 Jan 2009, Tim Kroeger wrote:
>
>> Now, the first problem I encounter -- but actually it's not really a
>> problem -- is that PETSc supplies the possibility to automatically
>> communicate any changes of vector components to the other processors
>> that may have those indices as ghost indices.
I'm pretty sure that Trilinos doesn't have this capability. I'm with
Roy in thinking that it would be better to code this once ourselves so
it will work for all NumericVectors.
Derek
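Derek's "code this once ourselves" idea could look roughly like the sketch below: a ghost update written once at the generic-vector level, with no backend-specific calls. This is only an illustration under simplifying assumptions (all "ranks" live in one process, and the names `LocalVector` and `update_ghosts` are made up for this sketch), not libMesh's actual implementation:

```cpp
#include <cstddef>
#include <map>
#include <vector>

// Hypothetical sketch: each rank owns a contiguous block [first, last)
// of a global vector and additionally caches copies of a few ghost
// entries it does not own.
struct LocalVector {
  std::size_t first = 0, last = 0;       // owned global index range
  std::vector<double> owned;             // values for [first, last)
  std::map<std::size_t, double> ghosts;  // ghost index -> cached value
};

// Refresh every ghost entry from its owner's current value.  In a real
// MPI code this would be a communication (scatter) step; here all
// "ranks" live in one process, so we read the owner's array directly.
void update_ghosts(std::vector<LocalVector>& ranks) {
  for (auto& r : ranks)
    for (auto& [gid, val] : r.ghosts)
      for (const auto& owner : ranks)
        if (gid >= owner.first && gid < owner.last)
          val = owner.owned[gid - owner.first];
}
```

Because the logic only touches the generic owned/ghost storage, the same routine would work for a PETSc-backed or Trilinos-backed vector alike.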
On Wed, 14 Jan 2009, Tim Kroeger wrote:
> Now, the first problem I encounter -- but actually it's not really a problem
> -- is that PETSc supplies the possibility to automatically communicate any
> changes of vector components to the other processors that may have those
> indices as ghost indices.
Dear Roy,
On Wed, 14 Jan 2009, Roy Stogner wrote:
> By "that" I just meant the stuff you've referred to as "the other
> side"; I don't have time to figure out the right PETSc APIs for
> part-contiguous, part-sparse vectors.
Okay, so I will start doing the NumericVector and the PetscVector
stuff.
On Wed, 14 Jan 2009, Tim Kroeger wrote:
>>> I think that adding the required functionality to NumericVector and
>>> PetscVector would not be too complicated (PETSc's VecCreateGhost()
>>> seems to do the trick). I can try to do this part myself, that is to
>>> add a constructor that additionally takes the ghost indices.
>>> I don't think that the (serial) grid itself is the decisive thing in
>>> my application. (I watched the memory consumption, and it remains
>>> small during the grid creation procedure and increases drastically
>>> when the systems are created and initialized.)
>>
>> Keep in mind, the serial mesh is still replicated on every processor.
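For what it's worth, PETSc's VecCreateGhost() stores the ghost entries contiguously after the locally owned block, which is why the constructor only needs the list of ghost indices. A minimal model of that layout (plain C++, no PETSc; the class name `GhostedVector` and its interface are invented for this sketch):

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Minimal model of a ghosted vector in the style of VecCreateGhost():
// local storage holds the n_local owned entries first, then one extra
// slot per ghost index.  The constructor additionally takes the list
// of ghost (global) indices.
class GhostedVector {
public:
  GhostedVector(std::size_t first_owned, std::size_t n_local,
                std::vector<std::size_t> ghost_ids)
    : first_(first_owned), n_local_(n_local),
      ghost_ids_(std::move(ghost_ids)),
      data_(n_local_ + ghost_ids_.size(), 0.0) {}

  // Map a global index to a slot in local storage: owned entries map
  // into [0, n_local), ghost entries into the tail.
  double& operator()(std::size_t global_id) {
    if (global_id >= first_ && global_id < first_ + n_local_)
      return data_[global_id - first_];
    for (std::size_t g = 0; g < ghost_ids_.size(); ++g)
      if (ghost_ids_[g] == global_id)
        return data_[n_local_ + g];
    throw std::out_of_range("index neither owned nor ghosted");
  }

  std::size_t local_size() const { return data_.size(); }

private:
  std::size_t first_, n_local_;
  std::vector<std::size_t> ghost_ids_;
  std::vector<double> data_;
};
```

The point is that the per-processor storage is n_local plus the (small) number of ghosts, rather than the full global size.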
Dear Roy,
On Wed, 14 Jan 2009, Roy Stogner wrote:
> On Wed, 14 Jan 2009, Tim Kroeger wrote:
>
>> I don't think that the (serial) grid itself is the decisive thing in
>> my application. (I watched the memory consumption, and it remains
>> small during the grid creation procedure and increases drastically
>> when the systems are created and initialized.)
On Wed, 14 Jan 2009, Tim Kroeger wrote:
> I don't think that the (serial) grid itself is the decisive thing in
> my application. (I watched the memory consumption, and it remains
> small during the grid creation procedure and increases drastically
> when the systems are created and initialized.)
Dear libMesh team,
Is there any chance that the memory scaling of
System::current_local_solution will be improved in the near future?
In my application, I have a large number of systems and a large number
of cells, and the fact that System::current_local_solution is always a
serial vector seems to be the decisive factor for the memory consumption.
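A back-of-the-envelope comparison shows why the serial vector hurts here. The numbers below are purely illustrative (the function names and the dof/ghost counts are assumptions, not measurements): a serial local solution stores all N entries on every processor, so total memory grows like P * N, while a ghosted vector stores only the owned entries plus a small ghost layer per processor.

```cpp
#include <cstddef>

// Total bytes across all processors if every rank holds the full
// global vector of n_global doubles (the serial-vector situation).
std::size_t serial_bytes(std::size_t n_global, std::size_t n_procs) {
  return n_procs * n_global * sizeof(double);
}

// Total bytes if each rank holds only its owned share of n_global
// entries plus ghosts_per_proc ghost copies (the ghosted situation).
std::size_t ghosted_bytes(std::size_t n_global, std::size_t n_procs,
                          std::size_t ghosts_per_proc) {
  return (n_global + n_procs * ghosts_per_proc) * sizeof(double);
}
```

With, say, one million dofs on 64 processors and a few thousand ghost dofs per processor, the serial layout costs tens of times more memory in total, and the gap widens as either the processor count or the number of systems grows.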