On Thu, 15 Jan 2009, Roy Stogner wrote:
>> (A local-to-global mapping seems to be provided by PETSc.)
>
> Interesting - like a sparsity pattern for vectors. Is it shared
> between multiple vectors?
I think so, but I'm not sure.
>> My idea would be that PetscVector creates the global-to-local mapping itself.
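For reference, the PETSc object behind such a mapping is ISLocalToGlobalMapping, which is reference counted, so one instance can indeed be attached to any number of vectors. A minimal sketch against a recent PETSc (the function signatures have gained arguments since 2009); the sizes and the index array are placeholders for whatever the DofMap would supply:

  #include <petscvec.h>

  /* One local-to-global mapping shared by two vectors. */
  PetscErrorCode share_mapping(PetscInt n_local, PetscInt N_global,
                               PetscInt n_total,
                               const PetscInt *local_to_global)
  {
    ISLocalToGlobalMapping mapping;
    Vec u, v;

    PetscCall(ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD, 1, n_total,
                                           local_to_global,
                                           PETSC_COPY_VALUES, &mapping));

    PetscCall(VecCreateMPI(PETSC_COMM_WORLD, n_local, N_global, &u));
    PetscCall(VecCreateMPI(PETSC_COMM_WORLD, n_local, N_global, &v));

    /* Attaching the mapping only bumps its reference count; both
     * vectors then accept local indices via VecSetValuesLocal(). */
    PetscCall(VecSetLocalToGlobalMapping(u, mapping));
    PetscCall(VecSetLocalToGlobalMapping(v, mapping));

    PetscCall(ISLocalToGlobalMappingDestroy(&mapping));
    PetscCall(VecDestroy(&u));
    PetscCall(VecDestroy(&v));
    return 0;
  }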
On Thu, 15 Jan 2009, Tim Kroeger wrote:
> But there is another problem (things turn out to be more difficult than I
> thought): In the ghost cell case, PETSc does not provide the appropriate
> global-to-local mapping that would be required for e.g.
> NumericVector::operator()(unsigned int).
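In other words, operator() has to translate a global index into a position in the ghosted local storage: the owned block translates by a constant offset, while the ghost entries need a lookup table that libMesh would have to build and keep itself. A sketch of that idea -- hypothetical names, not existing libMesh code:

  #include <map>
  #include <cassert>

  struct GhostedIndexMap
  {
    unsigned int first_local, last_local;  // owned range [first, last)
    std::map<unsigned int, unsigned int> ghost_to_local; // global -> slot

    // Owned indices map contiguously; ghost indices are sparse and go
    // through the map, which is filled once at construction time.
    unsigned int local_index (unsigned int global_i) const
    {
      if (global_i >= first_local && global_i < last_local)
        return global_i - first_local;

      std::map<unsigned int, unsigned int>::const_iterator
        it = ghost_to_local.find(global_i);
      assert (it != ghost_to_local.end()); // must be owned or ghosted
      return it->second;
    }
  };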
Dear Roy,
On Wed, 14 Jan 2009, Roy Stogner wrote:
> On Wed, 14 Jan 2009, Tim Kroeger wrote:
>
>> Now, the first problem I encounter -- but actually it's not really a
>> problem -- is that PETSc provides a way to automatically communicate
>> any changes of vector components to the other processors that may
>> have those indices as ghost indices.
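Concretely, that mechanism is the VecGhostUpdateBegin()/VecGhostUpdateEnd() pair, which works on any vector created with VecCreateGhost(); roughly:

  #include <petscvec.h>

  /* v is assumed to come from VecCreateGhost(). */
  PetscErrorCode sync_ghosts(Vec v)
  {
    /* owners -> ghosts: refresh the read-only ghost copies */
    PetscCall(VecGhostUpdateBegin(v, INSERT_VALUES, SCATTER_FORWARD));
    PetscCall(VecGhostUpdateEnd(v, INSERT_VALUES, SCATTER_FORWARD));

    /* ghosts -> owners: accumulate contributions made to ghost entries */
    PetscCall(VecGhostUpdateBegin(v, ADD_VALUES, SCATTER_REVERSE));
    PetscCall(VecGhostUpdateEnd(v, ADD_VALUES, SCATTER_REVERSE));
    return 0;
  }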
I'm pretty sure that Trilinos doesn't have this capability. I'm with
Roy in thinking that it would be better to code this once ourselves so
it will work for all NumericVectors.
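As a rough picture of what "coding it once" could look like: the exchange lives in the NumericVector base class and is expressed purely through the virtual interface, so PetscVector, EpetraVector, and friends all inherit it. The sketch below is deliberately naive -- hypothetical names, and an Allreduce over the full vector length -- to show the layering, not to be scalable:

  #include <mpi.h>
  #include <vector>

  // Hypothetical base class; only the members needed here are shown.
  class NumericVectorBase
  {
  public:
    virtual ~NumericVectorBase() {}
    virtual void add (unsigned int i, double value) = 0;
    virtual void close () = 0;

    // Backend-independent ghost exchange.  Precondition: ghost_indices
    // holds only indices this processor does NOT own.
    void exchange_ghost_contributions
      (const std::vector<unsigned int> &ghost_indices,
       const std::vector<double> &ghost_values,
       unsigned int global_size,
       unsigned int first_local, unsigned int last_local,
       MPI_Comm comm)
    {
      std::vector<double> contrib(global_size, 0.0), summed(global_size, 0.0);
      for (unsigned int k = 0; k < ghost_indices.size(); ++k)
        contrib[ghost_indices[k]] += ghost_values[k];

      // Sum everyone's contributions; owners then apply their slice.
      MPI_Allreduce(&contrib[0], &summed[0], static_cast<int>(global_size),
                    MPI_DOUBLE, MPI_SUM, comm);

      for (unsigned int i = first_local; i < last_local; ++i)
        if (summed[i] != 0.0)
          this->add(i, summed[i]);

      this->close();
    }
  };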
Derek
Dear Roy,
On Wed, 14 Jan 2009, Roy Stogner wrote:
> By "that" I just meant the stuff you've referred to as "the other
> side"; I don't have time to figure out the right PETSc APIs for
> part-contiguous, part-sparse vectors.
Okay, so I will start doing the NumericVector and the PetscVector
stuff.
On Wed, 14 Jan 2009, Tim Kroeger wrote:
>>> I think that adding the required functionality to NumericVector and
>>> PetscVector would not be too complicated (PETSc's VecCreateGhost()
>>> seems to do the trick). I can try to do this part myself, that is to
>>> add a constructor that additionally takes the list of ghost indices.
>>> I don't think that the (serial) grid itself is the decisive thing in
>>> my application. (I watched the memory consumption, and it remains
>>> small during the grid creation procedure and increases drastically
>>> when the systems are created and initialized.)
>>
>> Keep in mind, the serial mesh
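Regarding the VecCreateGhost() idea quoted above, a minimal sketch of what the extra constructor would boil down to (sizes and ghost indices made up):

  #include <petscvec.h>

  PetscErrorCode make_ghosted(Vec *v)
  {
    PetscInt n_local  = 100;           /* entries owned by this processor */
    PetscInt ghosts[] = {5, 42, 977};  /* hypothetical non-owned indices  */

    PetscCall(VecCreateGhost(PETSC_COMM_WORLD, n_local, PETSC_DETERMINE,
                             3, ghosts, v));

    /* Local access goes through the "local form": the owned block
     * followed contiguously by the ghost entries. */
    Vec vl;
    PetscCall(VecGhostGetLocalForm(*v, &vl));
    /* ... VecGetArray(vl, ...) to read owned and ghost values ... */
    PetscCall(VecGhostRestoreLocalForm(*v, &vl));
    return 0;
  }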
Dear libMesh team,
Is there any chance that the memory scaling of
System::current_local_solution will be improved in the near future?
In my application, I have a large number of systems and a large number
of cells, and the fact that System::current_local_solution is always a
serial vector seems to dominate the memory consumption.
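To make the scaling concrete with made-up but representative numbers: with N = 1,000,000 dofs, p = 64 processors, and s = 20 systems, serial current_local_solution vectors cost every processor about s*N*8 bytes = 160 MB, no matter how many processors are added. A ghosted vector would instead cost roughly s*(N/p + n_ghost)*8 bytes, i.e. a few MB here, and it shrinks as p grows.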