On Wed, 10 Dec, 2014 at 10:01 AM, Johan Hake <[email protected]> wrote:


On Wed, Dec 10, 2014 at 10:07 AM, Garth N. Wells <[email protected]> wrote:


On Wed, 10 Dec, 2014 at 8:59 AM, Johan Hake <[email protected]> wrote:
On Tue, Dec 9, 2014 at 11:56 PM, Garth N. Wells <[email protected]> wrote:

> On 9 Dec 2014, at 20:10, Johan Hake <[email protected]> wrote:
>
>
>
> On Tue, Dec 9, 2014 at 7:20 PM, Garth N. Wells <[email protected]> wrote:
>
> > On 9 Dec 2014, at 18:12, Johan Hake <[email protected]> wrote:
> >
> > In a local branch I have now stripped out the whole C++ implementation of the GenericVector indexing. I have moved all the index-checking logic to the Python layer. I have also removed all usage of slices, as slicing really does not make sense in parallel. The following now works:
> >
> >  v[indices] = values
> >
> > where indices and values can be:
> >
> > 1) indices: some int; values must be scalar
> > 2) indices: list of ints or ndarray of ints; values can be either scalar or ndarray
> >
> > indices must be in the range [0, local_size).
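
For concreteness, roughly how the proposed interface would be used from a Python script (a sketch only; the mesh/function setup is illustrative and the index dtype 'intc' is an assumption about what the wrapper accepts):

  import numpy as np
  from dolfin import *

  mesh = UnitSquareMesh(8, 8)
  V = FunctionSpace(mesh, "CG", 1)
  v = Function(V).vector()

  # 1) single int index with a scalar value
  v[0] = 1.0

  # 2) list/ndarray of local indices with a scalar or an ndarray of values
  idx = np.arange(min(10, v.local_size()), dtype='intc')
  v[idx] = 2.0
  v[idx] = np.linspace(0.0, 1.0, len(idx))

  # per the proposal, apply("insert") is called inside __setitem__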
>
> Does the range [0, local_size) include ghost values?
>
> Not intentionally. Can these be set through the set_local interface?

Through set_local(const double*, std::size_t, const dolfin::la_index*), yes.

> local_size is given by GenericVector::local_size().
>

The concept of ‘local_size’ is a bit vague (the PETSc documentation cautions against the use of VecGetLocalSize), e.g. should it include ghost values or not?

To me it seems like VecGetLocalSize returns only the locally owned dofs.

It returns PETSc's concept of the local size. It breaks the abstraction because it makes an assumption about the underlying storage which is not necessarily valid for all implementations.

What do you mean by different implementations? Different usage?

Implementations. The 'feature' of PETSc, Tpetra, etc. is that they abstract away details of the internal representation/storage. For example, the concept of 'local size' is different for Tpetra and PETSc.

But the local_range is always fixed (as long as we do not repartition), right?

We don't have a consistent concept of local_range. We could go with 'all values present on a process'. This would include ghosts/shared entries.

If so, we can assume that only dofs within the local range can be set, at least from the NumPy interface. These dofs are set using local numbering via set_local. Aren't ghost values, at least for now, stored after the owned dof range, so that as long as we keep our indices below local_range[1] - local_range[0] we should be fine?

You first need to get the ghosted form of the vector with VecGhostGetLocalForm and then call VecGetSize on that to get the size of the local + ghosted vector. It also seems that local_size is the same as local_range[1] - local_range[0] regardless of the presence of any ghosted dofs.
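
For reference, the relation described here can be checked directly from the DOLFIN Python side (a sketch; it only uses the existing local_size()/local_range() methods):

  from dolfin import *

  mesh = UnitSquareMesh(8, 8)
  V = FunctionSpace(mesh, "CG", 1)
  v = Function(V).vector()

  r0, r1 = v.local_range()
  # local_size() counts only the owned entries, so it matches the
  # ownership range even when ghost entries are present
  assert v.local_size() == r1 - r0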

Can you give an example where the ghost dofs are set using set_local?


During assembly. set_local(const double*, std::size_t, const dolfin::la_index*) takes local indices - the actual data can reside elsewhere.

So the local size will vary during assembly?

When a vector is created, a local-to-global map for the vector is set.

And does that then fix the local_size?


No, because the local-to-global map can contain off-process entries. The local indices are in [0, n), but the target entry may reside on a different process, e.g. entries [0, m) on this process and [m, n) on another process.
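
A toy picture of this, in plain NumPy (purely illustrative numbers, nothing DOLFIN-specific):

  import numpy as np

  # hypothetical local-to-global map on one process: local indices 0..5,
  # where the last two entries target dofs owned by another process
  local_to_global = np.array([10, 11, 12, 13, 40, 41])
  m = 4                        # entries [0, m) are owned by this process
  n = len(local_to_global)     # entries [m, n) live on another process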


My understanding of Tpetra is that it doesn’t have a concept of ‘ownership’, i.e. vector entries can be stored on more than one process and are updated via a function call. No single process is designated as the ‘owner’.

So you have shared dofs instead of dedicated owned and ghosted dofs?

Yes.

That will of course make things more complicated...

In terms of interfacing to NumPy, yes. At a lower level in DOLFIN I think it will make things simpler.

Ok, are we going to change DOLFIN's concept of owning dofs (think of what we do in the dofmap) when Tpetra is added?

This is a different topic because it's not pure linear algebra - we can decide how dof ownership is defined.

I'm leaning towards 'local' meaning all available entries on a process, which in some cases will mean duplication for shared/ghost values.

Garth

Johan




Garth


Johan



Garth

> > If indices and values are all of the correct type and range, GenericVector.set_local(indices, values) is eventually called, followed by a call to apply("insert"). If an error occurs it will be caught in the __setitem__ method and apply("insert") is called in the except clause. The latter is to avoid deadlocks.
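
Roughly, the pattern described above looks like the following (a sketch only; the helper name is made up and the set_local argument order follows the description in this thread, not necessarily the actual branch):

  def _setitem(v, indices, values):
      try:
          v.set_local(indices, values)   # local indices and values
      except (TypeError, IndexError):
          # make sure every process still reaches apply() to avoid a deadlock
          v.apply("insert")
          raise
      v.apply("insert")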
> >
> > In addition, boolean array indexing works:
> >
> >   v[v<5.] = 5.0
> >
> > This is obviously restricted to local values.
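
For anyone doing this by hand today, the same effect can be obtained via get_local/set_local (a sketch; it operates only on the locally owned values):

  x = v.get_local()      # NumPy copy of the locally owned values
  x[x < 5.0] = 5.0
  v.set_local(x)
  v.apply("insert")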
> >
> > I settled on calling apply("insert") inside the __setitem__ method. If a user wants more fine-grained control, they can use set_local directly and then take responsibility for calling apply("insert") themselves.
> >
> > What this new python layer implementation does not cover is slice assignments. Typically:
> >
> >   v[0:20:2] = 1.0
> >
> > But I am not aware of anyone who uses it, and it really does not make sense in a parallel setting.
> >
> > Even though this is a pretty big change close to a release, I think it is long overdue and should go in before the 1.5 release.
> >
> > The branch will be ready for review at the end of this week, but any comments so far are highly appreciated.
> >
>
> I can take a look early next week.
>
> Cool.
>
> Johan
>
>
>
> Garth
>
> > Johan
> >
> >
> >
> >
> >
> > On Fri, Nov 28, 2014 at 3:59 PM, Martin Sandve Alnæs <[email protected]> wrote:
> > If doing low level editing of vector values, yes.
> >
> > Unless we set dirty flags on __setitem__, and call apply elsewhere whenever an updated vector is needed, as discussed before.
> >
> > There are probably a lot of common operations for which we can add high-level utility functions, so that they can be performed without accessing the vector directly, making this issue rarer.
> >
> > Martin
> >
> >
> > On 28 November 2014 at 15:45, Johan Hake <[email protected]> wrote:
> > Are you saying that calling apply should be up to the user?
> >
> > Johan
> >
> > On Fri, Nov 28, 2014 at 3:39 PM, Martin Sandve Alnæs <[email protected]> wrote:
> > I think there's a lot of merit to the concept of using NumPy views of the local vectors and requiring apply calls to communicate.
> >
> > Martin
> >
> > 28. nov. 2014 15:04 skrev "Garth N. Wells" <[email protected]>:
> >
> > On Thu, 27 Nov, 2014 at 7:38 PM, Johan Hake <[email protected]> wrote:
> > Hello!
> >
> > In some code of mine I use the index interface to set local dofs in a vector. It turns out that v[indices] = some_values uses the GenericVector::set function instead of GenericVector::set_local. This means that one needs to pass global indices.
> >
> > I typically use the slicing together with some combination of indices obtained from the vertex_to_dofs functionality. However, that now returns local dofs, so it makes more sense to switch the behavior of v[indices] to use local dofs.
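
As an illustration of that use case (a sketch; vertex_to_dof_map is assumed to be the free function from recent DOLFIN versions, and the indexing assumes the proposed local-index behaviour):

  from dolfin import *

  mesh = UnitSquareMesh(8, 8)
  V = FunctionSpace(mesh, "CG", 1)
  v = Function(V).vector()

  dofs = vertex_to_dof_map(V)            # local dof index per local vertex
  owned = dofs[dofs < v.local_size()]    # keep indices within the owned range
  v[owned] = 3.0                         # local indices, per the proposal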
> >
> > Any objections against switching to local indices in v[indices]?
> >
> > I don't have any objections, but I also don't have a clear view of how we should interact with distributed vectors from Python with regard to the NumPy wrapping. It's a bigger job, but it would be nice to think this through for a consistent interaction between distributed DOLFIN vectors and their wrapping as NumPy objects.
> >
> > Garth
> >
> >
> > Johan
> >
> >
> >
> >
> >





_______________________________________________
fenics mailing list
[email protected]
http://fenicsproject.org/mailman/listinfo/fenics
