Re: [lng-odp] thread/shmem discussion summary V4

2016-06-09 Thread Bill Fischofer
On Thu, Jun 9, 2016 at 9:38 AM, Jerin Jacob 
wrote:

> On Thu, Jun 09, 2016 at 03:20:23PM +0200, Christophe Milard wrote:
> > Yes, we do have name lookup functions to get handles, but I cannot see
> > what would make these the only way to retrieve handles from ODP.
> > Why couldn't your first process send the handle to the second process
> > via another IPC, e.g. unix sockets, signaling, message passing or even
> > writing/reading to a file?
> > My point is: Do you agree that an ODP handle for a given ODP object is
> > unique within an ODP instance, meaning that if a thread B gets the
> > handle from process A, regardless of how, B will be able to use that
> > handle...
> >  Or
> > Do you think that your lookup functions could return 2 different
> > handles in 2 different threads (for the same ODP object name), meaning
> > that the scope of the handle is actually the ODP thread?
>
Two different processes may return 2 different handles for a physical
resource in a multiprocess environment. Portable applications should
not send the handles to other processes by other means (like IPC)
and access them there. Instead, use the lookup API to get the handle in
the other process for a given ODP resource.


That's a very good point that deserves to be highlighted in the user guide.
The general rule is that applications should not attempt to cache values
"for efficiency" in lieu of calling ODP APIs if they want best portability.

Re: [lng-odp] thread/shmem discussion summary V4

2016-06-09 Thread Christophe Milard
to start with: Thanks Bill for taking the time to answer :-). I DO appreciate it.

The only thing I really want us to agree on is that your approach is
really a handles-only approach (when communicating between ODP
threads), regardless of what a thread is.
Because if we don't define the behaviour in the other cases, then it cannot be used.

If we agree on that, then at least I can say I understand your
proposal. As long as you say "this is the minimum we guarantee", and
the "maximum" is not defined, I feel anyone will have to use the
minimum. => handles

I am afraid a handles-only approach would have too poor performance on
some systems; yes, this is another story, and maybe it is even wrong. To
start with, I just want to understand how your proposal could be used
by a programmer.


Re: [lng-odp] thread/shmem discussion summary V4

2016-06-09 Thread Jerin Jacob
On Thu, Jun 09, 2016 at 03:20:23PM +0200, Christophe Milard wrote:
> Yes, we do have name lookup functions to get handles, but I cannot see
> what would make these the only way to retrieve handles from ODP.
> Why couldn't your first process send the handle to the second process
> via another IPC, e.g. unix sockets, signaling, message passing or even
> writing/reading to a file?
> My point is: Do you agree that an ODP handle for a given ODP object is
> unique within an ODP instance, meaning that if a thread B gets the
> handle from process A, regardless of how, B will be able to use that
> handle...
>  Or
> Do you think that your lookup functions could return 2 different
> handles in 2 different threads (for the same ODP object name), meaning
> that the scope of the handle is actually the ODP thread?

Two different processes may return 2 different handles for a physical
resource in a multiprocess environment. Portable applications should
not send the handles to other processes by other means (like IPC)
and access them there. Instead, use the lookup API to get the handle in
the other process for a given ODP resource.

Jerin


Re: [lng-odp] thread/shmem discussion summary V4

2016-06-09 Thread Bill Fischofer
On Thu, Jun 9, 2016 at 9:01 AM, Christophe Milard <
christophe.mil...@linaro.org> wrote:

> On 9 June 2016 at 14:30, Bill Fischofer  wrote:
> >
> >
> > On Thu, Jun 9, 2016 at 7:13 AM, Christophe Milard
> >  wrote:
> >>
> >> Bill:
> >> S19: When you write:"These addresses are intended to be used within
> >> the scope of the calling thread and should not be assumed to have any
> >> validity outside of that context", do you really mean this, regardless
> >> on how ODP threads are implemented?
> >
> >
> > Yes. This is simply a statement of the minimum guarantees that ODP makes.
> > Applications that wish to ensure maximum portability should follow this
> > practice. If they wish to make use of special knowledge outside of what
> the
> > ODP API guarantees they are free to do so with the understanding that
> they
> > are trading off portability for some other benefit known to them.
>
> But who is expected to provide that "special knowledge outside of what
> the ODP API guarantees"? I guess that the fact that a packet data
> address returned by an ODP API call is valid within all odpthreads
> sharing the pool (i.e. if you fork after pool create, you are fine)
> belongs to that "special knowledge"...
> Where do you expect the user to find this kind of information if not
> in the ODP docs?

> How would a user know that a child process can make use of an odp
> pointer after fork? The user then has to know whether the pointer
> points to a shared page or not... These are things that the odp
> implementation knows...
>

The object you are sharing is the packet (represented by its handle), not a
pointer to some part of a packet mapping. That's the key distinction.
Separating object manipulation from object mapping is one of the key
distinguishing features of ODP that makes it possible to implement on a
wide array of platforms that have very different internal memory models.

Pointers are needed only if you want to request unmediated access to some
part of a packet contents. If you're manipulating packet headroom or
tailroom, for example, you're using the packet handle, not pointers. If
you're splitting a packet you use packet handles, not pointers. If you're
adding, removing, or copying data from a packet you're using handles, not
pointers. The ODP APIs are handle-oriented, not pointer oriented, so that's
what you share between various parts of your application.


>
> how would a user dare assume things that are not documented? When
> using pthreads, who says odp_init_local() does not invalidate some of
> the ODP pointers?
>
> As a programmer, I'd go for the rule: undefined = don't use.
>
> following that rule would make me use handles only. Portable: yes.
> Acceptable performance wise... not sure...
>

No, that's not what we're saying. What we're saying is that packets are
represented by handles and programs communicate by passing these handles
around via event queues. If you want to look at the contents of a packet
you can obtain addressability to portions of a packet via well-defined ODP
API calls and those pointers have meaning for you. When someone else
obtains access to the packet handle it may do the same.

What is the specific use case you're worried about?  Petri mentioned
wanting to have multiple threads work on a single packet at the same time.
Ignore for a moment the rarity of such cases (the normal packet processing
model is one packet per thread and the reason why we have things like
ordered queues is so that you can process multiple packets in a single flow
concurrently) this is trivially handled today without further
specification. Presumably each thread wants to work on a different offset
in the packet, so each such thread does so by simply calling
odp_packet_offset(pkt, myoffset, &len, NULL) to get addressability to
that part of the packet and goes about its business.  Note that there is no
other way you could program this since you cannot "compute" a proper packet
address from another address because of possible discontinuities due to
segment boundaries. That's what the ODP API does, so you just use it as
intended.
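As a sketch of the pattern Bill describes (not standalone compilable: it assumes `<odp_api.h>` and an initialized ODP instance, with `odp_packet_offset()` behaving as specified by the ODP packet API; the `worker()` wrapper and its parameters are invented for illustration), each worker addresses only its own region of the shared packet through the API, never by computing an address from another worker's pointer:

```c
#include <odp_api.h>
#include <string.h>

/* Hypothetical worker: gains addressability to its own byte range of a
 * shared packet via the API, honoring segment boundaries. */
static void worker(odp_packet_t pkt, uint32_t my_offset, uint32_t my_len)
{
	uint32_t avail;	/* contiguous bytes available at my_offset */
	uint8_t *p = odp_packet_offset(pkt, my_offset, &avail, NULL);

	if (p == NULL)
		return;	/* my_offset is beyond the packet length */

	uint32_t n = avail < my_len ? avail : my_len;
	memset(p, 0, n);	/* touch only this worker's region */

	/* If my_len > avail, the region crosses a segment boundary:
	 * repeat the odp_packet_offset() call at my_offset + n. */
}
```

The returned length tells the worker how far the contiguous mapping extends, which is exactly why addresses cannot be "computed" across segments.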



Re: [lng-odp] thread/shmem discussion summary V4

2016-06-09 Thread Christophe Milard
On 9 June 2016 at 14:30, Bill Fischofer  wrote:
>
>
> On Thu, Jun 9, 2016 at 7:13 AM, Christophe Milard
>  wrote:
>>
>> Bill:
>> S19: When you write:"These addresses are intended to be used within
>> the scope of the calling thread and should not be assumed to have any
>> validity outside of that context", do you really mean this, regardless
>> on how ODP threads are implemented?
>
>
> Yes. This is simply a statement of the minimum guarantees that ODP makes.
> Applications that wish to ensure maximum portability should follow this
> practice. If they wish to make use of special knowledge outside of what the
> ODP API guarantees they are free to do so with the understanding that they
> are trading off portability for some other benefit known to them.

But who is expected to provide that "special knowledge outside of what
the ODP API guarantees"? I guess that the fact that a packet data
address returned by an ODP API call is valid within all odpthreads
sharing the pool (i.e. if you fork after pool create, you are fine)
belongs to that "special knowledge"...
Where do you expect the user to find this kind of information if not
in the ODP docs?
How would a user know that a child process can make use of an odp
pointer after fork? The user then has to know whether the pointer
points to a shared page or not... These are things that the odp
implementation knows...

how would a user dare assume things that are not documented? When
using pthreads, who says odp_init_local() does not invalidate some of
the ODP pointers?

As a programmer, I'd go for the rule: undefined = don't use.

following that rule would make me use handles only. Portable: yes.
Acceptable performance wise... not sure...


Re: [lng-odp] thread/shmem discussion summary V4

2016-06-09 Thread Christophe Milard
Yes, we do have name lookup functions to get handles, but I cannot see
what would make these the only way to retrieve handles from ODP.
Why couldn't your first process send the handle to the second process
via another IPC, e.g. unix sockets, signaling, message passing or even
writing/reading to a file?
My point is: Do you agree that an ODP handle for a given ODP object is
unique within an ODP instance, meaning that if a thread B gets the
handle from process A, regardless of how, B will be able to use that
handle...
 Or
Do you think that your lookup functions could return 2 different
handles in 2 different threads (for the same ODP object name), meaning
that the scope of the handle is actually the ODP thread?

Thanks,

Christophe.

Re: [lng-odp] thread/shmem discussion summary V4

2016-06-09 Thread Jerin Jacob
On Thu, Jun 09, 2016 at 02:13:49PM +0200, Christophe Milard wrote:
> Bill:
> S19: When you write:"These addresses are intended to be used within
> the scope of the calling thread and should not be assumed to have any
> validity outside of that context", do you really mean this, regardless
> on how ODP threads are implemented?
> 
> Jerin:
> S18: your answer to S18 is confusing me. see my returned question in
> the doc. please answer it :-)

|Christophe to Jerin: if 2 different ODP threads are expected to retrieve
|each its own handle
|via lookup calls, aren't you saying that handles are NOT valid over the
|whole ODP
|instance? If you are just saying that ODP may provide additional APIs to
|retrieve handles,
|then fine, as long as the usage of these APIs is optional and passing
|the handles via IPC is as good

We do have ODP APIs for name-to-handle lookup. Consider, for instance, a
_true_ multi-process ODP implementation, i.e. the second process is NOT
derived from the first process (two ODP processes started at two terminals
on different cores, one as primary and the other as secondary). In such
cases the only way to share the handles is through the ODP lookup APIs.

Jerin
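For reference, the primary/secondary pattern Jerin describes looks roughly like this. It is a non-compilable sketch: it assumes `<odp_api.h>`, an ODP instance shared by both processes, and the `odp_shm_reserve()`/`odp_shm_lookup()`/`odp_shm_addr()` calls from the ODP shared-memory API; the block name "my_shm" is made up.

```c
#include <odp_api.h>

/* Primary process: create the block under a well-known name. */
static void primary(void)
{
	odp_shm_t shm = odp_shm_reserve("my_shm", 4096,
					ODP_CACHE_LINE_SIZE, 0);
	(void)shm;
}

/* Secondary process (same ODP instance, started independently):
 * obtain its own handle by name. The handle value may differ from the
 * primary's, but it designates the same underlying object. */
static void secondary(void)
{
	odp_shm_t shm = odp_shm_lookup("my_shm");
	void *addr = odp_shm_addr(shm);	/* valid in this process only */
	(void)addr;
}
```

The name is the portable token here; neither the raw handle value nor the mapped address should be assumed meaningful outside the process that obtained it.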

> 
> All: I have updated the doc with a summary table and the comments from
> Jon Rosen. Please check that what I wrote matches what you think.
> 
> I have also added a list of possible behaviours (numbered A to F) for
> S19. Can you read these and pick your choice? If none of these
> alternatives matches your choice, please add your description.
> 
> Christophe
> 
> On 9 June 2016 at 02:53, Bill Fischofer  wrote:
> >
> >
> > On Wed, Jun 8, 2016 at 10:54 AM, Yi He  wrote:
> >>
> >> Hi, Christophe and team
> >>
> >> For the shmem part:
> >>
> >> S18: yes
> >> S19, S20: I agree; Bill's comments are very accurate and formalized.
> >>
> >> S21: to cover the case of shm addresses (pointers) passed within data
> >> structures
> >>
> >> especially think of cases:
> >> 3) Context pointer getters (odp_queue_context(),
> >> odp_packet_user_ptr(), odp_timeout_user_ptr(),
> >> odp_tm_node_context(), odp_tm_queue_context(), and
> >> I agree with the solution; I only feel it may meet difficulties
> >> at runtime:
> >>
> >> According to man page:
> >> Using shmat() with shmaddr equal to NULL is the preferred,
> >> portable way of attaching a shared memory segment. Be aware
> >> that the shared memory segment attached in this way may be
> >> attached at different addresses in different processes.
> >> Therefore, any pointers maintained within the shared memory
> >> must be made relative (typically to the starting address of
> >> the segment), rather than absolute.
> >> I found an alternate sweetheart only available in Windows,
> >> called based pointers (C++)
> >> https://msdn.microsoft.com/en-us/library/57a97k4e.aspx
> >>
> >> Maybe we can spent some time to look for a counterpart in
> >> std C world, that will be perfect.
> >
> >
> > For students of programming history, or those old enough to remember, the
> > concept of BASED storage originated in the (now ancient) programming
> > language PL/I. Pointers to BASED storage were stored as offsets and the
> > compiler automatically handled the relative addressing. They are very
> > convenient for this sort of purpose.
> >
> >>
> >>
> >> S23: agree with Bill's comments covered the cases.
> >>
> >> Thanks and best regards, Yi
> >>
> >> On 8 June 2016 at 17:04, Christophe Milard 
> >> wrote:
> >>>
> >>> OK. good that you agree (and please update the shared doc so it
> >>> becomes the central point of information).
> >>> There is something I like though, in your willingness to decouple the
> >>> function and the pinning...
> >>> Even if I am not sure this can be enforced by ODP at all time (as
> >>> already stated), there is definitively a point in helping the
> >>> application that with to do so. So please keep an eye on that!
> >>>
> >>> Your opinion on S19 S20 and S21 would be very welcome as well... This
> >>> is the main hurting point
> >>>
> >>> Christophe
> >>>
> >>> On 8 June 2016 at 09:41, Yi He  wrote:
> >>> > Hi, thanks Christophe and happy to discuss with and learn from team in
> >>> > yesterday's ARCH call :)
> >>> >
> >>> > The question which triggers this kind of thinking is: how to use ODP as
> >>> > a
> >>> > framework to produce re-usable building blocks to compose "Network
> >>> > Function
> >>> > Instance" in runtime, since until runtime it will prepare resources for
> >>> > function to settle down, thus comes the thought of seperating function
> >>> > implementation and launching.
> >>> >
> >>> > I agree your point it seems upper layer consideration, I'll have some
> >>> > time
> >>> > to gain deeper understanding and knowledge on how upstair playings,
> >>> > thus I
> >>> > agree to the current S11, S12, S13, S14, S15, S16, S17 approach and we
> >>> > can
> >>> > revisit if really realized upper layer programming practise/model

Re: [lng-odp] thread/shmem discussion summary V4

2016-06-09 Thread Bill Fischofer
On Thu, Jun 9, 2016 at 7:13 AM, Christophe Milard <
christophe.mil...@linaro.org> wrote:

> Bill:
> S19: When you write "These addresses are intended to be used within
> the scope of the calling thread and should not be assumed to have any
> validity outside of that context", do you really mean this, regardless
> of how ODP threads are implemented?
>

Yes. This is simply a statement of the minimum guarantees that ODP makes.
Applications that wish to ensure maximum portability should follow this
practice. If they wish to make use of special knowledge outside of what the
ODP API guarantees they are free to do so with the understanding that they
are trading off portability for some other benefit known to them.



Re: [lng-odp] thread/shmem discussion summary V4

2016-06-09 Thread Christophe Milard
Bill:
S19: When you write "These addresses are intended to be used within
the scope of the calling thread and should not be assumed to have any
validity outside of that context", do you really mean this, regardless
of how ODP threads are implemented?

Jerin:
S18: your answer to S18 is confusing me. See my question in
the doc; please answer it :-)

All: I have updated the doc with a summary table and the comments from
Jon Rosen. Please check that what I wrote matches what you think.

I have also added a list of possible behaviours (numbered A to F) for
S19. Can you read these and pick your choice? If none of these
alternatives matches your choice, please add your description.

Christophe


Re: [lng-odp] thread/shmem discussion summary V4

2016-06-08 Thread Bill Fischofer
On Wed, Jun 8, 2016 at 10:54 AM, Yi He  wrote:

> Hi, Christophe and team
>
> For the shmem part:
>
> S18: yes
> S19, S20: I agree; Bill's comments are very accurate and well formalized.
>
> S21: to cover the case shm addresses(pointers) passed within data structure
>
> especially think of cases:
> 3) Context pointer getters (odp_queue_context(),
> odp_packet_user_ptr(), odp_timeout_user_ptr(),
> odp_tm_node_context(), odp_tm_queue_context(), etc.)
> I agree with the solution, only feel it may meet difficulties
> at runtime:
>
> According to man page:
> Using shmat() with shmaddr equal to NULL is the preferred,
> portable way of attaching a shared memory segment. Be aware
> that the shared memory segment attached in this way may be
> attached at *different addresses in different processes*.
> Therefore, *any pointers maintained within the shared memory
> must be made relative* (typically to the starting address of
> the segment), rather than absolute.
> I found an alternate sweetheart only available in Windows,
> called based pointers (C++)
> https://msdn.microsoft.com/en-us/library/57a97k4e.aspx
>
> Maybe we can spend some time looking for a counterpart in
> the standard C world; that would be perfect.
>

For students of programming history, or those old enough to remember, the
concept of BASED storage originated in the (now ancient) programming
language PL/I. Pointers to BASED storage were stored as offsets and the
compiler automatically handled the relative addressing. They are very
convenient for this sort of purpose.



Re: [lng-odp] thread/shmem discussion summary V4

2016-06-08 Thread Yi He
Hi, Christophe and team

For the shmem part:

S18: yes
S19, S20: I agree; Bill's comments are very accurate and well formalized.

S21: to cover the case shm addresses(pointers) passed within data structure

especially think of cases:
3) Context pointer getters (odp_queue_context(),
odp_packet_user_ptr(), odp_timeout_user_ptr(),
odp_tm_node_context(), odp_tm_queue_context(), etc.)
I agree with the solution, only feel it may meet difficulties
at runtime:

According to man page:
Using shmat() with shmaddr equal to NULL is the preferred,
portable way of attaching a shared memory segment. Be aware
that the shared memory segment attached in this way may be
attached at *different addresses in different processes*.
Therefore, *any pointers maintained within the shared memory
must be made relative* (typically to the starting address of
the segment), rather than absolute.
I found an alternate sweetheart only available in Windows,
called based pointers (C++)
https://msdn.microsoft.com/en-us/library/57a97k4e.aspx

Maybe we can spend some time looking for a counterpart in
the standard C world; that would be perfect.

S23: agree with Bill's comments covered the cases.

Thanks and best regards, Yi

On 8 June 2016 at 17:04, Christophe Milard 
wrote:

> OK, good that you agree (and please update the shared doc so it
> becomes the central point of information).
> There is something I like, though, in your willingness to decouple the
> function and the pinning...
> Even if I am not sure this can be enforced by ODP at all times (as
> already stated), there is definitely a point in helping the
> application that wishes to do so. So please keep an eye on that!
>
> Your opinion on S19, S20 and S21 would be very welcome as well... This
> is the main point of contention.
>
> Christophe
>
> On 8 June 2016 at 09:41, Yi He  wrote:
> > Hi, thanks Christophe, and happy to discuss with and learn from the team
> > in yesterday's ARCH call :)
> >
> > The question which triggers this kind of thinking is: how to use ODP as a
> > framework to produce re-usable building blocks to compose a "Network
> > Function Instance" at runtime, since only at runtime will resources be
> > prepared for the function to settle down; thus came the thought of
> > separating function implementation and launching.
> >
> > I agree with your point that this seems an upper-layer consideration.
> > I'll take some time to gain deeper understanding of how the upper layers
> > work, thus I agree with the current S11, S12, S13, S14, S15, S16, S17
> > approach, and we can revisit if we realize that an upper-layer
> > programming practice/model affects ODP's design in this aspect.
> >
> > Best Regards, Yi
> >
> > On 7 June 2016 at 16:47, Christophe Milard  >
> > wrote:
> >>
> >> On 7 June 2016 at 10:22, Yi He  wrote:
> >> > Hi, team
> >> >
> >> > I'm sending my thoughts on the ODP thread part:
> >> >
> >> > S1, S2, S3, S4
> >> > Yi: Yes
> >> > This set of statements defines the ODP thread concept as a
> >> > higher-level abstraction of the underlying concurrent execution context.
> >> >
> >> > S5, S6, S7, S8, S9, S10:
> >> > Yi: Yes
> >> > This set of statements adds several constraints upon the ODP
> >> > instance concept.
> >> >
> >> > S11, S12, S13, S14, S15, S16, S17:
> >> > Yi: Not very much
> >> >
> >> > Currently an ODP application still mixes the Function Implementation
> >> > and its Deployment; by this I mean that in ODP application code there
> >> > is code to implement the Function, and there is also code to deploy the
> >> > Function onto platform resources (CPU cores).
> >> >
> >> > I'd like to propose a programming practice/model as an attempt to
> >> > decouple these two things: ODP application code focuses only on
> >> > implementing the Function, and leaves it to a deployment script or
> >> > launcher to take care of the deployment with a different resource spec
> >> > (and a sufficiency check as well).
> >>
> >> Well, that goes straight against Barry's will that apps should pin
> >> their tasks on CPUs...
> >> Please join the public call today: if Barry is there you can discuss
> >> this; I am happy to hear other voices in this discussion.
> >>
> >> >
> >> > /* Use Case: an upper-layer orchestrator, say Daylight, is deploying
> >> > * an ODP application (can be a VNF instance in my opinion) onto
> >> > * some platform:
> >> > */
> >> >
> >> > /* The application can accept command line options */
> >> > --list-launchable
> >> > list all algorithm functions that need to be arranged
> >> > into an execution unit.
> >> >
> >> > --preparing-threads
> >> > control<1,2,3>@cpu<1>, worker<1,2,3,4>@cpu<2,3,4,5>
> >> > I'm not sure if the control/worker distinction is needed
> >> > anymore; DPDK has a similar concept, the lcore.
> >> >
> >> >   --wait-launching-signal
> >> >   the main process can pause after preparing the above threads but
> >> > before launching algorithm functions; here the orchestrator can
> >> > further fine-tune the threads by 

Re: [lng-odp] thread/shmem discussion summary V4

2016-06-08 Thread Christophe Milard
Thanks Jerin,

I have merged your comments into the doc: there were many points you
did not comment on (I was not sure whether the yes below a group applied to
the whole group of statements). You can review the doc to make sure I did not
misrepresent your opinions :-)

Once again, thanks.

Christophe.


Re: [lng-odp] thread/shmem discussion summary V4

2016-06-08 Thread Jerin Jacob
On Fri, Jun 03, 2016 at 11:15:43AM +0200, Christophe Milard wrote:
> since V3: Update following Bill's comments
> since V2: Update following Barry and Bill's comments
> since V1: Update following arch call 31 may 2016
> 
> This is an attempt to sum up the discussions around the thread/process
> that have been happening these last weeks.
> Sorry for the formalism of this mail, but it seems we need accuracy
> here...
> 
> This summary is organized as follows:
> 
> It is a set of statements, each of them expecting a separate answer
> from you. When no specific ODP version is specified, the statement
> regards the "ultimate" goal (i.e. what we want eventually to achieve).
> Each statement is prefixed with:
>   - a statement number for further reference (e.g. S1)
>   - a status word (one of 'agreed', 'open', or 'closed').
> Agreed statements expect a yes/no answer: 'yes' meaning that you
> acknowledge that this is your understanding of the agreement and will
> not nack an implementation based on this statement. You can comment
> after a yes, but your comment will not block any implementation based
> on the agreed statement. A 'no' implies that the statement does not
> reflect your understanding of the agreement, or you refuse the
> proposal.
> Any 'no' received on an 'agreed' statement will push it back as 'open'.
> Open statements are fully open for further discussion.
> 
> S1  -agreed: an ODP thread is an OS/platform concurrent execution
> environment object (as opposed to an ODP object). No more specific
> definition is given by the ODP API itself.
> 
> Barry: YES
> Bill: Yes

Jerin: Yes

> 
> ---
> 
> S2  -agreed: Each ODP implementation must tell what is allowed to be
> used as ODP thread for that specific implementation: a linux-based
> implementation, for instance, will have to state whether odp threads
> can be linux pthread, linux processes, or both, or any other type of
> concurrent execution environment. ODP implementations can put any
> restriction they wish on what an ODP thread is allowed to be. This
> should be documented in the ODP implementation documentation.
> 
> Barry: YES
> Bill: Yes

Jerin: Yes. Apart from a Linux process- or thread-based implementation, an
implementation may choose a bare-metal execution environment (i.e.
without any OS).

> 
> ---
> 
> S3  -agreed: in the linux generic ODP implementation an odp thread will be
> either:
> * a linux process descendant (or same as) the odp instantiation
> process.
> * a pthread 'member' of a linux process descendant (or same
> as) the odp instantiation process.
> 
> Barry: YES
> Bill: Yes
> 
> ---
> 
> S4  -agreed: For monarch, the linux generic ODP implementation only
> supports odp thread as pthread member of the instantiation process.
> 
> Barry: YES
> Bill: Yes
> 
> ---
> 
> S5  -agreed: whether multiple instances of ODP can be run on the same
> machine is left as an implementation decision. The ODP implementation
> document should state what is supported; any restriction is allowed.
> 
> Barry: YES
> Bill: Yes

Jerin: Yes

> 
> ---
> 
> S6  -agreed: The l-g odp implementation will support multiple odp
> instances whose instantiation processes are different and not
> ancestor/descendant of each other. Different instances of ODP will,
> of course, be restricted in sharing common OS resources (the total
> amount of memory available for each ODP instance may decrease as the
> number of instances increases, access to network interfaces will
> probably be granted to the first instance grabbing the interface and
> denied to others... some other rules may apply when sharing other
> common ODP resources).
> 
> Bill: Yes
> 
> ---
> 
> S7  -agreed: the l-g odp implementation will not support multiple ODP
> instances initiated from the same linux process (calling
> odp_init_global() multiple times).
> As an illustration, this means that a single process P is not allowed
> to execute the following calls (in any order)
> instance1 = odp_init_global()
> instance2 = odp_init_global()
> pthread_create (and, in that thread, run odp_local_init(instance1) )
> pthread_create (and, in that thread, run odp_local_init(instance2) )
> 
> Bill: Yes
> 
> ---
> 
> S8  -agreed: the l-g odp implementation will not support multiple ODP
> instances initiated from related linux processes (descendant/ancestor
> of each other), hence enabling ODP 'sub-instance'? As an illustration,
> this means that the following is not supported:
> instance1 = odp_init_global()
> pthread_create (and, in that thread, run odp_local_init(instance1) )
> if (fork()==0) {
> instance2 = odp_init_global()
> pthread_create (and, in that thread, run odp_local_init(instance2) )
> }
> 
> Bill: Yes
> 
> 
> 
> S9  -agreed: the odp instance passed as parameter to odp_local_init()
> must 

Re: [lng-odp] thread/shmem discussion summary V4

2016-06-08 Thread Yi He
Hi, thanks Christophe, and happy to discuss with and learn from the team
in yesterday's ARCH call :)

The question that triggered this line of thinking is: how to use ODP as a
framework to produce re-usable building blocks that compose a "Network
Function Instance" at runtime, since only at runtime are the resources
prepared for a function to settle down; hence the thought of separating
function implementation from launching.

I agree with your point that this seems an upper-layer consideration. I'll
take some time to gain a deeper understanding of how the upper layers
work, so I agree with the current S11, S12, S13, S14, S15, S16, S17
approach, and we can revisit it if an upper-layer programming
practice/model really turns out to affect ODP's design in this aspect.

Best Regards, Yi

On 7 June 2016 at 16:47, Christophe Milard 
wrote:

> On 7 June 2016 at 10:22, Yi He  wrote:
> > Hi, team
> >
> > I send my thoughts on the ODP thread part:
> >
> > S1, S2, S3, S4
> > Yi: Yes
> > This set of statements defines the ODP thread concept as a
> > higher-level abstraction of the underlying concurrent execution context.
> >
> > S5, S6, S7, S8, S9, S10:
> > Yi: Yes
> > This set of statements adds several constraints upon the ODP
> > instance concept.
> >
> > S11, S12, S13, S14, S15, S16, S17:
> > Yi: Not very much
> >
> > Currently an ODP application still mixes the Function Implementation
> > and its Deployment: by this I mean that in ODP application code,
> > there is code to implement the Function, and there is also code to
> > deploy the Function onto platform resources (CPU cores).
> >
> > I'd like to propose a programming practice/model as an attempt to
> > decouple these two things: ODP application code only focuses on
> > implementing the Function, leaving it to a deployment script or
> > launcher to take care of the deployment with different resource specs
> > (and sufficiency checks also).
>
> Well, that goes straight against Barry's will that apps should pin
> their tasks on cpus...
> Please join the public call today: if Barry is there you can discuss
> this with him; I am happy to hear other voices in this discussion.
>
> >
> > /* Use Case: an upper-layer orchestrator, say Daylight, is deploying
> > * an ODP application (can be a VNF instance in my opinion) onto
> > * some platform:
> > */
> >
> > /* The application can accept command line options */
> > --list-launchable
> > list all algorithm functions that need to be arranged
> > into an execution unit.
> >
> > --preparing-threads
> > control<1,2,3>@cpu<1>, worker<1,2,3,4>@cpu<2,3,4,5>
> > I'm not sure if the control/worker distinction is needed
> > anymore; DPDK has a similar concept, the lcore.
> >
> >   --wait-launching-signal
> >   main process can pause after preparing above threads but
> > before launching algorithm functions, here orchestrator can
> > further fine-tuning the threads by scripting (cgroups, etc).
> > CPU, disk/net IO, memory quotas, interrupt bindings, etc.
> >
> > --launching
> >   main@control<1>,algorithm_one@control<2>,
> > algorithm_two@worker<1,2,3,4>...
> >
> > In source code, the only thing ODP library and application need to do
> > related to deployment is to declare launchable algorithm functions by
> > adding them into special text section:
> >
> > int main(...) __attribute__((section(".launchable")));
> > int algorithm_one(void *) __attribute__((section(".launchable")));
>
> Interesting... Does it need to be part of ODP, though? Cannot the
> ODP API provide means of pinning, so that the apps that wish to can
> provide the options you mentioned: i.e. the application that wants to
> would pin according to its command line interface, and the application
> that can't/doesn't want to would pin manually...
> I mean, one could also imagine cases where the app needs to pin: if
> specific HW is connected to some cpus, or if serialisation exists
> (e.g. something like: pin tasks 1, 2 and 3 to cpus 1, 2, 3 and then,
> when that work is done (e.g. after a barrier joining tasks 1, 2, 3),
> pin tasks 4, 5, 6 on cpus 1, 2, 3 again). There could theoretically be
> complex dependency graphs where all cpus cannot be pinned from the
> beginning, and where your approach seems too restrictive.
>
> But I am glad to hear your voice on the arch call :-)
>
> Christophe.
> >
> > Best Regards, Yi
> >
> >
> > On 4 June 2016 at 05:56, Bill Fischofer 
> wrote:
> >>
> >> I realized I forgot to respond to s23.  Corrected here.
> >>
> >> On Fri, Jun 3, 2016 at 4:15 AM, Christophe Milard <
> >> christophe.mil...@linaro.org> wrote:
> >>
> >> > since V3: Update following Bill's comments
> >> > since V2: Update following Barry and Bill's comments
> >> > since V1: Update following arch call 31 may 2016
> >> >
> >> > This is an attempt to sum up the thread/process discussions
> >> > that have been happening these last weeks.
> >> > Sorry for the formalism of this mail, but it seems we need accuracy
> >> > here...
> >> >
> >> > 

Re: [lng-odp] thread/shmem discussion summary V4

2016-06-07 Thread Christophe Milard
On 7 June 2016 at 10:22, Yi He  wrote:
> Hi, team
>
> I send my thoughts on the ODP thread part:
>
> S1, S2, S3, S4
> Yi: Yes
> This set of statements defines the ODP thread concept as a
> higher-level abstraction of the underlying concurrent execution context.
>
> S5, S6, S7, S8, S9, S10:
> Yi: Yes
> This set of statements adds several constraints upon the ODP
> instance concept.
>
> S11, S12, S13, S14, S15, S16, S17:
> Yi: Not very much
>
> Currently an ODP application still mixes the Function Implementation
> and its Deployment: by this I mean that in ODP application code,
> there is code to implement the Function, and there is also code to
> deploy the Function onto platform resources (CPU cores).
>
> I'd like to propose a programming practice/model as an attempt to
> decouple these two things: ODP application code only focuses on
> implementing the Function, leaving it to a deployment script or
> launcher to take care of the deployment with different resource specs
> (and sufficiency checks also).

Well, that goes straight against Barry's will that apps should pin
their tasks on cpus...
Please join the public call today: if Barry is there you can discuss
this with him; I am happy to hear other voices in this discussion.

>
> /* Use Case: an upper-layer orchestrator, say Daylight, is deploying
> * an ODP application (can be a VNF instance in my opinion) onto
> * some platform:
> */
>
> /* The application can accept command line options */
> --list-launchable
> list all algorithm functions that need to be arranged
> into an execution unit.
>
> --preparing-threads
> control<1,2,3>@cpu<1>, worker<1,2,3,4>@cpu<2,3,4,5>
> I'm not sure if the control/worker distinction is needed
> anymore; DPDK has a similar concept, the lcore.
>
>   --wait-launching-signal
>   main process can pause after preparing above threads but
> before launching algorithm functions, here orchestrator can
> further fine-tuning the threads by scripting (cgroups, etc).
> CPU, disk/net IO, memory quotas, interrupt bindings, etc.
>
> --launching
>   main@control<1>,algorithm_one@control<2>,
> algorithm_two@worker<1,2,3,4>...
>
> In source code, the only thing ODP library and application need to do
> related to deployment is to declare launchable algorithm functions by
> adding them into special text section:
>
> int main(...) __attribute__((section(".launchable")));
> int algorithm_one(void *) __attribute__((section(".launchable")));

Interesting... Does it need to be part of ODP, though? Cannot the
ODP API provide means of pinning, so that the apps that wish to can
provide the options you mentioned: i.e. the application that wants to
would pin according to its command line interface, and the application
that can't/doesn't want to would pin manually...
I mean, one could also imagine cases where the app needs to pin: if
specific HW is connected to some cpus, or if serialisation exists
(e.g. something like: pin tasks 1, 2 and 3 to cpus 1, 2, 3 and then,
when that work is done (e.g. after a barrier joining tasks 1, 2, 3),
pin tasks 4, 5, 6 on cpus 1, 2, 3 again). There could theoretically be
complex dependency graphs where all cpus cannot be pinned from the
beginning, and where your approach seems too restrictive.

But I am glad to hear your voice on the arch call :-)

Christophe.
>
> Best Regards, Yi
>
>
> On 4 June 2016 at 05:56, Bill Fischofer  wrote:
>>
>> I realized I forgot to respond to s23.  Corrected here.
>>
>> On Fri, Jun 3, 2016 at 4:15 AM, Christophe Milard <
>> christophe.mil...@linaro.org> wrote:
>>
>> > since V3: Update following Bill's comments
>> > since V2: Update following Barry and Bill's comments
>> > since V1: Update following arch call 31 may 2016
>> >
>> > This is an attempt to sum up the thread/process discussions
>> > that have been happening these last weeks.
>> > Sorry for the formalism of this mail, but it seems we need accuracy
>> > here...
>> >
>> > This summary is organized as follows:
>> >
>> > It is a set of statements, each of them expecting a separate answer
>> > from you. When no specific ODP version is specified, the statement
>> > regards the "ultimate" goal (i.e. what we want eventually to achieve).
>> > Each statement is prefixed with:
>> >   - a statement number for further reference (e.g. S1)
>> >   - a status word (one of 'agreed' or 'open', or 'closed').
>> > Agreed statements expect a yes/no answer: 'yes' meaning that you
>> > acknowledge that this is your understanding of the agreement and will
>> > not nack an implementation based on this statement. You can comment
>> > after a yes, but your comment will not block any implementation based
>> > on the agreed statement. A 'no' implies that the statement does not
>> > reflect your understanding of the agreement, or you refuse the
>> > proposal.
>> > Any 'no' received on an 'agreed' statement will push it back as 'open'.
>> > Open statements are fully open for further discussion.
>> >
>> > S1  -agreed: an ODP thread is an 

Re: [lng-odp] thread/shmem discussion summary V4

2016-06-07 Thread Yi He
Hi, team

I send my thoughts on the ODP thread part:

S1, S2, S3, S4
Yi: Yes
This set of statements defines the ODP thread concept as a
higher-level abstraction of the underlying concurrent execution context.

S5, S6, S7, S8, S9, S10:
Yi: Yes
This set of statements adds several constraints upon the ODP
instance concept.

S11, S12, S13, S14, S15, S16, S17:
Yi: Not very much

Currently an ODP application still mixes the Function Implementation
and its Deployment: by this I mean that in ODP application code, there
is code to implement the Function, and there is also code to deploy the
Function onto platform resources (CPU cores).

I'd like to propose a programming practice/model as an attempt to
decouple these two things: ODP application code only focuses on
implementing the Function, leaving it to a deployment script or
launcher to take care of the deployment with different resource specs
(and sufficiency checks also).

/* Use Case: an upper-layer orchestrator, say Daylight, is deploying
* an ODP application (can be a VNF instance in my opinion) onto
* some platform:
*/

/* The application can accept command line options */
--list-launchable
list all algorithm functions that need to be arranged
into an execution unit.

--preparing-threads
control<1,2,3>@cpu<1>, worker<1,2,3,4>@cpu<2,3,4,5>
I'm not sure if the control/worker distinction is needed
anymore; DPDK has a similar concept, the lcore.

  --wait-launching-signal
  the main process can pause after preparing the above threads but
before launching algorithm functions; here the orchestrator can
further fine-tune the threads by scripting (cgroups, etc.):
CPU, disk/net IO, memory quotas, interrupt bindings, etc.

--launching
  main@control<1>,algorithm_one@control<2>,
algorithm_two@worker<1,2,3,4>...

In source code, the only thing ODP library and application need to do
related to deployment is to declare launchable algorithm functions by
adding them into special text section:

int main(...) __attribute__((section(".launchable")));
int algorithm_one(void *) __attribute__((section(".launchable")));

Best Regards, Yi


On 4 June 2016 at 05:56, Bill Fischofer  wrote:

> I realized I forgot to respond to s23.  Corrected here.
>
> On Fri, Jun 3, 2016 at 4:15 AM, Christophe Milard <
> christophe.mil...@linaro.org> wrote:
>
> > since V3: Update following Bill's comments
> > since V2: Update following Barry and Bill's comments
> > since V1: Update following arch call 31 may 2016
> >
> > This is an attempt to sum up the thread/process discussions
> > that have been happening these last weeks.
> > Sorry for the formalism of this mail, but it seems we need accuracy
> > here...
> >
> > This summary is organized as follows:
> >
> > It is a set of statements, each of them expecting a separate answer
> > from you. When no specific ODP version is specified, the statement
> > regards the "ultimate" goal (i.e. what we want eventually to achieve).
> > Each statement is prefixed with:
> >   - a statement number for further reference (e.g. S1)
> >   - a status word (one of 'agreed' or 'open', or 'closed').
> > Agreed statements expect a yes/no answer: 'yes' meaning that you
> > acknowledge that this is your understanding of the agreement and will
> > not nack an implementation based on this statement. You can comment
> > after a yes, but your comment will not block any implementation based
> > on the agreed statement. A 'no' implies that the statement does not
> > reflect your understanding of the agreement, or you refuse the
> > proposal.
> > Any 'no' received on an 'agreed' statement will push it back as 'open'.
> > Open statements are fully open for further discussion.
> >
> > S1  -agreed: an ODP thread is an OS/platform concurrent execution
> > environment object (as opposed to an ODP object). No more specific
> > definition is given by the ODP API itself.
> >
> > Barry: YES
> > Bill: Yes
> >
> > ---
> >
> > S2  -agreed: Each ODP implementation must tell what is allowed to be
> > used as an ODP thread for that specific implementation: a linux-based
> > implementation, for instance, will have to state whether odp threads
> > can be linux pthreads, linux processes, or both, or any other type of
> > concurrent execution environment. ODP implementations can put any
> > restriction they wish on what an ODP thread is allowed to be. This
> > should be documented in the ODP implementation documentation.
> >
> > Barry: YES
> > Bill: Yes
> >
> > ---
> >
> > S3  -agreed: in the linux generic ODP implementation an odp thread will be
> > either:
> > * a linux process descendant (or same as) the odp instantiation
> > process.
> > * a pthread 'member' of a linux process descendant (or same
> > as) the odp instantiation process.
> >
> > Barry: YES
> > Bill: Yes
> >
> > ---
> >
> > S4  -agreed: For monarch, the linux generic ODP implementation only
> > supports odp thread as pthread member of the 

[lng-odp] thread/shmem discussion summary V4

2016-06-03 Thread Christophe Milard
since V3: Update following Bill's comments
since V2: Update following Barry and Bill's comments
since V1: Update following arch call 31 may 2016

This is an attempt to sum up the thread/process discussions
that have been happening these last weeks.
Sorry for the formalism of this mail, but it seems we need accuracy
here...

This summary is organized as follows:

It is a set of statements, each of them expecting a separate answer
from you. When no specific ODP version is specified, the statement
regards the "ultimate" goal (i.e. what we want eventually to achieve).
Each statement is prefixed with:
  - a statement number for further reference (e.g. S1)
  - a status word (one of 'agreed' or 'open', or 'closed').
Agreed statements expect a yes/no answer: 'yes' meaning that you
acknowledge that this is your understanding of the agreement and will
not nack an implementation based on this statement. You can comment
after a yes, but your comment will not block any implementation based
on the agreed statement. A 'no' implies that the statement does not
reflect your understanding of the agreement, or you refuse the
proposal.
Any 'no' received on an 'agreed' statement will push it back as 'open'.
Open statements are fully open for further discussion.

S1  -agreed: an ODP thread is an OS/platform concurrent execution
environment object (as opposed to an ODP object). No more specific
definition is given by the ODP API itself.

Barry: YES
Bill: Yes

---

S2  -agreed: Each ODP implementation must tell what is allowed to be
used as an ODP thread for that specific implementation: a linux-based
implementation, for instance, will have to state whether odp threads
can be linux pthreads, linux processes, or both, or any other type of
concurrent execution environment. ODP implementations can put any
restriction they wish on what an ODP thread is allowed to be. This
should be documented in the ODP implementation documentation.

Barry: YES
Bill: Yes

---

S3  -agreed: in the linux generic ODP implementation an odp thread will be
either:
* a linux process descendant (or same as) the odp instantiation
process.
* a pthread 'member' of a linux process descendant (or same
as) the odp instantiation process.

Barry: YES
Bill: Yes

---

S4  -agreed: For monarch, the linux generic ODP implementation only
supports odp threads as pthread members of the instantiation process.

Barry: YES
Bill: Yes

---

S5  -agreed: whether multiple instances of ODP can be run on the same
machine is left as an implementation decision. The ODP implementation
document should state what is supported and any restrictions that apply.

Barry: YES
Bill: Yes

---

S6  -agreed: The l-g odp implementation will support multiple odp
instances whose instantiation processes are different and not
ancestor/descendant of each other. Different instances of ODP will,
of course, be restricted in sharing common OS resources (the total
amount of memory available for each ODP instance may decrease as the
number of instances increases, access to network interfaces will
probably be granted to the first instance grabbing the interface and
denied to others... other rules may apply when sharing other
common ODP resources.)

Bill: Yes

---

S7  -agreed: the l-g odp implementation will not support multiple ODP
instances initiated from the same linux process (calling
odp_init_global() multiple times).
As an illustration, this means that a single process P is not allowed
to execute the following calls (in any order)
instance1 = odp_init_global()
instance2 = odp_init_global()
pthread_create (and, in that thread, run odp_local_init(instance1) )
pthread_create (and, in that thread, run odp_local_init(instance2) )

Bill: Yes

---

S8  -agreed: the l-g odp implementation will not support multiple ODP
instances initiated from related linux processes (descendant/ancestor
of each other), which would have enabled ODP 'sub-instances'. As an
illustration, this means that the following is not supported:
instance1 = odp_init_global()
pthread_create (and, in that thread, run odp_local_init(instance1) )
if (fork()==0) {
instance2 = odp_init_global()
pthread_create (and, in that thread, run odp_local_init(instance2) )
}

Bill: Yes



S9  -agreed: the odp instance passed as parameter to odp_init_local()
must always be one of the odp_instance values returned by odp_init_global()

Barry: YES
Bill: Yes

---

S10 -agreed: For l-g, if the answers to S7 and S8 are 'yes', then due to S3,
the odp_instance an odp_thread can attach to is completely defined by
the ancestor of the thread, making the odp_instance parameter of
odp_init_local() redundant. The odp l-g implementation guide will
highlight this redundancy, but will stress that even in this case the
parameter to odp_init_local() still have