On Tue, Apr 11, 2017 at 4:47 AM, Maxim Uvarov <maxim.uva...@linaro.org> wrote:
> Petri, whose review is needed for this?

We currently have conflicting approaches to this from Petri and
Honnappa. They need to come to some consensus on this to move forward.
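For reference, here is a minimal sketch of how an application might use the
proposed capability fields and size param (names taken from the v3 patch
quoted below; the helper function and its name are illustrative, untested):

#include <odp_api.h>

/* Create a plain queue able to hold at least 'wanted_size' events,
 * checking the proposed per-type capability first. */
static odp_queue_t create_sized_plain_queue(uint32_t wanted_size)
{
	odp_queue_capability_t capa;
	odp_queue_param_t param;

	if (odp_queue_capability(&capa))
		return ODP_QUEUE_INVALID;

	/* max_size == 0 would mean no fixed implementation limit */
	if (capa.plain.max_size && wanted_size > capa.plain.max_size)
		return ODP_QUEUE_INVALID;

	odp_queue_param_init(&param);
	param.type = ODP_QUEUE_TYPE_PLAIN;
	param.size = wanted_size; /* minimum number of events to store */

	return odp_queue_create("sized_plain", &param);
}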

>
> Maxim.
>
> On 11 April 2017 at 09:29, Savolainen, Petri (Nokia - FI/Espoo) <
> petri.savolai...@nokia-bell-labs.com> wrote:
>
>> Ping. Spec for queue size and capability.
>>
>>
>> > -----Original Message-----
>> > From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of
>> > Savolainen, Petri (Nokia - FI/Espoo)
>> > Sent: Friday, April 07, 2017 10:53 AM
>> > To: lng-odp@lists.linaro.org
>> > Subject: Re: [lng-odp] [PATCH v3 1/2] api: queue: added queue size param
>> >
>> > Ping.
>> >
>> > > -----Original Message-----
>> > > From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of
>> > Petri
>> > > Savolainen
>> > > Sent: Monday, April 03, 2017 1:11 PM
>> > > To: lng-odp@lists.linaro.org
>> > > Subject: [lng-odp] [PATCH v3 1/2] api: queue: added queue size param
>> > >
>> > > Added capability information about the maximum number of queues
>> > > and maximum queue sizes. Both are defined per queue type, since
>> > > plain and scheduled queues may have different implementations
>> > > (e.g. one uses HW while the other is SW).
>> > >
>> > > Added a queue size parameter, which specifies the minimum
>> > > storage size the application requires.
>> > >
>> > > Signed-off-by: Petri Savolainen <petri.savolai...@linaro.org>
>> > > ---
>> > >  include/odp/api/spec/queue.h | 35 ++++++++++++++++++++++++++++++++++-
>> > >  1 file changed, 34 insertions(+), 1 deletion(-)
>> > >
>> > > diff --git a/include/odp/api/spec/queue.h b/include/odp/api/spec/queue.h
>> > > index 7972fea..9c83322 100644
>> > > --- a/include/odp/api/spec/queue.h
>> > > +++ b/include/odp/api/spec/queue.h
>> > > @@ -100,7 +100,9 @@ typedef enum odp_queue_op_mode_t {
>> > >   * Queue capabilities
>> > >   */
>> > >  typedef struct odp_queue_capability_t {
>> > > -   /** Maximum number of event queues */
>> > > +   /** Maximum number of event queues of any type. Use this in
>> > > +     * addition to queue type specific 'max_num', if both queue types
>> > > +     * are used simultaneously. */
>> > >     uint32_t max_queues;
>> > >
>> > >     /** Maximum number of ordered locks per queue */
>> > > @@ -112,6 +114,28 @@ typedef struct odp_queue_capability_t {
>> > >     /** Number of scheduling priorities */
>> > >     unsigned sched_prios;
>> > >
>> > > +   /** Plain queue capabilities */
>> > > +   struct {
>> > > +           /** Maximum number of plain queues. */
>> > > +           uint32_t max_num;
>> > > +
>> > > +           /** Maximum number of events a plain queue can store
>> > > +             * simultaneously. The value of zero means unlimited. */
>> > > +           uint32_t max_size;
>> > > +
>> > > +   } plain;
>> > > +
>> > > +   /** Scheduled queue capabilities */
>> > > +   struct {
>> > > +           /** Maximum number of scheduled queues. */
>> > > +           uint32_t max_num;
>> > > +
>> > > +           /** Maximum number of events a scheduled queue can store
>> > > +             * simultaneously. The value of zero means unlimited. */
>> > > +           uint32_t max_size;
>> > > +
>> > > +   } sched;
>> > > +
>> > >  } odp_queue_capability_t;
>> > >
>> > >  /**
>> > > @@ -165,6 +189,15 @@ typedef struct odp_queue_param_t {
>> > >       * The implementation may use this value as a hint for the number of
>> > >       * context data bytes to prefetch. Default value is zero (no hint). */
>> > >     uint32_t context_len;
>> > > +
>> > > +   /** Queue size
>> > > +     *
>> > > +     * The queue must be able to store at minimum this many events
>> > > +     * simultaneously. The value must not exceed 'max_size' queue
>> > > +     * capability. The value of zero means implementation specific
>> > > +     * default size. */
>> > > +   uint32_t size;
>> > > +
>> > >  } odp_queue_param_t;
>> > >
>> > >  /**
>> > > --
>> > > 2.8.1
>>
>>
