Hi Fiona

> -----Original Message-----
> From: Trahe, Fiona [mailto:fiona.tr...@intel.com]
> Sent: 19 January 2018 17:30
> To: Verma, Shally <shally.ve...@cavium.com>; dev@dpdk.org;
> akhil.go...@nxp.com
> Cc: Challa, Mahipal <mahipal.cha...@cavium.com>; Athreya, Narayana
> Prasad <narayanaprasad.athr...@cavium.com>; De Lara Guarch, Pablo
> <pablo.de.lara.gua...@intel.com>; Gupta, Ashish
> <ashish.gu...@cavium.com>; Sahu, Sunila <sunila.s...@cavium.com>;
> Jain, Deepak K <deepak.k.j...@intel.com>; Hemant Agrawal
> <hemant.agra...@nxp.com>; Roy Pledge <roy.ple...@nxp.com>; Youri
> Querry <youri.querr...@nxp.com>; Ahmed Mansour
> <ahmed.mans...@nxp.com>; Trahe, Fiona <fiona.tr...@intel.com>
> Subject: RE: [RFC v3 1/1] lib: add compressdev API
> 
> Hi Shally,
> 
> > -----Original Message-----
> > From: Verma, Shally [mailto:shally.ve...@cavium.com]
> > Sent: Thursday, January 18, 2018 12:54 PM
> > To: Trahe, Fiona <fiona.tr...@intel.com>; dev@dpdk.org
> > Cc: Challa, Mahipal <mahipal.cha...@cavium.com>; Athreya, Narayana Prasad
> > <narayanaprasad.athr...@cavium.com>; De Lara Guarch, Pablo
> > <pablo.de.lara.gua...@intel.com>; Gupta, Ashish <ashish.gu...@cavium.com>;
> > Sahu, Sunila <sunila.s...@cavium.com>; Jain, Deepak K
> > <deepak.k.j...@intel.com>; Hemant Agrawal <hemant.agra...@nxp.com>;
> > Roy Pledge <roy.ple...@nxp.com>; Youri Querry <youri.querr...@nxp.com>;
> > Ahmed Mansour <ahmed.mans...@nxp.com>
> > Subject: RE: [RFC v3 1/1] lib: add compressdev API
> >
> > Hi Fiona
> >
> > While revisiting this, we identified a few questions and additions. Please
> > see them inline.
> >
> >
> > > -----Original Message-----
> > > From: Trahe, Fiona [mailto:fiona.tr...@intel.com]
> > > Sent: 15 December 2017 23:19
> > > To: dev@dpdk.org; Verma, Shally <shally.ve...@cavium.com>
> > > Cc: Challa, Mahipal <mahipal.cha...@cavium.com>; Athreya, Narayana
> > > Prasad <narayanaprasad.athr...@cavium.com>;
> > > pablo.de.lara.gua...@intel.com; fiona.tr...@intel.com
> > > Subject: [RFC v3 1/1] lib: add compressdev API
> > >
> > > Signed-off-by: Trahe, Fiona <fiona.tr...@intel.com>
> > > ---
> >
> > //snip
> >
> > > +
> > > +int
> > > +rte_compressdev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
> > > +         uint32_t max_inflight_ops, int socket_id)
> >
> > [Shally] Is max_inflight_ops different from nb_streams_per_qp in struct
> > rte_compressdev_info? I assume they both serve the same purpose. If so, it
> > would be better to use a single naming convention to avoid confusion.
> [Fiona] No, I think they have different purposes.
> max_inflight_ops should be used to configure the qp with the number of ops
> the application expects to be able to submit to the qp before it needs to poll
> for a response. It can be configured differently for each qp. In the QAT case
> it dictates the depth of the qp created; it may have different implications on
> other PMDs.
> nb_sessions_per_qp and nb_streams_per_qp are limitations the device
> reports and are the same for all qps on the device. QAT doesn't have those
> limitations and so would report 0; however, I assumed they may be necessary
> for other devices.
> This assumption is based on the patch submitted by NXP to cryptodev in Feb
> 2017:
>  http://dpdk.org/ml/archives/dev/2017-March/060740.html
> I also assume these are not necessarily the max number of sessions in use by
> ops on the qp at a given time, but the total number attached, i.e. if the
> device has this limitation then sessions must be attached to qps, and
> presumably reserve some resources. Being attached doesn't imply there is an
> op on the qp at that time using that session. So it's not related to the
> inflight op count, but to the number of sessions attached/detached to the qp.
> Including Akhil on the To list; maybe NXP can confirm if these params are
> needed.

[Shally] Ok. Then let's wait for NXP to confirm this requirement, as the
current spec doesn't have any API to attach a queue pair to a specific
session or stream, the way cryptodev does.

But then how would an application know the limit on max_inflight_ops
supported on a qp? As it stands, it can pass any arbitrary number during
qp_setup(). Do you believe we need to add a capability field in dev_info to
indicate the limit on max_inflight_ops?
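To make the question concrete, here is a minimal sketch of what I have in
mind: a (hypothetical) max_nb_inflight_ops field in dev_info, with 0 meaning
"no device limit", that the application uses to clamp its qp_setup() request.
The struct, field, and helper names below are illustrative only; nothing like
this exists in the RFC yet.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical extension of rte_compressdev_info with a limit field;
 * the real struct and field name are not defined in the RFC yet. */
struct compressdev_info_sketch {
	uint32_t max_nb_inflight_ops; /* 0 = no device-imposed limit */
};

/* Clamp the application's requested qp depth to the reported limit. */
static uint32_t
clamp_inflight_ops(const struct compressdev_info_sketch *info,
		uint32_t requested)
{
	if (info->max_nb_inflight_ops != 0 &&
			requested > info->max_nb_inflight_ops)
		return info->max_nb_inflight_ops;
	return requested;
}
```

The application would call this between dev_info_get() and qp_setup(), so a
PMD never sees a depth it cannot honour.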

Thanks
Shally
> 
> 
> > Also, is this an optional API? i.e. is this a valid use case:
> > dev_configure() --> dev_start() --> qp_start() --> enqueue/dequeue() -->
> > qp_stop() --> dev_stop() --> dev_close()?
> [Fiona] I don't think it should be optional as some PMDs need to allocate
> resources based on the setup data passed in on this API.
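The implied ordering constraint can be shown with a toy state tracker: a
device must pass through qp_setup() before dev_start() can succeed, which is
why the API cannot be optional. The enum and function names below are
illustrative, not part of the spec.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy state machine mirroring the expected compressdev call order.
 * The real library enforces this per-device; names here are made up. */
enum dev_state { DEV_CLOSED, DEV_CONFIGURED, DEV_QP_READY, DEV_STARTED };

static bool dev_configure(enum dev_state *s)
{
	if (*s != DEV_CLOSED)
		return false;
	*s = DEV_CONFIGURED;
	return true;
}

static bool qp_setup(enum dev_state *s)
{
	/* May be called repeatedly, once per queue pair. */
	if (*s != DEV_CONFIGURED && *s != DEV_QP_READY)
		return false;
	*s = DEV_QP_READY;
	return true;
}

static bool dev_start(enum dev_state *s)
{
	/* qp_setup() is mandatory: starting a device with no qps fails. */
	if (*s != DEV_QP_READY)
		return false;
	*s = DEV_STARTED;
	return true;
}
```

Under this model the sequence in the question above (dev_configure() straight
to dev_start() with no qp_setup()) is rejected, matching Fiona's point that
PMDs allocate resources from the qp_setup() data.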
> 
> > //snip
> >
> > > +
> > > +#define RTE_COMPRESSDEV_PMD_NAME_ARG                ("name")
> > > +#define RTE_COMPRESSDEV_PMD_MAX_NB_QP_ARG           ("max_nb_queue_pairs")
> > > +#define RTE_COMPRESSDEV_PMD_SOCKET_ID_ARG           ("socket_id")
> > > +
> >
> > [Shally] We need to define argument macros for max_nb_sessions_per_qp and
> > max_nb_streams_per_qp as well.
> [Fiona] ok
> 
> > > +
> > > +static const char * const compressdev_pmd_valid_params[] = {
> > > + RTE_COMPRESSDEV_PMD_NAME_ARG,
> > > + RTE_COMPRESSDEV_PMD_MAX_NB_QP_ARG,
> > > + RTE_COMPRESSDEV_PMD_SOCKET_ID_ARG
> > > +};
> >
> > [Shally] Likewise, the array needs to be updated with the two arguments
> > mentioned above.
> [Fiona] ok
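Taken together, the two additions might look like the following. The new
macro names and devargs key strings are my assumptions until the next
revision of the spec.

```c
#include <assert.h>
#include <string.h>

/* Existing keys from the RFC, plus the two proposed per-qp limit keys.
 * The *_SESS_* and *_STREAMS_* names are assumptions, not spec. */
#define RTE_COMPRESSDEV_PMD_NAME_ARG			("name")
#define RTE_COMPRESSDEV_PMD_MAX_NB_QP_ARG		("max_nb_queue_pairs")
#define RTE_COMPRESSDEV_PMD_SOCKET_ID_ARG		("socket_id")
#define RTE_COMPRESSDEV_PMD_MAX_NB_SESS_PER_QP_ARG	("max_nb_sessions_per_qp")
#define RTE_COMPRESSDEV_PMD_MAX_NB_STREAMS_PER_QP_ARG	("max_nb_streams_per_qp")

/* Valid-params array extended with the two new keys. */
static const char * const compressdev_pmd_valid_params[] = {
	RTE_COMPRESSDEV_PMD_NAME_ARG,
	RTE_COMPRESSDEV_PMD_MAX_NB_QP_ARG,
	RTE_COMPRESSDEV_PMD_SOCKET_ID_ARG,
	RTE_COMPRESSDEV_PMD_MAX_NB_SESS_PER_QP_ARG,
	RTE_COMPRESSDEV_PMD_MAX_NB_STREAMS_PER_QP_ARG,
};
```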
> 
> 
> > > +
> > > +/**
> > > + * @internal
> > > + * Initialisation parameters for comp devices
> > > + */
> > > +struct rte_compressdev_pmd_init_params {
> > > + char name[RTE_COMPRESSDEV_NAME_MAX_LEN];
> > > + size_t private_data_size;
> > > + int socket_id;
> > > + unsigned int max_nb_queue_pairs;
> >
> > [Shally] This also needs to be updated with max_nb_sessions_per_qp and
> > max_nb_streams_per_qp.
> [Fiona] ok
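For clarity, the extended init-params struct might then read as below. The
field names mirror the dev_info fields under discussion and are not final;
the RTE_COMPRESSDEV_NAME_MAX_LEN value is assumed here for illustration.

```c
#include <assert.h>
#include <stddef.h>

#define RTE_COMPRESSDEV_NAME_MAX_LEN 64 /* value assumed for illustration */

/* Init params extended with the two proposed per-qp limits. */
struct compressdev_pmd_init_params_sketch {
	char name[RTE_COMPRESSDEV_NAME_MAX_LEN];
	size_t private_data_size;
	int socket_id;
	unsigned int max_nb_queue_pairs;
	unsigned int max_nb_sessions_per_qp;	/* proposed addition */
	unsigned int max_nb_streams_per_qp;	/* proposed addition */
};
```

A PMD that has no such limits would leave the two new fields at 0, consistent
with QAT reporting 0 in dev_info.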
> 
> > //snip
> >
> > Thanks
> > Shally
