On 7/29/2017 8:42 PM, Jerin Jacob wrote:
-----Original Message-----
Date: Thu, 27 Jul 2017 16:28:29 +0530
From: "Rao, Nikhil" <nikhil....@intel.com>
To: Jerin Jacob <jerin.ja...@caviumnetworks.com>
CC: gage.e...@intel.com, dev@dpdk.org, tho...@monjalon.net,
  bruce.richard...@intel.com, harry.van.haa...@intel.com,
  hemant.agra...@nxp.com, nipun.gu...@nxp.com, narender.vang...@intel.com,
  Abhinandan Gujjar <abhinandan.guj...@intel.com>, nikhil....@intel.com
Subject: Re: [PATCH 1/2] eventdev: add event adapter for ethernet Rx queues
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101
  Thunderbird/52.2.1



In the case of a SW thread we would like to use the servicing weight
specified in the queue to do WRR across <ports, queues[]>, in keeping with

OK, then let's work together to address this in a transparent manner that
works for both HW and SW.

the adapter per <eventdev, eth port> model, one way to do this is to use the
same cfg.service_name in the rte_event_eth_rx_adapter_configure() call.

However this creates a few difficulties/inconsistencies:

I agree. If we are thinking about WRR across <ports, queues[]>, then the
above proposal's implementation creates inconsistencies. On the other hand,
it creates challenges for the HW implementation to have a unified adapter
API that works for both HW and SW.


1) A service has the notion of a socket ID. Multiple event dev IDs can be
included in the same service, and each event dev has its own socket ID ->
this seems to be an inconsistency that shouldn't be allowed by design.

2) Say the Rx event adapter doesn't drop packets (this could be
configurable), i.e., if events cannot be enqueued into the event device,
they remain in a buffer, and when the buffer fills up, packets aren't
dequeued from the eth device.

In the simplest case the Rx event adapter service has a single <event
device, event port> across multiple eth ports; it dequeues from the wrr[]
array, buffers the resulting events, and bulk enqueues BATCH_SIZE events
into the <event device, event port>.

With adapters having different <event device, event port> pairs, the code
can be optimized so that adapters sharing a common <event device, event
port> refer to a common enqueue buffer { event dev, event port, buffer }
structure, but this adds more bookkeeping in the code (see the sketch after
this list).

3) Every adapter can be configured with max_nb_rx (a max number of packets
that it can process in any invocation), but max_nb_rx seems like a
service-level parameter rather than a summation across adapters.

1 & 3 could be solved by restricting the adapters to the same socket ID (as
in the first rte_event_eth_rx_adapter_configure() call), and perhaps using
the max value of max_nb_rx, or the same value of max_nb_rx, across adapters.
#2 is doable but adds a bit of code complexity to handle the generic case.

Before we go there, I wanted to check if there is a possible alternative
that would remove the difficulties above. Essentially, allow multiple ports
within an adapter but avoid the problem of inconsistent <eventdev, port>
combinations when using multiple ports with a single eventdev.

Instead of
==
rte_event_eth_rx_adapter_create()
rte_event_eth_rx_adapter_get_info();
rte_event_eth_rx_adapter_configure();
rte_event_eth_rx_adapter_queue_add();
==

How about ?
==

rte_event_eth_rx_adapter_get_info(uint8_t dev_id, uint8_t eth_port_id,
         struct rte_event_eth_rx_adap_info *info);

struct rte_event_eth_rx_adap_info {
         uint32_t cap;

/* adapter has inbuilt port, no need to create producer port */
#define RTE_EVENT_ETHDEV_CAP_INBUILT_PORT  (1ULL << 0)
/* adapter does not need service function */
#define RTE_EVENT_ETHDEV_CAP_NO_SERVICE_FUNC (1ULL << 1)

};

struct rte_event_eth_rx_adapter_conf cfg;
cfg.event_port = event_port;
cfg.service_name = "rx_adapter_service";

Does the application need to specify the service name? IMO, it is better if
the component (rx_adapter) defines its name and format and exposes it in
rte_event_eth_rx_adapter.h.


I have had the application specify the name so that it can call

struct rte_service_spec *rte_service_get_by_name(const char *name);

followed by
int32_t rte_service_enable_on_lcore(struct rte_service_spec *service,
                                   uint32_t lcore);

given that the application knows the socket id of the event device associated with the adapter.
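
For example (a sketch assuming the cfg.service_name above, with lcore_id
chosen on the event device's socket):

struct rte_service_spec *svc;

svc = rte_service_get_by_name("rx_adapter_service");
if (svc != NULL)
        rte_service_enable_on_lcore(svc, lcore_id);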


// all ports in eth_port_id[] have
// !(cap & RTE_EVENT_ETHDEV_CAP_INBUILT_PORT) &&
// !(cap & RTE_EVENT_ETHDEV_CAP_NO_SERVICE_FUNC)
rte_event_eth_rx_adapter_create(dev_id, eth_port_id[], N, id, &cfg);

The downside might be:
- the application has a different flow based on the capability.
Listing a few capabilities/limitations below.

===
int rte_event_eth_rx_adapter_queue_add() would need a port ID in the N > 1
port case, which can be ignored if the adapter doesn't need it (N = 1).

Thanks for reading the long email. Thoughts?

I have another thought to solve the above-mentioned downside.

- The theme is based on your original Rx adapter proposal but with eventdev
   PMD ops (not adapter ops), i.e., reuse as much of your existing Rx adapter
implementation as possible as common code and add hooks for HW-based
adapters. For example, before you add <ethdev, queue_id> to "rx_poll" in
eth_poll_wrr_calc(), check whether an eventdev PMD op is available to add it
in HW. If yes, don't add it to "rx_poll".

adapter_api
------------
int rte_event_eth_rx_adapter_create(id, rte_event_eth_rx_adapter_conf *conf);
int rte_event_eth_rx_adapter_queue_add(uint8_t id, uint8_t eth_dev_id,
        int32_t rx_queue_id, rte_event_eth_rx_adapter_queue_conf *conf);

eventdev PMD op api(not as adapter PMD as discussed earlier)
-------------------

1) typedef uint64_t (*eventdev_rx_adap_capa)(struct rte_eventdev *dev,
        uint8_t ethdev_id);

Returns the adapter capability of a given eventdev when it needs to be
connected to a specific ethdev_id.


Doesn't the capability of an <eventdev, ethdev> pair also need to be made available to the application as an adapter API?

E.g., so that the application can call rte_event_eth_rx_adapter_queue_add() with rx_queue_id = -1 if RX_ADAPTER_CAP_ADD_QUEUE is not set.

In the same manner, if I understand RX_ADAPTER_CAP_SET_FLOW_ID correctly, the application would have to provide a flow ID in rte_event_eth_rx_adapter_queue_add() if RX_ADAPTER_CAP_SET_FLOW_ID is not set.

Also, if a given <eventdev, ethdev> pair is connected in HW, is the servicing weight of a queue still applicable?
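
E.g., assuming a hypothetical rte_event_eth_rx_adapter_caps_get() query (the
name and the qconf field layout below are placeholders, not an agreed API),
the application flow could be:

uint64_t caps;

rte_event_eth_rx_adapter_caps_get(dev_id, eth_dev_id, &caps);

/* if per-queue add isn't supported, add all Rx queues at once */
int32_t rx_queue_id = (caps & RX_ADAPTER_CAP_ADD_QUEUE) ? 0 : -1;

/* if the adapter can't set the flow ID, the application supplies one */
if (!(caps & RX_ADAPTER_CAP_SET_FLOW_ID))
        qconf.ev.flow_id = app_flow_id;

rte_event_eth_rx_adapter_queue_add(id, eth_dev_id, rx_queue_id, &qconf);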

Possible capability values based on my understanding of the existing SW and
Cavium HW PMDs. NXP folks can add new ones.

- RX_ADAPTER_CAP_INBUILT_PORT /* adapter has an inbuilt port; no need for
  the common code to create a producer port */
- RX_ADAPTER_CAP_SET_FLOW_ID /* adapter capable of setting
  RTE_ETH_RX_EVENT_ADAPTER_QUEUE_FLOW_ID_VALID */
- RX_ADAPTER_CAP_ADD_QUEUE /* adapter capable of adding any specific ethdev
  Rx queue to any eventdev queue. Some eventdev PMDs have the limitation
  that once an <ethdev_id, queue_id> is connected to a specific eventdev
  queue, all queue_ids under the same ethdev_id need to be connected to the
  same eventdev queue, i.e., they work only in the
  rte_event_eth_rx_adapter_queue_conf.rx_queue_id == -1 mode */


2) typedef int (*eventdev_rx_adap_add)(struct rte_eventdev *dev,
        uint8_t ethdev_id, int queue_id,
        rte_event_eth_rx_adapter_queue_conf *conf);
-- if implemented by the eventdev PMD and it returns zero, the COMMON code
does not need to poll that queue


3) typedef int (*eventdev_rx_adap_del)(struct rte_eventdev *dev,
        uint8_t ethdev_id, int queue_id);
-- removes a previously added queue
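
Taken together, these might hang off the eventdev ops table roughly as below
(the placement and member names are only illustrative):

struct rte_eventdev_ops {
        /* ... existing eventdev ops ... */
        eventdev_rx_adap_capa rx_adap_capa_get; /* <eventdev, ethdev> caps */
        eventdev_rx_adap_add  rx_adap_add;      /* offload an Rx queue to HW */
        eventdev_rx_adap_del  rx_adap_del;      /* undo a previous add */
};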


*** I haven't spent a lot of time on the API/macro names. Please use a
better naming convention.


A few more notes based on your existing implementation + the eventdev ops scheme:

1) rte_event_eth_rx_adapter_create() registers a service function by
default. This should be delayed until the common adapter code finds a device
with the !RX_ADAPTER_CAP_INBUILT_PORT cap in rte_event_eth_rx_adapter_queue_add().

2) Do we need rx_adapter start/stop functions?

Yes.


3) If it happens to be the case that rte_event_eth_rx_adapter_queue_add()
only ever sees RX_ADAPTER_CAP_INBUILT_PORT, then the common code should not
create any service.

Yes.

4) If the adapter uses one port with a service core and another with a HW
adapter, rte_event_eth_rx_adapter_stats.rx_packets will not be updated
correctly; we need eventdev PMD ops to get those stats. If we agree on the
overall PMD ops + adapter API partitioning, then we can additionally refine
the eventdev PMD for stats, use an xstats-based scheme, etc.

Wouldn't rx_packets be the sum of the service core thread packet count and the count provided by the eventdev PMD ops, obtained from a call to

rte_event_eth_rx_adapter_stats_get(uint8_t id,
        struct rte_event_eth_rx_adapter_stats *stats);
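
i.e., something along these lines in the common code (rx_adap_stats_get is a
hypothetical PMD op, and the rx_adapters/sw_stats bookkeeping is likewise
assumed for illustration):

int
rte_event_eth_rx_adapter_stats_get(uint8_t id,
                struct rte_event_eth_rx_adapter_stats *stats)
{
        struct rx_adapter *rxa = &rx_adapters[id];
        struct rte_eventdev *dev = rxa->eventdev;
        struct rte_event_eth_rx_adapter_stats hw;

        /* packets handled by the SW service function */
        *stats = rxa->sw_stats;

        /* add packets handled by HW-offloaded queues, if reported */
        if (dev->dev_ops->rx_adap_stats_get != NULL &&
            dev->dev_ops->rx_adap_stats_get(dev, &hw) == 0)
                stats->rx_packets += hw.rx_packets;

        return 0;
}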


5) Specifying rte_event_eth_rx_adapter_conf.rx_event_port_id at
rte_event_eth_rx_adapter_create() time would waste one HW eventdev port if
only RX_ADAPTER_CAP_INBUILT_PORT devices end up being used in
rte_event_eth_rx_adapter_queue_add().
Unlike SW eventdev ports, HW eventdev ports are costly, so I think we need
another eventdev PMD op to create service/producer ports, or some other
scheme that creates rte_event_eth_rx_adapter_conf.rx_event_port_id on
demand in the common code.


One solution is:

struct rte_event_eth_rx_adapter_conf {
    uint8_t dev_id;

    int (*conf_cb)(uint8_t id, uint8_t port_id, uint32_t flags,
                   struct rte_event_eth_rx_adapter_conf *conf);

    unsigned int max_nb_rx;

    int event_port_id;

    char service_name[];
};

Here, dev_id and conf_cb have to be specified in the create call, but event_port_id and service_name will be filled in when conf_cb() is invoked, if required. flags will specify which parameters of conf need to be filled in. The conf parameter can be separated out into a different struct:

struct rte_event_eth_rx_adapter_ext_conf {
    unsigned int max_nb_rx;

    int event_port_id;

    char service_name[];
};
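
A sketch of an application-side conf_cb under this scheme
(RX_ADAPTER_CONF_F_EVENT_PORT and the port setup are assumptions for
illustration; the point is that a HW event port is created only when the
common code hits a !RX_ADAPTER_CAP_INBUILT_PORT device):

static int
rx_adapter_conf_cb(uint8_t id, uint8_t port_id, uint32_t flags,
                struct rte_event_eth_rx_adapter_ext_conf *conf)
{
        if (flags & RX_ADAPTER_CONF_F_EVENT_PORT) {
                uint8_t event_port = 0;
                /* ... set up a producer port on the adapter's event
                 * device and store its ID in event_port ... */
                conf->event_port_id = event_port;
        }
        conf->max_nb_rx = 128;
        return 0;
}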

Nikhil
