[dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
> > > > > > - split nb_q_per_pool to nb_rx_q_per_pool and nb_tx_q_per_pool
> > > > > >   Rationale: the RX and TX queue counts might differ if RX and TX
> > > > > >   are configured in different modes. This allows informing the VF
> > > > > >   about the proper number of queues.
> >
> > Nice move! Ouyang, this is a nice answer to my recent remarks about your
> > PATCH 4 in the "Enable VF RSS for Niantic" series.
>
> After I responded to your last comments, I saw this :-). I am sure we both
> agree it is the right way to resolve it in the VMDq+DCB case.

I am now dividing this patch per your suggestions and I am a little confused.
In this (DCB in SRIOV) case, the primary reason for splitting nb_q_per_pool
into nb_rx_q_per_pool and nb_tx_q_per_pool was this code:

diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index af9e261..be3afe4 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -537,8 +537,8 @@
 	default: /* ETH_MQ_RX_VMDQ_ONLY or ETH_MQ_RX_NONE */
 		/* if nothing mq mode configure, use default scheme */
 		dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
-		if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
-			RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
+		if (RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool > 1)
+			RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool = 1;
 		break;
 	}

@@ -553,17 +553,18 @@
 	default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
 		/* if nothing mq mode configure, use default scheme */
 		dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
-		if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
-			RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
+		if (RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool > 1)
+			RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool = 1;
 		break;
 	}

 	/* check valid queue number */
-	if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool) ||
-	    (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)) {
+	if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool) ||
+	    (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool)) {
 		PMD_DEBUG_TRACE("ethdev port_id=%d SRIOV active, "
-				"queue number must less equal to %d\n",
-				port_id, RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool);
+				"rx/tx queue number must less equal to %d/%d\n",
+				port_id, RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool,
+				RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool);
 		return (-EINVAL);
 	}
 } else {

This introduced an issue when RX and TX were configured in different modes:
RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool was shared between RX and TX, but was
modified by each path. So I did the above.

But when testpmd was adjusted for DCB in SRIOV there was another issue.
Testpmd pre-configures ports by default, and since nb_rx_q_per_pool and
nb_tx_q_per_pool had already been reset to 1 there was no way to use them
for DCB in SRIOV. So I made another modification:

> +	uint16_t nb_rx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool;
> +	uint16_t nb_tx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool;
> +
> 	switch (dev_conf->rxmode.mq_mode) {
> -	case ETH_MQ_RX_VMDQ_RSS:
> 	case ETH_MQ_RX_VMDQ_DCB:
> +		break;
> +	case ETH_MQ_RX_VMDQ_RSS:
> 	case ETH_MQ_RX_VMDQ_DCB_RSS:
> -		/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
> +		/* RSS, DCB+RSS VMDQ in SRIOV mode, not implement yet */
> 		PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
> 				" SRIOV active, "
> 				"unsupported VMDQ mq_mode rx %u\n",
> @@ -537,37 +560,32 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> 	default: /* ETH_MQ_RX_VMDQ_ONLY or ETH_MQ_RX_NONE */
> 		/* if nothing mq mode configure, use default scheme */
> 		dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
> -		if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
> -			RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
> +		if (nb_rx_q_per_pool > 1)
> +			nb_rx_q_per_pool = 1;
> 		break;
> 	}
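For readers following the thread, the effect of the split can be sketched as a tiny self-contained check. The struct and function names below are hypothetical stand-ins, not the actual ethdev code: the point is only that RX and TX are validated against independent per-pool limits instead of one shared `nb_q_per_pool`.

```c
#include <stdint.h>

/* Hypothetical stand-in for the SRIOV state after the split:
 * RX and TX keep independent per-pool queue limits. */
struct sriov_cfg {
	uint16_t nb_rx_q_per_pool;
	uint16_t nb_tx_q_per_pool;
};

/* Mirrors the intent of the reworked check: requested queue counts are
 * validated against the matching limit. Returns 0 on success, -1 on an
 * invalid queue count (the real code returns -EINVAL). */
int check_queue_counts(const struct sriov_cfg *sriov,
		       uint16_t nb_rx_q, uint16_t nb_tx_q)
{
	if (nb_rx_q > sriov->nb_rx_q_per_pool ||
	    nb_tx_q > sriov->nb_tx_q_per_pool)
		return -1;
	return 0;
}
```

With a shared limit, an RX-only DCB configuration (say 4 RX queues, 1 TX queue per pool) could not be expressed; with the split it validates cleanly.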
[dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Vlad Zolotarov
> Sent: Tuesday, January 13, 2015 6:14 PM
> To: Jastrzebski, MichalX K; dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
>
> On 01/12/15 17:50, Michal Jastrzebski wrote:
> > From: Pawel Wodkowski
> >
> > This patch adds support for DCB in SRIOV mode. When no PFC is enabled
> > this feature can be used as multiple queues (up to 8 or 4) per VF.
> >
> > It incorporates the following modifications:
> > - Allow zero rx/tx queues to be passed to rte_eth_dev_configure().
> >   Rationale: in SRIOV mode the PF uses the first free VF for RX/TX.
> >   If the VF count is 16 or 32, all resources are assigned to VFs, so
> >   the PF can be used only for configuration.
> > - Split nb_q_per_pool into nb_rx_q_per_pool and nb_tx_q_per_pool.
> >   Rationale: the RX and TX queue counts might differ if RX and TX are
> >   configured in different modes. This allows informing the VF about
> >   the proper number of queues.
> > - Extend the mailbox API for DCB mode.
>
> IMHO each bullet above is worth a separate patch. ;) It would be much
> easier to review.
>
> thanks,
> vlad

Agree with Vlad.
[dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
> -----Original Message-----
> From: Vlad Zolotarov [mailto:vladz at cloudius-systems.com]
> Sent: Tuesday, January 13, 2015 6:09 PM
> To: Jastrzebski, MichalX K; dev at dpdk.org
> Cc: Ouyang, Changchun
> Subject: Re: [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
>
> On 01/12/15 16:43, Michal Jastrzebski wrote:
> > Date: Mon, 12 Jan 2015 15:39:40 +0100
> > Message-Id: <1421073581-6644-2-git-send-email-michalx.k.jastrzebski at intel.com>
> > X-Mailer: git-send-email 2.1.1
> > In-Reply-To: <1421073581-6644-1-git-send-email-michalx.k.jastrzebski at intel.com>
> > References: <1421073581-6644-1-git-send-email-michalx.k.jastrzebski at intel.com>
> >
> > From: Pawel Wodkowski
> >
> > This patch adds support for DCB in SRIOV mode. When no PFC is enabled
> > this feature can be used as multiple queues (up to 8 or 4) per VF.
> >
> > It incorporates the following modifications:
> > - Allow zero rx/tx queues to be passed to rte_eth_dev_configure().
> >   Rationale: in SRIOV mode the PF uses the first free VF for RX/TX.
> >   If the VF count is 16 or 32, all resources are assigned to VFs, so
> >   the PF can be used only for configuration.
> > - Split nb_q_per_pool into nb_rx_q_per_pool and nb_tx_q_per_pool.
> >   Rationale: the RX and TX queue counts might differ if RX and TX are
> >   configured in different modes. This allows informing the VF about
> >   the proper number of queues.
>
> Nice move! Ouyang, this is a nice answer to my recent remarks about your
> PATCH 4 in the "Enable VF RSS for Niantic" series.

After I responded to your last comments, I saw this :-). I am sure we both
agree it is the right way to resolve it in the VMDq+DCB case.

> Michal, could you please respin this series after fixing the formatting
> and (maybe) using "git send-email" for sending? ;)
>
> thanks,
> vlad
>
> > - Extend the mailbox API for DCB mode.
> >
> > Signed-off-by: Pawel Wodkowski
> > ---
> > [snip]
[dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
On 01/12/15 17:50, Michal Jastrzebski wrote:
> From: Pawel Wodkowski
>
> This patch adds support for DCB in SRIOV mode. When no PFC is enabled
> this feature can be used as multiple queues (up to 8 or 4) per VF.
>
> It incorporates the following modifications:
> - Allow zero rx/tx queues to be passed to rte_eth_dev_configure().
>   Rationale: in SRIOV mode the PF uses the first free VF for RX/TX.
>   If the VF count is 16 or 32, all resources are assigned to VFs, so
>   the PF can be used only for configuration.
> - Split nb_q_per_pool into nb_rx_q_per_pool and nb_tx_q_per_pool.
>   Rationale: the RX and TX queue counts might differ if RX and TX are
>   configured in different modes. This allows informing the VF about
>   the proper number of queues.
> - Extend the mailbox API for DCB mode.

IMHO each bullet above is worth a separate patch. ;)
It would be much easier to review.

thanks,
vlad

> Signed-off-by: Pawel Wodkowski
> ---
> [snip]
[dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
On 01/12/15 16:43, Michal Jastrzebski wrote:
> Date: Mon, 12 Jan 2015 15:39:40 +0100
> Message-Id: <1421073581-6644-2-git-send-email-michalx.k.jastrzebski at intel.com>
> X-Mailer: git-send-email 2.1.1
> In-Reply-To: <1421073581-6644-1-git-send-email-michalx.k.jastrzebski at intel.com>
> References: <1421073581-6644-1-git-send-email-michalx.k.jastrzebski at intel.com>
>
> From: Pawel Wodkowski
>
> > This patch adds support for DCB in SRIOV mode. When no PFC is enabled
> > this feature can be used as multiple queues (up to 8 or 4) per VF.
> >
> > It incorporates the following modifications:
> > - Allow zero rx/tx queues to be passed to rte_eth_dev_configure().
> >   Rationale: in SRIOV mode the PF uses the first free VF for RX/TX.
> >   If the VF count is 16 or 32, all resources are assigned to VFs, so
> >   the PF can be used only for configuration.
> > - Split nb_q_per_pool into nb_rx_q_per_pool and nb_tx_q_per_pool.
> >   Rationale: the RX and TX queue counts might differ if RX and TX are
> >   configured in different modes. This allows informing the VF about
> >   the proper number of queues.

Nice move! Ouyang, this is a nice answer to my recent remarks about your
PATCH 4 in the "Enable VF RSS for Niantic" series.

Michal, could you please respin this series after fixing the formatting
and (maybe) using "git send-email" for sending? ;)

thanks,
vlad

> > - Extend the mailbox API for DCB mode.
> >
> > Signed-off-by: Pawel Wodkowski
> > ---
> > [snip]
[dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
On 01/12/15 17:46, Jastrzebski, MichalX K wrote:
>> -----Original Message-----
>> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Michal Jastrzebski
>> Sent: Monday, January 12, 2015 3:43 PM
>> To: dev at dpdk.org
>> Subject: [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
>>
>> Date: Mon, 12 Jan 2015 15:39:40 +0100
>> Message-Id: <1421073581-6644-2-git-send-email-michalx.k.jastrzebski at intel.com>
>> X-Mailer: git-send-email 2.1.1
>> In-Reply-To: <1421073581-6644-1-git-send-email-michalx.k.jastrzebski at intel.com>
>> References: <1421073581-6644-1-git-send-email-michalx.k.jastrzebski at intel.com>
>>
>> From: Pawel Wodkowski
>>
>> This patch adds support for DCB in SRIOV mode. When no PFC is enabled
>> this feature can be used as multiple queues (up to 8 or 4) per VF.
>>
>> It incorporates the following modifications:
>> - Allow zero rx/tx queues to be passed to rte_eth_dev_configure().
>>   Rationale: in SRIOV mode the PF uses the first free VF for RX/TX.
>>   If the VF count is 16 or 32, all resources are assigned to VFs, so
>>   the PF can be used only for configuration.
>> - Split nb_q_per_pool into nb_rx_q_per_pool and nb_tx_q_per_pool.
>>   Rationale: the RX and TX queue counts might differ if RX and TX are
>>   configured in different modes. This allows informing the VF about
>>   the proper number of queues.
>> - Extend the mailbox API for DCB mode.
>>
>> Signed-off-by: Pawel Wodkowski
>> ---
>> [snip]
[dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Vlad Zolotarov
> Sent: Tuesday, January 13, 2015 11:14 AM
> To: Jastrzebski, MichalX K; dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
>
> On 01/12/15 17:50, Michal Jastrzebski wrote:
> > From: Pawel Wodkowski
> >
> > This patch adds support for DCB in SRIOV mode. When no PFC is enabled
> > this feature can be used as multiple queues (up to 8 or 4) per VF.
> >
> > It incorporates the following modifications:
> > - Allow zero rx/tx queues to be passed to rte_eth_dev_configure().
> > - Split nb_q_per_pool into nb_rx_q_per_pool and nb_tx_q_per_pool.
> > - Extend the mailbox API for DCB mode.
>
> IMHO each bullet above is worth a separate patch. ;)
> It would be much easier to review.

Good point. I will send the next version shortly.

Pawel
[dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
From: Pawel Wodkowski

This patch adds support for DCB in SRIOV mode. When no PFC is enabled
this feature can be used as multiple queues (up to 8 or 4) per VF.

It incorporates the following modifications:
- Allow zero rx/tx queues to be passed to rte_eth_dev_configure().
  Rationale: in SRIOV mode the PF uses the first free VF for RX/TX.
  If the VF count is 16 or 32, all resources are assigned to VFs, so
  the PF can be used only for configuration.
- Split nb_q_per_pool into nb_rx_q_per_pool and nb_tx_q_per_pool.
  Rationale: the RX and TX queue counts might differ if RX and TX are
  configured in different modes. This allows informing the VF about
  the proper number of queues.
- Extend the mailbox API for DCB mode.

Signed-off-by: Pawel Wodkowski
---
 lib/librte_ether/rte_ethdev.c       | 84 +-
 lib/librte_ether/rte_ethdev.h       |  5 +-
 lib/librte_pmd_e1000/igb_pf.c       |  3 +-
 lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 10 ++--
 lib/librte_pmd_ixgbe/ixgbe_ethdev.h |  1 +
 lib/librte_pmd_ixgbe/ixgbe_pf.c     | 98 ++-
 lib/librte_pmd_ixgbe/ixgbe_rxtx.c   |  7 ++-
 7 files changed, 159 insertions(+), 49 deletions(-)

diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 95f2ceb..4c1a494 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -333,7 +333,7 @@ rte_eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 	dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
 			sizeof(dev->data->rx_queues[0]) * nb_queues,
 			RTE_CACHE_LINE_SIZE);
-	if (dev->data->rx_queues == NULL) {
+	if (dev->data->rx_queues == NULL && nb_queues > 0) {
 		dev->data->nb_rx_queues = 0;
 		return -(ENOMEM);
 	}
@@ -475,7 +475,7 @@ rte_eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 	dev->data->tx_queues = rte_zmalloc("ethdev->tx_queues",
 			sizeof(dev->data->tx_queues[0]) * nb_queues,
 			RTE_CACHE_LINE_SIZE);
-	if (dev->data->tx_queues == NULL) {
+	if (dev->data->tx_queues == NULL && nb_queues > 0) {
 		dev->data->nb_tx_queues = 0;
 		return -(ENOMEM);
 	}
@@ -507,6 +507,7 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		   const struct rte_eth_conf *dev_conf)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	struct rte_eth_dev_info dev_info;
 
 	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
 		/* check multi-queue mode */
@@ -524,11 +525,33 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 			return (-EINVAL);
 		}
 
+		if ((dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) &&
+		    (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB)) {
+			enum rte_eth_nb_pools rx_pools =
+				dev_conf->rx_adv_conf.vmdq_dcb_conf.nb_queue_pools;
+			enum rte_eth_nb_pools tx_pools =
+				dev_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools;
+
+			if (rx_pools != tx_pools) {
+				/* Only equal number of pools is supported when
+				 * DCB+VMDq in SRIOV */
+				PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
+						" SRIOV active, DCB+VMDQ mode, "
+						"number of rx and tx pools is not eqaul\n",
+						port_id);
+				return (-EINVAL);
+			}
+		}
+
+		uint16_t nb_rx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool;
+		uint16_t nb_tx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool;
+
 		switch (dev_conf->rxmode.mq_mode) {
-		case ETH_MQ_RX_VMDQ_RSS:
 		case ETH_MQ_RX_VMDQ_DCB:
+			break;
+		case ETH_MQ_RX_VMDQ_RSS:
 		case ETH_MQ_RX_VMDQ_DCB_RSS:
-			/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
+			/* RSS, DCB+RSS VMDQ in SRIOV mode, not implement yet */
 			PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
 					" SRIOV active, "
 					"unsupported VMDQ mq_mode rx %u\n",
@@ -537,37 +560,32 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		default: /* ETH_MQ_RX_VMDQ_ONLY or ETH_MQ_RX_NONE */
 			/* if nothing mq mode configure, use default scheme */
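The "up to 8 or 4" queues per VF in the commit message comes from dividing the device's queues evenly among the VMDq pools. A minimal sketch of that arithmetic, assuming a Niantic-class (82599) NIC that exposes 128 queues; the helper below is illustrative only and not part of the patch:

```c
#include <stdint.h>

/* Assumption for illustration: an 82599-class NIC has 128 hardware
 * queues. In VMDq+DCB under SRIOV they are split evenly among pools,
 * so 16 pools give 8 queues (TCs) per VF and 32 pools give 4. */
#define NIC_TOTAL_QUEUES 128

static uint16_t queues_per_pool(uint16_t nb_pools)
{
	if (nb_pools == 0)
		return 0;	/* avoid division by zero */
	return NIC_TOTAL_QUEUES / nb_pools;
}
```

This is also why the patch checks that the RX and TX pool counts match: an unequal split would give each VF a different number of RX and TC/TX queues than the mailbox reports.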
[dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Michal Jastrzebski
> Sent: Monday, January 12, 2015 3:43 PM
> To: dev at dpdk.org
> Subject: [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
>
> Date: Mon, 12 Jan 2015 15:39:40 +0100
> Message-Id: <1421073581-6644-2-git-send-email-michalx.k.jastrzebski at intel.com>
> X-Mailer: git-send-email 2.1.1
> In-Reply-To: <1421073581-6644-1-git-send-email-michalx.k.jastrzebski at intel.com>
> References: <1421073581-6644-1-git-send-email-michalx.k.jastrzebski at intel.com>
>
> From: Pawel Wodkowski
>
> > This patch adds support for DCB in SRIOV mode. When no PFC is enabled
> > this feature can be used as multiple queues (up to 8 or 4) per VF.
> >
> > It incorporates the following modifications:
> > - Allow zero rx/tx queues to be passed to rte_eth_dev_configure().
> >   Rationale: in SRIOV mode the PF uses the first free VF for RX/TX.
> >   If the VF count is 16 or 32, all resources are assigned to VFs, so
> >   the PF can be used only for configuration.
> > - Split nb_q_per_pool into nb_rx_q_per_pool and nb_tx_q_per_pool.
> >   Rationale: the RX and TX queue counts might differ if RX and TX are
> >   configured in different modes. This allows informing the VF about
> >   the proper number of queues.
> > - Extend the mailbox API for DCB mode.
> >
> > Signed-off-by: Pawel Wodkowski
> > ---
> > [snip]
[dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
Date: Mon, 12 Jan 2015 15:39:40 +0100
Message-Id: <1421073581-6644-2-git-send-email-michalx.k.jastrzebski at intel.com>
X-Mailer: git-send-email 2.1.1
In-Reply-To: <1421073581-6644-1-git-send-email-michalx.k.jastrzebski at intel.com>
References: <1421073581-6644-1-git-send-email-michalx.k.jastrzebski at intel.com>

From: Pawel Wodkowski

This patch adds support for DCB in SRIOV mode. When no PFC is enabled
this feature can be used as multiple queues (up to 8 or 4) per VF.

It incorporates the following modifications:
- Allow zero rx/tx queues to be passed to rte_eth_dev_configure().
  Rationale: in SRIOV mode the PF uses the first free VF for RX/TX.
  If the VF count is 16 or 32, all resources are assigned to VFs, so
  the PF can be used only for configuration.
- Split nb_q_per_pool into nb_rx_q_per_pool and nb_tx_q_per_pool.
  Rationale: the RX and TX queue counts might differ if RX and TX are
  configured in different modes. This allows informing the VF about
  the proper number of queues.
- Extend the mailbox API for DCB mode.

Signed-off-by: Pawel Wodkowski
---
 lib/librte_ether/rte_ethdev.c       | 84 +-
 lib/librte_ether/rte_ethdev.h       |  5 +-
 lib/librte_pmd_e1000/igb_pf.c       |  3 +-
 lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 10 ++--
 lib/librte_pmd_ixgbe/ixgbe_ethdev.h |  1 +
 lib/librte_pmd_ixgbe/ixgbe_pf.c     | 98 ++-
 lib/librte_pmd_ixgbe/ixgbe_rxtx.c   |  7 ++-
 7 files changed, 159 insertions(+), 49 deletions(-)

[snip]
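The first bullet ("allow zero rx/tx queues") reduces to treating a NULL result from a zero-sized allocation as success, exactly as in the `rte_eth_dev_rx_queue_config()` hunk above. A minimal sketch of that pattern, using plain `calloc` as a stand-in for `rte_zmalloc` (the function name below is hypothetical):

```c
#include <stdint.h>
#include <stdlib.h>

/* Mirrors the reworked check: a NULL return is only an error when
 * memory was actually requested. With nb_queues == 0 (PF used for
 * configuration only) the allocator may legally return NULL, and
 * that must not be reported as ENOMEM. Returns 0 on success, -1 on
 * a genuine allocation failure. */
int alloc_queue_array(void ***queues, uint16_t nb_queues)
{
	*queues = calloc(nb_queues, sizeof(void *));
	if (*queues == NULL && nb_queues > 0)
		return -1;	/* the real code returns -(ENOMEM) */
	return 0;
}
```

The C standard allows `calloc(0, size)` to return either NULL or a unique pointer, which is why the original unconditional NULL check broke the zero-queue case.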