[dpdk-dev] [PATCH 0/6] DPDK PMD for new QLogic FastLinQ QL4xxxx 25G/40G CNAs

2016-03-10 Thread Harish Patil
>>
>>
>>
>>On Sat, Feb 20, 2016 at 07:40:25AM -0800, Harish Patil wrote:
>>> This patch set introduces a DPDK poll mode driver for the new QLogic
>>> FastLinQ QL4 25G/40G-capable family of CNAs, as well as their SR-IOV
>>> Virtual Functions (VFs).
>>>
>>> The overall PMD design includes a common module, called ecore, that deals
>>> with the low-level HW, and an upper-layer portion that provides the glue
>>> logic.
>>>
>>> Specifically, the ecore module contains all of the common logic,
>>> e.g. initialization, cleanup, infrastructure for interrupt handling, link
>>> management, slowpath, etc., as well as protocol-agnostic features, and
>>> supplies an abstraction layer for other modules.
>>>
>>> The higher layer implements the DPDK exported APIs/driver entry points,
>>> interfacing with the common module for configuration/status, and also
>>> implements the fastpath routines.
>>>
>>> Included in the patch set is the supporting documentation and maintainers.
>>> 
>>> Please apply.
>>> 
>>> Thanks,
>>> 
>>> Harish Patil (6):
>>>   qede: add maintainers
>>>   qede: add documentation
>>>   qede: add QLogic PCI ids
>>>   qede: add driver common module
>>>   qede: add driver
>>>   qede: enable PMD build
>>Hi Harish,
>>
>>There are quite a few comments to be addressed on this patchset. Are there
>>plans for a V2 in time for the code freeze deadline later this week?
>>
>>  /Bruce
>>
>>
>
>Hi Bruce,
>Yes, we are working on the V2 series with the comments addressed and will
>send out the patches soon.
>Thanks,
>Harish
>
>

Hi Bruce,
FYI - we have submitted the v2 patch series after incorporating all review
comments.

Thanks,
Harish



[dpdk-dev] [PATCH 0/6] DPDK PMD for new QLogic FastLinQ QL4xxxx 25G/40G CNAs

2016-03-08 Thread Harish Patil
>
>On Sat, Feb 20, 2016 at 07:40:25AM -0800, Harish Patil wrote:
>> This patch set introduces a DPDK poll mode driver for the new QLogic
>> FastLinQ QL4 25G/40G-capable family of CNAs, as well as their SR-IOV
>> Virtual Functions (VFs).
>>
>> The overall PMD design includes a common module, called ecore, that deals
>> with the low-level HW, and an upper-layer portion that provides the glue
>> logic.
>>
>> Specifically, the ecore module contains all of the common logic,
>> e.g. initialization, cleanup, infrastructure for interrupt handling, link
>> management, slowpath, etc., as well as protocol-agnostic features, and
>> supplies an abstraction layer for other modules.
>>
>> The higher layer implements the DPDK exported APIs/driver entry points,
>> interfacing with the common module for configuration/status, and also
>> implements the fastpath routines.
>>
>> Included in the patch set is the supporting documentation and maintainers.
>> 
>> Please apply.
>> 
>> Thanks,
>> 
>> Harish Patil (6):
>>   qede: add maintainers
>>   qede: add documentation
>>   qede: add QLogic PCI ids
>>   qede: add driver common module
>>   qede: add driver
>>   qede: enable PMD build
>Hi Harish,
>
>There are quite a few comments to be addressed on this patchset. Are there
>plans for a V2 in time for the code freeze deadline later this week?
>
>   /Bruce
>
>

Hi Bruce,
Yes, we are working on the V2 series with the comments addressed and will
send out the patches soon.
Thanks,
Harish



[dpdk-dev] [PATCH 0/6] DPDK PMD for new QLogic FastLinQ QL4xxxx 25G/40G CNAs

2016-03-08 Thread Bruce Richardson
On Sat, Feb 20, 2016 at 07:40:25AM -0800, Harish Patil wrote:
> This patch set introduces a DPDK poll mode driver for the new QLogic
> FastLinQ QL4 25G/40G-capable family of CNAs, as well as their SR-IOV
> Virtual Functions (VFs).
> 
> The overall PMD design includes a common module, called ecore, that deals
> with the low-level HW, and an upper-layer portion that provides the glue
> logic.
> 
> Specifically, the ecore module contains all of the common logic,
> e.g. initialization, cleanup, infrastructure for interrupt handling, link
> management, slowpath, etc., as well as protocol-agnostic features, and
> supplies an abstraction layer for other modules.
> 
> The higher layer implements the DPDK exported APIs/driver entry points,
> interfacing with the common module for configuration/status, and also
> implements the fastpath routines.
> 
> Included in the patch set is the supporting documentation and maintainers.
> 
> Please apply.
> 
> Thanks,
> 
> Harish Patil (6):
>   qede: add maintainers
>   qede: add documentation
>   qede: add QLogic PCI ids
>   qede: add driver common module
>   qede: add driver
>   qede: enable PMD build
Hi Harish,

There are quite a few comments to be addressed on this patchset. Are there
plans for a V2 in time for the code freeze deadline later this week?

/Bruce



[dpdk-dev] [PATCH 0/6] DPDK PMD for new QLogic FastLinQ QL4xxxx 25G/40G CNAs

2016-02-23 Thread Harish Patil
>
>2016-02-22 16:47, Harish Patil:
>> >Please could you share some performance numbers?
>> 
>> We have measured ~68 Mpps @ 64B with zero packet drop for the 4x25G adapter
>> running bidirectional RFC traffic.
>
>How many queues/cores are needed to achieve this performance?
Using a single queue/core per port. E.g., running the L2FWD application in
the 4x25G configuration uses 4 cores, with a single queue pair per port.
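
For reference, below is a minimal sketch of a single-queue-pair-per-port
setup using the standard DPDK ethdev API; the descriptor counts, default
port configuration, and helper name are illustrative assumptions rather than
code taken from the qede PMD or the L2FWD application.

#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Configure one RX queue and one TX queue on a port, then start it. */
static int
setup_single_queue_port(uint8_t port, struct rte_mempool *mb_pool)
{
	struct rte_eth_conf port_conf = { 0 };	/* default configuration */
	int ret;

	ret = rte_eth_dev_configure(port, 1 /* RX queues */, 1 /* TX queues */,
				    &port_conf);
	if (ret < 0)
		return ret;

	ret = rte_eth_rx_queue_setup(port, 0, 512, rte_eth_dev_socket_id(port),
				     NULL, mb_pool);
	if (ret < 0)
		return ret;

	ret = rte_eth_tx_queue_setup(port, 0, 512, rte_eth_dev_socket_id(port),
				     NULL);
	if (ret < 0)
		return ret;

	return rte_eth_dev_start(port);
}

Each port would then be polled by its own lcore with rte_eth_rx_burst() and
rte_eth_tx_burst(), giving the 4-core, 4-port arrangement mentioned above.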

>
>> >What is the status about the integration of this driver in other
>> >environments?
>> 
>> Not very sure what you mean here. If you are asking about Linux, then the
>> qede/qed kernel drivers are part of kernel.org.
>> If it's about BSD, then we have compile-tested the current poll mode driver
>> under FreeBSD (64-bit).
>
>OK, so the base driver is shared only between Linux and DPDK?
>Nothing in BSD kernels?
The BSD driver also uses the common base code.

>
>> By the way, the patch 4/6 is still held up (not posted) due to size
>> restrictions.
>> Could you please allow it to be posted?
>
>It was posted but curiously out of the thread:
>   http://dpdk.org/ml/archives/dev/2016-February/033612.html

OK, thanks.
>
>> 
>> 
>> This message and any attached documents contain information from the
>>sending company or its parent company(s), subsidiaries, divisions or
>>branch offices that may be confidential. If you are not the intended
>>recipient, you may not read, copy, distribute, or use this information.
>>If you have received this transmission in error, please notify the
>>sender immediately by reply e-mail and then delete this message.
>
>Please avoid this message --^
>
I have got it fixed at the Exchange level. Hopefully it should be fixed from
this email onwards.



[dpdk-dev] [PATCH 0/6] DPDK PMD for new QLogic FastLinQ QL4xxxx 25G/40G CNAs

2016-02-22 Thread Thomas Monjalon
2016-02-22 16:47, Harish Patil:
> >Please could you share some performance numbers?
> 
> We have measured ~68 Mpps @ 64B with zero packet drop for the 4x25G adapter
> running bidirectional RFC traffic.

How many queues/cores are needed to achieve this performance?

> >What is the status about the integration of this driver in other
> >environments?
> 
> Not very sure what you mean here. If you are asking about Linux, then the
> qede/qed kernel drivers are part of kernel.org.
> If it's about BSD, then we have compile-tested the current poll mode driver
> under FreeBSD (64-bit).

OK, so the base driver is shared only between Linux and DPDK?
Nothing in BSD kernels?

> By the way, the patch 4/6 is still held up (not posted) due to size
> restrictions.
> Could you please allow it to be posted?

It was posted but curiously out of the thread:
http://dpdk.org/ml/archives/dev/2016-February/033612.html

> 
> 
> This message and any attached documents contain information from the sending 
> company or its parent company(s), subsidiaries, divisions or branch offices 
> that may be confidential. If you are not the intended recipient, you may not 
> read, copy, distribute, or use this information. If you have received this 
> transmission in error, please notify the sender immediately by reply e-mail 
> and then delete this message.

Please avoid this message --^


[dpdk-dev] [PATCH 0/6] DPDK PMD for new QLogic FastLinQ QL4xxxx 25G/40G CNAs

2016-02-22 Thread Harish Patil
>
>2016-02-20 07:40, Harish Patil:
>> This patch set introduces a DPDK poll mode driver for the new QLogic
>> FastLinQ QL4 25G/40G-capable family of CNAs, as well as their SR-IOV
>> Virtual Functions (VFs).
>>
>> The overall PMD design includes a common module, called ecore, that deals
>> with the low-level HW, and an upper-layer portion that provides the glue
>> logic.
>>
>> Specifically, the ecore module contains all of the common logic,
>> e.g. initialization, cleanup, infrastructure for interrupt handling, link
>> management, slowpath, etc., as well as protocol-agnostic features, and
>> supplies an abstraction layer for other modules.
>>
>> The higher layer implements the DPDK exported APIs/driver entry points,
>> interfacing with the common module for configuration/status, and also
>> implements the fastpath routines.
>
>A new driver is always good news :)

Thank you!

>Please could you share some performance numbers?

We have measured ~68 Mpps @ 64B with zero packet drop for the 4x25G adapter
running bidirectional RFC traffic.


>
>What is the status about the integration of this driver in other
>environments?

Not very sure what you mean here. If you are asking about Linux, then the
qede/qed kernel drivers are part of kernel.org.
If it's about BSD, then we have compile-tested the current poll mode driver
under FreeBSD (64-bit).


>
>The layer named ecore seems to be what is named a base driver in other
>DPDK drivers. Maybe you could consider renaming the directory to base/ for
>consistency?

Sure

>
>About the format of the patches, I think it's better to split driver imports
>in several pieces to ease reviewing and later reference for specific bugs.
>What do you think of introducing basic features one by one?
>

I don't know how easy it's going to be to split the driver on a
feature-by-feature basis.
Some of the features could be coupled with/dependent on other sections of the
L2 code.
But yes, we shall think through this and get back to you.
By the way, the patch 4/6 is still held up (not posted) due to size
restrictions.
Could you please allow it to be posted?

Thanks,
Harish




This message and any attached documents contain information from the sending 
company or its parent company(s), subsidiaries, divisions or branch offices 
that may be confidential. If you are not the intended recipient, you may not 
read, copy, distribute, or use this information. If you have received this 
transmission in error, please notify the sender immediately by reply e-mail and 
then delete this message.


[dpdk-dev] [PATCH 0/6] DPDK PMD for new QLogic FastLinQ QL4xxxx 25G/40G CNAs

2016-02-20 Thread Thomas Monjalon
2016-02-20 07:40, Harish Patil:
> This patch set introduces a DPDK poll mode driver for the new QLogic
> FastLinQ QL4 25G/40G-capable family of CNAs, as well as their SR-IOV
> Virtual Functions (VFs).
> 
> The overall PMD design includes a common module, called ecore, that deals
> with the low-level HW, and an upper-layer portion that provides the glue
> logic.
> 
> Specifically, the ecore module contains all of the common logic,
> e.g. initialization, cleanup, infrastructure for interrupt handling, link
> management, slowpath, etc., as well as protocol-agnostic features, and
> supplies an abstraction layer for other modules.
> 
> The higher layer implements the DPDK exported APIs/driver entry points,
> interfacing with the common module for configuration/status, and also
> implements the fastpath routines.

A new driver is always good news :)
Please could you share some performance numbers?

What is the status about the integration of this driver in other environments?

The layer named ecore seems to be what is named a base driver in other
DPDK drivers. Maybe you could consider renaming the directory to base/ for
consistency?

About the format of the patches, I think it's better to split driver imports
in several pieces to ease reviewing and later reference for specific bugs.
What do you think of introducing basic features one by one?


[dpdk-dev] [PATCH 0/6] DPDK PMD for new QLogic FastLinQ QL4xxxx 25G/40G CNAs

2016-02-20 Thread Harish Patil
This patch set introduces a DPDK poll mode driver for the new QLogic FastLinQ
QL4 25G/40G-capable family of CNAs, as well as their SR-IOV Virtual Functions
(VFs).

The overall PMD design includes a common module, called ecore, that deals with
the low-level HW, and an upper-layer portion that provides the glue logic.

Specifically, the ecore module contains all of the common logic,
e.g. initialization, cleanup, infrastructure for interrupt handling, link
management, slowpath, etc., as well as protocol-agnostic features, and
supplies an abstraction layer for other modules.
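
As an illustration of the kind of abstraction layer mentioned above, the
following is a minimal sketch of how OS-neutral service macros used by a
common module could be mapped onto DPDK EAL services in the glue layer. The
macro names here are hypothetical stand-ins, not the actual bcm_osal.h
definitions.

#include <rte_malloc.h>
#include <rte_memory.h>
#include <rte_cycles.h>
#include <rte_log.h>

/* Hypothetical OSAL-style shim: the common module calls OS-neutral macros,
 * and the DPDK glue layer maps them onto EAL services. */
#define OSAL_ALLOC(size)   rte_zmalloc("ecore", (size), RTE_CACHE_LINE_SIZE)
#define OSAL_FREE(ptr)     rte_free(ptr)
#define OSAL_UDELAY(us)    rte_delay_us(us)
#define OSAL_LOG(fmt, ...) RTE_LOG(DEBUG, PMD, fmt, ##__VA_ARGS__)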

The higher layer implements the DPDK exported APIs/driver entry points,
interfacing with the common module for configuration/status, and also
implements the fastpath routines.
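
To make the split concrete, here is a minimal sketch (not the actual qede
code) of how such a glue layer typically exposes DPDK driver entry points
that delegate to the common module; the ecore_* prototypes below are
hypothetical stand-ins for the real ecore API.

#include <rte_ethdev.h>

/* Hypothetical common-module entry points; the real ecore API differs. */
int ecore_hw_start(void *edev);
void ecore_hw_stop(void *edev);

/* Glue: translate DPDK entry points into common-module calls. */
static int
qede_dev_start_sketch(struct rte_eth_dev *eth_dev)
{
	return ecore_hw_start(eth_dev->data->dev_private);
}

static void
qede_dev_stop_sketch(struct rte_eth_dev *eth_dev)
{
	ecore_hw_stop(eth_dev->data->dev_private);
}

/* Table of entry points handed to the ethdev layer at probe time, e.g.
 * eth_dev->dev_ops = &qede_eth_dev_ops_sketch; */
static const struct eth_dev_ops qede_eth_dev_ops_sketch = {
	.dev_start = qede_dev_start_sketch,
	.dev_stop  = qede_dev_stop_sketch,
};

The fastpath routines (RX/TX burst functions) would be registered separately
on eth_dev->rx_pkt_burst and eth_dev->tx_pkt_burst.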

Included in the patch set is the supporting documentation and maintainers.

Please apply.

Thanks,

Harish Patil (6):
  qede: add maintainers
  qede: add documentation
  qede: add QLogic PCI ids
  qede: add driver common module
  qede: add driver
  qede: enable PMD build

 MAINTAINERS | 7 +
 config/common_bsdapp|15 +
 config/common_linuxapp  |16 +
 doc/guides/nics/index.rst   | 1 +
 doc/guides/nics/qede.rst|   344 +
 drivers/net/Makefile| 1 +
 drivers/net/qede/LICENSE.qede_pmd   |28 +
 drivers/net/qede/Makefile   |95 +
 drivers/net/qede/ecore/bcm_osal.c   |   130 +
 drivers/net/qede/ecore/bcm_osal.h   |   408 +
 drivers/net/qede/ecore/common_hsi.h |   714 ++
 drivers/net/qede/ecore/ecore.h  |   785 ++
 drivers/net/qede/ecore/ecore_attn_values.h  | 13287 ++
 drivers/net/qede/ecore/ecore_chain.h|   724 ++
 drivers/net/qede/ecore/ecore_cxt.c  |  2164 
 drivers/net/qede/ecore/ecore_cxt.h  |   173 +
 drivers/net/qede/ecore/ecore_cxt_api.h  |79 +
 drivers/net/qede/ecore/ecore_dcbx.c |   950 ++
 drivers/net/qede/ecore/ecore_dcbx.h |55 +
 drivers/net/qede/ecore/ecore_dcbx_api.h |   166 +
 drivers/net/qede/ecore/ecore_dev.c  |  3907 +++
 drivers/net/qede/ecore/ecore_dev_api.h  |   497 +
 drivers/net/qede/ecore/ecore_gtt_reg_addr.h |42 +
 drivers/net/qede/ecore/ecore_gtt_values.h   |33 +
 drivers/net/qede/ecore/ecore_hsi_common.h   |  1912 
 drivers/net/qede/ecore/ecore_hsi_eth.h  |  1912 
 drivers/net/qede/ecore/ecore_hsi_tools.h|  1081 ++
 drivers/net/qede/ecore/ecore_hw.c   |  1000 ++
 drivers/net/qede/ecore/ecore_hw.h   |   273 +
 drivers/net/qede/ecore/ecore_hw_defs.h  |49 +
 drivers/net/qede/ecore/ecore_init_fw_funcs.c|  1275 +++
 drivers/net/qede/ecore/ecore_init_fw_funcs.h|   263 +
 drivers/net/qede/ecore/ecore_init_ops.c |   610 +
 drivers/net/qede/ecore/ecore_init_ops.h |   103 +
 drivers/net/qede/ecore/ecore_int.c  |  2234 
 drivers/net/qede/ecore/ecore_int.h  |   234 +
 drivers/net/qede/ecore/ecore_int_api.h  |   277 +
 drivers/net/qede/ecore/ecore_iov_api.h  |   931 ++
 drivers/net/qede/ecore/ecore_iro.h  |   168 +
 drivers/net/qede/ecore/ecore_iro_values.h   |59 +
 drivers/net/qede/ecore/ecore_l2.c   |  1801 +++
 drivers/net/qede/ecore/ecore_l2.h   |   151 +
 drivers/net/qede/ecore/ecore_l2_api.h   |   401 +
 drivers/net/qede/ecore/ecore_mcp.c  |  1952 
 drivers/net/qede/ecore/ecore_mcp.h  |   304 +
 drivers/net/qede/ecore/ecore_mcp_api.h  |   629 +
 drivers/net/qede/ecore/ecore_proto_if.h |88 +
 drivers/net/qede/ecore/ecore_rt_defs.h  |   449 +
 drivers/net/qede/ecore/ecore_sp_api.h   |42 +
 drivers/net/qede/ecore/ecore_sp_commands.c  |   531 +
 drivers/net/qede/ecore/ecore_sp_commands.h  |   137 +
 drivers/net/qede/ecore/ecore_spq.c  |   989 ++
 drivers/net/qede/ecore/ecore_spq.h  |   302 +
 drivers/net/qede/ecore/ecore_sriov.c|  3422 ++
 drivers/net/qede/ecore/ecore_sriov.h|   390 +
 drivers/net/qede/ecore/ecore_status.h   |30 +
 drivers/net/qede/ecore/ecore_utils.h|31 +
 drivers/net/qede/ecore/ecore_vf.c   |  1319 +++
 drivers/net/qede/ecore/ecore_vf.h   |   415 +
 drivers/net/qede/ecore/ecore_vf_api.h   |   185 +
 drivers/net/qede/ecore/ecore_vfpf_if.h  |   588 +
 drivers/net/qede/ecore/eth_common.h |   526 +
 drivers/net/qede/ecore/mcp_public.h |  1243 ++
 drivers/net/qede/ecore/nvm_cfg.h|   935 ++
 drivers/net/qede/ecore/reg_addr.h   |  1112 ++
 drivers/net/qede/qede_eth_if.c  |   461 +