Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-08 Thread Leon Romanovsky
On Wed, Apr 07, 2021 at 09:59:52PM +, Dexuan Cui wrote:
> > From: Leon Romanovsky 
> > Sent: Wednesday, April 7, 2021 5:45 AM
> > > >
> > > > BTW, you don't need to write { 0 }, the {} is enough.
> > >
> > > Thanks for the suggestion! I'll use {0} in v2.
> > 
> > You missed the point, "{ 0 }" change to be "{}" without 0.
> 
> Got it. Will make the suggested change.

The numbers are not important. If you are curious, read this thread; it
talks about {}, {0}, memset(0, ...) and padding :)
https://lore.kernel.org/linux-rdma/20200730192026.110246-1-yepeilin...@gmail.com/

> 
> FWIW, {0} and { 0 } are still widely used, but it looks like
> {} is indeed preferred:
> 
> $ grep "= {};" drivers/net/  -nr  | wc -l
> 829
> 
> $ grep "= {0};" drivers/net/  -nr  | wc -l
> 708
> 
> $ grep "= {};" kernel/  -nr  | wc -l
> 29
> 
> $ grep "= {0};" kernel/  -nr  | wc -l
> 4


Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-07 Thread Bernd Petrovitsch
Hi all!

On 07/04/2021 23:59, Dexuan Cui wrote:
[...]
> FWIW, {0} and { 0 } are still widely used, but it looks like
> {} is indeed preferred:
> 
> $ grep "= {};" drivers/net/  -nr  | wc -l
> 829

$ egrep -nr "=[[:space:]]*{[[:space:]]*};" drivers/net/  | wc -l
872

> $ grep "= {0};" drivers/net/  -nr  | wc -l
> 708

$ egrep -nr "=[[:space:]]*{[[:space:]]*0[[:space:]]*};" drivers/net/  |
wc -l
1078

> $ grep "= {};" kernel/  -nr  | wc -l
> 29

$ egrep -nr "=[[:space:]]*{[[:space:]]*};" kernel/  | wc -l
45

> $ grep "= {0};" kernel/  -nr  | wc -l
> 4

$ egrep -nr "=[[:space:]]*{[[:space:]]*0[[:space:]]*};" kernel  | wc -l
8

MfG,
Bernd
-- 
Bernd Petrovitsch  Email : be...@petrovitsch.priv.at
 There is NO CLOUD, just other people's computers. - FSFE
 LUGA : http://www.luga.at


RE: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-07 Thread Dexuan Cui
> From: Leon Romanovsky 
> Sent: Wednesday, April 7, 2021 5:45 AM
> > >
> > > BTW, you don't need to write { 0 }, the {} is enough.
> >
> > Thanks for the suggestion! I'll use {0} in v2.
> 
> You missed the point, "{ 0 }" change to be "{}" without 0.

Got it. Will make the suggested change.

FWIW, {0} and { 0 } are still widely used, but it looks like
{} is indeed preferred:

$ grep "= {};" drivers/net/  -nr  | wc -l
829

$ grep "= {0};" drivers/net/  -nr  | wc -l
708

$ grep "= {};" kernel/  -nr  | wc -l
29

$ grep "= {0};" kernel/  -nr  | wc -l
4


Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-07 Thread Leon Romanovsky
On Wed, Apr 07, 2021 at 03:05:26PM +, Haiyang Zhang wrote:
> 
> 
> > -Original Message-
> > From: Leon Romanovsky 
> > Sent: Wednesday, April 7, 2021 10:55 AM
> > To: Haiyang Zhang 
> > Cc: Dexuan Cui ; da...@davemloft.net;
> > k...@kernel.org; KY Srinivasan ; Stephen Hemminger
> > ; wei@kernel.org; Wei Liu
> > ; net...@vger.kernel.org; linux-
> > ker...@vger.kernel.org; linux-hyp...@vger.kernel.org
> > Subject: Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure
> > Network Adapter (MANA)
> > 
> > On Wed, Apr 07, 2021 at 02:41:45PM +, Haiyang Zhang wrote:
> > >
> > >
> > > > -Original Message-
> > > > From: Leon Romanovsky 
> > > > Sent: Wednesday, April 7, 2021 8:51 AM
> > > > To: Dexuan Cui 
> > > > Cc: da...@davemloft.net; k...@kernel.org; KY Srinivasan
> > > > ; Haiyang Zhang ;
> > Stephen
> > > > Hemminger ; wei....@kernel.org; Wei Liu
> > > > ; net...@vger.kernel.org; linux-
> > > > ker...@vger.kernel.org; linux-hyp...@vger.kernel.org
> > > > Subject: Re: [PATCH net-next] net: mana: Add a driver for Microsoft
> > > > Azure Network Adapter (MANA)
> > > >
> > > > On Wed, Apr 07, 2021 at 08:40:13AM +, Dexuan Cui wrote:
> > > > > > From: Leon Romanovsky 
> > > > > > Sent: Wednesday, April 7, 2021 1:10 AM
> > > > > >
> > > > > > <...>
> > > > > >
> > > > > > > +int gdma_verify_vf_version(struct pci_dev *pdev) {
> > > > > > > + struct gdma_context *gc = pci_get_drvdata(pdev);
> > > > > > > + struct gdma_verify_ver_req req = { 0 };
> > > > > > > + struct gdma_verify_ver_resp resp = { 0 };
> > > > > > > + int err;
> > > > > > > +
> > > > > > > + gdma_init_req_hdr(&req.hdr,
> > GDMA_VERIFY_VF_DRIVER_VERSION,
> > > > > > > +   sizeof(req), sizeof(resp));
> > > > > > > +
> > > > > > > + req.protocol_ver_min = GDMA_PROTOCOL_FIRST;
> > > > > > > + req.protocol_ver_max = GDMA_PROTOCOL_LAST;
> > > > > > > +
> > > > > > > + err = gdma_send_request(gc, sizeof(req), &req, sizeof(resp),
> > &resp);
> > > > > > > + if (err || resp.hdr.status) {
> > > > > > > + pr_err("VfVerifyVersionOutput: %d, status=0x%x\n",
> > err,
> > > > > > > +resp.hdr.status);
> > > > > > > + return -EPROTO;
> > > > > > > + }
> > > > > > > +
> > > > > > > + return 0;
> > > > > > > +}
> > > > > >
> > > > > > <...>
> > > > > > > + err = gdma_verify_vf_version(pdev);
> > > > > > > + if (err)
> > > > > > > + goto remove_irq;
> > > > > >
> > > > > > Will this VF driver be used in the guest VM? What will prevent
> > > > > > from users
> > > > to
> > > > > > change it?
> > > > > > I think that such version negotiation scheme is not allowed.
> > > > >
> > > > > Yes, the VF driver is expected to run in a Linux VM that runs on 
> > > > > Azure.
> > > > >
> > > > > Currently gdma_verify_vf_version() just tells the PF driver that
> > > > > the VF
> > > > driver
> > > > > is only able to support GDMA_PROTOCOL_V1, and want to use
> > > > > GDMA_PROTOCOL_V1's message formats to talk to the PF driver later.
> > > > >
> > > > > enum {
> > > > > GDMA_PROTOCOL_UNDEFINED = 0,
> > > > > GDMA_PROTOCOL_V1 = 1,
> > > > > GDMA_PROTOCOL_FIRST = GDMA_PROTOCOL_V1,
> > > > > GDMA_PROTOCOL_LAST = GDMA_PROTOCOL_V1,
> > > > > GDMA_PROTOCOL_VALUE_MAX
> > > > > };
> > > > >
> > > > > The PF driver is supposed to always support GDMA_PROTOCOL_V1, so I
> > > > expect
> > > > > here gdma_verify_vf_version() should succeed. If a user changes
> > > > > the Linux
> > > > VF
> > > > > driver and try to use a protocol version not supported by the PF
> > > > > driver,
> > > > then
> > > > > gdma_verify_vf_version() will fail; later, if the VF driver tries
> > > > > to talk to the
> > > > PF
> > > > > driver using an unsupported message format, the PF driver will
> > > > > return a
> > > > failure.
> > > >
> > > > The worry is not for the current code, but for the future one when
> > > > you will support v2, v3, etc. First, your code will look like
> > > > spaghetti and second, users will try and mix vX with "unsupported"
> > > > commands just for the fun.
> > >
> > > In the future, if the protocol version is updated on the host side,
> > > guests need to support different host versions, because not all hosts
> > > are updated simultaneously. So this negotiation is necessary to learn
> > > the supported versions and decide the proper command version to use.
> > 
> > And how do other paravirtual drivers solve this negotiation scheme?
> 
> I saw some other drivers use version negotiation too, for example:

I see, thanks.

> 
> /**
>  *  ixgbevf_negotiate_api_version_vf - Negotiate supported API version
>  *  @hw: pointer to the HW structure
>  *  @api: integer containing requested API version
>  **/
> static int ixgbevf_negotiate_api_version_vf(struct ixgbe_hw *hw, int api)
> {
> 
> Thanks,
> - Haiyang


RE: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-07 Thread Haiyang Zhang



> -Original Message-
> From: Wei Liu 
> Sent: Wednesday, April 7, 2021 11:01 AM
> To: Haiyang Zhang 
> Cc: Wei Liu ; Dexuan Cui ;
> da...@davemloft.net; k...@kernel.org; KY Srinivasan
> ; Stephen Hemminger ;
> Wei Liu ; net...@vger.kernel.org; linux-
> ker...@vger.kernel.org; linux-hyp...@vger.kernel.org
> Subject: Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure
> Network Adapter (MANA)
> 
> On Wed, Apr 07, 2021 at 02:34:01PM +, Haiyang Zhang wrote:
> >
> >
> > > -Original Message-
> > > From: Wei Liu 
> > > Sent: Wednesday, April 7, 2021 9:17 AM
> > > To: Dexuan Cui 
> > > Cc: da...@davemloft.net; k...@kernel.org; KY Srinivasan
> > > ; Haiyang Zhang ;
> Stephen
> > > Hemminger ; wei@kernel.org; Wei Liu
> > > ; net...@vger.kernel.org; linux-
> > > ker...@vger.kernel.org; linux-hyp...@vger.kernel.org
> > > Subject: Re: [PATCH net-next] net: mana: Add a driver for Microsoft
> > > Azure Network Adapter (MANA)
> > >
> > > On Tue, Apr 06, 2021 at 04:23:21PM -0700, Dexuan Cui wrote:
> > > [...]
> > > > +config MICROSOFT_MANA
> > > > +   tristate "Microsoft Azure Network Adapter (MANA) support"
> > > > +   default m
> > > > +   depends on PCI_MSI
> > > > +   select PCI_HYPERV
> > >
> > > OOI which part of the code requires PCI_HYPERV?
> > >
> > > Asking because I can't immediately find code that looks to be
> > > Hyper-V specific (searching for vmbus etc). This device looks like
> > > any other PCI devices to me.
> >
> > It depends on the VF nic's PCI config space which is presented by the
> pci_hyperv driver.
> 
> I think all that matters is that the PCI bus is able to handle the
> configuration space access, right? Assuming there is an emulated PCI root
> complex which exposes the config space to the driver, will this driver
> still work?
> 
> I'm trying to understand how tightly coupled with Hyper-V PCI this driver is.
> In an alternative universe, Microsoft may suddenly decide to sell this
> hardware and someone wants to pass through a VF via VFIO. I don't see
> how this driver wouldn't work, hence the original question.
> 
> There is no need to change the code. I'm just curious about a tiny detail in
> the implementation.

Currently, the PCI config space only comes from pci_hyperv, so we have this
dependency.

If the PCI config space is presented in other ways in an "alternative
universe", we may need to add other dependencies. And yes, the VF should
continue to work :)

Thanks,
- Haiyang


RE: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-07 Thread Haiyang Zhang



> -Original Message-
> From: Leon Romanovsky 
> Sent: Wednesday, April 7, 2021 10:55 AM
> To: Haiyang Zhang 
> Cc: Dexuan Cui ; da...@davemloft.net;
> k...@kernel.org; KY Srinivasan ; Stephen Hemminger
> ; wei@kernel.org; Wei Liu
> ; net...@vger.kernel.org; linux-
> ker...@vger.kernel.org; linux-hyp...@vger.kernel.org
> Subject: Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure
> Network Adapter (MANA)
> 
> On Wed, Apr 07, 2021 at 02:41:45PM +, Haiyang Zhang wrote:
> >
> >
> > > -Original Message-
> > > From: Leon Romanovsky 
> > > Sent: Wednesday, April 7, 2021 8:51 AM
> > > To: Dexuan Cui 
> > > Cc: da...@davemloft.net; k...@kernel.org; KY Srinivasan
> > > ; Haiyang Zhang ;
> Stephen
> > > Hemminger ; wei@kernel.org; Wei Liu
> > > ; net...@vger.kernel.org; linux-
> > > ker...@vger.kernel.org; linux-hyp...@vger.kernel.org
> > > Subject: Re: [PATCH net-next] net: mana: Add a driver for Microsoft
> > > Azure Network Adapter (MANA)
> > >
> > > On Wed, Apr 07, 2021 at 08:40:13AM +, Dexuan Cui wrote:
> > > > > From: Leon Romanovsky 
> > > > > Sent: Wednesday, April 7, 2021 1:10 AM
> > > > >
> > > > > <...>
> > > > >
> > > > > > +int gdma_verify_vf_version(struct pci_dev *pdev) {
> > > > > > +   struct gdma_context *gc = pci_get_drvdata(pdev);
> > > > > > +   struct gdma_verify_ver_req req = { 0 };
> > > > > > +   struct gdma_verify_ver_resp resp = { 0 };
> > > > > > +   int err;
> > > > > > +
> > > > > > +   gdma_init_req_hdr(&req.hdr,
> GDMA_VERIFY_VF_DRIVER_VERSION,
> > > > > > + sizeof(req), sizeof(resp));
> > > > > > +
> > > > > > +   req.protocol_ver_min = GDMA_PROTOCOL_FIRST;
> > > > > > +   req.protocol_ver_max = GDMA_PROTOCOL_LAST;
> > > > > > +
> > > > > > +   err = gdma_send_request(gc, sizeof(req), &req, sizeof(resp),
> &resp);
> > > > > > +   if (err || resp.hdr.status) {
> > > > > > +   pr_err("VfVerifyVersionOutput: %d, status=0x%x\n",
> err,
> > > > > > +  resp.hdr.status);
> > > > > > +   return -EPROTO;
> > > > > > +   }
> > > > > > +
> > > > > > +   return 0;
> > > > > > +}
> > > > >
> > > > > <...>
> > > > > > +   err = gdma_verify_vf_version(pdev);
> > > > > > +   if (err)
> > > > > > +   goto remove_irq;
> > > > >
> > > > > Will this VF driver be used in the guest VM? What will prevent
> > > > > from users
> > > to
> > > > > change it?
> > > > > I think that such version negotiation scheme is not allowed.
> > > >
> > > > Yes, the VF driver is expected to run in a Linux VM that runs on Azure.
> > > >
> > > > Currently gdma_verify_vf_version() just tells the PF driver that
> > > > the VF
> > > driver
> > > > is only able to support GDMA_PROTOCOL_V1, and want to use
> > > > GDMA_PROTOCOL_V1's message formats to talk to the PF driver later.
> > > >
> > > > enum {
> > > > GDMA_PROTOCOL_UNDEFINED = 0,
> > > > GDMA_PROTOCOL_V1 = 1,
> > > > GDMA_PROTOCOL_FIRST = GDMA_PROTOCOL_V1,
> > > > GDMA_PROTOCOL_LAST = GDMA_PROTOCOL_V1,
> > > > GDMA_PROTOCOL_VALUE_MAX
> > > > };
> > > >
> > > > The PF driver is supposed to always support GDMA_PROTOCOL_V1, so I
> > > expect
> > > > here gdma_verify_vf_version() should succeed. If a user changes
> > > > the Linux
> > > VF
> > > > driver and try to use a protocol version not supported by the PF
> > > > driver,
> > > then
> > > > gdma_verify_vf_version() will fail; later, if the VF driver tries
> > > > to talk to the
> > > PF
> > > > driver using an unsupported message format, the PF driver will
> > > > return a
> > > failure.
> > >
> > > The worry is not for the current code, but for the future one when
> > > you will support v2, v3, etc. First, your code will look like
> > > spaghetti and second, users will try and mix vX with "unsupported"
> > > commands just for the fun.
> >
> > In the future, if the protocol version is updated on the host side,
> > guests need to support different host versions, because not all hosts
> > are updated simultaneously. So this negotiation is necessary to learn
> > the supported versions and decide the proper command version to use.
> 
> And how do other paravirtual drivers solve this negotiation scheme?

I saw some other drivers use version negotiation too, for example:

/**
 *  ixgbevf_negotiate_api_version_vf - Negotiate supported API version
 *  @hw: pointer to the HW structure
 *  @api: integer containing requested API version
 **/
static int ixgbevf_negotiate_api_version_vf(struct ixgbe_hw *hw, int api)
{

Thanks,
- Haiyang


Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-07 Thread Wei Liu
On Wed, Apr 07, 2021 at 02:34:01PM +, Haiyang Zhang wrote:
> 
> 
> > -Original Message-
> > From: Wei Liu 
> > Sent: Wednesday, April 7, 2021 9:17 AM
> > To: Dexuan Cui 
> > Cc: da...@davemloft.net; k...@kernel.org; KY Srinivasan
> > ; Haiyang Zhang ; Stephen
> > Hemminger ; wei@kernel.org; Wei Liu
> > ; net...@vger.kernel.org; linux-
> > ker...@vger.kernel.org; linux-hyp...@vger.kernel.org
> > Subject: Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure
> > Network Adapter (MANA)
> > 
> > On Tue, Apr 06, 2021 at 04:23:21PM -0700, Dexuan Cui wrote:
> > [...]
> > > +config MICROSOFT_MANA
> > > + tristate "Microsoft Azure Network Adapter (MANA) support"
> > > + default m
> > > + depends on PCI_MSI
> > > + select PCI_HYPERV
> > 
> > OOI which part of the code requires PCI_HYPERV?
> > 
> > Asking because I can't immediately find code that looks to be Hyper-V
> > specific (searching for vmbus etc). This device looks like any other PCI 
> > devices
> > to me.
> 
> It depends on the VF nic's PCI config space which is presented by the 
> pci_hyperv driver.

I think all that matters is that the PCI bus is able to handle the
configuration space access, right? Assuming there is an emulated PCI
root complex which exposes the config space to the driver, will this
driver still work?

I'm trying to understand how tightly coupled with Hyper-V PCI this
driver is. In an alternative universe, Microsoft may suddenly decide to
sell this hardware and someone wants to pass through a VF via VFIO. I
don't see how this driver wouldn't work, hence the original question.

There is no need to change the code. I'm just curious about a tiny
detail in the implementation.

Wei.

> 
> Thanks,
> - Haiyang


Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-07 Thread Leon Romanovsky
On Wed, Apr 07, 2021 at 02:41:45PM +, Haiyang Zhang wrote:
> 
> 
> > -Original Message-
> > From: Leon Romanovsky 
> > Sent: Wednesday, April 7, 2021 8:51 AM
> > To: Dexuan Cui 
> > Cc: da...@davemloft.net; k...@kernel.org; KY Srinivasan
> > ; Haiyang Zhang ; Stephen
> > Hemminger ; wei@kernel.org; Wei Liu
> > ; net...@vger.kernel.org; linux-
> > ker...@vger.kernel.org; linux-hyp...@vger.kernel.org
> > Subject: Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure
> > Network Adapter (MANA)
> > 
> > On Wed, Apr 07, 2021 at 08:40:13AM +, Dexuan Cui wrote:
> > > > From: Leon Romanovsky 
> > > > Sent: Wednesday, April 7, 2021 1:10 AM
> > > >
> > > > <...>
> > > >
> > > > > +int gdma_verify_vf_version(struct pci_dev *pdev)
> > > > > +{
> > > > > + struct gdma_context *gc = pci_get_drvdata(pdev);
> > > > > + struct gdma_verify_ver_req req = { 0 };
> > > > > + struct gdma_verify_ver_resp resp = { 0 };
> > > > > + int err;
> > > > > +
> > > > > + gdma_init_req_hdr(&req.hdr, GDMA_VERIFY_VF_DRIVER_VERSION,
> > > > > +   sizeof(req), sizeof(resp));
> > > > > +
> > > > > + req.protocol_ver_min = GDMA_PROTOCOL_FIRST;
> > > > > + req.protocol_ver_max = GDMA_PROTOCOL_LAST;
> > > > > +
> > > > > + err = gdma_send_request(gc, sizeof(req), &req, sizeof(resp), 
> > > > > &resp);
> > > > > + if (err || resp.hdr.status) {
> > > > > + pr_err("VfVerifyVersionOutput: %d, status=0x%x\n", err,
> > > > > +resp.hdr.status);
> > > > > + return -EPROTO;
> > > > > + }
> > > > > +
> > > > > + return 0;
> > > > > +}
> > > >
> > > > <...>
> > > > > + err = gdma_verify_vf_version(pdev);
> > > > > + if (err)
> > > > > + goto remove_irq;
> > > >
> > > > Will this VF driver be used in the guest VM? What will prevent from 
> > > > users
> > to
> > > > change it?
> > > > I think that such version negotiation scheme is not allowed.
> > >
> > > Yes, the VF driver is expected to run in a Linux VM that runs on Azure.
> > >
> > > Currently gdma_verify_vf_version() just tells the PF driver that the VF
> > driver
> > > is only able to support GDMA_PROTOCOL_V1, and want to use
> > > GDMA_PROTOCOL_V1's message formats to talk to the PF driver later.
> > >
> > > enum {
> > > GDMA_PROTOCOL_UNDEFINED = 0,
> > > GDMA_PROTOCOL_V1 = 1,
> > > GDMA_PROTOCOL_FIRST = GDMA_PROTOCOL_V1,
> > > GDMA_PROTOCOL_LAST = GDMA_PROTOCOL_V1,
> > > GDMA_PROTOCOL_VALUE_MAX
> > > };
> > >
> > > The PF driver is supposed to always support GDMA_PROTOCOL_V1, so I
> > expect
> > > here gdma_verify_vf_version() should succeed. If a user changes the Linux
> > VF
> > > driver and try to use a protocol version not supported by the PF driver,
> > then
> > > gdma_verify_vf_version() will fail; later, if the VF driver tries to talk 
> > > to the
> > PF
> > > driver using an unsupported message format, the PF driver will return a
> > failure.
> > 
> > The worry is not for the current code, but for the future one when you will
> > support v2, v3, etc. First, your code will look like spaghetti and
> > second, users will try and mix vX with "unsupported" commands just for the
> > fun.
> 
> In the future, if the protocol version is updated on the host side, guests
> need to support different host versions, because not all hosts are updated
> simultaneously. So this negotiation is necessary to learn the supported
> versions and decide the proper command version to use.

And how do other paravirtual drivers solve this negotiation scheme?

> 
> If any user tries "unsupported commands just for the fun", the host will
> deny them and return an error.
> 
> Thanks,
> - Haiyang


RE: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-07 Thread Haiyang Zhang



> -Original Message-
> From: Leon Romanovsky 
> Sent: Wednesday, April 7, 2021 8:51 AM
> To: Dexuan Cui 
> Cc: da...@davemloft.net; k...@kernel.org; KY Srinivasan
> ; Haiyang Zhang ; Stephen
> Hemminger ; wei@kernel.org; Wei Liu
> ; net...@vger.kernel.org; linux-
> ker...@vger.kernel.org; linux-hyp...@vger.kernel.org
> Subject: Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure
> Network Adapter (MANA)
> 
> On Wed, Apr 07, 2021 at 08:40:13AM +, Dexuan Cui wrote:
> > > From: Leon Romanovsky 
> > > Sent: Wednesday, April 7, 2021 1:10 AM
> > >
> > > <...>
> > >
> > > > +int gdma_verify_vf_version(struct pci_dev *pdev)
> > > > +{
> > > > +   struct gdma_context *gc = pci_get_drvdata(pdev);
> > > > +   struct gdma_verify_ver_req req = { 0 };
> > > > +   struct gdma_verify_ver_resp resp = { 0 };
> > > > +   int err;
> > > > +
> > > > +   gdma_init_req_hdr(&req.hdr, GDMA_VERIFY_VF_DRIVER_VERSION,
> > > > + sizeof(req), sizeof(resp));
> > > > +
> > > > +   req.protocol_ver_min = GDMA_PROTOCOL_FIRST;
> > > > +   req.protocol_ver_max = GDMA_PROTOCOL_LAST;
> > > > +
> > > > +   err = gdma_send_request(gc, sizeof(req), &req, sizeof(resp), 
> > > > &resp);
> > > > +   if (err || resp.hdr.status) {
> > > > +   pr_err("VfVerifyVersionOutput: %d, status=0x%x\n", err,
> > > > +  resp.hdr.status);
> > > > +   return -EPROTO;
> > > > +   }
> > > > +
> > > > +   return 0;
> > > > +}
> > >
> > > <...>
> > > > +   err = gdma_verify_vf_version(pdev);
> > > > +   if (err)
> > > > +   goto remove_irq;
> > >
> > > Will this VF driver be used in the guest VM? What will prevent from users
> to
> > > change it?
> > > I think that such version negotiation scheme is not allowed.
> >
> > Yes, the VF driver is expected to run in a Linux VM that runs on Azure.
> >
> > Currently gdma_verify_vf_version() just tells the PF driver that the VF
> driver
> > is only able to support GDMA_PROTOCOL_V1, and want to use
> > GDMA_PROTOCOL_V1's message formats to talk to the PF driver later.
> >
> > enum {
> > GDMA_PROTOCOL_UNDEFINED = 0,
> > GDMA_PROTOCOL_V1 = 1,
> > GDMA_PROTOCOL_FIRST = GDMA_PROTOCOL_V1,
> > GDMA_PROTOCOL_LAST = GDMA_PROTOCOL_V1,
> > GDMA_PROTOCOL_VALUE_MAX
> > };
> >
> > The PF driver is supposed to always support GDMA_PROTOCOL_V1, so I
> expect
> > here gdma_verify_vf_version() should succeed. If a user changes the Linux
> VF
> > driver and try to use a protocol version not supported by the PF driver,
> then
> > gdma_verify_vf_version() will fail; later, if the VF driver tries to talk 
> > to the
> PF
> > driver using an unsupported message format, the PF driver will return a
> failure.
> 
> The worry is not for the current code, but for the future one when you will
> support v2, v3, etc. First, your code will look like spaghetti and
> second, users will try and mix vX with "unsupported" commands just for the
> fun.

In the future, if the protocol version is updated on the host side, guests
need to support different host versions, because not all hosts are updated
simultaneously. So this negotiation is necessary to learn the supported
versions and decide the proper command version to use.

If any user tries "unsupported commands just for the fun", the host will
deny them and return an error.

Thanks,
- Haiyang


RE: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-07 Thread Haiyang Zhang



> -Original Message-
> From: Wei Liu 
> Sent: Wednesday, April 7, 2021 9:17 AM
> To: Dexuan Cui 
> Cc: da...@davemloft.net; k...@kernel.org; KY Srinivasan
> ; Haiyang Zhang ; Stephen
> Hemminger ; wei@kernel.org; Wei Liu
> ; net...@vger.kernel.org; linux-
> ker...@vger.kernel.org; linux-hyp...@vger.kernel.org
> Subject: Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure
> Network Adapter (MANA)
> 
> On Tue, Apr 06, 2021 at 04:23:21PM -0700, Dexuan Cui wrote:
> [...]
> > +config MICROSOFT_MANA
> > +   tristate "Microsoft Azure Network Adapter (MANA) support"
> > +   default m
> > +   depends on PCI_MSI
> > +   select PCI_HYPERV
> 
> OOI which part of the code requires PCI_HYPERV?
> 
> Asking because I can't immediately find code that looks to be Hyper-V
> specific (searching for vmbus etc). This device looks like any other PCI 
> devices
> to me.

It depends on the VF nic's PCI config space which is presented by the 
pci_hyperv driver.

Thanks,
- Haiyang


Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-07 Thread Wei Liu
On Tue, Apr 06, 2021 at 04:23:21PM -0700, Dexuan Cui wrote:
[...]
> +config MICROSOFT_MANA
> + tristate "Microsoft Azure Network Adapter (MANA) support"
> + default m
> + depends on PCI_MSI
> + select PCI_HYPERV

OOI which part of the code requires PCI_HYPERV?

Asking because I can't immediately find code that looks to be Hyper-V
specific (searching for vmbus etc). This device looks like any
other PCI devices to me.

Wei.


Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-07 Thread Wei Liu
On Wed, Apr 07, 2021 at 08:08:59AM +, Dexuan Cui wrote:
> > From: kernel test robot 
> > Sent: Tuesday, April 6, 2021 6:31 PM
> > ...
> > Hi Dexuan, 
> > I love your patch! Perhaps something to improve:
> > 
> > All warnings (new ones prefixed by >>):
> > 
> >drivers/pci/controller/pci-hyperv.c: In function 'hv_irq_unmask':
> >drivers/pci/controller/pci-hyperv.c:1220:2: error: implicit declaration 
> > of
> > function 'hv_set_msi_entry_from_desc'
> > [-Werror=implicit-function-declaration]
> > 1220 |  hv_set_msi_entry_from_desc(&params->int_entry.msi_entry,
> > msi_desc);
> 
> This build error looks strange, because the patch doesn't even touch the 
> driver
> drivers/pci/controller/pci-hyperv.c.
> 

I think this is normal. In my experience the bot doesn't restrict itself
to the changed code.

Wei.


Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-07 Thread Leon Romanovsky
On Wed, Apr 07, 2021 at 08:40:13AM +, Dexuan Cui wrote:
> > From: Leon Romanovsky 
> > Sent: Wednesday, April 7, 2021 1:10 AM
> > 
> > <...>
> > 
> > > +int gdma_verify_vf_version(struct pci_dev *pdev)
> > > +{
> > > + struct gdma_context *gc = pci_get_drvdata(pdev);
> > > + struct gdma_verify_ver_req req = { 0 };
> > > + struct gdma_verify_ver_resp resp = { 0 };
> > > + int err;
> > > +
> > > + gdma_init_req_hdr(&req.hdr, GDMA_VERIFY_VF_DRIVER_VERSION,
> > > +   sizeof(req), sizeof(resp));
> > > +
> > > + req.protocol_ver_min = GDMA_PROTOCOL_FIRST;
> > > + req.protocol_ver_max = GDMA_PROTOCOL_LAST;
> > > +
> > > + err = gdma_send_request(gc, sizeof(req), &req, sizeof(resp), &resp);
> > > + if (err || resp.hdr.status) {
> > > + pr_err("VfVerifyVersionOutput: %d, status=0x%x\n", err,
> > > +resp.hdr.status);
> > > + return -EPROTO;
> > > + }
> > > +
> > > + return 0;
> > > +}
> > 
> > <...>
> > > + err = gdma_verify_vf_version(pdev);
> > > + if (err)
> > > + goto remove_irq;
> > 
> > Will this VF driver be used in the guest VM? What will prevent from users to
> > change it?
> > I think that such version negotiation scheme is not allowed.
> 
> Yes, the VF driver is expected to run in a Linux VM that runs on Azure.
> 
> Currently gdma_verify_vf_version() just tells the PF driver that the VF driver
> is only able to support GDMA_PROTOCOL_V1, and want to use
> GDMA_PROTOCOL_V1's message formats to talk to the PF driver later.
> 
> enum {
> GDMA_PROTOCOL_UNDEFINED = 0,
> GDMA_PROTOCOL_V1 = 1,
> GDMA_PROTOCOL_FIRST = GDMA_PROTOCOL_V1,
> GDMA_PROTOCOL_LAST = GDMA_PROTOCOL_V1,
> GDMA_PROTOCOL_VALUE_MAX
> };
> 
> The PF driver is supposed to always support GDMA_PROTOCOL_V1, so I expect
> here gdma_verify_vf_version() should succeed. If a user changes the Linux VF
> driver and try to use a protocol version not supported by the PF driver, then
> gdma_verify_vf_version() will fail; later, if the VF driver tries to talk to 
> the PF
> driver using an unsupported message format, the PF driver will return a 
> failure.

The worry is not for the current code, but for the future one when you will
support v2, v3, etc. First, your code will look like spaghetti and
second, users will try and mix vX with "unsupported" commands just for the
fun.

Thanks

> 
> Thanks,
> -- Dexuan


Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-07 Thread Leon Romanovsky
On Wed, Apr 07, 2021 at 08:28:45AM +, Dexuan Cui wrote:
> > From: Leon Romanovsky 
> > Sent: Wednesday, April 7, 2021 1:15 AM
> > > ...
> > > int gdma_test_eq(struct gdma_context *gc, struct gdma_queue *eq)
> > > {
> > > struct gdma_generate_test_event_req req = { 0 };
> > > struct gdma_general_resp resp = { 0 };
> > 
> > BTW, you don't need to write { 0 }, the {} is enough.
>  
> Thanks for the suggestion! I'll use {0} in v2. 

You missed the point, "{ 0 }" change to be "{}" without 0.

Thanks


Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-07 Thread Andrew Lunn
> And some variable definitions cannot follow the reverse christmas tree
> style due to dependencies, e.g.
> static void hwc_init_event_handler(void *ctx, struct gdma_queue *q_self,
>struct gdma_event *event) 
> {
> struct hw_channel_context *hwc = ctx;
> struct gdma_dev *gd = hwc->gdma_dev;
> struct gdma_context *gc = gdma_dev_to_context(gd);
> 
> I failed to find the reverse christmas tree rule in the Documentation/ 
> folder. I knew the rule and I thought it was documented there,
> but it looks like it's not. So my understanding is that in general we
> should follow the style, but there can be exceptions due to
> dependencies or logical grouping.

I expect DaveM will tell you to move gdma_dev_to_context(gd) into the
body of the function. Or if you have this pattern a lot, add a macro
gdma_ctx_to_context().

Reverse Christmas tree is not in the main Coding Style documentation,
but it is expected for netdev.

Andrew


RE: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-07 Thread Dexuan Cui
> From: Leon Romanovsky 
> Sent: Wednesday, April 7, 2021 1:10 AM
> 
> <...>
> 
> > +int gdma_verify_vf_version(struct pci_dev *pdev)
> > +{
> > +   struct gdma_context *gc = pci_get_drvdata(pdev);
> > +   struct gdma_verify_ver_req req = { 0 };
> > +   struct gdma_verify_ver_resp resp = { 0 };
> > +   int err;
> > +
> > +   gdma_init_req_hdr(&req.hdr, GDMA_VERIFY_VF_DRIVER_VERSION,
> > + sizeof(req), sizeof(resp));
> > +
> > +   req.protocol_ver_min = GDMA_PROTOCOL_FIRST;
> > +   req.protocol_ver_max = GDMA_PROTOCOL_LAST;
> > +
> > +   err = gdma_send_request(gc, sizeof(req), &req, sizeof(resp), &resp);
> > +   if (err || resp.hdr.status) {
> > +   pr_err("VfVerifyVersionOutput: %d, status=0x%x\n", err,
> > +  resp.hdr.status);
> > +   return -EPROTO;
> > +   }
> > +
> > +   return 0;
> > +}
> 
> <...>
> > +   err = gdma_verify_vf_version(pdev);
> > +   if (err)
> > +   goto remove_irq;
> 
> Will this VF driver be used in the guest VM? What will prevent from users to
> change it?
> I think that such version negotiation scheme is not allowed.

Yes, the VF driver is expected to run in a Linux VM that runs on Azure.

Currently gdma_verify_vf_version() just tells the PF driver that the VF driver
is only able to support GDMA_PROTOCOL_V1, and wants to use
GDMA_PROTOCOL_V1's message formats to talk to the PF driver later.

enum {
GDMA_PROTOCOL_UNDEFINED = 0,
GDMA_PROTOCOL_V1 = 1,
GDMA_PROTOCOL_FIRST = GDMA_PROTOCOL_V1,
GDMA_PROTOCOL_LAST = GDMA_PROTOCOL_V1,
GDMA_PROTOCOL_VALUE_MAX
};

The PF driver is supposed to always support GDMA_PROTOCOL_V1, so I expect
here gdma_verify_vf_version() should succeed. If a user changes the Linux VF
driver and tries to use a protocol version not supported by the PF driver, then
gdma_verify_vf_version() will fail; later, if the VF driver tries to talk to 
the PF
driver using an unsupported message format, the PF driver will return a failure.

Thanks,
-- Dexuan


RE: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-07 Thread Dexuan Cui
> From: Leon Romanovsky 
> Sent: Wednesday, April 7, 2021 1:15 AM
> > ...
> > int gdma_test_eq(struct gdma_context *gc, struct gdma_queue *eq)
> > {
> > struct gdma_generate_test_event_req req = { 0 };
> > struct gdma_general_resp resp = { 0 };
> 
> BTW, you don't need to write { 0 }, the {} is enough.
 
Thanks for the suggestion! I'll use {0} in v2. 

BTW, looks like both are widely used in the kernel. Maybe someone
should update scripts/checkpatch.pl to add a warning against { 0 } in
new code, if {0} is preferred. :-)

dexuan@localhost:~/linux$ grep "= { 0 };" kernel/ -nr | wc -l
4
dexuan@localhost:~/linux$  grep "= {0};" kernel/ -nr | wc -l
4

dexuan@localhost:~/linux$ grep "= { 0 };" Documentation/  -nr
Documentation/networking/page_pool.rst:117:struct page_pool_params pp_params = { 0 };

dexuan@localhost:~/linux$ grep "= { 0 };" drivers/ -nr | wc -l
1085
dexuan@localhost:~/linux$ grep "= {0};" drivers/ -nr | wc -l
1321

dexuan@localhost:~/linux$ grep "= { 0 };" drivers/net/ -nr | wc -l
408
dexuan@localhost:~/linux$  grep "= {0};" drivers/net/ -nr | wc -l
708


Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-07 Thread Leon Romanovsky
On Wed, Apr 07, 2021 at 08:02:17AM +, Dexuan Cui wrote:
> > From: Andrew Lunn 
> > Sent: Tuesday, April 6, 2021 6:08 PM
> > To: Dexuan Cui 
> > 
> > > +static int gdma_query_max_resources(struct pci_dev *pdev)
> > > +{
> > > + struct gdma_context *gc = pci_get_drvdata(pdev);
> > > + struct gdma_general_req req = { 0 };
> > > + struct gdma_query_max_resources_resp resp = { 0 };
> > > + int err;
> > 
> > Network drivers need to use reverse christmas tree. I spotted a number
> > of functions getting this wrong. Please review the whole driver.
> 
> Hi Andrew, thanks for the quick comments!
> 
I think in general the patch follows the reverse christmas tree style.
> 
> For the function you pointed out, I didn't follow the reverse
> christmas tree style because I wanted to keep the variable defines
> more natural, i.e. first define 'req' and then 'resp'.
> 
> I can swap the 2 lines here, i.e. define 'resp' first, but this looks a little
> strange to me, because in some other functions the 'req' should be
> defined first, e.g. 
> 
> int gdma_test_eq(struct gdma_context *gc, struct gdma_queue *eq)
> {
> struct gdma_generate_test_event_req req = { 0 };
> struct gdma_general_resp resp = { 0 };

BTW, you don't need to write { 0 }, the {} is enough.

Thanks


Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-07 Thread Leon Romanovsky
On Tue, Apr 06, 2021 at 04:23:21PM -0700, Dexuan Cui wrote:
> Add a VF driver for Microsoft Azure Network Adapter (MANA) that will be
> available in the future.
> 
> Co-developed-by: Haiyang Zhang 
> Signed-off-by: Haiyang Zhang 
> Signed-off-by: Dexuan Cui 
> ---
>  MAINTAINERS   |4 +-
>  drivers/net/ethernet/Kconfig  |1 +
>  drivers/net/ethernet/Makefile |1 +
>  drivers/net/ethernet/microsoft/Kconfig|   29 +
>  drivers/net/ethernet/microsoft/Makefile   |5 +
>  drivers/net/ethernet/microsoft/mana/Makefile  |6 +
>  drivers/net/ethernet/microsoft/mana/gdma.h|  731 +++
>  .../net/ethernet/microsoft/mana/gdma_main.c   | 1500 +
>  .../net/ethernet/microsoft/mana/hw_channel.c  |  851 
>  .../net/ethernet/microsoft/mana/hw_channel.h  |  181 ++
>  drivers/net/ethernet/microsoft/mana/mana.h|  529 +
>  drivers/net/ethernet/microsoft/mana/mana_en.c | 1861 +
>  .../ethernet/microsoft/mana/mana_ethtool.c|  276 +++
>  .../net/ethernet/microsoft/mana/shm_channel.c |  290 +++
>  .../net/ethernet/microsoft/mana/shm_channel.h |   19 +
>  15 files changed, 6283 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/net/ethernet/microsoft/Kconfig
>  create mode 100644 drivers/net/ethernet/microsoft/Makefile
>  create mode 100644 drivers/net/ethernet/microsoft/mana/Makefile
>  create mode 100644 drivers/net/ethernet/microsoft/mana/gdma.h
>  create mode 100644 drivers/net/ethernet/microsoft/mana/gdma_main.c
>  create mode 100644 drivers/net/ethernet/microsoft/mana/hw_channel.c
>  create mode 100644 drivers/net/ethernet/microsoft/mana/hw_channel.h
>  create mode 100644 drivers/net/ethernet/microsoft/mana/mana.h
>  create mode 100644 drivers/net/ethernet/microsoft/mana/mana_en.c
>  create mode 100644 drivers/net/ethernet/microsoft/mana/mana_ethtool.c
>  create mode 100644 drivers/net/ethernet/microsoft/mana/shm_channel.c
>  create mode 100644 drivers/net/ethernet/microsoft/mana/shm_channel.h

<...>

> +int gdma_verify_vf_version(struct pci_dev *pdev)
> +{
> + struct gdma_context *gc = pci_get_drvdata(pdev);
> + struct gdma_verify_ver_req req = { 0 };
> + struct gdma_verify_ver_resp resp = { 0 };
> + int err;
> +
> + gdma_init_req_hdr(&req.hdr, GDMA_VERIFY_VF_DRIVER_VERSION,
> +   sizeof(req), sizeof(resp));
> +
> + req.protocol_ver_min = GDMA_PROTOCOL_FIRST;
> + req.protocol_ver_max = GDMA_PROTOCOL_LAST;
> +
> + err = gdma_send_request(gc, sizeof(req), &req, sizeof(resp), &resp);
> + if (err || resp.hdr.status) {
> + pr_err("VfVerifyVersionOutput: %d, status=0x%x\n", err,
> +resp.hdr.status);
> + return -EPROTO;
> + }
> +
> + return 0;
> +}

<...>

> +static int gdma_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
> +{
> + struct gdma_context *gc;
> + void __iomem *bar0_va;
> + int bar = 0;
> + int err;
> +
> + err = pci_enable_device(pdev);
> + if (err)
> + return -ENXIO;
> +
> + pci_set_master(pdev);
> +
> + err = pci_request_regions(pdev, "gdma");
> + if (err)
> + goto disable_dev;
> +
> + err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
> + if (err)
> + goto release_region;
> +
> + err = -ENOMEM;
> + gc = vzalloc(sizeof(*gc));
> + if (!gc)
> + goto release_region;
> +
> + bar0_va = pci_iomap(pdev, bar, 0);
> + if (!bar0_va)
> + goto free_gc;
> +
> + gc->bar0_va = bar0_va;
> + gc->pci_dev = pdev;
> +
> + pci_set_drvdata(pdev, gc);
> +
> + gdma_init_registers(pdev);
> +
> + shm_channel_init(&gc->shm_channel, gc->shm_base);
> +
> + err = gdma_setup_irqs(pdev);
> + if (err)
> + goto unmap_bar;
> +
> + mutex_init(&gc->eq_test_event_mutex);
> +
> + err = hwc_create_channel(gc);
> + if (err)
> + goto remove_irq;
> +
> + err = gdma_verify_vf_version(pdev);
> + if (err)
> + goto remove_irq;

Will this VF driver be used in the guest VM? What will prevent from users to 
change it? 
I think that such version negotiation scheme is not allowed.

Thanks


RE: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-07 Thread Dexuan Cui
> From: kernel test robot 
> Sent: Tuesday, April 6, 2021 6:31 PM
> ...
> Hi Dexuan, 
> I love your patch! Perhaps something to improve:
> 
> All warnings (new ones prefixed by >>):
> 
>drivers/pci/controller/pci-hyperv.c: In function 'hv_irq_unmask':
>drivers/pci/controller/pci-hyperv.c:1220:2: error: implicit declaration of
> function 'hv_set_msi_entry_from_desc'
> [-Werror=implicit-function-declaration]
1220 |  hv_set_msi_entry_from_desc(&params->int_entry.msi_entry, msi_desc);

This build error looks strange, because the patch doesn't even touch the driver
drivers/pci/controller/pci-hyperv.c.

I'll try to repro the build failure, and the other 2 failures in 2 separate
emails, and figure out what's happening.

PS, I tested building the patch in a fresh Ubuntu 20.04 VM and it was successful.

Thanks,
-- Dexuan


RE: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-07 Thread Dexuan Cui
> From: Andrew Lunn 
> Sent: Tuesday, April 6, 2021 6:08 PM
> To: Dexuan Cui 
> 
> > +static int gdma_query_max_resources(struct pci_dev *pdev)
> > +{
> > +   struct gdma_context *gc = pci_get_drvdata(pdev);
> > +   struct gdma_general_req req = { 0 };
> > +   struct gdma_query_max_resources_resp resp = { 0 };
> > +   int err;
> 
> Network drivers need to use reverse christmas tree. I spotted a number
> of functions getting this wrong. Please review the whole driver.

Hi Andrew, thanks for the quick comments!

I think in general the patch follows the reverse christmas tree style.

For the function you pointed out, I didn't follow the reverse
christmas tree style because I wanted to keep the variable defines
more natural, i.e. first define 'req' and then 'resp'.

I can swap the 2 lines here, i.e. define 'resp' first, but this looks a little
strange to me, because in some other functions the 'req' should be
defined first, e.g. 

int gdma_test_eq(struct gdma_context *gc, struct gdma_queue *eq)
{
struct gdma_generate_test_event_req req = { 0 };
struct gdma_general_resp resp = { 0 };


And, some variable defines can not follow the reverse christmas tree
style due to dependency, e.g.
static void hwc_init_event_handler(void *ctx, struct gdma_queue *q_self,
   struct gdma_event *event) 
{
struct hw_channel_context *hwc = ctx;
struct gdma_dev *gd = hwc->gdma_dev;
struct gdma_context *gc = gdma_dev_to_context(gd);

I failed to find the reverse christmas tree rule in the Documentation/ 
folder. I knew the rule and I thought it was documented there,
but it looks like it's not. So my understanding is that in general we
should follow the style, but there can be exceptions due to
dependencies or logical grouping.

> > +   gdma_init_req_hdr(&req.hdr, GDMA_QUERY_MAX_RESOURCES,
> > + sizeof(req), sizeof(resp));
> > +
> > +   err = gdma_send_request(gc, sizeof(req), &req, sizeof(resp), &resp);
> > +   if (err || resp.hdr.status) {
> > +   pr_err("%s, line %d: err=%d, err=0x%x\n", __func__, __LINE__,
> > +  err, resp.hdr.status);
> 
> I expect checkpatch complained about this. Don't use pr_err(), use
> dev_err(pdev->dev, ... or netdev_err(ndev, ... You should always have
> access to dev or ndev, so please change all pr_ calls.

I did run scripts/checkpatch.pl and it reported no error/warning. :-)

But I think you're correct. I'll change the pr_* to dev_* or netdev_*.
 
> > +static unsigned int num_queues = ANA_DEFAULT_NUM_QUEUE;
> > +module_param(num_queues, uint, 0444);
> 
> No module parameters please.
> 
>Andrew

This parameter was mostly for debugging purpose. I can remove it.

BTW, I do remember some maintainers asking people not to add module
parameters unless that's absolutely necessary, but I don't remember if
the rule applies to only the net subsystem or to the whole drivers/
folder. It looks like the rule was also not documented in the
Documentation/ folder? Please let me know if such a rule is explicitly
defined somewhere.

Thanks,
-- Dexuan


Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-06 Thread kernel test robot
Hi Dexuan,

I love your patch! Perhaps something to improve:

[auto build test WARNING on net-next/master]

url:
https://github.com/0day-ci/linux/commits/Dexuan-Cui/net-mana-Add-a-driver-for-Microsoft-Azure-Network-Adapter-MANA/20210407-072552
base:   https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git 
cc0626c2aaed8e475efdd85fa374b497a7192e35
config: arc-allyesconfig (attached as .config)
compiler: arceb-elf-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
wget 
https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O 
~/bin/make.cross
chmod +x ~/bin/make.cross
# 
https://github.com/0day-ci/linux/commit/f086d8bc693c2686de24a81398e49496ab3747a9
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review 
Dexuan-Cui/net-mana-Add-a-driver-for-Microsoft-Azure-Network-Adapter-MANA/20210407-072552
git checkout f086d8bc693c2686de24a81398e49496ab3747a9
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=arc 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 

All warnings (new ones prefixed by >>):

   drivers/net/ethernet/microsoft/mana/hw_channel.c: In function 
'hwc_post_rx_wqe':
   drivers/net/ethernet/microsoft/mana/hw_channel.c:88:17: warning: cast from 
pointer to integer of different size [-Wpointer-to-int-cast]
  88 |  sge->address = (u64)req->buf_sge_addr;
 | ^
   drivers/net/ethernet/microsoft/mana/hw_channel.c: In function 
'hwc_alloc_dma_buf':
>> drivers/net/ethernet/microsoft/mana/hw_channel.c:426:12: warning: cast to 
>> pointer from integer of different size [-Wint-to-pointer-cast]
 426 |  base_pa = (u8 *)dma_buf->mem_info.dma_handle;
 |^
   drivers/net/ethernet/microsoft/mana/hw_channel.c: In function 
'hwc_post_tx_wqe':
   drivers/net/ethernet/microsoft/mana/hw_channel.c:542:17: warning: cast from 
pointer to integer of different size [-Wpointer-to-int-cast]
 542 |  sge->address = (u64)req->buf_sge_addr;
 | ^

Kconfig warnings: (for reference only)
   WARNING: unmet direct dependencies detected for PCI_HYPERV
   Depends on PCI && X86_64 && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN && SYSFS
   Selected by
   - MICROSOFT_MANA && NETDEVICES && ETHERNET && NET_VENDOR_MICROSOFT && PCI_MSI


vim +426 drivers/net/ethernet/microsoft/mana/hw_channel.c

   394  
   395  static int hwc_alloc_dma_buf(struct hw_channel_context *hwc, u16 
q_depth,
   396   u32 max_msg_size, struct hwc_dma_buf 
**dma_buf_p)
   397  {
   398  struct gdma_context *gc = gdma_dev_to_context(hwc->gdma_dev);
   399  struct gdma_mem_info *gmi;
   400  struct hwc_work_request *hwc_wr;
   401  struct hwc_dma_buf *dma_buf;
   402  u32 buf_size;
   403  void *virt_addr;
   404  u8 *base_pa;
   405  int err;
   406  u16 i;
   407  
   408  dma_buf = kzalloc(sizeof(*dma_buf) +
   409q_depth * sizeof(struct hwc_work_request),
   410GFP_KERNEL);
   411  if (!dma_buf)
   412  return -ENOMEM;
   413  
   414  dma_buf->num_reqs = q_depth;
   415  
   416  buf_size = ALIGN(q_depth * max_msg_size, PAGE_SIZE);
   417  
   418  gmi = &dma_buf->mem_info;
   419  err = gdma_alloc_memory(gc, buf_size, gmi);
   420  if (err) {
   421  pr_err("Failed to allocate dma buffer: %d\n", err);
   422  goto out;
   423  }
   424  
   425  virt_addr = dma_buf->mem_info.virt_addr;
 > 426  base_pa = (u8 *)dma_buf->mem_info.dma_handle;
   427  
   428  for (i = 0; i < q_depth; i++) {
   429  hwc_wr = &dma_buf->reqs[i];
   430  
   431  hwc_wr->buf_va = virt_addr + i * max_msg_size;
   432  hwc_wr->buf_sge_addr = base_pa + i * max_msg_size;
   433  
   434  hwc_wr->buf_len = max_msg_size;
   435  }
   436  
   437  *dma_buf_p = dma_buf;
   438  return 0;
   439  out:
   440  kfree(dma_buf);
   441  return err;
   442  }
   443  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org


.config.gz
Description: application/gzip


Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-06 Thread kernel test robot
Hi Dexuan,

I love your patch! Yet something to improve:

[auto build test ERROR on net-next/master]

url:
https://github.com/0day-ci/linux/commits/Dexuan-Cui/net-mana-Add-a-driver-for-Microsoft-Azure-Network-Adapter-MANA/20210407-072552
base:   https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git 
cc0626c2aaed8e475efdd85fa374b497a7192e35
config: powerpc-randconfig-r024-20210406 (attached as .config)
compiler: clang version 13.0.0 (https://github.com/llvm/llvm-project 
c060945b23a1c54d4b2a053ff4b093a2277b303d)
reproduce (this is a W=1 build):
wget 
https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O 
~/bin/make.cross
chmod +x ~/bin/make.cross
# install powerpc cross compiling tool for clang build
# apt-get install binutils-powerpc-linux-gnu
# 
https://github.com/0day-ci/linux/commit/f086d8bc693c2686de24a81398e49496ab3747a9
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review 
Dexuan-Cui/net-mana-Add-a-driver-for-Microsoft-Azure-Network-Adapter-MANA/20210407-072552
git checkout f086d8bc693c2686de24a81398e49496ab3747a9
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=powerpc 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 

All error/warnings (new ones prefixed by >>):

>> drivers/pci/controller/pci-hyperv.c:46:10: fatal error: 'asm/irqdomain.h' 
>> file not found
   #include 
^
   1 error generated.
--
>> drivers/net/ethernet/microsoft/mana/gdma_main.c:18:9: error: implicit 
>> declaration of function 'readq' [-Werror,-Wimplicit-function-declaration]
   return readq(g->bar0_va + offset);
  ^
>> drivers/net/ethernet/microsoft/mana/gdma_main.c:259:2: error: implicit 
>> declaration of function 'writeq' [-Werror,-Wimplicit-function-declaration]
   writeq(e.as_uint64, addr);
   ^
   2 errors generated.
--
>> drivers/net/ethernet/microsoft/mana/hw_channel.c:60:6: warning: variable 
>> 'ctx' is used uninitialized whenever 'if' condition is true 
>> [-Wsometimes-uninitialized]
   if (!test_bit(resp_msg->response.hwc_msg_id,
   ^~~~
   drivers/net/ethernet/microsoft/mana/hw_channel.c:77:2: note: uninitialized 
use occurs here
   ctx->error = err;
   ^~~
   drivers/net/ethernet/microsoft/mana/hw_channel.c:60:2: note: remove the 'if' 
if its condition is always false
   if (!test_bit(resp_msg->response.hwc_msg_id,
   ^~~~
   drivers/net/ethernet/microsoft/mana/hw_channel.c:57:28: note: initialize the 
variable 'ctx' to silence this warning
   struct hwc_caller_ctx *ctx;
 ^
  = NULL
   1 warning generated.

Kconfig warnings: (for reference only)
   WARNING: unmet direct dependencies detected for PCI_HYPERV
   Depends on PCI && X86_64 && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN && SYSFS
   Selected by
   - MICROSOFT_MANA && NETDEVICES && ETHERNET && NET_VENDOR_MICROSOFT && PCI_MSI


vim +46 drivers/pci/controller/pci-hyperv.c

4daace0d8ce851 drivers/pci/host/pci-hyperv.c   Jake Oshins 2016-02-16 
@46  #include 
4daace0d8ce851 drivers/pci/host/pci-hyperv.c   Jake Oshins 2016-02-16  
47  #include 
447ae316670230 drivers/pci/controller/pci-hyperv.c Nicolai Stange  2018-07-29  
48  #include 
4daace0d8ce851 drivers/pci/host/pci-hyperv.c   Jake Oshins 2016-02-16  
49  #include 
4daace0d8ce851 drivers/pci/host/pci-hyperv.c   Jake Oshins 2016-02-16  
50  #include 
24196f0c7d4bba drivers/pci/host/pci-hyperv.c   Elena Reshetova 2017-04-18  
51  #include 
4daace0d8ce851 drivers/pci/host/pci-hyperv.c   Jake Oshins 2016-02-16  
52  #include 
4daace0d8ce851 drivers/pci/host/pci-hyperv.c   Jake Oshins 2016-02-16  
53  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org


.config.gz
Description: application/gzip


Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-06 Thread kernel test robot
Hi Dexuan,

I love your patch! Perhaps something to improve:

[auto build test WARNING on net-next/master]

url:
https://github.com/0day-ci/linux/commits/Dexuan-Cui/net-mana-Add-a-driver-for-Microsoft-Azure-Network-Adapter-MANA/20210407-072552
base:   https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git 
cc0626c2aaed8e475efdd85fa374b497a7192e35
config: i386-randconfig-m021-20210406 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce (this is a W=1 build):
# 
https://github.com/0day-ci/linux/commit/f086d8bc693c2686de24a81398e49496ab3747a9
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review 
Dexuan-Cui/net-mana-Add-a-driver-for-Microsoft-Azure-Network-Adapter-MANA/20210407-072552
git checkout f086d8bc693c2686de24a81398e49496ab3747a9
# save the attached .config to linux build tree
make W=1 ARCH=i386 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 

All warnings (new ones prefixed by >>):

   drivers/pci/controller/pci-hyperv.c: In function 'hv_irq_unmask':
   drivers/pci/controller/pci-hyperv.c:1220:2: error: implicit declaration of 
function 'hv_set_msi_entry_from_desc' [-Werror=implicit-function-declaration]
1220 |  hv_set_msi_entry_from_desc(&params->int_entry.msi_entry, msi_desc);
 |  ^~
   drivers/pci/controller/pci-hyperv.c:1252:13: error: implicit declaration of 
function 'cpumask_to_vpset'; did you mean 'cpumask_subset'? 
[-Werror=implicit-function-declaration]
1252 |   nr_bank = cpumask_to_vpset(&params->int_target.vp_set, tmp);
 | ^~~~
 | cpumask_subset
   drivers/pci/controller/pci-hyperv.c:1269:14: error: implicit declaration of 
function 'hv_cpu_number_to_vp_number' [-Werror=implicit-function-declaration]
1269 | (1ULL << hv_cpu_number_to_vp_number(cpu));
 |  ^~
   drivers/pci/controller/pci-hyperv.c: In function 'prepopulate_bars':
>> drivers/pci/controller/pci-hyperv.c:1756:26: warning: right shift count >= 
>> width of type [-Wshift-count-overflow]
1756 |   4, (u32)(high_base >> 32));
 |  ^~
   drivers/pci/controller/pci-hyperv.c: In function 'hv_pci_onchannelcallback':
>> drivers/pci/controller/pci-hyperv.c:2438:18: warning: cast to pointer from 
>> integer of different size [-Wint-to-pointer-cast]
2438 |comp_packet = (struct pci_packet *)req_id;
 |  ^
   In file included from include/linux/device.h:15,
from include/linux/pci.h:37,
from drivers/pci/controller/pci-hyperv.c:42:
   drivers/pci/controller/pci-hyperv.c: In function 
'hv_pci_allocate_bridge_windows':
>> drivers/pci/controller/pci-hyperv.c:2675:5: warning: format '%llx' expects 
>> argument of type 'long long unsigned int', but argument 3 has type 
>> 'resource_size_t' {aka 'unsigned int'} [-Wformat=]
2675 | "Need %#llx of low MMIO space. Consider reconfiguring the VM.\n",
 | ^~~~
   include/linux/dev_printk.h:19:22: note: in definition of macro 'dev_fmt'
  19 | #define dev_fmt(fmt) fmt
 |  ^~~
   drivers/pci/controller/pci-hyperv.c:2674:4: note: in expansion of macro 
'dev_err'
2674 |dev_err(&hbus->hdev->device,
 |^~~
   drivers/pci/controller/pci-hyperv.c:2675:15: note: format string is defined 
here
2675 | "Need %#llx of low MMIO space. Consider reconfiguring the VM.\n",
 |   ^
 |   |
 |   long long unsigned int
 |   %#x
>> drivers/pci/controller/pci-hyperv.c:2690:8: warning: unsigned conversion 
>> from 'long long int' to 'resource_size_t' {aka 'unsigned int'} changes value 
>> from '4294967296' to '0' [-Woverflow]
2690 |0x1, -1,
 |^~~
   In file included from include/linux/device.h:15,
from include/linux/pci.h:37,
from drivers/pci/controller/pci-hyperv.c:42:
   drivers/pci/controller/pci-hyperv.c:2695:5: warning: format '%llx' expects 
argument of type 'long long unsigned int', but argument 3 has type 
'resource_size_t' {aka 'unsigned int'} [-Wformat=]
2695 | "Need %#llx of high MMIO space. Consider reconfiguring the 
VM.\n",
 | ^
   include/linux/dev_printk.h:19:22: note: in definition of macro 'dev_fmt'
  19 | #define dev_fmt(fmt) fmt
 |  ^~~
   drivers/pci/controller/pci-hyperv.c:2694:4: note: in expansion of macro 
'dev_err'
2694 |dev_err(&hbus->hdev->device,
 |^~~
   drivers/pci/controller/pci-hyperv.c:2695:15: note: format string is defined 
here
2695 | "Need %#llx

Re: [PATCH net-next] net: mana: Add a driver for Microsoft Azure Network Adapter (MANA)

2021-04-06 Thread Andrew Lunn
> +static int gdma_query_max_resources(struct pci_dev *pdev)
> +{
> + struct gdma_context *gc = pci_get_drvdata(pdev);
> + struct gdma_general_req req = { 0 };
> + struct gdma_query_max_resources_resp resp = { 0 };
> + int err;

Network drivers need to use reverse christmas tree. I spotted a number
of functions getting this wrong. Please review the whole driver.


> +
> + gdma_init_req_hdr(&req.hdr, GDMA_QUERY_MAX_RESOURCES,
> +   sizeof(req), sizeof(resp));
> +
> + err = gdma_send_request(gc, sizeof(req), &req, sizeof(resp), &resp);
> + if (err || resp.hdr.status) {
> + pr_err("%s, line %d: err=%d, err=0x%x\n", __func__, __LINE__,
> +err, resp.hdr.status);

I expect checkpatch complained about this. Don't use pr_err(), use
dev_err(pdev->dev, ... or netdev_err(ndev, ... You should always have
access to dev or ndev, so please change all pr_ calls.

> +static unsigned int num_queues = ANA_DEFAULT_NUM_QUEUE;
> +module_param(num_queues, uint, 0444);

No module parameters please.

   Andrew