

[PATCH] staging: rtl8723bs: Variable rf_type in function rtw_cfg80211_init_wiphy() could be uninitialized

2019-09-27 Thread Yizhuo
In function rtw_cfg80211_init_wiphy(), the local variable "rf_type" could
be left uninitialized if rtw_hal_get_hwreg() fails to initialize it.
However, this value is later passed to rtw_cfg80211_init_ht_capab(),
where it decides the value written to ht_cap, which is potentially
unsafe.
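
For illustration, the hazard can be reproduced outside the kernel with a
hypothetical getter that mimics the failure mode of rtw_hal_get_hwreg();
all names and values below are stand-ins, not the driver's actual code:

```c
/* Hypothetical model of the driver's getter: like rtw_hal_get_hwreg(),
 * it writes through an output pointer and may fail without touching the
 * destination. */
enum rf_types { RF_1T1R, RF_2T2R, RF_MAX_TYPE };

static int get_hwreg(int should_fail, unsigned char *out)
{
	if (should_fail)
		return -1;	/* failure path: *out is left untouched */
	*out = RF_2T2R;		/* success path: report the detected RF type */
	return 0;
}
```

Initializing the destination to a sentinel before the call means a failed
read leaves a recognizable "unknown" value instead of indeterminate stack
data.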

Signed-off-by: Yizhuo 
---
 drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c 
b/drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c
index 9bc685632651..dd39a581b7ef 100644
--- a/drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c
+++ b/drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c
@@ -3315,7 +3315,7 @@ static void rtw_cfg80211_init_ht_capab(struct 
ieee80211_sta_ht_cap *ht_cap, enum
 
 void rtw_cfg80211_init_wiphy(struct adapter *padapter)
 {
-   u8 rf_type;
+   u8 rf_type = RF_MAX_TYPE;
struct ieee80211_supported_band *bands;
struct wireless_dev *pwdev = padapter->rtw_wdev;
struct wiphy *wiphy = pwdev->wiphy;
-- 
2.17.1

___
devel mailing list
de...@linuxdriverproject.org
http://driverdev.linuxdriverproject.org/mailman/listinfo/driverdev-devel


[PATCH RESEND v3 00/26] Add definition for the number of standard PCI BARs

2019-09-27 Thread Denis Efremov
Code that iterates over all standard PCI BARs typically uses
PCI_STD_RESOURCE_END, but this is error-prone because it requires
"i <= PCI_STD_RESOURCE_END" rather than something like
"i < PCI_STD_NUM_BARS". This patchset adds such a definition and uses it
the same way PCI_SRIOV_NUM_BARS is used. It also replaces the magic
constant (6) with the new define PCI_STD_NUM_BARS where appropriate and
removes local declarations for the number of PCI BARs.
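
The off-by-one trap described above can be demonstrated in a standalone
sketch; the two macros mirror include/uapi/linux/pci_regs.h, and the
counting loops stand in for real per-BAR work:

```c
#define PCI_STD_RESOURCE_END	5	/* index of the last standard BAR */
#define PCI_STD_NUM_BARS	6	/* number of standard BARs */

/* Count iterations of the error-prone "<=" form... */
static int bars_le(void)
{
	int i, n = 0;

	for (i = 0; i <= PCI_STD_RESOURCE_END; i++)
		n++;
	return n;
}

/* ...and of the clearer "<" form this series introduces. */
static int bars_lt(void)
{
	int i, n = 0;

	for (i = 0; i < PCI_STD_NUM_BARS; i++)
		n++;
	return n;
}
```

Both visit all six BARs, but mistyping "<=" as "<" with
PCI_STD_RESOURCE_END silently skips the last one.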

Changes in v3:
  - Updated commit descriptions.
  - Replaced "< PCI_ROM_RESOURCE" with "< PCI_STD_NUM_BARS" in loops.
  - Replaced "<= BAR_5" with "< PCI_STD_NUM_BARS" in loops.
  - Removed the local define GASKET_NUM_BARS.
  - Removed the local define PCI_NUM_BAR_RESOURCES.

Changes in v2:
  - Reversed checks in pci_iomap_range, pci_iomap_wc_range.
  - Refactored loops in vfio_pci to keep PCI_STD_RESOURCES.
  - Added 2 new patches to replace the magic constant with the new define.
  - Split the net patch from v1 into separate stmmac and dwc-xlgmac patches.

Denis Efremov (26):
  PCI: Add define for the number of standard PCI BARs
  PCI: hv: Use PCI_STD_NUM_BARS
  PCI: dwc: Use PCI_STD_NUM_BARS
  PCI: endpoint: Use PCI_STD_NUM_BARS
  misc: pci_endpoint_test: Use PCI_STD_NUM_BARS
  s390/pci: Use PCI_STD_NUM_BARS
  x86/PCI: Loop using PCI_STD_NUM_BARS
  alpha/PCI: Use PCI_STD_NUM_BARS
  ia64: Use PCI_STD_NUM_BARS
  stmmac: pci: Loop using PCI_STD_NUM_BARS
  net: dwc-xlgmac: Loop using PCI_STD_NUM_BARS
  ixgb: use PCI_STD_NUM_BARS
  e1000: Use PCI_STD_NUM_BARS
  rapidio/tsi721: Loop using PCI_STD_NUM_BARS
  efifb: Loop using PCI_STD_NUM_BARS
  fbmem: use PCI_STD_NUM_BARS
  vfio_pci: Loop using PCI_STD_NUM_BARS
  scsi: pm80xx: Use PCI_STD_NUM_BARS
  ata: sata_nv: Use PCI_STD_NUM_BARS
  staging: gasket: Use PCI_STD_NUM_BARS
  serial: 8250_pci: Use PCI_STD_NUM_BARS
  pata_atp867x: Use PCI_STD_NUM_BARS
  memstick: use PCI_STD_NUM_BARS
  USB: core: Use PCI_STD_NUM_BARS
  usb: pci-quirks: Use PCI_STD_NUM_BARS
  devres: use PCI_STD_NUM_BARS

 arch/alpha/kernel/pci-sysfs.c |  8 ++---
 arch/ia64/sn/pci/pcibr/pcibr_dma.c|  4 +--
 arch/s390/include/asm/pci.h   |  5 +--
 arch/s390/include/asm/pci_clp.h   |  6 ++--
 arch/s390/pci/pci.c   | 16 +-
 arch/s390/pci/pci_clp.c   |  6 ++--
 arch/x86/pci/common.c |  2 +-
 arch/x86/pci/intel_mid_pci.c  |  2 +-
 drivers/ata/pata_atp867x.c|  2 +-
 drivers/ata/sata_nv.c |  2 +-
 drivers/memstick/host/jmb38x_ms.c |  2 +-
 drivers/misc/pci_endpoint_test.c  |  8 ++---
 drivers/net/ethernet/intel/e1000/e1000.h  |  1 -
 drivers/net/ethernet/intel/e1000/e1000_main.c |  2 +-
 drivers/net/ethernet/intel/ixgb/ixgb.h|  1 -
 drivers/net/ethernet/intel/ixgb/ixgb_main.c   |  2 +-
 .../net/ethernet/stmicro/stmmac/stmmac_pci.c  |  4 +--
 .../net/ethernet/synopsys/dwc-xlgmac-pci.c|  2 +-
 drivers/pci/controller/dwc/pci-dra7xx.c   |  2 +-
 .../pci/controller/dwc/pci-layerscape-ep.c|  2 +-
 drivers/pci/controller/dwc/pcie-artpec6.c |  2 +-
 .../pci/controller/dwc/pcie-designware-plat.c |  2 +-
 drivers/pci/controller/dwc/pcie-designware.h  |  2 +-
 drivers/pci/controller/pci-hyperv.c   | 10 +++---
 drivers/pci/endpoint/functions/pci-epf-test.c | 10 +++---
 drivers/pci/pci-sysfs.c   |  4 +--
 drivers/pci/pci.c | 13 
 drivers/pci/proc.c|  4 +--
 drivers/pci/quirks.c  |  4 +--
 drivers/rapidio/devices/tsi721.c  |  2 +-
 drivers/scsi/pm8001/pm8001_hwi.c  |  2 +-
 drivers/scsi/pm8001/pm8001_init.c |  2 +-
 drivers/staging/gasket/gasket_constants.h |  3 --
 drivers/staging/gasket/gasket_core.c  | 12 +++
 drivers/staging/gasket/gasket_core.h  |  4 +--
 drivers/tty/serial/8250/8250_pci.c|  8 ++---
 drivers/usb/core/hcd-pci.c|  2 +-
 drivers/usb/host/pci-quirks.c |  2 +-
 drivers/vfio/pci/vfio_pci.c   | 11 ---
 drivers/vfio/pci/vfio_pci_config.c| 32 ++-
 drivers/vfio/pci/vfio_pci_private.h   |  4 +--
 drivers/video/fbdev/core/fbmem.c  |  4 +--
 drivers/video/fbdev/efifb.c   |  2 +-
 include/linux/pci-epc.h   |  2 +-
 include/linux/pci.h   |  2 +-
 include/uapi/linux/pci_regs.h |  1 +
 lib/devres.c  |  2 +-
 47 files changed, 112 insertions(+), 115 deletions(-)

-- 
2.21.0



[PATCH] staging: rtl8188eu: fix null dereference when kzalloc fails

2019-09-27 Thread Connor Kuehl
If kzalloc() returns NULL, the error path doesn't stop the flow of
control from entering rtw_hal_read_chip_version() which dereferences the
null pointer. Fix this by adding a 'goto' to the error path to more
gracefully handle the issue and avoid proceeding with initialization
steps that we're no longer prepared to handle.

Also update the debug message to be more consistent with the other debug
messages in this function.
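
In isolation, the unwind pattern being applied looks roughly like this;
the struct and function names are illustrative stand-ins, not the
rtl8188eu code:

```c
#include <stdio.h>
#include <stdlib.h>

struct adapter {
	void *hal_data;
};

static struct adapter *adapter_init(size_t hal_size)
{
	struct adapter *padapter = calloc(1, sizeof(*padapter));

	if (!padapter)
		return NULL;

	padapter->hal_data = calloc(1, hal_size);
	if (!padapter->hal_data) {
		fprintf(stderr, "Failed to allocate memory for HAL data\n");
		goto free_adapter;	/* bail out before any use of hal_data */
	}

	/* ...further initialization that relies on hal_data... */
	return padapter;

free_adapter:
	free(padapter);
	return NULL;
}
```

The 'goto' keeps the failure path from falling through into code that
dereferences the pointer it just failed to allocate.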

Addresses-Coverity: ("Dereference after null check")

Signed-off-by: Connor Kuehl 
---
 drivers/staging/rtl8188eu/os_dep/usb_intf.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/staging/rtl8188eu/os_dep/usb_intf.c 
b/drivers/staging/rtl8188eu/os_dep/usb_intf.c
index 664d93a7f90d..4fac9dca798e 100644
--- a/drivers/staging/rtl8188eu/os_dep/usb_intf.c
+++ b/drivers/staging/rtl8188eu/os_dep/usb_intf.c
@@ -348,8 +348,10 @@ static struct adapter *rtw_usb_if1_init(struct dvobj_priv 
*dvobj,
}
 
padapter->HalData = kzalloc(sizeof(struct hal_data_8188e), GFP_KERNEL);
-   if (!padapter->HalData)
-   DBG_88E("cant not alloc memory for HAL DATA\n");
+   if (!padapter->HalData) {
+   DBG_88E("Failed to allocate memory for HAL data\n");
+   goto free_adapter;
+   }
 
/* step read_chip_version */
rtw_hal_read_chip_version(padapter);
-- 
2.17.1



Re: [PATCH] staging: rtl8192u: fix multiple memory leaks on error path

2019-09-27 Thread Markus Elfring
> In rtl8192_tx on error handling path allocated urbs and also skb should
> be released.

Can this change description be improved?


What do you think about adding the tag “Fixes” here?


> @@ -1588,7 +1590,12 @@ short rtl8192_tx(struct net_device *dev, struct 
> sk_buff *skb)
>   RT_TRACE(COMP_ERR, "Error TX URB %d, error %d",
>   atomic_read(&priv->tx_pending[tcb_desc->queue_index]),
>   status);
> - return -1;
> +
> +error:
> + dev_kfree_skb_any(skb);
…

Would another label be more appropriate according to the Linux coding style?

Regards,
Markus


Re: [PATCH] staging: vt6656: clean up an indentation issue

2019-09-27 Thread Quentin Deslandes
On Fri, Sep 27, 2019 at 10:24:00AM +0100, Colin King wrote:
> From: Colin Ian King 
> 
> There is a block of code that is indented incorrectly, add in the
> missing tabs.
> 
> Signed-off-by: Colin Ian King 
> ---
>  drivers/staging/vt6656/main_usb.c | 8 
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/staging/vt6656/main_usb.c 
> b/drivers/staging/vt6656/main_usb.c
> index 856ba97aec4f..3478a10f8025 100644
> --- a/drivers/staging/vt6656/main_usb.c
> +++ b/drivers/staging/vt6656/main_usb.c
> @@ -249,10 +249,10 @@ static int vnt_init_registers(struct vnt_private *priv)
>   } else {
>   priv->tx_antenna_mode = ANT_B;
>  
> - if (priv->tx_rx_ant_inv)
> - priv->rx_antenna_mode = ANT_A;
> - else
> - priv->rx_antenna_mode = ANT_B;
> + if (priv->tx_rx_ant_inv)
> + priv->rx_antenna_mode = ANT_A;
> + else
> + priv->rx_antenna_mode = ANT_B;
>   }
>   }
>  
> -- 
> 2.20.1
> 

Reviewed-by: Quentin Deslandes 

Thanks!
Quentin


Re: [PATCH v2] staging: rtl8188eu: remove dead code/vestigial do..while loop

2019-09-27 Thread Dan Carpenter
Looks good.  Thanks!

regards,
dan carpenter



Re: [PATCH] staging: rtl8188eu: fix HighestRate check in odm_ARFBRefresh_8188E()

2019-09-27 Thread Sasha Levin
Hi,

[This is an automated email]

This commit has been processed because it contains a -stable tag.
The stable tag indicates that it's relevant for the following trees: all

The bot has tested the following trees: v5.3.1, v5.2.17, v4.19.75, v4.14.146, 
v4.9.194, v4.4.194.

v5.3.1: Build OK!
v5.2.17: Build OK!
v4.19.75: Failed to apply! Possible dependencies:
00585495c4fa ("staging: rtl8188eu: refactor SwLedControlMode1()")
859df6aa0d97 ("staging: rtl8188eu: cleanup inconsistent indenting")

v4.14.146: Failed to apply! Possible dependencies:
00585495c4fa ("staging: rtl8188eu: refactor SwLedControlMode1()")
2742a7dddae4 ("Staging: rtl8188eu: core: Use __func__ instead of function 
name")
35a53b9a37ca ("staging:rtl8188eu Fix remove semicolon in do {}while(0)")
3cedbfb85199 ("staging: rtl8188eu: rename variable")
515ce733e86e ("staging:r8188eu: Use lib80211 to encrypt (CCMP) tx frames")
7de2258b5c71 ("staging: rtl8188eu: replace NULL comparison with variable")
819fa2a0d749 ("staging: rtl8188eu: use __func__ instead of function name")
859df6aa0d97 ("staging: rtl8188eu: cleanup inconsistent indenting")
b677f4ecf6ac ("staging: rtl8188eu: Fix spelling")
c5fe50aaa20c ("Revert "staging:r8188eu: Use lib80211 to encrypt (CCMP) tx 
frames"")
ceefaaced11e ("staging:rtl8188eu Remove unneccessary parenthesis")
e8d93aca1b23 ("Staging: rtl8188eu: core: Fix line over 80 characters")
f3139e621429 ("staging: rtl8188eu: Place the constant on the right side in 
comparisons")

v4.9.194: Failed to apply! Possible dependencies:
00585495c4fa ("staging: rtl8188eu: refactor SwLedControlMode1()")
2091eda1f21d ("staging: rtl8188eu: Put constant on right side of 
comparison")
2742a7dddae4 ("Staging: rtl8188eu: core: Use __func__ instead of function 
name")
35abf582a537 ("staging:r8188eu: replace rx_end member of recv_frame with 
pkt->end")
3cedbfb85199 ("staging: rtl8188eu: rename variable")
515ce733e86e ("staging:r8188eu: Use lib80211 to encrypt (CCMP) tx frames")
7d2af82cc5f5 ("staging: rtl8188eu: In core directory, fixed 'missing a 
balnk line after declarations' warnings.")
7de2258b5c71 ("staging: rtl8188eu: replace NULL comparison with variable")
80c96e08c416 ("staging:r8188eu: remove unused WIFI_MP_*STATE and 
WIFI_MP_CTX* definitions")
819fa2a0d749 ("staging: rtl8188eu: use __func__ instead of function name")
859df6aa0d97 ("staging: rtl8188eu: cleanup inconsistent indenting")
b677f4ecf6ac ("staging: rtl8188eu: Fix spelling")
bb5cd2e531c0 ("staging:r8188eu: remove rtw_os_recv_resource_alloc function")
c10364e1f4f6 ("staging: rtl8188eu: core: removes unecessary parenthesis")
c5fe50aaa20c ("Revert "staging:r8188eu: Use lib80211 to encrypt (CCMP) tx 
frames"")
cd30a3924932 ("staging:r8188eu: refactor recvbuf2recvframe function")
dd2aa2501c92 ("staging: rtl8188eu: core: fixes tabstop alignment")
de109778e7cf ("staging: rtl8188eu: Fix block comments warning")
df47a14c2c8b ("staging:r8188eu: replace recv_frame->rx_(data|len|tail) with 
pkt->(data|len|tail) and remove unused recvframe_(put|pull|pull_tail)()")
e038e67f0891 ("staging:r8188eu: update pkt->(data|tail|len) synchronously 
with rx_(data|tail|len) in recv_frame structure")
e8d93aca1b23 ("Staging: rtl8188eu: core: Fix line over 80 characters")
f3139e621429 ("staging: rtl8188eu: Place the constant on the right side in 
comparisons")

v4.4.194: Failed to apply! Possible dependencies:
00585495c4fa ("staging: rtl8188eu: refactor SwLedControlMode1()")
139737983db4 ("staging: rtl8188eu: Remove unnecessary pointer cast")
2742a7dddae4 ("Staging: rtl8188eu: core: Use __func__ instead of function 
name")
35abf582a537 ("staging:r8188eu: replace rx_end member of recv_frame with 
pkt->end")
3cedbfb85199 ("staging: rtl8188eu: rename variable")
515ce733e86e ("staging:r8188eu: Use lib80211 to encrypt (CCMP) tx frames")
7a1586353b97 ("rtl8188eu: Add spaces around arithmetic operators")
7b170bacbb13 ("staging: rtl8188eu: core: rtw_xmit: Use macros instead of 
constants")
7d2af82cc5f5 ("staging: rtl8188eu: In core directory, fixed 'missing a 
balnk line after declarations' warnings.")
7d7be350073e ("staging: rtl8188eu: core: rtw_xmit: Move constant of the 
right side")
7de2258b5c71 ("staging: rtl8188eu: replace NULL comparison with variable")
80c96e08c416 ("staging:r8188eu: remove unused WIFI_MP_*STATE and 
WIFI_MP_CTX* definitions")
859df6aa0d97 ("staging: rtl8188eu: cleanup inconsistent indenting")
8891bcac17da ("staging: rtl8188eu: add spaces around binary '*'")
b677f4ecf6ac ("staging: rtl8188eu: Fix spelling")
bb5cd2e531c0 ("staging:r8188eu: remove rtw_os_recv_resource_alloc function")
bbfe286b07d8 ("staging: r8188eu: replace rtw_ieee80211_hdr_3addr with 
ieee80211_hdr_3addr")
c10364e1f4f6 ("staging: rtl8188eu: core: removes unecessary parenthesis")
c5fe50aaa20c ("Revert 

Re: [PATCH v3 0/4] Add binder state and statistics to binderfs

2019-09-27 Thread Christian Brauner
On Tue, Sep 03, 2019 at 09:16:51AM -0700, Hridya Valsaraju wrote:
> Currently, the only way to access binder state and
> statistics is through debugfs. We need a way to
> access the same even when debugfs is not mounted.
> These patches add a mount option to make this
> information available in binderfs without affecting
> its presence in debugfs. The following debugfs nodes
> will be made available in a binderfs instance when
> mounted with the mount option 'stats=global' or 'stats=local'.
> 
>  /sys/kernel/debug/binder/failed_transaction_log
>  /sys/kernel/debug/binder/proc
>  /sys/kernel/debug/binder/state
>  /sys/kernel/debug/binder/stats
>  /sys/kernel/debug/binder/transaction_log
>  /sys/kernel/debug/binder/transactions

I'm sitting in a talk from Jonathan about kernel documentation, and I
realized that we forgot to update the documentation I wrote for
binderfs in Documentation/admin-guide/binderfs.rst to reflect the new
stats=global mount option. It would be great if we could add that after
rc1 is out. Would you have time to do that, Hridya?

Should just be a new entry under:

Options
---
max
  binderfs instances can be mounted with a limit on the number of binder
  devices that can be allocated. The ``max=`` mount option serves as
  a per-instance limit. If ``max=`` is set then only  number
  of binder devices can be allocated in this binderfs instance.
stats
  

Thanks!
Christian


[PATCH v2 11/17] staging: qlge: Factor out duplicated expression

2019-09-27 Thread Benjamin Poirier
Given that (u16) 65536 == 0, that expression can be replaced by a simple
cast.
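
The identity behind the cleanup is easy to check in a standalone sketch,
where old_style() reproduces the ternary being removed:

```c
typedef unsigned short u16;
typedef unsigned int u32;

/* The macro introduced by this patch: truncating to 16 bits. */
#define QLGE_FIT16(value) ((u16)(value))

/* The expression it replaces: map 65536 to 0 explicitly, otherwise cast. */
static u16 old_style(u32 len)
{
	return (len == 65536) ? 0 : (u16)len;
}
```

Since 65536 truncated to 16 bits is already 0, the explicit comparison
was redundant.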

Signed-off-by: Benjamin Poirier 
---
 drivers/staging/qlge/qlge.h  |  5 +
 drivers/staging/qlge/qlge_main.c | 18 ++
 2 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/drivers/staging/qlge/qlge.h b/drivers/staging/qlge/qlge.h
index 5a4b2520cd2a..24af938da7a4 100644
--- a/drivers/staging/qlge/qlge.h
+++ b/drivers/staging/qlge/qlge.h
@@ -77,6 +77,11 @@
 #define LSD(x)  ((u32)((u64)(x)))
 #define MSD(x)  ((u32)((((u64)(x)) >> 32)))
 
+/* In some cases, the device interprets a value of 0x0000 as 65536. These
+ * cases are marked using the following macro.
+ */
+#define QLGE_FIT16(value) ((u16)(value))
+
 /* MPI test register definitions. This register
  * is used for determining alternate NIC function's
  * PCI->func number.
diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index 0e304a7ac22f..e1099bd29672 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -2974,7 +2974,6 @@ static int ql_start_rx_ring(struct ql_adapter *qdev, 
struct rx_ring *rx_ring)
void __iomem *doorbell_area =
qdev->doorbell_area + (DB_PAGE_SIZE * (128 + rx_ring->cq_id));
int err = 0;
-   u16 bq_len;
u64 tmp;
__le64 *base_indirect_ptr;
int page_entries;
@@ -3009,8 +3008,8 @@ static int ql_start_rx_ring(struct ql_adapter *qdev, 
struct rx_ring *rx_ring)
memset((void *)cqicb, 0, sizeof(struct cqicb));
cqicb->msix_vect = rx_ring->irq;
 
-   bq_len = (rx_ring->cq_len == 65536) ? 0 : (u16) rx_ring->cq_len;
-   cqicb->len = cpu_to_le16(bq_len | LEN_V | LEN_CPP_CONT);
+   cqicb->len = cpu_to_le16(QLGE_FIT16(rx_ring->cq_len) | LEN_V |
+LEN_CPP_CONT);
 
cqicb->addr = cpu_to_le64(rx_ring->cq_base_dma);
 
@@ -3034,12 +3033,9 @@ static int ql_start_rx_ring(struct ql_adapter *qdev, 
struct rx_ring *rx_ring)
page_entries++;
} while (page_entries < MAX_DB_PAGES_PER_BQ(rx_ring->lbq.len));
cqicb->lbq_addr = cpu_to_le64(rx_ring->lbq.base_indirect_dma);
-   bq_len = qdev->lbq_buf_size == 65536 ? 0 :
-   (u16)qdev->lbq_buf_size;
-   cqicb->lbq_buf_size = cpu_to_le16(bq_len);
-   bq_len = (rx_ring->lbq.len == 65536) ? 0 :
-   (u16)rx_ring->lbq.len;
-   cqicb->lbq_len = cpu_to_le16(bq_len);
+   cqicb->lbq_buf_size =
+   cpu_to_le16(QLGE_FIT16(qdev->lbq_buf_size));
+   cqicb->lbq_len = cpu_to_le16(QLGE_FIT16(rx_ring->lbq.len));
rx_ring->lbq.prod_idx = 0;
rx_ring->lbq.curr_idx = 0;
rx_ring->lbq.clean_idx = 0;
@@ -3059,9 +3055,7 @@ static int ql_start_rx_ring(struct ql_adapter *qdev, 
struct rx_ring *rx_ring)
cqicb->sbq_addr =
cpu_to_le64(rx_ring->sbq.base_indirect_dma);
cqicb->sbq_buf_size = cpu_to_le16(SMALL_BUFFER_SIZE);
-   bq_len = (rx_ring->sbq.len == 65536) ? 0 :
-   (u16)rx_ring->sbq.len;
-   cqicb->sbq_len = cpu_to_le16(bq_len);
+   cqicb->sbq_len = cpu_to_le16(QLGE_FIT16(rx_ring->sbq.len));
rx_ring->sbq.prod_idx = 0;
rx_ring->sbq.curr_idx = 0;
rx_ring->sbq.clean_idx = 0;
-- 
2.23.0



[PATCH v2 05/17] staging: qlge: Remove bq_desc.maplen

2019-09-27 Thread Benjamin Poirier
The size of the mapping is known statically in all cases, so there is no
need to save it at runtime. Remove this member.

Signed-off-by: Benjamin Poirier 
Acked-by: Manish Chopra 
---
 drivers/staging/qlge/qlge.h  |  1 -
 drivers/staging/qlge/qlge_main.c | 43 +++-
 2 files changed, 15 insertions(+), 29 deletions(-)

diff --git a/drivers/staging/qlge/qlge.h b/drivers/staging/qlge/qlge.h
index ba61b4559dd6..f32da8c7679f 100644
--- a/drivers/staging/qlge/qlge.h
+++ b/drivers/staging/qlge/qlge.h
@@ -1373,7 +1373,6 @@ struct bq_desc {
__le64 *addr;
u32 index;
DEFINE_DMA_UNMAP_ADDR(mapaddr);
-   DEFINE_DMA_UNMAP_LEN(maplen);
 };
 
 #define QL_TXQ_IDX(qdev, skb) (smp_processor_id()%(qdev->tx_ring_count))
diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index 2b1cc4b29bed..34bc1d9560ce 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -1108,8 +1108,6 @@ static void ql_update_lbq(struct ql_adapter *qdev, struct 
rx_ring *rx_ring)
map = lbq_desc->p.pg_chunk.map +
lbq_desc->p.pg_chunk.offset;
dma_unmap_addr_set(lbq_desc, mapaddr, map);
-   dma_unmap_len_set(lbq_desc, maplen,
- qdev->lbq_buf_size);
*lbq_desc->addr = cpu_to_le64(map);
 
pci_dma_sync_single_for_device(qdev->pdev, map,
@@ -1177,8 +1175,6 @@ static void ql_update_sbq(struct ql_adapter *qdev, struct 
rx_ring *rx_ring)
return;
}
dma_unmap_addr_set(sbq_desc, mapaddr, map);
-   dma_unmap_len_set(sbq_desc, maplen,
- rx_ring->sbq_buf_size);
*sbq_desc->addr = cpu_to_le64(map);
}
 
@@ -1598,14 +1594,14 @@ static void ql_process_mac_rx_skb(struct ql_adapter 
*qdev,
 
pci_dma_sync_single_for_cpu(qdev->pdev,
dma_unmap_addr(sbq_desc, mapaddr),
-   dma_unmap_len(sbq_desc, maplen),
+   rx_ring->sbq_buf_size,
PCI_DMA_FROMDEVICE);
 
skb_put_data(new_skb, skb->data, length);
 
pci_dma_sync_single_for_device(qdev->pdev,
   dma_unmap_addr(sbq_desc, mapaddr),
-  dma_unmap_len(sbq_desc, maplen),
+  rx_ring->sbq_buf_size,
   PCI_DMA_FROMDEVICE);
skb = new_skb;
 
@@ -1727,8 +1723,7 @@ static struct sk_buff *ql_build_rx_skb(struct ql_adapter 
*qdev,
sbq_desc = ql_get_curr_sbuf(rx_ring);
pci_unmap_single(qdev->pdev,
dma_unmap_addr(sbq_desc, mapaddr),
-   dma_unmap_len(sbq_desc, maplen),
-   PCI_DMA_FROMDEVICE);
+   rx_ring->sbq_buf_size, PCI_DMA_FROMDEVICE);
skb = sbq_desc->p.skb;
ql_realign_skb(skb, hdr_len);
skb_put(skb, hdr_len);
@@ -1758,19 +1753,15 @@ static struct sk_buff *ql_build_rx_skb(struct 
ql_adapter *qdev,
 */
sbq_desc = ql_get_curr_sbuf(rx_ring);
pci_dma_sync_single_for_cpu(qdev->pdev,
-   dma_unmap_addr
-   (sbq_desc, mapaddr),
-   dma_unmap_len
-   (sbq_desc, maplen),
+   dma_unmap_addr(sbq_desc,
+  mapaddr),
+   rx_ring->sbq_buf_size,
PCI_DMA_FROMDEVICE);
skb_put_data(skb, sbq_desc->p.skb->data, length);
pci_dma_sync_single_for_device(qdev->pdev,
-  dma_unmap_addr
-  (sbq_desc,
-   mapaddr),
-  dma_unmap_len
-  (sbq_desc,
-   maplen),
+  dma_unmap_addr(sbq_desc,
+ mapaddr),
+  rx_ring->sbq_buf_size,
 

[PATCH v2 16/17] staging: qlge: Refill rx buffers up to multiple of 16

2019-09-27 Thread Benjamin Poirier
Reading the {s,l}bq_prod_idx registers on a running device, it appears that
the adapter will only use buffers up to prod_idx & 0xfff0. The driver
currently uses fixed-size guard zones (16 for sbq, 32 for lbq; it is not
clear why they differ). After the previous patch, this approach no longer
guarantees prod_idx values aligned on multiples of 16. While it appears
that we can write unaligned values to prod_idx without ill effects on
device operation, it makes more sense to change qlge_refill_bq() to refill
up to a limit that corresponds with the device's behavior.
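
Assuming a queue length of 512, as used elsewhere in this series, the new
limit computation can be modeled as:

```c
#define QLGE_BQ_LEN		512
#define QLGE_BQ_WRAP(index)	((index) & (QLGE_BQ_LEN - 1))
#define QLGE_BQ_ALIGN(index)	((index) & ~15)	/* ALIGN_DOWN(index, 16) */

static int refill_count(int next_to_clean, int next_to_use)
{
	/* Refill up to the multiple of 16 just below next_to_clean. */
	return QLGE_BQ_WRAP(QLGE_BQ_ALIGN(next_to_clean - 1) - next_to_use);
}
```

The result is always a multiple of 16 away from an aligned stopping
point, matching the prod_idx & 0xfff0 behavior observed on the device.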

Signed-off-by: Benjamin Poirier 
---
 drivers/staging/qlge/qlge.h  |  8 
 drivers/staging/qlge/qlge_main.c | 29 +++--
 2 files changed, 19 insertions(+), 18 deletions(-)

diff --git a/drivers/staging/qlge/qlge.h b/drivers/staging/qlge/qlge.h
index 7c48e333d29b..e5a352df8228 100644
--- a/drivers/staging/qlge/qlge.h
+++ b/drivers/staging/qlge/qlge.h
@@ -1423,6 +1423,9 @@ struct qlge_bq {
__le64 *base_indirect;
dma_addr_t base_indirect_dma;
struct qlge_bq_desc *queue;
+   /* prod_idx is the index of the first buffer that may NOT be used by
+* hw, ie. one after the last. Advanced by sw.
+*/
void __iomem *prod_idx_db_reg;
/* next index where sw should refill a buffer for hw */
u16 next_to_use;
@@ -1442,6 +1445,11 @@ struct qlge_bq {
  offsetof(struct rx_ring, lbq))); \
 })
 
+/* Experience shows that the device ignores the low 4 bits of the tail index.
+ * Refill up to a x16 multiple.
+ */
+#define QLGE_BQ_ALIGN(index) ALIGN_DOWN(index, 16)
+
 #define QLGE_BQ_WRAP(index) ((index) & (QLGE_BQ_LEN - 1))
 
 struct rx_ring {
diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index 83e75005688a..02ad0cdf4856 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -1114,22 +1114,12 @@ static void qlge_refill_bq(struct qlge_bq *bq)
struct rx_ring *rx_ring = QLGE_BQ_CONTAINER(bq);
struct ql_adapter *qdev = rx_ring->qdev;
struct qlge_bq_desc *bq_desc;
-   int free_count, refill_count;
-   unsigned int reserved_count;
+   int refill_count;
int i;
 
-   if (bq->type == QLGE_SB)
-   reserved_count = 16;
-   else
-   reserved_count = 32;
-
-   free_count = bq->next_to_clean - bq->next_to_use;
-   if (free_count <= 0)
-   free_count += QLGE_BQ_LEN;
-
-   refill_count = free_count - reserved_count;
-   /* refill batch size */
-   if (refill_count < 16)
+   refill_count = QLGE_BQ_WRAP(QLGE_BQ_ALIGN(bq->next_to_clean - 1) -
+   bq->next_to_use);
+   if (!refill_count)
return;
 
i = bq->next_to_use;
@@ -1164,11 +1154,14 @@ static void qlge_refill_bq(struct qlge_bq *bq)
i += QLGE_BQ_LEN;
 
if (bq->next_to_use != i) {
-   netif_printk(qdev, rx_status, KERN_DEBUG, qdev->ndev,
-"ring %u %s: updating prod idx = %d.\n",
-rx_ring->cq_id, bq_type_name[bq->type], i);
+   if (QLGE_BQ_ALIGN(bq->next_to_use) != QLGE_BQ_ALIGN(i)) {
+   netif_printk(qdev, rx_status, KERN_DEBUG, qdev->ndev,
+"ring %u %s: updating prod idx = %d.\n",
+rx_ring->cq_id, bq_type_name[bq->type],
+i);
+   ql_write_db_reg(i, bq->prod_idx_db_reg);
+   }
bq->next_to_use = i;
-   ql_write_db_reg(bq->next_to_use, bq->prod_idx_db_reg);
}
 }
 
-- 
2.23.0



[PATCH v2 12/17] staging: qlge: Remove qlge_bq.len & size

2019-09-27 Thread Benjamin Poirier
Given the way the driver currently works, these values are always known
at compile time.
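
A side benefit of fixing the length at a compile-time power of two is
that index wrapping reduces to a mask; a minimal sketch:

```c
#define QLGE_BQ_SHIFT	9
#define QLGE_BQ_LEN	(1u << QLGE_BQ_SHIFT)	/* 512 */
#define QLGE_BQ_WRAP(index) ((index) & (QLGE_BQ_LEN - 1))

/* The runtime wrap this patch removes: increment, compare, reset. */
static unsigned int wrap_old(unsigned int idx, unsigned int len)
{
	idx++;
	if (idx == len)
		idx = 0;
	return idx;
}
```

For any in-range index the two agree, so the branch on bq->len can go.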

Signed-off-by: Benjamin Poirier 
---
 drivers/staging/qlge/qlge.h  | 17 +---
 drivers/staging/qlge/qlge_dbg.c  |  4 --
 drivers/staging/qlge/qlge_main.c | 75 
 3 files changed, 39 insertions(+), 57 deletions(-)

diff --git a/drivers/staging/qlge/qlge.h b/drivers/staging/qlge/qlge.h
index 24af938da7a4..5e773af50397 100644
--- a/drivers/staging/qlge/qlge.h
+++ b/drivers/staging/qlge/qlge.h
@@ -34,8 +34,13 @@
 #define NUM_TX_RING_ENTRIES256
 #define NUM_RX_RING_ENTRIES256
 
-#define NUM_SMALL_BUFFERS   512
-#define NUM_LARGE_BUFFERS   512
+/* Use the same len for sbq and lbq. Note that it seems like the device might
+ * support different sizes.
+ */
+#define QLGE_BQ_SHIFT 9
+#define QLGE_BQ_LEN BIT(QLGE_BQ_SHIFT)
+#define QLGE_BQ_SIZE (QLGE_BQ_LEN * sizeof(__le64))
+
 #define DB_PAGE_SIZE 4096
 
 /* Calculate the number of (4k) pages required to
@@ -46,8 +51,8 @@
(((x * sizeof(u64)) % DB_PAGE_SIZE) ? 1 : 0))
 
 #define RX_RING_SHADOW_SPACE   (sizeof(u64) + \
-   MAX_DB_PAGES_PER_BQ(NUM_SMALL_BUFFERS) * sizeof(u64) + \
-   MAX_DB_PAGES_PER_BQ(NUM_LARGE_BUFFERS) * sizeof(u64))
+   MAX_DB_PAGES_PER_BQ(QLGE_BQ_LEN) * sizeof(u64) + \
+   MAX_DB_PAGES_PER_BQ(QLGE_BQ_LEN) * sizeof(u64))
 #define LARGE_BUFFER_MAX_SIZE 8192
 #define LARGE_BUFFER_MIN_SIZE 2048
 
@@ -1419,8 +1424,6 @@ struct qlge_bq {
dma_addr_t base_indirect_dma;
struct qlge_bq_desc *queue;
void __iomem *prod_idx_db_reg;
-   u32 len;/* entry count */
-   u32 size;   /* size in bytes of hw ring */
u32 prod_idx;   /* current sw prod idx */
u32 curr_idx;   /* next entry we expect */
u32 clean_idx;  /* beginning of new descs */
@@ -1439,6 +1442,8 @@ struct qlge_bq {
  offsetof(struct rx_ring, lbq))); \
 })
 
+#define QLGE_BQ_WRAP(index) ((index) & (QLGE_BQ_LEN - 1))
+
 struct rx_ring {
struct cqicb cqicb; /* The chip's completion queue init control 
block. */
 
diff --git a/drivers/staging/qlge/qlge_dbg.c b/drivers/staging/qlge/qlge_dbg.c
index a177302073db..c21d1b228bd2 100644
--- a/drivers/staging/qlge/qlge_dbg.c
+++ b/drivers/staging/qlge/qlge_dbg.c
@@ -1775,8 +1775,6 @@ void ql_dump_rx_ring(struct rx_ring *rx_ring)
pr_err("rx_ring->lbq.base_indirect_dma = %llx\n",
   (unsigned long long)rx_ring->lbq.base_indirect_dma);
pr_err("rx_ring->lbq = %p\n", rx_ring->lbq.queue);
-   pr_err("rx_ring->lbq.len = %d\n", rx_ring->lbq.len);
-   pr_err("rx_ring->lbq.size = %d\n", rx_ring->lbq.size);
pr_err("rx_ring->lbq.prod_idx_db_reg = %p\n",
   rx_ring->lbq.prod_idx_db_reg);
pr_err("rx_ring->lbq.prod_idx = %d\n", rx_ring->lbq.prod_idx);
@@ -1792,8 +1790,6 @@ void ql_dump_rx_ring(struct rx_ring *rx_ring)
pr_err("rx_ring->sbq.base_indirect_dma = %llx\n",
   (unsigned long long)rx_ring->sbq.base_indirect_dma);
pr_err("rx_ring->sbq = %p\n", rx_ring->sbq.queue);
-   pr_err("rx_ring->sbq.len = %d\n", rx_ring->sbq.len);
-   pr_err("rx_ring->sbq.size = %d\n", rx_ring->sbq.size);
pr_err("rx_ring->sbq.prod_idx_db_reg addr = %p\n",
   rx_ring->sbq.prod_idx_db_reg);
pr_err("rx_ring->sbq.prod_idx = %d\n", rx_ring->sbq.prod_idx);
diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index e1099bd29672..ef33db118aa1 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -982,9 +982,8 @@ static struct qlge_bq_desc *qlge_get_curr_buf(struct 
qlge_bq *bq)
 {
struct qlge_bq_desc *bq_desc;
 
-   bq_desc = &bq->queue[bq->curr_idx++];
-   if (bq->curr_idx == bq->len)
-   bq->curr_idx = 0;
+   bq_desc = &bq->queue[bq->curr_idx];
+   bq->curr_idx = QLGE_BQ_WRAP(bq->curr_idx + 1);
bq->free_cnt++;
 
return bq_desc;
@@ -1149,15 +1148,11 @@ static void qlge_refill_bq(struct qlge_bq *bq)
return;
}
 
-   clean_idx++;
-   if (clean_idx == bq->len)
-   clean_idx = 0;
+   clean_idx = QLGE_BQ_WRAP(clean_idx + 1);
}
 
bq->clean_idx = clean_idx;
-   bq->prod_idx += 16;
-   if (bq->prod_idx == bq->len)
-   bq->prod_idx = 0;
+   bq->prod_idx = QLGE_BQ_WRAP(bq->prod_idx + 16);
bq->free_cnt -= 16;
}
 
@@ -2732,8 +2727,7 @@ static void ql_free_lbq_buffers(struct ql_adapter *qdev, 
struct rx_ring *rx_ring
put_page(lbq_desc->p.pg_chunk.page);
lbq_desc->p.pg_chunk.page = NULL;
 
-   if 

[PATCH v2 08/17] staging: qlge: Deduplicate rx buffer queue management

2019-09-27 Thread Benjamin Poirier
The qlge driver (and device) uses two kinds of buffers for reception,
so-called "small buffers" and "large buffers". The two are arranged in
rings, the sbq and lbq. These two share similar data structures and code.

Factor out data structures into a common struct qlge_bq, make required
adjustments to code and dedup the most obvious cases of copy/paste.

This patch should not introduce any functional change other than to some of
the printk format strings.
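
The container lookup at the heart of the deduplication can be sketched in
plain C; the structs below are cut-down stand-ins for the real ones:

```c
#include <stddef.h>

/* A buffer queue carries a type tag so shared code can tell sbq from
 * lbq. */
struct qlge_bq {
	enum { QLGE_SB, QLGE_LB } type;
};

/* Both queues are embedded in the rx_ring. */
struct rx_ring {
	int cq_id;
	struct qlge_bq sbq;
	struct qlge_bq lbq;
};

/* The QLGE_BQ_CONTAINER() idea: recover the enclosing rx_ring from a
 * pointer to either embedded queue by subtracting the right offset. */
static struct rx_ring *bq_container(struct qlge_bq *bq)
{
	size_t off = (bq->type == QLGE_SB) ? offsetof(struct rx_ring, sbq)
					   : offsetof(struct rx_ring, lbq);

	return (struct rx_ring *)((char *)bq - off);
}
```

This is the same arithmetic as the kernel's container_of(), with the
field chosen at runtime from the queue's type tag.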

Signed-off-by: Benjamin Poirier 
---
 drivers/staging/qlge/qlge.h  |  96 +++---
 drivers/staging/qlge/qlge_dbg.c  |  60 ++--
 drivers/staging/qlge/qlge_main.c | 573 ++-
 3 files changed, 335 insertions(+), 394 deletions(-)

diff --git a/drivers/staging/qlge/qlge.h b/drivers/staging/qlge/qlge.h
index a3a52bbc2821..a84aa264dfa8 100644
--- a/drivers/staging/qlge/qlge.h
+++ b/drivers/staging/qlge/qlge.h
@@ -1358,23 +1358,6 @@ struct tx_ring_desc {
struct tx_ring_desc *next;
 };
 
-struct page_chunk {
-   struct page *page;  /* master page */
-   char *va;   /* virt addr for this chunk */
-   u64 map;/* mapping for master */
-   unsigned int offset;/* offset for this chunk */
-};
-
-struct bq_desc {
-   union {
-   struct page_chunk pg_chunk;
-   struct sk_buff *skb;
-   } p;
-   __le64 *addr;
-   u32 index;
-   DEFINE_DMA_UNMAP_ADDR(mapaddr);
-};
-
 #define QL_TXQ_IDX(qdev, skb) (smp_processor_id()%(qdev->tx_ring_count))
 
 struct tx_ring {
@@ -1413,6 +1396,56 @@ enum {
RX_Q = 4,   /* Handles inbound completions. */
 };
 
+struct qlge_page_chunk {
+   struct page *page;
+   void *va; /* virt addr including offset */
+   unsigned int offset;
+};
+
+struct qlge_bq_desc {
+   union {
+   /* for large buffers */
+   struct qlge_page_chunk pg_chunk;
+   /* for small buffers */
+   struct sk_buff *skb;
+   } p;
+   dma_addr_t dma_addr;
+   /* address in ring where the buffer address (dma_addr) is written for
+* the device
+*/
+   __le64 *buf_ptr;
+   u32 index;
+   DEFINE_DMA_UNMAP_ADDR(mapaddr);
+};
+
+/* buffer queue */
+struct qlge_bq {
+   __le64 *base;
+   dma_addr_t base_dma;
+   __le64 *base_indirect;
+   dma_addr_t base_indirect_dma;
+   struct qlge_bq_desc *queue;
+   void __iomem *prod_idx_db_reg;
+   u32 len;/* entry count */
+   u32 size;   /* size in bytes of hw ring */
+   u32 prod_idx;   /* current sw prod idx */
+   u32 curr_idx;   /* next entry we expect */
+   u32 clean_idx;  /* beginning of new descs */
+   u32 free_cnt;   /* free buffer desc cnt */
+   enum {
+   QLGE_SB,/* small buffer */
+   QLGE_LB,/* large buffer */
+   } type;
+};
+
+#define QLGE_BQ_CONTAINER(bq) \
+({ \
+   typeof(bq) _bq = bq; \
+   (struct rx_ring *)((char *)_bq - (_bq->type == QLGE_SB ? \
+ offsetof(struct rx_ring, sbq) : \
+ offsetof(struct rx_ring, lbq))); \
+})
+
 struct rx_ring {
struct cqicb cqicb; /* The chip's completion queue init control 
block. */
 
@@ -1430,33 +1463,12 @@ struct rx_ring {
void __iomem *valid_db_reg; /* PCI doorbell mem area + 0x04 */
 
/* Large buffer queue elements. */
-   u32 lbq_len;/* entry count */
-   u32 lbq_size;   /* size in bytes of queue */
-   void *lbq_base;
-   dma_addr_t lbq_base_dma;
-   void *lbq_base_indirect;
-   dma_addr_t lbq_base_indirect_dma;
-   struct page_chunk pg_chunk; /* current page for chunks */
-   struct bq_desc *lbq;/* array of control blocks */
-   void __iomem *lbq_prod_idx_db_reg;  /* PCI doorbell mem area + 0x18 
*/
-   u32 lbq_prod_idx;   /* current sw prod idx */
-   u32 lbq_curr_idx;   /* next entry we expect */
-   u32 lbq_clean_idx;  /* beginning of new descs */
-   u32 lbq_free_cnt;   /* free buffer desc cnt */
+   struct qlge_bq lbq;
+   struct qlge_page_chunk master_chunk;
+   dma_addr_t chunk_dma_addr;
 
/* Small buffer queue elements. */
-   u32 sbq_len;/* entry count */
-   u32 sbq_size;   /* size in bytes of queue */
-   void *sbq_base;
-   dma_addr_t sbq_base_dma;
-   void *sbq_base_indirect;
-   dma_addr_t sbq_base_indirect_dma;
-   struct bq_desc *sbq;/* array of control blocks */
-   void __iomem *sbq_prod_idx_db_reg; /* PCI doorbell mem area + 0x1c */
-   u32 sbq_prod_idx;   /* current sw prod idx */
-   u32 sbq_curr_idx;   /* next entry we expect */
-   u32 sbq_clean_idx;  /* beginning of new descs */
-

[PATCH v2 13/17] staging: qlge: Remove useless memset

2019-09-27 Thread Benjamin Poirier
This just repeats what the other memset a few lines above did.

Signed-off-by: Benjamin Poirier 
---
 drivers/staging/qlge/qlge_main.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index ef33db118aa1..8da596922582 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -2812,7 +2812,6 @@ static int qlge_init_bq(struct qlge_bq *bq)
buf_ptr = bq->base;
bq_desc = &bq->queue[0];
for (i = 0; i < QLGE_BQ_LEN; i++, buf_ptr++, bq_desc++) {
-   memset(bq_desc, 0, sizeof(*bq_desc));
bq_desc->index = i;
bq_desc->buf_ptr = buf_ptr;
}
-- 
2.23.0

___
devel mailing list
de...@linuxdriverproject.org
http://driverdev.linuxdriverproject.org/mailman/listinfo/driverdev-devel


[PATCH v2 15/17] staging: qlge: Update buffer queue prod index despite oom

2019-09-27 Thread Benjamin Poirier
Currently, if we repeatedly fail to allocate all of the buffers from the
desired batching budget, we will never update the prod_idx register.
Restructure code to always update prod_idx if new buffers could be
allocated. This eliminates the current two stage process (clean_idx ->
prod_idx) and some associated bookkeeping variables.
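
The new accounting can be summarized with a small sketch. QLGE_BQ_LEN below is an assumed power-of-two ring size, which is what makes the mask-based wrap valid; the arithmetic mirrors the free_count computation in the diff:

```c
#include <assert.h>

#define QLGE_BQ_LEN 1024                    /* assumed power of two */
#define QLGE_BQ_WRAP(i) ((i) & (QLGE_BQ_LEN - 1))

/* Number of descriptors software may refill: the distance from
 * next_to_use up to next_to_clean, modulo the ring size. Equal indices
 * mean an empty ring, which is treated as fully refillable. */
static int free_count(unsigned int next_to_clean, unsigned int next_to_use)
{
	int n = (int)next_to_clean - (int)next_to_use;

	if (n <= 0)
		n += QLGE_BQ_LEN;
	return n;
}
```

With only two indices, the producer pointer can be pushed to hardware after every successful batch, so a transient allocation failure no longer blocks the doorbell update indefinitely.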

Signed-off-by: Benjamin Poirier 
---
 drivers/staging/qlge/qlge.h  |   8 +--
 drivers/staging/qlge/qlge_dbg.c  |  10 ++-
 drivers/staging/qlge/qlge_main.c | 105 +++
 3 files changed, 60 insertions(+), 63 deletions(-)

diff --git a/drivers/staging/qlge/qlge.h b/drivers/staging/qlge/qlge.h
index 5e773af50397..7c48e333d29b 100644
--- a/drivers/staging/qlge/qlge.h
+++ b/drivers/staging/qlge/qlge.h
@@ -1424,10 +1424,10 @@ struct qlge_bq {
dma_addr_t base_indirect_dma;
struct qlge_bq_desc *queue;
void __iomem *prod_idx_db_reg;
-   u32 prod_idx;   /* current sw prod idx */
-   u32 curr_idx;   /* next entry we expect */
-   u32 clean_idx;  /* beginning of new descs */
-   u32 free_cnt;   /* free buffer desc cnt */
+   /* next index where sw should refill a buffer for hw */
+   u16 next_to_use;
+   /* next index where sw expects to find a buffer filled by hw */
+   u16 next_to_clean;
enum {
QLGE_SB,/* small buffer */
QLGE_LB,/* large buffer */
diff --git a/drivers/staging/qlge/qlge_dbg.c b/drivers/staging/qlge/qlge_dbg.c
index c21d1b228bd2..08d9223956c2 100644
--- a/drivers/staging/qlge/qlge_dbg.c
+++ b/drivers/staging/qlge/qlge_dbg.c
@@ -1777,8 +1777,8 @@ void ql_dump_rx_ring(struct rx_ring *rx_ring)
pr_err("rx_ring->lbq = %p\n", rx_ring->lbq.queue);
pr_err("rx_ring->lbq.prod_idx_db_reg = %p\n",
   rx_ring->lbq.prod_idx_db_reg);
-   pr_err("rx_ring->lbq.prod_idx = %d\n", rx_ring->lbq.prod_idx);
-   pr_err("rx_ring->lbq.curr_idx = %d\n", rx_ring->lbq.curr_idx);
+   pr_err("rx_ring->lbq.next_to_use = %d\n", rx_ring->lbq.next_to_use);
+   pr_err("rx_ring->lbq.next_to_clean = %d\n", rx_ring->lbq.next_to_clean);
pr_err("rx_ring->lbq_clean_idx = %d\n", rx_ring->lbq_clean_idx);
pr_err("rx_ring->lbq_free_cnt = %d\n", rx_ring->lbq_free_cnt);
 
@@ -1792,10 +1792,8 @@ void ql_dump_rx_ring(struct rx_ring *rx_ring)
pr_err("rx_ring->sbq = %p\n", rx_ring->sbq.queue);
pr_err("rx_ring->sbq.prod_idx_db_reg addr = %p\n",
   rx_ring->sbq.prod_idx_db_reg);
-   pr_err("rx_ring->sbq.prod_idx = %d\n", rx_ring->sbq.prod_idx);
-   pr_err("rx_ring->sbq.curr_idx = %d\n", rx_ring->sbq.curr_idx);
-   pr_err("rx_ring->sbq.clean_idx = %d\n", rx_ring->sbq.clean_idx);
-   pr_err("rx_ring->sbq.free_cnt = %d\n", rx_ring->sbq.free_cnt);
+   pr_err("rx_ring->sbq.next_to_use = %d\n", rx_ring->sbq.next_to_use);
+   pr_err("rx_ring->sbq.next_to_clean = %d\n", rx_ring->sbq.next_to_clean);
pr_err("rx_ring->cq_id = %d\n", rx_ring->cq_id);
pr_err("rx_ring->irq = %d\n", rx_ring->irq);
pr_err("rx_ring->cpu = %d\n", rx_ring->cpu);
diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index 009934bcb515..83e75005688a 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -982,9 +982,8 @@ static struct qlge_bq_desc *qlge_get_curr_buf(struct 
qlge_bq *bq)
 {
struct qlge_bq_desc *bq_desc;
 
-   bq_desc = &bq->queue[bq->curr_idx];
-   bq->curr_idx = QLGE_BQ_WRAP(bq->curr_idx + 1);
-   bq->free_cnt++;
+   bq_desc = &bq->queue[bq->next_to_clean];
+   bq->next_to_clean = QLGE_BQ_WRAP(bq->next_to_clean + 1);
 
return bq_desc;
 }
@@ -1114,9 +1113,9 @@ static void qlge_refill_bq(struct qlge_bq *bq)
 {
struct rx_ring *rx_ring = QLGE_BQ_CONTAINER(bq);
struct ql_adapter *qdev = rx_ring->qdev;
-   u32 clean_idx = bq->clean_idx;
+   struct qlge_bq_desc *bq_desc;
+   int free_count, refill_count;
unsigned int reserved_count;
-   u32 start_idx = clean_idx;
int i;
 
if (bq->type == QLGE_SB)
@@ -1124,44 +1123,52 @@ static void qlge_refill_bq(struct qlge_bq *bq)
else
reserved_count = 32;
 
-   while (bq->free_cnt > reserved_count) {
-   for (i = (bq->clean_idx % 16); i < 16; i++) {
-   struct qlge_bq_desc *bq_desc = &bq->queue[clean_idx];
-   int retval;
+   free_count = bq->next_to_clean - bq->next_to_use;
+   if (free_count <= 0)
+   free_count += QLGE_BQ_LEN;
 
-   netif_printk(qdev, rx_status, KERN_DEBUG, qdev->ndev,
-"ring %u %s: try cleaning clean_idx = 
%d.\n",
-rx_ring->cq_id, bq_type_name[bq->type],
-clean_idx);
-
-

[PATCH v2 02/17] staging: qlge: Remove irq_cnt

2019-09-27 Thread Benjamin Poirier
qlge uses an irq enable/disable refcounting scheme that is:
* poorly implemented
Uses a spin_lock to protect accesses to the irq_cnt atomic
variable.
* buggy
Breaks when there is not a 1:1 sequence of irq - napi_poll, such as
when using SO_BUSY_POLL.
* unnecessary
The purpose of irq_cnt is to reduce irq control writes when
multiple work items result from one irq: the irq is re-enabled
after all work is done.
Analysis of the irq handler shows that there is only one case where
there might be two workers scheduled at once, and those have
separate irq masking bits.

Therefore, remove irq_cnt.

Additionally, we get a performance improvement:
perf stat -e cycles -a -r5 super_netperf 100 -H 192.168.33.1 -t TCP_RR

Before:
628560
628056
622103
622744
627202
[...]
   268,803,947,669  cycles ( +-  0.09% )

After:
636300
634106
634984
638555
634188
[...]
   259,237,291,449  cycles ( +-  0.19% )

Signed-off-by: Benjamin Poirier 
---
 drivers/staging/qlge/qlge.h  |  7 ---
 drivers/staging/qlge/qlge_main.c | 98 +---
 drivers/staging/qlge/qlge_mpi.c  |  1 -
 3 files changed, 27 insertions(+), 79 deletions(-)

diff --git a/drivers/staging/qlge/qlge.h b/drivers/staging/qlge/qlge.h
index ad7c5eb8a3b6..5d9a36deda08 100644
--- a/drivers/staging/qlge/qlge.h
+++ b/drivers/staging/qlge/qlge.h
@@ -1982,11 +1982,6 @@ struct intr_context {
u32 intr_dis_mask;  /* value/mask used to disable this intr */
u32 intr_read_mask; /* value/mask used to read this intr */
char name[IFNAMSIZ * 2];
-   atomic_t irq_cnt;   /* irq_cnt is used in single vector
-* environment.  It's incremented for each
-* irq handler that is scheduled.  When each
-* handler finishes it decrements irq_cnt and
-* enables interrupts if it's zero. */
irq_handler_t handler;
 };
 
@@ -2074,7 +2069,6 @@ struct ql_adapter {
u32 port;   /* Port number this adapter */
 
spinlock_t adapter_lock;
-   spinlock_t hw_lock;
spinlock_t stats_lock;
 
/* PCI Bus Relative Register Addresses */
@@ -2235,7 +2229,6 @@ void ql_mpi_reset_work(struct work_struct *work);
 void ql_mpi_core_to_log(struct work_struct *work);
 int ql_wait_reg_rdy(struct ql_adapter *qdev, u32 reg, u32 bit, u32 ebit);
 void ql_queue_asic_error(struct ql_adapter *qdev);
-u32 ql_enable_completion_interrupt(struct ql_adapter *qdev, u32 intr);
 void ql_set_ethtool_ops(struct net_device *ndev);
 int ql_read_xgmac_reg64(struct ql_adapter *qdev, u32 reg, u64 *data);
 void ql_mpi_idc_work(struct work_struct *work);
diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index d7b64d360ea8..7a8d6390d5de 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -625,75 +625,26 @@ static void ql_disable_interrupts(struct ql_adapter *qdev)
ql_write32(qdev, INTR_EN, (INTR_EN_EI << 16));
 }
 
-/* If we're running with multiple MSI-X vectors then we enable on the fly.
- * Otherwise, we may have multiple outstanding workers and don't want to
- * enable until the last one finishes. In this case, the irq_cnt gets
- * incremented every time we queue a worker and decremented every time
- * a worker finishes.  Once it hits zero we enable the interrupt.
- */
-u32 ql_enable_completion_interrupt(struct ql_adapter *qdev, u32 intr)
+static void ql_enable_completion_interrupt(struct ql_adapter *qdev, u32 intr)
 {
-   u32 var = 0;
-   unsigned long hw_flags = 0;
-   struct intr_context *ctx = qdev->intr_context + intr;
-
-   if (likely(test_bit(QL_MSIX_ENABLED, &qdev->flags) && intr)) {
-   /* Always enable if we're MSIX multi interrupts and
-* it's not the default (zeroeth) interrupt.
-*/
-   ql_write32(qdev, INTR_EN,
-  ctx->intr_en_mask);
-   var = ql_read32(qdev, STS);
-   return var;
-   }
+   struct intr_context *ctx = &qdev->intr_context[intr];
 
-   spin_lock_irqsave(&qdev->hw_lock, hw_flags);
-   if (atomic_dec_and_test(&ctx->irq_cnt)) {
-   ql_write32(qdev, INTR_EN,
-  ctx->intr_en_mask);
-   var = ql_read32(qdev, STS);
-   }
-   spin_unlock_irqrestore(&qdev->hw_lock, hw_flags);
-   return var;
+   ql_write32(qdev, INTR_EN, ctx->intr_en_mask);
 }
 
-static u32 ql_disable_completion_interrupt(struct ql_adapter *qdev, u32 intr)
+static void ql_disable_completion_interrupt(struct ql_adapter *qdev, u32 intr)
 {
-   u32 var = 0;
-   struct intr_context *ctx;
+   struct intr_context *ctx = &qdev->intr_context[intr];
 
-   /* HW disables for us if we're MSIX multi interrupts and
-* it's not the default (zeroeth) 

[PATCH v2 03/17] staging: qlge: Remove page_chunk.last_flag

2019-09-27 Thread Benjamin Poirier
As already done in ql_get_curr_lchunk(), this member can be replaced by a
simple test.
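
A sketch of the replacement test: since the master page is carved into fixed-size chunks, the chunk whose offset equals block_size - buf_size is by construction the last one, so no stored flag is needed. The helper name and values below are illustrative, not driver code:

```c
#include <assert.h>

/* Returns nonzero when the chunk starting at 'offset' is the last one in
 * a master page of 'block_size' bytes split into 'buf_size'-byte chunks,
 * i.e. the point where the page mapping must be torn down. */
static int is_last_chunk(unsigned int offset, unsigned int block_size,
			 unsigned int buf_size)
{
	return offset == block_size - buf_size;
}
```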

Signed-off-by: Benjamin Poirier 
Acked-by: Manish Chopra 
---
 drivers/staging/qlge/qlge.h  |  1 -
 drivers/staging/qlge/qlge_main.c | 13 +
 2 files changed, 5 insertions(+), 9 deletions(-)

diff --git a/drivers/staging/qlge/qlge.h b/drivers/staging/qlge/qlge.h
index 5d9a36deda08..0a156a95e981 100644
--- a/drivers/staging/qlge/qlge.h
+++ b/drivers/staging/qlge/qlge.h
@@ -1363,7 +1363,6 @@ struct page_chunk {
char *va;   /* virt addr for this chunk */
u64 map;/* mapping for master */
unsigned int offset;/* offset for this chunk */
-   unsigned int last_flag; /* flag set for last chunk in page */
 };
 
 struct bq_desc {
diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index 7a8d6390d5de..a82920776e6b 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -1077,11 +1077,9 @@ static int ql_get_next_chunk(struct ql_adapter *qdev, 
struct rx_ring *rx_ring,
rx_ring->pg_chunk.offset += rx_ring->lbq_buf_size;
if (rx_ring->pg_chunk.offset == ql_lbq_block_size(qdev)) {
rx_ring->pg_chunk.page = NULL;
-   lbq_desc->p.pg_chunk.last_flag = 1;
} else {
rx_ring->pg_chunk.va += rx_ring->lbq_buf_size;
get_page(rx_ring->pg_chunk.page);
-   lbq_desc->p.pg_chunk.last_flag = 0;
}
return 0;
 }
@@ -2778,6 +2776,8 @@ static int ql_alloc_tx_resources(struct ql_adapter *qdev,
 
 static void ql_free_lbq_buffers(struct ql_adapter *qdev, struct rx_ring 
*rx_ring)
 {
+   unsigned int last_offset = ql_lbq_block_size(qdev) -
+   rx_ring->lbq_buf_size;
struct bq_desc *lbq_desc;
 
uint32_t  curr_idx, clean_idx;
@@ -2787,13 +2787,10 @@ static void ql_free_lbq_buffers(struct ql_adapter 
*qdev, struct rx_ring *rx_ring
while (curr_idx != clean_idx) {
lbq_desc = &rx_ring->lbq[curr_idx];
 
-   if (lbq_desc->p.pg_chunk.last_flag) {
-   pci_unmap_page(qdev->pdev,
-   lbq_desc->p.pg_chunk.map,
-   ql_lbq_block_size(qdev),
+   if (lbq_desc->p.pg_chunk.offset == last_offset)
+   pci_unmap_page(qdev->pdev, lbq_desc->p.pg_chunk.map,
+  ql_lbq_block_size(qdev),
   PCI_DMA_FROMDEVICE);
-   lbq_desc->p.pg_chunk.last_flag = 0;
-   }
 
put_page(lbq_desc->p.pg_chunk.page);
lbq_desc->p.pg_chunk.page = NULL;
-- 
2.23.0



[PATCH v2 17/17] staging: qlge: Refill empty buffer queues from wq

2019-09-27 Thread Benjamin Poirier
When operating at mtu 9000, qlge does order-1 allocations for rx buffers in
atomic context. This is especially unreliable when free memory is low or
fragmented. Add an approach similar to commit 3161e453e496 ("virtio: net
refill on out-of-memory") to qlge so that the device doesn't lock up if
there are allocation failures.
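
Hedged as a userspace sketch with hypothetical names: the napi path attempts a non-sleeping refill, and if the allocation fails it defers a retry to a worker that may sleep, instead of giving up permanently.

```c
#include <stdbool.h>

/* Simulated allocator: atomic (non-sleeping) attempts fail under memory
 * pressure, sleeping attempts eventually succeed. Illustrative only. */
static bool memory_tight;

static bool alloc_buffer(bool atomic)
{
	return !(atomic && memory_tight);
}

/* Stand-in for schedule_delayed_work(&rx_ring->refill_work, ...). */
static int work_scheduled;

/* Hot path (napi): must not sleep; on failure, defer to the worker. */
static void napi_refill(void)
{
	if (!alloc_buffer(true))
		work_scheduled++;
}

/* Worker context: may sleep (GFP_KERNEL-like), so the refill eventually
 * succeeds and reception resumes without admin intervention. */
static void refill_work(void)
{
	alloc_buffer(false);
	work_scheduled--;
}
```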

Signed-off-by: Benjamin Poirier 
---
 drivers/staging/qlge/TODO|  3 --
 drivers/staging/qlge/qlge.h  |  8 
 drivers/staging/qlge/qlge_main.c | 80 +---
 3 files changed, 72 insertions(+), 19 deletions(-)

diff --git a/drivers/staging/qlge/TODO b/drivers/staging/qlge/TODO
index 51c509084e80..f93f7428f5d5 100644
--- a/drivers/staging/qlge/TODO
+++ b/drivers/staging/qlge/TODO
@@ -1,6 +1,3 @@
-* reception stalls permanently (until admin intervention) if the rx buffer
-  queues become empty because of allocation failures (ex. under memory
-  pressure)
 * commit 7c734359d350 ("qlge: Size RX buffers based on MTU.", v2.6.33-rc1)
   introduced dead code in the receive routines, which should be rewritten
   anyways by the admission of the author himself, see the comment above
diff --git a/drivers/staging/qlge/qlge.h b/drivers/staging/qlge/qlge.h
index e5a352df8228..6ec7e3ce3863 100644
--- a/drivers/staging/qlge/qlge.h
+++ b/drivers/staging/qlge/qlge.h
@@ -1452,6 +1452,13 @@ struct qlge_bq {
 
 #define QLGE_BQ_WRAP(index) ((index) & (QLGE_BQ_LEN - 1))
 
+#define QLGE_BQ_HW_OWNED(bq) \
+({ \
+   typeof(bq) _bq = bq; \
+   QLGE_BQ_WRAP(QLGE_BQ_ALIGN((_bq)->next_to_use) - \
+(_bq)->next_to_clean); \
+})
+
 struct rx_ring {
struct cqicb cqicb; /* The chip's completion queue init control 
block. */
 
@@ -1479,6 +1486,7 @@ struct rx_ring {
/* Misc. handler elements. */
u32 irq;/* Which vector this ring is assigned. */
u32 cpu;/* Which CPU this should run on. */
+   struct delayed_work refill_work;
char name[IFNAMSIZ + 5];
struct napi_struct napi;
u8 reserved;
diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index 02ad0cdf4856..0c381d91faa6 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -1029,7 +1029,7 @@ static const char * const bq_type_name[] = {
 
 /* return 0 or negative error */
 static int qlge_refill_sb(struct rx_ring *rx_ring,
- struct qlge_bq_desc *sbq_desc)
+ struct qlge_bq_desc *sbq_desc, gfp_t gfp)
 {
struct ql_adapter *qdev = rx_ring->qdev;
struct sk_buff *skb;
@@ -1041,7 +1041,7 @@ static int qlge_refill_sb(struct rx_ring *rx_ring,
 "ring %u sbq: getting new skb for index %d.\n",
 rx_ring->cq_id, sbq_desc->index);
 
-   skb = netdev_alloc_skb(qdev->ndev, SMALL_BUFFER_SIZE);
+   skb = __netdev_alloc_skb(qdev->ndev, SMALL_BUFFER_SIZE, gfp);
if (!skb)
return -ENOMEM;
skb_reserve(skb, QLGE_SB_PAD);
@@ -1062,7 +1062,7 @@ static int qlge_refill_sb(struct rx_ring *rx_ring,
 
 /* return 0 or negative error */
 static int qlge_refill_lb(struct rx_ring *rx_ring,
- struct qlge_bq_desc *lbq_desc)
+ struct qlge_bq_desc *lbq_desc, gfp_t gfp)
 {
struct ql_adapter *qdev = rx_ring->qdev;
struct qlge_page_chunk *master_chunk = &rx_ring->master_chunk;
@@ -1071,8 +1071,7 @@ static int qlge_refill_lb(struct rx_ring *rx_ring,
struct page *page;
dma_addr_t dma_addr;
 
-   page = alloc_pages(__GFP_COMP | GFP_ATOMIC,
-  qdev->lbq_buf_order);
+   page = alloc_pages(gfp | __GFP_COMP, qdev->lbq_buf_order);
if (unlikely(!page))
return -ENOMEM;
dma_addr = pci_map_page(qdev->pdev, page, 0,
@@ -1109,33 +1108,33 @@ static int qlge_refill_lb(struct rx_ring *rx_ring,
return 0;
 }
 
-static void qlge_refill_bq(struct qlge_bq *bq)
+/* return 0 or negative error */
+static int qlge_refill_bq(struct qlge_bq *bq, gfp_t gfp)
 {
struct rx_ring *rx_ring = QLGE_BQ_CONTAINER(bq);
struct ql_adapter *qdev = rx_ring->qdev;
struct qlge_bq_desc *bq_desc;
int refill_count;
+   int retval;
int i;
 
refill_count = QLGE_BQ_WRAP(QLGE_BQ_ALIGN(bq->next_to_clean - 1) -
bq->next_to_use);
if (!refill_count)
-   return;
+   return 0;
 
i = bq->next_to_use;
bq_desc = &bq->queue[i];
i -= QLGE_BQ_LEN;
do {
-   int retval;
-
netif_printk(qdev, rx_status, KERN_DEBUG, qdev->ndev,
 "ring %u %s: try cleaning idx %d\n",
 rx_ring->cq_id, bq_type_name[bq->type], i);
 
if (bq->type == QLGE_SB)
-   retval = 

[PATCH v2 14/17] staging: qlge: Replace memset with assignment

2019-09-27 Thread Benjamin Poirier
Instead of clearing the structure wholesale, it is sufficient to initialize
the skb member, which is used to manage sbq instances. lbq instances are
managed according to curr_idx and clean_idx.

Signed-off-by: Benjamin Poirier 
---
 drivers/staging/qlge/qlge_main.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index 8da596922582..009934bcb515 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -2807,11 +2807,10 @@ static int qlge_init_bq(struct qlge_bq *bq)
if (!bq->queue)
return -ENOMEM;
 
-   memset(bq->queue, 0, QLGE_BQ_LEN * sizeof(struct qlge_bq_desc));
-
buf_ptr = bq->base;
bq_desc = &bq->queue[0];
for (i = 0; i < QLGE_BQ_LEN; i++, buf_ptr++, bq_desc++) {
+   bq_desc->p.skb = NULL;
bq_desc->index = i;
bq_desc->buf_ptr = buf_ptr;
}
-- 
2.23.0



[PATCH v2 09/17] staging: qlge: Fix dma_sync_single calls

2019-09-27 Thread Benjamin Poirier
Using the unmap address anywhere other than in unmap calls is a misuse of the
DMA API. In anticipation of this fix, qlge kept two copies of the dma address
around ;)

Fixes: c4e84bde1d59 ("qlge: New Qlogic 10Gb Ethernet Driver.")
Fixes: 7c734359d350 ("qlge: Size RX buffers based on MTU.")
Fixes: 2c9a266afefe ("qlge: Fix receive packets drop.")
Signed-off-by: Benjamin Poirier 
---
 drivers/staging/qlge/qlge.h  |  5 +--
 drivers/staging/qlge/qlge_main.c | 54 +---
 2 files changed, 22 insertions(+), 37 deletions(-)

diff --git a/drivers/staging/qlge/qlge.h b/drivers/staging/qlge/qlge.h
index a84aa264dfa8..519fa39dd194 100644
--- a/drivers/staging/qlge/qlge.h
+++ b/drivers/staging/qlge/qlge.h
@@ -1410,12 +1410,9 @@ struct qlge_bq_desc {
struct sk_buff *skb;
} p;
dma_addr_t dma_addr;
-   /* address in ring where the buffer address (dma_addr) is written for
-* the device
-*/
+   /* address in ring where the buffer address is written for the device */
__le64 *buf_ptr;
u32 index;
-   DEFINE_DMA_UNMAP_ADDR(mapaddr);
 };
 
 /* buffer queue */
diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index ba133d1f2b74..609a87804a94 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -995,15 +995,13 @@ static struct qlge_bq_desc *ql_get_curr_lchunk(struct 
ql_adapter *qdev,
 {
struct qlge_bq_desc *lbq_desc = qlge_get_curr_buf(&rx_ring->lbq);
 
-   pci_dma_sync_single_for_cpu(qdev->pdev,
-   dma_unmap_addr(lbq_desc, mapaddr),
+   pci_dma_sync_single_for_cpu(qdev->pdev, lbq_desc->dma_addr,
qdev->lbq_buf_size, PCI_DMA_FROMDEVICE);
 
if ((lbq_desc->p.pg_chunk.offset + qdev->lbq_buf_size) ==
ql_lbq_block_size(qdev)) {
/* last chunk of the master page */
-   pci_unmap_page(qdev->pdev, lbq_desc->dma_addr -
-  lbq_desc->p.pg_chunk.offset,
+   pci_unmap_page(qdev->pdev, lbq_desc->dma_addr,
   ql_lbq_block_size(qdev), PCI_DMA_FROMDEVICE);
}
 
@@ -1031,7 +1029,7 @@ static const char * const bq_type_name[] = {
[QLGE_LB] = "lbq",
 };
 
-/* return size of allocated buffer (may be 0) or negative error */
+/* return 0 or negative error */
 static int qlge_refill_sb(struct rx_ring *rx_ring,
  struct qlge_bq_desc *sbq_desc)
 {
@@ -1058,12 +1056,13 @@ static int qlge_refill_sb(struct rx_ring *rx_ring,
dev_kfree_skb_any(skb);
return -EIO;
}
+   *sbq_desc->buf_ptr = cpu_to_le64(sbq_desc->dma_addr);
 
sbq_desc->p.skb = skb;
-   return SMALL_BUFFER_SIZE;
+   return 0;
 }
 
-/* return size of allocated buffer or negative error */
+/* return 0 or negative error */
 static int qlge_refill_lb(struct rx_ring *rx_ring,
  struct qlge_bq_desc *lbq_desc)
 {
@@ -1094,7 +1093,9 @@ static int qlge_refill_lb(struct rx_ring *rx_ring,
}
 
lbq_desc->p.pg_chunk = *master_chunk;
-   lbq_desc->dma_addr = rx_ring->chunk_dma_addr + master_chunk->offset;
+   lbq_desc->dma_addr = rx_ring->chunk_dma_addr;
+   *lbq_desc->buf_ptr = cpu_to_le64(lbq_desc->dma_addr +
+lbq_desc->p.pg_chunk.offset);
 
/* Adjust the master page chunk for next
 * buffer get.
@@ -1107,7 +1108,7 @@ static int qlge_refill_lb(struct rx_ring *rx_ring,
get_page(master_chunk->page);
}
 
-   return qdev->lbq_buf_size;
+   return 0;
 }
 
 static void qlge_refill_bq(struct qlge_bq *bq)
@@ -1138,13 +1139,7 @@ static void qlge_refill_bq(struct qlge_bq *bq)
retval = qlge_refill_sb(rx_ring, bq_desc);
else
retval = qlge_refill_lb(rx_ring, bq_desc);
-
-   if (retval > 0) {
-   dma_unmap_addr_set(bq_desc, mapaddr,
-  bq_desc->dma_addr);
-   *bq_desc->buf_ptr =
-   cpu_to_le64(bq_desc->dma_addr);
-   } else if (retval < 0) {
+   if (retval < 0) {
bq->clean_idx = clean_idx;
netif_err(qdev, ifup, qdev->ndev,
  "ring %u %s: Could not get a page 
chunk, i=%d, clean_idx =%d .\n",
@@ -1567,8 +1562,7 @@ static void ql_process_mac_rx_skb(struct ql_adapter *qdev,
}
skb_reserve(new_skb, NET_IP_ALIGN);
 
-   pci_dma_sync_single_for_cpu(qdev->pdev,
-   dma_unmap_addr(sbq_desc, mapaddr),
+   pci_dma_sync_single_for_cpu(qdev->pdev, sbq_desc->dma_addr,

[PATCH v2 04/17] staging: qlge: Deduplicate lbq_buf_size

2019-09-27 Thread Benjamin Poirier
lbq_buf_size is duplicated to every rx_ring structure whereas lbq_buf_order
is present once in the ql_adapter structure. All rings use the same buf
size, keep only one copy of it. Also factor out the calculation of
lbq_buf_size instead of having two copies.

Signed-off-by: Benjamin Poirier 
Acked-by: Willem de Bruijn 
---
 drivers/staging/qlge/qlge.h  |  2 +-
 drivers/staging/qlge/qlge_dbg.c  |  2 +-
 drivers/staging/qlge/qlge_main.c | 61 ++--
 3 files changed, 28 insertions(+), 37 deletions(-)

diff --git a/drivers/staging/qlge/qlge.h b/drivers/staging/qlge/qlge.h
index 0a156a95e981..ba61b4559dd6 100644
--- a/drivers/staging/qlge/qlge.h
+++ b/drivers/staging/qlge/qlge.h
@@ -1433,7 +1433,6 @@ struct rx_ring {
/* Large buffer queue elements. */
u32 lbq_len;/* entry count */
u32 lbq_size;   /* size in bytes of queue */
-   u32 lbq_buf_size;
void *lbq_base;
dma_addr_t lbq_base_dma;
void *lbq_base_indirect;
@@ -2108,6 +2107,7 @@ struct ql_adapter {
struct rx_ring rx_ring[MAX_RX_RINGS];
struct tx_ring tx_ring[MAX_TX_RINGS];
unsigned int lbq_buf_order;
+   u32 lbq_buf_size;
 
int rx_csum;
u32 default_rx_queue;
diff --git a/drivers/staging/qlge/qlge_dbg.c b/drivers/staging/qlge/qlge_dbg.c
index 31389ab8bdf7..46599d74c6fb 100644
--- a/drivers/staging/qlge/qlge_dbg.c
+++ b/drivers/staging/qlge/qlge_dbg.c
@@ -1630,6 +1630,7 @@ void ql_dump_qdev(struct ql_adapter *qdev)
DUMP_QDEV_FIELD(qdev, "0x%08x", xg_sem_mask);
DUMP_QDEV_FIELD(qdev, "0x%08x", port_link_up);
DUMP_QDEV_FIELD(qdev, "0x%08x", port_init);
+   DUMP_QDEV_FIELD(qdev, "%u", lbq_buf_size);
 }
 #endif
 
@@ -1774,7 +1775,6 @@ void ql_dump_rx_ring(struct rx_ring *rx_ring)
pr_err("rx_ring->lbq_curr_idx = %d\n", rx_ring->lbq_curr_idx);
pr_err("rx_ring->lbq_clean_idx = %d\n", rx_ring->lbq_clean_idx);
pr_err("rx_ring->lbq_free_cnt = %d\n", rx_ring->lbq_free_cnt);
-   pr_err("rx_ring->lbq_buf_size = %d\n", rx_ring->lbq_buf_size);
 
pr_err("rx_ring->sbq_base = %p\n", rx_ring->sbq_base);
pr_err("rx_ring->sbq_base_dma = %llx\n",
diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index a82920776e6b..2b1cc4b29bed 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -995,15 +995,14 @@ static struct bq_desc *ql_get_curr_lchunk(struct 
ql_adapter *qdev,
struct bq_desc *lbq_desc = ql_get_curr_lbuf(rx_ring);
 
pci_dma_sync_single_for_cpu(qdev->pdev,
-   dma_unmap_addr(lbq_desc, mapaddr),
-   rx_ring->lbq_buf_size,
-   PCI_DMA_FROMDEVICE);
+   dma_unmap_addr(lbq_desc, mapaddr),
+   qdev->lbq_buf_size, PCI_DMA_FROMDEVICE);
 
/* If it's the last chunk of our master page then
 * we unmap it.
 */
-   if ((lbq_desc->p.pg_chunk.offset + rx_ring->lbq_buf_size)
-   == ql_lbq_block_size(qdev))
+   if (lbq_desc->p.pg_chunk.offset + qdev->lbq_buf_size ==
+   ql_lbq_block_size(qdev))
pci_unmap_page(qdev->pdev,
lbq_desc->p.pg_chunk.map,
ql_lbq_block_size(qdev),
@@ -1074,11 +1073,11 @@ static int ql_get_next_chunk(struct ql_adapter *qdev, 
struct rx_ring *rx_ring,
/* Adjust the master page chunk for next
 * buffer get.
 */
-   rx_ring->pg_chunk.offset += rx_ring->lbq_buf_size;
+   rx_ring->pg_chunk.offset += qdev->lbq_buf_size;
if (rx_ring->pg_chunk.offset == ql_lbq_block_size(qdev)) {
rx_ring->pg_chunk.page = NULL;
} else {
-   rx_ring->pg_chunk.va += rx_ring->lbq_buf_size;
+   rx_ring->pg_chunk.va += qdev->lbq_buf_size;
get_page(rx_ring->pg_chunk.page);
}
return 0;
@@ -1110,12 +1109,12 @@ static void ql_update_lbq(struct ql_adapter *qdev, 
struct rx_ring *rx_ring)
lbq_desc->p.pg_chunk.offset;
dma_unmap_addr_set(lbq_desc, mapaddr, map);
dma_unmap_len_set(lbq_desc, maplen,
-   rx_ring->lbq_buf_size);
+ qdev->lbq_buf_size);
*lbq_desc->addr = cpu_to_le64(map);
 
pci_dma_sync_single_for_device(qdev->pdev, map,
-   rx_ring->lbq_buf_size,
-   PCI_DMA_FROMDEVICE);
+  qdev->lbq_buf_size,
+  PCI_DMA_FROMDEVICE);
clean_idx++;
if 

[PATCH v2 07/17] staging: qlge: Remove useless dma synchronization calls

2019-09-27 Thread Benjamin Poirier
This is unneeded for two reasons:
1) the cpu does not write data for the device in the mapping
2) calls like ..._sync_..._for_device(..., ..._FROMDEVICE) are
   nonsensical, see commit 3f0fb4e85b38 ("Documentation/DMA-API-HOWTO.txt:
   fix misleading example")

Signed-off-by: Benjamin Poirier 
---
 drivers/staging/qlge/qlge_main.c | 12 
 1 file changed, 12 deletions(-)

diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index 0a3809c50c10..03403718a273 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -1110,9 +1110,6 @@ static void ql_update_lbq(struct ql_adapter *qdev, struct 
rx_ring *rx_ring)
dma_unmap_addr_set(lbq_desc, mapaddr, map);
*lbq_desc->addr = cpu_to_le64(map);
 
-   pci_dma_sync_single_for_device(qdev->pdev, map,
-  qdev->lbq_buf_size,
-  PCI_DMA_FROMDEVICE);
clean_idx++;
if (clean_idx == rx_ring->lbq_len)
clean_idx = 0;
@@ -1598,10 +1595,6 @@ static void ql_process_mac_rx_skb(struct ql_adapter 
*qdev,
 
skb_put_data(new_skb, skb->data, length);
 
-   pci_dma_sync_single_for_device(qdev->pdev,
-  dma_unmap_addr(sbq_desc, mapaddr),
-  SMALL_BUF_MAP_SIZE,
-  PCI_DMA_FROMDEVICE);
skb = new_skb;
 
/* Frame error, so drop the packet. */
@@ -1757,11 +1750,6 @@ static struct sk_buff *ql_build_rx_skb(struct ql_adapter 
*qdev,
SMALL_BUF_MAP_SIZE,
PCI_DMA_FROMDEVICE);
skb_put_data(skb, sbq_desc->p.skb->data, length);
-   pci_dma_sync_single_for_device(qdev->pdev,
-  dma_unmap_addr(sbq_desc,
- mapaddr),
-  SMALL_BUF_MAP_SIZE,
-  PCI_DMA_FROMDEVICE);
} else {
netif_printk(qdev, rx_status, KERN_DEBUG, qdev->ndev,
 "%d bytes in a single small buffer.\n",
-- 
2.23.0



[PATCH v2 01/17] staging: qlge: Fix irq masking in INTx mode

2019-09-27 Thread Benjamin Poirier
Tracing the driver operation reveals that the INTR_EN_EN bit (per-queue
interrupt control) does not immediately prevent rx completion interrupts
when the device is operating in INTx mode. This leads to interrupts being
raised while napi is scheduled/running. Those interrupts are ignored by
qlge_isr() and falsely reported as IRQ_NONE thanks to the irq_cnt scheme.
This in turn can cause frames to loiter in the receive queue until a later
frame leads to another rx interrupt that will schedule napi.

Use the INTR_EN_EI bit (master interrupt control) instead.

Signed-off-by: Benjamin Poirier 
---
 drivers/staging/qlge/qlge_main.c | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index 6cae33072496..d7b64d360ea8 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -3366,6 +3366,7 @@ static void ql_enable_msix(struct ql_adapter *qdev)
}
}
qlge_irq_type = LEG_IRQ;
+   set_bit(QL_LEGACY_ENABLED, &qdev->flags);
netif_printk(qdev, ifup, KERN_DEBUG, qdev->ndev,
 "Running with legacy interrupts.\n");
 }
@@ -3509,6 +3510,16 @@ static void ql_resolve_queues_to_irqs(struct ql_adapter *qdev)
intr_context->intr_dis_mask =
INTR_EN_TYPE_MASK | INTR_EN_INTR_MASK |
INTR_EN_TYPE_DISABLE;
+   if (test_bit(QL_LEGACY_ENABLED, &qdev->flags)) {
+   /* Experience shows that when using INTx interrupts,
+* the device does not always auto-mask INTR_EN_EN.
+* Moreover, masking INTR_EN_EN manually does not
+* immediately prevent interrupt generation.
+*/
+   intr_context->intr_en_mask |= INTR_EN_EI << 16 |
+   INTR_EN_EI;
+   intr_context->intr_dis_mask |= INTR_EN_EI << 16;
+   }
intr_context->intr_read_mask =
INTR_EN_TYPE_MASK | INTR_EN_INTR_MASK | INTR_EN_TYPE_READ;
/*
-- 
2.23.0



[PATCH v2 06/17] staging: qlge: Remove rx_ring.sbq_buf_size

2019-09-27 Thread Benjamin Poirier
Tx completion rings have sbq_buf_size = 0, but the code never actually
tests that value. We can remove sbq_buf_size and use a constant instead.

Signed-off-by: Benjamin Poirier 
Reviewed-by: Willem de Bruijn 
---
 drivers/staging/qlge/qlge.h  |  1 -
 drivers/staging/qlge/qlge_dbg.c  |  1 -
 drivers/staging/qlge/qlge_main.c | 24 ++--
 3 files changed, 10 insertions(+), 16 deletions(-)

diff --git a/drivers/staging/qlge/qlge.h b/drivers/staging/qlge/qlge.h
index f32da8c7679f..a3a52bbc2821 100644
--- a/drivers/staging/qlge/qlge.h
+++ b/drivers/staging/qlge/qlge.h
@@ -1447,7 +1447,6 @@ struct rx_ring {
/* Small buffer queue elements. */
u32 sbq_len;/* entry count */
u32 sbq_size;   /* size in bytes of queue */
-   u32 sbq_buf_size;
void *sbq_base;
dma_addr_t sbq_base_dma;
void *sbq_base_indirect;
diff --git a/drivers/staging/qlge/qlge_dbg.c b/drivers/staging/qlge/qlge_dbg.c
index 46599d74c6fb..cff1603d121c 100644
--- a/drivers/staging/qlge/qlge_dbg.c
+++ b/drivers/staging/qlge/qlge_dbg.c
@@ -1792,7 +1792,6 @@ void ql_dump_rx_ring(struct rx_ring *rx_ring)
pr_err("rx_ring->sbq_curr_idx = %d\n", rx_ring->sbq_curr_idx);
pr_err("rx_ring->sbq_clean_idx = %d\n", rx_ring->sbq_clean_idx);
pr_err("rx_ring->sbq_free_cnt = %d\n", rx_ring->sbq_free_cnt);
-   pr_err("rx_ring->sbq_buf_size = %d\n", rx_ring->sbq_buf_size);
pr_err("rx_ring->cq_id = %d\n", rx_ring->cq_id);
pr_err("rx_ring->irq = %d\n", rx_ring->irq);
pr_err("rx_ring->cpu = %d\n", rx_ring->cpu);
diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index 34bc1d9560ce..0a3809c50c10 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -1164,7 +1164,7 @@ static void ql_update_sbq(struct ql_adapter *qdev, struct rx_ring *rx_ring)
skb_reserve(sbq_desc->p.skb, QLGE_SB_PAD);
map = pci_map_single(qdev->pdev,
 sbq_desc->p.skb->data,
-rx_ring->sbq_buf_size,
+SMALL_BUF_MAP_SIZE,
 PCI_DMA_FROMDEVICE);
if (pci_dma_mapping_error(qdev->pdev, map)) {
netif_err(qdev, ifup, qdev->ndev,
@@ -1594,14 +1594,13 @@ static void ql_process_mac_rx_skb(struct ql_adapter *qdev,
 
pci_dma_sync_single_for_cpu(qdev->pdev,
dma_unmap_addr(sbq_desc, mapaddr),
-   rx_ring->sbq_buf_size,
-   PCI_DMA_FROMDEVICE);
+   SMALL_BUF_MAP_SIZE, PCI_DMA_FROMDEVICE);
 
skb_put_data(new_skb, skb->data, length);
 
pci_dma_sync_single_for_device(qdev->pdev,
   dma_unmap_addr(sbq_desc, mapaddr),
-  rx_ring->sbq_buf_size,
+  SMALL_BUF_MAP_SIZE,
   PCI_DMA_FROMDEVICE);
skb = new_skb;
 
@@ -1723,7 +1722,7 @@ static struct sk_buff *ql_build_rx_skb(struct ql_adapter *qdev,
sbq_desc = ql_get_curr_sbuf(rx_ring);
pci_unmap_single(qdev->pdev,
dma_unmap_addr(sbq_desc, mapaddr),
-   rx_ring->sbq_buf_size, PCI_DMA_FROMDEVICE);
+   SMALL_BUF_MAP_SIZE, PCI_DMA_FROMDEVICE);
skb = sbq_desc->p.skb;
ql_realign_skb(skb, hdr_len);
skb_put(skb, hdr_len);
@@ -1755,13 +1754,13 @@ static struct sk_buff *ql_build_rx_skb(struct ql_adapter *qdev,
pci_dma_sync_single_for_cpu(qdev->pdev,
dma_unmap_addr(sbq_desc,
   mapaddr),
-   rx_ring->sbq_buf_size,
+   SMALL_BUF_MAP_SIZE,
PCI_DMA_FROMDEVICE);
skb_put_data(skb, sbq_desc->p.skb->data, length);
pci_dma_sync_single_for_device(qdev->pdev,
   dma_unmap_addr(sbq_desc,
  mapaddr),
-  rx_ring->sbq_buf_size,
+  SMALL_BUF_MAP_SIZE,
   PCI_DMA_FROMDEVICE);
} else {
netif_printk(qdev, rx_status, KERN_DEBUG, qdev->ndev,
@@ -1773,7 

[PATCH v2 10/17] staging: qlge: Remove rx_ring.type

2019-09-27 Thread Benjamin Poirier
This field is redundant: the type can be determined from the index, cq_id.

Signed-off-by: Benjamin Poirier 
---
 drivers/staging/qlge/qlge.h  | 10 --
 drivers/staging/qlge/qlge_dbg.c  | 16 
 drivers/staging/qlge/qlge_main.c | 31 +++
 3 files changed, 19 insertions(+), 38 deletions(-)

diff --git a/drivers/staging/qlge/qlge.h b/drivers/staging/qlge/qlge.h
index 519fa39dd194..5a4b2520cd2a 100644
--- a/drivers/staging/qlge/qlge.h
+++ b/drivers/staging/qlge/qlge.h
@@ -1387,15 +1387,6 @@ struct tx_ring {
u64 tx_errors;
 };
 
-/*
- * Type of inbound queue.
- */
-enum {
-   DEFAULT_Q = 2,  /* Handles slow queue and chip/MPI events. */
-   TX_Q = 3,   /* Handles outbound completions. */
-   RX_Q = 4,   /* Handles inbound completions. */
-};
-
 struct qlge_page_chunk {
struct page *page;
void *va; /* virt addr including offset */
@@ -1468,7 +1459,6 @@ struct rx_ring {
struct qlge_bq sbq;
 
/* Misc. handler elements. */
-   u32 type;   /* Type of queue, tx, rx. */
u32 irq;/* Which vector this ring is assigned. */
u32 cpu;/* Which CPU this should run on. */
char name[IFNAMSIZ + 5];
diff --git a/drivers/staging/qlge/qlge_dbg.c b/drivers/staging/qlge/qlge_dbg.c
index 35af06dd21dd..a177302073db 100644
--- a/drivers/staging/qlge/qlge_dbg.c
+++ b/drivers/staging/qlge/qlge_dbg.c
@@ -1731,16 +1731,24 @@ void ql_dump_cqicb(struct cqicb *cqicb)
   le16_to_cpu(cqicb->sbq_len));
 }
 
+static const char *qlge_rx_ring_type_name(struct rx_ring *rx_ring)
+{
+   struct ql_adapter *qdev = rx_ring->qdev;
+
+   if (rx_ring->cq_id < qdev->rss_ring_count)
+   return "RX COMPLETION";
+   else
+   return "TX COMPLETION";
+};
+
 void ql_dump_rx_ring(struct rx_ring *rx_ring)
 {
if (rx_ring == NULL)
return;
pr_err("= Dumping rx_ring %d ===\n",
   rx_ring->cq_id);
-   pr_err("Dumping rx_ring %d, type = %s%s%s\n",
-  rx_ring->cq_id, rx_ring->type == DEFAULT_Q ? "DEFAULT" : "",
-  rx_ring->type == TX_Q ? "OUTBOUND COMPLETIONS" : "",
-  rx_ring->type == RX_Q ? "INBOUND_COMPLETIONS" : "");
+   pr_err("Dumping rx_ring %d, type = %s\n", rx_ring->cq_id,
+  qlge_rx_ring_type_name(rx_ring));
pr_err("rx_ring->cqicb = %p\n", &rx_ring->cqicb);
pr_err("rx_ring->cq_base = %p\n", rx_ring->cq_base);
pr_err("rx_ring->cq_base_dma = %llx\n",
diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
index 609a87804a94..0e304a7ac22f 100644
--- a/drivers/staging/qlge/qlge_main.c
+++ b/drivers/staging/qlge/qlge_main.c
@@ -2785,14 +2785,10 @@ static void ql_free_rx_buffers(struct ql_adapter *qdev)
 
 static void ql_alloc_rx_buffers(struct ql_adapter *qdev)
 {
-   struct rx_ring *rx_ring;
int i;
 
-   for (i = 0; i < qdev->rx_ring_count; i++) {
-   rx_ring = &qdev->rx_ring[i];
-   if (rx_ring->type != TX_Q)
-   ql_update_buffer_queues(rx_ring);
-   }
+   for (i = 0; i < qdev->rss_ring_count; i++)
+   ql_update_buffer_queues(&qdev->rx_ring[i]);
 }
 
 static int qlge_init_bq(struct qlge_bq *bq)
@@ -3071,12 +3067,7 @@ static int ql_start_rx_ring(struct ql_adapter *qdev, struct rx_ring *rx_ring)
rx_ring->sbq.clean_idx = 0;
rx_ring->sbq.free_cnt = rx_ring->sbq.len;
}
-   switch (rx_ring->type) {
-   case TX_Q:
-   cqicb->irq_delay = cpu_to_le16(qdev->tx_coalesce_usecs);
-   cqicb->pkt_delay = cpu_to_le16(qdev->tx_max_coalesced_frames);
-   break;
-   case RX_Q:
+   if (rx_ring->cq_id < qdev->rss_ring_count) {
/* Inbound completion handling rx_rings run in
 * separate NAPI contexts.
 */
@@ -3084,10 +3075,9 @@ static int ql_start_rx_ring(struct ql_adapter *qdev, struct rx_ring *rx_ring)
   64);
cqicb->irq_delay = cpu_to_le16(qdev->rx_coalesce_usecs);
cqicb->pkt_delay = cpu_to_le16(qdev->rx_max_coalesced_frames);
-   break;
-   default:
-   netif_printk(qdev, ifup, KERN_DEBUG, qdev->ndev,
-"Invalid rx_ring->type = %d.\n", rx_ring->type);
+   } else {
+   cqicb->irq_delay = cpu_to_le16(qdev->tx_coalesce_usecs);
+   cqicb->pkt_delay = cpu_to_le16(qdev->tx_max_coalesced_frames);
}
err = ql_write_cfg(qdev, cqicb, sizeof(struct cqicb),
   CFG_LCQ, rx_ring->cq_id);
@@ -3444,12 +3434,7 @@ static int ql_request_irq(struct ql_adapter *qdev)
goto err_irq;
 
netif_err(qdev, ifup, qdev->ndev,
-

[PATCH v2 0/17] staging: qlge: Fix rx stall in case of allocation failures

2019-09-27 Thread Benjamin Poirier
qlge refills rx buffers from napi context. In case of allocation failure,
allocation will be retried the next time napi runs. If a receive queue runs
out of free buffers (possibly after subsequent allocation failures), it
drops all traffic, stops raising interrupts, and napi is no longer
scheduled; reception stalls until manual admin intervention.

This patch series adds a fallback mechanism for rx buffer allocation. If an
rx buffer queue becomes empty, a workqueue is scheduled to refill it from
process context where allocation can block until mm has freed some pages
(hopefully). This approach was inspired by the virtio_net driver (commit
3161e453e496 "virtio: net refill on out-of-memory").

I've compared this with how some other devices with a similar allocation
scheme handle this situation:
mlx4 relies on a periodic watchdog, sfc uses a timer, e1000e and fm10k rely
on periodic hardware interrupts (IIUC). In all cases, they use this to
schedule napi periodically at a fixed interval (10-250ms) until allocations
succeed. This kind of approach simplifies allocations because only one
context may refill buffers; however, it is inefficient because of the fixed
interval: either the interval is too short, the allocation fails again, and
work is done without forward progress; or the interval is too long, buffers
could have been allocated and rx restarted earlier, but instead traffic was
dropped while the system sat idle.

Note that the qlge driver (and device) uses two kinds of buffers for
received data, so-called "small buffers" and "large buffers". The two are
arranged in ring pairs, the sbq and lbq. Depending on frame size, protocol
content and header splitting, data can go in either type of buffers.
Because of their size, lbq allocations are more likely to fail and lead to
a stall; however, I've reproduced the problem with sbq as well. The problem
was originally found when running jumbo frames. In that case, qlge uses
order-1 allocations for the large buffers. Although the two kinds of
buffers are managed similarly, the qlge driver duplicates most data
structures and code for their handling. In fact, even a casual look at the
qlge driver shows it to be in a state of disrepair, to put it kindly...

Patches 1-14 are cleanups that remove, fix and deduplicate code related to
sbq and lbq handling. Regarding those cleanups, patches 2 ("Remove
irq_cnt") and 8 ("Deduplicate rx buffer queue management") are the most
important. Finally, patches 15-17 fix the actual problem of rx stalls in
case of allocation failures by implementing the fallback of allocations to
a workqueue.

I've tested these patches using two different approaches:
1) A sender uses pktgen to send udp traffic. The receiver has a large swap,
a large net.core.rmem_max, runs a program that dirties all free memory in a
loop and runs a program that opens as many udp sockets as possible but
doesn't read from them. Since received data is all queued in the sockets
rather than freed, qlge is allocating receive buffers as quickly as
possible and faces allocation failures if the swap is slower than the
network.
2) A sender uses super_netperf. Likewise, the receiver has a large swap, a
large net.core.rmem_max and runs a program that dirties all free memory in
a loop. After the netperf send test is started, `killall -s SIGSTOP
netserver` on the receiver leads to the same situation as above.

---
Changes
v1->v2
https://lore.kernel.org/netdev/20190617074858.32467-1-bpoir...@suse.com/
* simplified QLGE_FIT16 macro down to a simple cast
* added "qlge: Fix irq masking in INTx mode"
* fixed address in pci_unmap_page() calls in "qlge: Deduplicate rx buffer
  queue management", no effect on end result of series
* adjusted series following move of driver to staging




[PATCH] staging: vt6656: clean up an indentation issue

2019-09-27 Thread Colin King
From: Colin Ian King 

There is a block of code that is indented incorrectly; add in the
missing tabs.

Signed-off-by: Colin Ian King 
---
 drivers/staging/vt6656/main_usb.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/staging/vt6656/main_usb.c b/drivers/staging/vt6656/main_usb.c
index 856ba97aec4f..3478a10f8025 100644
--- a/drivers/staging/vt6656/main_usb.c
+++ b/drivers/staging/vt6656/main_usb.c
@@ -249,10 +249,10 @@ static int vnt_init_registers(struct vnt_private *priv)
} else {
priv->tx_antenna_mode = ANT_B;
 
-   if (priv->tx_rx_ant_inv)
-   priv->rx_antenna_mode = ANT_A;
-   else
-   priv->rx_antenna_mode = ANT_B;
+   if (priv->tx_rx_ant_inv)
+   priv->rx_antenna_mode = ANT_A;
+   else
+   priv->rx_antenna_mode = ANT_B;
}
}
 
-- 
2.20.1
