[PATCH 1/2] powerpc: sstep: Fix load and update instructions

2020-11-18 Thread Sandipan Das
The Power ISA says that the fixed-point load and update
instructions must neither use R0 for the base address (RA)
nor have the destination (RT) and the base address (RA) as
the same register. In these cases, the instruction is
invalid. This applies to the following instructions.
  * Load Byte and Zero with Update (lbzu)
  * Load Byte and Zero with Update Indexed (lbzux)
  * Load Halfword and Zero with Update (lhzu)
  * Load Halfword and Zero with Update Indexed (lhzux)
  * Load Halfword Algebraic with Update (lhau)
  * Load Halfword Algebraic with Update Indexed (lhaux)
  * Load Word and Zero with Update (lwzu)
  * Load Word and Zero with Update Indexed (lwzux)
  * Load Word Algebraic with Update Indexed (lwaux)
  * Load Doubleword with Update (ldu)
  * Load Doubleword with Update Indexed (ldux)

However, the following behaviour is observed using some
invalid opcodes where RA = RT.

A userspace program using an invalid instruction word like
0xe9ce0001, i.e. "ldu r14, 0(r14)", runs and exits without
getting terminated abruptly. The instruction performs the
load operation but does not write the effective address to
the base address register. Attaching an uprobe at that
instruction's address results in emulation which writes the
effective address to the base register. Thus, the final value
of the base address register is different.

To remove any inconsistencies, this adds an additional check
for the aforementioned instructions to make sure that they
are treated as unknown by the emulation infrastructure when
RA = 0 or RA = RT. The kernel will then fall back to executing
the instruction on hardware.
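
For reference, a minimal userspace-style sketch of the check described
above (illustrative only; the helper is made up, but the field positions
follow the usual D/DS-form layout that analyse_instr() extracts into
"rd" and "ra"):

#include <stdbool.h>
#include <stdint.h>

/* Primary opcode in the top 6 bits, RT in bits 6-10, RA in bits 11-15
 * (IBM bit numbering, MSB 0).
 */
static bool load_with_update_is_valid(uint32_t word)
{
	uint32_t rt = (word >> 21) & 0x1f;	/* destination (RT) */
	uint32_t ra = (word >> 16) & 0x1f;	/* base address (RA) */

	/* Invalid if the base is R0 or the base equals the destination. */
	return ra != 0 && ra != rt;
}

/* 0xe9ce0001 decodes to "ldu r14, 0(r14)": RA == RT == 14, so this
 * returns false even though hardware happens to execute the word.
 */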

Signed-off-by: Sandipan Das 
---
 arch/powerpc/lib/sstep.c | 22 ++
 1 file changed, 22 insertions(+)

diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
index 855457ed09b5..25a5436be6c6 100644
--- a/arch/powerpc/lib/sstep.c
+++ b/arch/powerpc/lib/sstep.c
@@ -2157,11 +2157,15 @@ int analyse_instr(struct instruction_op *op, const 
struct pt_regs *regs,
 
case 23:/* lwzx */
case 55:/* lwzux */
+   if (u && (ra == 0 || ra == rd))
+   return -1;
op->type = MKOP(LOAD, u, 4);
break;
 
case 87:/* lbzx */
case 119:   /* lbzux */
+   if (u && (ra == 0 || ra == rd))
+   return -1;
op->type = MKOP(LOAD, u, 1);
break;
 
@@ -2215,6 +2219,8 @@ int analyse_instr(struct instruction_op *op, const struct 
pt_regs *regs,
 #ifdef __powerpc64__
case 21:/* ldx */
case 53:/* ldux */
+   if (u && (ra == 0 || ra == rd))
+   return -1;
op->type = MKOP(LOAD, u, 8);
break;
 
@@ -2236,18 +2242,24 @@ int analyse_instr(struct instruction_op *op, const 
struct pt_regs *regs,
 
case 279:   /* lhzx */
case 311:   /* lhzux */
+   if (u && (ra == 0 || ra == rd))
+   return -1;
op->type = MKOP(LOAD, u, 2);
break;
 
 #ifdef __powerpc64__
case 341:   /* lwax */
case 373:   /* lwaux */
+   if (u && (ra == 0 || ra == rd))
+   return -1;
op->type = MKOP(LOAD, SIGNEXT | u, 4);
break;
 #endif
 
case 343:   /* lhax */
case 375:   /* lhaux */
+   if (u && (ra == 0 || ra == rd))
+   return -1;
op->type = MKOP(LOAD, SIGNEXT | u, 2);
break;
 
@@ -2540,12 +2552,16 @@ int analyse_instr(struct instruction_op *op, const 
struct pt_regs *regs,
 
case 32:/* lwz */
case 33:/* lwzu */
+   if (u && (ra == 0 || ra == rd))
+   return -1;
op->type = MKOP(LOAD, u, 4);
op->ea = dform_ea(word, regs);
break;
 
case 34:/* lbz */
case 35:/* lbzu */
+   if (u && (ra == 0 || ra == rd))
+   return -1;
op->type = MKOP(LOAD, u, 1);
op->ea = dform_ea(word, regs);
break;
@@ -2564,12 +2580,16 @@ int analyse_instr(struct instruction_op *op, const 
struct pt_regs *regs,
 
case 40:/* lhz */
case 41:/* lhzu */
+   if (u && (ra == 0 || ra == rd))
+   return -1;
op->type = MKOP(LOAD, u, 2);
op->ea = dform_ea(word, regs);
break;
 
case 42:/* lha */
case 43:/* lhau */
+   

[PATCH 2/2] powerpc: sstep: Fix store and update instructions

2020-11-18 Thread Sandipan Das
The Power ISA says that the fixed-point store and update
instructions must not use R0 for the base address (RA).
In this case, the instruction is invalid. This applies
to the following instructions.
  * Store Byte with Update (stbu)
  * Store Byte with Update Indexed (stbux)
  * Store Halfword with Update (sthu)
  * Store Halfword with Update Indexed (sthux)
  * Store Word with Update (stwu)
  * Store Word with Update Indexed (stwux)
  * Store Doubleword with Update (stdu)
  * Store Doubleword with Update Indexed (stdux)

To remove any inconsistencies, this adds an additional check
for the aforementioned instructions to make sure that they
are treated as unknown by the emulation infrastructure when
RA = 0. The kernel will then fall back to executing the
instruction on hardware.
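
For context, RA = 0 is singled out because the D-form effective address
calculation treats RA = 0 as the constant zero rather than GPR0, so an
update form (which must write the EA back into RA) cannot use it. A
sketch of that calculation (hypothetical helper, not the kernel's
dform_ea(), though it follows the same idea):

#include <stdint.h>

static uint64_t dform_ea_sketch(uint32_t word, const uint64_t *gpr)
{
	uint32_t ra = (word >> 16) & 0x1f;
	int64_t  d  = (int16_t)(word & 0xffff);	/* sign-extended displacement */

	/* (RA|0): base is GPR[RA], or the literal 0 when RA == 0. */
	return (ra ? gpr[ra] : 0) + d;
}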

Signed-off-by: Sandipan Das 
---
 arch/powerpc/lib/sstep.c | 16 
 1 file changed, 16 insertions(+)

diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
index 25a5436be6c6..1c20c14f8757 100644
--- a/arch/powerpc/lib/sstep.c
+++ b/arch/powerpc/lib/sstep.c
@@ -2226,17 +2226,23 @@ int analyse_instr(struct instruction_op *op, const 
struct pt_regs *regs,
 
case 149:   /* stdx */
case 181:   /* stdux */
+   if (u && ra == 0)
+   return -1;
op->type = MKOP(STORE, u, 8);
break;
 #endif
 
case 151:   /* stwx */
case 183:   /* stwux */
+   if (u && ra == 0)
+   return -1;
op->type = MKOP(STORE, u, 4);
break;
 
case 215:   /* stbx */
case 247:   /* stbux */
+   if (u && ra == 0)
+   return -1;
op->type = MKOP(STORE, u, 1);
break;
 
@@ -2265,6 +2271,8 @@ int analyse_instr(struct instruction_op *op, const struct 
pt_regs *regs,
 
case 407:   /* sthx */
case 439:   /* sthux */
+   if (u && ra == 0)
+   return -1;
op->type = MKOP(STORE, u, 2);
break;
 
@@ -2568,12 +2576,16 @@ int analyse_instr(struct instruction_op *op, const 
struct pt_regs *regs,
 
case 36:/* stw */
case 37:/* stwu */
+   if (u && ra == 0)
+   return -1;
op->type = MKOP(STORE, u, 4);
op->ea = dform_ea(word, regs);
break;
 
case 38:/* stb */
case 39:/* stbu */
+   if (u && ra == 0)
+   return -1;
op->type = MKOP(STORE, u, 1);
op->ea = dform_ea(word, regs);
break;
@@ -2596,6 +2608,8 @@ int analyse_instr(struct instruction_op *op, const struct 
pt_regs *regs,
 
case 44:/* sth */
case 45:/* sthu */
+   if (u && ra == 0)
+   return -1;
op->type = MKOP(STORE, u, 2);
op->ea = dform_ea(word, regs);
break;
@@ -2746,6 +2760,8 @@ int analyse_instr(struct instruction_op *op, const struct 
pt_regs *regs,
op->type = MKOP(STORE, 0, 8);
break;
case 1: /* stdu */
+   if (ra == 0)
+   return -1;
op->type = MKOP(STORE, UPDATE, 8);
break;
case 2: /* stq */
-- 
2.25.1



[PATCH 1/3] powerpc/wrapper: add "-z notext" flag to disable diagnostic

2020-11-18 Thread Bill Wendling
The "-z notext" flag disables reporting an error if DT_TEXTREL is set.

  ld.lld: error: can't create dynamic relocation R_PPC64_ADDR64 against
symbol: _start in readonly segment; recompile object files with
-fPIC or pass '-Wl,-z,notext' to allow text relocations in the
output
  >>> defined in
  >>> referenced by crt0.o:(.text+0x8) in archive arch/powerpc/boot/wrapper.a

The BFD linker disables this diagnostic by default (though it's
configurable in current versions); LLD enables it by default. So we add
the flag to keep LLD from emitting the error.

Cc: Fangrui Song 
Cc: Alan Modra 
Signed-off-by: Bill Wendling 
---
 arch/powerpc/boot/wrapper | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/boot/wrapper b/arch/powerpc/boot/wrapper
index e1194955adbb..41fa0a8715e3 100755
--- a/arch/powerpc/boot/wrapper
+++ b/arch/powerpc/boot/wrapper
@@ -46,6 +46,7 @@ compression=.gz
 uboot_comp=gzip
 pie=
 format=
+notext=
 rodynamic=
 
 # cross-compilation prefix
@@ -354,6 +355,7 @@ epapr)
 platformo="$object/pseries-head.o $object/epapr.o $object/epapr-wrapper.o"
 link_address='0x2000'
 pie=-pie
+notext='-z notext'
 rodynamic=$(if ${CROSS}ld -V 2>&1 | grep -q LLD ; then echo "-z rodynamic"; fi)
 ;;
 mvme5100)
@@ -495,7 +497,7 @@ if [ "$platform" != "miboot" ]; then
 text_start="-Ttext $link_address"
 fi
 #link everything
-${CROSS}ld -m $format -T $lds $text_start $pie $nodl $rodynamic -o "$ofile" $map \
+${CROSS}ld -m $format -T $lds $text_start $pie $nodl $rodynamic $notext -o "$ofile" $map \
$platformo $tmp $object/wrapper.a
 rm $tmp
 fi
-- 
2.29.2.454.gaff20da3a2-goog



[PATCH net-next v2 9/9] ibmvnic: Do not replenish RX buffers after every polling loop

2020-11-18 Thread Thomas Falcon
From: "Dwip N. Banerjee" 

Reduce the amount of time spent replenishing RX buffers by
only doing so once the number of available buffers has fallen
below a certain threshold, in this case half of the total number
of buffers, or if the polling loop exits before exhausting its
budget.
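
Spelled out, the replenish decision amounts to the following predicate
(an illustrative helper using the driver's names; it assumes the ibmvnic
headers and is not part of the patch):

static bool want_replenish(struct ibmvnic_adapter *adapter, int scrq_num,
			   int frames_processed, int budget)
{
	int available = atomic_read(&adapter->rx_pool[scrq_num].available);

	/* Replenish only if the pool has drained below half capacity or
	 * the poll loop ran out of work before using up its budget.
	 */
	return adapter->state != VNIC_CLOSING &&
	       (available < adapter->req_rx_add_entries_per_subcrq / 2 ||
		frames_processed < budget);
}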

Signed-off-by: Dwip N. Banerjee 
---
 drivers/net/ethernet/ibm/ibmvnic.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c 
b/drivers/net/ethernet/ibm/ibmvnic.c
index 96df6d8fa277..9fe43ab0496d 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -2537,7 +2537,10 @@ static int ibmvnic_poll(struct napi_struct *napi, int 
budget)
frames_processed++;
}
 
-   if (adapter->state != VNIC_CLOSING)
+   if (adapter->state != VNIC_CLOSING &&
+   ((atomic_read(&adapter->rx_pool[scrq_num].available) <
+ adapter->req_rx_add_entries_per_subcrq / 2) ||
+ frames_processed < budget))
replenish_rx_pool(adapter, &adapter->rx_pool[scrq_num]);
if (frames_processed < budget) {
if (napi_complete_done(napi, frames_processed)) {
-- 
2.26.2



[PATCH net-next v2 7/9] ibmvnic: Correctly re-enable interrupts in NAPI polling routine

2020-11-18 Thread Thomas Falcon
From: "Dwip N. Banerjee" 

If the current NAPI polling loop exits without completing its
budget, only re-enable interrupts if there are no entries remaining
in the queue and napi_complete_done is successful. If there are entries
remaining on the queue that were missed, restart the polling loop.
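
The ordering matters: the device interrupt should only be re-enabled once
napi_complete_done() confirms NAPI really came off the poll list, and the
queue must be re-checked afterwards to close the race with completions
that arrive in between. A condensed sketch of the pattern, with
hypothetical helpers standing in for the driver's own routines:

#include <linux/netdevice.h>

/* Hypothetical helpers -- placeholders for the driver's routines. */
int process_rx_queue(struct napi_struct *napi, int budget);
bool rx_queue_has_work(struct napi_struct *napi);
void rx_irq_enable(struct napi_struct *napi);
void rx_irq_disable(struct napi_struct *napi);

static int example_poll(struct napi_struct *napi, int budget)
{
	int done;

restart:
	done = process_rx_queue(napi, budget);
	if (done < budget && napi_complete_done(napi, done)) {
		rx_irq_enable(napi);
		/* Work queued after the last check?  Poll again. */
		if (rx_queue_has_work(napi) && napi_reschedule(napi)) {
			rx_irq_disable(napi);
			goto restart;
		}
	}
	return done;
}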

Signed-off-by: Dwip N. Banerjee 
---
 drivers/net/ethernet/ibm/ibmvnic.c | 37 +++---
 1 file changed, 23 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c 
b/drivers/net/ethernet/ibm/ibmvnic.c
index 85df91c9861b..596546f0614d 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -2450,10 +2450,17 @@ static void remove_buff_from_pool(struct 
ibmvnic_adapter *adapter,
 
 static int ibmvnic_poll(struct napi_struct *napi, int budget)
 {
-   struct net_device *netdev = napi->dev;
-   struct ibmvnic_adapter *adapter = netdev_priv(netdev);
-   int scrq_num = (int)(napi - adapter->napi);
-   int frames_processed = 0;
+   struct ibmvnic_sub_crq_queue *rx_scrq;
+   struct ibmvnic_adapter *adapter;
+   struct net_device *netdev;
+   int frames_processed;
+   int scrq_num;
+
+   netdev = napi->dev;
+   adapter = netdev_priv(netdev);
+   scrq_num = (int)(napi - adapter->napi);
+   frames_processed = 0;
+   rx_scrq = adapter->rx_scrq[scrq_num];
 
 restart_poll:
while (frames_processed < budget) {
@@ -2466,14 +2473,14 @@ static int ibmvnic_poll(struct napi_struct *napi, int 
budget)
 
if (unlikely(test_bit(0, &adapter->resetting) &&
 adapter->reset_reason != VNIC_RESET_NON_FATAL)) {
-   enable_scrq_irq(adapter, adapter->rx_scrq[scrq_num]);
+   enable_scrq_irq(adapter, rx_scrq);
napi_complete_done(napi, frames_processed);
return frames_processed;
}
 
-   if (!pending_scrq(adapter, adapter->rx_scrq[scrq_num]))
+   if (!pending_scrq(adapter, rx_scrq))
break;
-   next = ibmvnic_next_scrq(adapter, adapter->rx_scrq[scrq_num]);
+   next = ibmvnic_next_scrq(adapter, rx_scrq);
rx_buff =
(struct ibmvnic_rx_buff *)be64_to_cpu(next->
  rx_comp.correlator);
@@ -2532,14 +2539,16 @@ static int ibmvnic_poll(struct napi_struct *napi, int 
budget)
 
if (adapter->state != VNIC_CLOSING)
replenish_rx_pool(adapter, &adapter->rx_pool[scrq_num]);
-
if (frames_processed < budget) {
-   enable_scrq_irq(adapter, adapter->rx_scrq[scrq_num]);
-   napi_complete_done(napi, frames_processed);
-   if (pending_scrq(adapter, adapter->rx_scrq[scrq_num]) &&
-   napi_reschedule(napi)) {
-   disable_scrq_irq(adapter, adapter->rx_scrq[scrq_num]);
-   goto restart_poll;
+   if (napi_complete_done(napi, frames_processed)) {
+   enable_scrq_irq(adapter, rx_scrq);
+   if (pending_scrq(adapter, rx_scrq)) {
+   rmb();
+   if (napi_reschedule(napi)) {
+   disable_scrq_irq(adapter, rx_scrq);
+   goto restart_poll;
+   }
+   }
}
}
return frames_processed;
-- 
2.26.2



[PATCH net-next v2 8/9] ibmvnic: Use netdev_alloc_skb instead of alloc_skb to replenish RX buffers

2020-11-18 Thread Thomas Falcon
From: "Dwip N. Banerjee" 

Take advantage of the additional optimizations in netdev_alloc_skb when
allocating socket buffers to be used for packet reception.

Signed-off-by: Dwip N. Banerjee 
---
 drivers/net/ethernet/ibm/ibmvnic.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c 
b/drivers/net/ethernet/ibm/ibmvnic.c
index 596546f0614d..96df6d8fa277 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -323,7 +323,7 @@ static void replenish_rx_pool(struct ibmvnic_adapter 
*adapter,
rx_scrq = adapter->rx_scrq[pool->index];
ind_bufp = &rx_scrq->ind_buf;
for (i = 0; i < count; ++i) {
-   skb = alloc_skb(pool->buff_size, GFP_ATOMIC);
+   skb = netdev_alloc_skb(adapter->netdev, pool->buff_size);
if (!skb) {
dev_err(dev, "Couldn't replenish rx buff\n");
adapter->replenish_no_mem++;
-- 
2.26.2



[PATCH net-next v2 5/9] ibmvnic: Remove send_subcrq function

2020-11-18 Thread Thomas Falcon
It is no longer used, so remove it.

Signed-off-by: Thomas Falcon 
---
 drivers/net/ethernet/ibm/ibmvnic.c | 34 --
 1 file changed, 34 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c 
b/drivers/net/ethernet/ibm/ibmvnic.c
index 2aace693559f..e9b0cb6dfd9d 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -84,8 +84,6 @@ static int ibmvnic_reset_crq(struct ibmvnic_adapter *);
 static int ibmvnic_send_crq_init(struct ibmvnic_adapter *);
 static int ibmvnic_reenable_crq_queue(struct ibmvnic_adapter *);
 static int ibmvnic_send_crq(struct ibmvnic_adapter *, union ibmvnic_crq *);
-static int send_subcrq(struct ibmvnic_adapter *adapter, u64 remote_handle,
-  union sub_crq *sub_crq);
 static int send_subcrq_indirect(struct ibmvnic_adapter *, u64, u64, u64);
 static irqreturn_t ibmvnic_interrupt_rx(int irq, void *instance);
 static int enable_scrq_irq(struct ibmvnic_adapter *,
@@ -3629,38 +3627,6 @@ static void print_subcrq_error(struct device *dev, int 
rc, const char *func)
}
 }
 
-static int send_subcrq(struct ibmvnic_adapter *adapter, u64 remote_handle,
-  union sub_crq *sub_crq)
-{
-   unsigned int ua = adapter->vdev->unit_address;
-   struct device *dev = &adapter->vdev->dev;
-   u64 *u64_crq = (u64 *)sub_crq;
-   int rc;
-
-   netdev_dbg(adapter->netdev,
-  "Sending sCRQ %016lx: %016lx %016lx %016lx %016lx\n",
-  (unsigned long int)cpu_to_be64(remote_handle),
-  (unsigned long int)cpu_to_be64(u64_crq[0]),
-  (unsigned long int)cpu_to_be64(u64_crq[1]),
-  (unsigned long int)cpu_to_be64(u64_crq[2]),
-  (unsigned long int)cpu_to_be64(u64_crq[3]));
-
-   /* Make sure the hypervisor sees the complete request */
-   mb();
-
-   rc = plpar_hcall_norets(H_SEND_SUB_CRQ, ua,
-   cpu_to_be64(remote_handle),
-   cpu_to_be64(u64_crq[0]),
-   cpu_to_be64(u64_crq[1]),
-   cpu_to_be64(u64_crq[2]),
-   cpu_to_be64(u64_crq[3]));
-
-   if (rc)
-   print_subcrq_error(dev, rc, __func__);
-
-   return rc;
-}
-
 static int send_subcrq_indirect(struct ibmvnic_adapter *adapter,
u64 remote_handle, u64 ioba, u64 num_entries)
 {
-- 
2.26.2



[PATCH net-next v2 6/9] ibmvnic: Ensure that device queue memory is cache-line aligned

2020-11-18 Thread Thomas Falcon
From: "Dwip N. Banerjee" 

PCI bus slowdowns were observed on IBM VNIC devices as a result
of partial cache line writes and non-cache aligned full cache line writes.
Ensure that packet data buffers are cache-line aligned to avoid these
slowdowns.
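
ALIGN() simply rounds the buffer size up to the next multiple of the
cache line so buffers no longer straddle lines. A trivial sketch (the
1522-byte figure is only an example, not a value taken from the driver;
these systems typically have 128-byte L1 cache lines):

#include <linux/cache.h>
#include <linux/kernel.h>

static u64 aligned_buff_size(u64 buff_size)
{
	/* e.g. ALIGN(1522, 128) == 1536 */
	return ALIGN(buff_size, L1_CACHE_BYTES);
}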

Signed-off-by: Dwip N. Banerjee 
---
 drivers/net/ethernet/ibm/ibmvnic.c |  9 ++---
 drivers/net/ethernet/ibm/ibmvnic.h | 10 +-
 2 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c 
b/drivers/net/ethernet/ibm/ibmvnic.c
index e9b0cb6dfd9d..85df91c9861b 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -498,7 +498,7 @@ static int reset_rx_pools(struct ibmvnic_adapter *adapter)
 
if (rx_pool->buff_size != buff_size) {
free_long_term_buff(adapter, &rx_pool->long_term_buff);
-   rx_pool->buff_size = buff_size;
+   rx_pool->buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
rc = alloc_long_term_buff(adapter,
  &rx_pool->long_term_buff,
  rx_pool->size *
@@ -592,7 +592,7 @@ static int init_rx_pools(struct net_device *netdev)
 
rx_pool->size = adapter->req_rx_add_entries_per_subcrq;
rx_pool->index = i;
-   rx_pool->buff_size = buff_size;
+   rx_pool->buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
rx_pool->active = 1;
 
rx_pool->free_map = kcalloc(rx_pool->size, sizeof(int),
@@ -745,6 +745,7 @@ static int init_tx_pools(struct net_device *netdev)
 {
struct ibmvnic_adapter *adapter = netdev_priv(netdev);
int tx_subcrqs;
+   u64 buff_size;
int i, rc;
 
tx_subcrqs = adapter->num_active_tx_scrqs;
@@ -761,9 +762,11 @@ static int init_tx_pools(struct net_device *netdev)
adapter->num_active_tx_pools = tx_subcrqs;
 
for (i = 0; i < tx_subcrqs; i++) {
+   buff_size = adapter->req_mtu + VLAN_HLEN;
+   buff_size = ALIGN(buff_size, L1_CACHE_BYTES);
rc = init_one_tx_pool(netdev, &adapter->tx_pool[i],
  adapter->req_tx_entries_per_subcrq,
- adapter->req_mtu + VLAN_HLEN);
+ buff_size);
if (rc) {
release_tx_pools(adapter);
return rc;
diff --git a/drivers/net/ethernet/ibm/ibmvnic.h 
b/drivers/net/ethernet/ibm/ibmvnic.h
index 16d892c3db0f..9911d926dd7f 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.h
+++ b/drivers/net/ethernet/ibm/ibmvnic.h
@@ -883,7 +883,7 @@ struct ibmvnic_sub_crq_queue {
atomic_t used;
char name[32];
u64 handle;
-};
+} ____cacheline_aligned;
 
 struct ibmvnic_long_term_buff {
unsigned char *buff;
@@ -907,7 +907,7 @@ struct ibmvnic_tx_pool {
struct ibmvnic_long_term_buff long_term_buff;
int num_buffers;
int buf_size;
-};
+} ____cacheline_aligned;
 
 struct ibmvnic_rx_buff {
struct sk_buff *skb;
@@ -928,7 +928,7 @@ struct ibmvnic_rx_pool {
int next_alloc;
int active;
struct ibmvnic_long_term_buff long_term_buff;
-};
+} ____cacheline_aligned;
 
 struct ibmvnic_vpd {
unsigned char *buff;
@@ -1015,8 +1015,8 @@ struct ibmvnic_adapter {
atomic_t running_cap_crqs;
bool wait_capability;
 
-   struct ibmvnic_sub_crq_queue **tx_scrq;
-   struct ibmvnic_sub_crq_queue **rx_scrq;
+   struct ibmvnic_sub_crq_queue **tx_scrq ____cacheline_aligned;
+   struct ibmvnic_sub_crq_queue **rx_scrq ____cacheline_aligned;
 
/* rx structs */
struct napi_struct *napi;
-- 
2.26.2



[PATCH net-next v2 4/9] ibmvnic: Clean up TX code and TX buffer data structure

2020-11-18 Thread Thomas Falcon
Remove unused and superfluous code and members in
existing TX implementation and data structures.

Signed-off-by: Thomas Falcon 
---
 drivers/net/ethernet/ibm/ibmvnic.c | 31 +++---
 drivers/net/ethernet/ibm/ibmvnic.h |  8 
 2 files changed, 11 insertions(+), 28 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c 
b/drivers/net/ethernet/ibm/ibmvnic.c
index 650aaf100d65..2aace693559f 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -1496,17 +1496,18 @@ static int create_hdr_descs(u8 hdr_field, u8 *hdr_data, 
int len, int *hdr_len,
  * L2/L3/L4 packet header descriptors to be sent by send_subcrq_indirect.
  */
 
-static void build_hdr_descs_arr(struct ibmvnic_tx_buff *txbuff,
+static void build_hdr_descs_arr(struct sk_buff *skb,
+   union sub_crq *indir_arr,
int *num_entries, u8 hdr_field)
 {
int hdr_len[3] = {0, 0, 0};
+   u8 hdr_data[140] = {0};
int tot_len;
-   u8 *hdr_data = txbuff->hdr_data;
 
-   tot_len = build_hdr_data(hdr_field, txbuff->skb, hdr_len,
-txbuff->hdr_data);
+   tot_len = build_hdr_data(hdr_field, skb, hdr_len,
+hdr_data);
*num_entries += create_hdr_descs(hdr_field, hdr_data, tot_len, hdr_len,
-txbuff->indir_arr + 1);
+indir_arr + 1);
 }
 
 static int ibmvnic_xmit_workarounds(struct sk_buff *skb,
@@ -1612,6 +1613,7 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, 
struct net_device *netdev)
unsigned int tx_send_failed = 0;
netdev_tx_t ret = NETDEV_TX_OK;
unsigned int tx_map_failed = 0;
+   union sub_crq indir_arr[16];
unsigned int tx_dropped = 0;
unsigned int tx_packets = 0;
unsigned int tx_bytes = 0;
@@ -1696,11 +1698,8 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, 
struct net_device *netdev)
 
tx_buff = &tx_pool->tx_buff[index];
tx_buff->skb = skb;
-   tx_buff->data_dma[0] = data_dma_addr;
-   tx_buff->data_len[0] = skb->len;
tx_buff->index = index;
tx_buff->pool_index = queue_num;
-   tx_buff->last_frag = true;
 
memset(&tx_crq, 0, sizeof(tx_crq));
tx_crq.v1.first = IBMVNIC_CRQ_CMD;
@@ -1747,7 +1746,7 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, 
struct net_device *netdev)
}
 
if ((*hdrs >> 7) & 1)
-   build_hdr_descs_arr(tx_buff, &num_entries, *hdrs);
+   build_hdr_descs_arr(skb, indir_arr, &num_entries, *hdrs);
 
tx_crq.v1.n_crq_elem = num_entries;
tx_buff->num_entries = num_entries;
@@ -1758,8 +1757,8 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, 
struct net_device *netdev)
goto tx_flush_err;
}
 
-   tx_buff->indir_arr[0] = tx_crq;
-   memcpy(&ind_bufp->indir_arr[ind_bufp->index], tx_buff->indir_arr,
+   indir_arr[0] = tx_crq;
+   memcpy(&ind_bufp->indir_arr[ind_bufp->index], &indir_arr[0],
   num_entries * sizeof(struct ibmvnic_generic_scrq));
ind_bufp->index += num_entries;
if (__netdev_tx_sent_queue(txq, skb->len,
@@ -3185,7 +3184,7 @@ static int ibmvnic_complete_tx(struct ibmvnic_adapter 
*adapter,
struct netdev_queue *txq;
union sub_crq *next;
int index;
-   int i, j;
+   int i;
 
 restart_loop:
while (pending_scrq(adapter, scrq)) {
@@ -3210,14 +3209,6 @@ static int ibmvnic_complete_tx(struct ibmvnic_adapter 
*adapter,
}
 
txbuff = &tx_pool->tx_buff[index];
-
-   for (j = 0; j < IBMVNIC_MAX_FRAGS_PER_CRQ; j++) {
-   if (!txbuff->data_dma[j])
-   continue;
-
-   txbuff->data_dma[j] = 0;
-   }
-
num_packets++;
num_entries += txbuff->num_entries;
if (txbuff->skb) {
diff --git a/drivers/net/ethernet/ibm/ibmvnic.h 
b/drivers/net/ethernet/ibm/ibmvnic.h
index 4a63e9886719..16d892c3db0f 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.h
+++ b/drivers/net/ethernet/ibm/ibmvnic.h
@@ -226,8 +226,6 @@ struct ibmvnic_tx_comp_desc {
 #define IBMVNIC_TCP_CHKSUM 0x20
 #define IBMVNIC_UDP_CHKSUM 0x08
 
-#define IBMVNIC_MAX_FRAGS_PER_CRQ 3
-
 struct ibmvnic_tx_desc {
u8 first;
u8 type;
@@ -896,14 +894,8 @@ struct ibmvnic_long_term_buff {
 
 struct ibmvnic_tx_buff {
struct sk_buff *skb;
-   dma_addr_t data_dma[IBMVNIC_MAX_FRAGS_PER_CRQ];
-   unsigned int data_len[IBMVNIC_MAX_FRAGS_PER_CRQ];
int index;
int pool_index;
-   bool last_frag;
-   union sub_crq indir_arr[6];
-   u8 hdr_data[140];
-   dma_addr_t indir_dma;
int num_entries;
 };
 
-- 
2.26.2



[PATCH net-next v2 3/9] ibmvnic: Introduce xmit_more support using batched subCRQ hcalls

2020-11-18 Thread Thomas Falcon
Include support for the xmit_more feature utilizing the
H_SEND_SUB_CRQ_INDIRECT hypervisor call which allows the sending
of multiple subordinate Command Response Queue descriptors in one
hypervisor call via a DMA-mapped buffer. This update reduces hypervisor
calls and thus hypervisor call overhead per TX descriptor.
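
In the transmit path this boils down to queueing descriptors into the
DMA-mapped indirect buffer and flushing them with a single
H_SEND_SUB_CRQ_INDIRECT call once the buffer fills up or the stack
signals that no further frames are coming (netdev_xmit_more() returns
false). A rough single-descriptor sketch using the names introduced
earlier in the series (the helper is illustrative; the real code also
handles multi-descriptor frames and error unwinding):

static void queue_or_flush(struct ibmvnic_adapter *adapter,
			   struct ibmvnic_sub_crq_queue *tx_scrq,
			   union sub_crq *desc, bool xmit_more)
{
	struct ibmvnic_ind_xmit_queue *ind_bufp = &tx_scrq->ind_buf;

	ind_bufp->indir_arr[ind_bufp->index++] = *desc;

	/* Amortize one hcall over up to IBMVNIC_MAX_IND_DESCS descriptors. */
	if (ind_bufp->index == IBMVNIC_MAX_IND_DESCS || !xmit_more)
		ibmvnic_tx_scrq_flush(adapter, tx_scrq);
}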

Signed-off-by: Thomas Falcon 
---
 drivers/net/ethernet/ibm/ibmvnic.c | 204 -
 1 file changed, 139 insertions(+), 65 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c 
b/drivers/net/ethernet/ibm/ibmvnic.c
index 17ba6db6f5f9..650aaf100d65 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -1165,6 +1165,7 @@ static int __ibmvnic_open(struct net_device *netdev)
if (prev_state == VNIC_CLOSED)
enable_irq(adapter->tx_scrq[i]->irq);
enable_scrq_irq(adapter, adapter->tx_scrq[i]);
+   netdev_tx_reset_queue(netdev_get_tx_queue(netdev, i));
}
 
rc = set_link_state(adapter, IBMVNIC_LOGICAL_LNK_UP);
@@ -1523,16 +1524,93 @@ static int ibmvnic_xmit_workarounds(struct sk_buff *skb,
return 0;
 }
 
+static void ibmvnic_tx_scrq_clean_buffer(struct ibmvnic_adapter *adapter,
+struct ibmvnic_sub_crq_queue *tx_scrq)
+{
+   struct ibmvnic_ind_xmit_queue *ind_bufp;
+   struct ibmvnic_tx_buff *tx_buff;
+   struct ibmvnic_tx_pool *tx_pool;
+   union sub_crq tx_scrq_entry;
+   int queue_num;
+   int entries;
+   int index;
+   int i;
+
+   ind_bufp = &tx_scrq->ind_buf;
+   entries = (u64)ind_bufp->index;
+   queue_num = tx_scrq->pool_index;
+
+   for (i = entries - 1; i >= 0; --i) {
+   tx_scrq_entry = ind_bufp->indir_arr[i];
+   if (tx_scrq_entry.v1.type != IBMVNIC_TX_DESC)
+   continue;
+   index = be32_to_cpu(tx_scrq_entry.v1.correlator);
+   if (index & IBMVNIC_TSO_POOL_MASK) {
+   tx_pool = &adapter->tso_pool[queue_num];
+   index &= ~IBMVNIC_TSO_POOL_MASK;
+   } else {
+   tx_pool = &adapter->tx_pool[queue_num];
+   }
+   tx_pool->free_map[tx_pool->consumer_index] = index;
+   tx_pool->consumer_index = tx_pool->consumer_index == 0 ?
+ tx_pool->num_buffers - 1 :
+ tx_pool->consumer_index - 1;
+   tx_buff = &tx_pool->tx_buff[index];
+   adapter->netdev->stats.tx_packets--;
+   adapter->netdev->stats.tx_bytes -= tx_buff->skb->len;
+   adapter->tx_stats_buffers[queue_num].packets--;
+   adapter->tx_stats_buffers[queue_num].bytes -=
+   tx_buff->skb->len;
+   dev_kfree_skb_any(tx_buff->skb);
+   tx_buff->skb = NULL;
+   adapter->netdev->stats.tx_dropped++;
+   }
+   ind_bufp->index = 0;
+   if (atomic_sub_return(entries, &tx_scrq->used) <=
+   (adapter->req_tx_entries_per_subcrq / 2) &&
+   __netif_subqueue_stopped(adapter->netdev, queue_num)) {
+   netif_wake_subqueue(adapter->netdev, queue_num);
+   netdev_dbg(adapter->netdev, "Started queue %d\n",
+  queue_num);
+   }
+}
+
+static int ibmvnic_tx_scrq_flush(struct ibmvnic_adapter *adapter,
+struct ibmvnic_sub_crq_queue *tx_scrq)
+{
+   struct ibmvnic_ind_xmit_queue *ind_bufp;
+   u64 dma_addr;
+   u64 entries;
+   u64 handle;
+   int rc;
+
+   ind_bufp = &tx_scrq->ind_buf;
+   dma_addr = (u64)ind_bufp->indir_dma;
+   entries = (u64)ind_bufp->index;
+   handle = tx_scrq->handle;
+
+   if (!entries)
+   return 0;
+   rc = send_subcrq_indirect(adapter, handle, dma_addr, entries);
+   if (rc)
+   ibmvnic_tx_scrq_clean_buffer(adapter, tx_scrq);
+   else
+   ind_bufp->index = 0;
+   return 0;
+}
+
 static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
 {
struct ibmvnic_adapter *adapter = netdev_priv(netdev);
int queue_num = skb_get_queue_mapping(skb);
u8 *hdrs = (u8 *)&adapter->tx_rx_desc_req;
struct device *dev = &adapter->vdev->dev;
+   struct ibmvnic_ind_xmit_queue *ind_bufp;
struct ibmvnic_tx_buff *tx_buff = NULL;
struct ibmvnic_sub_crq_queue *tx_scrq;
struct ibmvnic_tx_pool *tx_pool;
unsigned int tx_send_failed = 0;
+   netdev_tx_t ret = NETDEV_TX_OK;
unsigned int tx_map_failed = 0;
unsigned int tx_dropped = 0;
unsigned int tx_packets = 0;
@@ -1546,8 +1624,10 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, 
struct net_device *netdev)
unsigned char *dst;
int index = 0;
u8 proto = 0;
-   u64 handle;
-   

[PATCH net-next v2 2/9] ibmvnic: Introduce batched RX buffer descriptor transmission

2020-11-18 Thread Thomas Falcon
Utilize the H_SEND_SUB_CRQ_INDIRECT hypervisor call to send
multiple RX buffer descriptors to the device in one hypervisor
call operation. This change will reduce the number of hypervisor
calls and thus hypervisor call overhead needed to transmit
RX buffer descriptors to the device.

Signed-off-by: Thomas Falcon 
---
 drivers/net/ethernet/ibm/ibmvnic.c | 57 +++---
 1 file changed, 37 insertions(+), 20 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c 
b/drivers/net/ethernet/ibm/ibmvnic.c
index 3884f8a683a7..17ba6db6f5f9 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -306,9 +306,11 @@ static void replenish_rx_pool(struct ibmvnic_adapter 
*adapter,
int count = pool->size - atomic_read(&pool->available);
u64 handle = adapter->rx_scrq[pool->index]->handle;
struct device *dev = &adapter->vdev->dev;
+   struct ibmvnic_ind_xmit_queue *ind_bufp;
+   struct ibmvnic_sub_crq_queue *rx_scrq;
+   union sub_crq *sub_crq;
int buffers_added = 0;
unsigned long lpar_rc;
-   union sub_crq sub_crq;
struct sk_buff *skb;
unsigned int offset;
dma_addr_t dma_addr;
@@ -320,6 +322,8 @@ static void replenish_rx_pool(struct ibmvnic_adapter 
*adapter,
if (!pool->active)
return;
 
+   rx_scrq = adapter->rx_scrq[pool->index];
+   ind_bufp = &rx_scrq->ind_buf;
for (i = 0; i < count; ++i) {
skb = alloc_skb(pool->buff_size, GFP_ATOMIC);
if (!skb) {
@@ -346,12 +350,13 @@ static void replenish_rx_pool(struct ibmvnic_adapter 
*adapter,
pool->rx_buff[index].pool_index = pool->index;
pool->rx_buff[index].size = pool->buff_size;
 
-   memset(&sub_crq, 0, sizeof(sub_crq));
-   sub_crq.rx_add.first = IBMVNIC_CRQ_CMD;
-   sub_crq.rx_add.correlator =
+   sub_crq = &ind_bufp->indir_arr[ind_bufp->index++];
+   memset(sub_crq, 0, sizeof(*sub_crq));
+   sub_crq->rx_add.first = IBMVNIC_CRQ_CMD;
+   sub_crq->rx_add.correlator =
cpu_to_be64((u64)&pool->rx_buff[index]);
-   sub_crq.rx_add.ioba = cpu_to_be32(dma_addr);
-   sub_crq.rx_add.map_id = pool->long_term_buff.map_id;
+   sub_crq->rx_add.ioba = cpu_to_be32(dma_addr);
+   sub_crq->rx_add.map_id = pool->long_term_buff.map_id;
 
/* The length field of the sCRQ is defined to be 24 bits so the
 * buffer size needs to be left shifted by a byte before it is
@@ -361,15 +366,20 @@ static void replenish_rx_pool(struct ibmvnic_adapter 
*adapter,
 #ifdef __LITTLE_ENDIAN__
shift = 8;
 #endif
-   sub_crq.rx_add.len = cpu_to_be32(pool->buff_size << shift);
-
-   lpar_rc = send_subcrq(adapter, handle, &sub_crq);
-   if (lpar_rc != H_SUCCESS)
-   goto failure;
-
-   buffers_added++;
-   adapter->replenish_add_buff_success++;
+   sub_crq->rx_add.len = cpu_to_be32(pool->buff_size << shift);
pool->next_free = (pool->next_free + 1) % pool->size;
+   if (ind_bufp->index == IBMVNIC_MAX_IND_DESCS ||
+   i == count - 1) {
+   lpar_rc =
+   send_subcrq_indirect(adapter, handle,
+(u64)ind_bufp->indir_dma,
+(u64)ind_bufp->index);
+   if (lpar_rc != H_SUCCESS)
+   goto failure;
+   buffers_added += ind_bufp->index;
+   adapter->replenish_add_buff_success += ind_bufp->index;
+   ind_bufp->index = 0;
+   }
}
atomic_add(buffers_added, &pool->available);
return;
@@ -377,13 +387,20 @@ static void replenish_rx_pool(struct ibmvnic_adapter 
*adapter,
 failure:
if (lpar_rc != H_PARAMETER && lpar_rc != H_CLOSED)
dev_err_ratelimited(dev, "rx: replenish packet buffer 
failed\n");
-   pool->free_map[pool->next_free] = index;
-   pool->rx_buff[index].skb = NULL;
-
-   dev_kfree_skb_any(skb);
-   adapter->replenish_add_buff_failure++;
-   atomic_add(buffers_added, &pool->available);
+   for (i = ind_bufp->index - 1; i >= 0; --i) {
+   struct ibmvnic_rx_buff *rx_buff;
 
+   pool->next_free = pool->next_free == 0 ?
+ pool->size - 1 : pool->next_free - 1;
+   sub_crq = &ind_bufp->indir_arr[i];
+   rx_buff = (struct ibmvnic_rx_buff *)
+   be64_to_cpu(sub_crq->rx_add.correlator);
+   index = (int)(rx_buff - pool->rx_buff);
+   pool->free_map[pool->next_free] = index;
+   dev_kfree_skb_any(pool->rx_buff[index].skb);
+   

[PATCH net-next v2 1/9] ibmvnic: Introduce indirect subordinate Command Response Queue buffer

2020-11-18 Thread Thomas Falcon
This patch introduces the infrastructure to send batched subordinate
Command Response Queue descriptors, which are used by the ibmvnic
driver to send TX frame and RX buffer descriptors.

Signed-off-by: Thomas Falcon 
---
 drivers/net/ethernet/ibm/ibmvnic.c | 23 +++
 drivers/net/ethernet/ibm/ibmvnic.h |  9 +
 2 files changed, 32 insertions(+)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c 
b/drivers/net/ethernet/ibm/ibmvnic.c
index da15913879f8..3884f8a683a7 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -2858,6 +2858,7 @@ static int reset_one_sub_crq_queue(struct ibmvnic_adapter 
*adapter,
memset(scrq->msgs, 0, 4 * PAGE_SIZE);
atomic_set(&scrq->used, 0);
scrq->cur = 0;
+   scrq->ind_buf.index = 0;
 
rc = h_reg_sub_crq(adapter->vdev->unit_address, scrq->msg_token,
   4 * PAGE_SIZE, &scrq->crq_num, &scrq->hw_irq);
@@ -2909,6 +2910,11 @@ static void release_sub_crq_queue(struct ibmvnic_adapter 
*adapter,
}
}
 
+   dma_free_coherent(dev,
+ IBMVNIC_IND_ARR_SZ,
+ scrq->ind_buf.indir_arr,
+ scrq->ind_buf.indir_dma);
+
dma_unmap_single(dev, scrq->msg_token, 4 * PAGE_SIZE,
 DMA_BIDIRECTIONAL);
free_pages((unsigned long)scrq->msgs, 2);
@@ -2955,6 +2961,17 @@ static struct ibmvnic_sub_crq_queue 
*init_sub_crq_queue(struct ibmvnic_adapter
 
scrq->adapter = adapter;
scrq->size = 4 * PAGE_SIZE / sizeof(*scrq->msgs);
+   scrq->ind_buf.index = 0;
+
+   scrq->ind_buf.indir_arr =
+   dma_alloc_coherent(dev,
+  IBMVNIC_IND_ARR_SZ,
+  &scrq->ind_buf.indir_dma,
+  GFP_KERNEL);
+
+   if (!scrq->ind_buf.indir_arr)
+   goto indir_failed;
+
spin_lock_init(&scrq->lock);
 
netdev_dbg(adapter->netdev,
@@ -2963,6 +2980,12 @@ static struct ibmvnic_sub_crq_queue 
*init_sub_crq_queue(struct ibmvnic_adapter
 
return scrq;
 
+indir_failed:
+   do {
+   rc = plpar_hcall_norets(H_FREE_SUB_CRQ,
+   adapter->vdev->unit_address,
+   scrq->crq_num);
+   } while (rc == H_BUSY || rc == H_IS_LONG_BUSY(rc));
 reg_failed:
dma_unmap_single(dev, scrq->msg_token, 4 * PAGE_SIZE,
 DMA_BIDIRECTIONAL);
diff --git a/drivers/net/ethernet/ibm/ibmvnic.h 
b/drivers/net/ethernet/ibm/ibmvnic.h
index 217dcc7ded70..4a63e9886719 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.h
+++ b/drivers/net/ethernet/ibm/ibmvnic.h
@@ -31,6 +31,8 @@
 #define IBMVNIC_BUFFS_PER_POOL 100
 #define IBMVNIC_MAX_QUEUES 16
 #define IBMVNIC_MAX_QUEUE_SZ   4096
+#define IBMVNIC_MAX_IND_DESCS  128
+#define IBMVNIC_IND_ARR_SZ (IBMVNIC_MAX_IND_DESCS * 32)
 
 #define IBMVNIC_TSO_BUF_SZ 65536
 #define IBMVNIC_TSO_BUFS   64
@@ -861,6 +863,12 @@ union sub_crq {
struct ibmvnic_rx_buff_add_desc rx_add;
 };
 
+struct ibmvnic_ind_xmit_queue {
+   union sub_crq *indir_arr;
+   dma_addr_t indir_dma;
+   int index;
+};
+
 struct ibmvnic_sub_crq_queue {
union sub_crq *msgs;
int size, cur;
@@ -873,6 +881,7 @@ struct ibmvnic_sub_crq_queue {
spinlock_t lock;
struct sk_buff *rx_skb_top;
struct ibmvnic_adapter *adapter;
+   struct ibmvnic_ind_xmit_queue ind_buf;
atomic_t used;
char name[32];
u64 handle;
-- 
2.26.2



[PATCH net-next v2 0/9] ibmvnic: Performance improvements and other updates

2020-11-18 Thread Thomas Falcon
The first three patches utilize a hypervisor call allowing multiple 
TX and RX buffer replenishment descriptors to be sent in one operation,
which significantly reduces hypervisor call overhead. The xmit_more
and Byte Queue Limit APIs are leveraged to provide this support
for TX descriptors.

The subsequent two patches remove superfluous code and members in
the TX completion handling function and the TX buffer structure, respectively,
and remove unused routines.

Finally, Dwip Banerjee provides four patches which ensure that device
queue memory is cache-line aligned, resolving slowdowns observed in PCI
traces, and which optimize the driver's NAPI polling function and RX
buffer replenishment.

This series provides significant performance improvements, allowing
the driver to fully utilize 100Gb NICs.

v2 updates:

1) Removed three patches from the original series which
   were bug fixes and thus better suited for the net tree,
   suggested by Jakub Kicinski.
2) Fixed error handling when initializing device queues,
   suggested by Jakub Kicinski.
3) Fixed bug where queued entries were not flushed after a
   dropped frame, also suggested by Jakub. Two functions,
   ibmvnic_tx_scrq_flush and its helper ibmvnic_tx_scrq_clean_buffer,
   were introduced to ensure that queued frames are either submitted
   to firmware or, if that is not successful, freed as dropped and
   associated data structures are updated with the new device queue state.

Dwip N. Banerjee (4):
  ibmvnic: Ensure that device queue memory is cache-line aligned
  ibmvnic: Correctly re-enable interrupts in NAPI polling routine
  ibmvnic: Use netdev_alloc_skb instead of alloc_skb to replenish RX
buffers
  ibmvnic: Do not replenish RX buffers after every polling loop

Thomas Falcon (5):
  ibmvnic: Introduce indirect subordinate Command Response Queue buffer
  ibmvnic: Introduce batched RX buffer descriptor transmission
  ibmvnic: Introduce xmit_more support using batched subCRQ hcalls
  ibmvnic: Clean up TX code and TX buffer data structure
  ibmvnic: Remove send_subcrq function

 drivers/net/ethernet/ibm/ibmvnic.c | 398 ++---
 drivers/net/ethernet/ibm/ibmvnic.h |  27 +-
 2 files changed, 256 insertions(+), 169 deletions(-)

-- 
2.26.2



[powerpc:fixes] BUILD SUCCESS cd81acc600a9684ea4b4d25a47900d38a3890eab

2020-11-18 Thread kernel test robot
 allyesconfig
nios2   defconfig
arc  allyesconfig
nds32 allnoconfig
c6x  allyesconfig
nds32   defconfig
nios2allyesconfig
cskydefconfig
alpha   defconfig
alphaallyesconfig
xtensa   allyesconfig
h8300allyesconfig
sh   allmodconfig
parisc  defconfig
s390 allyesconfig
parisc   allyesconfig
s390defconfig
i386 allyesconfig
sparcallyesconfig
sparc   defconfig
i386defconfig
mips allyesconfig
mips allmodconfig
powerpc  allyesconfig
powerpc  allmodconfig
powerpc   allnoconfig
x86_64   randconfig-a005-20201118
x86_64   randconfig-a003-20201118
x86_64   randconfig-a004-20201118
x86_64   randconfig-a002-20201118
x86_64   randconfig-a006-20201118
x86_64   randconfig-a001-20201118
i386 randconfig-a006-20201118
i386 randconfig-a005-20201118
i386 randconfig-a002-20201118
i386 randconfig-a001-20201118
i386 randconfig-a003-20201118
i386 randconfig-a004-20201118
i386 randconfig-a012-20201118
i386 randconfig-a014-20201118
i386 randconfig-a016-20201118
i386 randconfig-a011-20201118
i386 randconfig-a013-20201118
i386 randconfig-a015-20201118
riscvnommu_k210_defconfig
riscvallyesconfig
riscv allnoconfig
riscv   defconfig
riscv  rv32_defconfig
riscvallmodconfig
x86_64   rhel
x86_64   allyesconfig
x86_64rhel-7.6-kselftests
x86_64  defconfig
x86_64   rhel-8.3
x86_64  kexec

clang tested configs:
x86_64   randconfig-a015-20201118
x86_64   randconfig-a014-20201118
x86_64   randconfig-a011-20201118
x86_64   randconfig-a013-20201118
x86_64   randconfig-a016-20201118
x86_64   randconfig-a012-20201118

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org


[powerpc:next-test] BUILD SUCCESS a1062188413df416db21b02ffe4bd60228ad6240

2020-11-18 Thread kernel test robot
   allmodconfig
parisc  defconfig
s390 allyesconfig
parisc   allyesconfig
s390defconfig
i386 allyesconfig
sparcallyesconfig
sparc   defconfig
mips allyesconfig
mips allmodconfig
powerpc  allyesconfig
powerpc  allmodconfig
powerpc   allnoconfig
x86_64   randconfig-a005-20201118
x86_64   randconfig-a003-20201118
x86_64   randconfig-a004-20201118
x86_64   randconfig-a002-20201118
x86_64   randconfig-a006-20201118
x86_64   randconfig-a001-20201118
i386 randconfig-a006-20201118
i386 randconfig-a005-20201118
i386 randconfig-a002-20201118
i386 randconfig-a001-20201118
i386 randconfig-a003-20201118
i386 randconfig-a004-20201118
i386 randconfig-a006-20201119
i386 randconfig-a005-20201119
i386 randconfig-a002-20201119
i386 randconfig-a001-20201119
i386 randconfig-a003-20201119
i386 randconfig-a004-20201119
i386 randconfig-a012-20201118
i386 randconfig-a014-20201118
i386 randconfig-a016-20201118
i386 randconfig-a011-20201118
i386 randconfig-a013-20201118
i386 randconfig-a015-20201118
riscvnommu_k210_defconfig
riscvallyesconfig
riscv allnoconfig
riscv   defconfig
riscv  rv32_defconfig
riscvallmodconfig
x86_64   rhel
x86_64   allyesconfig
x86_64rhel-7.6-kselftests
x86_64  defconfig
x86_64   rhel-8.3
x86_64  kexec

clang tested configs:
x86_64   randconfig-a015-20201118
x86_64   randconfig-a014-20201118
x86_64   randconfig-a011-20201118
x86_64   randconfig-a013-20201118
x86_64   randconfig-a016-20201118
x86_64   randconfig-a012-20201118

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org


Re: [PATCH v2 0/3] PPC: Fix -Wimplicit-fallthrough for clang

2020-11-18 Thread Gustavo A. R. Silva
Nick,

On Tue, Nov 17, 2020 at 04:07:48PM -0800, Nick Desaulniers wrote:
> While cleaning up the last few -Wimplicit-fallthrough warnings in tree
> for Clang, I noticed
> commit 6a9dc5fd6170d ("lib: Revert use of fallthrough pseudo-keyword in lib/")
> which seemed to undo a bunch of fixes in lib/ due to breakage in
> arch/powerpc/boot/ not including compiler_types.h.  We don't need
> compiler_types.h for the definition of `fallthrough`, simply
> compiler_attributes.h.  Include that, revert the revert to lib/, and fix
> the last remaining cases I observed for powernv_defconfig.

I've added the series to my -next tree, together with Miguel's
suggestions.

Thanks for the Acks and comments, Michael.

--
Gustavo