[PATCH 4.9 41/75] net: bridge: fix early call to br_stp_change_bridge_id and plug newlink leaks

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Nikolay Aleksandrov 


[ Upstream commit 84aeb437ab98a2bce3d4b2111c79723aedfceb33 ]

The early call to br_stp_change_bridge_id in bridge's newlink can cause
a memory leak if an error occurs during the newlink, because the fdb
entries are not cleaned up if a different lladdr was specified. Another
minor issue is that it generates fdb notifications with ifindex = 0. A
further, unrelated memory leak is the bridge sysfs entries, which get
added on the NETDEV_REGISTER event but are not cleaned up in the
newlink error path. To remove this special case, the call to
br_stp_change_bridge_id is done after netdev register and we clean up
the bridge on changelink error via br_dev_delete to plug all leaks.

This patch makes netlink bridge destruction on newlink error the same as
dellink and ioctl del which is necessary since at that point we have a
fully initialized bridge device.
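
The idea can be sketched in plain userspace C (hypothetical names, not the kernel API): once the device is fully registered, a later newlink failure must go through the same full teardown as dellink/ioctl del, instead of a bare unregister that leaks per-device state such as fdb entries and sysfs files.

```c
#include <assert.h>

struct fake_bridge {
	int registered;
	int fdb_entries;	/* state that registration creates */
};

static int live_resources;	/* counts live allocations, for demonstration */

static int fake_register(struct fake_bridge *br)
{
	br->registered = 1;
	br->fdb_entries = 1;
	live_resources += 2;
	return 0;
}

/* full teardown, in the spirit of br_dev_delete(): cleans *all* state */
static void fake_dev_delete(struct fake_bridge *br)
{
	live_resources -= br->registered + br->fdb_entries;
	br->registered = 0;
	br->fdb_entries = 0;
}

static int fake_changelink(struct fake_bridge *br)
{
	(void)br;
	return -22;	/* simulate -EINVAL from changelink */
}

int fake_newlink(struct fake_bridge *br)
{
	int err = fake_register(br);

	if (err)
		return err;
	err = fake_changelink(br);
	if (err)
		fake_dev_delete(br);	/* full delete, not a partial unregister */
	return err;
}
```

With the old ordering, only the unregister half ran on error, so `live_resources` would stay non-zero, which is exactly the `bridge_fdb_cache` slab warning shown below.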

To reproduce the issue:
$ ip l add br0 address 00:11:22:33:44:55 type bridge group_fwd_mask 1
RTNETLINK answers: Invalid argument

$ rmmod bridge
[ 1822.142525] =============================================================================
[ 1822.143640] BUG bridge_fdb_cache (Tainted: G   O): Objects remaining in bridge_fdb_cache on __kmem_cache_shutdown()
[ 1822.144821] -----------------------------------------------------------------------------

[ 1822.145990] Disabling lock debugging due to kernel taint
[ 1822.146732] INFO: Slab 0x92a844b2 objects=32 used=2 fp=0xfef011b0 flags=0x1800100
[ 1822.147700] CPU: 2 PID: 13584 Comm: rmmod Tainted: GB  O 4.15.0-rc2+ #87
[ 1822.148578] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 1822.150008] Call Trace:
[ 1822.150510]  dump_stack+0x78/0xa9
[ 1822.151156]  slab_err+0xb1/0xd3
[ 1822.151834]  ? __kmalloc+0x1bb/0x1ce
[ 1822.152546]  __kmem_cache_shutdown+0x151/0x28b
[ 1822.153395]  shutdown_cache+0x13/0x144
[ 1822.154126]  kmem_cache_destroy+0x1c0/0x1fb
[ 1822.154669]  SyS_delete_module+0x194/0x244
[ 1822.155199]  ? trace_hardirqs_on_thunk+0x1a/0x1c
[ 1822.155773]  entry_SYSCALL_64_fastpath+0x23/0x9a
[ 1822.156343] RIP: 0033:0x7f929bd38b17
[ 1822.156859] RSP: 002b:7ffd160e9a98 EFLAGS: 0202 ORIG_RAX: 00b0
[ 1822.157728] RAX: ffda RBX: 5578316ba090 RCX: 7f929bd38b17
[ 1822.158422] RDX: 7f929bd9ec60 RSI: 0800 RDI: 5578316ba0f0
[ 1822.159114] RBP: 0003 R08: 7f929bff5f20 R09: 7ffd160e8a11
[ 1822.159808] R10: 7ffd160e9860 R11: 0202 R12: 7ffd160e8a80
[ 1822.160513] R13:  R14:  R15: 5578316ba090
[ 1822.161278] INFO: Object 0x7645de29 @offset=0
[ 1822.161666] INFO: Object 0xd5df2ab5 @offset=128

Fixes: 30313a3d5794 ("bridge: Handle IFLA_ADDRESS correctly when creating bridge device")
Fixes: 5b8d5429daa0 ("bridge: netlink: register netdevice before executing changelink")
Signed-off-by: Nikolay Aleksandrov 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/bridge/br_netlink.c |   11 ++-
 1 file changed, 6 insertions(+), 5 deletions(-)

--- a/net/bridge/br_netlink.c
+++ b/net/bridge/br_netlink.c
@@ -1092,19 +1092,20 @@ static int br_dev_newlink(struct net *sr
struct net_bridge *br = netdev_priv(dev);
int err;
 
+   err = register_netdevice(dev);
+   if (err)
+   return err;
+
if (tb[IFLA_ADDRESS]) {
spin_lock_bh(&br->lock);
br_stp_change_bridge_id(br, nla_data(tb[IFLA_ADDRESS]));
spin_unlock_bh(&br->lock);
}
 
-   err = register_netdevice(dev);
-   if (err)
-   return err;
-
err = br_changelink(dev, tb, data);
if (err)
-   unregister_netdevice(dev);
+   br_dev_delete(dev, NULL);
+
return err;
 }
 




[PATCH 4.9 40/75] ipv4: Fix use-after-free when flushing FIB tables

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Ido Schimmel 


[ Upstream commit b4681c2829e24943aadd1a7bb3a30d41d0a20050 ]

Since commit 0ddcf43d5d4a ("ipv4: FIB Local/MAIN table collapse") the
local table uses the same trie allocated for the main table when custom
rules are not in use.

When a net namespace is dismantled, the main table is flushed and freed
(via an RCU callback) before the local table. In case the callback is
invoked before the local table is iterated, a use-after-free can occur.

Fix this by iterating over the FIB tables in reverse order, so that the
main table is always freed after the local table.
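
The ordering constraint can be sketched in userspace C (illustrative names only): when one table (the "local" table, ID 255) holds references into another (the "main" table, ID 254), teardown must walk the buckets in reverse so the referenced table outlives its user.

```c
#include <assert.h>
#include <string.h>

#define TBL_COUNT 256

struct tbl {
	struct tbl *refs;	/* table whose data this one references */
	int freed;
};

static int use_after_free;

static void tbl_flush(struct tbl *t)
{
	/* touching a referenced table after it was freed is the bug */
	if (t->refs && t->refs->freed)
		use_after_free = 1;
	t->freed = 1;
}

void net_exit(struct tbl *tables, int reverse)
{
	int i;

	if (reverse)
		for (i = TBL_COUNT - 1; i >= 0; i--)
			tbl_flush(&tables[i]);
	else
		for (i = 0; i < TBL_COUNT; i++)
			tbl_flush(&tables[i]);
}
```

Forward iteration frees table 254 before table 255 gets to drop its references into it; reverse iteration avoids that window.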

v3: Reworded comment according to Alex's suggestion.
v2: Add a comment to make the fix more explicit per Dave's and Alex's
feedback.

Fixes: 0ddcf43d5d4a ("ipv4: FIB Local/MAIN table collapse")
Signed-off-by: Ido Schimmel 
Reported-by: Fengguang Wu 
Acked-by: Alexander Duyck 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/ipv4/fib_frontend.c |9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

--- a/net/ipv4/fib_frontend.c
+++ b/net/ipv4/fib_frontend.c
@@ -1253,14 +1253,19 @@ fail:
 
 static void ip_fib_net_exit(struct net *net)
 {
-   unsigned int i;
+   int i;
 
rtnl_lock();
 #ifdef CONFIG_IP_MULTIPLE_TABLES
RCU_INIT_POINTER(net->ipv4.fib_main, NULL);
RCU_INIT_POINTER(net->ipv4.fib_default, NULL);
 #endif
-   for (i = 0; i < FIB_TABLE_HASHSZ; i++) {
+   /* Destroy the tables in reverse order to guarantee that the
+* local table, ID 255, is destroyed before the main table, ID
+* 254. This is necessary as the local table may contain
+* references to data contained in the main table.
+*/
+   for (i = FIB_TABLE_HASHSZ - 1; i >= 0; i--) {
struct hlist_head *head = &net->ipv4.fib_table_hash[i];
struct hlist_node *tmp;
struct fib_table *tb;




[PATCH 4.9 49/75] net/mlx5e: Fix possible deadlock of VXLAN lock

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Gal Pressman 


[ Upstream commit 6323514116404cc651df1b7fffa1311ddf8ce647 ]

mlx5e_vxlan_lookup_port is called both from mlx5e_add_vxlan_port (user
context) and mlx5e_features_check (softirq), but the lock acquired does
not disable bottom half and might result in deadlock. Fix it by simply
replacing spin_lock() with spin_lock_bh().
While at it, replace all unnecessary spin_lock_irq() calls with spin_lock_bh().

lockdep's WARNING: inconsistent lock state
[  654.028136] inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
[  654.028229] swapper/5/0 [HC0[0]:SC1[9]:HE1:SE0] takes:
[  654.028321]  (&(&vxlan_db->lock)->rlock){+.?.}, at: [] mlx5e_vxlan_lookup_port+0x1e/0x50 [mlx5_core]
[  654.028528] {SOFTIRQ-ON-W} state was registered at:
[  654.028607]   _raw_spin_lock+0x3c/0x70
[  654.028689]   mlx5e_vxlan_lookup_port+0x1e/0x50 [mlx5_core]
[  654.028794]   mlx5e_vxlan_add_port+0x2e/0x120 [mlx5_core]
[  654.028878]   process_one_work+0x1e9/0x640
[  654.028942]   worker_thread+0x4a/0x3f0
[  654.029002]   kthread+0x141/0x180
[  654.029056]   ret_from_fork+0x24/0x30
[  654.029114] irq event stamp: 579088
[  654.029174] hardirqs last  enabled at (579088): [] ip6_finish_output2+0x49a/0x8c0
[  654.029309] hardirqs last disabled at (579087): [] ip6_finish_output2+0x44e/0x8c0
[  654.029446] softirqs last  enabled at (579030): [] irq_enter+0x6d/0x80
[  654.029567] softirqs last disabled at (579031): [] irq_exit+0xb5/0xc0
[  654.029684] other info that might help us debug this:
[  654.029781]  Possible unsafe locking scenario:

[  654.029868]        CPU0
[  654.029908]        ----
[  654.029947]   lock(&(&vxlan_db->lock)->rlock);
[  654.030045]   <Interrupt>
[  654.030090]     lock(&(&vxlan_db->lock)->rlock);
[  654.030162]
 *** DEADLOCK ***

Fixes: b3f63c3d5e2c ("net/mlx5e: Add netdev support for VXLAN tunneling")
Signed-off-by: Gal Pressman 
Signed-off-by: Saeed Mahameed 
Signed-off-by: Greg Kroah-Hartman 
---
 drivers/net/ethernet/mellanox/mlx5/core/vxlan.c |   20 ++--
 1 file changed, 10 insertions(+), 10 deletions(-)

--- a/drivers/net/ethernet/mellanox/mlx5/core/vxlan.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/vxlan.c
@@ -71,9 +71,9 @@ struct mlx5e_vxlan *mlx5e_vxlan_lookup_p
struct mlx5e_vxlan_db *vxlan_db = &priv->vxlan;
struct mlx5e_vxlan *vxlan;
 
-   spin_lock(&vxlan_db->lock);
+   spin_lock_bh(&vxlan_db->lock);
vxlan = radix_tree_lookup(&vxlan_db->tree, port);
-   spin_unlock(&vxlan_db->lock);
+   spin_unlock_bh(&vxlan_db->lock);
 
return vxlan;
 }
@@ -100,9 +100,9 @@ static void mlx5e_vxlan_add_port(struct
 
vxlan->udp_port = port;
 
-   spin_lock_irq(&vxlan_db->lock);
+   spin_lock_bh(&vxlan_db->lock);
err = radix_tree_insert(&vxlan_db->tree, vxlan->udp_port, vxlan);
-   spin_unlock_irq(&vxlan_db->lock);
+   spin_unlock_bh(&vxlan_db->lock);
if (err)
goto err_free;
 
@@ -121,9 +121,9 @@ static void __mlx5e_vxlan_core_del_port(
struct mlx5e_vxlan_db *vxlan_db = &priv->vxlan;
struct mlx5e_vxlan *vxlan;
 
-   spin_lock_irq(&vxlan_db->lock);
+   spin_lock_bh(&vxlan_db->lock);
vxlan = radix_tree_delete(&vxlan_db->tree, port);
-   spin_unlock_irq(&vxlan_db->lock);
+   spin_unlock_bh(&vxlan_db->lock);
 
if (!vxlan)
return;
@@ -171,12 +171,12 @@ void mlx5e_vxlan_cleanup(struct mlx5e_pr
struct mlx5e_vxlan *vxlan;
unsigned int port = 0;
 
-   spin_lock_irq(&vxlan_db->lock);
+   spin_lock_bh(&vxlan_db->lock);
while (radix_tree_gang_lookup(&vxlan_db->tree, (void **)&vxlan, port, 1)) {
port = vxlan->udp_port;
-   spin_unlock_irq(&vxlan_db->lock);
+   spin_unlock_bh(&vxlan_db->lock);
__mlx5e_vxlan_core_del_port(priv, (u16)port);
-   spin_lock_irq(&vxlan_db->lock);
+   spin_lock_bh(&vxlan_db->lock);
}
-   spin_unlock_irq(&vxlan_db->lock);
+   spin_unlock_bh(&vxlan_db->lock);
 }




[PATCH 4.9 48/75] net/mlx5e: Fix features check of IPv6 traffic

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Gal Pressman 


[ Upstream commit 2989ad1ec03021ee6d2193c35414f1d970a243de ]

The assumption that the next header field contains the transport
protocol is wrong for IPv6 packets with extension headers.
Instead, we should look at the inner-most next header field in the buffer.
This will fix TSO offload for tunnels over IPv6 with extension headers.
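
A simplified userspace sketch of the walk that ipv6_find_hdr() performs (the protocol numbers are the real IANA values; the parser is a sketch that assumes well-formed packets and handles only the generic extension-header layout, not fragment or AH headers):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define NEXTHDR_HOP     0	/* hop-by-hop options */
#define NEXTHDR_ROUTING 43
#define NEXTHDR_DEST    60
#define NEXTHDR_UDP     17

static int is_ext_hdr(uint8_t nh)
{
	return nh == NEXTHDR_HOP || nh == NEXTHDR_ROUTING || nh == NEXTHDR_DEST;
}

/* pkt points at the byte right after the 40-byte fixed IPv6 header;
 * first_nh is the fixed header's nexthdr field. Returns the inner-most
 * next-header value, i.e. the actual transport protocol. */
uint8_t find_transport_proto(const uint8_t *pkt, size_t len, uint8_t first_nh)
{
	uint8_t nh = first_nh;
	size_t off = 0;

	while (is_ext_hdr(nh) && off + 2 <= len) {
		uint8_t hdrlen = pkt[off + 1];	/* in 8-octet units, minus 1 */

		nh = pkt[off];			/* next-header field */
		off += (size_t)(hdrlen + 1) * 8;
	}
	return nh;
}
```

Reading only the fixed header's nexthdr (the old code) would report 60 (destination options) instead of 17 (UDP) for such a packet, so the feature check misclassified the flow.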

Performance testing: 19.25x improvement, cool!
Measuring bandwidth of 16 threads TCP traffic over IPv6 GRE tap.
CPU: Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz
NIC: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
TSO: Enabled
Before: 4,926.24  Mbps
Now   : 94,827.91 Mbps

Fixes: b3f63c3d5e2c ("net/mlx5e: Add netdev support for VXLAN tunneling")
Signed-off-by: Gal Pressman 
Signed-off-by: Saeed Mahameed 
Signed-off-by: Greg Kroah-Hartman 
---
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c |3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -3038,6 +3038,7 @@ static netdev_features_t mlx5e_vxlan_fea
struct sk_buff *skb,
netdev_features_t features)
 {
+   unsigned int offset = 0;
struct udphdr *udph;
u16 proto;
u16 port = 0;
@@ -3047,7 +3048,7 @@ static netdev_features_t mlx5e_vxlan_fea
proto = ip_hdr(skb)->protocol;
break;
case htons(ETH_P_IPV6):
-   proto = ipv6_hdr(skb)->nexthdr;
+   proto = ipv6_find_hdr(skb, &offset, -1, NULL, NULL);
break;
default:
goto out;




[PATCH 4.9 47/75] net/mlx5: Fix rate limit packet pacing naming and struct

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Eran Ben Elisha 


[ Upstream commit 37e92a9d4fe38dc3e7308913575983a6a088c8d4 ]

In mlx5_ifc, the struct size was not complete, and thus the driver was
sending garbage after the last defined field. Fix it by adding a
reserved field to complete the struct size.
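
The bug class is easy to reproduce in plain C (all sizes below are made up for illustration, not the mlx5 layout): if the C struct describing a firmware command is shorter than the interface defines, the driver transmits whatever bytes happen to follow it; an explicit reserved tail makes sizeof() match the wire size.

```c
#include <assert.h>
#include <stdint.h>

#define WIRE_SIZE 16	/* hypothetical firmware-defined command size */

struct cmd_short {	/* incomplete: misses the reserved tail */
	uint32_t opcode;
	uint32_t rate_limit;
};

struct cmd_full {	/* complete: reserved field pads to the spec size */
	uint32_t opcode;
	uint32_t rate_limit;
	uint8_t  reserved[8];	/* hypothetical: fills out the defined layout */
};
```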

In addition, rename all set_rate_limit to set_pp_rate_limit to be
compliant with the Firmware <-> Driver definition.

Fixes: 7486216b3a0b ("{net,IB}/mlx5: mlx5_ifc updates")
Fixes: 1466cc5b23d1 ("net/mlx5: Rate limit tables support")
Signed-off-by: Eran Ben Elisha 
Signed-off-by: Saeed Mahameed 
Signed-off-by: Greg Kroah-Hartman 
---
 drivers/net/ethernet/mellanox/mlx5/core/cmd.c |4 ++--
 drivers/net/ethernet/mellanox/mlx5/core/rl.c  |   22 +++---
 include/linux/mlx5/mlx5_ifc.h |8 +---
 3 files changed, 18 insertions(+), 16 deletions(-)

--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
@@ -367,7 +367,7 @@ static int mlx5_internal_err_ret_value(s
case MLX5_CMD_OP_QUERY_VPORT_COUNTER:
case MLX5_CMD_OP_ALLOC_Q_COUNTER:
case MLX5_CMD_OP_QUERY_Q_COUNTER:
-   case MLX5_CMD_OP_SET_RATE_LIMIT:
+   case MLX5_CMD_OP_SET_PP_RATE_LIMIT:
case MLX5_CMD_OP_QUERY_RATE_LIMIT:
case MLX5_CMD_OP_ALLOC_PD:
case MLX5_CMD_OP_ALLOC_UAR:
@@ -502,7 +502,7 @@ const char *mlx5_command_str(int command
MLX5_COMMAND_STR_CASE(ALLOC_Q_COUNTER);
MLX5_COMMAND_STR_CASE(DEALLOC_Q_COUNTER);
MLX5_COMMAND_STR_CASE(QUERY_Q_COUNTER);
-   MLX5_COMMAND_STR_CASE(SET_RATE_LIMIT);
+   MLX5_COMMAND_STR_CASE(SET_PP_RATE_LIMIT);
MLX5_COMMAND_STR_CASE(QUERY_RATE_LIMIT);
MLX5_COMMAND_STR_CASE(ALLOC_PD);
MLX5_COMMAND_STR_CASE(DEALLOC_PD);
--- a/drivers/net/ethernet/mellanox/mlx5/core/rl.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/rl.c
@@ -60,16 +60,16 @@ static struct mlx5_rl_entry *find_rl_ent
return ret_entry;
 }
 
-static int mlx5_set_rate_limit_cmd(struct mlx5_core_dev *dev,
+static int mlx5_set_pp_rate_limit_cmd(struct mlx5_core_dev *dev,
   u32 rate, u16 index)
 {
-   u32 in[MLX5_ST_SZ_DW(set_rate_limit_in)]   = {0};
-   u32 out[MLX5_ST_SZ_DW(set_rate_limit_out)] = {0};
+   u32 in[MLX5_ST_SZ_DW(set_pp_rate_limit_in)]   = {0};
+   u32 out[MLX5_ST_SZ_DW(set_pp_rate_limit_out)] = {0};
 
-   MLX5_SET(set_rate_limit_in, in, opcode,
-MLX5_CMD_OP_SET_RATE_LIMIT);
-   MLX5_SET(set_rate_limit_in, in, rate_limit_index, index);
-   MLX5_SET(set_rate_limit_in, in, rate_limit, rate);
+   MLX5_SET(set_pp_rate_limit_in, in, opcode,
+MLX5_CMD_OP_SET_PP_RATE_LIMIT);
+   MLX5_SET(set_pp_rate_limit_in, in, rate_limit_index, index);
+   MLX5_SET(set_pp_rate_limit_in, in, rate_limit, rate);
return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
 }
 
@@ -108,7 +108,7 @@ int mlx5_rl_add_rate(struct mlx5_core_de
entry->refcount++;
} else {
/* new rate limit */
-   err = mlx5_set_rate_limit_cmd(dev, rate, entry->index);
+   err = mlx5_set_pp_rate_limit_cmd(dev, rate, entry->index);
if (err) {
mlx5_core_err(dev, "Failed configuring rate: %u (%d)\n",
  rate, err);
@@ -144,7 +144,7 @@ void mlx5_rl_remove_rate(struct mlx5_cor
entry->refcount--;
if (!entry->refcount) {
/* need to remove rate */
-   mlx5_set_rate_limit_cmd(dev, 0, entry->index);
+   mlx5_set_pp_rate_limit_cmd(dev, 0, entry->index);
entry->rate = 0;
}
 
@@ -197,8 +197,8 @@ void mlx5_cleanup_rl_table(struct mlx5_c
/* Clear all configured rates */
for (i = 0; i < table->max_size; i++)
if (table->rl_entry[i].rate)
-   mlx5_set_rate_limit_cmd(dev, 0,
-   table->rl_entry[i].index);
+   mlx5_set_pp_rate_limit_cmd(dev, 0,
+  table->rl_entry[i].index);
 
kfree(dev->priv.rl_table.rl_entry);
 }
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -143,7 +143,7 @@ enum {
MLX5_CMD_OP_ALLOC_Q_COUNTER   = 0x771,
MLX5_CMD_OP_DEALLOC_Q_COUNTER = 0x772,
MLX5_CMD_OP_QUERY_Q_COUNTER   = 0x773,
-   MLX5_CMD_OP_SET_RATE_LIMIT= 0x780,
+   MLX5_CMD_OP_SET_PP_RATE_LIMIT = 0x780,
MLX5_CMD_OP_QUERY_RATE_LIMIT  = 0x781,
MLX5_CMD_OP_ALLOC_PD  = 0x800,
MLX5_CMD_OP_DEALLOC_PD= 0x801,
@@ -6689,7 +6689,7 @@ struct mlx5_ifc_add_vxlan_udp_dport_in_b
u8 vxlan_udp_port[0x10];
 

[PATCH 4.9 56/75] s390/qeth: update takeover IPs after configuration change

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Julian Wiedmann 


[ Upstream commit 02f510f326501470348a5df341e8232c3497 ]

Any modification to the takeover IP-ranges requires that we re-evaluate
which IP addresses are takeover-eligible. Otherwise we might do takeover
for some addresses when we no longer should, or vice-versa.
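
The rule the qeth takeover fixes in this series converge on can be sketched in userspace C (all names illustrative): the TAKEOVER flag is derived state, so every mutation of the ipato configuration re-derives it in one place for all NORMAL addresses, instead of patching flags ad hoc at each call site.

```c
#include <assert.h>

enum addr_type { ADDR_NORMAL, ADDR_RXIP };

struct addr {
	enum addr_type type;
	int covered;		/* would the ipato ranges match this address? */
	int takeover_flag;	/* derived state */
};

struct cfg {
	int enabled;
};

/* single derivation point, like qeth_l3_update_ipato() */
void update_takeover(struct cfg *cfg, struct addr *addrs, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		if (addrs[i].type != ADDR_NORMAL)
			continue;	/* RXIP/VIPA flags are never touched here */
		addrs[i].takeover_flag = cfg->enabled && addrs[i].covered;
	}
}

void set_enabled(struct cfg *cfg, struct addr *addrs, int n, int enable)
{
	cfg->enabled = enable;
	update_takeover(cfg, addrs, n);	/* any config change => re-derive */
}
```

The same re-derivation runs after adding or deleting a takeover range, which is what this patch adds.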

Signed-off-by: Julian Wiedmann 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 drivers/s390/net/qeth_core.h  |4 +-
 drivers/s390/net/qeth_core_main.c |4 +-
 drivers/s390/net/qeth_l3.h|2 -
 drivers/s390/net/qeth_l3_main.c   |   31 --
 drivers/s390/net/qeth_l3_sys.c|   63 --
 5 files changed, 67 insertions(+), 37 deletions(-)

--- a/drivers/s390/net/qeth_core.h
+++ b/drivers/s390/net/qeth_core.h
@@ -577,8 +577,8 @@ enum qeth_cq {
 
 struct qeth_ipato {
bool enabled;
-   int invert4;
-   int invert6;
+   bool invert4;
+   bool invert6;
struct list_head entries;
 };
 
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -1476,8 +1476,8 @@ static int qeth_setup_card(struct qeth_c
/* IP address takeover */
INIT_LIST_HEAD(&card->ipato.entries);
card->ipato.enabled = false;
-   card->ipato.invert4 = 0;
-   card->ipato.invert6 = 0;
+   card->ipato.invert4 = false;
+   card->ipato.invert6 = false;
/* init QDIO stuff */
qeth_init_qdio_info(card);
INIT_DELAYED_WORK(&card->buffer_reclaim_work, qeth_buffer_reclaim_work);
--- a/drivers/s390/net/qeth_l3.h
+++ b/drivers/s390/net/qeth_l3.h
@@ -80,7 +80,7 @@ void qeth_l3_del_vipa(struct qeth_card *
 int qeth_l3_add_rxip(struct qeth_card *, enum qeth_prot_versions, const u8 *);
 void qeth_l3_del_rxip(struct qeth_card *card, enum qeth_prot_versions,
const u8 *);
-int qeth_l3_is_addr_covered_by_ipato(struct qeth_card *, struct qeth_ipaddr *);
+void qeth_l3_update_ipato(struct qeth_card *card);
 struct qeth_ipaddr *qeth_l3_get_addr_buffer(enum qeth_prot_versions);
 int qeth_l3_add_ip(struct qeth_card *, struct qeth_ipaddr *);
 int qeth_l3_delete_ip(struct qeth_card *, struct qeth_ipaddr *);
--- a/drivers/s390/net/qeth_l3_main.c
+++ b/drivers/s390/net/qeth_l3_main.c
@@ -168,8 +168,8 @@ static void qeth_l3_convert_addr_to_bits
}
 }
 
-int qeth_l3_is_addr_covered_by_ipato(struct qeth_card *card,
-   struct qeth_ipaddr *addr)
+static bool qeth_l3_is_addr_covered_by_ipato(struct qeth_card *card,
+struct qeth_ipaddr *addr)
 {
struct qeth_ipato_entry *ipatoe;
u8 addr_bits[128] = {0, };
@@ -608,6 +608,27 @@ int qeth_l3_setrouting_v6(struct qeth_ca
 /*
  * IP address takeover related functions
  */
+
+/**
+ * qeth_l3_update_ipato() - Update 'takeover' property, for all NORMAL IPs.
+ *
+ * Caller must hold ip_lock.
+ */
+void qeth_l3_update_ipato(struct qeth_card *card)
+{
+   struct qeth_ipaddr *addr;
+   unsigned int i;
+
+   hash_for_each(card->ip_htable, i, addr, hnode) {
+   if (addr->type != QETH_IP_TYPE_NORMAL)
+   continue;
+   if (qeth_l3_is_addr_covered_by_ipato(card, addr))
+   addr->set_flags |= QETH_IPA_SETIP_TAKEOVER_FLAG;
+   else
+   addr->set_flags &= ~QETH_IPA_SETIP_TAKEOVER_FLAG;
+   }
+}
+
 static void qeth_l3_clear_ipato_list(struct qeth_card *card)
 {
struct qeth_ipato_entry *ipatoe, *tmp;
@@ -619,6 +640,7 @@ static void qeth_l3_clear_ipato_list(str
kfree(ipatoe);
}
 
+   qeth_l3_update_ipato(card);
spin_unlock_bh(&card->ip_lock);
 }
 
@@ -643,8 +665,10 @@ int qeth_l3_add_ipato_entry(struct qeth_
}
}
 
-   if (!rc)
+   if (!rc) {
list_add_tail(&ipatoe->entry, &card->ipato.entries);
+   qeth_l3_update_ipato(card);
+   }
 
spin_unlock_bh(&card->ip_lock);
 
@@ -667,6 +691,7 @@ void qeth_l3_del_ipato_entry(struct qeth
(proto == QETH_PROT_IPV4)? 4:16) &&
(ipatoe->mask_bits == mask_bits)) {
list_del(&ipatoe->entry);
+   qeth_l3_update_ipato(card);
kfree(ipatoe);
}
}
--- a/drivers/s390/net/qeth_l3_sys.c
+++ b/drivers/s390/net/qeth_l3_sys.c
@@ -372,9 +372,8 @@ static ssize_t qeth_l3_dev_ipato_enable_
struct device_attribute *attr, const char *buf, size_t count)
 {
struct qeth_card *card = dev_get_drvdata(dev);
-   struct qeth_ipaddr *addr;
-   int i, rc = 0;
bool enable;
+   int rc = 0;
 
if (!card)
return -EINVAL;
@@ -393,20 +392,12 @@ static ssize_t qeth_l3_dev_ipato_enable_
goto out;
}
 
-   if (card->ipato.enabled == 

[PATCH 4.9 35/75] tg3: Fix rx hang on MTU change with 5717/5719

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Brian King 


[ Upstream commit 748a240c589824e9121befb1cba5341c319885bc ]

This fixes a hang issue seen when changing the MTU size from 1500 MTU
to 9000 MTU on both 5717 and 5719 chips. In discussion with Broadcom,
they've indicated that these chipsets have the same phy as the 57766
chipset, so the same workarounds apply. This has been tested by IBM
on both Power 8 and Power 9 systems as well as by Broadcom on x86
hardware and has been confirmed to resolve the hang issue.

Signed-off-by: Brian King 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 drivers/net/ethernet/broadcom/tg3.c |4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

--- a/drivers/net/ethernet/broadcom/tg3.c
+++ b/drivers/net/ethernet/broadcom/tg3.c
@@ -14226,7 +14226,9 @@ static int tg3_change_mtu(struct net_dev
/* Reset PHY, otherwise the read DMA engine will be in a mode that
 * breaks all requests to 256 bytes.
 */
-   if (tg3_asic_rev(tp) == ASIC_REV_57766)
+   if (tg3_asic_rev(tp) == ASIC_REV_57766 ||
+   tg3_asic_rev(tp) == ASIC_REV_5717 ||
+   tg3_asic_rev(tp) == ASIC_REV_5719)
reset_phy = true;
 
err = tg3_restart_hw(tp, reset_phy);




[PATCH 4.9 54/75] s390/qeth: dont apply takeover changes to RXIP

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Julian Wiedmann 


[ Upstream commit b22d73d6689fd902a66c08ebe71ab2f3b351e22f ]

When takeover is switched off, current code clears the 'TAKEOVER' flag on
all IPs. But the flag is also used for RXIP addresses, and those should
not be affected by the takeover mode.
Fix the behaviour by consistently applying takeover logic to NORMAL
addresses only.

Signed-off-by: Julian Wiedmann 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 drivers/s390/net/qeth_l3_main.c |5 +++--
 drivers/s390/net/qeth_l3_sys.c  |5 +++--
 2 files changed, 6 insertions(+), 4 deletions(-)

--- a/drivers/s390/net/qeth_l3_main.c
+++ b/drivers/s390/net/qeth_l3_main.c
@@ -178,6 +178,8 @@ int qeth_l3_is_addr_covered_by_ipato(str
 
if (!card->ipato.enabled)
return 0;
+   if (addr->type != QETH_IP_TYPE_NORMAL)
+   return 0;
 
qeth_l3_convert_addr_to_bits((u8 *) &addr->u, addr_bits,
  (addr->proto == QETH_PROT_IPV4)? 4:16);
@@ -293,8 +295,7 @@ int qeth_l3_add_ip(struct qeth_card *car
memcpy(addr, tmp_addr, sizeof(struct qeth_ipaddr));
addr->ref_counter = 1;
 
-   if (addr->type == QETH_IP_TYPE_NORMAL  &&
-   qeth_l3_is_addr_covered_by_ipato(card, addr)) {
+   if (qeth_l3_is_addr_covered_by_ipato(card, addr)) {
QETH_CARD_TEXT(card, 2, "tkovaddr");
addr->set_flags |= QETH_IPA_SETIP_TAKEOVER_FLAG;
}
--- a/drivers/s390/net/qeth_l3_sys.c
+++ b/drivers/s390/net/qeth_l3_sys.c
@@ -398,10 +398,11 @@ static ssize_t qeth_l3_dev_ipato_enable_
card->ipato.enabled = enable;
 
hash_for_each(card->ip_htable, i, addr, hnode) {
+   if (addr->type != QETH_IP_TYPE_NORMAL)
+   continue;
if (!enable)
addr->set_flags &= ~QETH_IPA_SETIP_TAKEOVER_FLAG;
-   else if (addr->type == QETH_IP_TYPE_NORMAL &&
-qeth_l3_is_addr_covered_by_ipato(card, addr))
+   else if (qeth_l3_is_addr_covered_by_ipato(card, addr))
addr->set_flags |= QETH_IPA_SETIP_TAKEOVER_FLAG;
}
 out:




[PATCH 4.9 53/75] s390/qeth: apply takeover changes when mode is toggled

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Julian Wiedmann 


[ Upstream commit 7fbd9493f0eeae8cef58300505a9ef5c8fce6313 ]

Just as for an explicit enable/disable, toggling the takeover mode also
requires that the IP addresses get updated. Otherwise all IPs that were
added to the table before the mode-toggle get registered with the old
settings.

Signed-off-by: Julian Wiedmann 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 drivers/s390/net/qeth_core.h  |2 +-
 drivers/s390/net/qeth_core_main.c |2 +-
 drivers/s390/net/qeth_l3_sys.c|   35 +--
 3 files changed, 19 insertions(+), 20 deletions(-)

--- a/drivers/s390/net/qeth_core.h
+++ b/drivers/s390/net/qeth_core.h
@@ -576,7 +576,7 @@ enum qeth_cq {
 };
 
 struct qeth_ipato {
-   int enabled;
+   bool enabled;
int invert4;
int invert6;
struct list_head entries;
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -1475,7 +1475,7 @@ static int qeth_setup_card(struct qeth_c
qeth_set_intial_options(card);
/* IP address takeover */
INIT_LIST_HEAD(&card->ipato.entries);
-   card->ipato.enabled = 0;
+   card->ipato.enabled = false;
card->ipato.invert4 = 0;
card->ipato.invert6 = 0;
/* init QDIO stuff */
--- a/drivers/s390/net/qeth_l3_sys.c
+++ b/drivers/s390/net/qeth_l3_sys.c
@@ -374,6 +374,7 @@ static ssize_t qeth_l3_dev_ipato_enable_
struct qeth_card *card = dev_get_drvdata(dev);
struct qeth_ipaddr *addr;
int i, rc = 0;
+   bool enable;
 
if (!card)
return -EINVAL;
@@ -386,25 +387,23 @@ static ssize_t qeth_l3_dev_ipato_enable_
}
 
if (sysfs_streq(buf, "toggle")) {
-   card->ipato.enabled = (card->ipato.enabled)? 0 : 1;
-   } else if (sysfs_streq(buf, "1")) {
-   card->ipato.enabled = 1;
-   hash_for_each(card->ip_htable, i, addr, hnode) {
-   if ((addr->type == QETH_IP_TYPE_NORMAL) &&
-   qeth_l3_is_addr_covered_by_ipato(card, addr))
-   addr->set_flags |=
-   QETH_IPA_SETIP_TAKEOVER_FLAG;
-   }
-   } else if (sysfs_streq(buf, "0")) {
-   card->ipato.enabled = 0;
-   hash_for_each(card->ip_htable, i, addr, hnode) {
-   if (addr->set_flags &
-   QETH_IPA_SETIP_TAKEOVER_FLAG)
-   addr->set_flags &=
-   ~QETH_IPA_SETIP_TAKEOVER_FLAG;
-   }
-   } else
+   enable = !card->ipato.enabled;
} else if (kstrtobool(buf, &enable)) {
rc = -EINVAL;
+   goto out;
+   }
+
+   if (card->ipato.enabled == enable)
+   goto out;
+   card->ipato.enabled = enable;
+
+   hash_for_each(card->ip_htable, i, addr, hnode) {
+   if (!enable)
+   addr->set_flags &= ~QETH_IPA_SETIP_TAKEOVER_FLAG;
+   else if (addr->type == QETH_IP_TYPE_NORMAL &&
+qeth_l3_is_addr_covered_by_ipato(card, addr))
+   addr->set_flags |= QETH_IPA_SETIP_TAKEOVER_FLAG;
+   }
 out:
mutex_unlock(&card->conf_mutex);
return rc ? rc : count;




[PATCH 4.9 36/75] net: ipv4: fix for a race condition in raw_sendmsg

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Mohamed Ghannam 


[ Upstream commit 8f659a03a0ba9289b9aeb9b4470e6fb263d6f483 ]

inet->hdrincl is racy, and could lead to uninitialized stack pointer
usage, so its value should be read only once.
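
The read-once pattern is easy to show in a userspace sketch (hypothetical names): a flag that another context may flip concurrently is copied into a local once, and every later decision uses that snapshot, so the function can never observe two different values mid-way. In the kernel one would normally use READ_ONCE(), but it does not work on bit fields, hence the plain copy in this patch.

```c
#include <assert.h>

struct fake_inet_sock {
	unsigned int hdrincl : 1;	/* bit field, like inet->hdrincl */
};

/* returns 1 if every decision in the function saw the same value */
int fake_sendmsg(struct fake_inet_sock *inet)
{
	int hdrincl = inet->hdrincl;	/* single snapshot of the racy field */

	/* both choices below use the snapshot, never inet->hdrincl again */
	int route_choice = hdrincl ? 1 : 0;	/* IPPROTO_RAW vs sk_protocol */
	int send_choice  = hdrincl ? 1 : 0;	/* raw_send_hdrinc() path */

	return route_choice == send_choice;
}
```

With the old code, a setsockopt() flipping hdrincl between the route lookup and the send could make the two decisions disagree, which is the uninitialized-pointer hazard described above.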

Fixes: c008ba5bdc9f ("ipv4: Avoid reading user iov twice after raw_probe_proto_opt")
Signed-off-by: Mohamed Ghannam 
Reviewed-by: Eric Dumazet 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/ipv4/raw.c |   15 ++-
 1 file changed, 10 insertions(+), 5 deletions(-)

--- a/net/ipv4/raw.c
+++ b/net/ipv4/raw.c
@@ -502,11 +502,16 @@ static int raw_sendmsg(struct sock *sk,
int err;
struct ip_options_data opt_copy;
struct raw_frag_vec rfv;
+   int hdrincl;
 
err = -EMSGSIZE;
if (len > 0x)
goto out;
 
+   /* hdrincl should be READ_ONCE(inet->hdrincl)
+* but READ_ONCE() doesn't work with bit fields
+*/
+   hdrincl = inet->hdrincl;
/*
 *  Check the flags.
 */
@@ -582,7 +587,7 @@ static int raw_sendmsg(struct sock *sk,
/* Linux does not mangle headers on raw sockets,
 * so that IP options + IP_HDRINCL is non-sense.
 */
-   if (inet->hdrincl)
+   if (hdrincl)
goto done;
if (ipc.opt->opt.srr) {
if (!daddr)
@@ -604,12 +609,12 @@ static int raw_sendmsg(struct sock *sk,
 
flowi4_init_output(&fl4, ipc.oif, sk->sk_mark, tos,
   RT_SCOPE_UNIVERSE,
-  inet->hdrincl ? IPPROTO_RAW : sk->sk_protocol,
+  hdrincl ? IPPROTO_RAW : sk->sk_protocol,
   inet_sk_flowi_flags(sk) |
-   (inet->hdrincl ? FLOWI_FLAG_KNOWN_NH : 0),
+   (hdrincl ? FLOWI_FLAG_KNOWN_NH : 0),
   daddr, saddr, 0, 0);
 
-   if (!inet->hdrincl) {
+   if (!hdrincl) {
rfv.msg = msg;
rfv.hlen = 0;
 
@@ -634,7 +639,7 @@ static int raw_sendmsg(struct sock *sk,
goto do_confirm;
 back_from_confirm:
 
-   if (inet->hdrincl)
+   if (hdrincl)
err = raw_send_hdrinc(sk, &fl4, msg, len,
  &rt, msg->msg_flags, &ipc.sockc);
 




[PATCH 4.9 61/75] USB: serial: ftdi_sio: add id for Airbus DS P8GR

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Max Schulze 

commit c6a36ad383559a60a249aa6016cebf3cb8b6c485 upstream.

Add AIRBUS_DS_P8GR device IDs to ftdi_sio driver.

Signed-off-by: Max Schulze 
Signed-off-by: Johan Hovold 
Signed-off-by: Greg Kroah-Hartman 

---
 drivers/usb/serial/ftdi_sio.c |1 +
 drivers/usb/serial/ftdi_sio_ids.h |6 ++
 2 files changed, 7 insertions(+)

--- a/drivers/usb/serial/ftdi_sio.c
+++ b/drivers/usb/serial/ftdi_sio.c
@@ -1017,6 +1017,7 @@ static const struct usb_device_id id_tab
.driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
{ USB_DEVICE(CYPRESS_VID, CYPRESS_WICED_BT_USB_PID) },
{ USB_DEVICE(CYPRESS_VID, CYPRESS_WICED_WL_USB_PID) },
+   { USB_DEVICE(AIRBUS_DS_VID, AIRBUS_DS_P8GR) },
{ } /* Terminating entry */
 };
 
--- a/drivers/usb/serial/ftdi_sio_ids.h
+++ b/drivers/usb/serial/ftdi_sio_ids.h
@@ -914,6 +914,12 @@
 #define ICPDAS_I7563U_PID  0x0105
 
 /*
+ * Airbus Defence and Space
+ */
+#define AIRBUS_DS_VID  0x1e8e  /* Vendor ID */
+#define AIRBUS_DS_P8GR 0x6001  /* Tetra P8GR */
+
+/*
  * RT Systems programming cables for various ham radios
  */
 #define RTSYSTEMS_VID  0x2100  /* Vendor ID */




[PATCH 4.9 60/75] usbip: vhci: stop printing kernel pointer addresses in messages

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Shuah Khan 

commit 8272d099d05f7ab2776cf56a2ab9f9443be18907 upstream.

Remove and/or change debug, info, and error messages to not print
kernel pointer addresses.

Signed-off-by: Shuah Khan 
Signed-off-by: Greg Kroah-Hartman 

---
 drivers/usb/usbip/vhci_hcd.c |   10 --
 drivers/usb/usbip/vhci_rx.c  |   23 +++
 drivers/usb/usbip/vhci_tx.c  |3 ++-
 3 files changed, 13 insertions(+), 23 deletions(-)

--- a/drivers/usb/usbip/vhci_hcd.c
+++ b/drivers/usb/usbip/vhci_hcd.c
@@ -506,9 +506,6 @@ static int vhci_urb_enqueue(struct usb_h
struct vhci_device *vdev;
unsigned long flags;
 
-   usbip_dbg_vhci_hc("enter, usb_hcd %p urb %p mem_flags %d\n",
- hcd, urb, mem_flags);
-
if (portnum > VHCI_HC_PORTS) {
pr_err("invalid port number %d\n", portnum);
return -ENODEV;
@@ -671,8 +668,6 @@ static int vhci_urb_dequeue(struct usb_h
struct vhci_device *vdev;
unsigned long flags;
 
-   pr_info("dequeue a urb %p\n", urb);
-
spin_lock_irqsave(&vhci->lock, flags);
 
priv = urb->hcpriv;
@@ -700,7 +695,6 @@ static int vhci_urb_dequeue(struct usb_h
/* tcp connection is closed */
spin_lock(&vdev->priv_lock);
 
-   pr_info("device %p seems to be disconnected\n", vdev);
list_del(&priv->list);
kfree(priv);
urb->hcpriv = NULL;
@@ -712,8 +706,6 @@ static int vhci_urb_dequeue(struct usb_h
 * vhci_rx will receive RET_UNLINK and give back the URB.
 * Otherwise, we give back it here.
 */
-   pr_info("gives back urb %p\n", urb);
-
usb_hcd_unlink_urb_from_ep(hcd, urb);
 
spin_unlock_irqrestore(&vhci->lock, flags);
@@ -741,8 +733,6 @@ static int vhci_urb_dequeue(struct usb_h
 
unlink->unlink_seqnum = priv->seqnum;
 
-   pr_info("device %p seems to be still connected\n", vdev);
-
/* send cmd_unlink and try to cancel the pending URB in the
 * peer */
list_add_tail(&unlink->list, &vdev->unlink_tx);
--- a/drivers/usb/usbip/vhci_rx.c
+++ b/drivers/usb/usbip/vhci_rx.c
@@ -37,24 +37,23 @@ struct urb *pickup_urb_and_free_priv(str
urb = priv->urb;
status = urb->status;
 
-   usbip_dbg_vhci_rx("find urb %p vurb %p seqnum %u\n",
-   urb, priv, seqnum);
+   usbip_dbg_vhci_rx("find urb seqnum %u\n", seqnum);
 
switch (status) {
case -ENOENT:
/* fall through */
case -ECONNRESET:
-   dev_info(&urb->dev->dev,
-"urb %p was unlinked %ssynchronuously.\n", urb,
-status == -ENOENT ? "" : "a");
+   dev_dbg(&urb->dev->dev,
+"urb seq# %u was unlinked %ssynchronuously\n",
+seqnum, status == -ENOENT ? "" : "a");
break;
case -EINPROGRESS:
/* no info output */
break;
default:
-   dev_info(&urb->dev->dev,
-"urb %p may be in a error, status %d\n", urb,
-status);
+   dev_dbg(&urb->dev->dev,
+"urb seq# %u may be in a error, status %d\n",
+seqnum, status);
}
 
list_del(&priv->list);
@@ -80,8 +79,8 @@ static void vhci_recv_ret_submit(struct
spin_unlock_irqrestore(&vdev->priv_lock, flags);
 
if (!urb) {
-   pr_err("cannot find a urb of seqnum %u\n", pdu->base.seqnum);
-   pr_info("max seqnum %d\n",
+   pr_err("cannot find a urb of seqnum %u max seqnum %d\n",
+   pdu->base.seqnum,
atomic_read(>seqnum));
usbip_event_add(ud, VDEV_EVENT_ERROR_TCP);
return;
@@ -104,7 +103,7 @@ static void vhci_recv_ret_submit(struct
if (usbip_dbg_flag_vhci_rx)
usbip_dump_urb(urb);
 
-   usbip_dbg_vhci_rx("now giveback urb %p\n", urb);
+   usbip_dbg_vhci_rx("now giveback urb %u\n", pdu->base.seqnum);
 
spin_lock_irqsave(>lock, flags);
usb_hcd_unlink_urb_from_ep(vhci_to_hcd(vhci), urb);
@@ -170,7 +169,7 @@ static void vhci_recv_ret_unlink(struct
pr_info("the urb (seqnum %d) was already given back\n",
pdu->base.seqnum);
} else {
-   usbip_dbg_vhci_rx("now giveback urb %p\n", urb);
+   usbip_dbg_vhci_rx("now giveback urb %d\n", pdu->base.seqnum);
 
/* If unlink is successful, status is 

[PATCH 4.9 63/75] USB: serial: option: add support for Telit ME910 PID 0x1101

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Daniele Palmas 

commit 08933099e6404f588f81c2050bfec7313e06eeaf upstream.

This patch adds support for PID 0x1101 of Telit ME910.

Signed-off-by: Daniele Palmas 
Signed-off-by: Johan Hovold 
Signed-off-by: Greg Kroah-Hartman 

---
 drivers/usb/serial/option.c |8 
 1 file changed, 8 insertions(+)

--- a/drivers/usb/serial/option.c
+++ b/drivers/usb/serial/option.c
@@ -283,6 +283,7 @@ static void option_instat_callback(struc
 #define TELIT_PRODUCT_LE922_USBCFG30x1043
 #define TELIT_PRODUCT_LE922_USBCFG50x1045
 #define TELIT_PRODUCT_ME9100x1100
+#define TELIT_PRODUCT_ME910_DUAL_MODEM 0x1101
 #define TELIT_PRODUCT_LE9200x1200
 #define TELIT_PRODUCT_LE9100x1201
 #define TELIT_PRODUCT_LE910_USBCFG40x1206
@@ -648,6 +649,11 @@ static const struct option_blacklist_inf
.reserved = BIT(1) | BIT(3),
 };
 
+static const struct option_blacklist_info telit_me910_dual_modem_blacklist = {
+   .sendsetup = BIT(0),
+   .reserved = BIT(3),
+};
+
 static const struct option_blacklist_info telit_le910_blacklist = {
.sendsetup = BIT(0),
.reserved = BIT(1) | BIT(2),
@@ -1247,6 +1253,8 @@ static const struct usb_device_id option
.driver_info = (kernel_ulong_t)&telit_le922_blacklist_usbcfg0 },
{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
.driver_info = (kernel_ulong_t)&telit_me910_blacklist },
+   { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+   .driver_info = (kernel_ulong_t)&telit_me910_dual_modem_blacklist },
{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910),
.driver_info = (kernel_ulong_t)&telit_le910_blacklist },
{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910_USBCFG4),




[PATCH 4.9 59/75] usbip: stub: stop printing kernel pointer addresses in messages

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Shuah Khan 

commit 248a22044366f588d46754c54dfe29ffe4f8b4df upstream.

Remove and/or change debug, info, and error messages to not print
kernel pointer addresses.
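The change can be sketched as a small userspace model: identify a URB by its sequence number instead of leaking a kernel address through `%p`. This is an illustration only; `fake_urb`, `format_free_msg` and the use of `snprintf` in place of `dev_dbg()` are assumptions, not the driver's code.

```c
/* Sketch of the logging change: print a stable per-request sequence
 * number rather than a kernel pointer.  snprintf() stands in for
 * dev_dbg(); the struct and function names are illustrative. */
#include <stdio.h>

struct fake_urb { unsigned long seqnum; };

/* Before: "free urb %p" (leaks an address).
 * After:  "free urb seqnum %lu" (no address disclosure). */
static int format_free_msg(char *buf, size_t len, const struct fake_urb *u)
{
    return snprintf(buf, len, "free urb seqnum %lu", u->seqnum);
}
```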

Signed-off-by: Shuah Khan 
Signed-off-by: Greg Kroah-Hartman 

---
 drivers/usb/usbip/stub_main.c |5 +++--
 drivers/usb/usbip/stub_rx.c   |7 ++-
 drivers/usb/usbip/stub_tx.c   |6 +++---
 3 files changed, 8 insertions(+), 10 deletions(-)

--- a/drivers/usb/usbip/stub_main.c
+++ b/drivers/usb/usbip/stub_main.c
@@ -252,11 +252,12 @@ void stub_device_cleanup_urbs(struct stu
struct stub_priv *priv;
struct urb *urb;
 
-   dev_dbg(>udev->dev, "free sdev %p\n", sdev);
+   dev_dbg(>udev->dev, "Stub device cleaning up urbs\n");
 
while ((priv = stub_priv_pop(sdev))) {
urb = priv->urb;
-   dev_dbg(>udev->dev, "free urb %p\n", urb);
+   dev_dbg(>udev->dev, "free urb seqnum %lu\n",
+   priv->seqnum);
usb_kill_urb(urb);
 
kmem_cache_free(stub_priv_cache, priv);
--- a/drivers/usb/usbip/stub_rx.c
+++ b/drivers/usb/usbip/stub_rx.c
@@ -225,9 +225,6 @@ static int stub_recv_cmd_unlink(struct s
if (priv->seqnum != pdu->u.cmd_unlink.seqnum)
continue;
 
-   dev_info(>urb->dev->dev, "unlink urb %p\n",
-priv->urb);
-
/*
 * This matched urb is not completed yet (i.e., be in
 * flight in usb hcd hardware/driver). Now we are
@@ -266,8 +263,8 @@ static int stub_recv_cmd_unlink(struct s
ret = usb_unlink_urb(priv->urb);
if (ret != -EINPROGRESS)
dev_err(>urb->dev->dev,
-   "failed to unlink a urb %p, ret %d\n",
-   priv->urb, ret);
+   "failed to unlink a urb # %lu, ret %d\n",
+   priv->seqnum, ret);
 
return 0;
}
--- a/drivers/usb/usbip/stub_tx.c
+++ b/drivers/usb/usbip/stub_tx.c
@@ -102,7 +102,7 @@ void stub_complete(struct urb *urb)
/* link a urb to the queue of tx. */
spin_lock_irqsave(>priv_lock, flags);
if (sdev->ud.tcp_socket == NULL) {
-   usbip_dbg_stub_tx("ignore urb for closed connection %p", urb);
+   usbip_dbg_stub_tx("ignore urb for closed connection\n");
/* It will be freed in stub_device_cleanup_urbs(). */
} else if (priv->unlinking) {
stub_enqueue_ret_unlink(sdev, priv->seqnum, urb->status);
@@ -204,8 +204,8 @@ static int stub_send_ret_submit(struct s
 
/* 1. setup usbip_header */
setup_ret_submit_pdu(&pdu_header, urb);
-   usbip_dbg_stub_tx("setup txdata seqnum: %d urb: %p\n",
- pdu_header.base.seqnum, urb);
+   usbip_dbg_stub_tx("setup txdata seqnum: %d\n",
+ pdu_header.base.seqnum);
usbip_header_correct_endian(&pdu_header, 1);
 
iov[iovnum].iov_base = &pdu_header;




[PATCH 4.9 75/75] tty: fix tty_ldisc_receive_buf() documentation

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Johan Hovold 

commit e7e51dcf3b8a5f65c5653a054ad57eb2492a90d0 upstream.

The tty_ldisc_receive_buf() helper returns the number of bytes
processed so drop the bogus "not" from the kernel doc comment.

Fixes: 8d082cd300ab ("tty: Unify receive_buf() code paths")
Signed-off-by: Johan Hovold 
Signed-off-by: Greg Kroah-Hartman 

---
 drivers/tty/tty_buffer.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/tty/tty_buffer.c
+++ b/drivers/tty/tty_buffer.c
@@ -446,7 +446,7 @@ EXPORT_SYMBOL_GPL(tty_prepare_flip_strin
  * Callers other than flush_to_ldisc() need to exclude the kworker
  * from concurrent use of the line discipline, see paste_selection().
  *
- * Returns the number of bytes not processed
+ * Returns the number of bytes processed
  */
 int tty_ldisc_receive_buf(struct tty_ldisc *ld, unsigned char *p,
  char *f, int count)




[PATCH 4.9 25/75] ipv6: mcast: better catch silly mtu values

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Eric Dumazet 


[ Upstream commit b9b312a7a451e9c098921856e7cfbc201120e1a7 ]

syzkaller reported crashes in IPv6 stack [1]

Xin Long found that lo MTU was set to silly values.

IPv6 stack reacts to changes to small MTU, by disabling itself under
RTNL.

But there is a window where threads not using RTNL can see a wrong
device mtu. This can lead to surprises in mld code, where it is assumed
the mtu is suitable.

Fix this by reading device mtu once and checking IPv6 minimal MTU.
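The pattern the patch applies can be modeled in userspace: snapshot a concurrently written value once, validate the snapshot, then use only the snapshot. `IPV6_MIN_MTU` matches the kernel constant; the atomic load standing in for `READ_ONCE()` and the function name are assumptions of this sketch.

```c
/* Userspace model of the fix: read dev->mtu exactly once, reject silly
 * values, and never re-read it while building packets.  The C11 relaxed
 * atomic load plays the role of READ_ONCE(). */
#include <stdatomic.h>

#define IPV6_MIN_MTU 1280

static _Atomic unsigned int dev_mtu;   /* models dev->mtu, writable by other threads */

/* Returns the validated mtu snapshot, or 0 if the device mtu is silly,
 * in which case the caller bails out instead of building a packet. */
static unsigned int mld_mtu_snapshot(void)
{
    unsigned int mtu = atomic_load_explicit(&dev_mtu, memory_order_relaxed);

    if (mtu < IPV6_MIN_MTU)
        return 0;
    return mtu;   /* all later sizing decisions use this snapshot */
}
```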

[1]
 skbuff: skb_over_panic: text:10b86b8d len:196 put:20
 head:3b477e60 data:0e85441e tail:0xd4 end:0xc0 dev:lo
 [ cut here ]
 kernel BUG at net/core/skbuff.c:104!
 invalid opcode:  [#1] SMP KASAN
 Dumping ftrace buffer:
(ftrace buffer empty)
 Modules linked in:
 CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.15.0-rc2-mm1+ #39
 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
 Google 01/01/2011
 RIP: 0010:skb_panic+0x15c/0x1f0 net/core/skbuff.c:100
 RSP: 0018:8801db307508 EFLAGS: 00010286
 RAX: 0082 RBX: 8801c517e840 RCX: 
 RDX: 0082 RSI: 11003b660e61 RDI: ed003b660e95
 RBP: 8801db307570 R08: 11003b660e23 R09: 
 R10:  R11:  R12: 85bd4020
 R13: 84754ed2 R14: 0014 R15: 8801c4e26540
 FS:  () GS:8801db30() knlGS:
 CS:  0010 DS:  ES:  CR0: 80050033
 CR2: 00463610 CR3: 0001c6698000 CR4: 001406e0
 DR0:  DR1:  DR2: 
 DR3:  DR6: fffe0ff0 DR7: 0400
 Call Trace:
  
  skb_over_panic net/core/skbuff.c:109 [inline]
  skb_put+0x181/0x1c0 net/core/skbuff.c:1694
  add_grhead.isra.24+0x42/0x3b0 net/ipv6/mcast.c:1695
  add_grec+0xa55/0x1060 net/ipv6/mcast.c:1817
  mld_send_cr net/ipv6/mcast.c:1903 [inline]
  mld_ifc_timer_expire+0x4d2/0x770 net/ipv6/mcast.c:2448
  call_timer_fn+0x23b/0x840 kernel/time/timer.c:1320
  expire_timers kernel/time/timer.c:1357 [inline]
  __run_timers+0x7e1/0xb60 kernel/time/timer.c:1660
  run_timer_softirq+0x4c/0xb0 kernel/time/timer.c:1686
  __do_softirq+0x29d/0xbb2 kernel/softirq.c:285
  invoke_softirq kernel/softirq.c:365 [inline]
  irq_exit+0x1d3/0x210 kernel/softirq.c:405
  exiting_irq arch/x86/include/asm/apic.h:540 [inline]
  smp_apic_timer_interrupt+0x16b/0x700 arch/x86/kernel/apic/apic.c:1052
  apic_timer_interrupt+0xa9/0xb0 arch/x86/entry/entry_64.S:920

Signed-off-by: Eric Dumazet 
Reported-by: syzbot 
Tested-by: Xin Long 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/ipv6/mcast.c |   25 +++--
 1 file changed, 15 insertions(+), 10 deletions(-)

--- a/net/ipv6/mcast.c
+++ b/net/ipv6/mcast.c
@@ -1682,16 +1682,16 @@ static int grec_size(struct ifmcaddr6 *p
 }
 
 static struct sk_buff *add_grhead(struct sk_buff *skb, struct ifmcaddr6 *pmc,
-   int type, struct mld2_grec **ppgr)
+   int type, struct mld2_grec **ppgr, unsigned int mtu)
 {
-   struct net_device *dev = pmc->idev->dev;
struct mld2_report *pmr;
struct mld2_grec *pgr;
 
-   if (!skb)
-   skb = mld_newpack(pmc->idev, dev->mtu);
-   if (!skb)
-   return NULL;
+   if (!skb) {
+   skb = mld_newpack(pmc->idev, mtu);
+   if (!skb)
+   return NULL;
+   }
pgr = (struct mld2_grec *)skb_put(skb, sizeof(struct mld2_grec));
pgr->grec_type = type;
pgr->grec_auxwords = 0;
@@ -1714,10 +1714,15 @@ static struct sk_buff *add_grec(struct s
struct mld2_grec *pgr = NULL;
struct ip6_sf_list *psf, *psf_next, *psf_prev, **psf_list;
int scount, stotal, first, isquery, truncate;
+   unsigned int mtu;
 
if (pmc->mca_flags & MAF_NOREPORT)
return skb;
 
+   mtu = READ_ONCE(dev->mtu);
+   if (mtu < IPV6_MIN_MTU)
+   return skb;
+
isquery = type == MLD2_MODE_IS_INCLUDE ||
  type == MLD2_MODE_IS_EXCLUDE;
truncate = type == MLD2_MODE_IS_EXCLUDE ||
@@ -1738,7 +1743,7 @@ static struct sk_buff *add_grec(struct s
AVAILABLE(skb) < grec_size(pmc, type, gdeleted, sdeleted)) {
if (skb)
mld_sendpack(skb);
-   skb = mld_newpack(idev, dev->mtu);
+   skb = mld_newpack(idev, mtu);
}
}
first = 1;
@@ -1774,12 +1779,12 @@ static struct sk_buff *add_grec(struct s
pgr->grec_nsrcs = htons(scount);
if (skb)
mld_sendpack(skb);
-   skb = mld_newpack(idev, dev->mtu);
+   skb = mld_newpack(idev, mtu);

[PATCH 4.9 74/75] n_tty: fix EXTPROC vs ICANON interaction with TIOCINQ (aka FIONREAD)

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Linus Torvalds 

commit 966031f340185eddd05affcf72b740549f056348 upstream.

We added support for EXTPROC back in 2010 in commit 26df6d13406d ("tty:
Add EXTPROC support for LINEMODE") and the intent was to allow it to
override some (all?) ICANON behavior.  Quoting from that original commit
message:

 There is a new bit in the termios local flag word, EXTPROC.
 When this bit is set, several aspects of the terminal driver
 are disabled.  Input line editing, character echo, and mapping
 of signals are all disabled.  This allows the telnetd to turn
 off these functions when in linemode, but still keep track of
 what state the user wants the terminal to be in.

but the problem turns out that "several aspects of the terminal driver
are disabled" is a bit ambiguous, and you can really confuse the n_tty
layer by setting EXTPROC and then causing some of the ICANON invariants
to no longer be maintained.

This fixes at least one such case (TIOCINQ) becoming unhappy because of
the confusion over whether ICANON really means ICANON when EXTPROC is set.

This basically makes TIOCINQ match the case of read: if EXTPROC is set,
we ignore ICANON.  Also, make sure to reset the ICANON state if EXTPROC
changes, not just if ICANON changes.
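The TIOCINQ decision after the fix can be sketched as a tiny model: canonical-mode byte accounting applies only when ICANON is effective, i.e. not overridden by EXTPROC. The flag struct and byte counts below are illustrative stand-ins, not the kernel's termios bits or `n_tty` internals.

```c
/* Toy model of the patched TIOCINQ branch.  inq_canon_bytes models
 * inq_canon(), raw_bytes models read_cnt(); both values are made up. */
#include <stdbool.h>

struct tty_flags { bool icanon; bool extproc; };

static int inq_canon_bytes = 7;   /* pretend canonical line length */
static int raw_bytes = 3;         /* pretend raw buffered byte count */

/* Canonical accounting only when ICANON is in effect and EXTPROC has
 * not overridden it -- the same condition the patch installs. */
static int tiocinq(const struct tty_flags *f)
{
    if (f->icanon && !f->extproc)
        return inq_canon_bytes;
    return raw_bytes;
}
```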

Fixes: 26df6d13406d ("tty: Add EXTPROC support for LINEMODE")
Reported-by: Tetsuo Handa 
Reported-by: syzkaller 
Cc: Jiri Slaby 
Signed-off-by: Linus Torvalds 
Signed-off-by: Greg Kroah-Hartman 

---
 drivers/tty/n_tty.c |4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/drivers/tty/n_tty.c
+++ b/drivers/tty/n_tty.c
@@ -1764,7 +1764,7 @@ static void n_tty_set_termios(struct tty
 {
struct n_tty_data *ldata = tty->disc_data;
 
-   if (!old || (old->c_lflag ^ tty->termios.c_lflag) & ICANON) {
+   if (!old || (old->c_lflag ^ tty->termios.c_lflag) & (ICANON | EXTPROC)) {
bitmap_zero(ldata->read_flags, N_TTY_BUF_SIZE);
ldata->line_start = ldata->read_tail;
if (!L_ICANON(tty) || !read_cnt(ldata)) {
@@ -2427,7 +2427,7 @@ static int n_tty_ioctl(struct tty_struct
return put_user(tty_chars_in_buffer(tty), (int __user *) arg);
case TIOCINQ:
down_write(>termios_rwsem);
-   if (L_ICANON(tty))
+   if (L_ICANON(tty) && !L_EXTPROC(tty))
retval = inq_canon(ldata);
else
retval = read_cnt(ldata);




[PATCH 4.9 66/75] usb: add RESET_RESUME for ELSA MicroLink 56K

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Oliver Neukum 

commit b9096d9f15c142574ebebe8fbb137012bb9d99c2 upstream.

This modem needs this quirk to operate. It produces timeouts when
resumed without reset.

Signed-off-by: Oliver Neukum 
Signed-off-by: Greg Kroah-Hartman 

---
 drivers/usb/core/quirks.c |3 +++
 1 file changed, 3 insertions(+)

--- a/drivers/usb/core/quirks.c
+++ b/drivers/usb/core/quirks.c
@@ -155,6 +155,9 @@ static const struct usb_device_id usb_qu
/* Genesys Logic hub, internally used by KY-688 USB 3.1 Type-C Hub */
{ USB_DEVICE(0x05e3, 0x0612), .driver_info = USB_QUIRK_NO_LPM },
 
+   /* ELSA MicroLink 56K */
+   { USB_DEVICE(0x05cc, 0x2267), .driver_info = USB_QUIRK_RESET_RESUME },
+
/* Genesys Logic hub, internally used by Moshi USB to Ethernet Adapter */
{ USB_DEVICE(0x05e3, 0x0616), .driver_info = USB_QUIRK_NO_LPM },
 




[PATCH 4.9 31/75] ptr_ring: add barriers

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: "Michael S. Tsirkin" 


[ Upstream commit a8ceb5dbfde1092b466936bca0ff3be127ecf38e ]

Users of ptr_ring expect that it's safe to give the
data structure a pointer and have it be available
to consumers, but that actually requires an smp_wmb
or a stronger barrier.

In absence of such barriers and on architectures that reorder writes, a
consumer might read an uninitialized value from an skb pointer stored
in the skb array.  This was observed causing crashes.

To fix, add memory barriers.  The barrier we use is a wmb, the
assumption being that producers do not need to read the value so we do
not need to order these reads.
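The producer/consumer pairing can be sketched as a userspace model of the ring. This is not the kernel's ptr_ring; C11 release/acquire atomics stand in for `smp_wmb()`/`smp_read_barrier_depends()`, and the locking that the real API requires is only noted in comments.

```c
/* Userspace model of the ptr_ring fix: the producer publishes a slot
 * with release ordering so a consumer can never observe the pointer
 * before the data it points to is initialized. */
#include <stdatomic.h>
#include <stddef.h>

#define RING_SIZE 16

struct ring {
    _Atomic(void *) queue[RING_SIZE];
    int producer;   /* only touched under the producer lock */
    int consumer;   /* only touched under the consumer lock */
};

/* Returns 0 on success, -1 when the ring is full. */
static int ring_produce(struct ring *r, void *ptr)
{
    if (atomic_load_explicit(&r->queue[r->producer], memory_order_relaxed))
        return -1;                  /* slot still occupied: ring is full */

    /* Release ordering plays the role of smp_wmb(): every write that
     * initialized *ptr happens-before the slot becomes non-NULL. */
    atomic_store_explicit(&r->queue[r->producer], ptr, memory_order_release);
    if (++r->producer >= RING_SIZE)
        r->producer = 0;
    return 0;
}

/* Returns the oldest pointer, or NULL when the ring is empty. */
static void *ring_consume(struct ring *r)
{
    /* Acquire ordering pairs with the release store in ring_produce(),
     * mirroring smp_read_barrier_depends() in __ptr_ring_consume(). */
    void *ptr = atomic_load_explicit(&r->queue[r->consumer],
                                     memory_order_acquire);
    if (!ptr)
        return NULL;
    atomic_store_explicit(&r->queue[r->consumer], NULL, memory_order_relaxed);
    if (++r->consumer >= RING_SIZE)
        r->consumer = 0;
    return ptr;
}
```

The barrier choice mirrors the commit's reasoning: producers never read through stored pointers, so only the publish side needs the expensive ordering.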

Reported-by: George Cherian 
Suggested-by: Jason Wang 
Signed-off-by: Michael S. Tsirkin 
Acked-by: Jason Wang 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 include/linux/ptr_ring.h |9 +
 1 file changed, 9 insertions(+)

--- a/include/linux/ptr_ring.h
+++ b/include/linux/ptr_ring.h
@@ -99,12 +99,18 @@ static inline bool ptr_ring_full_bh(stru
 
 /* Note: callers invoking this in a loop must use a compiler barrier,
  * for example cpu_relax(). Callers must hold producer_lock.
+ * Callers are responsible for making sure pointer that is being queued
+ * points to a valid data.
  */
 static inline int __ptr_ring_produce(struct ptr_ring *r, void *ptr)
 {
if (unlikely(!r->size) || r->queue[r->producer])
return -ENOSPC;
 
+   /* Make sure the pointer we are storing points to a valid data. */
+   /* Pairs with smp_read_barrier_depends in __ptr_ring_consume. */
+   smp_wmb();
+
r->queue[r->producer++] = ptr;
if (unlikely(r->producer >= r->size))
r->producer = 0;
@@ -244,6 +250,9 @@ static inline void *__ptr_ring_consume(s
if (ptr)
__ptr_ring_discard_one(r);
 
+   /* Make sure anyone accessing data through the pointer is up to date. */
+   /* Pairs with smp_wmb in __ptr_ring_produce. */
+   smp_read_barrier_depends();
return ptr;
 }
 




[PATCH 4.9 32/75] RDS: Check cmsg_len before dereferencing CMSG_DATA

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Avinash Repaka 


[ Upstream commit 14e138a86f6347c6199f610576d2e11c03bec5f0 ]

RDS currently doesn't check if the length of the control message is
large enough to hold the required data, before dereferencing the control
message data. This results in following crash:

BUG: KASAN: stack-out-of-bounds in rds_rdma_bytes net/rds/send.c:1013
[inline]
BUG: KASAN: stack-out-of-bounds in rds_sendmsg+0x1f02/0x1f90
net/rds/send.c:1066
Read of size 8 at addr 8801c928fb70 by task syzkaller455006/3157

CPU: 0 PID: 3157 Comm: syzkaller455006 Not tainted 4.15.0-rc3+ #161
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:17 [inline]
 dump_stack+0x194/0x257 lib/dump_stack.c:53
 print_address_description+0x73/0x250 mm/kasan/report.c:252
 kasan_report_error mm/kasan/report.c:351 [inline]
 kasan_report+0x25b/0x340 mm/kasan/report.c:409
 __asan_report_load8_noabort+0x14/0x20 mm/kasan/report.c:430
 rds_rdma_bytes net/rds/send.c:1013 [inline]
 rds_sendmsg+0x1f02/0x1f90 net/rds/send.c:1066
 sock_sendmsg_nosec net/socket.c:628 [inline]
 sock_sendmsg+0xca/0x110 net/socket.c:638
 ___sys_sendmsg+0x320/0x8b0 net/socket.c:2018
 __sys_sendmmsg+0x1ee/0x620 net/socket.c:2108
 SYSC_sendmmsg net/socket.c:2139 [inline]
 SyS_sendmmsg+0x35/0x60 net/socket.c:2134
 entry_SYSCALL_64_fastpath+0x1f/0x96
RIP: 0033:0x43fe49
RSP: 002b:7fffbe244ad8 EFLAGS: 0217 ORIG_RAX: 0133
RAX: ffda RBX: 004002c8 RCX: 0043fe49
RDX: 0001 RSI: 2020c000 RDI: 0003
RBP: 006ca018 R08:  R09: 
R10:  R11: 0217 R12: 004017b0
R13: 00401840 R14:  R15: 

To fix this, we verify that the cmsg_len is large enough to hold the
data to be read, before proceeding further.
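The check can be sketched in a standalone C snippet: validate `cmsg_len` against the size of the structure you are about to read before touching `CMSG_DATA()`. `fake_args` is a hypothetical stand-in for `struct rds_rdma_args`; only the `CMSG_LEN()` comparison reflects the actual patch.

```c
/* Sketch of the validation the patch adds: never trust cmsg_len to
 * cover the struct you are about to dereference via CMSG_DATA(). */
#include <sys/socket.h>

struct fake_args { unsigned long bytes; };  /* stand-in for rds_rdma_args */

/* Returns 0 if the control message safely holds a struct fake_args,
 * -1 (EINVAL-style) if it is too short to be dereferenced. */
static int check_cmsg(const struct cmsghdr *cmsg)
{
    if (cmsg->cmsg_len < CMSG_LEN(sizeof(struct fake_args)))
        return -1;
    return 0;
}
```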

Reported-by: syzbot 
Signed-off-by: Avinash Repaka 
Acked-by: Santosh Shilimkar 
Reviewed-by: Yuval Shaia 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/rds/send.c |3 +++
 1 file changed, 3 insertions(+)

--- a/net/rds/send.c
+++ b/net/rds/send.c
@@ -1006,6 +1006,9 @@ static int rds_rdma_bytes(struct msghdr
continue;
 
if (cmsg->cmsg_type == RDS_CMSG_RDMA_ARGS) {
+   if (cmsg->cmsg_len <
+   CMSG_LEN(sizeof(struct rds_rdma_args)))
+   return -EINVAL;
args = CMSG_DATA(cmsg);
*rdma_bytes += args->remote_vec.bytes;
}




[PATCH 4.9 68/75] usb: xhci: Add XHCI_TRUST_TX_LENGTH for Renesas uPD720201

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Daniel Thompson 

commit da99706689481717998d1d48edd389f339eea979 upstream.

When plugging in a USB webcam I see the following message:
xhci_hcd :04:00.0: WARN Successful completion on short TX: needs
XHCI_TRUST_TX_LENGTH quirk?
handle_tx_event: 913 callbacks suppressed

All is quiet again with this patch (and I've done a fair bit of soak
testing with the camera since).

Signed-off-by: Daniel Thompson 
Acked-by: Ard Biesheuvel 
Signed-off-by: Mathias Nyman 
Signed-off-by: Greg Kroah-Hartman 

---
 drivers/usb/host/xhci-pci.c |3 +++
 1 file changed, 3 insertions(+)

--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -190,6 +190,9 @@ static void xhci_pci_quirks(struct devic
xhci->quirks |= XHCI_BROKEN_STREAMS;
}
if (pdev->vendor == PCI_VENDOR_ID_RENESAS &&
+   pdev->device == 0x0014)
+   xhci->quirks |= XHCI_TRUST_TX_LENGTH;
+   if (pdev->vendor == PCI_VENDOR_ID_RENESAS &&
pdev->device == 0x0015)
xhci->quirks |= XHCI_RESET_ON_RESUME;
if (pdev->vendor == PCI_VENDOR_ID_VIA)




[PATCH 4.9 30/75] net: reevalulate autoflowlabel setting after sysctl setting

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Shaohua Li 


[ Upstream commit 513674b5a2c9c7a67501506419da5c3c77ac6f08 ]

sysctl.ip6.auto_flowlabels is default 1. In our hosts, we set it to 2.
If sockopt doesn't set autoflowlabel, outcome packets from the hosts are
supposed to not include flowlabel. This is true for normal packet, but
not for reset packet.

The reason is ipv6_pinfo.autoflowlabel is set in sock creation. Later if
we change sysctl.ip6.auto_flowlabels, the ipv6_pinfo.autoflowlabel isn't
changed, so the sock will keep the old behavior in terms of auto
flowlabel. Reset packet is suffering from this problem, because reset
packet is sent from a special control socket, which is created at boot
time. Since sysctl.ipv6.auto_flowlabels is 1 by default, the control
socket will always have its ipv6_pinfo.autoflowlabel set, even after
the user sets sysctl.ipv6.auto_flowlabels to 2, so reset packets will always
have a flowlabel. Normal socks created before the sysctl setting suffer from
the same issue. We can't even turn off autoflowlabel unless we kill all
socks in the hosts.

To fix this, if IPV6_AUTOFLOWLABEL sockopt is used, we use the
autoflowlabel setting from user, otherwise we always call
ip6_default_np_autolabel() which has the new settings of sysctl.

Note, this changes behavior a little bit. Before commit 42240901f7c4
(ipv6: Implement different admin modes for automatic flow labels), the
autoflowlabel behavior of a sock isn't sticky, eg, if sysctl changes,
existing connection will change autoflowlabel behavior. After that
commit, autoflowlabel behavior is sticky in the whole life of the sock.
With this patch, the behavior isn't sticky again.
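The resulting precedence can be modeled in a few lines of C: a per-socket override bit decides whether the socket's own value or the current sysctl default wins. The struct and function names are illustrative, and the mode mapping in `default_autolabel()` (modes 1 and 3 label by default, 0 and 2 do not) is my reading of `ip6_default_np_autolabel()`, not something stated in this patch.

```c
/* Minimal model of the fix: without an explicit sockopt, every lookup
 * re-reads the sysctl, so sysctl changes take effect on old sockets. */
#include <stdbool.h>

struct sysctl_state { int auto_flowlabels; };  /* models net.ipv6.auto_flowlabels */
struct sock_state   { bool autoflowlabel; bool autoflowlabel_set; };

static bool default_autolabel(const struct sysctl_state *net)
{
    /* assumed mapping: modes 1 and 3 label by default; 0 and 2 do not */
    return net->auto_flowlabels == 1 || net->auto_flowlabels == 3;
}

static bool effective_autoflowlabel(const struct sysctl_state *net,
                                    const struct sock_state *sk)
{
    if (!sk->autoflowlabel_set)
        return default_autolabel(net);  /* tracks later sysctl changes */
    return sk->autoflowlabel;           /* explicit sockopt stays sticky */
}
```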

Cc: Martin KaFai Lau 
Cc: Eric Dumazet 
Cc: Tom Herbert 
Signed-off-by: Shaohua Li 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 include/linux/ipv6.h |3 ++-
 net/ipv6/af_inet6.c  |1 -
 net/ipv6/ip6_output.c|   12 ++--
 net/ipv6/ipv6_sockglue.c |1 +
 4 files changed, 13 insertions(+), 4 deletions(-)

--- a/include/linux/ipv6.h
+++ b/include/linux/ipv6.h
@@ -246,7 +246,8 @@ struct ipv6_pinfo {
 * 100: prefer care-of address
 */
dontfrag:1,
-   autoflowlabel:1;
+   autoflowlabel:1,
+   autoflowlabel_set:1;
__u8min_hopcount;
__u8tclass;
__be32  rcv_flowinfo;
--- a/net/ipv6/af_inet6.c
+++ b/net/ipv6/af_inet6.c
@@ -209,7 +209,6 @@ lookup_protocol:
np->mcast_hops  = IPV6_DEFAULT_MCASTHOPS;
np->mc_loop = 1;
np->pmtudisc= IPV6_PMTUDISC_WANT;
-   np->autoflowlabel = ip6_default_np_autolabel(sock_net(sk));
sk->sk_ipv6only = net->ipv6.sysctl.bindv6only;
 
/* Init the ipv4 part of the socket since we can have sockets
--- a/net/ipv6/ip6_output.c
+++ b/net/ipv6/ip6_output.c
@@ -156,6 +156,14 @@ int ip6_output(struct net *net, struct s
!(IP6CB(skb)->flags & IP6SKB_REROUTED));
 }
 
+static bool ip6_autoflowlabel(struct net *net, const struct ipv6_pinfo *np)
+{
+   if (!np->autoflowlabel_set)
+   return ip6_default_np_autolabel(net);
+   else
+   return np->autoflowlabel;
+}
+
 /*
  * xmit an sk_buff (used by TCP, SCTP and DCCP)
  * Note : socket lock is not held for SYNACK packets, but might be modified
@@ -219,7 +227,7 @@ int ip6_xmit(const struct sock *sk, stru
hlimit = ip6_dst_hoplimit(dst);
 
ip6_flow_hdr(hdr, tclass, ip6_make_flowlabel(net, skb, fl6->flowlabel,
-np->autoflowlabel, fl6));
+   ip6_autoflowlabel(net, np), fl6));
 
hdr->payload_len = htons(seg_len);
hdr->nexthdr = proto;
@@ -1691,7 +1699,7 @@ struct sk_buff *__ip6_make_skb(struct so
 
ip6_flow_hdr(hdr, v6_cork->tclass,
 ip6_make_flowlabel(net, skb, fl6->flowlabel,
-   np->autoflowlabel, fl6));
+   ip6_autoflowlabel(net, np), fl6));
hdr->hop_limit = v6_cork->hop_limit;
hdr->nexthdr = proto;
hdr->saddr = fl6->saddr;
--- a/net/ipv6/ipv6_sockglue.c
+++ b/net/ipv6/ipv6_sockglue.c
@@ -874,6 +874,7 @@ pref_skip_coa:
break;
case IPV6_AUTOFLOWLABEL:
np->autoflowlabel = valbool;
+   np->autoflowlabel_set = 1;
retv = 0;
break;
}




[PATCH 4.9 70/75] timers: Invoke timer_start_debug() where it makes sense

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Thomas Gleixner 

commit fd45bb77ad682be728d1002431d77b8c73342836 upstream.

The timer start debug function is called before the proper timer base is
set. As a consequence the trace data contains the stale CPU and flags
values.

Call the debug function after setting the new base and flags.

Fixes: 500462a9de65 ("timers: Switch to a non-cascading wheel")
Signed-off-by: Thomas Gleixner 
Cc: Peter Zijlstra 
Cc: Frederic Weisbecker 
Cc: Sebastian Siewior 
Cc: r...@linutronix.de
Cc: Paul McKenney 
Cc: Anna-Maria Gleixner 
Link: https://lkml.kernel.org/r/20171222145337.792907...@linutronix.de
Signed-off-by: Greg Kroah-Hartman 

---
 kernel/time/timer.c |4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -1019,8 +1019,6 @@ __mod_timer(struct timer_list *timer, un
if (!ret && pending_only)
goto out_unlock;
 
-   debug_activate(timer, expires);
-
new_base = get_target_base(base, timer->flags);
 
if (base != new_base) {
@@ -1044,6 +1042,8 @@ __mod_timer(struct timer_list *timer, un
}
}
 
+   debug_activate(timer, expires);
+
timer->expires = expires;
/*
 * If 'idx' was calculated above and the base time did not advance




[PATCH 4.9 72/75] nohz: Prevent a timer interrupt storm in tick_nohz_stop_sched_tick()

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Thomas Gleixner 

commit 5d62c183f9e9df1deeea0906d099a94e8a43047a upstream.

The conditions in irq_exit() to invoke tick_nohz_irq_exit() which
subsequently invokes tick_nohz_stop_sched_tick() are:

  if ((idle_cpu(cpu) && !need_resched()) || tick_nohz_full_cpu(cpu))

If need_resched() is not set, but a timer softirq is pending then this is
an indication that the softirq code punted and delegated the execution to
ksoftirqd. need_resched() is not true because the current interrupted task
takes precedence over ksoftirqd.

Invoking tick_nohz_irq_exit() in this case can cause an endless loop of
timer interrupts because the timer wheel contains an expired timer, but
softirqs are not yet executed. So it returns an immediate expiry request,
which causes the timer to fire immediately again. Lather, rinse and
repeat

Prevent that by adding a check for a pending timer soft interrupt to the
conditions in tick_nohz_stop_sched_tick() which avoid calling
get_next_timer_interrupt(). That keeps the tick sched timer on the tick and
prevents a repetitive programming of an already expired timer.
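The decision the patch changes can be reduced to a toy predicate: the tick may be stopped only when nothing, including a pending timer softirq, still needs the CPU. The struct, the flag bit value, and the function names below are illustrative stand-ins for the kernel state, not its actual interfaces.

```c
/* Toy model of tick_nohz_stop_sched_tick()'s gating condition. */
#include <stdbool.h>

#define TIMER_SOFTIRQ_BIT 0x02   /* hypothetical pending-softirq bit */

struct cpu_state {
    bool rcu_needs_cpu;
    bool irq_work_needs_cpu;
    unsigned int softirq_pending;   /* models local_softirq_pending() */
};

static bool timer_softirq_pending(const struct cpu_state *s)
{
    return s->softirq_pending & TIMER_SOFTIRQ_BIT;
}

/* After the fix: a pending timer softirq also keeps the periodic tick,
 * so we never ask the timer wheel about an expired-but-unprocessed
 * timer and re-arm the hardware with a minimal delta in a loop. */
static bool can_stop_tick(const struct cpu_state *s)
{
    return !s->rcu_needs_cpu && !s->irq_work_needs_cpu &&
           !timer_softirq_pending(s);
}
```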

Reported-by: Sebastian Siewior 
Signed-off-by: Thomas Gleixner 
Acked-by: Frederic Weisbecker 
Cc: Peter Zijlstra 
Cc: Paul McKenney 
Cc: Anna-Maria Gleixner 
Cc: Sebastian Siewior 
Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1712272156050.2431@nanos
Signed-off-by: Greg Kroah-Hartman 

---
 kernel/time/tick-sched.c |   19 +--
 1 file changed, 17 insertions(+), 2 deletions(-)

--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -663,6 +663,11 @@ static void tick_nohz_restart(struct tic
tick_program_event(hrtimer_get_expires(>sched_timer), 1);
 }
 
+static inline bool local_timer_softirq_pending(void)
+{
+   return local_softirq_pending() & TIMER_SOFTIRQ;
+}
+
 static ktime_t tick_nohz_stop_sched_tick(struct tick_sched *ts,
 ktime_t now, int cpu)
 {
@@ -679,8 +684,18 @@ static ktime_t tick_nohz_stop_sched_tick
} while (read_seqretry(_lock, seq));
ts->last_jiffies = basejiff;
 
-   if (rcu_needs_cpu(basemono, _rcu) ||
-   arch_needs_cpu() || irq_work_needs_cpu()) {
+   /*
+* Keep the periodic tick, when RCU, architecture or irq_work
+* requests it.
+* Aside of that check whether the local timer softirq is
+* pending. If so its a bad idea to call get_next_timer_interrupt()
+* because there is an already expired timer, so it will request
+* immeditate expiry, which rearms the hardware timer with a
+* minimal delta which brings us back to this place
+* immediately. Lather, rinse and repeat...
+*/
+   if (rcu_needs_cpu(basemono, _rcu) || arch_needs_cpu() ||
+   irq_work_needs_cpu() || local_timer_softirq_pending()) {
next_tick = basemono + TICK_NSEC;
} else {
/*




[PATCH 4.9 33/75] tcp_bbr: record "full bw reached" decision in new full_bw_reached bit

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Neal Cardwell 


[ Upstream commit c589e69b508d29ed8e644dfecda453f71c02ec27 ]

This commit records the "full bw reached" decision in a new
full_bw_reached bit. This is a pure refactor that does not change the
current behavior, but enables subsequent fixes and improvements.

In particular, this enables simple and clean fixes because the full_bw
and full_bw_cnt can be unconditionally zeroed without worrying about
forgetting that we estimated we filled the pipe in Startup. And it
enables future improvements because multiple code paths can be used
for estimating that we filled the pipe in Startup; any new code paths
only need to set this bit when they think the pipe is full.

Note that this fix intentionally reduces the width of the full_bw_cnt
counter, since we have never used the most significant bit.
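The refactor can be sketched as follows: latch the decision into a dedicated bit so the counter can later be zeroed without forgetting that the pipe was filled. The field widths mirror the patch; the threshold value and the `note_flat_round()` helper are illustrative simplifications of `bbr_check_full_bw_reached()`.

```c
/* Sketch of the refactor: full_bw_reached is a latched one-way bit,
 * no longer re-derived from the (now narrower) counter. */
#include <stdbool.h>
#include <stdint.h>

#define FULL_BW_CNT_THRESH 3      /* models bbr_full_bw_cnt */

struct bbr_state {
    uint32_t full_bw;             /* best bandwidth sample seen so far */
    uint32_t full_bw_reached : 1; /* latched "pipe filled in Startup" bit */
    uint32_t full_bw_cnt     : 2; /* rounds without large bw gains */
};

static bool full_bw_reached(const struct bbr_state *b)
{
    return b->full_bw_reached;    /* was: full_bw_cnt >= threshold */
}

/* Call once per round that showed no large bandwidth gain. */
static void note_flat_round(struct bbr_state *b)
{
    if (b->full_bw_reached)
        return;
    ++b->full_bw_cnt;
    b->full_bw_reached = b->full_bw_cnt >= FULL_BW_CNT_THRESH;
}
```

This is exactly why the commit message calls it enabling: `full_bw` and `full_bw_cnt` can be reset unconditionally while the latched bit preserves the decision.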

Signed-off-by: Neal Cardwell 
Reviewed-by: Yuchung Cheng 
Acked-by: Soheil Hassas Yeganeh 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/ipv4/tcp_bbr.c |7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

--- a/net/ipv4/tcp_bbr.c
+++ b/net/ipv4/tcp_bbr.c
@@ -81,7 +81,8 @@ struct bbr {
u32 lt_last_lost;/* LT intvl start: tp->lost */
u32 pacing_gain:10, /* current gain for setting pacing rate */
cwnd_gain:10,   /* current gain for setting cwnd */
-   full_bw_cnt:3,  /* number of rounds without large bw gains */
+   full_bw_reached:1,   /* reached full bw in Startup? */
+   full_bw_cnt:2,  /* number of rounds without large bw gains */
cycle_idx:3,/* current index in pacing_gain cycle array */
has_seen_rtt:1, /* have we seen an RTT sample yet? */
unused_b:5;
@@ -151,7 +152,7 @@ static bool bbr_full_bw_reached(const st
 {
const struct bbr *bbr = inet_csk_ca(sk);
 
-   return bbr->full_bw_cnt >= bbr_full_bw_cnt;
+   return bbr->full_bw_reached;
 }
 
 /* Return the windowed max recent bandwidth sample, in pkts/uS << BW_SCALE. */
@@ -688,6 +689,7 @@ static void bbr_check_full_bw_reached(st
return;
}
++bbr->full_bw_cnt;
+   bbr->full_bw_reached = bbr->full_bw_cnt >= bbr_full_bw_cnt;
 }
 
 /* If pipe is probably full, drain the queue and then enter steady-state. */
@@ -821,6 +823,7 @@ static void bbr_init(struct sock *sk)
bbr->restore_cwnd = 0;
bbr->round_start = 0;
bbr->idle_restart = 0;
+   bbr->full_bw_reached = 0;
bbr->full_bw = 0;
bbr->full_bw_cnt = 0;
bbr->cycle_mstamp.v64 = 0;




[PATCH 4.9 73/75] x86/smpboot: Remove stale TLB flush invocations

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Thomas Gleixner 

commit 322f8b8b340c824aef891342b0f5795d15e11562 upstream.

smpboot_setup_warm_reset_vector() and smpboot_restore_warm_reset_vector()
invoke local_flush_tlb() for no obvious reason.

Digging in history revealed that the original code in the 2.1 era added
those because the code manipulated a swapper_pg_dir pagetable entry. The
pagetable manipulation was removed long ago in the 2.3 timeframe, but the
TLB flush invocations stayed around forever.

Remove them along with the pointless pr_debug()s which come from the same 2.1
change.

Reported-by: Dominik Brodowski 
Signed-off-by: Thomas Gleixner 
Cc: Andy Lutomirski 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: Linus Torvalds 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Link: http://lkml.kernel.org/r/20171230211829.586548...@linutronix.de
Signed-off-by: Ingo Molnar 
Signed-off-by: Greg Kroah-Hartman 

---
 arch/x86/kernel/smpboot.c |9 -
 1 file changed, 9 deletions(-)

--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -115,14 +115,10 @@ static inline void smpboot_setup_warm_re
spin_lock_irqsave(_lock, flags);
CMOS_WRITE(0xa, 0xf);
spin_unlock_irqrestore(_lock, flags);
-   local_flush_tlb();
-   pr_debug("1.\n");
*((volatile unsigned short *)phys_to_virt(TRAMPOLINE_PHYS_HIGH)) =
start_eip >> 4;
-   pr_debug("2.\n");
*((volatile unsigned short *)phys_to_virt(TRAMPOLINE_PHYS_LOW)) =
start_eip & 0xf;
-   pr_debug("3.\n");
 }
 
 static inline void smpboot_restore_warm_reset_vector(void)
@@ -130,11 +126,6 @@ static inline void smpboot_restore_warm_
unsigned long flags;
 
/*
-* Install writable page 0 entry to set BIOS data area.
-*/
-   local_flush_tlb();
-
-   /*
 * Paranoid:  Set warm reset code and vector here back
 * to default values.
 */




[PATCH 4.9 71/75] timers: Reinitialize per cpu bases on hotplug

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Thomas Gleixner 

commit 26456f87aca7157c057de65c9414b37f1ab881d1 upstream.

The timer wheel bases are not (re)initialized on CPU hotplug. That leaves
them with a potentially stale clk and next_expiry value, which can cause
trouble when the CPU is plugged in again.

Add a prepare callback which forwards the clock, sets next_expiry to far in
the future and resets the control flags to a known state.

Set base->must_forward_clk so the first timer which is queued will try to
forward the clock to current jiffies.

Fixes: 500462a9de65 ("timers: Switch to a non-cascading wheel")
Reported-by: Paul E. McKenney 
Signed-off-by: Thomas Gleixner 
Cc: Peter Zijlstra 
Cc: Frederic Weisbecker 
Cc: Sebastian Siewior 
Cc: Anna-Maria Gleixner 
Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1712272152200.2431@nanos
Signed-off-by: Greg Kroah-Hartman 

---
 include/linux/cpuhotplug.h |2 +-
 include/linux/timer.h  |4 +++-
 kernel/cpu.c   |4 ++--
 kernel/time/timer.c|   15 +++
 4 files changed, 21 insertions(+), 4 deletions(-)

--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -48,7 +48,7 @@ enum cpuhp_state {
CPUHP_ARM_SHMOBILE_SCU_PREPARE,
CPUHP_SH_SH3X_PREPARE,
CPUHP_BLK_MQ_PREPARE,
-   CPUHP_TIMERS_DEAD,
+   CPUHP_TIMERS_PREPARE,
CPUHP_NOTF_ERR_INJ_PREPARE,
CPUHP_MIPS_SOC_PREPARE,
CPUHP_BRINGUP_CPU,
--- a/include/linux/timer.h
+++ b/include/linux/timer.h
@@ -274,9 +274,11 @@ unsigned long round_jiffies_up(unsigned
 unsigned long round_jiffies_up_relative(unsigned long j);
 
 #ifdef CONFIG_HOTPLUG_CPU
+int timers_prepare_cpu(unsigned int cpu);
 int timers_dead_cpu(unsigned int cpu);
 #else
-#define timers_dead_cpu NULL
+#define timers_prepare_cpu NULL
+#define timers_dead_cpuNULL
 #endif
 
 #endif
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -1309,9 +1309,9 @@ static struct cpuhp_step cpuhp_bp_states
 * before blk_mq_queue_reinit_notify() from notify_dead(),
 * otherwise a RCU stall occurs.
 */
-   [CPUHP_TIMERS_DEAD] = {
+   [CPUHP_TIMERS_PREPARE] = {
.name   = "timers:dead",
-   .startup.single = NULL,
+   .startup.single = timers_prepare_cpu,
.teardown.single= timers_dead_cpu,
},
/* Kicks the plugged cpu into life */
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -1851,6 +1851,21 @@ static void migrate_timer_list(struct ti
}
 }
 
+int timers_prepare_cpu(unsigned int cpu)
+{
+   struct timer_base *base;
+   int b;
+
+   for (b = 0; b < NR_BASES; b++) {
+   base = per_cpu_ptr(&timer_bases[b], cpu);
+   base->clk = jiffies;
+   base->next_expiry = base->clk + NEXT_TIMER_MAX_DELTA;
+   base->is_idle = false;
+   base->must_forward_clk = true;
+   }
+   return 0;
+}
+
 int timers_dead_cpu(unsigned int cpu)
 {
struct timer_base *old_base;




[PATCH 4.9 69/75] timers: Use deferrable base independent of base::nohz_active

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Anna-Maria Gleixner 

commit ced6d5c11d3e7b342f1a80f908e6756ebd4b8ddd upstream.

During boot and before base::nohz_active is set in the timer bases, deferrable
timers are enqueued into the standard timer base. This works correctly as
long as base::nohz_active is false.

Once base::nohz_active is set and a timer which was enqueued before that
is accessed, the lock selector code chooses the lock of the deferrable
base. This causes unlocked access to the standard base, and in case the
timer is removed it does not clear the pending flag in the standard base
bitmap, which causes get_next_timer_interrupt() to return bogus values.

To prevent that, the deferrable timers must be enqueued in the deferrable
base, even when base::nohz_active is not set. Those deferrable timers also
need to be expired unconditionally.

Fixes: 500462a9de65 ("timers: Switch to a non-cascading wheel")
Signed-off-by: Anna-Maria Gleixner 
Signed-off-by: Thomas Gleixner 
Reviewed-by: Frederic Weisbecker 
Cc: Peter Zijlstra 
Cc: Sebastian Siewior 
Cc: r...@linutronix.de
Cc: Paul McKenney 
Link: https://lkml.kernel.org/r/20171222145337.633328...@linutronix.de
Signed-off-by: Greg Kroah-Hartman 

---
 kernel/time/timer.c |   16 +++-
 1 file changed, 7 insertions(+), 9 deletions(-)

--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -849,11 +849,10 @@ static inline struct timer_base *get_tim
	struct timer_base *base = per_cpu_ptr(&timer_bases[BASE_STD], cpu);
 
/*
-* If the timer is deferrable and nohz is active then we need to use
-* the deferrable base.
+* If the timer is deferrable and NO_HZ_COMMON is set then we need
+* to use the deferrable base.
 */
-   if (IS_ENABLED(CONFIG_NO_HZ_COMMON) && base->nohz_active &&
-   (tflags & TIMER_DEFERRABLE))
+   if (IS_ENABLED(CONFIG_NO_HZ_COMMON) && (tflags & TIMER_DEFERRABLE))
	base = per_cpu_ptr(&timer_bases[BASE_DEF], cpu);
return base;
 }
@@ -863,11 +862,10 @@ static inline struct timer_base *get_tim
	struct timer_base *base = this_cpu_ptr(&timer_bases[BASE_STD]);
 
/*
-* If the timer is deferrable and nohz is active then we need to use
-* the deferrable base.
+* If the timer is deferrable and NO_HZ_COMMON is set then we need
+* to use the deferrable base.
 */
-   if (IS_ENABLED(CONFIG_NO_HZ_COMMON) && base->nohz_active &&
-   (tflags & TIMER_DEFERRABLE))
+   if (IS_ENABLED(CONFIG_NO_HZ_COMMON) && (tflags & TIMER_DEFERRABLE))
	base = this_cpu_ptr(&timer_bases[BASE_DEF]);
return base;
 }
@@ -1684,7 +1682,7 @@ static __latent_entropy void run_timer_s
base->must_forward_clk = false;
 
__run_timers(base);
-   if (IS_ENABLED(CONFIG_NO_HZ_COMMON) && base->nohz_active)
+   if (IS_ENABLED(CONFIG_NO_HZ_COMMON))
	__run_timers(this_cpu_ptr(&timer_bases[BASE_DEF]));
 }
 




[PATCH 4.9 67/75] USB: Fix off by one in type-specific length check of BOS SSP capability

2018-01-01 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Mathias Nyman 

commit 07b9f12864d16c3a861aef4817eb1efccbc5d0e6 upstream.

USB 3.1 devices are not detected as 3.1 capable since 4.15-rc3 due to an
off-by-one in commit 81cf4a45360f ("USB: core: Add type-specific length
check of BOS descriptors").

It uses USB_DT_USB_SSP_CAP_SIZE() to get SSP capability size which takes
the zero based SSAC as argument, not the actual count of sublink speed
attributes.

USB3 spec 9.6.2.5 says "The number of Sublink Speed Attributes = SSAC + 1."

The type-specific length check patch was added to stable and needs to be
fixed there as well.

Fixes: 81cf4a45360f ("USB: core: Add type-specific length check of BOS descriptors")
CC: Masakazu Mokuno 
Signed-off-by: Mathias Nyman 
Signed-off-by: Greg Kroah-Hartman 

---
 drivers/usb/core/config.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/usb/core/config.c
+++ b/drivers/usb/core/config.c
@@ -1002,7 +1002,7 @@ int usb_get_bos_descriptor(struct usb_de
case USB_SSP_CAP_TYPE:
ssp_cap = (struct usb_ssp_cap_descriptor *)buffer;
ssac = (le32_to_cpu(ssp_cap->bmAttributes) &
-   USB_SSP_SUBLINK_SPEED_ATTRIBS) + 1;
+   USB_SSP_SUBLINK_SPEED_ATTRIBS);
if (length >= USB_DT_USB_SSP_CAP_SIZE(ssac))
dev->bos->ssp_cap = ssp_cap;
break;




[PATCH 4.14 010/146] x86/mm/pti: Allow NX poison to be set in p4d/pgd

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Dave Hansen 

commit 1c4de1ff4fe50453b968579ee86fac3da80dd783 upstream.

With PAGE_TABLE_ISOLATION the user portion of the kernel page tables is
poisoned with the NX bit so if the entry code exits with the kernel page
tables selected in CR3, userspace crashes.

But doing so trips the p4d/pgd_bad() checks.  Make sure it does not do
that.

Signed-off-by: Dave Hansen 
Signed-off-by: Thomas Gleixner 
Reviewed-by: Borislav Petkov 
Cc: Andy Lutomirski 
Cc: Boris Ostrovsky 
Cc: Borislav Petkov 
Cc: Brian Gerst 
Cc: David Laight 
Cc: Denys Vlasenko 
Cc: Eduardo Valentin 
Cc: Greg KH 
Cc: H. Peter Anvin 
Cc: Josh Poimboeuf 
Cc: Juergen Gross 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Will Deacon 
Cc: aligu...@amazon.com
Cc: daniel.gr...@iaik.tugraz.at
Cc: hu...@google.com
Cc: keesc...@google.com
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar 
Signed-off-by: Greg Kroah-Hartman 

---
 arch/x86/include/asm/pgtable.h |   14 --
 1 file changed, 12 insertions(+), 2 deletions(-)

--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -846,7 +846,12 @@ static inline pud_t *pud_offset(p4d_t *p
 
 static inline int p4d_bad(p4d_t p4d)
 {
-   return (p4d_flags(p4d) & ~(_KERNPG_TABLE | _PAGE_USER)) != 0;
+   unsigned long ignore_flags = _KERNPG_TABLE | _PAGE_USER;
+
+   if (IS_ENABLED(CONFIG_PAGE_TABLE_ISOLATION))
+   ignore_flags |= _PAGE_NX;
+
+   return (p4d_flags(p4d) & ~ignore_flags) != 0;
 }
 #endif  /* CONFIG_PGTABLE_LEVELS > 3 */
 
@@ -880,7 +885,12 @@ static inline p4d_t *p4d_offset(pgd_t *p
 
 static inline int pgd_bad(pgd_t pgd)
 {
-   return (pgd_flags(pgd) & ~_PAGE_USER) != _KERNPG_TABLE;
+   unsigned long ignore_flags = _PAGE_USER;
+
+   if (IS_ENABLED(CONFIG_PAGE_TABLE_ISOLATION))
+   ignore_flags |= _PAGE_NX;
+
+   return (pgd_flags(pgd) & ~ignore_flags) != _KERNPG_TABLE;
 }
 
 static inline int pgd_none(pgd_t pgd)




[PATCH 4.14 000/146] 4.14.11-stable review

2018-01-01 Thread Greg Kroah-Hartman
This is the start of the stable review cycle for the 4.14.11 release.
There are 146 patches in this series, all will be posted as a response
to this one.  If anyone has any issues with these being applied, please
let me know.

Responses should be made by Wed Jan  3 14:00:12 UTC 2018.
Anything received after that time might be too late.

The whole patch series can be found in one patch at:
kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.14.11-rc1.gz
or in the git tree and branch at:
  git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git 
linux-4.14.y
and the diffstat can be found below.

thanks,

greg k-h

-
Pseudo-Shortlog of commits:

Greg Kroah-Hartman 
Linux 4.14.11-rc1

Johan Hovold 
tty: fix tty_ldisc_receive_buf() documentation

Linus Torvalds 
n_tty: fix EXTPROC vs ICANON interaction with TIOCINQ (aka FIONREAD)

Thomas Gleixner 
x86/ldt: Make LDT pgtable free conditional

Thomas Gleixner 
x86/ldt: Plug memory leak in error path

Andy Lutomirski 
x86/espfix/64: Fix espfix double-fault handling on 5-level systems

Linus Torvalds 
x86-32: Fix kexec with stack canary (CONFIG_CC_STACKPROTECTOR)

Thomas Gleixner 
x86/mm: Remove preempt_disable/enable() from __native_flush_tlb()

Thomas Gleixner 
x86/smpboot: Remove stale TLB flush invocations

Thomas Gleixner 
nohz: Prevent a timer interrupt storm in tick_nohz_stop_sched_tick()

Sushmita Susheelendra 
staging: android: ion: Fix dma direction for dma_sync_sg_for_cpu/device

Sudeep Holla 
drivers: base: cacheinfo: fix cache type for non-architected system cache

Johan Hovold 
phy: tegra: fix device-tree node lookups

Todd Kjos 
binder: fix proc->files use-after-free

Thomas Gleixner 
timers: Reinitialize per cpu bases on hotplug

Thomas Gleixner 
timers: Invoke timer_start_debug() where it makes sense

Anna-Maria Gleixner 
timers: Use deferrable base independent of base::nohz_active

Daniel Thompson 
usb: xhci: Add XHCI_TRUST_TX_LENGTH for Renesas uPD720201

Mathias Nyman 
USB: Fix off by one in type-specific length check of BOS SSP capability

Oliver Neukum 
usb: add RESET_RESUME for ELSA MicroLink 56K

Dmitry Fleytman Dmitry Fleytman 
usb: Add device quirk for Logitech HD Pro Webcam C925e

SZ Lin (林上智) 
USB: serial: option: adding support for YUGA CLM920-NC5

Daniele Palmas 
USB: serial: option: add support for Telit ME910 PID 0x1101

Reinhard Speyerer 
USB: serial: qcserial: add Sierra Wireless EM7565

Max Schulze 
USB: serial: ftdi_sio: add id for Airbus DS P8GR

Johan Hovold 
USB: chipidea: msm: fix ulpi-node lookup

Shuah Khan 
usbip: vhci: stop printing kernel pointer addresses in messages

Shuah Khan 
usbip: stub: stop printing kernel pointer addresses in messages

Shuah Khan 
usbip: prevent leaking socket pointer address in messages

Juan Zea 
usbip: fix usbip bind writing random string after command in match_busid

Jan Engelhardt 
sparc64: repair calling incorrect hweight function from stubs

Willem de Bruijn 
skbuff: in skb_copy_ubufs unclone before releasing zerocopy

Willem de Bruijn 
skbuff: skb_copy_ubufs must release uarg even without user frags

Willem de Bruijn 
skbuff: orphan frags before zerocopy clone

Saeed Mahameed 
Revert "mlx5: move affinity hints assignments to generic code"

Nicolas Dichtel 
ipv6: set all.accept_dad to 0 by default

Phil Sutter 
ipv4: fib: Fix metrics match when deleting a route

Russell King 
phylink: ensure AN is enabled

Russell King 
phylink: ensure the PHY interface mode is appropriately set

Calvin Owens 
bnxt_en: Fix sources of spurious netpoll warnings

Jiri Pirko 
net: sched: fix static key imbalance in case of ingress/clsact_init error

Alexey Kodanev 
vxlan: restore dev->mtu setting based on lower device

Kamal Heib 
net/mlx5: FPGA, return -EINVAL if size is zero

Eric Dumazet 
tcp: refresh tcp_mstamp from timers callbacks

Ido Schimmel 
ipv6: Honor specified parameters in fibmatch lookup

Zhao Qiang 
net: phy: marvell: Limit 88m1101 autoneg errata to 88E1145 as well.

Wei Wang 
tcp: fix potential underestimation on rcv_rtt

Yuval Mintz 
mlxsw: spectrum: Disable MAC learning for ovs port

Parthasarathy Bhuvaragan 
tipc: fix hanging poll() for stream sockets

Xin Long 
sctp: make sure stream nums can match optlen in 
sctp_setsockopt_reset_streams

Julian Wiedmann 
s390/qeth: fix error handling in checksum cmd callback

Florian Fainelli 
net: dsa: bcm_sf2: Clear IDDQ_GLOBAL_PWR bit for PHY

Bert Kenward 
sfc: pass valid pointers from efx_enqueue_unwind

Eric Garver 
openvswitch: Fix pop_vlan action for double tagged frames

Moni Shoua 
net/mlx5: Fix error flow in CREATE_QP command

Gal Pressman 
net/mlx5e: Prevent possible races in VXLAN control flow

Gal Pressman 
net/mlx5e: Add refcount to VXLAN structure

Gal Pressman 

[PATCH 4.14 015/146] x86/mm/pti: Share cpu_entry_area with user space page tables

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Andy Lutomirski 

commit f7cfbee91559ca7e3e961a00ffac921208a115ad upstream.

Share the cpu entry area so the user space and kernel space page tables
have the same P4D page.

Signed-off-by: Andy Lutomirski 
Signed-off-by: Thomas Gleixner 
Cc: Boris Ostrovsky 
Cc: Borislav Petkov 
Cc: Brian Gerst 
Cc: Dave Hansen 
Cc: David Laight 
Cc: Denys Vlasenko 
Cc: Eduardo Valentin 
Cc: Greg KH 
Cc: H. Peter Anvin 
Cc: Josh Poimboeuf 
Cc: Juergen Gross 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Will Deacon 
Cc: aligu...@amazon.com
Cc: daniel.gr...@iaik.tugraz.at
Cc: hu...@google.com
Cc: keesc...@google.com
Signed-off-by: Ingo Molnar 
Signed-off-by: Greg Kroah-Hartman 

---
 arch/x86/mm/pti.c |   25 +
 1 file changed, 25 insertions(+)

--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -265,6 +265,29 @@ pti_clone_pmds(unsigned long start, unsi
 }
 
 /*
+ * Clone a single p4d (i.e. a top-level entry on 4-level systems and a
+ * next-level entry on 5-level systems).
+ */
+static void __init pti_clone_p4d(unsigned long addr)
+{
+   p4d_t *kernel_p4d, *user_p4d;
+   pgd_t *kernel_pgd;
+
+   user_p4d = pti_user_pagetable_walk_p4d(addr);
+   kernel_pgd = pgd_offset_k(addr);
+   kernel_p4d = p4d_offset(kernel_pgd, addr);
+   *user_p4d = *kernel_p4d;
+}
+
+/*
+ * Clone the CPU_ENTRY_AREA into the user space visible page table.
+ */
+static void __init pti_clone_user_shared(void)
+{
+   pti_clone_p4d(CPU_ENTRY_AREA_BASE);
+}
+
+/*
  * Initialize kernel page table isolation
  */
 void __init pti_init(void)
@@ -273,4 +296,6 @@ void __init pti_init(void)
return;
 
pr_info("enabled\n");
+
+   pti_clone_user_shared();
 }




[PATCH 4.14 017/146] x86/mm/pti: Share entry text PMD

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Thomas Gleixner 

commit 6dc72c3cbca0580642808d677181cad4c6433893 upstream.

Share the entry text PMD of the kernel mapping with the user space
mapping. If large pages are enabled this is a single PMD entry and at the
point where it is copied into the user page table the RW bit has not been
cleared yet. Clear it right away so the user space visible map becomes RX.

Signed-off-by: Thomas Gleixner 
Cc: Andy Lutomirski 
Cc: Boris Ostrovsky 
Cc: Borislav Petkov 
Cc: Brian Gerst 
Cc: Dave Hansen 
Cc: David Laight 
Cc: Denys Vlasenko 
Cc: Eduardo Valentin 
Cc: Greg KH 
Cc: H. Peter Anvin 
Cc: Josh Poimboeuf 
Cc: Juergen Gross 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Will Deacon 
Cc: aligu...@amazon.com
Cc: daniel.gr...@iaik.tugraz.at
Cc: hu...@google.com
Cc: keesc...@google.com
Signed-off-by: Ingo Molnar 
Signed-off-by: Greg Kroah-Hartman 

---
 arch/x86/mm/pti.c |   10 ++
 1 file changed, 10 insertions(+)

--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -288,6 +288,15 @@ static void __init pti_clone_user_shared
 }
 
 /*
+ * Clone the populated PMDs of the entry and irqentry text and force it RO.
+ */
+static void __init pti_clone_entry_text(void)
+{
+   pti_clone_pmds((unsigned long) __entry_text_start,
+   (unsigned long) __irqentry_text_end, _PAGE_RW);
+}
+
+/*
  * Initialize kernel page table isolation
  */
 void __init pti_init(void)
@@ -298,4 +307,5 @@ void __init pti_init(void)
pr_info("enabled\n");
 
pti_clone_user_shared();
+   pti_clone_entry_text();
 }




[PATCH 4.14 016/146] x86/entry: Align entry text section to PMD boundary

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Thomas Gleixner 

commit 2f7412ba9c6af5ab16bdbb4a3fdb1dcd2b4fd3c2 upstream.

The (irq)entry text must be visible in the user space page tables. To allow
simple PMD based sharing, make the entry text PMD aligned.

Signed-off-by: Thomas Gleixner 
Cc: Andy Lutomirski 
Cc: Boris Ostrovsky 
Cc: Borislav Petkov 
Cc: Brian Gerst 
Cc: Dave Hansen 
Cc: David Laight 
Cc: Denys Vlasenko 
Cc: Eduardo Valentin 
Cc: Greg KH 
Cc: H. Peter Anvin 
Cc: Josh Poimboeuf 
Cc: Juergen Gross 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Will Deacon 
Cc: aligu...@amazon.com
Cc: daniel.gr...@iaik.tugraz.at
Cc: hu...@google.com
Cc: keesc...@google.com
Signed-off-by: Ingo Molnar 
Signed-off-by: Greg Kroah-Hartman 

---
 arch/x86/kernel/vmlinux.lds.S |8 
 1 file changed, 8 insertions(+)

--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -61,11 +61,17 @@ jiffies_64 = jiffies;
. = ALIGN(HPAGE_SIZE);  \
__end_rodata_hpage_align = .;
 
+#define ALIGN_ENTRY_TEXT_BEGIN . = ALIGN(PMD_SIZE);
+#define ALIGN_ENTRY_TEXT_END   . = ALIGN(PMD_SIZE);
+
 #else
 
 #define X64_ALIGN_RODATA_BEGIN
 #define X64_ALIGN_RODATA_END
 
+#define ALIGN_ENTRY_TEXT_BEGIN
+#define ALIGN_ENTRY_TEXT_END
+
 #endif
 
 PHDRS {
@@ -102,8 +108,10 @@ SECTIONS
CPUIDLE_TEXT
LOCK_TEXT
KPROBES_TEXT
+   ALIGN_ENTRY_TEXT_BEGIN
ENTRY_TEXT
IRQENTRY_TEXT
+   ALIGN_ENTRY_TEXT_END
SOFTIRQENTRY_TEXT
*(.fixup)
*(.gnu.warning)




[PATCH 4.14 014/146] x86/mm/pti: Force entry through trampoline when PTI active

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Thomas Gleixner 

commit 8d4b067895791ab9fdb1aadfc505f64d71239dd2 upstream.

Force the entry through the trampoline only when PTI is active. Otherwise
go through the normal entry code.

Signed-off-by: Thomas Gleixner 
Reviewed-by: Borislav Petkov 
Cc: Andy Lutomirski 
Cc: Boris Ostrovsky 
Cc: Borislav Petkov 
Cc: Brian Gerst 
Cc: Dave Hansen 
Cc: David Laight 
Cc: Denys Vlasenko 
Cc: Eduardo Valentin 
Cc: Greg KH 
Cc: H. Peter Anvin 
Cc: Josh Poimboeuf 
Cc: Juergen Gross 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Will Deacon 
Cc: aligu...@amazon.com
Cc: daniel.gr...@iaik.tugraz.at
Cc: hu...@google.com
Cc: keesc...@google.com
Signed-off-by: Ingo Molnar 
Signed-off-by: Greg Kroah-Hartman 

---
 arch/x86/kernel/cpu/common.c |5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1339,7 +1339,10 @@ void syscall_init(void)
(entry_SYSCALL_64_trampoline - _entry_trampoline);
 
wrmsr(MSR_STAR, 0, (__USER32_CS << 16) | __KERNEL_CS);
-   wrmsrl(MSR_LSTAR, SYSCALL64_entry_trampoline);
+   if (static_cpu_has(X86_FEATURE_PTI))
+   wrmsrl(MSR_LSTAR, SYSCALL64_entry_trampoline);
+   else
+   wrmsrl(MSR_LSTAR, (unsigned long)entry_SYSCALL_64);
 
 #ifdef CONFIG_IA32_EMULATION
wrmsrl(MSR_CSTAR, (unsigned long)entry_SYSCALL_compat);




[PATCH 4.14 018/146] x86/mm/pti: Map ESPFIX into user space

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Andy Lutomirski 

commit 4b6bbe95b87966ba08999574db65c93c5e925a36 upstream.

Map the ESPFIX pages into user space when PTI is enabled.

Signed-off-by: Andy Lutomirski 
Signed-off-by: Thomas Gleixner 
Cc: Borislav Petkov 
Cc: Brian Gerst 
Cc: Dave Hansen 
Cc: David Laight 
Cc: H. Peter Anvin 
Cc: Josh Poimboeuf 
Cc: Juergen Gross 
Cc: Kees Cook 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Signed-off-by: Ingo Molnar 
Signed-off-by: Greg Kroah-Hartman 

---
 arch/x86/mm/pti.c |   11 +++
 1 file changed, 11 insertions(+)

--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -288,6 +288,16 @@ static void __init pti_clone_user_shared
 }
 
 /*
+ * Clone the ESPFIX P4D into the user space visible page table
+ */
+static void __init pti_setup_espfix64(void)
+{
+#ifdef CONFIG_X86_ESPFIX64
+   pti_clone_p4d(ESPFIX_BASE_ADDR);
+#endif
+}
+
+/*
  * Clone the populated PMDs of the entry and irqentry text and force it RO.
  */
 static void __init pti_clone_entry_text(void)
@@ -308,4 +318,5 @@ void __init pti_init(void)
 
pti_clone_user_shared();
pti_clone_entry_text();
+   pti_setup_espfix64();
 }




[PATCH 4.14 012/146] x86/mm/pti: Populate user PGD

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Dave Hansen 

commit fc2fbc8512ed08d1de7720936fd7d2e4ce02c3a2 upstream.

In clone_pgd_range() copy the init user PGDs which cover the kernel half of
the address space, so a process has all the required kernel mappings
visible.

[ tglx: Split out from the big kaiser dump and folded Andys simplification ]

Signed-off-by: Dave Hansen 
Signed-off-by: Thomas Gleixner 
Reviewed-by: Borislav Petkov 
Cc: Andy Lutomirski 
Cc: Boris Ostrovsky 
Cc: Borislav Petkov 
Cc: Brian Gerst 
Cc: David Laight 
Cc: Denys Vlasenko 
Cc: Eduardo Valentin 
Cc: Greg KH 
Cc: H. Peter Anvin 
Cc: Josh Poimboeuf 
Cc: Juergen Gross 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Will Deacon 
Cc: aligu...@amazon.com
Cc: daniel.gr...@iaik.tugraz.at
Cc: hu...@google.com
Cc: keesc...@google.com
Signed-off-by: Ingo Molnar 
Signed-off-by: Greg Kroah-Hartman 

---
 arch/x86/include/asm/pgtable.h |9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1125,7 +1125,14 @@ static inline int pud_write(pud_t pud)
  */
 static inline void clone_pgd_range(pgd_t *dst, pgd_t *src, int count)
 {
-   memcpy(dst, src, count * sizeof(pgd_t));
+   memcpy(dst, src, count * sizeof(pgd_t));
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+   if (!static_cpu_has(X86_FEATURE_PTI))
+   return;
+   /* Clone the user space pgd as well */
+   memcpy(kernel_to_user_pgdp(dst), kernel_to_user_pgdp(src),
+  count * sizeof(pgd_t));
+#endif
 }
 
 #define PTE_SHIFT ilog2(PTRS_PER_PTE)




[PATCH 4.14 019/146] x86/cpu_entry_area: Add debugstore entries to cpu_entry_area

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Thomas Gleixner 

commit 10043e02db7f8a4161f76434931051e7d797a5f6 upstream.

The Intel PEBS/BTS debug store is a design trainwreck as it expects virtual
addresses which must be visible in any execution context.

So it is required to make these mappings visible to user space when kernel
page table isolation is active.

Provide enough room for the buffer mappings in the cpu_entry_area so the
buffers are available in the user space visible page tables.

At the point where the kernel side entry area is populated there is no
buffer available yet, but the kernel PMD must be populated. To achieve this
set the entries for these buffers to non present.

Signed-off-by: Thomas Gleixner 
Cc: Andy Lutomirski 
Cc: Boris Ostrovsky 
Cc: Borislav Petkov 
Cc: Brian Gerst 
Cc: Dave Hansen 
Cc: David Laight 
Cc: Denys Vlasenko 
Cc: Eduardo Valentin 
Cc: Greg KH 
Cc: H. Peter Anvin 
Cc: Josh Poimboeuf 
Cc: Juergen Gross 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Will Deacon 
Cc: aligu...@amazon.com
Cc: daniel.gr...@iaik.tugraz.at
Cc: hu...@google.com
Cc: keesc...@google.com
Signed-off-by: Ingo Molnar 
Signed-off-by: Greg Kroah-Hartman 

---
 arch/x86/events/intel/ds.c|5 ++--
 arch/x86/events/perf_event.h  |   21 +--
 arch/x86/include/asm/cpu_entry_area.h |   13 
 arch/x86/include/asm/intel_ds.h   |   36 ++
 arch/x86/mm/cpu_entry_area.c  |   27 +
 5 files changed, 81 insertions(+), 21 deletions(-)

--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -8,11 +8,12 @@
 
 #include "../perf_event.h"
 
+/* Waste a full page so it can be mapped into the cpu_entry_area */
+DEFINE_PER_CPU_PAGE_ALIGNED(struct debug_store, cpu_debug_store);
+
 /* The size of a BTS record in bytes: */
 #define BTS_RECORD_SIZE24
 
-#define BTS_BUFFER_SIZE(PAGE_SIZE << 4)
-#define PEBS_BUFFER_SIZE   (PAGE_SIZE << 4)
 #define PEBS_FIXUP_SIZEPAGE_SIZE
 
 /*
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -14,6 +14,8 @@
 
 #include <linux/perf_event.h>
 
+#include <asm/intel_ds.h>
+
 /* To enable MSR tracing please use the generic trace points. */
 
 /*
@@ -77,8 +79,6 @@ struct amd_nb {
struct event_constraint event_constraints[X86_PMC_IDX_MAX];
 };
 
-/* The maximal number of PEBS events: */
-#define MAX_PEBS_EVENTS8
 #define PEBS_COUNTER_MASK  ((1ULL << MAX_PEBS_EVENTS) - 1)
 
 /*
@@ -95,23 +95,6 @@ struct amd_nb {
PERF_SAMPLE_TRANSACTION | PERF_SAMPLE_PHYS_ADDR | \
PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER)
 
-/*
- * A debug store configuration.
- *
- * We only support architectures that use 64bit fields.
- */
-struct debug_store {
-   u64 bts_buffer_base;
-   u64 bts_index;
-   u64 bts_absolute_maximum;
-   u64 bts_interrupt_threshold;
-   u64 pebs_buffer_base;
-   u64 pebs_index;
-   u64 pebs_absolute_maximum;
-   u64 pebs_interrupt_threshold;
-   u64 pebs_event_reset[MAX_PEBS_EVENTS];
-};
-
 #define PEBS_REGS \
(PERF_REG_X86_AX | \
 PERF_REG_X86_BX | \
--- a/arch/x86/include/asm/cpu_entry_area.h
+++ b/arch/x86/include/asm/cpu_entry_area.h
@@ -5,6 +5,7 @@
 
 #include <linux/percpu-defs.h>
 #include <asm/processor.h>
+#include <asm/intel_ds.h>
 
 /*
  * cpu_entry_area is a percpu region that contains things needed by the CPU
@@ -40,6 +41,18 @@ struct cpu_entry_area {
 */
	char exception_stacks[(N_EXCEPTION_STACKS - 1) * EXCEPTION_STKSZ + DEBUG_STKSZ];
 #endif
+#ifdef CONFIG_CPU_SUP_INTEL
+   /*
+* Per CPU debug store for Intel performance monitoring. Wastes a
+* full page at the moment.
+*/
+   struct debug_store cpu_debug_store;
+   /*
+* The actual PEBS/BTS buffers must be mapped to user space
+* Reserve enough fixmap PTEs.
+*/
+   struct debug_store_buffers cpu_debug_buffers;
+#endif
 };
 
 #define CPU_ENTRY_AREA_SIZE(sizeof(struct cpu_entry_area))
--- /dev/null
+++ b/arch/x86/include/asm/intel_ds.h
@@ -0,0 +1,36 @@
+#ifndef _ASM_INTEL_DS_H
+#define _ASM_INTEL_DS_H
+
+#include <linux/percpu-defs.h>
+
+#define BTS_BUFFER_SIZE(PAGE_SIZE << 4)
+#define PEBS_BUFFER_SIZE   (PAGE_SIZE << 4)
+
+/* The maximal number of PEBS events: */
+#define MAX_PEBS_EVENTS8
+
+/*
+ * A debug store configuration.
+ *
+ * We only support architectures that use 64bit fields.
+ */
+struct debug_store {
+   u64 bts_buffer_base;
+   u64 bts_index;
+   u64 bts_absolute_maximum;
+   u64 bts_interrupt_threshold;
+   u64 pebs_buffer_base;
+   u64 pebs_index;
+   u64 pebs_absolute_maximum;
+   u64 pebs_interrupt_threshold;
+   u64 pebs_event_reset[MAX_PEBS_EVENTS];
+} __aligned(PAGE_SIZE);
+
+DECLARE_PER_CPU_PAGE_ALIGNED(struct debug_store, cpu_debug_store);

[PATCH 4.14 022/146] x86/pti: Put the LDT in its own PGD if PTI is on

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Andy Lutomirski 

commit f55f0501cbf65ec41cca5058513031b711730b1d upstream.

With PTI enabled, the LDT must be mapped in the usermode tables somewhere.
The LDT is per process, i.e. per mm.

An earlier approach mapped the LDT on context switch into a fixmap area,
but that's a big overhead and exhausted the fixmap space when NR_CPUS got
big.

Take advantage of the fact that there is an address space hole which
provides a completely unused pgd. Use this pgd to manage per-mm LDT
mappings.

This has a down side: the LDT isn't (currently) randomized, and an attack
that can write the LDT is instant root due to call gates (thanks, AMD, for
leaving call gates in AMD64 but designing them wrong so they're only useful
for exploits).  This can be mitigated by making the LDT read-only or
randomizing the mapping, either of which is straightforward on top of this
patch.

This will significantly slow down LDT users, but that shouldn't matter for
important workloads -- the LDT is only used by DOSEMU(2), Wine, and very
old libc implementations.

[ tglx: Cleaned it up. ]

Signed-off-by: Andy Lutomirski 
Signed-off-by: Thomas Gleixner 
Cc: Borislav Petkov 
Cc: Brian Gerst 
Cc: Dave Hansen 
Cc: Dave Hansen 
Cc: David Laight 
Cc: H. Peter Anvin 
Cc: Josh Poimboeuf 
Cc: Juergen Gross 
Cc: Kees Cook 
Cc: Kirill A. Shutemov 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Signed-off-by: Ingo Molnar 
Signed-off-by: Greg Kroah-Hartman 

---
 Documentation/x86/x86_64/mm.txt |3 
 arch/x86/include/asm/mmu_context.h  |   59 -
 arch/x86/include/asm/pgtable_64_types.h |4 
 arch/x86/include/asm/processor.h|   23 +++--
 arch/x86/kernel/ldt.c   |  139 +++-
 arch/x86/mm/dump_pagetables.c   |9 ++
 6 files changed, 220 insertions(+), 17 deletions(-)

--- a/Documentation/x86/x86_64/mm.txt
+++ b/Documentation/x86/x86_64/mm.txt
@@ -12,6 +12,7 @@ ffffea0000000000 - ffffeaffffffffff (=40 bits) virtual memory map (1TB)
 ... unused hole ...
 ffffec0000000000 - fffffbffffffffff (=44 bits) kasan shadow memory (16TB)
 ... unused hole ...
+fffffe0000000000 - fffffe7fffffffff (=39 bits) LDT remap for PTI
 fffffe8000000000 - ffffffeffffffffff (=39 bits) cpu_entry_area mapping
 ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
 ... unused hole ...
@@ -29,7 +30,7 @@ Virtual memory map with 5 level page tables:
 hole caused by [56:63] sign extension
 ff00000000000000 - ff0fffffffffffff (=52 bits) guard hole, reserved for hypervisor
 ff10000000000000 - ff8fffffffffffff (=55 bits) direct mapping of all phys. memory
-ff90000000000000 - ff9fffffffffffff (=52 bits) hole
+ff90000000000000 - ff9fffffffffffff (=52 bits) LDT remap for PTI
 ffa0000000000000 - ffd1ffffffffffff (=54 bits) vmalloc/ioremap space (12800 TB)
 ffd2000000000000 - ffd3ffffffffffff (=49 bits) hole
 ffd4000000000000 - ffd5ffffffffffff (=49 bits) virtual memory map (512TB)
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -50,10 +50,33 @@ struct ldt_struct {
 * call gates.  On native, we could merge the ldt_struct and LDT
 * allocations, but it's not worth trying to optimize.
 */
-   struct desc_struct *entries;
-   unsigned int nr_entries;
+   struct desc_struct  *entries;
+   unsigned intnr_entries;
+
+   /*
+* If PTI is in use, then the entries array is not mapped while we're
+* in user mode.  The whole array will be aliased at the address
+* given by ldt_slot_va(slot).  We use two slots so that we can allocate
+* and map, and enable a new LDT without invalidating the mapping
+* of an older, still-in-use LDT.
+*
+* slot will be -1 if this LDT doesn't have an alias mapping.
+*/
+   int slot;
 };
 
+/* This is a multiple of PAGE_SIZE. */
+#define LDT_SLOT_STRIDE (LDT_ENTRIES * LDT_ENTRY_SIZE)
+
+static inline void *ldt_slot_va(int slot)
+{
+#ifdef CONFIG_X86_64
+   return (void *)(LDT_BASE_ADDR + LDT_SLOT_STRIDE * slot);
+#else
+   BUG();
+#endif
+}
+
 /*
  * Used for LDT copy/destruction.
  */
@@ -64,6 +87,7 @@ static inline void init_new_context_ldt(
 }
 int ldt_dup_context(struct mm_struct *oldmm, struct mm_struct *mm);
 void destroy_context_ldt(struct mm_struct *mm);
+void ldt_arch_exit_mmap(struct mm_struct *mm);
 #else  /* CONFIG_MODIFY_LDT_SYSCALL */
 static inline void init_new_context_ldt(struct mm_struct *mm) { }
 static inline int ldt_dup_context(struct mm_struct *oldmm,
@@ -71,7 +95,8 @@ static inline int ldt_dup_context(struct
 {
return 0;
 }
-static inline void destroy_context_ldt(struct mm_struct *mm) {}
+static inline void destroy_context_ldt(struct mm_struct *mm) { }
+static inline void ldt_arch_exit_mmap(struct mm_struct *mm) { }
 #endif
 
 static inline void load_mm_ldt(struct mm_struct *mm)
@@ -96,10 +121,31 @@ 

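The two-slot aliasing scheme described in the ldt_struct comment above can be restated as a small user-space sketch. Everything here (the base address, the stride, the function names) is illustrative, not kernel API:

```c
#include <stddef.h>

/* Stand-ins for LDT_BASE_ADDR and LDT_ENTRIES * LDT_ENTRY_SIZE. */
#define SKETCH_LDT_BASE   0xfe000000UL
#define SKETCH_LDT_STRIDE 4096UL

/* Mirror of ldt_slot_va(): the alias address for a given slot. */
unsigned long sketch_ldt_slot_va(int slot)
{
	return SKETCH_LDT_BASE + SKETCH_LDT_STRIDE * (unsigned long)slot;
}

/*
 * Pick the slot for a new LDT given the slot of the one being replaced.
 * A slot of -1 means the old LDT had no alias mapping, so slot 0 is free.
 * Because the new table always goes in the slot the old one is NOT using,
 * the old alias stays valid until the switch is complete.
 */
int sketch_next_slot(int old_slot)
{
	return old_slot == 0 ? 1 : 0;
}
```

The point of the double-buffering is visible in `sketch_next_slot`: installing a new LDT never reuses the live slot, so no mapping is invalidated while still in use.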
[PATCH 4.14 020/146] x86/events/intel/ds: Map debug buffers in cpu_entry_area

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Hugh Dickins 

commit c1961a4631daef4aeabee8e368b1b13e8f173c91 upstream.

The BTS and PEBS buffers both have their virtual addresses programmed into
the hardware.  This means that any access to them is performed via the page
tables.  The times that the hardware accesses these are entirely dependent
on how the performance monitoring hardware events are set up.  In other
words, there is no way for the kernel to tell when the hardware might
access these buffers.

To avoid perf crashes, allocate the 'debug_store' buffers from pages and map
them into the cpu_entry_area.

The PEBS fixup buffer does not need this treatment.

[ tglx: Got rid of the kaiser_add_mapping() complication ]

Signed-off-by: Hugh Dickins 
Signed-off-by: Dave Hansen 
Signed-off-by: Thomas Gleixner 
Cc: Andy Lutomirski 
Cc: Boris Ostrovsky 
Cc: Borislav Petkov 
Cc: Brian Gerst 
Cc: David Laight 
Cc: Denys Vlasenko 
Cc: Eduardo Valentin 
Cc: Greg KH 
Cc: H. Peter Anvin 
Cc: Josh Poimboeuf 
Cc: Juergen Gross 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Will Deacon 
Cc: aligu...@amazon.com
Cc: daniel.gr...@iaik.tugraz.at
Cc: keesc...@google.com
Signed-off-by: Ingo Molnar 
Signed-off-by: Greg Kroah-Hartman 

---
 arch/x86/events/intel/ds.c   |  125 +++
 arch/x86/events/perf_event.h |2 
 2 files changed, 82 insertions(+), 45 deletions(-)

--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -3,6 +3,7 @@
 #include <linux/types.h>
 #include <linux/slab.h>
 
+#include <asm/cpu_entry_area.h>
 #include <asm/perf_event.h>
 #include <asm/insn.h>
 
@@ -280,17 +281,52 @@ void fini_debug_store_on_cpu(int cpu)
 
 static DEFINE_PER_CPU(void *, insn_buffer);
 
-static int alloc_pebs_buffer(int cpu)
+static void ds_update_cea(void *cea, void *addr, size_t size, pgprot_t prot)
 {
-   struct debug_store *ds = per_cpu(cpu_hw_events, cpu).ds;
+   phys_addr_t pa;
+   size_t msz = 0;
+
+   pa = virt_to_phys(addr);
+   for (; msz < size; msz += PAGE_SIZE, pa += PAGE_SIZE, cea += PAGE_SIZE)
+   cea_set_pte(cea, pa, prot);
+}
+
+static void ds_clear_cea(void *cea, size_t size)
+{
+   size_t msz = 0;
+
+   for (; msz < size; msz += PAGE_SIZE, cea += PAGE_SIZE)
+   cea_set_pte(cea, 0, PAGE_NONE);
+}
+
+static void *dsalloc_pages(size_t size, gfp_t flags, int cpu)
+{
+   unsigned int order = get_order(size);
int node = cpu_to_node(cpu);
-   int max;
-   void *buffer, *ibuffer;
+   struct page *page;
+
+   page = __alloc_pages_node(node, flags | __GFP_ZERO, order);
+   return page ? page_address(page) : NULL;
+}
+
+static void dsfree_pages(const void *buffer, size_t size)
+{
+   if (buffer)
+   free_pages((unsigned long)buffer, get_order(size));
+}
+
+static int alloc_pebs_buffer(int cpu)
+{
+   struct cpu_hw_events *hwev = per_cpu_ptr(&cpu_hw_events, cpu);
+   struct debug_store *ds = hwev->ds;
+   size_t bsiz = x86_pmu.pebs_buffer_size;
+   int max, node = cpu_to_node(cpu);
+   void *buffer, *ibuffer, *cea;
 
if (!x86_pmu.pebs)
return 0;
 
-   buffer = kzalloc_node(x86_pmu.pebs_buffer_size, GFP_KERNEL, node);
+   buffer = dsalloc_pages(bsiz, GFP_KERNEL, cpu);
if (unlikely(!buffer))
return -ENOMEM;
 
@@ -301,25 +337,27 @@ static int alloc_pebs_buffer(int cpu)
if (x86_pmu.intel_cap.pebs_format < 2) {
ibuffer = kzalloc_node(PEBS_FIXUP_SIZE, GFP_KERNEL, node);
if (!ibuffer) {
-   kfree(buffer);
+   dsfree_pages(buffer, bsiz);
return -ENOMEM;
}
per_cpu(insn_buffer, cpu) = ibuffer;
}
-
-   max = x86_pmu.pebs_buffer_size / x86_pmu.pebs_record_size;
-
-   ds->pebs_buffer_base = (u64)(unsigned long)buffer;
+   hwev->ds_pebs_vaddr = buffer;
+   /* Update the cpu entry area mapping */
+   cea = &get_cpu_entry_area(cpu)->cpu_debug_buffers.pebs_buffer;
+   ds->pebs_buffer_base = (unsigned long) cea;
+   ds_update_cea(cea, buffer, bsiz, PAGE_KERNEL);
ds->pebs_index = ds->pebs_buffer_base;
-   ds->pebs_absolute_maximum = ds->pebs_buffer_base +
-   max * x86_pmu.pebs_record_size;
-
+   max = x86_pmu.pebs_record_size * (bsiz / x86_pmu.pebs_record_size);
+   ds->pebs_absolute_maximum = ds->pebs_buffer_base + max;
return 0;
 }
 
 static void release_pebs_buffer(int cpu)
 {
-   struct debug_store *ds = per_cpu(cpu_hw_events, cpu).ds;
+   struct cpu_hw_events *hwev = per_cpu_ptr(&cpu_hw_events, cpu);
+   struct debug_store *ds = hwev->ds;
+   void *cea;
 
if (!ds || !x86_pmu.pebs)
return;
@@ -327,73 +365,70 @@ static void release_pebs_buffer(int cpu)
kfree(per_cpu(insn_buffer, cpu));
per_cpu(insn_buffer, cpu) = NULL;
 
-   kfree((void *)(unsigned long)ds->pebs_buffer_base);
+   /* Clear 

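The ds_update_cea()/ds_clear_cea() helpers in the diff above walk the buffer one page at a time, writing one PTE per page. That stride loop can be sketched in isolation; here we only count the cea_set_pte() calls the walk would make (names and the page size are illustrative):

```c
#include <stddef.h>

#define SKETCH_PAGE_SIZE 4096UL

/*
 * Mirror of the ds_update_cea() loop shape: advance by one page until
 * 'size' bytes are covered, performing one PTE update per iteration.
 * Returns the number of cea_set_pte() calls that would be made.
 */
size_t sketch_count_ptes(size_t size)
{
	size_t msz, n = 0;

	for (msz = 0; msz < size; msz += SKETCH_PAGE_SIZE)
		n++;	/* one cea_set_pte() per page */
	return n;
}
```

Note the loop rounds up: a buffer of 4097 bytes still costs two PTE writes, which is why the real code allocates whole pages via get_order().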
[PATCH 4.14 025/146] x86/mm: Abstract switching CR3

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Dave Hansen 

commit 48e111982cda033fec832c6b0592c2acedd85d04 upstream.

In preparation to adding additional PCID flushing, abstract the
loading of a new ASID into CR3.

[ PeterZ: Split out from big combo patch ]

Signed-off-by: Dave Hansen 
Signed-off-by: Peter Zijlstra (Intel) 
Signed-off-by: Thomas Gleixner 
Cc: Andy Lutomirski 
Cc: Boris Ostrovsky 
Cc: Borislav Petkov 
Cc: Brian Gerst 
Cc: David Laight 
Cc: Denys Vlasenko 
Cc: Eduardo Valentin 
Cc: Greg KH 
Cc: H. Peter Anvin 
Cc: Josh Poimboeuf 
Cc: Juergen Gross 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Will Deacon 
Cc: aligu...@amazon.com
Cc: daniel.gr...@iaik.tugraz.at
Cc: hu...@google.com
Cc: keesc...@google.com
Signed-off-by: Ingo Molnar 
Signed-off-by: Greg Kroah-Hartman 

---
 arch/x86/mm/tlb.c |   22 --
 1 file changed, 20 insertions(+), 2 deletions(-)

--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -100,6 +100,24 @@ static void choose_new_asid(struct mm_st
*need_flush = true;
 }
 
+static void load_new_mm_cr3(pgd_t *pgdir, u16 new_asid, bool need_flush)
+{
+   unsigned long new_mm_cr3;
+
+   if (need_flush) {
+   new_mm_cr3 = build_cr3(pgdir, new_asid);
+   } else {
+   new_mm_cr3 = build_cr3_noflush(pgdir, new_asid);
+   }
+
+   /*
+* Caution: many callers of this function expect
+* that load_cr3() is serializing and orders TLB
+* fills with respect to the mm_cpumask writes.
+*/
+   write_cr3(new_mm_cr3);
+}
+
 void leave_mm(int cpu)
 {
struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);
@@ -230,7 +248,7 @@ void switch_mm_irqs_off(struct mm_struct
if (need_flush) {
this_cpu_write(cpu_tlbstate.ctxs[new_asid].ctx_id, 
next->context.ctx_id);
this_cpu_write(cpu_tlbstate.ctxs[new_asid].tlb_gen, 
next_tlb_gen);
-   write_cr3(build_cr3(next->pgd, new_asid));
+   load_new_mm_cr3(next->pgd, new_asid, true);
 
/*
 * NB: This gets called via leave_mm() in the idle path
@@ -243,7 +261,7 @@ void switch_mm_irqs_off(struct mm_struct
trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, 
TLB_FLUSH_ALL);
} else {
/* The new ASID is already up to date. */
-   write_cr3(build_cr3_noflush(next->pgd, new_asid));
+   load_new_mm_cr3(next->pgd, new_asid, false);
 
/* See above wrt _rcuidle. */
trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, 0);




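The need_flush dispatch in load_new_mm_cr3() above chooses between a flushing and a non-flushing CR3 value. A hedged sketch of that choice follows; the exact bit layout (ASID in the low bits, a "no flush" hint in bit 63) is an assumption for illustration, not a statement of the real build_cr3() encoding:

```c
#include <stdint.h>

/* Assumed no-flush hint bit, mirroring CR3_NOFLUSH-style semantics. */
#define SKETCH_CR3_NOFLUSH (1ULL << 63)

static uint64_t sketch_build_cr3(uint64_t pgd_pa, uint16_t asid)
{
	return pgd_pa | asid;
}

static uint64_t sketch_build_cr3_noflush(uint64_t pgd_pa, uint16_t asid)
{
	/* Same value, but the CPU keeps TLB entries for this ASID. */
	return sketch_build_cr3(pgd_pa, asid) | SKETCH_CR3_NOFLUSH;
}

/* Mirrors load_new_mm_cr3(): pick the flushing or non-flushing form. */
uint64_t sketch_new_mm_cr3(uint64_t pgd_pa, uint16_t asid, int need_flush)
{
	return need_flush ? sketch_build_cr3(pgd_pa, asid)
			  : sketch_build_cr3_noflush(pgd_pa, asid);
}
```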
[PATCH 4.14 004/146] x86/cpufeatures: Add X86_BUG_CPU_INSECURE

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Thomas Gleixner 

commit a89f040fa34ec9cd682aed98b8f04e3c47d998bd upstream.

Many x86 CPUs leak information to user space due to missing isolation of
user space and kernel space page tables. There are many well documented
ways to exploit that.

The upcoming software mitigation of isolating the user and kernel space
page tables needs a misfeature flag so code can be made runtime
conditional.

Add a BUG bit which indicates that the CPU is affected and add a feature
bit which indicates that the software mitigation is enabled.

Assume for now that _ALL_ x86 CPUs are affected by this. Exceptions can be
made later.

Signed-off-by: Thomas Gleixner 
Cc: Andy Lutomirski 
Cc: Boris Ostrovsky 
Cc: Borislav Petkov 
Cc: Brian Gerst 
Cc: Dave Hansen 
Cc: David Laight 
Cc: Denys Vlasenko 
Cc: Eduardo Valentin 
Cc: Greg KH 
Cc: H. Peter Anvin 
Cc: Josh Poimboeuf 
Cc: Juergen Gross 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Will Deacon 
Cc: aligu...@amazon.com
Cc: daniel.gr...@iaik.tugraz.at
Cc: hu...@google.com
Cc: keesc...@google.com
Signed-off-by: Ingo Molnar 
Signed-off-by: Greg Kroah-Hartman 

---
 arch/x86/include/asm/cpufeatures.h   |3 ++-
 arch/x86/include/asm/disabled-features.h |8 +++-
 arch/x86/kernel/cpu/common.c |4 
 3 files changed, 13 insertions(+), 2 deletions(-)

--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -201,7 +201,7 @@
 #define X86_FEATURE_HW_PSTATE  ( 7*32+ 8) /* AMD HW-PState */
 #define X86_FEATURE_PROC_FEEDBACK  ( 7*32+ 9) /* AMD ProcFeedbackInterface 
*/
 #define X86_FEATURE_SME( 7*32+10) /* AMD Secure Memory 
Encryption */
-
+#define X86_FEATURE_PTI( 7*32+11) /* Kernel Page Table 
Isolation enabled */
 #define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory 
Number */
 #define X86_FEATURE_INTEL_PT   ( 7*32+15) /* Intel Processor Trace */
 #define X86_FEATURE_AVX512_4VNNIW  ( 7*32+16) /* AVX-512 Neural Network 
Instructions */
@@ -340,5 +340,6 @@
 #define X86_BUG_SWAPGS_FENCE   X86_BUG(11) /* SWAPGS without input dep 
on GS */
 #define X86_BUG_MONITORX86_BUG(12) /* IPI required to 
wake up remote CPU */
 #define X86_BUG_AMD_E400   X86_BUG(13) /* CPU is among the 
affected by Erratum 400 */
+#define X86_BUG_CPU_INSECURE   X86_BUG(14) /* CPU is insecure and 
needs kernel page table isolation */
 
 #endif /* _ASM_X86_CPUFEATURES_H */
--- a/arch/x86/include/asm/disabled-features.h
+++ b/arch/x86/include/asm/disabled-features.h
@@ -44,6 +44,12 @@
 # define DISABLE_LA57  (1<<(X86_FEATURE_LA57 & 31))
 #endif
 
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+# define DISABLE_PTI   0
+#else
+# define DISABLE_PTI   (1 << (X86_FEATURE_PTI & 31))
+#endif
+
 /*
  * Make sure to add features to the correct mask
  */
@@ -54,7 +60,7 @@
 #define DISABLED_MASK4 (DISABLE_PCID)
 #define DISABLED_MASK5 0
 #define DISABLED_MASK6 0
-#define DISABLED_MASK7 0
+#define DISABLED_MASK7 (DISABLE_PTI)
 #define DISABLED_MASK8 0
 #define DISABLED_MASK9 (DISABLE_MPX)
 #define DISABLED_MASK100
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -898,6 +898,10 @@ static void __init early_identify_cpu(st
}
 
setup_force_cpu_cap(X86_FEATURE_ALWAYS);
+
+   /* Assume for now that ALL x86 CPUs are insecure */
+   setup_force_cpu_bug(X86_BUG_CPU_INSECURE);
+
fpu__init_system(c);
 
 #ifdef CONFIG_X86_32




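The cpufeatures encoding used by this patch packs a 32-bit word index and a bit position into one feature number, and the disabled-features masks (like DISABLE_PTI above) are built from the low five bits. A small sketch of that arithmetic, with the feature number taken from the diff:

```c
#include <stdint.h>

/* From the diff: X86_FEATURE_PTI is word 7, bit 11. */
#define SKETCH_X86_FEATURE_PTI (7 * 32 + 11)

static inline int sketch_feature_word(int f) { return f / 32; }
static inline int sketch_feature_bit(int f)  { return f & 31; }

/* Mirrors DISABLE_PTI: (1 << (X86_FEATURE_PTI & 31)). */
static inline uint32_t sketch_disable_mask(int f) { return 1u << (f & 31); }
```

This is why DISABLE_PTI lands in DISABLED_MASK7: the word index (7) selects the mask, and `& 31` selects the bit within it.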
[PATCH 4.14 003/146] tracing: Fix crash when it fails to alloc ring buffer

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Jing Xia 

commit 24f2aaf952ee0b59f31c3a18b8b36c9e3d3c2cf5 upstream.

Double free of the ring buffer happens when it fails to alloc new
ring buffer instance for max_buffer if TRACER_MAX_TRACE is configured.
The root cause is that the pointer is not set to NULL after the buffer
is freed in allocate_trace_buffers(), and the freeing of the ring
buffer is invoked again later if the pointer is not equal to NULL,
as:

instance_mkdir()
|-allocate_trace_buffers()
|-allocate_trace_buffer(tr, &tr->trace_buffer...)
|-allocate_trace_buffer(tr, &tr->max_buffer...)

  // allocate fail(-ENOMEM),first free
  // and the buffer pointer is not set to null
|-ring_buffer_free(tr->trace_buffer.buffer)

   // out_free_tr
|-free_trace_buffers()
|-free_trace_buffer(&tr->trace_buffer);

  //if trace_buffer is not null, free again
|-ring_buffer_free(buf->buffer)
|-rb_free_cpu_buffer(buffer->buffers[cpu])
// ring_buffer_per_cpu is null, and
// crash in ring_buffer_per_cpu->pages

Link: 
http://lkml.kernel.org/r/20171226071253.8968-1-chunyan.zh...@spreadtrum.com

Fixes: 737223fbca3b1 ("tracing: Consolidate buffer allocation code")
Signed-off-by: Jing Xia 
Signed-off-by: Chunyan Zhang 
Signed-off-by: Steven Rostedt (VMware) 
Signed-off-by: Greg Kroah-Hartman 

---
 kernel/trace/trace.c |2 ++
 1 file changed, 2 insertions(+)

--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -7604,7 +7604,9 @@ static int allocate_trace_buffers(struct
allocate_snapshot ? size : 1);
if (WARN_ON(ret)) {
ring_buffer_free(tr->trace_buffer.buffer);
+   tr->trace_buffer.buffer = NULL;
free_percpu(tr->trace_buffer.data);
+   tr->trace_buffer.data = NULL;
return -ENOMEM;
}
tr->allocated_snapshot = allocate_snapshot;




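The fix above is an instance of the classic free-and-NULL pattern: after freeing a resource on an error path, clear the pointer so a later generic cleanup pass cannot free it a second time. A minimal stand-alone sketch (sketch_buf is a stand-in, not the tracing API):

```c
#include <stdlib.h>

struct sketch_buf {
	void *buffer;
};

/* Error path: free, then NULL the pointer (the step the patch adds). */
void sketch_error_path_free(struct sketch_buf *b)
{
	free(b->buffer);
	b->buffer = NULL;
}

/* Generic cleanup: now safe to run even after the error path already did. */
void sketch_cleanup(struct sketch_buf *b)
{
	if (b->buffer)
		free(b->buffer);
	b->buffer = NULL;
}
```

Without the `b->buffer = NULL;` in the error path, `sketch_cleanup` would see a stale non-NULL pointer and free it again, exactly the double free in the crash report.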
[PATCH 4.14 006/146] x86/mm/pti: Prepare the x86/entry assembly code for entry/exit CR3 switching

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Dave Hansen 

commit 8a09317b895f073977346779df52f67c1056d81d upstream.

PAGE_TABLE_ISOLATION needs to switch to a different CR3 value when it
enters the kernel and switch back when it exits.  This essentially needs to
be done before leaving assembly code.

This is extra challenging because the switching context is tricky: the
registers that can be clobbered can vary.  It is also hard to store things
on the stack because there is an established ABI (ptregs) or the stack is
entirely unsafe to use.

Establish a set of macros that allow changing to the user and kernel CR3
values.

Interactions with SWAPGS:

  Previous versions of the PAGE_TABLE_ISOLATION code relied on having
  per-CPU scratch space to save/restore a register that can be used for the
  CR3 MOV.  The %GS register is used to index into our per-CPU space, so
  SWAPGS *had* to be done before the CR3 switch.  That scratch space is gone
  now, but the semantic that SWAPGS must be done before the CR3 MOV is
  retained.  This is good to keep because it is not that hard to do and it
  allows doing things like adding per-CPU debugging information.

What this does in the NMI code is worth pointing out.  NMIs can interrupt
*any* context and they can also be nested with NMIs interrupting other
NMIs.  The comments below ".Lnmi_from_kernel" explain the format of the
stack during this situation.  Changing the format of this stack is hard.
Instead of storing the old CR3 value on the stack, this depends on the
*regular* register save/restore mechanism and then uses %r14 to keep CR3
during the NMI.  It is callee-saved and will not be clobbered by the C NMI
handlers that get called.

[ PeterZ: ESPFIX optimization ]

Based-on-code-from: Andy Lutomirski 
Signed-off-by: Dave Hansen 
Signed-off-by: Thomas Gleixner 
Reviewed-by: Borislav Petkov 
Reviewed-by: Thomas Gleixner 
Cc: Andy Lutomirski 
Cc: Boris Ostrovsky 
Cc: Borislav Petkov 
Cc: Brian Gerst 
Cc: David Laight 
Cc: Denys Vlasenko 
Cc: Eduardo Valentin 
Cc: Greg KH 
Cc: H. Peter Anvin 
Cc: Josh Poimboeuf 
Cc: Juergen Gross 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Will Deacon 
Cc: aligu...@amazon.com
Cc: daniel.gr...@iaik.tugraz.at
Cc: hu...@google.com
Cc: keesc...@google.com
Cc: linux...@kvack.org
Signed-off-by: Ingo Molnar 
Signed-off-by: Greg Kroah-Hartman 

---
 arch/x86/entry/calling.h |   66 +++
 arch/x86/entry/entry_64.S|   45 +++---
 arch/x86/entry/entry_64_compat.S |   24 +-
 3 files changed, 128 insertions(+), 7 deletions(-)

--- a/arch/x86/entry/calling.h
+++ b/arch/x86/entry/calling.h
@@ -1,6 +1,8 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 #include <linux/jump_label.h>
 #include <asm/unwind_hints.h>
+#include <asm/cpufeatures.h>
+#include <asm/page_types.h>
 
 /*
 
@@ -187,6 +189,70 @@ For 32-bit we have the following convent
 #endif
 .endm
 
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+
+/* PAGE_TABLE_ISOLATION PGDs are 8k.  Flip bit 12 to switch between the two 
halves: */
+#define PTI_SWITCH_MASK (1 << PAGE_SHIFT)
 	js	1f			/* negative -> in kernel */
SWAPGS
xorl%ebx, %ebx
-1: ret
+
+1:
+   SAVE_AND_SWITCH_TO_KERNEL_CR3 scratch_reg=%rax save_reg=%r14
+
+   ret
 END(paranoid_entry)
 
 /*
@@ -1266,6 +1287,7 @@ ENTRY(paranoid_exit)
testl   %ebx, %ebx  /* swapgs needed? */
jnz .Lparanoid_exit_no_swapgs
TRACE_IRQS_IRETQ
+   RESTORE_CR3 save_reg=%r14
SWAPGS_UNSAFE_STACK
jmp .Lparanoid_exit_restore
 .Lparanoid_exit_no_swapgs:
@@ -1293,6 +1315,8 @@ ENTRY(error_entry)
 * from user mode due to an IRET fault.
 */
SWAPGS
+   /* We have user CR3.  Change to kernel CR3. */
+   SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
 
 .Lerror_entry_from_usermode_after_swapgs:
/* Put us onto the real thread stack. */
@@ -1339,6 +1363,7 @@ ENTRY(error_entry)
 * .Lgs_change's error handler with kernel gsbase.
 */
SWAPGS
+   SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
jmp .Lerror_entry_done
 
 .Lbstep_iret:
@@ -1348,10 +1373,11 @@ ENTRY(error_entry)
 
 .Lerror_bad_iret:
/*
-* We came from an IRET to user mode, so we have user gsbase.
-* Switch to kernel gsbase:
+* We came from an IRET to user mode, so we have user
+* gsbase and CR3.  Switch to kernel gsbase and CR3:
 */
SWAPGS
+   SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
 
/*
 * Pretend that the exception came from user mode: set up pt_regs
@@ -1383,6 +1409,10 @@ END(error_exit)
 /*
  * Runs on exception stack.  Xen PV does not go through this path at all,
  * so we can use real assembly here.
+ *
+ * Registers:
+ * %r14: Used to save/restore the CR3 of the interrupted context
+ *   when PAGE_TABLE_ISOLATION is in use.  Do not clobber.
  */
 ENTRY(nmi)
UNWIND_HINT_IRET_REGS
@@ -1446,6 +1476,7 @@ ENTRY(nmi)
 
swapgs
cld
+   

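The CR3-switching macros in this patch rely on the kernel and user PGDs being the two 4k halves of one 8k allocation, so switching address spaces is a single bit flip in the CR3 value. A C sketch of what ADJUST_USER_CR3/ADJUST_KERNEL_CR3 do to the register; the mask value (bit 12, i.e. 1 << PAGE_SHIFT) follows the upstream commit but is restated here only for illustration:

```c
#include <stdint.h>

#define SKETCH_PTI_SWITCH_MASK (1UL << 12)

/* ADJUST_USER_CR3: point CR3 at the user half of the 8k PGD pair. */
uint64_t sketch_switch_to_user_cr3(uint64_t cr3)
{
	return cr3 | SKETCH_PTI_SWITCH_MASK;
}

/* ADJUST_KERNEL_CR3: point CR3 back at the kernel half. */
uint64_t sketch_switch_to_kernel_cr3(uint64_t cr3)
{
	return cr3 & ~SKETCH_PTI_SWITCH_MASK;
}
```

Because the operation is one OR or one AND on a register already holding CR3, it is cheap enough to sit on every kernel entry and exit path.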
[PATCH 4.14 009/146] x86/mm/pti: Add mapping helper functions

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Dave Hansen 

commit 61e9b3671007a5da8127955a1a3bda7e0d5f42e8 upstream.

Add the pagetable helper functions to manage the separate user space page
tables.

[ tglx: Split out from the big combo kaiser patch. Folded Andys
simplification and made it out of line as Boris suggested ]

Signed-off-by: Dave Hansen 
Signed-off-by: Thomas Gleixner 
Cc: Andy Lutomirski 
Cc: Boris Ostrovsky 
Cc: Borislav Petkov 
Cc: Brian Gerst 
Cc: David Laight 
Cc: Denys Vlasenko 
Cc: Eduardo Valentin 
Cc: Greg KH 
Cc: H. Peter Anvin 
Cc: Josh Poimboeuf 
Cc: Juergen Gross 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Will Deacon 
Cc: aligu...@amazon.com
Cc: daniel.gr...@iaik.tugraz.at
Cc: hu...@google.com
Cc: keesc...@google.com
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar 
Signed-off-by: Greg Kroah-Hartman 

---
 arch/x86/include/asm/pgtable.h|6 ++
 arch/x86/include/asm/pgtable_64.h |   92 ++
 arch/x86/mm/pti.c |   41 
 3 files changed, 138 insertions(+), 1 deletion(-)

--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -909,7 +909,11 @@ static inline int pgd_none(pgd_t pgd)
  * pgd_offset() returns a (pgd_t *)
  * pgd_index() is used get the offset into the pgd page's array of pgd_t's;
  */
-#define pgd_offset(mm, address) ((mm)->pgd + pgd_index((address)))
+#define pgd_offset_pgd(pgd, address) (pgd + pgd_index((address)))
+/*
+ * a shortcut to get a pgd_t in a given mm
+ */
+#define pgd_offset(mm, address) pgd_offset_pgd((mm)->pgd, (address))
 /*
  * a shortcut which implies the use of the kernel's pgd, instead
  * of a process's
--- a/arch/x86/include/asm/pgtable_64.h
+++ b/arch/x86/include/asm/pgtable_64.h
@@ -131,9 +131,97 @@ static inline pud_t native_pudp_get_and_
 #endif
 }
 
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+/*
+ * All top-level PAGE_TABLE_ISOLATION page tables are order-1 pages
+ * (8k-aligned and 8k in size).  The kernel one is at the beginning 4k and
+ * the user one is in the last 4k.  To switch between them, you
+ * just need to flip the 12th bit in their addresses.
+ */
+#define PTI_PGTABLE_SWITCH_BIT PAGE_SHIFT
+
+/*
+ * This generates better code than the inline assembly in
+ * __set_bit().
+ */
+static inline void *ptr_set_bit(void *ptr, int bit)
+{
+   unsigned long __ptr = (unsigned long)ptr;
+
+   __ptr |= BIT(bit);
+   return (void *)__ptr;
+}
+static inline void *ptr_clear_bit(void *ptr, int bit)
+{
+   unsigned long __ptr = (unsigned long)ptr;
+
+   __ptr &= ~BIT(bit);
+   return (void *)__ptr;
+}
+
+static inline pgd_t *kernel_to_user_pgdp(pgd_t *pgdp)
+{
+   return ptr_set_bit(pgdp, PTI_PGTABLE_SWITCH_BIT);
+}
+
+static inline pgd_t *user_to_kernel_pgdp(pgd_t *pgdp)
+{
+   return ptr_clear_bit(pgdp, PTI_PGTABLE_SWITCH_BIT);
+}
+
+static inline p4d_t *kernel_to_user_p4dp(p4d_t *p4dp)
+{
+   return ptr_set_bit(p4dp, PTI_PGTABLE_SWITCH_BIT);
+}
+
+static inline p4d_t *user_to_kernel_p4dp(p4d_t *p4dp)
+{
+   return ptr_clear_bit(p4dp, PTI_PGTABLE_SWITCH_BIT);
+}
+#endif /* CONFIG_PAGE_TABLE_ISOLATION */
+
+/*
+ * Page table pages are page-aligned.  The lower half of the top
+ * level is used for userspace and the top half for the kernel.
+ *
+ * Returns true for parts of the PGD that map userspace and
+ * false for the parts that map the kernel.
+ */
+static inline bool pgdp_maps_userspace(void *__ptr)
+{
+   unsigned long ptr = (unsigned long)__ptr;
+
+   return (ptr & ~PAGE_MASK) < (PAGE_SIZE / 2);
+}
+
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+pgd_t __pti_set_user_pgd(pgd_t *pgdp, pgd_t pgd);
+
+/*
+ * Take a PGD location (pgdp) and a pgd value that needs to be set there.
+ * Populates the user and returns the resulting PGD that must be set in
+ * the kernel copy of the page tables.
+ */
+static inline pgd_t pti_set_user_pgd(pgd_t *pgdp, pgd_t pgd)
+{
+   if (!static_cpu_has(X86_FEATURE_PTI))
+   return pgd;
+   return __pti_set_user_pgd(pgdp, pgd);
+}
+#else
+static inline pgd_t pti_set_user_pgd(pgd_t *pgdp, pgd_t pgd)
+{
+   return pgd;
+}
+#endif
+
 static inline void native_set_p4d(p4d_t *p4dp, p4d_t p4d)
 {
+#if defined(CONFIG_PAGE_TABLE_ISOLATION) && !defined(CONFIG_X86_5LEVEL)
+   p4dp->pgd = pti_set_user_pgd(&p4dp->pgd, p4d.pgd);
+#else
*p4dp = p4d;
+#endif
 }
 
 static inline void native_p4d_clear(p4d_t *p4d)
@@ -147,7 +235,11 @@ static inline void native_p4d_clear(p4d_
 
 static inline void native_set_pgd(pgd_t *pgdp, pgd_t pgd)
 {
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+   *pgdp = pti_set_user_pgd(pgdp, pgd);
+#else
*pgdp = pgd;
+#endif
 }
 
 static inline void native_pgd_clear(pgd_t *pgd)
--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -96,6 +96,47 @@ enable:
setup_force_cpu_cap(X86_FEATURE_PTI);
 }
 
+pgd_t __pti_set_user_pgd(pgd_t *pgdp, pgd_t pgd)
+{
+   /*
+* 

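The pointer-bit helpers and pgdp_maps_userspace() from this patch are plain address arithmetic and can be restated in user space. The constants (4k pages, switch bit 12) mirror the diff but the names are illustrative:

```c
#include <stdint.h>
#include <stdbool.h>

#define SKETCH_PAGE_SIZE 4096UL
#define SKETCH_PAGE_MASK (~(SKETCH_PAGE_SIZE - 1))
#define SKETCH_PGTABLE_SWITCH_BIT 12	/* PAGE_SHIFT */

/* Like kernel_to_user_pgdp(): set bit 12 to reach the user PGD page. */
uintptr_t sketch_kernel_to_user(uintptr_t pgdp)
{
	return pgdp | (1UL << SKETCH_PGTABLE_SWITCH_BIT);
}

/* Like user_to_kernel_pgdp(): clear bit 12 to get back. */
uintptr_t sketch_user_to_kernel(uintptr_t pgdp)
{
	return pgdp & ~(1UL << SKETCH_PGTABLE_SWITCH_BIT);
}

/*
 * Like pgdp_maps_userspace(): an entry's offset within its page says
 * which half of the address space it maps (lower half = userspace).
 */
bool sketch_pgdp_maps_userspace(uintptr_t ptr)
{
	return (ptr & ~SKETCH_PAGE_MASK) < (SKETCH_PAGE_SIZE / 2);
}
```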
[PATCH 4.14 041/146] ASoC: da7218: fix fix child-node lookup

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Johan Hovold 

commit bc6476d6c1edcb9b97621b5131bd169aa81f27db upstream.

Fix child-node lookup during probe, which ended up searching the whole
device tree depth-first starting at the parent rather than just matching
on its children.

To make things worse, the parent codec node was also prematurely freed.

Fixes: 4d50934abd22 ("ASoC: da7218: Add da7218 codec driver")
Signed-off-by: Johan Hovold 
Acked-by: Adam Thomson 
Signed-off-by: Mark Brown 
Signed-off-by: Greg Kroah-Hartman 

---
 sound/soc/codecs/da7218.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/sound/soc/codecs/da7218.c
+++ b/sound/soc/codecs/da7218.c
@@ -2520,7 +2520,7 @@ static struct da7218_pdata *da7218_of_to
}
 
if (da7218->dev_id == DA7218_DEV_ID) {
-   hpldet_np = of_find_node_by_name(np, "da7218_hpldet");
+   hpldet_np = of_get_child_by_name(np, "da7218_hpldet");
if (!hpldet_np)
return pdata;
 




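The one-line fix above swaps a depth-first search for a direct-children lookup. The difference is easy to show on a toy tree: a recursive search from a node can match a grandchild in an unrelated subtree, while a children-only lookup cannot. The node type and functions below are illustrative only, not the kernel's struct device_node API:

```c
#include <string.h>
#include <stddef.h>

struct sketch_node {
	const char *name;
	struct sketch_node *child;	/* first child */
	struct sketch_node *sibling;	/* next sibling */
};

/* Depth-first over the whole subtree (of_find_node_by_name-like). */
struct sketch_node *sketch_find_by_name(struct sketch_node *np, const char *name)
{
	struct sketch_node *c, *found;

	for (c = np->child; c; c = c->sibling) {
		if (!strcmp(c->name, name))
			return c;
		found = sketch_find_by_name(c, name);
		if (found)
			return found;
	}
	return NULL;
}

/* Direct children only (of_get_child_by_name-like), as the fix wants. */
struct sketch_node *sketch_get_child_by_name(struct sketch_node *np, const char *name)
{
	struct sketch_node *c;

	for (c = np->child; c; c = c->sibling)
		if (!strcmp(c->name, name))
			return c;
	return NULL;
}
```

With a "da7218_hpldet" node hiding one level deeper under another child, only the depth-first variant finds it, which is exactly the wrong-match hazard the patch removes.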
[PATCH 4.14 007/146] x86/mm/pti: Add infrastructure for page table isolation

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Thomas Gleixner 

commit aa8c6248f8c75acfd610fe15d8cae23cf70d9d09 upstream.

Add the initial files for kernel page table isolation, with a minimal init
function and the boot time detection for this misfeature.

Signed-off-by: Thomas Gleixner 
Reviewed-by: Borislav Petkov 
Cc: Andy Lutomirski 
Cc: Boris Ostrovsky 
Cc: Borislav Petkov 
Cc: Brian Gerst 
Cc: Dave Hansen 
Cc: David Laight 
Cc: Denys Vlasenko 
Cc: Eduardo Valentin 
Cc: Greg KH 
Cc: H. Peter Anvin 
Cc: Josh Poimboeuf 
Cc: Juergen Gross 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Will Deacon 
Cc: aligu...@amazon.com
Cc: daniel.gr...@iaik.tugraz.at
Cc: hu...@google.com
Cc: keesc...@google.com
Signed-off-by: Ingo Molnar 
Signed-off-by: Greg Kroah-Hartman 

---
 Documentation/admin-guide/kernel-parameters.txt |2 
 arch/x86/boot/compressed/pagetable.c|3 
 arch/x86/entry/calling.h|7 ++
 arch/x86/include/asm/pti.h  |   14 
 arch/x86/mm/Makefile|7 +-
 arch/x86/mm/init.c  |2 
 arch/x86/mm/pti.c   |   84 
 include/linux/pti.h |   11 +++
 init/main.c |3 
 9 files changed, 130 insertions(+), 3 deletions(-)

--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2685,6 +2685,8 @@
steal time is computed, but won't influence scheduler
behaviour
 
+   nopti   [X86-64] Disable kernel page table isolation
+
nolapic [X86-32,APIC] Do not enable or use the local APIC.
 
nolapic_timer   [X86-32,APIC] Do not use the local APIC timer.
--- a/arch/x86/boot/compressed/pagetable.c
+++ b/arch/x86/boot/compressed/pagetable.c
@@ -23,6 +23,9 @@
  */
 #undef CONFIG_AMD_MEM_ENCRYPT
 
+/* No PAGE_TABLE_ISOLATION support needed either: */
+#undef CONFIG_PAGE_TABLE_ISOLATION
+
 #include "misc.h"
 
 /* These actually do the work of building the kernel identity maps. */
--- a/arch/x86/entry/calling.h
+++ b/arch/x86/entry/calling.h
@@ -205,18 +205,23 @@ For 32-bit we have the following convent
 .endm
 
 .macro SWITCH_TO_KERNEL_CR3 scratch_reg:req
+   ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_PTI
mov %cr3, \scratch_reg
ADJUST_KERNEL_CR3 \scratch_reg
mov \scratch_reg, %cr3
+.Lend_\@:
 .endm
 
 .macro SWITCH_TO_USER_CR3 scratch_reg:req
+   ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_PTI
mov %cr3, \scratch_reg
ADJUST_USER_CR3 \scratch_reg
mov \scratch_reg, %cr3
+.Lend_\@:
 .endm
 
 .macro SAVE_AND_SWITCH_TO_KERNEL_CR3 scratch_reg:req save_reg:req
+   ALTERNATIVE "jmp .Ldone_\@", "", X86_FEATURE_PTI
movq%cr3, \scratch_reg
movq\scratch_reg, \save_reg
/*
@@ -233,11 +238,13 @@ For 32-bit we have the following convent
 .endm
 
 .macro RESTORE_CR3 save_reg:req
+   ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_PTI
/*
 * The CR3 write could be avoided when not changing its value,
 * but would require a CR3 read *and* a scratch register.
 */
movq\save_reg, %cr3
+.Lend_\@:
 .endm
 
 #else /* CONFIG_PAGE_TABLE_ISOLATION=n: */
--- /dev/null
+++ b/arch/x86/include/asm/pti.h
@@ -0,0 +1,14 @@
+// SPDX-License-Identifier: GPL-2.0
+#ifndef _ASM_X86_PTI_H
+#define _ASM_X86_PTI_H
+#ifndef __ASSEMBLY__
+
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+extern void pti_init(void);
+extern void pti_check_boottime_disable(void);
+#else
+static inline void pti_check_boottime_disable(void) { }
+#endif
+
+#endif /* __ASSEMBLY__ */
+#endif /* _ASM_X86_PTI_H */
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -43,9 +43,10 @@ obj-$(CONFIG_AMD_NUMA)   += amdtopology.o
 obj-$(CONFIG_ACPI_NUMA)+= srat.o
 obj-$(CONFIG_NUMA_EMU) += numa_emulation.o
 
-obj-$(CONFIG_X86_INTEL_MPX)+= mpx.o
-obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) += pkeys.o
-obj-$(CONFIG_RANDOMIZE_MEMORY) += kaslr.o
+obj-$(CONFIG_X86_INTEL_MPX)+= mpx.o
+obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) += pkeys.o
+obj-$(CONFIG_RANDOMIZE_MEMORY) += kaslr.o
+obj-$(CONFIG_PAGE_TABLE_ISOLATION) += pti.o
 
 obj-$(CONFIG_AMD_MEM_ENCRYPT)  += mem_encrypt.o
 obj-$(CONFIG_AMD_MEM_ENCRYPT)  += mem_encrypt_boot.o
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -20,6 +20,7 @@
 #include <asm/microcode.h>
 #include <asm/kaslr.h>
 #include <asm/hypervisor.h>
+#include <asm/pti.h>
 
 /*
  * We need to define the tracepoints somewhere, and tlb.c
@@ -630,6 +631,7 @@ void __init init_mem_mapping(void)
 {
unsigned long end;
 
+   pti_check_boottime_disable();
probe_page_size_mask();
setup_pcid();
 
--- /dev/null
+++ b/arch/x86/mm/pti.c
@@ -0,0 +1,84 @@
+/*
+ * 

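The ALTERNATIVE-wrapped macros in calling.h above become a jump over the CR3-switch body when X86_FEATURE_PTI is clear. In C terms that is boot-time-decided gating of the whole operation; a sketch under that analogy (the flag and names are illustrative, and the real mechanism is in-place instruction patching, not a runtime branch):

```c
#include <stdint.h>
#include <stdbool.h>

/* Stands in for the X86_FEATURE_PTI decision made at boot. */
bool sketch_pti_enabled = false;

uint64_t sketch_maybe_switch_to_kernel_cr3(uint64_t cr3)
{
	/* ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_PTI */
	if (!sketch_pti_enabled)
		return cr3;
	/* ADJUST_KERNEL_CR3: clear the PGD switch bit. */
	return cr3 & ~(1UL << 12);
}
```

On non-PTI systems the patched-in jump makes the macros free, which is why they can sit unconditionally in every entry path.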
[PATCH 4.14 028/146] x86/mm: Use INVPCID for __native_flush_tlb_single()

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Dave Hansen 

commit 6cff64b8607f89f50498055a20e45754b0c1 upstream.

This uses INVPCID to shoot down individual lines of the user mapping
instead of marking the entire user map as invalid. This
could/might/possibly be faster.

This for sure needs tlb_single_page_flush_ceiling to be redetermined;
esp. since INVPCID is _slow_.

A detailed performance analysis is available here:

  https://lkml.kernel.org/r/3062e486-3539-8a1f-5724-16199420b...@intel.com

[ Peterz: Split out from big combo patch ]

Signed-off-by: Dave Hansen 
Signed-off-by: Peter Zijlstra (Intel) 
Signed-off-by: Thomas Gleixner 
Cc: Andy Lutomirski 
Cc: Boris Ostrovsky 
Cc: Borislav Petkov 
Cc: Brian Gerst 
Cc: Denys Vlasenko 
Cc: Eduardo Valentin 
Cc: Greg KH 
Cc: H. Peter Anvin 
Cc: Josh Poimboeuf 
Cc: Juergen Gross 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Will Deacon 
Cc: aligu...@amazon.com
Cc: daniel.gr...@iaik.tugraz.at
Cc: hu...@google.com
Cc: keesc...@google.com
Signed-off-by: Ingo Molnar 
Signed-off-by: Greg Kroah-Hartman 

---
 arch/x86/include/asm/cpufeatures.h |1 
 arch/x86/include/asm/tlbflush.h|   23 -
 arch/x86/mm/init.c |   64 +
 3 files changed, 60 insertions(+), 28 deletions(-)

--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -197,6 +197,7 @@
 #define X86_FEATURE_CAT_L3 ( 7*32+ 4) /* Cache Allocation 
Technology L3 */
 #define X86_FEATURE_CAT_L2 ( 7*32+ 5) /* Cache Allocation 
Technology L2 */
 #define X86_FEATURE_CDP_L3 ( 7*32+ 6) /* Code and Data 
Prioritization L3 */
+#define X86_FEATURE_INVPCID_SINGLE ( 7*32+ 7) /* Effectively INVPCID && 
CR4.PCIDE=1 */
 
 #define X86_FEATURE_HW_PSTATE  ( 7*32+ 8) /* AMD HW-PState */
 #define X86_FEATURE_PROC_FEEDBACK  ( 7*32+ 9) /* AMD ProcFeedbackInterface 
*/
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -85,6 +85,18 @@ static inline u16 kern_pcid(u16 asid)
return asid + 1;
 }
 
+/*
+ * The user PCID is just the kernel one, plus the "switch bit".
+ */
+static inline u16 user_pcid(u16 asid)
+{
+   u16 ret = kern_pcid(asid);
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+   ret |= 1 << X86_CR3_PTI_SWITCH_BIT;
+#endif
+   return ret;
+}
+
 struct pgd_t;
 static inline unsigned long build_cr3(pgd_t *pgd, u16 asid)
 {
@@ -335,6 +347,8 @@ static inline void __native_flush_tlb_gl
/*
 * Using INVPCID is considerably faster than a pair of writes
 * to CR4 sandwiched inside an IRQ flag save/restore.
+*
+* Note, this works with CR4.PCIDE=0 or 1.
 */
invpcid_flush_all();
return;
@@ -368,7 +382,14 @@ static inline void __native_flush_tlb_si
if (!static_cpu_has(X86_FEATURE_PTI))
return;
 
-   invalidate_user_asid(loaded_mm_asid);
+   /*
+* Some platforms #GP if we call invpcid(type=1/2) before CR4.PCIDE=1.
+* Just use invalidate_user_asid() in case we are called early.
+*/
+   if (!this_cpu_has(X86_FEATURE_INVPCID_SINGLE))
+   invalidate_user_asid(loaded_mm_asid);
+   else
+   invpcid_flush_one(user_pcid(loaded_mm_asid), addr);
 }
 
 /*
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -203,34 +203,44 @@ static void __init probe_page_size_mask(
 
 static void setup_pcid(void)
 {
-#ifdef CONFIG_X86_64
-   if (boot_cpu_has(X86_FEATURE_PCID)) {
-   if (boot_cpu_has(X86_FEATURE_PGE)) {
-   /*
-* This can't be cr4_set_bits_and_update_boot() --
-* the trampoline code can't handle CR4.PCIDE and
-* it wouldn't do any good anyway.  Despite the name,
-* cr4_set_bits_and_update_boot() doesn't actually
-* cause the bits in question to remain set all the
-* way through the secondary boot asm.
-*
-* Instead, we brute-force it and set CR4.PCIDE
-* manually in start_secondary().
-*/
-   cr4_set_bits(X86_CR4_PCIDE);
-   } else {
-   /*
-* flush_tlb_all(), as currently implemented, won't
-* work if PCID is on but PGE is not.  Since that
-* combination doesn't exist on real hardware, there's
-* no reason to try to fully support it, but it's
-* polite to avoid corrupting data if we're on
-* an improperly configured VM.
-*/
-   setup_clear_cpu_cap(X86_FEATURE_PCID);
-   }
+

[PATCH 4.14 039/146] ASoC: codecs: msm8916-wcd: Fix supported formats

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Srinivas Kandagatla 

commit 51f493ae71adc2c49a317a13c38e54e1cdf46005 upstream.

This codec is configurable for only 16-bit and 32-bit samples, so reflect
this in the supported formats, and remove the 24-bit sample format from the
supported list.

Signed-off-by: Srinivas Kandagatla 
Signed-off-by: Mark Brown 
Signed-off-by: Greg Kroah-Hartman 

---
 sound/soc/codecs/msm8916-wcd-analog.c  |2 +-
 sound/soc/codecs/msm8916-wcd-digital.c |4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

--- a/sound/soc/codecs/msm8916-wcd-analog.c
+++ b/sound/soc/codecs/msm8916-wcd-analog.c
@@ -267,7 +267,7 @@
 #define MSM8916_WCD_ANALOG_RATES (SNDRV_PCM_RATE_8000 | SNDRV_PCM_RATE_16000 |\
SNDRV_PCM_RATE_32000 | SNDRV_PCM_RATE_48000)
 #define MSM8916_WCD_ANALOG_FORMATS (SNDRV_PCM_FMTBIT_S16_LE |\
-   SNDRV_PCM_FMTBIT_S24_LE)
+   SNDRV_PCM_FMTBIT_S32_LE)
 
 static int btn_mask = SND_JACK_BTN_0 | SND_JACK_BTN_1 |
   SND_JACK_BTN_2 | SND_JACK_BTN_3 | SND_JACK_BTN_4;
--- a/sound/soc/codecs/msm8916-wcd-digital.c
+++ b/sound/soc/codecs/msm8916-wcd-digital.c
@@ -194,7 +194,7 @@
   SNDRV_PCM_RATE_32000 | \
   SNDRV_PCM_RATE_48000)
 #define MSM8916_WCD_DIGITAL_FORMATS (SNDRV_PCM_FMTBIT_S16_LE |\
-SNDRV_PCM_FMTBIT_S24_LE)
+SNDRV_PCM_FMTBIT_S32_LE)
 
 struct msm8916_wcd_digital_priv {
struct clk *ahbclk, *mclk;
@@ -645,7 +645,7 @@ static int msm8916_wcd_digital_hw_params
RX_I2S_CTL_RX_I2S_MODE_MASK,
RX_I2S_CTL_RX_I2S_MODE_16);
break;
-   case SNDRV_PCM_FORMAT_S24_LE:
+   case SNDRV_PCM_FORMAT_S32_LE:
snd_soc_update_bits(dai->codec, LPASS_CDC_CLK_TX_I2S_CTL,
TX_I2S_CTL_TX_I2S_MODE_MASK,
TX_I2S_CTL_TX_I2S_MODE_32);




[PATCH 4.14 037/146] ring-buffer: Do no reuse reader page if still in use

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Steven Rostedt (VMware) 

commit ae415fa4c5248a8cf4faabd5a3c20576cb1ad607 upstream.

To free the reader page that is allocated with ring_buffer_alloc_read_page(),
ring_buffer_free_read_page() must be called. For faster performance, this
page can be reused by the ring buffer to avoid having to free and allocate
new pages.

The issue arises when the page is used with a splice pipe into the
networking code. The networking code may up the page counter for the page
and keep it active while it is queued to be sent to the network. The
incremented page ref does not prevent the page from being reused by the
ring buffer, and this can cause the page that is being sent out to the
network to be modified, by new data being read into it, before it is sent.

Add a check to the page ref counter, and only reuse the page if it is not
being used anywhere else.
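
The guard described above can be modeled in plain C. This is an illustrative sketch only: `struct reader_page`, the one-slot `spare` cache, and the `refcount` field are invented stand-ins for the kernel's `struct page`, its recycle cache, and `page_ref_count()`, not real ring-buffer code.

```c
#include <assert.h>
#include <stdlib.h>

/* A reader page may be recycled only when our reference is the last
 * one; if someone else (e.g. a splice into the network stack) still
 * holds a ref, the page must not be handed out for reuse. */
struct reader_page {
	int refcount;             /* stand-in for page_ref_count() */
	char data[64];
};

static struct reader_page *spare;     /* one-slot recycle cache */

/* Returns 1 if the page was recycled, 0 if it was merely released. */
static int free_read_page(struct reader_page *p)
{
	if (p->refcount > 1) {        /* still in use someplace else */
		p->refcount--;        /* drop only our own reference */
		return 0;
	}
	if (!spare) {
		spare = p;            /* safe: we held the last reference */
		return 1;
	}
	free(p);
	return 0;
}
```

A page whose refcount is still elevated is released but never placed in the recycle cache, which is exactly the property the patch restores.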

Fixes: 73a757e63114d ("ring-buffer: Return reader page back into existing ring buffer")
Signed-off-by: Steven Rostedt (VMware) 
Signed-off-by: Greg Kroah-Hartman 

---
 kernel/trace/ring_buffer.c |6 ++
 1 file changed, 6 insertions(+)

--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -4443,8 +4443,13 @@ void ring_buffer_free_read_page(struct r
 {
struct ring_buffer_per_cpu *cpu_buffer = buffer->buffers[cpu];
struct buffer_data_page *bpage = data;
+   struct page *page = virt_to_page(bpage);
unsigned long flags;
 
+   /* If the page is still in use someplace else, we can't reuse it */
+   if (page_ref_count(page) > 1)
+   goto out;
+
local_irq_save(flags);
arch_spin_lock(_buffer->lock);
 
@@ -4456,6 +4461,7 @@ void ring_buffer_free_read_page(struct r
arch_spin_unlock(_buffer->lock);
local_irq_restore(flags);
 
+ out:
free_page((unsigned long)bpage);
 }
 EXPORT_SYMBOL_GPL(ring_buffer_free_read_page);




[PATCH 4.14 042/146] ASoC: fsl_ssi: AC97 ops need regmap, clock and cleaning up on failure

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Maciej S. Szmigiero 

commit 695b78b548d8a26288f041e907ff17758df9e1d5 upstream.

AC'97 ops (register read / write) need the SSI regmap and clock, so they
have to be set after those are initialized.

We also need to set these ops back to NULL if we fail the probe.

Signed-off-by: Maciej S. Szmigiero 
Acked-by: Nicolin Chen 
Signed-off-by: Mark Brown 
Signed-off-by: Greg Kroah-Hartman 

---
 sound/soc/fsl/fsl_ssi.c |   18 --
 1 file changed, 12 insertions(+), 6 deletions(-)

--- a/sound/soc/fsl/fsl_ssi.c
+++ b/sound/soc/fsl/fsl_ssi.c
@@ -1452,12 +1452,6 @@ static int fsl_ssi_probe(struct platform
sizeof(fsl_ssi_ac97_dai));
 
fsl_ac97_data = ssi_private;
-
-   ret = snd_soc_set_ac97_ops_of_reset(&fsl_ssi_ac97_ops, pdev);
-   if (ret) {
-   dev_err(&pdev->dev, "could not set AC'97 ops\n");
-   return ret;
-   }
} else {
/* Initialize this copy of the CPU DAI driver structure */
memcpy(&ssi_private->cpu_dai_drv, &fsl_ssi_dai_template,
@@ -1568,6 +1562,14 @@ static int fsl_ssi_probe(struct platform
return ret;
}
 
+   if (fsl_ssi_is_ac97(ssi_private)) {
+   ret = snd_soc_set_ac97_ops_of_reset(&fsl_ssi_ac97_ops, pdev);
+   if (ret) {
+   dev_err(&pdev->dev, "could not set AC'97 ops\n");
+   goto error_ac97_ops;
+   }
+   }
+
ret = devm_snd_soc_register_component(&pdev->dev, &fsl_ssi_component,
  &ssi_private->cpu_dai_drv, 1);
if (ret) {
@@ -1651,6 +1653,10 @@ error_sound_card:
fsl_ssi_debugfs_remove(&ssi_private->dbg_stats);
 
 error_asoc_register:
+   if (fsl_ssi_is_ac97(ssi_private))
+   snd_soc_set_ac97_ops(NULL);
+
+error_ac97_ops:
if (ssi_private->soc->imx)
fsl_ssi_imx_clean(pdev, ssi_private);
 




[PATCH 4.14 046/146] IB/hfi: Only read capability registers if the capability exists

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Michael J. Ruhl 

commit 4c009af473b2026caaa26107e34d7cc68dad7756 upstream.

During driver init, various registers are saved to allow restoration
after an FLR or gen3 bump.  Some of these registers are not available
in some circumstances (e.g. in virtual machines).

This bug makes the driver unusable when the PCI device is passed into
a VM; it fails during probe.

Delete the unnecessary register read/write, and only access the register
if the capability exists.
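
The pattern the fix applies, skip a save/restore pair entirely when the backing capability is absent rather than treating the absence as an error, can be sketched in plain C. All names here (`dev_model`, `has_tph_cap`, the register fields) are invented for illustration; `has_tph_cap` stands in for a `pci_find_ext_capability()` lookup.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical device model: one register that only exists when the
 * TPH extended capability is advertised (e.g. not inside a VM). */
struct dev_model {
	bool has_tph_cap;        /* stand-in for pci_find_ext_capability() */
	unsigned int tph2_reg;   /* register behind the capability */
	unsigned int saved_tph2;
};

static int save_regs(struct dev_model *d)
{
	if (d->has_tph_cap)          /* only touch the register if present */
		d->saved_tph2 = d->tph2_reg;
	return 0;                    /* a missing capability is not an error */
}

static int restore_regs(struct dev_model *d)
{
	if (d->has_tph_cap)
		d->tph2_reg = d->saved_tph2;
	return 0;
}
```

With this shape, probe succeeds both on bare metal (register round-trips) and in a VM that hides the capability (register is never accessed).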

Fixes: a618b7e40af2 ("IB/hfi1: Move saving PCI values to a separate function")
Reviewed-by: Mike Marciniszyn 
Signed-off-by: Michael J. Ruhl 
Signed-off-by: Dennis Dalessandro 
Signed-off-by: Jason Gunthorpe 
Signed-off-by: Greg Kroah-Hartman 

---
 drivers/infiniband/hw/hfi1/hfi.h  |1 -
 drivers/infiniband/hw/hfi1/pcie.c |   30 --
 2 files changed, 12 insertions(+), 19 deletions(-)

--- a/drivers/infiniband/hw/hfi1/hfi.h
+++ b/drivers/infiniband/hw/hfi1/hfi.h
@@ -1129,7 +1129,6 @@ struct hfi1_devdata {
u16 pcie_lnkctl;
u16 pcie_devctl2;
u32 pci_msix0;
-   u32 pci_lnkctl3;
u32 pci_tph2;
 
/*
--- a/drivers/infiniband/hw/hfi1/pcie.c
+++ b/drivers/infiniband/hw/hfi1/pcie.c
@@ -411,15 +411,12 @@ int restore_pci_variables(struct hfi1_de
if (ret)
goto error;
 
-   ret = pci_write_config_dword(dd->pcidev, PCIE_CFG_SPCIE1,
-dd->pci_lnkctl3);
-   if (ret)
-   goto error;
-
-   ret = pci_write_config_dword(dd->pcidev, PCIE_CFG_TPH2, dd->pci_tph2);
-   if (ret)
-   goto error;
-
+   if (pci_find_ext_capability(dd->pcidev, PCI_EXT_CAP_ID_TPH)) {
+   ret = pci_write_config_dword(dd->pcidev, PCIE_CFG_TPH2,
+dd->pci_tph2);
+   if (ret)
+   goto error;
+   }
return 0;
 
 error:
@@ -469,15 +466,12 @@ int save_pci_variables(struct hfi1_devda
if (ret)
goto error;
 
-   ret = pci_read_config_dword(dd->pcidev, PCIE_CFG_SPCIE1,
-   &dd->pci_lnkctl3);
-   if (ret)
-   goto error;
-
-   ret = pci_read_config_dword(dd->pcidev, PCIE_CFG_TPH2, &dd->pci_tph2);
-   if (ret)
-   goto error;
-
+   if (pci_find_ext_capability(dd->pcidev, PCI_EXT_CAP_ID_TPH)) {
+   ret = pci_read_config_dword(dd->pcidev, PCIE_CFG_TPH2,
+   &dd->pci_tph2);
+   if (ret)
+   goto error;
+   }
return 0;
 
 error:




[PATCH 4.14 045/146] gpio: fix "gpio-line-names" property retrieval

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Christophe Leroy 

commit 822703354774ec935169cbbc8d503236bcb54fda upstream.

Following commit 9427ecbed46cc ("gpio: Rework of_gpiochip_set_names()
to use device property accessors"), "gpio-line-names" DT property is
not retrieved anymore when chip->parent is not set by the driver.
This is due to OF based property reads having been replaced by device
based property reads.

This patch fixes that by making use of
fwnode_property_read_string_array() instead of
device_property_read_string_array() and handing over either
of_fwnode_handle(chip->of_node) or dev_fwnode(chip->parent)
to that function.

Fixes: 9427ecbed46cc ("gpio: Rework of_gpiochip_set_names() to use device property accessors")
Signed-off-by: Christophe Leroy 
Acked-by: Mika Westerberg 
Signed-off-by: Linus Walleij 
Signed-off-by: Greg Kroah-Hartman 

---
 drivers/gpio/gpiolib-acpi.c|2 +-
 drivers/gpio/gpiolib-devprop.c |   17 +++--
 drivers/gpio/gpiolib-of.c  |3 ++-
 drivers/gpio/gpiolib.h |3 ++-
 4 files changed, 12 insertions(+), 13 deletions(-)

--- a/drivers/gpio/gpiolib-acpi.c
+++ b/drivers/gpio/gpiolib-acpi.c
@@ -1074,7 +1074,7 @@ void acpi_gpiochip_add(struct gpio_chip
}
 
if (!chip->names)
-   devprop_gpiochip_set_names(chip);
+   devprop_gpiochip_set_names(chip, dev_fwnode(chip->parent));
 
acpi_gpiochip_request_regions(acpi_gpio);
acpi_gpiochip_scan_gpios(acpi_gpio);
--- a/drivers/gpio/gpiolib-devprop.c
+++ b/drivers/gpio/gpiolib-devprop.c
@@ -19,30 +19,27 @@
 /**
  * devprop_gpiochip_set_names - Set GPIO line names using device properties
  * @chip: GPIO chip whose lines should be named, if possible
+ * @fwnode: Property Node containing the gpio-line-names property
  *
  * Looks for device property "gpio-line-names" and if it exists assigns
  * GPIO line names for the chip. The memory allocated for the assigned
  * names belong to the underlying firmware node and should not be released
  * by the caller.
  */
-void devprop_gpiochip_set_names(struct gpio_chip *chip)
+void devprop_gpiochip_set_names(struct gpio_chip *chip,
+   const struct fwnode_handle *fwnode)
 {
struct gpio_device *gdev = chip->gpiodev;
const char **names;
int ret, i;
 
-   if (!chip->parent) {
-   dev_warn(&gdev->dev, "GPIO chip parent is NULL\n");
-   return;
-   }
-
-   ret = device_property_read_string_array(chip->parent, "gpio-line-names",
+   ret = fwnode_property_read_string_array(fwnode, "gpio-line-names",
NULL, 0);
if (ret < 0)
return;
 
if (ret != gdev->ngpio) {
-   dev_warn(chip->parent,
+   dev_warn(&gdev->dev,
 "names %d do not match number of GPIOs %d\n", ret,
 gdev->ngpio);
return;
@@ -52,10 +49,10 @@ void devprop_gpiochip_set_names(struct g
if (!names)
return;
 
-   ret = device_property_read_string_array(chip->parent, "gpio-line-names",
+   ret = fwnode_property_read_string_array(fwnode, "gpio-line-names",
names, gdev->ngpio);
if (ret < 0) {
-   dev_warn(chip->parent, "failed to read GPIO line names\n");
+   dev_warn(&gdev->dev, "failed to read GPIO line names\n");
kfree(names);
return;
}
--- a/drivers/gpio/gpiolib-of.c
+++ b/drivers/gpio/gpiolib-of.c
@@ -493,7 +493,8 @@ int of_gpiochip_add(struct gpio_chip *ch
 
/* If the chip defines names itself, these take precedence */
if (!chip->names)
-   devprop_gpiochip_set_names(chip);
+   devprop_gpiochip_set_names(chip,
+  of_fwnode_handle(chip->of_node));
 
of_node_get(chip->of_node);
 
--- a/drivers/gpio/gpiolib.h
+++ b/drivers/gpio/gpiolib.h
@@ -224,7 +224,8 @@ static inline int gpio_chip_hwgpio(const
return desc - &desc->gdev->descs[0];
 }
 
-void devprop_gpiochip_set_names(struct gpio_chip *chip);
+void devprop_gpiochip_set_names(struct gpio_chip *chip,
+   const struct fwnode_handle *fwnode);
 
 /* With descriptor prefix */
 




[PATCH 4.14 051/146] ALSA: hda - Add MIC_NO_PRESENCE fixup for 2 HP machines

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Hui Wang 

commit 322f74ede933b3e2cb78768b6a6fdbfbf478a0c1 upstream.

There is a headset jack on the front panel; when we plug a headset
into it, the headset mic can't trigger unsol events, and
read_pin_sense() can't detect its presence either. So add this fixup
to fix the issue.

Signed-off-by: Hui Wang 
Signed-off-by: Takashi Iwai 
Signed-off-by: Greg Kroah-Hartman 

---
 sound/pci/hda/patch_conexant.c |   29 +
 1 file changed, 29 insertions(+)

--- a/sound/pci/hda/patch_conexant.c
+++ b/sound/pci/hda/patch_conexant.c
@@ -271,6 +271,8 @@ enum {
CXT_FIXUP_HP_SPECTRE,
CXT_FIXUP_HP_GATE_MIC,
CXT_FIXUP_MUTE_LED_GPIO,
+   CXT_FIXUP_HEADSET_MIC,
+   CXT_FIXUP_HP_MIC_NO_PRESENCE,
 };
 
 /* for hda_fixup_thinkpad_acpi() */
@@ -350,6 +352,18 @@ static void cxt_fixup_headphone_mic(stru
}
 }
 
+static void cxt_fixup_headset_mic(struct hda_codec *codec,
+   const struct hda_fixup *fix, int action)
+{
+   struct conexant_spec *spec = codec->spec;
+
+   switch (action) {
+   case HDA_FIXUP_ACT_PRE_PROBE:
+   spec->parse_flags |= HDA_PINCFG_HEADSET_MIC;
+   break;
+   }
+}
+
 /* OPLC XO 1.5 fixup */
 
 /* OLPC XO-1.5 supports DC input mode (e.g. for use with analog sensors)
@@ -880,6 +894,19 @@ static const struct hda_fixup cxt_fixups
.type = HDA_FIXUP_FUNC,
.v.func = cxt_fixup_mute_led_gpio,
},
+   [CXT_FIXUP_HEADSET_MIC] = {
+   .type = HDA_FIXUP_FUNC,
+   .v.func = cxt_fixup_headset_mic,
+   },
+   [CXT_FIXUP_HP_MIC_NO_PRESENCE] = {
+   .type = HDA_FIXUP_PINS,
+   .v.pins = (const struct hda_pintbl[]) {
+   { 0x1a, 0x02a1113c },
+   { }
+   },
+   .chained = true,
+   .chain_id = CXT_FIXUP_HEADSET_MIC,
+   },
 };
 
 static const struct snd_pci_quirk cxt5045_fixups[] = {
@@ -934,6 +961,8 @@ static const struct snd_pci_quirk cxt506
SND_PCI_QUIRK(0x103c, 0x8115, "HP Z1 Gen3", CXT_FIXUP_HP_GATE_MIC),
SND_PCI_QUIRK(0x103c, 0x814f, "HP ZBook 15u G3", CXT_FIXUP_MUTE_LED_GPIO),
SND_PCI_QUIRK(0x103c, 0x822e, "HP ProBook 440 G4", CXT_FIXUP_MUTE_LED_GPIO),
+   SND_PCI_QUIRK(0x103c, 0x8299, "HP 800 G3 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+   SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1043, 0x138d, "Asus", CXT_FIXUP_HEADPHONE_MIC_PIN),
SND_PCI_QUIRK(0x152d, 0x0833, "OLPC XO-1.5", CXT_FIXUP_OLPC_XO),
SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo T400", CXT_PINCFG_LENOVO_TP410),




[PATCH 4.14 047/146] IB/mlx5: Serialize access to the VMA list

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Majd Dibbiny 

commit ad9a3668a434faca1339789ed2f043d679199309 upstream.

User-space applications can do mmap and munmap directly at
any time.

Since the VMA list is not protected with a mutex, concurrent
accesses to the VMA list from the mmap and munmap can cause
data corruption. Add a mutex around the list.
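
The locking rule the patch introduces can be sketched as a small userspace model: every add and delete on the shared VMA list takes the same mutex, so the mmap and munmap paths can no longer interleave list updates. This is illustrative only; `ucontext_model`, `vma_node`, and the helper names are invented, and pthreads stand in for the kernel mutex API.

```c
#include <pthread.h>
#include <stddef.h>

struct vma_node {
	struct vma_node *next;
};

struct ucontext_model {
	struct vma_node *vma_list;        /* shared list head */
	pthread_mutex_t vma_list_mutex;   /* protects add/del on the list */
	int count;
};

static void vma_list_add(struct ucontext_model *c, struct vma_node *n)
{
	pthread_mutex_lock(&c->vma_list_mutex);
	n->next = c->vma_list;            /* push front, under the lock */
	c->vma_list = n;
	c->count++;
	pthread_mutex_unlock(&c->vma_list_mutex);
}

static void vma_list_del(struct ucontext_model *c, struct vma_node *n)
{
	pthread_mutex_lock(&c->vma_list_mutex);
	for (struct vma_node **pp = &c->vma_list; *pp; pp = &(*pp)->next) {
		if (*pp == n) {           /* unlink, still under the lock */
			*pp = n->next;
			c->count--;
			break;
		}
	}
	pthread_mutex_unlock(&c->vma_list_mutex);
}
```

Because both paths serialize on one mutex, a concurrent munmap can no longer observe (or corrupt) a half-updated list.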

Fixes: 7c2344c3bbf9 ("IB/mlx5: Implements disassociate_ucontext API")
Reviewed-by: Yishai Hadas 
Signed-off-by: Majd Dibbiny 
Signed-off-by: Leon Romanovsky 
Signed-off-by: Jason Gunthorpe 
Signed-off-by: Greg Kroah-Hartman 

---
 drivers/infiniband/hw/mlx5/main.c|8 
 drivers/infiniband/hw/mlx5/mlx5_ib.h |4 
 2 files changed, 12 insertions(+)

--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -1415,6 +1415,7 @@ static struct ib_ucontext *mlx5_ib_alloc
}
 
INIT_LIST_HEAD(&context->vma_private_list);
+   mutex_init(&context->vma_private_list_mutex);
INIT_LIST_HEAD(&context->db_page_list);
mutex_init(&context->db_page_mutex);
 
@@ -1576,7 +1577,9 @@ static void  mlx5_ib_vma_close(struct vm
 * mlx5_ib_disassociate_ucontext().
 */
mlx5_ib_vma_priv_data->vma = NULL;
+   mutex_lock(mlx5_ib_vma_priv_data->vma_private_list_mutex);
list_del(&mlx5_ib_vma_priv_data->list);
+   mutex_unlock(mlx5_ib_vma_priv_data->vma_private_list_mutex);
kfree(mlx5_ib_vma_priv_data);
 }
 
@@ -1596,10 +1599,13 @@ static int mlx5_ib_set_vma_data(struct v
return -ENOMEM;
 
vma_prv->vma = vma;
+   vma_prv->vma_private_list_mutex = &ctx->vma_private_list_mutex;
vma->vm_private_data = vma_prv;
vma->vm_ops =  &mlx5_ib_vm_ops;
 
+   mutex_lock(&ctx->vma_private_list_mutex);
list_add(&vma_prv->list, vma_head);
+   mutex_unlock(&ctx->vma_private_list_mutex);
 
return 0;
 }
@@ -1642,6 +1648,7 @@ static void mlx5_ib_disassociate_ucontex
 * mlx5_ib_vma_close.
 */
down_write(&owning_mm->mmap_sem);
+   mutex_lock(&context->vma_private_list_mutex);
list_for_each_entry_safe(vma_private, n, &context->vma_private_list,
 list) {
vma = vma_private->vma;
@@ -1656,6 +1663,7 @@ static void mlx5_ib_disassociate_ucontex
list_del(&vma_private->list);
kfree(vma_private);
}
+   mutex_unlock(&context->vma_private_list_mutex);
up_write(&owning_mm->mmap_sem);
mmput(owning_mm);
put_task_struct(owning_process);
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -115,6 +115,8 @@ enum {
 struct mlx5_ib_vma_private_data {
struct list_head list;
struct vm_area_struct *vma;
+   /* protect vma_private_list add/del */
+   struct mutex *vma_private_list_mutex;
 };
 
 struct mlx5_ib_ucontext {
@@ -129,6 +131,8 @@ struct mlx5_ib_ucontext {
/* Transport Domain number */
u32 tdn;
struct list_headvma_private_list;
+   /* protect vma_private_list add/del */
+   struct mutexvma_private_list_mutex;
 
unsigned long   upd_xlt_page;
/* protect ODP/KSM */




[PATCH 4.14 050/146] ALSA: hda: Drop useless WARN_ON()

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Takashi Iwai 

commit a36c2638380c0a4676647a1f553b70b20d3ebce1 upstream.

Since the commit 97cc2ed27e5a ("ALSA: hda - Fix yet another i915
pointer leftover in error path") cleared hdac_acomp pointer, the
WARN_ON() non-NULL check in snd_hdac_i915_register_notifier() may give
a false-positive warning, as the function gets called no matter
whether the component is registered or not.  For fixing it, let's get
rid of the spurious WARN_ON().

Fixes: 97cc2ed27e5a ("ALSA: hda - Fix yet another i915 pointer leftover in 
error path")
Reported-by: Kouta Okamoto 
Signed-off-by: Takashi Iwai 
Signed-off-by: Greg Kroah-Hartman 

---
 sound/hda/hdac_i915.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/sound/hda/hdac_i915.c
+++ b/sound/hda/hdac_i915.c
@@ -325,7 +325,7 @@ static int hdac_component_master_match(s
  */
 int snd_hdac_i915_register_notifier(const struct i915_audio_component_audio_ops *aops)
 {
-   if (WARN_ON(!hdac_acomp))
+   if (!hdac_acomp)
return -ENODEV;
 
hdac_acomp->audio_ops = aops;




[PATCH 4.14 060/146] ipv6: mcast: better catch silly mtu values

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Eric Dumazet 


[ Upstream commit b9b312a7a451e9c098921856e7cfbc201120e1a7 ]

syzkaller reported crashes in IPv6 stack [1]

Xin Long found that lo MTU was set to silly values.

The IPv6 stack reacts to a change to a too-small MTU by disabling itself
under RTNL.

But there is a window where threads not using RTNL can see a wrong
device mtu. This can lead to surprises in the mld code, where it is
assumed the mtu is suitable.

Fix this by reading device mtu once and checking IPv6 minimal MTU.
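
The "read once, validate, then reuse the snapshot" pattern can be shown in a few lines of plain C. This is a sketch, not the mld code: `device_mtu` stands in for the concurrently-updated `dev->mtu`, and the plain load stands in for `READ_ONCE()`.

```c
#include <assert.h>

#define IPV6_MIN_MTU 1280U

static unsigned int device_mtu = 1500;   /* may change under us */

/* Returns the validated snapshot, or 0 to reject a silly MTU. */
static unsigned int mtu_snapshot(void)
{
	unsigned int mtu = device_mtu;   /* single read of the shared value */

	if (mtu < IPV6_MIN_MTU)
		return 0;                /* refuse before building packets */
	return mtu;                      /* all later sizing uses this copy */
}
```

Because every later decision is made from the one local snapshot, a concurrent MTU change can no longer make the allocation and the fill paths disagree about the packet size.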

[1]
 skbuff: skb_over_panic: text:10b86b8d len:196 put:20
 head:3b477e60 data:0e85441e tail:0xd4 end:0xc0 dev:lo
 [ cut here ]
 kernel BUG at net/core/skbuff.c:104!
 invalid opcode:  [#1] SMP KASAN
 Dumping ftrace buffer:
(ftrace buffer empty)
 Modules linked in:
 CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.15.0-rc2-mm1+ #39
 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
 Google 01/01/2011
 RIP: 0010:skb_panic+0x15c/0x1f0 net/core/skbuff.c:100
 RSP: 0018:8801db307508 EFLAGS: 00010286
 RAX: 0082 RBX: 8801c517e840 RCX: 
 RDX: 0082 RSI: 11003b660e61 RDI: ed003b660e95
 RBP: 8801db307570 R08: 11003b660e23 R09: 
 R10:  R11:  R12: 85bd4020
 R13: 84754ed2 R14: 0014 R15: 8801c4e26540
 FS:  () GS:8801db30() knlGS:
 CS:  0010 DS:  ES:  CR0: 80050033
 CR2: 00463610 CR3: 0001c6698000 CR4: 001406e0
 DR0:  DR1:  DR2: 
 DR3:  DR6: fffe0ff0 DR7: 0400
 Call Trace:
  
  skb_over_panic net/core/skbuff.c:109 [inline]
  skb_put+0x181/0x1c0 net/core/skbuff.c:1694
  add_grhead.isra.24+0x42/0x3b0 net/ipv6/mcast.c:1695
  add_grec+0xa55/0x1060 net/ipv6/mcast.c:1817
  mld_send_cr net/ipv6/mcast.c:1903 [inline]
  mld_ifc_timer_expire+0x4d2/0x770 net/ipv6/mcast.c:2448
  call_timer_fn+0x23b/0x840 kernel/time/timer.c:1320
  expire_timers kernel/time/timer.c:1357 [inline]
  __run_timers+0x7e1/0xb60 kernel/time/timer.c:1660
  run_timer_softirq+0x4c/0xb0 kernel/time/timer.c:1686
  __do_softirq+0x29d/0xbb2 kernel/softirq.c:285
  invoke_softirq kernel/softirq.c:365 [inline]
  irq_exit+0x1d3/0x210 kernel/softirq.c:405
  exiting_irq arch/x86/include/asm/apic.h:540 [inline]
  smp_apic_timer_interrupt+0x16b/0x700 arch/x86/kernel/apic/apic.c:1052
  apic_timer_interrupt+0xa9/0xb0 arch/x86/entry/entry_64.S:920

Signed-off-by: Eric Dumazet 
Reported-by: syzbot 
Tested-by: Xin Long 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/ipv6/mcast.c |   25 +++--
 1 file changed, 15 insertions(+), 10 deletions(-)

--- a/net/ipv6/mcast.c
+++ b/net/ipv6/mcast.c
@@ -1682,16 +1682,16 @@ static int grec_size(struct ifmcaddr6 *p
 }
 
 static struct sk_buff *add_grhead(struct sk_buff *skb, struct ifmcaddr6 *pmc,
-   int type, struct mld2_grec **ppgr)
+   int type, struct mld2_grec **ppgr, unsigned int mtu)
 {
-   struct net_device *dev = pmc->idev->dev;
struct mld2_report *pmr;
struct mld2_grec *pgr;
 
-   if (!skb)
-   skb = mld_newpack(pmc->idev, dev->mtu);
-   if (!skb)
-   return NULL;
+   if (!skb) {
+   skb = mld_newpack(pmc->idev, mtu);
+   if (!skb)
+   return NULL;
+   }
pgr = skb_put(skb, sizeof(struct mld2_grec));
pgr->grec_type = type;
pgr->grec_auxwords = 0;
@@ -1714,10 +1714,15 @@ static struct sk_buff *add_grec(struct s
struct mld2_grec *pgr = NULL;
struct ip6_sf_list *psf, *psf_next, *psf_prev, **psf_list;
int scount, stotal, first, isquery, truncate;
+   unsigned int mtu;
 
if (pmc->mca_flags & MAF_NOREPORT)
return skb;
 
+   mtu = READ_ONCE(dev->mtu);
+   if (mtu < IPV6_MIN_MTU)
+   return skb;
+
isquery = type == MLD2_MODE_IS_INCLUDE ||
  type == MLD2_MODE_IS_EXCLUDE;
truncate = type == MLD2_MODE_IS_EXCLUDE ||
@@ -1738,7 +1743,7 @@ static struct sk_buff *add_grec(struct s
AVAILABLE(skb) < grec_size(pmc, type, gdeleted, sdeleted)) {
if (skb)
mld_sendpack(skb);
-   skb = mld_newpack(idev, dev->mtu);
+   skb = mld_newpack(idev, mtu);
}
}
first = 1;
@@ -1774,12 +1779,12 @@ static struct sk_buff *add_grec(struct s
pgr->grec_nsrcs = htons(scount);
if (skb)
mld_sendpack(skb);
-   skb = mld_newpack(idev, dev->mtu);
+   skb = mld_newpack(idev, mtu);

[PATCH 4.14 057/146] block: don't let passthrough IO go into .make_request_fn()

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Ming Lei 

commit 14cb0dc6479dc5ebc63b3a459a5d89a2f1b39fed upstream.

Commit a8821f3f3 ("block: Improvements to bounce-buffer handling") tries
to make sure that the bio to .make_request_fn won't exceed BIO_MAX_PAGES,
but ignores that passthrough I/O can use blk_queue_bounce() too.
Especially, passthrough IO may not be sector-aligned, and the check
of 'sectors < bio_sectors(*bio_orig)' inside __blk_queue_bounce() may
become true even though the max bvec number doesn't exceed BIO_MAX_PAGES,
which then causes the bio to be split and the original passthrough bio
to be submitted to generic_make_request().

This patch fixes the issue by checking if the bio is passthrough IO,
and using bio_kmalloc() to allocate the cloned passthrough bio.
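
The dispatch rule can be reduced to a tiny decision function: passthrough IO never takes the split path, regardless of size, while filesystem bios may still be split before cloning. The enum names and the helper below are invented for illustration; they model the control flow, not the block layer's API.

```c
#include <stdbool.h>

enum bio_kind   { BIO_FS, BIO_PASSTHROUGH };
enum bounce_path { PATH_SPLIT_AND_CLONE, PATH_CLONE_ONLY };

/* Decide how __blk_queue_bounce()-like code should handle a bio. */
static enum bounce_path pick_bounce_path(enum bio_kind kind, bool needs_split)
{
	if (kind == BIO_PASSTHROUGH)
		return PATH_CLONE_ONLY;   /* never split passthrough IO */
	return needs_split ? PATH_SPLIT_AND_CLONE : PATH_CLONE_ONLY;
}
```

The key property is that the `needs_split` heuristic is consulted only for filesystem bios, so an unaligned passthrough request can no longer be split behind the submitter's back.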

Cc: NeilBrown 
Fixes: a8821f3f3("block: Improvements to bounce-buffer handling")
Tested-by: Michele Ballabio 
Signed-off-by: Ming Lei 
Signed-off-by: Jens Axboe 
Signed-off-by: Greg Kroah-Hartman 

---
 block/bounce.c |6 --
 include/linux/blkdev.h |   21 +++--
 2 files changed, 23 insertions(+), 4 deletions(-)

--- a/block/bounce.c
+++ b/block/bounce.c
@@ -200,6 +200,7 @@ static void __blk_queue_bounce(struct re
unsigned i = 0;
bool bounce = false;
int sectors = 0;
+   bool passthrough = bio_is_passthrough(*bio_orig);
 
bio_for_each_segment(from, *bio_orig, iter) {
if (i++ < BIO_MAX_PAGES)
@@ -210,13 +211,14 @@ static void __blk_queue_bounce(struct re
if (!bounce)
return;
 
-   if (sectors < bio_sectors(*bio_orig)) {
+   if (!passthrough && sectors < bio_sectors(*bio_orig)) {
bio = bio_split(*bio_orig, sectors, GFP_NOIO, bounce_bio_split);
bio_chain(bio, *bio_orig);
generic_make_request(*bio_orig);
*bio_orig = bio;
}
-   bio = bio_clone_bioset(*bio_orig, GFP_NOIO, bounce_bio_set);
+   bio = bio_clone_bioset(*bio_orig, GFP_NOIO, passthrough ? NULL :
+   bounce_bio_set);
 
bio_for_each_segment_all(to, bio, i) {
struct page *page = to->bv_page;
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -241,14 +241,24 @@ struct request {
struct request *next_rq;
 };
 
+static inline bool blk_op_is_scsi(unsigned int op)
+{
+   return op == REQ_OP_SCSI_IN || op == REQ_OP_SCSI_OUT;
+}
+
+static inline bool blk_op_is_private(unsigned int op)
+{
+   return op == REQ_OP_DRV_IN || op == REQ_OP_DRV_OUT;
+}
+
 static inline bool blk_rq_is_scsi(struct request *rq)
 {
-   return req_op(rq) == REQ_OP_SCSI_IN || req_op(rq) == REQ_OP_SCSI_OUT;
+   return blk_op_is_scsi(req_op(rq));
 }
 
 static inline bool blk_rq_is_private(struct request *rq)
 {
-   return req_op(rq) == REQ_OP_DRV_IN || req_op(rq) == REQ_OP_DRV_OUT;
+   return blk_op_is_private(req_op(rq));
 }
 
 static inline bool blk_rq_is_passthrough(struct request *rq)
@@ -256,6 +266,13 @@ static inline bool blk_rq_is_passthrough
return blk_rq_is_scsi(rq) || blk_rq_is_private(rq);
 }
 
+static inline bool bio_is_passthrough(struct bio *bio)
+{
+   unsigned op = bio_op(bio);
+
+   return blk_op_is_scsi(op) || blk_op_is_private(op);
+}
+
 static inline unsigned short req_get_ioprio(struct request *req)
 {
return req->ioprio;




[PATCH 4.14 054/146] ALSA: hda - Fix missing COEF init for ALC225/295/299

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Takashi Iwai 

commit 44be77c590f381bc629815ac789b8b15ecc4ddcf upstream.

There was a long-standing problem on HP Spectre X360 with Kabylake
where it lacks the front speaker output in some situations.  Also
there are other products showing similar behavior.  The culprit
seems to be the missing COEF setup on ALC codecs ALC225/295/299,
which are all compatible.

This patch adds the proper COEF setup (to initialize idx 0x67 / bits
0x3000) for addressing the issue.

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=195457
Signed-off-by: Takashi Iwai 
Signed-off-by: Greg Kroah-Hartman 

---
 sound/pci/hda/patch_realtek.c |8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -324,8 +324,12 @@ static void alc_fill_eapd_coef(struct hd
case 0x10ec0292:
alc_update_coef_idx(codec, 0x4, 1<<15, 0);
break;
-   case 0x10ec0215:
case 0x10ec0225:
+   case 0x10ec0295:
+   case 0x10ec0299:
+   alc_update_coef_idx(codec, 0x67, 0xf000, 0x3000);
+   /* fallthrough */
+   case 0x10ec0215:
case 0x10ec0233:
case 0x10ec0236:
case 0x10ec0255:
@@ -336,10 +340,8 @@ static void alc_fill_eapd_coef(struct hd
case 0x10ec0286:
case 0x10ec0288:
case 0x10ec0285:
-   case 0x10ec0295:
case 0x10ec0298:
case 0x10ec0289:
-   case 0x10ec0299:
alc_update_coef_idx(codec, 0x10, 1<<9, 0);
break;
case 0x10ec0275:




[PATCH 4.14 052/146] ALSA: hda - change the location for one mic on a Lenovo machine

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Hui Wang 

commit 8da5bbfc7cbba909f4f32d5e1dda3750baa5d853 upstream.

There are two front mics on this machine, and the current driver assigns
the same name "Mic" to both of them, but PulseAudio can't handle that.
As a workaround, we change the location for one of them, then the
driver will assign "Front Mic" and "Mic" to them.

Signed-off-by: Hui Wang 
Signed-off-by: Takashi Iwai 
Signed-off-by: Greg Kroah-Hartman 

---
 sound/pci/hda/patch_realtek.c |1 +
 1 file changed, 1 insertion(+)

--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -6305,6 +6305,7 @@ static const struct snd_pci_quirk alc269
SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
SND_PCI_QUIRK(0x17aa, 0x30e2, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
SND_PCI_QUIRK(0x17aa, 0x310c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+   SND_PCI_QUIRK(0x17aa, 0x313c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
SND_PCI_QUIRK(0x17aa, 0x3112, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),




[PATCH 4.14 056/146] block: fix blk_rq_append_bio

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Jens Axboe 

commit 0abc2a10389f0c9070f76ca906c7382788036b93 upstream.

Commit caa4b02476e3 ("blk-map: call blk_queue_bounce from blk_rq_append_bio")
moves blk_queue_bounce() into blk_rq_append_bio(), but doesn't consider
the fact that the bounced bio becomes invisible to the caller since the
parameter type is 'struct bio *'. Make it a pointer to a pointer to
a bio, so the caller sees the right bio also after a bounce.
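
The API-shape change, passing a pointer-to-pointer so a helper that may replace the object is visible to its caller, is easy to demonstrate outside the kernel. The names below (`bio_model`, `append_bio`) are invented for illustration; the malloc'd clone stands in for the bounced bio.

```c
#include <assert.h>
#include <stdlib.h>

struct bio_model {
	int id;
};

/* A helper that may swap in a replacement object must take a
 * pointer-to-pointer, so the caller's variable tracks the swap. */
static int append_bio(struct bio_model **bio)
{
	struct bio_model *bounced = malloc(sizeof(**bio));

	if (!bounced)
		return -1;
	bounced->id = (*bio)->id;    /* clone the original's payload */
	*bio = bounced;              /* caller now sees the bounce clone */
	return 0;
}
```

Had the parameter been a plain `struct bio_model *`, the assignment to `*bio` would update only the helper's local copy and the caller would keep operating on the stale original, which is exactly the bug being fixed.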

Fixes: caa4b02476e3 ("blk-map: call blk_queue_bounce from blk_rq_append_bio")
Cc: Christoph Hellwig 
Reported-by: Michele Ballabio 
(handling failure of blk_rq_append_bio(), only call bio_get() after
blk_rq_append_bio() returns OK)
Tested-by: Michele Ballabio 
Signed-off-by: Ming Lei 
Signed-off-by: Jens Axboe 
Signed-off-by: Greg Kroah-Hartman 

---
 block/blk-map.c|   38 +
 drivers/scsi/osd/osd_initiator.c   |4 ++-
 drivers/target/target_core_pscsi.c |4 +--
 include/linux/blkdev.h |2 -
 4 files changed, 28 insertions(+), 20 deletions(-)

--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -12,22 +12,29 @@
 #include "blk.h"
 
 /*
- * Append a bio to a passthrough request.  Only works can be merged into
- * the request based on the driver constraints.
+ * Append a bio to a passthrough request.  Only works if the bio can be merged
+ * into the request based on the driver constraints.
  */
-int blk_rq_append_bio(struct request *rq, struct bio *bio)
+int blk_rq_append_bio(struct request *rq, struct bio **bio)
 {
-   blk_queue_bounce(rq->q, &bio);
+   struct bio *orig_bio = *bio;
+
+   blk_queue_bounce(rq->q, bio);
 
if (!rq->bio) {
-   blk_rq_bio_prep(rq->q, rq, bio);
+   blk_rq_bio_prep(rq->q, rq, *bio);
} else {
-   if (!ll_back_merge_fn(rq->q, rq, bio))
+   if (!ll_back_merge_fn(rq->q, rq, *bio)) {
+   if (orig_bio != *bio) {
+   bio_put(*bio);
+   *bio = orig_bio;
+   }
return -EINVAL;
+   }
 
-   rq->biotail->bi_next = bio;
-   rq->biotail = bio;
-   rq->__data_len += bio->bi_iter.bi_size;
+   rq->biotail->bi_next = *bio;
+   rq->biotail = *bio;
+   rq->__data_len += (*bio)->bi_iter.bi_size;
}
 
return 0;
@@ -80,14 +87,12 @@ static int __blk_rq_map_user_iov(struct
 * We link the bounce buffer in and could have to traverse it
 * later so we have to get a ref to prevent it from being freed
 */
-   ret = blk_rq_append_bio(rq, bio);
-   bio_get(bio);
+   ret = blk_rq_append_bio(rq, &bio);
if (ret) {
-   bio_endio(bio);
__blk_rq_unmap_user(orig_bio);
-   bio_put(bio);
return ret;
}
+   bio_get(bio);
 
return 0;
 }
@@ -220,7 +225,7 @@ int blk_rq_map_kern(struct request_queue
int reading = rq_data_dir(rq) == READ;
unsigned long addr = (unsigned long) kbuf;
int do_copy = 0;
-   struct bio *bio;
+   struct bio *bio, *orig_bio;
int ret;
 
if (len > (queue_max_hw_sectors(q) << 9))
@@ -243,10 +248,11 @@ int blk_rq_map_kern(struct request_queue
if (do_copy)
rq->rq_flags |= RQF_COPY_USER;
 
-   ret = blk_rq_append_bio(rq, bio);
+   orig_bio = bio;
+   ret = blk_rq_append_bio(rq, &bio);
if (unlikely(ret)) {
/* request is too big */
-   bio_put(bio);
+   bio_put(orig_bio);
return ret;
}
 
--- a/drivers/scsi/osd/osd_initiator.c
+++ b/drivers/scsi/osd/osd_initiator.c
@@ -1576,7 +1576,9 @@ static struct request *_make_request(str
return req;
 
for_each_bio(bio) {
-   ret = blk_rq_append_bio(req, bio);
+   struct bio *bounce_bio = bio;
+
+   ret = blk_rq_append_bio(req, &bounce_bio);
if (ret)
return ERR_PTR(ret);
}
--- a/drivers/target/target_core_pscsi.c
+++ b/drivers/target/target_core_pscsi.c
@@ -920,7 +920,7 @@ pscsi_map_sg(struct se_cmd *cmd, struct
" %d i: %d bio: %p, allocating another"
" bio\n", bio->bi_vcnt, i, bio);
 
-   rc = blk_rq_append_bio(req, bio);
+   rc = blk_rq_append_bio(req, &bio);
if (rc) {
pr_err("pSCSI: failed to append bio\n");
goto fail;
@@ -938,7 +938,7 @@ pscsi_map_sg(struct se_cmd *cmd, struct
}
 
if (bio) {
-   rc = blk_rq_append_bio(req, bio);
+   rc = blk_rq_append_bio(req, &bio);
 

[PATCH 4.14 065/146] net: reevalulate autoflowlabel setting after sysctl setting

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Shaohua Li 


[ Upstream commit 513674b5a2c9c7a67501506419da5c3c77ac6f08 ]

sysctl.ip6.auto_flowlabels defaults to 1. On our hosts, we set it to 2.
If the sockopt doesn't set autoflowlabel, outgoing packets from the hosts
are supposed to not include a flowlabel. This is true for normal packets,
but not for reset packets.

The reason is that ipv6_pinfo.autoflowlabel is set at sock creation. If
we later change sysctl.ip6.auto_flowlabels, ipv6_pinfo.autoflowlabel isn't
changed, so the sock keeps the old behavior in terms of auto flowlabel.
Reset packets suffer from this problem because they are sent from a
special control socket, which is created at boot time. Since
sysctl.ipv6.auto_flowlabels is 1 by default, the control socket will
always have its ipv6_pinfo.autoflowlabel set, even after the user sets
sysctl.ipv6.auto_flowlabels to 2, so reset packets will always have a
flowlabel. Normal socks created before the sysctl change suffer from the
same issue. We can't even turn off autoflowlabel unless we kill all
socks on the hosts.

To fix this, if IPV6_AUTOFLOWLABEL sockopt is used, we use the
autoflowlabel setting from user, otherwise we always call
ip6_default_np_autolabel() which has the new settings of sysctl.

Note, this changes behavior a little bit. Before commit 42240901f7c4
("ipv6: Implement different admin modes for automatic flow labels"), the
autoflowlabel behavior of a sock wasn't sticky, e.g. if the sysctl changed,
existing connections would change their autoflowlabel behavior. After that
commit, autoflowlabel behavior is sticky for the whole life of the sock.
With this patch, the behavior isn't sticky again.

Cc: Martin KaFai Lau 
Cc: Eric Dumazet 
Cc: Tom Herbert 
Signed-off-by: Shaohua Li 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 include/linux/ipv6.h |3 ++-
 net/ipv6/af_inet6.c  |1 -
 net/ipv6/ip6_output.c|   12 ++--
 net/ipv6/ipv6_sockglue.c |1 +
 4 files changed, 13 insertions(+), 4 deletions(-)

--- a/include/linux/ipv6.h
+++ b/include/linux/ipv6.h
@@ -272,7 +272,8 @@ struct ipv6_pinfo {
 * 100: prefer care-of address
 */
dontfrag:1,
-   autoflowlabel:1;
+   autoflowlabel:1,
+   autoflowlabel_set:1;
__u8min_hopcount;
__u8tclass;
__be32  rcv_flowinfo;
--- a/net/ipv6/af_inet6.c
+++ b/net/ipv6/af_inet6.c
@@ -210,7 +210,6 @@ lookup_protocol:
np->mcast_hops  = IPV6_DEFAULT_MCASTHOPS;
np->mc_loop = 1;
np->pmtudisc= IPV6_PMTUDISC_WANT;
-   np->autoflowlabel = ip6_default_np_autolabel(net);
np->repflow = net->ipv6.sysctl.flowlabel_reflect;
sk->sk_ipv6only = net->ipv6.sysctl.bindv6only;
 
--- a/net/ipv6/ip6_output.c
+++ b/net/ipv6/ip6_output.c
@@ -166,6 +166,14 @@ int ip6_output(struct net *net, struct s
!(IP6CB(skb)->flags & IP6SKB_REROUTED));
 }
 
+static bool ip6_autoflowlabel(struct net *net, const struct ipv6_pinfo *np)
+{
+   if (!np->autoflowlabel_set)
+   return ip6_default_np_autolabel(net);
+   else
+   return np->autoflowlabel;
+}
+
 /*
  * xmit an sk_buff (used by TCP, SCTP and DCCP)
  * Note : socket lock is not held for SYNACK packets, but might be modified
@@ -230,7 +238,7 @@ int ip6_xmit(const struct sock *sk, stru
hlimit = ip6_dst_hoplimit(dst);
 
ip6_flow_hdr(hdr, tclass, ip6_make_flowlabel(net, skb, fl6->flowlabel,
-np->autoflowlabel, fl6));
+   ip6_autoflowlabel(net, np), fl6));
 
hdr->payload_len = htons(seg_len);
hdr->nexthdr = proto;
@@ -1626,7 +1634,7 @@ struct sk_buff *__ip6_make_skb(struct so
 
ip6_flow_hdr(hdr, v6_cork->tclass,
 ip6_make_flowlabel(net, skb, fl6->flowlabel,
-   np->autoflowlabel, fl6));
+   ip6_autoflowlabel(net, np), fl6));
hdr->hop_limit = v6_cork->hop_limit;
hdr->nexthdr = proto;
hdr->saddr = fl6->saddr;
--- a/net/ipv6/ipv6_sockglue.c
+++ b/net/ipv6/ipv6_sockglue.c
@@ -878,6 +878,7 @@ pref_skip_coa:
break;
case IPV6_AUTOFLOWLABEL:
np->autoflowlabel = valbool;
+   np->autoflowlabel_set = 1;
retv = 0;
break;
case IPV6_RECVFRAGSIZE:




[PATCH 4.14 062/146] net: igmp: Use correct source address on IGMPv3 reports

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Kevin Cernekee 


[ Upstream commit a46182b00290839fa3fa159d54fd3237bd8669f0 ]

Closing a multicast socket after the final IPv4 address is deleted
from an interface can generate a membership report that uses the
source IP from a different interface.  The following test script, run
from an isolated netns, reproduces the issue:

#!/bin/bash

ip link add dummy0 type dummy
ip link add dummy1 type dummy
ip link set dummy0 up
ip link set dummy1 up
ip addr add 10.1.1.1/24 dev dummy0
ip addr add 192.168.99.99/24 dev dummy1

tcpdump -U -i dummy0 &
socat EXEC:"sleep 2" \
UDP4-DATAGRAM:239.101.1.68:8889,ip-add-membership=239.0.1.68:10.1.1.1 &

sleep 1
ip addr del 10.1.1.1/24 dev dummy0
sleep 5
kill %tcpdump

RFC 3376 specifies that the report must be sent with a valid IP source
address from the destination subnet, or from address 0.0.0.0.  Add an
extra check to make sure this is the case.

Signed-off-by: Kevin Cernekee 
Reviewed-by: Andrew Lunn 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/ipv4/igmp.c |   20 +++-
 1 file changed, 19 insertions(+), 1 deletion(-)

--- a/net/ipv4/igmp.c
+++ b/net/ipv4/igmp.c
@@ -89,6 +89,7 @@
 #include <linux/rtnetlink.h>
 #include <linux/times.h>
 #include <linux/pkt_sched.h>
+#include <linux/byteorder/generic.h>
 
 #include <net/net_namespace.h>
 #include <net/arp.h>
@@ -321,6 +322,23 @@ igmp_scount(struct ip_mc_list *pmc, int
return scount;
 }
 
+/* source address selection per RFC 3376 section 4.2.13 */
+static __be32 igmpv3_get_srcaddr(struct net_device *dev,
+const struct flowi4 *fl4)
+{
+   struct in_device *in_dev = __in_dev_get_rcu(dev);
+
+   if (!in_dev)
+   return htonl(INADDR_ANY);
+
+   for_ifa(in_dev) {
+   if (inet_ifa_match(fl4->saddr, ifa))
+   return fl4->saddr;
+   } endfor_ifa(in_dev);
+
+   return htonl(INADDR_ANY);
+}
+
 static struct sk_buff *igmpv3_newpack(struct net_device *dev, unsigned int mtu)
 {
struct sk_buff *skb;
@@ -368,7 +386,7 @@ static struct sk_buff *igmpv3_newpack(st
pip->frag_off = htons(IP_DF);
pip->ttl  = 1;
pip->daddr= fl4.daddr;
-   pip->saddr= fl4.saddr;
+   pip->saddr= igmpv3_get_srcaddr(dev, &fl4);
pip->protocol = IPPROTO_IGMP;
pip->tot_len  = 0;  /* filled in later */
ip_select_ident(net, skb, NULL);




[PATCH 4.14 063/146] netlink: Add netns check on taps

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Kevin Cernekee 


[ Upstream commit 93c647643b48f0131f02e45da3bd367d80443291 ]

Currently, a nlmon link inside a child namespace can observe systemwide
netlink activity.  Filter the traffic so that nlmon can only sniff
netlink messages from its own netns.

Test case:

vpnns -- bash -c "ip link add nlmon0 type nlmon; \
  ip link set nlmon0 up; \
  tcpdump -i nlmon0 -q -w /tmp/nlmon.pcap -U" &
sudo ip xfrm state add src 10.1.1.1 dst 10.1.1.2 proto esp \
spi 0x1 mode transport \
auth sha1 0x616263313233 \
enc aes 0x
grep --binary abc123 /tmp/nlmon.pcap

Signed-off-by: Kevin Cernekee 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/netlink/af_netlink.c |3 +++
 1 file changed, 3 insertions(+)

--- a/net/netlink/af_netlink.c
+++ b/net/netlink/af_netlink.c
@@ -254,6 +254,9 @@ static int __netlink_deliver_tap_skb(str
struct sock *sk = skb->sk;
int ret = -ENOMEM;
 
+   if (!net_eq(dev_net(dev), sock_net(sk)))
+   return 0;
+
dev_hold(dev);
 
if (is_vmalloc_addr(skb->head))




[PATCH 4.14 031/146] x86/mm/pti: Add Kconfig

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Dave Hansen 

commit 385ce0ea4c078517fa51c261882c4e72fba53005 upstream.

Finally allow CONFIG_PAGE_TABLE_ISOLATION to be enabled.

PARAVIRT generally requires that the kernel not manage its own page tables.
It also means that the hypervisor and kernel must agree wholeheartedly
about what format the page tables are in and what they contain.
PAGE_TABLE_ISOLATION, unfortunately, changes the rules and they
can not be used together.

I've seen conflicting feedback from maintainers lately about whether they
want the Kconfig magic to go first or last in a patch series.  It's going
last here because the partially-applied series leads to kernels that can
not boot in a bunch of cases.  I did a run through the entire series with
CONFIG_PAGE_TABLE_ISOLATION=y to look for build errors, though.

[ tglx: Removed SMP and !PARAVIRT dependencies as they no longer exist ]

Signed-off-by: Dave Hansen 
Signed-off-by: Thomas Gleixner 
Cc: Andy Lutomirski 
Cc: Boris Ostrovsky 
Cc: Borislav Petkov 
Cc: Brian Gerst 
Cc: David Laight 
Cc: Denys Vlasenko 
Cc: Eduardo Valentin 
Cc: Greg KH 
Cc: H. Peter Anvin 
Cc: Josh Poimboeuf 
Cc: Juergen Gross 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Will Deacon 
Cc: aligu...@amazon.com
Cc: daniel.gr...@iaik.tugraz.at
Cc: hu...@google.com
Cc: keesc...@google.com
Cc: linux...@kvack.org
Signed-off-by: Ingo Molnar 
Signed-off-by: Greg Kroah-Hartman 

---
 security/Kconfig |   10 ++
 1 file changed, 10 insertions(+)

--- a/security/Kconfig
+++ b/security/Kconfig
@@ -54,6 +54,16 @@ config SECURITY_NETWORK
  implement socket and networking access controls.
  If you are unsure how to answer this question, answer N.
 
+config PAGE_TABLE_ISOLATION
+   bool "Remove the kernel mapping in user mode"
+   depends on X86_64 && !UML
+   help
+ This feature reduces the number of hardware side channels by
+ ensuring that the majority of kernel addresses are not mapped
+ into userspace.
+
+ See Documentation/x86/pagetable-isolation.txt for more details.
+
 config SECURITY_INFINIBAND
bool "Infiniband Security Hooks"
depends on SECURITY && INFINIBAND




[PATCH 4.14 076/146] s390/qeth: update takeover IPs after configuration change

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Julian Wiedmann 


[ Upstream commit 02f510f326501470348a5df341e8232c3497 ]

Any modification to the takeover IP-ranges requires that we re-evaluate
which IP addresses are takeover-eligible. Otherwise we might do takeover
for some addresses when we no longer should, or vice-versa.

Signed-off-by: Julian Wiedmann 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 drivers/s390/net/qeth_core.h  |4 +-
 drivers/s390/net/qeth_core_main.c |4 +-
 drivers/s390/net/qeth_l3.h|2 -
 drivers/s390/net/qeth_l3_main.c   |   31 --
 drivers/s390/net/qeth_l3_sys.c|   63 --
 5 files changed, 67 insertions(+), 37 deletions(-)

--- a/drivers/s390/net/qeth_core.h
+++ b/drivers/s390/net/qeth_core.h
@@ -565,8 +565,8 @@ enum qeth_cq {
 
 struct qeth_ipato {
bool enabled;
-   int invert4;
-   int invert6;
+   bool invert4;
+   bool invert6;
struct list_head entries;
 };
 
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -1480,8 +1480,8 @@ static int qeth_setup_card(struct qeth_c
/* IP address takeover */
	INIT_LIST_HEAD(&card->ipato.entries);
card->ipato.enabled = false;
-   card->ipato.invert4 = 0;
-   card->ipato.invert6 = 0;
+   card->ipato.invert4 = false;
+   card->ipato.invert6 = false;
/* init QDIO stuff */
qeth_init_qdio_info(card);
	INIT_DELAYED_WORK(&card->buffer_reclaim_work, qeth_buffer_reclaim_work);
--- a/drivers/s390/net/qeth_l3.h
+++ b/drivers/s390/net/qeth_l3.h
@@ -82,7 +82,7 @@ void qeth_l3_del_vipa(struct qeth_card *
 int qeth_l3_add_rxip(struct qeth_card *, enum qeth_prot_versions, const u8 *);
 void qeth_l3_del_rxip(struct qeth_card *card, enum qeth_prot_versions,
const u8 *);
-int qeth_l3_is_addr_covered_by_ipato(struct qeth_card *, struct qeth_ipaddr *);
+void qeth_l3_update_ipato(struct qeth_card *card);
 struct qeth_ipaddr *qeth_l3_get_addr_buffer(enum qeth_prot_versions);
 int qeth_l3_add_ip(struct qeth_card *, struct qeth_ipaddr *);
 int qeth_l3_delete_ip(struct qeth_card *, struct qeth_ipaddr *);
--- a/drivers/s390/net/qeth_l3_main.c
+++ b/drivers/s390/net/qeth_l3_main.c
@@ -163,8 +163,8 @@ static void qeth_l3_convert_addr_to_bits
}
 }
 
-int qeth_l3_is_addr_covered_by_ipato(struct qeth_card *card,
-   struct qeth_ipaddr *addr)
+static bool qeth_l3_is_addr_covered_by_ipato(struct qeth_card *card,
+struct qeth_ipaddr *addr)
 {
struct qeth_ipato_entry *ipatoe;
u8 addr_bits[128] = {0, };
@@ -605,6 +605,27 @@ int qeth_l3_setrouting_v6(struct qeth_ca
 /*
  * IP address takeover related functions
  */
+
+/**
+ * qeth_l3_update_ipato() - Update 'takeover' property, for all NORMAL IPs.
+ *
+ * Caller must hold ip_lock.
+ */
+void qeth_l3_update_ipato(struct qeth_card *card)
+{
+   struct qeth_ipaddr *addr;
+   unsigned int i;
+
+   hash_for_each(card->ip_htable, i, addr, hnode) {
+   if (addr->type != QETH_IP_TYPE_NORMAL)
+   continue;
+   if (qeth_l3_is_addr_covered_by_ipato(card, addr))
+   addr->set_flags |= QETH_IPA_SETIP_TAKEOVER_FLAG;
+   else
+   addr->set_flags &= ~QETH_IPA_SETIP_TAKEOVER_FLAG;
+   }
+}
+
 static void qeth_l3_clear_ipato_list(struct qeth_card *card)
 {
struct qeth_ipato_entry *ipatoe, *tmp;
@@ -616,6 +637,7 @@ static void qeth_l3_clear_ipato_list(str
kfree(ipatoe);
}
 
+   qeth_l3_update_ipato(card);
	spin_unlock_bh(&card->ip_lock);
 }
 
@@ -640,8 +662,10 @@ int qeth_l3_add_ipato_entry(struct qeth_
}
}
 
-   if (!rc)
+   if (!rc) {
		list_add_tail(&new->entry, &card->ipato.entries);
+   qeth_l3_update_ipato(card);
+   }
 
	spin_unlock_bh(&card->ip_lock);
 
@@ -664,6 +688,7 @@ void qeth_l3_del_ipato_entry(struct qeth
(proto == QETH_PROT_IPV4)? 4:16) &&
(ipatoe->mask_bits == mask_bits)) {
			list_del(&ipatoe->entry);
+   qeth_l3_update_ipato(card);
kfree(ipatoe);
}
}
--- a/drivers/s390/net/qeth_l3_sys.c
+++ b/drivers/s390/net/qeth_l3_sys.c
@@ -370,9 +370,8 @@ static ssize_t qeth_l3_dev_ipato_enable_
struct device_attribute *attr, const char *buf, size_t count)
 {
struct qeth_card *card = dev_get_drvdata(dev);
-   struct qeth_ipaddr *addr;
-   int i, rc = 0;
bool enable;
+   int rc = 0;
 
if (!card)
return -EINVAL;
@@ -391,20 +390,12 @@ static ssize_t qeth_l3_dev_ipato_enable_
goto out;
}
 
-   if (card->ipato.enabled == 

[PATCH 4.14 077/146] net: ipv4: fix for a race condition in raw_sendmsg

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Mohamed Ghannam 


[ Upstream commit 8f659a03a0ba9289b9aeb9b4470e6fb263d6f483 ]

inet->hdrincl is racy, and could lead to uninitialized stack pointer
usage, so its value should be read only once.

Fixes: c008ba5bdc9f ("ipv4: Avoid reading user iov twice after raw_probe_proto_opt")
Signed-off-by: Mohamed Ghannam 
Reviewed-by: Eric Dumazet 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/ipv4/raw.c |   15 ++-
 1 file changed, 10 insertions(+), 5 deletions(-)

--- a/net/ipv4/raw.c
+++ b/net/ipv4/raw.c
@@ -513,11 +513,16 @@ static int raw_sendmsg(struct sock *sk,
int err;
struct ip_options_data opt_copy;
struct raw_frag_vec rfv;
+   int hdrincl;
 
err = -EMSGSIZE;
if (len > 0x)
goto out;
 
+   /* hdrincl should be READ_ONCE(inet->hdrincl)
+* but READ_ONCE() doesn't work with bit fields
+*/
+   hdrincl = inet->hdrincl;
/*
 *  Check the flags.
 */
@@ -593,7 +598,7 @@ static int raw_sendmsg(struct sock *sk,
/* Linux does not mangle headers on raw sockets,
 * so that IP options + IP_HDRINCL is non-sense.
 */
-   if (inet->hdrincl)
+   if (hdrincl)
goto done;
if (ipc.opt->opt.srr) {
if (!daddr)
@@ -615,12 +620,12 @@ static int raw_sendmsg(struct sock *sk,
 
	flowi4_init_output(&fl4, ipc.oif, sk->sk_mark, tos,
   RT_SCOPE_UNIVERSE,
-  inet->hdrincl ? IPPROTO_RAW : sk->sk_protocol,
+  hdrincl ? IPPROTO_RAW : sk->sk_protocol,
   inet_sk_flowi_flags(sk) |
-   (inet->hdrincl ? FLOWI_FLAG_KNOWN_NH : 0),
+   (hdrincl ? FLOWI_FLAG_KNOWN_NH : 0),
   daddr, saddr, 0, 0, sk->sk_uid);
 
-   if (!inet->hdrincl) {
+   if (!hdrincl) {
rfv.msg = msg;
rfv.hlen = 0;
 
@@ -645,7 +650,7 @@ static int raw_sendmsg(struct sock *sk,
goto do_confirm;
 back_from_confirm:
 
-   if (inet->hdrincl)
+   if (hdrincl)
		err = raw_send_hdrinc(sk, &fl4, msg, len,
				      &rfv, msg->msg_flags, &ipc.opt);
 




[PATCH 4.14 034/146] x86/mm/dump_pagetables: Allow dumping current pagetables

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Thomas Gleixner 

commit a4b51ef6552c704764684cef7e753162dc87c5fa upstream.

Add two debugfs files which allow dumping the page table of the current
task.

current_kernel dumps the regular page table. This is the page table which
is normally shared between kernel and user space. If kernel page table
isolation is enabled this is the kernel space mapping.

If kernel page table isolation is enabled the second file, current_user,
dumps the user space page table.

These files allow verifying the resulting page tables for page table
isolation, but even in the normal case it's useful to be able to inspect
the user space page tables of current for debugging purposes.

Signed-off-by: Thomas Gleixner 
Cc: Andy Lutomirski 
Cc: Boris Ostrovsky 
Cc: Borislav Petkov 
Cc: Brian Gerst 
Cc: Dave Hansen 
Cc: David Laight 
Cc: Denys Vlasenko 
Cc: Eduardo Valentin 
Cc: Greg KH 
Cc: H. Peter Anvin 
Cc: Josh Poimboeuf 
Cc: Juergen Gross 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Will Deacon 
Cc: aligu...@amazon.com
Cc: daniel.gr...@iaik.tugraz.at
Cc: hu...@google.com
Cc: keesc...@google.com
Cc: linux...@kvack.org
Signed-off-by: Ingo Molnar 
Signed-off-by: Greg Kroah-Hartman 

---
 arch/x86/include/asm/pgtable.h |2 -
 arch/x86/mm/debug_pagetables.c |   71 ++---
 arch/x86/mm/dump_pagetables.c  |6 ++-
 3 files changed, 73 insertions(+), 6 deletions(-)

--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -28,7 +28,7 @@ extern pgd_t early_top_pgt[PTRS_PER_PGD]
 int __init __early_make_pgtable(unsigned long address, pmdval_t pmd);
 
 void ptdump_walk_pgd_level(struct seq_file *m, pgd_t *pgd);
-void ptdump_walk_pgd_level_debugfs(struct seq_file *m, pgd_t *pgd);
+void ptdump_walk_pgd_level_debugfs(struct seq_file *m, pgd_t *pgd, bool user);
 void ptdump_walk_pgd_level_checkwx(void);
 
 #ifdef CONFIG_DEBUG_WX
--- a/arch/x86/mm/debug_pagetables.c
+++ b/arch/x86/mm/debug_pagetables.c
@@ -5,7 +5,7 @@
 
 static int ptdump_show(struct seq_file *m, void *v)
 {
-   ptdump_walk_pgd_level_debugfs(m, NULL);
+   ptdump_walk_pgd_level_debugfs(m, NULL, false);
return 0;
 }
 
@@ -22,7 +22,57 @@ static const struct file_operations ptdu
.release= single_release,
 };
 
-static struct dentry *dir, *pe;
+static int ptdump_show_curknl(struct seq_file *m, void *v)
+{
+   if (current->mm->pgd) {
+   down_read(&current->mm->mmap_sem);
+   ptdump_walk_pgd_level_debugfs(m, current->mm->pgd, false);
+   up_read(&current->mm->mmap_sem);
+   }
+   return 0;
+}
+
+static int ptdump_open_curknl(struct inode *inode, struct file *filp)
+{
+   return single_open(filp, ptdump_show_curknl, NULL);
+}
+
+static const struct file_operations ptdump_curknl_fops = {
+   .owner  = THIS_MODULE,
+   .open   = ptdump_open_curknl,
+   .read   = seq_read,
+   .llseek = seq_lseek,
+   .release= single_release,
+};
+
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+static struct dentry *pe_curusr;
+
+static int ptdump_show_curusr(struct seq_file *m, void *v)
+{
+   if (current->mm->pgd) {
+   down_read(&current->mm->mmap_sem);
+   ptdump_walk_pgd_level_debugfs(m, current->mm->pgd, true);
+   up_read(&current->mm->mmap_sem);
+   }
+   return 0;
+}
+
+static int ptdump_open_curusr(struct inode *inode, struct file *filp)
+{
+   return single_open(filp, ptdump_show_curusr, NULL);
+}
+
+static const struct file_operations ptdump_curusr_fops = {
+   .owner  = THIS_MODULE,
+   .open   = ptdump_open_curusr,
+   .read   = seq_read,
+   .llseek = seq_lseek,
+   .release= single_release,
+};
+#endif
+
+static struct dentry *dir, *pe_knl, *pe_curknl;
 
 static int __init pt_dump_debug_init(void)
 {
@@ -30,9 +80,22 @@ static int __init pt_dump_debug_init(voi
if (!dir)
return -ENOMEM;
 
-   pe = debugfs_create_file("kernel", 0400, dir, NULL, &ptdump_fops);
-   if (!pe)
+   pe_knl = debugfs_create_file("kernel", 0400, dir, NULL,
+    &ptdump_fops);
+   if (!pe_knl)
+   goto err;
+
+   pe_curknl = debugfs_create_file("current_kernel", 0400,
+   dir, NULL, &ptdump_curknl_fops);
+   if (!pe_curknl)
+   goto err;
+
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+   pe_curusr = debugfs_create_file("current_user", 0400,
+   dir, NULL, &ptdump_curusr_fops);
+   if (!pe_curusr)
goto err;
+#endif
return 0;
 err:
debugfs_remove_recursive(dir);
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -530,8 +530,12 @@ void ptdump_walk_pgd_level(struct seq_fi
ptdump_walk_pgd_level_core(m, pgd, false, true);
 }
 
-void 

[PATCH 4.14 067/146] RDS: Check cmsg_len before dereferencing CMSG_DATA

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Avinash Repaka 


[ Upstream commit 14e138a86f6347c6199f610576d2e11c03bec5f0 ]

RDS currently doesn't check if the length of the control message is
large enough to hold the required data, before dereferencing the control
message data. This results in following crash:

BUG: KASAN: stack-out-of-bounds in rds_rdma_bytes net/rds/send.c:1013
[inline]
BUG: KASAN: stack-out-of-bounds in rds_sendmsg+0x1f02/0x1f90
net/rds/send.c:1066
Read of size 8 at addr 8801c928fb70 by task syzkaller455006/3157

CPU: 0 PID: 3157 Comm: syzkaller455006 Not tainted 4.15.0-rc3+ #161
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:17 [inline]
 dump_stack+0x194/0x257 lib/dump_stack.c:53
 print_address_description+0x73/0x250 mm/kasan/report.c:252
 kasan_report_error mm/kasan/report.c:351 [inline]
 kasan_report+0x25b/0x340 mm/kasan/report.c:409
 __asan_report_load8_noabort+0x14/0x20 mm/kasan/report.c:430
 rds_rdma_bytes net/rds/send.c:1013 [inline]
 rds_sendmsg+0x1f02/0x1f90 net/rds/send.c:1066
 sock_sendmsg_nosec net/socket.c:628 [inline]
 sock_sendmsg+0xca/0x110 net/socket.c:638
 ___sys_sendmsg+0x320/0x8b0 net/socket.c:2018
 __sys_sendmmsg+0x1ee/0x620 net/socket.c:2108
 SYSC_sendmmsg net/socket.c:2139 [inline]
 SyS_sendmmsg+0x35/0x60 net/socket.c:2134
 entry_SYSCALL_64_fastpath+0x1f/0x96
RIP: 0033:0x43fe49
RSP: 002b:7fffbe244ad8 EFLAGS: 0217 ORIG_RAX: 0133
RAX: ffda RBX: 004002c8 RCX: 0043fe49
RDX: 0001 RSI: 2020c000 RDI: 0003
RBP: 006ca018 R08:  R09: 
R10:  R11: 0217 R12: 004017b0
R13: 00401840 R14:  R15: 

To fix this, we verify that the cmsg_len is large enough to hold the
data to be read, before proceeding further.

Reported-by: syzbot 
Signed-off-by: Avinash Repaka 
Acked-by: Santosh Shilimkar 
Reviewed-by: Yuval Shaia 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/rds/send.c |3 +++
 1 file changed, 3 insertions(+)

--- a/net/rds/send.c
+++ b/net/rds/send.c
@@ -1009,6 +1009,9 @@ static int rds_rdma_bytes(struct msghdr
continue;
 
if (cmsg->cmsg_type == RDS_CMSG_RDMA_ARGS) {
+   if (cmsg->cmsg_len <
+   CMSG_LEN(sizeof(struct rds_rdma_args)))
+   return -EINVAL;
args = CMSG_DATA(cmsg);
*rdma_bytes += args->remote_vec.bytes;
}




[PATCH 4.14 081/146] ip6_gre: fix device features for ioctl setup

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Alexey Kodanev 


[ Upstream commit e5a9336adb317db55eb3fe8200856096f3c71109 ]

When ip6gre is created using ioctl, its features, such as
scatter-gather, GSO and tx-checksumming will be turned off:

  # ip -f inet6 tunnel add gre6 mode ip6gre remote fd00::1
  # ethtool -k gre6 (truncated output)
tx-checksumming: off
scatter-gather: off
tcp-segmentation-offload: off
generic-segmentation-offload: off [requested on]

But when netlink is used, they will be enabled:
  # ip link add gre6 type ip6gre remote fd00::1
  # ethtool -k gre6 (truncated output)
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
generic-segmentation-offload: on

This results in a loss of performance when gre6 is created via ioctl.
The issue was found with LTP/gre tests.

Fix it by moving the setup of device features to a separate function
and invoke it with ndo_init callback because both netlink and ioctl
will eventually call it via register_netdevice():

   register_netdevice()
   - ndo_init() callback -> ip6gre_tunnel_init() or ip6gre_tap_init()
   - ip6gre_tunnel_init_common()
- ip6gre_tnl_init_features()

The moved code also contains two minor style fixes:
  * removed needless tab from GRE6_FEATURES on NETIF_F_HIGHDMA line.
  * fixed the issue reported by checkpatch: "Unnecessary parentheses around
'nt->encap.type == TUNNEL_ENCAP_NONE'"

Fixes: ac4eb009e477 ("ip6gre: Add support for basic offloads offloads excluding GSO")
Signed-off-by: Alexey Kodanev 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/ipv6/ip6_gre.c |   57 +
 1 file changed, 32 insertions(+), 25 deletions(-)

--- a/net/ipv6/ip6_gre.c
+++ b/net/ipv6/ip6_gre.c
@@ -1020,6 +1020,36 @@ static void ip6gre_tunnel_setup(struct n
eth_random_addr(dev->perm_addr);
 }
 
+#define GRE6_FEATURES (NETIF_F_SG |\
+  NETIF_F_FRAGLIST |   \
+  NETIF_F_HIGHDMA |\
+  NETIF_F_HW_CSUM)
+
+static void ip6gre_tnl_init_features(struct net_device *dev)
+{
+   struct ip6_tnl *nt = netdev_priv(dev);
+
+   dev->features   |= GRE6_FEATURES;
+   dev->hw_features|= GRE6_FEATURES;
+
+   if (!(nt->parms.o_flags & TUNNEL_SEQ)) {
+   /* TCP offload with GRE SEQ is not supported, nor
+* can we support 2 levels of outer headers requiring
+* an update.
+*/
+   if (!(nt->parms.o_flags & TUNNEL_CSUM) ||
+   nt->encap.type == TUNNEL_ENCAP_NONE) {
+   dev->features|= NETIF_F_GSO_SOFTWARE;
+   dev->hw_features |= NETIF_F_GSO_SOFTWARE;
+   }
+
+   /* Can use a lockless transmit, unless we generate
+* output sequences
+*/
+   dev->features |= NETIF_F_LLTX;
+   }
+}
+
 static int ip6gre_tunnel_init_common(struct net_device *dev)
 {
struct ip6_tnl *tunnel;
@@ -1054,6 +1084,8 @@ static int ip6gre_tunnel_init_common(str
if (!(tunnel->parms.flags & IP6_TNL_F_IGN_ENCAP_LIMIT))
dev->mtu -= 8;
 
+   ip6gre_tnl_init_features(dev);
+
return 0;
 }
 
@@ -1302,11 +1334,6 @@ static const struct net_device_ops ip6gr
.ndo_get_iflink = ip6_tnl_get_iflink,
 };
 
-#define GRE6_FEATURES (NETIF_F_SG |\
-  NETIF_F_FRAGLIST |   \
-  NETIF_F_HIGHDMA |\
-  NETIF_F_HW_CSUM)
-
 static void ip6gre_tap_setup(struct net_device *dev)
 {
 
@@ -1386,26 +1413,6 @@ static int ip6gre_newlink(struct net *sr
nt->net = dev_net(dev);
ip6gre_tnl_link_config(nt, !tb[IFLA_MTU]);
 
-   dev->features   |= GRE6_FEATURES;
-   dev->hw_features|= GRE6_FEATURES;
-
-   if (!(nt->parms.o_flags & TUNNEL_SEQ)) {
-   /* TCP offload with GRE SEQ is not supported, nor
-* can we support 2 levels of outer headers requiring
-* an update.
-*/
-   if (!(nt->parms.o_flags & TUNNEL_CSUM) ||
-   (nt->encap.type == TUNNEL_ENCAP_NONE)) {
-   dev->features|= NETIF_F_GSO_SOFTWARE;
-   dev->hw_features |= NETIF_F_GSO_SOFTWARE;
-   }
-
-   /* Can use a lockless transmit, unless we generate
-* output sequences
-*/
-   dev->features |= NETIF_F_LLTX;
-   }
-
err = register_netdevice(dev);
if (err)
goto out;




[PATCH 4.14 079/146] sctp: Replace use of sockets_allocated with specified macro.

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Tonghao Zhang 


[ Upstream commit 8cb38a602478e9f806571f6920b0a3298aabf042 ]

The patch (180d8cd942ce) replaced all uses of the struct sock fields
memory_pressure, memory_allocated, sockets_allocated, and sysctl_mem
with accessor macros. But the sockets_allocated field of the sctp sock
was not replaced at all. Replace it now to unify the code.

Fixes: 180d8cd942ce ("foundations of per-cgroup memory pressure controlling.")
Cc: Glauber Costa 
Signed-off-by: Tonghao Zhang 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/sctp/socket.c |4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -4413,7 +4413,7 @@ static int sctp_init_sock(struct sock *s
SCTP_DBG_OBJCNT_INC(sock);
 
local_bh_disable();
-   percpu_counter_inc(&sctp_sockets_allocated);
+   sk_sockets_allocated_inc(sk);
sock_prot_inuse_add(net, sk->sk_prot, 1);
 
/* Nothing can fail after this block, otherwise
@@ -4457,7 +4457,7 @@ static void sctp_destroy_sock(struct soc
}
sctp_endpoint_free(sp->ep);
local_bh_disable();
-   percpu_counter_dec(&sctp_sockets_allocated);
+   sk_sockets_allocated_dec(sk);
sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
local_bh_enable();
 }




[PATCH 4.14 035/146] x86/ldt: Make the LDT mapping RO

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Thomas Gleixner 

commit 9f5cb6b32d9e0a3a7453222baaf15664d92adbf2 upstream.

Now that the LDT mapping is in a known area when PAGE_TABLE_ISOLATION is
enabled, it's a primary target for attacks if a user space interface fails
to validate a write address correctly. That can never happen, right?

The SDM states:

If the segment descriptors in the GDT or an LDT are placed in ROM, the
processor can enter an indefinite loop if software or the processor
attempts to update (write to) the ROM-based segment descriptors. To
prevent this problem, set the accessed bits for all segment descriptors
placed in a ROM. Also, remove operating-system or executive code that
attempts to modify segment descriptors located in ROM.

So it's a valid approach to set the ACCESS bit when setting up the LDT entry
and to map the table RO. Fix up the selftest so it can handle that new mode.

Remove the manual ACCESS bit setter in set_tls_desc() as it is now
pointless. Folded in the patch from Peter Zijlstra.

Signed-off-by: Thomas Gleixner 
Cc: Andy Lutomirski 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: H. Peter Anvin 
Cc: Josh Poimboeuf 
Cc: Juergen Gross 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Signed-off-by: Ingo Molnar 
Signed-off-by: Greg Kroah-Hartman 

---
 arch/x86/include/asm/desc.h   |2 ++
 arch/x86/kernel/ldt.c |7 ++-
 arch/x86/kernel/tls.c |   11 ++-
 tools/testing/selftests/x86/ldt_gdt.c |3 +--
 4 files changed, 11 insertions(+), 12 deletions(-)

--- a/arch/x86/include/asm/desc.h
+++ b/arch/x86/include/asm/desc.h
@@ -21,6 +21,8 @@ static inline void fill_ldt(struct desc_
 
desc->type  = (info->read_exec_only ^ 1) << 1;
desc->type |= info->contents << 2;
+   /* Set the ACCESS bit so it can be mapped RO */
+   desc->type |= 1;
 
desc->s = 1;
desc->dpl   = 0x3;
--- a/arch/x86/kernel/ldt.c
+++ b/arch/x86/kernel/ldt.c
@@ -158,7 +158,12 @@ map_ldt_struct(struct mm_struct *mm, str
ptep = get_locked_pte(mm, va, &ptl);
if (!ptep)
return -ENOMEM;
-   pte = pfn_pte(pfn, __pgprot(__PAGE_KERNEL & ~_PAGE_GLOBAL));
+   /*
+* Map it RO so the easy to find address is not a primary
+* target via some kernel interface which misses a
+* permission check.
+*/
+   pte = pfn_pte(pfn, __pgprot(__PAGE_KERNEL_RO & ~_PAGE_GLOBAL));
set_pte_at(mm, va, ptep, pte);
pte_unmap_unlock(ptep, ptl);
}
--- a/arch/x86/kernel/tls.c
+++ b/arch/x86/kernel/tls.c
@@ -93,17 +93,10 @@ static void set_tls_desc(struct task_str
cpu = get_cpu();
 
while (n-- > 0) {
-   if (LDT_empty(info) || LDT_zero(info)) {
+   if (LDT_empty(info) || LDT_zero(info))
memset(desc, 0, sizeof(*desc));
-   } else {
+   else
fill_ldt(desc, info);
-
-   /*
-* Always set the accessed bit so that the CPU
-* doesn't try to write to the (read-only) GDT.
-*/
-   desc->type |= 1;
-   }
++info;
++desc;
}
--- a/tools/testing/selftests/x86/ldt_gdt.c
+++ b/tools/testing/selftests/x86/ldt_gdt.c
@@ -122,8 +122,7 @@ static void check_valid_segment(uint16_t
 * NB: Different Linux versions do different things with the
 * accessed bit in set_thread_area().
 */
-   if (ar != expected_ar &&
-   (ldt || ar != (expected_ar | AR_ACCESSED))) {
+   if (ar != expected_ar && ar != (expected_ar | AR_ACCESSED)) {
printf("[FAIL]\t%s entry %hu has AR 0x%08X but expected 0x%08X\n",
   (ldt ? "LDT" : "GDT"), index, ar, expected_ar);
nerrs++;




[PATCH 4.14 090/146] net/mlx5e: Fix features check of IPv6 traffic

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Gal Pressman 


[ Upstream commit 2989ad1ec03021ee6d2193c35414f1d970a243de ]

The assumption that the next header field contains the transport
protocol is wrong for IPv6 packets with extension headers.
Instead, we should look at the inner-most next header field in the buffer.
This will fix TSO offload for tunnels over IPv6 with extension headers.

Performance testing: 19.25x improvement, cool!
Measuring bandwidth of 16 threads TCP traffic over IPv6 GRE tap.
CPU: Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz
NIC: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
TSO: Enabled
Before: 4,926.24  Mbps
Now   : 94,827.91 Mbps

Fixes: b3f63c3d5e2c ("net/mlx5e: Add netdev support for VXLAN tunneling")
Signed-off-by: Gal Pressman 
Signed-off-by: Saeed Mahameed 
Signed-off-by: Greg Kroah-Hartman 
---
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c |3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -3554,6 +3554,7 @@ static netdev_features_t mlx5e_tunnel_fe
 struct sk_buff *skb,
 netdev_features_t features)
 {
+   unsigned int offset = 0;
struct udphdr *udph;
u8 proto;
u16 port;
@@ -3563,7 +3564,7 @@ static netdev_features_t mlx5e_tunnel_fe
proto = ip_hdr(skb)->protocol;
break;
case htons(ETH_P_IPV6):
-   proto = ipv6_hdr(skb)->nexthdr;
+   proto = ipv6_find_hdr(skb, &offset, -1, NULL, NULL);
break;
default:
goto out;




[PATCH 4.14 083/146] net: bridge: fix early call to br_stp_change_bridge_id and plug newlink leaks

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Nikolay Aleksandrov 


[ Upstream commit 84aeb437ab98a2bce3d4b2111c79723aedfceb33 ]

The early call to br_stp_change_bridge_id in bridge's newlink can cause
a memory leak if an error occurs during the newlink, because the fdb
entries are not cleaned up if a different lladdr was specified; another
minor issue is that it generates fdb notifications with ifindex = 0.
Another, unrelated memory leak is the bridge sysfs entries, which get
added on the NETDEV_REGISTER event but are not cleaned up in the
newlink error path. To remove this special case, the call to
br_stp_change_bridge_id is done after netdev register and we clean up
the bridge on changelink error via br_dev_delete to plug all leaks.

This patch makes netlink bridge destruction on newlink error the same as
dellink and ioctl del which is necessary since at that point we have a
fully initialized bridge device.

To reproduce the issue:
$ ip l add br0 address 00:11:22:33:44:55 type bridge group_fwd_mask 1
RTNETLINK answers: Invalid argument

$ rmmod bridge
[ 1822.142525] =
[ 1822.143640] BUG bridge_fdb_cache (Tainted: G   O): Objects remaining in bridge_fdb_cache on __kmem_cache_shutdown()
[ 1822.144821] -
[ 1822.145990] Disabling lock debugging due to kernel taint
[ 1822.146732] INFO: Slab 0x92a844b2 objects=32 used=2 fp=0xfef011b0 flags=0x1800100
[ 1822.147700] CPU: 2 PID: 13584 Comm: rmmod Tainted: GB  O 4.15.0-rc2+ #87
[ 1822.148578] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 1822.150008] Call Trace:
[ 1822.150510]  dump_stack+0x78/0xa9
[ 1822.151156]  slab_err+0xb1/0xd3
[ 1822.151834]  ? __kmalloc+0x1bb/0x1ce
[ 1822.152546]  __kmem_cache_shutdown+0x151/0x28b
[ 1822.153395]  shutdown_cache+0x13/0x144
[ 1822.154126]  kmem_cache_destroy+0x1c0/0x1fb
[ 1822.154669]  SyS_delete_module+0x194/0x244
[ 1822.155199]  ? trace_hardirqs_on_thunk+0x1a/0x1c
[ 1822.155773]  entry_SYSCALL_64_fastpath+0x23/0x9a
[ 1822.156343] RIP: 0033:0x7f929bd38b17
[ 1822.156859] RSP: 002b:7ffd160e9a98 EFLAGS: 0202 ORIG_RAX: 00b0
[ 1822.157728] RAX: ffda RBX: 5578316ba090 RCX: 7f929bd38b17
[ 1822.158422] RDX: 7f929bd9ec60 RSI: 0800 RDI: 5578316ba0f0
[ 1822.159114] RBP: 0003 R08: 7f929bff5f20 R09: 7ffd160e8a11
[ 1822.159808] R10: 7ffd160e9860 R11: 0202 R12: 7ffd160e8a80
[ 1822.160513] R13:  R14:  R15: 5578316ba090
[ 1822.161278] INFO: Object 0x7645de29 @offset=0
[ 1822.161666] INFO: Object 0xd5df2ab5 @offset=128

Fixes: 30313a3d5794 ("bridge: Handle IFLA_ADDRESS correctly when creating bridge device")
Fixes: 5b8d5429daa0 ("bridge: netlink: register netdevice before executing changelink")
Signed-off-by: Nikolay Aleksandrov 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/bridge/br_netlink.c |   11 ++-
 1 file changed, 6 insertions(+), 5 deletions(-)

--- a/net/bridge/br_netlink.c
+++ b/net/bridge/br_netlink.c
@@ -1223,19 +1223,20 @@ static int br_dev_newlink(struct net *sr
struct net_bridge *br = netdev_priv(dev);
int err;
 
+   err = register_netdevice(dev);
+   if (err)
+   return err;
+
if (tb[IFLA_ADDRESS]) {
spin_lock_bh(&br->lock);
br_stp_change_bridge_id(br, nla_data(tb[IFLA_ADDRESS]));
spin_unlock_bh(&br->lock);
}
 
-   err = register_netdevice(dev);
-   if (err)
-   return err;
-
err = br_changelink(dev, tb, data, extack);
if (err)
-   unregister_netdevice(dev);
+   br_dev_delete(dev, NULL);
+
return err;
 }
 




[PATCH 4.14 086/146] sock: free skb in skb_complete_tx_timestamp on error

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Willem de Bruijn 


[ Upstream commit 35b99dffc3f710cafceee6c8c6ac6a98eb2cb4bf ]

skb_complete_tx_timestamp must ingest the skb it is passed. Call
kfree_skb if the skb cannot be enqueued.

Fixes: b245be1f4db1 ("net-timestamp: no-payload only sysctl")
Fixes: 9ac25fc06375 ("net: fix socket refcounting in skb_complete_tx_timestamp()")
Reported-by: Richard Cochran 
Signed-off-by: Willem de Bruijn 
Reviewed-by: Eric Dumazet 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/core/skbuff.c |6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -4296,7 +4296,7 @@ void skb_complete_tx_timestamp(struct sk
struct sock *sk = skb->sk;
 
if (!skb_may_tx_timestamp(sk, false))
-   return;
+   goto err;
 
/* Take a reference to prevent skb_orphan() from freeing the socket,
 * but only if the socket refcount is not zero.
@@ -4305,7 +4305,11 @@ void skb_complete_tx_timestamp(struct sk
*skb_hwtstamps(skb) = *hwtstamps;
__skb_complete_tx_timestamp(skb, sk, SCM_TSTAMP_SND, false);
sock_put(sk);
+   return;
}
+
+err:
+   kfree_skb(skb);
 }
 EXPORT_SYMBOL_GPL(skb_complete_tx_timestamp);
 




[PATCH 4.14 092/146] net/mlx5e: Prevent possible races in VXLAN control flow

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Gal Pressman 


[ Upstream commit 0c1cc8b2215f5122ca614b5adca60346018758c3 ]

When calling add/remove VXLAN port, a lock must be held in order to
prevent race scenarios when more than one add/remove happens at the
same time.
Fix by holding our state_lock (mutex) as done by all other parts of the
driver.
Note that the spinlock protecting the radix-tree is still needed in
order to synchronize radix-tree access from softirq context.

Fixes: b3f63c3d5e2c ("net/mlx5e: Add netdev support for VXLAN tunneling")
Signed-off-by: Gal Pressman 
Signed-off-by: Saeed Mahameed 
Signed-off-by: Greg Kroah-Hartman 
---
 drivers/net/ethernet/mellanox/mlx5/core/vxlan.c |4 
 1 file changed, 4 insertions(+)

--- a/drivers/net/ethernet/mellanox/mlx5/core/vxlan.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/vxlan.c
@@ -88,6 +88,7 @@ static void mlx5e_vxlan_add_port(struct
struct mlx5e_vxlan *vxlan;
int err;
 
+   mutex_lock(&priv->state_lock);
vxlan = mlx5e_vxlan_lookup_port(priv, port);
if (vxlan) {
atomic_inc(>refcount);
@@ -117,6 +118,7 @@ err_free:
 err_delete_port:
mlx5e_vxlan_core_del_port_cmd(priv->mdev, port);
 free_work:
+   mutex_unlock(&priv->state_lock);
kfree(vxlan_work);
 }
 
@@ -130,6 +132,7 @@ static void mlx5e_vxlan_del_port(struct
struct mlx5e_vxlan *vxlan;
bool remove = false;
 
+   mutex_lock(&priv->state_lock);
spin_lock_bh(&vxlan_db->lock);
vxlan = radix_tree_lookup(&vxlan_db->tree, port);
if (!vxlan)
@@ -147,6 +150,7 @@ out_unlock:
mlx5e_vxlan_core_del_port_cmd(priv->mdev, port);
kfree(vxlan);
}
+   mutex_unlock(&priv->state_lock);
kfree(vxlan_work);
 }
 




[PATCH 4.14 088/146] net/mlx5: Fix rate limit packet pacing naming and struct

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Eran Ben Elisha 


[ Upstream commit 37e92a9d4fe38dc3e7308913575983a6a088c8d4 ]

In mlx5_ifc, the struct size was not complete, and thus the driver was
sending garbage after the last defined field. Fix it by adding a
reserved field to complete the struct size.

In addition, rename all set_rate_limit to set_pp_rate_limit to be
compliant with the Firmware <-> Driver definition.

Fixes: 7486216b3a0b ("{net,IB}/mlx5: mlx5_ifc updates")
Fixes: 1466cc5b23d1 ("net/mlx5: Rate limit tables support")
Signed-off-by: Eran Ben Elisha 
Signed-off-by: Saeed Mahameed 
Signed-off-by: Greg Kroah-Hartman 
---
 drivers/net/ethernet/mellanox/mlx5/core/cmd.c |4 ++--
 drivers/net/ethernet/mellanox/mlx5/core/rl.c  |   22 +++---
 include/linux/mlx5/mlx5_ifc.h |8 +---
 3 files changed, 18 insertions(+), 16 deletions(-)

--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
@@ -362,7 +362,7 @@ static int mlx5_internal_err_ret_value(s
case MLX5_CMD_OP_QUERY_VPORT_COUNTER:
case MLX5_CMD_OP_ALLOC_Q_COUNTER:
case MLX5_CMD_OP_QUERY_Q_COUNTER:
-   case MLX5_CMD_OP_SET_RATE_LIMIT:
+   case MLX5_CMD_OP_SET_PP_RATE_LIMIT:
case MLX5_CMD_OP_QUERY_RATE_LIMIT:
case MLX5_CMD_OP_CREATE_SCHEDULING_ELEMENT:
case MLX5_CMD_OP_QUERY_SCHEDULING_ELEMENT:
@@ -505,7 +505,7 @@ const char *mlx5_command_str(int command
MLX5_COMMAND_STR_CASE(ALLOC_Q_COUNTER);
MLX5_COMMAND_STR_CASE(DEALLOC_Q_COUNTER);
MLX5_COMMAND_STR_CASE(QUERY_Q_COUNTER);
-   MLX5_COMMAND_STR_CASE(SET_RATE_LIMIT);
+   MLX5_COMMAND_STR_CASE(SET_PP_RATE_LIMIT);
MLX5_COMMAND_STR_CASE(QUERY_RATE_LIMIT);
MLX5_COMMAND_STR_CASE(CREATE_SCHEDULING_ELEMENT);
MLX5_COMMAND_STR_CASE(DESTROY_SCHEDULING_ELEMENT);
--- a/drivers/net/ethernet/mellanox/mlx5/core/rl.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/rl.c
@@ -125,16 +125,16 @@ static struct mlx5_rl_entry *find_rl_ent
return ret_entry;
 }
 
-static int mlx5_set_rate_limit_cmd(struct mlx5_core_dev *dev,
+static int mlx5_set_pp_rate_limit_cmd(struct mlx5_core_dev *dev,
   u32 rate, u16 index)
 {
-   u32 in[MLX5_ST_SZ_DW(set_rate_limit_in)]   = {0};
-   u32 out[MLX5_ST_SZ_DW(set_rate_limit_out)] = {0};
+   u32 in[MLX5_ST_SZ_DW(set_pp_rate_limit_in)]   = {0};
+   u32 out[MLX5_ST_SZ_DW(set_pp_rate_limit_out)] = {0};
 
-   MLX5_SET(set_rate_limit_in, in, opcode,
-MLX5_CMD_OP_SET_RATE_LIMIT);
-   MLX5_SET(set_rate_limit_in, in, rate_limit_index, index);
-   MLX5_SET(set_rate_limit_in, in, rate_limit, rate);
+   MLX5_SET(set_pp_rate_limit_in, in, opcode,
+MLX5_CMD_OP_SET_PP_RATE_LIMIT);
+   MLX5_SET(set_pp_rate_limit_in, in, rate_limit_index, index);
+   MLX5_SET(set_pp_rate_limit_in, in, rate_limit, rate);
return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
 }
 
@@ -173,7 +173,7 @@ int mlx5_rl_add_rate(struct mlx5_core_de
entry->refcount++;
} else {
/* new rate limit */
-   err = mlx5_set_rate_limit_cmd(dev, rate, entry->index);
+   err = mlx5_set_pp_rate_limit_cmd(dev, rate, entry->index);
if (err) {
mlx5_core_err(dev, "Failed configuring rate: %u (%d)\n",
  rate, err);
@@ -209,7 +209,7 @@ void mlx5_rl_remove_rate(struct mlx5_cor
entry->refcount--;
if (!entry->refcount) {
/* need to remove rate */
-   mlx5_set_rate_limit_cmd(dev, 0, entry->index);
+   mlx5_set_pp_rate_limit_cmd(dev, 0, entry->index);
entry->rate = 0;
}
 
@@ -262,8 +262,8 @@ void mlx5_cleanup_rl_table(struct mlx5_c
/* Clear all configured rates */
for (i = 0; i < table->max_size; i++)
if (table->rl_entry[i].rate)
-   mlx5_set_rate_limit_cmd(dev, 0,
-   table->rl_entry[i].index);
+   mlx5_set_pp_rate_limit_cmd(dev, 0,
+  table->rl_entry[i].index);
 
kfree(dev->priv.rl_table.rl_entry);
 }
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -147,7 +147,7 @@ enum {
MLX5_CMD_OP_ALLOC_Q_COUNTER   = 0x771,
MLX5_CMD_OP_DEALLOC_Q_COUNTER = 0x772,
MLX5_CMD_OP_QUERY_Q_COUNTER   = 0x773,
-   MLX5_CMD_OP_SET_RATE_LIMIT= 0x780,
+   MLX5_CMD_OP_SET_PP_RATE_LIMIT = 0x780,
MLX5_CMD_OP_QUERY_RATE_LIMIT  = 0x781,
MLX5_CMD_OP_CREATE_SCHEDULING_ELEMENT  = 0x782,
MLX5_CMD_OP_DESTROY_SCHEDULING_ELEMENT = 0x783,
@@ -7233,7 +7233,7 @@ struct 

[PATCH 4.14 068/146] tcp_bbr: record "full bw reached" decision in new full_bw_reached bit

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Neal Cardwell 


[ Upstream commit c589e69b508d29ed8e644dfecda453f71c02ec27 ]

This commit records the "full bw reached" decision in a new
full_bw_reached bit. This is a pure refactor that does not change the
current behavior, but enables subsequent fixes and improvements.

In particular, this enables simple and clean fixes because the full_bw
and full_bw_cnt can be unconditionally zeroed without worrying about
forgetting that we estimated we filled the pipe in Startup. And it
enables future improvements because multiple code paths can be used
for estimating that we filled the pipe in Startup; any new code paths
only need to set this bit when they think the pipe is full.

Note that this commit intentionally reduces the width of the full_bw_cnt
counter, since we have never used its most significant bit.

Signed-off-by: Neal Cardwell 
Reviewed-by: Yuchung Cheng 
Acked-by: Soheil Hassas Yeganeh 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/ipv4/tcp_bbr.c |7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

--- a/net/ipv4/tcp_bbr.c
+++ b/net/ipv4/tcp_bbr.c
@@ -110,7 +110,8 @@ struct bbr {
u32 lt_last_lost;/* LT intvl start: tp->lost */
u32 pacing_gain:10, /* current gain for setting pacing rate */
cwnd_gain:10,   /* current gain for setting cwnd */
-   full_bw_cnt:3,  /* number of rounds without large bw gains */
+   full_bw_reached:1,   /* reached full bw in Startup? */
+   full_bw_cnt:2,  /* number of rounds without large bw gains */
cycle_idx:3,/* current index in pacing_gain cycle array */
has_seen_rtt:1, /* have we seen an RTT sample yet? */
unused_b:5;
@@ -180,7 +181,7 @@ static bool bbr_full_bw_reached(const st
 {
const struct bbr *bbr = inet_csk_ca(sk);
 
-   return bbr->full_bw_cnt >= bbr_full_bw_cnt;
+   return bbr->full_bw_reached;
 }
 
 /* Return the windowed max recent bandwidth sample, in pkts/uS << BW_SCALE. */
@@ -717,6 +718,7 @@ static void bbr_check_full_bw_reached(st
return;
}
++bbr->full_bw_cnt;
+   bbr->full_bw_reached = bbr->full_bw_cnt >= bbr_full_bw_cnt;
 }
 
 /* If pipe is probably full, drain the queue and then enter steady-state. */
@@ -850,6 +852,7 @@ static void bbr_init(struct sock *sk)
bbr->restore_cwnd = 0;
bbr->round_start = 0;
bbr->idle_restart = 0;
+   bbr->full_bw_reached = 0;
bbr->full_bw = 0;
bbr->full_bw_cnt = 0;
bbr->cycle_mstamp = 0;




[PATCH 4.14 091/146] net/mlx5e: Add refcount to VXLAN structure

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Gal Pressman 


[ Upstream commit 23f4cc2cd9ed92570647220aca60d0197d8c1fa9 ]

A refcount mechanism must be implemented in order to prevent unwanted
scenarios such as:
- Open an IPv4 VXLAN interface
- Open an IPv6 VXLAN interface (different socket)
- Remove one of the interfaces

With the current implementation, the UDP port will be removed from our
VXLAN database, turning off the offloads for the other interface, which
is still active.
The reference count mechanism only allows the UDP port to be removed
once all consumers are gone.

Fixes: b3f63c3d5e2c ("net/mlx5e: Add netdev support for VXLAN tunneling")
Signed-off-by: Gal Pressman 
Signed-off-by: Saeed Mahameed 
Signed-off-by: Greg Kroah-Hartman 
---
 drivers/net/ethernet/mellanox/mlx5/core/vxlan.c |   50 
 drivers/net/ethernet/mellanox/mlx5/core/vxlan.h |1 
 2 files changed, 28 insertions(+), 23 deletions(-)

--- a/drivers/net/ethernet/mellanox/mlx5/core/vxlan.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/vxlan.c
@@ -88,8 +88,11 @@ static void mlx5e_vxlan_add_port(struct
struct mlx5e_vxlan *vxlan;
int err;
 
-   if (mlx5e_vxlan_lookup_port(priv, port))
+   vxlan = mlx5e_vxlan_lookup_port(priv, port);
+   if (vxlan) {
+   atomic_inc(&vxlan->refcount);
goto free_work;
+   }
 
if (mlx5e_vxlan_core_add_port_cmd(priv->mdev, port))
goto free_work;
@@ -99,6 +102,7 @@ static void mlx5e_vxlan_add_port(struct
goto err_delete_port;
 
vxlan->udp_port = port;
+   atomic_set(&vxlan->refcount, 1);
 
spin_lock_bh(&vxlan_db->lock);
err = radix_tree_insert(&vxlan_db->tree, vxlan->udp_port, vxlan);
@@ -116,32 +120,33 @@ free_work:
kfree(vxlan_work);
 }
 
-static void __mlx5e_vxlan_core_del_port(struct mlx5e_priv *priv, u16 port)
+static void mlx5e_vxlan_del_port(struct work_struct *work)
 {
+   struct mlx5e_vxlan_work *vxlan_work =
+   container_of(work, struct mlx5e_vxlan_work, work);
+   struct mlx5e_priv *priv = vxlan_work->priv;
struct mlx5e_vxlan_db *vxlan_db = &priv->vxlan;
+   u16 port = vxlan_work->port;
struct mlx5e_vxlan *vxlan;
+   bool remove = false;
 
spin_lock_bh(&vxlan_db->lock);
-   vxlan = radix_tree_delete(&vxlan_db->tree, port);
-   spin_unlock_bh(&vxlan_db->lock);
-
+   vxlan = radix_tree_lookup(&vxlan_db->tree, port);
if (!vxlan)
-   return;
-
-   mlx5e_vxlan_core_del_port_cmd(priv->mdev, vxlan->udp_port);
-
-   kfree(vxlan);
-}
+   goto out_unlock;
 
-static void mlx5e_vxlan_del_port(struct work_struct *work)
-{
-   struct mlx5e_vxlan_work *vxlan_work =
-   container_of(work, struct mlx5e_vxlan_work, work);
-   struct mlx5e_priv *priv = vxlan_work->priv;
-   u16 port = vxlan_work->port;
+   if (atomic_dec_and_test(&vxlan->refcount)) {
+   radix_tree_delete(&vxlan_db->tree, port);
+   remove = true;
+   }
 
-   __mlx5e_vxlan_core_del_port(priv, port);
+out_unlock:
+   spin_unlock_bh(&vxlan_db->lock);
 
+   if (remove) {
+   mlx5e_vxlan_core_del_port_cmd(priv->mdev, port);
+   kfree(vxlan);
+   }
kfree(vxlan_work);
 }
 
@@ -171,12 +176,11 @@ void mlx5e_vxlan_cleanup(struct mlx5e_pr
struct mlx5e_vxlan *vxlan;
unsigned int port = 0;
 
-   spin_lock_bh(&vxlan_db->lock);
+   /* Lockless since we are the only radix-tree consumers, wq is disabled */
while (radix_tree_gang_lookup(&vxlan_db->tree, (void **)&vxlan, port, 1)) {
port = vxlan->udp_port;
-   spin_unlock_bh(&vxlan_db->lock);
-   __mlx5e_vxlan_core_del_port(priv, (u16)port);
-   spin_lock_bh(&vxlan_db->lock);
+   radix_tree_delete(&vxlan_db->tree, port);
+   mlx5e_vxlan_core_del_port_cmd(priv->mdev, port);
+   kfree(vxlan);
}
-   spin_unlock_bh(&vxlan_db->lock);
 }
--- a/drivers/net/ethernet/mellanox/mlx5/core/vxlan.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/vxlan.h
@@ -36,6 +36,7 @@
 #include "en.h"
 
 struct mlx5e_vxlan {
+   atomic_t refcount;
u16 udp_port;
 };
 




[PATCH 4.14 085/146] net: phy: micrel: ksz9031: reconfigure autoneg after phy autoneg workaround

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Grygorii Strashko 


[ Upstream commit c1a8d0a3accf64a014d605e6806ce05d1c17adf1 ]

Under some circumstances the driver will perform a PHY reset in
ksz9031_read_status() to fix an autoneg failure case (idle error count =
0xFF). When this happens, the ksz9031 will no longer detect link status
changes when connected to a Netgear 1G switch (the link can sometimes be
recovered by restarting the netdevice with "ifconfig down up").
Reproduced with a TI am572x board equipped with a ksz9031 PHY while
connecting to a Netgear 1G switch.

Fix the issue by reconfiguring autonegotiation after PHY reset in
ksz9031_read_status().

Fixes: d2fd719bcb0e ("net/phy: micrel: Add workaround for bad autoneg")
Signed-off-by: Grygorii Strashko 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 drivers/net/phy/micrel.c |1 +
 1 file changed, 1 insertion(+)

--- a/drivers/net/phy/micrel.c
+++ b/drivers/net/phy/micrel.c
@@ -622,6 +622,7 @@ static int ksz9031_read_status(struct ph
phydev->link = 0;
if (phydev->drv->config_intr && phy_interrupt_is_valid(phydev))
phydev->drv->config_intr(phydev);
+   return genphy_config_aneg(phydev);
}
 
return 0;




[PATCH 4.14 098/146] sctp: make sure stream nums can match optlen in sctp_setsockopt_reset_streams

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Xin Long 


[ Upstream commit 2342b8d95bcae5946e1b9b8d58645f37500ef2e7 ]

Now in sctp_setsockopt_reset_streams, the only check on optlen is
optlen < sizeof(*params). That is not enough, as
params->srs_number_streams should also match optlen.

If the streams in params->srs_stream_list are fewer than the stream
count in params->srs_number_streams, dereferencing the stream list
later could cause a slab-out-of-bounds crash, as reported by syzbot.

This patch fixes it by also checking the stream count in
sctp_setsockopt_reset_streams to make sure it is not greater than the
number of streams in the list.

Fixes: 7f9d68ac944e ("sctp: implement sender-side procedures for SSN Reset Request Parameter")
Reported-by: Dmitry Vyukov 
Signed-off-by: Xin Long 
Acked-by: Marcelo Ricardo Leitner 
Acked-by: Neil Horman 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/sctp/socket.c |6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -3874,13 +3874,17 @@ static int sctp_setsockopt_reset_streams
struct sctp_association *asoc;
int retval = -EINVAL;
 
-   if (optlen < sizeof(struct sctp_reset_streams))
+   if (optlen < sizeof(*params))
return -EINVAL;
 
params = memdup_user(optval, optlen);
if (IS_ERR(params))
return PTR_ERR(params);
 
+   if (params->srs_number_streams * sizeof(__u16) >
+   optlen - sizeof(*params))
+   goto out;
+
asoc = sctp_id2assoc(sk, params->srs_assoc_id);
if (!asoc)
goto out;




[PATCH 4.14 069/146] tcp md5sig: Use skb's saddr when replying to an incoming segment

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Christoph Paasch 


[ Upstream commit 30791ac41927ebd3e75486f9504b6d2280463bf0 ]

The MD5-key that belongs to a connection is identified by the peer's
IP-address. When we are in tcp_v4(6)_reqsk_send_ack(), we are replying
to an incoming segment from tcp_check_req() that failed the seq-number
checks.

Thus, to find the correct key, we need to use the skb's saddr and not
the daddr.

This bug seems to have been there since quite a while, but probably got
unnoticed because the consequences are not catastrophic. We will call
tcp_v4_reqsk_send_ack only to send a challenge-ACK back to the peer,
thus the connection doesn't really fail.

Fixes: 9501f9722922 ("tcp md5sig: Let the caller pass appropriate key for tcp_v{4,6}_do_calc_md5_hash().")
Signed-off-by: Christoph Paasch 
Reviewed-by: Eric Dumazet 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/ipv4/tcp_ipv4.c |2 +-
 net/ipv6/tcp_ipv6.c |2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -844,7 +844,7 @@ static void tcp_v4_reqsk_send_ack(const
tcp_time_stamp_raw() + tcp_rsk(req)->ts_off,
req->ts_recent,
0,
-   tcp_md5_do_lookup(sk, (union tcp_md5_addr *)&ip_hdr(skb)->daddr,
+   tcp_md5_do_lookup(sk, (union tcp_md5_addr *)&ip_hdr(skb)->saddr,
  AF_INET),
inet_rsk(req)->no_srccheck ? IP_REPLY_ARG_NOSRCCHECK : 0,
ip_hdr(skb)->tos);
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -988,7 +988,7 @@ static void tcp_v6_reqsk_send_ack(const
req->rsk_rcv_wnd >> inet_rsk(req)->rcv_wscale,
tcp_time_stamp_raw() + tcp_rsk(req)->ts_off,
req->ts_recent, sk->sk_bound_dev_if,
-   tcp_v6_md5_do_lookup(sk, &ipv6_hdr(skb)->daddr),
+   tcp_v6_md5_do_lookup(sk, &ipv6_hdr(skb)->saddr),
0, 0);
 }
 




[PATCH 4.14 096/146] net: dsa: bcm_sf2: Clear IDDQ_GLOBAL_PWR bit for PHY

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Florian Fainelli 


[ Upstream commit 4b52d010113e11006a389f2a8315167ede9e0b10 ]

The PHY on BCM7278 has an additional bit that needs to be cleared:
IDDQ_GLOBAL_PWR, without doing this, the PHY remains stuck in reset out
of suspend/resume cycles.

Fixes: 0fe9933804eb ("net: dsa: bcm_sf2: Add support for BCM7278 integrated switch")
Signed-off-by: Florian Fainelli 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 drivers/net/dsa/bcm_sf2.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/net/dsa/bcm_sf2.c
+++ b/drivers/net/dsa/bcm_sf2.c
@@ -167,7 +167,7 @@ static void bcm_sf2_gphy_enable_set(stru
reg = reg_readl(priv, REG_SPHY_CNTRL);
if (enable) {
reg |= PHY_RESET;
-   reg &= ~(EXT_PWR_DOWN | IDDQ_BIAS | CK25_DIS);
+   reg &= ~(EXT_PWR_DOWN | IDDQ_BIAS | IDDQ_GLOBAL_PWR | CK25_DIS);
reg_writel(priv, reg, REG_SPHY_CNTRL);
udelay(21);
reg = reg_readl(priv, REG_SPHY_CNTRL);




linux-next: Signed-off-by missing for commit in the iversion tree

2018-01-01 Thread Stephen Rothwell
Hi Jeff,

Commit

  e9f12c2601ee ("ima: Use i_version only when filesystem supports it")

is missing a Signed-off-by from its committer.

-- 
Cheers,
Stephen Rothwell


[PATCH 4.14 097/146] s390/qeth: fix error handling in checksum cmd callback

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Julian Wiedmann 


[ Upstream commit ad3cbf61332914711e5f506972b1dc9af8d62146 ]

Make sure to check both return code fields before processing the
response. Otherwise we risk operating on invalid data.

Fixes: c9475369bd2b ("s390/qeth: rework RX/TX checksum offload")
Signed-off-by: Julian Wiedmann 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 drivers/s390/net/qeth_core_main.c |9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -5445,6 +5445,13 @@ out:
 }
 EXPORT_SYMBOL_GPL(qeth_poll);
 
+static int qeth_setassparms_inspect_rc(struct qeth_ipa_cmd *cmd)
+{
+   if (!cmd->hdr.return_code)
+   cmd->hdr.return_code = cmd->data.setassparms.hdr.return_code;
+   return cmd->hdr.return_code;
+}
+
 int qeth_setassparms_cb(struct qeth_card *card,
struct qeth_reply *reply, unsigned long data)
 {
@@ -6304,7 +6311,7 @@ static int qeth_ipa_checksum_run_cmd_cb(
(struct qeth_checksum_cmd *)reply->param;
 
QETH_CARD_TEXT(card, 4, "chkdoccb");
-   if (cmd->hdr.return_code)
+   if (qeth_setassparms_inspect_rc(cmd))
return 0;
 
memset(chksum_cb, 0, sizeof(*chksum_cb));




[PATCH 4.14 093/146] net/mlx5: Fix error flow in CREATE_QP command

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Moni Shoua 


[ Upstream commit dbff26e44dc3ec4de6578733b054a0114652a764 ]

In the error flow, when the DESTROY_QP command should be executed, the
wrong mailbox was filled with data, rather than the one that is written
to hardware. Fix that.

Fixes: 09a7d9eca1a6 '{net,IB}/mlx5: QP/XRCD commands via mlx5 ifc'
Signed-off-by: Moni Shoua 
Signed-off-by: Saeed Mahameed 
Signed-off-by: Greg Kroah-Hartman 
---
 drivers/net/ethernet/mellanox/mlx5/core/qp.c |4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/drivers/net/ethernet/mellanox/mlx5/core/qp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/qp.c
@@ -213,8 +213,8 @@ int mlx5_core_create_qp(struct mlx5_core
 err_cmd:
memset(din, 0, sizeof(din));
memset(dout, 0, sizeof(dout));
-   MLX5_SET(destroy_qp_in, in, opcode, MLX5_CMD_OP_DESTROY_QP);
-   MLX5_SET(destroy_qp_in, in, qpn, qp->qpn);
+   MLX5_SET(destroy_qp_in, din, opcode, MLX5_CMD_OP_DESTROY_QP);
+   MLX5_SET(destroy_qp_in, din, qpn, qp->qpn);
mlx5_cmd_exec(dev, din, sizeof(din), dout, sizeof(dout));
return err;
 }




[PATCH 4.14 102/146] net: phy: marvell: Limit 88m1101 autoneg errata to 88E1145 as well.

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Zhao Qiang 


[ Upstream commit c505873eaece2b4aefd07d339dc7e1400e0235ac ]

88E1145 also needs this autoneg errata.

Fixes: f2899788353c ("net: phy: marvell: Limit errata to 88m1101")
Signed-off-by: Zhao Qiang 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 drivers/net/phy/marvell.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/net/phy/marvell.c
+++ b/drivers/net/phy/marvell.c
@@ -2069,7 +2069,7 @@ static struct phy_driver marvell_drivers
.flags = PHY_HAS_INTERRUPT,
.probe = marvell_probe,
.config_init = _config_init,
-   .config_aneg = _config_aneg,
+   .config_aneg = _config_aneg,
.read_status = _read_status,
.ack_interrupt = _ack_interrupt,
.config_intr = _config_intr,




[PATCH 4.14 101/146] tcp: fix potential underestimation on rcv_rtt

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Wei Wang 


[ Upstream commit 9ee11bd03cb1a5c3ca33c2bb70e7ed325f68890f ]

When ms timestamps are used, the current logic in tcp_rcv_rtt_update()
uses 1us when the real rcv_rtt is within 1 - 999us.
This can cause rcv_rtt underestimation.
Fix it by always using a minimum value of 1ms when ms timestamps are used.

Fixes: 645f4c6f2ebd ("tcp: switch rcv_rtt_est and rcvq_space to high resolution timestamps")
Signed-off-by: Wei Wang 
Signed-off-by: Eric Dumazet 
Acked-by: Neal Cardwell 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/ipv4/tcp_input.c |   10 ++
 1 file changed, 6 insertions(+), 4 deletions(-)

--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -521,9 +521,6 @@ static void tcp_rcv_rtt_update(struct tc
u32 new_sample = tp->rcv_rtt_est.rtt_us;
long m = sample;
 
-   if (m == 0)
-   m = 1;
-
if (new_sample != 0) {
/* If we sample in larger samples in the non-timestamp
 * case, we could grossly overestimate the RTT especially
@@ -560,6 +557,8 @@ static inline void tcp_rcv_rtt_measure(s
if (before(tp->rcv_nxt, tp->rcv_rtt_est.seq))
return;
delta_us = tcp_stamp_us_delta(tp->tcp_mstamp, tp->rcv_rtt_est.time);
+   if (!delta_us)
+   delta_us = 1;
tcp_rcv_rtt_update(tp, delta_us, 1);
 
 new_measure:
@@ -576,8 +575,11 @@ static inline void tcp_rcv_rtt_measure_t
(TCP_SKB_CB(skb)->end_seq -
 TCP_SKB_CB(skb)->seq >= inet_csk(sk)->icsk_ack.rcv_mss)) {
u32 delta = tcp_time_stamp(tp) - tp->rx_opt.rcv_tsecr;
-   u32 delta_us = delta * (USEC_PER_SEC / TCP_TS_HZ);
+   u32 delta_us;
 
+   if (!delta)
+   delta = 1;
+   delta_us = delta * (USEC_PER_SEC / TCP_TS_HZ);
tcp_rcv_rtt_update(tp, delta_us, 0);
}
 }




[PATCH 4.14 070/146] tg3: Fix rx hang on MTU change with 5717/5719

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Brian King 


[ Upstream commit 748a240c589824e9121befb1cba5341c319885bc ]

This fixes a hang issue seen when changing the MTU size from 1500 MTU
to 9000 MTU on both 5717 and 5719 chips. In discussion with Broadcom,
they've indicated that these chipsets have the same phy as the 57766
chipset, so the same workarounds apply. This has been tested by IBM
on both Power 8 and Power 9 systems as well as by Broadcom on x86
hardware and has been confirmed to resolve the hang issue.

Signed-off-by: Brian King 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 drivers/net/ethernet/broadcom/tg3.c |4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

--- a/drivers/net/ethernet/broadcom/tg3.c
+++ b/drivers/net/ethernet/broadcom/tg3.c
@@ -14227,7 +14227,9 @@ static int tg3_change_mtu(struct net_dev
/* Reset PHY, otherwise the read DMA engine will be in a mode that
 * breaks all requests to 256 bytes.
 */
-   if (tg3_asic_rev(tp) == ASIC_REV_57766)
+   if (tg3_asic_rev(tp) == ASIC_REV_57766 ||
+   tg3_asic_rev(tp) == ASIC_REV_5717 ||
+   tg3_asic_rev(tp) == ASIC_REV_5719)
reset_phy = true;
 
err = tg3_restart_hw(tp, reset_phy);




[PATCH 4.14 071/146] tcp_bbr: reset full pipe detection on loss recovery undo

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Neal Cardwell 


[ Upstream commit 2f6c498e4f15d27852c04ed46d804a39137ba364 ]

Fix BBR so that upon notification of a loss recovery undo BBR resets
the full pipe detection (STARTUP exit) state machine.

Under high reordering, reordering events can be interpreted as loss.
If the reordering and spurious loss estimates are high enough, this
could previously cause BBR to spuriously estimate that the pipe is
full.

Since spurious loss recovery means that our overall sending will have
slowed down spuriously, this commit gives a flow more time to probe
robustly for bandwidth and decide the pipe is really full.
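The effect of the undo hook on the detector state can be sketched as follows. The field names follow the patch; the struct and function are illustrative stand-ins, not the kernel's:

```c
#include <assert.h>

/* Toy stand-in for BBR's full-pipe detector: full_bw is the best
 * bandwidth estimate seen so far, full_bw_cnt counts rounds without
 * noticeable growth; enough such rounds normally mark the pipe "full"
 * and end STARTUP. */
struct bbr_sketch {
	unsigned int full_bw;
	unsigned int full_bw_cnt;
};

/* On a loss-recovery undo the slowdown was spurious, so discard the
 * "pipe looks full" evidence and let the flow probe for bandwidth
 * again, as the patch does in bbr_undo_cwnd(). */
static void bbr_undo_reset(struct bbr_sketch *bbr)
{
	bbr->full_bw = 0;
	bbr->full_bw_cnt = 0;
}
```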

Signed-off-by: Neal Cardwell 
Reviewed-by: Yuchung Cheng 
Acked-by: Soheil Hassas Yeganeh 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/ipv4/tcp_bbr.c |4 ++++
 1 file changed, 4 insertions(+)

--- a/net/ipv4/tcp_bbr.c
+++ b/net/ipv4/tcp_bbr.c
@@ -874,6 +874,10 @@ static u32 bbr_sndbuf_expand(struct sock
  */
 static u32 bbr_undo_cwnd(struct sock *sk)
 {
+   struct bbr *bbr = inet_csk_ca(sk);
+
+   bbr->full_bw = 0;   /* spurious slow-down; reset full pipe detection */
+   bbr->full_bw_cnt = 0;
return tcp_sk(sk)->snd_cwnd;
 }
 




[PATCH 4.14 104/146] tcp: refresh tcp_mstamp from timers callbacks

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Eric Dumazet 


[ Upstream commit 4688eb7cf3ae2c2721d1dacff5c1384cba47d176 ]

Only the retransmit timer currently refreshes tcp_mstamp.

We should do the same for delayed acks and keepalives.

Even if RFC 7323 does not request it, this is consistent with what Linux
did in the past, when TS values were based on jiffies.

Fixes: 385e20706fac ("tcp: use tp->tcp_mstamp in output path")
Signed-off-by: Eric Dumazet 
Cc: Soheil Hassas Yeganeh 
Cc: Mike Maloney 
Cc: Neal Cardwell 
Acked-by: Neal Cardwell 
Acked-by: Soheil Hassas Yeganeh 
Acked-by:  Mike Maloney 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/ipv4/tcp_timer.c |2 ++
 1 file changed, 2 insertions(+)

--- a/net/ipv4/tcp_timer.c
+++ b/net/ipv4/tcp_timer.c
@@ -264,6 +264,7 @@ void tcp_delack_timer_handler(struct soc
icsk->icsk_ack.pingpong = 0;
icsk->icsk_ack.ato  = TCP_ATO_MIN;
}
+   tcp_mstamp_refresh(tcp_sk(sk));
tcp_send_ack(sk);
__NET_INC_STATS(sock_net(sk), LINUX_MIB_DELAYEDACKS);
}
@@ -627,6 +628,7 @@ static void tcp_keepalive_timer (unsigne
goto out;
}
 
+   tcp_mstamp_refresh(tp);
if (sk->sk_state == TCP_FIN_WAIT2 && sock_flag(sk, SOCK_DEAD)) {
if (tp->linger2 >= 0) {
const int tmo = tcp_fin_time(sk) - TCP_TIMEWAIT_LEN;




[PATCH 4.14 072/146] tcp_bbr: reset long-term bandwidth sampling on loss recovery undo

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Neal Cardwell 


[ Upstream commit 600647d467c6d04b3954b41a6ee1795b5ae00550 ]

Fix BBR so that upon notification of a loss recovery undo BBR resets
long-term bandwidth sampling.

Under high reordering, reordering events can be interpreted as loss.
If the reordering and spurious loss estimates are high enough, this
can cause BBR to spuriously estimate that we are seeing loss rates
high enough to trigger long-term bandwidth estimation. To avoid that
problem, this commit resets long-term bandwidth sampling on loss
recovery undo events.

Signed-off-by: Neal Cardwell 
Reviewed-by: Yuchung Cheng 
Acked-by: Soheil Hassas Yeganeh 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/ipv4/tcp_bbr.c |1 +
 1 file changed, 1 insertion(+)

--- a/net/ipv4/tcp_bbr.c
+++ b/net/ipv4/tcp_bbr.c
@@ -878,6 +878,7 @@ static u32 bbr_undo_cwnd(struct sock *sk
 
bbr->full_bw = 0;   /* spurious slow-down; reset full pipe detection */
bbr->full_bw_cnt = 0;
+   bbr_reset_lt_bw_sampling(sk);
return tcp_sk(sk)->snd_cwnd;
 }
 




[PATCH 4.14 074/146] s390/qeth: don't apply takeover changes to RXIP

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Julian Wiedmann 


[ Upstream commit b22d73d6689fd902a66c08ebe71ab2f3b351e22f ]

When takeover is switched off, current code clears the 'TAKEOVER' flag on
all IPs. But the flag is also used for RXIP addresses, and those should
not be affected by the takeover mode.
Fix the behaviour by consistently applying takeover logic to NORMAL
addresses only.
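The structure of the fix, moving the type check into the coverage test itself so every caller inherits it, can be sketched like this. The names are made up for illustration; the driver uses qeth_l3_is_addr_covered_by_ipato() and the QETH_IP_TYPE_* constants:

```c
#include <assert.h>
#include <stdbool.h>

enum ip_type { IP_TYPE_NORMAL, IP_TYPE_RXIP, IP_TYPE_VIPA };

struct ipaddr {
	enum ip_type type;
	bool in_ipato_range;	/* stands in for the real prefix match */
};

/* The coverage check itself now rejects anything that is not a NORMAL
 * address, so RXIP entries can never be flagged for takeover even if
 * a caller forgets to filter by type. */
static bool covered_by_ipato(bool ipato_enabled, const struct ipaddr *addr)
{
	if (!ipato_enabled)
		return false;
	if (addr->type != IP_TYPE_NORMAL)
		return false;
	return addr->in_ipato_range;
}
```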

Signed-off-by: Julian Wiedmann 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 drivers/s390/net/qeth_l3_main.c |5 +++--
 drivers/s390/net/qeth_l3_sys.c  |5 +++--
 2 files changed, 6 insertions(+), 4 deletions(-)

--- a/drivers/s390/net/qeth_l3_main.c
+++ b/drivers/s390/net/qeth_l3_main.c
@@ -173,6 +173,8 @@ int qeth_l3_is_addr_covered_by_ipato(str
 
if (!card->ipato.enabled)
return 0;
+   if (addr->type != QETH_IP_TYPE_NORMAL)
+   return 0;
 
qeth_l3_convert_addr_to_bits((u8 *) &addr->u, addr_bits,
  (addr->proto == QETH_PROT_IPV4)? 4:16);
@@ -289,8 +291,7 @@ int qeth_l3_add_ip(struct qeth_card *car
memcpy(addr, tmp_addr, sizeof(struct qeth_ipaddr));
addr->ref_counter = 1;
 
-   if (addr->type == QETH_IP_TYPE_NORMAL  &&
-   qeth_l3_is_addr_covered_by_ipato(card, addr)) {
+   if (qeth_l3_is_addr_covered_by_ipato(card, addr)) {
QETH_CARD_TEXT(card, 2, "tkovaddr");
addr->set_flags |= QETH_IPA_SETIP_TAKEOVER_FLAG;
}
--- a/drivers/s390/net/qeth_l3_sys.c
+++ b/drivers/s390/net/qeth_l3_sys.c
@@ -396,10 +396,11 @@ static ssize_t qeth_l3_dev_ipato_enable_
card->ipato.enabled = enable;
 
hash_for_each(card->ip_htable, i, addr, hnode) {
+   if (addr->type != QETH_IP_TYPE_NORMAL)
+   continue;
if (!enable)
addr->set_flags &= ~QETH_IPA_SETIP_TAKEOVER_FLAG;
-   else if (addr->type == QETH_IP_TYPE_NORMAL &&
-qeth_l3_is_addr_covered_by_ipato(card, addr))
+   else if (qeth_l3_is_addr_covered_by_ipato(card, addr))
addr->set_flags |= QETH_IPA_SETIP_TAKEOVER_FLAG;
}
 out:




[PATCH 4.14 120/146] usbip: stub: stop printing kernel pointer addresses in messages

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Shuah Khan 

commit 248a22044366f588d46754c54dfe29ffe4f8b4df upstream.

Remove and/or change debug, info, and error messages so that they do
not print kernel pointer addresses.

Signed-off-by: Shuah Khan 
Signed-off-by: Greg Kroah-Hartman 

---
 drivers/usb/usbip/stub_main.c |5 +++--
 drivers/usb/usbip/stub_rx.c   |7 ++-----
 drivers/usb/usbip/stub_tx.c   |6 +++---
 3 files changed, 8 insertions(+), 10 deletions(-)

--- a/drivers/usb/usbip/stub_main.c
+++ b/drivers/usb/usbip/stub_main.c
@@ -251,11 +251,12 @@ void stub_device_cleanup_urbs(struct stu
struct stub_priv *priv;
struct urb *urb;
 
-   dev_dbg(&sdev->udev->dev, "free sdev %p\n", sdev);
+   dev_dbg(&sdev->udev->dev, "Stub device cleaning up urbs\n");
 
while ((priv = stub_priv_pop(sdev))) {
urb = priv->urb;
-   dev_dbg(&sdev->udev->dev, "free urb %p\n", urb);
+   dev_dbg(&sdev->udev->dev, "free urb seqnum %lu\n",
+   priv->seqnum);
usb_kill_urb(urb);
 
kmem_cache_free(stub_priv_cache, priv);
--- a/drivers/usb/usbip/stub_rx.c
+++ b/drivers/usb/usbip/stub_rx.c
@@ -225,9 +225,6 @@ static int stub_recv_cmd_unlink(struct s
if (priv->seqnum != pdu->u.cmd_unlink.seqnum)
continue;
 
-   dev_info(&priv->urb->dev->dev, "unlink urb %p\n",
-priv->urb);
-
/*
 * This matched urb is not completed yet (i.e., be in
 * flight in usb hcd hardware/driver). Now we are
@@ -266,8 +263,8 @@ static int stub_recv_cmd_unlink(struct s
ret = usb_unlink_urb(priv->urb);
if (ret != -EINPROGRESS)
dev_err(&priv->urb->dev->dev,
-   "failed to unlink a urb %p, ret %d\n",
-   priv->urb, ret);
+   "failed to unlink a urb # %lu, ret %d\n",
+   priv->seqnum, ret);
 
return 0;
}
--- a/drivers/usb/usbip/stub_tx.c
+++ b/drivers/usb/usbip/stub_tx.c
@@ -102,7 +102,7 @@ void stub_complete(struct urb *urb)
/* link a urb to the queue of tx. */
spin_lock_irqsave(&sdev->priv_lock, flags);
if (sdev->ud.tcp_socket == NULL) {
-   usbip_dbg_stub_tx("ignore urb for closed connection %p", urb);
+   usbip_dbg_stub_tx("ignore urb for closed connection\n");
/* It will be freed in stub_device_cleanup_urbs(). */
} else if (priv->unlinking) {
stub_enqueue_ret_unlink(sdev, priv->seqnum, urb->status);
@@ -204,8 +204,8 @@ static int stub_send_ret_submit(struct s
 
/* 1. setup usbip_header */
setup_ret_submit_pdu(&pdu_header, urb);
-   usbip_dbg_stub_tx("setup txdata seqnum: %d urb: %p\n",
- pdu_header.base.seqnum, urb);
+   usbip_dbg_stub_tx("setup txdata seqnum: %d\n",
+ pdu_header.base.seqnum);
usbip_header_correct_endian(_header, 1);
 
iov[iovnum].iov_base = &pdu_header;




[PATCH 4.14 119/146] usbip: prevent leaking socket pointer address in messages

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Shuah Khan 

commit 90120d15f4c397272aaf41077960a157fc4212bf upstream.

The usbip driver is leaking socket pointer addresses in messages. Remove
the messages that aren't useful, and print the sockfd in the ones that
are useful for debugging.
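The replacement pattern is simple: log a small, stable integer (the socket fd or a sequence number) rather than a kernel pointer, which would reveal address-space layout to anyone able to read the logs. A userspace sketch with an illustrative helper name (the driver formats "shutdown sockfd %d" inline via dev_dbg()/pr_debug()):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format the shutdown message around the socket file descriptor
 * instead of the socket's kernel address. */
static int format_shutdown_msg(char *buf, size_t len, int sockfd)
{
	return snprintf(buf, len, "shutdown sockfd %d", sockfd);
}
```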

Signed-off-by: Shuah Khan 
Signed-off-by: Greg Kroah-Hartman 

---
 drivers/usb/usbip/stub_dev.c |3 +--
 drivers/usb/usbip/usbip_common.c |   16 +++++-----------
 drivers/usb/usbip/vhci_hcd.c |2 +-
 3 files changed, 7 insertions(+), 14 deletions(-)

--- a/drivers/usb/usbip/stub_dev.c
+++ b/drivers/usb/usbip/stub_dev.c
@@ -163,8 +163,7 @@ static void stub_shutdown_connection(str
 * step 1?
 */
if (ud->tcp_socket) {
-   dev_dbg(&sdev->udev->dev, "shutdown tcp_socket %p\n",
-   ud->tcp_socket);
+   dev_dbg(&sdev->udev->dev, "shutdown sockfd %d\n", ud->sockfd);
kernel_sock_shutdown(ud->tcp_socket, SHUT_RDWR);
}
 
--- a/drivers/usb/usbip/usbip_common.c
+++ b/drivers/usb/usbip/usbip_common.c
@@ -331,26 +331,20 @@ int usbip_recv(struct socket *sock, void
struct msghdr msg = {.msg_flags = MSG_NOSIGNAL};
int total = 0;
 
+   if (!sock || !buf || !size)
+   return -EINVAL;
+
iov_iter_kvec(&msg.msg_iter, READ|ITER_KVEC, &iov, 1, size);
 
usbip_dbg_xmit("enter\n");
 
-   if (!sock || !buf || !size) {
-   pr_err("invalid arg, sock %p buff %p size %d\n", sock, buf,
-  size);
-   return -EINVAL;
-   }
-
do {
-   int sz = msg_data_left(&msg);
+   msg_data_left(&msg);
sock->sk->sk_allocation = GFP_NOIO;
 
result = sock_recvmsg(sock, &msg, MSG_WAITALL);
-   if (result <= 0) {
-   pr_debug("receive sock %p buf %p size %u ret %d total %d\n",
-sock, buf + total, sz, result, total);
+   if (result <= 0)
goto err;
-   }
 
total += result;
} while (msg_data_left(&msg));
--- a/drivers/usb/usbip/vhci_hcd.c
+++ b/drivers/usb/usbip/vhci_hcd.c
@@ -989,7 +989,7 @@ static void vhci_shutdown_connection(str
 
/* need this? see stub_dev.c */
if (ud->tcp_socket) {
-   pr_debug("shutdown tcp_socket %p\n", ud->tcp_socket);
+   pr_debug("shutdown tcp_socket %d\n", ud->sockfd);
kernel_sock_shutdown(ud->tcp_socket, SHUT_RDWR);
}
 




[PATCH 4.14 107/146] net: sched: fix static key imbalance in case of ingress/clsact_init error

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Jiri Pirko 


[ Upstream commit b59e6979a86384e68b0ab6ffeab11f0034fba82d ]

Move static key increments to the beginning of the init function
so they pair 1:1 with decrements in ingress/clsact_destroy,
which is called in case ingress/clsact_init fails.
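The invariant being restored is that every increment has exactly one matching decrement on every path, including the failure path. A userspace sketch of the pairing (a plain counter stands in for the static key, and tcf_block_get() is stubbed; all names suffixed _sketch/_stub are illustrative):

```c
#include <assert.h>

static int ingress_needed;	/* stands in for the static key */

static void net_inc_ingress_queue(void) { ingress_needed++; }
static void net_dec_ingress_queue(void) { ingress_needed--; }

/* Stand-in for tcf_block_get(); fails with -ENOMEM (-12) on demand. */
static int tcf_block_get_stub(int fail) { return fail ? -12 : 0; }

/* Increment FIRST, as the patch does: if init fails, the core calls
 * destroy, whose unconditional decrement then pairs 1:1 with this
 * increment and the count returns to zero. */
static int ingress_init_sketch(int fail)
{
	net_inc_ingress_queue();
	return tcf_block_get_stub(fail);
}

static void ingress_destroy_sketch(void)
{
	net_dec_ingress_queue();
}
```

With the old ordering (increment after the error return), the destroy-on-failure path would decrement a key that was never incremented.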

Fixes: 6529eaba33f0 ("net: sched: introduce tcf block infractructure")
Signed-off-by: Jiri Pirko 
Acked-by: Cong Wang 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/sched/sch_ingress.c |9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

--- a/net/sched/sch_ingress.c
+++ b/net/sched/sch_ingress.c
@@ -59,11 +59,12 @@ static int ingress_init(struct Qdisc *sc
struct net_device *dev = qdisc_dev(sch);
int err;
 
+   net_inc_ingress_queue();
+
err = tcf_block_get(&q->block, &dev->ingress_cl_list);
if (err)
return err;
 
-   net_inc_ingress_queue();
sch->flags |= TCQ_F_CPUSTATS;
 
return 0;
@@ -153,6 +154,9 @@ static int clsact_init(struct Qdisc *sch
struct net_device *dev = qdisc_dev(sch);
int err;
 
+   net_inc_ingress_queue();
+   net_inc_egress_queue();
+
err = tcf_block_get(&q->ingress_block, &dev->ingress_cl_list);
if (err)
return err;
@@ -161,9 +165,6 @@ static int clsact_init(struct Qdisc *sch
if (err)
return err;
 
-   net_inc_ingress_queue();
-   net_inc_egress_queue();
-
sch->flags |= TCQ_F_CPUSTATS;
 
return 0;




[PATCH 4.14 121/146] usbip: vhci: stop printing kernel pointer addresses in messages

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Shuah Khan 

commit 8272d099d05f7ab2776cf56a2ab9f9443be18907 upstream.

Remove and/or change debug, info, and error messages so that they do
not print kernel pointer addresses.

Signed-off-by: Shuah Khan 
Signed-off-by: Greg Kroah-Hartman 

---
 drivers/usb/usbip/vhci_hcd.c |   10 ----------
 drivers/usb/usbip/vhci_rx.c  |   23 +++++++++++------------
 drivers/usb/usbip/vhci_tx.c  |3 ++-
 3 files changed, 13 insertions(+), 23 deletions(-)

--- a/drivers/usb/usbip/vhci_hcd.c
+++ b/drivers/usb/usbip/vhci_hcd.c
@@ -670,9 +670,6 @@ static int vhci_urb_enqueue(struct usb_h
struct vhci_device *vdev;
unsigned long flags;
 
-   usbip_dbg_vhci_hc("enter, usb_hcd %p urb %p mem_flags %d\n",
- hcd, urb, mem_flags);
-
if (portnum > VHCI_HC_PORTS) {
pr_err("invalid port number %d\n", portnum);
return -ENODEV;
@@ -836,8 +833,6 @@ static int vhci_urb_dequeue(struct usb_h
struct vhci_device *vdev;
unsigned long flags;
 
-   pr_info("dequeue a urb %p\n", urb);
-
spin_lock_irqsave(&vhci->lock, flags);
 
priv = urb->hcpriv;
@@ -865,7 +860,6 @@ static int vhci_urb_dequeue(struct usb_h
/* tcp connection is closed */
spin_lock(&vdev->priv_lock);
 
-   pr_info("device %p seems to be disconnected\n", vdev);
list_del(&priv->list);
kfree(priv);
urb->hcpriv = NULL;
@@ -877,8 +871,6 @@ static int vhci_urb_dequeue(struct usb_h
 * vhci_rx will receive RET_UNLINK and give back the URB.
 * Otherwise, we give back it here.
 */
-   pr_info("gives back urb %p\n", urb);
-
usb_hcd_unlink_urb_from_ep(hcd, urb);
 
spin_unlock_irqrestore(&vhci->lock, flags);
@@ -906,8 +898,6 @@ static int vhci_urb_dequeue(struct usb_h
 
unlink->unlink_seqnum = priv->seqnum;
 
-   pr_info("device %p seems to be still connected\n", vdev);
-
/* send cmd_unlink and try to cancel the pending URB in the
 * peer */
list_add_tail(&unlink->list, &vdev->unlink_tx);
--- a/drivers/usb/usbip/vhci_rx.c
+++ b/drivers/usb/usbip/vhci_rx.c
@@ -37,24 +37,23 @@ struct urb *pickup_urb_and_free_priv(str
urb = priv->urb;
status = urb->status;
 
-   usbip_dbg_vhci_rx("find urb %p vurb %p seqnum %u\n",
-   urb, priv, seqnum);
+   usbip_dbg_vhci_rx("find urb seqnum %u\n", seqnum);
 
switch (status) {
case -ENOENT:
/* fall through */
case -ECONNRESET:
-   dev_info(&urb->dev->dev,
-"urb %p was unlinked %ssynchronuously.\n", urb,
-status == -ENOENT ? "" : "a");
+   dev_dbg(&urb->dev->dev,
+"urb seq# %u was unlinked %ssynchronuously\n",
+seqnum, status == -ENOENT ? "" : "a");
break;
case -EINPROGRESS:
/* no info output */
break;
default:
-   dev_info(&urb->dev->dev,
-"urb %p may be in a error, status %d\n", urb,
-status);
+   dev_dbg(&urb->dev->dev,
+"urb seq# %u may be in a error, status %d\n",
+seqnum, status);
}
 
list_del(&priv->list);
@@ -81,8 +80,8 @@ static void vhci_recv_ret_submit(struct
spin_unlock_irqrestore(&vdev->priv_lock, flags);
 
if (!urb) {
-   pr_err("cannot find a urb of seqnum %u\n", pdu->base.seqnum);
-   pr_info("max seqnum %d\n",
+   pr_err("cannot find a urb of seqnum %u max seqnum %d\n",
+   pdu->base.seqnum,
atomic_read(&vhci_hcd->seqnum));
usbip_event_add(ud, VDEV_EVENT_ERROR_TCP);
return;
@@ -105,7 +104,7 @@ static void vhci_recv_ret_submit(struct
if (usbip_dbg_flag_vhci_rx)
usbip_dump_urb(urb);
 
-   usbip_dbg_vhci_rx("now giveback urb %p\n", urb);
+   usbip_dbg_vhci_rx("now giveback urb %u\n", pdu->base.seqnum);
 
spin_lock_irqsave(&vhci->lock, flags);
usb_hcd_unlink_urb_from_ep(vhci_hcd_to_hcd(vhci_hcd), urb);
@@ -172,7 +171,7 @@ static void vhci_recv_ret_unlink(struct
pr_info("the urb (seqnum %d) was already given back\n",
pdu->base.seqnum);
} else {
-   usbip_dbg_vhci_rx("now giveback urb %p\n", urb);
+   usbip_dbg_vhci_rx("now giveback urb %d\n", pdu->base.seqnum);
 
/* If unlink is successful, 

[PATCH 4.14 118/146] usbip: fix usbip bind writing random string after command in match_busid

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Juan Zea 

commit 544c4605acc5ae4afe7dd5914147947db182f2fb upstream.

usbip bind writes commands followed by a random string when writing to
the match_busid attribute in sysfs, caused by using the full variable
size instead of the string length.
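The corrected pattern is to capture snprintf()'s return value, the formatted length, and write only that many bytes, rather than sizeof(command), which would also send whatever garbage follows the NUL terminator. A sketch (the helper name and the SYSFS_BUS_ID_SIZE value are assumed here for illustration):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define SYSFS_BUS_ID_SIZE 32	/* assumed value for the sketch */

/* Build the "add <busid>" / "del <busid>" command and return its
 * length, which is what should be passed to the sysfs write. */
static int build_match_busid_cmd(char *command, size_t len,
				 const char *busid, int add)
{
	return snprintf(command, len, "%s %s", add ? "add" : "del", busid);
}
```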

Signed-off-by: Juan Zea 
Acked-by: Shuah Khan 
Signed-off-by: Greg Kroah-Hartman 

---
 tools/usb/usbip/src/utils.c |9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

--- a/tools/usb/usbip/src/utils.c
+++ b/tools/usb/usbip/src/utils.c
@@ -30,6 +30,7 @@ int modify_match_busid(char *busid, int
char command[SYSFS_BUS_ID_SIZE + 4];
char match_busid_attr_path[SYSFS_PATH_MAX];
int rc;
+   int cmd_size;
 
snprintf(match_busid_attr_path, sizeof(match_busid_attr_path),
 "%s/%s/%s/%s/%s/%s", SYSFS_MNT_PATH, SYSFS_BUS_NAME,
@@ -37,12 +38,14 @@ int modify_match_busid(char *busid, int
 attr_name);
 
if (add)
-   snprintf(command, SYSFS_BUS_ID_SIZE + 4, "add %s", busid);
+   cmd_size = snprintf(command, SYSFS_BUS_ID_SIZE + 4, "add %s",
+   busid);
else
-   snprintf(command, SYSFS_BUS_ID_SIZE + 4, "del %s", busid);
+   cmd_size = snprintf(command, SYSFS_BUS_ID_SIZE + 4, "del %s",
+   busid);
 
rc = write_sysfs_attribute(match_busid_attr_path, command,
-  sizeof(command));
+  cmd_size);
if (rc < 0) {
dbg("failed to write match_busid: %s", strerror(errno));
return -1;




[PATCH 4.14 116/146] skbuff: in skb_copy_ubufs unclone before releasing zerocopy

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Willem de Bruijn 


skb_copy_ubufs must unclone before it is safe to modify its
skb_shared_info with skb_zcopy_clear.

Commit b90ddd568792 ("skbuff: skb_copy_ubufs must release uarg even
without user frags") ensures that all skbs release their zerocopy
state, even those without frags.

But I forgot an edge case where such an skb arrives that is cloned.

The stack does not build such packets. Vhost/tun skbs have their
frags orphaned before cloning. TCP skbs only attach zerocopy state
when a frag is added.

But if TCP packets can be trimmed or linearized, this might occur.
Tracing the code I found no instance so far (e.g., skb_linearize
ends up calling skb_zcopy_clear if !skb->data_len).

Still, it is non-obvious that no path exists. And it is fragile to
rely on this.

Fixes: b90ddd568792 ("skbuff: skb_copy_ubufs must release uarg even without user frags")
Signed-off-by: Willem de Bruijn 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/core/skbuff.c |6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -1181,12 +1181,12 @@ int skb_copy_ubufs(struct sk_buff *skb,
int i, new_frags;
u32 d_off;
 
-   if (!num_frags)
-   goto release;
-
if (skb_shared(skb) || skb_unclone(skb, gfp_mask))
return -EINVAL;
 
+   if (!num_frags)
+   goto release;
+
new_frags = (__skb_pagelen(skb) + PAGE_SIZE - 1) >> PAGE_SHIFT;
for (i = 0; i < new_frags; i++) {
page = alloc_page(gfp_mask);




[PATCH 4.14 122/146] USB: chipidea: msm: fix ulpi-node lookup

2018-01-01 Thread Greg Kroah-Hartman
4.14-stable review patch.  If anyone has any objections, please let me know.

--

From: Johan Hovold 

commit 964728f9f407eca0b417fdf8e784b7a76979490c upstream.

Fix child-node lookup during probe, which ended up searching the whole
device tree depth-first starting at the parent rather than just matching
on its children.
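The distinction the patch fixes can be shown with a toy tree: of_find_node_by_name() walks the whole tree depth-first from a starting node, while of_get_child_by_name() matches only the direct children of the given parent. A minimal analogue of the latter (the struct and helper are illustrative, not the kernel's of_* API):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy device-tree node: first-child / next-sibling representation. */
struct dt_node {
	const char *name;
	struct dt_node *child;		/* first child */
	struct dt_node *sibling;	/* next sibling */
};

/* Analogue of of_get_child_by_name(): direct children only, so a
 * matching node deeper in the tree is NOT returned. */
static struct dt_node *get_child_by_name(const struct dt_node *parent,
					 const char *name)
{
	struct dt_node *n;

	for (n = parent->child; n; n = n->sibling)
		if (strcmp(n->name, name) == 0)
			return n;
	return NULL;
}
```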

Note that the original premature free of the parent node has already
been fixed separately, but that fix was apparently never backported to
stable.

Fixes: 47654a162081 ("usb: chipidea: msm: Restore wrapper settings after reset")
Fixes: b74c43156c0c ("usb: chipidea: msm: ci_hdrc_msm_probe() missing of_node_get()")
Cc: Stephen Boyd 
Cc: Frank Rowand 
Signed-off-by: Johan Hovold 
Signed-off-by: Peter Chen 
Signed-off-by: Greg Kroah-Hartman 

---
 drivers/usb/chipidea/ci_hdrc_msm.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/usb/chipidea/ci_hdrc_msm.c
+++ b/drivers/usb/chipidea/ci_hdrc_msm.c
@@ -251,7 +251,7 @@ static int ci_hdrc_msm_probe(struct plat
if (ret)
goto err_mux;
 
-   ulpi_node = of_find_node_by_name(of_node_get(pdev->dev.of_node), "ulpi");
+   ulpi_node = of_get_child_by_name(pdev->dev.of_node, "ulpi");
if (ulpi_node) {
phy_node = of_get_next_available_child(ulpi_node, NULL);
ci->hsic = of_device_is_compatible(phy_node, "qcom,usb-hsic-phy");



