[tip:x86/pti] x86/speculation: Propagate information about RSB filling mitigation to sysfs

2018-09-26 Thread tip-bot for Jiri Kosina
Commit-ID:  bb4b3b7762735cdaba5a40fd94c9303d9ffa147a
Gitweb: https://git.kernel.org/tip/bb4b3b7762735cdaba5a40fd94c9303d9ffa147a
Author: Jiri Kosina 
AuthorDate: Tue, 25 Sep 2018 14:39:28 +0200
Committer:  Thomas Gleixner 
CommitDate: Wed, 26 Sep 2018 14:26:52 +0200

x86/speculation: Propagate information about RSB filling mitigation to sysfs

If spectrev2 mitigation has been enabled, RSB is filled on context switch
in order to protect from various classes of spectrev2 attacks.

If this mitigation is enabled, say so in sysfs for spectrev2.

Signed-off-by: Jiri Kosina 
Signed-off-by: Thomas Gleixner 
Cc: Peter Zijlstra 
Cc: Josh Poimboeuf 
Cc: Andrea Arcangeli 
Cc:  "WoodhouseDavid" 
Cc: Andi Kleen 
Cc: Tim Chen 
Cc:  "SchauflerCasey" 
Link: 
https://lkml.kernel.org/r/nycvar.yfh.7.76.1809251438580.15...@cbobk.fhfr.pm

---
 arch/x86/kernel/cpu/bugs.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 53eb14a65610..fe32103fcdc7 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -874,10 +874,11 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
 		return sprintf(buf, "Mitigation: __user pointer sanitization\n");
 
 	case X86_BUG_SPECTRE_V2:
-		ret = sprintf(buf, "%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
+		ret = sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
 			       boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
 			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
 			       (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
+			       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
 			       spectre_v2_module_string());
 		return ret;
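
For readers who just want to check the result: a minimal, self-contained userspace sketch (not part of the patch) that looks for the new token in the standard vulnerabilities interface; the ", RSB filling" spelling assumes the hunk above is applied.

/* Illustrative only: report whether spectre_v2 sysfs advertises RSB filling. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[256] = "";
    FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/spectre_v2", "r");

    if (!f)
        return 1;
    if (fgets(line, sizeof(line), f))
        /* e.g. "Mitigation: Full generic retpoline, IBPB, IBRS_FW, RSB filling" */
        printf("RSB filling reported: %s\n",
               strstr(line, "RSB filling") ? "yes" : "no");
    fclose(f);
    return 0;
}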
 


[tip:x86/pti] x86/speculation: Apply IBPB more strictly to avoid cross-process data leak

2018-09-26 Thread tip-bot for Jiri Kosina
Commit-ID:  dbfe2953f63c640463c630746cd5d9de8b2f63ae
Gitweb: https://git.kernel.org/tip/dbfe2953f63c640463c630746cd5d9de8b2f63ae
Author: Jiri Kosina 
AuthorDate: Tue, 25 Sep 2018 14:38:18 +0200
Committer:  Thomas Gleixner 
CommitDate: Wed, 26 Sep 2018 14:26:51 +0200

x86/speculation: Apply IBPB more strictly to avoid cross-process data leak

Currently, IBPB is only issued in cases when switching into a non-dumpable
process, the rationale being to protect such 'important and security
sensitive' processes (such as GPG) from data leaking into a different
userspace process via spectre v2.

This is however completely insufficient to provide proper userspace-to-userspace
spectrev2 protection, as any process can poison branch buffers before being
scheduled out, and the newly scheduled process immediately becomes a spectrev2
victim.

In order to minimize the performance impact (for usecases that do require
spectrev2 protection), issue the barrier only in cases when switching between
processes where the victim can't be ptraced by the potential attacker (as in
such cases, the attacker doesn't have to bother with branch buffers at all).

[ tglx: Split up PTRACE_MODE_NOACCESS_CHK into PTRACE_MODE_SCHED and
  PTRACE_MODE_IBPB to be able to do ptrace() context tracking reasonably
  fine-grained ]

Fixes: 18bf3c3ea8 ("x86/speculation: Use Indirect Branch Prediction Barrier in 
context switch")
Originally-by: Tim Chen 
Signed-off-by: Jiri Kosina 
Signed-off-by: Thomas Gleixner 
Cc: Peter Zijlstra 
Cc: Josh Poimboeuf 
Cc: Andrea Arcangeli 
Cc:  "WoodhouseDavid" 
Cc: Andi Kleen 
Cc:  "SchauflerCasey" 
Link: 
https://lkml.kernel.org/r/nycvar.yfh.7.76.1809251437340.15...@cbobk.fhfr.pm

---
 arch/x86/mm/tlb.c  | 31 ---
 include/linux/ptrace.h | 21 +++--
 kernel/ptrace.c| 10 ++
 3 files changed, 49 insertions(+), 13 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index e96b99eb800c..073b8df349a0 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -7,6 +7,7 @@
 #include 
 #include 
 #include 
+#include <linux/ptrace.h>
 
 #include 
 #include 
@@ -180,6 +181,19 @@ static void sync_current_stack_to_mm(struct mm_struct *mm)
}
 }
 
+static bool ibpb_needed(struct task_struct *tsk, u64 last_ctx_id)
+{
+   /*
+* Check if the current (previous) task has access to the memory
+* of the @tsk (next) task. If access is denied, make sure to
+* issue a IBPB to stop user->user Spectre-v2 attacks.
+*
+* Note: __ptrace_may_access() returns 0 or -ERRNO.
+*/
+   return (tsk && tsk->mm && tsk->mm->context.ctx_id != last_ctx_id &&
+   ptrace_may_access_sched(tsk, PTRACE_MODE_SPEC_IBPB));
+}
+
 void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
struct task_struct *tsk)
 {
@@ -262,18 +276,13 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct 
mm_struct *next,
 * one process from doing Spectre-v2 attacks on another.
 *
 * As an optimization, flush indirect branches only when
-* switching into processes that disable dumping. This
-* protects high value processes like gpg, without having
-* too high performance overhead. IBPB is *expensive*!
-*
-* This will not flush branches when switching into kernel
-* threads. It will also not flush if we switch to idle
-* thread and back to the same process. It will flush if we
-* switch to a different non-dumpable process.
+* switching into a processes that can't be ptrace by the
+* current one (as in such case, attacker has much more
+* convenient way how to tamper with the next process than
+* branch buffer poisoning).
 */
-   if (tsk && tsk->mm &&
-   tsk->mm->context.ctx_id != last_ctx_id &&
-   get_dumpable(tsk->mm) != SUID_DUMP_USER)
+   if (static_cpu_has(X86_FEATURE_USE_IBPB) &&
+   ibpb_needed(tsk, last_ctx_id))
indirect_branch_prediction_barrier();
 
if (IS_ENABLED(CONFIG_VMAP_STACK)) {
diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h
index 4f36431c380b..e5e5ef513df3 100644
--- a/include/linux/ptrace.h
+++ b/include/linux/ptrace.h
@@ -62,14 +62,17 @@ extern void exit_ptrace(struct task_struct *tracer, struct 
list_head *dead);
 #define PTRACE_MODE_READ   0x01
 #define PTRACE_MODE_ATTACH 0x02
 #define PTRACE_MODE_NOAUDIT0x04
-#define PTRACE_MODE_FSCREDS 0x08
-#define PTRACE_MODE_REALCREDS 0x10
+#define PTRACE_MODE_FSCREDS0x08
+#define PTRACE_MODE_REALCREDS  0x10
+#define PTRACE_MODE_SCHED 
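
To make the policy change described above concrete, a deliberately simplified, self-contained toy model follows (illustrative only, not from the patch: the real ibpb_needed()/ptrace_may_access_sched() path checks credentials, capabilities, user namespaces and LSM hooks, not a bare uid comparison).

/* Toy model contrasting the old and new IBPB gating described above. */
#include <stdbool.h>
#include <stdio.h>

struct toy_task { int uid; bool dumpable; };

/* Old rule: barrier only when the next task disabled dumping. */
static bool old_policy_ibpb(const struct toy_task *next)
{
    return !next->dumpable;
}

/* New rule: barrier whenever the previous task may NOT ptrace the next one
 * (here reduced to "same uid or root may ptrace"). */
static bool new_policy_ibpb(const struct toy_task *prev, const struct toy_task *next)
{
    bool may_ptrace = prev->uid == 0 || prev->uid == next->uid;

    return !may_ptrace;
}

int main(void)
{
    struct toy_task attacker = { .uid = 1001, .dumpable = true };
    struct toy_task victim   = { .uid = 1000, .dumpable = true }; /* ordinary process */

    printf("old policy issues IBPB: %d\n", old_policy_ibpb(&victim));            /* 0 */
    printf("new policy issues IBPB: %d\n", new_policy_ibpb(&attacker, &victim)); /* 1 */
    return 0;
}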

[PATCH v7 3/3] x86/speculation: Propagate information about RSB filling mitigation to sysfs

2018-09-25 Thread Jiri Kosina
From: Jiri Kosina 

If spectrev2 mitigation has been enabled, we're filling RSB on context 
switch in order to protect from various classes of spectrev2 attacks.

If this mitigation is enabled, say so in sysfs for spectrev2.

Signed-off-by: Jiri Kosina 
---
 arch/x86/kernel/cpu/bugs.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 53eb14a65610..fe32103fcdc7 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -874,10 +874,11 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
 		return sprintf(buf, "Mitigation: __user pointer sanitization\n");
 
 	case X86_BUG_SPECTRE_V2:
-		ret = sprintf(buf, "%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
+		ret = sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
 			       boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
 			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
 			       (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
+			       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
 			       spectre_v2_module_string());
 		return ret;
 
-- 
2.12.3


-- 
Jiri Kosina
SUSE Labs




[PATCH v7 2/3] x86/speculation: Enable cross-hyperthread spectre v2 STIBP mitigation

2018-09-25 Thread Jiri Kosina
From: Jiri Kosina 

STIBP is a feature provided by certain Intel ucodes / CPUs. This feature
(once enabled) prevents cross-hyperthread control of decisions made by
indirect branch predictors.

Enable this feature if

- the CPU is vulnerable to spectre v2
- the CPU supports SMT and has SMT siblings online
- spectre_v2 mitigation autoselection is enabled (default)

After some previous discussion, this patch leaves STIBP on all the time,
as wrmsr on crossing kernel boundary is a no-no. This could perhaps later
be a bit more optimized (like disabling it in NOHZ, experiment with
disabling it in idle, etc) if needed.

Note that the synchronization of the mask manipulation via newly added
spec_ctrl_mutex is currently not strictly needed, as the only updater is
already being serialized by cpu_add_remove_lock, but let's make this a
little bit more future-proof.

Cc: sta...@vger.kernel.org
Signed-off-by: Jiri Kosina 
---
 arch/x86/kernel/cpu/bugs.c | 57 +-
 kernel/cpu.c   | 11 -
 2 files changed, 61 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 40bdaea97fe7..53eb14a65610 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -35,12 +35,10 @@ static void __init spectre_v2_select_mitigation(void);
 static void __init ssb_select_mitigation(void);
 static void __init l1tf_select_mitigation(void);
 
-/*
- * Our boot-time value of the SPEC_CTRL MSR. We read it once so that any
- * writes to SPEC_CTRL contain whatever reserved bits have been set.
- */
-u64 __ro_after_init x86_spec_ctrl_base;
+/* The base value of the SPEC_CTRL MSR that always has to be preserved. */
+u64 x86_spec_ctrl_base;
 EXPORT_SYMBOL_GPL(x86_spec_ctrl_base);
+static DEFINE_MUTEX(spec_ctrl_mutex);
 
 /*
  * The vendor and possibly platform specific bits which can be modified in
@@ -325,6 +323,46 @@ static enum spectre_v2_mitigation_cmd __init 
spectre_v2_parse_cmdline(void)
return cmd;
 }
 
+static bool stibp_needed(void)
+{
+   if (spectre_v2_enabled == SPECTRE_V2_NONE)
+   return false;
+
+   if (!boot_cpu_has(X86_FEATURE_STIBP))
+   return false;
+
+   return true;
+}
+
+static void update_stibp_msr(void *info)
+{
+   wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+}
+
+void arch_smt_update(void)
+{
+   u64 mask;
+
+   if (!stibp_needed())
+   return;
+
+   mutex_lock(&spec_ctrl_mutex);
+   mask = x86_spec_ctrl_base;
+   if (cpu_smt_control == CPU_SMT_ENABLED)
+   mask |= SPEC_CTRL_STIBP;
+   else
+   mask &= ~SPEC_CTRL_STIBP;
+
+   if (mask != x86_spec_ctrl_base) {
+   pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
+   cpu_smt_control == CPU_SMT_ENABLED ?
+   "Enabling" : "Disabling");
+   x86_spec_ctrl_base = mask;
+   on_each_cpu(update_stibp_msr, NULL, 1);
+   }
+   mutex_unlock(&spec_ctrl_mutex);
+}
+
 static void __init spectre_v2_select_mitigation(void)
 {
enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
@@ -424,6 +462,9 @@ static void __init spectre_v2_select_mitigation(void)
setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
pr_info("Enabling Restricted Speculation for firmware calls\n");
}
+
+   /* Enable STIBP if appropriate */
+   arch_smt_update();
 }
 
 #undef pr_fmt
@@ -814,6 +855,8 @@ static ssize_t l1tf_show_state(char *buf)
 static ssize_t cpu_show_common(struct device *dev, struct device_attribute 
*attr,
   char *buf, unsigned int bug)
 {
+   int ret;
+
if (!boot_cpu_has_bug(bug))
return sprintf(buf, "Not affected\n");
 
@@ -831,10 +874,12 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
 		return sprintf(buf, "Mitigation: __user pointer sanitization\n");
 
 	case X86_BUG_SPECTRE_V2:
-		return sprintf(buf, "%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
+		ret = sprintf(buf, "%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
 			       boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
 			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
+			       (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
 			       spectre_v2_module_string());
+		return ret;
 
case X86_BUG_SPEC_STORE_BYPASS:
return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);
diff --git a/kernel/cpu.c b/kernel/cpu.c
index aa7fe85ad62e..2fb49916ea56 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ 
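
To see why arch_smt_update() above only rewrites the MSR when the computed mask actually changes, here is a small self-contained userspace model (illustrative only: STIBP is modelled as bit 1 of SPEC_CTRL per the MSR layout, and the on_each_cpu() MSR write is replaced by a printf).

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SPEC_CTRL_STIBP (1ULL << 1)

static uint64_t spec_ctrl_base;

static void smt_update(bool smt_enabled)
{
    uint64_t mask = spec_ctrl_base;

    if (smt_enabled)
        mask |= SPEC_CTRL_STIBP;
    else
        mask &= ~SPEC_CTRL_STIBP;

    if (mask != spec_ctrl_base) {
        printf("%s STIBP\n", smt_enabled ? "Enabling" : "Disabling");
        spec_ctrl_base = mask;
        /* the kernel would do on_each_cpu(update_stibp_msr, NULL, 1) here */
    }
}

int main(void)
{
    smt_update(true);   /* prints "Enabling STIBP" */
    smt_update(true);   /* mask unchanged: no MSR write */
    smt_update(false);  /* prints "Disabling STIBP" */
    return 0;
}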

[PATCH v7 1/3] x86/speculation: apply IBPB more strictly to avoid cross-process data leak

2018-09-25 Thread Jiri Kosina
From: Jiri Kosina 

Currently, we are issuing IBPB only in cases when switching into a non-dumpable
process, the rationale being to protect such 'important and security sensitive'
processes (such as GPG) from data leaking into a different userspace process via
spectre v2.

This is however completely insufficient to provide proper userspace-to-userspace
spectrev2 protection, as any process can poison branch buffers before being
scheduled out, and the newly scheduled process immediately becomes a spectrev2
victim.

In order to minimize the performance impact (for usecases that do require
spectrev2 protection), issue the barrier only in cases when switching between
processes where the victim can't be ptraced by the potential attacker (as in
such cases, the attacker doesn't have to bother with branch buffers at all).

[ t...@linutronix.de: split up PTRACE_MODE_NOACCESS_CHK into PTRACE_MODE_SCHED 
and
  PTRACE_MODE_IBPB to be able to do ptrace() context tracking reasonably
  fine-grained ]

Fixes: 18bf3c3ea8 ("x86/speculation: Use Indirect Branch Prediction Barrier in 
context switch")
Originally-by: Tim Chen 
Signed-off-by: Jiri Kosina 
---
 arch/x86/mm/tlb.c  | 31 ---
 include/linux/ptrace.h | 21 +++--
 kernel/ptrace.c| 10 ++
 3 files changed, 49 insertions(+), 13 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index e96b99eb800c..073b8df349a0 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -7,6 +7,7 @@
 #include 
 #include 
 #include 
+#include <linux/ptrace.h>
 
 #include 
 #include 
@@ -180,6 +181,19 @@ static void sync_current_stack_to_mm(struct mm_struct *mm)
}
 }
 
+static bool ibpb_needed(struct task_struct *tsk, u64 last_ctx_id)
+{
+   /*
+* Check if the current (previous) task has access to the memory
+* of the @tsk (next) task. If access is denied, make sure to
+* issue a IBPB to stop user->user Spectre-v2 attacks.
+*
+* Note: __ptrace_may_access() returns 0 or -ERRNO.
+*/
+   return (tsk && tsk->mm && tsk->mm->context.ctx_id != last_ctx_id &&
+   ptrace_may_access_sched(tsk, PTRACE_MODE_SPEC_IBPB));
+}
+
 void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
struct task_struct *tsk)
 {
@@ -262,18 +276,13 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct 
mm_struct *next,
 * one process from doing Spectre-v2 attacks on another.
 *
 * As an optimization, flush indirect branches only when
-* switching into processes that disable dumping. This
-* protects high value processes like gpg, without having
-* too high performance overhead. IBPB is *expensive*!
-*
-* This will not flush branches when switching into kernel
-* threads. It will also not flush if we switch to idle
-* thread and back to the same process. It will flush if we
-* switch to a different non-dumpable process.
+* switching into a processes that can't be ptrace by the
+* current one (as in such case, attacker has much more
+* convenient way how to tamper with the next process than
+* branch buffer poisoning).
 */
-   if (tsk && tsk->mm &&
-   tsk->mm->context.ctx_id != last_ctx_id &&
-   get_dumpable(tsk->mm) != SUID_DUMP_USER)
+   if (static_cpu_has(X86_FEATURE_USE_IBPB) &&
+   ibpb_needed(tsk, last_ctx_id))
indirect_branch_prediction_barrier();
 
if (IS_ENABLED(CONFIG_VMAP_STACK)) {
diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h
index 4f36431c380b..e5e5ef513df3 100644
--- a/include/linux/ptrace.h
+++ b/include/linux/ptrace.h
@@ -62,14 +62,17 @@ extern void exit_ptrace(struct task_struct *tracer, struct 
list_head *dead);
 #define PTRACE_MODE_READ   0x01
 #define PTRACE_MODE_ATTACH 0x02
 #define PTRACE_MODE_NOAUDIT0x04
-#define PTRACE_MODE_FSCREDS 0x08
-#define PTRACE_MODE_REALCREDS 0x10
+#define PTRACE_MODE_FSCREDS0x08
+#define PTRACE_MODE_REALCREDS  0x10
+#define PTRACE_MODE_SCHED  0x20
+#define PTRACE_MODE_IBPB   0x40
 
 /* shorthands for READ/ATTACH and FSCREDS/REALCREDS combinations */
 #define PTRACE_MODE_READ_FSCREDS (PTRACE_MODE_READ | PTRACE_MODE_FSCREDS)
 #define PTRACE_MODE_READ_REALCREDS (PTRACE_MODE_READ | PTRACE_MODE_REALCREDS)
 #define PTRACE_MODE_ATTACH_FSCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_FSCREDS)
 #define PTRACE_MODE_ATTACH_REALCREDS (PTRACE_MODE_ATTACH | 
PTRACE_MODE_REALCREDS)
+#define PTRACE_MODE_SPEC_IBPB (PTRACE_MODE_ATTACH_REALCREDS | PTRACE_MODE_IBPB)
 
 /**
  * ptrace_may_access - check whether the caller is


[PATCH v7 0/3] Harden spectrev2 userspace-userspace protection

2018-09-25 Thread Jiri Kosina
Currently, the Linux kernel is basically not preventing userspace-to-userspace 
spectrev2 attacks, because:

- IBPB is basically unused (issued only for tasks that marked themselves 
  explicitly non-dumpable, which is an absolutely negligible minority of all 
  software out there), therefore cross-process branch buffer poisoning 
  using spectrev2 is possible

- STIBP is completely unused, therefore cross-process branch buffer 
  poisoning using spectrev2 between processes running on two HT sibling 
  threads is possible

This patchset changes IBPB semantics, so that it's now applied whenever 
context-switching between processes that can't use ptrace() to achieve 
the same. This admittedly comes with extra overhead on a context switch; 
systems that don't care about it can disable the mitigation using the 
nospectre_v2 boot option.
The IBPB implementation is heavily based on original patches by Tim Chen.

In addition to that, we unconditionally turn STIBP on so that HT siblings 
always have separate branch buffers.

We've been carrying an IBPB implementation with the same semantics in our 
(SUSE) trees since the January disclosure; STIBP was more or less ignored up 
to today.

There is more work going on by Tim Chen that will go on top and enable 
more fine-grained, prctl()-based per-process tuning of this mitigation.

v1->v2:
include IBPB changes
v2->v3: 
fix IBPB 'who can trace who' semantics
wire up STIBP flipping to SMT hotplug
v3->v4:
dropped ___ptrace_may_access(), as it's not needed
fixed deadlock with LSM/audit/selinux (Andrea Arcangeli)
statically patch out the ptrace check if !IBPB

v4->v5:
fix MSR writing logic (Thomas Gleixner, Josh Poimboeuf)

v5->v6:
propagate X86_FEATURE_RSB_CTXSW setting to sysfs
propagate STIBP setting to sysfs (Thomas Gleixner)
simplify arch_smt_update() (Thomas Gleixner)

v6->v7:
PTRACE_MODE_NOACCESS_CHK -> PTRACE_MODE_SCHED and PTRACE_MODE_IBPB
(Thomas Gleixner)
drop unnecessary x86_spec_ctrl_base mutex in cpu_show_common()

Jiri Kosina (3):
  x86/speculation: apply IBPB more strictly to avoid cross-process data leak
  x86/speculation: Enable cross-hyperthread spectre v2 STIBP mitigation
  x86/speculation: Propagate information about RSB filling mitigation to 
sysfs

 arch/x86/kernel/cpu/bugs.c | 60 
++--
 arch/x86/mm/tlb.c  | 31 ---
 include/linux/ptrace.h |  4 
 kernel/cpu.c   | 11 ++-
 kernel/ptrace.c| 12 
 5 files changed, 96 insertions(+), 22 deletions(-)

-- 
Jiri Kosina
SUSE Labs




Re: INFO: task hung in fsnotify_connector_destroy_workfn (2)

2018-09-24 Thread Jiri Kosina
On Sun, 16 Sep 2018, Amir Goldstein wrote:

> > > syzbot found the following crash on:
> > >
> > > HEAD commit:11da3a7f84f1 Linux 4.19-rc3
> > > git tree:   upstream
> > > console output: https://syzkaller.appspot.com/x/log.txt?x=141ffbca40
> > > kernel config:  https://syzkaller.appspot.com/x/.config?x=9917ff4b798e1a1e
> > > dashboard link: 
> > > https://syzkaller.appspot.com/bug?extid=6fb572170402d311dd39
> > > compiler:   gcc (GCC) 8.0.1 20180413 (experimental)
> > > syz repro:  https://syzkaller.appspot.com/x/repro.syz?x=136a60ae40
> > >
> > > IMPORTANT: if you fix the bug, please add the following tag to the commit:
> > > Reported-by: syzbot+6fb572170402d311d...@syzkaller.appspotmail.com
> > >
> >
> > Since it kept printk()ing for at least 93 seconds, it might have kept for 
> > 150 seconds.
> >
> > [  174.614333] hid-generic ::.0003: unknown main item tag 0x0
> > [  268.196102] INFO: task kworker/u4:1:23 blocked for more than 140 seconds.
> >
> > Since the reproducer is simple, this might be lockup due to continuous 
> > printk().
> > If syzbot can reliably reproduce this problem using the reproducer, try
> > ratelimiting printk().
> 
> Right.. and I was able to reproduce after setting CONFIG_HID_GENERIC=y
> but unless I am missing something, the core problem doesn't seem related to
> fsnotify and $SUBJECT, so CCing HID maintainer.

Alright, so you're basically triggering a never-ending flow of kmsgs being 
printed out from the HID parser by doing crazy things with the parser, and 
that causes issues for the kworker trying to flush them out.

I guess the patch below fixes it, however the kworker should not really be 
blocked by this I think ... adding a few more printk folks to double-check 
why the kworker would get stuck due to massive printk() flood.
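
For context on why the ratelimited variants below help, a rough, self-contained userspace model of the kernel's __ratelimit() behaviour (illustrative only; the 5-second/10-message values are assumed to match the kernel defaults):

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

struct ratelimit {
    time_t begin;
    int printed;
    int missed;
};

/* Allow at most `burst` messages per `interval` seconds; drop and count the rest. */
static bool ratelimit_ok(struct ratelimit *rs, int interval, int burst)
{
    time_t now = time(NULL);

    if (rs->begin == 0 || now - rs->begin >= interval) {
        if (rs->missed)
            fprintf(stderr, "%d messages suppressed\n", rs->missed);
        rs->begin = now;
        rs->printed = 0;
        rs->missed = 0;
    }
    if (rs->printed < burst) {
        rs->printed++;
        return true;
    }
    rs->missed++;
    return false;
}

int main(void)
{
    struct ratelimit rs = { 0, 0, 0 };

    /* Simulate a flood like the HID parser spew: only 10 lines make it out. */
    for (int i = 0; i < 100000; i++)
        if (ratelimit_ok(&rs, 5, 10))
            printf("unknown main item tag 0x0 (msg %d)\n", i);
    return 0;
}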

diff --git a/include/linux/hid.h b/include/linux/hid.h
index d44a78362942..b81332fe85a4 100644
--- a/include/linux/hid.h
+++ b/include/linux/hid.h
@@ -1157,22 +1157,22 @@ do {
\
 } while (0)
 
 #define hid_printk(level, hid, fmt, arg...)\
-   dev_printk(level, &(hid)->dev, fmt, ##arg)
+   dev_printk_ratelimited(level, &(hid)->dev, fmt, ##arg)
 #define hid_emerg(hid, fmt, arg...)\
-   dev_emerg(&(hid)->dev, fmt, ##arg)
+   dev_emerg_ratelimited(&(hid)->dev, fmt, ##arg)
 #define hid_crit(hid, fmt, arg...) \
-   dev_crit(&(hid)->dev, fmt, ##arg)
+   dev_crit_ratelimited(&(hid)->dev, fmt, ##arg)
 #define hid_alert(hid, fmt, arg...)\
-   dev_alert(&(hid)->dev, fmt, ##arg)
+   dev_alert_ratelimited(&(hid)->dev, fmt, ##arg)
 #define hid_err(hid, fmt, arg...)  \
-   dev_err(&(hid)->dev, fmt, ##arg)
+   dev_err_ratelimited(&(hid)->dev, fmt, ##arg)
 #define hid_notice(hid, fmt, arg...)   \
-   dev_notice(&(hid)->dev, fmt, ##arg)
+   dev_notice_ratelimited(&(hid)->dev, fmt, ##arg)
 #define hid_warn(hid, fmt, arg...) \
-   dev_warn(&(hid)->dev, fmt, ##arg)
+   dev_warn_ratelimited(&(hid)->dev, fmt, ##arg)
 #define hid_info(hid, fmt, arg...) \
-   dev_info(&(hid)->dev, fmt, ##arg)
+   dev_info_ratelimited(&(hid)->dev, fmt, ##arg)
 #define hid_dbg(hid, fmt, arg...)  \
-   dev_dbg(&(hid)->dev, fmt, ##arg)
+   dev_dbg_ratelimited(&(hid)->dev, fmt, ##arg)
 
 #endif

-- 
Jiri Kosina
SUSE Labs




Re: [PATCH] HID: intel-ish-hid: Enable Ice Lake mobile

2018-09-24 Thread Jiri Kosina
On Tue, 11 Sep 2018, Srinivas Pandruvada wrote:

> Added PCI ID for Ice Lake mobile platform.
> 
> Signed-off-by: Srinivas Pandruvada 
> ---
>  drivers/hid/intel-ish-hid/ipc/hw-ish.h  | 1 +
>  drivers/hid/intel-ish-hid/ipc/pci-ish.c | 1 +
>  2 files changed, 2 insertions(+)
> 
> diff --git a/drivers/hid/intel-ish-hid/ipc/hw-ish.h 
> b/drivers/hid/intel-ish-hid/ipc/hw-ish.h
> index 97869b7410eb..1b4e93e19e6b 100644
> --- a/drivers/hid/intel-ish-hid/ipc/hw-ish.h
> +++ b/drivers/hid/intel-ish-hid/ipc/hw-ish.h
> @@ -29,6 +29,7 @@
>  #define CNL_Ax_DEVICE_ID 0x9DFC
>  #define GLK_Ax_DEVICE_ID 0x31A2
>  #define CNL_H_DEVICE_ID  0xA37C
> +#define ICL_MOBILE_DEVICE_ID 0x34FC
>  
>  #define  REVISION_ID_CHT_A0  0x6
>  #define  REVISION_ID_CHT_Ax_SI   0x0
> diff --git a/drivers/hid/intel-ish-hid/ipc/pci-ish.c 
> b/drivers/hid/intel-ish-hid/ipc/pci-ish.c
> index 09d085946db3..4d78f85021a5 100644
> --- a/drivers/hid/intel-ish-hid/ipc/pci-ish.c
> +++ b/drivers/hid/intel-ish-hid/ipc/pci-ish.c
> @@ -38,6 +38,7 @@ static const struct pci_device_id ish_pci_tbl[] = {
>   {PCI_DEVICE(PCI_VENDOR_ID_INTEL, CNL_Ax_DEVICE_ID)},
>   {PCI_DEVICE(PCI_VENDOR_ID_INTEL, GLK_Ax_DEVICE_ID)},
>   {PCI_DEVICE(PCI_VENDOR_ID_INTEL, CNL_H_DEVICE_ID)},
> + {PCI_DEVICE(PCI_VENDOR_ID_INTEL, ICL_MOBILE_DEVICE_ID)},
>   {0, }

Applied to for-4.19/fixes.

-- 
Jiri Kosina
SUSE Labs




Re: [PATCH v2] HID: logitech: fix a used uninitialized GCC warning

2018-09-24 Thread Jiri Kosina
On Thu, 13 Sep 2018, zhong jiang wrote:

> Fix the following compile warning:
> 
> drivers/hid/hid-logitech-hidpp.c: In function 'hi_res_scroll_enable':
> drivers/hid/hid-logitech-hidpp.c:2714:54: warning: 'multiplier' may be used 
> uninitialized in this function [-Wmaybe-uninitialized]
>   hidpp->vertical_wheel_counter.resolution_multiplier = multiplier;
> 
> Signed-off-by: zhong jiang 
> ---
> v1->v2:
>  According to Benjamin's suggestion, To initialize the value
> and remove the duplicated assignement. 

Applied to for-4.20/logitech-highres. Thanks,

-- 
Jiri Kosina
SUSE Labs





Re: [PATCH V2] hid: hid-core: Fix a sleep-in-atomic-context bug in __hid_request()

2018-09-24 Thread Jiri Kosina
On Thu, 13 Sep 2018, Jia-Ju Bai wrote:

> hid_alloc_report_buf() has to be called with GFP_ATOMIC in 
> __hid_request(), because there are the following callchains 
> leading to __hid_request() being an atomic context:
> 
> picolcd_send_and_wait (acquire a spinlock)
>   hid_hw_request
> __hid_request
>   hid_alloc_report_buf(GFP_KERNEL)
> 
> picolcd_reset (acquire a spinlock)
>   hid_hw_request
> __hid_request
>   hid_alloc_report_buf(GFP_KERNEL)
> 
> lg4ff_play (acquire a spinlock)
>   hid_hw_request
> __hid_request
>   hid_alloc_report_buf(GFP_KERNEL)
> 
> lg4ff_set_autocenter_ffex (acquire a spinlock)
>   hid_hw_request
> __hid_request
>   hid_alloc_report_buf(GFP_KERNEL)

Hm, so it's always drivers calling out into core in atomic context. So 
either we take this, and put our bets on being able to allocate the buffer 
without sleeping, or actually fix the few drivers (it's just lg4ff and 
picolcd at the end of the day) not to do that, and explicitly annotate 
__hid_request() with might_sleep().

Hmm?

Thanks,

-- 
Jiri Kosina
SUSE Labs




Re: [PATCH] hyper-v: Fix wakeup from suspend-to-idle

2018-09-24 Thread Jiri Kosina
On Wed, 12 Sep 2018, Vitaly Kuznetsov wrote:

> It makes little sense but still possible to put Hyper-V guests into
> suspend-to-idle state. To wake them up two wakeup sources were registered
> in the past: hyperv-keyboard and hid-hyperv. However, since
> commit eed4d47efe95 ("ACPI / sleep: Ignore spurious SCI wakeups from
> suspend-to-idle") pm_wakeup_event() from these devices is ignored. Switch
> to pm_wakeup_hard_event() API as these devices are actually the only
> possible way to wakeup Hyper-V guests.
> 
> Fixes: eed4d47efe95 (ACPI / sleep: Ignore spurious SCI wakeups from 
> suspend-to-idle)
> Signed-off-by: Vitaly Kuznetsov 
> ---
>  drivers/hid/hid-hyperv.c  | 2 +-

Acked-by: Jiri Kosina 

for the above. I guess this'd better go through ACPI tree?

Thanks,

-- 
Jiri Kosina
SUSE Labs





Re: [PATCH 0/9] HID: intel ISH: Cleanup patches

2018-09-24 Thread Jiri Kosina
On Tue, 11 Sep 2018, Srinivas Pandruvada wrote:

> This series is a cleanup series only and help to abstract client API.
> 
> There are no functional changes.
> 
> Even Xu (6):
>   hid: intel-ish-hid: ishtp: add helper function for driver data get/set
>   hid: intel-ish-hid: use helper function for private driver data
> set/get
>   hid: intel-ish-hid: ishtp: add helper functions for client buffer
> operation
>   hid: intel-ish-hid: use helper function to access client buffer
>   hid: intel-ish-hid: ishtp: add helper function for client search
>   hid: intel-ish-hid: use helper function to search client id
> 
> Hong Liu (2):
>   HID: intel-ish-hid: use resource-managed api
>   HID: intel-ish-hid: using list_head for ipc write queue
> 
> Srinivas Pandruvada (1):
>   HID: intel_ish-hid: Enhance API to get ring buffer sizes

Applied to for-4.20/ish.

-- 
Jiri Kosina
SUSE Labs





Re: [PATCH v6 0/3] Harden spectrev2 userspace-userspace protection

2018-09-24 Thread Jiri Kosina
On Sat, 22 Sep 2018, Thomas Gleixner wrote:

> Lunch and coffee indeed made brain work better. The simple solution was way
> too obvious.

Ah, cool, I like it a lot.

Do you want me to fold this into v7, or are you on it already?

Thanks,

-- 
Jiri Kosina
SUSE Labs





Re: [PATCH v6 0/3] Harden spectrev2 userspace-userspace protection

2018-09-22 Thread Jiri Kosina
On Wed, 19 Sep 2018, Peter Zijlstra wrote:

> > diff --git a/kernel/ptrace.c b/kernel/ptrace.c
> > index 5c5e7cb597cd..202a4d9c2af7 100644
> > --- a/kernel/ptrace.c
> > +++ b/kernel/ptrace.c
> > @@ -330,9 +330,7 @@ int __ptrace_may_access(struct task_struct *task, 
> > unsigned int mode)
> >!ptrace_has_cap(mm->user_ns, mode
> > return -EPERM;
> > 
> > -   if (!(mode & PTRACE_MODE_NOACCESS_CHK))
> > -   return security_ptrace_access_check(task, mode);
> > -   return 0;
> > +   return security_ptrace_access_check(task, mode);
> >  }
> > 
> >  bool ptrace_may_access(struct task_struct *task, unsigned int mode)
> > diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
> > index 161a4f29f860..30d21142e9fe 100644
> > --- a/security/selinux/hooks.c
> > +++ b/security/selinux/hooks.c
> > @@ -2215,7 +2215,12 @@ static int selinux_ptrace_access_check(struct 
> > task_struct *child,
> >  {
> > u32 sid = current_sid();
> > u32 csid = task_sid(child);
> > +   struct av_decision avd;
> > 
> > +   if (mode == PTRACE_MODE_IBPB)
> > +   return avc_has_perm_noaudit(&selinux_state, sid, csid,
> > +   SECCLASS_PROCESS, PROCESS__PTRACE,
> > +   0, &avd);
> > if (mode & PTRACE_MODE_READ)
> > return avc_has_perm(&selinux_state,
> > sid, csid, SECCLASS_FILE, FILE__READ, NULL);
> > 
> 
> As far as I can tell, this still has:
> 
>   avc_has_perm_noaudit()
> security_compute_av()
>   read_lock(&state->ss->policy_rwlock);
> avc_insert()
>   spin_lock_irqsave();
> avc_denied()
>   avc_update_node()
> spin_lock_irqsave();
> 
> under the scheduler's raw_spinlock_t, which are invalid lock nestings.

Agreed. Therefore, if the current form (v6) of the patches is merged, the 
check before security_ptrace_access_check() should stay.

Once all the LSM callbacks have been audited, this could then go in as a 
second phase.

Is there anything else blocking v6 being merged? (and then Tim's set on 
top I guess, once the details are sorted out there).

Thanks,

-- 
Jiri Kosina
SUSE Labs





RE: [PATCH v6 1/3] x86/speculation: apply IBPB more strictly to avoid cross-process data leak

2018-09-14 Thread Jiri Kosina
On Thu, 13 Sep 2018, Schaufler, Casey wrote:

> > -   return security_ptrace_access_check(task, mode);
> > +   if (!(mode & PTRACE_MODE_NOACCESS_CHK))
> > +   return security_ptrace_access_check(task, mode);
> > +   return 0;
> 
> Because PTRACE_MODE_IBPB includes PTRACE_MODE_NOAUDIT you
> shouldn't need this change. 

That is true, but that's not my concern here. 

security_ptrace_access_check() -> call_int_hook() -> P->hook.FUNC().
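
For context, that call_int_hook() step walks every registered LSM's hook; 
roughly, as a simplified paraphrase of the macro in security/security.c (not 
verbatim kernel code):

	struct security_hook_list *P;
	int rc = 0;

	/* every LSM that registered a ptrace_access_check hook gets called */
	hlist_for_each_entry(P, &security_hook_heads.ptrace_access_check, list) {
		rc = P->hook.ptrace_access_check(child, mode);
		if (rc != 0)	/* first denial wins */
			break;
	}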

If it's somehow guaranteed that all functions called this way are safe to 
be called from scheduler context (wrt. locks), then it's all fine and I'll 
happily drop that check.

Is it guaranteed?

Thanks,

-- 
Jiri Kosina
SUSE Labs



Re: [PATCH v5 2/2] x86/speculation: Enable cross-hyperthread spectre v2 STIBP mitigation

2018-09-12 Thread Jiri Kosina
On Wed, 12 Sep 2018, Tim Chen wrote:

> I'm working on a patch for choosing the Spectre v2 app to app
> mitigation option.
> 
> Something like the following:
> 
> enum spectre_v2_app2app_mitigation {
> SPECTRE_V2_APP2APP_NONE,
> SPECTRE_V2_APP2APP_LITE,
> SPECTRE_V2_APP2APP_IBPB,
> SPECTRE_V2_APP2APP_STIBP,
> SPECTRE_V2_APP2APP_STRICT,
> };
> 
> static const char *spectre_v2_app2app_strings[] = {
> [SPECTRE_V2_APP2APP_NONE]   = "App-App Vulnerable",
> [SPECTRE_V2_APP2APP_LITE]   = "App-App Mitigation: 
> Protect only non-dumpable process",
> [SPECTRE_V2_APP2APP_IBPB]   = "App-App Mitigation: 
> Protect app against attack from same cpu",
> [SPECTRE_V2_APP2APP_STIBP]  = "App-App Mitigation: 
> Protect app against attack from sibling cpu",
> [SPECTRE_V2_APP2APP_STRICT] = "App-App Mitigation: Full 
> app to app attack protection",
> };
> 
> So the APP2APP_LITE protection's intention is to turn on STIBP and IBPB for 
> non-dumpable
> process.  But in my first version I may limit it to IBPB as choosing
> STIBP based on process characteristics will require some frobbing of
> the flags as what we've done in SSBD.  That will require more careful
> work and tests.
> 
> The STRICT option will turn STIBP on always and IBPB always on
> non-ptraceable context switches.
> 
> Is this something reasonable?

It's probably 100% correct, but it's also 100% super-complex at the same 
time if you ask me.

Try to imagine you're a very advanced senior sysadmin, who has of course 
heard that spectre and meltdown exist, but figured out that updating to 
the latest kernel/distro vendor update fixes all the security issues (and 
it actually did).

Now, all of a sudden, this new option pops up, and the poor sysadmin has 
to make a decision again.

"Do you care only about security across non-dumpable process 
 boundaries?"

"Scheduled to the same CPU at the time of attack? Can you guarantee 
 that this is (not) happening?"

"If the processes can actually ptrace/debug each other, are you okay 
 with them attacking each other?"

"Shared HT siblings' return target buffer, do you want it or not?"

These are the questions that even an excellent sysadmin might not have 
qualified answers to so far. Now, all of a sudden, he/she has to make 
these decisions?

I don't think that's how it should work. It should all be easily digestible 
by "linux end-users" (where users are also super-advanced sysadmins).

We currently have "I do care about spectrev2 / I don't care about 
spectrev2" boot-time switch, and I don't see us going any deeper / more 
fine-grained without sacrificing clarity and sanity.

Or do you see a way how to do that nicely?

Thanks,

-- 
Jiri Kosina
SUSE Labs



Re: [PATCH v6 2/3] x86/speculation: Enable cross-hyperthread spectre v2 STIBP mitigation

2018-09-12 Thread Jiri Kosina
On Wed, 12 Sep 2018, Thomas Gleixner wrote:

> > case X86_BUG_SPECTRE_V2:
> > -   return sprintf(buf, "%s%s%s%s\n", 
> > spectre_v2_strings[spectre_v2_enabled],
> > +   mutex_lock(&spec_ctrl_mutex);
> > +   ret = sprintf(buf, "%s%s%s%s%s\n", 
> > spectre_v2_strings[spectre_v2_enabled],
> >boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : 
> > "",
> >boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", 
> > IBRS_FW" : "",
> > +  (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", 
> > STIBP" : "",
> >spectre_v2_module_string());
> > +   mutex_unlock(&spec_ctrl_mutex);
> 
> The mutex for this printing is overkill. It's a read after all and if there
> is a concurrent SMT control fiddling going on then you have a chance of
> getting the wrong information as well. 

Yeah; I was just happy to be able to stick a second use of it in there, 
with the first one being basically useless as well :)

> I'll zap it.

Absolutely feel free to. Thanks,

-- 
Jiri Kosina
SUSE Labs



[PATCH v6 3/3] x86/speculation: Propagate information about RSB filling mitigation to sysfs

2018-09-12 Thread Jiri Kosina
From: Jiri Kosina 

If spectrev2 mitigation has been enabled, we're filling the RSB on context switch
in order to protect from various classes of spectrev2 attacks.

If this mitigation is enabled, say so in sysfs for spectrev2.

Signed-off-by: Jiri Kosina 
---
 arch/x86/kernel/cpu/bugs.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 6bc76bdf5a0a..ee46dcbae5fa 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -875,10 +875,11 @@ static ssize_t cpu_show_common(struct device *dev, struct 
device_attribute *attr
 
case X86_BUG_SPECTRE_V2:
mutex_lock(&spec_ctrl_mutex);
-   ret = sprintf(buf, "%s%s%s%s%s\n", 
spectre_v2_strings[spectre_v2_enabled],
+   ret = sprintf(buf, "%s%s%s%s%s%s\n", 
spectre_v2_strings[spectre_v2_enabled],
   boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : 
"",
   boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", 
IBRS_FW" : "",
   (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", 
STIBP" : "",
+  boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB 
filling" : "",
   spectre_v2_module_string());
mutex_unlock(&spec_ctrl_mutex);
return ret;

-- 
Jiri Kosina
SUSE Labs
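
For illustration, with retpoline selected and all of the above mitigations 
active, the spectre_v2 sysfs file would then read along these lines (an 
assumed example; the leading string depends on the selected mitigation and 
on CPU features):

	$ cat /sys/devices/system/cpu/vulnerabilities/spectre_v2
	Mitigation: Full generic retpoline, IBPB, IBRS_FW, STIBP, RSB filling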



[PATCH v6 2/3] x86/speculation: Enable cross-hyperthread spectre v2 STIBP mitigation

2018-09-12 Thread Jiri Kosina
From: Jiri Kosina 

STIBP is a feature provided by certain Intel ucodes / CPUs. This feature
(once enabled) prevents cross-hyperthread control of decisions made by
indirect branch predictors.

Enable this feature if

- the CPU is vulnerable to spectre v2
- the CPU supports SMT and has SMT siblings online
- spectre_v2 mitigation autoselection is enabled (default)

After some previous discussion, this patch leaves STIBP on all the time,
as a wrmsr on every kernel boundary crossing is a no-no. This could perhaps
later be optimized a bit more (like disabling it in NOHZ, experimenting with
disabling it in idle, etc.) if needed.

Note that the synchronization of the mask manipulation via the newly added
spec_ctrl_mutex is currently not strictly needed, as the only updater is
already serialized by cpu_add_remove_lock, but let's make this a little bit
more future-proof.

Cc: sta...@vger.kernel.org
Signed-off-by: Jiri Kosina 
---
 arch/x86/kernel/cpu/bugs.c | 59 +-
 kernel/cpu.c   | 11 -
 2 files changed, 63 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 40bdaea97fe7..6bc76bdf5a0a 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -35,12 +35,10 @@ static void __init spectre_v2_select_mitigation(void);
 static void __init ssb_select_mitigation(void);
 static void __init l1tf_select_mitigation(void);
 
-/*
- * Our boot-time value of the SPEC_CTRL MSR. We read it once so that any
- * writes to SPEC_CTRL contain whatever reserved bits have been set.
- */
-u64 __ro_after_init x86_spec_ctrl_base;
+/* The base value of the SPEC_CTRL MSR that always has to be preserved. */
+u64 x86_spec_ctrl_base;
 EXPORT_SYMBOL_GPL(x86_spec_ctrl_base);
+static DEFINE_MUTEX(spec_ctrl_mutex);
 
 /*
  * The vendor and possibly platform specific bits which can be modified in
@@ -325,6 +323,46 @@ static enum spectre_v2_mitigation_cmd __init 
spectre_v2_parse_cmdline(void)
return cmd;
 }
 
+static bool stibp_needed(void)
+{
+   if (spectre_v2_enabled == SPECTRE_V2_NONE)
+   return false;
+
+   if (!boot_cpu_has(X86_FEATURE_STIBP))
+   return false;
+
+   return true;
+}
+
+static void update_stibp_msr(void *info)
+{
+   wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+}
+
+void arch_smt_update(void)
+{
+   u64 mask;
+
+   if (!stibp_needed())
+   return;
+
+   mutex_lock(&spec_ctrl_mutex);
+   mask = x86_spec_ctrl_base;
+   if (cpu_smt_control == CPU_SMT_ENABLED)
+   mask |= SPEC_CTRL_STIBP;
+   else
+   mask &= ~SPEC_CTRL_STIBP;
+
+   if (mask != x86_spec_ctrl_base) {
+   pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
+   cpu_smt_control == CPU_SMT_ENABLED ?
+   "Enabling" : "Disabling");
+   x86_spec_ctrl_base = mask;
+   on_each_cpu(update_stibp_msr, NULL, 1);
+   }
+   mutex_unlock(&spec_ctrl_mutex);
+}
+
 static void __init spectre_v2_select_mitigation(void)
 {
enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
@@ -424,6 +462,9 @@ static void __init spectre_v2_select_mitigation(void)
setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
pr_info("Enabling Restricted Speculation for firmware calls\n");
}
+
+   /* Enable STIBP if appropriate */
+   arch_smt_update();
 }
 
 #undef pr_fmt
@@ -814,6 +855,8 @@ static ssize_t l1tf_show_state(char *buf)
 static ssize_t cpu_show_common(struct device *dev, struct device_attribute 
*attr,
   char *buf, unsigned int bug)
 {
+   int ret;
+
if (!boot_cpu_has_bug(bug))
return sprintf(buf, "Not affected\n");
 
@@ -831,10 +874,14 @@ static ssize_t cpu_show_common(struct device *dev, struct 
device_attribute *attr
return sprintf(buf, "Mitigation: __user pointer 
sanitization\n");
 
case X86_BUG_SPECTRE_V2:
-   return sprintf(buf, "%s%s%s%s\n", 
spectre_v2_strings[spectre_v2_enabled],
+   mutex_lock(&spec_ctrl_mutex);
+   ret = sprintf(buf, "%s%s%s%s%s\n", 
spectre_v2_strings[spectre_v2_enabled],
   boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : 
"",
   boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", 
IBRS_FW" : "",
+  (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", 
STIBP" : "",
   spectre_v2_module_string());
+   mutex_unlock(&spec_ctrl_mutex);
+   return ret;
 
case X86_BUG_SPEC_STORE_BYPASS:
return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);
diff --git a/kernel/cpu.c b/kernel/c

[PATCH v6 1/3] x86/speculation: apply IBPB more strictly to avoid cross-process data leak

2018-09-12 Thread Jiri Kosina
From: Jiri Kosina 

Currently, we are issuing IBPB only in cases when switching into a non-dumpable
process, the rationale being to protect such 'important and security sensitive'
processes (such as GPG) from data leaking into a different userspace process via
spectre v2.

This is however completely insufficient to provide proper userspace-to-userspace
spectrev2 protection, as any process can poison branch buffers before being
scheduled out, and the newly scheduled process immediately becomes a spectrev2
victim.

In order to minimize the performance impact (for usecases that do require
spectrev2 protection), issue the barrier only in cases when switching between
processes where the victim can't be ptraced by the potential attacker (as in
such cases, the attacker doesn't have to bother with branch buffers at all).

Fixes: 18bf3c3ea8 ("x86/speculation: Use Indirect Branch Prediction Barrier in 
context switch")
Originally-by: Tim Chen 
Signed-off-by: Jiri Kosina 
---
 arch/x86/mm/tlb.c  | 31 ---
 include/linux/ptrace.h |  4 
 kernel/ptrace.c| 12 
 3 files changed, 32 insertions(+), 15 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index e96b99eb800c..ed402441 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -7,6 +7,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -180,6 +181,19 @@ static void sync_current_stack_to_mm(struct mm_struct *mm)
}
 }
 
+static bool ibpb_needed(struct task_struct *tsk, u64 last_ctx_id)
+{
+   /*
+* Check if the current (previous) task has access to the memory
+* of the @tsk (next) task. If access is denied, make sure to
+* issue a IBPB to stop user->user Spectre-v2 attacks.
+*
+* Note: __ptrace_may_access() returns 0 or -ERRNO.
+*/
+   return (tsk && tsk->mm && tsk->mm->context.ctx_id != last_ctx_id &&
+   __ptrace_may_access(tsk, PTRACE_MODE_IBPB));
+}
+
 void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
struct task_struct *tsk)
 {
@@ -262,18 +276,13 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct 
mm_struct *next,
 * one process from doing Spectre-v2 attacks on another.
 *
 * As an optimization, flush indirect branches only when
-* switching into processes that disable dumping. This
-* protects high value processes like gpg, without having
-* too high performance overhead. IBPB is *expensive*!
-*
-* This will not flush branches when switching into kernel
-* threads. It will also not flush if we switch to idle
-* thread and back to the same process. It will flush if we
-* switch to a different non-dumpable process.
+* switching into a process that can't be ptraced by the
+* current one (as in such a case, the attacker has a much
+* more convenient way to tamper with the next process than
+* branch buffer poisoning).
 */
-   if (tsk && tsk->mm &&
-   tsk->mm->context.ctx_id != last_ctx_id &&
-   get_dumpable(tsk->mm) != SUID_DUMP_USER)
+   if (static_cpu_has(X86_FEATURE_USE_IBPB) &&
+   ibpb_needed(tsk, last_ctx_id))
indirect_branch_prediction_barrier();
 
if (IS_ENABLED(CONFIG_VMAP_STACK)) {
diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h
index 4f36431c380b..983d3f5545a8 100644
--- a/include/linux/ptrace.h
+++ b/include/linux/ptrace.h
@@ -64,12 +64,15 @@ extern void exit_ptrace(struct task_struct *tracer, struct 
list_head *dead);
 #define PTRACE_MODE_NOAUDIT0x04
 #define PTRACE_MODE_FSCREDS 0x08
 #define PTRACE_MODE_REALCREDS 0x10
+#define PTRACE_MODE_NOACCESS_CHK 0x20
 
 /* shorthands for READ/ATTACH and FSCREDS/REALCREDS combinations */
 #define PTRACE_MODE_READ_FSCREDS (PTRACE_MODE_READ | PTRACE_MODE_FSCREDS)
 #define PTRACE_MODE_READ_REALCREDS (PTRACE_MODE_READ | PTRACE_MODE_REALCREDS)
 #define PTRACE_MODE_ATTACH_FSCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_FSCREDS)
 #define PTRACE_MODE_ATTACH_REALCREDS (PTRACE_MODE_ATTACH | 
PTRACE_MODE_REALCREDS)
+#define PTRACE_MODE_IBPB (PTRACE_MODE_ATTACH | PTRACE_MODE_NOAUDIT \
+ | PTRACE_MODE_NOACCESS_CHK | PTRACE_MODE_REALCREDS)
 
 /**
  * ptrace_may_access - check whether the caller is permitted to access
@@ -86,6 +89,7 @@ extern void exit_ptrace(struct task_struct *tracer, struct 
list_head *dead);
  * process_vm_writev or ptrace (and should use the real credentials).
  */
 extern bool ptrace_may_access(struct task_struct *task, unsigned int mode);
+extern int __ptrace_may_access(s

[PATCH v6 0/3] Harden spectrev2 userspace-userspace protection

2018-09-12 Thread Jiri Kosina
Currently, the linux kernel basically does not prevent userspace-userspace 
spectrev2 attacks, because:

- IBPB is basically unused (issued only for tasks that marked themselves 
  explicitly non-dumpable, which is an absolutely negligible minority of 
  all software out there), therefore cross-process branch buffer poisoning 
  using spectrev2 is possible

- STIBP is completely unused, therefore cross-process branch buffer 
  poisoning using spectrev2 between processes running on two HT sibling 
  threads is possible

This patchset changes IBPB semantics, so that it's now applied whenever 
context-switching between processes that can't use ptrace() to achieve 
the same. This admittedly comes with extra overhead on a context switch; 
systems that don't care about it can disable the mitigation using the 
nospectre_v2 boot option (see the example below).
The IBPB implementation is heavily based on original patches by Tim Chen.
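
As an example of that opt-out, it is just a matter of appending the existing 
parameter to the kernel command line, e.g. in an assumed GRUB-style entry 
(paths are illustrative):

	linux /boot/vmlinuz root=/dev/sda1 ro nospectre_v2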

In addition to that, we unconditionally turn STIBP on so that HT siblings 
always have separate branch buffers.

We've been carrying an IBPB implementation with the same semantics in our 
(SUSE) trees since the January disclosure; STIBP was more or less ignored 
up to today.

v1->v2:
include IBPB changes
v2->v3: 
fix IBPB 'who can trace who' semantics
wire up STIBP flipping to SMT hotplug
v3->v4:
dropped ___ptrace_may_access(), as it's not needed
fixed deadlock with LSM/audit/selinux (Andrea Arcangeli)
statically patch out the ptrace check if !IBPB

v4->v5:
fix MSR writing logic (Thomas Gleixner, Josh Poimboeuf)

v5->v6:
propagate X86_FEATURE_RSB_CTXSW setting to sysfs
propagate STIBP setting to sysfs (Thomas Gleixner)
simplify arch_smt_update() (Thomas Gleixner)

Jiri Kosina (3):
  x86/speculation: apply IBPB more strictly to avoid cross-process data leak
  x86/speculation: Enable cross-hyperthread spectre v2 STIBP mitigation
  x86/speculation: Propagate information about RSB filling mitigation to 
sysfs

 arch/x86/kernel/cpu/bugs.c | 60 
++--
 arch/x86/mm/tlb.c  | 31 ---
 include/linux/ptrace.h |  4 
 kernel/cpu.c   | 11 ++-
 kernel/ptrace.c| 12 
 5 files changed, 96 insertions(+), 22 deletions(-)

-- 
Jiri Kosina
SUSE Labs



[GIT PULL] HID fixes

2018-09-11 Thread Jiri Kosina
Linus,

please pull from

  git://git.kernel.org/pub/scm/linux/kernel/git/jikos/hid.git for-linus

to receive fixes for HID subsystem; highlights:

=
- functional regression fix for sensor-hub driver from Hans de Goede
- stop doing device reset for i2c-hid devices, which unbreaks some of them 
  (and is in line with the specification), from Kai-Heng Feng
- error handling fix for hid-core from Gustavo A. R. Silva
- functional regression fix for some Elan panels from Benjamin Tissoires
- a few new device ID additions and misc small fixes
=

Thanks.


AceLan Kao (1):
  HID: i2c-hid: Fix flooded incomplete report after S3 on Rayd touchscreen

Andreas Bosch (1):
  HID: intel-ish-hid: Enable Sunrise Point-H ish driver

Benjamin Tissoires (2):
  HID: multitouch: fix Elan panels with 2 input modes declaration
  HID: core: fix grouping by application

Gustavo A. R. Silva (1):
  HID: core: fix NULL pointer dereference

Hans de Goede (1):
  HID: sensor-hub: Restore fixup for Lenovo ThinkPad Helix 2 sensor hub 
report

Harry Mallon (1):
  HID: hid-saitek: Add device ID for RAT 7 Contagion

Kai-Heng Feng (1):
  HID: i2c-hid: Don't reset device upon system resume

Sean O'Brien (1):
  HID: add support for Apple Magic Keyboards

Stefan Agner (2):
  HID: input: fix leaking custom input node name
  HID: core: fix memory leak on probe

 drivers/hid/hid-apple.c |  9 -
 drivers/hid/hid-core.c  |  5 -
 drivers/hid/hid-ids.h   |  6 +++---
 drivers/hid/hid-input.c |  5 +++--
 drivers/hid/hid-multitouch.c| 19 +--
 drivers/hid/hid-saitek.c|  2 ++
 drivers/hid/hid-sensor-hub.c| 23 +++
 drivers/hid/i2c-hid/i2c-hid.c   | 11 +++
 drivers/hid/intel-ish-hid/ipc/hw-ish.h  |  1 +
 drivers/hid/intel-ish-hid/ipc/pci-ish.c |  1 +
 include/linux/hid.h |  1 +
 11 files changed, 70 insertions(+), 13 deletions(-)

-- 
Jiri Kosina
SUSE Labs



RE: [PATCH v5 1/2] x86/speculation: apply IBPB more strictly to avoid cross-process data leak

2018-09-10 Thread Jiri Kosina
On Mon, 10 Sep 2018, Schaufler, Casey wrote:

> > So please tell me what exactly you'd like to see changed in the IBPB patch
> > and why exactly, I am not seeing it yet.
> 
> Short of a patch to show the changes (which I wish I could do today, but 
> really can't)
> what I want to see is:
> 
>   - Put ptrace back to using the security module interfaces.
>   - Identify where this causes locking issues and work with the module
> owners (a reasonable lot, all) to provide lock safe paths for the 
> IBPB case.
> 
> Otherwise, I have to add a new LSM hook right after your ptrace call and 
> duplicate
> a whole lot of what you've just turned off, plus creating lock safe code that 
> duplicates
> what ptrace already does. While I would rather have the side-channel checks be
> separate from the ptrace checks I can't justify doing both.

So why can't this then be done as a second step, once you've audited the 
LSM callbacks and worked around the locking in the LSM callbacks/audit code?

Once that is taken care of, of course feel free to undo the changes my 
patch is doing so that you don't have to duplicate any ptrace code.

But before all that is fixed / worked around in LSM/audit (and I don't 
have spare cycles for doing that myself), why not take the simple approach 
for now?

Thanks,

-- 
Jiri Kosina
SUSE Labs



RE: [PATCH v5 1/2] x86/speculation: apply IBPB more strictly to avoid cross-process data leak

2018-09-10 Thread Jiri Kosina
On Mon, 10 Sep 2018, Schaufler, Casey wrote:

> It you're going to call __ptrace_access_check(), 

I guess you mean __ptrace_may_access() here.

> which already includes an LSM hook, it makes a whole lot of sense to 
> make that the path for doing any module specific checks. It seems wrong 
> to disable the LSM hook there, then turn around and introduce a new one 
> that does the check you just disabled. The patches I had proposed 
> created a new LSM hook because there was not path to an existing hook. 
> With your addition of __ptrace_access_check() that is no longer an issue 
> once any locking problems are resolved. Rather than use a new hook, the 
> existing ptrace hooks ought to work just fine, and any new checks can be 
> added in a new module that has its own ptrace_access_check hook.

Sorry for being dense, but what exactly are you proposing here then?

This patch (v4 and v5) explicitly avoids calling out to ptrace_has_cap() 
(which calls out to LSM through has_ns_capability_*() -> 
security_capable()) in the PTRACE_MODE_IBPB case, exactly to avoid locking 
issues with security modules; there are known callchains that lead to 
deadlock.

For the same reason, the security_ptrace_access_check() call is avoided; 
there is no known particular callchain that'd lead to a lock being taken, 
but no one has done such an audit (yet), as it's all hidden behind LSM 
callbacks.

So please tell me what exactly you'd like to see changed in the IBPB patch 
and why exactly, I am not seeing it yet.

Thanks,

-- 
Jiri Kosina
SUSE Labs



RE: [PATCH v5 1/2] x86/speculation: apply IBPB more strictly to avoid cross-process data leak

2018-09-10 Thread Jiri Kosina
On Mon, 10 Sep 2018, Schaufler, Casey wrote:

> Yes, It would require that this patch be tested against all the existing 
> security modules that provide a ptrace_access_check hook. It's not like 
> the security module writers don't have a bunch of locking issues to deal 
> with.

Yeah, that was indeed my concern.

So can we agree on doing this in the envisioned second step, when this is 
going to be replaced by an LSM hook, as discussed previously [1]?

[1] 
http://lkml.kernel.org/r/99fc4b6efcefd44486c35f4c281dc67321447...@orsmsx107.amr.corp.intel.com

Thanks,

-- 
Jiri Kosina
SUSE Labs



RE: [PATCH v5 1/2] x86/speculation: apply IBPB more strictly to avoid cross-process data leak

2018-09-10 Thread Jiri Kosina
On Mon, 10 Sep 2018, Schaufler, Casey wrote:

> Why are you dropping the LSM check here, when in v4 you fixed the
> SELinux audit locking issue? We can avoid introducing an LSM hook
> and all the baggage around it if you can do the security_ptrace_access_check()
> here.

So what guarantees that none of the hooks that 
security_ptrace_access_check() is invoking will be taking locks (from 
scheduler context, in this case)?

Thanks,

-- 
Jiri Kosina
SUSE Labs



Re: [PATCH v5 2/2] x86/speculation: Enable cross-hyperthread spectre v2 STIBP mitigation

2018-09-10 Thread Jiri Kosina
On Mon, 10 Sep 2018, Jiri Kosina wrote:

> > That looks much more palatable. One missing piece is the sysfs 
> > mitigation file for spectre v2. That should reflect STIPB state as well.
> 
> FWIW, we're missing a bit more in that area, namely RSB stuffing on 
> context switch, IBRS (even through only around fw) and IBPB; those are 
> only signalled in dmesg during bootup.

Nah, IBPB is actually there, sorry. So I'll add reporting of STIBP + fix up 
the missing reporting of RSB_CTXSW for v6.

-- 
Jiri Kosina
SUSE Labs



Re: [PATCH v5 2/2] x86/speculation: Enable cross-hyperthread spectre v2 STIBP mitigation

2018-09-10 Thread Jiri Kosina
On Mon, 10 Sep 2018, Thomas Gleixner wrote:

> That looks much more palatable. One missing piece is the sysfs 
> mitigation file for spectre v2. That should reflect STIPB state as well.

FWIW, we're missing a bit more in that area, namely RSB stuffing on 
context switch, IBRS (even though only around firmware calls) and IBPB; 
those are only signalled in dmesg during bootup.

Want to add all this stuff there?

Thanks,

-- 
Jiri Kosina
SUSE Labs



[PATCH v5 2/2] x86/speculation: Enable cross-hyperthread spectre v2 STIBP mitigation

2018-09-10 Thread Jiri Kosina
From: Jiri Kosina 

STIBP is a feature provided by certain Intel ucodes / CPUs. This feature
(once enabled) prevents cross-hyperthread control of decisions made by
indirect branch predictors.

Enable this feature if

- the CPU is vulnerable to spectre v2
- the CPU supports SMT and has SMT siblings online
- spectre_v2 mitigation autoselection is enabled (default)

After some previous discussion, this patch leaves STIBP on all the time,
as a wrmsr on every kernel boundary crossing is a no-no. This could perhaps
later be optimized a bit more (like disabling it in NOHZ, experimenting with
disabling it in idle, etc.) if needed.

Note that the synchronization of the mask manipulation via the newly added
spec_ctrl_mutex is currently not strictly needed, as the only updater is
already serialized by cpu_add_remove_lock, but let's make this a little bit
more future-proof.

Cc: sta...@vger.kernel.org
Signed-off-by: Jiri Kosina 
---
 arch/x86/kernel/cpu/bugs.c | 49 +-
 kernel/cpu.c   | 11 ++-
 2 files changed, 54 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 40bdaea97fe7..ccc5e6b7fe40 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -35,12 +35,10 @@ static void __init spectre_v2_select_mitigation(void);
 static void __init ssb_select_mitigation(void);
 static void __init l1tf_select_mitigation(void);
 
-/*
- * Our boot-time value of the SPEC_CTRL MSR. We read it once so that any
- * writes to SPEC_CTRL contain whatever reserved bits have been set.
- */
-u64 __ro_after_init x86_spec_ctrl_base;
+/* The base value of the SPEC_CTRL MSR that always has to be preserved. */
+u64 x86_spec_ctrl_base;
 EXPORT_SYMBOL_GPL(x86_spec_ctrl_base);
+static DEFINE_MUTEX(spec_ctrl_mutex);
 
 /*
  * The vendor and possibly platform specific bits which can be modified in
@@ -325,6 +323,44 @@ static enum spectre_v2_mitigation_cmd __init 
spectre_v2_parse_cmdline(void)
return cmd;
 }
 
+static bool stibp_needed(void)
+{
+   if (spectre_v2_enabled == SPECTRE_V2_NONE)
+   return false;
+
+   if (!boot_cpu_has(X86_FEATURE_STIBP))
+   return false;
+
+   return true;
+}
+
+static void update_stibp_msr(void *info)
+{
+   wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+}
+
+void arch_smt_update(void)
+{
+   if (stibp_needed()) {
+   u64 mask;
+   mutex_lock(&spec_ctrl_mutex);
+   mask = x86_spec_ctrl_base;
+   if (cpu_smt_control == CPU_SMT_ENABLED)
+   mask |= SPEC_CTRL_STIBP;
+   else
+   mask &= ~SPEC_CTRL_STIBP;
+
+   if (mask != x86_spec_ctrl_base) {
+   pr_info("Spectre v2 cross-process SMT mitigation: %s 
STIBP\n",
+   cpu_smt_control == CPU_SMT_ENABLED ?
+   "Enabling" : "Disabling");
+   x86_spec_ctrl_base = mask;
+   on_each_cpu(update_stibp_msr, NULL, 1);
+   }
+   mutex_unlock(&spec_ctrl_mutex);
+   }
+}
+
 static void __init spectre_v2_select_mitigation(void)
 {
enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
@@ -424,6 +460,9 @@ static void __init spectre_v2_select_mitigation(void)
setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
pr_info("Enabling Restricted Speculation for firmware calls\n");
}
+
+   /* Enable STIBP if appropriate */
+   arch_smt_update();
 }
 
 #undef pr_fmt
diff --git a/kernel/cpu.c b/kernel/cpu.c
index aa7fe85ad62e..4bba5071d61e 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -2025,6 +2025,12 @@ static void cpuhp_online_cpu_device(unsigned int cpu)
kobject_uevent(&dev->kobj, KOBJ_ONLINE);
 }
 
+/*
+ * Architectures that need SMT-specific errata handling during SMT hotplug
+ * should override these.
+ */
+void __weak arch_smt_update(void) { };
+
 static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
 {
int cpu, ret = 0;
@@ -2051,8 +2057,10 @@ static int cpuhp_smt_disable(enum cpuhp_smt_control 
ctrlval)
 */
cpuhp_offline_cpu_device(cpu);
}
-   if (!ret)
+   if (!ret) {
cpu_smt_control = ctrlval;
+   arch_smt_update();
+   }
cpu_maps_update_done();
return ret;
 }
@@ -2063,6 +2071,7 @@ static int cpuhp_smt_enable(void)
 
cpu_maps_update_begin();
cpu_smt_control = CPU_SMT_ENABLED;
+   arch_smt_update();
for_each_present_cpu(cpu) {
/* Skip online CPUs and CPUs on offline nodes */
if (cpu_online(cpu) || !node_online(cpu_to_node(cpu)))

-- 
Jiri Kosina
SUSE Labs



[PATCH v5 0/2] Harden spectrev2 userspace-userspace protection

2018-09-10 Thread Jiri Kosina
Currently, the Linux kernel is basically not preventing userspace-userspace 
spectrev2 attacks, because:

- IBPB is basically unused (issued only for tasks that have marked themselves 
  explicitly non-dumpable, which is an absolutely negligible minority of all 
  software out there), therefore cross-process branch buffer poisoning 
  using spectrev2 is possible

- STIBP is completely unused, therefore cross-process branch buffer 
  poisoning using spectrev2 between processes running on two HT sibling 
  threads is possible

This patchset changes IBPB semantics, so that it's now applied whenever 
context-switching between processes that can't use ptrace() to achieve 
the same. This admittedly comes with extra overhead on a context switch; 
systems that don't care about it can disable the mitigation using the 
nospectre_v2 boot option.
The IBPB implementation is heavily based on original patches by Tim Chen.
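
Roughly, and purely as a condensed illustration of the new semantics (the
real change is in patch 1/2 of the series; all names below are taken from
there), the decision at context switch becomes:

	/*
	 * Condensed sketch from patch 1/2: the (expensive) barrier is only
	 * issued when switching to a different mm whose owner the previous
	 * task is not allowed to ptrace (__ptrace_may_access() returning 0
	 * means access would be granted, so no barrier is needed).
	 */
	if (static_cpu_has(X86_FEATURE_USE_IBPB) &&
	    tsk && tsk->mm &&
	    tsk->mm->context.ctx_id != last_ctx_id &&
	    __ptrace_may_access(tsk, PTRACE_MODE_IBPB))
		indirect_branch_prediction_barrier();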

In addition to that, we unconditionally turn STIBP on so that HT siblings 
always have separate branch buffers.
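
As a rough sketch of what that means in code (condensed from patch 2/2 of
the series, with the locking and stibp_needed() checks left out): on every
SMT state change the STIBP bit is folded into x86_spec_ctrl_base and the
resulting value is written to MSR_IA32_SPEC_CTRL on all online CPUs:

	u64 mask = x86_spec_ctrl_base;

	if (cpu_smt_control == CPU_SMT_ENABLED)
		mask |= SPEC_CTRL_STIBP;
	else
		mask &= ~SPEC_CTRL_STIBP;

	if (mask != x86_spec_ctrl_base) {
		x86_spec_ctrl_base = mask;
		/* update_stibp_msr() does wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base) */
		on_each_cpu(update_stibp_msr, NULL, 1);
	}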

We've been carrying an IBPB implementation with the same semantics in our 
(SUSE) trees since the January disclosure; STIBP has been more or less 
ignored up to today.

v1->v2:
include IBPB changes
v2->v3: 
fix IBPB 'who can trace who' semantics
wire up STIBP flipping to SMT hotplug
v3->v4:
dropped ___ptrace_may_access(), as it's not needed
fixed deadlock with LSM/audit/selinux (Andrea Arcangeli)
statically patch out the ptrace check if !IBPB

v4->v5:
fix MSR writing logic (Thomas Gleixner, Josh Poimboeuf)

Jiri Kosina (2):
  x86/speculation: apply IBPB more strictly to avoid cross-process data leak
  x86/speculation: Enable cross-hyperthread spectre v2 STIBP mitigation

 arch/x86/kernel/cpu/bugs.c | 49 
-
 arch/x86/mm/tlb.c  | 31 ---
 include/linux/ptrace.h |  4 
 kernel/cpu.c   | 11 ++-
 kernel/ptrace.c| 12 
 5 files changed, 86 insertions(+), 21 deletions(-)

-- 
Jiri Kosina
SUSE Labs


[PATCH v5 1/2] x86/speculation: apply IBPB more strictly to avoid cross-process data leak

2018-09-10 Thread Jiri Kosina
From: Jiri Kosina 

Currently, we are issuing IBPB only in cases when switching into a non-dumpable
process, the rationale being to protect such 'important and security sensitive'
processes (such as GPG) from data leaking into a different userspace process via
spectre v2.

This is however completely insufficient to provide proper userspace-to-userspace
spectrev2 protection, as any process can poison branch buffers before being
scheduled out, and the newly scheduled process immediately becomes a spectrev2
victim.

In order to minimize the performance impact (for use cases that do require
spectrev2 protection), issue the barrier only in cases when switching between
processes where the victim can't be ptraced by the potential attacker (as in
such cases, the attacker doesn't have to bother with branch buffers at all).

Fixes: 18bf3c3ea8 ("x86/speculation: Use Indirect Branch Prediction Barrier in 
context switch")
Originally-by: Tim Chen 
Signed-off-by: Jiri Kosina 
---
 arch/x86/mm/tlb.c  | 31 ---
 include/linux/ptrace.h |  4 
 kernel/ptrace.c| 12 
 3 files changed, 32 insertions(+), 15 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index e96b99eb800c..ed402441 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -7,6 +7,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -180,6 +181,19 @@ static void sync_current_stack_to_mm(struct mm_struct *mm)
}
 }
 
+static bool ibpb_needed(struct task_struct *tsk, u64 last_ctx_id)
+{
+   /*
+* Check if the current (previous) task has access to the memory
+* of the @tsk (next) task. If access is denied, make sure to
+* issue a IBPB to stop user->user Spectre-v2 attacks.
+*
+* Note: __ptrace_may_access() returns 0 or -ERRNO.
+*/
+   return (tsk && tsk->mm && tsk->mm->context.ctx_id != last_ctx_id &&
+   __ptrace_may_access(tsk, PTRACE_MODE_IBPB));
+}
+
 void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
struct task_struct *tsk)
 {
@@ -262,18 +276,13 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct 
mm_struct *next,
 * one process from doing Spectre-v2 attacks on another.
 *
 * As an optimization, flush indirect branches only when
-* switching into processes that disable dumping. This
-* protects high value processes like gpg, without having
-* too high performance overhead. IBPB is *expensive*!
-*
-* This will not flush branches when switching into kernel
-* threads. It will also not flush if we switch to idle
-* thread and back to the same process. It will flush if we
-* switch to a different non-dumpable process.
+* switching into a processes that can't be ptrace by the
+* current one (as in such case, attacker has much more
+* convenient way how to tamper with the next process than
+* branch buffer poisoning).
 */
-   if (tsk && tsk->mm &&
-   tsk->mm->context.ctx_id != last_ctx_id &&
-   get_dumpable(tsk->mm) != SUID_DUMP_USER)
+   if (static_cpu_has(X86_FEATURE_USE_IBPB) &&
+   ibpb_needed(tsk, last_ctx_id))
indirect_branch_prediction_barrier();
 
if (IS_ENABLED(CONFIG_VMAP_STACK)) {
diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h
index 4f36431c380b..983d3f5545a8 100644
--- a/include/linux/ptrace.h
+++ b/include/linux/ptrace.h
@@ -64,12 +64,15 @@ extern void exit_ptrace(struct task_struct *tracer, struct 
list_head *dead);
 #define PTRACE_MODE_NOAUDIT0x04
 #define PTRACE_MODE_FSCREDS 0x08
 #define PTRACE_MODE_REALCREDS 0x10
+#define PTRACE_MODE_NOACCESS_CHK 0x20
 
 /* shorthands for READ/ATTACH and FSCREDS/REALCREDS combinations */
 #define PTRACE_MODE_READ_FSCREDS (PTRACE_MODE_READ | PTRACE_MODE_FSCREDS)
 #define PTRACE_MODE_READ_REALCREDS (PTRACE_MODE_READ | PTRACE_MODE_REALCREDS)
 #define PTRACE_MODE_ATTACH_FSCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_FSCREDS)
 #define PTRACE_MODE_ATTACH_REALCREDS (PTRACE_MODE_ATTACH | 
PTRACE_MODE_REALCREDS)
+#define PTRACE_MODE_IBPB (PTRACE_MODE_ATTACH | PTRACE_MODE_NOAUDIT \
+ | PTRACE_MODE_NOACCESS_CHK | PTRACE_MODE_REALCREDS)
 
 /**
  * ptrace_may_access - check whether the caller is permitted to access
@@ -86,6 +89,7 @@ extern void exit_ptrace(struct task_struct *tracer, struct 
list_head *dead);
  * process_vm_writev or ptrace (and should use the real credentials).
  */
 extern bool ptrace_may_access(struct task_struct *task, unsigned int mode);
+extern int __ptrace_may_access(s

Re: [PATCH v4 2/2] x86/speculation: Enable cross-hyperthread spectre v2 STIBP mitigation

2018-09-08 Thread Jiri Kosina
On Sat, 8 Sep 2018, Thomas Gleixner wrote:

> If after changing the SMT to enable a normal hotplug operation happens 
> then you need to update the MSR as well.

Ah, right you are, thanks. Will fix in v5.

-- 
Jiri Kosina
SUSE Labs



Re: [PATCH v4 2/2] x86/speculation: Enable cross-hyperthread spectre v2 STIBP mitigation

2018-09-07 Thread Jiri Kosina
On Fri, 7 Sep 2018, Thomas Gleixner wrote:

> > + * The read-modify-write of the MSR doesn't need any race protection here,
> > + * as we're running in atomic context.
> > + */
> > +static void enable_stibp(void *info)
> > +{
> > +   u64 mask;
> > +   rdmsrl(MSR_IA32_SPEC_CTRL, mask);
> > +   mask |= SPEC_CTRL_STIBP;
> > +   wrmsrl(MSR_IA32_SPEC_CTRL, mask);
> > +}
> 
> You need to write x86_spec_ctrl_base 

Agreed (Josh pointed that out as well) -- otherwise this gets lost during 
VMEXIT at least.
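
A rough sketch of that approach, as it later materializes in v5: set the bit
in the cached base value first and only then write the MSR from it, so every
path that restores MSR_IA32_SPEC_CTRL from x86_spec_ctrl_base (VMEXIT, AP
bringup) picks the bit up as well:

	x86_spec_ctrl_base |= SPEC_CTRL_STIBP;
	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);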

> >  #undef pr_fmt
> > @@ -655,6 +708,16 @@ void x86_spec_ctrl_setup_ap(void)
> >  
> > if (ssb_mode == SPEC_STORE_BYPASS_DISABLE)
> > x86_amd_ssb_disable();
> > +
> > +   /*
> > +* If we are here during system bootup, enable STIBP.
> > +*
> > +* If we are here because of SMT hotplug, STIBP will be enabled by the
> > +* SMT control code (enabling here would not be sufficient, as it
> > +* needs to happen on primary threads as well).
> > +*/
> > +   if (stibp_needed() && system_state < SYSTEM_RUNNING)
> > +   enable_stibp(NULL);
> 
> That's broken and pointless. If a CPU is hotplugged for whatever reason,
> then it needs the update of the IA32_SPEC_CTRL msr and it's already there:
> 
> void x86_spec_ctrl_setup_ap(void)
> {
> if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
> wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
> 
> so again. x86_spec_ctrl_base needs the STIPB bit update when the
> enable/disable changes.

Agreed, but that's not sufficient. We need to update the MSR value on 
primary threads as well (which we do in the sysfs store path), so this is 
merely an optimization so that we don't do it pointlessly twice on 
siblings.

> > +/*
> > + * Architectures that need SMT-specific errata handling during SMT hotplug
> > + * should override these.
> > + */
> > +void __weak arch_smt_enable_errata(void) { };
> > +void __weak arch_smt_disable_errata(void) { };
> > +
> >  static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
> >  {
> > int cpu, ret = 0;
> >  
> > cpu_maps_update_begin();
> > +   arch_smt_disable_errata();
> > for_each_online_cpu(cpu) {
> > if (topology_is_primary_thread(cpu))
> > continue;
> > ret = cpu_down_maps_locked(cpu, CPUHP_OFFLINE);
> > -   if (ret)
> > +   if (ret) {
> > +   arch_smt_enable_errata();
> > break;
> > +   }
> 
> Why don't you do the disable_errata call _after_ the loop and only on
> success? That's more logical and spares 50% IPIs
> 
> > @@ -2073,6 +2083,7 @@ static int cpuhp_smt_enable(void)
> >/* See comment in cpuhp_smt_disable() */
> >cpuhp_online_cpu_device(cpu);
> >}
> > +   arch_smt_enable_errata();
> > cpu_maps_update_done();
> >return ret;
> 
> If you do that _before_ the loop then you spare 50% IPIs again because the
> siblings will do the right thing via x86_spec_ctrl_base.

So I will go through the whole codepath again, but I fear your suggestion 
would not work -- see the check for cpu_smt_control in stibp_needed(). We 
need to see the old (or new, depending on the direction of the transition) 
value of cpu_smt_control, which will break if we move 
arch_smt_enable_errata() (and thus the check).

But I am not 100% sure about this now, will double-check it tomorrow.

Thanks,

-- 
Jiri Kosina
SUSE Labs



Re: [PATCH v2] HID: i2c-hid: Don't reset device upon system resume

2018-09-06 Thread Jiri Kosina
On Thu, 6 Sep 2018, Kai-Heng Feng wrote:

> Raydium touchscreen triggers interrupt storm after system-wide suspend:
> [ 179.085033] i2c_hid i2c-CUST:00: i2c_hid_get_input: incomplete
> report (58/65535)
> 
> According to Raydium, Windows driver does not reset the device after
> system resume.
> 
> The HID over I2C spec does specify a reset should be used at
> intialization, but it doesn't specify if reset is required for system
> suspend.
> 
> Tested this patch on other i2c-hid touchpanels I have and those
> touchpanels do work after S3 without doing reset. If any regression
> happens to other touchpanel vendors, we can use quirk for Raydium
> devices.
> 
> There's still one device uses I2C_HID_QUIRK_RESEND_REPORT_DESCR so keep
> it there.
> 
> Cc: Aaron Ma 
> Cc: AceLan Kao 
> Signed-off-by: Kai-Heng Feng 

Queued in for-4.19/fixes. Thanks,

-- 
Jiri Kosina
SUSE Labs



[PATCH v4 2/2] x86/speculation: Enable cross-hyperthread spectre v2 STIBP mitigation

2018-09-06 Thread Jiri Kosina
From: Jiri Kosina 

STIBP is a feature provided by certain Intel ucodes / CPUs. This feature
(once enabled) prevents cross-hyperthread control of decisions made by
indirect branch predictors.

Enable this feature if

- the CPU is vulnerable to spectre v2
- the CPU supports SMT and has SMT siblings online
- spectre_v2 mitigation autoselection is enabled (default)

After some previous discussion, this patch leaves STIBP on all the time,
as wrmsr on crossing kernel boundary is a no-no. This could perhaps later
be a bit more optimized (like disabling it in NOHZ, experiment with
disabling it in idle, etc) if needed.

Note: the code could be made less awkward if it'd be guaranteed that STIBP
could be kept enabled on a primary thread while its SMT sibling is offline,
without potentially imposing a performance penalty. This doesn't seem to be
defined anywhere though, so let's be safe rather than sorry and always flip
STIBP both on primary and sibling threads on hotplug transitions.

Cc: sta...@vger.kernel.org
Signed-off-by: Jiri Kosina 
---

Hopefully properly threaded now, sorry for the duplicates. alpine 2.21
clearly has a bug where it sometimes eats the Reply-to: header :/ Apologies
for all the noise

 arch/x86/kernel/cpu/bugs.c | 63 ++
 kernel/cpu.c   | 13 +-
 2 files changed, 75 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 40bdaea97fe7..ba3df0a49a2e 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -325,6 +325,56 @@ static enum spectre_v2_mitigation_cmd __init 
spectre_v2_parse_cmdline(void)
return cmd;
 }
 
+static bool stibp_needed(void)
+{
+   if (spectre_v2_enabled == SPECTRE_V2_NONE)
+   return false;
+
+   if (cpu_smt_control != CPU_SMT_ENABLED)
+   return false;
+
+   if (!boot_cpu_has(X86_FEATURE_STIBP))
+   return false;
+
+   return true;
+}
+
+/*
+ * The read-modify-write of the MSR doesn't need any race protection here,
+ * as we're running in atomic context.
+ */
+static void enable_stibp(void *info)
+{
+   u64 mask;
+   rdmsrl(MSR_IA32_SPEC_CTRL, mask);
+   mask |= SPEC_CTRL_STIBP;
+   wrmsrl(MSR_IA32_SPEC_CTRL, mask);
+}
+
+static void disable_stibp(void *info)
+{
+   u64 mask;
+   rdmsrl(MSR_IA32_SPEC_CTRL, mask);
+   mask &= ~SPEC_CTRL_STIBP;
+   wrmsrl(MSR_IA32_SPEC_CTRL, mask);
+}
+
+void arch_smt_enable_errata(void)
+{
+   if (stibp_needed()) {
+   pr_info("Spectre v2 cross-process SMT mitigation: Enabling 
STIBP\n");
+   on_each_cpu(enable_stibp, NULL, 1);
+   }
+}
+
+void arch_smt_disable_errata(void)
+{
+   if (stibp_needed()) {
+   pr_info("Spectre v2 cross-process SMT mitigation: Disabling 
STIBP\n");
+   on_each_cpu(disable_stibp, NULL, 1);
+   }
+}
+
 static void __init spectre_v2_select_mitigation(void)
 {
enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
@@ -424,6 +474,9 @@ static void __init spectre_v2_select_mitigation(void)
setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
pr_info("Enabling Restricted Speculation for firmware calls\n");
}
+
+   /* Enable STIBP on BP if needed */
+   arch_smt_enable_errata();
 }
 
 #undef pr_fmt
@@ -655,6 +708,16 @@ void x86_spec_ctrl_setup_ap(void)
 
if (ssb_mode == SPEC_STORE_BYPASS_DISABLE)
x86_amd_ssb_disable();
+
+   /*
+* If we are here during system bootup, enable STIBP.
+*
+* If we are here because of SMT hotplug, STIBP will be enabled by the
+* SMT control code (enabling here would not be sufficient, as it
+* needs to happen on primary threads as well).
+*/
+   if (stibp_needed() && system_state < SYSTEM_RUNNING)
+   enable_stibp(NULL);
 }
 
 #undef pr_fmt
diff --git a/kernel/cpu.c b/kernel/cpu.c
index aa7fe85ad62e..d3613d546829 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -2025,17 +2025,27 @@ static void cpuhp_online_cpu_device(unsigned int cpu)
kobject_uevent(&dev->kobj, KOBJ_ONLINE);
 }
 
+/*
+ * Architectures that need SMT-specific errata handling during SMT hotplug
+ * should override these.
+ */
+void __weak arch_smt_enable_errata(void) { };
+void __weak arch_smt_disable_errata(void) { };
+
 static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
 {
int cpu, ret = 0;
 
cpu_maps_update_begin();
+   arch_smt_disable_errata();
for_each_online_cpu(cpu) {
if (topology_is_primary_thread(cpu))
continue;
ret = cpu_down_maps_locked(cpu, CPUHP_OFFLINE);
-   if (ret)
+   if (ret) {
+   arch_smt_enable_errata();
break;
+   }
/*
 * As this

[PATCH v4 1/2] x86/speculation: apply IBPB more strictly to avoid cross-process data leak

2018-09-06 Thread Jiri Kosina
From: Jiri Kosina 

Currently, we are issuing IBPB only in cases when switching into a non-dumpable
process, the rationale being to protect such 'important and security sensitive'
processes (such as GPG) from data leaking into a different userspace process via
spectre v2.

This is however completely insufficient to provide proper userspace-to-userspace
spectrev2 protection, as any process can poison branch buffers before being
scheduled out, and the newly scheduled process immediately becomes a spectrev2
victim.

In order to minimize the performance impact (for use cases that do require
spectrev2 protection), issue the barrier only in cases when switching between
processes where the victim can't be ptraced by the potential attacker (as in
such cases, the attacker doesn't have to bother with branch buffers at all).

Fixes: 18bf3c3ea8 ("x86/speculation: Use Indirect Branch Prediction Barrier in 
context switch")
Originally-by: Tim Chen 
Signed-off-by: Jiri Kosina 
---

Hopefully properly threaded now, sorry for the duplicates. alpine 2.21 
clearly has a bug where it sometimes eats the Reply-to: header :/ Apologies 
for all the noise

 arch/x86/mm/tlb.c  | 31 ---
 include/linux/ptrace.h |  4 
 kernel/ptrace.c| 12 
 3 files changed, 32 insertions(+), 15 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index e96b99eb800c..ed402441 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -7,6 +7,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -180,6 +181,19 @@ static void sync_current_stack_to_mm(struct mm_struct *mm)
}
 }
 
+static bool ibpb_needed(struct task_struct *tsk, u64 last_ctx_id)
+{
+   /*
+* Check if the current (previous) task has access to the memory
+* of the @tsk (next) task. If access is denied, make sure to
+* issue a IBPB to stop user->user Spectre-v2 attacks.
+*
+* Note: __ptrace_may_access() returns 0 or -ERRNO.
+*/
+   return (tsk && tsk->mm && tsk->mm->context.ctx_id != last_ctx_id &&
+   __ptrace_may_access(tsk, PTRACE_MODE_IBPB));
+}
+
 void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
struct task_struct *tsk)
 {
@@ -262,18 +276,13 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct 
mm_struct *next,
 * one process from doing Spectre-v2 attacks on another.
 *
 * As an optimization, flush indirect branches only when
-* switching into processes that disable dumping. This
-* protects high value processes like gpg, without having
-* too high performance overhead. IBPB is *expensive*!
-*
-* This will not flush branches when switching into kernel
-* threads. It will also not flush if we switch to idle
-* thread and back to the same process. It will flush if we
-* switch to a different non-dumpable process.
+* switching into a processes that can't be ptrace by the
+* current one (as in such case, attacker has much more
+* convenient way how to tamper with the next process than
+* branch buffer poisoning).
 */
-   if (tsk && tsk->mm &&
-   tsk->mm->context.ctx_id != last_ctx_id &&
-   get_dumpable(tsk->mm) != SUID_DUMP_USER)
+   if (static_cpu_has(X86_FEATURE_USE_IBPB) &&
+   ibpb_needed(tsk, last_ctx_id))
indirect_branch_prediction_barrier();
 
if (IS_ENABLED(CONFIG_VMAP_STACK)) {
diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h
index 4f36431c380b..983d3f5545a8 100644
--- a/include/linux/ptrace.h
+++ b/include/linux/ptrace.h
@@ -64,12 +64,15 @@ extern void exit_ptrace(struct task_struct *tracer, struct 
list_head *dead);
 #define PTRACE_MODE_NOAUDIT0x04
 #define PTRACE_MODE_FSCREDS 0x08
 #define PTRACE_MODE_REALCREDS 0x10
+#define PTRACE_MODE_NOACCESS_CHK 0x20
 
 /* shorthands for READ/ATTACH and FSCREDS/REALCREDS combinations */
 #define PTRACE_MODE_READ_FSCREDS (PTRACE_MODE_READ | PTRACE_MODE_FSCREDS)
 #define PTRACE_MODE_READ_REALCREDS (PTRACE_MODE_READ | PTRACE_MODE_REALCREDS)
 #define PTRACE_MODE_ATTACH_FSCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_FSCREDS)
 #define PTRACE_MODE_ATTACH_REALCREDS (PTRACE_MODE_ATTACH | 
PTRACE_MODE_REALCREDS)
+#define PTRACE_MODE_IBPB (PTRACE_MODE_ATTACH | PTRACE_MODE_NOAUDIT \
+ | PTRACE_MODE_NOACCESS_CHK | PTRACE_MODE_REALCREDS)
 
 /**
  * ptrace_may_access - check whether the caller is permitted to access
@@ -86,6 +89,7 @@ extern void exit_ptrace(struct task_struct *tracer, struct 
list_head *dead);
  * process_vm_writev o

[PATCH v4 2/2] x86/speculation: Enable cross-hyperthread spectre v2 STIBP mitigation

2018-09-06 Thread Jiri Kosina
From: Jiri Kosina 

STIBP is a feature provided by certain Intel ucodes / CPUs. This feature
(once enabled) prevents cross-hyperthread control of decisions made by
indirect branch predictors.

Enable this feature if

- the CPU is vulnerable to spectre v2
- the CPU supports SMT and has SMT siblings online
- spectre_v2 mitigation autoselection is enabled (default)

After some previous discussion, this patch leaves STIBP on all the time,
as wrmsr on crossing kernel boundary is a no-no. This could perhaps later
be a bit more optimized (like disabling it in NOHZ, experiment with
disabling it in idle, etc) if needed.

Note: the code could be made less awkward if it'd be guaranteed that STIBP
could be kept enabled on a primary thread while its SMT sibling is offline,
without potentially imposing a performance penalty. This doesn't seem to be
defined anywhere though, so let's be safe rather than sorry and always flip
STIBP both on primary and sibling threads on hotplug transitions.

Cc: sta...@vger.kernel.org
Signed-off-by: Jiri Kosina 
---
 arch/x86/kernel/cpu/bugs.c | 63 ++
 kernel/cpu.c   | 13 +-
 2 files changed, 75 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 40bdaea97fe7..ba3df0a49a2e 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -325,6 +325,56 @@ static enum spectre_v2_mitigation_cmd __init 
spectre_v2_parse_cmdline(void)
return cmd;
 }
 
+static bool stibp_needed(void)
+{
+   if (spectre_v2_enabled == SPECTRE_V2_NONE)
+   return false;
+
+   if (cpu_smt_control != CPU_SMT_ENABLED)
+   return false;
+
+   if (!boot_cpu_has(X86_FEATURE_STIBP))
+   return false;
+
+   return true;
+}
+
+/*
+ * The read-modify-write of the MSR doesn't need any race protection here,
+ * as we're running in atomic context.
+ */
+static void enable_stibp(void *info)
+{
+   u64 mask;
+   rdmsrl(MSR_IA32_SPEC_CTRL, mask);
+   mask |= SPEC_CTRL_STIBP;
+   wrmsrl(MSR_IA32_SPEC_CTRL, mask);
+}
+
+static void disable_stibp(void *info)
+{
+   u64 mask;
+   rdmsrl(MSR_IA32_SPEC_CTRL, mask);
+   mask &= ~SPEC_CTRL_STIBP;
+   wrmsrl(MSR_IA32_SPEC_CTRL, mask);
+}
+
+void arch_smt_enable_errata(void)
+{
+   if (stibp_needed()) {
+   pr_info("Spectre v2 cross-process SMT mitigation: Enabling 
STIBP\n");
+   on_each_cpu(enable_stibp, NULL, 1);
+   }
+}
+
+void arch_smt_disable_errata(void)
+{
+   if (stibp_needed()) {
+   pr_info("Spectre v2 cross-process SMT mitigation: Disabling 
STIBP\n");
+   on_each_cpu(disable_stibp, NULL, 1);
+   }
+}
+
 static void __init spectre_v2_select_mitigation(void)
 {
enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
@@ -424,6 +474,9 @@ static void __init spectre_v2_select_mitigation(void)
setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
pr_info("Enabling Restricted Speculation for firmware calls\n");
}
+
+   /* Enable STIBP on BP if needed */
+   arch_smt_enable_errata();
 }
 
 #undef pr_fmt
@@ -655,6 +708,16 @@ void x86_spec_ctrl_setup_ap(void)
 
if (ssb_mode == SPEC_STORE_BYPASS_DISABLE)
x86_amd_ssb_disable();
+
+   /*
+* If we are here during system bootup, enable STIBP.
+*
+* If we are here because of SMT hotplug, STIBP will be enabled by the
+* SMT control code (enabling here would not be sufficient, as it
+* needs to happen on primary threads as well).
+*/
+   if (stibp_needed() && system_state < SYSTEM_RUNNING)
+   enable_stibp(NULL);
 }
 
 #undef pr_fmt
diff --git a/kernel/cpu.c b/kernel/cpu.c
index aa7fe85ad62e..d3613d546829 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -2025,17 +2025,27 @@ static void cpuhp_online_cpu_device(unsigned int cpu)
kobject_uevent(&dev->kobj, KOBJ_ONLINE);
 }
 
+/*
+ * Architectures that need SMT-specific errata handling during SMT hotplug
+ * should override these.
+ */
+void __weak arch_smt_enable_errata(void) { };
+void __weak arch_smt_disable_errata(void) { };
+
 static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
 {
int cpu, ret = 0;
 
cpu_maps_update_begin();
+   arch_smt_disable_errata();
for_each_online_cpu(cpu) {
if (topology_is_primary_thread(cpu))
continue;
ret = cpu_down_maps_locked(cpu, CPUHP_OFFLINE);
-   if (ret)
+   if (ret) {
+   arch_smt_enable_errata();
break;
+   }
/*
 * As this needs to hold the cpu maps lock it's impossible
 * to call device_offline() because that ends up calling
@@ -2073,6 +2083,7 @@ static int cpuhp

[PATCH v4 1/2] x86/speculation: apply IBPB more strictly to avoid cross-process data leak

2018-09-06 Thread Jiri Kosina
From: Jiri Kosina 

Currently, we are issuing IBPB only in cases when switching into a non-dumpable
process, the rationale being to protect such 'important and security sensitive'
processes (such as GPG) from data leaking into a different userspace process via
spectre v2.

This is however completely insufficient to provide proper userspace-to-userspace
spectrev2 protection, as any process can poison branch buffers before being
scheduled out, and the newly scheduled process immediately becomes a spectrev2
victim.

In order to minimize the performance impact (for use cases that do require
spectrev2 protection), issue the barrier only in cases when switching between
processes where the victim can't be ptraced by the potential attacker (as in
such cases, the attacker doesn't have to bother with branch buffers at all).

Fixes: 18bf3c3ea8 ("x86/speculation: Use Indirect Branch Prediction Barrier in 
context switch")
Originally-by: Tim Chen 
Signed-off-by: Jiri Kosina 
---
 arch/x86/mm/tlb.c  | 31 ---
 include/linux/ptrace.h |  4 
 kernel/ptrace.c| 12 
 3 files changed, 32 insertions(+), 15 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index e96b99eb800c..ed402441 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -7,6 +7,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -180,6 +181,19 @@ static void sync_current_stack_to_mm(struct mm_struct *mm)
}
 }
 
+static bool ibpb_needed(struct task_struct *tsk, u64 last_ctx_id)
+{
+   /*
+* Check if the current (previous) task has access to the memory
+* of the @tsk (next) task. If access is denied, make sure to
+* issue a IBPB to stop user->user Spectre-v2 attacks.
+*
+* Note: __ptrace_may_access() returns 0 or -ERRNO.
+*/
+   return (tsk && tsk->mm && tsk->mm->context.ctx_id != last_ctx_id &&
+   __ptrace_may_access(tsk, PTRACE_MODE_IBPB));
+}
+
 void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
struct task_struct *tsk)
 {
@@ -262,18 +276,13 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct 
mm_struct *next,
 * one process from doing Spectre-v2 attacks on another.
 *
 * As an optimization, flush indirect branches only when
-* switching into processes that disable dumping. This
-* protects high value processes like gpg, without having
-* too high performance overhead. IBPB is *expensive*!
-*
-* This will not flush branches when switching into kernel
-* threads. It will also not flush if we switch to idle
-* thread and back to the same process. It will flush if we
-* switch to a different non-dumpable process.
+* switching into a processes that can't be ptrace by the
+* current one (as in such case, attacker has much more
+* convenient way how to tamper with the next process than
+* branch buffer poisoning).
 */
-   if (tsk && tsk->mm &&
-   tsk->mm->context.ctx_id != last_ctx_id &&
-   get_dumpable(tsk->mm) != SUID_DUMP_USER)
+   if (static_cpu_has(X86_FEATURE_USE_IBPB) &&
+   ibpb_needed(tsk, last_ctx_id))
indirect_branch_prediction_barrier();
 
if (IS_ENABLED(CONFIG_VMAP_STACK)) {
diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h
index 4f36431c380b..983d3f5545a8 100644
--- a/include/linux/ptrace.h
+++ b/include/linux/ptrace.h
@@ -64,12 +64,15 @@ extern void exit_ptrace(struct task_struct *tracer, struct 
list_head *dead);
 #define PTRACE_MODE_NOAUDIT0x04
 #define PTRACE_MODE_FSCREDS 0x08
 #define PTRACE_MODE_REALCREDS 0x10
+#define PTRACE_MODE_NOACCESS_CHK 0x20
 
 /* shorthands for READ/ATTACH and FSCREDS/REALCREDS combinations */
 #define PTRACE_MODE_READ_FSCREDS (PTRACE_MODE_READ | PTRACE_MODE_FSCREDS)
 #define PTRACE_MODE_READ_REALCREDS (PTRACE_MODE_READ | PTRACE_MODE_REALCREDS)
 #define PTRACE_MODE_ATTACH_FSCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_FSCREDS)
 #define PTRACE_MODE_ATTACH_REALCREDS (PTRACE_MODE_ATTACH | 
PTRACE_MODE_REALCREDS)
+#define PTRACE_MODE_IBPB (PTRACE_MODE_ATTACH | PTRACE_MODE_NOAUDIT \
+ | PTRACE_MODE_NOACCESS_CHK | PTRACE_MODE_REALCREDS)
 
 /**
  * ptrace_may_access - check whether the caller is permitted to access
@@ -86,6 +89,7 @@ extern void exit_ptrace(struct task_struct *tracer, struct 
list_head *dead);
  * process_vm_writev or ptrace (and should use the real credentials).
  */
 extern bool ptrace_may_access(struct task_struct *task, unsigned int mode);
+extern int __ptrace_may_access(s

[PATCH v4 0/2] Harden spectrev2 userspace-userspace protection

2018-09-06 Thread Jiri Kosina
Currently, the Linux kernel is basically not preventing userspace-userspace 
spectrev2 attacks, because:

- IBPB is basically unused (issued only for tasks that marked themselves 
  explicitly non-dumpable, which is an absolutely negligible minority of all 
  software out there), therefore cross-process branch buffer poisoning 
  using spectrev2 is possible

- STIBP is completely unused, therefore cross-process branch buffer 
  poisoning using spectrev2 between processes running on two HT sibling 
  threads is possible

This patchset changes IBPB semantics, so that it's now applied whenever 
context-switching between processes that can't use ptrace() to achieve 
the same. This admittedly comes with extra overhead on a context switch; 
systems that don't care about it can disable the mitigation using the 
nospectre_v2 boot option.
The IBPB implementation is heavily based on original patches by Tim Chen.

In addition to that, we unconditionally turn STIBP on so that HT siblings 
always have separate branch buffers.

We've been carrying an IBPB implementation with the same semantics in our 
(SUSE) trees since the January disclosure; STIBP was more or less ignored up 
to today.

v1->v2:
include IBPB changes
v2->v3: 
fix IBPB 'who can trace who' semantics
wire up STIBP flipping to SMT hotplug
v3->v4:
dropped ___ptrace_may_access(), as it's not needed
fixed deadlock with LSM/audit/selinux (Andrea Arcangeli)
statically patch out the ptrace check if !IBPB

Jiri Kosina (2):
  x86/speculation: apply IBPB more strictly to avoid cross-process data leak
  x86/speculation: Enable cross-hyperthread spectre v2 STIBP mitigation

 arch/x86/kernel/cpu/bugs.c | 63 
+++
 arch/x86/mm/tlb.c  | 31 ---
 include/linux/ptrace.h |  4 
 kernel/cpu.c   | 13 -
 kernel/ptrace.c| 12 
 5 files changed, 107 insertions(+), 16 deletions(-)

-- 
Jiri Kosina
SUSE Labs



Re: [PATCH v3 1/3] ptrace: Provide ___ptrace_may_access() that can be applied on arbitrary tasks

2018-09-05 Thread Jiri Kosina
On Wed, 5 Sep 2018, Andrea Arcangeli wrote:

> [ 1838.621010] Kernel panic - not syncing: Watchdog detected hard LOCKUP on 
> cpu 6
> [ 1838.629070] CPU: 6 PID: 0 Comm: swapper/6 Not tainted 
> 3.10.0-327.62.4.el7.x86_64 #1
> [ 1838.637610] Hardware name: Dell Inc. PowerEdge R430/0CN7X8, BIOS 2.4.2 
> 01/09/2017
> [ 1838.645954] Call Trace:
> [ 1838.648680][] dump_stack+0x19/0x1b
> [ 1838.655113]  [] panic+0xd8/0x1e7
> [ 1838.660460]  [] ? restart_watchdog_hrtimer+0x50/0x50
> [ 1838.667742]  [] watchdog_overflow_callback+0xc2/0xd0
> [ 1838.675024]  [] __perf_event_overflow+0xa1/0x250
> [ 1838.681920]  [] perf_event_overflow+0x14/0x20
> [ 1838.688526]  [] intel_pmu_handle_irq+0x1e8/0x470
> [ 1838.695423]  [] ? ioremap_page_range+0x24c/0x330
> [ 1838.702320]  [] ? unmap_kernel_range_noflush+0x11/0x20
> [ 1838.709797]  [] ? ghes_copy_tofrom_phys+0x124/0x210
> [ 1838.716984]  [] ? ghes_read_estatus+0xa0/0x190
> [ 1838.723687]  [] perf_event_nmi_handler+0x2b/0x50
> [ 1838.730582]  [] nmi_handle.isra.0+0x69/0xb0
> [ 1838.736992]  [] do_nmi+0x169/0x340
> [ 1838.742532]  [] end_repeat_nmi+0x1e/0x7e
> [ 1838.748653]  [] ? _raw_spin_lock_irqsave+0x3d/0x60
> [ 1838.755742]  [] ? _raw_spin_lock_irqsave+0x3d/0x60
> [ 1838.762831]  [] ? _raw_spin_lock_irqsave+0x3d/0x60
> [ 1838.769917]  <>  [] avc_compute_av+0x126/0x1b5
> [ 1838.777125]  [] ? walk_tg_tree_from+0xbe/0x110
> [ 1838.783828]  [] avc_has_perm_noaudit+0xc4/0x110
> [ 1838.790628]  [] cred_has_capability+0x6b/0x120
> [ 1838.797331]  [] ? ktime_get+0x4c/0xd0
> [ 1838.803160]  [] ? clockevents_program_event+0x6b/0xf0
> [ 1838.810532]  [] selinux_capable+0x2e/0x40
> [ 1838.816748]  [] security_capable_noaudit+0x15/0x20
> [ 1838.823829]  [] has_ns_capability_noaudit+0x15/0x20
> [ 1838.831014]  [] ptrace_has_cap+0x35/0x40
> [ 1838.837126]  [] ___ptrace_may_access+0xa7/0x1e0
> [ 1838.843925]  [] __schedule+0x26e/0xa00
> [ 1838.849855]  [] schedule_preempt_disabled+0x29/0x70
> [ 1838.857041]  [] cpu_startup_entry+0x184/0x290
> [ 1838.863637]  [] start_secondary+0x1da/0x250

So yeah, current Linus' tree needs the same treatment -- we have to avoid 
calling out to ptrace_has_cap() in PTRACE_MODE_NOACCESS_CHK cases.

I have updated the patch for v4. Thanks again,

-- 
Jiri Kosina
SUSE Labs



Re: [PATCH v3 1/3] ptrace: Provide ___ptrace_may_access() that can be applied on arbitrary tasks

2018-09-05 Thread Jiri Kosina
On Wed, 5 Sep 2018, Andrea Arcangeli wrote:

> Perhaps you didn't sandbox KVM inside selinux by default?

We by default do not enable selinux, so that's probably why.

Thanks again, will go through the code in mainline and adapt the patch 
before sending v4.

-- 
Jiri Kosina
SUSE Labs



Re: [PATCH v3 1/3] ptrace: Provide ___ptrace_may_access() that can be applied on arbitrary tasks

2018-09-05 Thread Jiri Kosina
On Wed, 5 Sep 2018, Andi Kleen wrote:

> Please if you repost include plenty of performance numbers for multi 
> threaded workloads.  It's ridiculous to even discuss this without them.

Talking about ridiculous ... I find it a bit sad that Intel has let this 
be unfixed for 3/4 of a year in Linux; that doesn't really signal deep 
dedication to customer safety. Have any STIBP patches even been submitted?

This is not the same situation as IBRS which was mostly ignored -- there 
we have retpolines to protect the kernel, and it's debatable whether it's 
exploitable on SKL at all.

Ignoring IBPB and STIBP keeps the system plainly vulnerable to 
user-user attacks, and us not providing users with the possibility to easily 
mitigate is a bit embarrassing in my eyes.

Thanks,

-- 
Jiri Kosina
SUSE Labs



Re: [PATCH v3 1/3] ptrace: Provide ___ptrace_may_access() that can be applied on arbitrary tasks

2018-09-05 Thread Jiri Kosina
On Wed, 5 Sep 2018, Andrea Arcangeli wrote:

> ptrace_has_cap(tcred->user_ns, mode) is supposed to eventually lockup
> hard if called from scheduler as it does some locking, and we fixed
> that already half a year ago.
> 
> Not sure how it's still unfixed in Jiri's codebase after so long, or
> if it's an issue specific to 3.10 and upstream gets away without this.

We haven't got any lockup reports in our kernels (and we do carry a 
variant of this patch), so it might be somehow specific to 3.10.

> diff --git a/kernel/ptrace.c b/kernel/ptrace.c
> index eb7862f185ff..4a8d0dd73c93 100644
> --- a/kernel/ptrace.c
> +++ b/kernel/ptrace.c
> @@ -285,7 +285,8 @@ int ___ptrace_may_access(struct task_struct *tracer,
>   gid_eq(caller_gid, tcred->sgid) &&
>   gid_eq(caller_gid, tcred->gid))
>   goto ok;
> - if (ptrace_has_cap(tcred->user_ns, mode))
> + if (!(mode & PTRACE_MODE_NOACCESS_CHK) &&
> + ptrace_has_cap(tcred->user_ns, mode))
>   goto ok;
>   rcu_read_unlock();
>   return -EPERM;
> @@ -296,7 +297,8 @@ ok:
>   dumpable = get_dumpable(task->mm);
>   rcu_read_lock();
>   if (dumpable != SUID_DUMP_USER &&
> - !ptrace_has_cap(__task_cred(task)->user_ns, mode)) {
> + ((mode & PTRACE_MODE_NOACCESS_CHK) ||
> +  !ptrace_has_cap(__task_cred(task)->user_ns, mode))) {
>   rcu_read_unlock();
>   return -EPERM;

I will look into whether this is still applicable or not, thanks a lot 
for the pointer.

(and no, my testing of the patch I sent on the current tree didn't produce any 
hangs -- was there a reliable way to trigger it on 3.10?).

Thanks,

-- 
Jiri Kosina
SUSE Labs



Re: [PATCH] hid: hid-core: Fix a sleep-in-atomic-context bug in __hid_request()

2018-09-05 Thread Jiri Kosina
On Sat, 1 Sep 2018, Jia-Ju Bai wrote:

> The driver may sleep with holding a spinlock.
> 
> The function call paths (from bottom to top) in Linux-4.16 are:
> 
> [FUNC] hid_alloc_report_buf(GFP_KERNEL)
> drivers/hid/hid-core.c, 1435: 
>   hid_alloc_report_buf in __hid_request
> ./include/linux/hid.h, 1023: 
>   __hid_request in hid_hw_request
> drivers/hid/hid-picolcd_core.c, 111: 
>   hid_hw_request in picolcd_send_and_wait
> drivers/hid/hid-picolcd_core.c, 100: 
>   _raw_spin_lock_irqsave in picolcd_send_and_wait
> 
> [FUNC] hid_alloc_report_buf(GFP_KERNEL)
> drivers/hid/hid-core.c, 1435: 
>   hid_alloc_report_buf in __hid_request
> ./include/linux/hid.h, 1023: 
>   __hid_request in hid_hw_request
> drivers/hid/hid-picolcd_core.c, 245: 
>   hid_hw_request in picolcd_reset
> drivers/hid/hid-picolcd_core.c, 235: 
>   _raw_spin_lock_irqsave in picolcd_reset
> 
> [FUNC] hid_alloc_report_buf(GFP_KERNEL)
> drivers/hid/hid-core.c, 1435: 
>   hid_alloc_report_buf in __hid_request
> ./include/linux/hid.h, 1023: 
>   __hid_request in hid_hw_request
> drivers/hid/hid-picolcd_fb.c, 215: 
>   hid_hw_request in picolcd_fb_reset
> drivers/hid/hid-picolcd_fb.c, 206: 
>   _raw_spin_lock_irqsave in picolcd_fb_reset
> 
> [FUNC] hid_alloc_report_buf(GFP_KERNEL)
> drivers/hid/hid-core.c, 1435: 
>   hid_alloc_report_buf in __hid_request
> ./include/linux/hid.h, 1023: 
>   __hid_request in hid_hw_request
> drivers/hid/hid-lg4ff.c, 465: 
>   hid_hw_request in lg4ff_play
> drivers/hid/hid-lg4ff.c, 441: 
>   _raw_spin_lock_irqsave in lg4ff_play
> 
> To fix this bug, GFP_KERNEL is replaced with GFP_ATOMIC.
> 
> This bug is found by my static analysis tool DSAC.

Could you please rewrite the changelog so that it's human readable? The 
above is a bit hard to understand; I think something along the lines of 
"__hid_request() has to allocate with GFP_ATOMIC because the following 
call chains lead to __hid_request() being invoked in atomic context: 
... a->b->c.._hid_request()" etc.
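
(For illustration only: a minimal sketch of the sleep-in-atomic pattern the 
changelog describes, not the actual HID/picolcd code. Allocating with 
GFP_KERNEL may sleep, which is not allowed while a spinlock is held with 
interrupts disabled, so such call paths must allocate with GFP_ATOMIC.)

#include <linux/slab.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);

static void *demo_alloc_under_lock(size_t len)
{
	unsigned long flags;
	void *buf;

	spin_lock_irqsave(&demo_lock, flags);
	/* kmalloc(..., GFP_KERNEL) here could sleep => "sleep in atomic" bug */
	buf = kmalloc(len, GFP_ATOMIC);	/* GFP_ATOMIC never sleeps */
	spin_unlock_irqrestore(&demo_lock, flags);

	return buf;
}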

Thanks,

-- 
Jiri Kosina
SUSE Labs



Re: [PATCH v2 0/2] Rumble support for Xbox One S controller

2018-09-05 Thread Jiri Kosina
On Wed, 15 Aug 2018, Andrey Smirnov wrote:

> This is v2 of the patches adding support for rumble for Xbox One S
> controller connected over Bluetooth. Hopefully all of the changes are
> pretty straightforward and self-explanatory.

Applied to for-4.20/microsoft branch. Thanks,

-- 
Jiri Kosina
SUSE Labs



Re: [PATCH] HID: core: fix NULL pointer dereference

2018-09-05 Thread Jiri Kosina
On Wed, 29 Aug 2018, Gustavo A. R. Silva wrote:

> There is a NULL pointer dereference in case memory resources
> for *parse* are not successfully allocated.
> 
> Fix this by adding a new goto label and make the execution
> path jump to it in case vzalloc() fails.
> 
> Addresses-Coverity-ID: 1473081 ("Dereference after null check")
> Fixes: b2dd9f2e5a8a ("HID: core: fix memory leak on probe")
> Signed-off-by: Gustavo A. R. Silva 
> ---
>  drivers/hid/hid-core.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
> index 4548dae..5bec924 100644
> --- a/drivers/hid/hid-core.c
> +++ b/drivers/hid/hid-core.c
> @@ -1000,7 +1000,7 @@ int hid_open_report(struct hid_device *device)
>   parser = vzalloc(sizeof(struct hid_parser));
>   if (!parser) {
>   ret = -ENOMEM;
> - goto err;
> + goto alloc_err;
>   }
>  
>   parser->device = device;
> @@ -1049,6 +1049,7 @@ int hid_open_report(struct hid_device *device)
>   hid_err(device, "item fetching failed at offset %d\n", (int)(end - 
> start));
>  err:
>   kfree(parser->collection_stack);
> +alloc_err:
>   vfree(parser);
>   hid_close_report(device);
>   return ret;

Queued in for-4.19/fixes. Thanks,

-- 
Jiri Kosina
SUSE Labs



Re: [PATCH v3 2/3] x86/speculation: apply IBPB more strictly to avoid cross-process data leak

2018-09-05 Thread Jiri Kosina
On Wed, 5 Sep 2018, Peter Zijlstra wrote:

> On Tue, Sep 04, 2018 at 06:18:55PM +0200, Thomas Gleixner wrote:
> > On Tue, 4 Sep 2018, Jiri Kosina wrote:
> > >   if (tsk && tsk->mm &&
> > >   tsk->mm->context.ctx_id != last_ctx_id &&
> > > - get_dumpable(tsk->mm) != SUID_DUMP_USER)
> > > + ___ptrace_may_access(current, tsk, PTRACE_MODE_IBPB))
> > 
> > Uurgh. If X86_FEATURE_USE_IBPB is not enabled, then the whole
> > __ptrace_may_access() overhead is just done for nothing.
> > 
> > >   indirect_branch_prediction_barrier();
> > 
> > This really wants to be runtime patched:
> > 
> > if (static_cpu_has(X86_FEATURE_USE_IBPB))
> > stop_speculation(tsk, last_ctx_id);
> > 
> > and have an inline for that:
> > 
> > static inline void stop_speculation(struct task_struct *tsk, u64 
> > last_ctx_id)
> > {
> > if (tsk && tsk->mm && tsk->mm->context.ctx_id != last_ctx_id &&
> > ___ptrace_may_access(current, tsk, PTRACE_MODE_IBPB))
> > indirect_branch_prediction_barrier();
> > }
> > 
> > which also makes the whole mess readable.
> 
> How about something like:
> 
>   if (static_cpu_has(X86_FEATURE_USE_IBPB) && need_ibpb(tsk, last_ctx_id))
>   indirect_branch_predictor_barrier();
> 
> where:
> 
> static inline bool need_ibpb(struct task_struct *next, u64 last_ctx_id)
> {
>   return next && next->mm && next->mm->context.ctx_id != last_ctx_id &&
>   __ptrace_may_access(next, PTRACE_MODE_IBPB));
> }
> 
> I don't much like "stop_speculation" for a name here.

Yeah, I did more or less that earlier today; my series currently has

+static bool ibpb_needed(struct task_struct *tsk, u64 last_ctx_id)
+{
+   return (tsk && tsk->mm && tsk->mm->context.ctx_id != last_ctx_id &&
+   __ptrace_may_access(tsk, PTRACE_MODE_IBPB));
+}
+
[ ... ]
-   if (tsk && tsk->mm &&
-   tsk->mm->context.ctx_id != last_ctx_id &&
-   get_dumpable(tsk->mm) != SUID_DUMP_USER)
+   if (static_cpu_has(X86_FEATURE_USE_IBPB) &&
+   ibpb_needed(tsk, last_ctx_id))
indirect_branch_prediction_barrier();

Thanks,

-- 
Jiri Kosina
SUSE Labs



Re: [PATCH v3 2/3] x86/speculation: apply IBPB more strictly to avoid cross-process data leak

2018-09-05 Thread Jiri Kosina
On Wed, 5 Sep 2018, Peter Zijlstra wrote:

> > if (tsk && tsk->mm &&
> > tsk->mm->context.ctx_id != last_ctx_id &&
> > -   get_dumpable(tsk->mm) != SUID_DUMP_USER)
> > +   ___ptrace_may_access(current, tsk, PTRACE_MODE_IBPB))
> > indirect_branch_prediction_barrier();
> 
> See how the new (first) argument is 'current' and you could've just used
> the old __ptrace_may_access().

Yeah, 1/3 is dropped in my current series already; this was cherry-picked 
from Tim's original series, but it is indeed superfluous.

Thanks,

-- 
Jiri Kosina
SUSE Labs



Re: [PATCH v3 1/3] ptrace: Provide ___ptrace_may_access() that can be applied on arbitrary tasks

2018-09-05 Thread Jiri Kosina
On Tue, 4 Sep 2018, Tim Chen wrote:

> I think STIBP should be an opt in option as it will have significant
> impact on performance.  The attack from neighbor thread is pretty
> difficult to pull off considering you have to know what the sibling
> thread is running and its address allocation.

In many scenarios the attacker can just easily taskset itself to the 
correct sibling.
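
(Just to illustrate how low the bar is: a userspace process can pin itself to 
an arbitrary logical CPU via sched_setaffinity(); the CPU number in the sketch 
below is of course hypothetical.)

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(3, &set);	/* hypothetical: the HT sibling of the victim's CPU */

	if (sched_setaffinity(0, sizeof(set), &set))	/* pid 0 == calling process */
		perror("sched_setaffinity");

	return 0;
}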

> We could also use a security module to opt in the STIBP policy.

I am a bit afraid that we are offloading to sysadmins decisions that are 
very hard for them to make, as they require deep understanding of both the 
technical details of the security issue in the CPU, and the mitigation.

I surely understand that Intel is doing what it can to minimize the 
performance effect, but achieving that by making it rocket science to 
configure properly doesn't feel right.

So, after giving it a bit more thought, I still believe "I want spectre V2 
protection" vs. "I do not care about spectre V2 on my system 
(=nospectre_v2)" are the sane options we should provide; so I'll respin v4 
of my patchset, including the ptrace check in switch_mm() (statically 
patched out on !IBPB-capable systems), and we can then later see whether 
the LSM implementation, once it exists, should be used instead.

Thanks,

-- 
Jiri Kosina
SUSE Labs



Re: [PATCH 0/4] HID fixes

2018-09-04 Thread Jiri Kosina
On Tue, 4 Sep 2018, Benjamin Tissoires wrote:

> Hi Jiri,
> 
> there is no real link between those 4 commit but the fact that I wrote
> them today ;)
> 
> 2 patches should at least be scheduled for v4.19: 1/4 and 3/4
> Both are stable fixes for mistakes I made in v4.18.
> 
> Patch 2 and 4 are just nice to have, so v4.20 should be fine.

And I've queued the patches exactly like that. Thanks,

-- 
Jiri Kosina
SUSE Labs



Re: [PATCH 1/4] HID: multitouch: fix Elan panels with 2 input modes declaration

2018-09-04 Thread Jiri Kosina
On Tue, 4 Sep 2018, Benjamin Tissoires wrote:

> When implementing commit 7f81c8db5489 ("HID: multitouch: simplify
> the settings of the various features"), I wrongly removed a test
> that made sure we never try to set the second InputMode feature
> to something else than 0.
> 
> This broke badly some recent Elan panels that now forget to send the
> click button in some area of the touchpad.
> 
> Fixes 7f81c8db5489

Please make sure that the 'Fixes:' tag is always in proper format. There 
are tools depending on that :)

I'll fix that up manually.
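
For reference, the canonical form, taking the sha and subject from the quoted 
changelog above, would be:

Fixes: 7f81c8db5489 ("HID: multitouch: simplify the settings of the various features")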

Thanks!

-- 
Jiri Kosina
SUSE Labs


