Re: [PATCH 4/6] arm64: add sysfs vulnerability show for spectre v2

2019-01-02 Thread Jeremy Linton

Hi,

On 12/13/2018 05:09 AM, Julien Thierry wrote:



On 06/12/2018 23:44, Jeremy Linton wrote:

Add code to track whether all the cores in the machine are
vulnerable, and whether all the vulnerable cores have been
mitigated.

Once we have that information we can add the sysfs stub and
provide an accurate view of what is known about the machine.

Signed-off-by: Jeremy Linton 
---
  arch/arm64/kernel/cpu_errata.c | 72 +++---
  1 file changed, 67 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 559ecdee6fd2..6505c93d507e 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c


[...]


@@ -766,4 +812,20 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
return sprintf(buf, "Mitigation: __user pointer sanitization\n");
  }
  
+ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
+   char *buf)
+{
+   switch (__spectrev2_safe) {
+   case A64_SV2_SAFE:
+   return sprintf(buf, "Not affected\n");
+   case A64_SV2_UNSAFE:
+   if (__hardenbp_enab == A64_HBP_MIT)
+   return sprintf(buf,
+   "Mitigation: Branch predictor hardening\n");
+   return sprintf(buf, "Vulnerable\n");
+   default:
+   return sprintf(buf, "Unknown\n");
+   }


Again I see that we are going to display "Unknown" when the mitigation
is not built in.

Couldn't we make that CONFIG_GENERIC_CPU_VULNERABILITIES check whether a
CPU is vulnerable or not even if the mitigation is not implemented? It's
just checking the list of MIDRs.



Before I re-post, it's probably worth pointing out that 
spectrev2_safe isn't set the same way as the meltdown safe flag (which 
reflects a whitelist, or "cpu known good" flag), where the unknown and 
unsafe conditions are currently treated the same.


spectrev2_safe is a combined white/black list: a blacklist of known 
vulnerable cores, plus cores with CSV2 set indicating they are good. 
This means the unset condition conceptually covers both the check being 
disabled and the core being neither known bad nor known good. So you 
still need a dedicated "unknown" state, because the final state isn't 
unknown simply because the mitigation is not compiled in.
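
To make that concrete, here is a minimal userspace sketch of the update 
rule (the harness, update_core() and the main() driver, is hypothetical; 
only the update logic mirrors check_branch_predictor()):

#include <stdbool.h>
#include <stdio.h>

enum sv2_state { SV2_UNSET, SV2_SAFE, SV2_UNSAFE };

static enum sv2_state spectrev2_safe = SV2_UNSET;

/* Called once per booted core, like check_branch_predictor(). */
static void update_core(bool in_blacklist, bool has_csv2)
{
	if (in_blacklist)
		spectrev2_safe = SV2_UNSAFE;	/* one bad core taints the machine */
	else if (spectrev2_safe == SV2_UNSET && has_csv2)
		spectrev2_safe = SV2_SAFE;	/* only while nothing bad was seen */
	/* neither blacklisted nor CSV2: stays SV2_UNSET -> "Unknown" */
}

int main(void)
{
	update_core(false, false);	/* core neither known bad nor known good */
	puts(spectrev2_safe == SV2_UNSAFE ? "Vulnerable" :
	     spectrev2_safe == SV2_SAFE ? "Not affected" : "Unknown");
	return 0;
}

The third branch is the point: a core that is neither blacklisted nor 
CSV2 leaves the state unset, which is why "Unknown" can't be folded into 
either of the other two answers.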




Re: [PATCH 4/6] arm64: add sysfs vulnerability show for spectre v2

2018-12-13 Thread Julien Thierry



On 06/12/2018 23:44, Jeremy Linton wrote:
> Add code to track whether all the cores in the machine are
> vulnerable, and whether all the vulnerable cores have been
> mitigated.
> 
> Once we have that information we can add the sysfs stub and
> provide an accurate view of what is known about the machine.
> 
> Signed-off-by: Jeremy Linton 
> ---
>  arch/arm64/kernel/cpu_errata.c | 72 +++---
>  1 file changed, 67 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 559ecdee6fd2..6505c93d507e 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c

[...]

> @@ -766,4 +812,20 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
>   return sprintf(buf, "Mitigation: __user pointer sanitization\n");
>  }
>  
> +ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
> + char *buf)
> +{
> + switch (__spectrev2_safe) {
> + case A64_SV2_SAFE:
> + return sprintf(buf, "Not affected\n");
> + case A64_SV2_UNSAFE:
> + if (__hardenbp_enab == A64_HBP_MIT)
> + return sprintf(buf,
> + "Mitigation: Branch predictor hardening\n");
> + return sprintf(buf, "Vulnerable\n");
> + default:
> + return sprintf(buf, "Unknown\n");
> + }

Again I see that we are going to display "Unknown" when the mitigation
is not built in.

Couldn't we make that CONFIG_GENERIC_CPU_VULNERABILITIES check whether a
CPU is vulnerable or not even if the mitigation is not implemented? It's
just checking the list of MIDRs.
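
For illustration, an untested sketch of that idea (not code from the 
thread; it reuses the helpers and state the patch already defines), 
classifying the core from the MIDR list even when the hardening itself 
isn't built:

#if defined(CONFIG_GENERIC_CPU_VULNERABILITIES) && \
	!defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
/* Sketch only: classify without installing any hardening callback. */
static bool
check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
{
	bool is_vul;
	bool has_csv2;
	u64 pfr0;

	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());

	is_vul = is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list);

	pfr0 = read_cpuid(ID_AA64PFR0_EL1);
	has_csv2 = cpuid_feature_extract_unsigned_field(pfr0,
						ID_AA64PFR0_CSV2_SHIFT);

	if (is_vul)
		__spectrev2_safe = A64_SV2_UNSAFE;
	else if (__spectrev2_safe == A64_SV2_UNSET && has_csv2)
		__spectrev2_safe = A64_SV2_SAFE;

	return is_vul;
}
#endif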

Thanks,

-- 
Julien Thierry


[PATCH 4/6] arm64: add sysfs vulnerability show for spectre v2

2018-12-06 Thread Jeremy Linton
Add code to track whether all the cores in the machine are
vulnerable, and whether all the vulnerable cores have been
mitigated.

Once we have that information we can add the sysfs stub and
provide an accurate view of what is known about the machine.

Signed-off-by: Jeremy Linton 
---
 arch/arm64/kernel/cpu_errata.c | 72 +++---
 1 file changed, 67 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 559ecdee6fd2..6505c93d507e 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -109,6 +109,11 @@ cpu_enable_trap_ctr_access(const struct arm64_cpu_capabilities *__unused)
 
 atomic_t arm64_el2_vector_last_slot = ATOMIC_INIT(-1);
 
+#if defined(CONFIG_HARDEN_BRANCH_PREDICTOR) || defined(CONFIG_GENERIC_CPU_VULNERABILITIES)
+/* Track overall mitigation state. We are only mitigated if all cores are ok */
+static enum { A64_HBP_UNSET, A64_HBP_MIT, A64_HBP_NOTMIT } __hardenbp_enab = A64_HBP_UNSET;
+#endif
+
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 #include 
 #include 
@@ -231,15 +236,19 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
if (!entry->matches(entry, SCOPE_LOCAL_CPU))
return;
 
-   if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
+   if (psci_ops.smccc_version == SMCCC_VERSION_1_0) {
+   __hardenbp_enab = A64_HBP_NOTMIT;
return;
+   }
 
switch (psci_ops.conduit) {
case PSCI_CONDUIT_HVC:
arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
  ARM_SMCCC_ARCH_WORKAROUND_1, );
-   if ((int)res.a0 < 0)
+   if ((int)res.a0 < 0) {
+   __hardenbp_enab = A64_HBP_NOTMIT;
return;
+   }
cb = call_hvc_arch_workaround_1;
/* This is a guest, no need to patch KVM vectors */
smccc_start = NULL;
@@ -249,14 +258,17 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
case PSCI_CONDUIT_SMC:
arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
  ARM_SMCCC_ARCH_WORKAROUND_1, );
-   if ((int)res.a0 < 0)
+   if ((int)res.a0 < 0) {
+   __hardenbp_enab = A64_HBP_NOTMIT;
return;
+   }
cb = call_smc_arch_workaround_1;
smccc_start = __smccc_workaround_1_smc_start;
smccc_end = __smccc_workaround_1_smc_end;
break;
 
default:
+   __hardenbp_enab = A64_HBP_NOTMIT;
return;
}
 
@@ -266,6 +278,9 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 
install_bp_hardening_cb(entry, cb, smccc_start, smccc_end);
 
+   if (__hardenbp_enab == A64_HBP_UNSET)
+   __hardenbp_enab = A64_HBP_MIT;
+
return;
 }
 #endif /* CONFIG_HARDEN_BRANCH_PREDICTOR */
@@ -539,7 +554,36 @@ multi_entry_cap_cpu_enable(const struct arm64_cpu_capabilities *entry)
caps->cpu_enable(caps);
 }
 
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+#if defined(CONFIG_HARDEN_BRANCH_PREDICTOR) || \
+   defined(CONFIG_GENERIC_CPU_VULNERABILITIES)
+
+static enum { A64_SV2_UNSET, A64_SV2_SAFE, A64_SV2_UNSAFE } __spectrev2_safe = A64_SV2_UNSET;
+
+/*
+ * Track overall bp hardening for all heterogeneous cores in the machine.
+ * We are only considered "safe" if all booted cores are known safe.
+ */
+static bool __maybe_unused
+check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
+{
+   bool is_vul;
+   bool has_csv2;
+   u64 pfr0;
+
+   WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+
+   is_vul = is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list);
+
+   pfr0 = read_cpuid(ID_AA64PFR0_EL1);
+   has_csv2 = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_CSV2_SHIFT);
+
+   if (is_vul)
+   __spectrev2_safe = A64_SV2_UNSAFE;
+   else if (__spectrev2_safe == A64_SV2_UNSET && has_csv2)
+   __spectrev2_safe = A64_SV2_SAFE;
+
+   return is_vul;
+}
 
 /*
  * List of CPUs where we need to issue a psci call to
@@ -728,7 +772,9 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
{
.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
.cpu_enable = enable_smccc_arch_workaround_1,
-   ERRATA_MIDR_RANGE_LIST(arm64_bp_harden_smccc_cpus),
+   .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+   .matches = check_branch_predictor,
+   .midr_range_list = arm64_bp_harden_smccc_cpus,
},
 #endif
 #ifdef CONFIG_HARDEN_EL2_VECTORS
@@ -766,4 +812,20 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
 }
 
+ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
+   char *buf)
+{
+   switch (__spectrev2_safe) {
+   case A64_SV2_SAFE:
+   return sprintf(buf, "Not affected\n");
+   case A64_SV2_UNSAFE:
+   if (__hardenbp_enab == A64_HBP_MIT)
+   return sprintf(buf,
+   "Mitigation: Branch predictor hardening\n");
+   return sprintf(buf, "Vulnerable\n");
+   default:
+   return sprintf(buf, "Unknown\n");
+   }
+}

[PATCH 4/6] arm64: add sysfs vulnerability show for spectre v2

2018-08-07 Thread Mian Yousaf Kaukab
Only report mitigation present if hardening callback has been
successfully installed.

Signed-off-by: Mian Yousaf Kaukab 
---
 arch/arm64/kernel/cpu_errata.c | 34 +-
 1 file changed, 33 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 92616431ae4e..8469d3be7b15 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -481,7 +481,8 @@ multi_entry_cap_cpu_enable(const struct arm64_cpu_capabilities *entry)
caps->cpu_enable(caps);
 }
 
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+#if defined(CONFIG_HARDEN_BRANCH_PREDICTOR) || \
+   defined(CONFIG_GENERIC_CPU_VULNERABILITIES)
 
 /*
  * List of CPUs where we need to issue a psci call to
@@ -712,4 +713,35 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
return sprintf(buf, "Mitigation: __user pointer sanitization\n");
 }
 
+ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
+   char *buf)
+{
+   u64 pfr0;
+   struct bp_hardening_data *data;
+
+   pfr0 = read_cpuid(ID_AA64PFR0_EL1);
+   if (cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_CSV2_SHIFT))
+   return sprintf(buf, "Not affected\n");
+
+   if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR)) {
+   /*
+* Hardware is vulnerable. Let's check if the bp hardening
+* callback has been successfully installed.
+*/
+   data = arm64_get_bp_hardening_data();
+   if (data && data->fn)
+   return sprintf(buf,
+   "Mitigation: Branch predictor hardening\n");
+   else
+   /* For example SMCCC_VERSION_1_0 */
+   return sprintf(buf, "Vulnerable\n");
+   }
+
+   /* In case CONFIG_HARDEN_BRANCH_PREDICTOR is not enabled */
+   if (is_midr_in_range_list(read_cpuid_id(), arm64_bp_harden_smccc_cpus))
+   return sprintf(buf, "Vulnerable\n");
+
+   return sprintf(buf, "Not affected\n");
+}
+
 #endif
-- 
2.11.0
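
As a usage note (illustrative, not part of the patch): with 
CONFIG_GENERIC_CPU_VULNERABILITIES the reported state lands in sysfs at 
/sys/devices/system/cpu/vulnerabilities/spectre_v2, readable from 
userspace, e.g.:

#include <stdio.h>

/* Illustrative reader for the sysfs file these patches populate. */
int main(void)
{
	char line[128];
	FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/spectre_v2", "r");

	if (!f) {
		perror("fopen");
		return 1;
	}
	if (fgets(line, sizeof(line), f))
		fputs(line, stdout);	/* e.g. "Mitigation: Branch predictor hardening" */
	fclose(f);
	return 0;
}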


