Re: [PATCH bpf-next v3 05/11] bpf, arm64, powerpc: Add bpf_jit_bypass_spec_v1/v4()

2025-05-18 Thread Hari Bathini




On 01/05/25 1:05 pm, Luis Gerhorst wrote:

JITs can set bpf_jit_bypass_spec_v1/v4() if they want the verifier to
skip analysis/patching for the respective vulnerability. For v4, this
will reduce the number of barriers the verifier inserts. For v1, it
allows more programs to be accepted.

The primary motivation for this is to not regress unpriv BPF's
performance on ARM64 in a future commit where BPF_NOSPEC is also used
against Spectre v1.

This has the user-visible change that v1-induced rejections on
non-vulnerable PowerPC CPUs are avoided.

For now, this does not change the semantics of BPF_NOSPEC. It is still a
v4-only barrier and must not be implemented if bypass_spec_v4 is always
true for the arch. Changing it to a v1 AND v4-barrier is done in a
future commit.

As an alternative to bypass_spec_v1/v4, one could introduce NOSPEC_V1
AND NOSPEC_V4 instructions and allow backends to skip their lowering as
suggested by commit f5e81d111750 ("bpf: Introduce BPF nospec instruction
for mitigating Spectre v4"). Adding bpf_jit_bypass_spec_v1/v4() was
found to be preferable for the following reasons:

* bypass_spec_v1/v4 benefits non-vulnerable CPUs: Always performing the
   same analysis (not taking into account whether the current CPU is
   vulnerable) needlessly restricts users of CPUs that are not
   vulnerable. The only use case for this would be portability-testing,
   but this can later be added easily when needed by allowing users to
   force bypass_spec_v1/v4 to false.

* Portability is still acceptable: Directly disabling the analysis
   instead of skipping the lowering of BPF_NOSPEC(_V1/V4) might allow
   programs on non-vulnerable CPUs to be accepted while the program will
   be rejected on vulnerable CPUs. With the fallback to speculation
   barriers for Spectre v1 implemented in a future commit, this will only
   affect programs that do variable stack-accesses or are very complex.

For PowerPC, the SEC_FTR checking in bpf_jit_bypass_spec_v4() is based
on the check that was previously located in the BPF_NOSPEC case.

For LoongArch, it would likely be safe to set both
bpf_jit_bypass_spec_v1() and _v4() according to
commit a6f6a95f2580 ("LoongArch, bpf: Fix jit to skip speculation
barrier opcode"). This is omitted here as I am unable to do any testing
for LoongArch.

Signed-off-by: Luis Gerhorst 
Cc: Henriette Herzog 
Cc: Maximilian Ott 
Cc: Milan Stephan 


for the powerpc part..

Acked-by: Hari Bathini 


---
  arch/arm64/net/bpf_jit_comp.c | 21 -
  arch/powerpc/net/bpf_jit_comp64.c | 21 +
  include/linux/bpf.h   | 11 +--
  kernel/bpf/core.c | 15 +++
  4 files changed, 53 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 70d7c89d3ac9..0f617b55866e 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -1583,15 +1583,7 @@ static int build_insn(const struct bpf_insn *insn, 
struct jit_ctx *ctx,
  
	/* speculation barrier */
case BPF_ST | BPF_NOSPEC:
-   /*
-* Nothing required here.
-*
-* In case of arm64, we rely on the firmware mitigation of
-* Speculative Store Bypass as controlled via the ssbd kernel
-* parameter. Whenever the mitigation is enabled, it works
-* for all of the kernel code with no need to provide any
-* additional instructions.
-*/
+   /* See bpf_jit_bypass_spec_v4() */
break;
  
	/* ST: *(size *)(dst + off) = imm */
@@ -2762,6 +2754,17 @@ bool bpf_jit_supports_percpu_insn(void)
return true;
  }
  
+bool bpf_jit_bypass_spec_v4(void)
+{
+   /* In case of arm64, we rely on the firmware mitigation of Speculative
+* Store Bypass as controlled via the ssbd kernel parameter. Whenever
+* the mitigation is enabled, it works for all of the kernel code with
+* no need to provide any additional instructions. Therefore, skip
+* inserting nospec insns against Spectre v4.
+*/
+   return true;
+}
+
  bool bpf_jit_inlines_helper_call(s32 imm)
  {
switch (imm) {
diff --git a/arch/powerpc/net/bpf_jit_comp64.c 
b/arch/powerpc/net/bpf_jit_comp64.c
index 233703b06d7c..b5339c541283 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -363,6 +363,23 @@ static int bpf_jit_emit_tail_call(u32 *image, struct 
codegen_context *ctx, u32 o
return 0;
  }
  
+bool bpf_jit_bypass_spec_v1(void)
+{
+#if defined(CONFIG_PPC_E500) || defined(CONFIG_PPC_BOOK3S_64)
+   return !(security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
+security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR));
+#else
+   return true;
+#endif
+}
+
+bool bpf_jit_bypass_spec_v4(void)
+{
+   return !(security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
+security_ftr_enabled(SEC_FTR_STF_BARRIER) &&
+stf_barrier_type_get() != STF_BARRIER_NONE);
+}

Re: [PATCH bpf-next v3 05/11] bpf, arm64, powerpc: Add bpf_jit_bypass_spec_v1/v4()

2025-05-01 Thread Kumar Kartikeya Dwivedi
On Thu, 1 May 2025 at 09:51, Luis Gerhorst  wrote:
>
> JITs can set bpf_jit_bypass_spec_v1/v4() if they want the verifier to
> skip analysis/patching for the respective vulnerability. For v4, this
> will reduce the number of barriers the verifier inserts. For v1, it
> allows more programs to be accepted.
>
> The primary motivation for this is to not regress unpriv BPF's
> performance on ARM64 in a future commit where BPF_NOSPEC is also used
> against Spectre v1.
>
> This has the user-visible change that v1-induced rejections on
> non-vulnerable PowerPC CPUs are avoided.
>
> For now, this does not change the semantics of BPF_NOSPEC. It is still a
> v4-only barrier and must not be implemented if bypass_spec_v4 is always
> true for the arch. Changing it to a v1 AND v4-barrier is done in a
> future commit.
>
> As an alternative to bypass_spec_v1/v4, one could introduce NOSPEC_V1
> AND NOSPEC_V4 instructions and allow backends to skip their lowering as
> suggested by commit f5e81d111750 ("bpf: Introduce BPF nospec instruction
> for mitigating Spectre v4"). Adding bpf_jit_bypass_spec_v1/v4() was
> found to be preferable for the following reasons:
>
> * bypass_spec_v1/v4 benefits non-vulnerable CPUs: Always performing the
>   same analysis (not taking into account whether the current CPU is
>   vulnerable) needlessly restricts users of CPUs that are not
>   vulnerable. The only use case for this would be portability-testing,
>   but this can later be added easily when needed by allowing users to
>   force bypass_spec_v1/v4 to false.
>
> * Portability is still acceptable: Directly disabling the analysis
>   instead of skipping the lowering of BPF_NOSPEC(_V1/V4) might allow
>   programs on non-vulnerable CPUs to be accepted while the program will
>   be rejected on vulnerable CPUs. With the fallback to speculation
>   barriers for Spectre v1 implemented in a future commit, this will only
>   affect programs that do variable stack-accesses or are very complex.
>
> For PowerPC, the SEC_FTR checking in bpf_jit_bypass_spec_v4() is based
> on the check that was previously located in the BPF_NOSPEC case.
>
> For LoongArch, it would likely be safe to set both
> bpf_jit_bypass_spec_v1() and _v4() according to
> commit a6f6a95f2580 ("LoongArch, bpf: Fix jit to skip speculation
> barrier opcode"). This is omitted here as I am unable to do any testing
> for LoongArch.
>
> Signed-off-by: Luis Gerhorst 
> Cc: Henriette Herzog 
> Cc: Maximilian Ott 
> Cc: Milan Stephan 
> ---

I think this and the next patch should have acks from arm and powerpc experts.



[PATCH bpf-next v3 05/11] bpf, arm64, powerpc: Add bpf_jit_bypass_spec_v1/v4()

2025-05-01 Thread Luis Gerhorst
JITs can set bpf_jit_bypass_spec_v1/v4() if they want the verifier to
skip analysis/patching for the respective vulnerability. For v4, this
will reduce the number of barriers the verifier inserts. For v1, it
allows more programs to be accepted.

The primary motivation for this is to not regress unpriv BPF's
performance on ARM64 in a future commit where BPF_NOSPEC is also used
against Spectre v1.

This has the user-visible change that v1-induced rejections on
non-vulnerable PowerPC CPUs are avoided.

For now, this does not change the semantics of BPF_NOSPEC. It is still a
v4-only barrier and must not be implemented if bypass_spec_v4 is always
true for the arch. Changing it to a v1 AND v4-barrier is done in a
future commit.

As an alternative to bypass_spec_v1/v4, one could introduce NOSPEC_V1
AND NOSPEC_V4 instructions and allow backends to skip their lowering as
suggested by commit f5e81d111750 ("bpf: Introduce BPF nospec instruction
for mitigating Spectre v4"). Adding bpf_jit_bypass_spec_v1/v4() was
found to be preferable for the following reasons:

* bypass_spec_v1/v4 benefits non-vulnerable CPUs: Always performing the
  same analysis (not taking into account whether the current CPU is
  vulnerable) needlessly restricts users of CPUs that are not
  vulnerable. The only use case for this would be portability-testing,
  but this can later be added easily when needed by allowing users to
  force bypass_spec_v1/v4 to false.

* Portability is still acceptable: Directly disabling the analysis
  instead of skipping the lowering of BPF_NOSPEC(_V1/V4) might allow
  programs on non-vulnerable CPUs to be accepted while the program will
  be rejected on vulnerable CPUs. With the fallback to speculation
  barriers for Spectre v1 implemented in a future commit, this will only
  affect programs that do variable stack-accesses or are very complex.
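
The verifier-facing side of these hooks lives in the include/linux/bpf.h hunk,
which does not appear in this posting. As a rough sketch only (the helper names,
the bpf_token argument, and the cpu_mitigations_off()/CAP_PERFMON checks below
are assumptions based on the existing bypass helpers, not taken from this
patch), the idea is that the JIT hook is simply ORed into the existing
privilege checks, so a JIT returning true disables the respective analysis
regardless of who loads the program:

bool bpf_jit_bypass_spec_v1(void);
bool bpf_jit_bypass_spec_v4(void);

/* Assumed shape of the verifier-facing helpers; not the literal hunk. */
static inline bool bpf_bypass_spec_v1(const struct bpf_token *token)
{
	/* Spectre v1 analysis is skipped if the JIT reports the CPU as not
	 * affected, if mitigations are globally off, or if the loader holds
	 * CAP_PERFMON.
	 */
	return bpf_jit_bypass_spec_v1() ||
	       cpu_mitigations_off() ||
	       bpf_token_capable(token, CAP_PERFMON);
}

static inline bool bpf_bypass_spec_v4(const struct bpf_token *token)
{
	/* Likewise for Spectre v4: true means the verifier does not need to
	 * insert BPF_NOSPEC barriers.
	 */
	return bpf_jit_bypass_spec_v4() ||
	       cpu_mitigations_off() ||
	       bpf_token_capable(token, CAP_PERFMON);
}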

For PowerPC, the SEC_FTR checking in bpf_jit_bypass_spec_v4() is based
on the check that was previously located in the BPF_NOSPEC case.

For LoongArch, it would likely be safe to set both
bpf_jit_bypass_spec_v1() and _v4() according to
commit a6f6a95f2580 ("LoongArch, bpf: Fix jit to skip speculation
barrier opcode"). This is omitted here as I am unable to do any testing
for LoongArch.

Signed-off-by: Luis Gerhorst 
Cc: Henriette Herzog 
Cc: Maximilian Ott 
Cc: Milan Stephan 
---
 arch/arm64/net/bpf_jit_comp.c | 21 -
 arch/powerpc/net/bpf_jit_comp64.c | 21 +
 include/linux/bpf.h   | 11 +--
 kernel/bpf/core.c | 15 +++
 4 files changed, 53 insertions(+), 15 deletions(-)
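
Only the arm64 and powerpc hunks are reproduced below; the include/linux/bpf.h
and kernel/bpf/core.c changes from the diffstat are cut off. Presumably, and
this is an assumption rather than the literal hunk, core.c provides weak
defaults in the usual bpf_jit_* style so that architectures without an
override keep both mitigations:

bool __weak bpf_jit_bypass_spec_v1(void)
{
	/* Default: do not bypass, the verifier keeps its Spectre v1 analysis. */
	return false;
}

bool __weak bpf_jit_bypass_spec_v4(void)
{
	/* Default: do not bypass, the verifier keeps emitting v4 barriers. */
	return false;
}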

diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 70d7c89d3ac9..0f617b55866e 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -1583,15 +1583,7 @@ static int build_insn(const struct bpf_insn *insn, 
struct jit_ctx *ctx,
 
/* speculation barrier */
case BPF_ST | BPF_NOSPEC:
-   /*
-* Nothing required here.
-*
-* In case of arm64, we rely on the firmware mitigation of
-* Speculative Store Bypass as controlled via the ssbd kernel
-* parameter. Whenever the mitigation is enabled, it works
-* for all of the kernel code with no need to provide any
-* additional instructions.
-*/
+   /* See bpf_jit_bypass_spec_v4() */
break;
 
/* ST: *(size *)(dst + off) = imm */
@@ -2762,6 +2754,17 @@ bool bpf_jit_supports_percpu_insn(void)
return true;
 }
 
+bool bpf_jit_bypass_spec_v4(void)
+{
+   /* In case of arm64, we rely on the firmware mitigation of Speculative
+* Store Bypass as controlled via the ssbd kernel parameter. Whenever
+* the mitigation is enabled, it works for all of the kernel code with
+* no need to provide any additional instructions. Therefore, skip
+* inserting nospec insns against Spectre v4.
+*/
+   return true;
+}
+
 bool bpf_jit_inlines_helper_call(s32 imm)
 {
switch (imm) {
diff --git a/arch/powerpc/net/bpf_jit_comp64.c 
b/arch/powerpc/net/bpf_jit_comp64.c
index 233703b06d7c..b5339c541283 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -363,6 +363,23 @@ static int bpf_jit_emit_tail_call(u32 *image, struct 
codegen_context *ctx, u32 o
return 0;
 }
 
+bool bpf_jit_bypass_spec_v1(void)
+{
+#if defined(CONFIG_PPC_E500) || defined(CONFIG_PPC_BOOK3S_64)
+   return !(security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
+security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR));
+#else
+   return true;
+#endif
+}
+
+bool bpf_jit_bypass_spec_v4(void)
+{
+   return !(security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
+security_ftr_enabled(SEC_FTR_STF_BARRIER) &&
+stf_barrier_type_get() != STF_BARRIER_NONE);
+}
+
 /*
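
To illustrate the LoongArch remark from the commit message: an architecture
that is affected by neither issue (and whose JIT already lowers BPF_NOSPEC to
nothing, as commit a6f6a95f2580 does for LoongArch) could opt out of both
analyses by overriding the two hooks. This is a hypothetical sketch, not part
of this series and not tested:

bool bpf_jit_bypass_spec_v1(void)
{
	/* Assumption: this CPU does not speculate past bounds checks
	 * (Spectre v1), so the verifier's v1 analysis can be skipped.
	 */
	return true;
}

bool bpf_jit_bypass_spec_v4(void)
{
	/* Assumption: this CPU is not vulnerable to Speculative Store
	 * Bypass (Spectre v4), so no barriers are needed.
	 */
	return true;
}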