On Tue, May 1, 2018 at 4:16 PM, Alexei Starovoitov <alexei.starovoi...@gmail.com> wrote:
> On Mon, Apr 30, 2018 at 10:15:05AM -0700, William Tu wrote:
>> The existing verifier does not allow 'ctx + const + const'. However, due to
>> compiler optimization, there is a case where the BPF compiler generates
>> 'ctx + const + 0', as shown below:
>>
>>   599: (1d) if r2 == r4 goto pc+2
>>    R0=inv(id=0) R1=ctx(id=0,off=40,imm=0)
>>    R2=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff))
>>    R3=inv(id=0,umax_value=65535,var_off=(0x0; 0xffff)) R4=inv0
>>    R6=ctx(id=0,off=0,imm=0) R7=inv2
>>   600: (bf) r1 = r6                // r1 is ctx
>>   601: (07) r1 += 36               // r1 has offset 36
>>   602: (61) r4 = *(u32 *)(r1 +0)   // r1 + 0
>>   dereference of modified ctx ptr R1 off=36+0, ctx+const is allowed,
>>   ctx+const+const is not
>>
>> The reason the BPF backend generates this code is an optimization
>> like the following, explained by Yonghong:
>>
>>     if (...)
>>         *(ctx + 60)
>>     else
>>         *(ctx + 56)
>>
>> The compiler translates it to
>>
>>     if (...)
>>         ptr = ctx + 60
>>     else
>>         ptr = ctx + 56
>>     *(ptr + 0)
>>
>> So the load through ptr becomes an example of 'ctx + const + 0'. This
>> patch enables support for this case.
>>
>> Fixes: f8ddadc4db6c7 ("Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net")
>> Cc: Yonghong Song <y...@fb.com>
>> Signed-off-by: Yifeng Sun <pkusunyif...@gmail.com>
>> Signed-off-by: William Tu <u9012...@gmail.com>
>> ---
>>  kernel/bpf/verifier.c                       |  2 +-
>>  tools/testing/selftests/bpf/test_verifier.c | 13 +++++++++++++
>>  2 files changed, 14 insertions(+), 1 deletion(-)
>>
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index 712d8655e916..c9a791b9cf2a 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -1638,7 +1638,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
>>               /* ctx accesses must be at a fixed offset, so that we can
>>                * determine what type of data were returned.
>>                */
>> -             if (reg->off) {
>> +             if (reg->off && off != reg->off) {
>>                       verbose(env,
>>                               "dereference of modified ctx ptr R%d off=%d+%d, ctx+const is allowed, ctx+const+const is not\n",
>>                               regno, reg->off, off - reg->off);
>> diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
>> index 1acafe26498b..95ad5d5723ae 100644
>> --- a/tools/testing/selftests/bpf/test_verifier.c
>> +++ b/tools/testing/selftests/bpf/test_verifier.c
>> @@ -8452,6 +8452,19 @@ static struct bpf_test tests[] = {
>>               .prog_type = BPF_PROG_TYPE_SCHED_CLS,
>>       },
>>       {
>> +             "arithmetic ops make PTR_TO_CTX + const + 0 valid",
>> +             .insns = {
>> +                     BPF_ALU64_IMM(BPF_ADD, BPF_REG_1,
>> +                                   offsetof(struct __sk_buff, data) -
>> +                                   offsetof(struct __sk_buff, mark)),
This is:
    r1 += N                  // r1 has offset N (the offset between data and mark)

>> +                     BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0),

This is:
    r0 = *(u32 *)(r1 +0)     // r1 + 0

So the above two lines create a case similar to the one I hit:
    601: (07) r1 += 36               // r1 has offset 36
    602: (61) r4 = *(u32 *)(r1 +0)   // r1 + 0

>
> How does the rewritten code look here?
>
> The patch is allowing check_ctx_access() to proceed with a sort-of
> correct 'off' and remember ctx_field_size,
> but convert_ctx_accesses() uses insn->off to do the conversion,
> which is zero in this case. So it will convert
>     struct __sk_buff {
>         __u32 len;   // offset 0
> into an access of 'struct sk_buff'->len
> and then add __sk_buff's &data - &mark delta to the in-kernel len field,
> which will point to some random field further down in struct sk_buff.
> Doesn't look correct at all.

Why? It points to ctx + "offsetof(struct __sk_buff, data) -
offsetof(struct __sk_buff, mark)", which is ctx + const, and then I
tested that 'ctx + const + 0' passes the verifier.

> How did you test this patch?
>
Without the patch, the test case fails. With the patch, the test case
passes.

William
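The hoisting Yonghong describes can be sketched in plain C. This is an illustrative mock, not code from the patch: `mock_ctx` and `load` are made-up names standing in for `struct __sk_buff` and the BPF program, and the comments map each step to the 'ctx + const + 0' pattern the verifier sees.

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the BPF context (assumption: simplified mock,
 * not the real struct __sk_buff). 16 u32 words = 64 bytes. */
struct mock_ctx {
	uint32_t words[16];
};

static uint32_t load(const struct mock_ctx *ctx, int cond)
{
	const uint32_t *ptr;

	/* The compiler computes the address in each branch ...   */
	if (cond)
		ptr = &ctx->words[15];	/* ctx + 60 */
	else
		ptr = &ctx->words[14];	/* ctx + 56 */

	/* ... then emits a single load at 'ptr + 0', which the
	 * verifier sees as 'ctx + const + 0'. */
	return *(ptr + 0);
}
```

Whether such a load can be verified is exactly the point of contention above: the register already carries the constant offset, so the access itself has insn offset 0.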