On Tue, Nov 14, 2017 at 03:19:51PM +0100, Daniel Borkmann wrote:
> On 11/14/2017 02:42 PM, Arnaldo Carvalho de Melo wrote:
> > On Tue, Nov 14, 2017 at 02:09:34PM +0100, Daniel Borkmann wrote:
> >> On 11/14/2017 01:58 PM, Arnaldo Carvalho de Melo wrote:
> >> Currently having a version compiled from the git tree:

> >> # llc --version
> >> LLVM (http://llvm.org/):
> >>   LLVM version 6.0.0git-2d810c2
> >>   Optimized build.
> >>   Default target: x86_64-unknown-linux-gnu
> >>   Host CPU: skylake

> > [root@jouet bpf]# llc --version
> > LLVM (http://llvm.org/):
> >   LLVM version 4.0.0svn

> > Old stuff! ;-) Will change, but improving these messages should be on
> > the radar, I think :-)

> Yep, agree, I think we need a generic, better solution for this type of
> issue instead of converting individual helpers to handle 0 min bound and
> then only bailing out in such case; need to brainstorm a bit on that.
 
> I think for the above in your case ...
 
>  [...]
>   6: (85) call bpf_probe_read_str#45
>   7: (bf) r1 = r0
>   8: (67) r1 <<= 32
>   9: (77) r1 >>= 32
>  10: (15) if r1 == 0x0 goto pc+10
>   R0=inv(id=0) R1=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R6=ctx(id=0,off=0,imm=0) R10=fp0
>  11: (57) r0 &= 127
>  [...]
 
> ... the shifts on r1 might be due to using 32 bit type, so if you find
> a way to avoid these and have the test on r0 directly, we might get there.
> Perhaps keep using a 64 bit type to avoid them. It would be useful to
> propagate the deduced bound information back to r0 when we know that
> neither r0 nor r1 has changed in the meantime.

I changed len/ret to u64, but that didn't help; updating clang and llvm to
see if that helps...

Will end up working directly with eBPF bytecode, which is what I really
need in 'perf trace', but let's get this sorted out first.

- Arnaldo
