On 04/04/18 00:37, Alexei Starovoitov wrote:
> hmm. that doesn't fail for me and any other bots didn't complain.
> Are you sure you're running the latest kernel and tests?
Ah, test_progs isn't actually rebuilding because __NR_bpf is undeclared;
 something must be going wrong with the header files.
Never mind.

> hmm. what's wrong with bsearch? It's trivial and fast. 
bsearch is O(log n) per lookup, and the sort() call on the subprog_starts
 (which happens every time add_subprog() is called) is O(n log n);
 whereas reading aux->subprogno is O(1).
As far as I'm concerned, that's a sign that the latter is the
 appropriate data structure.

> Even if we don't see the solution today we have to work towards it.
I guess I'm just not confident "towards" is the direction you think it is.

> Compiler designers could have combined multiple of such passes into
> fewer ones, but it's not done, because it increases complexity and
> causes tough bugs where pass is partially complete.
I'm not trying to combine multiple 'for bb in prog/for insn in bb'-type
 passes.  The combining I was doing was of the 'for all possible
 execution paths' type, because it's those passes that explode
 combinatorially.
Happily I think we can go a long way towards getting rid of them; but while I
 think we can get down to only having 1, I don't think we can reach 0.

> The prime example where more than 4k instructions and loops are mandatory
> is user space stack analysis inside the program. Like walking python stack
> requires non-trivial pointer chasing. With 'pragma unroll' the stack depth
> limit today is ~18. That's not really usable. Looping through 100 python
> frames would require about 16k bpf assembler instructions.
But this would be solved by having support for bounded loops, and I think I've
 successfully shown that this is not inherently incompatible with a
 do_check()-style walk.

> Hence do_check approach must go. The rough idea is to compute per basic
> block a set of INs (registers and stack) that basic block needs
> to see to be safe and corresponding set of OUTs.
> Then propagate this knowledge across cfg edges.
> Once we have such set per bpf function, it will essentially answer the 
> question
> 'what arguments this function needs to see to be safe and what it returns'
> To make bpf libraries scale we'd need to keep such information
> around after the verification, so dynamic linking and indirect calls
> are fast to verify.
> It's very high level obviously. There are many gotchas to resolve.
I agree that if we can do this it'll be ideal.  But that's a big 'if'; my
 example code was intended to demonstrate that the "set of INs a bb/func
 needs to see to be safe" can be an arbitrarily complicated disjunction,
 and that instead of a combinatorially exploding number of paths to walk
 (the do_check() approach), you now have combinatorially exploding
 IN-constraints to propagate backwards.

> Please do, since that's my concern with tsort.
> The verifier is the key piece of bpf infra and to be effective maintainers
> we need to thoroughly understand the verifier code.
> We cannot just take the patch based on the cover letter. The author may
> disappear tomorrow and what we're going to do with the code?
I entirely accept this argument.  Unfortunately, when writing explanations,
 it's difficult to know when one has reached something that will be
 understood, so inevitably there will have to be a few iterations to get
 there ;-)
