On 07/06/2017 08:27 PM, Edward Cree wrote:
> On 04/07/17 23:28, Daniel Borkmann wrote:
>> Have you tried with cilium's BPF code? The kernel selftests are quite small,
>> so not really pushing processed insns too far. I can send you a BPF obj file
>> if that's easier for testing.
> Results from the next (in-progress) version of the patch series, with the
>   'id' bugfix I mentioned in my other mail, and rebased onto an updated
>   net-next (0e72582).  Numbers collected with:
> # tc filter add dev lo egress bpf da obj /path/to/bpf_object.o sec $section verb 2>&1 |
>   grep "processed" | awk -e 'BEGIN { N = 0; }' -e '{ N += $2; }' -e 'END { print N; }'
>
> Program                net-next   short    full
> bpf_lb_opt_-DLB_L3.o       4707    5872    6515
> bpf_lb_opt_-DLB_L4.o       7662    8652    8976
> bpf_lb_opt_-DUNKNOWN.o      727    2972    2960
> bpf_lxc_opt_-DDROP_ALL.o  57725   85750   95412
> bpf_lxc_opt_-DUNKNOWN.o   93676  134043  141706
> bpf_netdev.o              14702   24665   24251
> bpf_overlay.o              7303   10939   10999
>
> Conclusion: the ptr&const and full-range min/max tracking make little
>   difference (10% increase at most, sometimes a decrease); most of the
>   increase comes from the basic "replace imm and aux_off/align with tnums"
>   patch.

Okay, thanks for the analysis, Edward.
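
For others following along on the list: as I read the series, a "tnum"
tracks each value as a (value, mask) pair, where the mask bits are unknown
and value holds the known bits; a constant is simply mask == 0, and
alignment survives arithmetic because known-zero low bits stay known.
Rough userspace sketch of the idea (my reading, not the actual patch code;
names may differ):

#include <stdint.h>
#include <stdio.h>

/* Sketch only: a tnum tracks a 64-bit quantity as known bits (value)
 * plus unknown bits (mask); invariant: value & mask == 0. */
struct tnum {
        uint64_t value; /* contents of the known bits (0 where unknown) */
        uint64_t mask;  /* 1 = this bit's contents are unknown */
};

static struct tnum tnum_const(uint64_t v)
{
        return (struct tnum){ .value = v, .mask = 0 };
}

/* Addition: any bit that a carry out of an unknown bit can reach must
 * itself become unknown. */
static struct tnum tnum_add(struct tnum a, struct tnum b)
{
        uint64_t sm = a.mask + b.mask;
        uint64_t sv = a.value + b.value;
        uint64_t sigma = sm + sv;
        uint64_t chi = sigma ^ sv;              /* bits flipped by unknown carries */
        uint64_t mu = chi | a.mask | b.mask;    /* final set of unknown bits */

        return (struct tnum){ .value = sv & ~mu, .mask = mu };
}

int main(void)
{
        /* unknown offset that is known to be a multiple of 4 */
        struct tnum off = { .value = 0, .mask = ~3ULL };
        struct tnum r = tnum_add(tnum_const(14), off);

        /* low two bits stay known: value=0x2, mask=0x...fc, i.e. r % 4 == 2 */
        printf("value=%#llx mask=%#llx\n",
               (unsigned long long)r.value, (unsigned long long)r.mask);
        return 0;
}

So adding a known 14 to a 4-aligned but otherwise unknown offset leaves the
low two bits known (result % 4 == 2), which is the information the old
aux_off/align tracking used to carry.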

> So based on what Alexei was saying earlier, it sounds like the answer for
>   now is to up the limit (say to a round 128k), get this series merged,
>   then start work on pruning optimisation so we can hopefully bring that
>   limit back down again later.  Sound reasonable?
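
On the pruning point, for reference: the extra processed insns come from
the verifier's "did I already verify an equivalent state at this insn?"
check pruning less often, since richer per-register tracking makes fewer
states compare as equivalent. Toy sketch of the idea, not the actual
verifier code:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Toy stand-in for per-insn pruning state; the real verifier keeps a
 * list of already-verified states per instruction index and compares
 * the full register/stack state, not a single range. */
struct toy_state {
        long long min_value;
        long long max_value;
};

struct explored_state {
        struct explored_state *next;
        struct toy_state state;
};

/* Safe to prune if an already-verified state covers everything the
 * current one could be: whatever was proven for the wider state also
 * holds for the narrower one. */
static bool state_covers(const struct toy_state *old, const struct toy_state *cur)
{
        return old->min_value <= cur->min_value &&
               old->max_value >= cur->max_value;
}

static bool can_prune(const struct explored_state *seen, const struct toy_state *cur)
{
        for (const struct explored_state *sl = seen; sl; sl = sl->next)
                if (state_covers(&sl->state, cur))
                        return true;    /* skip re-walking the rest of this path */
        return false;
}

int main(void)
{
        struct explored_state seen = { .next = NULL, .state = { 0, 64 } };
        struct toy_state cur = { 0, 16 };

        printf("prune: %d\n", can_prune(&seen, &cur));  /* 1: [0,16] is inside [0,64] */
        return 0;
}

The real check (states_equal() in kernel/bpf/verifier.c) looks at all
registers plus the stack, but the trade-off is the same: the more precise
the tracked state, the rarer the match, the more paths get walked to the end.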

But this means the bpf_lxc_* cases increase quite significantly. Arguably
one of them would still fit under a 128k limit, but the other would not:
142k shoots over the 128k target quite a bit, and even the 95k case is
close enough that it wouldn't take much, say a few different optimizations
from the compiler, to eventually hit that limit as well. Something like
156k would seem a more adequate raise for the time being, but that needs
to be evaluated carefully given the situation.
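
Just to make the knob explicit for anyone reading along: the number the awk
above sums up is what do_check() compares against BPF_COMPLEXITY_LIMIT_INSNS,
which is 98304 in current net-next if I'm not mistaken, so "raising the
limit" literally means bumping that define. Quick standalone illustration
with the bpf_lxc numbers from the table (a sketch, not the kernel code):

#include <errno.h>
#include <stdio.h>

/* Assumed current value of the cap in kernel/bpf/verifier.c; the
 * discussion above is about bumping it to 128k or ~156k. */
#define BPF_COMPLEXITY_LIMIT_INSNS      98304

static int check_insn_budget(int insn_processed, int limit)
{
        if (insn_processed > limit) {
                fprintf(stderr, "BPF program is too large. Processed %d insn\n",
                        insn_processed);
                return -E2BIG;
        }
        return 0;
}

int main(void)
{
        /* bpf_lxc numbers with the full series, from the table above */
        printf("%d\n", check_insn_budget(95412, BPF_COMPLEXITY_LIMIT_INSNS)); /* 0: still fits, barely */
        printf("%d\n", check_insn_budget(141706, 128 * 1024));                /* -E2BIG: over a 128k cap */
        printf("%d\n", check_insn_budget(141706, 156 * 1024));                /* 0: fits under a ~156k cap */
        return 0;
}

Purely illustrative, of course; the real check sits in do_check() and uses
the same insn_processed counter that produces the "processed N insns" line
being grepped above.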