On Thu, 8 Sep 2016 23:30:50 -0700 Alexei Starovoitov <alexei.starovoi...@gmail.com> wrote:
> On Fri, Sep 09, 2016 at 07:36:52AM +0200, Jesper Dangaard Brouer wrote:
[...]
> > Imagine you have packets intermixed towards the stack and XDP_TX.
> > Every time you call the stack code, then you flush your icache. When
> > returning to the driver code, you will have to reload all the icache
> > associated with the XDP_TX, this is a costly operation.
> [...]
> To make further progress in this discussion can we talk about
> the use case you have in mind instead? Then solution will
> be much clear, I hope.

The DDoS use-case _is_ affected by this "hidden" bulking design.

Let's say I want to implement a DDoS facility. Instead of just dropping
the malicious packets, I want to see the bad packets. I implement this
by rewriting the destination MAC to be my monitor machine and then
XDP_TX'ing the packet (a minimal sketch of such a program is included
after my signature).

In the DDoS use-case, you have loaded your XDP/eBPF program, and 100%
of the traffic is delivered to the stack (see note 1). Once the DDoS
attack starts, the traffic pattern changes, and XDP should (hopefully)
catch only the malicious traffic (the monitor machine can help diagnose
false positives). Now, because the DDoS traffic is interleaved with the
clean traffic, the efficiency of XDP_TX is reduced due to more icache
misses...

Note(1): Notice I have already demonstrated that loading an XDP/eBPF
program with 100% delivery to the stack actually slows down the normal
stack. This is due to hitting a bottleneck in the page allocator. I'm
working on removing that bottleneck with page_pool, and that solution
is orthogonal to this problem. It is actually an excellent argument for
why you would want to run a DDoS XDP filter only on a restricted number
of RX queues.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer
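
P.S. A minimal sketch of the XDP program described above, to make the
use-case concrete. The is_ddos_packet() classifier and the monitor MAC
address are hypothetical placeholders (the real matching logic is
use-case specific); this is an illustration, not a finished filter:

/* Sketch: redirect suspected-DDoS packets to a monitor machine via
 * XDP_TX instead of dropping them; clean traffic goes to the stack. */
#include <linux/bpf.h>
#include <linux/if_ether.h>

#ifndef __always_inline
#define __always_inline inline __attribute__((always_inline))
#endif

/* Hypothetical MAC address of the monitor machine (placeholder). */
static const unsigned char monitor_mac[ETH_ALEN] = {
	0x00, 0x11, 0x22, 0x33, 0x44, 0x55
};

/* Placeholder classifier -- real matching logic goes here. */
static __always_inline int is_ddos_packet(void *data, void *data_end)
{
	return 0; /* treat all traffic as clean by default */
}

__attribute__((section("xdp"), used))
int xdp_ddos_monitor(struct xdp_md *ctx)
{
	void *data     = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	struct ethhdr *eth = data;

	/* Bounds check required by the eBPF verifier. */
	if ((void *)(eth + 1) > data_end)
		return XDP_ABORTED;

	if (is_ddos_packet(data, data_end)) {
		/* Instead of XDP_DROP: rewrite the destination MAC to
		 * the monitor machine and bounce the packet back out
		 * the receiving port with XDP_TX, so the bad packets
		 * can be inspected there. */
		__builtin_memcpy(eth->h_dest, monitor_mac, ETH_ALEN);
		return XDP_TX;
	}

	/* Clean traffic continues to the normal stack. */
	return XDP_PASS;
}

char _license[] __attribute__((section("license"), used)) = "GPL";

Note that XDP_TX transmits out the same interface the packet arrived
on, so this assumes the monitor machine is reachable on that L2
segment. Something like this should build with clang -O2 -target bpf
and, on sufficiently recent kernels/iproute2, attach via ip link.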