[dpdk-users] KNI sample application ping from remote host

2019-09-10 Thread Andrew Wang
Hi

I tried running the KNI sample application. The host machine has two "82599ES
10-Gigabit SFI/SFP+ Network Connection" interfaces, one of which was bound to
igb_uio:

Network devices using DPDK-compatible driver

0000:04:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
drv=igb_uio unused=
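
(For anyone reproducing this: the bind step is normally done with the
dpdk-devbind.py script in the source tree, roughly as below; the igb_uio.ko
path assumes the default 18.11 make build target and will differ on other
setups.)

# load the UIO framework and the DPDK igb_uio module (build path is an example)
modprobe uio
insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
# bind the second 82599ES port and show the result
./usertools/dpdk-devbind.py --bind=igb_uio 0000:04:00.1
./usertools/dpdk-devbind.py --status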



I started the kni application on that interface

root@localhost:/tmp/dpdk-stable-18.11.2/examples/kni# build/kni -c 0x3  -n
2 -- -P -p 0x1  --config="(0,0,1)"
EAL: Detected 24 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:04:00.1 on NUMA socket 0
EAL:   probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:08:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1583 net_i40e
EAL: PCI device 0000:08:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1583 net_i40e
APP: Initialising port 0 ...
KNI: pci: 04:00:01 8086:10fb

Checking link status
done
Port0 Link Up - speed 10000Mbps - full-duplex
APP: 
APP: KNI Running
APP: kill -SIGUSR1 29622
APP: Show KNI Statistics.
APP: kill -SIGUSR2 29622
APP: Zero KNI Statistics.
APP: 
APP: Lcore 1 is writing to port 0
APP: Lcore 0 is reading from port 0
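
(To spell out the options used above, as the 18.11 KNI sample documents them:)

build/kni -c 0x3 -n 2 -- -P -p 0x1 --config="(0,0,1)"
#  -c 0x3               EAL coremask: use lcores 0 and 1
#  -n 2                 EAL: two memory channels
#  -P                   put the port into promiscuous mode
#  -p 0x1               port bitmask: enable port 0 only
#  --config="(0,0,1)"   port 0, RX on lcore 0, TX on lcore 1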



I then assigned the vEth0 interface an address on a local subnet and brought
it up:

root@localhost:~# ip addr add 192.168.0.4/24 dev vEth0
root@localhost:~# ip link set vEth0 up



I can ping the interface locally:

root@localhost:~# ping -c 1 192.168.0.4
PING 192.168.0.4 (192.168.0.4) 56(84) bytes of data.
64 bytes from 192.168.0.4: icmp_seq=1 ttl=64 time=0.040 ms

--- 192.168.0.4 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms



But from a different host, the ping won't succeed:
root@gateway-09:~# ping -c 1 192.168.0.4
PING 192.168.0.4 (192.168.0.4) 56(84) bytes of data.
From 192.168.0.2 icmp_seq=1 Destination Host Unreachable

--- 192.168.0.4 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
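
(Side note: a "Destination Host Unreachable" reported from the pinging host's
own address usually means its ARP request for 192.168.0.4 went unanswered.
A quick check:)

# on the remote host: did ARP for the KNI address resolve?
ip neigh show to 192.168.0.4
# on the DPDK host: the MAC address that vEth0 carries
ip link show vEth0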



Is this expected behavior? I had thought the sample KNI application would
just grab packets and forward them to the kernel, but that does not seem to
be the case. How do I get the DPDK app to grab all packets it sees on the
interface and forward them to the kernel?

Thanks
Andrew


Re: [dpdk-users] AF_PACKET pmd and linux networking stack

2018-08-25 Thread Andrew Wang
Rami

Thank you for the suggestion. I later found it was a problem with the DPDK
app (it was setting an incorrect MAC address).
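
(For anyone hitting something similar: watching the link-level headers of what
the app actually emits is a quick way to spot a wrong source MAC; the
interface name below is only a placeholder.)

# -e prints the Ethernet header so the source MAC is visible
tcpdump -eni eth0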

Andrew

On Fri, Aug 17, 2018 at 5:15 PM Rami Rosen wrote:

> Hi Andrew,
> I am not sure at all that there is a way of achieving this, because of the
> way AF_PACKET works.
> But couldn't you achieve your goal by running the DPDK L2FWD or L3FWD
> application? In that case, when not running in promiscuous mode, it seems
> this would avoid processing of the packets by the kernel. See the l2fwd and
> l3fwd sections in the DPDK Sample Applications User Guide.
>
> Regards,
> Rami Rosen
>
>
>
> On Wed, Aug 8, 2018, 19:57 Andrew Wang wrote:
>
>> Hi
>>
>> Is there a way of preventing the linux kernel networking stack from
>> handling packets when using the AF_PACKET pmd?
>>
>> Our DPDK app is running on a node that is attracting traffic for a VIP and
>> for which it has a blackhole routing rule (to drop all the incoming
>> packets
>> for that VIP).
>>
>> The intention was to have our DPDK app (running on AF_PACKET pmd for now -
>> we're still developing the app) grab those packets, process and send them
>> out (with a different address).
>>
>> Right now we can actually see those packets in our dpdk app, we process
>> them, give them a different address, but it seems that the linux kernel
>> networking stack is still dropping them.
>>
>> When the blackhole rule is removed we see the outgoing packet with correct
>> header, but also a destination unreachable message is sent out, suggesting
>> the kernel is also handling the packet. We see the same behavior when
>> using
>> iptables to drop packets (instead of blackhole route).
>>
>> Any suggestions appreciated.
>>
>> Thanks
>> Andrew
>>
>


[dpdk-users] AF_PACKET pmd and linux networking stack

2018-08-08 Thread Andrew Wang
Hi

Is there a way of preventing the Linux kernel networking stack from
handling packets when using the AF_PACKET PMD?

Our DPDK app is running on a node that attracts traffic for a VIP and that
has a blackhole routing rule (to drop all the incoming packets for that
VIP).
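
(A blackhole rule of this kind looks like the following; the documentation
address stands in for the real VIP.)

ip route add blackhole 198.51.100.10/32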

The intention was to have our DPDK app (running on the AF_PACKET PMD for now;
we're still developing the app) grab those packets, process them, and send
them out (with a different address).
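
(For context, the af_packet PMD is attached to a kernel interface through a
vdev argument along these lines; the binary name and interface are
placeholders.)

./build/our_app -l 0-1 -n 2 --vdev=eth_af_packet0,iface=eth0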

Right now we can actually see those packets in our DPDK app; we process them
and give them a different address, but it seems that the Linux kernel
networking stack is still dropping them.

When the blackhole rule is removed we see the outgoing packet with the
correct header, but a destination unreachable message is also sent out,
suggesting the kernel is handling the packet as well. We see the same
behavior when using iptables to drop the packets (instead of a blackhole
route).
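
(The iptables alternative would be a drop rule of roughly this form, again
with a placeholder VIP; the right chain depends on whether the VIP is locally
assigned.)

iptables -t raw -I PREROUTING -d 198.51.100.10 -j DROP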

Any suggestions appreciated.

Thanks
Andrew