On 2016/11/15 0:05, Kavanagh, Mark B wrote:


On 2016/10/28 17:47, Kavanagh, Mark B wrote:
Currently, when doing a packet copy, much of the DPDK mbuf's metadata
is lost, such as the packet type, ol_flags, etc.
That information is very important for DPDK packet processing.

Signed-off-by: Michael Qiu <qiud...@chinac.com>
Signed-off-by: Jijiang Liu <liujiji...@chinac.com>
---
lib/dp-packet.c   | 3 +++
lib/netdev-dpdk.c | 4 ++++
2 files changed, 7 insertions(+)

diff --git a/lib/dp-packet.c b/lib/dp-packet.c
index bf8522e..619f651 100644
--- a/lib/dp-packet.c
+++ b/lib/dp-packet.c
@@ -175,6 +175,9 @@ dp_packet_clone_with_headroom(const struct dp_packet *buffer, size_t headroom)
      new_buffer->cutlen = buffer->cutlen;
#ifdef DPDK_NETDEV
      new_buffer->mbuf.ol_flags = buffer->mbuf.ol_flags;
+    new_buffer->mbuf.tx_offload = buffer->mbuf.tx_offload;
+    new_buffer->mbuf.packet_type = buffer->mbuf.packet_type;
+    new_buffer->mbuf.nb_segs = buffer->mbuf.nb_segs;
#else
      new_buffer->rss_hash_valid = buffer->rss_hash_valid;
#endif
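
To illustrate what the preserved fields feed into, here is a minimal
hypothetical sketch (not part of this patch; it assumes the DPDK 16.x
mbuf layout): the l2_len/l3_len bitfields live inside mbuf.tx_offload,
so if a clone zeroes tx_offload, any checksum offload requested via
ol_flags would operate on the wrong byte ranges.

#include <rte_ip.h>
#include <rte_mbuf.h>

/* Hypothetical Tx-prepare step: with PKT_TX_IP_CKSUM set in ol_flags,
 * the driver locates the IP header via l2_len, which is stored in
 * mbuf.tx_offload.  A clone that drops tx_offload would make the NIC
 * checksum the wrong bytes. */
static void
tx_prepare_checksum(struct rte_mbuf *m)
{
    if (m->ol_flags & PKT_TX_IP_CKSUM) {
        struct ipv4_hdr *ip;

        ip = rte_pktmbuf_mtod_offset(m, struct ipv4_hdr *, m->l2_len);
        ip->hdr_checksum = 0;   /* HW recomputes this on Tx. */
    }
}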
diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
This doesn't apply cleanly for me; apply the change described below to resolve.
Hi Mark,

Sorry, my patchset is based on version tag 2.6; I will make a v2 patch
based on the latest code.

index 27b4ee2..ad92504 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -1589,6 +1589,10 @@ dpdk_do_tx_copy(struct netdev *netdev, int qid, struct dp_packet_batch *batch)
          memcpy(rte_pktmbuf_mtod(mbufs[newcnt], void *),
                 dp_packet_data(batch->packets[i]), size);

+        mbufs[newcnt]->nb_segs = batch->packets[i]->mbuf.nb_segs;
+        mbufs[newcnt]->ol_flags = batch->packets[i]->mbuf.ol_flags;
+        mbufs[newcnt]->packet_type = batch->packets[i]->mbuf.packet_type;
I'm not sure if this change is needed; mbuf->packet_type is only useful
on the NIC Rx path, right? Please let me know if I've missed something.

Yes, you are right. There are two reasons:

First, it does no harm to copy it in the Tx path; at least, I haven't
found any harm. If I'm wrong, please show me.
Hmm, I can't think of any examples off the top of my head, but I'd be
worried that it could lead to inconsistencies between the packet_type
copied from the batch mbuf and the actual headers present in the packet
following processing by OvS. More on this below.
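
For context, the typical Rx-path consumer looks like the hypothetical
helper below (RTE_ETH_IS_IPV4_HDR is from rte_mbuf.h): the PMD derives
packet_type from the Rx descriptor at rte_eth_rx_burst() time, and
nothing updates it afterwards, which is where the staleness risk comes
from.

#include <rte_mbuf.h>

/* Hypothetical helper: packet_type is trustworthy right after Rx,
 * because the PMD filled it from the NIC descriptor.  Once OvS
 * rewrites headers (e.g. pops a tunnel header), the cached type may
 * no longer describe the actual packet. */
static int
is_ipv4_no_parse(const struct rte_mbuf *m)
{
    return RTE_ETH_IS_IPV4_HDR(m->packet_type);
}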

At least, we should never lose anything when doing a packet copy, right?

Second, we could reuse this field to carry tunnel info (tun_type) when
doing the header push; thus, in the DPDK Tx path, we could easily get the
packet's tunnel type (VXLAN, IPIP, GRE, etc.) and apply the corresponding
tunnel offload settings.
IMO this requires a number of changes which should be contained in their
own separate patch(set); as part of that patchset, the packet_type could
be copied from 'batch->packets[i]' into 'mbufs[newcnt]'.

One caveat: how will 'packet_type' be populated for packets which don't
originate from the NIC? Presumably, the additional processing required to
set packet_type for such packets on the datapath would kill performance.
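
For what it's worth, populating packet_type in software would look
something like the sketch below; rte_net_get_ptype() only appeared in
DPDK 16.11, and calling it per packet is exactly the kind of datapath
parsing cost in question.

#include <rte_mbuf.h>
#include <rte_net.h>

/* Hypothetical software classification for packets that never crossed
 * a NIC (rte_net_get_ptype() is new in DPDK 16.11).  It walks the
 * packet headers for every mbuf, which is the per-packet cost noted
 * above. */
static void
set_ptype_sw(struct rte_mbuf *m)
{
    struct rte_net_hdr_lens lens;

    m->packet_type = rte_net_get_ptype(m, &lens, RTE_PTYPE_ALL_MASK);
}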

Just take my VXLAN patch for example; we could easily set this flag when pushing the header:

@@ -211,6 +211,11 @@ netdev_tnl_push_udp_header(struct dp_packet *packet,

udp = netdev_tnl_push_ip_header(packet, data->header, data->header_len, &ip_tot_size);

+    #ifdef DPDK_NETDEV
+        if (data->tnl_type == OVS_VPORT_TYPE_VXLAN)
+            packet->mbuf.tun_type = RTE_TUNNEL_TYPE_VXLAN;
+    #endif


For performance, I think it could actually improve: in the DPDK Tx path we would no longer need to parse the packet to determine whether it is a tunnel packet and which tunnel type it is; the only thing needed is to check this flag and take the corresponding action.
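
A sketch of that Tx-side check (hypothetical; PKT_TX_TUNNEL_VXLAN only
exists from DPDK 16.11, and note that RTE_TUNNEL_TYPE_VXLAN comes from
rte_eth_ctrl.h rather than the RTE_PTYPE_* namespace, which is itself
worth reviewing):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Hypothetical Tx-side consumer: no header parsing, just act on the
 * tunnel type recorded at push-header time.  PKT_TX_TUNNEL_VXLAN is
 * new in DPDK 16.11. */
static void
apply_tunnel_offload(struct rte_mbuf *m)
{
    if (m->tun_type == RTE_TUNNEL_TYPE_VXLAN) {
        m->ol_flags |= PKT_TX_TUNNEL_VXLAN;
    }
}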

+        mbufs[newcnt]->tx_offload = batch->packets[i]->mbuf.tx_offload;
          rte_pktmbuf_data_len(mbufs[newcnt]) = size;
          rte_pktmbuf_pkt_len(mbufs[newcnt]) = size;
The mbuf array is named 'pkts' as opposed to 'mbufs' in the latest
upstream codebase; renaming the array allows the patch to apply cleanly.

OK, I will post the updated code to fix this.

--
1.8.3.1

--
Thanks,
Michael


