Hi,
I wrote a test application using MEMNIC interfaces for inter-VM
switching [VM1 --> host --> VM2].
I send unidirectional traffic from a traffic generator at VM1
to the host.
On the host, my application switches the traffic coming from
VM1 to VM2.
I used dpdk-1.7 and memnic-1.2, with
librte_pmd_memnic_copy.so to bind the MEMNIC interfaces to my DPDK
application [the traffic generator app].
I observed a maximum throughput of only around 470 Mbps [with
packet size 1400 bytes, 21,000 packets per second].
Is there a better method to improve throughput with MEMNIC
interfaces?
If I increase the shared memory area shown below, would that
serve my need?
If I want around 4 to 5 Gbps throughput with a unidirectional
flow from VM1 to VM2 through the host,
how large a shared memory area should I take per port?
And what should the memory alignment be?
/*
* Shared memory area mapping
* +------------------+
* | Header Area 1MB |
* +------------------+
* | Up to VM 7MB |
* +------------------+
* | Padding 1MB |
* +------------------+
* | Down to host 7MB |
* +------------------+
*/
struct memnic_area {
	union {
		struct memnic_header hdr;
		char hdr_pad[1024 * 1024];
	};
	union {
		struct memnic_data up;
		char up_pad[7 * 1024 * 1024];
	};
	char blank[1024 * 1024];
	union {
		struct memnic_data down;
		char down_pad[7 * 1024 * 1024];
	};
};
Thanks & Regards,
Srinivas.