Re: [ovs-dev] [PATCH v2 0/3] dpdk: Add support for TSO

2019-01-15 Thread Stokes, Ian
> On 14/01/2019 13:33, Ilya Maximets wrote:
> > On 14.01.2019 15:51, Ian Stokes wrote:
> >> On 1/11/2019 7:37 PM, Ian Stokes wrote:
> >>> On 1/11/2019 4:14 PM, Ilya Maximets wrote:
>  Nothing significantly changed since the previous versions.
>  This patch set effectively breaks the following cases if multi-segment
>  mbufs are enabled:
> 
> >>>
> >>> Hi Ilya, thanks for your feedback. A few queries and clarifications
> for discussion below.
> >>>
> >>>  From reading the mail at first I was under the impression that all
> these features were being broken by default after applying the patch?
> >>>
> >>> To clarify, in your testing, after applying these patches, are the use
> cases below broken if you are not using multiseg/TSO? (by default these
> are disabled).
> >>>
> >>> The intention of this work, as an experimental feature, would be that
> it is disabled by default and will not introduce new regressions to users
> who do not enable it. So there would be no impact on a user who does not
> wish to enable this and use OVS DPDK as is today.
> >>>
> >>> The series would be the first steps to enable OVS DPDK to use TSO in
> both inter VM and Host to Host environments using DPDK interfaces that
> support the feature. Kernel interfaces such as tap devices are not catered
> for as of yet.
> >>>
> >>> The use cases that are catered for are
> >>>
> >>> - Inter VM communication on the same host using DPDK Interfaces;
> >>> - Inter VM communication on different hosts DPDK Interfaces;
> >>> - The same two use cases above, but on a VLAN network.
> >>>
> >>> Again these are the first steps showing TSO in DPDK with multiseg
> feature.
> >>>
> >>
> >> Hi all,
> >>
> >> with code freeze tomorrow I'd like to come to a consensus on this as
> there has been no response to the counter points made below.
> >
> > Sorry for responses taking so long. I'm just trying to perform some
> testing.
> >
> >>
> >> In summary the status of the feature is:
> >>
>> It does not cause the use cases outlined below to break while multiseg
> is disabled (which it is by default). There is no impact to a
> user who does not wish to enable the feature.
> >
> > Disagree. There is a performance impact. I just did some performance
> > testing on current master with and without the patch sets applied (mseg
> > and TSO) in my usual "PVP with bonded PHY" scenario. And I see a 5%
> > average performance drop for 512B UDP packets with the multi-segment
> > related patches just applied and *not enabled*. Unfortunately, I have no
> > time for more testing of other cases as you're hurrying me up.
> 
> Hi Ilya,
> 
> Thanks for pointing this out.
> 
> I've run a few performance tests of my own and I can confirm I also see
> a slight performance drop (although not as much as ~5.0%; around
> ~3.0-4.0%).
> 
> From my findings this affects normal packets (it doesn't seem to be
> noticeable if we're using 1500B sized packets and above). And this is
> because of the changes introduced on the recent v13:
> - On previous versions I took the approach of modifying
> dp_packet_l2_5/l3/l4() so we know the size a caller wants to fetch and can
> reason if the header is within the first segment or not;
> - In v13 the approach is now to verify when OvS first parses the packet
> and sets the layer offsets, in miniflow_extract, if the headers parsed are
> within the first mbuf.
> 
> The reason for the change is that the first approach would require an
> unreasonable amount of changes (even more so), since each place calling
> dp_packet_l2_5/l3/l4() would have to pass a size as well. However, the new
> changes also come with some additional checks that take some additional
> cycles.
> 

Hi All,

In light of this I don't feel the series has had enough testing and review for
different use cases from across the community.
As such I'm going to hold off committing it to master for the 2.11 release.
Once more testing and reviews have been completed, as well as fixes for the
performance drops highlighted above, we can consider it for master in the future.

Ian


Re: [ovs-dev] [PATCH v2 0/3] dpdk: Add support for TSO

2019-01-14 Thread Lam, Tiago
On 14/01/2019 13:33, Ilya Maximets wrote:
> On 14.01.2019 15:51, Ian Stokes wrote:
>> On 1/11/2019 7:37 PM, Ian Stokes wrote:
>>> On 1/11/2019 4:14 PM, Ilya Maximets wrote:
 Nothing significantly changed since the previous versions.
 This patch set effectively breaks the following cases if multi-segment
 mbufs are enabled:

>>>
>>> Hi Ilya, thanks for your feedback. A few queries and clarifications for 
>>> discussion below.
>>>
>>>  From reading the mail at first I was under the impression that all these 
>>> features were being broken by default after applying the patch?
>>>
>>> To clarify, in your testing, after applying these patches, are the use 
>>> cases below broken if you are not using multiseg/TSO? (by default these are 
>>> disabled).
>>>
>>> The intention of this work, as an experimental feature, would be that it is 
>>> disabled by default and will not introduce new regressions to users who do 
>>> not enable it. So there would be no impact on a user who does not wish to 
>>> enable this and use OVS DPDK as is today.
>>>
>>> The series would be the first steps to enable OVS DPDK to use TSO in both 
>>> inter VM and Host to Host environments using DPDK interfaces that support 
>>> the feature. Kernel interfaces such as tap devices are not catered for as 
>>> of yet.
>>>
>>> The use cases that are catered for are
>>>
>>> - Inter VM communication on the same host using DPDK Interfaces;
>>> - Inter VM communication on different hosts DPDK Interfaces;
>>> - The same two use cases above, but on a VLAN network.
>>>
>>> Again these are the first steps showing TSO in DPDK with multiseg feature.
>>>
>>
>> Hi all,
>>
>> with code freeze tomorrow I'd like to come to a consensus on this as there 
>> has been no response to the counter points made below.
> 
> Sorry for responses taking so long. I'm just trying to perform some testing.
> 
>>
>> In summary the status of the feature is:
>>
>> It does not cause the use cases outlined below to break while multiseg is 
>> disabled (which it is by default). There is no impact to a user who 
>> does not wish to enable the feature.
> 
> Disagree. There is a performance impact. I just did some performance
> testing on current master with and without the patch sets applied (mseg and TSO)
> in my usual "PVP with bonded PHY" scenario. And I see a 5% average performance
> drop for 512B UDP packets with the multi-segment related patches just applied
> and *not enabled*. Unfortunately, I have no time for more testing of other cases
> as you're hurrying me up.

Hi Ilya,

Thanks for pointing this out.

I've run a few performance tests of my own and I can confirm I also
see a slight performance drop (although not as much as ~5.0%, around
~3.0-4.0%).

From my findings this affects normal packets (it doesn't seem to be
noticeable if we're using 1500B sized packets and above). And this is
because of the changes introduced on the recent v13:
- On previous versions I took the approach of modifying
dp_packet_l2_5/l3/l4() so we know the size a caller wants to fetch and
can reason if the header is within the first segment or not;
- In v13 the approach is now to verify when OvS first parses the packet
and sets the layer offsets, in miniflow_extract, if the headers parsed
are within the first mbuf.

The reason for the change is that the first approach would require an
unreasonable amount of changes (even more so), since each place
calling dp_packet_l2_5/l3/l4() would have to pass a size as well. However, the
new changes also come with some additional checks that take some
additional cycles.
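
To illustrate the kind of check v13 adds at parse time, here is a minimal
sketch of the condition involved (my own illustration, not the code from the
series): a parsed header is only directly dereferenceable if it lies entirely
within the first segment of the underlying mbuf.

    /* Hedged sketch, not the actual v13 code: with multi-segment mbufs a
     * header offset recorded by miniflow_extract() is only safe to use
     * directly if it fits within the first rte_mbuf segment. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <rte_mbuf.h>

    static inline bool
    header_in_first_seg(const struct rte_mbuf *mbuf, size_t ofs, size_t size)
    {
        /* data_len covers only this (first) segment, while pkt_len covers
         * the whole chain. */
        return ofs + size <= mbuf->data_len;
    }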

> 
>>
>> It is considered an experimental feature.
>>
>> When enabled it supports the following use cases:
>>  1. Inter VM communication on the same host using DPDK Interfaces;
>>  2. Inter VM communication on different hosts DPDK Interfaces;
>>  3. The same two use cases above, but on a VLAN network.
>>
>> The known cases it does not support:
>>  1. Host <-> VM communication using kernel interfaces.
>>  2. VM <-> VM communication where TSO is not supported by one VM.
>>  3. Tunneled connections (i.e. VXLAN) currently not supported.
>>
>> For issues 1 and 2 above, GSO (the software fallback for segmenting packets 
>> in DPDK) would address the issue. This work is planned as a follow on for 
>> this feature already by Intel.
> 
> * not only segmenting, but also checksumming.
> 
>>
>> While GSO is not in place, it is the case that a DPDK device being added 
>> that does not support multiseg will fail to add, with the user informed why; 
>> this is only if multiseg has been enabled.
>> For issue 3 there is already the capability to support this in DPDK 18.11. 
>> There will be a follow on patch for OVS to enable this, just not within the 
>> scope of the 2.11 code freeze.
> 
> Calculating inner checksums on TX is unsupported by even more devices.
> And virtio, I think, does not support this.
> 
> Also, does the current patch-set check/drop packets before encapsulation?

Just to 

Re: [ovs-dev] [PATCH v2 0/3] dpdk: Add support for TSO

2019-01-14 Thread Ilya Maximets
On 14.01.2019 18:03, Stokes, Ian wrote:
>> On 14.01.2019 15:51, Ian Stokes wrote:
>>> On 1/11/2019 7:37 PM, Ian Stokes wrote:
 On 1/11/2019 4:14 PM, Ilya Maximets wrote:
> Nothing significantly changed since the previous versions.
> This patch set effectively breaks the following cases if multi-segment
> mbufs are enabled:
>

 Hi Ilya, thanks for your feedback. A few queries and clarifications for
>> discussion below.

  From reading the mail at first I was under the impression that all
>> these features were being broken by default after applying the patch?

 To clarify, in your testing, after applying these patches, are the use
>> cases below broken if you are not using multiseg/TSO? (by default these
>> are disabled).

 The intention of this work, as an experimental feature, would be that
>> it is disabled by default and will not introduce new regressions to users
>> who do not enable it. So there would be no impact on a user who does not
>> wish to enable this and use OVS DPDK as is today.

 The series would be the first steps to enable OVS DPDK to use TSO in
>> both inter VM and Host to Host environments using DPDK interfaces that
>> support the feature. Kernel interfaces such as tap devices are not catered
>> for as of yet.

 The use cases that are catered for are

 - Inter VM communication on the same host using DPDK Interfaces;
 - Inter VM communication on different hosts DPDK Interfaces;
 - The same two use cases above, but on a VLAN network.

 Again these are the first steps showing TSO in DPDK with multiseg
>> feature.

>>>
>>> Hi all,
>>>
>>> with code freeze tomorrow I'd like to come to a consensus on this as
>> there has been no response to the counter points made below.
>>
>> Sorry for responses taking so long. I'm just trying to perform some
>> testing.
>>
> 
> No problem, thanks for testing and responding.
> 
>>>
>>> In summary the status of the feature is:
>>>
>>> It does not cause the use cases outlined below to break while multiseg
>> is disabled (which it is by default). There is no impact to a
>> user who does not wish to enable the feature.
>>
>> Disagree. There is a performance impact. I just did some performance
>> testing on current master with and without the patch sets applied (mseg and
>> TSO) in my usual "PVP with bonded PHY" scenario. And I see a 5% average
>> performance drop for 512B UDP packets with the multi-segment related patches
>> just applied and *not enabled*. Unfortunately, I have no time for more
>> testing of other cases as you're hurrying me up.
>>
> 
> I had not looked at the bonding use case myself but I trust your findings, 
> thanks for testing that case.
> 
> Tiago is investigating to confirm he sees the same issue and how it would be 
> resolved.

I don't think the performance drop is caused by bonding itself.
It is most probably caused just by the additional checks and
initializations introduced in the datapath.
I just have no other environment to test right now.

> 
>>>
>>> It is considered an experimental feature.
>>>
>>> When enabled it supports the following use cases:
>>>  1. Inter VM communication on the same host using DPDK Interfaces;
>>>  2. Inter VM communication on different hosts DPDK Interfaces;
>>>  3. The same two use cases above, but on a VLAN network.
>>>
>>> The known cases it does not support:
>>>  1. Host <-> VM communication using kernel interfaces.
>>>  2. VM <-> VM communication where TSO is not supported by one VM.
>>>  3. Tunneled connections (i.e. VXLAN) currently not supported.
>>>
>>> For issues 1 and 2 above, GSO (the software fallback for segmenting
>> packets in DPDK) would address the issue. This work is planned as a follow
>> on for this feature already by Intel.
>>
>> * not only segmenting, but also checksumming.
> 
> Yes, checksumming would also have to be addressed.
>>
>>>
>>> While GSO is not in place, it is the case that a DPDK device being
>>> added that does not support multiseg will fail to add, with the user
>> informed why; this is only if multiseg has been enabled. For issue 3
>> there is already the capability to support this in DPDK 18.11. There will
>> be a follow on patch for OVS to enable this, just not within the scope of
>> the 2.11 code freeze.
>>
>> Calculating inner checksums on TX is unsupported by even more
>> devices.
>> And virtio, I think, does not support this.
>>
>> Also, does the current patch-set check/drop packets before encapsulation?
> 
> @Tiago, can you comment on this? I don't believe there is a check 
> specifically for encapsulation as it's a known unsupported use case.
>>
>>>
>>> My position is that the series are at a high enough revision to be added
>> (v14 for multi seg, v3 for the TSO) as long as the known issues above are
>> documented.
>>>
>>> Intel intends to develop the feature further with the use cases outlined
>> above as the next steps in incremental development of the feature.
>>>
>>> It's 

Re: [ovs-dev] [PATCH v2 0/3] dpdk: Add support for TSO

2019-01-14 Thread Stokes, Ian
> On 14.01.2019 15:51, Ian Stokes wrote:
> > On 1/11/2019 7:37 PM, Ian Stokes wrote:
> >> On 1/11/2019 4:14 PM, Ilya Maximets wrote:
> >>> Nothing significantly changed since the previous versions.
> >>> This patch set effectively breaks the following cases if multi-segment
> >>> mbufs are enabled:
> >>>
> >>
> >> Hi Ilya, thanks for your feedback. A few queries and clarifications for
> discussion below.
> >>
> >>  From reading the mail at first I was under the impression that all
> these features were being broken by default after applying the patch?
> >>
> >> To clarify, in your testing, after applying these patches, are the use
> cases below broken if you are not using multiseg/TSO? (by default these
> are disabled).
> >>
> >> The intention of this work, as an experimental feature, would be that
> it is disabled by default and will not introduce new regressions to users
> who do not enable it. So there would be no impact on a user who does not
> wish to enable this and use OVS DPDK as is today.
> >>
> >> The series would be the first steps to enable OVS DPDK to use TSO in
> both inter VM and Host to Host environments using DPDK interfaces that
> support the feature. Kernel interfaces such as tap devices are not catered
> for as of yet.
> >>
> >> The use cases that are catered for are
> >>
> >> - Inter VM communication on the same host using DPDK Interfaces;
> >> - Inter VM communication on different hosts DPDK Interfaces;
> >> - The same two use cases above, but on a VLAN network.
> >>
> >> Again these are the first steps showing TSO in DPDK with multiseg
> feature.
> >>
> >
> > Hi all,
> >
> > with code freeze tomorrow I'd like to come to a consensus on this as
> there has been no response to the counter points made below.
> 
> Sorry for responses taking so long. I'm just trying to perform some
> testing.
> 

No problem, thanks for testing and responding.

> >
> > In summary the status of the feature is:
> >
> > It does not cause the use cases outlined below to break while multiseg
> is disabled (which it is by default). There is no impact to a
> user who does not wish to enable the feature.
> 
> Disagree. There is a performance impact. I just did some performance
> testing on current master with and without the patch sets applied (mseg and
> TSO) in my usual "PVP with bonded PHY" scenario. And I see a 5% average
> performance drop for 512B UDP packets with the multi-segment related patches
> just applied and *not enabled*. Unfortunately, I have no time for more
> testing of other cases as you're hurrying me up.
> 

I had not looked at the bonding use case myself but I trust your findings, 
thanks for testing that case.

Tiago is investigating to confirm he sees the same issue and how it would be 
resolved.

> >
> > It is considered an experimental feature.
> >
> > When enabled it supports the following use cases:
> >  1. Inter VM communication on the same host using DPDK Interfaces;
> >  2. Inter VM communication on different hosts DPDK Interfaces;
> >  3. The same two use cases above, but on a VLAN network.
> >
> > The known cases it does not support:
> >  1. Host <-> VM communication using kernel interfaces.
> >  2. VM <-> VM communication where TSO is not supported by one VM.
> >  3. Tunneled connections (i.e. VXLAN) currently not supported.
> >
> > For issues 1 and 2 above, GSO (the software fallback for segmenting
> packets in DPDK) would address the issue. This work is planned as a follow
> on for this feature already by Intel.
> 
> * not only segmenting, but also checksumming.

Yes, checksumming would also have to be addressed.
> 
> >
> > While GSO is not in place, it is the case that a DPDK device being
> > added that does not support multiseg will fail to add, with the user
> informed why; this is only if multiseg has been enabled. For issue 3
> there is already the capability to support this in DPDK 18.11. There will
> be a follow on patch for OVS to enable this, just not within the scope of
> the 2.11 code freeze.
> 
> Calculating inner checksums on TX is unsupported by even more
> devices.
> And virtio, I think, does not support this.
> 
> Also, does the current patch-set check/drop packets before encapsulation?

@Tiago, can you comment on this? I don't believe there is a check specifically 
for encapsulation as it's a known unsupported use case.
> 
> >
> > My position is that the series are at a high enough revision to be added
> (v14 for multi seg, v3 for the TSO) as long as the known issues above are
> documented.
> >
> > Intel intends to develop the feature further with the use cases outlined
> above as the next steps in incremental development of the feature.
> >
> > It's quite a late stage to receive a NACK on the series based on the
> unsupported use cases.
> 
> With all due respect, I'd like to raise the issue once again. Most of the
> time, all the "high revisions" and "long development time" amount to
> *last-minute development*. What I mean is that there was no progress since
> early October 

Re: [ovs-dev] [PATCH v2 0/3] dpdk: Add support for TSO

2019-01-14 Thread Ilya Maximets
On 14.01.2019 15:51, Ian Stokes wrote:
> On 1/11/2019 7:37 PM, Ian Stokes wrote:
>> On 1/11/2019 4:14 PM, Ilya Maximets wrote:
>>> Nothing significantly changed since the previous versions.
>>> This patch set effectively breaks the following cases if multi-segment
>>> mbufs are enabled:
>>>
>>
>> Hi Ilya, thanks for your feedback. A few queries and clarifications for 
>> discussion below.
>>
>>  From reading the mail at first I was under the impression that all these 
>> features were being broken by default after applying the patch?
>>
>> To clarify, in your testing, after applying these patches, are the use cases 
>> below broken if you are not using multiseg/TSO? (by default these are 
>> disabled).
>>
>> The intention of this work, as an experimental feature, would be that it is 
>> disabled by default and will not introduce new regressions to users who do 
>> not enable it. So there would be no impact on a user who does not wish to 
>> enable this and use OVS DPDK as is today.
>>
>> The series would be the first steps to enable OVS DPDK to use TSO in both 
>> inter VM and Host to Host environments using DPDK interfaces that support 
>> the feature. Kernel interfaces such as tap devices are not catered for as of 
>> yet.
>>
>> The use cases that are catered for are
>>
>> - Inter VM communication on the same host using DPDK Interfaces;
>> - Inter VM communication on different hosts DPDK Interfaces;
>> - The same two use cases above, but on a VLAN network.
>>
>> Again these are the first steps showing TSO in DPDK with multiseg feature.
>>
> 
> Hi all,
> 
> with code freeze tomorrow I'd like to come to a consensus on this as there 
> has been no response to the counter points made below.

Sorry for responses taking so long. I'm just trying to perform some testing.

> 
> In summary the status of the feature is:
> 
> It does not cause the use cases outlined below to break while multiseg is 
> disabled (which it is by default). There is no impact to a user who 
> does not wish to enable the feature.

Disagree. There is a performance impact. I just did some performance
testing on current master with and without the patch sets applied (mseg and TSO)
in my usual "PVP with bonded PHY" scenario. And I see a 5% average performance
drop for 512B UDP packets with the multi-segment related patches just applied
and *not enabled*. Unfortunately, I have no time for more testing of other cases
as you're hurrying me up.

> 
> It is considered an experimental feature.
> 
> When enabled it supports the following use cases:
>  1. Inter VM communication on the same host using DPDK Interfaces;
>  2. Inter VM communication on different hosts DPDK Interfaces;
>  3. The same two use cases above, but on a VLAN network.
> 
> The known cases it does not support:
>  1. Host <-> VM communication using kernel interfaces.
>  2. VM <-> VM communication where TSO is not supported by one VM.
>  3. Tunneled connections (i.e. VXLAN) currently not supported.
> 
> For issues 1 and 2 above, GSO (the software fallback for segmenting packets 
> in DPDK) would address the issue. This work is planned as a follow on for 
> this feature already by Intel.

* not only segmenting, but also checksumming.

> 
> While GSO is not in place, it is the case that a DPDK device being added that 
> does not support multiseg will fail to add, with the user informed why; this 
> is only if multiseg has been enabled.
> For issue 3 there is already the capability to support this in DPDK 18.11. 
> There will be a follow on patch for OVS to enable this, just not within the 
> scope of the 2.11 code freeze.

Calculating inner checksums on TX is unsupported by even more devices.
And virtio, I think, does not support this.

Also, does the current patch-set check/drop packets before encapsulation?
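
For context, this is roughly the extra metadata a VXLAN-encapsulated TSO
packet needs in DPDK 18.11 (a sketch of mine with flag/field names from
rte_mbuf.h, not code from the series), which is part of why tunnelled TSO and
inner checksums are a separate follow-up:

    #include <rte_mbuf.h>

    /* Illustrative only: mark an mbuf for VXLAN tunnel TSO.  The egress
     * device also has to advertise DEV_TX_OFFLOAD_VXLAN_TNL_TSO. */
    static void
    mark_mbuf_for_vxlan_tso(struct rte_mbuf *m, uint16_t mss,
                            uint8_t outer_l2, uint8_t outer_l3,
                            uint8_t l2, uint8_t l3, uint8_t l4)
    {
        m->outer_l2_len = outer_l2;   /* Outer Ethernet.                     */
        m->outer_l3_len = outer_l3;   /* Outer IPv4/IPv6.                    */
        m->l2_len = l2;               /* Outer UDP + VXLAN + inner Ethernet. */
        m->l3_len = l3;               /* Inner IP.                           */
        m->l4_len = l4;               /* Inner TCP.                          */
        m->tso_segsz = mss;

        m->ol_flags |= PKT_TX_TUNNEL_VXLAN | PKT_TX_OUTER_IPV4
                       | PKT_TX_OUTER_IP_CKSUM | PKT_TX_IPV4
                       | PKT_TX_IP_CKSUM | PKT_TX_TCP_SEG;
    }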

> 
> My position is that the series are at a high enough revision to be added (v14 
> for multi seg, v3 for the TSO) as long as the known issues above are 
> documented.
> 
> Intel intends to develop the feature further with the use cases outlined 
> above as the next steps in incremental development of the feature.
> 
> It's quite a late stage to receive a NACK on the series based on the 
> unsupported use cases.

With all due respect, I'd like to raise the issue once again. Most of the time,
all the "high revisions" and "long development time" amount to *last-minute
development*. What I mean is that there was no progress since early October and
then 3 revisions at once within the last week. And it always happens like this.
Just look at most of the DPDK related features in OVS over the last few years.
The workflow is the same:

 1 RFC
 2 no progress for months
 3 release soon --> a lot of new revisions in a few weeks
 4 didn't meet the release deadline
 5 goto step #2

It would be OK if it were only one feature, but we always have 3-4 big,
insufficiently tested and reviewed features one week before the release and
only a few people who care about that. Moreover, most of the time original 

Re: [ovs-dev] [PATCH v2 0/3] dpdk: Add support for TSO

2019-01-14 Thread Ian Stokes

On 1/11/2019 7:37 PM, Ian Stokes wrote:

On 1/11/2019 4:14 PM, Ilya Maximets wrote:

Nothing significantly changed since the previous versions.
This patch set effectively breaks the following cases if multi-segment
mbufs are enabled:



Hi Ilya, thanks for your feedback. A few queries and clarifications for 
discussion below.


 From reading the mail at first I was under the impression that all 
these features were being broken by default after applying the patch?


To clarify, in your testing, after applying these patches, are the use 
cases below broken if you are not using multiseg/TSO? (by default these 
are disabled).


The intention of this work, as an experimental feature, would be that it 
is disabled by default and will not introduce new regressions to users 
who do not enable it. So there would be no impact on a user who does not 
wish to enable this and use OVS DPDK as is today.


The series would be the first steps to enable OVS DPDK to use TSO in 
both inter VM and Host to Host environments using DPDK interfaces that 
support the feature. Kernel interfaces such as tap devices are not 
catered for as of yet.


The use cases that are catered for are

- Inter VM communication on the same host using DPDK Interfaces;
- Inter VM communication on different hosts DPDK Interfaces;
- The same two use cases above, but on a VLAN network.

Again these are the first steps showing TSO in DPDK with multiseg feature.



Hi all,

with code freeze tomorrow I'd like to come to a consensus on this as 
there has been no response to the counter points made below.


In summary the status of the feature is:

It does not cause the use cases outlined below to break while multiseg 
is disabled (which it is by default). There is no impact to a 
user who does not wish to enable the feature.


It is considered an experimental feature.

When enabled it supports the following use cases:
 1. Inter VM communication on the same host using DPDK Interfaces;
 2. Inter VM communication on different hosts DPDK Interfaces;
 3. The same two use cases above, but on a VLAN network.

The known cases it does not support:
 1. Host <-> VM communication using kernel interfaces.
 2. VM <-> VM communication where TSO is not supported by one VM.
 3. Tunneled connections (i.e. VXLAN) currently not supported.

For issues 1 and 2 above, GSO (the software fallback for segmenting 
packets in DPDK) would address the issue. This work is planned as a 
follow on for this feature already by Intel.


While GSO is not in place, it is the case that a DPDK device being added 
that does not support multiseg will fail to add, with the user informed 
why; this is only if multiseg has been enabled.


For issue 3 there is already the capability to support this in DPDK 
18.11. There will be a follow on patch for OVS to enable this, just not 
within the scope of the 2.11 code freeze.


My position is that the series are at a high enough revision to be added 
(v14 for multi seg, v3 for the TSO) as long as the known issues above 
are documented.


Intel intends to develop the feature further with the use cases outlined 
above as the next steps in incremental development of the feature.


It's quite a late stage to receive a NACK on the series based on the 
unsupported use cases.


Based on the cases it does support, and given there is a clear path for 
incremental steps to enable the feature completely, I feel experimental 
status of the feature is warranted and I'd like to see this in the 2.11 
release.


Ian.


   1. Host <-> VM communication broken.

  With this patch applied VMs have offloading enabled by default.
  This makes all the packets that go out from the VM have
  incorrect checksums (besides possible multi-segmenting). As a
  result any packet that reaches host interfaces is dropped by the
  host kernel as corrupted. This can be easily reproduced by trying
  to use ssh from host to VM, which is the common case for usual
  OpenStack installations.

I feel it's important to stress that, by default, the patches 
don't break this functionality. Both Multi-segment mbufs and TSO are 
disabled by default, and thus, TSO won't be negotiated and the VMs won't 
be sending multi-segment mbufs to OvS.


 From Tiago's testing, if you decide to enable multi-segment mbufs and 
TSO, *which is considered experimental* at the moment, the VM-Host 
communication will be broken only if your VM negotiates TSO enabled by 
default. This is because with TSO enabled checksum offloading comes into 
play, which means packets from the VM will arrive to the host with bad 
checksums.


Otherwise, if your VM doesn't enable TSO by default, or if you disable 
with `ethtool -K <iface> tx off`, everything should work as before, but 
using multi-segment mbufs. That was my impression.


@Tiago can you confirm?

The only way to circumvent that, as you mention below, is "fallback to 
full software TSO implementation".


As an experimental feature I would be surprised to see 

Re: [ovs-dev] [PATCH v2 0/3] dpdk: Add support for TSO

2019-01-13 Thread Lam, Tiago
On 11/01/2019 16:14, Ilya Maximets wrote:

[snip]

> 
> 
> 
> One more thing I wanted to mention is that I was surprised to not see
> the performance comparison of TSO with usual Jumbo frames in your slides
> on a recent OVS Conference [1]. You had a comparison slide with 1500 MTU,
> TSO and the kernel vhost, but not the jumbo frames. In my testing
> I have more than 22 Gbps for VM<->VM jumbo frames performance with
> current OVS and iperf. And this is almost equal to vhost kernel with TSO
> performance on my system (I have probably a bit less powerful CPU, so the
> results are a bit lower than in your slides). Also, in real cases there will
> be a significant performance drop because of the fallback to software TSO in
> many cases. Sorry, but I don't really see the benefits of using the
> Multi-segment mbufs and TSO.
> All of this work is positioned as a feature to cover the gap between the kernel
> and userspace datapaths, but there is no gap from the performance point
> of view, and overcomplicating OVS with multi-segmenting and TSO is really
> not necessary.

I've done a few more performance runs of my own, and I disagree with the
above statement. While I wasn't able to get 22Gbps with current master
(only ~20Gbps) by setting an MTU of 9000B, with TSO and Multi-segment
mbufs enabled I'm able to get a sustained 32Gbps. So, even if those
results were on the high end of those 22Gbps, say ~23Gbps, the 32Gbps with
TSO still represents a 39.1% increase in throughput alone. I don't
consider these to be negligible margins.

Another point, as I mentioned in my other reply (maybe not as clearly),
and was mentioned in the presentation as well, is that with TSO you also
save vCPU cycles, meaning you're not performing that software
segmentation in the VM according to the MTU. This, in turn, helps
save host CPU cycles.

I do think, though, that this discussion would have been a lot more
helpful in the early stages.

Tiago.


Re: [ovs-dev] [PATCH v2 0/3] dpdk: Add support for TSO

2019-01-11 Thread Darrell Ball
On Fri, Jan 11, 2019 at 8:22 AM Ilya Maximets 
wrote:

> Nothing significantly changed since the previous versions.
> This patch set effectively breaks the following cases if multi-segment
> mbufs are enabled:
>
>   1. Host <-> VM communication broken.
>
>  With this patch applied VMs have offloading enabled by default.
>  This makes all the packets that go out from the VM have
>  incorrect checksums (besides possible multi-segmenting). As a
>  result any packet that reaches host interfaces is dropped by the
>  host kernel as corrupted. This can be easily reproduced by trying
>  to use ssh from host to VM, which is the common case for usual
>  OpenStack installations.
>
>  Even if the checksums are fixed, segmented packets will be dropped
>  anyway in netdev-linux/bsd/dummy.
>
>   2. Broken VM to VM communication in case offloading is not negotiated
>  in one of the VMs.
>
>  If one of the VMs does not negotiate offloading, all the packets
>  will be dropped for the same reason as in the Host to VM case. This
>  is a really bad case because the virtio driver could change the
>  feature set at runtime and this should be handled in OVS.
>
>  Note that the linux kernel virtio driver always has rx checksum
>  offloading enabled, i.e. it doesn't check/recalculate the
>  checksums in software if needed. So, you need the dpdk driver or
>  a FreeBSD machine to reproduce the checksumming issue.
>
>   3. Broken support of all the virtual and HW NICs that don't support TSO.
>
>  That's actually a big number of NICs. More than half of the PMDs in DPDK
>  do not support TSO.
>  All the packets are just dropped by netdev-dpdk before send.
>
> This is most probably not a full list of issues. To fix most of them
> a fallback to a full software TSO implementation is required.
>
> Until this is done I prefer not to merge the feature because it
> breaks the basic functionality. I understand that you're going to make
> it 'experimental', but, IMHO, it isn't worth even that. The patches
> are fairly small and there is actually nothing we need to test before
> the full support. The main part is not implemented yet.
>
> NACK for the series. Sorry.
>
> 
>
> One more thing I wanted to mention is that I was surprised to not see
> the performance comparison of TSO with usual Jumbo frames in your slides
> on a recent OVS Conference [1]. You had a comparison slide with 1500 MTU,
> TSO and the kernel vhost, but not the jumbo frames. In my testing
> I have more than 22 Gbps for VM<->VM jumbo frames performance with
> current OVS and iperf. And this is almost equal to vhost kernel with TSO
> performance on my system (I have probably a bit less powerful CPU, so the
> results are a bit lower than in your slides). Also, in real cases there
> will
> be a significant performance drop because of the fallback to software TSO in
> many cases. Sorry, but I don't really see the benefits of using the
> Multi-segment mbufs and TSO.
>

Tiago

I am also interested in the performance benefit question.
Could you answer it ?

Thanks Darrell




> All of this work is positioned as a feature to cover the gap between the
> kernel
> and userspace datapaths, but there is no gap from the performance point
> of view


Tiago

Is this true ?

Thanks Darrell




> and overcomplicating OVS with multi-segmenting and TSO is really
> not necessary.
>




>
> [1] http://www.openvswitch.org/support/ovscon2018/5/0935-lam.pptx
>
>
> On 10.01.2019 19:58, Tiago Lam wrote:
> > Enabling TSO offload allows a host stack to delegate the segmentation of
> > oversized TCP packets to the underlying physical NIC, if supported. In
> the case
> > of a VM this means that the segmentation of the packets is not performed
> by the
> > guest kernel, but by the host NIC itself. In turn, since the TSO
> calculations
> > and checksums are being performed in hardware, this alleviates the CPU
> load on
> > the host system. In inter VM communication this might account to
> significant
> > savings, and higher throughput, even more so if the VMs are running on
> the same
> > host.
> >
> > Thus, although inter VM communication is already possible as is, there's
> a
> > sacrifice in terms of CPU, which may affect the overall throughput.
> >
> > This series adds support for TSO in OvS-DPDK, by making use of the TSO
> > offloading feature already supported by DPDK vhost backend, having the
> > following scenarios in mind:
> > - Inter VM communication on the same host;
> > - Inter VM communication on different hosts;
> > - The same two use cases above, but on a VLAN network.
> >
> > The work is based on [1]; It has been rebased to run on top of the
> > multi-segment mbufs work (v13) [2] and re-worked to use DPDK v18.11.
> >
> > [1] https://patchwork.ozlabs.org/patch/749564/
> > [2]
> https://mail.openvswitch.org/pipermail/ovs-dev/2019-January/354950.html
> >
> > Considerations:
> > - As mentioned above, this series depends on the multi-segment mbuf
> series
> >   (v13) 

Re: [ovs-dev] [PATCH v2 0/3] dpdk: Add support for TSO

2019-01-11 Thread Lam, Tiago
On 11/01/2019 19:37, Ian Stokes wrote:
> On 1/11/2019 4:14 PM, Ilya Maximets wrote:
>> Nothing significantly changed since the previous versions.
>> This patch set effectively breaks the following cases if multi-segment
>> mbufs are enabled:
>>
> 
> Hi Ilya, thanks for your feedback. A few queries and clarifications for 
> discussion below.
> 
>  From reading the mail at first I was under the impression that all 
> these features were being broken by default after applying the patch?
> 
> To clarify, in your testing, after applying these patches, are the use 
> cases below broken if you are not using multiseg/TSO? (by default these 
> are disabled).
> 
> The intention of this work, as an experimental feature, would be that it 
> is disabled by default and will not introduce new regressions to users 
> who do not enable it. So there would be no impact on a user who does not 
> wish to enable this and use OVS DPDK as is today.
> 
> The series would be the first steps to enable OVS DPDK to use TSO in 
> both inter VM and Host to Host environments using DPDK interfaces that 
> support the feature. Kernel interfaces such as tap devices are not 
> catered for as of yet.
> 
> The use cases that are catered for are
> 
> - Inter VM communication on the same host using DPDK Interfaces;
> - Inter VM communication on different hosts DPDK Interfaces;
> - The same two use cases above, but on a VLAN network.
> 
> Again these are the first steps showing TSO in DPDK with multiseg feature.
> 
>>1. Host <-> VM communication broken.
>>
>>   With this patch applied VMs have offloading enabled by default.
>>   This makes all the packets that go out from the VM have
>>   incorrect checksums (besides possible multi-segmenting). As a
>>   result any packet that reaches host interfaces is dropped by the
>>   host kernel as corrupted. This can be easily reproduced by trying
>>   to use ssh from host to VM, which is the common case for usual
>>   OpenStack installations.
>>
> I feel it's important to stress that, by default, the patches 
> don't break this functionality. Both Multi-segment mbufs and TSO are 
> disabled by default, and thus, TSO won't be negotiated and the VMs won't 
> be sending multi-segment mbufs to OvS.
> 
>  From Tiago's testing, if you decide to enable multi-segment mbufs and 
> TSO, *which is considered experimental* at the moment, the VM-Host 
> communication will be broken only if your VM negotiates TSO enabled by 
> default. This is because with TSO enabled checksum offloading comes into 
> play, which means packets from the VM will arrive to the host with bad 
> checksums.
> 
> Otherwise, if your VM doesn't enable TSO by default, or if you disable 
> with `ethtool -K <iface> tx off`, everything should work as before, but 
> using multi-segment mbufs. That was my impression.
> 
> @Tiago can you confirm?

That's correct, yes. The expectation is that with TSO it won't work (for
the reasons you mention above), but one will get the appropriate
warnings / errors. However, making use of multi-segment mbufs only
doesn't break this use-case.
> 
> The only way to circumvent that, as you mention below, is "fallback to 
> full software TSO implementation".

Which is called GSO.
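
For reference, a rough sketch of what that GSO fallback could look like with
DPDK's librte_gso (names from DPDK 18.11; the mempools and segment size are
assumptions, and this is not code from the series):

    #include <rte_ethdev.h>
    #include <rte_gso.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    #define GSO_MAX_SEGS 64

    /* Split an oversized TCP packet into segments of at most 'seg_size'
     * bytes (headers included) in software, for devices without TSO. */
    static int
    software_segment(struct rte_mbuf *pkt, struct rte_mempool *direct_pool,
                     struct rte_mempool *indirect_pool, uint16_t seg_size,
                     struct rte_mbuf **segs_out)
    {
        struct rte_gso_ctx ctx = {
            .direct_pool = direct_pool,
            .indirect_pool = indirect_pool,
            .gso_types = DEV_TX_OFFLOAD_TCP_TSO,
            .gso_size = seg_size,
        };

        /* Returns the number of segments stored in 'segs_out', or a
         * negative value on error. */
        return rte_gso_segment(pkt, &ctx, segs_out, GSO_MAX_SEGS);
    }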

> 
> As an experimental feature I would be surprised to see Open Stack 
> provide support for this. Again by default it should not impact an Open 
> Stack deployment unless it is explicitly enabled and support for that 
> would be some way off as of yet.
> 
>>   Even if the checksums are fixed, segmented packets will be dropped
>>   anyway in netdev-linux/bsd/dummy.
> 
> By segmented I assume you are referring to TCP segment rather than multi 
> segment, can you clarify?
> 
> This is a limitation, but one to be factored in by a user when choosing 
> to enable or disable TSO with OVS DPDK until it is resolved.
> 
>>
>>2. Broken VM to VM communication in case offloading is not negotiated
>>   in one of the VMs.
>>
>>   If one of the VMs does not negotiate offloading, all the packets
>>   will be dropped for the same reason as in the Host to VM case. This
>>   is a really bad case because the virtio driver could change the
>>   feature set at runtime and this should be handled in OVS.
>>
>>   Note that the linux kernel virtio driver always has rx checksum
>>   offloading enabled, i.e. it doesn't check/recalculate the
>>   checksums in software if needed. So, you need the dpdk driver or
>>   a FreeBSD machine to reproduce the checksumming issue.
>>
> 
> Again by default I believe this still functions as expected. Tiago can 
> you comment any further on this from your testing?

Sure.

That's correct, this is not broken when using the defaults. In fact, if
multi-segment mbufs are disabled, we keep the explicit call to
rte_vhost_driver_disable_features() as before, in order to disable TSO
and checksum offloading.
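
For reference, a minimal sketch of what that call amounts to (feature-bit
names from <linux/virtio_net.h>; error handling omitted; not a verbatim copy
of netdev-dpdk):

    #include <linux/virtio_net.h>
    #include <rte_vhost.h>

    /* Keep TSO and checksum offload out of the vhost-user feature
     * negotiation, so the guest never hands OvS offloaded (unchecksummed,
     * unsegmented) packets. */
    static int
    disable_vhost_tx_offloads(const char *vhost_sock_path)
    {
        uint64_t features = (1ULL << VIRTIO_NET_F_HOST_TSO4)
                            | (1ULL << VIRTIO_NET_F_HOST_TSO6)
                            | (1ULL << VIRTIO_NET_F_CSUM);

        return rte_vhost_driver_disable_features(vhost_sock_path, features);
    }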

If you decide to enable multi-segment mbufs and TSO, as you say, I
haven't noticed any issues with Linux VMs, 

Re: [ovs-dev] [PATCH v2 0/3] dpdk: Add support for TSO

2019-01-11 Thread Lam, Tiago
On 11/01/2019 16:14, Ilya Maximets wrote:

[snip]

> One more thing I wanted to mention is that I was surprised to not see
> the performance comparison of TSO with usual Jumbo frames in your slides
> on a recent OVS Conference [1]. You had a comparison slide with 1500 MTU,
> TSO and the kernel vhost, but not the jumbo frames. In my testing
> I have more than 22 Gbps for VM<->VM jumbo frames performance with
> current OVS and iperf. And this is almost equal to vhost kernel with TSO
> performance on my system (I have probably a bit less powerful CPU, so the
> results are a bit lower than in your slides). Also, in real cases there will
> be a significant performance drop because of the fallback to software TSO in
> many cases. Sorry, but I don't really see the benefits of using the
> Multi-segment mbufs and TSO.
> All of this work is positioned as a feature to cover the gap between the kernel
> and userspace datapaths, but there is no gap from the performance point
> of view, and overcomplicating OVS with multi-segmenting and TSO is really
> not necessary.
> 
> [1] http://www.openvswitch.org/support/ovscon2018/5/0935-lam.pptx
> 

Right. Thanks for watching...

You can have the same behavior without this patchset. No one said
otherwise. At the cost of user experience. And that's why every so often
there's a new person in the ovs-discuss mailing list complaining about
performance issues on a PVP setup using OvS-DPDK. TSO improves usability
since it leaves it up to the Host to decide how to do that segmentation, which is
where it matters. Otherwise you need to set the MTU in the VM
accordingly, risking double segmentation or screwing something up.

What's more, if TCP's Window Scaling option is being negotiated between
VMs and you don't have TSO, you will inevitably take a performance hit.
You've configured a manual MTU which is likely going to constrain what's
negotiated with Window Scaling, leading to your VM TCP/IP stack
segmenting the packet according to that. With TSO you avoid that and
only do the segmentation, in the Host, if the packet is actually leaving
the host. VM to VM communication within the same host would have no need
for segmentation.

Tiago.


Re: [ovs-dev] [PATCH v2 0/3] dpdk: Add support for TSO

2019-01-11 Thread Ian Stokes

On 1/11/2019 4:14 PM, Ilya Maximets wrote:

Nothing significantly changed since the previous versions.
This patch set effectively breaks the following cases if multi-segment
mbufs are enabled:



Hi Ilya, thanks for your feedback. A few queries and clarifications for 
discussion below.


From reading the mail at first I was under the impression that all 
these features were being broken by default after applying the patch?


To clarify, in your testing, after applying these patches, are the use 
cases below broken if you are not using multiseg/TSO? (by default these 
are disabled).


The intention of this work, as an experimental feature, would be that it 
is disabled by default and will not introduce new regressions to users 
who do not enable it. So there would be no impact on a user who does not 
wish to enable this and use OVS DPDK as is today.


The series would be the first steps to enable OVS DPDK to use TSO in 
both inter VM and Host to Host environments using DPDK interfaces that 
support the feature. Kernel interfaces such as tap devices are not 
catered for as of yet.


The use cases that are catered for are

- Inter VM communication on the same host using DPDK Interfaces;
- Inter VM communication on different hosts DPDK Interfaces;
- The same two use cases above, but on a VLAN network.

Again these are the first steps showing TSO in DPDK with multiseg feature.


   1. Host <-> VM communication broken.

  With this patch applied VMs have offloading enabled by default.
  This makes all the packets that go out from the VM have
  incorrect checksums (besides possible multi-segmenting). As a
  result any packet that reaches host interfaces is dropped by the
  host kernel as corrupted. This can be easily reproduced by trying
  to use ssh from host to VM, which is the common case for usual
  OpenStack installations.

I feel it's important to stress that, by default, the patches 
don't break this functionality. Both Multi-segment mbufs and TSO are 
disabled by default, and thus, TSO won't be negotiated and the VMs won't 
be sending multi-segment mbufs to OvS.


From Tiago's testing, if you decide to enable multi-segment mbufs and 
TSO, *which is considered experimental* at the moment, the VM-Host 
communication will be broken only if your VM negotiates TSO enabled by 
default. This is because with TSO enabled checksum offloading comes into 
play, which means packets from the VM will arrive to the host with bad 
checksums.


Otherwise, if your VM doesn't enable TSO by default, or if you disable 
with `ethtool -K <iface> tx off`, everything should work as before, but 
using multi-segment mbufs. That was my impression.


@Tiago can you confirm?

The only way to circumvent that, as you mention below, is "fallback to 
full software TSO implementation".


As an experimental feature I would be surprised to see Open Stack 
provide support for this. Again by default it should not impact an Open 
Stack deployment unless it is explicitly enabled and support for that 
would be some way off as of yet.



  Even if the checksums are fixed, segmented packets will be dropped
  anyway in netdev-linux/bsd/dummy.


By segmented I assume you are referring to TCP segment rather than multi 
segment, can you clarify?


This is a limitation, but one to be factored in by a user when choosing 
to enable or disable TSO with OVS DPDK until it is resolved.




   2. Broken VM to VM communication in case offloading is not negotiated
  in one of the VMs.

  If one of the VMs does not negotiate offloading, all the packets
  will be dropped for the same reason as in the Host to VM case. This
  is a really bad case because the virtio driver could change the
  feature set at runtime and this should be handled in OVS.

  Note that the linux kernel virtio driver always has rx checksum
  offloading enabled, i.e. it doesn't check/recalculate the
  checksums in software if needed. So, you need the dpdk driver or
  a FreeBSD machine to reproduce the checksumming issue.



Again by default I believe this still functions as expected. Tiago can 
you comment any further on this from your testing?



   3. Broken support of all the virtual and HW NICs that don't support TSO.

  That's actually a big number of NICs. More than half of the PMDs in DPDK
  do not support TSO.
  All the packets are just dropped by netdev-dpdk before send.


There are a number of NICs/VDEVs that do not support TSO in DPDK. 
However there are a number of NICs that do support it and this will 
increase in future DPDK releases.


I don't think this should be a blocking factor.

In the case where multi segment is enabled in OVS DPDK we discussed 
simply not allowing a NIC to be added if it did not support multi seg 
mbufs. This can be extended to TSO also, as currently multi seg is a 
prerequisite for TSO.
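
A minimal sketch of the kind of check being described (my illustration with
DPDK 18.11 names, not the series code):

    #include <stdbool.h>
    #include <rte_ethdev.h>

    /* Refuse to add a DPDK port that cannot transmit multi-segment/TSO
     * packets while the (experimental) feature is enabled. */
    static bool
    port_supports_multiseg_tso(uint16_t port_id)
    {
        struct rte_eth_dev_info info;
        uint64_t required = DEV_TX_OFFLOAD_MULTI_SEGS | DEV_TX_OFFLOAD_TCP_TSO;

        rte_eth_dev_info_get(port_id, &info);

        return (info.tx_offload_capa & required) == required;
    }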


This avoids a case for the moment where someone is mixing devices where 
support does and does not exist. Again this occurs only if the 

Re: [ovs-dev] [PATCH v2 0/3] dpdk: Add support for TSO

2019-01-11 Thread Ilya Maximets
Nothing significantly changed since the previous versions.
This patch set effectively breaks the following cases if multi-segment
mbufs are enabled:

  1. Host <-> VM communication broken.

 With this patch applied VMs have offloading enabled by default.
 This makes all the packets that go out from the VM have
 incorrect checksums (besides possible multi-segmenting). As a
 result any packet that reaches host interfaces is dropped by the
 host kernel as corrupted. This can be easily reproduced by trying
 to use ssh from host to VM, which is the common case for usual
 OpenStack installations.

 Even if the checksums are fixed, segmented packets will be dropped
 anyway in netdev-linux/bsd/dummy.

  2. Broken VM to VM communication in case offloading is not negotiated
 in one of the VMs.

 If one of the VMs does not negotiate offloading, all the packets
 will be dropped for the same reason as in the Host to VM case. This
 is a really bad case because the virtio driver could change the
 feature set at runtime and this should be handled in OVS.

 Note that the linux kernel virtio driver always has rx checksum
 offloading enabled, i.e. it doesn't check/recalculate the
 checksums in software if needed. So, you need the dpdk driver or
 a FreeBSD machine to reproduce the checksumming issue.

  3. Broken support of all the virtual and HW NICs that don't support TSO.

 That's actually a big number of NICs. More than half of the PMDs in DPDK
 do not support TSO.
 All the packets are just dropped by netdev-dpdk before send.

This is most probably not a full list of issues. To fix most of them
a fallback to a full software TSO implementation is required.
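
To make concrete what the checksum half of such a fallback involves, a rough
sketch (my illustration, assuming a contiguous single-segment IPv4/TCP packet;
names from DPDK 18.11; the segmentation half would be GSO):

    #include <rte_ip.h>
    #include <rte_mbuf.h>
    #include <rte_tcp.h>

    /* Fill in the TCP checksum in software when the egress path cannot
     * offload it. */
    static void
    sw_fixup_tcp_cksum(struct rte_mbuf *m)
    {
        if (!(m->ol_flags & PKT_TX_TCP_CKSUM)) {
            return;  /* Nothing was deferred to hardware. */
        }

        struct ipv4_hdr *ip = rte_pktmbuf_mtod_offset(m, struct ipv4_hdr *,
                                                      m->l2_len);
        struct tcp_hdr *tcp = (struct tcp_hdr *)((char *) ip + m->l3_len);

        tcp->cksum = 0;
        tcp->cksum = rte_ipv4_udptcp_cksum(ip, tcp);
        m->ol_flags &= ~PKT_TX_TCP_CKSUM;
    }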

Until this is done I prefer not to merge the feature because it
breaks the basic functionality. I understand that you're going to make
it 'experimental', but, IMHO, it isn't worth even that. The patches
are fairly small and there is actually nothing we need to test before
the full support. The main part is not implemented yet.

NACK for the series. Sorry.



One more thing I wanted to mention is that I was surprised to not see
the performance comparison of TSO with usual Jumbo frames in your slides
on a recent OVS Conference [1]. You had a comparison slide with 1500 MTU,
TSO and the kernel vhost, but not the jumbo frames. In my testing
I have more than 22 Gbps for VM<->VM jumbo frames performance with
current OVS and iperf. And this is almost equal to vhost kernel with TSO
performance on my system (I have probably a bit less powerful CPU, so the
results are a bit lower than in your slides). Also, in real cases there will
be a significant performance drop because of the fallback to software TSO in
many cases. Sorry, but I don't really see the benefits of using the
Multi-segment mbufs and TSO.
All of this work is positioned as a feature to cover the gap between the kernel
and userspace datapaths, but there is no gap from the performance point
of view, and overcomplicating OVS with multi-segmenting and TSO is really
not necessary.

[1] http://www.openvswitch.org/support/ovscon2018/5/0935-lam.pptx


On 10.01.2019 19:58, Tiago Lam wrote:
> Enabling TSO offload allows a host stack to delegate the segmentation of
> oversized TCP packets to the underlying physical NIC, if supported. In the 
> case
> of a VM this means that the segmentation of the packets is not performed by 
> the
> guest kernel, but by the host NIC itself. In turn, since the TSO calculations
> and checksums are being performed in hardware, this alleviates the CPU load on
> the host system. In inter VM communication this might amount to significant
> savings, and higher throughput, even more so if the VMs are running on the 
> same
> host.
> 
> Thus, although inter VM communication is already possible as is, there's a
> sacrifice in terms of CPU, which may affect the overall throughput.
> 
> This series adds support for TSO in OvS-DPDK, by making use of the TSO
> offloading feature already supported by DPDK vhost backend, having the
> following scenarios in mind:
> - Inter VM communication on the same host;
> - Inter VM communication on different hosts;
> - The same two use cases above, but on a VLAN network.
> 
> The work is based on [1]; It has been rebased to run on top of the
> multi-segment mbufs work (v13) [2] and re-worked to use DPDK v18.11.
> 
> [1] https://patchwork.ozlabs.org/patch/749564/
> [2] https://mail.openvswitch.org/pipermail/ovs-dev/2019-January/354950.html
> 
> Considerations:
> - As mentioned above, this series depends on the multi-segment mbuf series
>   (v13) and can't be applied on master as is;
> - The `rte_eth_tx_prepare()` API in DPDK is marked experimental, and although
>   I'm not getting any errors / warnings while compiling, do shout if you get into
>   trouble while testing;
> - I'm due to send v3 in the next few days, but sending v2 now to enable early
>   testing;
> 
> Tiago Lam (3):
>   netdev-dpdk: Validate packets burst before Tx.
>   

[ovs-dev] [PATCH v2 0/3] dpdk: Add support for TSO

2019-01-10 Thread Tiago Lam
Enabling TSO offload allows a host stack to delegate the segmentation of
oversized TCP packets to the underlying physical NIC, if supported. In the case
of a VM this means that the segmentation of the packets is not performed by the
guest kernel, but by the host NIC itself. In turn, since the TSO calculations
and checksums are being performed in hardware, this alleviates the CPU load on
the host system. In inter VM communication this might amount to significant
savings, and higher throughput, even more so if the VMs are running on the same
host.

Thus, although inter VM communication is already possible as is, there's a
sacrifice in terms of CPU, which may affect the overall throughput.

This series adds support for TSO in OvS-DPDK, by making use of the TSO
offloading feature already supported by DPDK vhost backend, having the
following scenarios in mind:
- Inter VM communication on the same host;
- Inter VM communication on different hosts;
- The same two use cases above, but on a VLAN network.
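
As a rough illustration of the mechanism (not code from this series), this is
how a DPDK application marks an oversized TCP packet so the backend/NIC does
the segmentation, using DPDK 18.11 names and assuming plain IPv4/TCP without
options:

    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_mbuf.h>
    #include <rte_tcp.h>

    /* Illustrative only: set the offload metadata that asks the device to
     * segment the packet and fill in the IP/TCP checksums. */
    static void
    mark_mbuf_for_tso(struct rte_mbuf *m, uint16_t mss)
    {
        m->l2_len = sizeof(struct ether_hdr);  /* 14B Ethernet header.     */
        m->l3_len = sizeof(struct ipv4_hdr);   /* 20B IPv4, no options.    */
        m->l4_len = sizeof(struct tcp_hdr);    /* 20B TCP, no options.     */
        m->tso_segsz = mss;                    /* Max payload per segment. */

        m->ol_flags |= PKT_TX_TCP_SEG | PKT_TX_IP_CKSUM | PKT_TX_IPV4;
    }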

The work is based on [1]; It has been rebased to run on top of the
multi-segment mbufs work (v13) [2] and re-worked to use DPDK v18.11.

[1] https://patchwork.ozlabs.org/patch/749564/
[2] https://mail.openvswitch.org/pipermail/ovs-dev/2019-January/354950.html

Considerations:
- As mentioned above, this series depends on the multi-segment mbuf series
  (v13) and can't be applied on master as is;
- The `rte_eth_tx_prepare()` API in DPDK is marked experimental, and although
  I'm not getting any errors / warnings while compiling, do shout if you get
  into trouble while testing (see the sketch after this list);
- I'm due to send v3 in the next few days, but sending v2 now to enable early
  testing;
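
Sketch of the burst validation step mentioned above (my reading of what
"netdev-dpdk: Validate packets burst before Tx." implies, not the patch
itself):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Let the PMD check/fix offload metadata (e.g. pseudo-header checksums
     * for TSO) before transmitting a burst. */
    static uint16_t
    send_burst(uint16_t port_id, uint16_t queue_id,
               struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
        /* 'ready' < nb_pkts means pkts[ready] failed validation and
         * rte_errno was set; the caller must drop or fix the remainder. */
        uint16_t ready = rte_eth_tx_prepare(port_id, queue_id, pkts, nb_pkts);

        return rte_eth_tx_burst(port_id, queue_id, pkts, ready);
    }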

Tiago Lam (3):
  netdev-dpdk: Validate packets burst before Tx.
  netdev-dpdk: Consider packets marked for TSO.
  netdev-dpdk: Enable TSO when using multi-seg mbufs

 Documentation/automake.mk   |   1 +
 Documentation/topics/dpdk/index.rst |   1 +
 Documentation/topics/dpdk/tso.rst   | 111 
 NEWS|   1 +
 lib/dp-packet.h |  14 +++
 lib/netdev-bsd.c|  11 +-
 lib/netdev-dpdk.c   | 203 ++--
 lib/netdev-dummy.c  |  11 +-
 lib/netdev-linux.c  |  15 +++
 9 files changed, 332 insertions(+), 36 deletions(-)
 create mode 100644 Documentation/topics/dpdk/tso.rst

-- 
2.7.4
