On Thu, Feb 4, 2016 at 11:13 AM, Rick Jones wrote:

Folks -

I was doing some performance work with OpenStack Liberty on systems with
2x E5-2650L v3 @ 1.80GHz processors and 560FLR (Intel 82599ES) NICs onto
which I'd placed a 4.4.0-1 kernel. I was actually interested in the
effect of removing the linux bridge from all the plumbing OpenStack [...]

On 02/04/2016 11:38 AM, Tom Herbert wrote:

I'd start with verifying the XPS configuration is sane and then trying
to reproduce the issue outside of using VMs; if both of those are okay,
then maybe look at some sort of bad interaction with the OpenStack
configuration.

On Thu, Feb 4, 2016 at 11:57 AM, Rick Jones wrote:

The Intel folks suggested something about the process scheduler moving
the sender around and ultimately causing some packet re-ordering. That
could, I suppose, explain the [...]

So, looking at bare-iron, I can [...]

Shame on me for not including bare-iron TCP_RR:

stack@fcperf-cp1-comp0001-mgmt:~$ grep "1 1" xps_tcp_rr_on_* | awk
'{t+=$6;r+=$9;s+=$10}END{print "throughput",t/NR,"recv sd",r/NR,"send
sd",s/NR}'
throughput 18589.4 recv sd 21.6296 send sd 20.5931
stack@fcperf-cp1-comp0001-mgmt:~$ grep [...]
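For reference, the awk aggregation used in the TCP_RR results above just computes per-column means across however many result lines grep matched. A minimal sketch of that aggregation on two synthetic netperf-style lines (the file name and field layout -- field 6 = throughput, field 9 = receive service demand, field 10 = send service demand -- are assumptions for illustration):

```shell
# Two made-up result lines standing in for grep'd netperf output.
printf '%s\n' \
  'x x x x x 18000 x x 21.0 20.0' \
  'x x x x x 19000 x x 22.0 21.0' > /tmp/xps_rr_demo.txt

# NR is the number of lines read, so each printed value is a simple mean
# over all matched result lines.
awk '{t+=$6;r+=$9;s+=$10}END{print "throughput",t/NR,"recv sd",r/NR,"send sd",s/NR}' /tmp/xps_rr_demo.txt
# -> throughput 18500 recv sd 21.5 send sd 20.5
```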
On 02/04/2016 12:13 PM, Tom Herbert wrote:
> XPS has OOO avoidance for TCP, that should not be a problem.

What/how much should I read into:

With XPS:
    TCPOFOQueue: 78206
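TCPOFOQueue is the TcpExt counter of TCP segments that arrived out of order and were queued on the receiver's out-of-order queue, which is why it is being read as a sign of reordering here. A sketch of pulling that counter out of "netstat -s"-style output, run against a canned snippet (on a live system the input would come from netstat -s; the second counter line is invented padding for the demo):

```shell
# Canned TcpExt-style snippet; only TCPOFOQueue matters for this demo.
snippet='    TCPOFOQueue: 78206
    TCPDSACKRecv: 1024'

# Match the counter name in field 1 and print its value; awk ignores the
# leading whitespace when splitting fields.
printf '%s\n' "$snippet" | awk '$1 == "TCPOFOQueue:" {print $2}'
# -> 78206
```

Comparing the value before and after a run, with and without XPS, is what gives the delta a meaning; a single absolute reading is cumulative since boot.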