On Wed, Feb 15, 2017 at 1:49 PM, Michael S. Tsirkin wrote:
> The logic is simple really. With #VCPUs == #queues we can reasonably
> assume this box is mostly doing networking so we can set affinity
> the way we like. With VCPUs > queues clearly VM is doing more stuff
> so we
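The heuristic quoted above can be sketched as follows. This is my reading of the policy under discussion, not the patch itself, and the VCPU/queue counts are made-up example values:

```shell
#!/bin/sh
# Sketch only: with #VCPUs == #queues, pin queue i to CPU i;
# with more VCPUs than queues, spread queue IRQs across the CPUs
# at an even stride. VCPUS and QUEUES are assumed example counts.
VCPUS=8
QUEUES=4
q=0
while [ "$q" -lt "$QUEUES" ]; do
    if [ "$VCPUS" -eq "$QUEUES" ]; then
        cpu=$q                             # identity mapping, one queue per CPU
    else
        cpu=$(( q * VCPUS / QUEUES ))      # strided spread over the larger CPU set
    fi
    echo "queue $q -> cpu $cpu"
    q=$((q + 1))
done
```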
On Wed, Feb 15, 2017 at 01:38:48PM -0800, Benjamin Serebrin wrote:
> On Wed, Feb 15, 2017 at 11:17 AM, Michael S. Tsirkin wrote:
> > Right. But userspace knows it's random at least. If kernel supplies
> > affinity e.g. the way your patch does, userspace ATM accepts this as a
> > gospel.
>
> The existing code supplies the same affinity gospels in the #vcpu
On Wed, Feb 15, 2017 at 10:27:37AM -0800, Benjamin Serebrin wrote:
> On Wed, Feb 15, 2017 at 9:42 AM, Michael S. Tsirkin wrote:
> > > For pure network load, assigning each txqueue IRQ exclusively
> > > to one of the cores that generates traffic on that queue is the
> > > optimal layout in terms of load spreading. Irqbalance does
> > > not have the XPS
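A layout like the one described above could be expressed through the XPS sysfs knobs. This is a dry-run sketch only: it prints the sysfs writes rather than performing them, and the interface name and queue count are assumed example values:

```shell
#!/bin/sh
# Sketch: compute a one-CPU XPS mask per tx queue and print the
# sysfs writes that would pin queue q to CPU q. The xps_cpus files
# take hex CPU bitmaps; IFACE and NQUEUES are assumptions.
IFACE=eth0
NQUEUES=4
q=0
while [ "$q" -lt "$NQUEUES" ]; do
    mask=$(printf '%x' $((1 << q)))        # bit q set => CPU q only
    echo "echo $mask > /sys/class/net/$IFACE/queues/tx-$q/xps_cpus"
    q=$((q + 1))
done
```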
On Wed, Feb 15, 2017 at 08:50:34AM -0800, Willem de Bruijn wrote:
> On Tue, Feb 14, 2017 at 1:05 PM, Michael S. Tsirkin wrote:
> > On Tue, Feb 14, 2017 at 11:17:41AM -0800, Benjamin Serebrin wrote:
> >> On Wed, Feb 8, 2017 at 11:37 AM, Michael S. Tsirkin wrote:
> >> > IIRC irqbalance will bail out and avoid touching affinity
On Tue, Feb 14, 2017 at 11:17:41AM -0800, Benjamin Serebrin wrote:
> On Wed, Feb 8, 2017 at 11:37 AM, Michael S. Tsirkin wrote:
> > IIRC irqbalance will bail out and avoid touching affinity
> > if you set affinity from driver. Breaking that's not nice.
> > Pls correct me if I'm wrong.
>
> I believe you're right that irqbalance will leave the affinity
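As I understand irqbalance (worth verifying against its documentation), whether a driver-supplied affinity hint in /proc/irq/N/affinity_hint survives depends on irqbalance's hintpolicy setting. A rough sketch of the three policies, with made-up masks:

```shell
#!/bin/sh
# Sketch (my understanding of irqbalance --hintpolicy, not its source):
# how a driver-supplied affinity_hint interacts with the mask irqbalance
# would otherwise choose. Masks are CPU bitmaps; values are examples.
hint=$(( 1 << 2 ))                        # driver pinned the queue to CPU 2
balanced=$(( (1 << 1) | (1 << 2) ))       # where irqbalance would place it
policy=subset
case "$policy" in
    exact)  applied=$hint ;;                    # obey the hint verbatim
    subset) applied=$(( hint & balanced )) ;;   # restrict its choice to the hint
    ignore) applied=$balanced ;;                # hint is disregarded
esac
printf 'applied mask: 0x%x\n' "$applied"
```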
From: Benjamin Serebrin
Date: Tue, 7 Feb 2017 10:15:06 -0800

If the number of virtio queue pairs is not equal to the
number of VCPUs, the virtio guest driver doesn't assign
any CPU affinity for the queue interrupts or the xps
aggregation interrupt. (In contrast, the driver does assign
both if the counts of