Re: [Xen-devel] [Xen-users] Xen bridging issue.

2015-09-09 Thread johnny Strom

On 09/08/2015 02:45 PM, Wei Liu wrote:

On Tue, Sep 08, 2015 at 02:33:53PM +0300, johnny Strom wrote:

On 09/08/2015 02:12 PM, Wei Liu wrote:

On Tue, Sep 08, 2015 at 02:07:21PM +0300, johnny Strom wrote:

On 09/08/2015 01:06 PM, Wei Liu wrote:

On Tue, Sep 08, 2015 at 12:59:39PM +0300, johnny Strom wrote:

On 09/08/2015 12:13 PM, Wei Liu wrote:

  xenstore-ls /local/domain/$DOMID/

Here is the output of xenstore-ls; only one network card is working.

xenstore-ls  /local/domain/1


[...]

  vif = ""
   0 = ""
backend = "/local/domain/0/backend/vif/1/0"
backend-id = "0"
state = "4"
handle = "0"
mac = "00:16:3e:ee:aa:aa"
multi-queue-num-queues = "17"

OK so the number of queues is 17. You probably don't need that many
queues.

Setting the module parameter "xenvif_max_queues" of netback to something like 4
should work around the problem for you.

Hello

I tried to set it to 4 in  /etc/modprobe.d/xen_netback.conf

rmmod xen_netback

modprobe -v xen_netback
insmod
/lib/modules/3.16.0-4-amd64/kernel/drivers/net/xen-netback/xen-netback.ko
xenvif_max_queues=4



But it is still the same issue..

Is xenvif_max_queues supported in Linux kernel 3.16?

modinfo -p xen_netback

separate_tx_rx_irq: (bool)
rx_drain_timeout_msecs: (uint)
rx_stall_timeout_msecs: (uint)
max_queues:Maximum number of queues per virtual interface (uint)

Oh, right, the parameter name should be "max_queues".

Sorry about that!

Wei.


It's still the same issue:

modprobe -v xen_netback
insmod
/lib/modules/3.16.0-4-amd64/kernel/drivers/net/xen-netback/xen-netback.ko
max_queues=4


If those are the precise steps you took, hasn't modprobe -v already
inserted the module without the parameter set? I.e. the later insmod had no
effect.
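For reference, a reload sequence that applies the parameter in the modprobe invocation itself might look like this (a sketch; the DRY_RUN guard is added here so the snippet is safe to run anywhere, and only the modprobe line actually sets the parameter):

```shell
# Sketch: reload xen_netback with max_queues applied at load time.
# DRY_RUN=1 (the default here) only prints the commands; set DRY_RUN=0
# on a real dom0 to execute them.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run rmmod xen_netback
# modprobe resolves dependencies and applies the parameter in one step;
# a separate insmod afterwards is a no-op because the module is
# already loaded.
run modprobe -v xen_netback max_queues=4
run cat /sys/module/xen_netback/parameters/max_queues
```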


Hello again

I am not sure but there might be another issue with the xen_netback 
module in Debian jessie.


I am not able to set the max_queues option so that it takes effect at load time.

I have tested with the loop kernel module, where setting a value works, but
doing the same for the xen_netback driver does not.

This works for the loop module:
options loop max_loop=50

But the equivalent for xen_netback does not work:
options xen_netback max_queues=4

Or is there some other way it should be set in 
/etc/modprobe.d/xen_netback.conf?



Best regards Johnny




But what could be the reason for this?


Make sure that parameter is correctly set. You can look at
/sys/module/xen_netback/parameters/max_queues for the actual number.

You can even just echo a number to that file to set the value on the
fly.
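That runtime workaround can be wrapped in a small script; a minimal sketch (PARAM_FILE is made overridable here purely so the logic can be exercised on a machine without the module loaded):

```shell
# Read, and optionally change, netback's max_queues at runtime.
# Note: only vifs created after the change pick up the new value.
PARAM_FILE="${PARAM_FILE:-/sys/module/xen_netback/parameters/max_queues}"

set_queues() {
    echo "$1" > "$PARAM_FILE"
    cat "$PARAM_FILE"    # read back what is actually in effect
}

# Usage on a dom0: set_queues 4
```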


Could it be a problem with one of the CPUs? Since if I boot dom0 with just 14
CPU cores, it works.


No, it can't be related to CPUs. It's just that with fewer cores the DomU
no longer exhausts resources.

Wei.

___
Xen-users mailing list
xen-us...@lists.xen.org
http://lists.xen.org/xen-users



___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [Xen-users] Xen bridging issue.

2015-09-09 Thread johnny Strom

On 09/09/2015 12:35 PM, Wei Liu wrote:

On Wed, Sep 09, 2015 at 10:09:09AM +0300, johnny Strom wrote:
[...]

Hello again

I am not sure but there might be another issue with the xen_netback module
in Debian jessie.

I am not able to set the max_queues option so that it takes effect at load time.

I have tested with the loop kernel module, where setting a value works, but
doing the same for the xen_netback driver does not.

This works for the loop module:
options loop max_loop=50

But the equivalent for xen_netback does not work:
options xen_netback max_queues=4

Or is there some other way it should be set in
/etc/modprobe.d/xen_netback.conf?


Best regards Johnny


After looking at the code more carefully I think that's a bug.

I will send a patch to fix it when I get around to it. In the meantime
(until the bug fix propagates to Debian, which will probably take quite a
while), you can use a script to echo the value you want into the control
knob during system startup.
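On a systemd-based dom0 such as jessie, one way to apply that at startup is a oneshot unit (a sketch; the unit name is made up, and the sysfs path assumes the standard module-parameter layout):

```ini
# /etc/systemd/system/xen-netback-queues.service (hypothetical name)
[Unit]
Description=Limit xen-netback queues per vif
After=systemd-modules-load.service

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo 4 > /sys/module/xen_netback/parameters/max_queues'

[Install]
WantedBy=multi-user.target
```

This only affects vifs created after the unit runs, so it should execute before any guests start.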

I will put a Reported-by tag with your email address in my patch if you
don't mind. I will also CC you on the patch I'm going to send so that
you know when it's merged upstream.


Thanks

It's ok to put my email in the patch.

Best regards Johnny


Wei.





Re: [Xen-devel] [Xen-users] Xen bridging issue.

2015-09-08 Thread johnny Strom

On 09/08/2015 02:12 PM, Wei Liu wrote:

On Tue, Sep 08, 2015 at 02:07:21PM +0300, johnny Strom wrote:

On 09/08/2015 01:06 PM, Wei Liu wrote:

On Tue, Sep 08, 2015 at 12:59:39PM +0300, johnny Strom wrote:

On 09/08/2015 12:13 PM, Wei Liu wrote:

  xenstore-ls /local/domain/$DOMID/

Here is the output of xenstore-ls; only one network card is working.

xenstore-ls  /local/domain/1


[...]

  vif = ""
   0 = ""
backend = "/local/domain/0/backend/vif/1/0"
backend-id = "0"
state = "4"
handle = "0"
mac = "00:16:3e:ee:aa:aa"
multi-queue-num-queues = "17"

OK so the number of queues is 17. You probably don't need that many
queues.

Setting the module parameter "xenvif_max_queues" of netback to something like 4
should work around the problem for you.

Hello

I tried to set it to 4 in  /etc/modprobe.d/xen_netback.conf

rmmod xen_netback

modprobe -v xen_netback
insmod
/lib/modules/3.16.0-4-amd64/kernel/drivers/net/xen-netback/xen-netback.ko
xenvif_max_queues=4



But it is still the same issue..

Is xenvif_max_queues supported in Linux kernel 3.16?

modinfo -p xen_netback

separate_tx_rx_irq: (bool)
rx_drain_timeout_msecs: (uint)
rx_stall_timeout_msecs: (uint)
max_queues:Maximum number of queues per virtual interface (uint)

Oh, right, the parameter name should be "max_queues".

Sorry about that!

Wei.



It's still the same issue:

modprobe -v xen_netback
insmod 
/lib/modules/3.16.0-4-amd64/kernel/drivers/net/xen-netback/xen-netback.ko max_queues=4 



But what could be the reason for this?

Could it be a problem with one of the CPUs? Since if I boot dom0 with
just 14 CPU cores, it works.



xenstore-ls  /local/domain/7
vm = "/vm/8602b443-6828-44ca-9e2c-8023b2fe8583"
name = "test.debian-jessie"
cpu = ""
 0 = ""
  availability = "online"
 1 = ""
  availability = "online"
 2 = ""
  availability = "online"
 3 = ""
  availability = "online"
 4 = ""
  availability = "online"
 5 = ""
  availability = "online"
 6 = ""
  availability = "online"
 7 = ""
  availability = "online"
 8 = ""
  availability = "online"
 9 = ""
  availability = "online"
 10 = ""
  availability = "online"
 11 = ""
  availability = "online"
 12 = ""
  availability = "online"
 13 = ""
  availability = "online"
 14 = ""
  availability = "online"
 15 = ""
  availability = "online"
 16 = ""
  availability = "online"
 17 = ""
  availability = "online"
 18 = ""
  availability = "online"
 19 = ""
  availability = "online"
 20 = ""
  availability = "online"
 21 = ""
  availability = "online"
 22 = ""
  availability = "online"
 23 = ""
  availability = "online"
 24 = ""
  availability = "online"
 25 = ""
  availability = "online"
 26 = ""
  availability = "online"
 27 = ""
  availability = "online"
 28 = ""
  availability = "online"
 29 = ""
  availability = "online"
 30 = ""
  availability = "online"
 31 = ""
  availability = "online"
 32 = ""
  availability = "online"
 33 = ""
  availability = "online"
 34 = ""
  availability = "online"
 35 = ""
  availability = "online"
 36 = ""
  availability = "online"
 37 = ""
  availability = "online"
 38 = ""
  availability = "online"
 39 = ""
  availability = "online"
memory = ""
 static-max = "4194304"
 target = "4194305"
 videoram = "-1"
device = ""
 suspend = ""
  event-channel = ""
 vbd = ""
  51713 = ""
   backend = "/local/domain/0/backend/qdisk/7/51713"
   backend-id = "0"
   state = "4"
   virtual-device = "51713"
   device-type = "disk"
   protocol = "x86_64-abi"
   ring-ref = "8"
   event-channel = "243"
   feature-persistent = "1"
  51714 = ""
   backend = "/local/domain/0/backend/qdisk/7/51714"
   backend-id = "0"
   state = "4"
   virtual-device = "51714"
   device-type = "disk"
   protocol = "x86_64-abi"
   ring-ref = "9"
   event-channel = "244"
   feature-persistent = "1"
 vif = ""
[...]

Re: [Xen-devel] [Xen-users] Xen bridging issue.

2015-09-08 Thread johnny Strom

On 09/08/2015 02:45 PM, Wei Liu wrote:

On Tue, Sep 08, 2015 at 02:33:53PM +0300, johnny Strom wrote:

On 09/08/2015 02:12 PM, Wei Liu wrote:

On Tue, Sep 08, 2015 at 02:07:21PM +0300, johnny Strom wrote:

On 09/08/2015 01:06 PM, Wei Liu wrote:

On Tue, Sep 08, 2015 at 12:59:39PM +0300, johnny Strom wrote:

On 09/08/2015 12:13 PM, Wei Liu wrote:

  xenstore-ls /local/domain/$DOMID/

Here is the output of xenstore-ls; only one network card is working.

xenstore-ls  /local/domain/1


[...]

  vif = ""
   0 = ""
backend = "/local/domain/0/backend/vif/1/0"
backend-id = "0"
state = "4"
handle = "0"
mac = "00:16:3e:ee:aa:aa"
multi-queue-num-queues = "17"

OK so the number of queues is 17. You probably don't need that many
queues.

Setting the module parameter "xenvif_max_queues" of netback to something like 4
should work around the problem for you.

Hello

I tried to set it to 4 in  /etc/modprobe.d/xen_netback.conf

rmmod xen_netback

modprobe -v xen_netback
insmod
/lib/modules/3.16.0-4-amd64/kernel/drivers/net/xen-netback/xen-netback.ko
xenvif_max_queues=4



But it is still the same issue..

Is xenvif_max_queues supported in Linux kernel 3.16?

modinfo -p xen_netback

separate_tx_rx_irq: (bool)
rx_drain_timeout_msecs: (uint)
rx_stall_timeout_msecs: (uint)
max_queues:Maximum number of queues per virtual interface (uint)

Oh, right, the parameter name should be "max_queues".

Sorry about that!

Wei.


It's still the same issue:

modprobe -v xen_netback
insmod
/lib/modules/3.16.0-4-amd64/kernel/drivers/net/xen-netback/xen-netback.ko
max_queues=4


If those are the precise steps you took, hasn't modprobe -v already
inserted the module without the parameter set? I.e. the later insmod had no
effect.


But what could be the reason for this?


Make sure that parameter is correctly set. You can look at
/sys/module/xen_netback/parameters/max_queues for the actual number.

You can even just echo a number to that file to set the value on the
fly.


Yes, thanks, that works. I will figure out how to load the module with the
parameter applied.






Could it be a problem with one of the CPUs? Since if I boot dom0 with just 14
CPU cores, it works.


No, it can't be related to CPUs. It's just that with fewer cores the DomU
no longer exhausts resources.


Ok

And the DomU also works if I use kernel 3.2.68-1+deb7u2 from Debian Wheezy.


Best Regards Johnny




Wei.



Re: [Xen-devel] [Xen-users] Xen bridging issue.

2015-09-08 Thread johnny Strom

On 09/08/2015 12:13 PM, Wei Liu wrote:

  xenstore-ls /local/domain/$DOMID/


Here is the output of xenstore-ls; only one network card is working.

xenstore-ls  /local/domain/1

vm = "/vm/63d68f91-093d-4cf2-8c8e-28c7bd8028ab"
name = "test-debian-jessie"
cpu = ""
 0 = ""
  availability = "online"
 1 = ""
  availability = "online"
 2 = ""
  availability = "online"
 3 = ""
  availability = "online"
 4 = ""
  availability = "online"
 5 = ""
  availability = "online"
 6 = ""
  availability = "online"
 7 = ""
  availability = "online"
 8 = ""
  availability = "online"
 9 = ""
  availability = "online"
 10 = ""
  availability = "online"
 11 = ""
  availability = "online"
 12 = ""
  availability = "online"
 13 = ""
  availability = "online"
 14 = ""
  availability = "online"
 15 = ""
  availability = "online"
 16 = ""
  availability = "online"
 17 = ""
  availability = "online"
 18 = ""
  availability = "online"
 19 = ""
  availability = "online"
 20 = ""
  availability = "online"
 21 = ""
  availability = "online"
 22 = ""
  availability = "online"
 23 = ""
  availability = "online"
 24 = ""
  availability = "online"
 25 = ""
  availability = "online"
 26 = ""
  availability = "online"
 27 = ""
  availability = "online"
 28 = ""
  availability = "online"
 29 = ""
  availability = "online"
 30 = ""
  availability = "online"
 31 = ""
  availability = "online"
 32 = ""
  availability = "online"
 33 = ""
  availability = "online"
 34 = ""
  availability = "online"
 35 = ""
  availability = "online"
 36 = ""
  availability = "online"
 37 = ""
  availability = "online"
 38 = ""
  availability = "online"
 39 = ""
  availability = "online"
memory = ""
 static-max = "4194304"
 target = "4194305"
 videoram = "-1"
device = ""
 suspend = ""
  event-channel = ""
 vbd = ""
  51713 = ""
   backend = "/local/domain/0/backend/qdisk/1/51713"
   backend-id = "0"
   state = "4"
   virtual-device = "51713"
   device-type = "disk"
   protocol = "x86_64-abi"
   ring-ref = "8"
   event-channel = "243"
   feature-persistent = "1"
  51714 = ""
   backend = "/local/domain/0/backend/qdisk/1/51714"
   backend-id = "0"
   state = "4"
   virtual-device = "51714"
   device-type = "disk"
   protocol = "x86_64-abi"
   ring-ref = "9"
   event-channel = "244"
   feature-persistent = "1"
 vif = ""
  0 = ""
   backend = "/local/domain/0/backend/vif/1/0"
   backend-id = "0"
   state = "4"
   handle = "0"
   mac = "00:16:3e:ee:aa:aa"
   multi-queue-num-queues = "17"
   queue-0 = ""
tx-ring-ref = "8960"
rx-ring-ref = "8961"
event-channel-tx = "245"
event-channel-rx = "246"
   queue-1 = ""
tx-ring-ref = "8962"
rx-ring-ref = "8963"
event-channel-tx = "247"
event-channel-rx = "248"
   queue-2 = ""
tx-ring-ref = "8964"
rx-ring-ref = "8965"
event-channel-tx = "249"
event-channel-rx = "250"
   queue-3 = ""
tx-ring-ref = "8966"
rx-ring-ref = "8967"
event-channel-tx = "251"
event-channel-rx = "252"
   queue-4 = ""
tx-ring-ref = "8968"
rx-ring-ref = "8969"
event-channel-tx = "253"
event-channel-rx = "254"
   queue-5 = ""
tx-ring-ref = "8970"
rx-ring-ref = "8971"
event-channel-tx = "255"
event-channel-rx = "256"
   queue-6 = ""
tx-ring-ref = "8972"
rx-ring-ref = "8973"
event-channel-tx = "257"
event-channel-rx = "258"
   queue-7 = ""
tx-ring-ref = "8974"
rx-ring-ref = "8975"
event-channel-tx = "259"
event-channel-rx = "260"
   queue-8 = ""
tx-ring-ref = "8976"
rx-ring-ref = "8977"
event-channel-tx = "261"
event-channel-rx = "262"
   queue-9 = ""
tx-ring-ref = "8978"
rx-ring-ref = "8979"
event-channel-tx = "263"
event-channel-rx = "264"
   queue-10 = ""
tx-ring-ref = "8980"
rx-ring-ref = "8981"
event-channel-tx = "265"
event-channel-rx = "266"
   queue-11 = ""
tx-ring-ref = "8982"
rx-ring-ref = "8983"
event-channel-tx = "267"
event-channel-rx = "268"
   queue-12 = ""
tx-ring-ref = "8984"
rx-ring-ref = "8985"
event-channel-tx = "269"
event-channel-rx = "270"
   queue-13 = ""
tx-ring-ref = "8986"
rx-ring-ref = "8987"
event-channel-tx = "271"
event-channel-rx = "272"
   queue-14 = ""
tx-ring-ref = "8988"
rx-ring-ref = "8989"
event-channel-tx = "273"
event-channel-rx = "274"
   queue-15 = ""
tx-ring-ref = "8990"
rx-ring-ref = "8991"
event-channel-tx = "275"
event-channel-rx = "276"
   queue-16 = ""
tx-ring-ref = "8992"
rx-ring-ref = "8993"
event-channel-tx = "277"
event-channel-rx = "278"
   request-rx-copy = "1"
   feature-rx-notify = "1"
   feature-sg = "1"
   feature-gso-tcpv4 = "1"
   feature-gso-tcpv6 = "1"
   feature-ipv6-csum-offload = "1"
  1 = ""
   backend = "/local/domain/0/backend/vif/1/1"
   backend-id = "0"
   state = "1"
   handle = "1"
   mac = "00:16:3e:ec:a7:b5"
control = ""
 shutdown = ""
 platform-feature-multiprocessor-suspend = "1"
 platform-feature-xs_reset_watches = "1"
data =
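The dump above shows the frontend negotiated 17 queues (queue-0 through queue-16), each with its own tx/rx rings and event channels. A quick way to count them from such a dump (a sketch; reads xenstore-ls output on stdin):

```shell
# Count negotiated queues in an xenstore-ls dump: each queue
# contributes exactly one tx-ring-ref line.
count_queues() { grep -c 'tx-ring-ref'; }

# Example: xenstore-ls /local/domain/1 | count_queues
```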