Re: [Xen-devel] [Xen-users] Xen bridging issue.

2015-09-09 Thread johnny Strom

On 09/08/2015 02:45 PM, Wei Liu wrote:

On Tue, Sep 08, 2015 at 02:33:53PM +0300, johnny Strom wrote:

On 09/08/2015 02:12 PM, Wei Liu wrote:

On Tue, Sep 08, 2015 at 02:07:21PM +0300, johnny Strom wrote:

On 09/08/2015 01:06 PM, Wei Liu wrote:

On Tue, Sep 08, 2015 at 12:59:39PM +0300, johnny Strom wrote:

On 09/08/2015 12:13 PM, Wei Liu wrote:

  xenstore-ls /local/domain/$DOMID/

Here is the output of xenstore-ls only one network card is working.

xenstore-ls  /local/domain/1


[...]

  vif = ""
   0 = ""
backend = "/local/domain/0/backend/vif/1/0"
backend-id = "0"
state = "4"
handle = "0"
mac = "00:16:3e:ee:aa:aa"
multi-queue-num-queues = "17"

OK so the number of queues is 17. You probably don't need that many
queues.

Setting the netback module parameter "xenvif_max_queues" to something like 4
should work around the problem for you.

Hello

I tried to set it to 4 in  /etc/modprobe.d/xen_netback.conf

rmmod xen_netback

modprobe -v xen_netback
insmod
/lib/modules/3.16.0-4-amd64/kernel/drivers/net/xen-netback/xen-netback.ko
xenvif_max_queues=4



But it is still the same issue..

Is xenvif_max_queues supported in Linux kernel 3.16?

modinfo -p xen_netback

separate_tx_rx_irq: (bool)
rx_drain_timeout_msecs: (uint)
rx_stall_timeout_msecs: (uint)
max_queues:Maximum number of queues per virtual interface (uint)

Oh, right, the parameter name should be "max_queues".

Sorry about that!

Wei.


It's still the same issue:

modprobe -v xen_netback
insmod
/lib/modules/3.16.0-4-amd64/kernel/drivers/net/xen-netback/xen-netback.ko
max_queues=4


If those are the precise steps you took, hasn't modprobe -v already
inserted the module without the parameter set? I.e. the later insmod had no
effect.
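
For reference, a minimal sketch of loading the module by hand with the
parameter given directly to modprobe, plus a check of what actually got
applied (whether the value sticks on this particular 3.16 kernel is a
separate question, given the bug discussed further down the thread):

  rmmod xen_netback                        # or: modprobe -r xen-netback
  modprobe -v xen-netback max_queues=4     # parameter passed on the modprobe line
  cat /sys/module/xen_netback/parameters/max_queues   # shows the value in effect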


Hello again

I am not sure but there might be another issue with the xen_netback 
module in Debian jessie.


I am not able to set the max_queues option so that it is applied at load time.

I have tested with the loop kernel module, where setting a value this way
works, but doing the same for the xen_netback driver does not work.

This works for the loop module:
options loop max_loop=50

But doing the same for xen_netback does not work:
options xen_netback max_queues=4

Or is there some other way it should be set in 
/etc/modprobe.d/xen_netback.conf?



Best regards Johnny




But what could be the reason for this?


Make sure that parameter is correctly set. You can look at
/sys/module/xen_netback/parameters/max_queues for the actual number.

You can even just echo a number to that file to set the value on the
fly.
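
Concretely, something like this (a sketch; the module must already be loaded
for the sysfs file to exist):

  cat /sys/module/xen_netback/parameters/max_queues      # current value
  echo 4 > /sys/module/xen_netback/parameters/max_queues # applies to vifs set up afterwards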


Could it be a problem with one of the CPUs, since it works if I boot dom0 with
just 14 CPU cores?


No, it can't be a problem with the CPUs themselves. With only 14 Dom0 vcpus the
DomU simply no longer exhausts its resources.

Wei.

___
Xen-users mailing list
xen-us...@lists.xen.org
http://lists.xen.org/xen-users



___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [Xen-users] Xen bridging issue.

2015-09-09 Thread Wei Liu
On Wed, Sep 09, 2015 at 10:09:09AM +0300, johnny Strom wrote:
[...]
> 
> Hello again
> 
> I am not sure but there might be another issue with the xen_netback module
> in Debian jessie.
> 
> I am not able to set the max_queues option so that it is applied at load time.
> 
> I have tested with the loop kernel module, where setting a value this way
> works, but doing the same for the xen_netback driver does not work.
> 
> This works for the loop module:
> options loop max_loop=50
> 
> But doing the same for xen_netback does not work:
> options xen_netback max_queues=4
> 
> Or is there some other way it should be set in
> /etc/modprobe.d/xen_netback.conf?
> 
> 
> Best regards Johnny
> 

After looking at the code more carefully I think that's a bug.

I will send a patch to fix it when I get around to it. In the meantime
(until the fix propagates to Debian, which will probably take quite a bit of
time), you can use a script to echo the value you want into the control knob
during system startup.
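
For example, a minimal sketch using /etc/rc.local (assuming that file is run
at boot on this dom0 and that xen_netback is loaded before any guest starts;
a small init script or systemd unit would do equally well):

  #!/bin/sh
  # /etc/rc.local -- cap netback queues until the modprobe.d option works
  echo 4 > /sys/module/xen_netback/parameters/max_queues
  exit 0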

I will put a Reported-by tag with your email address in my patch if you
don't mind. I will also CC you on that patch so that you have an idea of
when it's merged upstream.

Wei.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [Xen-users] Xen bridging issue.

2015-09-09 Thread johnny Strom

On 09/09/2015 12:35 PM, Wei Liu wrote:

On Wed, Sep 09, 2015 at 10:09:09AM +0300, johnny Strom wrote:
[...]

Hello again

I am not sure but there might be another issue with the xen_netback module
in Debian jessie.

I am not able to set the max_queues option so that it is applied at load time.

I have tested with the loop kernel module, where setting a value this way
works, but doing the same for the xen_netback driver does not work.

This works for the loop module:
options loop max_loop=50

But doing the same for xen_netback does not work:
options xen_netback max_queues=4

Or is there some other way it should be set in
/etc/modprobe.d/xen_netback.conf?


Best regards Johnny


After looking at the code more carefully I think that's a bug.

I will send a patch to fix it when I get around to it. In the meantime
(until the fix propagates to Debian, which will probably take quite a bit of
time), you can use a script to echo the value you want into the control knob
during system startup.

I will put a Reported-by tag with your email address in my patch if you
don't mind. I will also CC you on that patch so that you have an idea of
when it's merged upstream.


Thanks

It's ok to put my email in the patch.

Best regards Johnny


Wei.



___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [Xen-users] Xen bridging issue.

2015-09-08 Thread Wei Liu
On Tue, Sep 08, 2015 at 09:58:59AM +0100, Ian Campbell wrote:
> On Mon, 2015-09-07 at 15:50 +0300, johnny Strom wrote:
> > 
> > Hello
> > 
> > I sent an email before about bridging not working in domU using Debian 
> > 8.1 and XEN 4.4.1.
> > 
> > It was not the network card "igb" as I first thought.
> > 
> > I managed to get bridging working in DomU if I set the limit of CPUs
> > in dom0 to 14; this is from /etc/default/grub
> > when it works ok:
> > 
> > GRUB_CMDLINE_XEN="dom0_max_vcpus=14 dom0_vcpus_pin"
> > 
> > 
> > Are there any known issues/limitations running Xen with more
> > than 14 CPU cores in dom0?
> > 
> > 
> > The cpu in question is:
> > 
> > processor   : 16
> > vendor_id   : GenuineIntel
> > cpu family  : 6
> > model   : 63
> > model name  : Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz
> > stepping: 2
> > microcode   : 0x2d
> > cpu MHz : 2298.718
> > cache size  : 25600 KB
> > physical id : 0
> > siblings: 17
> > core id : 11
> > cpu cores   : 9
> > apicid  : 22
> > initial apicid  : 22
> > fpu : yes
> > fpu_exception   : yes
> > cpuid level : 15
> > wp  : yes
> > flags   : fpu de tsc msr pae mce cx8 apic sep mca cmov pat 
> > clflush acpi mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc rep_good 
> > nopl nonstop_tsc eagerfpu pni pclmulqdq monitor est ssse3 fma cx16 
> > sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand 
> > hypervisor lahf_lm abm ida arat epb xsaveopt pln pts dtherm fsgsbase 
> > bmi1 avx2 bmi2 erms
> > bogomips: 4597.43
> > clflush size: 64
> > cache_alignment : 64
> > address sizes   : 46 bits physical, 48 bits virtual
> > power management:
> > 
> > 
> > 
> > 
> > If I set it to 17 in dom0:
> > 
> > GRUB_CMDLINE_XEN="dom0_max_vcpus=17 dom0_vcpus_pin"
> > 
> > Then I get this oops when I try to boot domU with 40 vcpus.
> > 
> > [1.588313] systemd-udevd[255]: starting version 215
> > [1.606097] xen_netfront: Initialising Xen virtual ethernet driver
> > [1.648172] blkfront: xvda2: flush diskcache: enabled; persistent 
> > grants: enabled; indirect descriptors: disabled;
> > [1.649190] blkfront: xvda1: flush diskcache: enabled; persistent 
> > grants: enabled; indirect descriptors: disabled;
> > [1.649705] Setting capacity to 2097152
> > [1.649716] xvda2: detected capacity change from 0 to 1073741824
> > [1.653540] xen_netfront: can't alloc rx grant refs
> 
> The frontend has run out of grant refs, perhaps due to multiqueue support
> in the front/backend where I think the number of queues scales with the
> number of processors.
> 

The default number of queues is the number of _backend_ processors. The Xen
command line indicates 17 Dom0 vcpus, which isn't too large, I think.

Can you check in xenstore what the value of multi-queue-max-queues is?
Use xenstore-ls /local/domain/$DOMID/ when the guest is still around.
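
For example (a sketch; $DOMID comes from "xl list", and the vif paths below
assume the usual layout shown in the xenstore dumps elsewhere in this thread):

  # frontend side: number of queues the guest actually set up
  xenstore-ls /local/domain/$DOMID/device/vif | grep multi-queue
  # backend side: the maximum advertised by netback
  xenstore-ls /local/domain/0/backend/vif/$DOMID | grep multi-queue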

> I've added some relevant maintainers for net{front,back} and grant tables,
> plus people who were involved with MQ and the devel list.
> 
> 
> > [1.653547] net eth1: only created 17 queues

This indicates it only created 16 queues. And there seems to be a bug
in the code.

Wei.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [Xen-users] Xen bridging issue.

2015-09-08 Thread Ian Campbell
On Mon, 2015-09-07 at 15:50 +0300, johnny Strom wrote:
> 
> Hello
> 
> I sent an email before about bridging not working in domU using Debian 
> 8.1 and XEN 4.4.1.
> 
> It was not the network card "igb" as I first thought.
> 
> I managed to get bridging working in DomU if I set the limit of CPUs
> in dom0 to 14; this is from /etc/default/grub
> when it works ok:
> 
> GRUB_CMDLINE_XEN="dom0_max_vcpus=14 dom0_vcpus_pin"
> 
> 
> Are there any known issues/limitations running Xen with more
> than 14 CPU cores in dom0?
> 
> 
> The cpu in question is:
> 
> processor   : 16
> vendor_id   : GenuineIntel
> cpu family  : 6
> model   : 63
> model name  : Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz
> stepping: 2
> microcode   : 0x2d
> cpu MHz : 2298.718
> cache size  : 25600 KB
> physical id : 0
> siblings: 17
> core id : 11
> cpu cores   : 9
> apicid  : 22
> initial apicid  : 22
> fpu : yes
> fpu_exception   : yes
> cpuid level : 15
> wp  : yes
> flags   : fpu de tsc msr pae mce cx8 apic sep mca cmov pat 
> clflush acpi mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc rep_good 
> nopl nonstop_tsc eagerfpu pni pclmulqdq monitor est ssse3 fma cx16 
> sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand 
> hypervisor lahf_lm abm ida arat epb xsaveopt pln pts dtherm fsgsbase 
> bmi1 avx2 bmi2 erms
> bogomips: 4597.43
> clflush size: 64
> cache_alignment : 64
> address sizes   : 46 bits physical, 48 bits virtual
> power management:
> 
> 
> 
> 
> If I set it to 17 in dom0:
> 
> GRUB_CMDLINE_XEN="dom0_max_vcpus=17 dom0_vcpus_pin"
> 
> Then I get this oops when I try to boot domU with 40 vcpus.
> 
> [1.588313] systemd-udevd[255]: starting version 215
> [1.606097] xen_netfront: Initialising Xen virtual ethernet driver
> [1.648172] blkfront: xvda2: flush diskcache: enabled; persistent 
> grants: enabled; indirect descriptors: disabled;
> [1.649190] blkfront: xvda1: flush diskcache: enabled; persistent 
> grants: enabled; indirect descriptors: disabled;
> [1.649705] Setting capacity to 2097152
> [1.649716] xvda2: detected capacity change from 0 to 1073741824
> [1.653540] xen_netfront: can't alloc rx grant refs

The frontend has run out of grant refs, perhaps due to multiqueue support
in the front/backend where I think the number of queues scales with the
number of processors.
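
As a rough back-of-the-envelope check (assuming the usual 256-slot tx and rx
rings in netfront, one grant ref per ring slot, and a default grant table of
32 frames at 512 v1 entries each; the exact figures for this setup may differ):

  2 vifs x 17 queues x (256 tx + 256 rx)  = 17408 grant refs wanted
  32 frames x 512 entries per frame       = 16384 grant refs available

so two 17-queue vifs (plus the block devices) can plausibly exhaust the grant
table while the second interface is being set up, which would match eth0
coming up and eth1 failing.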

I've added some relevant maintainers for net{front,back} and grant tables,
plus people who were involved with MQ and the devel list.


> [1.653547] net eth1: only created 17 queues
> [1.654027] BUG: unable to handle kernel NULL pointer dereference at 
> 0018
> [1.654033] IP: [] netback_changed+0x964/0xee0 
> [xen_netfront]
> [1.654041] PGD 0
> [1.654044] Oops:  [#1] SMP
> [1.654048] Modules linked in: xen_netfront(+) xen_blkfront(+) 
> crct10dif_pclmul crct10dif_common crc32c_intel
> [1.654057] CPU: 3 PID: 209 Comm: xenwatch Not tainted 3.16.0-4-amd64 
> #1 Debian 3.16.7-ckt11-1+deb8u3
> [1.654061] task: 880faf477370 ti: 880faf478000 task.ti: 
> 880faf478000
> [1.654064] RIP: e030:[] [] 
> netback_changed+0x964/0xee0 [xen_netfront]
> [1.654071] RSP: e02b:880faf47be20  EFLAGS: 00010202
> [1.654074] RAX:  RBX: 880002a729c0 RCX: 
> 0001
> [1.654077] RDX: 0066785c RSI: 880002a72a58 RDI: 
> 3f1f
> [1.654080] RBP: 880faa44e000 R08: c9000624 R09: 
> ea0036d3f180
> [1.654083] R10: 251e R11:  R12: 
> 880faa44f000
> [1.654086] R13: 880002a72a58 R14: 000729c0 R15: 
> 880fab6f4000
> [1.654093] FS:  () GS:880fb706() 
> knlGS:
> [1.654096] CS:  e033 DS:  ES:  CR0: 80050033
> [1.654099] CR2: 0018 CR3: 01813000 CR4: 
> 00042660
> [1.654102] Stack:
> [1.654104]  880faf5aec00 880f000f 00110001 
> 880faf5aec00
> [1.654109]  880002a6b041 880002a6af84 0001af561000 
> 00110001
> [1.656945]  8800028e9df1 8800028e8880 880faf47beb8 
> 8135b9e0
> [1.656945] Call Trace:
> [1.656945]  [] ? unregister_xenbus_watch+0x220/0x220
> [1.656945]  [] ? xenwatch_thread+0x98/0x140
> [1.656945]  [] ? prepare_to_wait_event+0xf0/0xf0
> [1.656945]  [] ? kthread+0xbd/0xe0
> [1.656945]  [] ? kthread_create_on_node+0x180/0x180
> [1.656945]  [] ? ret_from_fork+0x58/0x90
> [1.656945]  [] ? kthread_create_on_node+0x180/0x180
> [1.656945] Code: 48 89 c6 e9 bd fd ff ff 48 8b 3c 24 48 c7 c2 b3 52 
> 06 a0 be f4 ff ff ff 31 c0 e8 38 61 2f e1 e9 54 ff ff ff 48 8b 43 20 4c 
> 89 ee <48> 8b 78 18 e8 13 63 2f e1 85 c0 0f 88 b0 fd ff ff 48 8b 43 20
> [1.656945] RIP  [] netback_changed+0x964/0xee0 
> 

Re: [Xen-devel] [Xen-users] Xen bridging issue.

2015-09-08 Thread Wei Liu
On Tue, Sep 08, 2015 at 02:33:53PM +0300, johnny Strom wrote:
> On 09/08/2015 02:12 PM, Wei Liu wrote:
> >On Tue, Sep 08, 2015 at 02:07:21PM +0300, johnny Strom wrote:
> >>On 09/08/2015 01:06 PM, Wei Liu wrote:
> >>>On Tue, Sep 08, 2015 at 12:59:39PM +0300, johnny Strom wrote:
> On 09/08/2015 12:13 PM, Wei Liu wrote:
> >  xenstore-ls /local/domain/$DOMID/
> Here is the output of xenstore-ls only one network card is working.
> 
> xenstore-ls  /local/domain/1
> 
> >>>[...]
>   vif = ""
>    0 = ""
> backend = "/local/domain/0/backend/vif/1/0"
> backend-id = "0"
> state = "4"
> handle = "0"
> mac = "00:16:3e:ee:aa:aa"
> multi-queue-num-queues = "17"
> >>>OK so the number of queues is 17. You probably don't need that many
> >>>queues.
> >>>
> >>>Setting the netback module parameter "xenvif_max_queues" to something like 4
> >>>should work around the problem for you.
> >>Hello
> >>
> >>I tried to set it to 4 in  /etc/modprobe.d/xen_netback.conf
> >>
> >>rmmod xen_netback
> >>
> >>modprobe -v xen_netback
> >>insmod
> >>/lib/modules/3.16.0-4-amd64/kernel/drivers/net/xen-netback/xen-netback.ko
> >>xenvif_max_queues=4
> >>
> >>
> >>
> >>But it is still the same issue..
> >>
> >>Is xenvif_max_queues supported in Linux kernel 3.16?
> >>
> >>modinfo -p xen_netback
> >>
> >>separate_tx_rx_irq: (bool)
> >>rx_drain_timeout_msecs: (uint)
> >>rx_stall_timeout_msecs: (uint)
> >>max_queues:Maximum number of queues per virtual interface (uint)
> >Oh, right, the parameter name should be "max_queues".
> >
> >Sorry about that!
> >
> >Wei.
> 
> 
> It's still the same issue:
> 
> modprobe -v xen_netback
> insmod
> /lib/modules/3.16.0-4-amd64/kernel/drivers/net/xen-netback/xen-netback.ko
> max_queues=4
> 

If those are the precise steps you took, hasn't modprobe -v already
inserted the module without the parameter set? I.e. the later insmod had no
effect.

> 
> But what could be the reason for this?
> 

Make sure that parameter is correctly set. You can look at
/sys/module/xen_netback/parameters/max_queues for the actual number.

You can even just echo a number to that file to set the value on the
fly.

> Could it be a problem with one of the CPUs, since it works if I boot dom0 with
> just 14 CPU cores?
> 

No, it can't be a problem with the CPUs themselves. With only 14 Dom0 vcpus the
DomU simply no longer exhausts its resources.

Wei.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [Xen-users] Xen bridging issue.

2015-09-08 Thread johnny Strom

On 09/08/2015 02:12 PM, Wei Liu wrote:

On Tue, Sep 08, 2015 at 02:07:21PM +0300, johnny Strom wrote:

On 09/08/2015 01:06 PM, Wei Liu wrote:

On Tue, Sep 08, 2015 at 12:59:39PM +0300, johnny Strom wrote:

On 09/08/2015 12:13 PM, Wei Liu wrote:

  xenstore-ls /local/domain/$DOMID/

Here is the output of xenstore-ls only one network card is working.

xenstore-ls  /local/domain/1


[...]

  vif = ""
   0 = ""
backend = "/local/domain/0/backend/vif/1/0"
backend-id = "0"
state = "4"
handle = "0"
mac = "00:16:3e:ee:aa:aa"
multi-queue-num-queues = "17"

OK so the number of queues is 17. You probably don't need that many
queues.

Setting the netback module parameter "xenvif_max_queues" to something like 4
should work around the problem for you.

Hello

I tried to set it to 4 in  /etc/modprobe.d/xen_netback.conf

rmmod xen_netback

modprobe -v xen_netback
insmod
/lib/modules/3.16.0-4-amd64/kernel/drivers/net/xen-netback/xen-netback.ko
xenvif_max_queues=4



But it is still the same issue..

Is xenvif_max_queues supported in Linux kernel 3.16?

modinfo -p xen_netback

separate_tx_rx_irq: (bool)
rx_drain_timeout_msecs: (uint)
rx_stall_timeout_msecs: (uint)
max_queues:Maximum number of queues per virtual interface (uint)

Oh, right, the parameter name should be "max_queues".

Sorry about that!

Wei.



It's still the same issue:

modprobe -v xen_netback
insmod 
/lib/modules/3.16.0-4-amd64/kernel/drivers/net/xen-netback/xen-netback.ko max_queues=4 



But what could be the reason for this?

Could it be a problem with one of the CPUs, since it works if I boot dom0 with
just 14 CPU cores?



xenstore-ls  /local/domain/7
vm = "/vm/8602b443-6828-44ca-9e2c-8023b2fe8583"
name = "test.debian-jessie"
cpu = ""
 0 = ""
  availability = "online"
 1 = ""
  availability = "online"
 2 = ""
  availability = "online"
 3 = ""
  availability = "online"
 4 = ""
  availability = "online"
 5 = ""
  availability = "online"
 6 = ""
  availability = "online"
 7 = ""
  availability = "online"
 8 = ""
  availability = "online"
 9 = ""
  availability = "online"
 10 = ""
  availability = "online"
 11 = ""
  availability = "online"
 12 = ""
  availability = "online"
 13 = ""
  availability = "online"
 14 = ""
  availability = "online"
 15 = ""
  availability = "online"
 16 = ""
  availability = "online"
 17 = ""
  availability = "online"
 18 = ""
  availability = "online"
 19 = ""
  availability = "online"
 20 = ""
  availability = "online"
 21 = ""
  availability = "online"
 22 = ""
  availability = "online"
 23 = ""
  availability = "online"
 24 = ""
  availability = "online"
 25 = ""
  availability = "online"
 26 = ""
  availability = "online"
 27 = ""
  availability = "online"
 28 = ""
  availability = "online"
 29 = ""
  availability = "online"
 30 = ""
  availability = "online"
 31 = ""
  availability = "online"
 32 = ""
  availability = "online"
 33 = ""
  availability = "online"
 34 = ""
  availability = "online"
 35 = ""
  availability = "online"
 36 = ""
  availability = "online"
 37 = ""
  availability = "online"
 38 = ""
  availability = "online"
 39 = ""
  availability = "online"
memory = ""
 static-max = "4194304"
 target = "4194305"
 videoram = "-1"
device = ""
 suspend = ""
  event-channel = ""
 vbd = ""
  51713 = ""
   backend = "/local/domain/0/backend/qdisk/7/51713"
   backend-id = "0"
   state = "4"
   virtual-device = "51713"
   device-type = "disk"
   protocol = "x86_64-abi"
   ring-ref = "8"
   event-channel = "243"
   feature-persistent = "1"
  51714 = ""
   backend = "/local/domain/0/backend/qdisk/7/51714"
   backend-id = "0"
   state = "4"
   virtual-device = "51714"
   device-type = "disk"
   protocol = "x86_64-abi"
   ring-ref = "9"
   event-channel = "244"
   feature-persistent = "1"
 vif = ""
  0 = ""
   backend = "/local/domain/0/backend/vif/7/0"
   backend-id = "0"
   state = "4"
   handle = "0"
   mac = "00:16:3e:ee:aa:aa"
   multi-queue-num-queues = "17"
   queue-0 = ""
tx-ring-ref = "8960"
rx-ring-ref = "8961"
event-channel-tx = "245"
event-channel-rx = "246"
   queue-1 = ""
tx-ring-ref = "8962"
rx-ring-ref = "8963"
event-channel-tx = "247"
event-channel-rx = "248"
   queue-2 = ""
tx-ring-ref = "8964"
rx-ring-ref = "8965"
event-channel-tx = "249"
event-channel-rx = "250"
   queue-3 = ""
tx-ring-ref = "8966"
rx-ring-ref = "8967"
event-channel-tx = "251"
event-channel-rx = "252"
   queue-4 = ""
tx-ring-ref = "8968"
rx-ring-ref = "8969"
event-channel-tx = "253"
event-channel-rx = "254"
   queue-5 = ""
tx-ring-ref = "8970"
rx-ring-ref = "8971"
event-channel-tx = "255"
event-channel-rx = "256"
   queue-6 = ""
tx-ring-ref = "8972"
rx-ring-ref = "8973"
event-channel-tx = "257"
event-channel-rx = "258"
   queue-7 = ""
tx-ring-ref = "8974"
rx-ring-ref = "8975"
event-channel-tx = "259"
event-channel-rx = "260"
   queue-8 = ""
tx-ring-ref = "8976"
rx-ring-ref = 

Re: [Xen-devel] [Xen-users] Xen bridging issue.

2015-09-08 Thread johnny Strom

On 09/08/2015 02:45 PM, Wei Liu wrote:

On Tue, Sep 08, 2015 at 02:33:53PM +0300, johnny Strom wrote:

On 09/08/2015 02:12 PM, Wei Liu wrote:

On Tue, Sep 08, 2015 at 02:07:21PM +0300, johnny Strom wrote:

On 09/08/2015 01:06 PM, Wei Liu wrote:

On Tue, Sep 08, 2015 at 12:59:39PM +0300, johnny Strom wrote:

On 09/08/2015 12:13 PM, Wei Liu wrote:

  xenstore-ls /local/domain/$DOMID/

Here is the output of xenstore-ls only one network card is working.

xenstore-ls  /local/domain/1


[...]

  vif = ""
   0 = ""
backend = "/local/domain/0/backend/vif/1/0"
backend-id = "0"
state = "4"
handle = "0"
mac = "00:16:3e:ee:aa:aa"
multi-queue-num-queues = "17"

OK so the number of queues is 17. You probably don't need that many
queues.

Setting the netback module parameter "xenvif_max_queues" to something like 4
should work around the problem for you.

Hello

I tried to set it to 4 in  /etc/modprobe.d/xen_netback.conf

rmmod xen_netback

modprobe -v xen_netback
insmod
/lib/modules/3.16.0-4-amd64/kernel/drivers/net/xen-netback/xen-netback.ko
xenvif_max_queues=4



But it is still the same issue..

Is xenvif_max_queues supported in Linux kernel 3.16?

modinfo -p xen_netback

separate_tx_rx_irq: (bool)
rx_drain_timeout_msecs: (uint)
rx_stall_timeout_msecs: (uint)
max_queues:Maximum number of queues per virtual interface (uint)

Oh, right, the parameter name should be "max_queues".

Sorry about that!

Wei.


It's still the same issue:

modprobe -v xen_netback
insmod
/lib/modules/3.16.0-4-amd64/kernel/drivers/net/xen-netback/xen-netback.ko
max_queues=4


If those are the precise steps you took, hasn't modprobe -v already
inserted the module without the parameter set? I.e. the later insmod had no
effect.


But what could be the reason for this?


Make sure that parameter is correctly set. You can look at
/sys/module/xen_netback/parameters/max_queues for the actual number.

You can even just echo a number to that file to set the value on the
fly.


Yes, thanks, that works. I will figure out how to load the module.






Could it be a problem with one of the CPUs, since it works if I boot dom0 with
just 14 CPU cores?


No, it can't be a problem with the CPUs themselves. With only 14 Dom0 vcpus the
DomU simply no longer exhausts its resources.


Ok

And DomU also works if I use kernel 3.2.68-1+deb7u2 from Debian Wheezy.


Best Regards Johnny




Wei.

___
Xen-users mailing list
xen-us...@lists.xen.org
http://lists.xen.org/xen-users



___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [Xen-users] Xen bridging issue.

2015-09-08 Thread Wei Liu
On Tue, Sep 08, 2015 at 12:59:39PM +0300, johnny Strom wrote:
> On 09/08/2015 12:13 PM, Wei Liu wrote:
> >  xenstore-ls /local/domain/$DOMID/
> 
> Here is the output of xenstore-ls only one network card is working.
> 
> xenstore-ls  /local/domain/1
> 
[...]
>  vif = ""
>   0 = ""
>backend = "/local/domain/0/backend/vif/1/0"
>backend-id = "0"
>state = "4"
>handle = "0"
>mac = "00:16:3e:ee:aa:aa"
>multi-queue-num-queues = "17"

OK so the number of queues is 17. You probably don't need that many
queues.

Setting the netback module parameter "xenvif_max_queues" to something like 4
should work around the problem for you.

Wei.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [Xen-users] Xen bridging issue.

2015-09-08 Thread johnny Strom

On 09/08/2015 12:13 PM, Wei Liu wrote:

  xenstore-ls /local/domain/$DOMID/


Here is the output of xenstore-ls only one network card is working.

xenstore-ls  /local/domain/1

vm = "/vm/63d68f91-093d-4cf2-8c8e-28c7bd8028ab"
name = "test-debian-jessie"
cpu = ""
 0 = ""
  availability = "online"
 1 = ""
  availability = "online"
 2 = ""
  availability = "online"
 3 = ""
  availability = "online"
 4 = ""
  availability = "online"
 5 = ""
  availability = "online"
 6 = ""
  availability = "online"
 7 = ""
  availability = "online"
 8 = ""
  availability = "online"
 9 = ""
  availability = "online"
 10 = ""
  availability = "online"
 11 = ""
  availability = "online"
 12 = ""
  availability = "online"
 13 = ""
  availability = "online"
 14 = ""
  availability = "online"
 15 = ""
  availability = "online"
 16 = ""
  availability = "online"
 17 = ""
  availability = "online"
 18 = ""
  availability = "online"
 19 = ""
  availability = "online"
 20 = ""
  availability = "online"
 21 = ""
  availability = "online"
 22 = ""
  availability = "online"
 23 = ""
  availability = "online"
 24 = ""
  availability = "online"
 25 = ""
  availability = "online"
 26 = ""
  availability = "online"
 27 = ""
  availability = "online"
 28 = ""
  availability = "online"
 29 = ""
  availability = "online"
 30 = ""
  availability = "online"
 31 = ""
  availability = "online"
 32 = ""
  availability = "online"
 33 = ""
  availability = "online"
 34 = ""
  availability = "online"
 35 = ""
  availability = "online"
 36 = ""
  availability = "online"
 37 = ""
  availability = "online"
 38 = ""
  availability = "online"
 39 = ""
  availability = "online"
memory = ""
 static-max = "4194304"
 target = "4194305"
 videoram = "-1"
device = ""
 suspend = ""
  event-channel = ""
 vbd = ""
  51713 = ""
   backend = "/local/domain/0/backend/qdisk/1/51713"
   backend-id = "0"
   state = "4"
   virtual-device = "51713"
   device-type = "disk"
   protocol = "x86_64-abi"
   ring-ref = "8"
   event-channel = "243"
   feature-persistent = "1"
  51714 = ""
   backend = "/local/domain/0/backend/qdisk/1/51714"
   backend-id = "0"
   state = "4"
   virtual-device = "51714"
   device-type = "disk"
   protocol = "x86_64-abi"
   ring-ref = "9"
   event-channel = "244"
   feature-persistent = "1"
 vif = ""
  0 = ""
   backend = "/local/domain/0/backend/vif/1/0"
   backend-id = "0"
   state = "4"
   handle = "0"
   mac = "00:16:3e:ee:aa:aa"
   multi-queue-num-queues = "17"
   queue-0 = ""
tx-ring-ref = "8960"
rx-ring-ref = "8961"
event-channel-tx = "245"
event-channel-rx = "246"
   queue-1 = ""
tx-ring-ref = "8962"
rx-ring-ref = "8963"
event-channel-tx = "247"
event-channel-rx = "248"
   queue-2 = ""
tx-ring-ref = "8964"
rx-ring-ref = "8965"
event-channel-tx = "249"
event-channel-rx = "250"
   queue-3 = ""
tx-ring-ref = "8966"
rx-ring-ref = "8967"
event-channel-tx = "251"
event-channel-rx = "252"
   queue-4 = ""
tx-ring-ref = "8968"
rx-ring-ref = "8969"
event-channel-tx = "253"
event-channel-rx = "254"
   queue-5 = ""
tx-ring-ref = "8970"
rx-ring-ref = "8971"
event-channel-tx = "255"
event-channel-rx = "256"
   queue-6 = ""
tx-ring-ref = "8972"
rx-ring-ref = "8973"
event-channel-tx = "257"
event-channel-rx = "258"
   queue-7 = ""
tx-ring-ref = "8974"
rx-ring-ref = "8975"
event-channel-tx = "259"
event-channel-rx = "260"
   queue-8 = ""
tx-ring-ref = "8976"
rx-ring-ref = "8977"
event-channel-tx = "261"
event-channel-rx = "262"
   queue-9 = ""
tx-ring-ref = "8978"
rx-ring-ref = "8979"
event-channel-tx = "263"
event-channel-rx = "264"
   queue-10 = ""
tx-ring-ref = "8980"
rx-ring-ref = "8981"
event-channel-tx = "265"
event-channel-rx = "266"
   queue-11 = ""
tx-ring-ref = "8982"
rx-ring-ref = "8983"
event-channel-tx = "267"
event-channel-rx = "268"
   queue-12 = ""
tx-ring-ref = "8984"
rx-ring-ref = "8985"
event-channel-tx = "269"
event-channel-rx = "270"
   queue-13 = ""
tx-ring-ref = "8986"
rx-ring-ref = "8987"
event-channel-tx = "271"
event-channel-rx = "272"
   queue-14 = ""
tx-ring-ref = "8988"
rx-ring-ref = "8989"
event-channel-tx = "273"
event-channel-rx = "274"
   queue-15 = ""
tx-ring-ref = "8990"
rx-ring-ref = "8991"
event-channel-tx = "275"
event-channel-rx = "276"
   queue-16 = ""
tx-ring-ref = "8992"
rx-ring-ref = "8993"
event-channel-tx = "277"
event-channel-rx = "278"
   request-rx-copy = "1"
   feature-rx-notify = "1"
   feature-sg = "1"
   feature-gso-tcpv4 = "1"
   feature-gso-tcpv6 = "1"
   feature-ipv6-csum-offload = "1"
  1 = ""
   backend = "/local/domain/0/backend/vif/1/1"
   backend-id = "0"
   state = "1"
   handle = "1"
   mac = "00:16:3e:ec:a7:b5"
control = ""
 shutdown = ""
 platform-feature-multiprocessor-suspend = "1"
 platform-feature-xs_reset_watches = "1"
data = 

Re: [Xen-devel] [Xen-users] Xen bridging issue.

2015-09-08 Thread Wei Liu
On Tue, Sep 08, 2015 at 02:07:21PM +0300, johnny Strom wrote:
> On 09/08/2015 01:06 PM, Wei Liu wrote:
> >On Tue, Sep 08, 2015 at 12:59:39PM +0300, johnny Strom wrote:
> >>On 09/08/2015 12:13 PM, Wei Liu wrote:
> >>>  xenstore-ls /local/domain/$DOMID/
> >>Here is the output of xenstore-ls only one network card is working.
> >>
> >>xenstore-ls  /local/domain/1
> >>
> >[...]
> >>  vif = ""
> >>   0 = ""
> >>backend = "/local/domain/0/backend/vif/1/0"
> >>backend-id = "0"
> >>state = "4"
> >>handle = "0"
> >>mac = "00:16:3e:ee:aa:aa"
> >>multi-queue-num-queues = "17"
> >OK so the number of queues is 17. You probably don't need that many
> >queues.
> >
> >Setting the netback module parameter "xenvif_max_queues" to something like 4
> >should work around the problem for you.
> 
> Hello
> 
> I tried to set it to 4 in  /etc/modprobe.d/xen_netback.conf
> 
> rmmod xen_netback
> 
> modprobe -v xen_netback
> insmod
> /lib/modules/3.16.0-4-amd64/kernel/drivers/net/xen-netback/xen-netback.ko
> xenvif_max_queues=4
> 
> 
> 
> But it is still the same issue..
> 
> Is xenvif_max_queues supported in Linux kernel 3.16?
> 
> modinfo -p xen_netback
> 
> separate_tx_rx_irq: (bool)
> rx_drain_timeout_msecs: (uint)
> rx_stall_timeout_msecs: (uint)
> max_queues:Maximum number of queues per virtual interface (uint)

Oh, right, the parameter name should be "max_queues".

Sorry about that!

Wei.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel