Re: multi-vCPU networking issues as client OS under Xen

2018-02-19 Thread Roger Pau Monné
On Mon, Feb 19, 2018 at 10:42:08AM +, Laurence Pawling wrote:
> >When using >1 vCPUs, can you set hw.xn.num_queues=1 in
> >/boot/loader.conf and try to reproduce the issue?
> >
> >I'm afraid this is most likely related to multiqueue (which is only
> >used with >1 vCPUs).
> >
> >Thanks, Roger.
> 
> Roger - thanks for your quick reply; this is confirmed. Setting
> hw.xn.num_queues=1 on the server VM when vCPUs > 1 prevents the issue.

I've also been told that, in order to rule out this being a
XenServer-specific issue, you should execute the following on Dom0 and
reboot the server:

# xe-switch-network-backend bridge

Then try to reproduce the issue again with >1 vCPUs (and, of course,
after removing the queue limit in loader.conf).
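
If the issue still reproduces with the bridge backend, that points at the
common netback/netfront path rather than at the vSwitch. To switch back
afterwards, the counterpart command should be the following (assuming the
default Open vSwitch backend was in use):

# xe-switch-network-backend openvswitch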

> For reference, please can you comment on the performance impact of this?

I'm afraid I don't have any numbers.

Roger.


Re: multi-vCPU networking issues as client OS under Xen

2018-02-19 Thread Laurence Pawling via freebsd-virtualization
>When using >1 vCPUs, can you set hw.xn.num_queues=1 in
>/boot/loader.conf and try to reproduce the issue?
>
>I'm afraid this is most likely related to multiqueue (which is only
>used with >1 vCPUs).
>
>Thanks, Roger.

Roger - thanks for your quick reply; this is confirmed. Setting
hw.xn.num_queues=1 on the server VM when vCPUs > 1 prevents the issue.
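
For the archives, the exact change is a single line in the guest's
/boot/loader.conf, applied with a reboot; it caps the xn(4) netfront
driver at a single queue pair:

# /boot/loader.conf on the FreeBSD server VM
hw.xn.num_queues="1"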

For reference, please can you comment on the performance impact of this?

Laurence




Re: multi-vCPU networking issues as client OS under Xen

2018-02-19 Thread Roger Pau Monné
On Mon, Feb 19, 2018 at 09:58:30AM +, Laurence Pawling via freebsd-xen 
wrote:
> Hi all,
> 
>  
> 
> I’m wondering if anyone here has seen this issue before; I’ve spent the
> last couple of days troubleshooting it:
> 
>  
> 
> Platform:
> 
> Host: XenServer 7.0 running on 2 x Intel Xeon E5-2660 v4, 256GB RAM
> 
> Server VM: FreeBSD 11 (tested on 11.0-p15 and 11.1-p6), 2GB RAM (also
> tested with 32GB RAM), 1 x 50GB HDD, 1 x NIC, 2 or more vCPUs in any
> combination (2 sockets x 1 core, 1 socket x 2 cores, …)
> 
> Client VM: FreeBSD 11, any configuration of vCPUs, RAM and HDD.
> 
>  
> 
> Behaviour:
> 
> Sporadic interruption of TCP sessions when utilising the above machine as
> a “server” with “clients” connecting. Looking at the traffic with
> pcap/Wireshark, you see a TCP Dup ACK sent from both ends, followed by the
> client sending an RST packet, terminating the TCP session. We have also
> seen evidence of the client sending a keepalive packet, which is ACK’d by
> the server before the RST is sent from the client end.
> 
>  
> 
> To recreate:
> 
> On the above VM, perform a vanilla install of nginx:
> 
> pkg install nginx
> 
> service nginx onestart
> 
> Then on a client VM (currently only tested with FreeBSD), run the following 
> (or similar):
> 
> for i in {1..1}; do if [ $(curl -s -o /dev/null -w "%{http_code}" 
> http://10.2.122.71) != 200 ] ; then echo "error"; fi; done
> 
> When vCPUs=1 on the server, I get no errors; when vCPUs>1, I get errors
> reported. The frequency of errors *seems* to be proportional to the number
> of vCPUs, but they are sporadic with no clear periodicity or pattern, so
> that is just anecdotal. Also, the problem seems by far the most prevalent
> when communicating between two VMs on the same host, in the same VLAN. Xen
> still sends packets via the switch rather than bridging internally between
> the interfaces.

When using >1 vCPUs, can you set hw.xn.num_queues=1 in
/boot/loader.conf and try to reproduce the issue?

I'm afraid this is most likely related to multiqueue (which is only
used with >1 vCPUs).
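
(After the reboot, running "kenv hw.xn.num_queues" on the guest should
confirm that the tunable was picked up; kenv simply prints the kernel
environment, so the check is generic rather than netfront-specific.)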

Thanks, Roger.


multi-vCPU networking issues as client OS under Xen

2018-02-19 Thread Laurence Pawling via freebsd-virtualization
Hi all,

 

I’m wondering if anyone here has seen this issue before; I’ve spent the
last couple of days troubleshooting it:

 

Platform:

Host: XenServer 7.0 running on 2 x Intel Xeon E5-2660 v4, 256GB RAM

Server VM: FreeBSD 11 (tested on 11.0-p15 and 11.1-p6), 2GB RAM (also
tested with 32GB RAM), 1 x 50GB HDD, 1 x NIC, 2 or more vCPUs in any
combination (2 sockets x 1 core, 1 socket x 2 cores, …)

Client VM: FreeBSD 11, any configuration of vCPUs, RAM and HDD.

 

Behaviour:

Sporadic interruption of TCP sessions when utilising the above machine as
a “server” with “clients” connecting. Looking at the traffic with
pcap/Wireshark, you see a TCP Dup ACK sent from both ends, followed by the
client sending an RST packet, terminating the TCP session. We have also
seen evidence of the client sending a keepalive packet, which is ACK’d by
the server before the RST is sent from the client end.
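
For anyone wanting to reproduce the captures, something along these lines
is sufficient; the interface name xn0 is an assumption (the usual netfront
name on a FreeBSD guest under Xen):

# on the server VM, capture HTTP traffic for later Wireshark analysis
tcpdump -i xn0 -s 0 -w /tmp/http-rst.pcap port 80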

 

To recreate:

On the above VM, perform a vanilla install of nginx:

pkg install nginx

service nginx onestart

Then on a client VM (currently only tested with FreeBSD), run the following (or 
similar):

for i in {1..1}; do if [ $(curl -s -o /dev/null -w "%{http_code}" 
http://10.2.122.71) != 200 ] ; then echo "error"; fi; done
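
Note that brace expansion such as {1..1} is a bash/zsh feature; on the
stock FreeBSD /bin/sh, a POSIX-compatible equivalent would be the
following (the iteration count of 10000 is illustrative):

i=0
while [ "$i" -lt 10000 ]; do
    code=$(curl -s -o /dev/null -w "%{http_code}" http://10.2.122.71)
    [ "$code" != "200" ] && echo "error: got $code"
    i=$((i + 1))
done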

When vCPUs=1 on the server, I get no errors; when vCPUs>1, I get errors
reported. The frequency of errors *seems* to be proportional to the number
of vCPUs, but they are sporadic with no clear periodicity or pattern, so
that is just anecdotal. Also, the problem seems by far the most prevalent
when communicating between two VMs on the same host, in the same VLAN. Xen
still sends packets via the switch rather than bridging internally between
the interfaces.

Note that we have not had a chance to investigate the effect of different
numbers of vCPUs on the *client* end; however, the behaviour does seem to
be governed entirely by the server end.

 

We cannot recreate this issue using the same FreeBSD image and the same
configuration when KVM is the hypervisor.

 

Has anyone met this before?

 

Thanks,

 

Laurence





Re: Nested virtualization networking issues with bhyve

2015-05-13 Thread Allan Jude
On 2015-05-13 19:15, Jonathan Wong wrote:
> I've recently been experimenting with VMware ESXi 6 + VMware Workstation,
> and I managed to get nested virtualization working with a FreeBSD guest
> and a nested bhyve Ubuntu VM. However, networking doesn't seem to come up
> properly for the Ubuntu VM, either during or after the install.
> 
> The bridged networking works as expected with FreeBSD guests and other
> nodes on the network. However, the nested VMs themselves can't seem to
> bring up networking properly.
> 
> lspci shows the virtio network device, but even configuring static IPs
> doesn't help. The VM is not able to receive or route any packets from any
> VM.
> 
> The FreeBSD guest can ssh into the nested VM, and the nested VM can ssh
> into the FreeBSD guest. The nested VM cannot contact the internet or the
> local gateway (tried with E1000 and VMXNET 2). Computers on the network
> can ssh into the FreeBSD guest, and the FreeBSD guest can ssh into other
> computers on the network, but computers on the network cannot contact the
> nested VM. The VMware host cannot contact the nested VM, or vice versa.
> 
> Any tips to getting this to work?
> 
> Thanks.

I helped you (or someone else) with this issue on IRC.

The answer was to enable promiscuous mode on the NIC in ESXi:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004099

That way the FreeBSD (middle) host receives the packets destined for the
inner VM. Without promiscuous mode, any packet with an Ethernet address
other than that of the ESXi virtual NIC will not be passed into the VM.
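
For context, the bridged setup on the FreeBSD (middle) guest typically
looks like the sketch below; the ESXi-facing interface name is an
assumption (vmx0 for VMXNET3, em0 if using the E1000). Every nested VM's
frames leave through it carrying the nested VM's own MAC address, which is
exactly what the port group drops unless promiscuous mode is allowed:

# on the FreeBSD guest that runs bhyve (interface names assumed)
ifconfig tap0 create
ifconfig bridge0 create
ifconfig bridge0 addm vmx0 addm tap0 up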

-- 
Allan Jude





Nested virtualization networking issues with bhyve

2015-05-13 Thread Jonathan Wong
I've recently been experimenting with VMware ESXi 6 + VMware Workstation,
and I managed to get nested virtualization working with a FreeBSD guest
and a nested bhyve Ubuntu VM. However, networking doesn't seem to come up
properly for the Ubuntu VM, either during or after the install.

The bridged networking works as expected with FreeBSD guests and other
nodes on the network. However, the nested VMs themselves can't seem to
bring up networking properly.

lspci shows the virtio network device, but even configuring static IPs
doesn't help. The VM is not able to receive or route any packets from any
VM.

The FreeBSD guest can ssh into the nested VM, and the nested VM can ssh
into the FreeBSD guest. The nested VM cannot contact the internet or the
local gateway (tried with E1000 and VMXNET 2). Computers on the network
can ssh into the FreeBSD guest, and the FreeBSD guest can ssh into other
computers on the network, but computers on the network cannot contact the
nested VM. The VMware host cannot contact the nested VM, or vice versa.

Any tips on getting this to work?

Thanks.


Re: Networking issues

2014-02-09 Thread Neel Natu
Hi Sebastian,

On Fri, Feb 7, 2014 at 12:03 PM,   wrote:
> Hello virtualization-lovers,
>
> I have been a dedicated FreeBSD user since 7.2, and I welcome the new
> bhyve hypervisor. :)
>
> So I set everything up, launched the guest, and set up the network. The
> problem now is that I can reach the host IP but not the default gateway.
> Did I forget to set something up?
> The system is a hosted root box.
>
> My current setup (I changed the IPs except for the last octet):
> ---snip---
> host:
> # ifconfig
> re0: flags=8943 metric 0
> mtu 1500
>
> options=82099
> ether 30:85:a9:ed:01:22
> inet 192.168.0.137 netmask 0xffe0 broadcast 192.168.0.159
> inet6 fe80::3285:a9ff:feed:122%re0 prefixlen 64 scopeid 0x1
> nd6 options=29
> media: Ethernet autoselect (1000baseT )
> status: active
> bridge0: flags=8843 metric 0 mtu
> 1500
> ether 02:0d:2a:df:6e:00
> nd6 options=1
> id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
> maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
> root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
> member: tap0 flags=143
> ifmaxaddr 0 port 4 priority 128 path cost 200
> member: re0 flags=143
> ifmaxaddr 0 port 1 priority 128 path cost 2
> tap0: flags=8943 metric 0
> mtu 1500
> options=8
> ether 00:bd:fe:79:0e:00
> nd6 options=29
> media: Ethernet autoselect
> status: active
> Opened by PID 16910
>
> # netstat -rn
> Routing tables
>
> Internet:
> DestinationGatewayFlagsRefs  Use  Netif Expire
> default5.9.157.129UGS 0   293321re0
> 192.168.0.128/27   link#1 U   0   30re0
> 192.168.0.137  link#1 UHS 00lo0
> 127.0.0.1  link#2 UH  0 1606lo0
>
> ---snip---
>
> ---snip---
> guest:
> # ifconfig
> vtnet0: flags=8943 metric 0
> mtu 1500
> options=80028
> ether 00:a0:98:18:c4:69
> inet 192.168.0.154 netmask 0xffe0 broadcast 192.168.0.159
> inet6 fe80::2a0:98ff:fe18:c469%vtnet0 prefixlen 64 scopeid 0x1
> nd6 options=29
> media: Ethernet 10Gbase-T 
> status: active
> lo0: flags=8049 metric 0 mtu 16384
> options=63
> inet6 ::1 prefixlen 128
> inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
> inet 127.0.0.1 netmask 0xff00
> nd6 options=21
>
> # netstat -rn
> Routing tables
>
> Internet:
> DestinationGatewayFlagsRefs  Use  Netif Expire
> default5.9.157.129UGS 0  418 vtnet0
> 192.168.0.128/27   link#1 U   0   24 vtnet0
> 192.168.0.154  link#1 UHS 00lo0
> 127.0.0.1  link#2 UH  0   24lo0
>
> ---snip---
>
> ping host -> guest works
>
> # ping 192.168.0.154
> PING 192.168.0.154 (192.168.0.154): 56 data bytes
> 64 bytes from 192.168.0.154: icmp_seq=0 ttl=64 time=0.083 ms
> 64 bytes from 192.168.0.154: icmp_seq=1 ttl=64 time=0.094 ms
>
>
> ping guest -> host works
>
> # ping 192.168.0.137
> PING 192.168.0.137 (192.168.0.137): 56 data bytes
> 64 bytes from 192.168.0.137: icmp_seq=0 ttl=64 time=0.398 ms
> 64 bytes from 192.168.0.137: icmp_seq=1 ttl=64 time=0.069 ms
>
>
> ping 173.194.70.102 (google.com) from guest - doesn't work...
> # ping 173.194.70.102
> PING 173.194.70.102 (173.194.70.102): 56 data bytes
> --- 173.194.70.102 ping statistics ---
> 3 packets transmitted, 0 packets received, 100.0% packet loss
>
> tcpdump listening on the host:
>
> # tcpdump -N -vv -i bridge0
> tcpdump: WARNING: bridge0: no IPv4 address assigned
> tcpdump: listening on bridge0, link-type EN10MB (Ethernet), capture size
> 65535 bytes
> 19:58:19.139767 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has static
> tell 192.168.0.137, length 46
>
> ---^ same on tap0
>
>
> What's wrong with that setup? Did I forget to set a proper route? Or is it
> a MAC address issue?
>

Can you ping the default router from your guest?

Also, I was a bit puzzled that the default router is 5.9.157.129 on a
192.168.0.128/27 subnet. Should I read it as 192.168.0.129 instead?
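
If not, it would be worth checking whether the gateway's MAC resolves at
all from the guest, along these lines (using 192.168.0.129 as the guessed
router address):

# on the guest
ping -c 3 192.168.0.129
arp -an    # an "(incomplete)" entry for the gateway means ARP is failing

If the real gateway does sit outside the guest's subnet, the guest would
also need an explicit interface route before its default route can work.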

best
Neel

> Thanks in advance,
>
> Sebastian


Networking issues

2014-02-07 Thread ickler
Hello virtualization-lovers,

I have been a dedicated FreeBSD user since 7.2, and I welcome the new
bhyve hypervisor. :)

So I set everything up, launched the guest, and set up the network. The
problem now is that I can reach the host IP but not the default gateway.
Did I forget to set something up?
The system is a hosted root box.

My current setup (I changed the IPs except for the last octet):
---snip---
host:
# ifconfig
re0: flags=8943 metric 0
mtu 1500

options=82099
ether 30:85:a9:ed:01:22
inet 192.168.0.137 netmask 0xffe0 broadcast 192.168.0.159
inet6 fe80::3285:a9ff:feed:122%re0 prefixlen 64 scopeid 0x1
nd6 options=29
media: Ethernet autoselect (1000baseT )
status: active
bridge0: flags=8843 metric 0 mtu
1500
ether 02:0d:2a:df:6e:00
nd6 options=1
id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
member: tap0 flags=143
ifmaxaddr 0 port 4 priority 128 path cost 200
member: re0 flags=143
ifmaxaddr 0 port 1 priority 128 path cost 2
tap0: flags=8943 metric 0
mtu 1500
options=8
ether 00:bd:fe:79:0e:00
nd6 options=29
media: Ethernet autoselect
status: active
Opened by PID 16910

# netstat -rn
Routing tables

Internet:
DestinationGatewayFlagsRefs  Use  Netif Expire
default5.9.157.129UGS 0   293321re0
192.168.0.128/27   link#1 U   0   30re0
192.168.0.137  link#1 UHS 00lo0
127.0.0.1  link#2 UH  0 1606lo0

---snip---

---snip---
guest:
# ifconfig
vtnet0: flags=8943 metric 0
mtu 1500
options=80028
ether 00:a0:98:18:c4:69
inet 192.168.0.154 netmask 0xffe0 broadcast 192.168.0.159
inet6 fe80::2a0:98ff:fe18:c469%vtnet0 prefixlen 64 scopeid 0x1
nd6 options=29
media: Ethernet 10Gbase-T 
status: active
lo0: flags=8049 metric 0 mtu 16384
options=63
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
inet 127.0.0.1 netmask 0xff00
nd6 options=21

# netstat -rn
Routing tables

Internet:
DestinationGatewayFlagsRefs  Use  Netif Expire
default5.9.157.129UGS 0  418 vtnet0
192.168.0.128/27   link#1 U   0   24 vtnet0
192.168.0.154  link#1 UHS 00lo0
127.0.0.1  link#2 UH  0   24lo0

---snip---

ping host -> guest works

# ping 192.168.0.154
PING 192.168.0.154 (192.168.0.154): 56 data bytes
64 bytes from 192.168.0.154: icmp_seq=0 ttl=64 time=0.083 ms
64 bytes from 192.168.0.154: icmp_seq=1 ttl=64 time=0.094 ms


ping guest -> host works

# ping 192.168.0.137
PING 192.168.0.137 (192.168.0.137): 56 data bytes
64 bytes from 192.168.0.137: icmp_seq=0 ttl=64 time=0.398 ms
64 bytes from 192.168.0.137: icmp_seq=1 ttl=64 time=0.069 ms


ping 173.194.70.102 (google.com) from guest - doesn't work...
# ping 173.194.70.102
PING 173.194.70.102 (173.194.70.102): 56 data bytes
--- 173.194.70.102 ping statistics ---
3 packets transmitted, 0 packets received, 100.0% packet loss

tcpdump listening on the host:

# tcpdump -N -vv -i bridge0
tcpdump: WARNING: bridge0: no IPv4 address assigned
tcpdump: listening on bridge0, link-type EN10MB (Ethernet), capture size
65535 bytes
19:58:19.139767 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has static
tell 192.168.0.137, length 46

---^ same on tap0


What's wrong with that setup? Did I forget to set a proper route? Or is it
a MAC address issue?

Thanks in advance,

Sebastian