Traffic being tagged with VLAN 0 w/ KVM

2018-01-18 Thread Sean Lair
Hi all,

We are seeing some strange behavior with our KVM guests.  When a guest VM is 
on the same KVM host as the vRouter, traffic to the guest VM is tagged with 
VLAN 0 (it should just be untagged traffic).  This breaks connectivity for 
operating systems that aren't expecting VLAN-tagged packets.  The guest 
network's VLAN is 306, and we can see CloudStack creating the 306 VLAN 
sub-interface on the host along with the corresponding bridge interface - that 
all looks good.

This only occurs when the guest VM is on the same host as the vRouter.  Our 
suspicion is that the physical switch connecting the KVM hosts is being smart 
and stripping the VLAN 0 tag, which would explain why traffic flowing between 
hosts doesn't have the problem.

Cloudstack 4.9.3 w/ Advanced Networking
CentOS 7.4 KVM host (fully up-to-date)
Bridge networking within the host

Here is a tcpdump from the guest VM, showing the dot1q "VLAN 0" tag on the 
response packets only.  The guest doesn't support VLAN tagging and just 
reports "request timed out" for the ping:

01:18:22.723029 02:00:0c:a3:00:05 > 02:00:56:46:00:04, ethertype IPv4 (0x0800), 
length 98: 10.1.1.246 > 8.8.8.8: ICMP echo request, id 1375, seq 150, length 64
01:18:22.738067 02:00:56:46:00:04 > 02:00:0c:a3:00:05, ethertype 802.1Q 
(0x8100), length 102: vlan 0, p 0, ethertype IPv4, 8.8.8.8 > 10.1.1.246: ICMP 
echo reply, id 1375, seq 150, length 64
01:18:23.724363 02:00:0c:a3:00:05 > 02:00:56:46:00:04, ethertype IPv4 (0x0800), 
length 98: 10.1.1.246 > 8.8.8.8: ICMP echo request, id 1375, seq 151, length 64
01:18:23.739301 02:00:56:46:00:04 > 02:00:0c:a3:00:05, ethertype 802.1Q 
(0x8100), length 102: vlan 0, p 0, ethertype IPv4, 8.8.8.8 > 10.1.1.246: ICMP 
echo reply, id 1375, seq 151, length 64
01:18:24.725480 02:00:0c:a3:00:05 > 02:00:56:46:00:04, ethertype IPv4 (0x0800), 
length 98: 10.1.1.246 > 8.8.8.8: ICMP echo request, id 1375, seq 152, length 64
01:18:24.740498 02:00:56:46:00:04 > 02:00:0c:a3:00:05, ethertype 802.1Q 
(0x8100), length 102: vlan 0, p 0, ethertype IPv4, 8.8.8.8 > 10.1.1.246: ICMP 
echo reply, id 1375, seq 152, length 64
01:18:25.726752 02:00:0c:a3:00:05 > 02:00:56:46:00:04, ethertype IPv4 (0x0800), 
length 98: 10.1.1.246 > 8.8.8.8: ICMP echo request, id 1375, seq 153, length 64
01:18:25.741749 02:00:56:46:00:04 > 02:00:0c:a3:00:05, ethertype 802.1Q 
(0x8100), length 102: vlan 0, p 0, ethertype IPv4, 8.8.8.8 > 10.1.1.246: ICMP 
echo reply, id 1375, seq 153, length 64

Any idea how to stop KVM from adding that VLAN 0 dot1q tag?  KVM is a new 
addition to our environment.
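In case it helps anyone reproduce this, capturing at each hop on the host should show where the tag first gets inserted, and VLAN offload on the NIC is one thing worth ruling out. The interface names below are examples, not necessarily what your host uses:

```shell
# Capture at each hop to see where the 802.1Q header first appears.
# Substitute your guest's tap device, VLAN bridge, and VLAN sub-interface.
tcpdump -nn -e -i vnet3 icmp         # at the guest's tap device
tcpdump -nn -e -i brbond0-306 icmp   # at the VLAN 306 bridge
tcpdump -nn -e -i bond0.306 icmp     # at the VLAN sub-interface

# Check VLAN tag offload on the physical NIC (for bonds, each slave),
# and try disabling it if it is on:
ethtool -k eth0 | grep vlan-offload
ethtool -K eth0 tx-vlan-offload off rx-vlan-offload off
```

These are read-only captures plus one offload toggle, so they are safe to try on a test host first.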

Thanks
Sean


Re: [PROPOSE] EOL for supported OSes & Hypervisors

2018-01-18 Thread Jean-Francois Nadeau
+1 on the versions of ACS in the matrix.   I.e. it sounds like today most
production setups run 4.9 or earlier, and until 4.11 is GA and stabilizes,
4.9 is the only good option for a go-live today.  Knowing how long 4.9 will
be supported is key.

On Wed, Jan 17, 2018 at 9:50 AM, Ron Wheeler  wrote:

> It might also be helpful to know what version of ACS as well.
> Some indication of your plan/desire to upgrade ACS, hypervisor, or
> management server operating system might be helpful.
> There is a big difference between the situation where someone is running
> ACS 4.9x on CentOS 6 and wants to upgrade to ACS 4.12 while keeping CentOS
> 6 and another environment where the planned upgrade to ACS4.12 will be done
> at the same time as an upgrade to CentOS 7.x.
>
> Is it fair to say that any proposed changes in this area will occur in
> 4.12 at the earliest and will not likely occur before summer 2018?
>
>
> Ron
>
>
>
> On 17/01/2018 4:23 AM, Paul Angus wrote:
>
>> Thanks Eric,
>>
>> As you'll see from the intro email to this thread, the purpose here is to
>> ensure that we don't strand a 'non-trivial' number of users by dropping
>> support for any given hypervisor, or management server operating system.
>>
>> Hence the request to users to let the community know what they are using,
>> so that a fact-based community consensus can be reached.
>>
>>
>> Kind regards,
>>
>> Paul Angus
>>
>> paul.an...@shapeblue.com
>> www.shapeblue.com
>> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>> @shapeblue
>>
>>
>> -Original Message-
>> From: Eric Lee Green [mailto:eric.lee.gr...@gmail.com]
>> Sent: 16 January 2018 23:36
>> To: users@cloudstack.apache.org
>> Subject: Re: [PROPOSE] EOL for supported OSes & Hypervisors
>>
>> This is the type of discussion that I wanted to open - the argument
>>> that I see for earlier dropping of v6 is that - between May 2018 and
>>> Q2 2020 RHEL/CentOS 6.x will only receive security and
>>> mission-critical updates, while packages we depend on or may want to
>>> utilise in the future are being deprecated or no longer developed for v6.x
>>>
>> But this has always been the case for CentOS 6.x. It has been running
>> antique versions of everything for quite some time - for example, versions
>> of GNOME and init that have been obsolete for years, and the same goes for
>> the version of MySQL it ships with.
>>
>> The reality is that Centos 6.x guest support, at the very least, needs to
>> be tested with each new version of Cloudstack until final EOL of Centos 6
>> in Q2 2020. New versions of Cloudstack with new features not supported by
>> Centos 6 (such as LVM support for KVM, which requires the LIO storage
>> stack) can require Centos 7 or later, but the last Cloudstack version that
>> supports Centos 6.x as its server host should continue to receive bug fixes
>> until Centos 6.x is EOL.
>>
>> Making someone's IT investment obsolete is a path to irrelevancy.
>> Cloudstack is already an also-ran in the cloud marketplace. Making
>> someone's IT investment obsolete before its official EOL is a good way to
>> trigger a mass migration away from your technology.
>>
>> This doesn't particularly affect me, since my CentOS 6 virtualization
>> hosts are not running Cloudstack and are going to be re-imaged to CentOS
>> 7 before being added to the Cloudstack cluster, but ignoring the IT
>> environment that people actually live in, as opposed to the one we wish
>> existed, is annoying regardless. A friend of mine once said of the state of
>> ERP software, "enterprise software is dog food, if dog food were being
>> designed by cats." That is, the people writing the software rarely have any
>> understanding of how it is actually used by real-life enterprises in
>> real-life environments. Don't be those people.
>>
>>
>> On 01/16/2018 09:58 AM, Paul Angus wrote:
>>
>>> Hi Eric,
>>>
>>> This is the type of discussion that I wanted to open - the argument
>>> that I see for earlier dropping of v6 is that between May 2018 and Q2
>>> 2020 RHEL/CentOS 6.x will only receive security and mission-critical
>>> updates, while packages we depend on or may want to utilise in the
>>> future are being deprecated or no longer developed for v6.x. Also, the
>>> testing and development burden on the CloudStack community increases as
>>> we try to maintain backward compatibility while including new versions.
>>>
>>> Needing installation documentation for centos 7 is a great point, and
>>> something that we need to address regardless.
>>>
>>>
>>> Does anyone else have a view? I'd really like to hear from a wide range
>>> of people.
>>>
>>> Kind regards,
>>>
>>> Paul Angus
>>>
>>>
>>>
>>> -Original Message-
>>> From: Eric Green [mailto:eric.lee.gr...@gmail.com]
>>> Sent: 12 January 2018 17:24
>>> To: 

Re: kvm live volume migration

2018-01-18 Thread Marc-Aurèle Brothier
There's a PR waiting to be fixed for live migration with local volumes on
KVM, so it will come at some point. I'm the one who made the PR, but I'm
not running the upstream release, so it's hard for me to debug the problem.
You can subscribe to the PR to get notified when things move on it.

https://github.com/apache/cloudstack/pull/1709

On Wed, Jan 17, 2018 at 10:56 AM, Eric Green 
wrote:

> Theoretically, on CentOS 7 as the KVM host OS, it could be done with a
> couple of pauses and the snapshotting mechanism built into qcow2, but there
> is no simple way to do it directly via virsh, the libvirt/qemu control
> program used to manage virtualization. It's not as simple as issuing a
> 'migrate volume' vMotion call in VMware.
>
> I scripted out how it would work without that direct support in
> libvirt/virsh and after looking at all the points where things could go
> wrong, honestly, I think we need to wait until there is support in
> libvirt/virsh to do this. virsh clearly has the capability internally to do
> live migration of storage, since it does this for live domain migration of
> local storage between machines when migrating KVM domains from one host to
> another, but that capability is not currently exposed in a way Cloudstack
> could use, at least not on Centos 7.
>
>
> > On Jan 17, 2018, at 01:05, Piotr Pisz  wrote:
> >
> > Hello,
> >
> > Is there a chance that one day it will be possible to migrate volume
> (root disk) of a live VM in KVM between storage pools (in CloudStack)?
> > Like a storage vMotion in Vmware.
> >
> > Best regards,
> > Piotr
> >
>
>
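For reference, the pause-and-snapshot approach Eric describes might be sketched roughly as follows. The domain name and paths are hypothetical examples, and this is explicitly not production-safe: the overlay created in step 1 still lives on the old storage and would need a later blockcommit, which is exactly where things get fragile.

```shell
#!/bin/sh
# Rough sketch only - NOT production-safe. Domain name and paths are
# hypothetical examples.
DOM=myvm
OLD=/var/lib/libvirt/images/myvm.qcow2
NEW=/mnt/newpool/myvm.qcow2

# 1. Redirect new guest writes into an external qcow2 overlay, leaving
#    the base image read-only and safe to copy while the VM runs.
virsh snapshot-create-as "$DOM" tmpsnap --disk-only --atomic --no-metadata

# 2. Copy the (now static) base image to the new storage pool.
cp "$OLD" "$NEW"

# 3. Briefly pause the guest, repoint the overlay's backing file at the
#    copy, and resume. A failure anywhere in this window can leave the
#    VM broken - hence the advice to wait for proper libvirt support.
virsh suspend "$DOM"
qemu-img rebase -u -b "$NEW" /var/lib/libvirt/images/myvm.tmpsnap
virsh resume "$DOM"
```

This illustrates why the manual route is risky: every step between suspend and resume has to succeed, and the overlay still needs to be merged and moved afterwards.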