Re: KVM Problem by deploying VPC

2018-05-25 Thread Andrija Panic
Hi Benjamin,

no experience here with CentOS 7 vs KVM, really...

What I do remember, though it is VXLAN related: with kernel 4.0 and onwards
(we are on Ubuntu), if you try to provision more VXLAN interfaces than
igmp_max_memberships ( /proc/sys/net/ipv4/igmp_max_memberships ) allows, it
will start the interfaces but they will not be UP, and a proper CloudStack
message is shown in the logs (the message that the OS returns when you try to
bring the interface UP; we use ACS 4.8) - so it is easy to see the issue and
the failed actions (though the VM would be started fine :) )
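
A quick way to compare the two numbers on a host - a minimal sketch (the
"vxlan" name filter is an assumption about how the interfaces are named):

cat /proc/sys/net/ipv4/igmp_max_memberships   # current kernel limit
ip -o link show | grep -c vxlan               # vxlan interfaces provisioned
# raising the limit if needed (the value 200 is only an example):
sysctl -w net.ipv4.igmp_max_memberships=200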

Again, you should be able to examine the agent logs and see which script is
failing... I expect the UP action is either failing or not even attempted (the
latter would perhaps be a bug in ACS, but I don't expect that...)
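
A hypothetical one-liner to spot the failing step in the agent log (the log
path is the usual default; the two patterns match the "Executing:" and
"Exit value" lines quoted further down this thread):

grep -E "Executing:|Exit value" /var/log/cloudstack/agent/agent.log | tail -n 20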

Cheers

On 24 May 2018 at 16:03, Adam Witwicki <awitwi...@oakfordis.com> wrote:

> Hi Ben
>
> " Now i moved back to centos7 and have another problem. all vlan bridges
> that will be created by cloudstack-agent not manual get in ifup state. if i
> ifup the vlan bridges manual all works fine. Have someone a idea how i can
> force centos to automaticly bring up new if devices ?"
>
> I had this issue with 4.9 and ended up writing a dirty script to check
> the down bridges every minute via cron and bring them up.
>
> iface=`/usr/sbin/ip add | grep -E "brbond.*DOWN" | awk -F ':' '{print $2}'`
> echo "$iface" | while read -r a; do /usr/sbin/ip link set dev $a up; done
>
>
> Thanks
>
> Adam
>
> -Original Message-
> From: Benjamin Naber <benjamin.na...@coders-area.de>
> Sent: 24 May 2018 14:37
> To: users@cloudstack.apache.org
> Subject: Re: KVM Problem by deploying VPC
>
> ** This mail originated from OUTSIDE the Oakford corporate network. Treat
> hyperlinks and attachments in this email with caution. **
>
> Hi Andrija,
>
> Thanks for the reply. I solved the error. Now I have another error. The problem
> was that it failed without a bonding device. I created a bonding device (bond0)
> with a single NIC and now it works without problems on Ubuntu.
>
> Example (didn't work):
>
> auto enp8s0f0
> iface enp8s0f0 inet manual
>
> auto cloudbr0
> iface cloudbr0 inet static
> address 10.253.250.230
> gateway 10.253.250.1
> netmask 255.255.255.0
> dns-nameservers 8.8.8.8 8.8.4.4
> bridge_ports enp8s0f0
> bridge_fd 5
> bridge_stp off
> bridge_maxwait 1
>
> Example (works fine):
>
> auto enp8s0f0
> iface enp8s0f0 inet manual
> bond-master bond0
>
> auto bond0
> iface bond0 inet manual
> bond-mode active-backup
> bond-miimon 100
> bond-slaves none
>
>
> auto cloudbr0
> iface cloudbr0 inet static
> address 10.253.250.230
> gateway 10.253.250.1
> netmask 255.255.255.0
> dns-nameservers 8.8.8.8 8.8.4.4
> bridge_ports bond0
> bridge_fd 5
> bridge_stp off
> bridge_maxwait 1
>
> Now I moved back to CentOS 7 and have another problem: all VLAN bridges
> that get created by cloudstack-agent (not manually) stay in a down state. If I
> ifup the VLAN bridges manually, all works fine. Does someone have an idea how I
> can force CentOS to automatically bring up new interfaces?
>
>
> Kind regards
>
> Ben
>
> >
> > Andrija Panic <andrija.pa...@gmail.com> wrote on 24 May 2018 at
> 00:37:
> >
> > Hi Ben,
> >
> > the interesting parts seem to be:
> > 2018-05-23 12:59:47,213 DEBUG [kvm.resource.LibvirtComputingResource]
> > (agentRequest-Handler-5:null) (logid:ef8b353e) getting broadcast uri for
> > pif enp8s0f0.233 and bridge brenp8s0f0-233
> > 2018-05-23 12:59:47,213 DEBUG [resource.virtualnetwork.VirtualRoutingResource]
> > (agentRequest-Handler-5:null) (logid:ef8b353e) Transforming
> > com.cloud.agent.api.routing.IpAssocVpcCommand to ConfigItems
> > 2018-05-23 12:59:47,581 DEBUG [resource.virtualnetwork.VirtualRoutingResource]
> > (agentRequest-Handler-5:null) (logid:ef8b353e) Processing FileConfigItem,
> > copying 257 characters to ip_associations.json took 340ms
> > 2018-05-23 12:59:47,582 DEBUG [kvm.resource.LibvirtComputingResource]
> > (agentRequest-Handler-5:null) (logid:ef8b353e) Executing:
> > /usr/share/cloudstack-common/scripts/network/domr/router_proxy.sh
> > update_config.py 169.254.2.247 ip_associations.json
> > 2018-05-23 12:59:47,766 DEBUG [kvm.resource.LibvirtComputingResource]
> > (agentRequest-Handler-5:null) (logid:ef8b353e) Exit value is 1
> >
> > What I believe we see here is that the IP association fails for some
> > reason (exit value is 1), and after that ACS will just stop the VM and do
> > the cleanup (all the log lines after this one)..

Re: SOLVED: KVM Problem by deploying VPC

2018-05-25 Thread Andrija Panic
Hi Benjamin,

good that you solved it - now, may I propose that you update the documentation
over here :) if you have time:
http://docs.cloudstack.apache.org/projects/cloudstack-installation/en/latest/hypervisor/kvm.html
Just write up the special requirement for the CentOS 7 basic install.
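
The requirement itself boils down to one package - a sketch of what the doc
note would say (ifconfig comes from net-tools on CentOS 7, as found in this
thread):

yum install -y net-tools   # provides ifconfig, used by modifyvlan.sh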

Cheers

On 24 May 2018 at 17:45, Benjamin Naber <benjamin.na...@coders-area.de>
wrote:

> Hi Adam,
>
>
> got it! The problem is this tiny script:
> /usr/share/cloudstack-common/scripts/vm/network/vnet/modifyvlan.sh
>
> At these lines you can see that ifconfig is used to bring the bridges up:
>
> ifconfig | grep -w $vlanBr > /dev/null
> if [ $? -gt 0 ]
> then
>     ifconfig $vlanBr up
> fi
> return 0
>
> The simple solution for CentOS 7: just install the net-tools package via the
> package manager! A basic CentOS installation has no ifconfig; Ubuntu and
> Debian derivatives have it.
>
> Kind regards and thanks for all the input
>
> Ben
>
> >
> > Adam Witwicki <awitwi...@oakfordis.com> wrote on 24 May 2018 at 16:03:
> >
> > Hi Ben
> >
> > " Now i moved back to centos7 and have another problem. all vlan
> bridges that will be created by cloudstack-agent not manual get in ifup
> state. if i ifup the vlan bridges manual all works fine. Have someone a
> idea how i can force centos to automaticly bring up new if devices ?"
> >
> > I had this issue with 4.9 and ended up writing a dirty script to
> check the down bridges every minute via cron and bring them up.
> >
> > iface=`/usr/sbin/ip add | grep -E "brbond.*DOWN" | awk -F ':' '{print $2}'`
> > echo "$iface" | while read -r a; do /usr/sbin/ip link set dev $a up; done
> >
> > Thanks
> >
> >     Adam
> >
> > -Original Message-
> > From: Benjamin Naber <benjamin.na...@coders-area.de>
> > Sent: 24 May 2018 14:37
> > To: users@cloudstack.apache.org
> > Subject: Re: KVM Problem by deploying VPC
> >
> > ** This mail originated from OUTSIDE the Oakford corporate network.
> Treat hyperlinks and attachments in this email with caution. **
> >
> > Hi Andrija,
> >
> > Thanks for the reply. I solved the error. Now I have another error. The
> problem was that it failed without a bonding device. I created a bonding
> device (bond0) with a single NIC and now it works without problems on Ubuntu.
> >
> > Example (didn't work):
> >
> > auto enp8s0f0
> > iface enp8s0f0 inet manual
> >
> > auto cloudbr0
> > iface cloudbr0 inet static
> > address 10.253.250.230
> > gateway 10.253.250.1
> > netmask 255.255.255.0
> > dns-nameservers 8.8.8.8 8.8.4.4
> > bridge_ports enp8s0f0
> > bridge_fd 5
> > bridge_stp off
> > bridge_maxwait 1
> >
> > Example (works fine):
> >
> > auto enp8s0f0
> > iface enp8s0f0 inet manual
> > bond-master bond0
> >
> > auto bond0
> > iface bond0 inet manual
> > bond-mode active-backup
> > bond-miimon 100
> > bond-slaves none
> >
> > auto cloudbr0
> > iface cloudbr0 inet static
> > address 10.253.250.230
> > gateway 10.253.250.1
> > netmask 255.255.255.0
> > dns-nameservers 8.8.8.8 8.8.4.4
> > bridge_ports bond0
> > bridge_fd 5
> > bridge_stp off
> > bridge_maxwait 1
> >
> > Now I moved back to CentOS 7 and have another problem: all VLAN
> bridges that get created by cloudstack-agent (not manually) stay in a down
> state. If I ifup the VLAN bridges manually, all works fine. Does someone have
> an idea how I can force CentOS to automatically bring up new interfaces?
> >
> > Kind regards
> >
> > Ben
> >
> > >
> >
> > > >
> > > Andrija Panic <andrija.pa...@gmail.com> wrote on 24 May 2018
> at 00:37:
> > >
> > > Hi Ben,
> > >
> > > the interesting parts seem to be:
> > > 2018-05-23 12:59:47,213 DEBUG [kvm.resource.LibvirtComputingResource]
> > > (agentRequest-Handler-5:null) (logid:ef8b353e) getting broadcast uri for
> > > pif enp8s0f0.233 and bridge brenp8s0f0-233
> > > 2018-05-23 12:59:47,213 DEBUG [resource.virtualnetwork.VirtualRoutingResource]
> > > (agentRequest-Handler-5:null) (logid:ef8b353e) Transforming
> > > com.cloud.agent.api.routi

SOLVED: KVM Problem by deploying VPC

2018-05-24 Thread Benjamin Naber
Hi Adam,


got it! The problem is this tiny script:
/usr/share/cloudstack-common/scripts/vm/network/vnet/modifyvlan.sh

At these lines you can see that ifconfig is used to bring the bridges up:

# ifconfig without arguments lists only interfaces that are UP, so a miss
# here means the bridge is still down
ifconfig | grep -w $vlanBr > /dev/null
if [ $? -gt 0 ]
then
    ifconfig $vlanBr up
fi
return 0

The simple solution for CentOS 7: just install the net-tools package via the
package manager! A basic CentOS installation has no ifconfig; Ubuntu and
Debian derivatives have it.
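
As an aside, a minimal iproute2-based sketch of the same step (assuming the
script's $vlanBr variable) would avoid the net-tools dependency entirely,
since bringing a link up is idempotent:

# no UP check needed - setting an already-up link up is a no-op
ip link set $vlanBr up
return 0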

Kind regards and thanks for all the input

Ben

> 
> Adam Witwicki <awitwi...@oakfordis.com> wrote on 24 May 2018 at 16:03:
> 
> Hi Ben
> 
> " Now i moved back to centos7 and have another problem. all vlan bridges 
> that will be created by cloudstack-agent not manual get in ifup state. if i 
> ifup the vlan bridges manual all works fine. Have someone a idea how i can 
> force centos to automaticly bring up new if devices ?"
> 
> I had this issue with 4.9 and ended up writing a dirty script to check
> the down bridges every minute via cron and bring them up.
> 
> iface=`/usr/sbin/ip add | grep -E "brbond.*DOWN" | awk -F ':' '{print $2}'`
> echo "$iface" | while read -r a; do /usr/sbin/ip link set dev $a up; done
> 
> Thanks
> 
> Adam
> 
> -Original Message-
> From: Benjamin Naber <benjamin.na...@coders-area.de>
>     Sent: 24 May 2018 14:37
> To: users@cloudstack.apache.org
> Subject: Re: KVM Problem by deploying VPC
> 
> ** This mail originated from OUTSIDE the Oakford corporate network. Treat 
> hyperlinks and attachments in this email with caution. **
> 
> Hi Andrija,
> 
> Thanks for the reply. I solved the error. Now I have another error. The problem
> was that it failed without a bonding device. I created a bonding device (bond0)
> with a single NIC and now it works without problems on Ubuntu.
> 
> Example (didn't work):
> 
> auto enp8s0f0
> iface enp8s0f0 inet manual
> 
> auto cloudbr0
> iface cloudbr0 inet static
> address 10.253.250.230
> gateway 10.253.250.1
> netmask 255.255.255.0
> dns-nameservers 8.8.8.8 8.8.4.4
> bridge_ports enp8s0f0
> bridge_fd 5
> bridge_stp off
> bridge_maxwait 1
> 
> Example (works fine):
> 
> auto enp8s0f0
> iface enp8s0f0 inet manual
> bond-master bond0
> 
> auto bond0
> iface bond0 inet manual
> bond-mode active-backup
> bond-miimon 100
> bond-slaves none
> 
> auto cloudbr0
> iface cloudbr0 inet static
> address 10.253.250.230
> gateway 10.253.250.1
> netmask 255.255.255.0
> dns-nameservers 8.8.8.8 8.8.4.4
> bridge_ports bond0
> bridge_fd 5
> bridge_stp off
> bridge_maxwait 1
> 
> Now I moved back to CentOS 7 and have another problem: all VLAN bridges
> that get created by cloudstack-agent (not manually) stay in a down state. If I
> ifup the VLAN bridges manually, all works fine. Does someone have an idea how I
> can force CentOS to automatically bring up new interfaces?
> 
> Kind regards
> 
> Ben
> 
> >
> 
> > > 
> > Andrija Panic <andrija.pa...@gmail.com> wrote on 24 May 2018 at
> > 00:37:
> > 
> > Hi Ben,
> > 
> > the interesting parts seem to be:
> > 2018-05-23 12:59:47,213 DEBUG 
> > [kvm.resource.LibvirtComputingResource]
> > (agentRequest-Handler-5:null) (logid:ef8b353e) getting broadcast 
> > uri for
> > pif enp8s0f0.233 and bridge brenp8s0f0-233
> > 2018-05-23 12:59:47,213 DEBUG 
> > [resource.virtualnetwork.VirtualRoutingResource]
> > (agentRequest-Handler-5:null) (logid:ef8b353e) Transforming
> > com.cloud.agent.api.routing.IpAssocVpcCommand to ConfigItems
> > 2018-05-23 12:59:47,581 DEBUG 
> > [resource.virtualnetwork.VirtualRoutingResource]
> > (agentRequest-Handler-5:null) (logid:ef8b353e) Processing 
> > FileConfigItem,
> > copying 257 characters to ip_associations.json took 340ms
> > 2018-05-23 12:59:47,582 DEBUG 
> > [kvm.resource.LibvirtComputingResource]
> > (agentRequest-Handler-5:null) (logid:ef8b353e) Executing:
> > /usr/share/cloudstack-common/scripts/network/domr/router_proxy.sh
> > update_config.py 169.254.2.247 ip_associations.json
> > 2018-05-23 12:59:47,766 DEBUG 
> > [kvm.resource.LibvirtComputingResource]
> > (agentRequest-Handler-5:null) (logid:ef8b353e) Exit value is 1
> > 
> >   

RE: KVM Problem by deploying VPC

2018-05-24 Thread Adam Witwicki
Hi Ben

" Now i moved back to centos7 and have another problem. all vlan bridges that 
will be created by cloudstack-agent not manual get in ifup state. if i ifup the 
vlan bridges manual all works fine. Have someone a idea how i can force centos 
to automaticly bring up new if devices ?"

I had this issue with 4.9 and ended up writing a dirty script to check the
down bridges every minute via cron and bring them up.

# find bridges named brbond* that are administratively DOWN
iface=`/usr/sbin/ip add | grep -E "brbond.*DOWN" |  awk -F ':' '{print $2}'`
# bring each of them up
echo "$iface" | while read -r a; do /usr/sbin/ip link set dev $a up; done


Thanks

Adam

-Original Message-
From: Benjamin Naber <benjamin.na...@coders-area.de>
Sent: 24 May 2018 14:37
To: users@cloudstack.apache.org
Subject: Re: KVM Problem by deploying VPC

** This mail originated from OUTSIDE the Oakford corporate network. Treat 
hyperlinks and attachments in this email with caution. **

Hi Andrija,

> Thanks for the reply. I solved the error. Now I have another error. The problem
> was that it failed without a bonding device. I created a bonding device (bond0)
> with a single NIC and now it works without problems on Ubuntu.

> Example (didn't work):

auto enp8s0f0
iface enp8s0f0 inet manual

auto cloudbr0
iface cloudbr0 inet static
address 10.253.250.230
gateway 10.253.250.1
netmask 255.255.255.0
dns-nameservers 8.8.8.8 8.8.4.4
bridge_ports enp8s0f0
bridge_fd 5
bridge_stp off
bridge_maxwait 1

Example (works fine):

auto enp8s0f0
iface enp8s0f0 inet manual
bond-master bond0

auto bond0
iface bond0 inet manual
bond-mode active-backup
bond-miimon 100
bond-slaves none


auto cloudbr0
iface cloudbr0 inet static
address 10.253.250.230
gateway 10.253.250.1
netmask 255.255.255.0
dns-nameservers 8.8.8.8 8.8.4.4
bridge_ports bond0
bridge_fd 5
bridge_stp off
bridge_maxwait 1

> Now I moved back to CentOS 7 and have another problem: all VLAN bridges that
> get created by cloudstack-agent (not manually) stay in a down state. If I ifup
> the VLAN bridges manually, all works fine. Does someone have an idea how I can
> force CentOS to automatically bring up new interfaces?


Kind regards

Ben

>
> Andrija Panic <andrija.pa...@gmail.com> wrote on 24 May 2018 at 00:37:
>
> Hi Ben,
>
> the interesting parts seem to be:
> 2018-05-23 12:59:47,213 DEBUG [kvm.resource.LibvirtComputingResource]
> (agentRequest-Handler-5:null) (logid:ef8b353e) getting broadcast uri for
> pif enp8s0f0.233 and bridge brenp8s0f0-233
> 2018-05-23 12:59:47,213 DEBUG 
> [resource.virtualnetwork.VirtualRoutingResource]
> (agentRequest-Handler-5:null) (logid:ef8b353e) Transforming
> com.cloud.agent.api.routing.IpAssocVpcCommand to ConfigItems
> 2018-05-23 12:59:47,581 DEBUG 
> [resource.virtualnetwork.VirtualRoutingResource]
> (agentRequest-Handler-5:null) (logid:ef8b353e) Processing FileConfigItem,
> copying 257 characters to ip_associations.json took 340ms
> 2018-05-23 12:59:47,582 DEBUG [kvm.resource.LibvirtComputingResource]
> (agentRequest-Handler-5:null) (logid:ef8b353e) Executing:
> /usr/share/cloudstack-common/scripts/network/domr/router_proxy.sh
> update_config.py 169.254.2.247 ip_associations.json
> 2018-05-23 12:59:47,766 DEBUG [kvm.resource.LibvirtComputingResource]
> (agentRequest-Handler-5:null) (logid:ef8b353e) Exit value is 1
>
> What I believe we see here is that the IP association fails for some reason
> (exit value is 1), and after that ACS will just stop the VM and do the
> cleanup (all the log lines after this one)...
>
> Can you check your broadcast URI in the DB? Actually... I see it is vlan 233
>
> * I did have some issues in past releases, when we wanted to use untagged
>   vlans for the Public network, but it seems not to be the case here...
>
> Not sure if it's possible for you to also SSH to the VR during this
> creation process, in order to collect logs from inside the VR - before
> Qemu destroys the VR?
>
> ssh -p 3922 -i .ssh/id_rsa.cloud root@169.254.2.247 and then try to fetch
> (SCP or something) the whole /var/log/ folder to the localhost - from there,
> there are cloud.log and auth.log, where most of the command outputs are
> located (success or failure)
>
> or something like rsync -av -e "ssh -p 3922 -i .ssh/id_rsa.cloud"
> root@169...
> /local/dir/
> and keep repeating until you fetch whatever data you need to analyze it
>
> Perhaps someone else will have a better suggestion...
>
> Best
> Andrija
>
> On 23 May 2018 at 13:08, Benjamin Naber <benjamin.na...@coders-area.de>
> wrote:
>
> > >
> > Hi Andrija,
> >
> > first of all thanks for your reply. I have now tested the setup on a
> > Ub

Re: KVM Problem by deploying VPC

2018-05-24 Thread Benjamin Naber
Hi Andrija,

Thanks for the reply. I solved the error. Now I have another error. The problem
was that it failed without a bonding device. I created a bonding device (bond0)
with a single NIC and now it works without problems on Ubuntu.

Example (didn't work):

auto enp8s0f0
iface enp8s0f0 inet manual

auto cloudbr0
iface cloudbr0 inet static
address 10.253.250.230
gateway 10.253.250.1
netmask 255.255.255.0
dns-nameservers 8.8.8.8 8.8.4.4
bridge_ports enp8s0f0
bridge_fd 5
bridge_stp off
bridge_maxwait 1

Example (works fine):

auto enp8s0f0
iface enp8s0f0 inet manual
bond-master bond0
 
auto bond0
iface bond0 inet manual
bond-mode active-backup
bond-miimon 100
bond-slaves none

 
auto cloudbr0
iface cloudbr0 inet static
address 10.253.250.230
gateway 10.253.250.1
netmask 255.255.255.0
dns-nameservers 8.8.8.8 8.8.4.4
bridge_ports bond0
bridge_fd 5
bridge_stp off
bridge_maxwait 1
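
A quick way to verify the bond and bridge actually came up - a sketch,
assuming the interface names above:

cat /proc/net/bonding/bond0   # bonding driver status: mode, MII, active slave
brctl show cloudbr0           # bridge and its member ports (bridge-utils)
ip link show cloudbr0         # UP/DOWN state of the bridge itself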

Now I moved back to CentOS 7 and have another problem: all VLAN bridges that
get created by cloudstack-agent (not manually) stay in a down state. If I ifup
the VLAN bridges manually, all works fine. Does someone have an idea how I can
force CentOS to automatically bring up new interfaces?


Kind regards

Ben

> 
> Andrija Panic <andrija.pa...@gmail.com> wrote on 24 May 2018 at 00:37:
> 
> Hi Ben,
> 
> the interesting parts seem to be:
> 2018-05-23 12:59:47,213 DEBUG [kvm.resource.LibvirtComputingResource]
> (agentRequest-Handler-5:null) (logid:ef8b353e) getting broadcast uri for
> pif enp8s0f0.233 and bridge brenp8s0f0-233
> 2018-05-23 12:59:47,213 DEBUG 
> [resource.virtualnetwork.VirtualRoutingResource]
> (agentRequest-Handler-5:null) (logid:ef8b353e) Transforming
> com.cloud.agent.api.routing.IpAssocVpcCommand to ConfigItems
> 2018-05-23 12:59:47,581 DEBUG 
> [resource.virtualnetwork.VirtualRoutingResource]
> (agentRequest-Handler-5:null) (logid:ef8b353e) Processing FileConfigItem,
> copying 257 characters to ip_associations.json took 340ms
> 2018-05-23 12:59:47,582 DEBUG [kvm.resource.LibvirtComputingResource]
> (agentRequest-Handler-5:null) (logid:ef8b353e) Executing:
> /usr/share/cloudstack-common/scripts/network/domr/router_proxy.sh
> update_config.py 169.254.2.247 ip_associations.json
> 2018-05-23 12:59:47,766 DEBUG [kvm.resource.LibvirtComputingResource]
> (agentRequest-Handler-5:null) (logid:ef8b353e) Exit value is 1
> 
> What I believe we see here is that the IP association fails for some reason
> (exit value is 1), and after that ACS will just stop the VM and do the
> cleanup (all the log lines after this one)...
> 
> Can you check your broadcast URI in the DB? Actually... I see it is vlan 233
> 
> * I did have some issues in past releases, when we wanted to use untagged
>   vlans for the Public network, but it seems not to be the case here...
> 
> Not sure if it's possible for you to also SSH to the VR during this
> creation process, in order to collect logs from inside the VR - before
> Qemu destroys the VR?
> 
> ssh -p 3922 -i .ssh/id_rsa.cloud root@169.254.2.247 and then try to fetch
> (SCP or something) the whole /var/log/ folder to the localhost - from there,
> there are cloud.log and auth.log, where most of the command outputs are
> located (success or failure)
> 
> or something like rsync -av -e "ssh -p 3922 -i .ssh/id_rsa.cloud"
> root@169...
> /local/dir/
> and keep repeating until you fetch whatever data you need to analyze it
> 
> Perhaps someone else will have a better suggestion...
> 
> Best
> Andrija
> 
> On 23 May 2018 at 13:08, Benjamin Naber <benjamin.na...@coders-area.de>
> wrote:
> 
> > > 
> > Hi Andrija,
> > 
> > first of all thanks for your reply. I have now tested the setup on an
> > Ubuntu Xenial. Same issue with the Default VPC offering. The redundant
> > VPC offering also works without any problems. Same as on CentOS.
> > 
> > See below the debug log (public IPs censored):
> > 
> > 2018-05-23 12:58:05,161 INFO [cloud.agent.Agent]
> > (agentRequest-Handler-2:null) (logid:0806b407) Proccess agent ready
> > command, agent id = 15
> > 2018-05-23 12:58:05,161 INFO [cloud.agent.Agent]
> > (agentRequest-Handler-2:null) (logid:0806b407) Set agent id 15
> > 2018-05-23 12:58:05,162 INFO [cloud.agent.Agent]
> > (agentRequest-Handler-2:null) (logid:0806b407) Ready command is 
> > processed:
> > agent id = 15
> > 2018-05-23 12:58:05,162 DEBUG [cloud.agent.Agent]
> > (agentRequest-Handler-2:null) (logid:0806b407) Seq 
> > 15-5506494969390563335:
> > { Ans: , MgmtId: 109952567336, via: 15, Ver: v1, Flags: 110,
> > [{"com.cloud.agent.api.ReadyAnswer":{"result":true,"wait":0}}] }
> > 2018-05-23 12:58:05,292 DEBUG [cloud.agent.Agent]
> > (agentRequest-Handler-3:null) (logid:0806b407) Request:Seq
> > 

Re: KVM Problem by deploying VPC

2018-05-23 Thread Andrija Panic
f8b353e) Looking for libvirtd
> connection at: lxc:///
> 2018-05-23 12:59:54,854 INFO  [kvm.resource.LibvirtConnection]
> (agentRequest-Handler-4:null) (logid:ef8b353e) No existing libvirtd
> connection found. Opening a new one
> 2018-05-23 12:59:54,856 DEBUG [kvm.resource.LibvirtConnection]
> (agentRequest-Handler-4:null) (logid:ef8b353e) Successfully connected to
> libvirt at: lxc:///
> 2018-05-23 12:59:54,857 DEBUG [kvm.resource.LibvirtConnection]
> (agentRequest-Handler-4:null) (logid:ef8b353e) Can not find LXC connection
> for Instance: r-37-VM, continuing.
> 2018-05-23 12:59:54,857 WARN  [kvm.resource.LibvirtConnection]
> (agentRequest-Handler-4:null) (logid:ef8b353e) Can not find a connection
> for Instance r-37-VM. Assuming the default connection.
> 2018-05-23 12:59:54,857 DEBUG [kvm.resource.LibvirtConnection]
> (agentRequest-Handler-4:null) (logid:ef8b353e) Looking for libvirtd
> connection at: qemu:///system
> 2018-05-23 12:59:54,858 DEBUG [kvm.resource.LibvirtComputingResource]
> (agentRequest-Handler-4:null) (logid:ef8b353e) Failed to get dom xml:
> org.libvirt.LibvirtException: Domain not found: no domain with matching
> name 'r-37-VM'
> 2018-05-23 12:59:54,859 DEBUG [kvm.resource.LibvirtComputingResource]
> (agentRequest-Handler-4:null) (logid:ef8b353e) Failed to get dom xml:
> org.libvirt.LibvirtException: Domain not found: no domain with matching
> name 'r-37-VM'
> 2018-05-23 12:59:54,860 DEBUG [kvm.resource.LibvirtComputingResource]
> (agentRequest-Handler-4:null) (logid:ef8b353e) Failed to get dom xml:
> org.libvirt.LibvirtException: Domain not found: no domain with matching
> name 'r-37-VM'
> 2018-05-23 12:59:54,860 DEBUG [kvm.resource.LibvirtComputingResource]
> (agentRequest-Handler-4:null) (logid:ef8b353e) Executing:
> /usr/share/cloudstack-common/scripts/vm/network/security_group.py
> destroy_network_rules_for_vm --vmname r-37-VM
> 2018-05-23 12:59:55,043 DEBUG [kvm.resource.LibvirtComputingResource]
> (agentRequest-Handler-4:null) (logid:ef8b353e) Execution is successful.
> 2018-05-23 12:59:55,046 DEBUG [kvm.resource.LibvirtComputingResource]
> (agentRequest-Handler-4:null) (logid:ef8b353e) Failed to get vm :Domain not
> found: no domain with matching name 'r-37-VM'
> 2018-05-23 12:59:55,046 DEBUG [kvm.resource.LibvirtComputingResource]
> (agentRequest-Handler-4:null) (logid:ef8b353e) Try to stop the vm at first
> 2018-05-23 12:59:55,047 DEBUG [kvm.resource.LibvirtComputingResource]
> (agentRequest-Handler-4:null) (logid:ef8b353e) VM r-37-VM doesn't exist, no
> need to stop it
> 2018-05-23 12:59:55,048 DEBUG [cloud.agent.Agent]
> (agentRequest-Handler-4:null) (logid:ef8b353e) Seq 15-5506494969390563342:
> { Ans: , MgmtId: 109952567336, via: 15, Ver: v1, Flags: 10,
> [{"com.cloud.agent.api.StopAnswer":{"result":true,"wait":0}}] }
> 2018-05-23 12:59:56,940 DEBUG [cloud.agent.Agent]
> (agentRequest-Handler-1:null) (logid:ef8b353e) Request:Seq
> 15-5506494969390563343:  { Cmd , MgmtId: 109952567336, via: 15, Ver: v1,
> Flags: 100011, [{"org.apache.cloudstack.storage.command.DeleteCommand":{"
> data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"
> uuid":"fa3dba5d-364b-43ce-9ebf-d15a3324f765","
> volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to
> .PrimaryDataStoreTO":{"uuid":"2258aa76-7813-354d-
> b274-961fb337e716","id":14,"poolType":"RBD","host":"ceph-mon
> ","path":"rbd","port":6789,"url":"RBD://ceph-mon/rbd/?ROLE
> =Primary=2258aa76-7813-354d-b274-961fb337e716","
> isManaged":false}},"name":"ROOT-37","size":349945344,"
> path":"fa3dba5d-364b-43ce-9ebf-d15a3324f765","volumeId":
> 37,"vmName":"r-37-VM","accountId":2,"format":"RAW","p
> rovisioningType":"THIN","id":37,"deviceId":0,"hypervisorType":"KVM"}},"wait":0}}]
> }
> 2018-05-23 12:59:56,940 DEBUG [cloud.agent.Agent]
> (agentRequest-Handler-1:null) (logid:ef8b353e) Processing command:
> org.apache.cloudstack.storage.command.DeleteCommand
> 2018-05-23 12:59:56,941 INFO  [kvm.storage.LibvirtStorageAdaptor]
> (agentRequest-Handler-1:null) (logid:ef8b353e) Trying to fetch storage pool
> 2258aa76-7813-354d-b274-961fb337e716 from libvirt
> 2018-05-23 12:59:56,952 DEBUG [kvm.resource.LibvirtConnection]
> (agentRequest-Handler-1:null) (logid:ef8b353e) Looking for libvirtd
> connection at: qemu:///system
> 2018-05-23 12:59:56,960 DEBUG [kvm.storage.LibvirtStorageAdaptor]
> (agen

Re: KVM Problem by deploying VPC

2018-05-23 Thread Benjamin Naber
gentRequest-Handler-4:null) (logid:ef8b353e) Can not find LXC connection for 
Instance: r-37-VM, continuing.
2018-05-23 12:59:54,857 WARN  [kvm.resource.LibvirtConnection] 
(agentRequest-Handler-4:null) (logid:ef8b353e) Can not find a connection for 
Instance r-37-VM. Assuming the default connection.
2018-05-23 12:59:54,857 DEBUG [kvm.resource.LibvirtConnection] 
(agentRequest-Handler-4:null) (logid:ef8b353e) Looking for libvirtd connection 
at: qemu:///system
2018-05-23 12:59:54,858 DEBUG [kvm.resource.LibvirtComputingResource] 
(agentRequest-Handler-4:null) (logid:ef8b353e) Failed to get dom xml: 
org.libvirt.LibvirtException: Domain not found: no domain with matching name 
'r-37-VM'
2018-05-23 12:59:54,859 DEBUG [kvm.resource.LibvirtComputingResource] 
(agentRequest-Handler-4:null) (logid:ef8b353e) Failed to get dom xml: 
org.libvirt.LibvirtException: Domain not found: no domain with matching name 
'r-37-VM'
2018-05-23 12:59:54,860 DEBUG [kvm.resource.LibvirtComputingResource] 
(agentRequest-Handler-4:null) (logid:ef8b353e) Failed to get dom xml: 
org.libvirt.LibvirtException: Domain not found: no domain with matching name 
'r-37-VM'
2018-05-23 12:59:54,860 DEBUG [kvm.resource.LibvirtComputingResource] 
(agentRequest-Handler-4:null) (logid:ef8b353e) Executing: 
/usr/share/cloudstack-common/scripts/vm/network/security_group.py 
destroy_network_rules_for_vm --vmname r-37-VM
2018-05-23 12:59:55,043 DEBUG [kvm.resource.LibvirtComputingResource] 
(agentRequest-Handler-4:null) (logid:ef8b353e) Execution is successful.
2018-05-23 12:59:55,046 DEBUG [kvm.resource.LibvirtComputingResource] 
(agentRequest-Handler-4:null) (logid:ef8b353e) Failed to get vm :Domain not 
found: no domain with matching name 'r-37-VM'
2018-05-23 12:59:55,046 DEBUG [kvm.resource.LibvirtComputingResource] 
(agentRequest-Handler-4:null) (logid:ef8b353e) Try to stop the vm at first
2018-05-23 12:59:55,047 DEBUG [kvm.resource.LibvirtComputingResource] 
(agentRequest-Handler-4:null) (logid:ef8b353e) VM r-37-VM doesn't exist, no 
need to stop it
2018-05-23 12:59:55,048 DEBUG [cloud.agent.Agent] (agentRequest-Handler-4:null) 
(logid:ef8b353e) Seq 15-5506494969390563342:  { Ans: , MgmtId: 109952567336, 
via: 15, Ver: v1, Flags: 10, 
[{"com.cloud.agent.api.StopAnswer":{"result":true,"wait":0}}] }
2018-05-23 12:59:56,940 DEBUG [cloud.agent.Agent] (agentRequest-Handler-1:null) 
(logid:ef8b353e) Request:Seq 15-5506494969390563343:  { Cmd , MgmtId: 
109952567336, via: 15, Ver: v1, Flags: 100011, 
[{"org.apache.cloudstack.storage.command.DeleteCommand":{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"fa3dba5d-364b-43ce-9ebf-d15a3324f765","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"2258aa76-7813-354d-b274-961fb337e716","id":14,"poolType":"RBD","host":"ceph-mon","path":"rbd","port":6789,"url":"RBD://ceph-mon/rbd/?ROLE=Primary=2258aa76-7813-354d-b274-961fb337e716","isManaged":false}},"name":"ROOT-37","size":349945344,"path":"fa3dba5d-364b-43ce-9ebf-d15a3324f765","volumeId":37,"vmName":"r-37-VM","accountId":2,"format":"RAW","provisioningType":"THIN","id":37,"deviceId":0,"hypervisorType":"KVM"}},"wait":0}}]
 }
2018-05-23 12:59:56,940 DEBUG [cloud.agent.Agent] (agentRequest-Handler-1:null) 
(logid:ef8b353e) Processing command: 
org.apache.cloudstack.storage.command.DeleteCommand
2018-05-23 12:59:56,941 INFO  [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-1:null) (logid:ef8b353e) Trying to fetch storage pool 
2258aa76-7813-354d-b274-961fb337e716 from libvirt
2018-05-23 12:59:56,952 DEBUG [kvm.resource.LibvirtConnection] 
(agentRequest-Handler-1:null) (logid:ef8b353e) Looking for libvirtd connection 
at: qemu:///system
2018-05-23 12:59:56,960 DEBUG [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-1:null) (logid:ef8b353e) Succesfully refreshed pool 
2258aa76-7813-354d-b274-961fb337e716 Capacity: 1500336095232 Used: 22173237211 
Available: 1426660573184
2018-05-23 12:59:57,215 INFO  [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-1:null) (logid:ef8b353e) Attempting to remove volume 
fa3dba5d-364b-43ce-9ebf-d15a3324f765 from pool 
2258aa76-7813-354d-b274-961fb337e716
2018-05-23 12:59:57,215 INFO  [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-1:null) (logid:ef8b353e) Unprotecting and Removing RBD 
snapshots of image rbd/fa3dba5d-364b-43ce-9ebf-d15a3324f765 prior to removing 
the image
2018-05-23 12:59:57,228 DEBUG [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-1:null) (logid:ef8b353e) Succesfully connected to Ceph 
clus

Re: KVM Problem by deploying VPC

2018-05-21 Thread Andrija Panic
NetworkManager? I thought it was advised not to run it...
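
If NetworkManager does turn out to be interfering, one common approach on a
CentOS 7 KVM host is to disable it and let the classic network service handle
the cloud bridges - a sketch:

systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl enable network && systemctl restart network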

On 18 May 2018 at 16:11, Simon Weller <swel...@ena.com.invalid> wrote:

> Ben,
>
>
> Can you put the KVM agent in debug mode and post the logs?
>
>
> sed -i 's/INFO/DEBUG/g' /etc/cloudstack/agent/log4j-cloud.xml
>
>
> Then restart the agent.
>
>
> - Si
>
>
> 
> From: Benjamin Naber <benjamin.na...@coders-area.de>
> Sent: Friday, May 18, 2018 2:20 AM
> To: Cloudstack Mailinglist
> Subject: KVM Problem by deploying VPC
>
> Hi all,
>
> I'm currently testing the configuration of KVM hosts in our test
> environment.
> When I try to deploy a VPC, the hypervisor shows me the following error:
>
> Hypervisor Log:
>
> May 18 09:12:08 kvm-test01-sb dbus[883]: [system] Successfully activated
> service 'org.freedesktop.nm_dispatcher'
> May 18 09:12:08 kvm-test01-sb systemd: Started Network Manager Script
> Dispatcher Service.
> May 18 09:12:08 kvm-test01-sb nm-dispatcher: req:1 'up' [vnet0]: new
> request (4 scripts)
> May 18 09:12:08 kvm-test01-sb nm-dispatcher: req:1 'up' [vnet0]: start
> running ordered scripts...
> May 18 09:12:08 kvm-test01-sb libvirtd: 2018-05-18 07:12:08.667+:
> 6251: warning : qemuDomainObjTaint:5378 : Domain id=2 name='r-31-VM'
> uuid=ff22b439-e0d0-44d1-a3cc-8dd23afb82eb is tainted: high-privileges
> May 18 09:12:08 kvm-test01-sb dbus[883]: [system] Activating via systemd:
> service name='org.freedesktop.machine1' unit='dbus-org.freedesktop.
> machine1.service'
> May 18 09:12:08 kvm-test01-sb systemd: Starting Virtual Machine and
> Container Registration Service...
> May 18 09:12:08 kvm-test01-sb dbus[883]: [system] Successfully activated
> service 'org.freedesktop.machine1'
> May 18 09:12:08 kvm-test01-sb systemd: Started Virtual Machine and
> Container Registration Service.
> May 18 09:12:08 kvm-test01-sb systemd-machined: New machine qemu-2-r-31-VM.
> May 18 09:12:08 kvm-test01-sb systemd: Started Virtual Machine
> qemu-2-r-31-VM.
> May 18 09:12:08 kvm-test01-sb systemd: Starting Virtual Machine
> qemu-2-r-31-VM.
> May 18 09:12:08 kvm-test01-sb systemd: Unit iscsi.service cannot be
> reloaded because it is inactive.
> May 18 09:12:08 kvm-test01-sb kvm: 1 guest now active
> May 18 09:12:12 kvm-test01-sb kernel: kvm [48292]: vcpu0 disabled perfctr
> wrmsr: 0xc1 data 0xabcd
> May 18 09:12:44 kvm-test01-sb kernel: kvm [48292]: vcpu0 disabled perfctr
> wrmsr: 0xc1 data 0xabcd
> May 18 09:12:46 kvm-test01-sb sh: INFO  [kvm.storage.LibvirtStorageAdaptor]
> (agentRequest-Handler-5:) (logid:b911ffae) Trying to fetch storage pool
> 2258aa76-7813-354d-b274-961fb337e716 from libvirt
> May 18 09:12:46 kvm-test01-sb sh: INFO  [kvm.storage.LibvirtStorageAdaptor]
> (agentRequest-Handler-5:) (logid:b911ffae) Asking libvirt to refresh
> storage pool 2258aa76-7813-354d-b274-961fb337e716
> May 18 09:13:05 kvm-test01-sb NetworkManager[925]: <info>
> [1526627585.6454] manager: (enp8s0f0.233): new VLAN device
> (/org/freedesktop/NetworkManager/Devices/15)
> May 18 09:13:05 kvm-test01-sb NetworkManager[925]: <info>
> [1526627585.6470] device (enp8s0f0.233): carrier: link connected
> May 18 09:13:05 kvm-test01-sb kernel: brenp8s0f0-233: port 1(enp8s0f0.233)
> entered blocking state
> May 18 09:13:05 kvm-test01-sb kernel: brenp8s0f0-233: port 1(enp8s0f0.233)
> entered disabled state
> May 18 09:13:05 kvm-test01-sb kernel: device enp8s0f0.233 entered
> promiscuous mode
> May 18 09:13:05 kvm-test01-sb kernel: brenp8s0f0-233: port 2(vnet1)
> entered blocking state
> May 18 09:13:05 kvm-test01-sb kernel: brenp8s0f0-233: port 2(vnet1)
> entered disabled state
> May 18 09:13:05 kvm-test01-sb kernel: device vnet1 entered promiscuous mode
> May 18 09:13:05 kvm-test01-sb NetworkManager[925]: <info>
> [1526627585.6648] manager: (vnet1): new Tun device (/org/freedesktop/
> NetworkManager/Devices/16)
> May 18 09:13:05 kvm-test01-sb NetworkManager[925]: <info>
> [1526627585.6662] device (vnet1): state change: unmanaged -> unavailable
> (reason 'connection-assumed', sys-iface-state: 'external')
> May 18 09:13:05 kvm-test01-sb NetworkManager[925]: <info>
> [1526627585.6674] device (vnet1): state change: unavailable -> disconnected
> (reason 'none', sys-iface-state: 'external')
> May 18 09:13:10 kvm-test01-sb kernel: cloud0: port 1(vnet0) entered
> disabled state
> May 18 09:13:10 kvm-test01-sb kernel: device vnet0 left promiscuous mode
> May 18 09:13:10 kvm-test01-sb kernel: cloud0: port 1(vnet0) entered
> disabled state
> May 18 09:13:10 kvm-test01-sb NetworkManager[925]: <info>
> [1526627590.5339] device (vnet0): state change: activated -> unmanaged
> (reason 'unmanaged', sys-iface-state: 'removed')
> May 18 09:13:10 kvm-test01-sb NetworkManag

Re: KVM Problem by deploying VPC

2018-05-18 Thread Simon Weller
Ben,


Can you put the KVM agent in debug mode and post the logs?


sed -i 's/INFO/DEBUG/g' /etc/cloudstack/agent/log4j-cloud.xml


Then restart the agent.
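
On a systemd host that would be (service name and log path given here as the
usual KVM-agent defaults):

systemctl restart cloudstack-agent
tail -f /var/log/cloudstack/agent/agent.log   # watch the DEBUG output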


- Si



From: Benjamin Naber <benjamin.na...@coders-area.de>
Sent: Friday, May 18, 2018 2:20 AM
To: Cloudstack Mailinglist
Subject: KVM Problem by deploying VPC

Hi all,

I'm currently testing the configuration of KVM hosts in our test environment.
When I try to deploy a VPC, the hypervisor shows me the following error:

Hypervisor Log:

May 18 09:12:08 kvm-test01-sb dbus[883]: [system] Successfully activated 
service 'org.freedesktop.nm_dispatcher'
May 18 09:12:08 kvm-test01-sb systemd: Started Network Manager Script 
Dispatcher Service.
May 18 09:12:08 kvm-test01-sb nm-dispatcher: req:1 'up' [vnet0]: new request (4 
scripts)
May 18 09:12:08 kvm-test01-sb nm-dispatcher: req:1 'up' [vnet0]: start running 
ordered scripts...
May 18 09:12:08 kvm-test01-sb libvirtd: 2018-05-18 07:12:08.667+: 6251: 
warning : qemuDomainObjTaint:5378 : Domain id=2 name='r-31-VM' 
uuid=ff22b439-e0d0-44d1-a3cc-8dd23afb82eb is tainted: high-privileges
May 18 09:12:08 kvm-test01-sb dbus[883]: [system] Activating via systemd: 
service name='org.freedesktop.machine1' 
unit='dbus-org.freedesktop.machine1.service'
May 18 09:12:08 kvm-test01-sb systemd: Starting Virtual Machine and Container 
Registration Service...
May 18 09:12:08 kvm-test01-sb dbus[883]: [system] Successfully activated 
service 'org.freedesktop.machine1'
May 18 09:12:08 kvm-test01-sb systemd: Started Virtual Machine and Container 
Registration Service.
May 18 09:12:08 kvm-test01-sb systemd-machined: New machine qemu-2-r-31-VM.
May 18 09:12:08 kvm-test01-sb systemd: Started Virtual Machine qemu-2-r-31-VM.
May 18 09:12:08 kvm-test01-sb systemd: Starting Virtual Machine qemu-2-r-31-VM.
May 18 09:12:08 kvm-test01-sb systemd: Unit iscsi.service cannot be reloaded 
because it is inactive.
May 18 09:12:08 kvm-test01-sb kvm: 1 guest now active
May 18 09:12:12 kvm-test01-sb kernel: kvm [48292]: vcpu0 disabled perfctr 
wrmsr: 0xc1 data 0xabcd
May 18 09:12:44 kvm-test01-sb kernel: kvm [48292]: vcpu0 disabled perfctr 
wrmsr: 0xc1 data 0xabcd
May 18 09:12:46 kvm-test01-sb sh: INFO  [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-5:) (logid:b911ffae) Trying to fetch storage pool 
2258aa76-7813-354d-b274-961fb337e716 from libvirt
May 18 09:12:46 kvm-test01-sb sh: INFO  [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-5:) (logid:b911ffae) Asking libvirt to refresh storage 
pool 2258aa76-7813-354d-b274-961fb337e716
May 18 09:13:05 kvm-test01-sb NetworkManager[925]: <info>  [1526627585.6454]
manager: (enp8s0f0.233): new VLAN device
(/org/freedesktop/NetworkManager/Devices/15)
May 18 09:13:05 kvm-test01-sb NetworkManager[925]: <info>  [1526627585.6470]
device (enp8s0f0.233): carrier: link connected
May 18 09:13:05 kvm-test01-sb kernel: brenp8s0f0-233: port 1(enp8s0f0.233) 
entered blocking state
May 18 09:13:05 kvm-test01-sb kernel: brenp8s0f0-233: port 1(enp8s0f0.233) 
entered disabled state
May 18 09:13:05 kvm-test01-sb kernel: device enp8s0f0.233 entered promiscuous 
mode
May 18 09:13:05 kvm-test01-sb kernel: brenp8s0f0-233: port 2(vnet1) entered 
blocking state
May 18 09:13:05 kvm-test01-sb kernel: brenp8s0f0-233: port 2(vnet1) entered 
disabled state
May 18 09:13:05 kvm-test01-sb kernel: device vnet1 entered promiscuous mode
May 18 09:13:05 kvm-test01-sb NetworkManager[925]: <info>  [1526627585.6648]
manager: (vnet1): new Tun device (/org/freedesktop/NetworkManager/Devices/16)
May 18 09:13:05 kvm-test01-sb NetworkManager[925]: <info>  [1526627585.6662]
device (vnet1): state change: unmanaged -> unavailable (reason
'connection-assumed', sys-iface-state: 'external')
May 18 09:13:05 kvm-test01-sb NetworkManager[925]: <info>  [1526627585.6674]
device (vnet1): state change: unavailable -> disconnected (reason 'none',
sys-iface-state: 'external')
May 18 09:13:10 kvm-test01-sb kernel: cloud0: port 1(vnet0) entered disabled 
state
May 18 09:13:10 kvm-test01-sb kernel: device vnet0 left promiscuous mode
May 18 09:13:10 kvm-test01-sb kernel: cloud0: port 1(vnet0) entered disabled 
state
May 18 09:13:10 kvm-test01-sb NetworkManager[925]: <info>  [1526627590.5339]
device (vnet0): state change: activated -> unmanaged (reason 'unmanaged',
sys-iface-state: 'removed')
May 18 09:13:10 kvm-test01-sb NetworkManager[925]: <info>  [1526627590.5342]
device (cloud0): bridge port vnet0 was detached
May 18 09:13:10 kvm-test01-sb NetworkManager[925]: <info>  [1526627590.5342]
device (vnet0): released from master device cloud0
May 18 09:13:10 kvm-test01-sb dbus[883]: [system] Activating via systemd: 
service name='org.freedesktop.nm_dispatcher' 
unit='dbus-org.freedesktop.nm-dispatcher.service'
May 18 09:13:10 kvm-test01-sb systemd: Starting Network Manager Script 
Dispatcher Service...
May 18 09:13:10 kvm-test01-sb kernel: device vnet1 left promiscuous mode
May 18 09:13:10 kvm-test01-sb kernel: brenp8s0f0-23

KVM Problem by deploying VPC

2018-05-18 Thread Benjamin Naber
Hi all,

I'm currently testing the configuration of KVM hosts in our test environment.
When I try to deploy a VPC, the hypervisor shows me the following error:

Hypervisor Log:

May 18 09:12:08 kvm-test01-sb dbus[883]: [system] Successfully activated 
service 'org.freedesktop.nm_dispatcher'
May 18 09:12:08 kvm-test01-sb systemd: Started Network Manager Script 
Dispatcher Service.
May 18 09:12:08 kvm-test01-sb nm-dispatcher: req:1 'up' [vnet0]: new request (4 
scripts)
May 18 09:12:08 kvm-test01-sb nm-dispatcher: req:1 'up' [vnet0]: start running 
ordered scripts...
May 18 09:12:08 kvm-test01-sb libvirtd: 2018-05-18 07:12:08.667+: 6251: 
warning : qemuDomainObjTaint:5378 : Domain id=2 name='r-31-VM' 
uuid=ff22b439-e0d0-44d1-a3cc-8dd23afb82eb is tainted: high-privileges
May 18 09:12:08 kvm-test01-sb dbus[883]: [system] Activating via systemd: 
service name='org.freedesktop.machine1' 
unit='dbus-org.freedesktop.machine1.service'
May 18 09:12:08 kvm-test01-sb systemd: Starting Virtual Machine and Container 
Registration Service...
May 18 09:12:08 kvm-test01-sb dbus[883]: [system] Successfully activated 
service 'org.freedesktop.machine1'
May 18 09:12:08 kvm-test01-sb systemd: Started Virtual Machine and Container 
Registration Service.
May 18 09:12:08 kvm-test01-sb systemd-machined: New machine qemu-2-r-31-VM.
May 18 09:12:08 kvm-test01-sb systemd: Started Virtual Machine qemu-2-r-31-VM.
May 18 09:12:08 kvm-test01-sb systemd: Starting Virtual Machine qemu-2-r-31-VM.
May 18 09:12:08 kvm-test01-sb systemd: Unit iscsi.service cannot be reloaded 
because it is inactive.
May 18 09:12:08 kvm-test01-sb kvm: 1 guest now active
May 18 09:12:12 kvm-test01-sb kernel: kvm [48292]: vcpu0 disabled perfctr 
wrmsr: 0xc1 data 0xabcd
May 18 09:12:44 kvm-test01-sb kernel: kvm [48292]: vcpu0 disabled perfctr 
wrmsr: 0xc1 data 0xabcd
May 18 09:12:46 kvm-test01-sb sh: INFO  [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-5:) (logid:b911ffae) Trying to fetch storage pool 
2258aa76-7813-354d-b274-961fb337e716 from libvirt
May 18 09:12:46 kvm-test01-sb sh: INFO  [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-5:) (logid:b911ffae) Asking libvirt to refresh storage 
pool 2258aa76-7813-354d-b274-961fb337e716
May 18 09:13:05 kvm-test01-sb NetworkManager[925]: <info>  [1526627585.6454]
manager: (enp8s0f0.233): new VLAN device
(/org/freedesktop/NetworkManager/Devices/15)
May 18 09:13:05 kvm-test01-sb NetworkManager[925]: <info>  [1526627585.6470]
device (enp8s0f0.233): carrier: link connected
May 18 09:13:05 kvm-test01-sb kernel: brenp8s0f0-233: port 1(enp8s0f0.233) 
entered blocking state
May 18 09:13:05 kvm-test01-sb kernel: brenp8s0f0-233: port 1(enp8s0f0.233) 
entered disabled state
May 18 09:13:05 kvm-test01-sb kernel: device enp8s0f0.233 entered promiscuous 
mode
May 18 09:13:05 kvm-test01-sb kernel: brenp8s0f0-233: port 2(vnet1) entered 
blocking state
May 18 09:13:05 kvm-test01-sb kernel: brenp8s0f0-233: port 2(vnet1) entered 
disabled state
May 18 09:13:05 kvm-test01-sb kernel: device vnet1 entered promiscuous mode
May 18 09:13:05 kvm-test01-sb NetworkManager[925]: <info>  [1526627585.6648]
manager: (vnet1): new Tun device (/org/freedesktop/NetworkManager/Devices/16)
May 18 09:13:05 kvm-test01-sb NetworkManager[925]: <info>  [1526627585.6662]
device (vnet1): state change: unmanaged -> unavailable (reason
'connection-assumed', sys-iface-state: 'external')
May 18 09:13:05 kvm-test01-sb NetworkManager[925]: <info>  [1526627585.6674]
device (vnet1): state change: unavailable -> disconnected (reason 'none',
sys-iface-state: 'external')
May 18 09:13:10 kvm-test01-sb kernel: cloud0: port 1(vnet0) entered disabled 
state
May 18 09:13:10 kvm-test01-sb kernel: device vnet0 left promiscuous mode
May 18 09:13:10 kvm-test01-sb kernel: cloud0: port 1(vnet0) entered disabled 
state
May 18 09:13:10 kvm-test01-sb NetworkManager[925]: <info>  [1526627590.5339]
device (vnet0): state change: activated -> unmanaged (reason 'unmanaged',
sys-iface-state: 'removed')
May 18 09:13:10 kvm-test01-sb NetworkManager[925]: <info>  [1526627590.5342]
device (cloud0): bridge port vnet0 was detached
May 18 09:13:10 kvm-test01-sb NetworkManager[925]: <info>  [1526627590.5342]
device (vnet0): released from master device cloud0
May 18 09:13:10 kvm-test01-sb dbus[883]: [system] Activating via systemd: 
service name='org.freedesktop.nm_dispatcher' 
unit='dbus-org.freedesktop.nm-dispatcher.service'
May 18 09:13:10 kvm-test01-sb systemd: Starting Network Manager Script 
Dispatcher Service...
May 18 09:13:10 kvm-test01-sb kernel: device vnet1 left promiscuous mode
May 18 09:13:10 kvm-test01-sb kernel: brenp8s0f0-233: port 2(vnet1) entered 
disabled state
May 18 09:13:10 kvm-test01-sb dbus[883]: [system] Successfully activated 
service 'org.freedesktop.nm_dispatcher'
May 18 09:13:10 kvm-test01-sb systemd: Started Network Manager Script 
Dispatcher Service.
May 18 09:13:10 kvm-test01-sb nm-dispatcher: req:1 'down' [vnet0]: new request 
(4 scripts)
May 18 09:13:10 kvm-test01-sb