Thank you Dag, this looks very likely to be the case.  I was able to find a 
scenario yesterday that brought the service up by setting eno1.8's type to 
Ethernet, disabling eno1.8, restarting the agent service, then re-enabling 
eno1.8 as type Vlan.  My intended design was to create a 2-NIC bond (for now 
just the single NIC eno1 for troubleshooting), create subinterfaces (e.g. eno1.8 
for VLAN 8), and assign each subinterface to a CloudStack service.  I will try 
letting CloudStack take care of all VLANs and removing the subinterfaces, 
pointing cloudbr0 directly at eno1 or the bond.
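
If I go that route, the host config should collapse to something like this (a 
sketch only, assuming a bond0 device with eno1/eno2 as slaves; untested):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0
# Bond carrying the full trunk; enslaved to the bridge.
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup miimon=100"
ONBOOT=yes
BOOTPROTO=none
BRIDGE=cloudbr0

# /etc/sysconfig/network-scripts/ifcfg-cloudbr0
# Trunked bridge with no dotted VLAN subinterface in between;
# CloudStack creates the tagged interfaces itself.
DEVICE=cloudbr0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
```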

Daniel Farrar

-----Original Message-----
From: Dag Sonstebo [mailto:dag.sonst...@shapeblue.com] 
Sent: Wednesday, January 9, 2019 10:38 AM
To: users@cloudstack.apache.org
Subject: Re: KVM Agent Error: Incorrect details for private Nic

Hi Daniel,

My initial guess, without deep diving into this, is your NIC naming. The 
CloudStack agent looks for the interface (bond / NIC / team) which supports 
cloudbr0 - and I think in your case it can't find it due to the dotted 
naming "eno1.8", when it probably expects something conventional like "eno16777984".

Is there a reason you've named it like this? This would normally point to a 
VLAN setup (whereas cloudbr0 needs to be trunked in an advanced zone). 
If not, can you rename it?

Code for this is somewhere around here I think:
https://github.com/apache/cloudstack/blob/c565db2cf2ab5fdbd2fba65409928dfa7c5f2d25/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java#L1312
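
Illustratively (this is not the actual agent code, just a sketch of the 
distinction I mean), the naming issue boils down to something like:

```python
def is_vlan_style_name(ifname: str) -> bool:
    """Heuristic sketch: a dotted numeric suffix like 'eno1.8' looks like
    a VLAN subinterface rather than a plain physical NIC name."""
    base, sep, suffix = ifname.partition(".")
    return sep == "." and suffix.isdigit()

# "eno1.8" is treated as a VLAN subinterface, while a conventional
# name like "eno16777984" or "bond0" is not.
```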

Developers - please correct me if I'm wrong.

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue
 

On 09/01/2019, 14:08, "Daniel Farrar" <daniel.far...@carringtonmh.com> wrote:

    Hello!  Wanted to see if anyone else has run into this issue, or if you can 
point me in the right direction...
    
    I have 2 KVM CloudStack agents, version 4.11.2, running CentOS 7, that are 
not able to start the cloudstack-agent service if the agent.properties file 
specifies the variable "private.network.device=cloudbr0".  This line is 
automatically added to the config file when the agent connects to the server; 
if the private network is commented out, the service is able to start up, but 
any system VMs are unable to connect to the network.  This environment was 
working with my original 2 KVM agents, but after deleting the hosts and 
rebuilding with an identical network config I've run into this issue.  I have 
logs and configs below; any ideas what is wrong?  This is a dev proof-of-concept 
environment, so I can rebuild if needed.  Thank you for your time!
    
    
    2019-01-07 15:36:10,257 INFO  [cloud.agent.AgentShell] (main:null) (logid:) 
Agent started
    2019-01-07 15:36:10,259 INFO  [cloud.agent.AgentShell] (main:null) (logid:) 
Implementation Version is 4.11.2.0
    2019-01-07 15:36:10,260 INFO  [cloud.agent.AgentShell] (main:null) (logid:) 
agent.properties found at /etc/cloudstack/agent/agent.properties
    2019-01-07 15:36:10,271 INFO  [cloud.agent.AgentShell] (main:null) (logid:) 
Defaulting to using properties file for storage
    2019-01-07 15:36:10,272 INFO  [cloud.agent.AgentShell] (main:null) (logid:) 
Defaulting to the constant time backoff algorithm
    2019-01-07 15:36:10,282 INFO  [cloud.utils.LogUtils] (main:null) (logid:) 
log4j configuration found at /etc/cloudstack/agent/log4j-cloud.xml
    2019-01-07 15:36:10,293 INFO  [cloud.agent.AgentShell] (main:null) (logid:) 
Using default Java settings for IPv6 preference for agent connection
    2019-01-07 15:36:10,384 INFO  [cloud.agent.Agent] (main:null) (logid:) id 
is 15
    2019-01-07 15:36:10,387 WARN  [cloud.resource.ServerResourceBase] 
(main:null) (logid:) Incorrect details for private Nic during initialization of 
ServerResourceBase
    2019-01-07 15:36:10,387 ERROR [cloud.agent.AgentShell] (main:null) (logid:) 
Unable to start agent: Unable to configure LibvirtComputingResource
    
    
    This occurs on versions 4.11.1 and 4.11.2, and I've seen other threads 
reporting this issue on earlier versions, where downgrading the agent fixed it.
    
    
    
    NIC CONFIGURATION:
    
    ifcfg-cloudbr0
    DEVICE=cloudbr0
    TYPE=Bridge
    ONBOOT=yes
    BOOTPROTO=none
    IPV6INIT=no
    IPV6_AUTOCONF=no
    DELAY=5
    STP=yes
    
    ifcfg-eno1.8
    HWADDR=XX:XX:XX:XX:XX:XX
    DEVICE=eno1.8
    ONBOOT=yes
    HOTPLUG=no
    BOOTPROTO=none
    TYPE=Vlan
    VLAN=yes
    BRIDGE=cloudbr0
    
    cloudbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            ether txqueuelen 1000  (Ethernet)
            RX packets 2368  bytes 116316 (113.5 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 3  bytes 270 (270.0 B)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    eno1.8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            ether txqueuelen 1000  (Ethernet)
            RX packets 3159  bytes 163465 (159.6 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 1752  bytes 91924 (89.7 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    
    brctl show
    bridge name     bridge id               STP enabled     interfaces
    cloud0          8000.fe00a9fe00fc       no              vnet0
                                                            vnet3
    cloudbr0        8000.64510630b869       yes             eno1.8
                                                            vnet2
                                                            vnet5
    cloudbr1        8000.64510630b869       yes             eno1.7
                                                            vnet1
                                                            vnet4
    virbr0          8000.525400b79d7d       yes             virbr0-nic
    
    
    Agent.properties file networks:
    public.network.device=cloudbr0
    private.network.device=cloudbr0
    guest.network.device=cloudbr1
    
    
    CloudStack zone network config:
    
    Zone > Physical Network 1 > Public
    Traffic Type:          Public
    Broadcast Domain Type: Vlan
    KVM traffic label:     cloudbr0
    Public IP range: gateway 10.x.y1, netmask 255.255.255.0, vlan://untagged, 
10.x.y.100 - 10.x.y.199, account [ROOT] system
    
    **Note: The public network works fine; virtual routers have network access.
    
    Zone > Physical Network 1 > Management
    Traffic Type:          Management
    Broadcast Domain Type: Native
    KVM traffic label:     cloudbr0
    Pod1: gateway 10.x.y.1, netmask 255.255.255.0, vlan://8, 
10.x.y.30 - 10.x.y.49 (system VMs)
    Pod1: gateway 10.x.y.1, netmask 255.255.255.0, vlan://8, 
10.x.y.50 - 10.x.y.99
    
    **NOTE: Tried the above Management IP ranges with and without VLAN tags; 
can't get any system VMs to have network connectivity.
    
    
    Zone > Physical Network 1 > Guest
    State:             Enabled
    VLAN/VNI Range(s): 800-850
    KVM traffic label: cloudbr1
    
    
    
    
    Daniel Farrar
    
    
    

