Hi Tim, I found the problem with the network; you can disregard my previous message. It
was caused by the physical interface on the switch, which broke the DHCP
request. I've been able to deploy OPNFV basic with OVS. Now I'm
working on os-odl_l2-fdio-noha.yaml. I ran this command with the default
settings:
opnfv-deploy -n network_settings.yaml -i inventory.yaml -d os-odl_l2-fdio-noha.yaml --interactive --debug
I get some errors at the end of the overcloud deployment:
Cannot find vpp interface matching: 81/0/0
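For what it's worth, VPP identifies interfaces by PCI bus/slot/function, so "81/0/0" most likely corresponds to PCI address 0000:81:00.0. A minimal sketch (my assumption: the id is a hex bus/slot/function triplet) to translate the id and check whether a NIC really sits at that address:

```shell
#!/usr/bin/env bash
# Translate a VPP-style interface id (bus/slot/function, hex) into a
# PCI address that lspci understands.
vpp_id_to_pci() {
  IFS=/ read -r bus slot fn <<< "$1"
  printf '0000:%02x:%02x.%x\n' "0x$bus" "0x$slot" "0x$fn"
}

vpp_id_to_pci 81/0/0        # prints 0000:81:00.0
# On the failing node, check whether a device really exists there:
# lspci -s "$(vpp_id_to_pci 81/0/0)"
```

If lspci shows nothing at that address, the interface named in the deploy settings probably doesn't exist on that node.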
Any idea?
Regards,
Pat
-----Original Message-----
From: Lemay, Patrick
Sent: November-22-16 10:40 AM
To: 'Tim Rozet'
Cc: [email protected]; Jamo Luhrsen; Poulin, Jocelyn
(6007251); Bernier, Daniel (520165); Guay, Francois (A214312); Smith, Brian
(3010640)
Subject: RE: [opnfv-tech-discuss] [Apex] deployment error
Hi Tim, we nearly have a working deployment setup. I can confirm that PXE
works well. I think you are right regarding the post-deployment task: there is
something wrong in my network_settings.yaml file. When the deployment starts,
interface enp1s0f0 obtains an IP from the undercloud DHCP server (10.66.20.10).
Thirty seconds later, it wipes the config and no communication is possible
afterwards. You'll see it in the screenshot. Could you look at my
network_settings.yaml file please? We can also do a session to show you. I'm
pretty sure I misconfigured something in that file and that this is what breaks
my post-deployment task.
Interface enp1s0f0 should be on 10.66.20.xx/24 (admin_network).
Interface ens2f0 should be on 10.66.25.xx/24 (public_network).
Both interfaces are in access mode and the gateway is .1. The deployment server
(undercloud) is 10.66.20.10.
[stack@undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+----------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks             |
+--------------------------------------+-------------------------+--------+------------+-------------+----------------------+
| 494e3987-057a-4ba3-87a1-8169e8f9c126 | overcloud-controller-0  | BUILD  | spawning   | NOSTATE     | ctlplane=10.66.20.26 |
| d70c26ff-4d34-4787-bec6-9b596fd03019 | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=10.66.20.24 |
| 99b4b166-985a-41ff-a425-4ae95cc28db1 | overcloud-novacompute-1 | ACTIVE | -          | Running     | ctlplane=10.66.20.25 |
+--------------------------------------+-------------------------+--------+------------+-------------+----------------------+
Regards,
Patrick Lemay
Consultant Managed Services Engineering Bell Canada
671 de la Gauchetière O. Bur. 610, Montreal, Quebec H3B 2M8
Tel: (514) 870-1540
-----Original Message-----
From: Tim Rozet [mailto:[email protected]]
Sent: November-16-16 3:10 PM
To: Lemay, Patrick
Cc: [email protected]; Jamo Luhrsen; Poulin, Jocelyn
(6007251); Bernier, Daniel (520165); Guay, Francois (A214312); Smith, Brian
(3010640)
Subject: Re: [opnfv-tech-discuss] [Apex] deployment error
Hi Patrick,
In Apex, configuring bonds is not supported yet. It is possible in TripleO, and
with some advanced config you could use a workaround, but for now let's just
focus on getting the deployment to work. When you execute the deployment (use
the --debug arg to opnfv-deploy), can you console into one of the servers and
see if it actually PXE boots into Linux? If it does, it means your network
connectivity on the admin network is correct; take note of which NIC MAC is
used to PXE. Now go to your host and do:
opnfv-util undercloud
. stackrc
nova list
ping <each node's ip>
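The ping step can also be looped over whatever `nova list` reports (a sketch; `extract_ctlplane_ips` is just a helper name I made up, and it assumes the `ctlplane=<ip>` format shown in the output):

```shell
#!/usr/bin/env bash
# Pull the ctlplane IPs out of `nova list` output on stdin.
extract_ctlplane_ips() {
  grep -o 'ctlplane=[0-9.]*' | cut -d= -f2
}

# Usage on the undercloud (after `. stackrc`):
# for ip in $(nova list | extract_ctlplane_ips); do
#   ping -c 2 -W 2 "$ip" >/dev/null && echo "$ip OK" || echo "$ip UNREACHABLE"
# done
```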
If one or more pings fail, it means the network configuration of the box
post-pxe boot is wrong. At this point you need to console into the overcloud
node. Since you used --debug, a default root password of 'opnfvapex' was
applied to all the overcloud nodes. So you can login to the node and look at
/var/log/messages for:
Nov 15 09:58:26 localhost os-collect-config: [2016/11/15 09:58:26 AM] [INFO] nic1 mapped to: eth0
Nov 15 09:58:26 localhost os-collect-config: [2016/11/15 09:58:26 AM] [INFO] nic2 mapped to: eth1
Nov 15 09:58:26 localhost os-collect-config: [2016/11/15 09:58:26 AM] [INFO] nic3 mapped to: eth2
Nov 15 09:58:26 localhost os-collect-config: [2016/11/15 09:58:26 AM] [INFO] nic4 mapped to: eth3
In your network settings file, if you are using logical nic mapping (meaning
nic<#>), then you will need to double-check that the admin (ctlplane) network
is correctly wired from your undercloud VM (host admin network NIC) to the
logical nic mapping in your network settings file. This logical nic name
resolves to a physical NIC name on the host using the mapping above. If it is
wrong, you can either fix it by providing the correct logical nic name in your
network settings file, or use the real physical nic name (in this case eth0) in
your settings. You can compare the MAC of the PXE boot interface from earlier
to determine which physical NIC is your admin network nic. The nic settings are
defined per network and per profile (compute/control).
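As a convenience, the mapping lines can be pulled out of the log in one pass (a sketch; `nic_mappings` is just a helper name, and it assumes the log format quoted above):

```shell
#!/usr/bin/env bash
# Extract "nicN mapped to: ethM" pairs from os-collect-config log text on stdin.
nic_mappings() {
  grep -o 'nic[0-9]* mapped to: [a-z0-9]*'
}

# Usage on an overcloud node:
# nic_mappings < /var/log/messages
```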
Tim Rozet
Red Hat SDN Team
----- Original Message -----
From: "Patrick Lemay" <[email protected]>
To: "Tim Rozet" <[email protected]>, [email protected]
Cc: "Jamo Luhrsen" <[email protected]>, "Jocelyn Poulin (6007251)"
<[email protected]>, "Daniel Bernier (520165)" <[email protected]>,
"Francois Guay (A214312)" <[email protected]>, "Brian Smith (3010640)"
<[email protected]>
Sent: Tuesday, November 15, 2016 11:58:01 AM
Subject: RE: [opnfv-tech-discuss] [Apex] deployment error
Hi Tim, adding configuration on enp1s0f0 and f1 worked well; it uses this
config to create br-admin and br-public. Thanks for that workaround. Now the
undercloud deploys successfully. IPMI with SOL works fine as well; I'm able to
power servers off and on with ironic. The only thing is that the deployment
script failed. It's probably related to the network between the nodes and the
undercloud not being configured correctly. We are supposed to create a bond
interface on 10G, but that part has not been configured because there is no
example. How can I confirm that? Is there a journal with more details? Also, my
ctlplane interface is configured correctly but unreachable from the undercloud
when sourced via stackrc.
[root@undercloud ~]# ironic node-list
+--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+
| UUID                                 | Name | Instance UUID                        | Power State | Provisioning State | Maintenance |
+--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+
| 9f069885-b9cb-4dec-85ab-945fb08a8753 | None | b1efb2e1-6e78-4fc0-9d5a-5b323088bebb | power on    | active             | False       |
| f273975c-06ba-4c36-b7e6-25872d7efce0 | None | a4c77e61-6d18-4e1d-94ca-cd007282856a | power on    | active             | False       |
+--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+
[root@undercloud ~]# nova list
+--------------------------------------+-------------------------+--------+------------+-------------+----------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks             |
+--------------------------------------+-------------------------+--------+------------+-------------+----------------------+
| b1efb2e1-6e78-4fc0-9d5a-5b323088bebb | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=10.66.20.24 |
| a4c77e61-6d18-4e1d-94ca-cd007282856a | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=10.66.20.23 |
+--------------------------------------+-------------------------+--------+------------+-------------+----------------------+
When I try to source overcloudrc, the file is not present, but stackrc is there.
[stack@undercloud ~]$ source overcloudrc
-bash: overcloudrc: No such file or directory
[stack@undercloud ~]$ source stackrc
openstack server list
[stack@undercloud ~]$ openstack server list
+--------------------------------------+-------------------------+--------+----------------------+
| ID                                   | Name                    | Status | Networks             |
+--------------------------------------+-------------------------+--------+----------------------+
| b1efb2e1-6e78-4fc0-9d5a-5b323088bebb | overcloud-controller-0  | ACTIVE | ctlplane=10.66.20.24 |
| a4c77e61-6d18-4e1d-94ca-cd007282856a | overcloud-novacompute-0 | ACTIVE | ctlplane=10.66.20.23 |
+--------------------------------------+-------------------------+--------+----------------------+
[stack@undercloud ~]$ ssh [email protected]
ssh: connect to host 10.66.20.24 port 22: No route to host
[stack@undercloud ~]$ ssh [email protected]
ssh: connect to host 10.66.20.23 port 22: No route to host
[stack@undercloud ~]$ ping 10.66.20.24
PING 10.66.20.24 (10.66.20.24) 56(84) bytes of data.
^C
--- 10.66.20.24 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1000ms
Pinging the undercloud works:
[stack@undercloud ~]$ ping 10.66.20.101
PING 10.66.20.101 (10.66.20.101) 56(84) bytes of data.
64 bytes from 10.66.20.101: icmp_seq=1 ttl=64 time=0.055 ms
64 bytes from 10.66.20.101: icmp_seq=2 ttl=64 time=0.049 ms
-----Original Message-----
From: Tim Rozet [mailto:[email protected]]
Sent: November-09-16 11:14 AM
To: Guay, Francois (A214312)
Cc: Jamo Luhrsen; Lemay, Patrick; [email protected]; Poulin,
Jocelyn (6007251)
Subject: Re: [opnfv-tech-discuss] [Apex] deployment error
Hi Patrick,
For the Undercloud VM to bridge to your host, it needs to be able to get the IP
information off of the host interfaces. It does this by checking your ifcfg
files under /etc/sysconfig/network-scripts for the interface you specify for
each network. Can you check that enp1s0f1 ifcfg file is not set to dhcp and
has IP and NETMASK/PREFIX settings in the file?
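For reference, a quick sanity check of such a file could look like this (a sketch; `check_ifcfg` is my own helper name, and the path follows the usual RHEL/CentOS convention):

```shell
#!/usr/bin/env bash
# Check that an ifcfg file is static (not dhcp) and carries address info.
check_ifcfg() {
  local f="$1"
  if grep -qi '^BOOTPROTO=.*dhcp' "$f"; then
    echo "WARN: $f uses DHCP"
  fi
  if grep -qE '^(IPADDR|PREFIX|NETMASK)=' "$f"; then
    echo "OK: $f has IP/NETMASK settings"
  else
    echo "WARN: $f missing IPADDR or NETMASK/PREFIX"
  fi
}

# check_ifcfg /etc/sysconfig/network-scripts/ifcfg-enp1s0f1
```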
Thanks,
Tim Rozet
Red Hat SDN Team
----- Original Message -----
From: "Francois Guay (A214312)" <[email protected]>
To: "Jamo Luhrsen" <[email protected]>, "Patrick Lemay"
<[email protected]>, [email protected]
Cc: "Jocelyn Poulin (6007251)" <[email protected]>
Sent: Friday, November 4, 2016 2:59:14 PM
Subject: Re: [opnfv-tech-discuss] [Apex] deployment error
Patrick, Jamo,
I did the libvirt-python install and it solved the problem (see attached).
Now, we need to add the two missing nodes to get the five required nodes. I
will give it a try and let you know.
Thanks Jamo.
Patrick, I'll let you share with Jamo what you did with the admin_network and
public_network stuff.
François
-----Original Message-----
From: Jamo Luhrsen [mailto:[email protected]]
Sent: 4 novembre 2016 12:04
To: Lemay, Patrick; [email protected]
Cc: Poulin, Jocelyn (6007251); Guay, Francois (A214312)
Subject: Re: [opnfv-tech-discuss] [Apex] deployment error
Patrick,
I've had similar problems trying to get Apex baremetal to work on the jumphost
I'm working with. I hit the libvirt issue the other day; I think I just did a
"yum install libvirt-python". Try that.
I'm still stuck on what I need to do with the admin_network, public_network
stuff that I think you have overcome recently. Can you recap the steps you
took?
JamO
On 11/04/2016 08:47 AM, Lemay, Patrick wrote:
> Hi, I finally created br-public by hand instead of by script. It works,
> but now I'm stuck on another error: "ImportError: No module named libvirt".
> I'm using your deployment CD, so all the dependencies should be installed. I
> started to troubleshoot the problem, but I'm sure you've already seen that bug.
>
>
>
> All python version installed:
>
> [root@jumphost opnfv-apex]# python
>
> python python2 python2.7 python3 python3.4 python3.4m
>
>
>
> Output from the opnfv-deploy script:
>
> INFO: virsh networks set:
>
> Name State Autostart Persistent
>
> ----------------------------------------------------------
>
> admin_network active yes yes
>
> default active yes yes
>
> public_network active yes yes
>
>
>
> All dependencies installed and running
>
> 4 Nov 10:58:33 ntpdate[27293]: adjust time server 206.108.0.133 offset
> -0.002592 sec
>
> Volume undercloud exists. Deleting Existing Volume
> /var/lib/libvirt/images/undercloud.qcow2
>
> Vol undercloud.qcow2 deleted
>
>
>
> Vol undercloud.qcow2 created
>
>
>
> Traceback (most recent call last):
>
> File "/usr/libexec/openstack-tripleo/configure-vm", line 8, in
> <module>
>
> import libvirt
>
> ImportError: No module named libvirt
>
>
>
>
>
> Thanks,
>
>
>
>
>
>
>
>
>
> *From:*Lemay, Patrick
> *Sent:* November-01-16 11:42 AM
> *To:* '[email protected]'
> *Cc:* Bernier, Daniel (520165); Guay, Francois (A214312); Poulin,
> Jocelyn (6007251)
> *Subject:* Bell deployment config
>
>
>
> Hi guys, I have some issues regarding the OPNFV baremetal deployment. I
> installed a jumphost from the OPNFV CD and configured the inventory with
> IPMI IPs, MACs, and users.
>
>
>
> I'm not sure that I configured the network_settings correctly for
> public_network.
>
>
>
>
>
> For the deployment I use pod 2 and pod 3 from the drawing. The server
> Catherine is used as the jumphost. Interface enp1s0f0 is for PXE and
> enp1s0f1 is for public_network. The other 5 servers are for deployment and
> are IPMI-ready. All interfaces in VLAN 1020 are PXE-ready and the disks are
> configured RAID 1. I have a problem deploying the undercloud.
>
>
>
>
>
> I have this error related to the undercloud deployment:
>
> INFO: Creating Virsh Network: admin_network & OVS Bridge: br-admin
>
> INFO: Creating Virsh Network: public_network & OVS Bridge: br-public
>
> INFO: Bridges set:
>
> br-admin
>
> br-public
>
> enp1s0f0
>
> INFO: Interface enp1s0f0 bridged to bridge br-admin for enabled
> network: admin_network
>
> /var/opt/opnfv/lib/common-functions.sh: line 18: 5 - ( / 8) : syntax
> error: operand expected (error token is "/ 8) ")
>
> ERROR: IPADDR or NETMASK/PREFIX missing for enp1s0f1
>
> ERROR: Unable to bridge interface enp1s0f1 to bridge br-public for
> enabled network: public_network
>
>
>
>
>
> Could you help, please? There is no VMware at all in the setup, only bare metal.
>
>
>
> Regards,
>
>
>
>
>
>
>
> Patrick Lemay
>
> Consultant Managed Services Engineering Bell Canada
>
> 671 de la Gauchetière O. Bur. 610, Montreal, Quebec H3B 2M8
>
> Tel: (514) 870-1540
>
>
>
>
>
> _______________________________________________
> opnfv-tech-discuss mailing list
> [email protected]
> https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss
>
2016-11-25 00:34:58 [ControllerExtraConfigPre]: CREATE_FAILED Error:
resources.ControllerExtraConfigPre.resources.ControllerNumaPuppetDeployment:
Deployment to server failed: deploy_status_code: Deployment exited with
non-zero status code: 1
2016-11-25 00:34:58 [ControllerDeployment]: SIGNAL_COMPLETE Unknown
2016-11-25 00:34:59 [0]: CREATE_FAILED Error:
resources[0].resources.ControllerExtraConfigPre.resources.ControllerNumaPuppetDeployment:
Deployment to server failed: deploy_status_code: Deployment exited with
non-zero status code: 1
2016-11-25 00:34:59 [NetworkDeployment]: SIGNAL_COMPLETE Unknown
2016-11-25 00:34:59 [overcloud-Controller-atnm7x2ea2ft-0-7mqbmbvp2sbh]:
CREATE_FAILED Resource CREATE failed: Error:
resources.ControllerExtraConfigPre.resources.ControllerNumaPuppetDeployment:
Deployment to server failed: deploy_status_code: Deployment exited with
non-zero status code: 1
2016-11-25 00:35:01 [overcloud-Controller-atnm7x2ea2ft]: CREATE_FAILED Resource
CREATE failed: Error:
resources[0].resources.ControllerExtraConfigPre.resources.ControllerNumaPuppetDeployment:
Deployment to server failed: deploy_status_code: Deployment exited with
non-zero status code: 1
2016-11-25 00:35:01 [Controller]: CREATE_FAILED Error:
resources.Controller.resources[0].resources.ControllerExtraConfigPre.resources.ControllerNumaPuppetDeployment:
Deployment to server failed: deploy_status_code: Deployment exited with
non-zero status code: 1
2016-11-25 00:35:22 [NetworkDeployment]: SIGNAL_COMPLETE Unknown
2016-11-25 00:35:22 [ComputeExtraConfigPre]: CREATE_FAILED Error:
resources.ComputeExtraConfigPre.resources.ComputeNumaPuppetDeployment:
Deployment to server failed: deploy_status_code: Deployment exited with
non-zero status code: 6
2016-11-25 00:35:22 [NovaComputeDeployment]: SIGNAL_COMPLETE Unknown
2016-11-25 00:35:23 [overcloud-Compute-4oemq2obhurm-0-wmc7tluo2u4i]:
CREATE_FAILED Resource CREATE failed: Error:
resources.ComputeExtraConfigPre.resources.ComputeNumaPuppetDeployment:
Deployment to server failed: deploy_status_code: Deployment exited with
non-zero status code: 6
2016-11-25 00:35:24 [0]: CREATE_FAILED Error:
resources[0].resources.ComputeExtraConfigPre.resources.ComputeNuma
", "deploy_stderr": "Warning: Scope(Class[main]): Could not look up qualified
variable '::deploy_config_name';
[the warning above is repeated 16 times in total]
Error: /Stage[main]/Fdio::Service/Vpp_interface_cfg[config vpp interfaces]:
Could not evaluate: Cannot find vpp interface matching: 81/0/0
Warning: /Stage[main]/Fdio::Honeycomb/Package[honeycomb]: Skipping because of
failed dependencies
Warning: /Stage[main]/Fdio::Honeycomb/File[honeycomb.json]: Skipping because of
failed dependencies
Warning: /Stage[main]/Fdio::Honeycomb/Service[honeycomb]: Skipping because of
failed dependencies
", "deploy_status_code": 6 }, "creation_time": "2016-11-25T01:33:03",
"updated_time": "2016-11-25T01:33:49", "input_values": {}, "action": "CREATE",
"status_reason": "deploy_status_code : Deployment exited with non-zero status
code: 6", "id": "2c9dd64b-b693-4d25-8959-e16623be953f" }
******************************************************