Sorry, I pressed send without the inventory file. My inventory file has the following content (this is the standard file I get when following the procedure described in [1]; since I am deploying an all-in-one node, I did not change anything here):
> # These initial groups are the only groups required to be modified. The
> # additional groups are for more control of the environment.
> [control]
> localhost ansible_connection=local
>
> [network]
> localhost ansible_connection=local
>
> # inner-compute is the groups of compute nodes which do not have
> # external reachability.
> # DEPRECATED, the group will be removed in S release of OpenStack,
> # use variable neutron_compute_dvr_mode instead.
> [inner-compute]
>
> # external-compute is the groups of compute nodes which can reach
> # outside.
> # DEPRECATED, the group will be removed in S release of OpenStack,
> # use variable neutron_compute_dvr_mode instead.
> [external-compute]
> localhost ansible_connection=local
>
> [compute:children]
> inner-compute
> external-compute
>
> [storage]
> localhost ansible_connection=local
>
> [monitoring]
> localhost ansible_connection=local
>
> [deployment]
> localhost ansible_connection=local
>
> # You can explicitly specify which hosts run each project by updating the
> # groups in the sections below. Common services are grouped together.
> [chrony-server:children]
> haproxy
>
> [chrony:children]
> network
> compute
> storage
> monitoring
>
> [collectd:children]
> compute
>
> [baremetal:children]
> control
>
> [grafana:children]
> monitoring
>
> [etcd:children]
> control
> compute
>
> [kafka:children]
> control
>
> [karbor:children]
> control
>
> [kibana:children]
> control
>
> [telegraf:children]
> compute
> control
> monitoring
> network
> storage
>
> [elasticsearch:children]
> control
>
> [haproxy:children]
> network
>
> [hyperv]
> #hyperv_host
>
> [hyperv:vars]
> #ansible_user=user
> #ansible_password=password
> #ansible_port=5986
> #ansible_connection=winrm
> #ansible_winrm_server_cert_validation=ignore
>
> [mariadb:children]
> control
>
> [rabbitmq:children]
> control
>
> [outward-rabbitmq:children]
> control
>
> [qdrouterd:children]
> control
>
> [mongodb:children]
> control
>
> [keystone:children]
> control
>
> [glance:children]
> control
>
> [nova:children]
> control
>
> [neutron:children]
> network
>
> [openvswitch:children]
> network
> compute
> manila-share
>
> [opendaylight:children]
> network
>
> [cinder:children]
> control
>
> [cloudkitty:children]
> control
>
> [freezer:children]
> control
>
> [memcached:children]
> control
>
> [horizon:children]
> control
>
> [swift:children]
> control
>
> [barbican:children]
> control
>
> [heat:children]
> control
>
> [murano:children]
> control
>
> [ceph:children]
> control
>
> [ironic:children]
> control
>
> [influxdb:children]
> monitoring
>
> [prometheus:children]
> monitoring
>
> [magnum:children]
> control
>
> [sahara:children]
> control
>
> [solum:children]
> control
>
> [mistral:children]
> control
>
> [manila:children]
> control
>
> [panko:children]
> control
>
> [gnocchi:children]
> control
>
> [ceilometer:children]
> control
>
> [aodh:children]
> control
>
> [congress:children]
> control
>
> [tacker:children]
> control
>
> [vitrage:children]
> control
>
> # Tempest
> [tempest:children]
> control
>
> [senlin:children]
> control
>
> [vmtp:children]
> control
>
> [trove:children]
> control
>
> [watcher:children]
> control
>
> [rally:children]
> control
>
> [searchlight:children]
> control
>
> [octavia:children]
> control
>
> [designate:children]
> control
>
> [placement:children]
> control
>
> [bifrost:children]
> deployment
>
> [zookeeper:children]
> control
>
> [zun:children]
> control
>
> [skydive:children]
> monitoring
>
> [redis:children]
> control
>
> [blazar:children]
> control
>
> # Additional control implemented here. These groups allow you to control which
> # services run on which hosts at a per-service level.
> #
> # Word of caution: Some services are required to run on the same host to
> # function appropriately. For example, neutron-metadata-agent must run on the
> # same host as the l3-agent and (depending on configuration) the dhcp-agent.
>
> # Glance
> [glance-api:children]
> glance
>
> [glance-registry:children]
> glance
>
> # Nova
> [nova-api:children]
> nova
>
> [nova-conductor:children]
> nova
>
> [nova-consoleauth:children]
> nova
>
> [nova-novncproxy:children]
> nova
>
> [nova-scheduler:children]
> nova
>
> [nova-spicehtml5proxy:children]
> nova
>
> [nova-compute-ironic:children]
> nova
>
> [nova-serialproxy:children]
> nova
>
> # Neutron
> [neutron-server:children]
> control
>
> [neutron-dhcp-agent:children]
> neutron
>
> [neutron-l3-agent:children]
> neutron
>
> [neutron-lbaas-agent:children]
> neutron
>
> [neutron-metadata-agent:children]
> neutron
>
> [neutron-bgp-dragent:children]
> neutron
>
> [neutron-infoblox-ipam-agent:children]
> neutron
>
> # Ceph
> [ceph-mds:children]
> ceph
>
> [ceph-mgr:children]
> ceph
>
> [ceph-nfs:children]
> ceph
>
> [ceph-mon:children]
> ceph
>
> [ceph-rgw:children]
> ceph
>
> [ceph-osd:children]
> storage
>
> # Cinder
> [cinder-api:children]
> cinder
>
> [cinder-backup:children]
> storage
>
> [cinder-scheduler:children]
> cinder
>
> [cinder-volume:children]
> storage
>
> # Cloudkitty
> [cloudkitty-api:children]
> cloudkitty
>
> [cloudkitty-processor:children]
> cloudkitty
>
> # Freezer
> [freezer-api:children]
> freezer
>
> [freezer-scheduler:children]
> freezer
>
> # iSCSI
> [iscsid:children]
> compute
> storage
> ironic
>
> [tgtd:children]
> storage
>
> # Karbor
> [karbor-api:children]
> karbor
>
> [karbor-protection:children]
> karbor
>
> [karbor-operationengine:children]
> karbor
>
> # Manila
> [manila-api:children]
> manila
>
> [manila-scheduler:children]
> manila
>
> [manila-share:children]
> network
>
> [manila-data:children]
> manila
>
> # Swift
> [swift-proxy-server:children]
> swift
>
> [swift-account-server:children]
> storage
>
> [swift-container-server:children]
> storage
>
> [swift-object-server:children]
> storage
>
> # Barbican
> [barbican-api:children]
> barbican
>
> [barbican-keystone-listener:children]
> barbican
>
> [barbican-worker:children]
> barbican
>
> # Trove
> [trove-api:children]
> trove
>
> [trove-conductor:children]
> trove
>
> [trove-taskmanager:children]
> trove
>
> # Heat
> [heat-api:children]
> heat
>
> [heat-api-cfn:children]
> heat
>
> [heat-engine:children]
> heat
>
> # Murano
> [murano-api:children]
> murano
>
> [murano-engine:children]
> murano
>
> # Ironic
> [ironic-api:children]
> ironic
>
> [ironic-conductor:children]
> ironic
>
> [ironic-inspector:children]
> ironic
>
> [ironic-pxe:children]
> ironic
>
> # Magnum
> [magnum-api:children]
> magnum
>
> [magnum-conductor:children]
> magnum
>
> # Solum
> [solum-api:children]
> solum
>
> [solum-worker:children]
> solum
>
> [solum-deployer:children]
> solum
>
> [solum-conductor:children]
> solum
>
> # Mistral
> [mistral-api:children]
> mistral
>
> [mistral-executor:children]
> mistral
>
> [mistral-engine:children]
> mistral
>
> # Aodh
> [aodh-api:children]
> aodh
>
> [aodh-evaluator:children]
> aodh
>
> [aodh-listener:children]
> aodh
>
> [aodh-notifier:children]
> aodh
>
> # Panko
> [panko-api:children]
> panko
>
> # Gnocchi
> [gnocchi-api:children]
> gnocchi
>
> [gnocchi-statsd:children]
> gnocchi
>
> [gnocchi-metricd:children]
> gnocchi
>
> # Sahara
> [sahara-api:children]
> sahara
>
> [sahara-engine:children]
> sahara
>
> # Ceilometer
> [ceilometer-central:children]
> ceilometer
>
> [ceilometer-notification:children]
> ceilometer
>
> [ceilometer-compute:children]
> compute
>
> # Congress
> [congress-api:children]
> congress
>
> [congress-datasource:children]
> congress
>
> [congress-policy-engine:children]
> congress
>
> # Multipathd
> [multipathd:children]
> compute
>
> # Watcher
> [watcher-api:children]
> watcher
>
> [watcher-engine:children]
> watcher
>
> [watcher-applier:children]
> watcher
>
> # Senlin
> [senlin-api:children]
> senlin
>
> [senlin-engine:children]
> senlin
>
> # Searchlight
> [searchlight-api:children]
> searchlight
>
> [searchlight-listener:children]
> searchlight
>
> # Octavia
> [octavia-api:children]
> octavia
>
> [octavia-health-manager:children]
> octavia
>
> [octavia-housekeeping:children]
> octavia
>
> [octavia-worker:children]
> octavia
>
> # Designate
> [designate-api:children]
> designate
>
> [designate-central:children]
> designate
>
> [designate-producer:children]
> designate
>
> [designate-mdns:children]
> network
>
> [designate-worker:children]
> designate
>
> [designate-sink:children]
> designate
>
> [designate-backend-bind9:children]
> designate
>
> # Placement
> [placement-api:children]
> placement
>
> # Zun
> [zun-api:children]
> zun
>
> [zun-compute:children]
> compute
>
> # Skydive
> [skydive-analyzer:children]
> skydive
>
> [skydive-agent:children]
> compute
> network
>
> # Tacker
> [tacker-server:children]
> tacker
>
> [tacker-conductor:children]
> tacker
>
> # Vitrage
> [vitrage-api:children]
> vitrage
>
> [vitrage-notifier:children]
> vitrage
>
> [vitrage-graph:children]
> vitrage
>
> [vitrage-collector:children]
> vitrage
>
> [vitrage-ml:children]
> vitrage
>
> # Blazar
> [blazar-api:children]
> blazar
>
> [blazar-manager:children]
> blazar
>
> # Prometheus
> [prometheus-node-exporter:children]
> monitoring
> control
> compute
> network
> storage
>
> [prometheus-mysqld-exporter:children]
> mariadb
>
> [prometheus-haproxy-exporter:children]
> haproxy

On Mon, May 21, 2018 at 11:27 PM, Rafael Weingärtner <[email protected]> wrote:

> Well, everything is pretty standard (I am only deploying a POC); I am
> following this "documentation"[1]. I did not change much from the default
> files.
> [1] https://docs.openstack.org/project-deploy-guide/kolla-ansible/queens/quickstart.html#
>
> Globals:
>
>> ---
>> # You can use this file to override _any_ variable throughout Kolla.
>> # Additional options can be found in the
>> # 'kolla-ansible/ansible/group_vars/all.yml' file. Default value of all the
>> # commented parameters are shown here, To override the default value uncomment
>> # the parameter and change its value.
>>
>> ###############
>> # Kolla options
>> ###############
>> # Valid options are [ COPY_ONCE, COPY_ALWAYS ]
>> #config_strategy: "COPY_ALWAYS"
>>
>> # Valid options are ['centos', 'debian', 'oraclelinux', 'rhel', 'ubuntu']
>> #kolla_base_distro: "centos"
>> kolla_base_distro: "ubuntu"
>>
>> # Valid options are [ binary, source ]
>> #kolla_install_type: "binary"
>> kolla_install_type: "source"
>>
>> # Valid option is Docker repository tag
>> #openstack_release: ""
>> #openstack_release: "master"
>> openstack_release: "queens"
>>
>> # Location of configuration overrides
>> #node_custom_config: "/etc/kolla/config"
>>
>> # This should be a VIP, an unused IP on your network that will float between
>> # the hosts running keepalived for high-availability. If you want to run an
>> # All-In-One without haproxy and keepalived, you can set enable_haproxy to no
>> # in "OpenStack options" section, and set this value to the IP of your
>> # 'network_interface' as set in the Networking section below.
>> network_interface: "enp0s8"
>> kolla_internal_vip_address: "192.168.56.250"
>>
>> # This is the DNS name that maps to the kolla_internal_vip_address VIP. By
>> # default it is the same as kolla_internal_vip_address.
>> #kolla_internal_fqdn: "{{ kolla_internal_vip_address }}"
>>
>> # This should be a VIP, an unused IP on your network that will float between
>> # the hosts running keepalived for high-availability. It defaults to the
>> # kolla_internal_vip_address, allowing internal and external communication to
>> # share the same address. Specify a kolla_external_vip_address to separate
>> # internal and external requests between two VIPs.
>> #kolla_external_vip_address: "{{ kolla_internal_vip_address }}"
>>
>> # The Public address used to communicate with OpenStack as set in the public_url
>> # for the endpoints that will be created. This DNS name should map to
>> # kolla_external_vip_address.
>> #kolla_external_fqdn: "{{ kolla_external_vip_address }}"
>>
>> ################
>> # Docker options
>> ################
>> # Below is an example of a private repository with authentication. Note the
>> # Docker registry password can also be set in the passwords.yml file.
>>
>> #docker_registry: "172.16.0.10:4000"
>> #docker_namespace: "companyname"
>> #docker_registry_username: "sam"
>> #docker_registry_password: "correcthorsebatterystaple"
>>
>> ###################
>> # Messaging options
>> ###################
>> # Below is an example of an separate backend that provides brokerless
>> # messaging for oslo.messaging RPC communications
>>
>> #om_rpc_transport: "amqp"
>> #om_rpc_user: "{{ qdrouterd_user }}"
>> #om_rpc_password: "{{ qdrouterd_password }}"
>> #om_rpc_port: "{{ qdrouterd_port }}"
>> #om_rpc_group: "qdrouterd"
>>
>>
>> ##############################
>> # Neutron - Networking Options
>> ##############################
>> # This interface is what all your api services will be bound to by default.
>> # Additionally, all vxlan/tunnel and storage network traffic will go over this
>> # interface by default. This interface must contain an IPv4 address.
>> # It is possible for hosts to have non-matching names of interfaces - these can
>> # be set in an inventory file per host or per group or stored separately, see
>> # http://docs.ansible.com/ansible/intro_inventory.html
>> # Yet another way to workaround the naming problem is to create a bond for the
>> # interface on all hosts and give the bond name here. Similar strategy can be
>> # followed for other types of interfaces.
>> #network_interface: "eth0"
>>
>> # These can be adjusted for even more customization. The default is the same as
>> # the 'network_interface'. These interfaces must contain an IPv4 address.
>> #kolla_external_vip_interface: "{{ network_interface }}"
>> #api_interface: "{{ network_interface }}"
>> #storage_interface: "{{ network_interface }}"
>> #cluster_interface: "{{ network_interface }}"
>> #tunnel_interface: "{{ network_interface }}"
>> #dns_interface: "{{ network_interface }}"
>>
>> # This is the raw interface given to neutron as its external network port. Even
>> # though an IP address can exist on this interface, it will be unusable in most
>> # configurations. It is recommended this interface not be configured with any IP
>> # addresses for that reason.
>> #neutron_external_interface: "eth1"
>> neutron_external_interface: "enp0s9"
>>
>> # Valid options are [ openvswitch, linuxbridge, vmware_nsxv, vmware_dvs, opendaylight ]
>> neutron_plugin_agent: "linuxbridge"
>>
>> # Valid options are [ internal, infoblox ]
>> #neutron_ipam_driver: "internal"
>>
>>
>> ####################
>> # keepalived options
>> ####################
>> # Arbitrary unique number from 0..255
>> #keepalived_virtual_router_id: "51"
>>
>>
>> #############
>> # TLS options
>> #############
>> # To provide encryption and authentication on the kolla_external_vip_interface,
>> # TLS can be enabled. When TLS is enabled, certificates must be provided to
>> # allow clients to perform authentication.
>> #kolla_enable_tls_external: "no"
>> #kolla_external_fqdn_cert: "{{ node_config_directory }}/certificates/haproxy.pem"
>>
>>
>> ##############
>> # OpenDaylight
>> ##############
>> #enable_opendaylight_qos: "no"
>> #enable_opendaylight_l3: "yes"
>>
>> ###################
>> # OpenStack options
>> ###################
>> # Use these options to set the various log levels across all OpenStack projects
>> # Valid options are [ True, False ]
>> #openstack_logging_debug: "False"
>>
>> # Valid options are [ none, novnc, spice, rdp ]
>> #nova_console: "novnc"
>>
>> # OpenStack services can be enabled or disabled with these options
>> #enable_aodh: "no"
>> #enable_barbican: "no"
>> #enable_blazar: "no"
>> #enable_ceilometer: "no"
>> #enable_central_logging: "no"
>> #enable_ceph: "no"
>> #enable_ceph_mds: "no"
>> #enable_ceph_rgw: "no"
>> #enable_ceph_nfs: "no"
>> #enable_chrony: "no"
>> #enable_cinder: "yes"
>> #enable_cinder_backup: "yes"
>> #enable_cinder_backend_hnas_iscsi: "no"
>> #enable_cinder_backend_hnas_nfs: "no"
>> #enable_cinder_backend_iscsi: "no"
>> #enable_cinder_backend_lvm: "no"
>> #enable_cinder_backend_nfs: "yes"
>> #enable_cloudkitty: "no"
>> #enable_collectd: "no"
>> #enable_congress: "no"
>> #enable_designate: "no"
>> #enable_destroy_images: "no"
>> #enable_etcd: "no"
>> #enable_fluentd: "yes"
>> #enable_freezer: "no"
>> #enable_gnocchi: "no"
>> #enable_grafana: "no"
>> enable_haproxy: "no"
>> #enable_heat: "yes"
>> #enable_horizon: "yes"
>> #enable_horizon_blazar: "{{ enable_blazar | bool }}"
>> #enable_horizon_cloudkitty: "{{ enable_cloudkitty | bool }}"
>> #enable_horizon_designate: "{{ enable_designate | bool }}"
>> #enable_horizon_freezer: "{{ enable_freezer | bool }}"
>> #enable_horizon_ironic: "{{ enable_ironic | bool }}"
>> #enable_horizon_karbor: "{{ enable_karbor | bool }}"
>> #enable_horizon_magnum: "{{ enable_magnum | bool }}"
>> #enable_horizon_manila: "{{ enable_manila | bool }}"
>> #enable_horizon_mistral: "{{ enable_mistral | bool }}"
>> #enable_horizon_murano: "{{ enable_murano | bool }}"
>> #enable_horizon_neutron_lbaas: "{{ enable_neutron_lbaas | bool }}"
>> #enable_horizon_octavia: "{{ enable_octavia | bool }}"
>> #enable_horizon_sahara: "{{ enable_sahara | bool }}"
>> #enable_horizon_searchlight: "{{ enable_searchlight | bool }}"
>> #enable_horizon_senlin: "{{ enable_senlin | bool }}"
>> #enable_horizon_solum: "{{ enable_solum | bool }}"
>> #enable_horizon_tacker: "{{ enable_tacker | bool }}"
>> #enable_horizon_trove: "{{ enable_trove | bool }}"
>> #enable_horizon_watcher: "{{ enable_watcher | bool }}"
>> #enable_horizon_zun: "{{ enable_zun | bool }}"
>> #enable_hyperv: "no"
>> #enable_influxdb: "no"
>> #enable_ironic: "no"
>> #enable_ironic_pxe_uefi: "no"
>> #enable_kafka: "no"
>> #enable_karbor: "no"
>> #enable_kuryr: "no"
>> #enable_magnum: "no"
>> #enable_manila: "no"
>> #enable_manila_backend_generic: "no"
>> #enable_manila_backend_hnas: "no"
>> #enable_manila_backend_cephfs_native: "no"
>> #enable_manila_backend_cephfs_nfs: "no"
>> #enable_mistral: "no"
>> #enable_mongodb: "no"
>> #enable_murano: "no"
>> #enable_multipathd: "no"
>> #enable_neutron_bgp_dragent: "no"
>> #enable_neutron_dvr: "no"
>> #enable_neutron_lbaas: "no"
>> #enable_neutron_fwaas: "no"
>> #enable_neutron_qos: "no"
>> #enable_neutron_agent_ha: "no"
>> enable_neutron_vpnaas: "no"
>> #enable_neutron_sriov: "no"
>> #enable_neutron_sfc: "no"
>> #enable_nova_fake: "no"
>> #enable_nova_serialconsole_proxy: "no"
>> #enable_octavia: "no"
>> #enable_opendaylight: "no"
>> #enable_openvswitch: "{{ neutron_plugin_agent != 'linuxbridge' }}"
>> #enable_ovs_dpdk: "no"
>> #enable_osprofiler: "no"
>> #enable_panko: "no"
>> #enable_prometheus: "no"
>> #enable_qdrouterd: "no"
>> #enable_rally: "no"
>> #enable_redis: "no"
>> #enable_sahara: "no"
>> #enable_searchlight: "no"
>> #enable_senlin: "no"
>> #enable_skydive: "no"
>> #enable_solum: "no"
>> #enable_swift: "no"
>> #enable_telegraf: "no"
>> #enable_tacker: "no"
>> #enable_tempest: "no"
>> #enable_trove: "no"
>> #enable_trove_singletenant: "no"
>> #enable_vitrage: "no"
>> #enable_vmtp: "no"
>> #enable_watcher: "no"
>> #enable_zookeeper: "no"
>> #enable_zun: "no"
>>
>> ##############
>> # Ceph options
>> ##############
>> # Ceph can be setup with a caching to improve performance. To use the cache you
>> # must provide separate disks than those for the OSDs
>> #ceph_enable_cache: "no"
>>
>> # Set to no if using external Ceph without cephx.
>> #external_ceph_cephx_enabled: "yes"
>>
>> # Ceph is not able to determine the size of a cache pool automatically,
>> # so the configuration on the absolute size is required here, otherwise
>> # the flush/evict will not work.
>> #ceph_target_max_bytes: ""
>> #ceph_target_max_objects: ""
>>
>> # Valid options are [ forward, none, writeback ]
>> #ceph_cache_mode: "writeback"
>>
>> # A requirement for using the erasure-coded pools is you must setup a cache tier
>> # Valid options are [ erasure, replicated ]
>> #ceph_pool_type: "replicated"
>>
>> # Integrate ceph rados object gateway with openstack keystone
>> #enable_ceph_rgw_keystone: "no"
>>
>> # Set the pgs and pgps for pool
>> # WARNING! These values are dependant on the size and shape of your cluster -
>> # the default values are not suitable for production use. Please refer to the
>> # Kolla Ceph documentation for more information.
>> #ceph_pool_pg_num: 8
>> #ceph_pool_pgp_num: 8
>>
>> #############################
>> # Keystone - Identity Options
>> #############################
>>
>> # Valid options are [ fernet ]
>> #keystone_token_provider: 'fernet'
>>
>> # Interval to rotate fernet keys by (in seconds). Must be an interval of
>> # 60(1 min), 120(2 min), 180(3 min), 240(4 min), 300(5 min), 360(6 min),
>> # 600(10 min), 720(12 min), 900(15 min), 1200(20 min), 1800(30 min),
>> # 3600(1 hour), 7200(2 hour), 10800(3 hour), 14400(4 hour), 21600(6 hour),
>> # 28800(8 hour), 43200(12 hour), 86400(1 day), 604800(1 week).
>> #fernet_token_expiry: 86400
>>
>>
>> ########################
>> # Glance - Image Options
>> ########################
>> # Configure image backend.
>> #glance_backend_ceph: "no"
>> #glance_backend_file: "yes"
>> #glance_backend_swift: "no"
>> #glance_backend_vmware: "no"
>> # Configure glance upgrade option, due to this feature is experimental
>> # in glance, so default value should be set to "no".
>> glance_enable_rolling_upgrade: "no"
>>
>>
>> ##################
>> # Barbican options
>> ##################
>> # Valid options are [ simple_crypto, p11_crypto ]
>> #barbican_crypto_plugin: "simple_crypto"
>> #barbican_library_path: "/usr/lib/libCryptoki2_64.so"
>>
>> ################
>> ## Panko options
>> ################
>> # Valid options are [ mongodb, mysql ]
>> #panko_database_type: "mysql"
>>
>> #################
>> # Gnocchi options
>> #################
>> # Valid options are [ file, ceph ]
>> #gnocchi_backend_storage: "{{ 'ceph' if enable_ceph|bool else 'file' }}"
>>
>> # Valid options are [redis, '']
>> #gnocchi_incoming_storage: "{{ 'redis' if enable_redis | bool else '' }}"
>>
>> ################################
>> # Cinder - Block Storage Options
>> ################################
>> # Enable / disable Cinder backends
>> #cinder_backend_ceph: "{{ enable_ceph }}"
>> #cinder_backend_vmwarevc_vmdk: "no"
>> #cinder_volume_group: "cinder-volumes"
>>
>> # Valid options are [ nfs, swift, ceph ]
>> #cinder_backup_driver: "ceph"
>> #cinder_backup_share: ""
>> #cinder_backup_mount_options_nfs: ""
>>
>>
>> ###################
>> # Designate options
>> ###################
>> # Valid options are [ bind9 ]
>> #designate_backend: "bind9"
>> #designate_ns_record: "sample.openstack.org"
>>
>> ########################
>> # Nova - Compute Options
>> ########################
>> #nova_backend_ceph: "{{ enable_ceph }}"
>>
>> # Valid options are [ qemu, kvm, vmware, xenapi ]
>> #nova_compute_virt_type: "kvm"
>>
>> # The number of fake driver per compute node
>> #num_nova_fake_per_node: 5
>>
>> #################
>> # Hyper-V options
>> #################
>> # Hyper-V can be used as hypervisor
>> #hyperv_username: "user"
>> #hyperv_password: "password"
>> #vswitch_name: "vswitch"
>> # URL from which Nova Hyper-V MSI is downloaded
>> #nova_msi_url: "https://www.cloudbase.it/downloads/HyperVNovaCompute_Beta.msi"
>>
>> #############################
>> # Horizon - Dashboard Options
>> #############################
>> #horizon_backend_database: "{{ enable_murano | bool }}"
>>
>> #############################
>> # Ironic options
>> #############################
>> # following value must be set when enable ironic, the value format
>> # is "192.168.0.10,192.168.0.100".
>> ironic_dnsmasq_dhcp_range:
>>
>> ######################################
>> # Manila - Shared File Systems Options
>> ######################################
>> # HNAS backend configuration
>> #hnas_ip:
>> #hnas_user:
>> #hnas_password:
>> #hnas_evs_id:
>> #hnas_evs_ip:
>> #hnas_file_system_name:
>>
>> ################################
>> # Swift - Object Storage Options
>> ################################
>> # Swift expects block devices to be available for storage. Two types of storage
>> # are supported: 1 - storage device with a special partition name and filesystem
>> # label, 2 - unpartitioned disk with a filesystem. The label of this filesystem
>> # is used to detect the disk which Swift will be using.
>>
>> # Swift support two matching modes, valid options are [ prefix, strict ]
>> #swift_devices_match_mode: "strict"
>>
>> # This parameter defines matching pattern: if "strict" mode was selected,
>> # for swift_devices_match_mode then swift_device_name should specify the name of
>> # the special swift partition for example: "KOLLA_SWIFT_DATA", if "prefix" mode was
>> # selected then swift_devices_name should specify a pattern which would match to
>> # filesystems' labels prepared for swift.
>> #swift_devices_name: "KOLLA_SWIFT_DATA"
>>
>>
>> ################################################
>> # Tempest - The OpenStack Integration Test Suite
>> ################################################
>> # following value must be set when enable tempest
>> tempest_image_id:
>> tempest_flavor_ref_id:
>> tempest_public_network_id:
>> tempest_floating_network_name:
>>
>> # tempest_image_alt_id: "{{ tempest_image_id }}"
>> # tempest_flavor_ref_alt_id: "{{ tempest_flavor_ref_id }}"
>>
>> ###################################
>> # VMware - OpenStack VMware support
>> ###################################
>> #vmware_vcenter_host_ip:
>> #vmware_vcenter_host_username:
>> #vmware_vcenter_host_password:
>> #vmware_datastore_name:
>> #vmware_vcenter_name:
>> #vmware_vcenter_cluster_name:
>>
>> #######################################
>> # XenAPI - Support XenAPI for XenServer
>> #######################################
>> # XenAPI driver use HIMN(Host Internal Management Network)
>> # to communicate with XenServer host.
>> #xenserver_himn_ip:
>> #xenserver_username:
>> #xenserver_connect_protocol:
>>
>> ############
>> # Prometheus
>> ############
>> #enable_prometheus_haproxy_exporter: "{{ enable_haproxy | bool }}"
>> #enable_prometheus_mysqld_exporter: "{{ enable_mariadb | bool }}"
>> #enable_prometheus_node_exporter: "yes"
>
>
> On Mon, May 21, 2018 at 10:49 PM, Jeffrey Zhang <[email protected]> wrote:
>
>> It seems there is some issue in your inventory file.
>> Could you compare your inventory file with the one in the kolla-ansible code?
>>
>> If you still cannot fix it, try to provide your globals.yml file and
>> inventory file on the ML.
>>
>> On Tue, May 22, 2018 at 6:51 AM, Rafael Weingärtner <[email protected]> wrote:
>>
>>> Hello OpenStackers,
>>> First of all, I am not sure if this is the right list to post this
>>> question. Therefore, please excuse me if I am sending an e-mail to the
>>> wrong place.
>>>
>>> So, I have been trying to use Kolla to deploy a POC environment of
>>> OpenStack. However, I have not been able to do so. Right now I am getting
>>> the following error:
>>>
>>>> fatal: [localhost]: FAILED! => {"msg": "The conditional check
>>>> '(neutron_l3_agent.enabled | bool and neutron_l3_agent.host_in_groups
>>>> | bool) or (neutron_vpnaas_agent.enabled | bool and
>>>> neutron_vpnaas_agent.host_in_groups | bool)' failed. The error was:
>>>> error while evaluating conditional ((neutron_l3_agent.enabled | bool and
>>>> neutron_l3_agent.host_in_groups | bool) or
>>>> (neutron_vpnaas_agent.enabled | bool and
>>>> neutron_vpnaas_agent.host_in_groups | bool)): Unable to look up a name
>>>> or access an attribute in template string ({{ inventory_hostname in
>>>> groups['neutron-vpnaas-agent'] }}).\nMake sure your variable name does
>>>> not contain invalid characters like '-': argument of type
>>>> 'StrictUndefined' is not iterable\n\nThe error appears to have been in
>>>> '/usr/local/share/kolla-ansible/ansible/roles/neutron/tasks/config.yml':
>>>> line 2, column 3, but may\nbe elsewhere in the file depending on the
>>>> exact syntax problem.\n\nThe offending line appears to be:\n\n---\n-
>>>> name: Setting sysctl values\n  ^ here\n"}
>>>
>>> It looks like an Ansible problem. I checked the file
>>> “/usr/local/share/kolla-ansible/ansible/roles/neutron/tasks/config.yml”;
>>> at line 5, it has the following declaration:
>>>
>>>> neutron_l3_agent: "{{ neutron_services['neutron-l3-agent'] }}"
>>>
>>> As far as I understand, everything is OK with this variable declaration.
>>> The “neutron-l3-agent” key is used to retrieve an element from the
>>> “neutron_services” map, and that looks fine. Has anybody else experienced
>>> this problem before?
>>>
>>> I am using Kolla for OpenStack Queens, invoked with the following
>>> commands.
>>>
>>>> kolla-ansible -i all-in-one bootstrap-servers && kolla-ansible -i
>>>> all-in-one prechecks && kolla-ansible -i all-in-one deploy
>>>
>>> As you can see, it is a simple use case to deploy OpenStack on a single
>>> node. The command that is failing is the following.
>>>
>>>> kolla-ansible -i all-in-one deploy
>>>
>>> --
>>> Rafael Weingärtner
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: [email protected]?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> --
>> Regards,
>> Jeffrey Zhang
>> Blog: http://xcodest.me
>
> --
> Rafael Weingärtner

--
Rafael Weingärtner
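[Editor's note] The "argument of type 'StrictUndefined' is not iterable" part of the traceback above is ordinary Python behavior: when Jinja2 cannot resolve `groups['neutron-vpnaas-agent']` (and the inventory quoted at the top of the thread defines no such group), it hands back a `StrictUndefined` placeholder, and the `in` membership test on that placeholder raises a `TypeError`. A minimal stdlib-only sketch of that failure mode follows; `StrictUndefinedLike` and `group_lookup` are hypothetical stand-ins for Jinja2's and Ansible's real machinery, not their actual code:

```python
# Hypothetical stand-in for jinja2.StrictUndefined: it defines neither
# __iter__ nor __contains__, so membership tests on it raise TypeError.
class StrictUndefinedLike:
    pass


def group_lookup(groups, name):
    # Jinja2 subscripting returns an "undefined" object for a missing key
    # instead of raising KeyError; this helper mimics that behavior.
    return groups.get(name, StrictUndefinedLike())


# Mimic Ansible's magic 'groups' variable: one key per inventory group.
# As in the inventory quoted above, there is no 'neutron-vpnaas-agent' key.
groups = {"neutron-l3-agent": ["localhost"]}

try:
    "localhost" in group_lookup(groups, "neutron-vpnaas-agent")
except TypeError as exc:
    # TypeError: argument of type 'StrictUndefinedLike' is not iterable
    print("conditional failed:", exc)
```

A quick `grep -n 'neutron-vpnaas-agent' all-in-one` against the inventory would confirm whether the group is defined at all, which is essentially the comparison against the shipped kolla-ansible inventory suggested earlier in the thread.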
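[Editor's note] For readers hitting the same traceback: the failing template string references `groups['neutron-vpnaas-agent']`, and the inventory quoted at the top of the thread never defines that group, which matches the error. A hedged guess at a workaround (the group name is taken from the error message, and parenting it on `neutron` merely mirrors the other neutron agent groups in the same file; the authoritative fix is to regenerate the inventory from the kolla-ansible Queens tree) would be to append:

```ini
# Hypothetical addition to the all-in-one inventory; group name taken from
# the error message, parent group mirrors the other neutron agent groups.
[neutron-vpnaas-agent:children]
neutron
```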
