On 02/20/2017 09:33 AM, Prashant Shetty wrote:
Team,
I have a multi-node devstack setup with a single controller and multiple
computes running stable/ocata.
On compute:
ENABLED_SERVICES=n-cpu,neutron,placement-api
Both the KVM and ESXi computes came up fine:
vmware@cntr11:~$ nova hypervisor-list
warnings.warn(msg)
+----+----------------------------------------------------+-------+---------+
| ID | Hypervisor hostname                                | State | Status  |
+----+----------------------------------------------------+-------+---------+
| 4  | domain-c82529.2fb3c1d7-fe24-49ea-9096-fcf148576db8 | up    | enabled |
| 7  | kvm-1                                              | up    | enabled |
+----+----------------------------------------------------+-------+---------+
vmware@cntr11:~$
All services seem to be running fine. When I tried to launch an instance,
I saw the errors below in the nova-conductor logs and the instance was
stuck in the "scheduling" state forever.
I don't have any config related to n-cell on the controller. Could someone
help me identify why nova-conductor is complaining about cells?
2017-02-20 14:24:06.128 WARNING oslo_config.cfg [req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] Option "scheduler_default_filters" from group "DEFAULT" is deprecated. Use option "enabled_filters" from group "filter_scheduler".
2017-02-20 14:24:06.211 ERROR nova.conductor.manager [req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] Failed to schedule instances
2017-02-20 14:24:06.211 TRACE nova.conductor.manager Traceback (most recent call last):
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/conductor/manager.py", line 866, in schedule_and_build_instances
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     request_specs[0].to_legacy_filter_properties_dict())
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/conductor/manager.py", line 597, in _schedule_instances
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     hosts = self.scheduler_client.select_destinations(context, spec_obj)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/utils.py", line 371, in wrapped
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return func(*args, **kwargs)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/client/__init__.py", line 51, in select_destinations
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return self.queryclient.select_destinations(context, spec_obj)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/client/__init__.py", line 37, in __run_method
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return getattr(self.instance, __name)(*args, **kwargs)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/client/query.py", line 32, in select_destinations
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return self.scheduler_rpcapi.select_destinations(context, spec_obj)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/rpcapi.py", line 129, in select_destinations
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return cctxt.call(ctxt, 'select_destinations', **msg_args)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 169, in call
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     retry=self.retry)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 97, in _send
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     timeout=timeout, retry=retry)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 458, in send
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     retry=retry)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 449, in _send
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     raise result
2017-02-20 14:24:06.211 TRACE nova.conductor.manager NoValidHost_Remote: No valid host was found. There are not enough hosts available.
2017-02-20 14:24:06.211 TRACE nova.conductor.manager Traceback (most recent call last):
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 218, in inner
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return func(*args, **kwargs)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/manager.py", line 98, in select_destinations
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     dests = self.driver.select_destinations(ctxt, spec_obj)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 79, in select_destinations
2017-02-20 14:24:06.211 TRACE nova.conductor.manager     raise exception.NoValidHost(reason=reason)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.211 TRACE nova.conductor.manager NoValidHost: No valid host was found. There are not enough hosts available.
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.217 ERROR nova.conductor.manager [req-e17fda8d-0d53-4735-
I don't see anything in the logs above complaining about "No cell mapping
found for cell0"; perhaps you pasted the wrong snippet from the logs.
Regardless, I think you simply need to run nova-manage cell_v2
simple_cell_setup. This is a required step in Ocata deployments. You can
read about this here:
https://docs.openstack.org/developer/nova/man/nova-manage.html
and the release notes here:
https://docs.openstack.org/releasenotes/nova/ocata.html
and more information about cells here:
https://docs.openstack.org/developer/nova/cells.html
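For reference, the sequence on the controller would look something like the
following (a sketch based on a default devstack layout; simple_cell_setup
reads the transport URL from nova.conf, and discover_hosts is only needed
for computes that registered before the cell mapping existed):

  # Create the cell0 mapping, the initial cell, and map existing hosts
  $ nova-manage cell_v2 simple_cell_setup

  # Verify that cell0 and cell1 mappings now exist
  $ nova-manage cell_v2 list_cells

  # Map any compute hosts added after the cell was created
  $ nova-manage cell_v2 discover_hosts --verbose

After that, a retried boot should get past the conductor's cell lookup, and
any remaining NoValidHost would point at an actual scheduling/filter issue.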
Best,
-jay
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev