On 2/20/2017 10:31 AM, Prashant Shetty wrote:
Thanks, Jay, for the response. Sorry, I missed copying the right error.

Here is the log:
2017-02-20 14:24:06.211 TRACE nova.conductor.manager NoValidHost: No valid host was found. There are not enough hosts available.
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.217 ERROR nova.conductor.manager [req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] No cell mapping found for cell0 while trying to record scheduling failure. Setup is incomplete.

I tried the command you mentioned, but I still see the same error in the conductor log.

As part of stack.sh on the controller, I see the commands below were executed
related to "cell". Shouldn't devstack take care of this part during the
initial bringup, or am I missing some parameters in localrc?

NOTE: I have not explicitly enabled n-cell in localrc

2017-02-20 14:11:47.510 INFO migrate.versioning.api [-] done
+lib/nova:init_nova:683                    recreate_database nova
+lib/database:recreate_database:112        local db=nova
+lib/database:recreate_database:113        recreate_database_mysql nova
+lib/databases/mysql:recreate_database_mysql:56  local db=nova
+lib/databases/mysql:recreate_database_mysql:57  mysql -uroot -pvmware -h127.0.0.1 -e 'DROP DATABASE IF EXISTS nova;'
+lib/databases/mysql:recreate_database_mysql:58  mysql -uroot -pvmware -h127.0.0.1 -e 'CREATE DATABASE nova CHARACTER SET utf8;'
+lib/nova:init_nova:684                    recreate_database nova_cell0
+lib/database:recreate_database:112        local db=nova_cell0
+lib/database:recreate_database:113        recreate_database_mysql nova_cell0
+lib/databases/mysql:recreate_database_mysql:56  local db=nova_cell0
+lib/databases/mysql:recreate_database_mysql:57  mysql -uroot -pvmware -h127.0.0.1 -e 'DROP DATABASE IF EXISTS nova_cell0;'
+lib/databases/mysql:recreate_database_mysql:58  mysql -uroot -pvmware -h127.0.0.1 -e 'CREATE DATABASE nova_cell0 CHARACTER SET utf8;'
+lib/nova:init_nova:689                    /usr/local/bin/nova-manage --config-file /etc/nova/nova.conf db sync
WARNING: cell0 mapping not found - not syncing cell0.
2017-02-20 14:11:50.846 INFO migrate.versioning.api [req-145fe57e-7751-412f-a1f6-06dfbd39b711 None None] 215 -> 216...
2017-02-20 14:11:54.279 INFO migrate.versioning.api [req-145fe57e-7751-412f-a1f6-06dfbd39b711 None None] done
2017-02-20 14:11:54.280 INFO migrate.versioning.api [req-145fe57e-7751-412f-a1f6-06dfbd39b711 None None] 216 -> 217...
2017-02-20 14:11:54.288 INFO migrate.versioning.api [req-145fe57e-7751-412f-a1f6-06dfbd39b711 None None] done



Thanks,
Prashant

On Mon, Feb 20, 2017 at 8:21 PM, Jay Pipes <jaypi...@gmail.com> wrote:

    On 02/20/2017 09:33 AM, Prashant Shetty wrote:

        Team,

        I have multi node devstack setup with single controller and multiple
        computes running stable/ocata.

        On compute:
        ENABLED_SERVICES=n-cpu,neutron,placement-api

        Both the KVM and ESXi computes came up fine:
        vmware@cntr11:~$ nova hypervisor-list
        +----+----------------------------------------------------+-------+---------+
        | ID | Hypervisor hostname                                | State | Status  |
        +----+----------------------------------------------------+-------+---------+
        | 4  | domain-c82529.2fb3c1d7-fe24-49ea-9096-fcf148576db8 | up    | enabled |
        | 7  | kvm-1                                              | up    | enabled |
        +----+----------------------------------------------------+-------+---------+
        vmware@cntr11:~$

        All services seem to run fine. When I tried to launch an instance, I
        saw the errors below in the nova-conductor logs, and the instance is
        stuck in the "scheduling" state forever.
        I don't have any config related to n-cell on the controller. Could
        someone help me identify why nova-conductor is complaining about
        cells?

        2017-02-20 14:24:06.128 WARNING oslo_config.cfg [req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] Option "scheduler_default_filters" from group "DEFAULT" is deprecated. Use option "enabled_filters" from group "filter_scheduler".
        2017-02-20 14:24:06.211 ERROR nova.conductor.manager [req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] Failed to schedule instances
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager Traceback (most recent call last):
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/conductor/manager.py", line 866, in schedule_and_build_instances
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager     request_specs[0].to_legacy_filter_properties_dict())
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/conductor/manager.py", line 597, in _schedule_instances
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager     hosts = self.scheduler_client.select_destinations(context, spec_obj)
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/utils.py", line 371, in wrapped
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return func(*args, **kwargs)
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/client/__init__.py", line 51, in select_destinations
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return self.queryclient.select_destinations(context, spec_obj)
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/client/__init__.py", line 37, in __run_method
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return getattr(self.instance, __name)(*args, **kwargs)
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/client/query.py", line 32, in select_destinations
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return self.scheduler_rpcapi.select_destinations(context, spec_obj)
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/rpcapi.py", line 129, in select_destinations
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return cctxt.call(ctxt, 'select_destinations', **msg_args)
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 169, in call
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager     retry=self.retry)
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 97, in _send
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager     timeout=timeout, retry=retry)
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 458, in send
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager     retry=retry)
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 449, in _send
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager     raise result
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager NoValidHost_Remote: No valid host was found. There are not enough hosts available.
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager Traceback (most recent call last):
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 218, in inner
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager     return func(*args, **kwargs)
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/manager.py", line 98, in select_destinations
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager     dests = self.driver.select_destinations(ctxt, spec_obj)
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 79, in select_destinations
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager     raise exception.NoValidHost(reason=reason)
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager NoValidHost: No valid host was found. There are not enough hosts available.
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager
        2017-02-20 14:24:06.211 TRACE nova.conductor.manager
        2017-02-20 14:24:06.217 ERROR nova.conductor.manager [req-e17fda8d-0d53-4735-


    I don't see anything above that is complaining about "No cell
    mapping found for cell0"? Perhaps you pasted the wrong snippet from
    the logs.

    Regardless, I think you simply need to run nova-manage cell_v2
    simple_cell_setup. This is a required step in Ocata deployments. You
    can read about this here:

    https://docs.openstack.org/developer/nova/man/nova-manage.html

    and the release notes here:

    https://docs.openstack.org/releasenotes/nova/ocata.html

    and more information about cells here:

    https://docs.openstack.org/developer/nova/cells.html
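
    Roughly, that step would look like this on the controller (the
    transport URL below is just a placeholder; simple_cell_setup can also
    pick it up from nova.conf):

    ```shell
    # Sketch of the Ocata cells v2 setup step. simple_cell_setup creates
    # the cell0 mapping, a cell mapping for the main cell, and maps any
    # already-registered compute hosts, all in one shot.
    nova-manage cell_v2 simple_cell_setup \
        --transport-url 'rabbit://stackrabbit:password@127.0.0.1:5672/'

    # Verify the resulting mappings
    nova-manage cell_v2 list_cells
    ```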

    Best,
    -jay

    __________________________________________________________________________
    OpenStack Development Mailing List (not for usage questions)
    Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






You're doing multinode. You need to run this after the subnode n-cpu is running:

nova-manage cell_v2 discover_hosts

Run ^ from the master (control) node where the API database is located. We do the same in devstack-gate for multinode jobs:

https://github.com/openstack-infra/devstack-gate/blob/f5dccd60c20b08be6f0b053265e26a491307946e/devstack-vm-gate.sh#L717

Single-node devstack will take care of discovering the n-cpu compute host as part of the stack.sh run, but the multinode case is special in that you need to explicitly discover the subnode n-cpu after it's running. Devstack is not topology aware so this is something you have to handle in an orchestrator (like d-g) outside of the devstack run.
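
In other words, the sequence for your orchestrator is roughly (sketch; hostnames and paths are illustrative):

```shell
# 1. On the subnode: bring up n-cpu via stack.sh and wait for the
#    nova-compute service to register itself.
# 2. Then, on the controller (where the API database lives), discover
#    the new host and map it into the cell:
nova-manage cell_v2 discover_hosts --verbose

# You can confirm the host was mapped by checking the service list:
nova service-list --binary nova-compute
```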

--

Thanks,

Matt Riedemann

