On 2/22/2017 9:33 AM, Prashant Shetty wrote:
Thanks Matt.

Here are the steps I have performed. I don't see any error related to
cell0 now, but n-cond is still unable to detect the computes even after
running discover_hosts :(.

Do we need any cell setting on the nova-compute nodes as well?

vmware@cntr11:~/nsbu_cqe_openstack/devstack$ nova service-list
+----+------------------+---------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host          | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+---------------+----------+---------+-------+----------------------------+-----------------+
| 7  | nova-conductor   | cntr11        | internal | enabled | up    | 2017-02-22T14:23:34.000000 | -               |
| 9  | nova-scheduler   | cntr11        | internal | enabled | up    | 2017-02-22T14:23:28.000000 | -               |
| 10 | nova-consoleauth | cntr11        | internal | enabled | up    | 2017-02-22T14:23:33.000000 | -               |
| 11 | nova-compute     | esx-ubuntu-02 | nova     | enabled | up    | 2017-02-22T14:23:35.000000 | -               |
| 12 | nova-compute     | esx-ubuntu-03 | nova     | enabled | up    | 2017-02-22T14:23:35.000000 | -               |
| 13 | nova-compute     | esx-ubuntu-01 | nova     | enabled | up    | 2017-02-22T14:23:28.000000 | -               |
| 14 | nova-compute     | kvm-3         | nova     | enabled | up    | 2017-02-22T14:23:28.000000 | -               |
| 15 | nova-compute     | kvm-1         | nova     | enabled | up    | 2017-02-22T14:23:32.000000 | -               |
| 16 | nova-compute     | kvm-2         | nova     | enabled | up    | 2017-02-22T14:23:32.000000 | -               |
+----+------------------+---------------+----------+---------+-------+----------------------------+-----------------+
vmware@cntr11:~/nsbu_cqe_openstack/devstack$
vmware@cntr11:~/nsbu_cqe_openstack/devstack$ nova-manage cell_v2 map_cell0 --database_connection mysql+pymysql://root:vmware@127.0.0.1/nova?charset=utf8
vmware@cntr11:~/nsbu_cqe_openstack/devstack$ nova-manage cell_v2 simple_cell_setup --transport-url rabbit://stackrabbit:vmware@60.0.24.49:5672/
Cell0 is already setup
vmware@cntr11:~/nsbu_cqe_openstack/devstack$ nova-manage cell_v2 list_cells
+-------+--------------------------------------+
|  Name |                 UUID                 |
+-------+--------------------------------------+
|  None | ea6bec24-058a-4ba2-8d21-57d34c01802c |
| cell0 | 00000000-0000-0000-0000-000000000000 |
+-------+--------------------------------------+
vmware@cntr11:~/nsbu_cqe_openstack/devstack$ nova-manage cell_v2 discover_hosts --verbose
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell: ea6bec24-058a-4ba2-8d21-57d34c01802c
Found 6 computes in cell: ea6bec24-058a-4ba2-8d21-57d34c01802c
Checking host mapping for compute host 'kvm-3': a4b175d6-f5cc-45a8-9cf2-45726293b5c5
Checking host mapping for compute host 'esx-ubuntu-02': 70281329-590c-4cb7-8839-fd84160345b7
Checking host mapping for compute host 'esx-ubuntu-03': 04ea75a2-789e-483e-8d0e-4b0f79e012dc
Checking host mapping for compute host 'kvm-1': dfabae3c-4ea9-4e8f-a496-8880dd9e89d9
Checking host mapping for compute host 'kvm-2': d1cb30f5-822c-4c18-81fb-921ca676b834
Checking host mapping for compute host 'esx-ubuntu-01': d00f8f16-af6b-437d-8136-bc744eb2472f
vmware@cntr11:~/nsbu_cqe_openstack/devstack$

n-sch:
2017-02-22 14:26:51.467 INFO nova.scheduler.host_manager [req-56d1cefb-1dfb-481d-aaff-b7b6e05f83f0 None None] Successfully synced instances from host 'kvm-2'.
2017-02-22 14:26:51.608 INFO nova.scheduler.host_manager [req-690b1a18-a709-49b2-bfad-2a6a75a3bee2 None None] Successfully synced instances from host 'kvm-3'.
2017-02-22 14:27:23.366 INFO nova.filters [req-1085ec50-29f7-4946-81e2-03c1378e8077 alt_demo admin] Filter RetryFilter returned 0 hosts
2017-02-22 14:27:23.367 INFO nova.filters [req-1085ec50-29f7-4946-81e2-03c1378e8077 alt_demo admin] Filtering removed all hosts for the request with instance ID 'c74f394f-c805-4b5c-ba42-507dfda2c5be'. Filter results: ['RetryFilter: (start: 0, end: 0)']


n-cond:
2017-02-22 14:27:23.375 TRACE nova.conductor.manager   File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 79, in select_destinations
2017-02-22 14:27:23.375 TRACE nova.conductor.manager     raise exception.NoValidHost(reason=reason)
2017-02-22 14:27:23.375 TRACE nova.conductor.manager
2017-02-22 14:27:23.375 TRACE nova.conductor.manager NoValidHost: No valid host was found. There are not enough hosts available.
2017-02-22 14:27:23.375 TRACE nova.conductor.manager
2017-02-22 14:27:23.375 TRACE nova.conductor.manager
2017-02-22 14:27:23.424 WARNING nova.scheduler.utils [req-1085ec50-29f7-4946-81e2-03c1378e8077 alt_demo admin] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
Traceback (most recent call last):

  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 218, in inner
    return func(*args, **kwargs)

  File "/opt/stack/nova/nova/scheduler/manager.py", line 98, in select_destinations
    dests = self.driver.select_destinations(ctxt, spec_obj)

  File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 79, in select_destinations
    raise exception.NoValidHost(reason=reason)

NoValidHost: No valid host was found. There are not enough hosts available.

2017-02-22 14:27:23.425 WARNING nova.scheduler.utils [req-1085ec50-29f7-4946-81e2-03c1378e8077 alt_demo admin] [instance: c74f394f-c805-4b5c-ba42-507dfda2c5be] Setting instance to ERROR state.

On Wed, Feb 22, 2017 at 5:44 PM, Matt Riedemann <mriede...@gmail.com> wrote:

    On 2/21/2017 10:38 AM, Prashant Shetty wrote:

        Hi Mark,

        Thanks for your reply.

        I tried "nova-manage cell_v2 discover_hosts"; it returned nothing,
        and I still have the same issue on the node.

        The problem seems to be the way devstack is getting configured.
        As the code below suggests, cell0 is created on a node where both
        n-api and n-cpu run. In my case the compute is running only n-cpu
        and the controller is running only n-api, so this code creates no
        cell on either the controller or the compute.


    The nova_cell0 database is created here:

    
    https://github.com/openstack-dev/devstack/blob/7a30c7fcabac1cf28fd9baa39d05436680616aef/lib/nova#L680

    That's the same place that the nova_api database is created.


        We will not have this problem in an all-in-one-node setup.
        --
        # Do this late because it requires compute hosts to have started
        if is_service_enabled n-api; then
            if is_service_enabled n-cpu; then
                create_cell
            else
                # Some CI systems like Hyper-V build the control plane on
                # Linux, and join in non Linux Computes after setup. This
                # allows them to delay the processing until after their whole
                # environment is up.
                echo_summary "SKIPPING Cell setup because n-cpu is not enabled. You will have to do this manually before you have a working environment."
            fi
        fi


    You're correct that when stacking the control node where n-api is
    running, you won't get to the create_cell call:

    
    https://github.com/openstack-dev/devstack/blob/7a30c7fcabac1cf28fd9baa39d05436680616aef/stack.sh#L1371

    The create_cell function is what creates the cell0 mapping in the
    nova_api database and runs the simple_cell_setup command:

    
    https://github.com/openstack-dev/devstack/blob/7a30c7fcabac1cf28fd9baa39d05436680616aef/lib/nova#L943

    You're running discover_hosts from the control node where the
    nova_api database lives, so that looks correct.

    Can you run discover_hosts with the --verbose option to get some
    more details, i.e. how many cell mappings are there, how many host
    mappings and compute_nodes records are created?

    I think the issue is that you haven't run map_cell0 and
    simple_cell_setup. In the gating multinode CI job, the create_cell
    function in devstack is called because that's a 2-node job where
    n-cpu is running on both nodes, but n-api is only running on the
    control (primary) node. In your case you don't have that, so you're
    going to have to run these commands manually.

    The docs here explain how to set this up and the commands to run:

    https://docs.openstack.org/developer/nova/cells.html#setup-of-cells-v2
    https://docs.openstack.org/developer/nova/cells.html#fresh-install
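
    (Pulling the commands from elsewhere in this thread together, the
    manual sequence on the control node looks roughly like the sketch
    below. The database and transport URLs are placeholders; substitute
    your own credentials, hosts, and database names.)

```shell
# Map cell0 to its own database. nova_cell0 is the conventional database
# name; the credentials here are placeholders, not from this deployment.
nova-manage cell_v2 map_cell0 \
    --database_connection 'mysql+pymysql://root:secret@127.0.0.1/nova_cell0?charset=utf8'

# Create the mapping for the main cell and map existing compute hosts
# and instances into it (placeholder RabbitMQ URL).
nova-manage cell_v2 simple_cell_setup \
    --transport-url 'rabbit://stackrabbit:secret@127.0.0.1:5672/'

# As additional computes register later, map them into the cell with:
nova-manage cell_v2 discover_hosts --verbose
```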


        ---

        vmware@cntr11:~$ nova-manage cell_v2 discover_hosts
        vmware@cntr11:~$ nova service-list
        +----+------------------+---------------+----------+---------+-------+----------------------------+-----------------+
        | Id | Binary           | Host          | Zone     | Status  | State | Updated_at                 | Disabled Reason |
        +----+------------------+---------------+----------+---------+-------+----------------------------+-----------------+
        | 3  | nova-conductor   | cntr11        | internal | enabled | up    | 2017-02-21T15:34:13.000000 | -               |
        | 5  | nova-scheduler   | cntr11        | internal | enabled | up    | 2017-02-21T15:34:15.000000 | -               |
        | 6  | nova-consoleauth | cntr11        | internal | enabled | up    | 2017-02-21T15:34:11.000000 | -               |
        | 7  | nova-compute     | esx-ubuntu-02 | nova     | enabled | up    | 2017-02-21T15:34:14.000000 | -               |
        | 8  | nova-compute     | esx-ubuntu-03 | nova     | enabled | up    | 2017-02-21T15:34:16.000000 | -               |
        | 9  | nova-compute     | kvm-3         | nova     | enabled | up    | 2017-02-21T15:34:07.000000 | -               |
        | 10 | nova-compute     | kvm-2         | nova     | enabled | up    | 2017-02-21T15:34:13.000000 | -               |
        | 11 | nova-compute     | esx-ubuntu-01 | nova     | enabled | up    | 2017-02-21T15:34:14.000000 | -               |
        | 12 | nova-compute     | kvm-1         | nova     | enabled | up    | 2017-02-21T15:34:09.000000 | -               |
        +----+------------------+---------------+----------+---------+-------+----------------------------+-----------------+
        vmware@cntr11:~$
        vmware@cntr11:~$ nova-manage cell_v2 list_cells
        +------+------+
        | Name | UUID |
        +------+------+
        +------+------+
        vmware@cntr11:~$


        Thanks,
        Prashant


    --

    Thanks,

    Matt Riedemann

    __________________________________________________________________________
    OpenStack Development Mailing List (not for usage questions)
    Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






The scheduler failure is different. You have 2 cell mappings, one for cell0 and the other is your single-cell with the API and main nova database, with uuid ea6bec24-058a-4ba2-8d21-57d34c01802c.

The discover_hosts output is showing that it's discovering the compute nodes in cell ea6bec24-058a-4ba2-8d21-57d34c01802c so those should all be mapped in the nova_api database (see the host_mappings table in the nova_api DB).
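
A quick way to sanity-check those mappings, assuming MySQL and that you can reach the nova_api database with root credentials (adjust user, password prompt, and database name to your setup):

```shell
# List the host-to-cell mappings created by discover_hosts; each compute
# host should appear once, with cell_id pointing at your cell's record.
mysql -u root -p -e 'SELECT host, cell_id FROM nova_api.host_mappings;'
```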

The scheduler failure could simply be due to resource limitations, the nature of the server build request, the filters you have enabled, or any number of "normal" issues you can hit when scheduling. You'll have to investigate that.
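
A couple of starting points for that investigation, sketched with the nova CLI commands of that era:

```shell
# Aggregate capacity across all hypervisors the scheduler can see.
nova hypervisor-stats

# Per-hypervisor listing, to compare against the requested flavor.
nova hypervisor-list

# For per-filter detail, set debug = True in nova.conf on the scheduler
# node, restart n-sch, and retry the boot request.
```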

--

Thanks,

Matt Riedemann

