Hi Alex,

First off, thank you for your kind reply. I followed your advice, but I still have a problem that looks like an incorrectly set up storage pool for LXD. I configured /etc/nova-compute.conf as you suggested, but it didn't help.

In my case those lines look like:

[DEFAULT]
compute_driver = nova_lxd.nova.virt.lxd.LXDDriver

[lxd]
allow_live_migration = True
pool = lxd
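
(After editing this file I restart the compute service so the options are re-read; with devstack's systemd units that should be something like the following, assuming the standard unit name:)

# restart the devstack-managed nova-compute so the new [lxd] options are re-read
sudo systemctl restart devstack@n-cpu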

But I get the error below (relevant parts from syslog). It looks like the [lxd] options from /etc/nova-compute.conf are not delivered to the driver, or not recognized by it. I also tried different names for compute_driver, e.g. lxd.LXDDriver and nova-lxd.nova.virt.lxd.(driver).LXDDriver, but that didn't help either. (A manual reproduction of the failing command follows the log.)

Sep 19 09:15:08 localhost nova-conductor[1956]: DEBUG oslo_service.service [None req-5cda2246-8087-4f49-b9b3-463d29fd7bf8 None None] compute_driver                 = nova_lxd.nova.virt.lxd.LXDDriver {{(pid=1956) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2890}}
Sep 19 09:15:08 localhost nova-consoleauth[1984]: DEBUG oslo_service.service [None req-2a9399e1-996d-42f4-899b-62db8d8e5afd None None] compute_driver                 = nova_lxd.nova.virt.lxd.LXDDriver {{(pid=1984) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2890}}
Sep 19 09:15:10 localhost nova-scheduler[2000]: DEBUG oslo_service.service [None req-3d23848b-270b-4718-91dd-3b0417d27d21 None None] compute_driver                 = nova_lxd.nova.virt.lxd.LXDDriver {{(pid=2000) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2890}}
Sep 19 09:15:11 localhost devstack@placement-api.service[2030]: DEBUG nova.api.openstack.placement.wsgi [-] compute_driver                 = nova_lxd.nova.virt.lxd.LXDDriver {{(pid=2294) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2890}}
Sep 19 09:15:12 localhost devstack@n-api.service[1999]: DEBUG nova.api.openstack.wsgi_app [None req-1e20e359-2bcc-44e6-9c75-7422c5837c03 None None] compute_driver                 = nova_lxd.nova.virt.lxd.LXDDriver {{(pid=2181) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2890}}
Sep 19 09:15:14 localhost devstack@placement-api.service[2030]: DEBUG nova.api.openstack.placement.wsgi [-] compute_driver                 = nova_lxd.nova.virt.lxd.LXDDriver {{(pid=2292) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2890}}
Sep 19 09:15:15 localhost devstack@n-api.service[1999]: DEBUG nova.api.openstack.wsgi_app [None req-6d5745e7-3784-4b8e-8210-e7bf74fd9655 None None] compute_driver                 = nova_lxd.nova.virt.lxd.LXDDriver {{(pid=2182) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2890}}
Sep 19 09:15:20 localhost nova-compute[1968]: DEBUG oslo_service.service [None req-1bc67100-ffbe-4d22-be1f-42133eb60611 None None] compute_driver                 = lxd.LXDDriver {{(pid=1968) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2890}}
Sep 19 09:15:20 localhost nova-compute[1968]: DEBUG oslo_service.service [None req-1bc67100-ffbe-4d22-be1f-42133eb60611 None None] lxd.allow_live_migration       = False {{(pid=1968) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2898}}
Sep 19 09:15:20 localhost nova-compute[1968]: DEBUG oslo_service.service [None req-1bc67100-ffbe-4d22-be1f-42133eb60611 None None] lxd.pool                       = None {{(pid=1968) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2898}}
Sep 19 09:15:20 localhost nova-compute[1968]: DEBUG oslo_service.service [None req-1bc67100-ffbe-4d22-be1f-42133eb60611 None None] lxd.root_dir                   = /var/lib/lxd/ {{(pid=1968) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2898}}
Sep 19 09:15:20 localhost nova-compute[1968]: DEBUG oslo_service.service [None req-1bc67100-ffbe-4d22-be1f-42133eb60611 None None] lxd.timeout                    = -1 {{(pid=1968) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2898}}
.........
.........
Sep 19 09:15:21 localhost nova-compute[1968]: DEBUG oslo_concurrency.processutils [None req-87a543da-84db-4fe1-a743-75ca6525ac2a None None] u'sudo nova-rootwrap /etc/nova/rootwrap.conf zpool list -o size -H None' failed. Not Retrying. {{(pid=1968) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:457}}
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager [None req-87a543da-84db-4fe1-a743-75ca6525ac2a None None] Error updating resources for node ubuntu.: ProcessExecutionError: Unexpected error while running command.
Sep 19 09:15:21 localhost nova-compute[1968]: Command: sudo nova-rootwrap /etc/nova/rootwrap.conf zpool list -o size -H None
Sep 19 09:15:21 localhost nova-compute[1968]: Exit code: 1
Sep 19 09:15:21 localhost nova-compute[1968]: Stdout: u''
Sep 19 09:15:21 localhost nova-compute[1968]: Stderr: u"cannot open 'None': no such pool\n"
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager Traceback (most recent call last):
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/compute/manager.py", line 7344, in update_available_resource_for_node
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/compute/resource_tracker.py", line 673, in update_available_resource
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager     resources = self.driver.get_available_resource(nodename)
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager   File "/opt/stack/nova-lxd/nova/virt/lxd/driver.py", line 1031, in get_available_resource
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager     local_disk_info = _get_zpool_info(pool_name)
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager   File "/opt/stack/nova-lxd/nova/virt/lxd/driver.py", line 209, in _get_zpool_info
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager     total = _get_zpool_attribute('size')
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager   File "/opt/stack/nova-lxd/nova/virt/lxd/driver.py", line 201, in _get_zpool_attribute
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager     run_as_root=True)
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/utils.py", line 230, in execute
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager     return RootwrapProcessHelper().execute(*cmd, **kwargs)
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager   File "/opt/stack/nova/nova/utils.py", line 113, in execute
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager     return processutils.execute(*cmd, **kwargs)
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py", line 424, in execute
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager     cmd=sanitized_cmd)
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager ProcessExecutionError: Unexpected error while running command.
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager Command: sudo nova-rootwrap /etc/nova/rootwrap.conf zpool list -o size -H None
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager Exit code: 1
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager Stdout: u''
Sep 19 09:15:21 localhost nova-compute[1968]: ERROR nova.compute.manager Stderr: u"cannot open 'None': no such pool\n"
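
For reference, the failing command can also be reproduced by hand outside of nova; a minimal check, assuming the pool from my [lxd] section should exist as a ZFS-backed pool named 'lxd', would be:

# run the same query the driver issues, against the pool name from [lxd]
sudo zpool list -o size -H lxd
# list the storage pools LXD itself knows about (name and driver)
lxc storage list
# confirm which --config-file options the running nova-compute was started with
ps aux | grep nova-compute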

The local.conf used by devstack looks like:

[[local|localrc]]
############################################################
# Customize the following HOST_IP based on your installation
############################################################
HOST_IP=127.0.0.1

ADMIN_PASSWORD=devstack
MYSQL_PASSWORD=devstack
RABBIT_PASSWORD=devstack
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=devstack

# run the services you want to use
ENABLED_SERVICES=rabbit,mysql,key
ENABLED_SERVICES+=,g-api,g-reg
ENABLED_SERVICES+=,n-cpu,n-api,n-crt,n-obj,n-cond,n-sch,n-novnc,n-cauth,placement-api,placement-client
ENABLED_SERVICES+=,neutron,q-svc,q-agt,q-dhcp,q-meta,q-l3
ENABLED_SERVICES+=,cinder,c-sch,c-api,c-vol
ENABLED_SERVICES+=,horizon

# disabled services
disable_service n-net

# enable nova-lxd
enable_plugin nova-lxd https://git.openstack.org/openstack/nova-lxd stable/queens
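
(For completeness, this local.conf sits in the root of the devstack checkout and the stack is built the usual way:)

# standard devstack workflow: place local.conf next to stack.sh, then stack
cd devstack
./stack.sh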

Best regards,
Martin.

On 17.09.2018 15:29, Alex Kavanagh wrote:
Hi Martin

On Sun, Sep 16, 2018 at 8:46 PM, Martin Bobák <martin.bo...@savba.sk> wrote:

    Hi all,

    what is the recommended way of nova-lxd plugin installation on a
    fresh xenial host running a pure OpenStack devstack (Queens)
    installation? I have tried to install the nova-lxd plugin by pip
    install, or allowing it during OpenStack devstack installation,
    but each attempt led to the same result. The plugin is either not
    recognized or the installation doesn't finish successfully. I went
    through the nova-lxd homepage as well as its github repo, but I
    wasn't able to solve the whole installation problem (e.g. I found
    out that installing the newest version of pylxd helps with the
    installation of the plugin; however, the plugin still isn't
    recognized, so additional configuration is needed...).

    Do you have any thoughts about it?


I'm one of the maintainers for nova-lxd, so hopefully can get you up and running.

In order for nova-lxd to be configured in nova, the /etc/nova-compute.conf needs to contain the lines:

[DEFAULT]
compute_driver = nova_lxd.nova.virt.lxd.LXDDriver

This little 'fact' is hidden away in the "nova-compute-lxd" debian package, unfortunately.

You'll also need to configure an [lxd] section in nova.conf to control the storage pool in LXD for containers to use when launching instances.

[lxd]
allow_live_migration = True
pool = {{ storage_pool }}

The storage pool will need to be set up separately in lxd.
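For example, a minimal sketch assuming a ZFS-backed pool (the pool name 'lxd' here is just illustrative):

# create a ZFS-backed LXD storage pool called 'lxd' and verify it
lxc storage create lxd zfs
lxc storage list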
--

However, an 'easy' way to test OpenStack with nova-lxd is to use charms. We have a number of bundles that work with Juju. For example, we have a deployable bundle for xenial and queens at https://github.com/openstack-charmers/openstack-bundles/tree/master/development/openstack-lxd-xenial-queens which also has some (hopefully) useful instructions on how to get it going.

Note the instructions say you have to use MaaS, but you should be able to adapt them to the hardware you are using.
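
(For reference, once a Juju controller is bootstrapped, deploying the bundle is roughly as follows; the exact bundle file name inside the repository is an assumption here:)

# clone the bundles repository and deploy the xenial/queens nova-lxd bundle
git clone https://github.com/openstack-charmers/openstack-bundles
juju deploy ./openstack-bundles/development/openstack-lxd-xenial-queens/bundle.yaml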

As an alternative, the openstack-ansible project also supports nova-lxd, but I don't have any experience with that.

Do come back if you have any further questions; do let me know how you get on.

Best regards
Alex.


    Best,
    Martin.

--
    Martin Bobák, PhD.
    Researcher
    Institute of Informatics
    Slovak Academy of Sciences
    Dubravska cesta 9, SK-845 07 Bratislava, Slovakia
    Room: 311, Phone: +421 (0)2 5941-1278
    E-mail: martin.bo...@savba.sk
    URL: http://www.ui.sav.sk/w/odd/pdip/
    LinkedIn: https://www.linkedin.com/in/martin-bobak/





--
Alex Kavanagh - Software Engineer
Cloud Dev Ops - Solutions & Product Engineering - Canonical Ltd


_______________________________________________
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users
