Ping,
        Can you look at cinder.conf to set the rabbitmq IP?
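For example, something along these lines (a sketch on a scratch copy; the IP is the one RabbitMQ reports as its amqp listener in the rabbitmqctl output quoted below):

```shell
# A sketch on a scratch copy: point rabbit_host at the IP RabbitMQ
# actually listens on (172.222.12.3 here) instead of localhost.
# On the node the real file would be /etc/cinder/cinder.conf
# (back it up before editing).
conf=$(mktemp)
printf '[DEFAULT]\nrabbit_host = localhost\nrabbit_port = 5672\n' > "$conf"
sed -i 's/^rabbit_host = .*/rabbit_host = 172.222.12.3/' "$conf"
grep '^rabbit_host' "$conf"
```

Then restart the cinder services so they pick up the new setting.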

Vedu

On 8/3/15, 8:10 PM, "Ping Song" <pi...@juniper.net> wrote:

>Hi Vedu:
>
>Thanks, do you know how to solve this?
>
>I changed my /etc/hosts as below and rebooted, but I still have the same
>issue...
>
>
>root@contrail:~# cat /etc/hosts
>172.222.12.3    localhost
>127.0.1.1       contrail
>
>10.85.4.51      contrail
>10.85.4.52      compute1
>
># The following lines are desirable for IPv6 capable hosts
>::1     localhost ip6-localhost ip6-loopback
>ff02::1 ip6-allnodes
>ff02::2 ip6-allrouters
>172.222.12.3     contrail     contrail-ctrl
>root@contrail:~# ping localhost
>PING localhost (172.222.12.3) 56(84) bytes of data.
>64 bytes from localhost (172.222.12.3): icmp_seq=1 ttl=64 time=0.072 ms
>^C
>--- localhost ping statistics ---
>1 packets transmitted, 1 received, 0% packet loss, time 0ms
>rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms
>
>root@contrail:~# tail -f /var/log/cinder/cinder-scheduler.log
>2015-08-03 10:33:04.203 1993 ERROR oslo.messaging._drivers.impl_rabbit
>[req-d2de3640-ec65-4951-a7b4-abeb56a48e6e - - - - -] AMQP server on
>localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in
>23 seconds.
>2015-08-03 10:33:27.222 1993 INFO oslo.messaging._drivers.impl_rabbit
>[req-d2de3640-ec65-4951-a7b4-abeb56a48e6e - - - - -] Delaying reconnect
>for 1.0 seconds...
>2015-08-03 10:33:28.225 1993 INFO oslo.messaging._drivers.impl_rabbit
>[req-d2de3640-ec65-4951-a7b4-abeb56a48e6e - - - - -] Connecting to AMQP
>server on localhost:5672
>2015-08-03 10:33:28.239 1993 ERROR oslo.messaging._drivers.impl_rabbit
>[req-d2de3640-ec65-4951-a7b4-abeb56a48e6e - - - - -] AMQP server on
>localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in
>25 seconds.
>2015-08-03 10:33:53.258 1993 INFO oslo.messaging._drivers.impl_rabbit
>[req-d2de3640-ec65-4951-a7b4-abeb56a48e6e - - - - -] Delaying reconnect
>for 1.0 seconds...
>2015-08-03 10:33:54.259 1993 INFO oslo.messaging._drivers.impl_rabbit
>[req-d2de3640-ec65-4951-a7b4-abeb56a48e6e - - - - -] Connecting to AMQP
>server on localhost:5672
>
>-----Original Message-----
>From: Vedamurthy Ananth Joshi
>Sent: Monday, August 03, 2015 12:28 AM
>To: Ping Song <pi...@juniper.net>; Lluís Gifre <lgi...@ac.upc.edu>; EXT -
>jpr...@igxglobal.com <jpr...@igxglobal.com>; Saravanan Purushothaman
><sp...@juniper.net>; ask-contrail <ask-contr...@juniper.net>;
>dev@lists.opencontrail.org
>Cc: contrailuser <us...@lists.opencontrail.org>
>Subject: Re: AMQP server on localhost:5672 is unreachable
>
>Ping,
>       Port 5672 is attached only to 172.222.12.3 and would not respond on
>"localhost".
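A quick way to confirm this from the node (a generic sketch, nothing Contrail-specific) is to probe each candidate address and see which one accepts a TCP connection:

```shell
# Sketch: probe which (address, port) pairs accept TCP connections, using
# bash's /dev/tcp. A broker bound only to 172.222.12.3 refuses connections
# addressed to 127.0.0.1 even on the same machine, which is exactly the
# ECONNREFUSED seen in the cinder logs.
can_connect() {  # usage: can_connect HOST PORT
    timeout 1 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}
# Demo against a port nothing listens on: the probe fails fast.
if can_connect 127.0.0.1 1; then echo open; else echo refused; fi
# On the node itself one would expect:
#   can_connect 172.222.12.3 5672  -> success (broker listens there)
#   can_connect 127.0.0.1 5672     -> failure (nothing bound to localhost)
```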
>
>
>Vedu
>
>On 8/3/15, 9:32 AM, "Ping Song" <pi...@juniper.net> wrote:
>
>>Folks:
>>
>>It looks like I ran into a similar issue...
>>
>>When I tried to launch instance I got:
>>Error: Failed to launch instance "instance1": Please try again later
>>[Error: No valid host was found. ].
>>
>>While monitoring the log I see:
>>
>>root@contrail:~# tail -f /var/log/cinder/cinder-scheduler.log
>>2015-08-02 23:47:52.088 2037 ERROR oslo.messaging._drivers.impl_rabbit
>>[req-3d2dd035-c205-4206-8e3f-7ad9e2c1430a - - - - -] AMQP server on
>>localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again
>>in
>>30 seconds.
>>2015-08-02 23:48:22.114 2037 INFO oslo.messaging._drivers.impl_rabbit
>>[req-3d2dd035-c205-4206-8e3f-7ad9e2c1430a - - - - -] Delaying reconnect
>>for 1.0 seconds...
>>2015-08-02 23:48:23.116 2037 INFO oslo.messaging._drivers.impl_rabbit
>>[req-3d2dd035-c205-4206-8e3f-7ad9e2c1430a - - - - -] Connecting to AMQP
>>server on localhost:5672
>>2015-08-02 23:48:23.131 2037 ERROR oslo.messaging._drivers.impl_rabbit
>>[req-3d2dd035-c205-4206-8e3f-7ad9e2c1430a - - - - -] AMQP server on
>>localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again
>>in
>>30 seconds.
>>
>>I'm not sure if this is related... but when I look at the status of the
>>service I don't see anything going wrong...
>>
>>root@contrail:~# rabbitmqctl status
>>Status of node 'rabbit@contrail-ctrl' ...
>>[{pid,11474},
>> {running_applications,[{rabbit,"RabbitMQ","3.5.0"},
>>                        {os_mon,"CPO  CXC 138 46","2.2.14"},
>>                        {mnesia,"MNESIA  CXC 138 12","4.11"},
>>                        {xmerl,"XML parser","1.3.5"},
>>                        {sasl,"SASL  CXC 138 11","2.3.4"},
>>                        {stdlib,"ERTS  CXC 138 10","1.19.4"},
>>                        {kernel,"ERTS  CXC 138 10","2.16.4"}]},
>>{os,{unix,linux}},  {erlang_version,"Erlang R16B03 (erts-5.10.4)
>>[source] [64-bit] [smp:8:8] [async-threads:30] [kernel-poll:true]\n"},
>>{memory,[{total,38403376},
>>          {connection_readers,437080},
>>          {connection_writers,0},
>>          {connection_channels,0},
>>          {connection_other,426992},
>>          {queue_procs,2704},
>>          {queue_slave_procs,0},
>>          {plugins,0},
>>          {other_proc,13738216},
>>          {mnesia,60176},
>>          {mgmt_db,0},
>>          {msg_index,46784},
>>          {other_ets,785752},
>>          {binary,400560},
>>          {code,16351158},
>>          {atom,561761},
>>          {other_system,5592193}]},
>> {alarms,[]},
>> {listeners,[{clustering,25672,"::"},{amqp,5672,"172.222.12.3"}]},
>> {vm_memory_high_watermark,0.4},
>> {vm_memory_limit,3269242060},
>> {disk_free_limit,50000000},
>> {disk_free,7446581248},
>> {file_descriptors,[{total_limit,10140},
>>                    {total_used,52},
>>                    {sockets_limit,9124},
>>                    {sockets_used,50}]},
>>{processes,[{limit,1048576},{used,278}]},
>> {run_queue,0},
>>
>>root@contrail:~# netstat -lntp | grep 5672
>>tcp        0      0 172.222.12.3:5672       0.0.0.0:*
>>LISTEN      11474/beam.smp
>>tcp        0      0 0.0.0.0:25672           0.0.0.0:*
>>LISTEN      11474/beam.smp
>>
>>root@contrail:~# service rabbitmq-server status
>>rabbitmq-server                  RUNNING    pid 11466, uptime 0:07:19
>>
>>So 172.222.12.3 is my data interface (eth1), while my management
>>interface is eth0...
>>
>>any advice?
>>
>>Regards
>>ping
>>
>>-----Original Message-----
>>From: Lluís Gifre [mailto:lgi...@ac.upc.edu]
>>Sent: Monday, May 11, 2015 6:53 AM
>>To: EXT - jpr...@igxglobal.com <jpr...@igxglobal.com>; Saravanan
>>Purushothaman <sp...@juniper.net>; ask-contrail
>><ask-contr...@juniper.net>; dev@lists.opencontrail.org
>>Cc: contrailuser <us...@lists.opencontrail.org>
>>Subject: Re: [Users] Error creating volumes with cinder
>>
>>Hi all,
>>
>>Regarding this problem, I tried something that works for me:
>>
>>I noticed that using Contrail v2.10 I cannot install the cinder-volume
>>service because this package is not available, so I decided to create
>>my own package.
>>
>>These are the steps I followed in Ubuntu 14.04.1:
>>
>>I'm using LVM with a volume group named cinder-volumes to store the
>>volumes.
>>
>>
>># 1. Create my cinder-volume_2014.1.3-0ubuntu1~cloud0_all.deb package
>># 1.1. download original cinder package
>>https://launchpadlibrarian.net/187842740/cinder-volume_2014.1.3-0ubuntu1.1_all.deb
>>
>># 1.2. create temporal folder
>>mkdir mycinder
>>cd mycinder
>>
>># 1.3. extract package files
>>dpkg-deb -x ../cinder-volume_2014.1.3-0ubuntu1.1_all.deb .
>>
>># 1.4. extract package control files
>>dpkg-deb -e ../cinder-volume_2014.1.3-0ubuntu1.1_all.deb
>>
>># 1.5. edit file DEBIAN/control and change the package version and the
>># cinder-common dependency: replace "2014.1.3-0ubuntu1.1" with
>># "1:2014.1.3-0ubuntu1~cloud0" so that the following lines result:
>>     Version: 1:2014.1.3-0ubuntu1~cloud0
>>     Depends: cinder-common (= 1:2014.1.3-0ubuntu1~cloud0), lvm2, tgt,
>>sysv-rc (>= 2.88dsf-24) | file-rc (>= 0.8.16), python:any
>># Keep the rest of the file as is
>>
>># 1.6. generate new package
>>dpkg-deb -b . ../cinder-volume_2014.1.3-0ubuntu1~cloud0_all.deb
>># (dpkg-deb has no -i option; installation happens via apt in step 4.4,
>># or directly with: dpkg -i ../cinder-volume_2014.1.3-0ubuntu1~cloud0_all.deb)
>>
>># 2. Configure the database on the openstack node where cinder
>># api/scheduler is installed
>># 2.1. Take the root password used for the database creation
>>grep "mysql -u root --password=" /opt/contrail/utils/setup_*
>>
>># 2.2. Log into the database and grant permissions to the cinder user
>>mysql -u root --password=<password taken from the previous step>
>>mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost'
>>IDENTIFIED BY 'cinder';
>>mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'<your openstack
>>node hostname>' IDENTIFIED BY 'cinder';
>>mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY
>>'cinder';
>>mysql> exit
>>
>># 2.3. Populate cinder database
>>su -s /bin/sh -c "cinder-manage db sync" cinder
>>
>># 3. Install package
>># 3.1. Copy the new package cinder-volume_2014.1.3-0ubuntu1~cloud0_all.deb
>># to the compute nodes
>>
>># 3.2. Configure cinder api/scheduler
>>nano /etc/cinder/cinder.conf
>>             [DEFAULT]
>>             rootwrap_config = /etc/cinder/rootwrap.conf
>>             api_paste_config = /etc/cinder/api-paste.ini
>>             iscsi_helper = tgtadm
>>             volume_name_template = volume-%s
>>             volume_group = cinder-volumes
>>             verbose = True
>>             auth_strategy = keystone
>>             state_path = /var/lib/cinder
>>             lock_path = /var/lock/cinder
>>             volumes_dir = /var/lib/cinder/volumes
>>
>>             rpc_backend = rabbit
>>             rabbit_host = <your_openstack_node_IP>
>>             rabbit_port = 5672
>>             my_ip = <your_openstack_node_IP>
>>             glance_host = <your_openstack_node_IP>
>>
>>             osapi_volume_workers = 4
>>
>>             [database]
>>             connection =
>>mysql://cinder:cinder@<your_cinder_api/scheduler_node_IP>/cinder
>>
>>             [keystone_authtoken]
>>             admin_tenant_name = service
>>             admin_user = cinder
>>             admin_password = (copy the value from same attribute in
>>/etc/nova/nova.conf)
>>             auth_protocol = http
>>             auth_host = <your_openstack_node_IP>
>>             auth_port = 35357
>>             auth_uri = http://<your_openstack_node_IP>:5000
>>
>># 3.3. Restart cinder-api and cinder-scheduler services on the openstack node
>>service cinder-api restart
>>service cinder-scheduler restart
>>
>># 4. Install new cinder-volume on the compute nodes
>># 4.1. copy the cinder-volume package into the contrail install repo
>>cp cinder-volume_2014.1.3-0ubuntu1~cloud0_all.deb \
>>    /opt/contrail/contrail_install_repo
>>
>># 4.2. move to the contrail install repo folder
>>cd /opt/contrail/contrail_install_repo
>>
>># 4.3. regenerate Packages.gz
>>dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
>>
>># 4.4. update the apt cache and install cinder-volume
>>apt-get update
>>apt-get install cinder-volume
>>
>># 5. Configure compute node
>># 5.1. Configure cinder-volume service
>>nano /etc/cinder/cinder.conf
>>             [DEFAULT]
>>             rootwrap_config = /etc/cinder/rootwrap.conf
>>             api_paste_config = /etc/cinder/api-paste.ini
>>             iscsi_helper = tgtadm
>>             volume_name_template = volume-%s
>>             volume_group = cinder-volumes
>>             #verbose = True
>>             auth_strategy = keystone
>>             state_path = /var/lib/cinder
>>             lock_path = /var/lock/cinder
>>             volumes_dir = /var/lib/cinder/volumes
>>
>>             rpc_backend = rabbit
>>             rabbit_host = <your_openstack_node_IP>
>>             rabbit_port = 5672
>>             my_ip = <your_compute_node_IP>
>>             glance_host = <your_openstack_node_IP>
>>
>>             [database]
>>             connection =
>>mysql://cinder:cinder@<your_openstack_node_IP>/cinder
>>
>>             [keystone_authtoken]
>>             admin_tenant_name = service
>>             admin_user = cinder
>>             admin_password = (copy the value from same attribute in
>>/etc/nova/nova.conf)
>>             auth_protocol = http
>>             auth_host = <your_openstack_node_IP>
>>             auth_port = 35357
>>             auth_uri = http://<your_openstack_node_IP>:5000
>>
>># 5.2. Configure LVM service
>>nano /etc/lvm/lvm.conf
>>             Leave the file as is, except for the filter attribute in
>>             the devices section. The device specified (sda4) has to be
>>             the one where the cinder-volumes volume group resides.
>>
>>             devices {
>>                 ...
>>                 # Cinder filter
>>                 filter = [ "a/sda4/", "r/.*/" ]
>>                 ...
>>             }
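In case the filter syntax is unfamiliar: LVM tries the rules in order and the first matching regex wins ("a" accepts, "r" rejects; an unmatched device is accepted). A small model of that behaviour, using grep (just a sketch of the semantics, not LVM itself):

```shell
# Sketch of LVM devices/filter semantics: rules are tried in order, the
# first regex that matches wins ("a" = accept, "r" = reject), and a device
# matched by no rule is accepted. LVM matches against paths like /dev/sda4.
lvm_filter_accepts() {  # usage: lvm_filter_accepts DEVICE "a/regex/" "r/regex/" ...
    local dev=$1; shift
    local rule action pattern
    for rule in "$@"; do
        action=${rule%%/*}                    # "a" or "r"
        pattern=${rule#?/}; pattern=${pattern%/}
        if printf '%s\n' "$dev" | grep -q "$pattern"; then
            [ "$action" = a ] && return 0 || return 1
        fi
    done
    return 0   # default: accept
}
lvm_filter_accepts /dev/sda4 "a/sda4/" "r/.*/" && echo accepted
lvm_filter_accepts /dev/sdb1 "a/sda4/" "r/.*/" || echo rejected
```

So the filter above hands only sda4 to LVM and hides every other device from it.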
>>
>># 5.3. Restart cinder-volume and tgt services
>>service cinder-volume restart
>>service tgt restart
>>
>>Now you should be able to create instances that create a volume from an
>>image, or create empty volumes, without problems.
>>
>>The only issue I'm experiencing is that OpenStack's Dashboard, under
>>Admin > Hypervisors, reports incorrect total and used storage space.
>>Except for this, it works correctly.
>>
>>Hope it helps.
>>Lluis
>>
>>
>>
>>On 14/04/15 at 18:04, Joao Prino wrote:
>>> Hi Saravanan,
>>>
>>>     Thanks for the update.
>>>     The output:
>>>
>>> # dpkg -l| grep cinder
>>> ii  cinder-api                           1:2014.1.3-0ubuntu1~cloud0
>>>       all          Cinder storage service - API server
>>> ii  cinder-common                        1:2014.1.3-0ubuntu1~cloud0
>>>       all          Cinder storage service - common files
>>> ii  cinder-scheduler                     1:2014.1.3-0ubuntu1~cloud0
>>>       all          Cinder storage service - Scheduler server
>>> ii  python-cinder                        1:2014.1.3-0ubuntu1~cloud0
>>>       all          Cinder Python libraries
>>> ii  python-cinderclient                  1:1.0.8-0ubuntu1
>>>       all          python bindings to the OpenStack Volume API
>>>
>>>     That might explain the behaviour; let me install the packages and
>>>get back to you.
>>>
>>>
>>> Cheers,
>>> Joao
>>>
>>> -----Original Message-----
>>> From: Saravanan Purushothaman [mailto:sp...@juniper.net]
>>> Sent: 14 April 2015 16:56
>>> To: Joao Prino; Lluís Gifre; ask-contrail; dev@lists.opencontrail.org
>>> Cc: contrailuser
>>> Subject: RE: [Users] Error creating volumes with cinder
>>>
>>>
>>> Hi Joao,
>>>
>>>                 Can you try this: "dpkg -l | grep cinder"
>>>
>>>                 I think the cinder-volume package is not installed.
>>>
>>>                 If the cinder-volume service is not running then we may
>>>get this "No valid host was found" error.
>>>
>>>                  NOTE:
>>>                            cinder-volume is a storage-related package.
>>> If you want storage on contrail then please follow the instructions at
>>> the link below to install the contrail-storage package (cinder-volume
>>> will be installed as part of the contrail-storage package).
>>>
>>>                            You can get the "contrail storage packages"
>>> from this link:
>>> http://www.juniper.net/support/downloads/?p=contrail#sw
>>>
>>>
>>> Regards,
>>> Saravanan
>>>   
>>> -----Original Message-----
>>> From: Joao Prino [mailto:jpr...@igxglobal.com]
>>> Sent: Tuesday, April 14, 2015 12:37 AM
>>> To: Lluís Gifre; ask-contrail; dev@lists.opencontrail.org
>>> Cc: contrailuser
>>> Subject: RE: [Users] Error creating volumes with cinder
>>>
>>> Hello Lluis, Dev,
>>>
>>>     I'm calling Dev's attention to this problem.
>>>     I look forward to hearing back whenever possible.
>>>
>>> Cheers,
>>> Joao
>>>
>>> -----Original Message-----
>>> From: Lluís Gifre [mailto:lgi...@ac.upc.edu]
>>> Sent: 10 April 2015 17:03
>>> To: Joao Prino; Adrian Smith
>>> Cc: contrailuser
>>> Subject: Re: [Users] Error creating volumes with cinder
>>>
>>> Hi Joao,
>>>
>>> Yes, I have already made this correction, but the "No backend provided"
>>>and "No valid host was found" errors while the cinder-scheduler is
>>>creating the volume are still happening.
>>>
>>> Thanks,
>>> Lluis
>>>
>>>
>>> On 10/04/15 at 17:54, Joao Prino wrote:
>>>> Hi Lluis,
>>>>
>>>>    I had this exact same problem, which I was able to solve by adding
>>>>"rabbit_host = <ip>" under '/etc/cinder/cinder.conf', editing
>>>>'/etc/rabbitmq/rabbitmq.config' to match my cluster setup's fqdn
>>>>(wrongly populated), and changing the existing "rabbit_host = <ip>"
>>>>in '/etc/nova/nova.conf', as it had the wrong IP assigned to it.
>>>>    This problem was due to having two NICs: RabbitMQ was given my
>>>>NIC1 address settings when it should have had my NIC2's address
>>>>according to the testbed file.
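An audit along these lines (a sketch on scratch files; on a real node you would grep /etc/cinder/cinder.conf, /etc/nova/nova.conf, etc.) makes it easy to spot a service still pointing at the wrong NIC's address:

```shell
# Sketch: list every rabbit_host setting across service configs in one
# shot, so a mismatched NIC address stands out at a glance. Shown on
# scratch files; the paths and values are illustrative only.
dir=$(mktemp -d)
printf 'rabbit_host = 10.0.0.1\n'    > "$dir/cinder.conf"
printf 'rabbit_host = 192.168.1.1\n' > "$dir/nova.conf"
grep -H '^rabbit_host' "$dir"/*.conf
```

Two different values in the output means at least one service is talking to the wrong interface.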
>>>>
>>>> Hope it helps!
>>>>
>>>> Joao
>>>>
>>>>
>>>> -----Original Message-----
>>>> From: Users [mailto:users-boun...@lists.opencontrail.org] On Behalf
>>>> Of Lluís Gifre
>>>> Sent: 10 April 2015 15:19
>>>> To: Adrian Smith
>>>> Cc: contrailuser
>>>> Subject: Re: [Users] Error creating volumes with cinder
>>>>
>>>> Hi Adrian,
>>>>
>>>> Thanks for your quick answer.
>>>> I checked the log you mentioned but didn't find anything interesting.
>>>>
>>>> Do you have any other idea?
>>>>
>>>> Thanks,
>>>> Lluis
>>>>
>>>> On 10/04/15 at 13:14, Adrian Smith wrote:
>>>>> Hi Lluis,
>>>>>
>>>>> Take a look in the cinder scheduler log,
>>>>> /var/log/cinder/scheduler.log. It should have a more meaningful
>>>>>error.
>>>>>
>>>>> Adrian
>>>>>
>>>>> On 10 April 2015 at 11:48, Lluís Gifre <lgi...@ac.upc.edu> wrote:
>>>>>> Dear all,
>>>>>>
>>>>>> I'm deploying an opencontrail 2.10 + openstack icehouse testbed on
>>>>>> Ubuntu 14.04.
>>>>>> I tried it in many ways (with physical and virtual machines,
>>>>>> single and multi-box testbeds).
>>>>>>
>>>>>> Right now I'm trying with a single virtual machine with LVM and
>>>>>> the cinder-volumes volume group created.
>>>>>>
>>>>>> The deploy completes successfully.
>>>>>>
>>>>>> However, just after completing the set-up, I realized that the
>>>>>> following message is added to cinder-scheduler.log periodically:
>>>>>>
>>>>>> 2015-04-10 07:27:29.426 2315 INFO
>>>>>> oslo.messaging._drivers.impl_rabbit
>>>>>> [req-e77c84ab-49f6-40f2-bb20-1c62b23605d6 - - - - -] Reconnecting
>>>>>> to AMQP server on localhost:5672
>>>>>> 2015-04-10 07:27:29.427 2315 INFO
>>>>>> oslo.messaging._drivers.impl_rabbit
>>>>>> [req-e77c84ab-49f6-40f2-bb20-1c62b23605d6 - - - - -] Delaying
>>>>>> reconnect for
>>>>>> 1.0 seconds...
>>>>>> 2015-04-10 07:27:30.442 2315 ERROR
>>>>>> oslo.messaging._drivers.impl_rabbit
>>>>>> [req-e77c84ab-49f6-40f2-bb20-1c62b23605d6 - - - - -] AMQP server
>>>>>> on
>>>>>> localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying
>>>>>> again in 30 seconds.
>>>>>>
>>>>>> To solve the problem I edited /etc/cinder/cinder.conf and added:
>>>>>>
>>>>>> # IP address taken from /etc/rabbitmq/rabbitmq.config
>>>>>> rabbit_host = 192.168.67.13
>>>>>> rabbit_port = 5672
>>>>>>
>>>>>> Content of my /etc/rabbitmq/rabbitmq.config:
>>>>>>
>>>>>> [
>>>>>>       {rabbit, [ {tcp_listeners, [{"192.168.67.13", 5672}]},
>>>>>>       {loopback_users, []},
>>>>>>       {log_levels,[{connection, info},{mirroring, info}]} ]
>>>>>>        }
>>>>>> ].
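To verify the two files agree, a naive cross-check like this can be run (a sketch on scratch files reproducing the exact formats above; real Erlang-term and ini parsing would be more robust):

```shell
# Sketch: extract the listener IP from rabbitmq.config and rabbit_host
# from cinder.conf, then compare them. The sed patterns only handle the
# exact shapes shown in this thread.
cfg=$(mktemp); conf=$(mktemp)
printf '[ {rabbit, [ {tcp_listeners, [{"192.168.67.13", 5672}]} ] } ].\n' > "$cfg"
printf 'rabbit_host = 192.168.67.13\nrabbit_port = 5672\n' > "$conf"
listener_ip=$(sed -n 's/.*tcp_listeners, \[{"\([0-9.]*\)".*/\1/p' "$cfg")
rabbit_host=$(sed -n 's/^rabbit_host = \(.*\)$/\1/p' "$conf")
if [ "$listener_ip" = "$rabbit_host" ]; then
    echo "match: $listener_ip"
else
    echo "MISMATCH: broker listens on $listener_ip, cinder uses $rabbit_host"
fi
```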
>>>>>>
>>>>>> After rebooting, this change solved the recurrent reconnecting to
>>>>>> AMQP problem.
>>>>>> Right now the cinder-scheduler log shows:
>>>>>>
>>>>>> 2015-04-10 07:32:44.063 2525 AUDIT cinder.service [-] Starting
>>>>>> cinder-scheduler node (version 2014.1.3)
>>>>>> 2015-04-10 07:32:44.197 2525 INFO
>>>>>> oslo.messaging._drivers.impl_rabbit
>>>>>> [req-21e86bf9-3fa6-4ab0-9968-616242200c41 - - - - -] Connected to
>>>>>> AMQP server on 192.168.67.13:5672
>>>>>> 2015-04-10 07:32:48.383 2525 INFO
>>>>>> oslo.messaging._drivers.impl_rabbit
>>>>>> [-] Connected to AMQP server on 192.168.67.13:5672
>>>>>>
>>>>>> Next I tried to create a volume. No matter whether I do it from the
>>>>>>openstack dashboard or using cinder's or nova's command-line
>>>>>>interface, the result is an error. For example, trying with the
>>>>>>cinder command line:
>>>>>>
>>>>>> # cinder list
>>>>>> +----+--------+--------------+------+-------------+----------+-------------+
>>>>>> | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
>>>>>> +----+--------+--------------+------+-------------+----------+-------------+
>>>>>> +----+--------+--------------+------+-------------+----------+-------------+
>>>>>>
>>>>>> # cinder create  --display-name test 1
>>>>>> +---------------------+--------------------------------------+
>>>>>> |       Property      |                Value                 |
>>>>>> +---------------------+--------------------------------------+
>>>>>> |     attachments     |                  []                  |
>>>>>> |  availability_zone  |                 nova                 |
>>>>>> |       bootable      |                false                 |
>>>>>> |      created_at     |      2015-04-10T07:57:12.787493      |
>>>>>> | display_description |                 None                 |
>>>>>> |     display_name    |                 test                 |
>>>>>> |      encrypted      |                False                 |
>>>>>> |          id         | d374b571-5df1-47f3-ae6e-c3218aebb9db |
>>>>>> |       metadata      |                  {}                  |
>>>>>> |         size        |                  1                   |
>>>>>> |     snapshot_id     |                 None                 |
>>>>>> |     source_volid    |                 None                 |
>>>>>> |        status       |               creating               |
>>>>>> |     volume_type     |                 None                 |
>>>>>> +---------------------+--------------------------------------+
>>>>>>
>>>>>> # cinder list
>>>>>> +--------------------------------------+--------+--------------+------+-------------+----------+-------------+
>>>>>> |                  ID                  | Status | Display Name | Size | Volume Type | Bootable | Attached to |
>>>>>> +--------------------------------------+--------+--------------+------+-------------+----------+-------------+
>>>>>> | d374b571-5df1-47f3-ae6e-c3218aebb9db | error  |     test     |  1   |     None    |  false   |             |
>>>>>> +--------------------------------------+--------+--------------+------+-------------+----------+-------------+
>>>>>>
>>>>>> Then I checked /var/log/cinder/cinder-scheduler.log and found the
>>>>>> following message:
>>>>>>
>>>>>> 2015-04-10 07:36:14.547 2525 WARNING cinder.context [-] Arguments
>>>>>> dropped when creating context: {'user':
>>>>>> u'ad2be0cd57c44177b1ba9b0735ea0f44',
>>>>>> 'tenant': u'90aab64e7e2b45f3891b0f08978e063e', 'user_identity':
>>>>>> u'ad2be0cd57c44177b1ba9b0735ea0f44
>>>>>> 90aab64e7e2b45f3891b0f08978e063e
>>>>>> -
>>>>>> - -'}
>>>>>> 2015-04-10 07:36:14.629 2525 ERROR
>>>>>> cinder.scheduler.flows.create_volume
>>>>>> [req-ba5d16aa-5092-4f5f-a4c6-d9481df83bfd
>>>>>> ad2be0cd57c44177b1ba9b0735ea0f44 90aab64e7e2b45f3891b0f08978e063e
>>>>>> -
>>>>>> - -] Failed to schedule_create_volume: No valid host was found.
>>>>>>
>>>>>> I have enabled DEBUG by adding debug = True to
>>>>>> /etc/cinder/cinder.conf and found in the cinder-scheduler.log:
>>>>>> ... No backend provided ...
>>>>>> I installed and configured tgtd according to
>>>>>> http://rconradharris.com/2013/01/14/getting-cinder-up-and-running.html
>>>>>>
>>>>>> But the error still remains.
>>>>>>
>>>>>> Does somebody have an idea of how to solve this problem?
>>>>>>
>>>>>> Thanks,
>>>>>> Lluis
>>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> Users mailing list
>>>>>> us...@lists.opencontrail.org
>>>>>> http://lists.opencontrail.org/mailman/listinfo/users_lists.opencontrail.org
>>
>

_______________________________________________
Dev mailing list
Dev@lists.opencontrail.org
http://lists.opencontrail.org/mailman/listinfo/dev_lists.opencontrail.org
