Re: [Openstack] Xenial / Mitaka = Instance failed network setup / nova.compute.manager Unauthorized

2016-04-30 Thread Luis Guilherme Russi
Hi Martinx,

 Could you please share your .conf files showing how we can have both Linux
Bridges and OpenvSwitch configurations?

Thank you very much.

On Thu, 21 Apr 2016 at 17:30, Paras pradhan wrote:

> Thanks Martinx
>
> On Thu, Apr 21, 2016 at 2:50 PM, Martinx - ジェームズ <
> thiagocmarti...@gmail.com> wrote:
>
>> I just managed to get OpenStack Mitaka working with both Linux Bridges
>> and OpenvSwitch...
>>
>> Everything works on both "All in One" and multi-node environments.
>>
>> Very soon, I'll post instructions about how to use the Ansible automation
>> that I am developing to play with this...
>>
>> Then, you guys will be able to deploy it on a spare box (or VMs), and
>> then compare with your failing deployments...
>>
>> Cheers!
>>
>> On 21 April 2016 at 16:44, Paras pradhan  wrote:
>>
>>> The neutron section was missing from nova.conf; now the instances
>>> work, but I'm having issues with the metadata server. Instances boot
>>> but have no network access.
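>>>
>>> For reference, the [neutron] block I added to nova.conf looks roughly
>>> like this (values are placeholders for my environment; the last two
>>> options are what the Mitaka install guide suggests for the metadata
>>> proxy):
>>>
>>> [neutron]
>>> url = http://controller:9696
>>> auth_url = http://controller:35357
>>> auth_type = password
>>> project_domain_name = default
>>> user_domain_name = default
>>> region_name = RegionOne
>>> project_name = service
>>> username = neutron
>>> password = NEUTRON_PASS
>>> service_metadata_proxy = true
>>> metadata_proxy_shared_secret = METADATA_SECRET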
>>> Thanks
>>> Paras.
>>>
>>> On Thu, Apr 21, 2016 at 9:58 AM, Martinx - ジェームズ <
>>> thiagocmarti...@gmail.com> wrote:
>>>
 My nova.conf is this one:


 https://github.com/tmartinx/svauto/blob/dev/ansible/roles/os_nova_cmpt/templates/mitaka/nova.conf

 However, I'm facing connectivity problems with OpenvSwitch
 deployments; investigating it now...

 On 21 April 2016 at 11:04, Paras pradhan 
 wrote:

> Yes, I did. Here is my nova.conf. Can you share yours?
> http://paste.openstack.org/show/494988/
>
> On Thu, Apr 21, 2016 at 7:36 AM, Eugen Block  wrote:
>
>> Okay, did you restart the respective services on all nodes after the
>> changes in your config files? If the same error still occurs, then you
>> might not have found all occurrences of the option auth_plugin; did you
>> replace it in all config files? I'm just guessing here...
>>
>>
>>
>> Quoting Paras pradhan:
>>
>>> No, I still have that error. Other than that I don't see any other
>>> errors.
>>>
>>> On Wed, Apr 20, 2016 at 9:22 AM, Eugen Block  wrote:
>>>
>>> So I guess the mentioned error should be resolved? Does it work now?



 Quoting Paras pradhan:

 Yes, I have it set up in nova.conf and neutron.conf.

>
> On Wed, Apr 20, 2016 at 9:11 AM, Eugen Block 
> wrote:
>
>> And did you change it to auth_type instead of auth_plugin? Also,
>> you should make sure that this option is in the correct section of
>> your config file, for example:
>>
>> [keystone_authtoken]
>> ...
>> auth_type = password
>>
>> or
>>
>> [neutron]
>> ...
>> auth_type = password
>>
>>
>> Regards,
>> Eugen
>>
>>
>> Quoting Paras pradhan:
>>
>> Hi Eugen,
>>
>>
>>> Thanks. The log says it's an error. Here is the full log.
>>> http://pastebin.com/K1f4pJhB
>>>
>>> -Paras.
>>>
>>> On Tue, Apr 19, 2016 at 2:05 AM, Eugen Block 
>>> wrote:
>>>
>>> Hi Paras,
>>>
>>>
 The option auth_plugin is deprecated (from nova.conf):

 ---cut here---
 # Authentication type to load (unknown value)
 # Deprecated group/name - [DEFAULT]/auth_plugin
 auth_type = password
 ---cut here---

 But as far as I can tell, you should only get a warning, not an
 error; I've seen some of these warnings in my logs, but it works (I
 work with openSUSE). To get Mitaka working at all I simply tried to
 set the same options as in my working Liberty configs, and then I
 searched for deprecation warnings and additional options mentioned
 in the Mitaka guide.
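
 For example, something like this (paths are from my setup and may
 differ per distro) helps spot the leftovers:

 # find deprecated option names still present in the config files
 grep -rn auth_plugin /etc/nova /etc/neutron
 # find deprecation warnings the services have logged
 grep -i deprecated /var/log/nova/*.log /var/log/neutron/*.log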

 Hope this helps!

 Regards,
 Eugen


 Zitat von Paras pradhan :


 Can somebody share the nova.conf and neutron.conf from a working
 Mitaka? I am also following the same guide and ran into a problem.

>
> 2016-04-18 16:51:07.982 2447 ERROR
> nova.api.openstack.extensions
> NoSuchOptError: no such option in group neutron: auth_plugin
>
> Not sure what did 

Re: [Openstack] OpenStack Liberty and Ryu

2016-04-01 Thread Luis Guilherme Russi
Hey there, Silvia,

 Have you had any success configuring it?
 I'm trying to add Ryu to OpenStack Liberty, but not with Devstack; mine
is a package-based installation.

Thank you.

On Mon, 25 Jan 2016 at 14:16, Silvia Fichera wrote:

> Hi all,
> I would like to try to use Openstack (installed via Devstack) with Ryu in
> a multinode environment.
> Online I have only found suggestions to use it with previous versions of
> OpenStack. Have you got any updates?
> Any suggestion to build the local.conf?
> In particular I would like to separate the mgmt network from the data
> network. In fact I have 3 nodes (that I'm considering compute nodes)
> connected to each other through an OVS which is installed in a virtual
> machine.
> So, for each node I have 2 interfaces and 2 ip addresses: eth0 with
> 10.30.3.x for the mgmt network and eth1 10.0.0.x connected to the switch
> for the data plane.
>
> Do you know how to manage the local.conf to let this environment work?
> I already tried with OpenDaylight but my attempt was unsuccessful.
>
> Thank you
>
> --
> Silvia Fichera
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Cinder volume deleting issue

2014-05-14 Thread Luis Guilherme Russi
If you detach first and then run the command to delete the volume,
what happens?
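
Something along these lines, I mean (the IDs are placeholders; on Havana
the nova client should still accept volume-detach):

nova volume-detach <instance-id> <volume-id>
cinder delete <volume-id>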


2014-05-14 8:43 GMT-03:00 anand ts anandts...@gmail.com:

 Hi Ageeleshwar,

 I directly deleted the running instance from database.


 On Wed, May 14, 2014 at 5:06 PM, Ageeleshwar Kandavelu 
 ageeleshwar.kandav...@csscorp.com wrote:

  Was the instance terminated or directly deleted from the database?
  --
 From: anand ts [anandts...@gmail.com]
 Sent: Wednesday, May 14, 2014 4:09 PM
 To: openstack@lists.openstack.org
 Subject: [Openstack] Cinder volume deleting issue

   Hi all,

  I have a multinode setup of openstack+havana+rdo on CentOS 6.5.

  Issue: unable to delete a cinder volume.

  When I try to delete through the command line:

  [root@cinder ~(keystone_admin)]# cinder list

 +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
 |                  ID                  | Status | Display Name | Size | Volume Type | Bootable |             Attached to              |
 +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
 | fe0fdad1-2f8a-4cce-a173-797391dbc7ad | in-use |     vol2     |  10  |     None    |   true   | b998107b-e708-42a5-8790-4727fed879a3 |
 +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+

  [root@cinder ~(keystone_admin)]# cinder delete
 fe0fdad1-2f8a-4cce-a173-797391dbc7ad
 Delete for volume fe0fdad1-2f8a-4cce-a173-797391dbc7ad failed: Invalid
 volume: Volume status must be available or error, but current status is:
 in-use (HTTP 400) (Request-ID: req-d9be63f0-476a-4ecd-8655-20491336ee8b)
 ERROR: Unable to delete any of the specified volumes.


  When I try to delete through the dashboard, I get the error shown in
 the screenshot attached to the mail.

  This occurred when an instance with a cinder volume attached was
 deleted from the database without detaching the volume. Now the volume
 is in use and attached to none.


  Please find the cinder logs here:
 http://paste.openstack.org/show/80333/

  Any workaround to this problem?





___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Cinder volume deleting issue

2014-05-14 Thread Luis Guilherme Russi
Hi anand ts, Sergey has sent me a response; here it is:

Hello,
you can reset the volume's state to 'available' and try to delete it again:
cinder reset-state <volume-id>
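
For example, with the volume from your listing (on newer clients the
target state can be passed explicitly; it defaults to available):

cinder reset-state --state available fe0fdad1-2f8a-4cce-a173-797391dbc7ad
cinder delete fe0fdad1-2f8a-4cce-a173-797391dbc7ad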


2014-05-14 9:00 GMT-03:00 Luis Guilherme Russi luisguilherme...@gmail.com:

 If you detach first and then run the command to delete the volume,
 what happens?


 2014-05-14 8:43 GMT-03:00 anand ts anandts...@gmail.com:

 Hi Ageeleshwar,

 I directly deleted the running instance from database.


 On Wed, May 14, 2014 at 5:06 PM, Ageeleshwar Kandavelu 
 ageeleshwar.kandav...@csscorp.com wrote:

  Was the instance terminated or directly deleted from the database?
  --
 From: anand ts [anandts...@gmail.com]
 Sent: Wednesday, May 14, 2014 4:09 PM
 To: openstack@lists.openstack.org
 Subject: [Openstack] Cinder volume deleting issue

   Hi all,

  I have a multinode setup of openstack+havana+rdo on CentOS 6.5.

  Issue: unable to delete a cinder volume.

  When I try to delete through the command line:

  [root@cinder ~(keystone_admin)]# cinder list

 +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
 |                  ID                  | Status | Display Name | Size | Volume Type | Bootable |             Attached to              |
 +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
 | fe0fdad1-2f8a-4cce-a173-797391dbc7ad | in-use |     vol2     |  10  |     None    |   true   | b998107b-e708-42a5-8790-4727fed879a3 |
 +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+

  [root@cinder ~(keystone_admin)]# cinder delete
 fe0fdad1-2f8a-4cce-a173-797391dbc7ad
 Delete for volume fe0fdad1-2f8a-4cce-a173-797391dbc7ad failed: Invalid
 volume: Volume status must be available or error, but current status is:
 in-use (HTTP 400) (Request-ID: req-d9be63f0-476a-4ecd-8655-20491336ee8b)
 ERROR: Unable to delete any of the specified volumes.


  When I try to delete through the dashboard, I get the error shown in
 the screenshot attached to the mail.

  This occurred when an instance with a cinder volume attached was
 deleted from the database without detaching the volume. Now the volume
 is in use and attached to none.


  Please find the cinder logs here ,
 http://paste.openstack.org/show/80333/

  Any workaround to this problem?






___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] How to upgrade Grizzly to Havana

2014-02-10 Thread Luis Guilherme Russi
Hello Rajshree, I had this same doubt. I'm still using the Grizzly
version in my running lab for my tests and studies, so it would be good
to know if anybody has done this upgrade on a running system without any
issues.
I'm on Ubuntu 12.04 Server.

Best regards.


2014-02-10 9:14 GMT-02:00 Rajshree Thorat rajshree.tho...@gslab.com:

 Hi All,

 I want to upgrade my Grizzly setup to Havana. How can I do it?
 Is there any documentation?

 --
 Regards,
 Rajshree



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] How to apply a quantum patch

2014-02-04 Thread Luis Guilherme Russi
I've found another way:

http://dcvan24.wordpress.com/2013/06/25/devstack-critical-nova-module-object-has-no-attribute-packs/comment-page-1/#comment-47

Thank you everyone!


2014-02-03 Luis Guilherme Russi luisguilherme...@gmail.com:

 Hello guys, I'm facing a problem with my quantum-server. I'm running
 OpenStack Grizzly with quantum and I was trying to install the Ryu
 plugin, but my quantum-server is broken now. Reading my logs I've found
 this link, https://bugs.launchpad.net/neutron/+bug/1178512?comments=all,
 with a fix: a version of kombu doesn't work if a newer msgpack is
 installed, so I guess it happened when I tried to install the ryu plugin.

 But my point here is, how do I apply this patch? Should I copy and paste
 this command on my terminal and type enter?
 git fetch https://review.openstack.org/openstack/neutron refs/changes/04/28504/3 \
  && git format-patch -1 --stdout FETCH_HEAD

 But again, I'm running quantum and not neutron; will this same patch work
 with my quantum-server API? Or must I change the /neutron to /quantum in
 the git path?

 I'm new to this patching business; can anybody help me, please?

 Best Regards.

 Guilherme.

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] How to apply a quantum patch

2014-02-03 Thread Luis Guilherme Russi
Hello guys, I'm facing a problem with my quantum-server. I'm running
OpenStack Grizzly with quantum and I was trying to install the Ryu plugin,
but my quantum-server is broken now. Reading my logs I've found this link,
https://bugs.launchpad.net/neutron/+bug/1178512?comments=all, with a fix:
a version of kombu doesn't work if a newer msgpack is installed, so I guess
it happened when I tried to install the ryu plugin.

But my point here is, how do I apply this patch? Should I copy and paste
this command on my terminal and type enter?
git fetch https://review.openstack.org/openstack/neutron refs/changes/04/28504/3 \
 && git format-patch -1 --stdout FETCH_HEAD
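
From what I gather, the full sequence would be something like this (the
checkout directory is a placeholder, and I'd back everything up first):

cd /opt/stack/quantum   # wherever the source tree lives
git fetch https://review.openstack.org/openstack/neutron refs/changes/04/28504/3
git format-patch -1 --stdout FETCH_HEAD > fix.patch
git apply --check fix.patch   # dry run: does it apply cleanly?
git am fix.patch              # or: patch -p1 < fix.patch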

But again, I'm running quantum and not neutron; will this same patch work
with my quantum-server API? Or must I change the /neutron to /quantum in
the git path?

I'm new to this patching business; can anybody help me, please?

Best Regards.

Guilherme.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Multiples storages

2013-11-18 Thread Guilherme Russi
Hello again guys, I'm trying to overcome another issue with cinder. I'm
trying to create a 500 GB volume; this is my disk:

pvscan
  PV /dev/sdb1   VG cinder-volumes-2   lvm2 [931,51 GiB / 531,51 GiB free]
  Total: 1 [931,51 GiB] / in use: 1 [931,51 GiB] / in no VG: 0 [0   ]

But when I try:
cinder create --volume_type lvm_one --display_name v2st3-500 500

I get:
ERROR: VolumeSizeExceedsAvailableQuota: Requested volume or snapshot
exceeds allowed Gigabytes quota

Does anybody know where I should begin to fix it?
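
If it helps, the quota side can be inspected and raised with something
like this (the tenant id and the new limit are placeholders):

cinder quota-show <tenant-id>
cinder quota-update --gigabytes 1000 <tenant-id>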

The outputs on cinder-api.log are:

ERROR [cinder.api.middleware.fault] Caught error: Requested volume or
snapshot exceeds allowed Gigabytes quota
Traceback (most recent call last):
  File /usr/lib/python2.7/dist-packages/cinder/api/middleware/fault.py,
line 73, in __call__
return req.get_response(self.application)
  File /usr/lib/python2.7/dist-packages/webob/request.py, line 1296, in
send
application, catch_exc_info=False)
  File /usr/lib/python2.7/dist-packages/webob/request.py, line 1260, in
call_application
app_iter = application(self.environ, start_response)
  File /usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in
__call__
return resp(environ, start_response)
  File
/usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py,
line 450, in __call__
return self.app(env, start_response)
  File /usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in
__call__
return resp(environ, start_response)
  File /usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in
__call__
return resp(environ, start_response)
  File /usr/lib/python2.7/dist-packages/routes/middleware.py, line 131,
in __call__
response = self.app(environ, start_response)
  File /usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in
__call__
return resp(environ, start_response)
  File /usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in
__call__
resp = self.call_func(req, *args, **self.kwargs)
  File /usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in
call_func
return self.func(req, *args, **kwargs)
  File /usr/lib/python2.7/dist-packages/cinder/api/openstack/wsgi.py,
line 803, in __call__
content_type, body, accept)
  File /usr/lib/python2.7/dist-packages/cinder/api/openstack/wsgi.py,
line 851, in _process_stack
action_result = self.dispatch(meth, request, action_args)
  File /usr/lib/python2.7/dist-packages/cinder/api/openstack/wsgi.py,
line 927, in dispatch
return method(req=request, **action_args)
  File /usr/lib/python2.7/dist-packages/cinder/api/v1/volumes.py, line
358, in create
**kwargs)
  File /usr/lib/python2.7/dist-packages/cinder/volume/api.py, line 165,
in create
raise exception.VolumeSizeExceedsAvailableQuota()
VolumeSizeExceedsAvailableQuota: Requested volume or snapshot exceeds
allowed Gigabytes quota

Thank you all.



2013/11/15 Razique Mahroua razique.mahr...@gmail.com

 Awesome :)

 Razique

 On 14 Nov 2013, at 15:27, Guilherme Russi wrote:

  That's right, I've stopped the open-iscsi and tgt processes, and the
 lvremove worked. Thank you all.
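
 For anyone else hitting this, the sequence that worked for me was
 roughly (service names as on Ubuntu 12.04):

 service tgt stop
 service open-iscsi stop
 lvremove /dev/cinder-volumes/volume-<id>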

 Regards.


 2013/11/13 Razique Mahroua razique.mahr...@gmail.com

  Hey :)
 that means the volume is still in use (LV open: 1). Make sure it's not,
 by checking the processes: qemu-nbd, etc...
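
 e.g. something along these lines:

 lvdisplay /dev/cinder-volumes/<volume-name> | grep open
 ps aux | egrep 'tgtd|qemu|iscsi'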

 On 13 Nov 2013, at 4:50, Guilherme Russi wrote:

 Hello Razique, I'm opening this thread again. I've done some cinder
 deletes, but when I try to create more storage it says there's no
 space to create a new volume.

 Here is part of my lvdisplay output:

 Alloc PE / Size 52224 / 204,00 GiB
 Free PE / Size 19350 / 75,59 GiB

 And here is my lvdisplay:

 --- Logical volume ---
 LV Name
 /dev/cinder-volumes/volume-06ccd141-91c4-45e4-b21f-595f4a36779b
 VG Name cinder-volumes
 LV UUID wdqxVd-GgUQ-21O4-OWlR-sRT3-HvUA-Q8j9kL
 LV Write Access read/write
 LV snapshot status source of

 /dev/cinder-volumes/_snapshot-04e8414e-2c0e-4fc2-8bff-43dd80ecca09
 [active]
 LV Status available
 open 0

 LV Size 10,00 GiB
 Current LE 2560
 Segments 1
 Allocation inherit
 Read ahead sectors auto
 - currently set to 256
 Block device 252:1

 --- Logical volume ---
 LV Name
 /dev/cinder-volumes/_snapshot-04e8414e-2c0e-4fc2-8bff-43dd80ecca09
 VG Name cinder-volumes
 LV UUID EZz1lC-a8H2-1PlN-pJTN-XAIm-wW0q-qtUQOc
 LV Write Access read/write
 LV snapshot status active destination for
 /dev/cinder-volumes/volume-06ccd141-91c4-45e4-b21f-595f4a36779b
 LV Status available
 open 0

 LV Size 10,00 GiB
 Current LE 2560
 COW-table size 10,00 GiB
 COW-table LE 2560
 Allocated to snapshot 0,00%
 Snapshot chunk size 4,00 KiB
 Segments 1
 Allocation inherit
 Read ahead sectors auto
 - currently set to 256
 Block device 252:3

 --- Logical volume ---
 LV Name
 /dev/cinder-volumes/volume-ca36920e-938e-4ad1-b9c4-74c1e28abd31
 VG Name cinder-volumes
 LV UUID b40kQV-P8N4-R6jt-k97Z-I2a1-9TXm-5GXqfz
 LV Write Access read/write
 LV Status available
 open 1

 LV Size 60,00 GiB
 Current LE 15360
 Segments 1

Re: [Openstack] Multiples storages

2013-11-13 Thread Guilherme Russi
Hello John, my cinder list is empty; I've done cinder delete <storage-id>,
but when I do lvdisplay they're still there. I've already tried dmsetup
remove, but no success either. And the storages are not in use by any VM.

Regards.


2013/11/13 John Griffith john.griff...@solidfire.com

 On Wed, Nov 13, 2013 at 5:50 AM, Guilherme Russi
 luisguilherme...@gmail.com wrote:
  Hello Razique, I'm opening this thread again. I've done some cinder
  deletes, but when I try to create more storage it says there's no
  space to create a new volume.
 
  Here is part of my lvdisplay output:
 
  Alloc PE / Size   52224 / 204,00 GiB
  Free  PE / Size   19350 / 75,59 GiB
 
  And here is my lvdisplay:
 
 
--- Logical volume ---
LV Name
  /dev/cinder-volumes/volume-06ccd141-91c4-45e4-b21f-595f4a36779b
VG Namecinder-volumes
LV UUIDwdqxVd-GgUQ-21O4-OWlR-sRT3-HvUA-Q8j9kL
LV Write Accessread/write
LV snapshot status source of
 
  /dev/cinder-volumes/_snapshot-04e8414e-2c0e-4fc2-8bff-43dd80ecca09
 [active]
LV Status  available
# open 0
LV Size10,00 GiB
Current LE 2560
Segments   1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device   252:1
 
--- Logical volume ---
LV Name
  /dev/cinder-volumes/_snapshot-04e8414e-2c0e-4fc2-8bff-43dd80ecca09
VG Namecinder-volumes
LV UUIDEZz1lC-a8H2-1PlN-pJTN-XAIm-wW0q-qtUQOc
LV Write Accessread/write
LV snapshot status active destination for
  /dev/cinder-volumes/volume-06ccd141-91c4-45e4-b21f-595f4a36779b
LV Status  available
# open 0
LV Size10,00 GiB
Current LE 2560
COW-table size 10,00 GiB
COW-table LE   2560
Allocated to snapshot  0,00%
Snapshot chunk size4,00 KiB
Segments   1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device   252:3
 
--- Logical volume ---
LV Name
  /dev/cinder-volumes/volume-ca36920e-938e-4ad1-b9c4-74c1e28abd31
VG Namecinder-volumes
LV UUIDb40kQV-P8N4-R6jt-k97Z-I2a1-9TXm-5GXqfz
LV Write Accessread/write
LV Status  available
# open 1
LV Size60,00 GiB
Current LE 15360
Segments   1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device   252:4
 
--- Logical volume ---
LV Name
  /dev/cinder-volumes/volume-70be4f36-10bd-4877-b841-80333ccfe985
VG Namecinder-volumes
LV UUID2YDrMs-BrYo-aQcZ-8AlX-A4La-HET1-9UQ0gV
LV Write Accessread/write
LV Status  available
# open 1
LV Size1,00 GiB
Current LE 256
Segments   1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device   252:5
 
--- Logical volume ---
LV Name
  /dev/cinder-volumes/volume-00c532bd-91fb-4a38-b340-4389fb7f0ed5
VG Namecinder-volumes
LV UUIDMfVOuB-5x5A-jne3-H4Ul-4NP8-eI7b-UYSYE7
LV Write Accessread/write
LV Status  available
# open 0
LV Size1,00 GiB
Current LE 256
Segments   1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device   252:6
 
--- Logical volume ---
LV Name
  /dev/cinder-volumes/volume-ae133dbc-6141-48cf-beeb-9d6576e57a45
VG Namecinder-volumes
LV UUID53w8j3-WT4V-8m52-r6LK-ZYd3-mMHA-FtuyXV
LV Write Accessread/write
LV Status  available
# open 0
LV Size1,00 GiB
Current LE 256
Segments   1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device   252:7
 
--- Logical volume ---
LV Name
  /dev/cinder-volumes/volume-954d2f1b-837b-4ba5-abfd-b3610597be5e
VG Namecinder-volumes
LV UUIDbelquE-WxQ2-gt6Y-WlPE-Hmq3-B9Am-zcYD3P
LV Write Accessread/write
LV Status  available
# open 0
LV Size60,00 GiB
Current LE 15360
Segments   1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device   252:8
 
--- Logical volume ---
LV Name
  /dev/cinder-volumes/volume-05d037d1

Re: [Openstack] Cinder volume attaching problem

2013-11-08 Thread Guilherme Russi
Found how to fix it, in case anyone else needs it like me:

https://ask.openstack.org/en/question/130/why-do-i-get-no-portal-found-error-while-attaching-cinder-volume-to-vm/


2013/11/5 Guilherme Russi luisguilherme...@gmail.com

 Hello guys,

  Last Saturday I needed to turn off my controller node to perform
 electrical maintenance at my lab. After turning the controller on again
 I could fix the network problems, but I can't fix the cinder one. I'm
 using Grizzly on Ubuntu Server 12.04, and when I try to attach my
 storage I get this error:

 cinder-api.log:

 2013-11-05 11:23:49 INFO [cinder.api.middleware.fault]
 http://192.168.3.1:8776/v1/d13839320f5d4194a4a3fe3b723d6144/volumes/954d2f1b-837b-4ba5-abfd-b3610597be5e/action
 returned with HTTP 500
 2013-11-05 11:23:53 INFO [cinder.api.openstack.wsgi] POST
 http://192.168.3.1:8776/v1/d13839320f5d4194a4a3fe3b723d6144/volumes/954d2f1b-837b-4ba5-abfd-b3610597be5e/action
 2013-11-05 11:24:29 INFO [cinder.api.openstack.wsgi] GET
 http://192.168.3.1:8776/v1/d13839320f5d4194a4a3fe3b723d6144/volumes/detail
 2013-11-05 11:24:29 AUDIT [cinder.api.v1.volumes]
 vol=<cinder.db.sqlalchemy.models.Volume object at 0x38a2250>
 2013-11-05 11:24:29 AUDIT [cinder.api.v1.volumes]
 vol=<cinder.db.sqlalchemy.models.Volume object at 0x373f5d0>
 2013-11-05 11:24:29 AUDIT [cinder.api.v1.volumes]
 vol=<cinder.db.sqlalchemy.models.Volume object at 0x37a9890>
 2013-11-05 11:24:29 INFO [cinder.api.openstack.wsgi]
 http://192.168.3.1:8776/v1/d13839320f5d4194a4a3fe3b723d6144/volumes/detail
 returned with HTTP 200
 2013-11-05 11:24:53 ERROR [cinder.api.middleware.fault] Caught error:
 Timeout while waiting on RPC response.
 Traceback (most recent call last):
   File /usr/lib/python2.7/dist-packages/cinder/api/middleware/fault.py,
 line 73, in __call__
 return req.get_response(self.application)
   File /usr/lib/python2.7/dist-packages/webob/request.py, line 1296, in
 send
 application, catch_exc_info=False)
   File /usr/lib/python2.7/dist-packages/webob/request.py, line 1260, in
 call_application
 app_iter = application(self.environ, start_response)
   File /usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in
 __call__
 return resp(environ, start_response)
   File
 /usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py,
 line 450, in __call__
 return self.app(env, start_response)
   File /usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in
 __call__
 return resp(environ, start_response)
   File /usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in
 __call__
 return resp(environ, start_response)
   File /usr/lib/python2.7/dist-packages/routes/middleware.py, line 131,
 in __call__
 response = self.app(environ, start_response)
   File /usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in
 __call__
 return resp(environ, start_response)
   File /usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in
 __call__
 resp = self.call_func(req, *args, **self.kwargs)
   File /usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in
 call_func
 return self.func(req, *args, **kwargs)
   File /usr/lib/python2.7/dist-packages/cinder/api/openstack/wsgi.py,
 line 803, in __call__
 content_type, body, accept)
   File /usr/lib/python2.7/dist-packages/cinder/api/openstack/wsgi.py,
 line 851, in _process_stack
 action_result = self.dispatch(meth, request, action_args)
   File /usr/lib/python2.7/dist-packages/cinder/api/openstack/wsgi.py,
 line 927, in dispatch
 return method(req=request, **action_args)
   File
 /usr/lib/python2.7/dist-packages/cinder/api/contrib/volume_actions.py,
 line 137, in _initialize_connection
 connector)
   File /usr/lib/python2.7/dist-packages/cinder/volume/api.py, line 63,
 in wrapped
 return func(self, context, target_obj, *args, **kwargs)
   File /usr/lib/python2.7/dist-packages/cinder/volume/api.py, line 493,
 in initialize_connection
 connector)
   File /usr/lib/python2.7/dist-packages/cinder/volume/rpcapi.py, line
 117, in initialize_connection
 volume['host']))
   File
 /usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/proxy.py,
 line 80, in call
 return rpc.call(context, self._get_topic(topic), msg, timeout)
   File
 /usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/__init__.py,
 line 140, in call
 return _get_impl().call(CONF, context, topic, msg, timeout)
   File
 /usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/impl_kombu.py,
 line 798, in call
 rpc_amqp.get_connection_pool(conf, Connection))
   File
 /usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/amqp.py,
 line 613, in call
 rv = list(rv)
   File
 /usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/amqp.py,
 line 555, in __iter__
 self.done()
   File /usr/lib/python2.7/contextlib.py, line 24, in __exit__
 self.gen.next()
   File
 /usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc

Re: [Openstack] Multiples storages

2013-11-08 Thread Guilherme Russi
It is a hard disk; my scenario is one Controller (where I have my cinder
storage and my quantum network) and four compute nodes.


2013/11/8 Razique Mahroua razique.mahr...@gmail.com

 ok !
 what is your actual Cinder backend? Is it a hard disk, a SAN, a network
 volume, etc…

 On 08 Nov 2013, at 05:20, Guilherme Russi luisguilherme...@gmail.com
 wrote:

 Hi Razique, thank you for answering. I want to expand my cinder storage;
 is that the block storage? I'll use the storage to allow VMs to have more
 hard disk space.

 Regards.

 Guilherme.



 2013/11/8 Razique Mahroua razique.mahr...@gmail.com

 Hi Guilherme !
 Which storage do you precisely want to expand?

 Regards,
 Razique


 On 08 Nov 2013, at 04:52, Guilherme Russi luisguilherme...@gmail.com
 wrote:

  Hello guys, I have a Grizzly deployment running fine with 5 nodes, and
 I want to add more storage to it. My question is, can I install a new HD
 on another computer that's not the controller and link this HD with my
 cinder so that it can be a storage too?
  The computer where I will install my new HD is on the same network as
 my cloud. I'm asking because I haven't seen a question like that here.
 Does anybody know how to do that? Have a clue? Any help is welcome.
 
  Thank you all.
 
  Best regards.




___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Multiples storages

2013-11-08 Thread Guilherme Russi
Oh great! I'll try here and send you the results.

Many thanks :)


2013/11/8 Razique Mahroua razique.mahr...@gmail.com

 If I'm not mistaken, you only need to install the "cinder-volume" service,
 which will update its status to your main node
 :)
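
 On Ubuntu that should be roughly (package name as in the Grizzly cloud
 archive):

 apt-get install cinder-volume
 # then point its cinder.conf at the controller's MySQL and RabbitMQ
 service cinder-volume restart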

 On 08 Nov 2013, at 05:34, Guilherme Russi luisguilherme...@gmail.com
 wrote:

 Great! I was reading the link and I have one question, do I need to
 install cinder at the other computer too?

 Thanks :)


 2013/11/8 Razique Mahroua razique.mahr...@gmail.com

 Ok in that case, with Grizzly you can use the “multi-backends” feature:
 https://wiki.openstack.org/wiki/Cinder-multi-backend

 and that should do it :)

 On 08 Nov 2013, at 05:29, Guilherme Russi luisguilherme...@gmail.com
 wrote:

 It is a hard disk; my scenario is one Controller (where I have my cinder
 storage and my quantum network) and four compute nodes.


 2013/11/8 Razique Mahroua razique.mahr...@gmail.com

 ok !
 what is your actual Cinder backend? Is it a hard disk, a SAN, a network
 volume, etc…

 On 08 Nov 2013, at 05:20, Guilherme Russi luisguilherme...@gmail.com
 wrote:

 Hi Razique, thank you for answering. I want to expand my cinder storage;
 is that the block storage? I'll use the storage to allow VMs to have more
 hard disk space.

 Regards.

 Guilherme.



 2013/11/8 Razique Mahroua razique.mahr...@gmail.com

 Hi Guilherme !
 Which storage do you precisely want to expand?

 Regards,
 Razique


 On 08 Nov 2013, at 04:52, Guilherme Russi luisguilherme...@gmail.com
 wrote:

  Hello guys, I have a Grizzly deployment running fine with 5 nodes,
 and I want to add more storage to it. My question is, can I install a new
 HD on another computer that's not the controller and link this HD with my
 cinder so that it can be a storage too?
  The computer where I will install my new HD is on the same network as
 my cloud. I'm asking because I haven't seen a question like that here.
 Does anybody know how to do that? Have a clue? Any help is welcome.
 
  Thank you all.
 
  Best regards.








___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Multiples storages

2013-11-08 Thread Guilherme Russi
Many thanks again.

Best regards.


2013/11/8 Razique Mahroua razique.mahr...@gmail.com

 Oh yeah, true!
 Not sure "conductors" exist yet for Cinder, meaning that meanwhile every
 node needs direct access to the database.
 Glad to hear it's working :)

 On 08 Nov 2013, at 08:53, Guilherme Russi luisguilherme...@gmail.com
 wrote:

 Hello again Razique, I've found the problem: I needed to add the grants
 in MySQL for my other IP. Now it's working really well :D
 I've found this link too, if anyone needs it:
 http://docs.openstack.org/admin-guide-cloud/content//managing-volumes.html
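
 For the record, the grants I mean are something like this (the IP and
 the password are placeholders for the second node):

 mysql -u root -p
 mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'192.168.3.2' IDENTIFIED BY 'password';
 mysql> FLUSH PRIVILEGES;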

 Thank you so much, and if you need me just let me know.

 Best regards.

 Guilherme.



 2013/11/8 Guilherme Russi luisguilherme...@gmail.com

 Hello Razique, I have a couple of doubts: do you know if I need to do
 something else that is not in the link you sent me? I'm asking because I
 followed the configuration but it's not working; here is what I get: I've
 installed cinder-volume on the second computer that has the HD, and
 I've changed its cinder.conf. I've also changed the master's cinder.conf
 as follows:


 [DEFAULT]
 rootwrap_config = /etc/cinder/rootwrap.conf
 sql_connection = mysql://cinder:password@localhost/cinder
 api_paste_confg = /etc/cinder/api-paste.ini
 #iscsi_helper=iscsiadm
 #iscsi_helper = ietadm
 iscsi_helper = tgtadm
 volume_name_template = volume-%s
 #volume_group = cinder-volumes
 verbose = True
 auth_strategy = keystone
 iscsi_ip_address = 192.168.3.1
 scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler

 # Rabbit authorization
 rabbit_host = localhost
 rabbit_port = 5672
 rabbit_hosts = $rabbit_host:$rabbit_port
 rabbit_use_ssl = false
 rabbit_userid = guest
 rabbit_password = password
 #rabbit_virtual_host = /nova

 state_path = /var/lib/cinder
 lock_path = /var/lock/cinder
 volumes_dir = /var/lib/cinder/volumes
 #rpc_backend = cinder.rpc.impl_kombu

 enabled_backends=orion-1,orion-4
 [orion-1]
 volume_group=cinder-volumes
 volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
 volume_backend_name=LVM_iSCSI
 [orion-4]
 volume_group=cinder-volumes-2
 volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
 volume_backend_name=LVM_iSCSI

 The cinder.conf on the second computer is like this, but with the IPs
 changed to the controller IP (it has the cinder-api), and when I run
 service cinder-volume restart on the second computer its status is
 stop/waiting.

 Any ideas?
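
 One thing I still plan to check, from the multi-backend wiki you linked:
 creating the volume types that map to the backend names, roughly:

 cinder type-create lvm_one
 cinder type-key lvm_one set volume_backend_name=LVM_iSCSI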

 Thanks :)


 2013/11/8 Razique Mahroua razique.mahr...@gmail.com

 sure :)

 On 08 Nov 2013, at 05:39, Guilherme Russi luisguilherme...@gmail.com
 wrote:

 Oh great! I'll try here and send you the results.

 Many thanks :)


 2013/11/8 Razique Mahroua razique.mahr...@gmail.com

 If I’m not mistaken, you only need to install the “cinder-volume’
 service that will update its status to your main node
 :)

 On 08 Nov 2013, at 05:34, Guilherme Russi luisguilherme...@gmail.com
 wrote:

 Great! I was reading the link and I have one question, do I need to
 install cinder at the other computer too?

 Thanks :)


 2013/11/8 Razique Mahroua razique.mahr...@gmail.com

 Ok in that case, with Grizzly you can use the “multi-backends” feature:
 https://wiki.openstack.org/wiki/Cinder-multi-backend

 and that should do it :)

 On 08 Nov 2013, at 05:29, Guilherme Russi luisguilherme...@gmail.com
 wrote:

 It is a hard disk, my scenario is one Controller (where I have my
 storage cinder and my network quantum) and four compute nodes.


 2013/11/8 Razique Mahroua razique.mahr...@gmail.com

 ok !
 what is your actual Cinder backend? Is it a hard disk, a SAN, a
 network volume, etc…

 On 08 Nov 2013, at 05:20, Guilherme Russi luisguilherme...@gmail.com
 wrote:

 Hi Razique, thank you for answering, I want to expand my cinder
 storage, is it the block storage? I'll use the storage to allow VMs to 
 have
 more hard disk space.

 Regards.

 Guilherme.



 2013/11/8 Razique Mahroua razique.mahr...@gmail.com

 Hi Guilherme !
 Which storage do you precisely want to expand?

 Regards,
 Razique


 On 08 Nov 2013, at 04:52, Guilherme Russi 
 luisguilherme...@gmail.com wrote:

  Hello guys, I have a Grizzly deployment running fine with 5 nodes,
 and I want to add more storage on it. My question is, can I install a 
 new
 HD on another computer thats not the controller and link this HD with my
 cinder that it can be a storage too?
  The computer I will install my new HD is at the same network as my
 cloud is. I'm asking because I haven't seen a question like that here. 
 Does
 anybody knows how to do that? Have a clue? Any help is welcome.
 
  Thank you all.
 
  Best regards.
  ___
  Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
  Post to : openstack@lists.openstack.org
  Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack













___
Mailing list: http://lists.openstack.org/cgi-bin/mailman

[Openstack] Cinder volume attaching problem

2013-11-05 Thread Guilherme Russi
Hello guys,

 Last Saturday I needed to turn off my controller node to perform
electrical maintenance at my lab. After turning the controller on again
I could fix the network problems, but I can't fix the cinder one. I'm
using Grizzly on Ubuntu Server 12.04, and when I try to attach my
storage I get this error:

cinder-api.log:

2013-11-05 11:23:49 INFO [cinder.api.middleware.fault]
http://192.168.3.1:8776/v1/d13839320f5d4194a4a3fe3b723d6144/volumes/954d2f1b-837b-4ba5-abfd-b3610597be5e/action
returned with HTTP 500
2013-11-05 11:23:53 INFO [cinder.api.openstack.wsgi] POST
http://192.168.3.1:8776/v1/d13839320f5d4194a4a3fe3b723d6144/volumes/954d2f1b-837b-4ba5-abfd-b3610597be5e/action
2013-11-05 11:24:29 INFO [cinder.api.openstack.wsgi] GET
http://192.168.3.1:8776/v1/d13839320f5d4194a4a3fe3b723d6144/volumes/detail
2013-11-05 11:24:29 AUDIT [cinder.api.v1.volumes]
vol=<cinder.db.sqlalchemy.models.Volume object at 0x38a2250>
2013-11-05 11:24:29 AUDIT [cinder.api.v1.volumes]
vol=<cinder.db.sqlalchemy.models.Volume object at 0x373f5d0>
2013-11-05 11:24:29 AUDIT [cinder.api.v1.volumes]
vol=<cinder.db.sqlalchemy.models.Volume object at 0x37a9890>
2013-11-05 11:24:29 INFO [cinder.api.openstack.wsgi]
http://192.168.3.1:8776/v1/d13839320f5d4194a4a3fe3b723d6144/volumes/detail
returned with HTTP 200
2013-11-05 11:24:53 ERROR [cinder.api.middleware.fault] Caught error:
Timeout while waiting on RPC response.
Traceback (most recent call last):
  File /usr/lib/python2.7/dist-packages/cinder/api/middleware/fault.py,
line 73, in __call__
return req.get_response(self.application)
  File /usr/lib/python2.7/dist-packages/webob/request.py, line 1296, in
send
application, catch_exc_info=False)
  File /usr/lib/python2.7/dist-packages/webob/request.py, line 1260, in
call_application
app_iter = application(self.environ, start_response)
  File /usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in
__call__
return resp(environ, start_response)
  File
/usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py,
line 450, in __call__
return self.app(env, start_response)
  File /usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in
__call__
return resp(environ, start_response)
  File /usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in
__call__
return resp(environ, start_response)
  File /usr/lib/python2.7/dist-packages/routes/middleware.py, line 131,
in __call__
response = self.app(environ, start_response)
  File /usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in
__call__
return resp(environ, start_response)
  File /usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in
__call__
resp = self.call_func(req, *args, **self.kwargs)
  File /usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in
call_func
return self.func(req, *args, **kwargs)
  File /usr/lib/python2.7/dist-packages/cinder/api/openstack/wsgi.py,
line 803, in __call__
content_type, body, accept)
  File /usr/lib/python2.7/dist-packages/cinder/api/openstack/wsgi.py,
line 851, in _process_stack
action_result = self.dispatch(meth, request, action_args)
  File /usr/lib/python2.7/dist-packages/cinder/api/openstack/wsgi.py,
line 927, in dispatch
return method(req=request, **action_args)
  File
/usr/lib/python2.7/dist-packages/cinder/api/contrib/volume_actions.py,
line 137, in _initialize_connection
connector)
  File /usr/lib/python2.7/dist-packages/cinder/volume/api.py, line 63, in
wrapped
return func(self, context, target_obj, *args, **kwargs)
  File /usr/lib/python2.7/dist-packages/cinder/volume/api.py, line 493,
in initialize_connection
connector)
  File /usr/lib/python2.7/dist-packages/cinder/volume/rpcapi.py, line
117, in initialize_connection
volume['host']))
  File
/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/proxy.py,
line 80, in call
return rpc.call(context, self._get_topic(topic), msg, timeout)
  File
/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/__init__.py,
line 140, in call
return _get_impl().call(CONF, context, topic, msg, timeout)
  File
/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/impl_kombu.py,
line 798, in call
rpc_amqp.get_connection_pool(conf, Connection))
  File
/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/amqp.py,
line 613, in call
rv = list(rv)
  File
/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/amqp.py,
line 555, in __iter__
self.done()
  File /usr/lib/python2.7/contextlib.py, line 24, in __exit__
self.gen.next()
  File
/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/amqp.py,
line 552, in __iter__
self._iterator.next()
  File
/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/impl_kombu.py,
line 648, in iterconsume
yield self.ensure(_error_callback, _consume)
  File
/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/impl_kombu.py,
line 566, in ensure

Re: [Openstack] OpenStack 2013.1.4 released

2013-10-17 Thread Guilherme Russi
Hello Adam, if I update my currently running Grizzly version, might I
have problems?

Best regards.

Guilherme.


2013/10/17 Adam Gandelman ad...@canonical.com

 Hello everyone,

 The OpenStack Stable Maintenance team is happy to announce the release
 of the 2013.1.4 stable Grizzly release.  We have been busy reviewing and
 accepting backported bugfixes to the stable/grizzly branches according
 to the criteria set at:

 https://wiki.openstack.org/wiki/StableBranch

 A total of 68 bugs have been fixed across all projects. These
 updates to Grizzly are intended to be relatively risk free with no
 intentional regressions or API changes. The list of bugs, tarballs and
 other milestone information for each project may be found on Launchpad:

 https://launchpad.net/cinder/grizzly/2013.1.4
 https://launchpad.net/glance/grizzly/2013.1.4
 https://launchpad.net/horizon/grizzly/2013.1.4
 https://launchpad.net/keystone/grizzly/2013.1.4
 https://launchpad.net/nova/grizzly/2013.1.4
 https://launchpad.net/neutron/grizzly/2013.1.4

 Release notes may be found on the wiki:

 https://wiki.openstack.org/wiki/ReleaseNotes/2013.1.4

 The freeze on the stable/grizzly branches will be lifted today as we
 begin working toward the 2013.1.5 release, to be released on a TBD
 date and managed by Alan Pevec.

 Thanks,
 Adam



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Connection between VMS

2013-10-09 Thread Guilherme Russi
Hello Rick,

Here is the command:

ubuntu@small-vm02:~$ ssh ubuntu@small-vm03
ssh: Could not resolve hostname small-vm03: Name or service not known

My point is, I have my cloud-vm01 with IP 10.5.5.3 and I want to ssh to my
cloud-vm02 with IP 10.5.5.4, but I can't simply do ssh ubuntu@10.5.5.4
because the IP 10.5.5.4 could be attached to my cloud-vm03, for example. So
I want to know if there's a way to ssh using ssh ubuntu@cloud-vm02.
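
As a stopgap I suppose each VM could carry the current addresses in its
hosts file, something like (these are just my example IPs):

# /etc/hosts on every VM
10.5.5.3  cloud-vm01
10.5.5.4  cloud-vm02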


Regards.


2013/10/9 Rick Jones rick.jon...@hp.com

 On 10/09/2013 05:32 PM, Guilherme Russi wrote:

 Hello guys,

   I have some VMs and I'd like to connect to them by name; for
 example, my VMs are named cloud-vm01 to cloud-vmn, but I can't ssh from
 cloud-vm01 to cloud-vm02 doing ssh user@cloud-vm02.
   How can I work around it?


 When you say "can't ssh", can you be a bit more explicit? What sort of
 error message do you get when you try to ssh?  The answer to that will
 probably guide responses.

 rick jones

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Cinder error

2013-09-24 Thread Guilherme Russi
Hello guys, I've fixed the rabbit error, but I still can't associate a
volume with my VMs; when I type cinder list I get "Malformed request url".
I've already checked my keystone and it looks fine to me. Any ideas?
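
One thing I'm going to double-check is the cinder endpoint in the keystone
catalog; if I understand the error, the URL there should carry the tenant
id substitution, something like:

keystone endpoint-list | grep 8776
# expected publicurl form: http://192.168.3.1:8776/v1/%(tenant_id)s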

Regards.


2013/9/23 Guilherme Russi luisguilherme...@gmail.com

 That's what I've got too:

  root@hemera:/home/hemera# rabbitmqctl list_permissions
 Listing permissions in vhost / ...
 guest .* .* .*
 ...done.

 Regards.



 2013/9/23 Marcelo Dieder marcelodie...@gmail.com

  Hi Guilherme,

 RabbitMQ has virtual hosts for separating applications. By default,
 Rabbit creates a default virtual host named /.

 You can see this with the command:

 root@controller:~# rabbitmqctl list_permissions
 Listing permissions in vhost / ...
 guest .* .* .*
 ...done.
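
 If you ever need a separate vhost (e.g. /nova), it can be created and
 granted like this:

 rabbitmqctl add_vhost /nova
 rabbitmqctl set_permissions -p /nova guest ".*" ".*" ".*"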

 Regards,

 Marcelo Dieder


 On 09/23/2013 04:57 PM, Guilherme Russi wrote:

 I guess I've got something:

  2013-09-23 16:52:17 INFO [cinder.openstack.common.rpc.common]
 Connected to AMQP server on localhost:5672

  I've found this page,
 https://ask.openstack.org/en/question/4581/cinder-unable-to-connect-to-rabbitmq/,
 where zipmaster07 answered about rabbit_virtual_host = /nova:
 "I commented out the rabbit_virtual_host, restarted all cinder services
 and I can see a successful connection to AMQP now."
 I did that, and now it's connected. But what is this rabbit_virtual_host?
 What does it do?
 I'll test my volumes now.

  Regards.





 2013/9/23 Guilherme Russi luisguilherme...@gmail.com

  I've looked at the quantum/server.log and nova-scheduler.log and they
 show:

 2013-09-23 16:25:27 INFO [quantum.openstack.common.rpc.common]
 Reconnecting to AMQP server on localhost:5672
 2013-09-23 16:25:27 INFO [quantum.openstack.common.rpc.common]
 Connected to AMQP server on localhost:5672

  2013-09-23 16:24:01.830 5971 INFO nova.openstack.common.rpc.common [-]
 Reconnecting to AMQP server on 127.0.0.1:5672
 2013-09-23 16:24:01.879 5971 INFO nova.openstack.common.rpc.common [-]
 Connected to AMQP server on 127.0.0.1:5672

  But at the cinder-volume.log:

  INFO [cinder.openstack.common.rpc.common] Reconnecting to AMQP server
 on localhost:5672
 2013-09-23 16:46:04ERROR [cinder.openstack.common.rpc.common] AMQP
 server on localhost:5672 is unreachable: Socket closed. Trying again in 30
 seconds.


  I was typing when you sent your answer; here it is:

  rabbitmq-server status
 Status of node rabbit@hemera ...
 [{pid,17266},
   {running_applications,[{rabbit,RabbitMQ,2.7.1},
 {os_mon,CPO  CXC 138 46,2.2.7},
 {sasl,SASL  CXC 138 11,2.1.10},
 {mnesia,MNESIA  CXC 138 12,4.5},
 {stdlib,ERTS  CXC 138 10,1.17.5},
 {kernel,ERTS  CXC 138 10,2.14.5}]},
  {os,{unix,linux}},
  {erlang_version,Erlang R14B04 (erts-5.8.5) [source] [64-bit] [smp:4:4]
 [rq:4] [async-threads:30] [kernel-poll:true]\n},
  {memory,[{total,30926120},
   {processes,14354392},
   {processes_used,14343184},
   {system,16571728},
   {atom,1124441},
   {atom_used,1120343},
   {binary,268176},
   {code,11134417},
   {ets,2037120}]},
  {vm_memory_high_watermark,0.4},
  {vm_memory_limit,3299385344}]
 ...done.


  Yes, I've restarted the rabbitmq-server, but as you can see in the
 logs, quantum and nova are connected.

  Ideas??

  Regards.



 2013/9/23 Marcelo Dieder marcelodie...@gmail.com

  What's the status of your rabbitmq?

 # rabbitmqctl status

 And did you try restarting the rabbitmq?

 Regards,
 Marcelo Dieder


   On 09/23/2013 03:31 PM, Guilherme Russi wrote:

  Yes, it is at the same place

  cat /etc/cinder/cinder.conf
 [DEFAULT]
 rootwrap_config=/etc/cinder/rootwrap.conf
 sql_connection = mysql://cinder:password@localhost/cinder
 api_paste_confg = /etc/cinder/api-paste.ini
 iscsi_helper=ietadm
 #iscsi_helper = tgtadm
 volume_name_template = volume-%s
 volume_group = cinder-volumes
 verbose = True
 auth_strategy = keystone
 iscsi_ip_address=localhost
 rabbit_host = localhost
 rabbit_port = 5672
 rabbit_userid = rabbit
 rabbit_password = password
 rabbit_virtual_host = /nova
 state_path = /var/lib/cinder
 lock_path = /var/lock/cinder
 volumes_dir = /var/lib/cinder/volumes

  Another idea?

  Regards.


 2013/9/23 Gangur, Hrushikesh (HP Converged Cloud - RD - Sunnyvale) 
 hrushikesh.gan...@hp.com

  Ensure that the cinder configuration files have the correct IP of the
 rabbitmq host.



 From: Guilherme Russi [mailto:luisguilherme...@gmail.com]
 Sent: Monday, September 23, 2013 10:53 AM
 To: openstack
 Subject: [Openstack] Cinder error



 Hello guys, I'm reinstalling my OpenStack Grizzly and I'm having a
 problem with my cinder; I'm getting "Error: Unable to retrieve
 volume list". I was looking at the cinder log and I only found this error:
 ERROR [cinder.openstack.common.rpc.common] AMQP server on
 192.168.3.1:5672 is unreachable: Socket closed. Trying again in 30
 seconds.



 I have

Re: [Openstack] Cinder error

2013-09-23 Thread Guilherme Russi
Yes, it is at the same place

cat /etc/cinder/cinder.conf
[DEFAULT]
rootwrap_config=/etc/cinder/rootwrap.conf
sql_connection = mysql://cinder:password@localhost/cinder
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper=ietadm
#iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
iscsi_ip_address=localhost
rabbit_host = localhost
rabbit_port = 5672
rabbit_userid = rabbit
rabbit_password = password
rabbit_virtual_host = /nova
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes

Another idea?

Regards.


2013/9/23 Gangur, Hrushikesh (HP Converged Cloud - RD - Sunnyvale) 
hrushikesh.gan...@hp.com

  Ensure that the cinder configuration files have the correct IP of the
 rabbitmq host.


 From: Guilherme Russi [mailto:luisguilherme...@gmail.com]
 Sent: Monday, September 23, 2013 10:53 AM
 To: openstack
 Subject: [Openstack] Cinder error


 Hello guys, I'm reinstalling my OpenStack Grizzly and I'm having a problem
 with my cinder; I'm getting "Error: Unable to retrieve volume list". I
 was looking at the cinder log and I only found this error: ERROR
 [cinder.openstack.common.rpc.common] AMQP server on 192.168.3.1:5672 is
 unreachable: Socket closed. Trying again in 30 seconds.


 I have a partition created:


 pvdisplay 

   --- Physical volume ---

   PV Name   /dev/sda7

   VG Name   cinder-volumes

   PV Size   279,59 GiB / not usable 1,00 MiB

   Allocatable   yes 

   PE Size   4,00 MiB

   Total PE  71574

   Free PE   66454

   Allocated PE  5120

   PV UUID   KHITxF-uagF-xADc-F8fu-na8t-1OXT-rDFbQ6



 root@hemera:/home/hemera# vgdisplay 

   --- Volume group ---

   VG Name   cinder-volumes

   System ID 

   Format                lvm2

   Metadata Areas1

   Metadata Sequence No  6

   VG Access read/write

   VG Status resizable

   MAX LV0

   Cur LV2

   Open LV   0

   Max PV0

   Cur PV1

   Act PV1

   VG Size   279,59 GiB

   PE Size   4,00 MiB

   Total PE  71574

   Alloc PE / Size   5120 / 20,00 GiB

   Free  PE / Size   66454 / 259,59 GiB

   VG UUID   mhN3uV-n80a-zjeb-uR35-0IPb-BFmo-G2Qehu


 I don't know how to fix this error, any help?


 Thank you all and regards.


 Guilherme.

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Cinder error

2013-09-23 Thread Guilherme Russi
I guess I've got something:

2013-09-23 16:52:17 INFO [cinder.openstack.common.rpc.common] Connected
to AMQP server on localhost:5672

I've found this page,
https://ask.openstack.org/en/question/4581/cinder-unable-to-connect-to-rabbitmq/,
where zipmaster07 answered about rabbit_virtual_host = /nova:
"I commented out the rabbit_virtual_host, restarted all cinder services
and I can see a successful connection to AMQP now."
I did that, and now it's connected. But what is this rabbit_virtual_host?
What does it do?
I'll test my volumes now.

Regards.





2013/9/23 Guilherme Russi luisguilherme...@gmail.com

 I've looked at the quantum/server.log and nova-scheduler.log and they show:

 2013-09-23 16:25:27 INFO [quantum.openstack.common.rpc.common]
 Reconnecting to AMQP server on localhost:5672
 2013-09-23 16:25:27 INFO [quantum.openstack.common.rpc.common]
 Connected to AMQP server on localhost:5672

 2013-09-23 16:24:01.830 5971 INFO nova.openstack.common.rpc.common [-]
 Reconnecting to AMQP server on 127.0.0.1:5672
 2013-09-23 16:24:01.879 5971 INFO nova.openstack.common.rpc.common [-]
 Connected to AMQP server on 127.0.0.1:5672

 But at the cinder-volume.log:

 INFO [cinder.openstack.common.rpc.common] Reconnecting to AMQP server on
 localhost:5672
 2013-09-23 16:46:04ERROR [cinder.openstack.common.rpc.common] AMQP
 server on localhost:5672 is unreachable: Socket closed. Trying again in 30
 seconds.


 I was typing when you sent your answer; here it is:

 rabbitmq-server status
 Status of node rabbit@hemera ...
 [{pid,17266},
  {running_applications,[{rabbit,RabbitMQ,2.7.1},
 {os_mon,CPO  CXC 138 46,2.2.7},
 {sasl,SASL  CXC 138 11,2.1.10},
 {mnesia,MNESIA  CXC 138 12,4.5},
 {stdlib,ERTS  CXC 138 10,1.17.5},
 {kernel,ERTS  CXC 138 10,2.14.5}]},
  {os,{unix,linux}},
  {erlang_version,Erlang R14B04 (erts-5.8.5) [source] [64-bit] [smp:4:4]
 [rq:4] [async-threads:30] [kernel-poll:true]\n},
  {memory,[{total,30926120},
   {processes,14354392},
   {processes_used,14343184},
   {system,16571728},
   {atom,1124441},
   {atom_used,1120343},
   {binary,268176},
   {code,11134417},
   {ets,2037120}]},
  {vm_memory_high_watermark,0.4},
  {vm_memory_limit,3299385344}]
 ...done.


 Yes, I've restarted the rabbitmq-server, but as you can see in the logs,
 quantum and nova are connected.

 Ideas??

 Regards.



 2013/9/23 Marcelo Dieder marcelodie...@gmail.com

  What's the status of your rabbitmq?

 # rabbitmqctl status

 And did you try restarting the rabbitmq?

 Regards,
 Marcelo Dieder


  On 09/23/2013 03:31 PM, Guilherme Russi wrote:

 Yes, it is at the same place

  cat /etc/cinder/cinder.conf
 [DEFAULT]
 rootwrap_config=/etc/cinder/rootwrap.conf
 sql_connection = mysql://cinder:password@localhost/cinder
 api_paste_confg = /etc/cinder/api-paste.ini
 iscsi_helper=ietadm
 #iscsi_helper = tgtadm
 volume_name_template = volume-%s
 volume_group = cinder-volumes
 verbose = True
 auth_strategy = keystone
 iscsi_ip_address=localhost
 rabbit_host = localhost
 rabbit_port = 5672
 rabbit_userid = rabbit
 rabbit_password = password
 rabbit_virtual_host = /nova
 state_path = /var/lib/cinder
 lock_path = /var/lock/cinder
 volumes_dir = /var/lib/cinder/volumes

  Another idea?

  Regards.


 2013/9/23 Gangur, Hrushikesh (HP Converged Cloud - RD - Sunnyvale) 
 hrushikesh.gan...@hp.com

  Ensure that the cinder configuration files have the correct IP of the rabbitmq host.



 From: Guilherme Russi [mailto:luisguilherme...@gmail.com]
 Sent: Monday, September 23, 2013 10:53 AM
 To: openstack
 Subject: [Openstack] Cinder error



 Hello guys, I'm reinstalling my OpenStack Grizzly and I'm having a
 problem with my cinder; I'm getting "Error: Unable to retrieve volume
 list". I was looking at the cinder log and I only found this error: ERROR
 [cinder.openstack.common.rpc.common] AMQP server on 192.168.3.1:5672 is
 unreachable: Socket closed. Trying again in 30 seconds.



 I have a partition created:



 pvdisplay

   --- Physical volume ---

   PV Name   /dev/sda7

   VG Name   cinder-volumes

   PV Size   279,59 GiB / not usable 1,00 MiB

   Allocatable   yes

   PE Size   4,00 MiB

   Total PE  71574

   Free PE   66454

   Allocated PE  5120

   PV UUID   KHITxF-uagF-xADc-F8fu-na8t-1OXT-rDFbQ6



 root@hemera:/home/hemera# vgdisplay

   --- Volume group ---

   VG Name   cinder-volumes

   System ID

   Format                lvm2

   Metadata Areas1

   Metadata Sequence No  6

   VG Access read/write

   VG Status resizable

   MAX LV0

   Cur LV2

   Open LV   0

   Max PV0

   Cur PV1

   Act PV1

   VG Size

Re: [Openstack] Cinder error

2013-09-23 Thread Guilherme Russi
I've looked at the quantum/server.log and nova-scheduler.log and they show:

2013-09-23 16:25:27 INFO [quantum.openstack.common.rpc.common]
Reconnecting to AMQP server on localhost:5672
2013-09-23 16:25:27 INFO [quantum.openstack.common.rpc.common]
Connected to AMQP server on localhost:5672

2013-09-23 16:24:01.830 5971 INFO nova.openstack.common.rpc.common [-]
Reconnecting to AMQP server on 127.0.0.1:5672
2013-09-23 16:24:01.879 5971 INFO nova.openstack.common.rpc.common [-]
Connected to AMQP server on 127.0.0.1:5672

But at the cinder-volume.log:

INFO [cinder.openstack.common.rpc.common] Reconnecting to AMQP server on
localhost:5672
2013-09-23 16:46:04 ERROR [cinder.openstack.common.rpc.common] AMQP
server on localhost:5672 is unreachable: Socket closed. Trying again in 30
seconds.


I was typing when you sent your answer, here it is:

rabbitmq-server status
Status of node rabbit@hemera ...
[{pid,17266},
 {running_applications,[{rabbit,"RabbitMQ","2.7.1"},
                        {os_mon,"CPO  CXC 138 46","2.2.7"},
                        {sasl,"SASL  CXC 138 11","2.1.10"},
                        {mnesia,"MNESIA  CXC 138 12","4.5"},
                        {stdlib,"ERTS  CXC 138 10","1.17.5"},
                        {kernel,"ERTS  CXC 138 10","2.14.5"}]},
 {os,{unix,linux}},
 {erlang_version,"Erlang R14B04 (erts-5.8.5) [source] [64-bit] [smp:4:4]
[rq:4] [async-threads:30] [kernel-poll:true]\n"},
 {memory,[{total,30926120},
  {processes,14354392},
  {processes_used,14343184},
  {system,16571728},
  {atom,1124441},
  {atom_used,1120343},
  {binary,268176},
  {code,11134417},
  {ets,2037120}]},
 {vm_memory_high_watermark,0.4},
 {vm_memory_limit,3299385344}]
...done.


Yes, I've restarted the rabbitmq-server, but as you can see in the logs,
quantum and nova are connected.

Ideas??

Regards.



2013/9/23 Marcelo Dieder marcelodie...@gmail.com

  What's the status of your rabbitmq?

 # rabbitmqctl status

 And did you try restarting the rabbitmq?

 Regards,
 Marcelo Dieder


  On 09/23/2013 03:31 PM, Guilherme Russi wrote:

 Yes, it is at the same place

  cat /etc/cinder/cinder.conf
 [DEFAULT]
 rootwrap_config=/etc/cinder/rootwrap.conf
 sql_connection = mysql://cinder:password@localhost/cinder
 api_paste_confg = /etc/cinder/api-paste.ini
 iscsi_helper=ietadm
 #iscsi_helper = tgtadm
 volume_name_template = volume-%s
 volume_group = cinder-volumes
 verbose = True
 auth_strategy = keystone
 iscsi_ip_address=localhost
 rabbit_host = localhost
 rabbit_port = 5672
 rabbit_userid = rabbit
 rabbit_password = password
 rabbit_virtual_host = /nova
 state_path = /var/lib/cinder
 lock_path = /var/lock/cinder
 volumes_dir = /var/lib/cinder/volumes

  Another idea?

  Regards.


 2013/9/23 Gangur, Hrushikesh (HP Converged Cloud - RD - Sunnyvale) 
 hrushikesh.gan...@hp.com

  Ensure that the cinder configuration files have the correct IP of the rabbitmq host.



 *From:* Guilherme Russi [mailto:luisguilherme...@gmail.com]
 *Sent:* Monday, September 23, 2013 10:53 AM
 *To:* openstack
 *Subject:* [Openstack] Cinder error



 Hello guys, I'm reinstalling my OpenStack Grizzly and I'm having a problem
 with my cinder: I'm getting *Error:* Unable to retrieve volume list. I
 was looking at the cinder log and I only found this error: ERROR
 [cinder.openstack.common.rpc.common] AMQP server on 192.168.3.1:5672 is
 unreachable: Socket closed. Trying again in 30 seconds.



 I have a partition created:



 pvdisplay

   --- Physical volume ---
   PV Name               /dev/sda7
   VG Name               cinder-volumes
   PV Size               279,59 GiB / not usable 1,00 MiB
   Allocatable           yes
   PE Size               4,00 MiB
   Total PE              71574
   Free PE               66454
   Allocated PE          5120
   PV UUID               KHITxF-uagF-xADc-F8fu-na8t-1OXT-rDFbQ6

 root@hemera:/home/hemera# vgdisplay

   --- Volume group ---
   VG Name               cinder-volumes
   System ID
   Format                lvm2
   Metadata Areas        1
   Metadata Sequence No  6
   VG Access             read/write
   VG Status             resizable
   MAX LV                0
   Cur LV                2
   Open LV               0
   Max PV                0
   Cur PV                1
   Act PV                1
   VG Size               279,59 GiB
   PE Size               4,00 MiB
   Total PE              71574
   Alloc PE / Size       5120 / 20,00 GiB
   Free  PE / Size       66454 / 259,59 GiB
   VG UUID               mhN3uV-n80a-zjeb-uR35-0IPb-BFmo-G2Qehu





 I don't know how to fix this error, any help?



 Thank you all and regards.



 Guilherme.




 ___
 Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



 ___
 Mailing list

Re: [Openstack] Cinder error

2013-09-23 Thread Guilherme Russi
That's what I've got too:

 root@hemera:/home/hemera# rabbitmqctl list_permissions
Listing permissions in vhost / ...
guest .* .* .*
...done.

Regards.



2013/9/23 Marcelo Dieder marcelodie...@gmail.com

  Hi Guilherme,

 RabbitMQ supports virtual hosts for separating applications. By default,
 RabbitMQ creates a virtual host named /.

 You can see this with the command:

 root@controller:~# rabbitmqctl list_permissions
 Listing permissions in vhost / ...
 guest   .*   .*   .*
 ...done.

 Regards,

 Marcelo Dieder


 On 09/23/2013 04:57 PM, Guilherme Russi wrote:

 I guess I've got something:

  2013-09-23 16:52:17 INFO [cinder.openstack.common.rpc.common]
 Connected to AMQP server on localhost:5672

 I've found this page
 https://ask.openstack.org/en/question/4581/cinder-unable-to-connect-to-rabbitmq/
 where zipmaster07's answer points at rabbit_virtual_host = /nova.
 I commented out the rabbit_virtual_host line, restarted all cinder services,
 and now I can see a successful connection to AMQP. But what is this
 rabbit_virtual_host? What does it do?
 I'll test my volumes now.
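
 rabbit_virtual_host selects which RabbitMQ virtual host (an isolated
 namespace of exchanges and queues) the service connects to. If that vhost
 does not exist, or the configured user has no permissions on it, the broker
 drops the connection, which the client logs as "Socket closed". A minimal
 sketch of keeping the original setting instead, using the rabbit user from
 the cinder.conf quoted above:

 rabbitmqctl add_vhost /nova
 rabbitmqctl set_permissions -p /nova rabbit ".*" ".*" ".*"
 rabbitmqctl list_permissions -p /nova   # verify configure/write/read grants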

  Regards.





 2013/9/23 Guilherme Russi luisguilherme...@gmail.com

  I've looked at the quantum/server.log and nova-scheduler.log and they
 show:

 2013-09-23 16:25:27 INFO [quantum.openstack.common.rpc.common]
 Reconnecting to AMQP server on localhost:5672
 2013-09-23 16:25:27 INFO [quantum.openstack.common.rpc.common]
 Connected to AMQP server on localhost:5672

  2013-09-23 16:24:01.830 5971 INFO nova.openstack.common.rpc.common [-]
 Reconnecting to AMQP server on 127.0.0.1:5672
 2013-09-23 16:24:01.879 5971 INFO nova.openstack.common.rpc.common [-]
 Connected to AMQP server on 127.0.0.1:5672

  But at the cinder-volume.log:

  INFO [cinder.openstack.common.rpc.common] Reconnecting to AMQP server
 on localhost:5672
 2013-09-23 16:46:04 ERROR [cinder.openstack.common.rpc.common] AMQP
 server on localhost:5672 is unreachable: Socket closed. Trying again in 30
 seconds.


  I was typing when you sent your answer, here it is:

  rabbitmq-server status
 Status of node rabbit@hemera ...
 [{pid,17266},
   {running_applications,[{rabbit,"RabbitMQ","2.7.1"},
                          {os_mon,"CPO  CXC 138 46","2.2.7"},
                          {sasl,"SASL  CXC 138 11","2.1.10"},
                          {mnesia,"MNESIA  CXC 138 12","4.5"},
                          {stdlib,"ERTS  CXC 138 10","1.17.5"},
                          {kernel,"ERTS  CXC 138 10","2.14.5"}]},
  {os,{unix,linux}},
  {erlang_version,"Erlang R14B04 (erts-5.8.5) [source] [64-bit] [smp:4:4]
 [rq:4] [async-threads:30] [kernel-poll:true]\n"},
  {memory,[{total,30926120},
   {processes,14354392},
   {processes_used,14343184},
   {system,16571728},
   {atom,1124441},
   {atom_used,1120343},
   {binary,268176},
   {code,11134417},
   {ets,2037120}]},
  {vm_memory_high_watermark,0.4},
  {vm_memory_limit,3299385344}]
 ...done.


  Yes, I've restarted the rabbitmq-server, but as you can see in the
 logs, quantum and nova are connected.

  Ideas??

  Regards.



 2013/9/23 Marcelo Dieder marcelodie...@gmail.com

  What's the status of your rabbitmq?

 # rabbitmqctl status

 And did you try restarting the rabbitmq?

 Regards,
 Marcelo Dieder


   On 09/23/2013 03:31 PM, Guilherme Russi wrote:

  Yes, it is at the same place

  cat /etc/cinder/cinder.conf
 [DEFAULT]
 rootwrap_config=/etc/cinder/rootwrap.conf
 sql_connection = mysql://cinder:password@localhost/cinder
 api_paste_confg = /etc/cinder/api-paste.ini
 iscsi_helper=ietadm
 #iscsi_helper = tgtadm
 volume_name_template = volume-%s
 volume_group = cinder-volumes
 verbose = True
 auth_strategy = keystone
 iscsi_ip_address=localhost
 rabbit_host = localhost
 rabbit_port = 5672
 rabbit_userid = rabbit
 rabbit_password = password
 rabbit_virtual_host = /nova
 state_path = /var/lib/cinder
 lock_path = /var/lock/cinder
 volumes_dir = /var/lib/cinder/volumes

  Another idea?

  Regards.


 2013/9/23 Gangur, Hrushikesh (HP Converged Cloud - RD - Sunnyvale) 
 hrushikesh.gan...@hp.com

  Ensure that the cinder configuration files have the correct IP of the
 rabbitmq host.



 *From:* Guilherme Russi [mailto:luisguilherme...@gmail.com]
 *Sent:* Monday, September 23, 2013 10:53 AM
 *To:* openstack
 *Subject:* [Openstack] Cinder error



 Hello guys, I'm reinstalling my OpenStack Grizzly and I'm having a
 problem with my cinder: I'm getting *Error:* Unable to retrieve
 volume list. I was looking at the cinder log and I only found this error:
 ERROR [cinder.openstack.common.rpc.common] AMQP server on
 192.168.3.1:5672 is unreachable: Socket closed. Trying again in 30
 seconds.



 I have a partition created:



 pvdisplay

   --- Physical volume ---
   PV Name               /dev/sda7
   VG Name               cinder-volumes
   PV Size               279,59 GiB / not usable 1,00 MiB
   Allocatable           yes
   PE Size               4,00 MiB
   Total PE              71574

Re: [Openstack] Attaching FloatingIP to VM

2013-09-19 Thread Guilherme Russi
Hello again Rahul, any idea about what I should do? Should I create the
public network from the demo user?

Thank you.


2013/9/18 Guilherme Russi luisguilherme...@gmail.com

 Hey Rahul,

  I've tried to create the public network again, and this time I added
 --shared to the creation command, but I still get the same error
 message. Do I need to create another credential for my demo user? I
 had a grizzly version installed before I formatted my controller node
 and it worked fine.

 Thank you.

 Regards.


 2013/9/16 Rahul Sharma rahulsharma...@gmail.com

 Hi Guilherme,

 If you will source these credentials, then you will be able to perform
 operations on admin tenant and not on the demo tenant. For demo tenant, you
 need to change the OS_TENANT_NAME to demo. You would have created networks
 and ports using these credentials during installation, now you would be
 able to assign floating-ip's to vm's or ports of tenant admin. For demo
 user, you need to share that network with demo  tenant. Try and check if it
 works. Floating-ip is from the external network and it would be shared. If
 it doesn't work, try deleting the existing networks and creating again
 using UI if possible.

 --Rahul
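
 A minimal sketch of both suggestions, with placeholder names (the network
 is called public here, and the demo tenant credentials are assumed to
 already exist):

 # recreate the external network as shared, so other tenants can see it
 quantum net-create public --shared --router:external=True

 # source the demo tenant before retrying the floating-ip association
 export OS_TENANT_NAME=demo
 export OS_USERNAME=demo
 export OS_PASSWORD=demo-password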


 On Tue, Sep 17, 2013 at 1:05 AM, Guilherme Russi 
 luisguilherme...@gmail.com wrote:

 Hello Rahul, here are my credentials:

 export OS_TENANT_NAME=admin
 export OS_USERNAME=admin
 export OS_PASSWORD=password
 export OS_AUTH_URL=http://localhost:35357/v2.0/;
 export OS_SERVICE_ENDPOINT=http://localhost:35357/v2.0;
 export OS_SERVICE_TOKEN=password

 Do I need credentials to demo user too? I had another installation,
 before reinstall my ubuntu again, and my credentials were like this and I
 could allocate floating ip.

 Regards.

 2013/9/16 Rahul Sharma rahulsharma...@gmail.com

 Hi Guilherme,

 I am not sure but I have a wild guess that you might have created the
 port using admin user and allocated floating-ip using demo user.  If you
 are using CLI, then what are the parameters you have sourced using
 openrc/localrc? Have you sourced the correct tenant's credentials i.e. of
 admin's or of demo's? Sometimes, we make mistakes in sourcing the wrong
 credentials, hence making the guess.

 -Regards
 Rahul





___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Attaching FloatingIP to VM

2013-09-18 Thread Guilherme Russi
Hey Rahul,

 I've tried to create the public network again, and this time I added
--shared to the creation command, but I still get the same error
message. Do I need to create another credential for my demo user? I
had a grizzly version installed before I formatted my controller node
and it worked fine.

Thank you.

Regards.


2013/9/16 Rahul Sharma rahulsharma...@gmail.com

 Hi Guilherme,

 If you will source these credentials, then you will be able to perform
 operations on admin tenant and not on the demo tenant. For demo tenant, you
 need to change the OS_TENANT_NAME to demo. You would have created networks
 and ports using these credentials during installation, now you would be
 able to assign floating-ip's to vm's or ports of tenant admin. For demo
 user, you need to share that network with demo  tenant. Try and check if it
 works. Floating-ip is from the external network and it would be shared. If
 it doesn't work, try deleting the existing networks and creating again
 using UI if possible.

 --Rahul


 On Tue, Sep 17, 2013 at 1:05 AM, Guilherme Russi 
 luisguilherme...@gmail.com wrote:

 Hello Rahul, here are my credentials:

 export OS_TENANT_NAME=admin
 export OS_USERNAME=admin
 export OS_PASSWORD=password
 export OS_AUTH_URL=http://localhost:35357/v2.0/;
 export OS_SERVICE_ENDPOINT=http://localhost:35357/v2.0;
 export OS_SERVICE_TOKEN=password

 Do I need credentials to demo user too? I had another installation,
 before reinstall my ubuntu again, and my credentials were like this and I
 could allocate floating ip.

 Regards.

 2013/9/16 Rahul Sharma rahulsharma...@gmail.com

 Hi Guilherme,

 I am not sure but I have a wild guess that you might have created the
 port using admin user and allocated floating-ip using demo user.  If you
 are using CLI, then what are the parameters you have sourced using
 openrc/localrc? Have you sourced the correct tenant's credentials i.e. of
 admin's or of demo's? Sometimes, we make mistakes in sourcing the wrong
 credentials, hence making the guess.

 -Regards
 Rahul




___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [Grizzly] Unable to reboot instance after Migrate

2013-09-09 Thread Guilherme Russi
Hello Happy,

 I've done my NFS configuration again, but I keep getting this error in my
compute logs:

Live Migration failure: not all arguments converted during string formatting

I don't know if it's a problem with my nova or my libvirt; my compute nodes
connect to an NFS server that is not my controller node, could that be it?

Thank you.


2013/9/4 happy idea guolongcang.w...@gmail.com

 Oh... I just disabled the firewall: $ ufw disable
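
 Disabling ufw works, but a narrower sketch is to open only what live
 migration needs; the port numbers below are libvirt defaults and should be
 checked against your libvirtd.conf and qemu.conf:

 ufw allow 16509/tcp          # libvirtd TCP listener (listen_tcp = 1)
 ufw allow 49152:49215/tcp    # default qemu migration port range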


 2013/9/5 Guilherme Russi luisguilherme...@gmail.com

 Just to check, how did you do this part?

 7. Configure your firewall to allow libvirt to communicate between nodes.

 Thank you.


 2013/9/3 happy idea guolongcang.w...@gmail.com


 follow this page's guide *carefully* ,
 http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-live-migrations.html
 replace 'NOVA-INST-DIR'  with '/var/lib/nova'
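
 From memory, the settings that guide walks through look roughly like the
 following; verify against the linked page for your release:

 # /etc/libvirt/libvirtd.conf
 listen_tls = 0
 listen_tcp = 1
 auth_tcp = "none"        # no auth: only acceptable on a trusted management network

 # /etc/default/libvirt-bin
 libvirtd_opts="-d -l"    # -l makes libvirtd listen for TCP connections

 # nova.conf
 live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE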


 2013/9/4 Guilherme Russi luisguilherme...@gmail.com

 Hey there, I made my NFS configuration again, I made this way:

 Controller node:
 1- mkdir -p /local2/instances

 2- mount --bind /var/lib/nova/instances /local2/instances

 3- added this inside /etc/exports
 /var/lib/nova/instances
 192.168.3.0/24(rw,sync,fsid=0,no_root_squash,no_subtree_check)
 /local2/instances
 192.168.3.0/24(rw,sync,fsid=0,no_root_squash,no_subtree_check,nohide)

 4- added this inside /etc/fstab
 /var/lib/nova/instances /local2/instances none bind 0 0


 Compute node:
 1- added inside /etc/fstab
 192.168.3.1:/   /var/lib/nova/instances   nfs4   defaults   0   0

  2- mount -t nfs4 192.168.3.1:/ /var/lib/nova/instances/

 Do I need to do anything else?

 Regards.

 Guilherme.


 2013/9/1 happy idea guolongcang.w...@gmail.com

 2013-08-30 14:42:51.569 12667 ERROR nova.virt.libvirt.driver [-]
 Getting disk size of instance-007d: [Errno 2] No such file or
 directory: '/var/lib/nova/instances/72ec37a3-b209-4729-b628-
 005fdcea5a3c/disk'

 *I think maybe your NFS config is not correct.*


 2013/8/31 Guilherme Russi luisguilherme...@gmail.com

 Hello Happy, these are my logs:

 2013-08-30 14:42:51.402 12667 AUDIT nova.compute.resource_tracker
 [-] Auditing locally available compute resources
 2013-08-30 14:42:51.562 12667 ERROR nova.virt.libvirt.driver [-]
 Getting disk size of instance-0084: [Errno 2] No such file or
 directory:
 '/var/lib/nova/instances/c9e1c5ed-a108-4196-bfbc-24495e2e71bd/disk'
 2013-08-30 14:42:51.564 12667 ERROR nova.virt.libvirt.driver [-]
 Getting disk size of instance-0077: [Errno 2] No such file or
 directory:
 '/var/lib/nova/instances/483f98e3-8ef5-43e2-8c3a-def55abdabcd/disk'
 2013-08-30 14:42:51.567 12667 ERROR nova.virt.libvirt.driver [-]
 Getting disk size of instance-00bd: [Errno 2] No such file or
 directory:
 '/var/lib/nova/instances/66abd40e-fb19-4cbe-a248-61d968fd84b7/disk'
 2013-08-30 14:42:51.569 12667 ERROR nova.virt.libvirt.driver [-]
 Getting disk size of instance-007d: [Errno 2] No such file or
 directory:
 '/var/lib/nova/instances/72ec37a3-b209-4729-b628-005fdcea5a3c/disk'
 2013-08-30 14:42:51.679 12667 AUDIT nova.compute.resource_tracker
 [-] Free ram (MB): 2746
 2013-08-30 14:42:51.679 12667 AUDIT nova.compute.resource_tracker
 [-] Free disk (GB): 53
 2013-08-30 14:42:51.679 12667 AUDIT nova.compute.resource_tracker
 [-] Free VCPUS: 1
 2013-08-30 14:42:51.773 12667 INFO nova.compute.resource_tracker [-]
 Compute_service record updated for caos:caos
 2013-08-30 14:42:51.774 12667 INFO nova.compute.manager [-] Updating
 host status


 And here the output when I run the command:

 ERROR: Live migration of instance
 c9af3e9e-87b1-4aa3-95aa-22700e1091e4 to host tiresias failed (HTTP 400)
 (Request-ID: req-630d7837-6886-4e23-bc3d-a9fccc4a8868)

 My destination host answers ping when I ping it.

 I've been fighting this for quite a while without success.

 Regards.

 Guilherme.


 2013/8/29 happy idea guolongcang.w...@gmail.com

 OK.


 2013/8/30 Guilherme Russi luisguilherme...@gmail.com

 I am too, if I find something I'll let you know.

 Regards.


 2013/8/29 happy idea guolongcang.w...@gmail.com

 I am trying to figure out what cause the bug.


 2013/8/30 Guilherme Russi luisguilherme...@gmail.com

 Well mine are:

 ii  nova-api
 1:2013.1-0ubuntu2.1~cloud0  OpenStack Compute - API 
 frontend
 ii  nova-cert
  1:2013.1-0ubuntu2.1~cloud0  OpenStack Compute - 
 certificate
 management
 ii  nova-common
  1:2013.1-0ubuntu2.1~cloud0  OpenStack Compute - common 
 files
 ii  nova-conductor
 1:2013.1-0ubuntu2.1~cloud0  OpenStack Compute - conductor
 service
 ii  nova-consoleauth
 1:2013.1-0ubuntu2.1~cloud0  OpenStack Compute - Console
 Authenticator
 ii  nova-novncproxy
  1:2013.1-0ubuntu2.1~cloud0  OpenStack Compute - NoVNC 
 proxy
 ii  nova-scheduler
 1:2013.1-0ubuntu2.1~cloud0  OpenStack Compute - virtual 
 machine
 scheduler
 ii  python-nova
  1:2013.1-0ubuntu2.1~cloud0  OpenStack Compute Python 
 libraries
 ii  python

Re: [Openstack] [Grizzly] Unable to reboot instance after Migrate

2013-09-04 Thread Guilherme Russi
Just to check, how did you do this part?

7. Configure your firewall to allow libvirt to communicate between nodes.

Thank you.


2013/9/3 happy idea guolongcang.w...@gmail.com


 follow this page's guide *carefully* ,
 http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-live-migrations.html
 replace 'NOVA-INST-DIR'  with '/var/lib/nova'


 2013/9/4 Guilherme Russi luisguilherme...@gmail.com

 Hey there, I made my NFS configuration again, I made this way:

 Controller node:
 1- mkdir -p /local2/instances

 2- mount --bind /var/lib/nova/instances /local2/instances

 3- added this inside /etc/exports
 /var/lib/nova/instances
 192.168.3.0/24(rw,sync,fsid=0,no_root_squash,no_subtree_check)
 /local2/instances
 192.168.3.0/24(rw,sync,fsid=0,no_root_squash,no_subtree_check,nohide)

 4- added this inside /etc/fstab
 /var/lib/nova/instances /local2/instances none bind 0 0


 Compute node:
 1- added inside /etc/fstab
 192.168.3.1:/   /var/lib/nova/instances   nfs4   defaults   0   0

  2- mount -t nfs4 192.168.3.1:/ /var/lib/nova/instances/

 Do I need to do anything else?

 Regards.

 Guilherme.


 2013/9/1 happy idea guolongcang.w...@gmail.com

 2013-08-30 14:42:51.569 12667 ERROR nova.virt.libvirt.driver [-]
 Getting disk size of instance-007d: [Errno 2] No such file or
 directory: '/var/lib/nova/instances/72ec37a3-b209-4729-b628-
 005fdcea5a3c/disk'

 *I think maybe your NFS config is not correct.*


 2013/8/31 Guilherme Russi luisguilherme...@gmail.com

 Hello Happy, these are my logs:

 2013-08-30 14:42:51.402 12667 AUDIT nova.compute.resource_tracker [-]
 Auditing locally available compute resources
 2013-08-30 14:42:51.562 12667 ERROR nova.virt.libvirt.driver [-]
 Getting disk size of instance-0084: [Errno 2] No such file or
 directory:
 '/var/lib/nova/instances/c9e1c5ed-a108-4196-bfbc-24495e2e71bd/disk'
 2013-08-30 14:42:51.564 12667 ERROR nova.virt.libvirt.driver [-]
 Getting disk size of instance-0077: [Errno 2] No such file or
 directory:
 '/var/lib/nova/instances/483f98e3-8ef5-43e2-8c3a-def55abdabcd/disk'
 2013-08-30 14:42:51.567 12667 ERROR nova.virt.libvirt.driver [-]
 Getting disk size of instance-00bd: [Errno 2] No such file or
 directory:
 '/var/lib/nova/instances/66abd40e-fb19-4cbe-a248-61d968fd84b7/disk'
 2013-08-30 14:42:51.569 12667 ERROR nova.virt.libvirt.driver [-]
 Getting disk size of instance-007d: [Errno 2] No such file or
 directory:
 '/var/lib/nova/instances/72ec37a3-b209-4729-b628-005fdcea5a3c/disk'
 2013-08-30 14:42:51.679 12667 AUDIT nova.compute.resource_tracker [-]
 Free ram (MB): 2746
 2013-08-30 14:42:51.679 12667 AUDIT nova.compute.resource_tracker [-]
 Free disk (GB): 53
 2013-08-30 14:42:51.679 12667 AUDIT nova.compute.resource_tracker [-]
 Free VCPUS: 1
 2013-08-30 14:42:51.773 12667 INFO nova.compute.resource_tracker [-]
 Compute_service record updated for caos:caos
 2013-08-30 14:42:51.774 12667 INFO nova.compute.manager [-] Updating
 host status


 And here the output when I run the command:

 ERROR: Live migration of instance c9af3e9e-87b1-4aa3-95aa-22700e1091e4
 to host tiresias failed (HTTP 400) (Request-ID:
 req-630d7837-6886-4e23-bc3d-a9fccc4a8868)

 My destination host answers ping when I ping it.

 I've been fighting this for quite a while without success.

 Regards.

 Guilherme.


 2013/8/29 happy idea guolongcang.w...@gmail.com

 OK.


 2013/8/30 Guilherme Russi luisguilherme...@gmail.com

 I am too, if I find something I'll let you know.

 Regards.


 2013/8/29 happy idea guolongcang.w...@gmail.com

 I am trying to figure out what cause the bug.


 2013/8/30 Guilherme Russi luisguilherme...@gmail.com

 Well mine are:

 ii  nova-api
 1:2013.1-0ubuntu2.1~cloud0  OpenStack Compute - API 
 frontend
 ii  nova-cert
  1:2013.1-0ubuntu2.1~cloud0  OpenStack Compute - 
 certificate
 management
 ii  nova-common
  1:2013.1-0ubuntu2.1~cloud0  OpenStack Compute - common 
 files
 ii  nova-conductor
 1:2013.1-0ubuntu2.1~cloud0  OpenStack Compute - conductor
 service
 ii  nova-consoleauth
 1:2013.1-0ubuntu2.1~cloud0  OpenStack Compute - Console
 Authenticator
 ii  nova-novncproxy
  1:2013.1-0ubuntu2.1~cloud0  OpenStack Compute - NoVNC 
 proxy
 ii  nova-scheduler
 1:2013.1-0ubuntu2.1~cloud0  OpenStack Compute - virtual 
 machine
 scheduler
 ii  python-nova
  1:2013.1-0ubuntu2.1~cloud0  OpenStack Compute Python 
 libraries
 ii  python-novaclient  1:2.13.0-0ubuntu1~cloud0




 2013/8/29 happy idea guolongcang.w...@gmail.com

 Thank you.


 2013/8/29 Razique Mahroua razique.mahr...@gmail.com

 Looks like a bug to me, definitely….
 but I can be wrong though

 *Razique Mahroua** - **Nuage  Co*
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 29 août 2013 à 11:29, happy idea guolongcang.w...@gmail.com
 a écrit :

 Thank you.

 -- Forwarded message --
 From: happy idea guolongcang.w...@gmail.com

Re: [Openstack] Slow API operations

2013-09-04 Thread Guilherme Russi
Thank you all guys.


2013/9/3 Jay Pipes jaypi...@gmail.com

 On 09/03/2013 03:12 PM, Clint Byrum wrote:

 Excerpts from Guilherme Russi's message of 2013-09-03 11:52:39 -0700:

 Query OK, 502150 rows affected (32 min 2.77 sec) and nothing has changed,
 lol.


 There's also indexes in Havana that help a lot, you might consider adding
 them manually:

 ALTER TABLE token ADD INDEX ix_token_valid (valid);
 ALTER TABLE token ADD INDEX ix_token_expires (expires);

 Note that a 500,000 row delete is _brutal_ on your server. We use this
 in TripleO:

 https://git.openstack.org/cgit/openstack/tripleo-image-elements/tree/elements/keystone/cleanup-keystone-tokens.sh

 It allows space in between the deletes for other things to happen,
 and also deletes in a more efficient way to not thrash around the table
 deleting things in index order.
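
 That batching idea, as a minimal shell sketch (assuming the MySQL backend
 and a database named keystone; the tripleo script linked above is the
 canonical version):

 while :; do
   rows=$(mysql keystone -N -B -e \
     "DELETE FROM token WHERE expires <= NOW() LIMIT 1000; SELECT ROW_COUNT();")
   [ "$rows" -eq 0 ] && break   # nothing left to delete
   sleep 1                      # give other queries room to run
 done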

 Also, if you don't need the content of your token table for audit purposes
 and you can afford the RAM, you should definitely consider switching to
 the memcached backend for tokens.
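
 For reference, the keystone.conf change this refers to would look roughly
 like this (driver path as in the Grizzly/Havana-era tree; verify against
 your installed version):

 [token]
 driver = keystone.token.backends.memcache.Token

 [memcache]
 servers = localhost:11211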


 +1000. memcached is the superior option for tokens.

 -jay



 ___
 Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Slow API operations

2013-09-03 Thread Guilherme Russi
Hello guys,

 I'm facing a problem where my API operations are too slow. I've been
searching the internet and I saw some people talking about expired
keystone tokens, but my point is: how do I find which tokens are still
valid and which are expired? Those guys were talking about keystone-manage
token flush, but that is only coming in Havana and I'm using Grizzly.
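
A sketch that answers the valid-versus-expired question directly, assuming
the MySQL token backend and the default database name keystone:

mysql keystone -e \
  "SELECT expires <= NOW() AS expired, COUNT(*) AS tokens FROM token GROUP BY expired;"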

Can anybody help me to fix it?

Regards.

Guilherme.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Slow API operations

2013-09-03 Thread Guilherme Russi
Query OK, 502150 rows affected (32 min 2.77 sec) and nothing has changed,
lol.

Thank you.

Regards.

Guilherme.


2013/9/3 Gangur, Hrushikesh (HP Converged Cloud - RD - Sunnyvale) 
hrushikesh.gan...@hp.com

  I usually run this SQL query to clean up expired tokens:

 DELETE FROM token WHERE expires <= NOW();

 Unfortunately, this does not help much in improving the slow response time of
 nova list or other APIs, but it is worth trying out.

 *From:* Guilherme Russi [mailto:luisguilherme...@gmail.com]
 *Sent:* Tuesday, September 03, 2013 8:23 AM
 *To:* openstack
 *Subject:* [Openstack] Slow API operations

 Hello guys,

  I'm facing a problem where my API operations are too slow. I've been
 searching the internet and I saw some people talking about expired
 keystone tokens, but my point is: how do I find which tokens are still
 valid and which are expired? Those guys were talking about keystone-manage
 token flush, but that is only coming in Havana and I'm using Grizzly.

 Can anybody help me to fix it?

 Regards.

 Guilherme.

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [Grizzly] Unable to reboot instance after Migrate

2013-09-03 Thread Guilherme Russi
Hey there, I made my NFS configuration again, I made this way:

Controller node:
1- mkdir -p /local2/instances

2- mount --bind /var/lib/nova/instances /local2/instances

3- added this inside /etc/exports
/var/lib/nova/instances
192.168.3.0/24(rw,sync,fsid=0,no_root_squash,no_subtree_check)
/local2/instances
192.168.3.0/24(rw,sync,fsid=0,no_root_squash,no_subtree_check,nohide)

4- added this inside /etc/fstab
/var/lib/nova/instances /local2/instances none bind 0 0


Compute node:
1- added inside /etc/fstab
192.168.3.1:/   /var/lib/nova/instances   nfs4   defaults   0   0

2- mount -t nfs4 192.168.3.1:/ /var/lib/nova/instances/
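
A few quick checks from the compute node that the export is actually usable
(IP and paths as above; the probe file name is just an example):

showmount -e 192.168.3.1                 # the fsid=0 export should appear as /
sudo -u nova touch /var/lib/nova/instances/probe \
  && sudo -u nova rm /var/lib/nova/instances/probe   # nova must be able to write
id nova                                  # UID/GID must match on every node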

Do I need to do anything else?

Regards.

Guilherme.


2013/9/1 happy idea guolongcang.w...@gmail.com

 2013-08-30 14:42:51.569 12667 ERROR nova.virt.libvirt.driver [-] Getting
 disk size of instance-007d: [Errno 2] No such file or directory:
 '/var/lib/nova/instances/72ec37a3-b209-4729-b628-005fdcea5a3c/disk'

 *I think maybe your NFS config is not correct.*


 2013/8/31 Guilherme Russi luisguilherme...@gmail.com

 Hello Happy, these are my logs:

 2013-08-30 14:42:51.402 12667 AUDIT nova.compute.resource_tracker [-]
 Auditing locally available compute resources
 2013-08-30 14:42:51.562 12667 ERROR nova.virt.libvirt.driver [-] Getting
 disk size of instance-0084: [Errno 2] No such file or directory:
 '/var/lib/nova/instances/c9e1c5ed-a108-4196-bfbc-24495e2e71bd/disk'
 2013-08-30 14:42:51.564 12667 ERROR nova.virt.libvirt.driver [-] Getting
 disk size of instance-0077: [Errno 2] No such file or directory:
 '/var/lib/nova/instances/483f98e3-8ef5-43e2-8c3a-def55abdabcd/disk'
 2013-08-30 14:42:51.567 12667 ERROR nova.virt.libvirt.driver [-] Getting
 disk size of instance-00bd: [Errno 2] No such file or directory:
 '/var/lib/nova/instances/66abd40e-fb19-4cbe-a248-61d968fd84b7/disk'
 2013-08-30 14:42:51.569 12667 ERROR nova.virt.libvirt.driver [-] Getting
 disk size of instance-007d: [Errno 2] No such file or directory:
 '/var/lib/nova/instances/72ec37a3-b209-4729-b628-005fdcea5a3c/disk'
 2013-08-30 14:42:51.679 12667 AUDIT nova.compute.resource_tracker [-]
 Free ram (MB): 2746
 2013-08-30 14:42:51.679 12667 AUDIT nova.compute.resource_tracker [-]
 Free disk (GB): 53
 2013-08-30 14:42:51.679 12667 AUDIT nova.compute.resource_tracker [-]
 Free VCPUS: 1
 2013-08-30 14:42:51.773 12667 INFO nova.compute.resource_tracker [-]
 Compute_service record updated for caos:caos
 2013-08-30 14:42:51.774 12667 INFO nova.compute.manager [-] Updating
 host status


 And here the output when I run the command:

 ERROR: Live migration of instance c9af3e9e-87b1-4aa3-95aa-22700e1091e4 to
 host tiresias failed (HTTP 400) (Request-ID:
 req-630d7837-6886-4e23-bc3d-a9fccc4a8868)

 My destination host answers ping when I ping it.

 I've been fighting this for quite a while without success.

 Regards.

 Guilherme.


 2013/8/29 happy idea guolongcang.w...@gmail.com

 OK.


 2013/8/30 Guilherme Russi luisguilherme...@gmail.com

 I am too, if I find something I'll let you know.

 Regards.


 2013/8/29 happy idea guolongcang.w...@gmail.com

 I am trying to figure out what cause the bug.


 2013/8/30 Guilherme Russi luisguilherme...@gmail.com

 Well mine are:

 ii  nova-api   1:2013.1-0ubuntu2.1~cloud0
  OpenStack Compute - API frontend
 ii  nova-cert  1:2013.1-0ubuntu2.1~cloud0
  OpenStack Compute - certificate management
 ii  nova-common1:2013.1-0ubuntu2.1~cloud0
  OpenStack Compute - common files
 ii  nova-conductor 1:2013.1-0ubuntu2.1~cloud0
  OpenStack Compute - conductor service
 ii  nova-consoleauth   1:2013.1-0ubuntu2.1~cloud0
  OpenStack Compute - Console Authenticator
 ii  nova-novncproxy1:2013.1-0ubuntu2.1~cloud0
  OpenStack Compute - NoVNC proxy
 ii  nova-scheduler 1:2013.1-0ubuntu2.1~cloud0
  OpenStack Compute - virtual machine scheduler
 ii  python-nova1:2013.1-0ubuntu2.1~cloud0
  OpenStack Compute Python libraries
 ii  python-novaclient  1:2.13.0-0ubuntu1~cloud0




 2013/8/29 happy idea guolongcang.w...@gmail.com

 Thank you.


 2013/8/29 Razique Mahroua razique.mahr...@gmail.com

 Looks like a bug to me, definitely….
 but I can be wrong though

 *Razique Mahroua** - **Nuage  Co*
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 29 août 2013 à 11:29, happy idea guolongcang.w...@gmail.com a
 écrit :

 Thank you.

 -- Forwarded message --
 From: happy idea guolongcang.w...@gmail.com
 Date: 2013/8/29
 Subject: [Openstack][Grizzly] Unable to reboot instance after
 Migrate
 To: openstack openstack@lists.openstack.org


  Hi All,

 Here's the stacktrace log:

 2013-08-29 15:12:29.515 WARNING nova.compute.manager
 [req-31944080-1a33-4679-98ce

Re: [Openstack] [Grizzly] Unable to reboot instance after Migrate

2013-08-30 Thread Guilherme Russi
Hello Happy, these are my logs:

2013-08-30 14:42:51.402 12667 AUDIT nova.compute.resource_tracker [-]
Auditing locally available compute resources
2013-08-30 14:42:51.562 12667 ERROR nova.virt.libvirt.driver [-] Getting
disk size of instance-0084: [Errno 2] No such file or directory:
'/var/lib/nova/instances/c9e1c5ed-a108-4196-bfbc-24495e2e71bd/disk'
2013-08-30 14:42:51.564 12667 ERROR nova.virt.libvirt.driver [-] Getting
disk size of instance-0077: [Errno 2] No such file or directory:
'/var/lib/nova/instances/483f98e3-8ef5-43e2-8c3a-def55abdabcd/disk'
2013-08-30 14:42:51.567 12667 ERROR nova.virt.libvirt.driver [-] Getting
disk size of instance-00bd: [Errno 2] No such file or directory:
'/var/lib/nova/instances/66abd40e-fb19-4cbe-a248-61d968fd84b7/disk'
2013-08-30 14:42:51.569 12667 ERROR nova.virt.libvirt.driver [-] Getting
disk size of instance-007d: [Errno 2] No such file or directory:
'/var/lib/nova/instances/72ec37a3-b209-4729-b628-005fdcea5a3c/disk'
2013-08-30 14:42:51.679 12667 AUDIT nova.compute.resource_tracker [-] Free
ram (MB): 2746
2013-08-30 14:42:51.679 12667 AUDIT nova.compute.resource_tracker [-] Free
disk (GB): 53
2013-08-30 14:42:51.679 12667 AUDIT nova.compute.resource_tracker [-] Free
VCPUS: 1
2013-08-30 14:42:51.773 12667 INFO nova.compute.resource_tracker [-]
Compute_service record updated for caos:caos
2013-08-30 14:42:51.774 12667 INFO nova.compute.manager [-] Updating host
status


And here the output when I run the command:

ERROR: Live migration of instance c9af3e9e-87b1-4aa3-95aa-22700e1091e4 to
host tiresias failed (HTTP 400) (Request-ID:
req-630d7837-6886-4e23-bc3d-a9fccc4a8868)

My destination host answers ping when I ping it.

I've been fighting this for quite a while without success.

Regards.

Guilherme.


2013/8/29 happy idea guolongcang.w...@gmail.com

 OK.


 2013/8/30 Guilherme Russi luisguilherme...@gmail.com

 I am too, if I find something I'll let you know.

 Regards.


 2013/8/29 happy idea guolongcang.w...@gmail.com

 I am trying to figure out what cause the bug.


 2013/8/30 Guilherme Russi luisguilherme...@gmail.com

 Well mine are:

 ii  nova-api   1:2013.1-0ubuntu2.1~cloud0
OpenStack Compute - API frontend
 ii  nova-cert  1:2013.1-0ubuntu2.1~cloud0
OpenStack Compute - certificate management
 ii  nova-common1:2013.1-0ubuntu2.1~cloud0
OpenStack Compute - common files
 ii  nova-conductor 1:2013.1-0ubuntu2.1~cloud0
OpenStack Compute - conductor service
 ii  nova-consoleauth   1:2013.1-0ubuntu2.1~cloud0
OpenStack Compute - Console Authenticator
 ii  nova-novncproxy1:2013.1-0ubuntu2.1~cloud0
OpenStack Compute - NoVNC proxy
 ii  nova-scheduler 1:2013.1-0ubuntu2.1~cloud0
OpenStack Compute - virtual machine scheduler
 ii  python-nova1:2013.1-0ubuntu2.1~cloud0
OpenStack Compute Python libraries
 ii  python-novaclient  1:2.13.0-0ubuntu1~cloud0




 2013/8/29 happy idea guolongcang.w...@gmail.com

 Thank you.


 2013/8/29 Razique Mahroua razique.mahr...@gmail.com

 Looks like a bug to me, definitely….
 but I can be wrong though

 *Razique Mahroua** - **Nuage  Co*
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 29 août 2013 à 11:29, happy idea guolongcang.w...@gmail.com a
 écrit :

 Thank you.

 -- Forwarded message --
 From: happy idea guolongcang.w...@gmail.com
 Date: 2013/8/29
 Subject: [Openstack][Grizzly] Unable to reboot instance after Migrate
 To: openstack openstack@lists.openstack.org


  Hi All,

 Here's the stacktrace log:

 2013-08-29 15:12:29.515 WARNING nova.compute.manager
 [req-31944080-1a33-4679-98ce-af36e3660679 
 ae0f00ede33f42d9a12385b2c2ce8c0d
 57d53e1dcff540b6aeaf0d6fd60be7ab] [instance:
 038dbba7-534b-4c03-8914-a830c424ce21] Traceback (most recent call last):
   File /usr/lib/python2.7/dist-packages/nova/compute/manager.py,
 line 1718, in reboot_instance
 bad_volumes_callback=bad_volumes_callback)
   File
 /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 
 1295,
 in reboot
 block_device_info)
   File
 /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 
 1371,
 in _hard_reboot
 self._create_images_and_backing(context, instance, disk_info_json)
   File
 /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 
 3248,
 in _create_images_and_backing
 cache_name = os.path.basename(info['backing_file'])
   File /usr/lib/python2.7/posixpath.py, line 121, in basename
 i = p.rfind('/') + 1
 AttributeError: 'NoneType' object has no attribute 'rfind'

 2013-08-29 15:12:29.516 ERROR nova.compute.manager
 [req-31944080-1a33-4679-98ce-af36e3660679 
 ae0f00ede33f42d9a12385b2c2ce8c0d
 57d53e1dcff540b6aeaf0d6fd60be7ab] [instance:
 038dbba7-534b-4c03

Re: [Openstack] [Grizzly] Unable to reboot instance after Migrate

2013-08-29 Thread Guilherme Russi
Well mine are:

ii  nova-api   1:2013.1-0ubuntu2.1~cloud0
   OpenStack Compute - API frontend
ii  nova-cert  1:2013.1-0ubuntu2.1~cloud0
   OpenStack Compute - certificate management
ii  nova-common1:2013.1-0ubuntu2.1~cloud0
   OpenStack Compute - common files
ii  nova-conductor 1:2013.1-0ubuntu2.1~cloud0
   OpenStack Compute - conductor service
ii  nova-consoleauth   1:2013.1-0ubuntu2.1~cloud0
   OpenStack Compute - Console Authenticator
ii  nova-novncproxy1:2013.1-0ubuntu2.1~cloud0
   OpenStack Compute - NoVNC proxy
ii  nova-scheduler 1:2013.1-0ubuntu2.1~cloud0
   OpenStack Compute - virtual machine scheduler
ii  python-nova1:2013.1-0ubuntu2.1~cloud0
   OpenStack Compute Python libraries
ii  python-novaclient  1:2.13.0-0ubuntu1~cloud0




2013/8/29 happy idea guolongcang.w...@gmail.com

 Thank you.


 2013/8/29 Razique Mahroua razique.mahr...@gmail.com

 Looks like a bug to me, definitely….
 but I can be wrong though

 *Razique Mahroua** - **Nuage  Co*
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 29 août 2013 à 11:29, happy idea guolongcang.w...@gmail.com a écrit
 :

 Thank you.

 -- Forwarded message --
 From: happy idea guolongcang.w...@gmail.com
 Date: 2013/8/29
 Subject: [Openstack][Grizzly] Unable to reboot instance after Migrate
 To: openstack openstack@lists.openstack.org


  Hi All,

 Here's the stacktrace log:

 2013-08-29 15:12:29.515 WARNING nova.compute.manager
 [req-31944080-1a33-4679-98ce-af36e3660679 ae0f00ede33f42d9a12385b2c2ce8c0d
 57d53e1dcff540b6aeaf0d6fd60be7ab] [instance:
 038dbba7-534b-4c03-8914-a830c424ce21] Traceback (most recent call last):
   File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line
 1718, in reboot_instance
 bad_volumes_callback=bad_volumes_callback)
   File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py,
 line 1295, in reboot
 block_device_info)
   File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py,
 line 1371, in _hard_reboot
 self._create_images_and_backing(context, instance, disk_info_json)
   File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py,
 line 3248, in _create_images_and_backing
 cache_name = os.path.basename(info['backing_file'])
   File /usr/lib/python2.7/posixpath.py, line 121, in basename
 i = p.rfind('/') + 1
 AttributeError: 'NoneType' object has no attribute 'rfind'

 2013-08-29 15:12:29.516 ERROR nova.compute.manager
 [req-31944080-1a33-4679-98ce-af36e3660679 ae0f00ede33f42d9a12385b2c2ce8c0d
 57d53e1dcff540b6aeaf0d6fd60be7ab] [instance:
 038dbba7-534b-4c03-8914-a830c424ce21] Cannot reboot instance: 'NoneType'
 object has no attribute 'rfind'


 Looking for any help.
 Regards.





 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [Grizzly] Unable to reboot instance after Migrate

2013-08-29 Thread Guilherme Russi
I am too, if I find something I'll let you know.

Regards.


2013/8/29 happy idea guolongcang.w...@gmail.com

 I am trying to figure out what cause the bug.


 2013/8/30 Guilherme Russi luisguilherme...@gmail.com

 Well mine are:

 ii  nova-api   1:2013.1-0ubuntu2.1~cloud0
  OpenStack Compute - API frontend
 ii  nova-cert  1:2013.1-0ubuntu2.1~cloud0
  OpenStack Compute - certificate management
 ii  nova-common1:2013.1-0ubuntu2.1~cloud0
  OpenStack Compute - common files
 ii  nova-conductor 1:2013.1-0ubuntu2.1~cloud0
  OpenStack Compute - conductor service
 ii  nova-consoleauth   1:2013.1-0ubuntu2.1~cloud0
  OpenStack Compute - Console Authenticator
 ii  nova-novncproxy1:2013.1-0ubuntu2.1~cloud0
  OpenStack Compute - NoVNC proxy
 ii  nova-scheduler 1:2013.1-0ubuntu2.1~cloud0
  OpenStack Compute - virtual machine scheduler
 ii  python-nova1:2013.1-0ubuntu2.1~cloud0
  OpenStack Compute Python libraries
 ii  python-novaclient  1:2.13.0-0ubuntu1~cloud0




 2013/8/29 happy idea guolongcang.w...@gmail.com

 Thank you.


 2013/8/29 Razique Mahroua razique.mahr...@gmail.com

 Looks like a bug to me, definitely….
 but I can be wrong though

 *Razique Mahroua** - **Nuage  Co*
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 29 août 2013 à 11:29, happy idea guolongcang.w...@gmail.com a
 écrit :

 Thank you.

 -- Forwarded message --
 From: happy idea guolongcang.w...@gmail.com
 Date: 2013/8/29
 Subject: [Openstack][Grizzly] Unable to reboot instance after Migrate
 To: openstack openstack@lists.openstack.org


  Hi All,

 Here's the stacktrace log:

 2013-08-29 15:12:29.515 WARNING nova.compute.manager
 [req-31944080-1a33-4679-98ce-af36e3660679 ae0f00ede33f42d9a12385b2c2ce8c0d
 57d53e1dcff540b6aeaf0d6fd60be7ab] [instance:
 038dbba7-534b-4c03-8914-a830c424ce21] Traceback (most recent call last):
   File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line
 1718, in reboot_instance
 bad_volumes_callback=bad_volumes_callback)
   File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py,
 line 1295, in reboot
 block_device_info)
   File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py,
 line 1371, in _hard_reboot
 self._create_images_and_backing(context, instance, disk_info_json)
   File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py,
 line 3248, in _create_images_and_backing
 cache_name = os.path.basename(info['backing_file'])
   File /usr/lib/python2.7/posixpath.py, line 121, in basename
 i = p.rfind('/') + 1
 AttributeError: 'NoneType' object has no attribute 'rfind'

 2013-08-29 15:12:29.516 ERROR nova.compute.manager
 [req-31944080-1a33-4679-98ce-af36e3660679 ae0f00ede33f42d9a12385b2c2ce8c0d
 57d53e1dcff540b6aeaf0d6fd60be7ab] [instance:
 038dbba7-534b-4c03-8914-a830c424ce21] Cannot reboot instance: 'NoneType'
 object has no attribute 'rfind'


 Looking for any help.
 Regards.





 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Grizzly. Network cannot reachable After Migrate or Live Migration. Why ?

2013-08-27 Thread Guilherme Russi
Hello, how are your nova configs? I'm trying to figure out what is going on
with my live migration; I can't even send the VM to another compute node
yet. Thank you.


2013/8/27 郭龙仓 guolongcang.w...@gmail.com



 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Error during live migration

2013-08-15 Thread Guilherme Russi
Ok, let me write down what I've done:

#1 - Logged in as the nova user on my first CN and generated a key with
ssh-keygen -t dsa at the /var/lib/nova/.ssh location;
#2 - Copied the id_dsa.pub from my first CN to my second CN at the
/var/lib/nova/.ssh location;
#3 - I'm sure I don't have the directive AllowUsers in my
/etc/ssh/sshd_config
#4 - On my two CNs I have, inside /etc/passwd,
nova:x:123:131::/var/lib/nova:/bin/sh
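
Note that step #2 by itself only copies the file; the public key also has to
be appended to authorized_keys on the receiving node, with strict
permissions. A minimal sketch run as the nova user, where cn2 stands for the
other compute node:

ssh-keygen -t dsa -f /var/lib/nova/.ssh/id_dsa -N ""
cat /var/lib/nova/.ssh/id_dsa.pub | ssh cn2 'cat >> /var/lib/nova/.ssh/authorized_keys'
chmod 700 /var/lib/nova/.ssh
chmod 600 /var/lib/nova/.ssh/authorized_keys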

Another question: do I need to copy the .pub key to my controller node
too? And do I need to create one key on each compute node and copy the
.pub to the other one, or just create one key and copy the .pub?

I'm making a mess of this keys thing :(

Thank you all.

Guilherme.


2013/8/15 Razique Mahroua razique.mahr...@gmail.com

 I was so convinced we resolved your issue Guilherme
 but maybe it was someone else :)
 the error simply means the user nova from the first CN cannot connect as
 the user nova to the second CN it needs to send the image to
 Here are few checks :
 #1 - Exchange all nova's public keys between all compute nodes
 #2 - Make sure the connection with/from that user is allowed (that you
 don't have a directive such as AllowUsers in /etc/ssh/sshd_config)
 #3 - Make sure in /etc/passwd, the nova user has a shell


 *Razique Mahroua** - **Nuage  Co*
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 15 août 2013 à 09:39, james jsha...@gmail.com a écrit :

 Piggybacking on the answer before me, but yes, I had the same problem.
 Setting up ssh keys for the user 'nova' between nodes will resolve the
 problem for you.


 Kind Regards,

 James Shaw


 On 15 August 2013 02:10, Md. Maruful Hassan mrf@gmail.com wrote:

 I have not done the live-migration setup before but looking at the log it
 seems like you don't have ssh setup right between your compute nodes.

 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp Command:
 * ssh 10.3.77.52 mkdir -p
 /var/lib/nova/instances/e04986a1-8f56-4dd9-9995-419e05430da3*
 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp Exit
 code: 255
 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp Stdout:
 ''
 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp Stderr:
 'Permission denied, please try again.\r\nPermission denied, please try
 again.\r\nPermission denied (publickey,password).\r\n'
 Set up ssh-key based passwordless access for the user 'nova' (or whatever
 user the nova process is running as) between nodes. Then try again.



 --
 m@ruf


 On Thu, Aug 15, 2013 at 4:52 AM, Guilherme Russi 
 luisguilherme...@gmail.com wrote:

 Hello guys,

  I've been facing an error with live migration since last week and I
 wonder why I can't migrate my instances. I'm using the Grizzly release and
 I got this error in my both compute nodes:


 2013-08-14 15:49:47.788 ERROR nova.compute.manager
 [req-1fad8da2-2682-48a4-a390-64cb00036568 c402785616534f2096b34ce132b7d3f2
 d532a4fc2e9e4b5f83b6dec7085237e5] [instance:
 e04986a1-8f56-4dd9-9995-419e05430da3] Unexpected error while running
 command.
 Stderr: 'Permission denied, please try again.\r\nPermission denied,
 please try again.\r\nPermission denied (publickey,password).\r\n'. Setting
 instance vm_state to ERROR
 2013-08-14 15:49:48.294 ERROR nova.openstack.common.rpc.amqp
 [req-1fad8da2-2682-48a4-a390-64cb00036568 c402785616534f2096b34ce132b7d3f2
 d532a4fc2e9e4b5f83b6dec7085237e5] Exception during message handling


 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp
 Traceback (most recent call last):
 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py, line
 430, in _process_data
 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp
 rval = self.proxy.dispatch(ctxt, version, method, **args)
 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py,
 line 133, in dispatch
 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp
 return getattr(proxyobj, method)(ctxt, **kwargs)
 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/exception.py, line 117, in wrapped
 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp
 temp_level, payload)
 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/contextlib.py, line 24, in __exit__
 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp
 self.gen.next()
 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/exception.py, line 94, in wrapped
 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp
 return f(self, context, *args, **kw)
 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 209

Re: [Openstack] Error during live migration

2013-08-15 Thread Guilherme Russi
Right, I've done it all again, and now things are starting to change, but
I'm getting this now:

2013-08-15 11:18:05.705 ERROR nova.compute.manager
[req-dbbe3889-acd0-4c99-b22d-68c7005901a3 c402785616534f2096b34ce132b7d3f2
d532a4fc2e9e4b5f83b6dec7085237e5] [instance:
53f0a8ff-cd3b-4ddc-be9c-76655e8b8354] Unexpected error while running
command.
Stderr: 'Bad owner or permissions on /var/lib/nova/.ssh/config\r\n'.
Setting instance vm_state to ERROR
2013-08-15 11:18:06.198 ERROR nova.openstack.common.rpc.amqp
[req-dbbe3889-acd0-4c99-b22d-68c7005901a3 c402785616534f2096b34ce132b7d3f2
d532a4fc2e9e4b5f83b6dec7085237e5] Exception during message handling


My config file content:

StrictHostKeyChecking no

and its properties:

-rw-rw  1 nova nova  25 Ago 15 10:39 config

What am I missing?
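
ssh refuses to use a config file that is writable by anyone besides its
owner, and -rw-rw- is group-writable, so the fix matching the listing above
is:

chown nova:nova /var/lib/nova/.ssh/config
chmod 600 /var/lib/nova/.ssh/config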

Regards.

Guilherme.




2013/8/15 Guilherme Russi luisguilherme...@gmail.com

 Ok, let me write down what I've done:

 #1 - Logged in as the nova user on my first CN and generated a key with
 ssh-keygen -t dsa at the /var/lib/nova/.ssh location;
 #2 - Copied the id_dsa.pub from my first CN to my second CN at the
 /var/lib/nova/.ssh location;
 #3 - I'm sure I don't have the directive AllowUsers in my
 /etc/ssh/sshd_config
 #4 - On my two CNs I have, inside /etc/passwd,
 nova:x:123:131::/var/lib/nova:/bin/sh

 Another question: do I need to copy the .pub key to my controller node
 too? And do I need to create one key on each compute node and copy the
 .pub to the other one, or just create one key and copy the .pub?

 I'm making a mess of this keys thing :(

 Thank you all.

 Guilherme.


 2013/8/15 Razique Mahroua razique.mahr...@gmail.com

 I was so convinced we resolved your issue Guilherme
 but maybe it was someone else :)
 the error simply means the user nova from the first CN cannot connect
 as the user nova to the second CN it needs to send the image to
 Here are few checks :
 #1 - Exchange all nova's public keys between all compute nodes
 #2 - Make sure the connection with/from that user is allowed (that you
 don't have a directive such as AllowUsers in /etc/ssh/sshd_config)
 #3 - Make sure in /etc/passwd, the nova user has a shell


 *Razique Mahroua** - **Nuage  Co*
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 15 août 2013 à 09:39, james jsha...@gmail.com a écrit :

 Piggybacking on the answer before me, but yes, I had the same problem.
 Setting up ssh keys for the user 'nova' between nodes will resolve the
 problem for you.


 Kind Regards,

 James Shaw


 On 15 August 2013 02:10, Md. Maruful Hassan mrf@gmail.com wrote:

 I have not done the live-migration setup before but looking at the log
 it seems like you don't have ssh setup right between your compute nodes.

 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp Command:
 * ssh 10.3.77.52 mkdir -p
 /var/lib/nova/instances/e04986a1-8f56-4dd9-9995-419e05430da3*
 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp Exit
 code: 255
 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp
 Stdout: ''
 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp Stderr:
 'Permission denied, please try again.\r\nPermission denied, please try
 again.\r\nPermission denied (publickey,password).\r\n'
 Set up ssh-key based passwordless access for the user 'nova' (or whatever
 user the nova process is running as) between nodes. Then try again.



 --
 m@ruf


 On Thu, Aug 15, 2013 at 4:52 AM, Guilherme Russi 
 luisguilherme...@gmail.com wrote:

 Hello guys,

  I've been facing an error with live migration since last week and I
 wonder why I can't migrate my instances. I'm using the Grizzly release and
 I got this error in my both compute nodes:


 2013-08-14 15:49:47.788 ERROR nova.compute.manager
 [req-1fad8da2-2682-48a4-a390-64cb00036568 c402785616534f2096b34ce132b7d3f2
 d532a4fc2e9e4b5f83b6dec7085237e5] [instance:
 e04986a1-8f56-4dd9-9995-419e05430da3] Unexpected error while running
 command.
 Stderr: 'Permission denied, please try again.\r\nPermission denied,
 please try again.\r\nPermission denied (publickey,password).\r\n'. Setting
 instance vm_state to ERROR
 2013-08-14 15:49:48.294 ERROR nova.openstack.common.rpc.amqp
 [req-1fad8da2-2682-48a4-a390-64cb00036568 c402785616534f2096b34ce132b7d3f2
 d532a4fc2e9e4b5f83b6dec7085237e5] Exception during message handling


 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp
 Traceback (most recent call last):
 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp
 File /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py,
 line 430, in _process_data
 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp
 rval = self.proxy.dispatch(ctxt, version, method, **args)
 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp
 File
 /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py,
 line 133, in dispatch
 2013-08-14 15:49:48.294 1700 TRACE nova.openstack.common.rpc.amqp
 return getattr

[Openstack] Live Migration with Gluster Storage

2013-08-07 Thread Guilherme Russi
Hello guys,

 I've been trying to deploy live migration in my cloud using NFS but
without success. I'd like to know if somebody has tried live migration with
Gluster Storage: does it work? Any problems when installing it? Is it easy
to install following the documentation from its website?

The only thing left for my cloud to work 100% is live migration.

Thank you all.

Guilherme.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack