Re: [Openstack] NFS + RDMA == stuck at Booting from hard disk

2013-01-14 Thread Andrew Holway
Hey,

I had NFS over RDMA working but hit this bug with O_DIRECT. I cannot remember 
the exact circumstances of it cropping up.

https://bugzilla.linux-nfs.org/show_bug.cgi?id=228
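
For anyone reproducing this, the client-side mount for NFS over RDMA normally 
looks something like the following (server name, export and mount point are 
invented; proto=rdma with port 20049 is the usual Linux client incantation):

```
# NFS over RDMA client mount (NFSv3 in this example)
mount -t nfs -o proto=rdma,port=20049,vers=3 nfsserver:/export /mnt/nfs-rdma
```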

You can work around this with KVM by explicitly specifying the block size in 
the XML. I do not know how you would implement this in OpenStack. With 
difficulty, I imagine :)

<qemu:commandline>
  <qemu:arg value='-set'/>
  <qemu:arg value='-set'/>
  <qemu:arg value='-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,logical_block_size=4096'/>
</qemu:commandline>
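
As an aside: newer libvirt versions can express the block size without 
resorting to qemu:commandline, via a blockio element on the disk (my 
understanding from the libvirt domain XML docs; needs a reasonably recent 
libvirt, and the surrounding disk definition here is purely illustrative):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/lib/nova/instances/instance-00000001/disk'/>
  <target dev='vda' bus='virtio'/>
  <blockio logical_block_size='4096' physical_block_size='4096'/>
</disk>
```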

In general RDMA is best avoided for production setups. NFS-over-RDMA support 
has been dumped from Mellanox OFED, and Red Hat support is really choppy. I saw 
weird kernel panics here and there and other general unhappiness.

IPoIB is generally fast enough as long as you have it in connected mode and set 
the frame size appropriately; try setting it a wee bit bigger than the block 
size of your filesystem.
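
For reference, the connected-mode and frame-size tweak is host configuration 
along these lines (the interface name ib0 and the 64K MTU are assumptions; 
adjust for your fabric):

```
# Put the IPoIB interface into connected mode, then raise the MTU
echo connected > /sys/class/net/ib0/mode
ip link set ib0 mtu 65520
```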

ta for now

Andrew Holway



On Jan 10, 2013, at 10:14 PM, Mark Lehrer wrote:

 
 Has anyone here been able to make Openstack + KVM work with Infiniband  
 NFS-RDMA?
 
 I have spent a couple of days here trying to make it work, but with no luck.  
 At first I thought the problem was NFS3 and lockd, but I tried NFSv4 and I 
 have the same problem.  I also disabled AppArmor just as a test but that 
 didn't help.
 
 Things work fine with Infiniband if I don't use RDMA, and I have many 
 non-libvirt QEMU-KVM VMs working fine using RDMA.
 
 When this problem occurs, the qemu process is CPU locked, strace shows calls 
 to FUTEX, select, and lots of EAGAIN messages.  If nobody else here is using 
 this setup, I'll keep digging to find out exactly which option is causing 
 this problem.
 
 Thanks,
 Mark
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





Re: [Openstack] NFS + RDMA == stuck at Booting from hard disk

2013-01-14 Thread Andrew Holway
Also,

You might be better off asking on the KVM list for this low level stuff.

http://www.linux-kvm.org/page/Lists,_IRC#Mailing_Lists









Re: [Openstack] What is the typical way to deploy OpenStack Compute with ESXi

2013-01-03 Thread Andrew Holway
Hi,

http://docs.openstack.org/trunk/openstack-compute/admin/content/vmware.html

Thanks,

Andrew
On Jan 3, 2013, at 11:38 AM, Balamurugan V G wrote:

 I would like to deploy OpenStack with two compute nodes: one using KVM (which 
 seems to be the default) and one using VMware ESXi. I am clear on how to use KVM 
 compute nodes but it's not clear how to deploy the compute node for ESXi.
 
 Assumptions:
 I can't install compute software on the ESXi server directly since ESXi runs a 
 proprietary bare-metal OS.
 I don't need any other proprietary software/license other than ESXi. That is, I 
 don't need vSphere or vCenter etc.
 
 These are some of the questions that I have:
 Do I need another physical server to install compute, which then points to the 
 ESXi host?
 Or do I need to create an Ubuntu (let's say) VM within the ESXi host and install 
 compute inside this Ubuntu VM?
 
 Any help in clarifying these will be greatly appreciated. Also if there is a 
 pointer to a deployment diagram for such a case, that will be great as well.
 
 Regards,
 Balu
 


[Openstack] Failure when creating more than n instances. networking fail.

2012-12-28 Thread Andrew Holway
Hello,

Anyone else seen this?

https://bugs.launchpad.net/openstack-ci/+bug/1094226

When creating lots of instances simultaneously there is a point where it begins 
to fall apart.

It seems to be related to the speed of the CPU. I tested this on a much slower 
E5430 and could handle about 20 instances before causing errors. My machines 
with E5-2670 can handle much more.





[Openstack] Cannot create projects. Folsom on Centos 6.3

2012-12-27 Thread Andrew Holway
Hi,

I am trying to create a new project on my fresh Folsom install on CentOS 6.3.

In Dashboard I can see the service and admin projects but I cannot edit these or 
make a new one. I see "Error: An error occurred. Please try again."

nova-manage project create and nova-manage project add etc. do not seem to 
exist as commands on this system.

I am using Gluster and this install guide: 
https://github.com/beloglazov/openstack-centos-kvm-glusterfs/

I see this in the httpd log but do not see anything relevant in the nova logs.

[Thu Dec 27 13:51:05 2012] [error] Problem instantiating action class.
[Thu Dec 27 13:51:05 2012] [error] Traceback (most recent call last):
[Thu Dec 27 13:51:05 2012] [error]   File 
/usr/lib/python2.6/site-packages/horizon/workflows/base.py, line 361, in 
action
[Thu Dec 27 13:51:05 2012] [error] context)
[Thu Dec 27 13:51:05 2012] [error]   File 
/usr/lib/python2.6/site-packages/horizon/dashboards/syspanel/projects/workflows.py,
 line 110, in __init__
[Thu Dec 27 13:51:05 2012] [error] redirect=reverse(INDEX_URL))
[Thu Dec 27 13:51:05 2012] [error]   File 
/usr/lib/python2.6/site-packages/horizon/dashboards/syspanel/projects/workflows.py,
 line 106, in __init__
[Thu Dec 27 13:51:05 2012] [error] default_role = 
api.get_default_role(self.request).id
[Thu Dec 27 13:51:05 2012] [error] AttributeError: 'NoneType' object has no 
attribute 'id'
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222] mod_wsgi 
(pid=15267): Exception occurred processing WSGI script 
'/usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi'.
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222] Traceback (most 
recent call last):
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222]   File 
/usr/lib/python2.6/site-packages/django/core/handlers/wsgi.py, line 241, in 
__call__
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222] response = 
self.get_response(request)
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222]   File 
/usr/lib/python2.6/site-packages/django/core/handlers/base.py, line 179, in 
get_response
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222] response = 
self.handle_uncaught_exception(request, resolver, sys.exc_info())
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222]   File 
/usr/lib/python2.6/site-packages/django/core/handlers/base.py, line 228, in 
handle_uncaught_exception
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222] return 
callback(request, **param_dict)
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222]   File 
/usr/lib/python2.6/site-packages/django/utils/decorators.py, line 91, in 
_wrapped_view
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222] response = 
view_func(request, *args, **kwargs)
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222]   File 
/usr/lib/python2.6/site-packages/django/views/defaults.py, line 33, in 
server_error
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222] return 
http.HttpResponseServerError(t.render(Context({})))
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 140, in render
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222] return 
self._render(context)
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 134, in _render
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222] return 
self.nodelist.render(context)
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 823, in render
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222] bit = 
self.render_node(node, context)
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 837, in 
render_node
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222] return 
node.render(context)
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222]   File 
/usr/lib/python2.6/site-packages/django/template/loader_tags.py, line 123, in 
render
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222] return 
compiled_parent._render(context)
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 134, in _render
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222] return 
self.nodelist.render(context)
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 823, in render
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222] bit = 
self.render_node(node, context)
[Thu Dec 27 13:51:05 2012] [error] [client 188.106.176.222]   File 
/usr/lib/python2.6/site-packages/django/template/base.py, line 837, in 
render_node
[Thu Dec 27 13:51:05 2012] [error] 

Re: [Openstack] Cannot create projects. Folsom on Centos 6.3

2012-12-27 Thread Andrew Holway

On Dec 27, 2012, at 4:13 PM, Julie Pichon wrote:

 I've seen something similar when the keystone default role defined in Horizon 
 doesn't actually exist in Keystone. The guide you link to suggests changing 
 the default role in Horizon to match the OS_TENANT_NAME environment variable. 
 Could you check that the value of OPENSTACK_KEYSTONE_DEFAULT_ROLE in 
 /etc/openstack-dashboard/local_settings matches one of the role names in the 
 output of 'keystone role-list'?

Hey,

It is just Dashboard that seems to have the problem. I guess there is something 
strange with users and roles going on...



/etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "admin"
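
(A quick way to cross-check the two sides, using the Folsom-era keystone CLI 
shown elsewhere in this thread; sketch only:)

```
# The role named in OPENSTACK_KEYSTONE_DEFAULT_ROLE must appear here:
keystone role-list
grep OPENSTACK_KEYSTONE_DEFAULT_ROLE /etc/openstack-dashboard/local_settings
# Horizon only picks up local_settings changes on restart:
service httpd restart
```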

## Logging is not very verbose...
[root@controller nova]# cat /var/log/keystone/keystone.log 
2012-12-20 13:54:48  WARNING [keystone.common.wsgi] Conflict occurred 
attempting to store tenant. (IntegrityError) (1062, Duplicate entry 'admin' 
for key 'name')
2012-12-20 17:08:26  WARNING [keystone.common.wsgi] Could not find user: admin
2012-12-27 11:52:36  WARNING [keystone.common.wsgi] Authorization failed. The 
request you have made requires authentication. from 127.0.0.1



[root@controller nova]# keystone tenant-list 
+--++-+
|id|  name  | enabled |
+--++-+
| 0c512648e33844cea8f957a37d5525e5 | widget-company |   True  |
| 60d672952f1b4917b90cf6821de24742 | admin  |   True  |
| bceb80c7104e475aab4b60786320a86f |service |   True  |
+--++-+
[root@controller nova]# keystone user-list
+--++-+---+
|id|  name  | enabled | email |
+--++-+---+
| 14791fc4ee364f7aa35cd8df3211dc2c |  ec2   |   True  |  None |
| 5387b83db7d14ca8843a3b16e68fc2ca | swift  |   True  |  None |
| 6a2593f0867c478bb2ece460226c3ce2 | admin  |   True  |  None |
| 784701dc5dfe41a7811f7261d8345a9a | andrew |   True  | a.hol...@syseleven.de |
| 89876a05f18c4d049a90ff6a863ce7c6 |  nova  |   True  |  None |
| bac2b9234ced458481733f98b0dacaa2 | glance |   True  |  None |
+--++-+---+
[root@controller nova]# keystone tenant-list 
+--++-+
|id|  name  | enabled |
+--++-+
| 0c512648e33844cea8f957a37d5525e5 | widget-company |   True  |
| 60d672952f1b4917b90cf6821de24742 | admin  |   True  |
| bceb80c7104e475aab4b60786320a86f |service |   True  |
+--++-+
[root@controller nova]# keystone role-list
+--++
|id|name|
+--++
| 64139d1b94214e20976489861da50bf1 | memberRole |
| bf2b003aaf7a424b967ed209a6c57215 |   admin|
+--++
[root@controller nova]# keystone user-role-list
+----------------------------------+-------+----------------------------------+----------------------------------+
|                id                |  name |             user_id              |            tenant_id             |
+----------------------------------+-------+----------------------------------+----------------------------------+
| bf2b003aaf7a424b967ed209a6c57215 | admin | 6a2593f0867c478bb2ece460226c3ce2 | 60d672952f1b4917b90cf6821de24742 |
+----------------------------------+-------+----------------------------------+----------------------------------+



Re: [Openstack] Cannot create projects. Folsom on Centos 6.3

2012-12-27 Thread Andrew Holway
 
 Thanks for all the extra information, which all looks correct to me... If you 
 set admin or memberRole as OPENSTACK_KEYSTONE_DEFAULT_ROLE and restart 
 httpd, it should work. If you've restarted httpd since setting that config 
 variable, I'm not sure why it's not. Sorry about that, hopefully someone else 
 will be able to chime in and help figure out what could be the cause.

Duh! I'd restarted everything apart from httpd :) Christmas befuddlement.

Thanks!



[Openstack] two or more NFS / gluster mounts

2012-12-20 Thread Andrew Holway
Hi,

If I have /nfs1mount and /nfs2mount, or /nfs1mount and /glustermount, can I 
control where OpenStack puts the disk files?

Thanks,

Andrew
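
For context, the two stores in question would just be ordinary mounts on the 
compute host, e.g. via /etc/fstab (server, export and volume names here are 
made up):

```
nfs1:/export/fast      /nfs1mount     nfs        defaults,_netdev  0 0
gluster1:/slow-volume  /glustermount  glusterfs  defaults,_netdev  0 0
```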



Re: [Openstack] two or more NFS / gluster mounts

2012-12-20 Thread Andrew Holway
Hi David,

It is for nova. 

I'm not sure I understand. I want to be able to say to OpenStack: "openstack, 
please install this instance (A) on this mountpoint and this instance (B) on 
this other mountpoint". I am planning on having two NFS/Gluster-based stores, 
a fast one and a slow one.

I probably will not want to say please every time :)

Thanks,

Andrew

On Dec 20, 2012, at 3:42 PM, David Busby wrote:

 Hi Andrew,
 
 Is this for glance or nova ?
 
 For nova change:
 
 state_path = /var/lib/nova
 lock_path = /var/lib/nova/tmp
 
 in your nova.conf
 
 For glance I'm unsure, may be easier to just mount gluster right onto 
 /var/lib/glance (similarly could do the same for /var/lib/nova).
 
 And just my £0.02 I've had no end of problems getting gluster to play nice 
 on small POC clusters (3 - 5 nodes, I've tried nfs tried glusterfs, tried 2 
 replica N distribute setups with many a random glusterfs death), as such I 
 have opted for using ceph.
 
 ceph's rados can also be used with cinder from the brief reading I've been 
 doing into it.
 
 
 Cheers
 
 David
 
 
 
 
 
 





[Openstack] Vlanned networking setup

2012-12-20 Thread Andrew Holway
Hi,

I am thinking about the following network setup:


+-----------------+
|  vlan101(eth0)  |
+-----------------+
+-----------------+
|     br0101      |
+-----------------+
    |        |        |
+------+ +------+ +------+
|      | |      | |      |
|  vm  | |  vm  | |  vm  |
|      | |      | |      |
+------+ +------+ +------+
    |        |        |
+-----------------+
|     br1101      |
+-----------------+
+-----------------+
|  vlan101(eth1)  |
+-----------------+

Basically public IP addresses will go over eth1 and private stuff over eth0. 
This would mean that OpenStack would have to create two VLANs and two bridges. 
Is this possible?

"Please create this VLANned network on eth0 (10.141) and create this other 
one (10.142) on eth1."

Thanks,

Andrew



Re: [Openstack] two or more NFS / gluster mounts

2012-12-20 Thread Andrew Holway
Ah, shame. You can specify different storage domains in oVirt.

On Dec 20, 2012, at 4:16 PM, David Busby wrote:

 Hi Andrew,
 
 An interesting idea, but I am unaware if nova supports storage affinity in 
 any way, it does support host affinity iirc, as a kludge you could have say 
 some nova compute nodes using your slow mount and reserve the fast mount 
 nodes as required, perhaps even defining separate zones for deployment?
 
 Cheers
 
 David
 
 
 
 
 




Re: [Openstack] two or more NFS / gluster mounts

2012-12-20 Thread Andrew Holway
Good plan.

https://blueprints.launchpad.net/openstack-ci/+spec/multiple-storage-domains


On Dec 20, 2012, at 4:25 PM, David Busby wrote:

 I may of course be entirely wrong :) which would be cool if this is 
 achievable / on the roadmap.
 
 At the very least, if this is not already in discussion, I'd raise it on 
 Launchpad as a potential feature.
 
 
 
 





Re: [Openstack] Vlanned networking setup

2012-12-20 Thread Andrew Holway
Hi Vish,

Manually creating VLANs would be quite tiresome if you are using a VLAN per 
project, and I'm not sure FlatDHCP is good for serious use in multi-tenanted 
production environments. (Thoughts?)

I tested the VLAN manager functionality and it is *really* great when you want 
to keep a customer on its own logical network with its own subnet, but if you 
want to have an instance on more than one network you seem kinda screwed. 
This starts to be a problem when you think about DMZs and proxies and stuff.

Thanks,

Andrew


On Dec 20, 2012, at 6:35 PM, Vishvananda Ishaya wrote:

 There is no need for nova to create the vlans, you could use flatdhcp and 
 manually create the vlans and specify the vlans when you create your networks:
 
 nova-manage network-create --bridge br0101 --bridge_interface eth0.101
 nova-manage network-create --bridge br1101 --bridge_interface eth1.101
 
 Note that exposing two networks to the guest can be tricky, so most people 
 just use the first bridge and do the public addresses with floating IPs:
 
 nova-manage floating-create --ip_range ip_range --interface eth1.101
 
 (no bridge is needed in this case)
 
 Vish
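
Creating those VLAN sub-interfaces by hand, per Vish's suggestion, might look 
like this (eth0/eth1 and VLAN ID 101 are taken from the diagram earlier in the 
thread; sketch only):

```
# Create 802.1q sub-interfaces for VLAN 101 on both NICs and bring them up
ip link add link eth0 name eth0.101 type vlan id 101
ip link add link eth1 name eth1.101 type vlan id 101
ip link set eth0.101 up
ip link set eth1.101 up
```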
 
 


Re: [Openstack] [swift] RAID Performance Issue

2012-12-20 Thread Andrew Holway
It's always nice to have the benefit of a nice, big, fat BBU cache :)

On Dec 21, 2012, at 12:03 AM, Chuck Thier wrote:

 Yes, that's why I was careful to clarify that I was talking about parity 
 RAID.  Performance should be fine otherwise.
 
 --
 Chuck
 
 On Wed, Dec 19, 2012 at 8:26 PM, Hua ZZ Zhang zhu...@cn.ibm.com wrote:
 Chuck, David,
 
 Thanks for your explanation and sharing.
 Since RAID 0 doesn't have parity or mirroring to provide low-level redundancy, 
 which means there's no write penalty, it can improve overall performance 
 for concurrent IO across multiple disks.
 I'm wondering if it makes sense to use that kind of RAID, without 
 parity/mirroring, to increase R/W performance and leave replication and 
 distribution to the higher level of Swift.
 
 
 
 Chuck Thier cth...@gmail.com wrote:
 
 There are a couple of things to think about when using RAID (or more
 specifically parity RAID) with swift.
 
 The first has already been identified in that the workload for swift
 is very write heavy with small random IO, which is very bad for most
 parity RAID.  In our testing, under heavy workloads, the overall RAID
 performance would degrade to be as slow as a single drive.
 
 It is very common for servers to have many hard drives (our first
 servers that we did testing with had 24 2T drives).  During testing,
 RAID rebuilds were looking like they would take 2 weeks or so, which
 was not acceptable.  While the array was in a degraded state, the
 overall performance of that box would suffer dramatically, which would
 have ripple effects across the rest of the cluster.
 
 We tried to make things work well with RAID 5 for quite a while as it
 would have made operations easier, and the code simpler since we
 wouldn't have had to handle many of the failure scenarios.
 
 Looking back, having to not rely on RAID has made swift a much more
 robust and fault tolerant platform.
 
 --
 Chuck
 
 On Wed, Dec 19, 2012 at 4:32 AM, David Busby d.bu...@saiweb.co.uk wrote:
  Hi Zang,
 
  As JuanFra points out, there's not much sense in using Swift on top of RAID, 
  as Swift handles replication itself. Extending on this, RAID introduces a 
  write penalty (http://theithollow.com/2012/03/21/understanding-raid-penalty/), 
  which in turn leads to performance issues; refer to the link for the write 
  penalty per configuration.
 
  As I recall (though this was from way back in October 2010) the suggested 
  method of deploying Swift is onto standalone XFS drives, leaving Swift to 
  handle the replication and distribution.
 
 
  Cheers
 
  David
 
 
 
 
 
 
  On Wed, Dec 19, 2012 at 9:12 AM, JuanFra Rodriguez Cardoso
  juanfra.rodriguez.card...@gmail.com wrote:
 
  Hi Zang:
 
  Basically, it makes no sense to use Swift on top of RAID because Swift
  just delivers replication schema.
 
  Regards,
  JuanFra.
 
  2012/12/19 Hua ZZ Zhang zhu...@cn.ibm.com
 
  Hi,
 
  I have read the admin document of Swift and find there's a recommendation 
  of not using RAID 5 or 6 because Swift performance degrades quickly with 
  it.
  Can anyone explain why this could happen? If the RAID is done by hardware
  RAID controller, will the performance issue still exist?
  Anyone can share such kind of experience of using RAID with Swift?
  Appreciated for any suggestion from you.
 
  -Zhang Hua
 
 




Re: [Openstack] Horizon - OfflineGenerationError

2012-12-17 Thread Andrew Holway
Hi,

I got this error too but I cannot remember what caused it.

Do you get this when you try and use the web interface?

Thanks,

Andrew


On Dec 17, 2012, at 6:05 PM, JuanFra Rodriguez Cardoso wrote:

 Hi guys:
 
 I've re-installed and re-configured my deployment again according to the 
 suggested guide github.com/beloglazov/openstack-centos-kvm-glusterfs/. 
 Exception raised:
 
 [Mon Dec 17 18:02:42 2012] [error] 
 /usr/lib/python2.6/site-packages/django/conf/__init__.py:75: 
 DeprecationWarning: The ADMIN_MEDIA_PREFIX setting has been removed; use 
 STATIC_URL instead.
 [Mon Dec 17 18:02:42 2012] [error]   use STATIC_URL instead., 
 DeprecationWarning)
 [Mon Dec 17 18:02:42 2012] [error] 
 /usr/lib/python2.6/site-packages/django/conf/__init__.py:110: 
 DeprecationWarning: The SECRET_KEY setting must not be empty.
 [Mon Dec 17 18:02:42 2012] [error]   warnings.warn(The SECRET_KEY setting 
 must not be empty., DeprecationWarning)
 [Mon Dec 17 18:02:42 2012] [error] 
 /usr/lib/python2.6/site-packages/django/core/cache/__init__.py:82: 
 DeprecationWarning: settings.CACHE_* is deprecated; use settings.CACHES 
 instead.
 [Mon Dec 17 18:02:42 2012] [error]   DeprecationWarning
 [Mon Dec 17 18:02:42 2012] [error] 
 /usr/lib/python2.6/site-packages/django/utils/translation/__init__.py:63: 
 DeprecationWarning: Translations in the project directory aren't supported 
 anymore. Use the LOCALE_PATHS setting instead.
 [Mon Dec 17 18:02:42 2012] [error]   DeprecationWarning)
 [Mon Dec 17 18:02:42 2012] [error] 
 /usr/lib/python2.6/site-packages/django/template/defaulttags.py:1235: 
 DeprecationWarning: The syntax for the url template tag is changing. Load the 
 `url` tag from the `future` tag library to start using the new behavior.
 [Mon Dec 17 18:02:42 2012] [error]   category=DeprecationWarning)
 [Mon Dec 17 17:02:59 2012] [error] 
 /usr/lib/python2.6/site-packages/django/contrib/auth/__init__.py:26: 
 DeprecationWarning: Authentication backends without a 
 `supports_inactive_user` attribute are deprecated. Please define it in class 
 'openstack_auth.backend.KeystoneBackend'.
 [Mon Dec 17 17:02:59 2012] [error]   DeprecationWarning)
 [Mon Dec 17 17:02:59 2012] [error] unable to retrieve service catalog with 
 token
 [Mon Dec 17 17:02:59 2012] [error] Traceback (most recent call last):
 [Mon Dec 17 17:02:59 2012] [error]   File 
 /usr/lib/python2.6/site-packages/keystoneclient/v2_0/client.py, line 135, 
 in _extract_service_catalog
 [Mon Dec 17 17:02:59 2012] [error] endpoint_type='adminURL')
 [Mon Dec 17 17:02:59 2012] [error]   File 
 /usr/lib/python2.6/site-packages/keystoneclient/service_catalog.py, line 
 73, in url_for
 [Mon Dec 17 17:02:59 2012] [error] raise 
 exceptions.EndpointNotFound('Endpoint not found.')
 [Mon Dec 17 17:02:59 2012] [error] EndpointNotFound: Endpoint not found.
 
 any idea?
 
 2012/12/14 JuanFra Rodriguez Cardoso juanfra.rodriguez.card...@gmail.com
 Ok. I will continue trying to solve these errors with your suggestions.
 I'll tell you any result.
 
  Thanks @Matthias @Andres for your support.
 
 Regards!
 JuanFra
 
 
 2012/12/14 Andrew Holway a.hol...@syseleven.de
 
 On Dec 14, 2012, at 12:45 PM, JuanFra Rodriguez Cardoso wrote:
 
   @Andrew: Yes, I knew of these great guides. I had Essex 2012.1.3 (EPEL 6.7) 
   working OK on CentOS 6.3, but with 2012.2 (EPEL 6.7) I'm getting errors 
   with Django/Horizon.
 
  Mine is working alright. I expect you have some silly misconfiguration 
  somewhere. It took me four attempts to get a working install, and it worked 
  only when I followed the install guide to the letter.
 
 
 
   What release are you running? Essex or Folsom?
   Do you know if it's possible to install previous OpenStack RPM packages 
   from EPEL 6.7 (i.e. openstack-nova-2012.1.3-...)?
 
  Folsom. Why would you want to install previous OpenStack packages? I think 
  you might have to use a different EPEL repo for earlier versions; 
  openstack-nova-2012.2-2 seems to be the only version available.
 
 Ta
 
 Andrew
 
 
 
  Thanks for your support!
   JuanFra
 
  2012/12/14 Andrew Holway a.hol...@syseleven.de
  Hi,
 
  This worked perfectly on Centos 6.3.
 
  github.com/beloglazov/openstack-centos-kvm-glusterfs/
 
  The hostname stuff can trip you up however. Watch out for the scripts 
  creating user@controller users in the database for keystone, nova, glance 
  et al. It seems user@localhost would be more sensible.
 
  Take care,
 
  Andrew
 
 
 
  On Dec 13, 2012, at 12:24 PM, JuanFra Rodriguez Cardoso wrote:
 
   Hi all:
  
    I'm installing OpenStack Dashboard 2012.2 on CentOS 6.3 and I got the 
    following error related to css/js compression:
  
   File /usr/lib/python2.6/site-packages/django/template/base.py, line 
   837, in render_node
   [Thu Dec 13 11:58:37 2012] [error] [client 192.10.1.36] return 
   node.render(context)
   [Thu Dec 13 11:58:37 2012] [error] [client 192.10.1.36]   File 
   /usr/lib/python2.6/site-packages/compressor/templatetags/compress.py, 
   line 147, in render

Re: [Openstack] Horizon - OfflineGenerationError

2012-12-14 Thread Andrew Holway
Hi,

This worked perfectly on Centos 6.3. 

github.com/beloglazov/openstack-centos-kvm-glusterfs/

The hostname stuff can trip you up however. Watch out for the scripts creating 
user@controller users in the database for keystone, nova, glance et al. It 
seems user@localhost would be more sensible.

Take care,

Andrew



On Dec 13, 2012, at 12:24 PM, JuanFra Rodriguez Cardoso wrote:

 Hi all:
 
  I'm installing OpenStack Dashboard 2012.2 on CentOS 6.3 and I got the 
  following error related to css/js compression:
 
 File /usr/lib/python2.6/site-packages/django/template/base.py, line 837, in 
 render_node
 [Thu Dec 13 11:58:37 2012] [error] [client 192.10.1.36] return 
 node.render(context)
 [Thu Dec 13 11:58:37 2012] [error] [client 192.10.1.36]   File 
 /usr/lib/python2.6/site-packages/compressor/templatetags/compress.py, line 
 147, in render
 [Thu Dec 13 11:58:37 2012] [error] [client 192.10.1.36] return 
 self.render_compressed(context, self.kind, self.mode, forced=forced)
 [Thu Dec 13 11:58:37 2012] [error] [client 192.10.1.36]   File 
 /usr/lib/python2.6/site-packages/compressor/templatetags/compress.py, line 
 88, in render_compressed
 [Thu Dec 13 11:58:37 2012] [error] [client 192.10.1.36] cached_offline = 
 self.render_offline(context, forced=forced)
 [Thu Dec 13 11:58:37 2012] [error] [client 192.10.1.36]   File 
 /usr/lib/python2.6/site-packages/compressor/templatetags/compress.py, line 
 72, in render_offline
 [Thu Dec 13 11:58:37 2012] [error] [client 192.10.1.36] 'You may need to 
 run python manage.py compress.' % key)
 [Thu Dec 13 11:58:37 2012] [error] [client 192.10.1.36] 
 OfflineGenerationError: You have offline compression enabled but key 
 1056718f92f8d4204721bac759b3871a is missing from offline manifest. You may 
 need to run python manage.py compress.
 [Thu Dec 13 11:58:37 2012] [error] [client 192.10.1.36] File does not exist: 
 /var/www/html/favicon.ico
 
 any idea for solving it?
 
 Thanks,
 JuanFra.
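 
 The OfflineGenerationError text itself points at the fix: the offline
 manifest needs regenerating. A hedged sketch, assuming the manage.py path
 used by the CentOS/RDO openstack-dashboard package (adjust if your layout
 differs):
 
 ```shell
 # Rebuild static files and django-compressor's offline manifest for
 # Horizon, then restart Apache so mod_wsgi picks up the new manifest.
 MANAGE=/usr/share/openstack-dashboard/manage.py   # assumed RDO location
 if [ -f "$MANAGE" ]; then
     python "$MANAGE" collectstatic --noinput
     python "$MANAGE" compress
     service httpd restart
 else
     echo "openstack-dashboard not found at $MANAGE; nothing to do"
 fi
 DONE=yes
 ```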



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Horizon - OfflineGenerationError

2012-12-14 Thread Andrew Holway

On Dec 14, 2012, at 12:45 PM, JuanFra Rodriguez Cardoso wrote:

  @Andrew: Yes, I knew of these great guides. I had Essex 2012.1.3 (EPEL 6.7) 
  working OK on CentOS 6.3, but with 2012.2 (EPEL 6.7) I'm getting errors with 
  Django/Horizon.

Mine is working alright. I expect you have some silly misconfiguration 
somewhere. It took me four attempts to get a working install, and it worked only 
when I followed the install guide to the letter.


 
 What release are you running? Essex or Folsom?
 Do you know if it's possible to install previous OpenStack RPM packages from 
 EPEL 6.7 (i.e. openstack-nova-2012.1.3-...)?

Folsom. Why would you want to install previous OpenStack packages? I think you 
might have to use a different EPEL repo for earlier versions; 
openstack-nova-2012.2-2 seems to be the only version available.
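
To check which versions a repo actually carries, a quick sketch (package name as in the thread; run on the CentOS host):

```shell
# Ask yum which openstack-nova versions the enabled repos provide;
# --showduplicates lists every available version, not just the newest,
# so you can see whether an Essex build (2012.1.3) is still published.
if command -v yum >/dev/null 2>&1; then
    yum --showduplicates list openstack-nova 2>/dev/null || true
else
    echo "yum not available on this machine; run this on the CentOS host"
fi
DONE=yes
```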

Ta

Andrew


 
 Thanks for your support!
  JuanFra
 
 2012/12/14 Andrew Holway a.hol...@syseleven.de
 Hi,
 
 This worked perfectly on Centos 6.3.
 
 github.com/beloglazov/openstack-centos-kvm-glusterfs/
 
 The hostname stuff can trip you up however. Watch out for the scripts 
 creating user@controller users in the database for keystone, nova, glance et 
 al. It seems user@localhost would be more sensible.
 
 Take care,
 
 Andrew
 
 
 
 On Dec 13, 2012, at 12:24 PM, JuanFra Rodriguez Cardoso wrote:
 
  Hi all:
 
   I'm installing OpenStack Dashboard 2012.2 on CentOS 6.3 and I got the 
   following error related to css/js compression:
 
  File /usr/lib/python2.6/site-packages/django/template/base.py, line 837, 
  in render_node
  [Thu Dec 13 11:58:37 2012] [error] [client 192.10.1.36] return 
  node.render(context)
  [Thu Dec 13 11:58:37 2012] [error] [client 192.10.1.36]   File 
  /usr/lib/python2.6/site-packages/compressor/templatetags/compress.py, 
  line 147, in render
  [Thu Dec 13 11:58:37 2012] [error] [client 192.10.1.36] return 
  self.render_compressed(context, self.kind, self.mode, forced=forced)
  [Thu Dec 13 11:58:37 2012] [error] [client 192.10.1.36]   File 
  /usr/lib/python2.6/site-packages/compressor/templatetags/compress.py, 
  line 88, in render_compressed
  [Thu Dec 13 11:58:37 2012] [error] [client 192.10.1.36] cached_offline 
  = self.render_offline(context, forced=forced)
  [Thu Dec 13 11:58:37 2012] [error] [client 192.10.1.36]   File 
  /usr/lib/python2.6/site-packages/compressor/templatetags/compress.py, 
  line 72, in render_offline
  [Thu Dec 13 11:58:37 2012] [error] [client 192.10.1.36] 'You may need 
  to run python manage.py compress.' % key)
  [Thu Dec 13 11:58:37 2012] [error] [client 192.10.1.36] 
  OfflineGenerationError: You have offline compression enabled but key 
  1056718f92f8d4204721bac759b3871a is missing from offline manifest. You 
  may need to run python manage.py compress.
  [Thu Dec 13 11:58:37 2012] [error] [client 192.10.1.36] File does not 
  exist: /var/www/html/favicon.ico
 
  any idea for solving it?
 
  Thanks,
  JuanFra.
 
 
 



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Removing orphaned instances

2012-12-13 Thread Andrew Holway
Hello,

I have been playing with creating and destroying instances in the GUI.

Sometimes, if I create more than 10 or so, some will get stuck in an error 
state. Is this some kind of timeout or something waiting for the image file 
perhaps?

Thanks,

Andrew




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Removing orphaned instances

2012-12-13 Thread Andrew Holway
Hey

I grepped out the last hour, where I have been doing lots of creating and 
terminating of instances. OMG, there are so many logs. It's like treacle!

http://gauntlet.sys11.net/logs/compute.log
2012-12-13 11:14:23 TRACE nova.openstack.common.rpc.amqp Timeout: Timeout while 
waiting on RPC response.
2012-12-13 11:14:23 TRACE nova.openstack.common.rpc.amqp 
2012-12-13 11:14:23 ERROR nova.compute.manager 
[req-715fd35c-793b-430f-a837-29ed171aa44f 58c4fd56b6924264b914659e7c0ef2f2 
88fe447d408d418baad31f681330a648] [instance: 
739eed94-e990-4eef-8bef-4c99e49bbc12] Instance failed network setup

http://gauntlet.sys11.net/logs/scheduler.log
http://gauntlet.sys11.net/logs/api.log
http://gauntlet.sys11.net/logs/compute.log

Ta,

Andrew




On Dec 13, 2012, at 11:08 AM, JuanFra Rodriguez Cardoso wrote:

 Hi Andrew:
 
 Could you include extracts of logs from nova-scheduler, nova-compute or 
 nova-network where those errors appear?
 
 Thanks.
 JuanFra.
 
 2012/12/13 Andrew Holway a.hol...@syseleven.de
 Hello,
 
 I have been playing with creating and destroying instances in the GUI.
 
 Sometimes, if I create more than 10 or so, some will get stuck in an error 
 state. Is this some kind of timeout or something waiting for the image file 
 perhaps?
 
 Thanks,
 
 Andrew
 
 
 
 
 



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Removing orphaned instances

2012-12-13 Thread Andrew Holway
I set up multi_host and this seems to have fixed the problem.

I suppose it was resource contention on nova-network.
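
For reference, a minimal sketch of the change described above, assuming the Folsom nova-network flag names:

```ini
# /etc/nova/nova.conf on every compute node (Folsom, nova-network).
# multi_host runs a nova-network (DHCP/NAT) instance on each compute host,
# so a single network node is no longer a bottleneck for network RPC.
[DEFAULT]
multi_host = True
```

Note that networks also carry a per-network multi_host flag, so networks created before the change may need it set as well (e.g. with --multi_host=T at creation time).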




On Dec 13, 2012, at 12:01 PM, Andrew Holway wrote:

 Hey
 
 I grepped out the last hour, where I have been doing lots of creating and 
 terminating of instances. OMG, there are so many logs. It's like treacle!
 
 http://gauntlet.sys11.net/logs/compute.log
 2012-12-13 11:14:23 TRACE nova.openstack.common.rpc.amqp Timeout: Timeout 
 while waiting on RPC response.
 2012-12-13 11:14:23 TRACE nova.openstack.common.rpc.amqp 
 2012-12-13 11:14:23 ERROR nova.compute.manager 
 [req-715fd35c-793b-430f-a837-29ed171aa44f 58c4fd56b6924264b914659e7c0ef2f2 
 88fe447d408d418baad31f681330a648] [instance: 
 739eed94-e990-4eef-8bef-4c99e49bbc12] Instance failed network setup
 
 http://gauntlet.sys11.net/logs/scheduler.log
 http://gauntlet.sys11.net/logs/api.log
 http://gauntlet.sys11.net/logs/compute.log
 
 Ta,
 
 Andrew
 
 
 
 
 On Dec 13, 2012, at 11:08 AM, JuanFra Rodriguez Cardoso wrote:
 
 Hi Andrew:
 
 Could you include extracts of logs from nova-scheduler, nova-compute or 
 nova-network where those errors appear?
 
 Thanks.
 JuanFra.
 
 2012/12/13 Andrew Holway a.hol...@syseleven.de
 Hello,
 
 I have been playing with creating and destroying instances in the GUI.
 
 Sometimes, if I create more than 10 or so, some will get stuck in an error 
 state. Is this some kind of timeout or something waiting for the image file 
 perhaps?
 
 Thanks,
 
 Andrew
 
 
 
 



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack Dashboard + WebServer

2012-12-13 Thread Andrew Holway
It's vanilla Apache httpd, AFAIK.
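
Concretely, Horizon is a Django app served through mod_wsgi. The fragment below is illustrative, based on the layout the RDO/EPEL openstack-dashboard package typically drops into Apache's conf.d (paths differ on other distros):

```apache
# /etc/httpd/conf.d/openstack-dashboard.conf (typical RDO layout, illustrative)
WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
WSGIDaemonProcess dashboard
WSGIProcessGroup dashboard
Alias /static /usr/share/openstack-dashboard/static
```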


On Dec 13, 2012, at 3:31 PM, Desta Haileselassie Hagos wrote:

 Hey guys,
 
 What sort of web server is behind the OpenStack dashboard (Horizon)? Is it 
 some sort of Apache?
 
 
 Cheers,
 
 Desta
 
 
 



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] nova list not working

2012-12-12 Thread Andrew Holway
Thanks for the clarification!


On Dec 12, 2012, at 2:59 AM, ZhiQiang Fan wrote:

 What Andy said is right: you cannot list another tenant's instances.
 But if you are an administrator, you can use nova list --all_tenants to 
 list all instances in all tenants.
 Use nova help list for more details.
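 
 The tenant scoping described above can be condensed into a shell sketch
 (tenant names are illustrative; substitute your own):
 
 ```shell
 # "nova list" is scoped to $OS_TENANT_NAME, so an admin sees nothing
 # unless the instances live in that tenant.
 if command -v nova >/dev/null 2>&1; then
     nova list || true                        # current tenant's instances only
     nova list --all_tenants || true          # admin-only: every tenant
     OS_TENANT_NAME=demo nova list || true    # scope one call to another tenant
 else
     echo "python-novaclient not installed; commands shown for illustration"
 fi
 DONE=yes
 ```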
 
 
 On Tue, Dec 11, 2012 at 11:36 PM, Andy McCrae andrew.mcc...@rackspace.co.uk 
 wrote:
 It looks like it's because your OS_TENANT_NAME is set to admin in order to
 use the hypervisor-servers option, but the instances are under another
 tenant, e.g.:
 
 root@testServer:~# nova hypervisor-servers testServer
 +--+---+---
 +-+
 | ID   | Name  | Hypervisor ID
 | Hypervisor Hostname |
 +--+---+---
 +-+
 | 300a12fb-264b-4d5a-9f43-6cf50ecfc639 | instance-0001 | 1
 | testServer |
 | 4416fdda-d77b-41a6-a0cf-9f6a6c0c3b83 | instance-0002 | 1
 | testServer |
 | 5cfc5265-3d36-4b39-9ac4-44ec06b76921 | instance-0005 | 1
 | testServer |
 +--+---+---
 +-+
 root@testServer:~# nova list
 
 root@testServer:~# export OS_TENANT_NAME=demo
 root@testServer:~# nova list
 +--+-++
 --+
 | ID   | Name| Status | Networks
   |
 +--+-++
 --+
 | 300a12fb-264b-4d5a-9f43-6cf50ecfc639 | serverOne   | ACTIVE |
 private=10.0.0.2 |
 | 5cfc5265-3d36-4b39-9ac4-44ec06b76921 | serverThree | ACTIVE |
 private=10.0.0.4 |
 | 4416fdda-d77b-41a6-a0cf-9f6a6c0c3b83 | serverTwo   | ACTIVE |
 private=10.0.0.3 |
 +--+-++
 --+
 
 Hope that helps!
 Andy
 
 
 
 
 On 12/11/12 3:29 PM, Andrew Holway a.hol...@syseleven.de wrote:
 
 Hi,
 
 Does anyone have an idea why nova list isn't working?
 
 [root@blade02 08-openstack-compute]# nova hypervisor-servers blade04
 +--+---+--
 -+-+
 | ID   | Name  | Hypervisor
 ID | Hypervisor Hostname |
 +--+---+--
 -+-+
 | 1081d0d2-4dff-4d83-8ed6-422c8ef3df97 | instance-003e | 2
  | blade04.cm.cluster  |
 | 2019d7dd-4b91-472b-9969-b651b74ffc8d | instance-003a | 2
  | blade04.cm.cluster  |
 | 94b8c171-1902-4a69-b50e-2067cd8baabb | instance-003c | 2
  | blade04.cm.cluster  |
 | a43c5de6-7221-4ae0-8400-9b316ae64200 | instance-0038 | 2
  | blade04.cm.cluster  |
 | dcc7b747-2391-42ef-96db-6da814f1db79 | instance-0040 | 2
  | blade04.cm.cluster  |
 +--+---+--
 -+-+
 [root@blade02 08-openstack-compute]# nova hypervisor-servers blade03
 +--+---+--
 -+-+
 | ID   | Name  | Hypervisor
 ID | Hypervisor Hostname |
 +--+---+--
 -+-+
 | 280c2fbd-eac6-41a7-9e4a-672dfe601436 | instance-0039 | 3
  | blade03.cm.cluster  |
 | 2cf6c1c7-7562-4366-b627-b825529f3856 | instance-003d | 3
  | blade03.cm.cluster  |
 | 6376e9d5-d69a-4edf-a0e8-073515528d26 | instance-003b | 3
  | blade03.cm.cluster  |
 | bce24c6f-46e1-4076-8118-0b26c28dd8bc | instance-003f | 3
  | blade03.cm.cluster  |
 | cfdd827b-9551-4977-9b4c-a8d0cab3c82e | instance-0037 | 3
  | blade03.cm.cluster  |
 +--+---+--
 -+-+
 [root@blade02 08-openstack-compute]# nova list
 
 [root@blade02 08-openstack-compute]#
 
 Thanks,
 
 Andrew
 
 
 
 Andy McCrae
 OpenStack Engineer II - UK
 
 Tel: +442087344108
 Mob:
 Fax: +44 20 8606 6111
 Web: www.rackspace.co.uk
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 

[Openstack] DEBUG nova.utils [-] backend

2012-12-12 Thread Andrew Holway
Hi,

2012-12-12 12:04:48 DEBUG nova.utils [-] backend module 
'nova.db.sqlalchemy.migration' from 
'/usr/lib/python2.6/site-packages/nova/db/sqlalchemy/migration.pyc' from 
(pid=14756) __get_backend /usr/lib/python2.6/site-packages/nova/utils.py:494

I see this message a lot when using the command-line nova tools.

Anything to worry about?
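
For what it's worth, that line is a DEBUG-level trace from nova.utils reporting which database backend module was loaded, not an error. It goes away if debug logging is switched off, e.g. with a nova.conf fragment like this (Folsom-era option names):

```ini
# /etc/nova/nova.conf -- quiet the command-line tools; DEBUG lines such as
# the "__get_backend" message above are only emitted when debug is on.
[DEFAULT]
debug = false
verbose = false
```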

Thanks,

Andrew

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Very slow and inoperable dashboard

2012-12-12 Thread Andrew Holway
Hello,

I have just reinstalled folsom on centos 6.3

I have a very slow and nearly inoperative dashboard. I think it might be 
related to Qpidd...?

I didn't see anything in the http error log.

Thanks,

Andrew



some logs from api.log: 

2012-12-12 17:51:51 INFO nova.api.openstack.wsgi 
[req-ffcd7cfa-70c9-40fe-963d-2f04dc6c5a93 25182ffd9fdd4a2bbb53053d2bae0190 
2ac6ca1f639944a5927f62169f8bb351] GET 
http://controller:8774/v2/2ac6ca1f639944a5927f62169f8bb351/os-floating-ips
2012-12-12 17:51:51 DEBUG nova.api.openstack.wsgi 
[req-ffcd7cfa-70c9-40fe-963d-2f04dc6c5a93 25182ffd9fdd4a2bbb53053d2bae0190 
2ac6ca1f639944a5927f62169f8bb351] No Content-Type provided in request from 
(pid=2310) get_body 
/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py:783
2012-12-12 17:51:51 DEBUG nova.openstack.common.rpc.amqp [-] Making 
asynchronous call on network ... from (pid=2310) multicall 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py:351
2012-12-12 17:51:51 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is 
0af3aad299164c12b12fad553bba27a6 from (pid=2310) multicall 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py:354
2012-12-12 17:52:52 ERROR nova.openstack.common.rpc.impl_qpid [-] Timed out 
waiting for RPC response: None
2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid Traceback (most 
recent call last):
2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_qpid.py, line 
376, in ensure
2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid return 
method(*args, **kwargs)
2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_qpid.py, line 
425, in _consume
2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid nxt_receiver 
= self.session.next_receiver(timeout=timeout)
2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid   File 
string, line 6, in next_receiver
2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid   File 
/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 663, in 
next_receiver
2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid raise Empty
2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid Empty: None
2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid 
2012-12-12 17:52:52 ERROR nova.api.openstack 
[req-ffcd7cfa-70c9-40fe-963d-2f04dc6c5a93 25182ffd9fdd4a2bbb53053d2bae0190 
2ac6ca1f639944a5927f62169f8bb351] Caught error: Timeout while waiting on RPC 
response.
2012-12-12 17:52:52 TRACE nova.api.openstack Traceback (most recent call last):
2012-12-12 17:52:52 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/__init__.py, line 78, in 
__call__
2012-12-12 17:52:52 TRACE nova.api.openstack return 
req.get_response(self.application)
2012-12-12 17:52:52 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/request.py, line 
1053, in get_response
2012-12-12 17:52:52 TRACE nova.api.openstack application, 
catch_exc_info=False)
2012-12-12 17:52:52 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/request.py, line 
1022, in call_application
2012-12-12 17:52:52 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
2012-12-12 17:52:52 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py, line 
159, in __call__
2012-12-12 17:52:52 TRACE nova.api.openstack return resp(environ, 
start_response)
2012-12-12 17:52:52 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/keystone/middleware/auth_token.py, line 278, 
in __call__
2012-12-12 17:52:52 TRACE nova.api.openstack return self.app(env, 
start_response)
2012-12-12 17:52:52 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py, line 
159, in __call__
2012-12-12 17:52:52 TRACE nova.api.openstack return resp(environ, 
start_response)
2012-12-12 17:52:52 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py, line 
159, in __call__
2012-12-12 17:52:52 TRACE nova.api.openstack return resp(environ, 
start_response)
2012-12-12 17:52:52 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py, line 
159, in __call__
2012-12-12 17:52:52 TRACE nova.api.openstack return resp(environ, 
start_response)
2012-12-12 17:52:52 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/Routes-1.12.3-py2.6.egg/routes/middleware.py,
 line 131, in __call__
2012-12-12 17:52:52 TRACE nova.api.openstack response = self.app(environ, 
start_response)
2012-12-12 17:52:52 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py, line 
159, in __call__
2012-12-12 

Re: [Openstack] Very slow and inoperable dashboard

2012-12-12 Thread Andrew Holway
P.S. It is especially slow when clicking on Overview, switching between 
Project and Admin, and logging in. 

On Dec 12, 2012, at 5:57 PM, Andrew Holway wrote:

 Hello,
 
 I have just reinstalled folsom on centos 6.3
 
 I have a very slow and nearly inoperative dashboard. I think it might be 
 related to Qpidd...?
 
 I didn't see anything in the http error log.
 
 Thanks,
 
 Andrew
 
 
 
 some logs from api.log: 
 
 2012-12-12 17:51:51 INFO nova.api.openstack.wsgi 
 [req-ffcd7cfa-70c9-40fe-963d-2f04dc6c5a93 25182ffd9fdd4a2bbb53053d2bae0190 
 2ac6ca1f639944a5927f62169f8bb351] GET 
 http://controller:8774/v2/2ac6ca1f639944a5927f62169f8bb351/os-floating-ips
 2012-12-12 17:51:51 DEBUG nova.api.openstack.wsgi 
 [req-ffcd7cfa-70c9-40fe-963d-2f04dc6c5a93 25182ffd9fdd4a2bbb53053d2bae0190 
 2ac6ca1f639944a5927f62169f8bb351] No Content-Type provided in request from 
 (pid=2310) get_body 
 /usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py:783
 2012-12-12 17:51:51 DEBUG nova.openstack.common.rpc.amqp [-] Making 
 asynchronous call on network ... from (pid=2310) multicall 
 /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py:351
 2012-12-12 17:51:51 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is 
 0af3aad299164c12b12fad553bba27a6 from (pid=2310) multicall 
 /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py:354
 2012-12-12 17:52:52 ERROR nova.openstack.common.rpc.impl_qpid [-] Timed out 
 waiting for RPC response: None
 2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid Traceback (most 
 recent call last):
 2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid   File 
 /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_qpid.py, 
 line 376, in ensure
 2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid return 
 method(*args, **kwargs)
 2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid   File 
 /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_qpid.py, 
 line 425, in _consume
 2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid 
 nxt_receiver = self.session.next_receiver(timeout=timeout)
 2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid   File 
 string, line 6, in next_receiver
 2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid   File 
 /usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 663, in 
 next_receiver
 2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid raise Empty
 2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid Empty: None
 2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid 
 2012-12-12 17:52:52 ERROR nova.api.openstack 
 [req-ffcd7cfa-70c9-40fe-963d-2f04dc6c5a93 25182ffd9fdd4a2bbb53053d2bae0190 
 2ac6ca1f639944a5927f62169f8bb351] Caught error: Timeout while waiting on RPC 
 response.
 2012-12-12 17:52:52 TRACE nova.api.openstack Traceback (most recent call 
 last):
 2012-12-12 17:52:52 TRACE nova.api.openstack   File 
 /usr/lib/python2.6/site-packages/nova/api/openstack/__init__.py, line 78, 
 in __call__
 2012-12-12 17:52:52 TRACE nova.api.openstack return 
 req.get_response(self.application)
 2012-12-12 17:52:52 TRACE nova.api.openstack   File 
 /usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/request.py, 
 line 1053, in get_response
 2012-12-12 17:52:52 TRACE nova.api.openstack application, 
 catch_exc_info=False)
 2012-12-12 17:52:52 TRACE nova.api.openstack   File 
 /usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/request.py, 
 line 1022, in call_application
 2012-12-12 17:52:52 TRACE nova.api.openstack app_iter = 
 application(self.environ, start_response)
 2012-12-12 17:52:52 TRACE nova.api.openstack   File 
 /usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py, line 
 159, in __call__
 2012-12-12 17:52:52 TRACE nova.api.openstack return resp(environ, 
 start_response)
 2012-12-12 17:52:52 TRACE nova.api.openstack   File 
 /usr/lib/python2.6/site-packages/keystone/middleware/auth_token.py, line 
 278, in __call__
 2012-12-12 17:52:52 TRACE nova.api.openstack return self.app(env, 
 start_response)
 2012-12-12 17:52:52 TRACE nova.api.openstack   File 
 /usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py, line 
 159, in __call__
 2012-12-12 17:52:52 TRACE nova.api.openstack return resp(environ, 
 start_response)
 2012-12-12 17:52:52 TRACE nova.api.openstack   File 
 /usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py, line 
 159, in __call__
 2012-12-12 17:52:52 TRACE nova.api.openstack return resp(environ, 
 start_response)
 2012-12-12 17:52:52 TRACE nova.api.openstack   File 
 /usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py, line 
 159, in __call__
 2012-12-12 17:52:52 TRACE nova.api.openstack return resp(environ, 
 start_response)
 2012-12-12 17:52:52 TRACE nova.api.openstack   File 
 /usr/lib/python2.6/site-packages/Routes-1.12.3-py2.6.egg/routes/middleware.py,
  line 131, in __call__

[Openstack] fixed - Re: Very slow and inoperable dashboard

2012-12-12 Thread Andrew Holway
I did not have the nova-network process running.

Thanks,

Andrew
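
A quick way to spot a missing service like this, sketched for a Folsom install:

```shell
# List the registered nova services and their state; a host whose
# nova-network is down shows up as XXX instead of :-) in this listing.
if command -v nova-manage >/dev/null 2>&1; then
    nova-manage service list || true
else
    echo "nova-manage not installed on this machine"
fi
DONE=yes
```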

On Dec 12, 2012, at 6:01 PM, Andrew Holway wrote:

 P.S. It is especially slow when clicking on Overview, switching between 
 Project and Admin, and logging in. 
 
 On Dec 12, 2012, at 5:57 PM, Andrew Holway wrote:
 
 Hello,
 
 I have just reinstalled folsom on centos 6.3
 
 I have a very slow and nearly inoperative dashboard. I think it might be 
 related to Qpidd...?
 
 I didn't see anything in the http error log.
 
 Thanks,
 
 Andrew
 
 
 
 some logs from api.log: 
 
 2012-12-12 17:51:51 INFO nova.api.openstack.wsgi 
 [req-ffcd7cfa-70c9-40fe-963d-2f04dc6c5a93 25182ffd9fdd4a2bbb53053d2bae0190 
 2ac6ca1f639944a5927f62169f8bb351] GET 
 http://controller:8774/v2/2ac6ca1f639944a5927f62169f8bb351/os-floating-ips
 2012-12-12 17:51:51 DEBUG nova.api.openstack.wsgi 
 [req-ffcd7cfa-70c9-40fe-963d-2f04dc6c5a93 25182ffd9fdd4a2bbb53053d2bae0190 
 2ac6ca1f639944a5927f62169f8bb351] No Content-Type provided in request from 
 (pid=2310) get_body 
 /usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py:783
 2012-12-12 17:51:51 DEBUG nova.openstack.common.rpc.amqp [-] Making 
 asynchronous call on network ... from (pid=2310) multicall 
 /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py:351
 2012-12-12 17:51:51 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is 
 0af3aad299164c12b12fad553bba27a6 from (pid=2310) multicall 
 /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py:354
 2012-12-12 17:52:52 ERROR nova.openstack.common.rpc.impl_qpid [-] Timed out 
 waiting for RPC response: None
 2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid Traceback 
 (most recent call last):
 2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid   File 
 /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_qpid.py, 
 line 376, in ensure
 2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid return 
 method(*args, **kwargs)
 2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid   File 
 /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_qpid.py, 
 line 425, in _consume
 2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid 
 nxt_receiver = self.session.next_receiver(timeout=timeout)
 2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid   File 
 string, line 6, in next_receiver
 2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid   File 
 /usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 663, in 
 next_receiver
 2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid raise Empty
 2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid Empty: None
 2012-12-12 17:52:52 TRACE nova.openstack.common.rpc.impl_qpid 
 2012-12-12 17:52:52 ERROR nova.api.openstack 
 [req-ffcd7cfa-70c9-40fe-963d-2f04dc6c5a93 25182ffd9fdd4a2bbb53053d2bae0190 
 2ac6ca1f639944a5927f62169f8bb351] Caught error: Timeout while waiting on RPC 
 response.
 2012-12-12 17:52:52 TRACE nova.api.openstack Traceback (most recent call 
 last):
 2012-12-12 17:52:52 TRACE nova.api.openstack   File 
 /usr/lib/python2.6/site-packages/nova/api/openstack/__init__.py, line 78, 
 in __call__
 2012-12-12 17:52:52 TRACE nova.api.openstack return 
 req.get_response(self.application)
 2012-12-12 17:52:52 TRACE nova.api.openstack   File 
 /usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/request.py, 
 line 1053, in get_response
 2012-12-12 17:52:52 TRACE nova.api.openstack application, 
 catch_exc_info=False)
 2012-12-12 17:52:52 TRACE nova.api.openstack   File 
 /usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/request.py, 
 line 1022, in call_application
 2012-12-12 17:52:52 TRACE nova.api.openstack app_iter = 
 application(self.environ, start_response)
 2012-12-12 17:52:52 TRACE nova.api.openstack   File 
 /usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py, line 
 159, in __call__
 2012-12-12 17:52:52 TRACE nova.api.openstack return resp(environ, 
 start_response)
 2012-12-12 17:52:52 TRACE nova.api.openstack   File 
 /usr/lib/python2.6/site-packages/keystone/middleware/auth_token.py, line 
 278, in __call__
 2012-12-12 17:52:52 TRACE nova.api.openstack return self.app(env, 
 start_response)
 2012-12-12 17:52:52 TRACE nova.api.openstack   File 
 /usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py, line 
 159, in __call__
 2012-12-12 17:52:52 TRACE nova.api.openstack return resp(environ, 
 start_response)
 2012-12-12 17:52:52 TRACE nova.api.openstack   File 
 /usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py, line 
 159, in __call__
 2012-12-12 17:52:52 TRACE nova.api.openstack return resp(environ, 
 start_response)
 2012-12-12 17:52:52 TRACE nova.api.openstack   File 
 /usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py, line 
 159, in __call__
 2012-12-12 17:52:52 TRACE nova.api.openstack return resp(environ, 
 start_response)
 2012-12-12 17:52:52 TRACE
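The 60-second gap between "MSG_ID is ..." and the "Timed out waiting for RPC response" trace above points at the qpid broker (or a dead consumer) rather than Horizon itself. A first sanity check is plain TCP reachability of the broker from the API node; the sketch below is stdlib-only and assumes the standard AMQP port 5672 and the qpid_hostname of "controller" from the nova.conf quoted elsewhere in this digest:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
        return True
    except OSError:
        return False

# On the API node you would probe the broker, e.g. port_open("controller", 5672).
# Demonstrated here against a local listener so the snippet is self-contained:
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
host, port = listener.getsockname()
print(port_open(host, port))   # True while the listener is up
listener.close()
```

If the port is reachable but calls still time out, the problem is more likely a consumer (here, the nova-network service the "network" topic call was aimed at) that is down or not subscribed to its topic.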

[Openstack] Vlans and openstack.

2012-12-12 Thread Andrew Holway
Hi,

I have two hosts in my openstack setup: blade03 and blade04. I have set up my 
openstack with VLAN networking. The instances are being created on the 
specified VLANs correctly.

The problem is that I cannot ping instances on blade03 from blade04. I can ping 
blade04 instances from blade04. I do not expect to be able to ping blade03 
instances because the nova network service is not running there.

I have experimented to make sure vlans are working on the switch. I created a 
new vlan interface on blade03 and blade04 and pinged between them quite happily.

What am I missing?

Thanks,

Andrew


[root@blade02 ~]# nova-manage network list
id  IPv4            IPv6  start address  DNS1  DNS2  VlanID  project                           uuid
1   10.142.10.0/26  None  10.142.10.3    None  None  142     88fe447d408d418baad31f681330a648  8ed0508f-d8bb-4845-8eea-ed7b12f61adc


Switch Config:

Current VLAN 142: 
name VLAN 142, ports INT11-INT14, enabled,
Protocol- empty, 
spanning tree 1
Current VLAN 143: 
name VLAN 143, ports INT11-INT14, enabled,
Protocol- empty, 
spanning tree 1


[root@blade03 instance-0011]# ifconfig eth0.143
eth0.143  Link encap:Ethernet  HWaddr 00:1A:64:5D:10:98  
  inet addr:10.145.0.1  Bcast:10.145.255.255  Mask:255.255.0.0
  inet6 addr: fe80::21a:64ff:fe5d:1098/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:9 errors:0 dropped:0 overruns:0 frame:0
  TX packets:101 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0 
  RX bytes:672 (672.0 b)  TX bytes:8241 (8.0 KiB)

[root@blade03 instance-0011]# ping 10.145.0.1
PING 10.145.0.1 (10.145.0.1) 56(84) bytes of data.
64 bytes from 10.145.0.1: icmp_seq=1 ttl=64 time=0.031 ms
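One thing worth noting: the hand-made test interface above (eth0.143, 10.145.0.1/16) is on a different VLAN and subnet than the nova network (VLAN 142, 10.142.10.0/26), so the ping test exercises the switch but not the path nova actually uses. A small consistency-check sketch, assuming nova-network's VlanManager default naming of vlan<ID>/br<ID> interfaces (an assumption, not confirmed in this thread):

```python
import ipaddress

def expected_names(vlan_id):
    # Interface/bridge names nova-network's VlanManager typically creates
    # (assumption: default naming, not verified against this install).
    return ("vlan%d" % vlan_id, "br%d" % vlan_id)

def addr_in_fixed_range(addr, cidr):
    # Does a host address fall inside the project's fixed range?
    return ipaddress.ip_address(addr) in ipaddress.ip_network(cidr)

# Values from the 'nova-manage network list' output above:
print(expected_names(142))                                   # ('vlan142', 'br142')
print(addr_in_fixed_range("10.142.10.3", "10.142.10.0/26"))  # True
# The manual test interface eth0.143 / 10.145.0.1 is on another VLAN
# and subnet entirely:
print(addr_in_fixed_range("10.145.0.1", "10.142.10.0/26"))   # False
```

So a more faithful test would be to ping between the vlan142/br142 addresses on the two blades, which is the traffic the switch ports actually need to trunk.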










___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Vlans and openstack.

2012-12-12 Thread Andrew Holway
Hi,

Yes, it appears I misconfigured that VLAN.

Thanks,

Andrew
 
On Dec 12, 2012, at 8:06 PM, Matt Joyce wrote:

 Check your switch.
 
 Make sure the ports are trunked.  Make sure they have access to the vlans 
 desired.  All ports.
 
 
 On Wed, Dec 12, 2012 at 10:33 AM, Andrew Holway a.hol...@syseleven.de wrote:
 Hi,
 
 I have two hosts in my openstack setup: blade03 and blade04. I have set up my 
 openstack with VLAN networking. The instances are being created on the 
 specified VLANs correctly.
 
 The problem is that I cannot ping instances on blade03 from blade04. I can 
 ping blade04 instances from blade04. I do not expect to be able to ping 
 blade03 instances because the nova network service is not running there.
 
 I have experimented to make sure vlans are working on the switch. I created a 
 new vlan interface on blade03 and blade04 and pinged between them quite 
 happily.
 
 What am I missing?
 
 Thanks,
 
 Andrew
 
 
 [root@blade02 ~]# nova-manage network list
 id  IPv4IPv6start address   DNS1  
   DNS2VlanID  project uuid
 1   10.142.10.0/26  None10.142.10.3 None  
   None142 88fe447d408d418baad31f681330a648
 8ed0508f-d8bb-4845-8eea-ed7b12f61adc
 
 
 Switch Config:
 
 Current VLAN 142:
 name VLAN 142, ports INT11-INT14, enabled,
 Protocol- empty,
 spanning tree 1
 Current VLAN 143:
 name VLAN 143, ports INT11-INT14, enabled,
 Protocol- empty,
 spanning tree 1
 
 
 [root@blade03 instance-0011]# ifconfig eth0.143
 eth0.143  Link encap:Ethernet  HWaddr 00:1A:64:5D:10:98
   inet addr:10.145.0.1  Bcast:10.145.255.255  Mask:255.255.0.0
   inet6 addr: fe80::21a:64ff:fe5d:1098/64 Scope:Link
   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
   RX packets:9 errors:0 dropped:0 overruns:0 frame:0
   TX packets:101 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:0
   RX bytes:672 (672.0 b)  TX bytes:8241 (8.0 KiB)
 
 [root@blade03 instance-0011]# ping 10.145.0.1
 PING 10.145.0.1 (10.145.0.1) 56(84) bytes of data.
 64 bytes from 10.145.0.1: icmp_seq=1 ttl=64 time=0.031 ms
 
 
 
 
 
 
 
 
 
 
 





[Openstack] nova list not working

2012-12-11 Thread Andrew Holway
Hi,

Does anyone have an idea why nova list isn't working?

[root@blade02 08-openstack-compute]# nova hypervisor-servers blade04
+--+---+---+-+
| ID   | Name  | Hypervisor ID | 
Hypervisor Hostname |
+--+---+---+-+
| 1081d0d2-4dff-4d83-8ed6-422c8ef3df97 | instance-003e | 2 | 
blade04.cm.cluster  |
| 2019d7dd-4b91-472b-9969-b651b74ffc8d | instance-003a | 2 | 
blade04.cm.cluster  |
| 94b8c171-1902-4a69-b50e-2067cd8baabb | instance-003c | 2 | 
blade04.cm.cluster  |
| a43c5de6-7221-4ae0-8400-9b316ae64200 | instance-0038 | 2 | 
blade04.cm.cluster  |
| dcc7b747-2391-42ef-96db-6da814f1db79 | instance-0040 | 2 | 
blade04.cm.cluster  |
+--+---+---+-+
[root@blade02 08-openstack-compute]# nova hypervisor-servers blade03
+--+---+---+-+
| ID   | Name  | Hypervisor ID | 
Hypervisor Hostname |
+--+---+---+-+
| 280c2fbd-eac6-41a7-9e4a-672dfe601436 | instance-0039 | 3 | 
blade03.cm.cluster  |
| 2cf6c1c7-7562-4366-b627-b825529f3856 | instance-003d | 3 | 
blade03.cm.cluster  |
| 6376e9d5-d69a-4edf-a0e8-073515528d26 | instance-003b | 3 | 
blade03.cm.cluster  |
| bce24c6f-46e1-4076-8118-0b26c28dd8bc | instance-003f | 3 | 
blade03.cm.cluster  |
| cfdd827b-9551-4977-9b4c-a8d0cab3c82e | instance-0037 | 3 | 
blade03.cm.cluster  |
+--+---+---+-+
[root@blade02 08-openstack-compute]# nova list

[root@blade02 08-openstack-compute]# 

Thanks,

Andrew
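The thread ends without a resolution. One common cause, offered here as an assumption rather than a confirmed diagnosis, is tenant scoping: nova list only returns instances owned by the tenant you authenticated as, while nova hypervisor-servers is an admin-wide view. A rough model of the server-side filtering:

```python
def list_servers(instances, project_id, all_tenants=False):
    # Rough model of API-side scoping: without all_tenants, only the
    # caller's own project's instances are returned.
    if all_tenants:
        return [i["name"] for i in instances]
    return [i["name"] for i in instances if i["project_id"] == project_id]

# Project id taken from the 'nova-manage network list' output elsewhere
# in this digest; illustrative only.
instances = [
    {"name": "instance-003e", "project_id": "88fe447d408d418baad31f681330a648"},
    {"name": "instance-0039", "project_id": "88fe447d408d418baad31f681330a648"},
]
print(list_servers(instances, "some-other-tenant"))        # [] -- like 'nova list' here
print(list_servers(instances, "some-other-tenant", True))  # all instances
```

If that is the cause, running as admin with 'nova list --all-tenants 1' should show the instances regardless of which tenant owns them.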



[Openstack] Something horribly wrong with NFS

2012-12-11 Thread Andrew Holway
Hello,

I tried this today:

http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-migrations.html

Everything seemed to break really horribly.

Is this documentation up to date? I am going to completely reinstall tomorrow.

I am using Centos 6.3

Thanks,

Andrew



Re: [Openstack] Something horribly wrong with NFS

2012-12-11 Thread Andrew Holway
My distribution of nodes seemed to be okay.

The VNC stopped working, however, and in the GUI all the nodes went into 
Deleting state but were not actually deleting.

I'll see how it looks in the morning after a reinstall. I think with something 
like this you have to reinstall 20 times before you get it right anyway :)



On Dec 11, 2012, at 6:11 PM, Marco CONSONNI wrote:

 Hi all,
 
 some more information for further investigation on the horrible behavior.
 
 The phenomenon seems to be related to the concurrent access of the shared 
 file system.
 
 In fact, in my NFS deployment I noticed that problems started when a new VM 
 was booted on a node different from the one where the already running VMs 
 were booted (see the scheduling policy I briefly mentioned in my previous 
 e-mail).
 
 Let me describe what I experimented:
 
 1) I launched Vm1 and it started to Node1
 2) I launched Vm2 and it started to Node1
 
 ...
 
 8) I launched Vm8 and it started to Node1
 
  at this point Node1 was full 
 
 9) I launched Vm9 and it started to Node2
 
  at this point the cloud was stuck (I couldn't start new VMs and the 
 already running VMs didn't perform properly) 
 
 The only action I was able to do was to delete VMs.
 
 
 
 
 
 
 Hope it helps,
 Marco,
 
 
 On Tue, Dec 11, 2012 at 5:54 PM, Marco CONSONNI mcocm...@gmail.com wrote:
 Hello Andrew,
 
 using NFS for live migration I found strange behaviors too.
 
 To be more specific, I noted that at a certain point I couldn't boot any new 
 VM. 
 Live migration in itself was fine provided that I didn't reach a number of 
 concurrent VMs; the problem was that after a number of VMs (in my case 8) the 
 cloud stopped working. I couldn't do much but stop VMs.
 In my case I set up the scheduling in a way that all the VMs were 
 'dispatched' to a compute node till such a node was 'full'.
 
 At the end I decided not to use it and use gluster, instead.
 
 I followed the instructions reported here 
 http://gluster.org/community/documentation//index.php/OSConnect with some 
 modifications for having a single node running as a gluster server and all 
 the compute nodes running as gluster clients.
 Something similar to what you deploy when you use NFS.
 Note that my deployment is not going to be a production installation, 
 therefore a single gluster file server is OK.
 
 Also note that I started my installation using Ubuntu 12.04, but the migration 
 didn't work properly due to problems in the hypervisor. With 12.10 everything 
 worked properly.
 
 Hope it helps,
 Marco.
 
 
 
 
 
 
 On Tue, Dec 11, 2012 at 5:33 PM, Andrew Holway a.hol...@syseleven.de wrote:
 Hello,
 
 I tried this today:
 
 http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-migrations.html
 
 Everything seemed to break really horribly.
 
 Is this documentation up to date? I am going to completely reinstall tomorrow.
 
 I am using Centos 6.3
 
 Thanks,
 
 Andrew
 
 
 





[Openstack] CRITICAL nova [-] [Errno 98] Address already in use

2012-12-10 Thread Andrew Holway
Hi,

I cannot start the nova-api service.

[root@blade02 07-openstack-controller]# nova list
ERROR: ConnectionRefused: '[Errno 111] Connection refused'

I followed this guide very carefully:

https://github.com/beloglazov/openstack-centos-kvm-glusterfs/#07-openstack-controller-controller

Here is api.log

2012-12-10 17:51:31 DEBUG nova.wsgi [-] Loading app metadata from 
/etc/nova/api-paste.ini from (pid=2536) load_app 
/usr/lib/python2.6/site-packages/nova/wsgi.py:371
2012-12-10 17:51:31 CRITICAL nova [-] [Errno 98] Address already in use
2012-12-10 17:51:31 TRACE nova Traceback (most recent call last):
2012-12-10 17:51:31 TRACE nova   File /usr/bin/nova-api, line 50, in module
2012-12-10 17:51:31 TRACE nova server = service.WSGIService(api)
2012-12-10 17:51:31 TRACE nova   File 
/usr/lib/python2.6/site-packages/nova/service.py, line 584, in __init__
2012-12-10 17:51:31 TRACE nova port=self.port)
2012-12-10 17:51:31 TRACE nova   File 
/usr/lib/python2.6/site-packages/nova/wsgi.py, line 72, in __init__
2012-12-10 17:51:31 TRACE nova self._socket = eventlet.listen((host, port), 
backlog=backlog)
2012-12-10 17:51:31 TRACE nova   File 
/usr/lib/python2.6/site-packages/eventlet/convenience.py, line 38, in listen
2012-12-10 17:51:31 TRACE nova sock.bind(addr)
2012-12-10 17:51:31 TRACE nova   File string, line 1, in bind
2012-12-10 17:51:31 TRACE nova error: [Errno 98] Address already in use
2012-12-10 17:51:31 TRACE nova 
2012-12-10 17:51:31 INFO nova.service [-] Parent process has died unexpectedly, 
exiting
2012-12-10 17:51:31 INFO nova.service [-] Parent process has died unexpectedly, 
exiting
2012-12-10 17:51:31 INFO nova.wsgi [-] Stopping WSGI server.
2012-12-10 17:51:31 INFO nova.wsgi [-] Stopping WSGI server.

[root@blade02 07-openstack-controller]# cat /etc/nova/nova.conf 
[DEFAULT]
logdir = /var/log/nova
state_path = /var/lib/nova
lock_path = /var/lib/nova/tmp
volumes_dir = /etc/nova/volumes
dhcpbridge = /usr/bin/nova-dhcpbridge
dhcpbridge_flagfile = /etc/nova/nova.conf
force_dhcp_release = False
injected_network_template = /usr/share/nova/interfaces.template
libvirt_nonblocking = True
libvirt_inject_partition = -1
network_manager = nova.network.manager.FlatDHCPManager
iscsi_helper = tgtadm
sql_connection = mysql://nova:x7deix7dei@controller/nova
compute_driver = libvirt.LibvirtDriver
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
rpc_backend = nova.openstack.common.rpc.impl_qpid
rootwrap_config = /etc/nova/rootwrap.conf
verbose = True
auth_strategy = keystone
qpid_hostname = controller
network_host = compute1
fixed_range = 10.0.0.0/24
flat_interface = eth1
flat_network_bridge = br100
public_interface = eth1
glance_host = controller
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = controller
novncproxy_base_url = http://37.123.104.3:6080/vnc_auto.html
xvpvncproxy_base_url = http://37.123.104.3:6081/console
metadata_host = 10.141.6.2
enabled_apis=ec2,osapi_compute,metadata

#[keystone_authtoken]
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
signing_dirname = /tmp/keystone-signing-nova

There is no process using port 8774.

[root@blade02 07-openstack-controller]# netstat -tunlp | grep 877
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      2157/python
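Note that the netstat output actually shows port 8775 (the metadata API), not 8774, already held by another python process, which matches the "Address already in use" trace. The failure mode itself is easy to reproduce with the stdlib:

```python
import errno
import socket

# Reproduce the failure mode from the traceback above: binding a second
# listening socket to a port that is already held fails with EADDRINUSE
# (errno 98 on Linux), which is what eventlet.listen() hit.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))           # let the kernel pick a free port
first.listen(1)
port = first.getsockname()[1]

second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
caught = None
try:
    second.bind(("127.0.0.1", port))   # second bind on the same port
except OSError as e:
    caught = e
finally:
    second.close()
    first.close()

print(caught.errno == errno.EADDRINUSE)  # True
```

So the question is which nova API service is already listening on 8775 when nova-api starts.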

Maybe it is something similar to:

https://bugzilla.redhat.com/show_bug.cgi?id=877606#c3

Thanks,

Andrew






Re: [Openstack] CRITICAL nova [-] [Errno 98] Address already in use

2012-12-10 Thread Andrew Holway
Hi,

maybe this will shed some light on it..?

Thanks,

Andrew

[root@blade02 init.d]# cat /etc/nova/api-paste.ini 

# Metadata #

[composite:metadata]
use = egg:Paste#urlmap
/: meta

[pipeline:meta]
pipeline = ec2faultwrap logrequest metaapp

[app:metaapp]
paste.app_factory = nova.api.metadata.handler:MetadataRequestHandler.factory

###
# EC2 #
###

[composite:ec2]
use = egg:Paste#urlmap
/services/Cloud: ec2cloud

[composite:ec2cloud]
use = call:nova.api.auth:pipeline_factory
noauth = ec2faultwrap logrequest ec2noauth cloudrequest validator ec2executor
keystone = ec2faultwrap logrequest ec2keystoneauth cloudrequest validator 
ec2executor

[filter:ec2faultwrap]
paste.filter_factory = nova.api.ec2:FaultWrapper.factory

[filter:logrequest]
paste.filter_factory = nova.api.ec2:RequestLogging.factory

[filter:ec2lockout]
paste.filter_factory = nova.api.ec2:Lockout.factory

[filter:ec2keystoneauth]
paste.filter_factory = nova.api.ec2:EC2KeystoneAuth.factory

[filter:ec2noauth]
paste.filter_factory = nova.api.ec2:NoAuth.factory

[filter:cloudrequest]
controller = nova.api.ec2.cloud.CloudController
paste.filter_factory = nova.api.ec2:Requestify.factory

[filter:authorizer]
paste.filter_factory = nova.api.ec2:Authorizer.factory

[filter:validator]
paste.filter_factory = nova.api.ec2:Validator.factory

[app:ec2executor]
paste.app_factory = nova.api.ec2:Executor.factory

#
# Openstack #
#

[composite:osapi_compute]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: oscomputeversions
/v1.1: openstack_compute_api_v2
/v2: openstack_compute_api_v2

[composite:osapi_volume]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: osvolumeversions
/v1: openstack_volume_api_v1

[composite:openstack_compute_api_v2]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap sizelimit noauth ratelimit osapi_compute_app_v2
keystone = faultwrap sizelimit authtoken keystonecontext ratelimit 
osapi_compute_app_v2
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext 
osapi_compute_app_v2

[composite:openstack_volume_api_v1]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap sizelimit noauth ratelimit osapi_volume_app_v1
keystone = faultwrap sizelimit authtoken keystonecontext ratelimit 
osapi_volume_app_v1
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext 
osapi_volume_app_v1

[filter:faultwrap]
paste.filter_factory = nova.api.openstack:FaultWrapper.factory

[filter:noauth]
paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory

[filter:ratelimit]
paste.filter_factory = 
nova.api.openstack.compute.limits:RateLimitingMiddleware.factory

[filter:sizelimit]
paste.filter_factory = nova.api.sizelimit:RequestBodySizeLimiter.factory

[app:osapi_compute_app_v2]
paste.app_factory = nova.api.openstack.compute:APIRouter.factory

[pipeline:oscomputeversions]
pipeline = faultwrap oscomputeversionapp

[app:osapi_volume_app_v1]
paste.app_factory = nova.api.openstack.volume:APIRouter.factory

[app:oscomputeversionapp]
paste.app_factory = nova.api.openstack.compute.versions:Versions.factory

[pipeline:osvolumeversions]
pipeline = faultwrap osvolumeversionapp

[app:osvolumeversionapp]
paste.app_factory = nova.api.openstack.volume.versions:Versions.factory

##
# Shared #
##

[filter:keystonecontext]
paste.filter_factory = nova.api.auth:NovaKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
admin_tenant_name = service
admin_user = nova
admin_password = x7deix7dei
auth_uri = http://controller:5000/
On Dec 10, 2012, at 7:10 PM, Vishvananda Ishaya wrote:

 Odd. This looks remarkably like it is trying to start osapi_volume even 
 though you don't have it specified in enabled apis. Your enabled_apis setting 
 looks correct to me.
 
 Vish
 
 
 On Dec 10, 2012, at 9:24 AM, Andrew Holway a.hol...@syseleven.de wrote:
 
 Hi,
 
 I cannot start the nova-api service.
 
 [root@blade02 07-openstack-controller]# nova list
 ERROR: ConnectionRefused: '[Errno 111] Connection refused'
 
 I followed this guide very carefully:
 
 https://github.com/beloglazov/openstack-centos-kvm-glusterfs/#07-openstack-controller-controller
 
 Here is api.log
 
 2012-12-10 17:51:31 DEBUG nova.wsgi [-] Loading app metadata from 
 /etc/nova/api-paste.ini from (pid=2536) load_app 
 /usr/lib/python2.6/site-packages/nova/wsgi.py:371
 2012-12-10 17:51:31 CRITICAL nova [-] [Errno 98] Address already in use
 2012-12-10 17:51:31 TRACE nova Traceback (most recent call last):
 2012-12-10 17:51:31 TRACE nova   File /usr/bin/nova-api, line 50, in 
 module
 2012-12-10 17:51:31 TRACE nova server = service.WSGIService(api)
 2012-12-10 17:51:31 TRACE nova   File 
 /usr/lib/python2.6/site-packages/nova/service.py, line 584, in __init__
 2012-12-10 17:51:31 TRACE nova port=self.port)
 2012-12-10 17:51:31 TRACE nova   File 
 /usr/lib/python2.6/site-packages/nova/wsgi.py, line 72, in __init__
 2012-12-10 17:51

Re: [Openstack] CRITICAL nova [-] [Errno 98] Address already in use

2012-12-10 Thread Andrew Holway
Hi,

I actually have no idea how to do that. But the service opts look vaguely 
relevant:

Does anyone have a working installation on Centos 6.3?

Thanks,

Andrew



service_opts = [
cfg.IntOpt('report_interval',
   default=10,
   help='seconds between nodes reporting state to datastore'),
cfg.IntOpt('periodic_interval',
   default=60,
   help='seconds between running periodic tasks'),
cfg.IntOpt('periodic_fuzzy_delay',
   default=60,
   help='range of seconds to randomly delay when starting the'
' periodic task scheduler to reduce stampeding.'
' (Disable by setting to 0)'),
cfg.StrOpt('ec2_listen',
   default=0.0.0.0,
   help='IP address for EC2 API to listen'),
cfg.IntOpt('ec2_listen_port',
   default=8773,
   help='port for ec2 api to listen'),
cfg.IntOpt('ec2_workers',
   default=None,
   help='Number of workers for EC2 API service'),
cfg.StrOpt('osapi_compute_listen',
   default=0.0.0.0,
   help='IP address for OpenStack API to listen'),
cfg.IntOpt('osapi_compute_listen_port',
   default=8774,
   help='list port for osapi compute'),
cfg.IntOpt('osapi_compute_workers',
   default=None,
   help='Number of workers for OpenStack API service'),
cfg.StrOpt('metadata_manager',
   default='nova.api.manager.MetadataManager',
   help='OpenStack metadata service manager'),
cfg.StrOpt('metadata_listen',
   default=0.0.0.0,
   help='IP address for metadata api to listen'),
cfg.IntOpt('metadata_listen_port',
   default=8775,
   help='port for metadata api to listen'),
cfg.IntOpt('metadata_workers',
   default=None,
   help='Number of workers for metadata service'),
cfg.StrOpt('osapi_volume_listen',
   default=0.0.0.0,
   help='IP address for OpenStack Volume API to listen'),
cfg.IntOpt('osapi_volume_listen_port',
   default=8776,
   help='port for os volume api to listen'),
cfg.IntOpt('osapi_volume_workers',
   default=None,
   help='Number of workers for OpenStack Volume API service'),
]

On Dec 10, 2012, at 7:29 PM, Vishvananda Ishaya wrote:

 Nope. Best i can think of is to throw some log statements into 
 nova/service.py right before the exception gets thrown. See which api it is 
 trying to start and what it thinks the value of enabled_apis is. Etc.
 
 Vish
 
 On Dec 10, 2012, at 10:24 AM, Andrew Holway a.hol...@syseleven.de wrote:
 
 Hi,
 
 maybe this will shed some light on it..?
 
 Thanks,
 
 Andrew
 
 [root@blade02 init.d]# cat /etc/nova/api-paste.ini 
 
 # Metadata #
 
 [composite:metadata]
 use = egg:Paste#urlmap
 /: meta
 
 [pipeline:meta]
 pipeline = ec2faultwrap logrequest metaapp
 
 [app:metaapp]
 paste.app_factory = nova.api.metadata.handler:MetadataRequestHandler.factory
 
 ###
 # EC2 #
 ###
 
 [composite:ec2]
 use = egg:Paste#urlmap
 /services/Cloud: ec2cloud
 
 [composite:ec2cloud]
 use = call:nova.api.auth:pipeline_factory
 noauth = ec2faultwrap logrequest ec2noauth cloudrequest validator ec2executor
 keystone = ec2faultwrap logrequest ec2keystoneauth cloudrequest validator 
 ec2executor
 
 [filter:ec2faultwrap]
 paste.filter_factory = nova.api.ec2:FaultWrapper.factory
 
 [filter:logrequest]
 paste.filter_factory = nova.api.ec2:RequestLogging.factory
 
 [filter:ec2lockout]
 paste.filter_factory = nova.api.ec2:Lockout.factory
 
 [filter:ec2keystoneauth]
 paste.filter_factory = nova.api.ec2:EC2KeystoneAuth.factory
 
 [filter:ec2noauth]
 paste.filter_factory = nova.api.ec2:NoAuth.factory
 
 [filter:cloudrequest]
 controller = nova.api.ec2.cloud.CloudController
 paste.filter_factory = nova.api.ec2:Requestify.factory
 
 [filter:authorizer]
 paste.filter_factory = nova.api.ec2:Authorizer.factory
 
 [filter:validator]
 paste.filter_factory = nova.api.ec2:Validator.factory
 
 [app:ec2executor]
 paste.app_factory = nova.api.ec2:Executor.factory
 
 #
 # Openstack #
 #
 
 [composite:osapi_compute]
 use = call:nova.api.openstack.urlmap:urlmap_factory
 /: oscomputeversions
 /v1.1: openstack_compute_api_v2
 /v2: openstack_compute_api_v2
 
 [composite:osapi_volume]
 use = call:nova.api.openstack.urlmap:urlmap_factory
 /: osvolumeversions
 /v1: openstack_volume_api_v1
 
 [composite:openstack_compute_api_v2]
 use = call:nova.api.auth:pipeline_factory
 noauth = faultwrap sizelimit noauth ratelimit osapi_compute_app_v2
 keystone = faultwrap sizelimit authtoken keystonecontext ratelimit 
 osapi_compute_app_v2
 keystone_nolimit = faultwrap sizelimit authtoken keystonecontext 
 osapi_compute_app_v2
 
 [composite:openstack_volume_api_v1]
 use = call:nova.api.auth:pipeline_factory

Re: [Openstack] CRITICAL nova [-] [Errno 98] Address already in use

2012-12-10 Thread Andrew Holway
Hey,

Thanks, That seems to have done it!

Now I have some whole new errors to work on :)

2012-12-10 20:08:44 ERROR keystone.middleware.auth_token [-] HTTP connection 
exception: [Errno 1] _ssl.c:490: error:140770FC:SSL 
routines:SSL23_GET_SERVER_HELLO:unknown protocol
2012-12-10 20:08:44 WARNING keystone.middleware.auth_token [-] Authorization 
failed for token fbb8f2fe3fcb4155b9428b862b5bb943
2012-12-10 20:08:44 INFO keystone.middleware.auth_token [-] Invalid user token 
- rejecting request
2012-12-10 20:08:44 INFO nova.osapi_compute.wsgi.server [-] 127.0.0.1 - - 
[10/Dec/2012 20:08:44] GET /v2/3efa0ffe282c4a0c8b0b3a2812b1b4d0/servers/detail 
HTTP/1.1 401 462 0.012388

2012-12-10 20:08:44 ERROR keystone.middleware.auth_token [-] HTTP connection 
exception: [Errno 1] _ssl.c:490: error:140770FC:SSL 
routines:SSL23_GET_SERVER_HELLO:unknown protocol
2012-12-10 20:08:44 WARNING keystone.middleware.auth_token [-] Authorization 
failed for token e6f30024fc1142638452854f03a7735c
2012-12-10 20:08:44 INFO keystone.middleware.auth_token [-] Invalid user token 
- rejecting request
2012-12-10 20:08:44 INFO nova.osapi_compute.wsgi.server [-] 127.0.0.1 - - 
[10/Dec/2012 20:08:44] GET /v2/3efa0ffe282c4a0c8b0b3a2812b1b4d0/servers/detail 
HTTP/1.1 401 462 0.002427

Take care,

Andrew
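For reference, Vish's diagnosis quoted below amounts to: nova-api starts one listener per entry in enabled_apis, so a separately running nova-api-metadata collides with the "metadata" entry on port 8775. A rough, hypothetical model of that option handling (not nova's actual code), using stdlib configparser:

```python
import configparser

NOVA_CONF = """\
[DEFAULT]
enabled_apis = ec2,osapi_compute,metadata
"""

def apis_to_start(conf_text, default="ec2,osapi_compute,metadata"):
    # Which API listeners a single nova-api process would try to launch,
    # based on the enabled_apis option (a rough model, not nova's code).
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    raw = cp.get("DEFAULT", "enabled_apis", fallback=default)
    return [a.strip() for a in raw.split(",") if a.strip()]

print(apis_to_start(NOVA_CONF))
# With a standalone nova-api-metadata also running, the 'metadata' entry
# makes nova-api try to bind 8775 a second time -> errno 98.
```

Dropping "metadata" from the list (or killing the standalone service) removes the double bind, which matches what fixed it above.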


On Dec 10, 2012, at 8:01 PM, Vishvananda Ishaya wrote:

 I just realized the problem. Your issue is actually the metadata api since 
 you have something listening on 8775. If you are running nova-api-metadata 
 separately then you can remove it from your list of enabled apis:
 
 enabled_apis=ec2,osapi_compute
 
 Alternatively just kill nova-api-metadata and allow it to run as one of the 
 nova-api components.
 
 Just for your reference, nova-api is an easy way to run all of the apis as 
 one service. In this case it uses the enabled_apis config option. You can 
 also run all of the apis separately by using the individual binaries:
 
 nova-api-ec2
 nova-api-metadata
 nova-api-os-compute
 nova-api-os-volume
 
 Vish
 
 On Dec 10, 2012, at 10:42 AM, Andrew Holway a.hol...@syseleven.de wrote:
 
 Hi,
 
 I have actually no idea how do do that. But the service opts look vaguely 
 relevant:
 
 Does anyone have a working installation on Centos 6.3?
 
 Thanks,
 
 Andrew
 
 
 
 service_opts = [
   cfg.IntOpt('report_interval',
  default=10,
  help='seconds between nodes reporting state to datastore'),
   cfg.IntOpt('periodic_interval',
  default=60,
  help='seconds between running periodic tasks'),
   cfg.IntOpt('periodic_fuzzy_delay',
  default=60,
  help='range of seconds to randomly delay when starting the'
   ' periodic task scheduler to reduce stampeding.'
   ' (Disable by setting to 0)'),
   cfg.StrOpt('ec2_listen',
  default=0.0.0.0,
  help='IP address for EC2 API to listen'),
   cfg.IntOpt('ec2_listen_port',
  default=8773,
  help='port for ec2 api to listen'),
   cfg.IntOpt('ec2_workers',
  default=None,
  help='Number of workers for EC2 API service'),
   cfg.StrOpt('osapi_compute_listen',
  default=0.0.0.0,
  help='IP address for OpenStack API to listen'),
   cfg.IntOpt('osapi_compute_listen_port',
  default=8774,
  help='list port for osapi compute'),
   cfg.IntOpt('osapi_compute_workers',
  default=None,
  help='Number of workers for OpenStack API service'),
   cfg.StrOpt('metadata_manager',
  default='nova.api.manager.MetadataManager',
  help='OpenStack metadata service manager'),
   cfg.StrOpt('metadata_listen',
  default=0.0.0.0,
  help='IP address for metadata api to listen'),
   cfg.IntOpt('metadata_listen_port',
  default=8775,
  help='port for metadata api to listen'),
   cfg.IntOpt('metadata_workers',
  default=None,
  help='Number of workers for metadata service'),
   cfg.StrOpt('osapi_volume_listen',
  default=0.0.0.0,
  help='IP address for OpenStack Volume API to listen'),
   cfg.IntOpt('osapi_volume_listen_port',
  default=8776,
  help='port for os volume api to listen'),
   cfg.IntOpt('osapi_volume_workers',
  default=None,
  help='Number of workers for OpenStack Volume API service'),
   ]
 
 On Dec 10, 2012, at 7:29 PM, Vishvananda Ishaya wrote:
 
 Nope. Best i can think of is to throw some log statements into 
 nova/service.py right before the exception gets thrown. See which api it is 
 trying to start and what it thinks the value of enabled_apis is. Etc.
 
 Vish
 
 On Dec 10, 2012, at 10:24 AM, Andrew Holway a.hol...@syseleven.de wrote:
 
 Hi,
 
 maybe this will shed some light on it..?
 
 Thanks,
 
 Andrew
 
 [root@blade02 init.d]# cat /etc/nova/api-paste.ini 
 
 # Metadata

[Openstack] nova list requests failing

2012-12-10 Thread Andrew Holway
Hi,

Please excuse the dumb questions. I'm very new to openstack.

So I have just managed to get the nova-api service running (thanks Vish) but 
it seems to be having a little problem with auth. 

[root@blade02 07-openstack-controller]# nova list
ERROR: n/a (HTTP 401)
[root@blade02 07-openstack-controller]# nova --os_username=nova \
    --os_password=passw0rd --os_tenant_name=service \
    --os_auth_url=http://controller:5000/v2.0 list
ERROR: n/a (HTTP 401)

Can anyone see what is going on here?

Thanks,

Andrew



/var/log/nova/api.log
2012-12-10 20:24:49 ERROR keystone.middleware.auth_token [-] HTTP connection 
exception: [Errno 1] _ssl.c:490: error:140770FC:SSL 
routines:SSL23_GET_SERVER_HELLO:unknown protocol
2012-12-10 20:24:49 WARNING keystone.middleware.auth_token [-] Authorization 
failed for token 61d0f67e79cd4f8fbe19412e4347e22c
2012-12-10 20:24:49 INFO keystone.middleware.auth_token [-] Invalid user token 
- rejecting request
2012-12-10 20:24:49 INFO nova.osapi_compute.wsgi.server [-] 127.0.0.1 - - 
[10/Dec/2012 20:24:49] GET /v2/3338bc86707e4cdb910a3c7a6fd1a649/servers/detail 
HTTP/1.1 401 462 0.002252
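The "SSL23_GET_SERVER_HELLO:unknown protocol" error is what OpenSSL reports when a TLS client receives a plaintext reply, which usually means the auth_token middleware was pointed at https:// while keystone is serving plain http (worth checking auth_protocol/auth_uri). A minimal local reproduction with the stdlib:

```python
import socket
import ssl
import threading

# A TLS client talking to a server that answers in cleartext fails the
# handshake -- the same class of error as the keystone auth_token log above.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def answer_in_plaintext():
    conn, _ = server.accept()
    conn.recv(4096)                            # swallow the TLS ClientHello
    conn.sendall(b"HTTP/1.0 200 OK\r\n\r\n")   # plain HTTP, not a ServerHello
    conn.close()

t = threading.Thread(target=answer_in_plaintext)
t.start()

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
err = None
try:
    raw = socket.create_connection(("127.0.0.1", port))
    ctx.wrap_socket(raw)                       # TLS handshake against plain HTTP
except ssl.SSLError as e:
    err = e
finally:
    t.join(timeout=5)
    server.close()

print(type(err).__name__)  # SSLError
```

If this is the cause here, changing auth_protocol to http (or putting keystone behind real TLS) should clear the 401s.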

[root@blade02 07-openstack-controller]# cat /etc/nova/api-paste.ini

# Metadata #

[composite:metadata]
use = egg:Paste#urlmap
/: meta

[pipeline:meta]
pipeline = ec2faultwrap logrequest metaapp

[app:metaapp]
paste.app_factory = nova.api.metadata.handler:MetadataRequestHandler.factory

###
# EC2 #
###

[composite:ec2]
use = egg:Paste#urlmap
/services/Cloud: ec2cloud

[composite:ec2cloud]
use = call:nova.api.auth:pipeline_factory
noauth = ec2faultwrap logrequest ec2noauth cloudrequest validator ec2executor
keystone = ec2faultwrap logrequest ec2keystoneauth cloudrequest validator 
ec2executor

[filter:ec2faultwrap]
paste.filter_factory = nova.api.ec2:FaultWrapper.factory

[filter:logrequest]
paste.filter_factory = nova.api.ec2:RequestLogging.factory

[filter:ec2lockout]
paste.filter_factory = nova.api.ec2:Lockout.factory

[filter:ec2keystoneauth]
paste.filter_factory = nova.api.ec2:EC2KeystoneAuth.factory

[filter:ec2noauth]
paste.filter_factory = nova.api.ec2:NoAuth.factory

[filter:cloudrequest]
controller = nova.api.ec2.cloud.CloudController
paste.filter_factory = nova.api.ec2:Requestify.factory

[filter:authorizer]
paste.filter_factory = nova.api.ec2:Authorizer.factory

[filter:validator]
paste.filter_factory = nova.api.ec2:Validator.factory

[app:ec2executor]
paste.app_factory = nova.api.ec2:Executor.factory

#
# Openstack #
#

[composite:osapi_compute]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: oscomputeversions
/v1.1: openstack_compute_api_v2
/v2: openstack_compute_api_v2

[composite:osapi_volume]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: osvolumeversions
/v1: openstack_volume_api_v1

[composite:openstack_compute_api_v2]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap sizelimit noauth ratelimit osapi_compute_app_v2
keystone = faultwrap sizelimit authtoken keystonecontext ratelimit 
osapi_compute_app_v2
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext 
osapi_compute_app_v2

[composite:openstack_volume_api_v1]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap sizelimit noauth ratelimit osapi_volume_app_v1
keystone = faultwrap sizelimit authtoken keystonecontext ratelimit 
osapi_volume_app_v1
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext 
osapi_volume_app_v1

[filter:faultwrap]
paste.filter_factory = nova.api.openstack:FaultWrapper.factory

[filter:noauth]
paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory

[filter:ratelimit]
paste.filter_factory = 
nova.api.openstack.compute.limits:RateLimitingMiddleware.factory

[filter:sizelimit]
paste.filter_factory = nova.api.sizelimit:RequestBodySizeLimiter.factory

[app:osapi_compute_app_v2]
paste.app_factory = nova.api.openstack.compute:APIRouter.factory

[pipeline:oscomputeversions]
pipeline = faultwrap oscomputeversionapp

[app:osapi_volume_app_v1]
paste.app_factory = nova.api.openstack.volume:APIRouter.factory

[app:oscomputeversionapp]
paste.app_factory = nova.api.openstack.compute.versions:Versions.factory

[pipeline:osvolumeversions]
pipeline = faultwrap osvolumeversionapp

[app:osvolumeversionapp]
paste.app_factory = nova.api.openstack.volume.versions:Versions.factory

##
# Shared #
##

[filter:keystonecontext]
paste.filter_factory = nova.api.auth:NovaKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
admin_tenant_name = service
admin_user = nova
admin_password = passw0rd
auth_uri = http://controller:5000/



/etc/nova/nova.conf 
[DEFAULT]
logdir = /var/log/nova
state_path = /var/lib/nova
lock_path = /var/lib/nova/tmp
volumes_dir = /etc/nova/volumes
dhcpbridge = /usr/bin/nova-dhcpbridge
dhcpbridge_flagfile = /etc/nova/nova.conf
force_dhcp_release = False
injected_network_template = 

Re: [Openstack] nova list requests failing

2012-12-10 Thread Andrew Holway
It was this little fella in my nova.conf :)

On Dec 10, 2012, at 8:35 PM, Andrew Holway wrote:

 #[keystone_authtoken]
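
For reference, the fix is uncommenting that section header so the options under it are read as keystone_authtoken settings; an illustrative fragment (values mirror the api-paste.ini shown earlier, and auth_host/auth_port/auth_protocol are assumptions for this setup):

```ini
# Illustrative only -- the header must not be commented out, or the
# options below it fall into the preceding section and are ignored.
[keystone_authtoken]
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = passw0rd
```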



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] 500 - Internal Server Error when using Volumes in Dashboard (Centos 6.3)

2012-12-10 Thread Andrew Holway
Hello,

I have a Dashboard install.

/dashboard/syspanel/volumes/ and /dashboard/nova/volumes/ cause a 500 error.

The 500 goes away when I run 

$nova-api-os-volume 


I also have a

$/etc/init.d/openstack-nova-volume start

Which doesn't make the 500 error go away.

Can someone tell me what nova-api-os-volume is, what nova-volume is and how to 
get them both properly doing their thing on my Centos 6.3 install.

The guide I am following has no mention of it: 
https://github.com/beloglazov/openstack-centos-kvm-glusterfs/

Thanks,

Andrew






Re: [Openstack] 500 - Internal Server Error when using Volumes in Dashboard (Centos 6.3)

2012-12-10 Thread Andrew Holway
Thanks for the advice. Actually I'm not planning on using block volumes at all, 
but it's nice to know how everything plugs together.

Could you tell me what would be the best way to attach NFS filers as shared 
storage? I am planning on running glusterfs on the hypervisors and then sharing 
that back to the hypervisors (if that makes sense). Do you have some tips for 
such a setup?

Thanks,

Andrew

On Dec 10, 2012, at 10:46 PM, Vishvananda Ishaya wrote:

 The recommended way is to run cinder. The config that you showed before was 
 not running osapi_volume as one of your enabled apis.
 
 Prior to folsom the way was to enable osapi_volume or run nova-api-volume. 
 The worker that processes commands is called nova-volume (similar to 
 nova-compute on the compute side). In cinder these are cinder-api and 
 cinder-volume.
 
 FYI, you don't need volumes working to use nova. It is for attachable block 
 storage devices (similar to ebs).
 
 I hope that helps.
 
 Vish
 
 On Dec 10, 2012, at 1:37 PM, Andrew Holway a.hol...@syseleven.de wrote:
 
 Hello,
 
 I have a Dashboard install.
 
 /dashboard/syspanel/volumes/ and /dashboard/nova/volumes/ cause a 500 error.
 
 The 500 goes away when I run 
 
 $nova-api-os-volume 
 
 
 I also have a
 
 $/etc/init.d/openstack-nova-volume start
 
 Which doesn't make the 500 error go away.
 
 Can someone tell me what nova-api-os-volume is, what nova-volume is and how 
 to get them both properly doing their thing on my Centos 6.3 install.
 
 The guide I am following has no mention of it: 
 https://github.com/beloglazov/openstack-centos-kvm-glusterfs/
 
 Thanks,
 
 Andrew
 
 
 
 
 


