Also, you can manually define one instance and see if it boots:

$ cd /exports/instances/instances/instance-00000242
$ virsh define libvirt.xml
$ virsh start instance-00000242

If it boots, we should start looking somewhere else.

Razique Mahroua - Nuage & Co
Tel : +33 9 72 37 94 15


On 11 Apr 2013, at 20:14, Vishvananda Ishaya <[email protected]> wrote:

You should check your syslog for AppArmor denied messages. It is possible
AppArmor is getting in the way here.
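
Something along these lines should surface any denials (assuming syslog ends up in /var/log/syslog, as on Ubuntu):

$ grep -i 'apparmor="DENIED"' /var/log/syslog
$ dmesg | grep -i apparmor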

Vish

On Apr 11, 2013, at 8:35 AM, John Paul Walters <[email protected]> wrote:

Hi Sylvain,

I agree, though I've confirmed that the UID and GID are consistent across both the compute nodes and my GlusterFS nodes.
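
(For reference, a quick sanity check on each node would be something like the following; the uid and gid that id reports should match on every compute and Gluster node:)

$ id nova
$ stat -c '%U %G (%u/%g)' /exports/instances/instances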

JP


On Apr 11, 2013, at 11:22 AM, Sylvain Bauza <[email protected]> wrote:

Agree.
As with any other shared FS, it is *highly* important to make sure the Nova UID and GID are consistent across all compute nodes.
If this is not the case, then you have to fix ownership (usermod/chown) on all instances...
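
(As a rough sketch, with the nova services stopped on that node; the uid/gid value 107 below is only an example, the real value depends on the deployment:)

$ sudo usermod -u 107 nova
$ sudo groupmod -g 107 nova
$ sudo chown -R nova:nova /var/lib/nova /exports/instances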

-Sylvain

On 11/04/2013 16:49, Razique Mahroua wrote:
Hi JP,
my bet is that this is a write permissions issue. Does nova have the right to write within the mounted directory?
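
(A quick way to check, using the instance path from the error below:)

$ sudo -u nova touch /exports/instances/instances/test_write && echo writable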

Razique Mahroua - Nuage & Co
Tel : +33 9 72 37 94 15



On 11 Apr 2013, at 16:36, John Paul Walters <[email protected]> wrote:

Hi,

We've started implementing a GlusterFS-based solution for instance storage in order to provide live migration.  I've run into a strange problem when using a multi-node Gluster setup, and I hope someone has a suggestion to resolve it.
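
(For reference, such a volume is usually mounted on each compute node along these lines; the server and volume names here are only placeholders:)

$ sudo mount -t glusterfs gluster01:/nova-instances /exports/instances
$ grep /exports/instances /proc/mounts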

I have a 12-node distributed/replicated Gluster cluster.  I can mount it on my client machines, and it seems to be working alright.  When I launch instances, the nova-compute logs on the client machines give me two error messages:

First is a qemu-kvm error: could not open disk image /exports/instances/instances/instance-00000242/disk: Invalid argument
(full output at http://pastebin.com/i8vzWegJ)

The second error message comes a short time later, ending with nova.openstack.common.rpc.amqp Invalid: Instance has already been created
(full output at http://pastebin.com/6Ta4kkBN)

This happens reliably with the multi-Gluster-node setup.  Oddly, after creating a test Gluster volume composed of a single brick and single node, everything works fine.
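
(A quick sanity check here, using the path from the qemu-kvm error above, might be to see whether qemu can open the image at all outside of nova:)

$ sudo qemu-img info /exports/instances/instances/instance-00000242/disk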

Does anyone have any suggestions?

thanks,
JP


_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
