On 07/17/2013 10:20 PM, Steve Dainard wrote:
Completed changes:
*gluster> volume info vol1*
Volume Name: vol1
Type: Replicate
Volume ID: 97c3b2a7-0391-4fae-b541-cf04ce6bde0f
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ovirt001.miovision.corp:/mnt/storage1/vol1
Brick2: ovirt002.miovision.corp:/mnt/storage1
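For reference, a 1 x 2 replicated volume with the brick layout shown above
would normally be created with commands along these lines (the create/start
commands are not in the thread; this is only a sketch reusing the brick paths
from the output above):

    gluster volume create vol1 replica 2 transport tcp \
        ovirt001.miovision.corp:/mnt/storage1/vol1 \
        ovirt002.miovision.corp:/mnt/storage1
    gluster volume start vol1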
On 07/17/2013 09:04 PM, Steve Dainard wrote:
*Web-UI displays:*
VM VM1 is down. Exit message: internal error process exited while
connecting to monitor: qemu-system-x86_64: -drive
file=gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2
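The -drive path follows QEMU's gluster block driver URI scheme, roughly
gluster[+transport]://server[:port]/volname/path/to/image. Assuming the host's
qemu-img was built with gluster support, the same image can be probed directly
over that URI (the file name at the end of the line above is truncated in the
archive, so <image> is only a placeholder):

    qemu-img info gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/<image>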
I'm getting an error when attempting to start a VM with a disk in a
Gluster storage domain. Note that Gluster is running on the same host as
the oVirt virt node, but it is not managed by the oVirt manager.
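The thread does not show the root cause at this point, but one quick check
(purely a suggestion, not taken from the original messages) is whether the
host's qemu build includes the gluster block driver and the gluster client
libraries are installed:

    qemu-img --help | grep -i gluster         # gluster should appear among the supported formats
    rpm -q qemu-kvm glusterfs glusterfs-api   # package names are an assumption for a Fedora 18 host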
*oVirt host RPMs:*
vdsm-xmlrpc-4.11.0-143.git5fe89d4.fc18.noarch
vdsm-python-cpopen-4.11.0-142.git24ad94d
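The package list above appears truncated; the full set of relevant versions
would typically be gathered with something like (standard rpm/grep usage, the
exact filter is an assumption):

    rpm -qa | grep -E 'vdsm|qemu|gluster' | sort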