Re: [Users] Host cannot access storage domains

2014-01-17 Thread Itamar Heim

On 01/03/2014 03:12 PM, Albl, Oliver wrote:

Hi,

   I am starting with oVirt 3.3.2 and I have an issue adding a host to a
cluster.

I am using oVirt Engine Version 3.3.2-1.el6

There is a cluster with one host (installed with oVirt Node - 3.0.3 -
1.1.fc19 ISO image) up and running.

I installed a second host using the same ISO image.

I approved the host in the cluster.

When I try to activate the second host, I receive the following messages
in the events pane:

State was set to Up for host host02.

Host host02 reports about one of the Active Storage Domains as Problematic.

Host host02 cannot access one of the Storage Domains attached to the
Data Center Test303. Setting Host state to Non-Operational.

Failed to connect Host host02 to Storage Pool Test303

There are 3 FC Storage Domains configured and visible to both hosts.

multipath -ll shows all LUNs on both hosts.
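
(As a side check, the WWIDs each host actually sees can be compared
directly; a sketch, assuming the usual multipath -ll layout where each
map header line carries the WWID and a dm- name, and using the host
names from later in this thread:

  # on each host: print the WWID of every multipath map, sorted
  multipath -ll | awk '/dm-/ {print $1}' | sort > /tmp/luns.$(hostname -s)
  # then, with both files in one place:
  diff /tmp/luns.host01 /tmp/luns.host02 && echo "same LUN set on both hosts")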

The engine.log reports the following about every five minutes:

2014-01-03 13:50:15,408 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(pool-6-thread-44) Domain 7841a1c0-181a-4d43-9a25-b707accb5c4b: LUN_105
check timeot 69.7 is too big

2014-01-03 13:50:15,409 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(pool-6-thread-44) Domain 52cf84ce-6eda-4337-8c94-491d94f5a18d: LUN_103
check timeot 59.6 is too big

2014-01-03 13:50:15,410 ERROR
[org.ovirt.engine.core.bll.InitVdsOnUpCommand] (pool-6-thread-44)
Storage Domain LUN_105 of pool Test303 is in problem in host host02

2014-01-03 13:50:15,411 ERROR
[org.ovirt.engine.core.bll.InitVdsOnUpCommand] (pool-6-thread-44)
Storage Domain LUN_103 of pool Test030 is in problem in host host02

Please let me know if there are any log files I should attach.

Thank you for your help!

All the best,

Oliver Albl






was this resolved?


Re: [Users] Host cannot access storage domains

2014-01-04 Thread Itamar Heim

On 01/03/2014 04:59 PM, Alon Bar-Lev wrote:



- Original Message -

From: "Oliver Albl" 
To: d...@redhat.com
Cc: users@ovirt.org
Sent: Friday, January 3, 2014 4:56:33 PM
Subject: Re: [Users] Host cannot access storage domains

Redirecting to /bin/systemctl reconfigure  vdsmd.service
Unknown operation 'reconfigure'.


/usr/lib/systemd/systemd-vdsmd reconfigure force
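
(For context: on Fedora 19 the legacy 'service' wrapper forwards unknown
verbs to systemctl, and systemctl has no 'reconfigure' operation, which
is exactly the error quoted above, so the packaged helper has to be
called directly. A sketch of the full sequence; stopping and restarting
vdsmd around the call is an assumption, not a step confirmed in this
thread:

  systemctl stop vdsmd
  /usr/lib/systemd/systemd-vdsmd reconfigure force
  systemctl start vdsmd)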


in which use case is this needed manually?



Re: [Users] Host cannot access storage domains

2014-01-03 Thread Albl, Oliver
Dafna,

  /usr/lib/systemd/systemd-vdsmd reconfigure force worked!

VMs start and can be migrated. Thanks a lot for your help - and I'll stay with 
the node iso image :)

All the best,
Oliver
-Original Message-
From: Alon Bar-Lev [mailto:alo...@redhat.com]
Sent: Friday, January 3, 2014 16:00
To: Albl, Oliver
Cc: d...@redhat.com; users@ovirt.org
Subject: Re: [Users] Host cannot access storage domains



- Original Message -
> From: "Oliver Albl" 
> To: d...@redhat.com
> Cc: users@ovirt.org
> Sent: Friday, January 3, 2014 4:56:33 PM
> Subject: Re: [Users] Host cannot access storage domains
> 
> Redirecting to /bin/systemctl reconfigure  vdsmd.service Unknown 
> operation 'reconfigure'.

/usr/lib/systemd/systemd-vdsmd reconfigure force

> 
> ... seems to me, I should get rid of the ovirt-node iso installation 
> and move to an rpm-based install?
> 
> Thanks,
> Oliver
> -Original Message-
> From: Dafna Ron [mailto:d...@redhat.com]
> Sent: Friday, January 3, 2014 15:51
> To: Albl, Oliver
> Cc: users@ovirt.org
> Subject: Re: Re: Re: Re: Re: Re: [Users] Host cannot access storage
> domains
> 
> can you run:
> service vdsmd reconfigure on the second host?
> 
> On 01/03/2014 02:43 PM, Albl, Oliver wrote:
> > Dafna,
> >
> >yes, the VM starts on the first node, the issues are on the second node
> >only.
> >
> > /etc/libvirt/qemu-sanlock.conf is identical on both nodes:
> >
> > auto_disk_leases=0
> > require_lease_for_disks=0
> >
> > yum update reports "Using yum is not supported"...
> >
> > Thanks,
> > Oliver
> >
> > -Original Message-
> > From: Dafna Ron [mailto:d...@redhat.com]
> > Sent: Friday, January 3, 2014 15:39
> > To: Albl, Oliver
> > Cc: users@ovirt.org
> > Subject: Re: Re: Re: Re: Re: [Users] Host cannot access storage
> > domains
> >
> > ok, let's try to zoom in on the issue...
> > can you run VMs on the first host or do you have issues only on the
> > second host you added?
> > can you run on both hosts?
> > # egrep -v ^# /etc/libvirt/qemu-sanlock.conf
> >
> > can you run yum update on one of the hosts and see if there are 
> > newer packages?
> >
> > Thanks,
> >
> > Dafna
> >
> > On 01/03/2014 02:30 PM, Albl, Oliver wrote:
> >> I installed both hosts using the oVirt Node ISO image:
> >>
> >> OS Version: oVirt Node - 3.0.3 - 1.1.fc19 Kernel Version: 3.11.9 -
> >> 200.fc19.x86_64 KVM Version: 1.6.1 - 2.fc19 LIBVIRT Version:
> >> libvirt-1.1.3.1-2.fc19 VDSM Version: vdsm-4.13.0-11.fc19
> >>
> >> Thanks,
> >> Oliver
> >> -Original Message-
> >> From: Dafna Ron [mailto:d...@redhat.com]
> >> Sent: Friday, January 3, 2014 15:24
> >> To: Albl, Oliver
> >> Cc: users@ovirt.org
> >> Subject: Re: Re: Re: Re: [Users] Host cannot access storage domains
> >>
> >> ignore the link :)
> >>
> >> so searching for this error I hit an old bug and it seemed to be an 
> >> issue between libvirt/sanlock.
> >>
> >> https://bugzilla.redhat.com/show_bug.cgi?id=828633
> >>
> >> are you using latest packages?
> >>
> >>
> >>
> >>
> >> On 01/03/2014 02:15 PM, Albl, Oliver wrote:
> >>> Dafna,
> >>>
> >>>  Libvirtd.log shows no errors, but VM log shows the following:
> >>>
> >>> 2014-01-03 13:52:11.296+: starting up LC_ALL=C
> >>> PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> >>> QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name OATEST2 -S -machine
> >>> pc-1.0,accel=kvm,usb=off -cpu SandyBridge -m 1024 -realtime
> >>> mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid
> >>> d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6 -smbios
> >>> type=1,manufacturer=oVirt,product=oVirt
> >>> Node,version=3.0.3-1.1.fc19,serial=30313436-3631-5A43-4A33-3332304C384C,uuid=d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6
> >>> -no-user-config -nodefaults -chardev
> >>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/OATEST2.monitor,server,nowait
> >>> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> >>> base=2014-01-03T13:52:11,driftfix=slew -no-shutdown -device
> >>> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> >>> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -de

Re: [Users] Host cannot access storage domains

2014-01-03 Thread Dafna Ron

awesome!
Thanks for reporting this and sticking it out with me :)

Dafna

On 01/03/2014 03:08 PM, Albl, Oliver wrote:

Dafna,

   /usr/lib/systemd/systemd-vdsmd reconfigure force worked!

VMs start and can be migrated. Thanks a lot for your help - and I'll stay with 
the node iso image :)

All the best,
Oliver
-Original Message-
From: Alon Bar-Lev [mailto:alo...@redhat.com]
Sent: Friday, January 3, 2014 16:00
To: Albl, Oliver
Cc: d...@redhat.com; users@ovirt.org
Subject: Re: [Users] Host cannot access storage domains



- Original Message -

From: "Oliver Albl" 
To: d...@redhat.com
Cc: users@ovirt.org
Sent: Friday, January 3, 2014 4:56:33 PM
Subject: Re: [Users] Host cannot access storage domains

Redirecting to /bin/systemctl reconfigure  vdsmd.service Unknown
operation 'reconfigure'.

/usr/lib/systemd/systemd-vdsmd reconfigure force


... seems to me, I should get rid of the ovirt-node iso installation
and move to an rpm-based install?

Thanks,
Oliver
-Original Message-
From: Dafna Ron [mailto:d...@redhat.com]
Sent: Friday, January 3, 2014 15:51
To: Albl, Oliver
Cc: users@ovirt.org
Subject: Re: Re: Re: Re: Re: Re: [Users] Host cannot access storage
domains

can you run:
service vdsmd reconfigure on the second host?

On 01/03/2014 02:43 PM, Albl, Oliver wrote:

Dafna,

yes, the VM starts on the first node, the issues are on the second node
only.

/etc/libvirt/qemu-sanlock.conf is identical on both nodes:

auto_disk_leases=0
require_lease_for_disks=0

yum update reports "Using yum is not supported"...

Thanks,
Oliver

-Original Message-
From: Dafna Ron [mailto:d...@redhat.com]
Sent: Friday, January 3, 2014 15:39
To: Albl, Oliver
Cc: users@ovirt.org
Subject: Re: Re: Re: Re: Re: [Users] Host cannot access storage
domains

ok, let's try to zoom in on the issue...
can you run VMs on the first host or do you have issues only on the
second host you added?
can you run on both hosts?
# egrep -v ^# /etc/libvirt/qemu-sanlock.conf

can you run yum update on one of the hosts and see if there are
newer packages?

Thanks,

Dafna

On 01/03/2014 02:30 PM, Albl, Oliver wrote:

I installed both hosts using the oVirt Node ISO image:

OS Version: oVirt Node - 3.0.3 - 1.1.fc19 Kernel Version: 3.11.9 -
200.fc19.x86_64 KVM Version: 1.6.1 - 2.fc19 LIBVIRT Version:
libvirt-1.1.3.1-2.fc19 VDSM Version: vdsm-4.13.0-11.fc19

Thanks,
Oliver
-Original Message-
From: Dafna Ron [mailto:d...@redhat.com]
Sent: Friday, January 3, 2014 15:24
To: Albl, Oliver
Cc: users@ovirt.org
Subject: Re: Re: Re: Re: [Users] Host cannot access storage domains

ignore the link :)

so searching for this error I hit an old bug and it seemed to be an
issue between libvirt/sanlock.

https://bugzilla.redhat.com/show_bug.cgi?id=828633

are you using latest packages?




On 01/03/2014 02:15 PM, Albl, Oliver wrote:

Dafna,

  Libvirtd.log shows no errors, but VM log shows the following:

2014-01-03 13:52:11.296+: starting up LC_ALL=C
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name OATEST2 -S -machine
pc-1.0,accel=kvm,usb=off -cpu SandyBridge -m 1024 -realtime
mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid
d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=3.0.3-1.1.fc19,serial=30313436-3631-5A43-4A33-3332304C384C,uuid=d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/OATEST2.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2014-01-03T13:52:11,driftfix=slew -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -drive
if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=/rhev/data-center/mnt/blockSD/7841a1c0-181a-4d43-9a25-b707accb5c4b/images/de7ca992-b1c1-4cb8-9470-2494304c9b69/cbf1f376-23e8-40f3-8387-ed299ee62607,if=none,id=drive-virtio-disk0,format=raw,serial=de7ca992-b1c1-4cb8-9470-2494304c9b69,cache=none,werror=stop,rerror=stop,aio=native
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6.com.redhat.rhevm.vdsm,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6.org.qemu.guest_agent.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charcha

Re: [Users] Host cannot access storage domains

2014-01-03 Thread Alon Bar-Lev


- Original Message -
> From: "Oliver Albl" 
> To: d...@redhat.com
> Cc: users@ovirt.org
> Sent: Friday, January 3, 2014 4:56:33 PM
> Subject: Re: [Users] Host cannot access storage domains
> 
> Redirecting to /bin/systemctl reconfigure  vdsmd.service
> Unknown operation 'reconfigure'.

/usr/lib/systemd/systemd-vdsmd reconfigure force

> 
> ... seems to me, I should get rid of the ovirt-node iso installation and move
> to an rpm-based install?
> 
> Thanks,
> Oliver
> -Original Message-
> From: Dafna Ron [mailto:d...@redhat.com]
> Sent: Friday, January 3, 2014 15:51
> To: Albl, Oliver
> Cc: users@ovirt.org
> Subject: Re: Re: Re: Re: Re: Re: [Users] Host cannot access storage domains
> 
> can you run:
> service vdsmd reconfigure on the second host?
> 
> On 01/03/2014 02:43 PM, Albl, Oliver wrote:
> > Dafna,
> >
> >yes, the VM starts on the first node, the issues are on the second node
> >only.
> >
> > /etc/libvirt/qemu-sanlock.conf is identical on both nodes:
> >
> > auto_disk_leases=0
> > require_lease_for_disks=0
> >
> > yum update reports "Using yum is not supported"...
> >
> > Thanks,
> > Oliver
> >
> > -Original Message-
> > From: Dafna Ron [mailto:d...@redhat.com]
> > Sent: Friday, January 3, 2014 15:39
> > To: Albl, Oliver
> > Cc: users@ovirt.org
> > Subject: Re: Re: Re: Re: Re: [Users] Host cannot access storage
> > domains
> >
> > ok, let's try to zoom in on the issue...
> > can you run VMs on the first host or do you have issues only on the second
> > host you added?
> > can you run on both hosts?
> > # egrep -v ^# /etc/libvirt/qemu-sanlock.conf
> >
> > can you run yum update on one of the hosts and see if there are newer
> > packages?
> >
> > Thanks,
> >
> > Dafna
> >
> > On 01/03/2014 02:30 PM, Albl, Oliver wrote:
> >> I installed both hosts using the oVirt Node ISO image:
> >>
> >> OS Version: oVirt Node - 3.0.3 - 1.1.fc19 Kernel Version: 3.11.9 -
> >> 200.fc19.x86_64 KVM Version: 1.6.1 - 2.fc19 LIBVIRT Version:
> >> libvirt-1.1.3.1-2.fc19 VDSM Version: vdsm-4.13.0-11.fc19
> >>
> >> Thanks,
> >> Oliver
> >> -Original Message-
> >> From: Dafna Ron [mailto:d...@redhat.com]
> >> Sent: Friday, January 3, 2014 15:24
> >> To: Albl, Oliver
> >> Cc: users@ovirt.org
> >> Subject: Re: Re: Re: Re: [Users] Host cannot access storage domains
> >>
> >> ignore the link :)
> >>
> >> so searching for this error I hit an old bug and it seemed to be an issue
> >> between libvirt/sanlock.
> >>
> >> https://bugzilla.redhat.com/show_bug.cgi?id=828633
> >>
> >> are you using latest packages?
> >>
> >>
> >>
> >>
> >> On 01/03/2014 02:15 PM, Albl, Oliver wrote:
> >>> Dafna,
> >>>
> >>>  Libvirtd.log shows no errors, but VM log shows the following:
> >>>
> >>> 2014-01-03 13:52:11.296+: starting up LC_ALL=C
> >>> PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> >>> QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name OATEST2 -S -machine
> >>> pc-1.0,accel=kvm,usb=off -cpu SandyBridge -m 1024 -realtime
> >>> mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid
> >>> d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6 -smbios
> >>> type=1,manufacturer=oVirt,product=oVirt
> >>> Node,version=3.0.3-1.1.fc19,serial=30313436-3631-5A43-4A33-3332304C384C,uuid=d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6
> >>> -no-user-config -nodefaults -chardev
> >>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/OATEST2.monitor,server,nowait
> >>> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> >>> base=2014-01-03T13:52:11,driftfix=slew -no-shutdown -device
> >>> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> >>> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
> >>> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -drive
> >>> if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device
> >>> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
> >>> file=/rhev/data-center/mnt/blockSD/7841a1c0-181a-4d43-9a25-b707accb5c4b/images/de7ca992-b1c1-4cb8-9470-2494304c9b69/cbf1f376-23

Re: [Users] Host cannot access storage domains

2014-01-03 Thread Dafna Ron

can you run:
service vdsmd reconfigure on the second host?

On 01/03/2014 02:43 PM, Albl, Oliver wrote:

Dafna,

   yes, the VM starts on the first node, the issues are on the second node only.

/etc/libvirt/qemu-sanlock.conf is identical on both nodes:

auto_disk_leases=0
require_lease_for_disks=0

yum update reports "Using yum is not supported"...

Thanks,
Oliver

-Original Message-
From: Dafna Ron [mailto:d...@redhat.com]
Sent: Friday, January 3, 2014 15:39
To: Albl, Oliver
Cc: users@ovirt.org
Subject: Re: Re: Re: Re: Re: [Users] Host cannot access storage domains

ok, let's try to zoom in on the issue...
can you run VMs on the first host or do you have issues only on the second
host you added?
can you run on both hosts?
# egrep -v ^# /etc/libvirt/qemu-sanlock.conf

can you run yum update on one of the hosts and see if there are newer packages?

Thanks,

Dafna

On 01/03/2014 02:30 PM, Albl, Oliver wrote:

I installed both hosts using the oVirt Node ISO image:

OS Version: oVirt Node - 3.0.3 - 1.1.fc19 Kernel Version: 3.11.9 -
200.fc19.x86_64 KVM Version: 1.6.1 - 2.fc19 LIBVIRT Version:
libvirt-1.1.3.1-2.fc19 VDSM Version: vdsm-4.13.0-11.fc19

Thanks,
Oliver
-Original Message-
From: Dafna Ron [mailto:d...@redhat.com]
Sent: Friday, January 3, 2014 15:24
To: Albl, Oliver
Cc: users@ovirt.org
Subject: Re: Re: Re: Re: [Users] Host cannot access storage domains

ignore the link :)

so searching for this error I hit an old bug and it seemed to be an issue 
between libvirt/sanlock.

https://bugzilla.redhat.com/show_bug.cgi?id=828633

are you using latest packages?




On 01/03/2014 02:15 PM, Albl, Oliver wrote:

Dafna,

 Libvirtd.log shows no errors, but VM log shows the following:

2014-01-03 13:52:11.296+: starting up LC_ALL=C
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name OATEST2 -S -machine
pc-1.0,accel=kvm,usb=off -cpu SandyBridge -m 1024 -realtime mlock=off
-smp 1,sockets=1,cores=1,threads=1 -uuid
d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=3.0.3-1.1.fc19,serial=30313436-3631-5A43-4A33-3332304C384C,uuid=d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/OATEST2.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2014-01-03T13:52:11,driftfix=slew -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -drive
if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=/rhev/data-center/mnt/blockSD/7841a1c0-181a-4d43-9a25-b707accb5c4b/images/de7ca992-b1c1-4cb8-9470-2494304c9b69/cbf1f376-23e8-40f3-8387-ed299ee62607,if=none,id=drive-virtio-disk0,format=raw,serial=de7ca992-b1c1-4cb8-9470-2494304c9b69,cache=none,werror=stop,rerror=stop,aio=native
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6.com.redhat.rhevm.vdsm,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6.org.qemu.guest_agent.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice
tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-k en-us -device
qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,bus=pci.0,addr=0x2
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
libvirt: Lock Driver error : unsupported configuration: Read/write,
exclusive access, disks were present, but no leases specified
2014-01-03 13:52:11.306+: shutting down

Not sure what you mean by this
http://forums.opensuse.org/english/get-technical-help-here/virtualization/492483-cannot-start-libvert-kvm-guests-after-update-tumbleweed.html.
Do you want me to update libvirt with these repos on the oVirt-Node based
installation?

Thanks,
Oliver
-Original Message-
From: Dafna Ron [mailto:d...@redhat.com]
Sent: Friday, January 3, 2014 15:10
To: Albl, Oliver
Cc: users@ovirt.org
Subject: Re: Re: Re: [Users] Host cannot access storage domains

actually, looking at this again, it's a libvirt error and it can 

Re: [Users] Host cannot access storage domains

2014-01-03 Thread Dafna Ron
:) no, there are just different ways of dealing with things... don't 
give up now.
let's try to re-install the host; perhaps the original issue caused a
configuration problem and a re-install will solve it...



On 01/03/2014 02:56 PM, Albl, Oliver wrote:

Redirecting to /bin/systemctl reconfigure  vdsmd.service
Unknown operation 'reconfigure'.

... seems to me, I should get rid of the ovirt-node iso installation and move 
to an rpm-based install?

Thanks,
Oliver
-Original Message-
From: Dafna Ron [mailto:d...@redhat.com]
Sent: Friday, January 3, 2014 15:51
To: Albl, Oliver
Cc: users@ovirt.org
Subject: Re: Re: Re: Re: Re: Re: [Users] Host cannot access storage domains

can you run:
service vdsmd reconfigure on the second host?

On 01/03/2014 02:43 PM, Albl, Oliver wrote:

Dafna,

yes, the VM starts on the first node, the issues are on the second node 
only.

/etc/libvirt/qemu-sanlock.conf is identical on both nodes:

auto_disk_leases=0
require_lease_for_disks=0

yum update reports "Using yum is not supported"...

Thanks,
Oliver

-Original Message-
From: Dafna Ron [mailto:d...@redhat.com]
Sent: Friday, January 3, 2014 15:39
To: Albl, Oliver
Cc: users@ovirt.org
Subject: Re: Re: Re: Re: Re: [Users] Host cannot access storage
domains

ok, let's try to zoom in on the issue...
can you run VMs on the first host or do you have issues only on the second
host you added?
can you run on both hosts?
# egrep -v ^# /etc/libvirt/qemu-sanlock.conf

can you run yum update on one of the hosts and see if there are newer packages?

Thanks,

Dafna

On 01/03/2014 02:30 PM, Albl, Oliver wrote:

I installed both hosts using the oVirt Node ISO image:

OS Version: oVirt Node - 3.0.3 - 1.1.fc19 Kernel Version: 3.11.9 -
200.fc19.x86_64 KVM Version: 1.6.1 - 2.fc19 LIBVIRT Version:
libvirt-1.1.3.1-2.fc19 VDSM Version: vdsm-4.13.0-11.fc19

Thanks,
Oliver
-Original Message-
From: Dafna Ron [mailto:d...@redhat.com]
Sent: Friday, January 3, 2014 15:24
To: Albl, Oliver
Cc: users@ovirt.org
Subject: Re: Re: Re: Re: [Users] Host cannot access storage domains

ignore the link :)

so searching for this error I hit an old bug and it seemed to be an issue 
between libvirt/sanlock.

https://bugzilla.redhat.com/show_bug.cgi?id=828633

are you using latest packages?




On 01/03/2014 02:15 PM, Albl, Oliver wrote:

Dafna,

  Libvirtd.log shows no errors, but VM log shows the following:

2014-01-03 13:52:11.296+: starting up LC_ALL=C
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name OATEST2 -S -machine
pc-1.0,accel=kvm,usb=off -cpu SandyBridge -m 1024 -realtime
mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid
d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=3.0.3-1.1.fc19,serial=30313436-3631-5A43-4A33-3332304C384C,uuid=d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/OATEST2.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2014-01-03T13:52:11,driftfix=slew -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -drive
if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=/rhev/data-center/mnt/blockSD/7841a1c0-181a-4d43-9a25-b707accb5c4b/images/de7ca992-b1c1-4cb8-9470-2494304c9b69/cbf1f376-23e8-40f3-8387-ed299ee62607,if=none,id=drive-virtio-disk0,format=raw,serial=de7ca992-b1c1-4cb8-9470-2494304c9b69,cache=none,werror=stop,rerror=stop,aio=native
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6.com.redhat.rhevm.vdsm,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6.org.qemu.guest_agent.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice
tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-k en-us -device
qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,bus=pci.0,addr=0x2
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
libvirt: Lock Drive

Re: [Users] Host cannot access storage domains

2014-01-03 Thread Albl, Oliver
Redirecting to /bin/systemctl reconfigure  vdsmd.service
Unknown operation 'reconfigure'.

... seems to me, I should get rid of the ovirt-node iso installation and move 
to an rpm-based install?

Thanks,
Oliver
-Original Message-
From: Dafna Ron [mailto:d...@redhat.com]
Sent: Friday, January 3, 2014 15:51
To: Albl, Oliver
Cc: users@ovirt.org
Subject: Re: Re: Re: Re: Re: Re: [Users] Host cannot access storage domains

can you run:
service vdsmd reconfigure on the second host?

On 01/03/2014 02:43 PM, Albl, Oliver wrote:
> Dafna,
>
>yes, the VM starts on the first node, the issues are on the second node 
> only.
>
> /etc/libvirt/qemu-sanlock.conf is identical on both nodes:
>
> auto_disk_leases=0
> require_lease_for_disks=0
>
> yum update reports "Using yum is not supported"...
>
> Thanks,
> Oliver
>
> -Original Message-
> From: Dafna Ron [mailto:d...@redhat.com]
> Sent: Friday, January 3, 2014 15:39
> To: Albl, Oliver
> Cc: users@ovirt.org
> Subject: Re: Re: Re: Re: Re: [Users] Host cannot access storage
> domains
>
> ok, let's try to zoom in on the issue...
> can you run VMs on the first host or do you have issues only on the second
> host you added?
> can you run on both hosts?
> # egrep -v ^# /etc/libvirt/qemu-sanlock.conf
>
> can you run yum update on one of the hosts and see if there are newer 
> packages?
>
> Thanks,
>
> Dafna
>
> On 01/03/2014 02:30 PM, Albl, Oliver wrote:
>> I installed both hosts using the oVirt Node ISO image:
>>
>> OS Version: oVirt Node - 3.0.3 - 1.1.fc19 Kernel Version: 3.11.9 -
>> 200.fc19.x86_64 KVM Version: 1.6.1 - 2.fc19 LIBVIRT Version:
>> libvirt-1.1.3.1-2.fc19 VDSM Version: vdsm-4.13.0-11.fc19
>>
>> Thanks,
>> Oliver
>> -----Original Message-----
>> From: Dafna Ron [mailto:d...@redhat.com]
>> Sent: Friday, January 3, 2014 15:24
>> To: Albl, Oliver
>> Cc: users@ovirt.org
>> Subject: Re: Re: Re: Re: [Users] Host cannot access storage domains
>>
>> ignore the link :)
>>
>> so searching for this error I hit an old bug and it seemed to be an issue 
>> between libvirt/sanlock.
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=828633
>>
>> are you using latest packages?
>>
>>
>>
>>
>> On 01/03/2014 02:15 PM, Albl, Oliver wrote:
>>> Dafna,
>>>
>>>  Libvirtd.log shows no errors, but VM log shows the following:
>>>
>>> 2014-01-03 13:52:11.296+: starting up LC_ALL=C
>>> PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
>>> QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name OATEST2 -S -machine
>>> pc-1.0,accel=kvm,usb=off -cpu SandyBridge -m 1024 -realtime
>>> mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid
>>> d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6 -smbios
>>> type=1,manufacturer=oVirt,product=oVirt
>>> Node,version=3.0.3-1.1.fc19,serial=30313436-3631-5A43-4A33-3332304C384C,uuid=d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6
>>> -no-user-config -nodefaults -chardev
>>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/OATEST2.monitor,server,nowait
>>> -mon chardev=charmonitor,id=monitor,mode=control -rtc
>>> base=2014-01-03T13:52:11,driftfix=slew -no-shutdown -device
>>> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
>>> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
>>> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -drive
>>> if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device
>>> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
>>> file=/rhev/data-center/mnt/blockSD/7841a1c0-181a-4d43-9a25-b707accb5c4b/images/de7ca992-b1c1-4cb8-9470-2494304c9b69/cbf1f376-23e8-40f3-8387-ed299ee62607,if=none,id=drive-virtio-disk0,format=raw,serial=de7ca992-b1c1-4cb8-9470-2494304c9b69,cache=none,werror=stop,rerror=stop,aio=native
>>> -device
>>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>>> -chardev
>>> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6.com.redhat.rhevm.vdsm,server,nowait
>>> -device
>>> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
>>> -chardev
>>> socket,id=charchannel1,path=/var/lib/li

Re: [Users] Host cannot access storage domains

2014-01-03 Thread Albl, Oliver
Dafna,

  yes, the VM starts on the first node, the issues are on the second node only.

/etc/libvirt/qemu-sanlock.conf is identical on both nodes:

auto_disk_leases=0
require_lease_for_disks=0

yum update reports "Using yum is not supported"...

Thanks,
Oliver

-Original Message-
From: Dafna Ron [mailto:d...@redhat.com]
Sent: Friday, January 3, 2014 15:39
To: Albl, Oliver
Cc: users@ovirt.org
Subject: Re: Re: Re: Re: Re: [Users] Host cannot access storage domains

ok, let's try to zoom in on the issue...
can you run VMs on the first host or do you have issues only on the second
host you added?
can you run on both hosts?
# egrep -v ^# /etc/libvirt/qemu-sanlock.conf

can you run yum update on one of the hosts and see if there are newer packages?

Thanks,

Dafna

On 01/03/2014 02:30 PM, Albl, Oliver wrote:
> I installed both hosts using the oVirt Node ISO image:
>
> OS Version: oVirt Node - 3.0.3 - 1.1.fc19 Kernel Version: 3.11.9 - 
> 200.fc19.x86_64 KVM Version: 1.6.1 - 2.fc19 LIBVIRT Version: 
> libvirt-1.1.3.1-2.fc19 VDSM Version: vdsm-4.13.0-11.fc19
>
> Thanks,
> Oliver
> -Original Message-
> From: Dafna Ron [mailto:d...@redhat.com]
> Sent: Friday, January 3, 2014 15:24
> To: Albl, Oliver
> Cc: users@ovirt.org
> Subject: Re: Re: Re: Re: [Users] Host cannot access storage domains
>
> ignore the link :)
>
> so searching for this error I hit an old bug and it seemed to be an issue 
> between libvirt/sanlock.
>
> https://bugzilla.redhat.com/show_bug.cgi?id=828633
>
> are you using latest packages?
>
>
>
>
> On 01/03/2014 02:15 PM, Albl, Oliver wrote:
>> Dafna,
>>
>> Libvirtd.log shows no errors, but VM log shows the following:
>>
>> 2014-01-03 13:52:11.296+: starting up LC_ALL=C
>> PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
>> QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name OATEST2 -S -machine
>> pc-1.0,accel=kvm,usb=off -cpu SandyBridge -m 1024 -realtime mlock=off
>> -smp 1,sockets=1,cores=1,threads=1 -uuid
>> d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6 -smbios
>> type=1,manufacturer=oVirt,product=oVirt
>> Node,version=3.0.3-1.1.fc19,serial=30313436-3631-5A43-4A33-3332304C384C,uuid=d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6
>> -no-user-config -nodefaults -chardev
>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/OATEST2.monitor,server,nowait
>> -mon chardev=charmonitor,id=monitor,mode=control -rtc
>> base=2014-01-03T13:52:11,driftfix=slew -no-shutdown -device
>> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
>> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
>> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -drive
>> if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device
>> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
>> file=/rhev/data-center/mnt/blockSD/7841a1c0-181a-4d43-9a25-b707accb5c4b/images/de7ca992-b1c1-4cb8-9470-2494304c9b69/cbf1f376-23e8-40f3-8387-ed299ee62607,if=none,id=drive-virtio-disk0,format=raw,serial=de7ca992-b1c1-4cb8-9470-2494304c9b69,cache=none,werror=stop,rerror=stop,aio=native
>> -device
>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>> -chardev
>> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6.com.redhat.rhevm.vdsm,server,nowait
>> -device
>> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
>> -chardev
>> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6.org.qemu.guest_agent.0,server,nowait
>> -device
>> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
>> -chardev spicevmc,id=charchannel2,name=vdagent -device
>> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
>> -spice
>> tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
>> -k en-us -device
>> qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,bus=pci.0,addr=0x2
>> -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
>> libvirt: Lock Driver error : unsupported configuration: Read/write,
>> exclusive access, disks were present, but no leases specified
>> 2014-01-03 13:52:

Re: [Users] Host cannot access storage domains

2014-01-03 Thread Dafna Ron

ok, let's try to zoom in on the issue...
can you run VMs on the first host or do you have issues only on the
second host you added?

can you run on both hosts?
# egrep -v ^# /etc/libvirt/qemu-sanlock.conf

can you run yum update on one of the hosts and see if there are newer 
packages?


Thanks,

Dafna
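
(A slightly broader sketch of that check; the qemu.conf grep is an
addition here, not something asked for above, and assumes the default
Fedora file locations:

  # non-comment sanlock settings libvirt applies to qemu
  egrep -v '^#' /etc/libvirt/qemu-sanlock.conf
  # whether the sanlock lock driver is enabled at all
  grep -E '^[[:space:]]*lock_manager' /etc/libvirt/qemu.conf

Both hosts should print identical output; a mismatch points at a host
whose libvirt was never reconfigured by vdsm.)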

On 01/03/2014 02:30 PM, Albl, Oliver wrote:

I installed both hosts using the oVirt Node ISO image:

OS Version: oVirt Node - 3.0.3 - 1.1.fc19
Kernel Version: 3.11.9 - 200.fc19.x86_64
KVM Version: 1.6.1 - 2.fc19
LIBVIRT Version: libvirt-1.1.3.1-2.fc19
VDSM Version: vdsm-4.13.0-11.fc19

Thanks,
Oliver
-Original Message-
From: Dafna Ron [mailto:d...@redhat.com]
Sent: Friday, January 3, 2014 15:24
To: Albl, Oliver
Cc: users@ovirt.org
Subject: Re: Re: Re: Re: [Users] Host cannot access storage domains

ignore the link :)

so searching for this error I hit an old bug and it seemed to be an issue 
between libvirt/sanlock.

https://bugzilla.redhat.com/show_bug.cgi?id=828633

are you using latest packages?




On 01/03/2014 02:15 PM, Albl, Oliver wrote:

Dafna,

Libvirtd.log shows no errors, but VM log shows the following:

2014-01-03 13:52:11.296+: starting up LC_ALL=C
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name OATEST2 -S -machine
pc-1.0,accel=kvm,usb=off -cpu SandyBridge -m 1024 -realtime mlock=off
-smp 1,sockets=1,cores=1,threads=1 -uuid
d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=3.0.3-1.1.fc19,serial=30313436-3631-5A43-4A33-3332304C384
C,uuid=d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6 -no-user-config
-nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/OATEST2.monitor,serve
r,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2014-01-03T13:52:11,driftfix=slew -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -drive
if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=/rhev/data-center/mnt/blockSD/7841a1c0-181a-4d43-9a25-b707accb5c4
b/images/de7ca992-b1c1-4cb8-9470-2494304c9b69/cbf1f376-23e8-40f3-8387-
ed299ee62607,if=none,id=drive-virtio-disk0,format=raw,serial=de7ca992-
b1c1-4cb8-9470-2494304c9b69,cache=none,werror=stop,rerror=stop,aio=nat
ive -device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id
=virtio-disk0,bootindex=1 -chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/d2bddcdb-a2
c8-4c77-b0cf-b83fa3c2a0b6.com.redhat.rhevm.vdsm,server,nowait -device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=chann
el0,name=com.redhat.rhevm.vdsm -chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/d2bddcdb-a2
c8-4c77-b0cf-b83fa3c2a0b6.org.qemu.guest_agent.0,server,nowait -device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=chann
el1,name=org.qemu.guest_agent.0 -chardev
spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=chann
el2,name=com.redhat.spice.0 -spice
tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=
main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-cha
nnel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usb
redir,seamless-migration=on -k en-us -device
qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,bus=pci.0,addr=
0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
libvirt: Lock Driver error : unsupported configuration: Read/write,
exclusive access, disks were present, but no leases specified
2014-01-03 13:52:11.306+: shutting down

Not sure what you mean by this
http://forums.opensuse.org/english/get-technical-help-here/virtualization/492483-cannot-start-libvert-kvm-guests-after-update-tumbleweed.html.
Do you want me to update libvirt with these repos on the oVirt-Node based
installation?

Thanks,
Oliver
-Original Message-
From: Dafna Ron [mailto:d...@redhat.com]
Sent: Friday, January 3, 2014 15:10
To: Albl, Oliver
Cc: users@ovirt.org
Subject: Re: Re: Re: [Users] Host cannot access storage domains

actually, looking at this again, it's a libvirt error and it can be related to 
selinux or sasl.
can you also look at the libvirt log and the vm log under /var/log/libvirt?

On 01/03/2014 02:00 PM, Albl, Oliver wrote:

Dafna,

 please find the logs below:

ERRORs in vdsm.log on host02:

Thread-61::ERROR::2014-01-03
13:51:48,956::sdc::137::Storage.StorageDomainCache::(_findDomain)
looking for unfetched domain f404398a-97f9-474c-af2c-e8887f53f688
Thread-61::ERROR::2014-01-03
13:51:48,959::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain f404398a-97f9-474c-af2c-e8887f53f688
Thread-323::ERROR::2014-01-03
13:52:11,527::vm::2132::vm.Vm::(_startUnde

Re: [Users] Host cannot access storage domains

2014-01-03 Thread Albl, Oliver
I installed both hosts using the oVirt Node ISO image:

OS Version: oVirt Node - 3.0.3 - 1.1.fc19
Kernel Version: 3.11.9 - 200.fc19.x86_64
KVM Version: 1.6.1 - 2.fc19
LIBVIRT Version: libvirt-1.1.3.1-2.fc19
VDSM Version: vdsm-4.13.0-11.fc19

Thanks,
Oliver
-Original Message-
From: Dafna Ron [mailto:d...@redhat.com]
Sent: Friday, January 3, 2014 15:24
To: Albl, Oliver
Cc: users@ovirt.org
Subject: Re: Re: Re: Re: [Users] Host cannot access storage domains

ignore the link :)

so searching for this error I hit an old bug and it seemed to be an issue 
between libvirt/sanlock.

https://bugzilla.redhat.com/show_bug.cgi?id=828633

are you using latest packages?




On 01/03/2014 02:15 PM, Albl, Oliver wrote:
> Dafna,
>
>Libvirtd.log shows no errors, but VM log shows the following:
>
> 2014-01-03 13:52:11.296+: starting up LC_ALL=C 
> PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin 
> QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name OATEST2 -S -machine 
> pc-1.0,accel=kvm,usb=off -cpu SandyBridge -m 1024 -realtime mlock=off 
> -smp 1,sockets=1,cores=1,threads=1 -uuid 
> d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6 -smbios 
> type=1,manufacturer=oVirt,product=oVirt 
> Node,version=3.0.3-1.1.fc19,serial=30313436-3631-5A43-4A33-3332304C384
> C,uuid=d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6 -no-user-config 
> -nodefaults -chardev 
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/OATEST2.monitor,serve
> r,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc 
> base=2014-01-03T13:52:11,driftfix=slew -no-shutdown -device 
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device 
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device 
> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -drive 
> if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device 
> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive 
> file=/rhev/data-center/mnt/blockSD/7841a1c0-181a-4d43-9a25-b707accb5c4
> b/images/de7ca992-b1c1-4cb8-9470-2494304c9b69/cbf1f376-23e8-40f3-8387-
> ed299ee62607,if=none,id=drive-virtio-disk0,format=raw,serial=de7ca992-
> b1c1-4cb8-9470-2494304c9b69,cache=none,werror=stop,rerror=stop,aio=nat
> ive -device 
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id
> =virtio-disk0,bootindex=1 -chardev 
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/d2bddcdb-a2
> c8-4c77-b0cf-b83fa3c2a0b6.com.redhat.rhevm.vdsm,server,nowait -device 
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=chann
> el0,name=com.redhat.rhevm.vdsm -chardev 
> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/d2bddcdb-a2
> c8-4c77-b0cf-b83fa3c2a0b6.org.qemu.guest_agent.0,server,nowait -device 
> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=chann
> el1,name=org.qemu.guest_agent.0 -chardev 
> spicevmc,id=charchannel2,name=vdagent -device 
> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=chann
> el2,name=com.redhat.spice.0 -spice 
> tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=
> main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-cha
> nnel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usb
> redir,seamless-migration=on -k en-us -device 
> qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,bus=pci.0,addr=
> 0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
> libvirt: Lock Driver error : unsupported configuration: Read/write, 
> exclusive access, disks were present, but no leases specified
> 2014-01-03 13:52:11.306+: shutting down
>
> Not sure what you mean by this
> http://forums.opensuse.org/english/get-technical-help-here/virtualization/492483-cannot-start-libvert-kvm-guests-after-update-tumbleweed.html.
> Do you want me to update libvirt with these repos on the oVirt-Node based
> installation?
>
> Thanks,
> Oliver
> -Original Message-
> From: Dafna Ron [mailto:d...@redhat.com]
> Sent: Friday, January 3, 2014 15:10
> To: Albl, Oliver
> Cc: users@ovirt.org
> Subject: Re: Re: Re: [Users] Host cannot access storage domains
>
> actually, looking at this again, it's a libvirt error and it can be related 
> to selinux or sasl.
> can you also look at the libvirt log and the vm log under /var/log/libvirt?
>
> On 01/03/2014 02:00 PM, Albl, Oliver wrote:
>> Dafna,
>>
>> please find the logs below:
>>
>> ERRORs in vdsm.log on host02:
>>
>> Thread-61::ERROR::2014-01-03
>> 13:51:48,956::sdc::137::Storage.StorageDomainCache::(_findDomain)
>> looking for unfetched domain f404398a-97f9-474c-af2c-e8887f53f688
>> Thread-61::ERROR::2014-01-03
>> 13:51:48,959::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
>> looking for domain f

Re: [Users] Host cannot access storage domains

2014-01-03 Thread Dafna Ron

ignore the link :)

so searching for this error I hit an old bug and it seemed to be an 
issue between libvirt/sanlock.


https://bugzilla.redhat.com/show_bug.cgi?id=828633

are you using latest packages?




On 01/03/2014 02:15 PM, Albl, Oliver wrote:

Dafna,

   Libvirtd.log shows no errors, but VM log shows the following:

2014-01-03 13:52:11.296+: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin 
QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name OATEST2 -S -machine 
pc-1.0,accel=kvm,usb=off -cpu SandyBridge -m 1024 -realtime mlock=off -smp 
1,sockets=1,cores=1,threads=1 -uuid d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6 
-smbios type=1,manufacturer=oVirt,product=oVirt 
Node,version=3.0.3-1.1.fc19,serial=30313436-3631-5A43-4A33-3332304C384C,uuid=d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6
 -no-user-config -nodefaults -chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/OATEST2.monitor,server,nowait 
-mon chardev=charmonitor,id=monitor,mode=control -rtc 
base=2014-01-03T13:52:11,driftfix=slew -no-shutdown -device 
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device 
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device 
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -drive 
if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device 
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive 
file=/rhev/data-center/mnt/blockSD/7841a1c0-181a-4d43-9a25-b707accb5c4b/images/de7ca992-b1c1-4cb8-9470-2494304c9b69/cbf1f376-23e8-40f3-8387-ed299ee62607,if=none,id=drive-virtio-disk0,format=raw,serial=de7ca992-b1c1-4cb8-9470-2494304c9b69,cache=none,werror=stop,rerror=stop,aio=native
 -device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
 -chardev 
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6.com.redhat.rhevm.vdsm,server,nowait
 -device 
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
 -chardev 
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6.org.qemu.guest_agent.0,server,nowait
 -device 
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
 -chardev spicevmc,id=charchannel2,name=vdagent -device 
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
 -spice 
tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
 -k en-us -device 
qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,bus=pci.0,addr=0x2 
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
libvirt: Lock Driver error : unsupported configuration: Read/write, exclusive 
access, disks were present, but no leases specified
2014-01-03 13:52:11.306+: shutting down

Not sure what you mean by this
http://forums.opensuse.org/english/get-technical-help-here/virtualization/492483-cannot-start-libvert-kvm-guests-after-update-tumbleweed.html.
Do you want me to update libvirt with these repos on the oVirt-Node based
installation?

Thanks,
Oliver
-Original Message-
From: Dafna Ron [mailto:d...@redhat.com]
Sent: Friday, January 3, 2014 15:10
To: Albl, Oliver
Cc: users@ovirt.org
Subject: Re: Re: Re: [Users] Host cannot access storage domains

actually, looking at this again, it's a libvirt error and it can be related to 
selinux or sasl.
can you also look at the libvirt log and the vm log under /var/log/libvirt?
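
(For reference, the default locations on a Fedora-based host; the per-VM
log file is named after the VM, OATEST2 in this thread:

  tail -n 100 /var/log/libvirt/libvirtd.log
  tail -n 100 /var/log/libvirt/qemu/OATEST2.log)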

On 01/03/2014 02:00 PM, Albl, Oliver wrote:

Dafna,

please find the logs below:

ERRORs in vdsm.log on host02:

Thread-61::ERROR::2014-01-03
13:51:48,956::sdc::137::Storage.StorageDomainCache::(_findDomain)
looking for unfetched domain f404398a-97f9-474c-af2c-e8887f53f688
Thread-61::ERROR::2014-01-03
13:51:48,959::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain f404398a-97f9-474c-af2c-e8887f53f688
Thread-323::ERROR::2014-01-03
13:52:11,527::vm::2132::vm.Vm::(_startUnderlyingVm) 
vmId=`d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6`::The vm start process failed 
Traceback (most recent call last):
File "/usr/share/vdsm/vm.py", line 2092, in _startUnderlyingVm
  self._run()
File "/usr/share/vdsm/vm.py", line 2959, in _run
  self._connection.createXML(domxml, flags),
File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py", line 
76, in wrapper
  ret = f(*args, **kwargs)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2920, in
createXML
libvirtError: Child quit during startup handshake: Input/output error
Thread-60::ERROR::2014-01-03
13:52:23,111::sdc::137::Storage.StorageDomainCache::(_findDomain)
looking for unfetched domain 52cf84ce-6eda-4337-8c94-491d94f5a18d
Thread-60::ERROR::2014-01-03
13:52

Re: [Users] Host cannot access storage domains

2014-01-03 Thread Albl, Oliver
Dafna,

  Libvirtd.log shows no errors, but VM log shows the following:

2014-01-03 13:52:11.296+: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin 
QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name OATEST2 -S -machine 
pc-1.0,accel=kvm,usb=off -cpu SandyBridge -m 1024 -realtime mlock=off -smp 
1,sockets=1,cores=1,threads=1 -uuid d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6 
-smbios type=1,manufacturer=oVirt,product=oVirt 
Node,version=3.0.3-1.1.fc19,serial=30313436-3631-5A43-4A33-3332304C384C,uuid=d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6
 -no-user-config -nodefaults -chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/OATEST2.monitor,server,nowait 
-mon chardev=charmonitor,id=monitor,mode=control -rtc 
base=2014-01-03T13:52:11,driftfix=slew -no-shutdown -device 
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device 
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device 
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -drive 
if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device 
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive 
file=/rhev/data-center/mnt/blockSD/7841a1c0-181a-4d43-9a25-b707accb5c4b/images/de7ca992-b1c1-4cb8-9470-2494304c9b69/cbf1f376-23e8-40f3-8387-ed299ee62607,if=none,id=drive-virtio-disk0,format=raw,serial=de7ca992-b1c1-4cb8-9470-2494304c9b69,cache=none,werror=stop,rerror=stop,aio=native
 -device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
 -chardev 
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6.com.redhat.rhevm.vdsm,server,nowait
 -device 
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
 -chardev 
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6.org.qemu.guest_agent.0,server,nowait
 -device 
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
 -chardev spicevmc,id=charchannel2,name=vdagent -device 
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
 -spice 
tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
 -k en-us -device 
qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,bus=pci.0,addr=0x2 
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
libvirt: Lock Driver error : unsupported configuration: Read/write, exclusive 
access, disks were present, but no leases specified
2014-01-03 13:52:11.306+: shutting down

Not sure what you mean by this
http://forums.opensuse.org/english/get-technical-help-here/virtualization/492483-cannot-start-libvert-kvm-guests-after-update-tumbleweed.html.
Do you want me to update libvirt with these repos on the oVirt-Node based
installation?

Thanks,
Oliver
-Original Message-
From: Dafna Ron [mailto:d...@redhat.com]
Sent: Friday, January 3, 2014 15:10
To: Albl, Oliver
Cc: users@ovirt.org
Subject: Re: Re: Re: [Users] Host cannot access storage domains

actually, looking at this again, it's a libvirt error and it can be related to 
selinux or sasl.
can you also look at the libvirt log and the vm log under /var/log/libvirt?

On 01/03/2014 02:00 PM, Albl, Oliver wrote:
> Dafna,
>
>please find the logs below:
>
> ERRORs in vdsm.log on host02:
>
> Thread-61::ERROR::2014-01-03 
> 13:51:48,956::sdc::137::Storage.StorageDomainCache::(_findDomain) 
> looking for unfetched domain f404398a-97f9-474c-af2c-e8887f53f688
> Thread-61::ERROR::2014-01-03 
> 13:51:48,959::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
> looking for domain f404398a-97f9-474c-af2c-e8887f53f688
> Thread-323::ERROR::2014-01-03 
> 13:52:11,527::vm::2132::vm.Vm::(_startUnderlyingVm) 
> vmId=`d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6`::The vm start process failed 
> Traceback (most recent call last):
>File "/usr/share/vdsm/vm.py", line 2092, in _startUnderlyingVm
>  self._run()
>File "/usr/share/vdsm/vm.py", line 2959, in _run
>  self._connection.createXML(domxml, flags),
>File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py", line 
> 76, in wrapper
>  ret = f(*args, **kwargs)
>File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2920, in 
> createXML
> libvirtError: Child quit during startup handshake: Input/output error
> Thread-60::ERROR::2014-01-03 
> 13:52:23,111::sdc::137::Storage.StorageDomainCache::(_findDomain) 
> looking for unfetched domain 52cf84ce-6eda-4337-8c94-491d94f5a18d
> Thread-60::ERROR::2014-01-03 
> 13:52:23,111::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
> looking for domain 52cf84ce-6eda-4337-8

Re: [Users] Host cannot access storage domains

2014-01-03 Thread Albl, Oliver
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ajp--127.0.0.1-8702-3) [2ab5cd2] Correlation ID: 2ab5cd2, Job ID: 
2913133b-1301-484e-9887-b110841c8078, Call Stack: null, Custom Event ID: -1, 
Message: VM TEST2 was started by oliver.albl (Host: host02).
2014-01-03 14:52:14,728 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
(DefaultQuartzScheduler_Worker-7) [24696b3e] START, DestroyVDSCommand(HostName 
= host02, HostId = 6dc7fac6-149e-4445-ace1-3c334a24d52a, 
vmId=d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6, force=false, secondsToWait=0, 
gracefully=false), log id: 6a95ffd5
2014-01-03 14:52:15,783 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
(DefaultQuartzScheduler_Worker-7) [24696b3e] FINISH, DestroyVDSCommand, log id: 
6a95ffd5
2014-01-03 14:52:15,804 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(DefaultQuartzScheduler_Worker-7) [24696b3e] Correlation ID: null, Call Stack: 
null, Custom Event ID: -1, Message: VM TEST2 is down. Exit message: Child quit 
during startup handshake: Input/output error.
2014-01-03 14:52:15,805 INFO  
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-7) [24696b3e] Running on vds during rerun failed 
vm: null
2014-01-03 14:52:15,805 INFO  
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-7) [24696b3e] vm TEST2 running in db and not 
running in vds - add to rerun treatment. vds host02
2014-01-03 14:52:15,808 ERROR 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-7) [24696b3e] Rerun vm 
d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6. Called from vds host02
2014-01-03 14:52:15,810 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(pool-6-thread-40) [24696b3e] Correlation ID: 2ab5cd2, Job ID: 
2913133b-1301-484e-9887-b110841c8078, Call Stack: null, Custom Event ID: -1, 
Message: Failed to run VM TEST2 on Host host02.
2014-01-03 14:52:15,823 INFO  
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] 
(pool-6-thread-40) [24696b3e] START, IsVmDuringInitiatingVDSCommand( vmId = 
d2bddcdb-a2c8-4c77-b0cf-b83fa3c2a0b6), log id: 35e1eec
2014-01-03 14:52:15,824 INFO  
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] 
(pool-6-thread-40) [24696b3e] FINISH, IsVmDuringInitiatingVDSCommand, return: 
false, log id: 35e1eec
2014-01-03 14:52:15,858 WARN  [org.ovirt.engine.core.bll.RunVmOnceCommand] 
(pool-6-thread-40) [24696b3e] CanDoAction of action RunVmOnce failed. 
Reasons:VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_ALL_HOSTS_FILTERED_OUT
2014-01-03 14:52:15,862 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(pool-6-thread-40) [24696b3e] Correlation ID: 2ab5cd2, Job ID: 
2913133b-1301-484e-9887-b110841c8078, Call Stack: null, Custom Event ID: -1, 
Message: Failed to run VM TEST2 (User: oliver.albl).
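
Since the exit message points at the libvirt handshake rather than storage, it may 
be worth a quick look at the libvirt side on host02 as well (assuming the usual 
systemd units, libvirtd and vdsmd, on the node image):

  systemctl status libvirtd vdsmd
  journalctl -u libvirtd -n 50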

Thanks,
Oliver
-----Original Message-----
From: Dafna Ron [mailto:d...@redhat.com] 
Sent: Friday, 03 January 2014 14:51
To: Albl, Oliver
Cc: users@ovirt.org
Subject: Re: AW: [Users] Host cannot access storage domains

Thanks for reporting the issue :)

As for the vm, can you please find the error in vdsm.log and in engine and 
paste it?

Thanks,

Dafna


On 01/03/2014 01:49 PM, Albl, Oliver wrote:
> Dafna,
>
>you were right, it seems to be a caching issue. Rebooting the host did the 
> job:
>
> Before Reboot:
>
> [root@host01 log]# vdsClient -s 0 getStorageDomainsList 
> 52cf84ce-6eda-4337-8c94-491d94f5a18d
> f404398a-97f9-474c-af2c-e8887f53f688
> 7841a1c0-181a-4d43-9a25-b707accb5c4b
>
> [root@host02 log]# vdsClient -s 0 getStorageDomainsList 
> 52cf84ce-6eda-4337-8c94-491d94f5a18d
> f404398a-97f9-474c-af2c-e8887f53f688
> 7841a1c0-181a-4d43-9a25-b707accb5c4b
> 925ee53a-69b5-440f-b145-138ada5b452e
>
> After Reboot:
>
> [root@host02 admin]# vdsClient -s 0 getStorageDomainsList 
> 52cf84ce-6eda-4337-8c94-491d94f5a18d
> f404398a-97f9-474c-af2c-e8887f53f688
> 7841a1c0-181a-4d43-9a25-b707accb5c4b
>
> So now I have both hosts up and running but when I try to start a VM on the 
> second host, I receive the following messages in the events pane:
>
> VM TEST2 was started by oliver.albl (Host: host02) VM TEST2 is down. 
> Exit message: Child quit during startup handshake: Input/output error.
>
> Thanks again for your help!
> Oliver
>
> -----Original Message-----
> From: Dafna Ron [mailto:d...@redhat.com]
> Sent: Friday, 03 January 2014 14:22
> To: Albl, Oliver
> Cc: users@ovirt.org
> Subject: Re: [Users] Host cannot access storage domains
>
> yes, please attach the vdsm log
> also, can you run vdsClient 0 getStorageDomainsList and vdsClient 0 
> getDeviceList on both hosts?
>
> It might be a cache issue, so can you please restart the host and if it helps 
> attach output before and after the reboot?
>
> Thanks,
>
> Dafna
>
>
> On 01/03/2014 01:12 PM, Albl, Oliver wrote:
>> Hi,
>>
>> I am starting with oVirt 3.3.2 and I have an issue adding a host to a 
>>

Re: [Users] Host cannot access storage domains

2014-01-03 Thread Dafna Ron

Thanks for reporting the issue :)

As for the vm, can you please find the error in vdsm.log and in engine 
and paste it?
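
Something along these lines should surface the relevant entries (default log 
paths assumed; engine.log lives on the engine machine, vdsm.log on the host):

  # on the engine machine
  grep ERROR /var/log/ovirt-engine/engine.log | tail -n 20
  # on host02
  grep ERROR /var/log/vdsm/vdsm.log | tail -n 20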


Thanks,

Dafna


On 01/03/2014 01:49 PM, Albl, Oliver wrote:

Dafna,

   you were right, it seems to be a caching issue. Rebooting the host did the 
job:

Before Reboot:

[root@host01 log]# vdsClient -s 0 getStorageDomainsList
52cf84ce-6eda-4337-8c94-491d94f5a18d
f404398a-97f9-474c-af2c-e8887f53f688
7841a1c0-181a-4d43-9a25-b707accb5c4b

[root@host02 log]# vdsClient -s 0 getStorageDomainsList
52cf84ce-6eda-4337-8c94-491d94f5a18d
f404398a-97f9-474c-af2c-e8887f53f688
7841a1c0-181a-4d43-9a25-b707accb5c4b
925ee53a-69b5-440f-b145-138ada5b452e

After Reboot:

[root@host02 admin]# vdsClient -s 0 getStorageDomainsList
52cf84ce-6eda-4337-8c94-491d94f5a18d
f404398a-97f9-474c-af2c-e8887f53f688
7841a1c0-181a-4d43-9a25-b707accb5c4b

So now I have both hosts up and running but when I try to start a VM on the 
second host, I receive the following messages in the events pane:

VM TEST2 was started by oliver.albl (Host: host02)
VM TEST2 is down. Exit message: Child quit during startup handshake: 
Input/output error.

Thanks again for your help!
Oliver

-----Original Message-----
From: Dafna Ron [mailto:d...@redhat.com]
Sent: Friday, 03 January 2014 14:22
To: Albl, Oliver
Cc: users@ovirt.org
Subject: Re: [Users] Host cannot access storage domains

yes, please attach the vdsm log
also, can you run vdsClient 0 getStorageDomainsList and vdsClient 0 
getDeviceList on both hosts?

It might be a cache issue, so can you please restart the host and if it helps 
attach output before and after the reboot?

Thanks,

Dafna


On 01/03/2014 01:12 PM, Albl, Oliver wrote:

Hi,

I am starting with oVirt 3.3.2 and I have an issue adding a host to a
cluster.

I am using oVirt Engine Version 3.3.2-1.el6

There is a cluster with one host (installed with oVirt Node - 3.0.3 -
1.1.fc19 ISO image) up and running.

I installed a second host using the same ISO image.

I approved the host in the cluster.

When I try to activate the second host, I receive the following
messages in the events pane:

State was set to Up for host host02.

Host host02 reports about one of the Active Storage Domains as
Problematic.

Host host02 cannot access one of the Storage Domains attached to the
Data Center Test303. Setting Host state to Non-Operational.

Failed to connect Host host02 to Storage Pool Test303

There are 3 FC Storage Domains configured and visible to both hosts.

multipath -ll shows all LUNs on both hosts.

The engine.log reports the following about every five minutes:

2014-01-03 13:50:15,408 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(pool-6-thread-44) Domain 7841a1c0-181a-4d43-9a25-b707accb5c4b:
LUN_105 check timeot 69.7 is too big

2014-01-03 13:50:15,409 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(pool-6-thread-44) Domain 52cf84ce-6eda-4337-8c94-491d94f5a18d:
LUN_103 check timeot 59.6 is too big

2014-01-03 13:50:15,410 ERROR
[org.ovirt.engine.core.bll.InitVdsOnUpCommand] (pool-6-thread-44)
Storage Domain LUN_105 of pool Test303 is in problem in host host02

2014-01-03 13:50:15,411 ERROR
[org.ovirt.engine.core.bll.InitVdsOnUpCommand] (pool-6-thread-44)
Storage Domain LUN_103 of pool Test030 is in problem in host host02

Please let me know if there are any log files I should attach.

Thank you for your help!

All the best,

Oliver Albl



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--
Dafna Ron



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Host cannot access storage domains

2014-01-03 Thread Albl, Oliver
Dafna,

  you were right, it seems to be a caching issue. Rebooting the host did the 
job:

Before Reboot:

[root@host01 log]# vdsClient -s 0 getStorageDomainsList
52cf84ce-6eda-4337-8c94-491d94f5a18d
f404398a-97f9-474c-af2c-e8887f53f688
7841a1c0-181a-4d43-9a25-b707accb5c4b

[root@host02 log]# vdsClient -s 0 getStorageDomainsList
52cf84ce-6eda-4337-8c94-491d94f5a18d
f404398a-97f9-474c-af2c-e8887f53f688
7841a1c0-181a-4d43-9a25-b707accb5c4b
925ee53a-69b5-440f-b145-138ada5b452e

After Reboot:

[root@host02 admin]# vdsClient -s 0 getStorageDomainsList
52cf84ce-6eda-4337-8c94-491d94f5a18d
f404398a-97f9-474c-af2c-e8887f53f688
7841a1c0-181a-4d43-9a25-b707accb5c4b
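
A quick way to catch this again later is to diff the two hosts' views of the 
domain list; a rough sketch, assuming root ssh to both hosts works from one box:

  for h in host01 host02; do echo "== $h =="; ssh root@$h vdsClient -s 0 getStorageDomainsList | sort; done

The stale 925ee53a-69b5-440f-b145-138ada5b452e entry above would have shown up 
as a one-sided diff.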

So now I have both hosts up and running but when I try to start a VM on the 
second host, I receive the following messages in the events pane:

VM TEST2 was started by oliver.albl (Host: host02)
VM TEST2 is down. Exit message: Child quit during startup handshake: 
Input/output error.

Thanks again for your help!
Oliver

-----Original Message-----
From: Dafna Ron [mailto:d...@redhat.com] 
Sent: Friday, 03 January 2014 14:22
To: Albl, Oliver
Cc: users@ovirt.org
Subject: Re: [Users] Host cannot access storage domains

yes, please attach the vdsm log
also, can you run vdsClient 0 getStorageDomainsList and vdsClient 0 
getDeviceList on both hosts?

It might be a cache issue, so can you please restart the host and if it helps 
attach output before and after the reboot?

Thanks,

Dafna


On 01/03/2014 01:12 PM, Albl, Oliver wrote:
>
> Hi,
>
> I am starting with oVirt 3.3.2 and I have an issue adding a host to a 
> cluster.
>
> I am using oVirt Engine Version 3.3.2-1.el6
>
> There is a cluster with one host (installed with oVirt Node - 3.0.3 -
> 1.1.fc19 ISO image) up and running.
>
> I installed a second host using the same ISO image.
>
> I approved the host in the cluster.
>
> When I try to activate the second host, I receive the following 
> messages in the events pane:
>
> State was set to Up for host host02.
>
> Host host02 reports about one of the Active Storage Domains as 
> Problematic.
>
> Host host02 cannot access one of the Storage Domains attached to the 
> Data Center Test303. Setting Host state to Non-Operational.
>
> Failed to connect Host host02 to Storage Pool Test303
>
> There are 3 FC Storage Domains configured and visible to both hosts.
>
> multipath -ll shows all LUNs on both hosts.
>
> The engine.log reports the following about every five minutes:
>
> 2014-01-03 13:50:15,408 ERROR
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> (pool-6-thread-44) Domain 7841a1c0-181a-4d43-9a25-b707accb5c4b: 
> LUN_105 check timeot 69.7 is too big
>
> 2014-01-03 13:50:15,409 ERROR
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> (pool-6-thread-44) Domain 52cf84ce-6eda-4337-8c94-491d94f5a18d: 
> LUN_103 check timeot 59.6 is too big
>
> 2014-01-03 13:50:15,410 ERROR
> [org.ovirt.engine.core.bll.InitVdsOnUpCommand] (pool-6-thread-44) 
> Storage Domain LUN_105 of pool Test303 is in problem in host host02
>
> 2014-01-03 13:50:15,411 ERROR
> [org.ovirt.engine.core.bll.InitVdsOnUpCommand] (pool-6-thread-44) 
> Storage Domain LUN_103 of pool Test030 is in problem in host host02
>
> Please let me know if there are any log files I should attach.
>
> Thank you for your help!
>
> All the best,
>
> Oliver Albl
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Host cannot access storage domains

2014-01-03 Thread Dafna Ron

yes, please attach the vdsm log
also, can you run vdsClient 0 getStorageDomainsList and vdsClient 0 
getDeviceList on both hosts?


It might be a cache issue, so can you please restart the host and if it 
helps attach output before and after the reboot?
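
For the before/after capture, something like this on the host should do (keep 
the files out of /tmp in case it is mounted as tmpfs and wiped by the reboot; 
-s because vdsm is running with SSL here):

  vdsClient -s 0 getStorageDomainsList > ~/domains.before
  # ...reboot the host...
  vdsClient -s 0 getStorageDomainsList > ~/domains.after
  diff ~/domains.before ~/domains.after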


Thanks,

Dafna


On 01/03/2014 01:12 PM, Albl, Oliver wrote:


Hi,

I am starting with oVirt 3.3.2 and I have an issue adding a host to a 
cluster.


I am using oVirt Engine Version 3.3.2-1.el6

There is a cluster with one host (installed with oVirt Node - 3.0.3 - 
1.1.fc19 ISO image) up and running.


I installed a second host using the same ISO image.

I approved the host in the cluster.

When I try to activate the second host, I receive the following 
messages in the events pane:


State was set to Up for host host02.

Host host02 reports about one of the Active Storage Domains as 
Problematic.


Host host02 cannot access one of the Storage Domains attached to the 
Data Center Test303. Setting Host state to Non-Operational.


Failed to connect Host host02 to Storage Pool Test303

There are 3 FC Storage Domains configured and visible to both hosts.

multipath -ll shows all LUNs on both hosts.

The engine.log reports the following about every five minutes:

2014-01-03 13:50:15,408 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
(pool-6-thread-44) Domain 7841a1c0-181a-4d43-9a25-b707accb5c4b: 
LUN_105 check timeot 69.7 is too big


2014-01-03 13:50:15,409 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
(pool-6-thread-44) Domain 52cf84ce-6eda-4337-8c94-491d94f5a18d: 
LUN_103 check timeot 59.6 is too big


2014-01-03 13:50:15,410 ERROR 
[org.ovirt.engine.core.bll.InitVdsOnUpCommand] (pool-6-thread-44) 
Storage Domain LUN_105 of pool Test303 is in problem in host host02


2014-01-03 13:50:15,411 ERROR 
[org.ovirt.engine.core.bll.InitVdsOnUpCommand] (pool-6-thread-44) 
Storage Domain LUN_103 of pool Test030 is in problem in host host02


Please let me know if there are any log files I should attach.

Thank you for your help!

All the best,

Oliver Albl



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Host cannot access storage domains

2014-01-03 Thread Albl, Oliver
Hi,

  I am starting with oVirt 3.3.2 and I have an issue adding a host to a cluster.

I am using oVirt Engine Version 3.3.2-1.el6
There is a cluster with one host (installed with oVirt Node - 3.0.3 - 1.1.fc19 
ISO image) up and running.
I installed a second host using the same ISO image.
I approved the host in the cluster.

When I try to activate the second host, I receive the following messages in the 
events pane:

State was set to Up for host host02.
Host host02 reports about one of the Active Storage Domains as Problematic.
Host host02 cannot access one of the Storage Domains attached to the Data 
Center Test303. Setting Host state to Non-Operational.
Failed to connect Host host02 to Storage Pool Test303

There are 3 FC Storage Domains configured and visible to both hosts.
multipath -ll shows all LUNs on both hosts.

The engine.log reports the following about every five minutes:

2014-01-03 13:50:15,408 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (pool-6-thread-44) 
Domain 7841a1c0-181a-4d43-9a25-b707accb5c4b: LUN_105 check timeot 69.7 is too 
big
2014-01-03 13:50:15,409 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (pool-6-thread-44) 
Domain 52cf84ce-6eda-4337-8c94-491d94f5a18d: LUN_103 check timeot 59.6 is too 
big
2014-01-03 13:50:15,410 ERROR [org.ovirt.engine.core.bll.InitVdsOnUpCommand] 
(pool-6-thread-44) Storage Domain LUN_105 of pool Test303 is in problem in host 
host02
2014-01-03 13:50:15,411 ERROR [org.ovirt.engine.core.bll.InitVdsOnUpCommand] 
(pool-6-thread-44) Storage Domain LUN_103 of pool Test030 is in problem in host 
host02
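
The same four lines repeat on every monitoring cycle, so the cadence is easy to 
confirm on the engine machine (default engine log path assumed; 'timeot' is 
quoted verbatim from the log):

  grep -E 'check timeot|is in problem' /var/log/ovirt-engine/engine.log | tail -n 20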

Please let me know if there are any log files I should attach.

Thank you for your help!

All the best,
Oliver Albl

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users