Re: [ovirt-users] Self hosted engine issues

2015-02-06 Thread George Skorup
I wiped my test cluster and started over. This time I did not add the 
devnode blacklist and instead set find_multipaths yes (as in the 
default EL7 multipath.conf), and that worked fine as well; the device 
mapper system messages went away.
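
For anyone searching later, the relevant bits look roughly like this (spacing 
from memory, so treat it as a sketch rather than a copy of my file):

defaults {
    find_multipaths yes
    polling_interval 5
}

and you can check afterwards that nothing is being grabbed with, e.g.:

multipath -ll                   # prints no maps on a host with no real multipath storage
dmsetup ls --target multipath   # likewise, should list nothing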


On 2/6/2015 5:33 AM, Doron Fediuck wrote:

On 06/02/15 13:25, Fabian Deutsch wrote:

I think this bug covers the cause of this:

https://bugzilla.redhat.com/show_bug.cgi?id=1173290

- fabian


Thanks Fabian.

- Original Message -

Please open a bug, Stefano.

Thanks,
Doron

On 06/02/15 11:19, Stefano Danzi wrote:

This solved the issue!!!
Thanks!!

If oVirt rewrites /etc/multipath.conf, maybe it would be useful to open a bug.
What do you all think about it?

On 05/02/2015 20:36, Darrell Budic wrote:

You can also add “find_multipaths 1” to /etc/multipath.conf. This keeps
multipathd from treating non-multipath devices as multipath devices, which
avoids the error message and keeps multipathd from binding your normal
devices. I find it simpler than blacklisting, and it should still work if you
also have real multipath devices.

defaults {
    find_multipaths yes
    polling_interval 5
    …



On Feb 5, 2015, at 1:04 PM, George Skorup geo...@mwcomm.com wrote:

I ran into this same problem after setting up my cluster on EL7. As
has been pointed out, the hosted-engine installer modifies
/etc/multipath.conf.

I appended:

blacklist {
 devnode *
}

to the end of the modified multipath.conf, which is what was there
before the engine installer, and the errors stopped.

I think multipathd was trying to map 253:3, which doesn't exist on my
systems. I have a similar setup, md RAID1 and LVM+XFS for Gluster.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Self hosted engine issues

2015-02-06 Thread Fabian Deutsch
I think this bug covers the cause of this:

https://bugzilla.redhat.com/show_bug.cgi?id=1173290

- fabian

- Original Message -
Please open a bug, Stefano.
 
 Thanks,
 Doron
 
 On 06/02/15 11:19, Stefano Danzi wrote:
  This solved the issue!!!
  Thanks!!
 
If oVirt rewrites /etc/multipath.conf, maybe it would be useful to open a bug.
What do you all think about it?
 
On 05/02/2015 20:36, Darrell Budic wrote:
You can also add “find_multipaths 1” to /etc/multipath.conf. This keeps
multipathd from treating non-multipath devices as multipath devices, which
avoids the error message and keeps multipathd from binding your normal
devices. I find it simpler than blacklisting, and it should still work if you
also have real multipath devices.
  
defaults {
    find_multipaths yes
    polling_interval 5
    …
  
  
   On Feb 5, 2015, at 1:04 PM, George Skorup geo...@mwcomm.com wrote:
  
   I ran into this same problem after setting up my cluster on EL7. As
   has been pointed out, the hosted-engine installer modifies
   /etc/multipath.conf.
  
   I appended:
  
   blacklist {
   devnode *
   }
  
   to the end of the modified multipath.conf, which is what was there
   before the engine installer, and the errors stopped.
  
I think multipathd was trying to map 253:3, which doesn't exist on my
systems. I have a similar setup, md RAID1 and LVM+XFS for Gluster.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Self hosted engine issues

2015-02-06 Thread Doron Fediuck

On 06/02/15 13:25, Fabian Deutsch wrote:
I think this bug covers the cause of this:

 https://bugzilla.redhat.com/show_bug.cgi?id=1173290

 - fabian


Thanks Fabian.
 - Original Message -
Please open a bug, Stefano.
 
  Thanks,
  Doron
 
  On 06/02/15 11:19, Stefano Danzi wrote:
  This solved the issue!!!
  Thanks!!
 
If oVirt rewrites /etc/multipath.conf, maybe it would be useful to open a bug.
What do you all think about it?
 
On 05/02/2015 20:36, Darrell Budic wrote:
You can also add “find_multipaths 1” to /etc/multipath.conf. This keeps
multipathd from treating non-multipath devices as multipath devices, which
avoids the error message and keeps multipathd from binding your normal
devices. I find it simpler than blacklisting, and it should still work if you
also have real multipath devices.
 
defaults {
    find_multipaths yes
    polling_interval 5
    …
 
 
  On Feb 5, 2015, at 1:04 PM, George Skorup geo...@mwcomm.com wrote:
 
  I ran into this same problem after setting up my cluster on EL7. As
  has been pointed out, the hosted-engine installer modifies
  /etc/multipath.conf.
 
  I appended:
 
  blacklist {
  devnode *
  }
 
  to the end of the modified multipath.conf, which is what was there
  before the engine installer, and the errors stopped.
 
I think multipathd was trying to map 253:3, which doesn't exist on my
systems. I have a similar setup, md RAID1 and LVM+XFS for Gluster.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Self hosted engine issues

2015-02-06 Thread Stefano Danzi

This solved the issue!!!
Thanks!!

If oVirt rewrites /etc/multipath.conf, maybe it would be useful to open a bug.
What do you all think about it?

On 05/02/2015 20:36, Darrell Budic wrote:

You can also add “find_multipaths 1” to /etc/multipath.conf. This keeps
multipathd from treating non-multipath devices as multipath devices, which
avoids the error message and keeps multipathd from binding your normal
devices. I find it simpler than blacklisting, and it should still work if you
also have real multipath devices.

defaults {
    find_multipaths yes
    polling_interval 5
    …



On Feb 5, 2015, at 1:04 PM, George Skorup geo...@mwcomm.com wrote:

I ran into this same problem after setting up my cluster on EL7. As has been 
pointed out, the hosted-engine installer modifies /etc/multipath.conf.

I appended:

blacklist {
devnode *
}

to the end of the modified multipath.conf, which is what was there before the 
engine installer, and the errors stopped.

I think multipathd was trying to map 253:3, which doesn't exist on my
systems. I have a similar setup, md RAID1 and LVM+XFS for Gluster.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Self hosted engine issues

2015-02-06 Thread Doron Fediuck
Please open a bug, Stefano.

Thanks,
Doron

On 06/02/15 11:19, Stefano Danzi wrote:
 This solved the issue!!!
 Thanks!!

If oVirt rewrites /etc/multipath.conf, maybe it would be useful to open a bug.
What do you all think about it?

On 05/02/2015 20:36, Darrell Budic wrote:
You can also add “find_multipaths 1” to /etc/multipath.conf. This keeps
multipathd from treating non-multipath devices as multipath devices, which
avoids the error message and keeps multipathd from binding your normal
devices. I find it simpler than blacklisting, and it should still work if you
also have real multipath devices.
 
defaults {
    find_multipaths yes
    polling_interval 5
    …
 
 
  On Feb 5, 2015, at 1:04 PM, George Skorup geo...@mwcomm.com wrote:
 
  I ran into this same problem after setting up my cluster on EL7. As
  has been pointed out, the hosted-engine installer modifies
  /etc/multipath.conf.
 
  I appended:
 
  blacklist {
  devnode *
  }
 
  to the end of the modified multipath.conf, which is what was there
  before the engine installer, and the errors stopped.
 
I think multipathd was trying to map 253:3, which doesn't exist on my
systems. I have a similar setup, md RAID1 and LVM+XFS for Gluster.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Self hosted engine issues

2015-02-05 Thread Nir Soffer
- Original Message -
 From: Stefano Danzi s.da...@hawai.it
 To: Nir Soffer nsof...@redhat.com
 Sent: Thursday, February 5, 2015 1:39:41 PM
 Subject: Re: [ovirt-users] Self hosted engine issues
 
 Here

In the vdsm log I see that you installed the hosted engine on an NFS storage
domain; not having multipath devices and oVirt VGs/LVs is expected.

Your multipath errors may be related to oVirt, but only because vdsm requires
and starts multipathd, and installs a new multipath.conf.
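
If you want local changes to multipath.conf to survive vdsm, you can mark the
file as private; if I remember correctly vdsm leaves files carrying this tag
alone (sketch only, please verify against your vdsm version):

# RHEV REVISION 1.1
# RHEV PRIVATE
defaults {
    ...
}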

I suggest trying to get help with this in the device-mapper channels
(e.g. #lvm on freenode).

If you cannot resolve this, please open a bug.

Nir 

 
On 05/02/2015 12:31, Nir Soffer wrote:
  - Original Message -
  From: Stefano Danzi s.da...@hawai.it
  To: Nir Soffer nsof...@redhat.com
  Cc: users@ovirt.org
  Sent: Thursday, February 5, 2015 1:17:01 PM
Subject: Re: [ovirt-users] Self hosted engine issues
 
 
On 05/02/2015 12:08, Nir Soffer wrote:
  - Original Message -
  From: Stefano Danzi s.da...@hawai.it
  To: Nir Soffer nsof...@redhat.com
  Cc: users@ovirt.org
  Sent: Thursday, February 5, 2015 12:58:35 PM
Subject: Re: [ovirt-users] Self hosted engine issues
 
 
On 05/02/2015 11:52, Nir Soffer wrote:
  - Original Message -
 
After the oVirt installation, on the host console I see this error every 5 minutes:
 
  [ 1823.837020] device-mapper: table: 253:4: multipath: error getting
  device
  [ 1823.837228] device-mapper: ioctl: error adding target to table
This may be caused by the fact that vdsm does not clean up properly after
deactivating storage domains. We have an open bug on this.

You may have an active LV using a non-existent multipath device.
 
  Can you share with us the output of:
 
  lsblk
  multipath -ll
  dmsetup table
  cat /etc/multipath.conf
pvscan --cache > /dev/null && lvs
 
  Nir
 
  See above:
 
  [root@ovirt01 etc]# lsblk
NAME                                 MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                    8:0    0 931,5G  0 disk
├─sda1                                 8:1    0   500M  0 part
│ └─md0                                9:0    0   500M  0 raid1 /boot
└─sda2                                 8:2    0   931G  0 part
  └─md1                                9:1    0 930,9G  0 raid1
    ├─centos_ovirt01-swap            253:0    0   7,9G  0 lvm   [SWAP]
    ├─centos_ovirt01-root            253:1    0    50G  0 lvm   /
    ├─centos_ovirt01-home            253:2    0    10G  0 lvm   /home
    └─centos_ovirt01-glusterOVEngine 253:3    0    50G  0 lvm   /home/glusterfs/engine
sdb                                    8:16   0 931,5G  0 disk
├─sdb1                                 8:17   0   500M  0 part
│ └─md0                                9:0    0   500M  0 raid1 /boot
└─sdb2                                 8:18   0   931G  0 part
  └─md1                                9:1    0 930,9G  0 raid1
    ├─centos_ovirt01-swap            253:0    0   7,9G  0 lvm   [SWAP]
    ├─centos_ovirt01-root            253:1    0    50G  0 lvm   /
    ├─centos_ovirt01-home            253:2    0    10G  0 lvm   /home
    └─centos_ovirt01-glusterOVEngine 253:3    0    50G  0 lvm   /home/glusterfs/engine
 
  [root@ovirt01 etc]# multipath -ll
  Feb 05 11:56:25 | multipath.conf +5, invalid keyword: getuid_callout
  Feb 05 11:56:25 | multipath.conf +18, invalid keyword: getuid_callout
  Feb 05 11:56:25 | multipath.conf +37, invalid keyword: getuid_callout
 
  [root@ovirt01 etc]# dmsetup table
  centos_ovirt01-home: 0 20971520 linear 9:1 121391104
  centos_ovirt01-swap: 0 16531456 linear 9:1 2048
  centos_ovirt01-root: 0 104857600 linear 9:1 16533504
  centos_ovirt01-glusterOVEngine: 0 104857600 linear 9:1 142362624
 
  [root@ovirt01 etc]# cat /etc/multipath.conf
  # RHEV REVISION 1.1
 
defaults {
    polling_interval        5
    getuid_callout          "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
    no_path_retry           fail
    user_friendly_names     no
    flush_on_last_del       yes
    fast_io_fail_tmo        5
    dev_loss_tmo            30
    max_fds                 4096
}

devices {
device {
    vendor                  HITACHI
    product                 "DF.*"
    getuid_callout          "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
}
device {
    vendor                  COMPELNT
    product                 "Compellent Vol"
    no_path_retry           fail
}
device {
    # multipath.conf.default
    vendor                  DGC
    product                 ".*"
    product_blacklist       LUNZ
    path_grouping_policy    group_by_prio
    path_checker            emc_clariion
    hardware_handler        "1 emc"
    prio                    emc
    failback                immediate
    rr_weight

Re: [ovirt-users] Self hosted engine issues

2015-02-05 Thread Nir Soffer


- Original Message -
 From: Stefano Danzi s.da...@hawai.it
 To: Nir Soffer nsof...@redhat.com
 Cc: users@ovirt.org
 Sent: Thursday, February 5, 2015 12:58:35 PM
 Subject: Re: [ovirt-users] Self hosted engine issues
 
 
On 05/02/2015 11:52, Nir Soffer wrote:
  - Original Message -
 
After the oVirt installation, on the host console I see this error every 5 minutes:
 
  [ 1823.837020] device-mapper: table: 253:4: multipath: error getting device
  [ 1823.837228] device-mapper: ioctl: error adding target to table
This may be caused by the fact that vdsm does not clean up properly after
deactivating storage domains. We have an open bug on this.

You may have an active LV using a non-existent multipath device.
 
  Can you share with us the output of:
 
  lsblk
  multipath -ll
  dmsetup table
  cat /etc/multipath.conf
pvscan --cache > /dev/null && lvs
 
  Nir
 
 See above:
 
 [root@ovirt01 etc]# lsblk
NAME                                 MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                    8:0    0 931,5G  0 disk
├─sda1                                 8:1    0   500M  0 part
│ └─md0                                9:0    0   500M  0 raid1 /boot
└─sda2                                 8:2    0   931G  0 part
  └─md1                                9:1    0 930,9G  0 raid1
    ├─centos_ovirt01-swap            253:0    0   7,9G  0 lvm   [SWAP]
    ├─centos_ovirt01-root            253:1    0    50G  0 lvm   /
    ├─centos_ovirt01-home            253:2    0    10G  0 lvm   /home
    └─centos_ovirt01-glusterOVEngine 253:3    0    50G  0 lvm   /home/glusterfs/engine
sdb                                    8:16   0 931,5G  0 disk
├─sdb1                                 8:17   0   500M  0 part
│ └─md0                                9:0    0   500M  0 raid1 /boot
└─sdb2                                 8:18   0   931G  0 part
  └─md1                                9:1    0 930,9G  0 raid1
    ├─centos_ovirt01-swap            253:0    0   7,9G  0 lvm   [SWAP]
    ├─centos_ovirt01-root            253:1    0    50G  0 lvm   /
    ├─centos_ovirt01-home            253:2    0    10G  0 lvm   /home
    └─centos_ovirt01-glusterOVEngine 253:3    0    50G  0 lvm   /home/glusterfs/engine
 
 [root@ovirt01 etc]# multipath -ll
 Feb 05 11:56:25 | multipath.conf +5, invalid keyword: getuid_callout
 Feb 05 11:56:25 | multipath.conf +18, invalid keyword: getuid_callout
 Feb 05 11:56:25 | multipath.conf +37, invalid keyword: getuid_callout
 
 [root@ovirt01 etc]# dmsetup table
 centos_ovirt01-home: 0 20971520 linear 9:1 121391104
 centos_ovirt01-swap: 0 16531456 linear 9:1 2048
 centos_ovirt01-root: 0 104857600 linear 9:1 16533504
 centos_ovirt01-glusterOVEngine: 0 104857600 linear 9:1 142362624
 
 [root@ovirt01 etc]# cat /etc/multipath.conf
 # RHEV REVISION 1.1
 
defaults {
    polling_interval        5
    getuid_callout          "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
    no_path_retry           fail
    user_friendly_names     no
    flush_on_last_del       yes
    fast_io_fail_tmo        5
    dev_loss_tmo            30
    max_fds                 4096
}

devices {
device {
    vendor                  HITACHI
    product                 "DF.*"
    getuid_callout          "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
}
device {
    vendor                  COMPELNT
    product                 "Compellent Vol"
    no_path_retry           fail
}
device {
    # multipath.conf.default
    vendor                  DGC
    product                 ".*"
    product_blacklist       LUNZ
    path_grouping_policy    group_by_prio
    path_checker            emc_clariion
    hardware_handler        "1 emc"
    prio                    emc
    failback                immediate
    rr_weight               uniform
    # vdsm required configuration
    getuid_callout          "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
    features                "0"
    no_path_retry           fail
}
}
 
[root@ovirt01 etc]# pvscan --cache > /dev/null && lvs
Incorrect metadata area header checksum on /dev/sda2 at offset 4096
Incorrect metadata area header checksum on /dev/sda2 at offset 4096
LV  VG Attr   LSize  Pool Origin Data%  Move
Log Cpy%Sync Convert
glusterOVEngine centos_ovirt01 -wi-ao 50,00g
homecentos_ovirt01 -wi-ao 10,00g
rootcentos_ovirt01 -wi-ao 50,00g
swapcentos_ovirt01 -wi-ao  7,88g

Are you sure this is the correct host that the multipath error came from?

There are no multipath devices on this host and no oVirt storage domain LVs.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Self hosted engine issues

2015-02-05 Thread George Skorup
I ran into this same problem after setting up my cluster on EL7. As has 
been pointed out, the hosted-engine installer modifies /etc/multipath.conf.


I appended:

blacklist {
devnode *
}

to the end of the modified multipath.conf, which is what was there 
before the engine installer, and the errors stopped.


I think multipathd was trying to map 253:3, which doesn't exist on my 
systems. I have a similar setup, md RAID1 and LVM+XFS for Gluster.
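
For what it's worth, the way I checked (roughly) was to restart multipathd 
after editing the file and then watch the kernel log for further table errors:

systemctl restart multipathd
journalctl -k -f | grep -i device-mapper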

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Self hosted engine issues

2015-02-05 Thread Yedidyah Bar David
- Original Message -
 From: Stefano Danzi s.da...@hawai.it
 To: users@ovirt.org
 Sent: Thursday, February 5, 2015 12:30:46 PM
 Subject: [ovirt-users] Self hosted engine issues
 
 
 Hello,
 I'm performing a self hosted engine installation for the first time.
 The system now has only one host.
 
The host machine and the engine VM run CentOS 7.

The host uses LVM.
 
After the oVirt installation, on the host console I see this error every 5 minutes:
 
 [ 1823.837020] device-mapper: table: 253:4: multipath: error getting device
 [ 1823.837228] device-mapper: ioctl: error adding target to table
 
In /dev there aren't any devices that match 253:4.
It seems that multipath tries to add /dev/dm-4, but this isn't in /dev (and
dm-* is blacklisted in the default multipath.conf).

No idea, adding Nir.

 
Another thing is related to the engine.
In the oVirt web interface I can see the engine VM running, but I don't see its
IP address and FQDN in the list.
If I try to change something in the VM configuration I get the message:
 
 Cannot edit VM. This VM is not managed by the engine.
 
 Is this correct??

Yes. It will probably be possible in 3.6.

For now you can manually edit vm.conf, see e.g.:
http://lists.ovirt.org/pipermail/users/2014-March/022511.html
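
Roughly, the approach from that thread (paths and steps may differ between
versions, so please double-check there first):

systemctl stop ovirt-ha-agent
vi /etc/ovirt-hosted-engine/vm.conf    # adjust the key=value lines for the engine VM
systemctl start ovirt-ha-agent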
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Self hosted engine issues

2015-02-05 Thread Stefano Danzi


Hello,
I'm performing a self hosted engine installation for the first time.
The system now has only one host.

The host machine and the engine VM run CentOS 7.

The host uses LVM.

After the oVirt installation, on the host console I see this error every 5 minutes:

[ 1823.837020] device-mapper: table: 253:4: multipath: error getting device
[ 1823.837228] device-mapper: ioctl: error adding target to table

In /dev there aren't any devices that match 253:4.
It seems that multipath tries to add /dev/dm-4, but this isn't in /dev (and
dm-* is blacklisted in the default multipath.conf).
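
(I think the active blacklist can be dumped from the running daemon with
something like the following, though I'm not sure of the exact invocation on
CentOS 7:)

multipathd -k"show config" | grep -A 10 blacklist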

Another thing is related to the engine.
In the oVirt web interface I can see the engine VM running, but I don't see its
IP address and FQDN in the list.
If I try to change something in the VM configuration I get the message:

Cannot edit VM. This VM is not managed by the engine.

Is this correct??

Bye
 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Self hosted engine issues

2015-02-05 Thread Stefano Danzi


On 05/02/2015 11:52, Nir Soffer wrote:

- Original Message -

After the oVirt installation, on the host console I see this error every 5 minutes:

[ 1823.837020] device-mapper: table: 253:4: multipath: error getting device
[ 1823.837228] device-mapper: ioctl: error adding target to table
This may be caused by the fact that vdsm does not clean up properly after
deactivating storage domains. We have an open bug on this.

You may have an active LV using a non-existent multipath device.

Can you share with us the output of:

lsblk
multipath -ll
dmsetup table
cat /etc/multipath.conf
pvscan --cache > /dev/null && lvs

Nir


See above:

[root@ovirt01 etc]# lsblk
NAME                                 MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                    8:0    0 931,5G  0 disk
├─sda1                                 8:1    0   500M  0 part
│ └─md0                                9:0    0   500M  0 raid1 /boot
└─sda2                                 8:2    0   931G  0 part
  └─md1                                9:1    0 930,9G  0 raid1
    ├─centos_ovirt01-swap            253:0    0   7,9G  0 lvm   [SWAP]
    ├─centos_ovirt01-root            253:1    0    50G  0 lvm   /
    ├─centos_ovirt01-home            253:2    0    10G  0 lvm   /home
    └─centos_ovirt01-glusterOVEngine 253:3    0    50G  0 lvm   /home/glusterfs/engine
sdb                                    8:16   0 931,5G  0 disk
├─sdb1                                 8:17   0   500M  0 part
│ └─md0                                9:0    0   500M  0 raid1 /boot
└─sdb2                                 8:18   0   931G  0 part
  └─md1                                9:1    0 930,9G  0 raid1
    ├─centos_ovirt01-swap            253:0    0   7,9G  0 lvm   [SWAP]
    ├─centos_ovirt01-root            253:1    0    50G  0 lvm   /
    ├─centos_ovirt01-home            253:2    0    10G  0 lvm   /home
    └─centos_ovirt01-glusterOVEngine 253:3    0    50G  0 lvm   /home/glusterfs/engine


[root@ovirt01 etc]# multipath -ll
Feb 05 11:56:25 | multipath.conf +5, invalid keyword: getuid_callout
Feb 05 11:56:25 | multipath.conf +18, invalid keyword: getuid_callout
Feb 05 11:56:25 | multipath.conf +37, invalid keyword: getuid_callout

[root@ovirt01 etc]# dmsetup table
centos_ovirt01-home: 0 20971520 linear 9:1 121391104
centos_ovirt01-swap: 0 16531456 linear 9:1 2048
centos_ovirt01-root: 0 104857600 linear 9:1 16533504
centos_ovirt01-glusterOVEngine: 0 104857600 linear 9:1 142362624

[root@ovirt01 etc]# cat /etc/multipath.conf
# RHEV REVISION 1.1

defaults {
    polling_interval        5
    getuid_callout          "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
    no_path_retry           fail
    user_friendly_names     no
    flush_on_last_del       yes
    fast_io_fail_tmo        5
    dev_loss_tmo            30
    max_fds                 4096
}

devices {
device {
    vendor                  HITACHI
    product                 "DF.*"
    getuid_callout          "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
}
device {
    vendor                  COMPELNT
    product                 "Compellent Vol"
    no_path_retry           fail
}
device {
    # multipath.conf.default
    vendor                  DGC
    product                 ".*"
    product_blacklist       LUNZ
    path_grouping_policy    group_by_prio
    path_checker            emc_clariion
    hardware_handler        "1 emc"
    prio                    emc
    failback                immediate
    rr_weight               uniform
    # vdsm required configuration
    getuid_callout          "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
    features                "0"
    no_path_retry           fail
}
}

[root@ovirt01 etc]# pvscan --cache > /dev/null && lvs
  Incorrect metadata area header checksum on /dev/sda2 at offset 4096
  Incorrect metadata area header checksum on /dev/sda2 at offset 4096
  LV  VG Attr   LSize  Pool Origin Data%  Move Log 
Cpy%Sync Convert
  glusterOVEngine centos_ovirt01 -wi-ao 50,00g
  homecentos_ovirt01 -wi-ao 10,00g
  rootcentos_ovirt01 -wi-ao 50,00g
  swapcentos_ovirt01 -wi-ao  7,88g

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Self hosted engine issues

2015-02-05 Thread Stefano Danzi


On 05/02/2015 12:08, Nir Soffer wrote:


- Original Message -

From: Stefano Danzi s.da...@hawai.it
To: Nir Soffer nsof...@redhat.com
Cc: users@ovirt.org
Sent: Thursday, February 5, 2015 12:58:35 PM
Subject: Re: [ovirt-users] Self hosted engine issues


On 05/02/2015 11:52, Nir Soffer wrote:

- Original Message -

After the oVirt installation, on the host console I see this error every 5 minutes:

[ 1823.837020] device-mapper: table: 253:4: multipath: error getting device
[ 1823.837228] device-mapper: ioctl: error adding target to table
This may be caused by the fact that vdsm does not clean up properly after
deactivating storage domains. We have an open bug on this.

You may have an active LV using a non-existent multipath device.

Can you share with us the output of:

lsblk
multipath -ll
dmsetup table
cat /etc/multipath.conf
pvscan --cache > /dev/null && lvs

Nir


See above:

[root@ovirt01 etc]# lsblk
NAME                                 MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                    8:0    0 931,5G  0 disk
├─sda1                                 8:1    0   500M  0 part
│ └─md0                                9:0    0   500M  0 raid1 /boot
└─sda2                                 8:2    0   931G  0 part
  └─md1                                9:1    0 930,9G  0 raid1
    ├─centos_ovirt01-swap            253:0    0   7,9G  0 lvm   [SWAP]
    ├─centos_ovirt01-root            253:1    0    50G  0 lvm   /
    ├─centos_ovirt01-home            253:2    0    10G  0 lvm   /home
    └─centos_ovirt01-glusterOVEngine 253:3    0    50G  0 lvm   /home/glusterfs/engine
sdb                                    8:16   0 931,5G  0 disk
├─sdb1                                 8:17   0   500M  0 part
│ └─md0                                9:0    0   500M  0 raid1 /boot
└─sdb2                                 8:18   0   931G  0 part
  └─md1                                9:1    0 930,9G  0 raid1
    ├─centos_ovirt01-swap            253:0    0   7,9G  0 lvm   [SWAP]
    ├─centos_ovirt01-root            253:1    0    50G  0 lvm   /
    ├─centos_ovirt01-home            253:2    0    10G  0 lvm   /home
    └─centos_ovirt01-glusterOVEngine 253:3    0    50G  0 lvm   /home/glusterfs/engine

[root@ovirt01 etc]# multipath -ll
Feb 05 11:56:25 | multipath.conf +5, invalid keyword: getuid_callout
Feb 05 11:56:25 | multipath.conf +18, invalid keyword: getuid_callout
Feb 05 11:56:25 | multipath.conf +37, invalid keyword: getuid_callout

[root@ovirt01 etc]# dmsetup table
centos_ovirt01-home: 0 20971520 linear 9:1 121391104
centos_ovirt01-swap: 0 16531456 linear 9:1 2048
centos_ovirt01-root: 0 104857600 linear 9:1 16533504
centos_ovirt01-glusterOVEngine: 0 104857600 linear 9:1 142362624

[root@ovirt01 etc]# cat /etc/multipath.conf
# RHEV REVISION 1.1

defaults {
    polling_interval        5
    getuid_callout          "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
    no_path_retry           fail
    user_friendly_names     no
    flush_on_last_del       yes
    fast_io_fail_tmo        5
    dev_loss_tmo            30
    max_fds                 4096
}

devices {
device {
    vendor                  HITACHI
    product                 "DF.*"
    getuid_callout          "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
}
device {
    vendor                  COMPELNT
    product                 "Compellent Vol"
    no_path_retry           fail
}
device {
    # multipath.conf.default
    vendor                  DGC
    product                 ".*"
    product_blacklist       LUNZ
    path_grouping_policy    group_by_prio
    path_checker            emc_clariion
    hardware_handler        "1 emc"
    prio                    emc
    failback                immediate
    rr_weight               uniform
    # vdsm required configuration
    getuid_callout          "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
    features                "0"
    no_path_retry           fail
}
}

[root@ovirt01 etc]# pvscan --cache > /dev/null && lvs
Incorrect metadata area header checksum on /dev/sda2 at offset 4096
Incorrect metadata area header checksum on /dev/sda2 at offset 4096
LV  VG Attr   LSize  Pool Origin Data%  Move
Log Cpy%Sync Convert
glusterOVEngine centos_ovirt01 -wi-ao 50,00g
homecentos_ovirt01 -wi-ao 10,00g
rootcentos_ovirt01 -wi-ao 50,00g
swapcentos_ovirt01 -wi-ao  7,88g

Are you sure this is the correct host that the multipath error came from?

There are no multipath devices on this host and no oVirt storage domain LVs.

Nir



Yes, this is the host. I'm sure.
I haven't configured any oVirt storage domains yet (I only installed oVirt on
the host and the self-hosted engine VM).

Here is part of /var/log/messages:

Feb  5 10:04:43 ovirt01 kernel: device-mapper: table

Re: [ovirt-users] Self hosted engine issues

2015-02-05 Thread Nir Soffer
- Original Message -
 From: Stefano Danzi s.da...@hawai.it
 To: Nir Soffer nsof...@redhat.com
 Cc: users@ovirt.org
 Sent: Thursday, February 5, 2015 1:17:01 PM
 Subject: Re: [ovirt-users] Self hosted engine issues
 
 
On 05/02/2015 12:08, Nir Soffer wrote:
 
  - Original Message -
  From: Stefano Danzi s.da...@hawai.it
  To: Nir Soffer nsof...@redhat.com
  Cc: users@ovirt.org
  Sent: Thursday, February 5, 2015 12:58:35 PM
Subject: Re: [ovirt-users] Self hosted engine issues
 
 
On 05/02/2015 11:52, Nir Soffer wrote:
  - Original Message -
 
After the oVirt installation, on the host console I see this error every 5 minutes:
 
  [ 1823.837020] device-mapper: table: 253:4: multipath: error getting
  device
  [ 1823.837228] device-mapper: ioctl: error adding target to table
This may be caused by the fact that vdsm does not clean up properly after
deactivating storage domains. We have an open bug on this.

You may have an active LV using a non-existent multipath device.
 
  Can you share with us the output of:
 
  lsblk
  multipath -ll
  dmsetup table
  cat /etc/multipath.conf
pvscan --cache > /dev/null && lvs
 
  Nir
 
  See above:
 
  [root@ovirt01 etc]# lsblk
NAME                                 MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                    8:0    0 931,5G  0 disk
├─sda1                                 8:1    0   500M  0 part
│ └─md0                                9:0    0   500M  0 raid1 /boot
└─sda2                                 8:2    0   931G  0 part
  └─md1                                9:1    0 930,9G  0 raid1
    ├─centos_ovirt01-swap            253:0    0   7,9G  0 lvm   [SWAP]
    ├─centos_ovirt01-root            253:1    0    50G  0 lvm   /
    ├─centos_ovirt01-home            253:2    0    10G  0 lvm   /home
    └─centos_ovirt01-glusterOVEngine 253:3    0    50G  0 lvm   /home/glusterfs/engine
sdb                                    8:16   0 931,5G  0 disk
├─sdb1                                 8:17   0   500M  0 part
│ └─md0                                9:0    0   500M  0 raid1 /boot
└─sdb2                                 8:18   0   931G  0 part
  └─md1                                9:1    0 930,9G  0 raid1
    ├─centos_ovirt01-swap            253:0    0   7,9G  0 lvm   [SWAP]
    ├─centos_ovirt01-root            253:1    0    50G  0 lvm   /
    ├─centos_ovirt01-home            253:2    0    10G  0 lvm   /home
    └─centos_ovirt01-glusterOVEngine 253:3    0    50G  0 lvm   /home/glusterfs/engine
 
  [root@ovirt01 etc]# multipath -ll
  Feb 05 11:56:25 | multipath.conf +5, invalid keyword: getuid_callout
  Feb 05 11:56:25 | multipath.conf +18, invalid keyword: getuid_callout
  Feb 05 11:56:25 | multipath.conf +37, invalid keyword: getuid_callout
 
  [root@ovirt01 etc]# dmsetup table
  centos_ovirt01-home: 0 20971520 linear 9:1 121391104
  centos_ovirt01-swap: 0 16531456 linear 9:1 2048
  centos_ovirt01-root: 0 104857600 linear 9:1 16533504
  centos_ovirt01-glusterOVEngine: 0 104857600 linear 9:1 142362624
 
  [root@ovirt01 etc]# cat /etc/multipath.conf
  # RHEV REVISION 1.1
 
defaults {
    polling_interval        5
    getuid_callout          "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
    no_path_retry           fail
    user_friendly_names     no
    flush_on_last_del       yes
    fast_io_fail_tmo        5
    dev_loss_tmo            30
    max_fds                 4096
}

devices {
device {
    vendor                  HITACHI
    product                 "DF.*"
    getuid_callout          "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
}
device {
    vendor                  COMPELNT
    product                 "Compellent Vol"
    no_path_retry           fail
}
device {
    # multipath.conf.default
    vendor                  DGC
    product                 ".*"
    product_blacklist       LUNZ
    path_grouping_policy    group_by_prio
    path_checker            emc_clariion
    hardware_handler        "1 emc"
    prio                    emc
    failback                immediate
    rr_weight               uniform
    # vdsm required configuration
    getuid_callout          "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
    features                "0"
    no_path_retry           fail
}
}
 
[root@ovirt01 etc]# pvscan --cache > /dev/null && lvs
  Incorrect metadata area header checksum on /dev/sda2 at offset 4096
  Incorrect metadata area header checksum on /dev/sda2 at offset 4096
  LV  VG Attr   LSize  Pool Origin Data%
  Move
  Log Cpy%Sync Convert
  glusterOVEngine centos_ovirt01 -wi-ao 50,00g
  homecentos_ovirt01 -wi-ao 10,00g
  rootcentos_ovirt01 -wi-ao

Re: [ovirt-users] Self hosted engine issues

2015-02-05 Thread Nir Soffer
- Original Message -
 From: Stefano Danzi s.da...@hawai.it
 To: users@ovirt.org
 Sent: Thursday, February 5, 2015 12:30:46 PM
 Subject: [ovirt-users] Self hosted engine issues
 
 
 Hello,
 I'm performing a self hosted engine installation for the first time.
 The system now has only one host.
 
The host machine and the engine VM run CentOS 7.

The host uses LVM.
 
After the oVirt installation, on the host console I see this error every 5 minutes:
 
 [ 1823.837020] device-mapper: table: 253:4: multipath: error getting device
 [ 1823.837228] device-mapper: ioctl: error adding target to table

This may be caused by the fact that vdsm does not clean up properly after
deactivating storage domains. We have an open bug on this.

You may have an active LV using a non-existent multipath device.

Can you share with us the output of:

lsblk
multipath -ll
dmsetup table
cat /etc/multipath.conf
pvscan --cache > /dev/null && lvs
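
It may also help to compare the 253:N minor numbers from the error against the
device-mapper devices that actually exist, for example:

dmsetup info -c    # existing dm devices with their major:minor numbers
ls -l /dev/dm-*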

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Self hosted engine issues

2015-02-05 Thread Darrell Budic
You can also add “find_multipaths 1” to /etc/multipath.conf. This keeps
multipathd from treating non-multipath devices as multipath devices, which
avoids the error message and keeps multipathd from binding your normal
devices. I find it simpler than blacklisting, and it should still work if you
also have real multipath devices.

defaults {
    find_multipaths yes
    polling_interval 5
    …
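
After editing the file multipathd needs to pick up the change; off the top of
my head, either of these should do it on EL7:

multipathd -k"reconfigure"
# or simply
systemctl restart multipathd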


 On Feb 5, 2015, at 1:04 PM, George Skorup geo...@mwcomm.com wrote:
 
 I ran into this same problem after setting up my cluster on EL7. As has been 
 pointed out, the hosted-engine installer modifies /etc/multipath.conf.
 
 I appended:
 
 blacklist {
devnode *
 }
 
 to the end of the modified multipath.conf, which is what was there before the 
 engine installer, and the errors stopped.
 
I think multipathd was trying to map 253:3, which doesn't exist on my
systems. I have a similar setup, md RAID1 and LVM+XFS for Gluster.
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users