I wiped my test cluster and started over. This time I did not do the devnode blacklist; instead I set "find_multipaths yes" (which is also in the default EL7 multipath.conf), and that worked fine as well: the device-mapper system messages went away.
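
For reference, a minimal sketch of what that looks like in /etc/multipath.conf (only the one option shown; the rest of the file is left as shipped):

defaults {
      find_multipaths         yes
}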

On 2/6/2015 5:33 AM, Doron Fediuck wrote:
On 06/02/15 13:25, Fabian Deutsch wrote:
I think this bug is covering the cause for this:

https://bugzilla.redhat.com/show_bug.cgi?id=1173290

- fabian

Thanks Fabian.
----- Original Message -----
Please open a bug Stefano.

Thanks,
Doron

On 06/02/15 11:19, Stefano Danzi wrote:
This solved the issue!!!
Thanks!!

If oVirt rewrites /etc/multipath.conf, it might be useful to open a bug...
What do you all think about it?

On 05/02/2015 20.36, Darrell Budic wrote:
You can also add "find_multipaths 1" to /etc/multipath.conf. This keeps
multipathd from claiming non-multipath devices as multipath devices,
which avoids the error messages and keeps multipathd from binding your
normal devices. I find it simpler than blacklisting, and it should still
work if you also have real multipath devices.

defaults {
      find_multipaths         yes
      polling_interval        5
      …
}
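
A note for anyone trying this: multipathd has to re-read its
configuration before the change takes effect. On EL7 that should be
something along the lines of (assuming the stock multipathd systemd unit):

      systemctl restart multipathd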


On Feb 5, 2015, at 1:04 PM, George Skorup <geo...@mwcomm.com> wrote:

I ran into this same problem after setting up my cluster on EL7. As
has been pointed out, the hosted-engine installer modifies
/etc/multipath.conf.

I appended:

blacklist {
         devnode "*"
}

to the end of the modified multipath.conf (which is what was there
before the engine installer ran), and the errors stopped.
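
For what it's worth, the running configuration (including the merged
blacklist) can be dumped to double-check that the change took effect,
something like this (assuming the stock device-mapper-multipath tools):

      multipathd show config | grep -A 2 blacklist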

I think it was trying to map 253:3, which doesn't exist on my
systems. I have a similar setup: md RAID1 and LVM+XFS for Gluster.
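
For anyone wanting to trace a major:minor pair like 253:3 back to a
device, something along these lines should show which device-mapper
node, if any, it belongs to:

      dmsetup ls            # lists dm devices with their (major, minor) pairs
      ls -l /dev/dm-*       # the minor number matches the dm-N name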