On Sat, Nov 14, 2015 at 4:30 AM, Ramakrishna Nishtala (rnishtal) wrote:
> Hi,
>
> It appears that multipath support is available with a 512-byte sector
> size but not with 4k. This is on RHEL 7.1. Can someone please confirm?
>
>
>
> 4k sector size
> ==============
>
> Nov 13 16:20:16 colusa5-ceph kernel: device-mapper: table: 253:60: len=5119745 not aligned to h/w logical block size 4096 of dm-16
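>
> If I am reading the device-mapper check right, the numbers alone show
> the problem: table segment lengths are counted in 512-byte sectors and
> must be a multiple of the logical block size, i.e. 4096 / 512 = 8
> sectors here, and 5119745 is not. A quick check of the arithmetic:
>
> $ echo $(( 5119745 % (4096 / 512) ))
> 1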
>
>
>
> [ceph-node][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
> [ceph-node][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/partprobe /dev/mapper/mpathba
> [ceph-node][WARNIN] device-mapper: resume ioctl on mpathba2 failed: Invalid argument
> [ceph-node][WARNIN] device-mapper: remove ioctl on mpathba2 failed: No such device or address
The error above does not look like anything Ceph-specific. If running
partprobe on your device is failing, you have bigger problems with
whatever is providing your storage (partprobe is just the Linux tool
for re-detecting which partitions are on a device).
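A few standard commands would narrow it down; a minimal sketch, using
util-linux and multipath-tools and the device name from your log:

  # re-run the step ceph-disk failed on, and capture the exit status
  partprobe /dev/mapper/mpathba ; echo $?

  # logical and physical sector sizes as the kernel sees them
  blockdev --getss --getpbsz /dev/mapper/mpathba

  # path states and topology of the multipath map
  multipath -ll mpathba

  # partition boundaries in sectors, to spot the misaligned segment
  parted /dev/mapper/mpathba unit s print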
John
>
> [ceph-node][WARNIN] Traceback (most recent call last):
> [ceph-node][WARNIN]   File "/usr/sbin/ceph-disk", line 3576, in <module>
> [ceph-node][WARNIN]     main(sys.argv[1:])
> [ceph-node][WARNIN]   File "/usr/sbin/ceph-disk", line 3530, in main
> [ceph-node][WARNIN]     args.func(args)
> [ceph-node][WARNIN]   File "/usr/sbin/ceph-disk", line 1863, in main_prepare
> [ceph-node][WARNIN]     luks=luks
> [ceph-node][WARNIN]   File "/usr/sbin/ceph-disk", line 1465, in prepare_journal
> [ceph-node][WARNIN]     return prepare_journal_dev(data, journal, journal_size, journal_uuid, journal_dm_keypath, cryptsetup_parameters, luks)
> [ceph-node][WARNIN]   File "/usr/sbin/ceph-disk", line 1419, in prepare_journal_dev
> [ceph-node][WARNIN]     raise Error(e)
> [ceph-node][WARNIN] __main__.Error: Error: Command '['/usr/sbin/partprobe', '/dev/mapper/mpathba']' returned non-zero exit status 1
>
> [ceph-node][ERROR ] RuntimeError: command returned non-zero exit status: 1
> [ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/mapper/mpathba
> [ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
>
>
>
> 512-byte sector size
> ====================
>
> [ceph-node1][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/dm-20
> [ceph-node1][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/dm-20
>
> [ceph-node1][DEBUG ] meta-data=/dev/dm-20             isize=2048   agcount=4, agsize=242908597 blks
> [ceph-node1][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=1
> [ceph-node1][DEBUG ]          =                       crc=0        finobt=0
> [ceph-node1][DEBUG ] data     =                       bsize=4096   blocks=971634385, imaxpct=5
> [ceph-node1][DEBUG ]          =                       sunit=0      swidth=0 blks
> [ceph-node1][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> [ceph-node1][DEBUG ] log      =internal log           bsize=4096   blocks=474430, version=2
> [ceph-node1][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
> [ceph-node1][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
>
> [ceph-node1][WARNIN] DEBUG:ceph-disk:Mounting /dev/dm-20 on /var/lib/ceph/tmp/mnt._dvVgI with options inode64,noatime,logbsize=256k
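>
> Note the sectsz=512 above: mkfs.xfs picked up the 512-byte sector size
> this map advertises. On the 4k maps the same run would need a 4096-byte
> sector size, which mkfs.xfs can also be given explicitly. A sketch only,
> with a hypothetical partition name standing in for the 4k device:
>
> $ mkfs -t xfs -f -i size=2048 -s size=4096 -- /dev/mapper/mpathba1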
>
> Regards,
>
> Rama
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com