Unfortunately, I can no longer run those commands on that rbd5, as I had to
delete it; I couldn't 'resurrect' it, at least not within a reasonable time.

Here is the output for another image, which is 2TB big:

ceph-admin@ceph-client-01:~$ sudo blockdev --getsz --getss --getbsz /dev/rbd1
4194304000
512
512
ceph-admin@ceph-client-01:~$ xfs_info /dev/rbd1
meta-data=/dev/rbd2              isize=256    agcount=8127, agsize=64512 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=524288000, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
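
As a rough sanity check on those numbers (assuming I am reading the two
outputs correctly, and that they both refer to the same image), the sector
count from blockdev and the block count from xfs_info should describe the
same size:

    # device size according to the kernel: 512-byte sectors from blockdev --getsz
    echo $(( 4194304000 * 512 ))    # 2147483648000 bytes
    # filesystem size according to XFS: blocks * bsize from xfs_info
    echo $(( 524288000 * 4096 ))    # 2147483648000 bytes

For this image the two match, so the mismatch was presumably only on the old
rbd5.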


I know rbd can also shrink an image, but I'm sure I haven't shrunk this one.
What I did try, accidentally, was to resize the image to the same size it
already had, and that operation failed after running for some time. I now
suspect that failed resize was the culprit for its malfunctioning.
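
If it helps, I suppose I could have checked for a size mismatch before
deleting rbd5 by comparing the size Ceph has recorded for the image with what
the mapped device reports (the pool and image names below are just
placeholders, not the real ones):

    # size recorded in the RBD image header
    rbd info rbd/my-image | grep size
    # size of the mapped block device, in bytes, as the kernel sees it
    sudo blockdev --getsize64 /dev/rbd5

If those two had disagreed after the interrupted resize, that alone would
explain the 'attempt to access beyond end of device' errors.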

Any (additional) advice on how to prevent this type of issue in the future?
Should the resize and the xfs_growfs be executed with specific parameters, for
a better configuration of the image and / or filesystem?
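
For reference, the sequence I ran is essentially the one sketched below
(pool, image, size and mount point are placeholders, not the real ones), in
case something about it is obviously wrong:

    # grow the image on the Ceph side (size is given in MB on hammer, as far as I know)
    rbd resize --size 2048000 rbd/my-image
    # then grow XFS to fill the already mapped and mounted device
    sudo xfs_growfs -d /mnt/my-image

My understanding is that xfs_growfs only ever grows the filesystem, so as
long as the image is never shrunk underneath it, the filesystem should stay
within the device.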

Thank you very much for your help!

Regards,
Bogdan


On Thu, Nov 12, 2015 at 11:00 PM, Jan Schermer <j...@schermer.cz> wrote:

> Can you post the output of:
>
> blockdev --getsz --getss --getbsz /dev/rbd5
> and
> xfs_info /dev/rbd5
>
> rbd resize can actually (?) shrink the image as well - is it possible that
> the device was actually larger and you shrunk it?
>
> Jan
>
> On 12 Nov 2015, at 21:46, Bogdan SOLGA <bogdan.so...@gmail.com> wrote:
>
> By running rbd resize
> <http://docs.ceph.com/docs/master/rbd/rados-rbd-cmds/> and then
> 'xfs_growfs -d' on the filesystem.
>
> Is there a better way to resize an RBD image and the filesystem?
>
> On Thu, Nov 12, 2015 at 10:35 PM, Jan Schermer <j...@schermer.cz> wrote:
>
>>
>> On 12 Nov 2015, at 20:49, Bogdan SOLGA <bogdan.so...@gmail.com> wrote:
>>
>> Hello Jan!
>>
>> Thank you for your advice, first of all!
>>
>> The filesystem was created using mkfs.xfs, after creating the RBD block
>> device and mapping it on the Ceph client. I didn't specify any parameters
>> when I created the filesystem; I just ran mkfs.xfs on the mapped device.
>>
>> Regarding your point about the filesystem thinking the block device should
>> be larger than it is - I initially created that image as a 2GB image, and
>> then resized it to be much bigger. Could this be the issue?
>>
>>
>> Sounds more than likely :-) How exactly did you grow it?
>>
>> Jan
>>
>>
>> There are several RBD images mounted on one Ceph client, but only one of
>> them had issues. I have made a clone, and I will try running fsck on it.
>>
>> Fortunately it's not important data, just testing data. If I can't repair
>> it, I will trash and re-create it, of course.
>>
>> Thank you, once again!
>>
>>
>>
>> On Thu, Nov 12, 2015 at 9:28 PM, Jan Schermer <j...@schermer.cz> wrote:
>>
>>> How did you create filesystems and/or partitions on this RBD block
>>> device?
>>> The obvious causes would be
>>> 1) you partitioned it and the partition on which you ran mkfs points, or
>>> pointed during mkfs, outside the block device (this happens if, for
>>> example, you automate this and confuse sectors with cylinders, or if you
>>> copied the partition table with dd or from some image)
>>> or
>>> 2) mkfs created the filesystem with pointers outside of the block device
>>> for some other reason (bug?)
>>> or
>>> 3) this RBD device is a snapshot that got corrupted (or wasn't
>>> snapshotted in a crash-consistent state and you got "lucky") and some
>>> reference points to a nonsensical block number (fsck could fix this, but I
>>> wouldn't trust the data integrity anymore)
>>>
>>> Basically the filesystem thinks the block device should be larger than
>>> it is and tries to reach beyond.
>>>
>>> Is this just one machine or RBD image or is there more?
>>>
>>> I'd first create a snapshot and then try running fsck on it; it should
>>> hopefully tell you whether there's a problem in the setup or a corruption.
>>>
>>> If it's not important data and it's just one instance of this problem
>>> then I'd just trash and recreate it.
>>>
>>> Jan
>>>
>>> On 12 Nov 2015, at 20:14, Bogdan SOLGA <bogdan.so...@gmail.com> wrote:
>>>
>>> Hello everyone!
>>>
>>> We have a recently installed Ceph cluster (v 0.94.5, Ubuntu 14.04), and
>>> today I noticed a lot of 'attempt to access beyond end of device' messages
>>> in the /var/log/syslog file. They are related to a mounted RBD image, and
>>> have the following format:
>>>
>>>
>>> Nov 12 21:06:44 ceph-client-01 kernel: [438507.952532] attempt to access beyond end of device
>>> Nov 12 21:06:44 ceph-client-01 kernel: [438507.952534] rbd5: rw=33, want=6193176, limit=4194304
>>>
>>> After restarting that Ceph client, I see a lot of 'metadata I/O error'
>>> messages in the boot log:
>>>
>>> XFS (rbd5): metadata I/O error: block 0x46e001 ("xfs_buf_iodone_callbacks") error 5 numblks 1
>>>
>>> Any idea on why these messages are shown? The health of the cluster
>>> shows as OK, and I can access that block device without (apparent) issues...
>>>
>>> Thank you!
>>>
>>> Regards,
>>> Bogdan
>>>
>>>
>>>
>>
>>
>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
