You can force an rbd unmap with the command below:

rbd unmap -o force $DEV

If it still doesn't unmap, then something on that node still has the device 
open (pending I/O or an open handle) and is blocking the unmap.
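
If you want to see what is actually holding the device open before forcing 
anything, something like this should show it (assuming the mapping is 
/dev/rbd0 -- adjust to your device):

lsof /dev/rbd0
fuser -vm /dev/rbd0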

As Ilya mentioned, for good measure you should also check whether LVM is in 
use on this RBD volume. If it is, that could be preventing you from 
unmapping the RBD device normally.
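
A rough way to check for LVM on top of the device (again assuming /dev/rbd0; 
the VG name below is a placeholder):

lsblk /dev/rbd0        # any LVs stacked on the device show up as children
pvs | grep rbd0        # is the device in use as an LVM physical volume?
vgchange -an <vgname>  # if so, deactivate that VG before unmapping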

________________________________
From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of David Turner 
<drakonst...@gmail.com>
Sent: Friday, March 1, 2019 8:03 PM
To: solarflow99
Cc: ceph-users
Subject: Re: [ceph-users] rbd unmap fails with error: rbd: sysfs write failed 
rbd: unmap failed: (16) Device or resource busy

True, but not before you unmap it from the previous server. It's like 
physically connecting a hard drive to two servers at the same time: neither 
knows what the other is doing to it, and that can corrupt your data. You 
should always make sure to unmap an rbd before mapping it to another server.
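
For what it's worth, the safe hand-off looks roughly like this (the mount 
point is made up for the example; the pool/image name is taken from the 
thread below):

# on the old server
umount /mnt/hdb-backup
rbd unmap /dev/rbd0

# on the new server
rbd map hdb-backup/ld2110     # note the device name it prints, e.g. /dev/rbd0
mount /dev/rbd0 /mnt/hdb-backup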

On Fri, Mar 1, 2019, 6:28 PM solarflow99 <solarflo...@gmail.com> wrote:
It has to be mounted from somewhere; if that server goes offline, you need to 
mount it from somewhere else, right?


On Thu, Feb 28, 2019 at 11:15 PM David Turner <drakonst...@gmail.com> wrote:
Why are you mapping the same rbd to multiple servers?

On Wed, Feb 27, 2019, 9:50 AM Ilya Dryomov <idryo...@gmail.com> wrote:
On Wed, Feb 27, 2019 at 12:00 PM Thomas <74cmo...@gmail.com> wrote:
>
> Hi,
> I have noticed an error when writing to a mapped RBD.
> Therefore I unmounted the block device.
> Then I tried to unmap it w/o success:
> ld2110:~ # rbd unmap /dev/rbd0
> rbd: sysfs write failed
> rbd: unmap failed: (16) Device or resource busy
>
> The same block device is mapped on another client and there are no issues:
> root@ld4257:~# rbd info hdb-backup/ld2110
> rbd image 'ld2110':
>         size 7.81TiB in 2048000 objects
>         order 22 (4MiB objects)
>         block_name_prefix: rbd_data.3cda0d6b8b4567
>         format: 2
>         features: layering
>         flags:
>         create_timestamp: Fri Feb 15 10:53:50 2019
> root@ld4257:~# rados -p hdb-backup  listwatchers rbd_data.3cda0d6b8b4567
> error listing watchers hdb-backup/rbd_data.3cda0d6b8b4567: (2) No such
> file or directory
> root@ld4257:~# rados -p hdb-backup  listwatchers rbd_header.3cda0d6b8b4567
> watcher=10.76.177.185:0/1144812735 client.21865052 cookie=1
> watcher=10.97.206.97:0/4023931980 client.18484780 cookie=18446462598732841027
>
>
> Question:
> How can I force-unmap the RBD on client ld2110 (= 10.76.177.185)?

Hi Thomas,

It appears that /dev/rbd0 is still open on that node.

Was the unmount successful?  Which filesystem (ext4, xfs, etc)?

What is the output of "ps aux | grep rbd" on that node?

Try lsof, fuser, check for LVM volumes and multipath -- these have been
reported to cause this issue previously:

  http://tracker.ceph.com/issues/12763
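
For reference, generic checks along those lines might look like this (none of 
this is specific to your cluster; substitute your device name):

multipath -ll                   # any multipath maps built on top of the rbd?
ls /sys/block/rbd0/holders/     # kernel's view of what is stacked on rbd0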

Thanks,

                Ilya
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
