Thank you very much, Jason!

I've updated the ticket with new data, but I'm not sure whether I attached
the logs correctly. Please let me know if anything more is needed.

2016-01-14 23:29 GMT+08:00 Jason Dillaman <dilla...@redhat.com>:
> I would need to see the log from the point where you've frozen the disks 
> until the point when you attempt to create a snapshot.  The logs below just 
> show normal IO.
>
> I've opened a new ticket [1] where you can attach the logs.
>
> [1] http://tracker.ceph.com/issues/14373
>
> --
>
> Jason Dillaman
>
>
> ----- Original Message -----
>> From: "Василий Ангапов" <anga...@gmail.com>
>> To: "Jason Dillaman" <dilla...@redhat.com>
>> Cc: "ceph-users" <ceph-users@lists.ceph.com>
>> Sent: Wednesday, January 13, 2016 9:00:03 PM
>> Subject: Re: [ceph-users] How to do quiesced rbd snapshot in libvirt?
>>
>> Thanks, Jason, I forgot about this trick!
>>
>> These are the qemu rbd logs (the last 200 lines). These lines repeat
>> endlessly while the snapshot creation hangs:
>> 2016-01-14 04:56:34.469568 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::finalize() copied resulting 4096 bytes to
>> 0x55bcc86c6000
>> 2016-01-14 04:56:34.469576 7ff80e93e700 20 librbd::AsyncOperation:
>> 0x55bccafd3eb0 finish_op
>> 2016-01-14 04:56:34.469719 7ff810942700 20 librbdwriteback: aio_cb completing
>> 2016-01-14 04:56:34.469732 7ff810942700 20 librbdwriteback: aio_cb finished
>> 2016-01-14 04:56:34.469739 7ff80e93e700 20 librbd::AioRequest:
>> should_complete 0x55bcc41a13c0
>> rbd_data.2f31e252fa88e4.0000000000000130 1634304~36864 r = 36864
>> 2016-01-14 04:56:34.469745 7ff80e93e700 20 librbd::AioRequest:
>> should_complete 0x55bcc41a13c0 READ_FLAT
>> 2016-01-14 04:56:34.469747 7ff80e93e700 20 librbd::AioRequest:
>> complete 0x55bcc41a13c0
>> 2016-01-14 04:56:34.469748 7ff80e93e700 10 librbd::AioCompletion:
>> C_AioRead::finish() 0x55bcd00c0700 r = 36864
>> 2016-01-14 04:56:34.469750 7ff80e93e700 10 librbd::AioCompletion:  got
>> {} for [0,36864] bl 36864
>> 2016-01-14 04:56:34.469769 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::complete_request() 0x55bccafd3000
>> complete_cb=0x55bcbee4f440 pending 1
>> 2016-01-14 04:56:34.469772 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::finalize() 0x55bccafd3000 rval 36864 read_buf
>> 0x55bcc4f8a000 read_bl 0
>> 2016-01-14 04:56:34.469787 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::finalize() copied resulting 36864 bytes to
>> 0x55bcc4f8a000
>> 2016-01-14 04:56:34.469789 7ff80e93e700 20 librbd::AsyncOperation:
>> 0x55bccafd3130 finish_op
>> 2016-01-14 04:56:34.469847 7ff810942700 20 librbdwriteback: aio_cb completing
>> 2016-01-14 04:56:34.469865 7ff810942700 20 librbdwriteback: aio_cb finished
>> 2016-01-14 04:56:34.469869 7ff80e93e700 20 librbd::AioRequest:
>> should_complete 0x55bcc41a00a0
>> rbd_data.2f31e252fa88e4.0000000000000130 1888256~4096 r = 4096
>> 2016-01-14 04:56:34.469874 7ff80e93e700 20 librbd::AioRequest:
>> should_complete 0x55bcc41a00a0 READ_FLAT
>> 2016-01-14 04:56:34.469876 7ff80e93e700 20 librbd::AioRequest:
>> complete 0x55bcc41a00a0
>> 2016-01-14 04:56:34.469877 7ff80e93e700 10 librbd::AioCompletion:
>> C_AioRead::finish() 0x55bcd00c2aa0 r = 4096
>> 2016-01-14 04:56:34.469880 7ff80e93e700 10 librbd::AioCompletion:  got
>> {} for [0,4096] bl 4096
>> 2016-01-14 04:56:34.469884 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::complete_request() 0x55bccafd0d80
>> complete_cb=0x55bcbee4f440 pending 1
>> 2016-01-14 04:56:34.469886 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::finalize() 0x55bccafd0d80 rval 4096 read_buf
>> 0x55bcc45c8000 read_bl 0
>> 2016-01-14 04:56:34.469890 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::finalize() copied resulting 4096 bytes to
>> 0x55bcc45c8000
>> 2016-01-14 04:56:34.469892 7ff80e93e700 20 librbd::AsyncOperation:
>> 0x55bccafd0eb0 finish_op
>> 2016-01-14 04:56:34.470023 7ff810942700 20 librbdwriteback: aio_cb completing
>> 2016-01-14 04:56:34.470032 7ff810942700 20 librbdwriteback: aio_cb finished
>> 2016-01-14 04:56:34.470038 7ff80e93e700 20 librbd::AioRequest:
>> should_complete 0x55bcc419f320
>> rbd_data.2f31e252fa88e4.0000000000000130 1900544~20480 r = 20480
>> 2016-01-14 04:56:34.470044 7ff80e93e700 20 librbd::AioRequest:
>> should_complete 0x55bcc419f320 READ_FLAT
>> 2016-01-14 04:56:34.470045 7ff80e93e700 20 librbd::AioRequest:
>> complete 0x55bcc419f320
>> 2016-01-14 04:56:34.470046 7ff80e93e700 10 librbd::AioCompletion:
>> C_AioRead::finish() 0x55bcd00c2bc0 r = 20480
>> 2016-01-14 04:56:34.470047 7ff80e93e700 10 librbd::AioCompletion:  got
>> {} for [0,20480] bl 20480
>> 2016-01-14 04:56:34.470051 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::complete_request() 0x55bccafd0900
>> complete_cb=0x55bcbee4f440 pending 1
>> 2016-01-14 04:56:34.470052 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::finalize() 0x55bccafd0900 rval 20480 read_buf
>> 0x55bcc6741000 read_bl 0
>> 2016-01-14 04:56:34.470062 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::finalize() copied resulting 20480 bytes to
>> 0x55bcc6741000
>> 2016-01-14 04:56:34.470064 7ff80e93e700 20 librbd::AsyncOperation:
>> 0x55bccafd0a30 finish_op
>> 2016-01-14 04:56:34.470176 7ff810942700 20 librbdwriteback: aio_cb completing
>> 2016-01-14 04:56:34.470191 7ff810942700 20 librbdwriteback: aio_cb finished
>> 2016-01-14 04:56:34.470193 7ff810942700 20 librbdwriteback: aio_cb completing
>> 2016-01-14 04:56:34.470197 7ff80e93e700 20 librbd::AioRequest:
>> should_complete 0x55bcc419f8c0
>> rbd_data.2f31e252fa88e4.0000000000000130 2502656~4096 r = 4096
>> 2016-01-14 04:56:34.470201 7ff80e93e700 20 librbd::AioRequest:
>> should_complete 0x55bcc419f8c0 READ_FLAT
>> 2016-01-14 04:56:34.470202 7ff810942700 20 librbdwriteback: aio_cb finished
>> 2016-01-14 04:56:34.470203 7ff80e93e700 20 librbd::AioRequest:
>> complete 0x55bcc419f8c0
>> 2016-01-14 04:56:34.470205 7ff80e93e700 10 librbd::AioCompletion:
>> C_AioRead::finish() 0x55bcd00c03e0 r = 4096
>> 2016-01-14 04:56:34.470208 7ff80e93e700 10 librbd::AioCompletion:  got
>> {} for [0,4096] bl 4096
>> 2016-01-14 04:56:34.470214 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::complete_request() 0x55bccafd1980
>> complete_cb=0x55bcbee4f440 pending 1
>> 2016-01-14 04:56:34.470216 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::finalize() 0x55bccafd1980 rval 4096 read_buf
>> 0x55bcc1a2d000 read_bl 0
>> 2016-01-14 04:56:34.470220 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::finalize() copied resulting 4096 bytes to
>> 0x55bcc1a2d000
>> 2016-01-14 04:56:34.470224 7ff80e93e700 20 librbd::AsyncOperation:
>> 0x55bccafd1ab0 finish_op
>> 2016-01-14 04:56:34.470235 7ff80e93e700 20 librbd::AioRequest:
>> should_complete 0x55bcc41a1960
>> rbd_data.2f31e252fa88e4.0000000000000130 2678784~12288 r = 12288
>> 2016-01-14 04:56:34.470241 7ff80e93e700 20 librbd::AioRequest:
>> should_complete 0x55bcc41a1960 READ_FLAT
>> 2016-01-14 04:56:34.470242 7ff80e93e700 20 librbd::AioRequest:
>> complete 0x55bcc41a1960
>> 2016-01-14 04:56:34.470244 7ff80e93e700 10 librbd::AioCompletion:
>> C_AioRead::finish() 0x55bcd00c11e0 r = 12288
>> 2016-01-14 04:56:34.470245 7ff80e93e700 10 librbd::AioCompletion:  got
>> {} for [0,12288] bl 12288
>> 2016-01-14 04:56:34.470248 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::complete_request() 0x55bccafd1b00
>> complete_cb=0x55bcbee4f440 pending 1
>> 2016-01-14 04:56:34.470250 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::finalize() 0x55bccafd1b00 rval 12288 read_buf
>> 0x55bcc28d2000 read_bl 0
>> 2016-01-14 04:56:34.470257 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::finalize() copied resulting 12288 bytes to
>> 0x55bcc28d2000
>> 2016-01-14 04:56:34.470275 7ff80e93e700 20 librbd::AsyncOperation:
>> 0x55bccafd1c30 finish_op
>> 2016-01-14 04:56:34.470294 7ff810942700 20 librbdwriteback: aio_cb completing
>> 2016-01-14 04:56:34.470308 7ff810942700 20 librbdwriteback: aio_cb finished
>> 2016-01-14 04:56:34.470313 7ff80e93e700 20 librbd::AioRequest:
>> should_complete 0x55bcc419fc20
>> rbd_data.2f31e252fa88e4.0000000000000130 2740224~53248 r = 53248
>> 2016-01-14 04:56:34.470317 7ff80e93e700 20 librbd::AioRequest:
>> should_complete 0x55bcc419fc20 READ_FLAT
>> 2016-01-14 04:56:34.470318 7ff80e93e700 20 librbd::AioRequest:
>> complete 0x55bcc419fc20
>> 2016-01-14 04:56:34.470319 7ff80e93e700 10 librbd::AioCompletion:
>> C_AioRead::finish() 0x55bcd00c1c00 r = 53248
>> 2016-01-14 04:56:34.470321 7ff80e93e700 10 librbd::AioCompletion:  got
>> {} for [0,53248] bl 53248
>> 2016-01-14 04:56:34.470324 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::complete_request() 0x55bccafd1680
>> complete_cb=0x55bcbee4f440 pending 1
>> 2016-01-14 04:56:34.470326 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::finalize() 0x55bccafd1680 rval 53248 read_buf
>> 0x55bcc6280000 read_bl 0
>> 2016-01-14 04:56:34.470342 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::finalize() copied resulting 53248 bytes to
>> 0x55bcc6280000
>> 2016-01-14 04:56:34.470344 7ff80e93e700 20 librbd::AsyncOperation:
>> 0x55bccafd17b0 finish_op
>> 2016-01-14 04:56:34.470407 7ff810942700 20 librbdwriteback: aio_cb completing
>> 2016-01-14 04:56:34.470415 7ff810942700 20 librbdwriteback: aio_cb finished
>> 2016-01-14 04:56:34.470422 7ff80e93e700 20 librbd::AioRequest:
>> should_complete 0x55bcc41a1180
>> rbd_data.2f31e252fa88e4.0000000000000130 2801664~57344 r = 57344
>> 2016-01-14 04:56:34.470427 7ff80e93e700 20 librbd::AioRequest:
>> should_complete 0x55bcc41a1180 READ_FLAT
>> 2016-01-14 04:56:34.470428 7ff80e93e700 20 librbd::AioRequest:
>> complete 0x55bcc41a1180
>> 2016-01-14 04:56:34.470430 7ff80e93e700 10 librbd::AioCompletion:
>> C_AioRead::finish() 0x55bcc1ee5f40 r = 57344
>> 2016-01-14 04:56:34.470433 7ff80e93e700 10 librbd::AioCompletion:  got
>> {} for [0,57344] bl 57344
>> 2016-01-14 04:56:34.470437 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::complete_request() 0x55bccafd3c00
>> complete_cb=0x55bcbee4f440 pending 1
>> 2016-01-14 04:56:34.470439 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::finalize() 0x55bccafd3c00 rval 57344 read_buf
>> 0x55bcc2914000 read_bl 0
>> 2016-01-14 04:56:34.470472 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::finalize() copied resulting 57344 bytes to
>> 0x55bcc2914000
>> 2016-01-14 04:56:34.470485 7ff80e93e700 20 librbd::AsyncOperation:
>> 0x55bccafd3d30 finish_op
>> 2016-01-14 04:56:34.470667 7ff810942700 20 librbdwriteback: aio_cb completing
>> 2016-01-14 04:56:34.470693 7ff810942700 20 librbdwriteback: aio_cb finished
>> 2016-01-14 04:56:34.470696 7ff80e93e700 20 librbd::AioRequest:
>> should_complete 0x55bcc41a0be0
>> rbd_data.2f31e252fa88e4.0000000000000130 2867200~32768 r = 32768
>> 2016-01-14 04:56:34.470701 7ff80e93e700 20 librbd::AioRequest:
>> should_complete 0x55bcc41a0be0 READ_FLAT
>> 2016-01-14 04:56:34.470702 7ff80e93e700 20 librbd::AioRequest:
>> complete 0x55bcc41a0be0
>> 2016-01-14 04:56:34.470703 7ff80e93e700 10 librbd::AioCompletion:
>> C_AioRead::finish() 0x55bcc1ee5040 r = 32768
>> 2016-01-14 04:56:34.470704 7ff80e93e700 10 librbd::AioCompletion:  got
>> {} for [0,32768] bl 32768
>> 2016-01-14 04:56:34.470707 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::complete_request() 0x55bcc5d8ae80
>> complete_cb=0x55bcbee4f440 pending 1
>> 2016-01-14 04:56:34.470708 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::finalize() 0x55bcc5d8ae80 rval 32768 read_buf
>> 0x55bcca55c000 read_bl 0
>> 2016-01-14 04:56:34.470722 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::finalize() copied resulting 32768 bytes to
>> 0x55bcca55c000
>> 2016-01-14 04:56:34.470724 7ff80e93e700 20 librbd::AsyncOperation:
>> 0x55bcc5d8afb0 finish_op
>> 2016-01-14 04:56:34.470796 7ff810942700 20 librbdwriteback: aio_cb completing
>> 2016-01-14 04:56:34.470854 7ff810942700 20 librbdwriteback: aio_cb finished
>> 2016-01-14 04:56:34.470859 7ff80e93e700 20 librbd::AioRequest:
>> should_complete 0x55bcc41a0880
>> rbd_data.2f31e252fa88e4.0000000000000130 2912256~233472 r = 233472
>> 2016-01-14 04:56:34.470864 7ff80e93e700 20 librbd::AioRequest:
>> should_complete 0x55bcc41a0880 READ_FLAT
>> 2016-01-14 04:56:34.470866 7ff80e93e700 20 librbd::AioRequest:
>> complete 0x55bcc41a0880
>> 2016-01-14 04:56:34.470869 7ff80e93e700 10 librbd::AioCompletion:
>> C_AioRead::finish() 0x55bcc1ee36c0 r = 233472
>> 2016-01-14 04:56:34.470871 7ff80e93e700 10 librbd::AioCompletion:  got
>> {} for [0,233472] bl 233472
>> 2016-01-14 04:56:34.470874 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::complete_request() 0x55bcc5d88000
>> complete_cb=0x55bcbee4f440 pending 1
>> 2016-01-14 04:56:34.470876 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::finalize() 0x55bcc5d88000 rval 233472 read_buf
>> 0x55bcc3d66000 read_bl 0
>> 2016-01-14 04:56:34.471020 7ff80e93e700 20 librbd::AioCompletion:
>> AioCompletion::finalize() copied resulting 233472 bytes to
>> 0x55bcc3d66000
>> 2016-01-14 04:56:34.471024 7ff80e93e700 20 librbd::AsyncOperation:
>> 0x55bcc5d88130 finish_op
>> 2016-01-14 04:56:35.365701 7ff80f13f700 20 librbdwriteback: write will
>> wait for result 0x55bcc3c73e40
>> 2016-01-14 04:56:35.365709 7ff80f13f700 10 librbd::ImageCtx:
>> prune_parent_extents image overlap 0, object overlap 0 from image
>> extents []
>> 2016-01-14 04:56:35.365724 7ff80f13f700 20 librbd::AioRequest: send
>> 0x55bcc7de8ff0 rbd_data.2f31e252fa88e4.0000000000000000 1052672~4096
>> 2016-01-14 04:56:35.365727 7ff80f13f700 20 librbd::AioRequest:
>> send_write 0x55bcc7de8ff0 rbd_data.2f31e252fa88e4.0000000000000000
>> 1052672~4096
>> 2016-01-14 04:56:35.365810 7ff80f13f700 20 librbdwriteback: write will
>> wait for result 0x55bcc3c742a0
>> 2016-01-14 04:56:35.365814 7ff80f13f700 10 librbd::ImageCtx:
>> prune_parent_extents image overlap 0, object overlap 0 from image
>> extents []
>> 2016-01-14 04:56:35.365817 7ff80f13f700 20 librbd::AioRequest: send
>> 0x55bcc7deb660 rbd_data.2f31e252fa88e4.0000000000000000 4988928~4096
>> 2016-01-14 04:56:35.365818 7ff80f13f700 20 librbd::AioRequest:
>> send_write 0x55bcc7deb660 rbd_data.2f31e252fa88e4.0000000000000000
>> 4988928~4096
>> 2016-01-14 04:56:35.365882 7ff80f13f700 20 librbdwriteback: write will
>> wait for result 0x55bcc3c73ba0
>> 2016-01-14 04:56:35.365888 7ff80f13f700 10 librbd::ImageCtx:
>> prune_parent_extents image overlap 0, object overlap 0 from image
>> extents []
>> 2016-01-14 04:56:35.365891 7ff80f13f700 20 librbd::AioRequest: send
>> 0x55bcc7dea850 rbd_data.2f31e252fa88e4.0000000000000000 5193728~4096
>> 2016-01-14 04:56:35.365893 7ff80f13f700 20 librbd::AioRequest:
>> send_write 0x55bcc7dea850 rbd_data.2f31e252fa88e4.0000000000000000
>> 5193728~4096
>> 2016-01-14 04:56:35.365918 7ff80f13f700 20 librbdwriteback: write will
>> wait for result 0x55bcc3c73ae0
>> 2016-01-14 04:56:35.365919 7ff80f13f700 10 librbd::ImageCtx:
>> prune_parent_extents image overlap 0, object overlap 0 from image
>> extents []
>> 2016-01-14 04:56:35.365922 7ff80f13f700 20 librbd::AioRequest: send
>> 0x55bcc7de8d20 rbd_data.2f31e252fa88e4.0000000000000010 7311360~4096
>> 2016-01-14 04:56:35.365924 7ff80f13f700 20 librbd::AioRequest:
>> send_write 0x55bcc7de8d20 rbd_data.2f31e252fa88e4.0000000000000010
>> 7311360~4096
>> 2016-01-14 04:56:35.365956 7ff80f13f700 20 librbdwriteback: write will
>> wait for result 0x55bcc3c73140
>> 2016-01-14 04:56:35.365957 7ff80f13f700 10 librbd::ImageCtx:
>> prune_parent_extents image overlap 0, object overlap 0 from image
>> extents []
>> 2016-01-14 04:56:35.365960 7ff80f13f700 20 librbd::AioRequest: send
>> 0x55bcc7deb390 rbd_data.2f31e252fa88e4.0000000000000101 389120~4096
>> 2016-01-14 04:56:35.365961 7ff80f13f700 20 librbd::AioRequest:
>> send_write 0x55bcc7deb390 rbd_data.2f31e252fa88e4.0000000000000101
>> 389120~4096
>> 2016-01-14 04:56:35.365982 7ff80f13f700 20 librbdwriteback: write will
>> wait for result 0x55bcc3c75b80
>> 2016-01-14 04:56:35.365984 7ff80f13f700 10 librbd::ImageCtx:
>> prune_parent_extents image overlap 0, object overlap 0 from image
>> extents []
>> 2016-01-14 04:56:35.365986 7ff80f13f700 20 librbd::AioRequest: send
>> 0x55bcc7deb0c0 rbd_data.2f31e252fa88e4.0000000000000101 790528~4096
>> 2016-01-14 04:56:35.365988 7ff80f13f700 20 librbd::AioRequest:
>> send_write 0x55bcc7deb0c0 rbd_data.2f31e252fa88e4.0000000000000101
>> 790528~4096
>> 2016-01-14 04:56:35.366135 7ff80f13f700 20 librbdwriteback: write will
>> wait for result 0x55bcc3c74380
>> 2016-01-14 04:56:35.366142 7ff80f13f700 10 librbd::ImageCtx:
>> prune_parent_extents image overlap 0, object overlap 0 from image
>> extents []
>> 2016-01-14 04:56:35.366146 7ff80f13f700 20 librbd::AioRequest: send
>> 0x55bcc7dea1c0 rbd_data.2f31e252fa88e4.0000000000000149 3203072~16384
>> 2016-01-14 04:56:35.366148 7ff80f13f700 20 librbd::AioRequest:
>> send_write 0x55bcc7dea1c0 rbd_data.2f31e252fa88e4.0000000000000149
>> 3203072~16384
>> 2016-01-14 04:56:35.366318 7ff80f13f700 20 librbdwriteback: write will
>> wait for result 0x55bcc3c74640
>> 2016-01-14 04:56:35.366328 7ff80f13f700 10 librbd::ImageCtx:
>> prune_parent_extents image overlap 0, object overlap 0 from image
>> extents []
>> 2016-01-14 04:56:35.366332 7ff80f13f700 20 librbd::AioRequest: send
>> 0x55bcc7deb480 rbd_data.2f31e252fa88e4.0000000000000149 3231744~8192
>> 2016-01-14 04:56:35.366335 7ff80f13f700 20 librbd::AioRequest:
>> send_write 0x55bcc7deb480 rbd_data.2f31e252fa88e4.0000000000000149
>> 3231744~8192
>> 2016-01-14 04:56:35.366475 7ff80f13f700 20 librbdwriteback: write will
>> wait for result 0x55bcc3c74b40
>> 2016-01-14 04:56:35.366479 7ff80f13f700 10 librbd::ImageCtx:
>> prune_parent_extents image overlap 0, object overlap 0 from image
>> extents []
>> 2016-01-14 04:56:35.366481 7ff80f13f700 20 librbd::AioRequest: send
>> 0x55bcc7de9e00 rbd_data.2f31e252fa88e4.0000000000000149 3248128~4096
>> 2016-01-14 04:56:35.366483 7ff80f13f700 20 librbd::AioRequest:
>> send_write 0x55bcc7de9e00 rbd_data.2f31e252fa88e4.0000000000000149
>> 3248128~4096
>> 2016-01-14 04:56:35.368655 7ff810942700 20 librbd::AioRequest: write
>> 0x55bcc7de8d20 rbd_data.2f31e252fa88e4.0000000000000010 7311360~4096
>> should_complete: r = 0
>> 2016-01-14 04:56:35.368673 7ff810942700 20 librbd::AioRequest: WRITE_FLAT
>> 2016-01-14 04:56:35.368677 7ff810942700 20 librbd::AioRequest:
>> complete 0x55bcc7de8d20
>> 2016-01-14 04:56:35.368679 7ff810942700 20 librbdwriteback:
>> C_OrderedWrite completing 0x55bcc3c73ae0
>> 2016-01-14 04:56:35.368682 7ff810942700 20 librbdwriteback:
>> complete_writes() oid rbd_data.2f31e252fa88e4.0000000000000010
>> 2016-01-14 04:56:35.368685 7ff810942700 20 librbdwriteback:
>> complete_writes() completing 0x55bcc3c73ae0
>> 2016-01-14 04:56:35.368694 7ff810942700 20 librbdwriteback:
>> C_OrderedWrite finished 0x55bcc3c73ae0
>> 2016-01-14 04:56:35.368744 7ff810942700 20 librbd::AioRequest: write
>> 0x55bcc7dea1c0 rbd_data.2f31e252fa88e4.0000000000000149 3203072~16384
>> should_complete: r = 0
>> 2016-01-14 04:56:35.368749 7ff810942700 20 librbd::AioRequest: WRITE_FLAT
>> 2016-01-14 04:56:35.368751 7ff810942700 20 librbd::AioRequest:
>> complete 0x55bcc7dea1c0
>> 2016-01-14 04:56:35.368752 7ff810942700 20 librbdwriteback:
>> C_OrderedWrite completing 0x55bcc3c74380
>> 2016-01-14 04:56:35.368754 7ff810942700 20 librbdwriteback:
>> complete_writes() oid rbd_data.2f31e252fa88e4.0000000000000149
>> 2016-01-14 04:56:35.368755 7ff810942700 20 librbdwriteback:
>> complete_writes() completing 0x55bcc3c74380
>> 2016-01-14 04:56:35.368762 7ff810942700 20 librbdwriteback:
>> C_OrderedWrite finished 0x55bcc3c74380
>> 2016-01-14 04:56:35.369727 7ff810942700 20 librbd::AioRequest: write
>> 0x55bcc7deb480 rbd_data.2f31e252fa88e4.0000000000000149 3231744~8192
>> should_complete: r = 0
>> 2016-01-14 04:56:35.369738 7ff810942700 20 librbd::AioRequest: WRITE_FLAT
>> 2016-01-14 04:56:35.369741 7ff810942700 20 librbd::AioRequest:
>> complete 0x55bcc7deb480
>> 2016-01-14 04:56:35.369742 7ff810942700 20 librbdwriteback:
>> C_OrderedWrite completing 0x55bcc3c74640
>> 2016-01-14 04:56:35.369745 7ff810942700 20 librbdwriteback:
>> complete_writes() oid rbd_data.2f31e252fa88e4.0000000000000149
>> 2016-01-14 04:56:35.369746 7ff810942700 20 librbdwriteback:
>> complete_writes() completing 0x55bcc3c74640
>> 2016-01-14 04:56:35.369754 7ff810942700 20 librbdwriteback:
>> C_OrderedWrite finished 0x55bcc3c74640
>> 2016-01-14 04:56:35.369877 7ff810942700 20 librbd::AioRequest: write
>> 0x55bcc7deb390 rbd_data.2f31e252fa88e4.0000000000000101 389120~4096
>> should_complete: r = 0
>> 2016-01-14 04:56:35.369886 7ff810942700 20 librbd::AioRequest: WRITE_FLAT
>> 2016-01-14 04:56:35.369904 7ff810942700 20 librbd::AioRequest:
>> complete 0x55bcc7deb390
>> 2016-01-14 04:56:35.369906 7ff810942700 20 librbdwriteback:
>> C_OrderedWrite completing 0x55bcc3c73140
>> 2016-01-14 04:56:35.369908 7ff810942700 20 librbdwriteback:
>> complete_writes() oid rbd_data.2f31e252fa88e4.0000000000000101
>> 2016-01-14 04:56:35.369910 7ff810942700 20 librbdwriteback:
>> complete_writes() completing 0x55bcc3c73140
>> 2016-01-14 04:56:35.369915 7ff810942700 20 librbdwriteback:
>> C_OrderedWrite finished 0x55bcc3c73140
>> 2016-01-14 04:56:35.369918 7ff810942700 20 librbd::AioRequest: write
>> 0x55bcc7de9e00 rbd_data.2f31e252fa88e4.0000000000000149 3248128~4096
>> should_complete: r = 0
>> 2016-01-14 04:56:35.369920 7ff810942700 20 librbd::AioRequest: WRITE_FLAT
>> 2016-01-14 04:56:35.369921 7ff810942700 20 librbd::AioRequest:
>> complete 0x55bcc7de9e00
>> 2016-01-14 04:56:35.369923 7ff810942700 20 librbdwriteback:
>> C_OrderedWrite completing 0x55bcc3c74b40
>> 2016-01-14 04:56:35.369924 7ff810942700 20 librbdwriteback:
>> complete_writes() oid rbd_data.2f31e252fa88e4.0000000000000149
>> 2016-01-14 04:56:35.369925 7ff810942700 20 librbdwriteback:
>> complete_writes() completing 0x55bcc3c74b40
>> 2016-01-14 04:56:35.369928 7ff810942700 20 librbdwriteback:
>> C_OrderedWrite finished 0x55bcc3c74b40
>> 2016-01-14 04:56:35.369931 7ff810942700 20 librbd::AioRequest: write
>> 0x55bcc7deb0c0 rbd_data.2f31e252fa88e4.0000000000000101 790528~4096
>> should_complete: r = 0
>> 2016-01-14 04:56:35.369933 7ff810942700 20 librbd::AioRequest: WRITE_FLAT
>> 2016-01-14 04:56:35.369934 7ff810942700 20 librbd::AioRequest:
>> complete 0x55bcc7deb0c0
>> 2016-01-14 04:56:35.369935 7ff810942700 20 librbdwriteback:
>> C_OrderedWrite completing 0x55bcc3c75b80
>> 2016-01-14 04:56:35.369936 7ff810942700 20 librbdwriteback:
>> complete_writes() oid rbd_data.2f31e252fa88e4.0000000000000101
>> 2016-01-14 04:56:35.369937 7ff810942700 20 librbdwriteback:
>> complete_writes() completing 0x55bcc3c75b80
>> 2016-01-14 04:56:35.369940 7ff810942700 20 librbdwriteback:
>> C_OrderedWrite finished 0x55bcc3c75b80
>> 2016-01-14 04:56:35.371882 7ff810942700 20 librbd::AioRequest: write
>> 0x55bcc7de8ff0 rbd_data.2f31e252fa88e4.0000000000000000 1052672~4096
>> should_complete: r = 0
>> 2016-01-14 04:56:35.371892 7ff810942700 20 librbd::AioRequest: WRITE_FLAT
>> 2016-01-14 04:56:35.371916 7ff810942700 20 librbd::AioRequest:
>> complete 0x55bcc7de8ff0
>> 2016-01-14 04:56:35.371971 7ff810942700 20 librbdwriteback:
>> C_OrderedWrite completing 0x55bcc3c73e40
>> 2016-01-14 04:56:35.371975 7ff810942700 20 librbdwriteback:
>> complete_writes() oid rbd_data.2f31e252fa88e4.0000000000000000
>> 2016-01-14 04:56:35.371980 7ff810942700 20 librbdwriteback:
>> complete_writes() completing 0x55bcc3c73e40
>> 2016-01-14 04:56:35.371989 7ff810942700 20 librbdwriteback:
>> C_OrderedWrite finished 0x55bcc3c73e40
>> 2016-01-14 04:56:35.372028 7ff810942700 20 librbd::AioRequest: write
>> 0x55bcc7deb660 rbd_data.2f31e252fa88e4.0000000000000000 4988928~4096
>> should_complete: r = 0
>> 2016-01-14 04:56:35.372034 7ff810942700 20 librbd::AioRequest: WRITE_FLAT
>> 2016-01-14 04:56:35.372036 7ff810942700 20 librbd::AioRequest:
>> complete 0x55bcc7deb660
>> 2016-01-14 04:56:35.372038 7ff810942700 20 librbdwriteback:
>> C_OrderedWrite completing 0x55bcc3c742a0
>> 2016-01-14 04:56:35.372041 7ff810942700 20 librbdwriteback:
>> complete_writes() oid rbd_data.2f31e252fa88e4.0000000000000000
>> 2016-01-14 04:56:35.372043 7ff810942700 20 librbdwriteback:
>> complete_writes() completing 0x55bcc3c742a0
>> 2016-01-14 04:56:35.372048 7ff810942700 20 librbdwriteback:
>> C_OrderedWrite finished 0x55bcc3c742a0
>> 2016-01-14 04:56:35.372090 7ff810942700 20 librbd::AioRequest: write
>> 0x55bcc7dea850 rbd_data.2f31e252fa88e4.0000000000000000 5193728~4096
>> should_complete: r = 0
>> 2016-01-14 04:56:35.372094 7ff810942700 20 librbd::AioRequest: WRITE_FLAT
>> 2016-01-14 04:56:35.372096 7ff810942700 20 librbd::AioRequest:
>> complete 0x55bcc7dea850
>> 2016-01-14 04:56:35.372097 7ff810942700 20 librbdwriteback:
>> C_OrderedWrite completing 0x55bcc3c73ba0
>> 2016-01-14 04:56:35.372099 7ff810942700 20 librbdwriteback:
>> complete_writes() oid rbd_data.2f31e252fa88e4.0000000000000000
>> 2016-01-14 04:56:35.372101 7ff810942700 20 librbdwriteback:
>> complete_writes() completing 0x55bcc3c73ba0
>> 2016-01-14 04:56:35.372210 7ff810942700 20 librbdwriteback:
>> C_OrderedWrite finished 0x55bcc3c73ba0
>>
>> Thanks in advance!
>>
>> Best regards, Vasily.
>>
>> 2016-01-14 9:08 GMT+08:00 Jason Dillaman <dilla...@redhat.com>:
>> > Definitely would like to see the "debug rbd = 20" logs from 192.168.254.17
>> > when this occurs.  If you are co-locating your OSDs, MONs, and qemu-kvm
>> > processes, make sure your ceph.conf has "log file = </path/to/client.log>"
>> > defined in the [global] or [client] section.
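>> >
>> > For example, a minimal snippet (the path below is only an illustration --
>> > any location writable by the qemu process works; $name and $pid are
>> > expanded by Ceph):
>> >
>> > [client]
>> >     log file = /var/log/ceph/client.$name.$pid.log
>> >     debug rbd = 20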
>> >
>> > --
>> >
>> > Jason Dillaman
>> >
>> >
>> > ----- Original Message -----
>> >> From: "Василий Ангапов" <anga...@gmail.com>
>> >> To: "Jason Dillaman" <dilla...@redhat.com>, "ceph-users"
>> >> <ceph-users@lists.ceph.com>
>> >> Sent: Wednesday, January 13, 2016 4:22:02 AM
>> >> Subject: Re: [ceph-users] How to do quiesced rbd snapshot in libvirt?
>> >>
>> >> Hello again!
>> >>
>> >> Unfortunately I have to raise the problem again: snapshots on several
>> >> images hang constantly.
>> >> My Ceph version is now 0.94.5.
>> >> The RBD CLI always gives me this:
>> >> root@slpeah001:[~]:# rbd snap create
>> >> volumes/volume-26c89a0a-be4d-45d4-85a6-e0dc134941fd --snap test
>> >> 2016-01-13 12:04:39.107166 7fb70e4c2880 -1 librbd::ImageWatcher:
>> >> 0x427a710 no lock owners detected
>> >> 2016-01-13 12:04:44.108783 7fb70e4c2880 -1 librbd::ImageWatcher:
>> >> 0x427a710 no lock owners detected
>> >> 2016-01-13 12:04:49.110321 7fb70e4c2880 -1 librbd::ImageWatcher:
>> >> 0x427a710 no lock owners detected
>> >> 2016-01-13 12:04:54.112373 7fb70e4c2880 -1 librbd::ImageWatcher:
>> >> 0x427a710 no lock owners detected
>> >>
>> >> I turned on "debug rbd = 20" and found these records only on one of the
>> >> OSDs (on the same host as the RBD client):
>> >> 2016-01-13 11:44:46.076780 7fb5f05d8700  0 --
>> >> 192.168.252.11:6804/407141 >> 192.168.252.11:6800/407122
>> >> pipe(0x392d2000 sd=257 :6804 s=2 pgs=17 cs=1 l=0 c=0x383b4160).fault
>> >> with nothing to send, going to standby
>> >> 2016-01-13 11:58:26.261460 7fb5efbce700  0 --
>> >> 192.168.252.11:6804/407141 >> 192.168.252.11:6802/407124
>> >> pipe(0x39e45000 sd=156 :6804 s=2 pgs=17 cs=1 l=0 c=0x386fbb20).fault
>> >> with nothing to send, going to standby
>> >> 2016-01-13 12:04:23.948931 7fb5fede2700  0 --
>> >> 192.168.254.11:6804/407141 submit_message watch-notify(notify_complete
>> >> (2) cookie 44850800 notify 99720550678667 ret -110) v3 remote,
>> >> 192.168.254.11:0/1468572, failed lossy con, dropping message
>> >> 0x3ab76fc0
>> >> 2016-01-13 12:09:04.254329 7fb5fede2700  0 --
>> >> 192.168.254.11:6804/407141 submit_message watch-notify(notify_complete
>> >> (2) cookie 69846112 notify 99720550678721 ret -110) v3 remote,
>> >> 192.168.254.11:0/1509673, failed lossy con, dropping message
>> >> 0x3830cb40
>> >>
>> >> Here are the image properties:
>> >> root@slpeah001:[~]:# rbd info
>> >> volumes/volume-26c89a0a-be4d-45d4-85a6-e0dc134941fd
>> >> rbd image 'volume-26c89a0a-be4d-45d4-85a6-e0dc134941fd':
>> >>         size 200 GB in 51200 objects
>> >>         order 22 (4096 kB objects)
>> >>         block_name_prefix: rbd_data.2f2a81562fea59
>> >>         format: 2
>> >>         features: layering, striping, exclusive, object map
>> >>         flags:
>> >>         stripe unit: 4096 kB
>> >>         stripe count: 1
>> >> root@slpeah001:[~]:# rbd status
>> >> volumes/volume-26c89a0a-be4d-45d4-85a6-e0dc134941fd
>> >> Watchers:
>> >>         watcher=192.168.254.17:0/2088291 client.3424561
>> >>         cookie=93888518795008
>> >> root@slpeah001:[~]:# rbd lock list
>> >> volumes/volume-26c89a0a-be4d-45d4-85a6-e0dc134941fd
>> >> There is 1 exclusive lock on this image.
>> >> Locker         ID                  Address
>> >> client.3424561 auto 93888518795008 192.168.254.17:0/2088291
>> >>
>> >> Taking RBD snapshots from the Python API also hangs...
>> >> The image is in use by libvirt.
>> >>
>> >> Any suggestions?
>> >> Thanks!
>> >>
>> >> Regards, Vasily.
>> >>
>> >>
>> >> 2016-01-06 1:11 GMT+08:00 Мистер Сёма <anga...@gmail.com>:
>> >> > Well, I believe the problem is no longer valid.
>> >> > My code before was:
>> >> > virsh qemu-agent-command $INSTANCE '{"execute":"guest-fsfreeze-freeze"}'
>> >> > rbd snap create $RBD_ID --snap `date +%F-%T`
>> >> >
>> >> > and then snapshot creation would hang forever, so I inserted a
>> >> > two-second sleep.
>> >> >
>> >> > My code after:
>> >> > virsh qemu-agent-command $INSTANCE '{"execute":"guest-fsfreeze-freeze"}'
>> >> > sleep 2
>> >> > rbd snap create $RBD_ID --snap `date +%F-%T`
>> >> >
>> >> > And now it works perfectly. Again, I have no idea how it solved the
>> >> > problem.
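>> >> >
>> >> > (If the fixed sleep ever turns out to be racy, a sketch that polls the
>> >> > guest agent instead -- untested, thaw added for completeness:)
>> >> >
>> >> > virsh qemu-agent-command $INSTANCE '{"execute":"guest-fsfreeze-freeze"}'
>> >> > # wait until the agent actually reports the filesystems as frozen
>> >> > while ! virsh qemu-agent-command $INSTANCE \
>> >> >     '{"execute":"guest-fsfreeze-status"}' | grep -q frozen; do
>> >> >     sleep 1
>> >> > done
>> >> > rbd snap create $RBD_ID --snap `date +%F-%T`
>> >> > virsh qemu-agent-command $INSTANCE '{"execute":"guest-fsfreeze-thaw"}'
>> >> >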
>> >> > Thanks :)
>> >> >
>> >> > 2016-01-06 0:49 GMT+08:00 Мистер Сёма <anga...@gmail.com>:
>> >> >> I am very sorry, but I am not able to increase log verbosity because
>> >> >> it's a production cluster with very limited space for logs. Sounds
>> >> >> crazy, but that's how it is.
>> >> >> I have found out that the RBD snapshot process hangs forever only when
>> >> >> a QEMU fsfreeze was issued just before the snapshot. If the guest is
>> >> >> not frozen, the snapshot is taken with no problem... I have absolutely
>> >> >> no idea how these two things could be related to each other... And
>> >> >> again, this issue occurs only when there is an exclusive lock on the
>> >> >> image and the exclusive-lock feature is enabled on it.
>> >> >>
>> >> >> Does anybody else have this problem?
>> >> >>
>> >> >> 2016-01-05 2:55 GMT+08:00 Jason Dillaman <dilla...@redhat.com>:
>> >> >>> I am surprised by the error you are seeing with exclusive lock
>> >> >>> enabled.
>> >> >>> The rbd CLI should be able to send the 'snap create' request to QEMU
>> >> >>> without an error.  Are you able to provide "debug rbd = 20" logs from
>> >> >>> shortly before and after your snapshot attempt?
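>> >> >>>
>> >> >>> A one-off run without touching ceph.conf should also work -- something
>> >> >>> like this (the log path is just an example):
>> >> >>>
>> >> >>> rbd --debug-rbd=20 --log-file=/tmp/rbd-snap.log \
>> >> >>>     snap create <pool>/<image> --snap test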
>> >> >>>
>> >> >>> --
>> >> >>>
>> >> >>> Jason Dillaman
>> >> >>>
>> >> >>>
>> >> >>> ----- Original Message -----
>> >> >>>> From: "Мистер Сёма" <anga...@gmail.com>
>> >> >>>> To: "ceph-users" <ceph-users@lists.ceph.com>
>> >> >>>> Sent: Monday, January 4, 2016 12:37:07 PM
>> >> >>>> Subject: [ceph-users] How to do quiesced rbd snapshot in libvirt?
>> >> >>>>
>> >> >>>> Hello,
>> >> >>>>
>> >> >>>> Can anyone please tell me what is the right way to do quiesced RBD
>> >> >>>> snapshots in libvirt (OpenStack)?
>> >> >>>> My Ceph version is 0.94.3.
>> >> >>>>
>> >> >>>> I found two possible ways; neither of them works for me. I wonder if
>> >> >>>> I'm doing something wrong:
>> >> >>>> 1) Freeze the VM's filesystems through the QEMU guest agent, take an
>> >> >>>> RBD snapshot, then thaw. This looks good, but libvirt holds an
>> >> >>>> exclusive lock on the image, which results in errors like this when
>> >> >>>> taking a snapshot: " 7f359d304880 -1 librbd::ImageWatcher: no lock
>> >> >>>> owners detected". It seems the rbd client is trying to take the
>> >> >>>> snapshot on behalf of the exclusive lock owner but is unable to find
>> >> >>>> that owner. Without an exclusive lock everything works fine.
>> >> >>>>
>> >> >>>> 2) Take QEMU external snapshots, with a local QCOW2 file overlaid on
>> >> >>>> top of the RBD image. This seems really interesting, but there is
>> >> >>>> currently no way to remove this kind of snapshot, because active
>> >> >>>> blockcommit does not yet work for RBD images
>> >> >>>> (https://bugzilla.redhat.com/show_bug.cgi?id=1189998).
>> >> >>>>
>> >> >>>> So again my question is: how do you guys take quiesced RBD snapshots
>> >> >>>> in
>> >> >>>> libvirt?
>> >> >>>>
>> >>
>>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
