On the bright side, at least your week of export-related pain should result in 
a decent speed boost when your clients get 64MB of cache instead of 64B.
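For anyone who lands on this thread later: the rbd cache options are byte-valued, so a 64 MB cache would look roughly like this in ceph.conf (the exact values below are illustrative, not a recommendation):

```ini
[client]
# All rbd cache sizes are specified in BYTES, not megabytes
rbd cache = true
rbd cache size = 67108864           # 64 MB
rbd cache max dirty = 50331648      # 48 MB, must be less than rbd cache size
rbd cache target dirty = 33554432   # 32 MB
rbd cache max dirty age = 1.0       # seconds
```

Setting "rbd cache size = 64" is how you end up with the 64-byte cache seen in the log below.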

-- 

Jason Dillaman 


----- Original Message -----
> From: "Joe Ryner" <jry...@cait.org>
> To: "Jason Dillaman" <dilla...@redhat.com>
> Cc: ceph-us...@ceph.com
> Sent: Thursday, November 5, 2015 5:20:03 PM
> Subject: Re: [ceph-users] rbd hang
> 
> Thanks for the heads up.  I have had this set this way for a long time in all
> of my deployments.  I assumed that the units were in MB.
> 
> Arg..
> 
> I will test new settings.
> 
> Joe
> 
> ----- Original Message -----
> From: "Jason Dillaman" <dilla...@redhat.com>
> To: "Joe Ryner" <jry...@cait.org>
> Cc: ceph-us...@ceph.com
> Sent: Thursday, November 5, 2015 3:52:20 PM
> Subject: Re: [ceph-users] rbd hang
> 
> It appears you have set your cache size to 64 bytes (!):
> 
> 2015-11-05 15:07:49.927510 7f0d9af5a760 20 librbd::ImageCtx: Initial cache
> settings: size=64 num_objects=10 max_dirty=32 target_dirty=16
> max_dirty_age=5
> 
> This exposed a known issue [1] that is triggered when a single read request
> asks for more data than the cache can allocate.
> 
> [1] http://tracker.ceph.com/issues/13388
> 
> --
> 
> Jason Dillaman
> 
> 
> ----- Original Message -----
> > From: "Joe Ryner" <jry...@cait.org>
> > To: "Jason Dillaman" <dilla...@redhat.com>
> > Cc: ceph-us...@ceph.com
> > Sent: Thursday, November 5, 2015 4:24:29 PM
> > Subject: Re: [ceph-users] rbd hang
> > 
> > It worked.
> > 
> > So what's broken with caching?
> > 
> > ----- Original Message -----
> > From: "Jason Dillaman" <dilla...@redhat.com>
> > To: "Joe Ryner" <jry...@cait.org>
> > Cc: ceph-us...@ceph.com
> > Sent: Thursday, November 5, 2015 3:18:39 PM
> > Subject: Re: [ceph-users] rbd hang
> > 
> > Can you retry with 'rbd --rbd-cache=false -p images export joe
> > /root/joe.raw'?
> > 
> > --
> > 
> > Jason Dillaman
> > 
> > ----- Original Message -----
> > > From: "Joe Ryner" <jry...@cait.org>
> > > To: "Jason Dillaman" <dilla...@redhat.com>
> > > Cc: ceph-us...@ceph.com
> > > Sent: Thursday, November 5, 2015 4:14:28 PM
> > > Subject: Re: [ceph-users] rbd hang
> > > 
> > > Hi,
> > > 
> > > Do you have any ideas as to what might be wrong?  Since my last email I
> > > decided to recreate the cluster.  I am currently testing upgrading from
> > > 0.72
> > > to 0.80.10 with hopes to end up on hammer.
> > > 
> > > So I completely erased the cluster and reloaded the machines with CentOS
> > > 6.5 (to match my production servers).
> > > I loaded 0.72 and got it working perfectly.
> > > Next I upgraded directly to 0.80.10 and now I'm stuck again.  I thought I
> > > had
> > > tested this procedure with 0.80.6 and it worked fine.  Not sure what
> > > changed
> > > or what starting on 0.72 might have to do with it.
> > > 
> > > I would greatly appreciate any help.
> > > 
> > > Joe
> > > 
> > > rbd -p images export joe /root/joe.raw
> > > 2015-11-05 15:07:49.920941 7f0d9af5a760  1 -- :/0 messenger.start
> > > 2015-11-05 15:07:49.922701 7f0d9af5a760  1 -- :/1007378 -->
> > > 10.134.128.41:6789/0 -- auth(proto 0 30 bytes epoch 0) v1 -- ?+0
> > > 0x1558800
> > > con 0x15583f0
> > > 2015-11-05 15:07:49.923178 7f0d9af52700  1 -- 10.134.128.41:0/1007378
> > > learned
> > > my addr 10.134.128.41:0/1007378
> > > 2015-11-05 15:07:49.923533 7f0d95549700 10 client.?.objecter
> > > ms_handle_connect 0x15583f0
> > > 2015-11-05 15:07:49.923617 7f0d95549700 10 client.?.objecter
> > > resend_mon_ops
> > > 2015-11-05 15:07:49.924049 7f0d94547700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) get 491 (0 ->
> > > 491)
> > > 2015-11-05 15:07:49.924242 7f0d94547700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) get 33 (491 ->
> > > 524)
> > > 2015-11-05 15:07:49.924244 7f0d95549700  1 -- 10.134.128.41:0/1007378 <==
> > > mon.0 10.134.128.41:6789/0 1 ==== mon_map v1 ==== 491+0+0 (785919251 0 0)
> > > 0x7f0d84000cb0 con 0x15583f0
> > > 2015-11-05 15:07:49.924344 7f0d95549700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) put 491 (524 ->
> > > 33)
> > > 2015-11-05 15:07:49.924357 7f0d95549700  1 -- 10.134.128.41:0/1007378 <==
> > > mon.0 10.134.128.41:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) v1
> > > ====
> > > 33+0+0 (1606025069 0 0) 0x7f0d84001230 con 0x15583f0
> > > 2015-11-05 15:07:49.924649 7f0d95549700  1 -- 10.134.128.41:0/1007378 -->
> > > 10.134.128.41:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- ?+0
> > > 0x7f0d80001970 con 0x15583f0
> > > 2015-11-05 15:07:49.924682 7f0d95549700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) put 33 (33 -> 0)
> > > 2015-11-05 15:07:49.925084 7f0d94547700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) get 206 (0 ->
> > > 206)
> > > 2015-11-05 15:07:49.925133 7f0d95549700  1 -- 10.134.128.41:0/1007378 <==
> > > mon.0 10.134.128.41:6789/0 3 ==== auth_reply(proto 2 0 (0) Success) v1
> > > ====
> > > 206+0+0 (3696183790 0 0) 0x7f0d84001230 con 0x15583f0
> > > 2015-11-05 15:07:49.925467 7f0d95549700  1 -- 10.134.128.41:0/1007378 -->
> > > 10.134.128.41:6789/0 -- auth(proto 2 165 bytes epoch 0) v1 -- ?+0
> > > 0x7f0d80002250 con 0x15583f0
> > > 2015-11-05 15:07:49.925497 7f0d95549700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) put 206 (206 ->
> > > 0)
> > > 2015-11-05 15:07:49.926070 7f0d94547700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) get 393 (0 ->
> > > 393)
> > > 2015-11-05 15:07:49.926161 7f0d95549700  1 -- 10.134.128.41:0/1007378 <==
> > > mon.0 10.134.128.41:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) v1
> > > ====
> > > 393+0+0 (3778161986 0 0) 0x7f0d840014a0 con 0x15583f0
> > > 2015-11-05 15:07:49.926282 7f0d95549700  1 -- 10.134.128.41:0/1007378 -->
> > > 10.134.128.41:6789/0 -- mon_subscribe({monmap=0+}) v2 -- ?+0 0x1558a10
> > > con
> > > 0x15583f0
> > > 2015-11-05 15:07:49.926332 7f0d95549700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) put 393 (393 ->
> > > 0)
> > > 2015-11-05 15:07:49.926458 7f0d9af5a760 10 client.277182.objecter
> > > maybe_request_map subscribing (onetime) to next osd map
> > > 2015-11-05 15:07:49.926484 7f0d9af5a760  1 -- 10.134.128.41:0/1007378 -->
> > > 10.134.128.41:6789/0 -- mon_subscribe({monmap=4+,osdmap=0}) v2 -- ?+0
> > > 0x1558800 con 0x15583f0
> > > 2015-11-05 15:07:49.926517 7f0d9af5a760  1 -- 10.134.128.41:0/1007378 -->
> > > 10.134.128.41:6789/0 -- mon_subscribe({monmap=4+,osdmap=0}) v2 -- ?+0
> > > 0x15568d0 con 0x15583f0
> > > 2015-11-05 15:07:49.926654 7f0d94547700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) get 491 (0 ->
> > > 491)
> > > 2015-11-05 15:07:49.926715 7f0d94547700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) get 20 (491 ->
> > > 511)
> > > 2015-11-05 15:07:49.926719 7f0d95549700  1 -- 10.134.128.41:0/1007378 <==
> > > mon.0 10.134.128.41:6789/0 5 ==== mon_map v1 ==== 491+0+0 (785919251 0 0)
> > > 0x7f0d840015b0 con 0x15583f0
> > > 2015-11-05 15:07:49.926812 7f0d95549700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) put 491 (511 ->
> > > 20)
> > > 2015-11-05 15:07:49.926838 7f0d95549700  1 -- 10.134.128.41:0/1007378 <==
> > > mon.0 10.134.128.41:6789/0 6 ==== mon_subscribe_ack(300s) v1 ==== 20+0+0
> > > (646930372 0 0) 0x7f0d840019a0 con 0x15583f0
> > > 2015-11-05 15:07:49.926857 7f0d95549700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) put 20 (20 -> 0)
> > > 2015-11-05 15:07:49.926915 7f0d94547700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) get 7562 (0 ->
> > > 7562)
> > > 2015-11-05 15:07:49.927037 7f0d94547700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) get 20 (7562 ->
> > > 7582)
> > > 2015-11-05 15:07:49.927041 7f0d95549700  1 -- 10.134.128.41:0/1007378 <==
> > > mon.0 10.134.128.41:6789/0 7 ==== osd_map(94..94 src has 1..94) v3 ====
> > > 7562+0+0 (1465643944 0 0) 0x7f0d840013b0 con 0x15583f0
> > > 2015-11-05 15:07:49.927083 7f0d94547700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) get 7562 (7582 ->
> > > 15144)
> > > 2015-11-05 15:07:49.927082 7f0d95549700  3 client.277182.objecter
> > > handle_osd_map got epochs [94,94] > 0
> > > 2015-11-05 15:07:49.927092 7f0d95549700  3 client.277182.objecter
> > > handle_osd_map decoding full epoch 94
> > > 2015-11-05 15:07:49.927119 7f0d94547700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) get 20 (15144 ->
> > > 15164)
> > > 2015-11-05 15:07:49.927383 7f0d95549700 20 client.277182.objecter
> > > dump_active
> > > .. 0 homeless
> > > 2015-11-05 15:07:49.927417 7f0d95549700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) put 7562 (15164
> > > ->
> > > 7602)
> > > 2015-11-05 15:07:49.927444 7f0d95549700  1 -- 10.134.128.41:0/1007378 <==
> > > mon.0 10.134.128.41:6789/0 8 ==== mon_subscribe_ack(300s) v1 ==== 20+0+0
> > > (646930372 0 0) 0x7f0d84003810 con 0x15583f0
> > > 2015-11-05 15:07:49.927456 7f0d95549700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) put 20 (7602 ->
> > > 7582)
> > > 2015-11-05 15:07:49.927460 7f0d95549700  1 -- 10.134.128.41:0/1007378 <==
> > > mon.0 10.134.128.41:6789/0 9 ==== osd_map(94..94 src has 1..94) v3 ====
> > > 7562+0+0 (1465643944 0 0) 0x7f0d84005920 con 0x15583f0
> > > 2015-11-05 15:07:49.927469 7f0d95549700  3 client.277182.objecter
> > > handle_osd_map ignoring epochs [94,94] <= 94
> > > 2015-11-05 15:07:49.927472 7f0d95549700 20 client.277182.objecter
> > > dump_active
> > > .. 0 homeless
> > > 2015-11-05 15:07:49.927476 7f0d95549700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) put 7562 (7582 ->
> > > 20)
> > > 2015-11-05 15:07:49.927482 7f0d95549700  1 -- 10.134.128.41:0/1007378 <==
> > > mon.0 10.134.128.41:6789/0 10 ==== mon_subscribe_ack(300s) v1 ==== 20+0+0
> > > (646930372 0 0) 0x7f0d84005eb0 con 0x15583f0
> > > 2015-11-05 15:07:49.927492 7f0d95549700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) put 20 (20 -> 0)
> > > 2015-11-05 15:07:49.927497 7f0d9af5a760 20 librbd::ImageCtx: enabling
> > > caching...
> > > 2015-11-05 15:07:49.927510 7f0d9af5a760 20 librbd::ImageCtx: Initial
> > > cache
> > > settings: size=64 num_objects=10 max_dirty=32 target_dirty=16
> > > max_dirty_age=5
> > > 2015-11-05 15:07:49.927667 7f0d9af5a760 20 librbd: open_image: ictx =
> > > 0x1552fb0 name = 'joe' id = '' snap_name = ''
> > > 2015-11-05 15:07:49.927676 7f0d8d7fa700 10 objectcacher flusher start
> > > 2015-11-05 15:07:49.927776 7f0d8d7fa700 11 objectcacher flusher 0 / 64:
> > > 0
> > > tx, 0 rx, 0 clean, 0 dirty (16 target, 32 max)
> > > 2015-11-05 15:07:49.927807 7f0d9af5a760 10 throttle(objecter_bytes
> > > 0x1551688)
> > > get_or_fail 0 success (0 -> 0)
> > > 2015-11-05 15:07:49.927823 7f0d9af5a760 10 throttle(objecter_ops
> > > 0x1551720)
> > > get_or_fail 1 success (0 -> 1)
> > > 2015-11-05 15:07:49.927886 7f0d9af5a760 10 client.277182.objecter
> > > calc_target
> > > pgid 3.3c22790c acting [7,3]
> > > 2015-11-05 15:07:49.927963 7f0d9af5a760 20 client.277182.objecter  note:
> > > not
> > > requesting commit
> > > 2015-11-05 15:07:49.927979 7f0d9af5a760 10 client.277182.objecter
> > > op_submit
> > > oid joe.rbd @3 @3 [stat] tid 1 osd.7
> > > 2015-11-05 15:07:49.927989 7f0d9af5a760 15 client.277182.objecter send_op
> > > 1
> > > to osd.7
> > > 2015-11-05 15:07:49.927997 7f0d9af5a760  1 -- 10.134.128.41:0/1007378 -->
> > > 10.134.128.42:6810/11215 -- osd_op(client.277182.0:1 joe.rbd [stat]
> > > 3.3c22790c ack+read e94) v4 -- ?+0 0x1555ca0 con 0x1555680
> > > 2015-11-05 15:07:49.928029 7f0d9af5a760  5 client.277182.objecter 1
> > > unacked,
> > > 0 uncommitted
> > > 2015-11-05 15:07:49.929312 7f0d95549700 10 client.277182.objecter
> > > ms_handle_connect 0x1555680
> > > 2015-11-05 15:07:49.930214 7f0d94345700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) get 174 (0 ->
> > > 174)
> > > 2015-11-05 15:07:49.930429 7f0d95549700  1 -- 10.134.128.41:0/1007378 <==
> > > osd.7 10.134.128.42:6810/11215 1 ==== osd_op_reply(1 joe.rbd [stat] v0'0
> > > uv0
> > > ack = -2 ((2) No such file or directory)) v6 ==== 174+0+0 (3862803988 0
> > > 0)
> > > 0x7f0d70000b70 con 0x1555680
> > > 2015-11-05 15:07:49.930487 7f0d95549700 10 client.277182.objecter in
> > > handle_osd_op_reply
> > > 2015-11-05 15:07:49.930491 7f0d95549700  7 client.277182.objecter
> > > handle_osd_op_reply 1 ack v 0'0 uv 0 in 3.3c22790c attempt 0
> > > 2015-11-05 15:07:49.930497 7f0d95549700 10 client.277182.objecter  op 0
> > > rval
> > > 0 len 0
> > > 2015-11-05 15:07:49.930502 7f0d95549700 10 client.277182.objecter  op 0
> > > handler 0x1554dd0
> > > 2015-11-05 15:07:49.930549 7f0d95549700 15 client.277182.objecter
> > > handle_osd_op_reply ack
> > > 2015-11-05 15:07:49.930554 7f0d95549700 15 client.277182.objecter
> > > handle_osd_op_reply completed tid 1
> > > 2015-11-05 15:07:49.930560 7f0d95549700 15 client.277182.objecter
> > > finish_op
> > > 1
> > > 2015-11-05 15:07:49.930565 7f0d95549700 10 throttle(objecter_bytes
> > > 0x1551688)
> > > put 0 (0 -> 0)
> > > 2015-11-05 15:07:49.930568 7f0d95549700 10 throttle(objecter_ops
> > > 0x1551720)
> > > put 1 (1 -> 0)
> > > 2015-11-05 15:07:49.930578 7f0d95549700  5 client.277182.objecter 0
> > > unacked,
> > > 0 uncommitted
> > > 2015-11-05 15:07:49.930591 7f0d95549700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) put 174 (174 ->
> > > 0)
> > > 2015-11-05 15:07:49.930646 7f0d9af5a760 10 throttle(objecter_bytes
> > > 0x1551688)
> > > get_or_fail 0 success (0 -> 0)
> > > 2015-11-05 15:07:49.930652 7f0d9af5a760 10 throttle(objecter_ops
> > > 0x1551720)
> > > get_or_fail 1 success (0 -> 1)
> > > 2015-11-05 15:07:49.930662 7f0d9af5a760 10 client.277182.objecter
> > > calc_target
> > > pgid 3.5f9799ec acting [3,7]
> > > 2015-11-05 15:07:49.930734 7f0d9af5a760 20 client.277182.objecter  note:
> > > not
> > > requesting commit
> > > 2015-11-05 15:07:49.930739 7f0d9af5a760 10 client.277182.objecter
> > > op_submit
> > > oid rbd_id.joe @3 @3 [stat] tid 2 osd.3
> > > 2015-11-05 15:07:49.930741 7f0d9af5a760 15 client.277182.objecter send_op
> > > 2
> > > to osd.3
> > > 2015-11-05 15:07:49.930744 7f0d9af5a760  1 -- 10.134.128.41:0/1007378 -->
> > > 10.134.128.43:6809/26543 -- osd_op(client.277182.0:2 rbd_id.joe [stat]
> > > 3.5f9799ec ack+read e94) v4 -- ?+0 0x155aad0 con 0x155a520
> > > 2015-11-05 15:07:49.930756 7f0d9af5a760  5 client.277182.objecter 1
> > > unacked,
> > > 0 uncommitted
> > > 2015-11-05 15:07:49.931944 7f0d95549700 10 client.277182.objecter
> > > ms_handle_connect 0x155a520
> > > 2015-11-05 15:07:49.932897 7f0d94143700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) get 193 (0 ->
> > > 193)
> > > 2015-11-05 15:07:49.933199 7f0d95549700  1 -- 10.134.128.41:0/1007378 <==
> > > osd.3 10.134.128.43:6809/26543 1 ==== osd_op_reply(2 rbd_id.joe [stat]
> > > v0'0
> > > uv44 ondisk = 0) v6 ==== 177+0+16 (3010198147 0 361450234) 0x7f0d68000ce0
> > > con 0x155a520
> > > 2015-11-05 15:07:49.933235 7f0d95549700 10 client.277182.objecter in
> > > handle_osd_op_reply
> > > 2015-11-05 15:07:49.933237 7f0d95549700  7 client.277182.objecter
> > > handle_osd_op_reply 2 ondisk v 0'0 uv 44 in 3.5f9799ec attempt 0
> > > 2015-11-05 15:07:49.933242 7f0d95549700 10 client.277182.objecter  op 0
> > > rval
> > > 0 len 16
> > > 2015-11-05 15:07:49.933245 7f0d95549700 10 client.277182.objecter  op 0
> > > handler 0x1554dd0
> > > 2015-11-05 15:07:49.933248 7f0d95549700 15 client.277182.objecter
> > > handle_osd_op_reply ack
> > > 2015-11-05 15:07:49.933251 7f0d95549700 15 client.277182.objecter
> > > handle_osd_op_reply completed tid 2
> > > 2015-11-05 15:07:49.933252 7f0d95549700 15 client.277182.objecter
> > > finish_op
> > > 2
> > > 2015-11-05 15:07:49.933254 7f0d95549700 10 throttle(objecter_bytes
> > > 0x1551688)
> > > put 0 (0 -> 0)
> > > 2015-11-05 15:07:49.933256 7f0d95549700 10 throttle(objecter_ops
> > > 0x1551720)
> > > put 1 (1 -> 0)
> > > 2015-11-05 15:07:49.933262 7f0d95549700  5 client.277182.objecter 0
> > > unacked,
> > > 0 uncommitted
> > > 2015-11-05 15:07:49.933269 7f0d95549700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) put 193 (193 ->
> > > 0)
> > > 2015-11-05 15:07:49.933278 7f0d9af5a760 20 librbd: detect format of joe :
> > > new
> > > 2015-11-05 15:07:49.933330 7f0d9af5a760 10 throttle(objecter_bytes
> > > 0x1551688)
> > > get_or_fail 0 success (0 -> 0)
> > > 2015-11-05 15:07:49.933335 7f0d9af5a760 10 throttle(objecter_ops
> > > 0x1551720)
> > > get_or_fail 1 success (0 -> 1)
> > > 2015-11-05 15:07:49.933345 7f0d9af5a760 10 client.277182.objecter
> > > calc_target
> > > pgid 3.5f9799ec acting [3,7]
> > > 2015-11-05 15:07:49.933349 7f0d9af5a760 20 client.277182.objecter  note:
> > > not
> > > requesting commit
> > > 2015-11-05 15:07:49.933353 7f0d9af5a760 10 client.277182.objecter
> > > op_submit
> > > oid rbd_id.joe @3 @3 [call rbd.get_id] tid 3 osd.3
> > > 2015-11-05 15:07:49.933365 7f0d9af5a760 15 client.277182.objecter send_op
> > > 3
> > > to osd.3
> > > 2015-11-05 15:07:49.933369 7f0d9af5a760  1 -- 10.134.128.41:0/1007378 -->
> > > 10.134.128.43:6809/26543 -- osd_op(client.277182.0:3 rbd_id.joe [call
> > > rbd.get_id] 3.5f9799ec ack+read e94) v4 -- ?+0 0x155b370 con 0x155a520
> > > 2015-11-05 15:07:49.933389 7f0d9af5a760  5 client.277182.objecter 1
> > > unacked,
> > > 0 uncommitted
> > > 2015-11-05 15:07:49.934094 7f0d94143700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) get 193 (0 ->
> > > 193)
> > > 2015-11-05 15:07:49.934335 7f0d95549700  1 -- 10.134.128.41:0/1007378 <==
> > > osd.3 10.134.128.43:6809/26543 2 ==== osd_op_reply(3 rbd_id.joe [call]
> > > v0'0
> > > uv44 ondisk = 0) v6 ==== 177+0+16 (728921235 0 342179323) 0x7f0d68000ce0
> > > con
> > > 0x155a520
> > > 2015-11-05 15:07:49.934362 7f0d95549700 10 client.277182.objecter in
> > > handle_osd_op_reply
> > > 2015-11-05 15:07:49.934364 7f0d95549700  7 client.277182.objecter
> > > handle_osd_op_reply 3 ondisk v 0'0 uv 44 in 3.5f9799ec attempt 0
> > > 2015-11-05 15:07:49.934377 7f0d95549700 10 client.277182.objecter  op 0
> > > rval
> > > 0 len 16
> > > 2015-11-05 15:07:49.934380 7f0d95549700 15 client.277182.objecter
> > > handle_osd_op_reply ack
> > > 2015-11-05 15:07:49.934385 7f0d95549700 15 client.277182.objecter
> > > handle_osd_op_reply completed tid 3
> > > 2015-11-05 15:07:49.934387 7f0d95549700 15 client.277182.objecter
> > > finish_op
> > > 3
> > > 2015-11-05 15:07:49.934389 7f0d95549700 10 throttle(objecter_bytes
> > > 0x1551688)
> > > put 0 (0 -> 0)
> > > 2015-11-05 15:07:49.934393 7f0d95549700 10 throttle(objecter_ops
> > > 0x1551720)
> > > put 1 (1 -> 0)
> > > 2015-11-05 15:07:49.934402 7f0d95549700  5 client.277182.objecter 0
> > > unacked,
> > > 0 uncommitted
> > > 2015-11-05 15:07:49.934410 7f0d95549700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) put 193 (193 ->
> > > 0)
> > > 2015-11-05 15:07:49.934486 7f0d9af5a760 10 throttle(objecter_bytes
> > > 0x1551688)
> > > get_or_fail 0 success (0 -> 0)
> > > 2015-11-05 15:07:49.934492 7f0d9af5a760 10 throttle(objecter_ops
> > > 0x1551720)
> > > get_or_fail 1 success (0 -> 1)
> > > 2015-11-05 15:07:49.934521 7f0d9af5a760 10 client.277182.objecter
> > > calc_target
> > > pgid 3.6e4e925b acting [7,0]
> > > 2015-11-05 15:07:49.934525 7f0d9af5a760 20 client.277182.objecter  note:
> > > not
> > > requesting commit
> > > 2015-11-05 15:07:49.934528 7f0d9af5a760 10 client.277182.objecter
> > > op_submit
> > > oid rbd_header.92b32ae8944a @3 @3 [call rbd.get_size,call
> > > rbd.get_object_prefix] tid 4 osd.7
> > > 2015-11-05 15:07:49.934533 7f0d9af5a760 15 client.277182.objecter send_op
> > > 4
> > > to osd.7
> > > 2015-11-05 15:07:49.934537 7f0d9af5a760  1 -- 10.134.128.41:0/1007378 -->
> > > 10.134.128.42:6810/11215 -- osd_op(client.277182.0:4
> > > rbd_header.92b32ae8944a
> > > [call rbd.get_size,call rbd.get_object_prefix] 3.6e4e925b ack+read e94)
> > > v4
> > > -- ?+0 0x155bd20 con 0x1555680
> > > 2015-11-05 15:07:49.934555 7f0d9af5a760  5 client.277182.objecter 1
> > > unacked,
> > > 0 uncommitted
> > > 2015-11-05 15:07:49.935611 7f0d94345700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) get 266 (0 ->
> > > 266)
> > > 2015-11-05 15:07:49.935863 7f0d95549700  1 -- 10.134.128.41:0/1007378 <==
> > > osd.7 10.134.128.42:6810/11215 2 ==== osd_op_reply(4
> > > rbd_header.92b32ae8944a
> > > [call,call] v0'0 uv53 ondisk = 0) v6 ==== 232+0+34 (1104830396 0
> > > 3807877729)
> > > 0x7f0d70000b70 con 0x1555680
> > > 2015-11-05 15:07:49.935880 7f0d95549700 10 client.277182.objecter in
> > > handle_osd_op_reply
> > > 2015-11-05 15:07:49.935882 7f0d95549700  7 client.277182.objecter
> > > handle_osd_op_reply 4 ondisk v 0'0 uv 53 in 3.6e4e925b attempt 0
> > > 2015-11-05 15:07:49.935887 7f0d95549700 10 client.277182.objecter  op 0
> > > rval
> > > 0 len 9
> > > 2015-11-05 15:07:49.935890 7f0d95549700 10 client.277182.objecter  op 1
> > > rval
> > > 0 len 25
> > > 2015-11-05 15:07:49.935891 7f0d95549700 15 client.277182.objecter
> > > handle_osd_op_reply ack
> > > 2015-11-05 15:07:49.935894 7f0d95549700 15 client.277182.objecter
> > > handle_osd_op_reply completed tid 4
> > > 2015-11-05 15:07:49.935896 7f0d95549700 15 client.277182.objecter
> > > finish_op
> > > 4
> > > 2015-11-05 15:07:49.935898 7f0d95549700 10 throttle(objecter_bytes
> > > 0x1551688)
> > > put 0 (0 -> 0)
> > > 2015-11-05 15:07:49.935900 7f0d95549700 10 throttle(objecter_ops
> > > 0x1551720)
> > > put 1 (1 -> 0)
> > > 2015-11-05 15:07:49.935909 7f0d95549700  5 client.277182.objecter 0
> > > unacked,
> > > 0 uncommitted
> > > 2015-11-05 15:07:49.935952 7f0d95549700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) put 266 (266 ->
> > > 0)
> > > 2015-11-05 15:07:49.935995 7f0d9af5a760 10 throttle(objecter_bytes
> > > 0x1551688)
> > > get_or_fail 0 success (0 -> 0)
> > > 2015-11-05 15:07:49.936001 7f0d9af5a760 10 throttle(objecter_ops
> > > 0x1551720)
> > > get_or_fail 1 success (0 -> 1)
> > > 2015-11-05 15:07:49.936008 7f0d9af5a760 10 client.277182.objecter
> > > calc_target
> > > pgid 3.6e4e925b acting [7,0]
> > > 2015-11-05 15:07:49.936011 7f0d9af5a760 20 client.277182.objecter  note:
> > > not
> > > requesting commit
> > > 2015-11-05 15:07:49.936015 7f0d9af5a760 10 client.277182.objecter
> > > op_submit
> > > oid rbd_header.92b32ae8944a @3 @3 [call rbd.get_stripe_unit_count] tid 5
> > > osd.7
> > > 2015-11-05 15:07:49.936020 7f0d9af5a760 15 client.277182.objecter send_op
> > > 5
> > > to osd.7
> > > 2015-11-05 15:07:49.936023 7f0d9af5a760  1 -- 10.134.128.41:0/1007378 -->
> > > 10.134.128.42:6810/11215 -- osd_op(client.277182.0:5
> > > rbd_header.92b32ae8944a
> > > [call rbd.get_stripe_unit_count] 3.6e4e925b ack+read e94) v4 -- ?+0
> > > 0x155e610 con 0x1555680
> > > 2015-11-05 15:07:49.936040 7f0d9af5a760  5 client.277182.objecter 1
> > > unacked,
> > > 0 uncommitted
> > > 2015-11-05 15:07:49.936785 7f0d94345700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) get 190 (0 ->
> > > 190)
> > > 2015-11-05 15:07:49.936890 7f0d95549700  1 -- 10.134.128.41:0/1007378 <==
> > > osd.7 10.134.128.42:6810/11215 3 ==== osd_op_reply(5
> > > rbd_header.92b32ae8944a
> > > [call] v0'0 uv0 ondisk = -8 ((8) Exec format error)) v6 ==== 190+0+0
> > > (2171578682 0 0) 0x7f0d70000b70 con 0x1555680
> > > 2015-11-05 15:07:49.936983 7f0d95549700 10 client.277182.objecter in
> > > handle_osd_op_reply
> > > 2015-11-05 15:07:49.936985 7f0d95549700  7 client.277182.objecter
> > > handle_osd_op_reply 5 ondisk v 0'0 uv 0 in 3.6e4e925b attempt 0
> > > 2015-11-05 15:07:49.936989 7f0d95549700 10 client.277182.objecter  op 0
> > > rval
> > > -8 len 0
> > > 2015-11-05 15:07:49.936991 7f0d95549700 15 client.277182.objecter
> > > handle_osd_op_reply ack
> > > 2015-11-05 15:07:49.936994 7f0d95549700 15 client.277182.objecter
> > > handle_osd_op_reply completed tid 5
> > > 2015-11-05 15:07:49.936995 7f0d95549700 15 client.277182.objecter
> > > finish_op
> > > 5
> > > 2015-11-05 15:07:49.936997 7f0d95549700 10 throttle(objecter_bytes
> > > 0x1551688)
> > > put 0 (0 -> 0)
> > > 2015-11-05 15:07:49.937000 7f0d95549700 10 throttle(objecter_ops
> > > 0x1551720)
> > > put 1 (1 -> 0)
> > > 2015-11-05 15:07:49.937018 7f0d95549700  5 client.277182.objecter 0
> > > unacked,
> > > 0 uncommitted
> > > 2015-11-05 15:07:49.937026 7f0d95549700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) put 190 (190 ->
> > > 0)
> > > 2015-11-05 15:07:49.937046 7f0d9af5a760 10 librbd::ImageCtx:  cache bytes
> > > 64
> > > order 22 -> about 10 objects
> > > 2015-11-05 15:07:49.937063 7f0d9af5a760 10 librbd::ImageCtx: init_layout
> > > stripe_unit 4194304 stripe_count 1 object_size 4194304 prefix
> > > rbd_data.92b32ae8944a format rbd_data.92b32ae8944a.%016llx
> > > 2015-11-05 15:07:49.937071 7f0d9af5a760 20 librbd: ictx_refresh 0x1552fb0
> > > 2015-11-05 15:07:49.937189 7f0d9af5a760 10 throttle(objecter_bytes
> > > 0x1551688)
> > > get_or_fail 0 success (0 -> 0)
> > > 2015-11-05 15:07:49.937194 7f0d9af5a760 10 throttle(objecter_ops
> > > 0x1551720)
> > > get_or_fail 1 success (0 -> 1)
> > > 2015-11-05 15:07:49.937203 7f0d9af5a760 10 client.277182.objecter
> > > calc_target
> > > pgid 3.6e4e925b acting [7,0]
> > > 2015-11-05 15:07:49.937207 7f0d9af5a760 20 client.277182.objecter  note:
> > > not
> > > requesting commit
> > > 2015-11-05 15:07:49.937210 7f0d9af5a760 10 client.277182.objecter
> > > op_submit
> > > oid rbd_header.92b32ae8944a @3 @3 [call rbd.get_size,call
> > > rbd.get_features,call rbd.get_snapcontext,call rbd.get_parent,call
> > > lock.get_info] tid 6 osd.7
> > > 2015-11-05 15:07:49.937220 7f0d9af5a760 15 client.277182.objecter send_op
> > > 6
> > > to osd.7
> > > 2015-11-05 15:07:49.937226 7f0d9af5a760  1 -- 10.134.128.41:0/1007378 -->
> > > 10.134.128.42:6810/11215 -- osd_op(client.277182.0:6
> > > rbd_header.92b32ae8944a
> > > [call rbd.get_size,call rbd.get_features,call rbd.get_snapcontext,call
> > > rbd.get_parent,call lock.get_info] 3.6e4e925b ack+read e94) v4 -- ?+0
> > > 0x1562d10 con 0x1555680
> > > 2015-11-05 15:07:49.937249 7f0d9af5a760  5 client.277182.objecter 1
> > > unacked,
> > > 0 uncommitted
> > > 2015-11-05 15:07:49.938340 7f0d94345700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) get 438 (0 ->
> > > 438)
> > > 2015-11-05 15:07:49.938454 7f0d95549700  1 -- 10.134.128.41:0/1007378 <==
> > > osd.7 10.134.128.42:6810/11215 4 ==== osd_op_reply(6
> > > rbd_header.92b32ae8944a
> > > [call,call,call,call,call] v0'0 uv53 ondisk = 0) v6 ==== 358+0+80
> > > (1348922735 0 2219832889) 0x7f0d70002b50 con 0x1555680
> > > 2015-11-05 15:07:49.938471 7f0d95549700 10 client.277182.objecter in
> > > handle_osd_op_reply
> > > 2015-11-05 15:07:49.938493 7f0d95549700  7 client.277182.objecter
> > > handle_osd_op_reply 6 ondisk v 0'0 uv 53 in 3.6e4e925b attempt 0
> > > 2015-11-05 15:07:49.938496 7f0d95549700 10 client.277182.objecter  op 0
> > > rval
> > > 0 len 9
> > > 2015-11-05 15:07:49.938498 7f0d95549700 10 client.277182.objecter  op 1
> > > rval
> > > 0 len 16
> > > 2015-11-05 15:07:49.938500 7f0d95549700 10 client.277182.objecter  op 2
> > > rval
> > > 0 len 12
> > > 2015-11-05 15:07:49.938514 7f0d95549700 10 client.277182.objecter  op 3
> > > rval
> > > 0 len 28
> > > 2015-11-05 15:07:49.938515 7f0d95549700 10 client.277182.objecter  op 4
> > > rval
> > > 0 len 15
> > > 2015-11-05 15:07:49.938517 7f0d95549700 15 client.277182.objecter
> > > handle_osd_op_reply ack
> > > 2015-11-05 15:07:49.938519 7f0d95549700 15 client.277182.objecter
> > > handle_osd_op_reply completed tid 6
> > > 2015-11-05 15:07:49.938521 7f0d95549700 15 client.277182.objecter
> > > finish_op
> > > 6
> > > 2015-11-05 15:07:49.938522 7f0d95549700 10 throttle(objecter_bytes
> > > 0x1551688)
> > > put 0 (0 -> 0)
> > > 2015-11-05 15:07:49.938524 7f0d95549700 10 throttle(objecter_ops
> > > 0x1551720)
> > > put 1 (1 -> 0)
> > > 2015-11-05 15:07:49.938537 7f0d95549700  5 client.277182.objecter 0
> > > unacked,
> > > 0 uncommitted
> > > 2015-11-05 15:07:49.938546 7f0d95549700 10
> > > throttle(msgr_dispatch_throttler-radosclient 0x1550df8) put 438 (438 ->
> > > 0)
> > > 2015-11-05 15:07:49.938652 7f0d9af5a760 20 librbd: info 0x1552fb0
> > > 2015-11-05 15:07:49.938657 7f0d9af5a760 20 librbd: ictx_check 0x1552fb0
> > > 2015-11-05 15:07:49.938904 7f0d9af5a760 20 librbd: read_iterate 0x1552fb0
> > > off
> > > = 0 len = 2147483648
> > > 2015-11-05 15:07:49.938908 7f0d9af5a760 20 librbd: ictx_check 0x1552fb0
> > > 2015-11-05 15:07:49.938920 7f0d9af5a760 20 librbd: aio_read 0x1552fb0
> > > completion 0x1562d10 [0,4194304]
> > > 2015-11-05 15:07:49.938923 7f0d9af5a760 20 librbd: ictx_check 0x1552fb0
> > > 2015-11-05 15:07:49.938945 7f0d9af5a760 20 librbd:  oid
> > > rbd_data.92b32ae8944a.0000000000000000 0~4194304 from [0,4194304]
> > > 2015-11-05 15:07:49.938972 7f0d9af5a760 10 objectcacher readx
> > > extent(rbd_data.92b32ae8944a.0000000000000000 (0) in @3 0~4194304 ->
> > > [0,4194304])
> > > 2015-11-05 15:07:49.939004 7f0d9af5a760 10
> > > objectcacher.object(rbd_data.92b32ae8944a.0000000000000000/head) map_read
> > > rbd_data.92b32ae8944a.0000000000000000 0~4194304
> > > 2015-11-05 15:07:49.939015 7f0d9af5a760 20
> > > objectcacher.object(rbd_data.92b32ae8944a.0000000000000000/head) map_read
> > > miss 4194304 left, bh[ 0x1563270 0~4194304 0x1562fe0 (0) v 0 missing]
> > > waiters = {}
> > > 2015-11-05 15:07:49.939022 7f0d9af5a760 10 objectcacher readx missed,
> > > waiting
> > > on cache to complete 0 blocked reads, 4194240 read bytes
> > > 2015-11-05 15:07:49.939044 7f0d9af5a760 20 objectcacher readx defer
> > > 0x1562700
> > > 2015-11-05 15:07:49.939048 7f0d9af5a760 20 librbd::AioCompletion:
> > > AioCompletion::finish_adding_requests 0x1562d10 pending 1
> > > 2015-11-05 15:07:50.927873 7f0d8d7fa700 11 objectcacher flusher 0 / 64:
> > > 0
> > > tx, 0 rx, 0 clean, 0 dirty (16 target, 32 max)
> > > 2015-11-05 15:07:51.927982 7f0d8d7fa700 11 objectcacher flusher 0 / 64:
> > > 0
> > > tx, 0 rx, 0 clean, 0 dirty (16 target, 32 max)
> > > 2015-11-05 15:07:52.928122 7f0d8d7fa700 11 objectcacher flusher 0 / 64:
> > > 0
> > > tx, 0 rx, 0 clean, 0 dirty (16 target, 32 max)
> > > 2015-11-05 15:07:53.928234 7f0d8d7fa700 11 objectcacher flusher 0 / 64:
> > > 0
> > > tx, 0 rx, 0 clean, 0 dirty (16 target, 32 max)
> > > 2015-11-05 15:07:54.926869 7f0d8effd700 10 client.277182.objecter tick
> > > 2015-11-05 15:07:54.928357 7f0d8d7fa700 11 objectcacher flusher 0 / 64:
> > > 0
> > > tx, 0 rx, 0 clean, 0 dirty (16 target, 32 max)
> > > 2015-11-05 15:07:55.928511 7f0d8d7fa700 11 objectcacher flusher 0 / 64:
> > > 0
> > > tx, 0 rx, 0 clean, 0 dirty (16 target, 32 max)
> > > 2015-11-05 15:07:56.928605 7f0d8d7fa700 11 objectcacher flusher 0 / 64:
> > > 0
> > > tx, 0 rx, 0 clean, 0 dirty (16 target, 32 max)
> > > 2015-11-05 15:07:57.928707 7f0d8d7fa700 11 objectcacher flusher 0 / 64:
> > > 0
> > > tx, 0 rx, 0 clean, 0 dirty (16 target, 32 max)
> > > 
> > > 
> > > ----- Original Message -----
> > > From: "Jason Dillaman" <dilla...@redhat.com>
> > > To: "Joe Ryner" <jry...@cait.org>
> > > Cc: ceph-us...@ceph.com
> > > Sent: Thursday, October 29, 2015 12:05:38 PM
> > > Subject: Re: [ceph-users] rbd hang
> > > 
> > > I don't see the read request hitting the wire, so I suspect your client
> > > cannot talk to the primary PG for the 'rb.0.16cf.238e1f29.000000000000'
> > > object.  Try adding "debug objecter = 20" to your configuration to get
> > > more
> > > details.
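> > > For reference, client-side debug logging of this sort is usually turned
> > > up in the [client] section of ceph.conf (or passed as --debug-* flags on
> > > the rbd command line); a sketch, with the exact subsystems/levels being a
> > > suggestion rather than a requirement:
> > >
> > > ```ini
> > > [client]
> > > # Verbose client-side logging; 20 is the most detailed level
> > > debug objecter = 20
> > > debug rados = 20
> > > debug rbd = 20
> > > log file = /var/log/ceph/client.$name.$pid.log
> > > ```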
> > > 
> > > --
> > > 
> > > Jason Dillaman
> > > 
> > > ----- Original Message -----
> > > > From: "Joe Ryner" <jry...@cait.org>
> > > > To: ceph-us...@ceph.com
> > > > Sent: Thursday, October 29, 2015 12:22:01 PM
> > > > Subject: [ceph-users] rbd hang
> > > > 
> > > > Hi,
> > > > 
> > > > I am having a strange problem with our development cluster.  When I run rbd export it just hangs.  I have been running ceph for a long time and haven't encountered this kind of issue.  Any ideas as to what is going on?
> > > > 
> > > > rbd -p locks export seco101ira -
> > > > 
> > > > 
> > > > I am running
> > > > 
> > > > Centos 6.6 x86 64
> > > > 
> > > > ceph version 0.80.10 (ea6c958c38df1216bf95c927f143d8b13c4a9e70)
> > > > 
> > > > I have enabled debugging and get the following when I run the command
> > > > 
> > > > [root@durbium ~]# rbd -p locks export seco101ira -
> > > > 2015-10-29 11:17:08.183597 7fc3334fa7c0  1 librados: starting msgr at :/0
> > > > 2015-10-29 11:17:08.183613 7fc3334fa7c0  1 librados: starting objecter
> > > > 2015-10-29 11:17:08.183739 7fc3334fa7c0  1 -- :/0 messenger.start
> > > > 2015-10-29 11:17:08.183779 7fc3334fa7c0  1 librados: setting wanted keys
> > > > 2015-10-29 11:17:08.183782 7fc3334fa7c0  1 librados: calling monclient init
> > > > 2015-10-29 11:17:08.184365 7fc3334fa7c0  1 -- :/1024687 --> 10.134.128.42:6789/0 -- auth(proto 0 30 bytes epoch 0) v1 -- ?+0 0x15ba900 con 0x15ba540
> > > > 2015-10-29 11:17:08.185006 7fc3334f2700  1 -- 10.134.128.41:0/1024687 learned my addr 10.134.128.41:0/1024687
> > > > 2015-10-29 11:17:08.185995 7fc32da9a700  1 -- 10.134.128.41:0/1024687 <== mon.1 10.134.128.42:6789/0 1 ==== mon_map v1 ==== 491+0+0 (318324477 0 0) 0x7fc318000be0 con 0x15ba540
> > > > 2015-10-29 11:17:08.186213 7fc32da9a700  1 -- 10.134.128.41:0/1024687 <== mon.1 10.134.128.42:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 33+0+0 (4093383511 0 0) 0x7fc318001090 con 0x15ba540
> > > > 2015-10-29 11:17:08.186544 7fc32da9a700  1 -- 10.134.128.41:0/1024687 --> 10.134.128.42:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- ?+0 0x7fc31c001700 con 0x15ba540
> > > > 2015-10-29 11:17:08.187160 7fc32da9a700  1 -- 10.134.128.41:0/1024687 <== mon.1 10.134.128.42:6789/0 3 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 206+0+0 (2382192463 0 0) 0x7fc318001090 con 0x15ba540
> > > > 2015-10-29 11:17:08.187354 7fc32da9a700  1 -- 10.134.128.41:0/1024687 --> 10.134.128.42:6789/0 -- auth(proto 2 165 bytes epoch 0) v1 -- ?+0 0x7fc31c002220 con 0x15ba540
> > > > 2015-10-29 11:17:08.188001 7fc32da9a700  1 -- 10.134.128.41:0/1024687 <== mon.1 10.134.128.42:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 393+0+0 (34117402 0 0) 0x7fc3180008c0 con 0x15ba540
> > > > 2015-10-29 11:17:08.188148 7fc32da9a700  1 -- 10.134.128.41:0/1024687 --> 10.134.128.42:6789/0 -- mon_subscribe({monmap=0+}) v2 -- ?+0 0x15b6b80 con 0x15ba540
> > > > 2015-10-29 11:17:08.188334 7fc3334fa7c0  1 -- 10.134.128.41:0/1024687 --> 10.134.128.42:6789/0 -- mon_subscribe({monmap=6+,osdmap=0}) v2 -- ?+0 0x15b7700 con 0x15ba540
> > > > 2015-10-29 11:17:08.188355 7fc3334fa7c0  1 -- 10.134.128.41:0/1024687 --> 10.134.128.42:6789/0 -- mon_subscribe({monmap=6+,osdmap=0}) v2 -- ?+0 0x15b7ca0 con 0x15ba540
> > > > 2015-10-29 11:17:08.188445 7fc3334fa7c0  1 librados: init done
> > > > 2015-10-29 11:17:08.188463 7fc3334fa7c0 10 librados: wait_for_osdmap waiting
> > > > 2015-10-29 11:17:08.188625 7fc32da9a700  1 -- 10.134.128.41:0/1024687 <== mon.1 10.134.128.42:6789/0 5 ==== mon_map v1 ==== 491+0+0 (318324477 0 0) 0x7fc318001300 con 0x15ba540
> > > > 2015-10-29 11:17:08.188795 7fc32da9a700  1 -- 10.134.128.41:0/1024687 <== mon.1 10.134.128.42:6789/0 6 ==== mon_subscribe_ack(300s) v1 ==== 20+0+0 (646930372 0 0) 0x7fc3180015a0 con 0x15ba540
> > > > 2015-10-29 11:17:08.189129 7fc32da9a700  1 -- 10.134.128.41:0/1024687 <== mon.1 10.134.128.42:6789/0 7 ==== osd_map(4350..4350 src has 3829..4350) v3 ==== 7562+0+0 (1787729222 0 0) 0x7fc3180013b0 con 0x15ba540
> > > > 2015-10-29 11:17:08.189452 7fc3334fa7c0 10 librados: wait_for_osdmap done waiting
> > > > 2015-10-29 11:17:08.189454 7fc32da9a700  1 -- 10.134.128.41:0/1024687 <== mon.1 10.134.128.42:6789/0 8 ==== mon_subscribe_ack(300s) v1 ==== 20+0+0 (646930372 0 0) 0x7fc3180008c0 con 0x15ba540
> > > > 2015-10-29 11:17:08.189470 7fc32da9a700  1 -- 10.134.128.41:0/1024687 <== mon.1 10.134.128.42:6789/0 9 ==== osd_map(4350..4350 src has 3829..4350) v3 ==== 7562+0+0 (1787729222 0 0) 0x7fc318005290 con 0x15ba540
> > > > 2015-10-29 11:17:08.189485 7fc32da9a700  1 -- 10.134.128.41:0/1024687 <== mon.1 10.134.128.42:6789/0 10 ==== mon_subscribe_ack(300s) v1 ==== 20+0+0 (646930372 0 0) 0x7fc3180056d0 con 0x15ba540
> > > > 2015-10-29 11:17:08.189522 7fc3334fa7c0 20 librbd::ImageCtx: enabling caching...
> > > > 2015-10-29 11:17:08.189540 7fc3334fa7c0 20 librbd::ImageCtx: Initial cache settings: size=64 num_objects=10 max_dirty=32 target_dirty=16 max_dirty_age=5
> > > > 2015-10-29 11:17:08.189686 7fc3334fa7c0 20 librbd: open_image: ictx = 0x15b8390 name = 'seco101ira' id = '' snap_name = ''
> > > > 2015-10-29 11:17:08.189730 7fc3334fa7c0 10 librados: stat oid=seco101ira.rbd nspace=
> > > > 2015-10-29 11:17:08.189882 7fc3334fa7c0  1 -- 10.134.128.41:0/1024687 --> 10.134.128.43:6803/2741 -- osd_op(client.7543.0:1 seco101ira.rbd [stat] 4.a982c550 ack+read e4350) v4 -- ?+0 0x15baf60 con 0x15b9e70
> > > > 2015-10-29 11:17:08.192470 7fc32da9a700  1 -- 10.134.128.41:0/1024687 <== osd.2 10.134.128.43:6803/2741 1 ==== osd_op_reply(1 seco101ira.rbd [stat] v0'0 uv1 ondisk = 0) v6 ==== 181+0+16 (1355327314 0 448133945) 0x7fc314000c10 con 0x15b9e70
> > > > 2015-10-29 11:17:08.192548 7fc3334fa7c0 10 librados: Objecter returned from stat r=0
> > > > 2015-10-29 11:17:08.192563 7fc3334fa7c0 20 librbd: detect format of seco101ira : old
> > > > 2015-10-29 11:17:08.192600 7fc3334fa7c0 20 librbd: ictx_refresh 0x15b8390
> > > > 2015-10-29 11:17:08.192614 7fc3334fa7c0 10 librados: read oid=seco101ira.rbd nspace=
> > > > 2015-10-29 11:17:08.192631 7fc3334fa7c0  1 -- 10.134.128.41:0/1024687 --> 10.134.128.43:6803/2741 -- osd_op(client.7543.0:2 seco101ira.rbd [read 0~4096] 4.a982c550 ack+read e4350) v4 -- ?+0 0x15bb700 con 0x15b9e70
> > > > 2015-10-29 11:17:08.193957 7fc32da9a700  1 -- 10.134.128.41:0/1024687 <== osd.2 10.134.128.43:6803/2741 2 ==== osd_op_reply(2 seco101ira.rbd [read 0~112] v0'0 uv1 ondisk = 0) v6 ==== 181+0+112 (674783017 0 641510945) 0x7fc314000c10 con 0x15b9e70
> > > > 2015-10-29 11:17:08.194018 7fc3334fa7c0 10 librados: Objecter returned from read r=0
> > > > 2015-10-29 11:17:08.194022 7fc3334fa7c0 10 librados: Returned length 112 less than original length 4096
> > > > 2015-10-29 11:17:08.194058 7fc3334fa7c0 10 librados: call oid=seco101ira.rbd nspace=
> > > > 2015-10-29 11:17:08.194074 7fc3334fa7c0  1 -- 10.134.128.41:0/1024687 --> 10.134.128.43:6803/2741 -- osd_op(client.7543.0:3 seco101ira.rbd [call rbd.snap_list] 4.a982c550 ack+read e4350) v4 -- ?+0 0x15bbd40 con 0x15b9e70
> > > > 2015-10-29 11:17:08.194845 7fc32da9a700  1 -- 10.134.128.41:0/1024687 <== osd.2 10.134.128.43:6803/2741 3 ==== osd_op_reply(3 seco101ira.rbd [call] v0'0 uv1 ondisk = 0) v6 ==== 181+0+12 (1759032444 0 0) 0x7fc314000c10 con 0x15b9e70
> > > > 2015-10-29 11:17:08.194902 7fc3334fa7c0 10 librados: Objecter returned from call r=0
> > > > 2015-10-29 11:17:08.194944 7fc3334fa7c0 10 librados: call oid=seco101ira.rbd nspace=
> > > > 2015-10-29 11:17:08.194963 7fc3334fa7c0  1 -- 10.134.128.41:0/1024687 --> 10.134.128.43:6803/2741 -- osd_op(client.7543.0:4 seco101ira.rbd [call lock.get_info] 4.a982c550 ack+read e4350) v4 -- ?+0 0x15bd450 con 0x15b9e70
> > > > 2015-10-29 11:17:08.196313 7fc32da9a700  1 -- 10.134.128.41:0/1024687 <== osd.2 10.134.128.43:6803/2741 4 ==== osd_op_reply(4 seco101ira.rbd [call] v0'0 uv1 ondisk = 0) v6 ==== 181+0+15 (3120959177 0 2149983739) 0x7fc314000c10 con 0x15b9e70
> > > > 2015-10-29 11:17:08.196356 7fc3334fa7c0 10 librados: Objecter returned from call r=0
> > > > 2015-10-29 11:17:08.196382 7fc3334fa7c0 10 librbd::ImageCtx:  cache bytes 64 order 22 -> about 10 objects
> > > > 2015-10-29 11:17:08.196386 7fc3334fa7c0 10 librbd::ImageCtx: init_layout stripe_unit 4194304 stripe_count 1 object_size 4194304 prefix rb.0.16cf.238e1f29 format rb.0.16cf.238e1f29.%012llx
> > > > 2015-10-29 11:17:08.196403 7fc3334fa7c0 10 librados: set snap write context: seq = 0 and snaps = []
> > > > 2015-10-29 11:17:08.196437 7fc3334fa7c0 10 librados: set snap read head -> head
> > > > 2015-10-29 11:17:08.196446 7fc3334fa7c0 20 librbd: info 0x15b8390
> > > > 2015-10-29 11:17:08.196452 7fc3334fa7c0 20 librbd: ictx_check 0x15b8390
> > > > 2015-10-29 11:17:08.196461 7fc3334fa7c0 20 librbd: read_iterate 0x15b8390 off = 0 len = 1048576
> > > > 2015-10-29 11:17:08.196464 7fc3334fa7c0 20 librbd: ictx_check 0x15b8390
> > > > 2015-10-29 11:17:08.196475 7fc3334fa7c0 20 librbd: aio_read 0x15b8390 completion 0x15bbc80 [0,1048576]
> > > > 2015-10-29 11:17:08.196479 7fc3334fa7c0 20 librbd: ictx_check 0x15b8390
> > > > 2015-10-29 11:17:08.196504 7fc3334fa7c0 20 librbd:  oid rb.0.16cf.238e1f29.000000000000 0~1048576 from [0,1048576]
> > > > 2015-10-29 11:17:08.196564 7fc3334fa7c0 20 librbd::AioCompletion: AioCompletion::finish_adding_requests 0x15bbc80 pending 1
> > > > 
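[Editor's note: the last log line above, finish_adding_requests ... pending 1, is where the export stalls. The read is for 1048576 bytes, but the cache was configured as 64 bytes, so the object cacher can never reserve enough space to admit the request and the caller waits forever. A minimal sketch of that admission condition, using hypothetical names rather than the actual librbd ObjectCacher code:]

```python
def can_admit(read_len, cache_size):
    """Illustrative only: the object cacher admits a read once it can
    reserve read_len bytes of cache.  If cache_size < read_len, the
    reservation can never succeed, so the read blocks indefinitely."""
    return read_len <= cache_size

# "rbd cache size = 64" is 64 bytes, vs. a 1 MiB read_iterate request
assert not can_admit(1048576, 64)         # never admitted -> hang
assert can_admit(1048576, 64 * 2 ** 20)   # fine with a 64 MiB cache
```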
> > > > Below is my ceph.conf
> > > > ----------------------
> > > > 
> > > > [client]
> > > >         rbd cache = true
> > > >         rbd cache size = 64
> > > >         rbd cache max dirty = 32
> > > >         rbd cache target dirty = 16
> > > >         rbd cache max dirty age = 5.0
> > > > 
> > > >         debug rbd = 20
> > > >         debug rados = 20
> > > >         debug ms = 1
> > > >         log file = /var/log/ceph/client.$name.$pid.log
> > > > [global]
> > > >          public network = 10.134.128.0/26
> > > >          cluster network = 10.134.128.64/26
> > > >          fsid = a0a1c9ea-9a77-41fe-aef1-b87136776ac3
> > > > 
> > > >          # For version 0.55 and beyond, you must explicitly enable
> > > >          # or disable authentication with "auth" entries in [global].
> > > >          auth cluster required = cephx
> > > >          auth service required = cephx
> > > >          auth client required = cephx
> > > >          osd journal size = 10000
> > > > 
> > > > [osd]
> > > >          osd recovery max active = 1
> > > > 
> > > >          osd journal size = 10000
> > > >          journal aio = true
> > > >  
> > > >          #The following assumes ext4 filesystem.
> > > >          #osd mkfs options {fs-type} = {mkfs options}   # default for xfs is "-f"
> > > >          #osd mount options {fs-type} = {mount options} # default mount option is "rw,noatime"
> > > >          # For example, for ext4, the mount option might look like this:
> > > >          #osd mkfs options ext4 = user_xattr,rw,noatime
> > > >         
> > > >          osd mkfs options xfs = -L -f -d su=64k,sw=1 $name
> > > >          osd mount options btrfs = rw,noatime
> > > >          osd mount options xfs = rw,noatime,nodiratime
> > > > 
> > > >          filestore xattr use omap = true
> > > >          # osd mkfs type = btrfs
> > > >          osd mkfs type = xfs
> > > >          osd mkfs options btrfs = -L $name
> > > > 
> > > > 
> > > >         # CAIT -- Manual commands to make and mount file system
> > > > 
> > > >         # -- Make xfs file system
> > > >         # mkfs -t xfs -f -L ceph-X -d su=64k,sw=1 /dev/sdX1
> > > > 
> > > >         # -- Rescan Parition Label
> > > >         # partprobe /dev/sdX1
> > > > 
> > > >         # -- Mount ceph file system
> > > >         # mount -o rw,noatime,nodiratime /dev/disk/by-label/ceph-X
> > > >         /var/lib/ceph/osd/ceph-X
> > > > 
> > > > [mon.durbium]
> > > >         host = durbium
> > > >         mon addr = 10.134.128.41:6789
> > > > 
> > > > [mon.zirconium]
> > > >         host = zirconium
> > > >         mon addr = 10.134.128.43:6789
> > > > 
> > > > [mon.stone]
> > > >         host = stone
> > > >         mon addr = 10.134.128.42:6789
> > > > 
> > > > [osd.0]
> > > >          host = durbium
> > > >          devs = /dev/vg-osd-0/lv-osd
> > > >          osd mkfs type = xfs
> > > >          osd journal = /dev/vg-osd-0/lv-journal
> > > > 
> > > > [osd.1]
> > > >          host = zirconium
> > > >          devs = /dev/vg-osd-1/lv-osd
> > > >          osd mkfs type = xfs
> > > >          osd journal = /dev/vg-osd-1/lv-journal
> > > > 
> > > > [osd.2]
> > > >          host = zirconium
> > > >          devs = /dev/vg-osd-2/lv-osd
> > > >          osd mkfs type = xfs
> > > >          osd journal = /dev/vg-osd-2/lv-journal
> > > > 
> > > > 
> > > > [osd.3]
> > > >          host = zirconium
> > > >          devs = /dev/vg-osd-3/lv-osd
> > > >          osd mkfs type = xfs
> > > >          osd journal = /dev/vg-osd-3/lv-journal
> > > > 
> > > > [osd.4]
> > > >          host = zirconium
> > > >          devs = /dev/vg-osd-4/lv-osd
> > > >          osd mkfs type = xfs
> > > >          osd journal = /dev/vg-osd-4/lv-journal
> > > > [osd.5]
> > > >          host = stone
> > > >          devs = /dev/vg-osd-5/lv-osd
> > > >          osd mkfs type = xfs
> > > >          osd journal = /dev/vg-osd-5/lv-journal
> > > > 
> > > > [osd.6]
> > > >          host = stone
> > > >          devs = /dev/vg-osd-6/lv-osd
> > > >          osd mkfs type = xfs
> > > >          osd journal = /dev/vg-osd-6/lv-journal
> > > > 
> > > > [osd.7]
> > > >          host = stone
> > > >          devs = /dev/vg-osd-7/lv-osd
> > > >          osd mkfs type = xfs
> > > >          osd journal = /dev/vg-osd-7/lv-journal
> > > > 
> > > > [osd.8]
> > > >          host = stone
> > > >          devs = /dev/vg-osd-8/lv-osd
> > > >          osd mkfs type = xfs
> > > >          osd journal = /dev/vg-osd-8/lv-journal
> > > > 
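[Editor's note: the cache-related values in the [client] section above are parsed as bytes, not megabytes, so "rbd cache size = 64" configures a 64-byte cache. A corrected section targeting a 64 MiB cache might look like the following; the dirty thresholds are illustrative values chosen to stay proportional to the original settings, not official recommendations:]

```ini
[client]
        rbd cache = true
        # all sizes below are in bytes
        rbd cache size = 67108864         # 64 MiB
        rbd cache max dirty = 33554432    # 32 MiB
        rbd cache target dirty = 16777216 # 16 MiB
        rbd cache max dirty age = 5.0
```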
> > > > 
> > > > 
> > > > 
> > > > --
> > > > Joe Ryner
> > > > Center for the Application of Information Technologies (CAIT)
> > > > _______________________________________________
> > > > ceph-users mailing list
> > > > ceph-users@lists.ceph.com
> > > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > > > 
> > > 
> > > --
> > > Joe Ryner
> > > Center for the Application of Information Technologies (CAIT)
> > > Production Coordinator
> > > P: (309) 298-1804
> > > F: (309) 298-2806
> > > 
> > 
> 