Re: [ceph-users] CephFS/ceph-fuse performance

2018-06-06 Thread Gregory Farnum
>
> On 06/06/2018 12:22 PM, Andras Pataki wrote:
> > Hi Greg,
> >
> > The docs say that client_cache_size is the number of inodes that are
> > cached, not bytes of data.  Is that incorrect?
>

Oh whoops, you're correct of course. Sorry about that!

On Wed, Jun 6, 2018 at 12:33 PM Andras Pataki 
wrote:

> Staring at the logs a bit more it seems like the following lines might
> be the clue:
>
> 2018-06-06 08:14:17.615359 7fffefa45700 10 objectcacher trim  start:
> bytes: max 2147483640  clean 2145935360, objects: max 8192 current 8192
> 2018-06-06 08:14:17.615361 7fffefa45700 10 objectcacher trim finish:
> max 2147483640  clean 2145935360, objects: max 8192 current 8192
>
> Perhaps the object cacher could not free objects to make room for new
> ones (it was caching 8192 objects, which is the maximum in the config)?
> Not sure why that would be, though.  Unfortunately the job has since
> terminated, so I can no longer look at the client's caches.
>

Yeah, that's got to be why. I don't *think* there's any reason to set a
reachable limit on the number of objects. It may not be able to free them
if they're still dirty and haven't been flushed; that ought to be the only
reason. Or maybe you've discovered some bug in the caching code,
but...well, it's not super likely.
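
If it does turn out to be the object count that's binding (2147483640
bytes spread over 8192 objects averages only ~256 KB cached per object),
something like this on the client might be worth a try. These are the
standard client ObjectCacher knobs; the new object limit below is just an
illustrative number, not a recommendation:

    [client]
        # byte limit of the ObjectCacher (matches the "max 2147483640"
        # seen in the trim lines above)
        client_oc_size = 2147483640
        # object-count limit; the log shows the client pinned at the
        # current value of 8192
        client_oc_max_objects = 65536
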
-Greg


Re: [ceph-users] CephFS/ceph-fuse performance

2018-06-06 Thread Andras Pataki
Staring at the logs a bit more it seems like the following lines might 
be the clue:


2018-06-06 08:14:17.615359 7fffefa45700 10 objectcacher trim  start: 
bytes: max 2147483640  clean 2145935360, objects: max 8192 current 8192
2018-06-06 08:14:17.615361 7fffefa45700 10 objectcacher trim finish:  
max 2147483640  clean 2145935360, objects: max 8192 current 8192


Perhaps the object cacher could not free objects to make room for new
ones (it was caching 8192 objects, which is the maximum in the config)?
Not sure why that would be, though.  Unfortunately the job has since
terminated, so I can no longer look at the client's caches.
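
If it happens again, the live values could be pulled straight from the
running ceph-fuse via its admin socket (assuming one is enabled for the
client; the socket path below is just an example of what shows up under
/var/run/ceph/):

    # current ObjectCacher limits on the running client
    ceph daemon /var/run/ceph/ceph-client.admin.asok config get client_oc_size
    ceph daemon /var/run/ceph/ceph-client.admin.asok config get client_oc_max_objects
    # client/objecter perf counters, including reads issued to the OSDs
    ceph daemon /var/run/ceph/ceph-client.admin.asok perf dump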


Andras


On 06/06/2018 12:22 PM, Andras Pataki wrote:

Hi Greg,

The docs say that client_cache_size is the number of inodes that are 
cached, not bytes of data.  Is that incorrect?


Andras


On 06/06/2018 11:25 AM, Gregory Farnum wrote:

On Wed, Jun 6, 2018 at 5:52 AM, Andras Pataki
 wrote:
We're using CephFS with Luminous 12.2.5 and the fuse client (on CentOS 7.4,
kernel 3.10.0-693.5.2.el7.x86_64).  Performance has been very good
generally, but we're currently running into some strange performance issues
with one of our applications.  The client in this case is on a higher
latency link - it is about 2.5ms away from all the ceph server nodes (all
ceph server nodes are near each other on 10/40Gbps local ethernet, only the
client is "away").

The application is reading contiguous data at 64k chunks, the strace (-tt -T
flags) looks something like:

06:37:04.152667 read(3, ".:.:.\t./.:.:.:.:.\t./.:.:.:.:.\t./"..., 65536) =
65536 <0.024052>
06:37:04.178432 read(3, ",1523\t./.:.:.:.:.\t0/0:34,0:34:99"..., 65536) =
65536 <0.023990>
06:37:04.204087 read(3, ":20:21:0,21,738\t0/0:8,0:8:0:0,0,"..., 65536) =
65536 <0.024053>
06:37:04.229919 read(3, "665\t0/0:35,0:35:99:0,102,1530\t./"..., 65536) =
65536 <0.024574>
06:37:04.255623 read(3, ":37:99:0,99,1485\t0/0:34,0:34:99:"..., 65536) =
65536 <0.023795>
06:37:04.280914 read(3, ":.\t./.:.:.:.:.:.:.\t./.:.:.:.:.:."..., 65536) =
65536 <0.023614>
06:37:04.306022 read(3, "0,0,0\t./.:0,0:0:.:0,0,0\t./.:0,0:"..., 65536) =
65536 <0.024037>


so each 64k read takes about 23-24ms.  The client has the file open for
read, the machine is not busy (load of 0.2), neither are the ceph nodes.
The fuse client seems pretty idle also.

Increasing the log level to 20 for 'client' and 'objectcacher' on ceph-fuse,
it looks like ceph-fuse gets ll_read requests of 4k in size, and it looks
like it does an async read from the OSDs in 4k chunks (if I'm interpreting
the logs right).  Here is a trace of one ll_read:

2018-06-06 08:14:17.609495 7fffe7a35700  3 client.16794661 ll_read
0x5556dadfc1a0 0x1000d092e5f  238173646848~4096
2018-06-06 08:14:17.609506 7fffe7a35700 10 client.16794661 get_caps
0x1000d092e5f.head(faked_ino=0 ref=3 ll_ref=31
cap_refs={4=0,1024=0,2048=0,4096=0,8192=0} open={1=1,2=0} mode=100664
size=244712765330/249011634176 mtime=2018-06-05 00:33:31.332901
caps=pAsLsXsFsxcrwb(0=pAsLsXsFsxcrwb) objectset[0x1000d092e5f ts 0/0 objects
6769 dirty_or_tx 0] parents=0x5f187680 0x5f138680) have
pAsLsXsFsxcrwb need Fr want Fc revoking -
2018-06-06 08:14:17.609517 7fffe7a35700 10 client.16794661 _read_async
0x1000d092e5f.head(faked_ino=0 ref=3 ll_ref=31
cap_refs={4=0,1024=0,2048=1,4096=0,8192=0} open={1=1,2=0} mode=100664
size=244712765330/249011634176 mtime=2018-06-05 00:33:31.332901
caps=pAsLsXsFsxcrwb(0=pAsLsXsFsxcrwb) objectset[0x1000d092e5f ts 0/0 objects
6769 dirty_or_tx 0] parents=0x5f187680 0x5f138680) 238173646848~4096
2018-06-06 08:14:17.609523 7fffe7a35700 10 client.16794661
min_bytes=4194304 max_bytes=268435456 max_periods=64
2018-06-06 08:14:17.609528 7fffe7a35700 10 objectcacher readx
extent(1000d092e5f.ddd1 (56785) in @6 94208~4096 -> [0,4096])
2018-06-06 08:14:17.609532 7fffe7a35700 10
objectcacher.object(1000d092e5f.ddd1/head) map_read 1000d092e5f.ddd1
94208~4096
2018-06-06 08:14:17.609535 7fffe7a35700 20
objectcacher.object(1000d092e5f.ddd1/head) map_read miss 4096 left, bh[
0x5fecdd40 94208~4096 0x5556226235c0 (0) v 0 missing] waiters = {}
2018-06-06 08:14:17.609537 7fffe7a35700  7 objectcacher bh_read on bh[
0x5fecdd40 94208~4096 0x5556226235c0 (0) v 0 missing] waiters = {}
outstanding reads 0
2018-06-06 08:14:17.609576 7fffe7a35700 10 objectcacher readx missed,
waiting on bh[ 0x5fecdd40 94208~4096 0x5556226235c0 (0) v 0 rx] waiters
= {} off 94208
2018-06-06 08:14:17.609579 7fffe7a35700 20 objectcacher readx defer
0x55570211ec00
2018-06-06 08:14:17.609580 7fffe7a35700  5 client.16794661 get_cap_ref got
first FILE_CACHE ref on 0x1000d092e5f.head(faked_ino=0 ref=3 ll_ref=31
cap_refs={4=0,1024=0,2048=1,4096=0,8192=0} open={1=1,2=0} mode=100664
size=244712765330/249011634176 mtime=2018-06-05 00:33:31.332901
caps=pAsLsXsFsxcrwb(0=pAsLsXsFsxcrwb) objectset[0x1000d092e5f ts 0/0 objects
6769 dirty_or_tx 0] parents=0x5f187680 0x5f138680)
2018-06-06 

Re: [ceph-users] CephFS/ceph-fuse performance

2018-06-06 Thread Andras Pataki

Hi Greg,

The docs say that client_cache_size is the number of inodes that are 
cached, not bytes of data.  Is that incorrect?
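
For reference, the way I read it the metadata and data caches are separate
knobs on the client; the values below are just the commonly cited
defaults, not our settings:

    [client]
        # metadata cache: number of inodes kept by the client (not bytes)
        client_cache_size = 16384
        # data cache: ObjectCacher limit in bytes
        client_oc_size = 209715200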


Andras


On 06/06/2018 11:25 AM, Gregory Farnum wrote:

On Wed, Jun 6, 2018 at 5:52 AM, Andras Pataki
 wrote:

We're using CephFS with Luminous 12.2.5 and the fuse client (on CentOS 7.4,
kernel 3.10.0-693.5.2.el7.x86_64).  Performance has been very good
generally, but we're currently running into some strange performance issues
with one of our applications.  The client in this case is on a higher
latency link - it is about 2.5ms away from all the ceph server nodes (all
ceph server nodes are near each other on 10/40Gbps local ethernet, only the
client is "away").

The application is reading contiguous data at 64k chunks, the strace (-tt -T
flags) looks something like:

06:37:04.152667 read(3, ".:.:.\t./.:.:.:.:.\t./.:.:.:.:.\t./"..., 65536) =
65536 <0.024052>
06:37:04.178432 read(3, ",1523\t./.:.:.:.:.\t0/0:34,0:34:99"..., 65536) =
65536 <0.023990>
06:37:04.204087 read(3, ":20:21:0,21,738\t0/0:8,0:8:0:0,0,"..., 65536) =
65536 <0.024053>
06:37:04.229919 read(3, "665\t0/0:35,0:35:99:0,102,1530\t./"..., 65536) =
65536 <0.024574>
06:37:04.255623 read(3, ":37:99:0,99,1485\t0/0:34,0:34:99:"..., 65536) =
65536 <0.023795>
06:37:04.280914 read(3, ":.\t./.:.:.:.:.:.:.\t./.:.:.:.:.:."..., 65536) =
65536 <0.023614>
06:37:04.306022 read(3, "0,0,0\t./.:0,0:0:.:0,0,0\t./.:0,0:"..., 65536) =
65536 <0.024037>


so each 64k read takes about 23-24ms.  The client has the file open for
read, the machine is not busy (load of 0.2), neither are the ceph nodes.
The fuse client seems pretty idle also.

Increasing the log level to 20 for 'client' and 'objectcacher' on ceph-fuse,
it looks like ceph-fuse gets ll_read requests of 4k in size, and it looks
like it does an async read from the OSDs in 4k chunks (if I'm interpreting
the logs right).  Here is a trace of one ll_read:

2018-06-06 08:14:17.609495 7fffe7a35700  3 client.16794661 ll_read
0x5556dadfc1a0 0x1000d092e5f  238173646848~4096
2018-06-06 08:14:17.609506 7fffe7a35700 10 client.16794661 get_caps
0x1000d092e5f.head(faked_ino=0 ref=3 ll_ref=31
cap_refs={4=0,1024=0,2048=0,4096=0,8192=0} open={1=1,2=0} mode=100664
size=244712765330/249011634176 mtime=2018-06-05 00:33:31.332901
caps=pAsLsXsFsxcrwb(0=pAsLsXsFsxcrwb) objectset[0x1000d092e5f ts 0/0 objects
6769 dirty_or_tx 0] parents=0x5f187680 0x5f138680) have
pAsLsXsFsxcrwb need Fr want Fc revoking -
2018-06-06 08:14:17.609517 7fffe7a35700 10 client.16794661 _read_async
0x1000d092e5f.head(faked_ino=0 ref=3 ll_ref=31
cap_refs={4=0,1024=0,2048=1,4096=0,8192=0} open={1=1,2=0} mode=100664
size=244712765330/249011634176 mtime=2018-06-05 00:33:31.332901
caps=pAsLsXsFsxcrwb(0=pAsLsXsFsxcrwb) objectset[0x1000d092e5f ts 0/0 objects
6769 dirty_or_tx 0] parents=0x5f187680 0x5f138680) 238173646848~4096
2018-06-06 08:14:17.609523 7fffe7a35700 10 client.16794661
min_bytes=4194304 max_bytes=268435456 max_periods=64
2018-06-06 08:14:17.609528 7fffe7a35700 10 objectcacher readx
extent(1000d092e5f.ddd1 (56785) in @6 94208~4096 -> [0,4096])
2018-06-06 08:14:17.609532 7fffe7a35700 10
objectcacher.object(1000d092e5f.ddd1/head) map_read 1000d092e5f.ddd1
94208~4096
2018-06-06 08:14:17.609535 7fffe7a35700 20
objectcacher.object(1000d092e5f.ddd1/head) map_read miss 4096 left, bh[
0x5fecdd40 94208~4096 0x5556226235c0 (0) v 0 missing] waiters = {}
2018-06-06 08:14:17.609537 7fffe7a35700  7 objectcacher bh_read on bh[
0x5fecdd40 94208~4096 0x5556226235c0 (0) v 0 missing] waiters = {}
outstanding reads 0
2018-06-06 08:14:17.609576 7fffe7a35700 10 objectcacher readx missed,
waiting on bh[ 0x5fecdd40 94208~4096 0x5556226235c0 (0) v 0 rx] waiters
= {} off 94208
2018-06-06 08:14:17.609579 7fffe7a35700 20 objectcacher readx defer
0x55570211ec00
2018-06-06 08:14:17.609580 7fffe7a35700  5 client.16794661 get_cap_ref got
first FILE_CACHE ref on 0x1000d092e5f.head(faked_ino=0 ref=3 ll_ref=31
cap_refs={4=0,1024=0,2048=1,4096=0,8192=0} open={1=1,2=0} mode=100664
size=244712765330/249011634176 mtime=2018-06-05 00:33:31.332901
caps=pAsLsXsFsxcrwb(0=pAsLsXsFsxcrwb) objectset[0x1000d092e5f ts 0/0 objects
6769 dirty_or_tx 0] parents=0x5f187680 0x5f138680)
2018-06-06 08:14:17.609587 7fffe7a35700 15 inode.get on 0x5f138680
0x1000d092e5f.head now 4
2018-06-06 08:14:17.612318 7fffefa45700  7 objectcacher bh_read_finish
1000d092e5f.ddd1/head tid 29067611 94208~4096 (bl is 4096) returned 0
outstanding reads 1
2018-06-06 08:14:17.612338 7fffefa45700 20 objectcacher checking bh bh[
0x5fecdd40 94208~4096 0x5556226235c0 (0) v 0 rx] waiters = {
94208->[0x5557007383a0, ]}
2018-06-06 08:14:17.612341 7fffefa45700 10 objectcacher bh_read_finish read
bh[ 0x5fecdd40 94208~4096 0x5556226235c0 (4096) v 0 clean firstbyte=46]
waiters = {}
2018-06-06 08:14:17.612344 7fffefa45700 10
objectcacher.object(1000d092e5f.ddd1/head) try_merge_bh bh[
0x5fecdd40 94208~4096 0x5556226235c0 (4096) v 0 clean

Re: [ceph-users] CephFS/ceph-fuse performance

2018-06-06 Thread Gregory Farnum
On Wed, Jun 6, 2018 at 5:52 AM, Andras Pataki
 wrote:
> We're using CephFS with Luminous 12.2.5 and the fuse client (on CentOS 7.4,
> kernel 3.10.0-693.5.2.el7.x86_64).  Performance has been very good
> generally, but we're currently running into some strange performance issues
> with one of our applications.  The client in this case is on a higher
> latency link - it is about 2.5ms away from all the ceph server nodes (all
> ceph server nodes are near each other on 10/40Gbps local ethernet, only the
> client is "away").
>
> The application is reading contiguous data at 64k chunks, the strace (-tt -T
> flags) looks something like:
>
> 06:37:04.152667 read(3, ".:.:.\t./.:.:.:.:.\t./.:.:.:.:.\t./"..., 65536) =
> 65536 <0.024052>
> 06:37:04.178432 read(3, ",1523\t./.:.:.:.:.\t0/0:34,0:34:99"..., 65536) =
> 65536 <0.023990>
> 06:37:04.204087 read(3, ":20:21:0,21,738\t0/0:8,0:8:0:0,0,"..., 65536) =
> 65536 <0.024053>
> 06:37:04.229919 read(3, "665\t0/0:35,0:35:99:0,102,1530\t./"..., 65536) =
> 65536 <0.024574>
> 06:37:04.255623 read(3, ":37:99:0,99,1485\t0/0:34,0:34:99:"..., 65536) =
> 65536 <0.023795>
> 06:37:04.280914 read(3, ":.\t./.:.:.:.:.:.:.\t./.:.:.:.:.:."..., 65536) =
> 65536 <0.023614>
> 06:37:04.306022 read(3, "0,0,0\t./.:0,0:0:.:0,0,0\t./.:0,0:"..., 65536) =
> 65536 <0.024037>
>
>
> so each 64k read takes about 23-24ms.  The client has the file open for
> read, the machine is not busy (load of 0.2), neither are the ceph nodes.
> The fuse client seems pretty idle also.
>
> Increasing the log level to 20 for 'client' and 'objectcacher' on ceph-fuse,
> it looks like ceph-fuse gets ll_read requests of 4k in size, and it looks
> like it does an async read from the OSDs in 4k chunks (if I'm interpreting
> the logs right).  Here is a trace of one ll_read:
>
> 2018-06-06 08:14:17.609495 7fffe7a35700  3 client.16794661 ll_read
> 0x5556dadfc1a0 0x1000d092e5f  238173646848~4096
> 2018-06-06 08:14:17.609506 7fffe7a35700 10 client.16794661 get_caps
> 0x1000d092e5f.head(faked_ino=0 ref=3 ll_ref=31
> cap_refs={4=0,1024=0,2048=0,4096=0,8192=0} open={1=1,2=0} mode=100664
> size=244712765330/249011634176 mtime=2018-06-05 00:33:31.332901
> caps=pAsLsXsFsxcrwb(0=pAsLsXsFsxcrwb) objectset[0x1000d092e5f ts 0/0 objects
> 6769 dirty_or_tx 0] parents=0x5f187680 0x5f138680) have
> pAsLsXsFsxcrwb need Fr want Fc revoking -
> 2018-06-06 08:14:17.609517 7fffe7a35700 10 client.16794661 _read_async
> 0x1000d092e5f.head(faked_ino=0 ref=3 ll_ref=31
> cap_refs={4=0,1024=0,2048=1,4096=0,8192=0} open={1=1,2=0} mode=100664
> size=244712765330/249011634176 mtime=2018-06-05 00:33:31.332901
> caps=pAsLsXsFsxcrwb(0=pAsLsXsFsxcrwb) objectset[0x1000d092e5f ts 0/0 objects
> 6769 dirty_or_tx 0] parents=0x5f187680 0x5f138680) 238173646848~4096
> 2018-06-06 08:14:17.609523 7fffe7a35700 10 client.16794661
> min_bytes=4194304 max_bytes=268435456 max_periods=64
> 2018-06-06 08:14:17.609528 7fffe7a35700 10 objectcacher readx
> extent(1000d092e5f.ddd1 (56785) in @6 94208~4096 -> [0,4096])
> 2018-06-06 08:14:17.609532 7fffe7a35700 10
> objectcacher.object(1000d092e5f.ddd1/head) map_read 1000d092e5f.ddd1
> 94208~4096
> 2018-06-06 08:14:17.609535 7fffe7a35700 20
> objectcacher.object(1000d092e5f.ddd1/head) map_read miss 4096 left, bh[
> 0x5fecdd40 94208~4096 0x5556226235c0 (0) v 0 missing] waiters = {}
> 2018-06-06 08:14:17.609537 7fffe7a35700  7 objectcacher bh_read on bh[
> 0x5fecdd40 94208~4096 0x5556226235c0 (0) v 0 missing] waiters = {}
> outstanding reads 0
> 2018-06-06 08:14:17.609576 7fffe7a35700 10 objectcacher readx missed,
> waiting on bh[ 0x5fecdd40 94208~4096 0x5556226235c0 (0) v 0 rx] waiters
> = {} off 94208
> 2018-06-06 08:14:17.609579 7fffe7a35700 20 objectcacher readx defer
> 0x55570211ec00
> 2018-06-06 08:14:17.609580 7fffe7a35700  5 client.16794661 get_cap_ref got
> first FILE_CACHE ref on 0x1000d092e5f.head(faked_ino=0 ref=3 ll_ref=31
> cap_refs={4=0,1024=0,2048=1,4096=0,8192=0} open={1=1,2=0} mode=100664
> size=244712765330/249011634176 mtime=2018-06-05 00:33:31.332901
> caps=pAsLsXsFsxcrwb(0=pAsLsXsFsxcrwb) objectset[0x1000d092e5f ts 0/0 objects
> 6769 dirty_or_tx 0] parents=0x5f187680 0x5f138680)
> 2018-06-06 08:14:17.609587 7fffe7a35700 15 inode.get on 0x5f138680
> 0x1000d092e5f.head now 4
> 2018-06-06 08:14:17.612318 7fffefa45700  7 objectcacher bh_read_finish
> 1000d092e5f.ddd1/head tid 29067611 94208~4096 (bl is 4096) returned 0
> outstanding reads 1
> 2018-06-06 08:14:17.612338 7fffefa45700 20 objectcacher checking bh bh[
> 0x5fecdd40 94208~4096 0x5556226235c0 (0) v 0 rx] waiters = {
> 94208->[0x5557007383a0, ]}
> 2018-06-06 08:14:17.612341 7fffefa45700 10 objectcacher bh_read_finish read
> bh[ 0x5fecdd40 94208~4096 0x5556226235c0 (4096) v 0 clean firstbyte=46]
> waiters = {}
> 2018-06-06 08:14:17.612344 7fffefa45700 10
> objectcacher.object(1000d092e5f.ddd1/head) try_merge_bh bh[
> 0x5fecdd40 94208~4096 0x5556226235c0 (4096) v 0 clean firstbyte=46]

[ceph-users] CephFS/ceph-fuse performance

2018-06-06 Thread Andras Pataki
We're using CephFS with Luminous 12.2.5 and the fuse client (on CentOS 
7.4, kernel 3.10.0-693.5.2.el7.x86_64).  Performance has been very good 
generally, but we're currently running into some strange performance 
issues with one of our applications.  The client in this case is on a 
higher latency link - it is about 2.5ms away from all the ceph server 
nodes (all ceph server nodes are near each other on 10/40Gbps local 
ethernet, only the client is "away").


The application is reading contiguous data at 64k chunks, the strace 
(-tt -T flags) looks something like:


   06:37:04.152667 read(3, ".:.:.\t./.:.:.:.:.\t./.:.:.:.:.\t./"...,
   65536) = 65536 <0.024052>
   06:37:04.178432 read(3, ",1523\t./.:.:.:.:.\t0/0:34,0:34:99"...,
   65536) = 65536 <0.023990>
   06:37:04.204087 read(3, ":20:21:0,21,738\t0/0:8,0:8:0:0,0,"...,
   65536) = 65536 <0.024053>
   06:37:04.229919 read(3, "665\t0/0:35,0:35:99:0,102,1530\t./"...,
   65536) = 65536 <0.024574>
   06:37:04.255623 read(3, ":37:99:0,99,1485\t0/0:34,0:34:99:"...,
   65536) = 65536 <0.023795>
   06:37:04.280914 read(3, ":.\t./.:.:.:.:.:.:.\t./.:.:.:.:.:."...,
   65536) = 65536 <0.023614>
   06:37:04.306022 read(3, "0,0,0\t./.:0,0:0:.:0,0,0\t./.:0,0:"...,
   65536) = 65536 <0.024037>


so each 64k read takes about 23-24ms.  The client has the file open for 
read, the machine is not busy (load of 0.2), neither are the ceph 
nodes.  The fuse client seems pretty idle also.
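
A rough back-of-the-envelope on those numbers:

   ~24 ms per 64 KiB read  /  2.5 ms network round trip  ~=  9-10
   serialized round trips per application read, i.e. roughly one round
   trip for every 6-7 KiB rather than one per 64 KiB request.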


Increasing the log level to 20 for 'client' and 'objectcacher' on 
ceph-fuse, it looks like ceph-fuse gets ll_read requests of 4k in size, 
and it looks like it does an async read from the OSDs in 4k chunks (if 
I'm interpreting the logs right).  Here is a trace of one ll_read:


   2018-06-06 08:14:17.609495 7fffe7a35700  3 client.16794661 ll_read
   0x5556dadfc1a0 0x1000d092e5f 238173646848~4096
   2018-06-06 08:14:17.609506 7fffe7a35700 10 client.16794661 get_caps
   0x1000d092e5f.head(faked_ino=0 ref=3 ll_ref=31
   cap_refs={4=0,1024=0,2048=0,4096=0,8192=0} open={1=1,2=0}
   mode=100664 size=244712765330/249011634176 mtime=2018-06-05
   00:33:31.332901 caps=pAsLsXsFsxcrwb(0=pAsLsXsFsxcrwb)
   objectset[0x1000d092e5f ts 0/0 objects 6769 dirty_or_tx 0]
   parents=0x5f187680 0x5f138680) have pAsLsXsFsxcrwb need Fr
   want Fc revoking -
   2018-06-06 08:14:17.609517 7fffe7a35700 10 client.16794661
   _read_async 0x1000d092e5f.head(faked_ino=0 ref=3 ll_ref=31
   cap_refs={4=0,1024=0,2048=1,4096=0,8192=0} open={1=1,2=0}
   mode=100664 size=244712765330/249011634176 mtime=2018-06-05
   00:33:31.332901 caps=pAsLsXsFsxcrwb(0=pAsLsXsFsxcrwb)
   objectset[0x1000d092e5f ts 0/0 objects 6769 dirty_or_tx 0]
   parents=0x5f187680 0x5f138680) 238173646848~4096
   2018-06-06 08:14:17.609523 7fffe7a35700 10 client.16794661
   min_bytes=4194304 max_bytes=268435456 max_periods=64
   2018-06-06 08:14:17.609528 7fffe7a35700 10 objectcacher readx
   extent(1000d092e5f.ddd1 (56785) in @6 94208~4096 -> [0,4096])
   2018-06-06 08:14:17.609532 7fffe7a35700 10
   objectcacher.object(1000d092e5f.ddd1/head) map_read
   1000d092e5f.ddd1 94208~4096
   2018-06-06 08:14:17.609535 7fffe7a35700 20
   objectcacher.object(1000d092e5f.ddd1/head) map_read miss 4096
   left, bh[ 0x5fecdd40 94208~4096 0x5556226235c0 (0) v 0 missing]
   waiters = {}
   2018-06-06 08:14:17.609537 7fffe7a35700  7 objectcacher bh_read on
   bh[ 0x5fecdd40 94208~4096 0x5556226235c0 (0) v 0 missing]
   waiters = {} outstanding reads 0
   2018-06-06 08:14:17.609576 7fffe7a35700 10 objectcacher readx
   missed, waiting on bh[ 0x5fecdd40 94208~4096 0x5556226235c0 (0)
   v 0 rx] waiters = {} off 94208
   2018-06-06 08:14:17.609579 7fffe7a35700 20 objectcacher readx defer
   0x55570211ec00
   2018-06-06 08:14:17.609580 7fffe7a35700  5 client.16794661
   get_cap_ref got first FILE_CACHE ref on
   0x1000d092e5f.head(faked_ino=0 ref=3 ll_ref=31
   cap_refs={4=0,1024=0,2048=1,4096=0,8192=0} open={1=1,2=0}
   mode=100664 size=244712765330/249011634176 mtime=2018-06-05
   00:33:31.332901 caps=pAsLsXsFsxcrwb(0=pAsLsXsFsxcrwb)
   objectset[0x1000d092e5f ts 0/0 objects 6769 dirty_or_tx 0]
   parents=0x5f187680 0x5f138680)
   2018-06-06 08:14:17.609587 7fffe7a35700 15 inode.get on
   0x5f138680 0x1000d092e5f.head now 4
   2018-06-06 08:14:17.612318 7fffefa45700  7 objectcacher
   bh_read_finish 1000d092e5f.ddd1/head tid 29067611 94208~4096 (bl
   is 4096) returned 0 outstanding reads 1
   2018-06-06 08:14:17.612338 7fffefa45700 20 objectcacher checking bh
   bh[ 0x5fecdd40 94208~4096 0x5556226235c0 (0) v 0 rx] waiters = {
   94208->[0x5557007383a0, ]}
   2018-06-06 08:14:17.612341 7fffefa45700 10 objectcacher
   bh_read_finish read bh[ 0x5fecdd40 94208~4096 0x5556226235c0
   (4096) v 0 clean firstbyte=46] waiters = {}
   2018-06-06 08:14:17.612344 7fffefa45700 10
   objectcacher.object(1000d092e5f.ddd1/head) try_merge_bh bh[
   0x5fecdd40 94208~4096 0x5556226235c0 (4096) v 0 clean
   first

Re: [ceph-users] cephfs ceph-fuse performance

2017-10-18 Thread Patrick Donnelly
Hello Ashley,

On Wed, Oct 18, 2017 at 12:45 AM, Ashley Merrick  wrote:
> 1/ Are there any options or optimizations that anyone has used or can suggest
> to increase ceph-fuse performance?

You may try playing with the sizes of reads/writes. Another
alternative is to use libcephfs directly to avoid fuse entirely.
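
For the first point, a crude sketch of the kind of comparison I mean
(the file path is a placeholder; dropping the page cache between runs
needs root and keeps the second pass from being served out of RAM):

    echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/cephfs/testfile of=/dev/null bs=64k
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/cephfs/testfile of=/dev/null bs=4M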

> 2/ The reason for looking at ceph-fuse is the benefit of cephfs quotas
> (currently not enabled), will it ever be possible to enable quotas on the
> kernel mount or is this something not possible with the current
> implementation of quotas?

Adding quota support to the kernel is one of our priorities for Mimic.
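
In the meantime, quotas are applied per directory with virtual xattrs,
which ceph-fuse and libcephfs clients enforce (the directory path below
is a placeholder):

    # limit a directory tree to 100 GiB and 1M files
    setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/project
    setfattr -n ceph.quota.max_files -v 1000000 /mnt/cephfs/project
    # read a limit back
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/project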

-- 
Patrick Donnelly


[ceph-users] cephfs ceph-fuse performance

2017-10-18 Thread Ashley Merrick
Hello,

I have been trying cephfs on the latest 12.x release.

Performance with cephfs mounted via the kernel client (kernel version 
4.13.4) is as expected, maxing out the underlying storage / resources.

However, when mounting cephfs via ceph-fuse, we are seeing roughly 5-10% 
of that performance for the same read & write operations.
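
For concreteness, the two mounts being compared are along these lines
(the monitor address and credentials are placeholders):

    # kernel client
    mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs-kernel \
        -o name=admin,secretfile=/etc/ceph/admin.secret
    # fuse client against the same tree, using /etc/ceph/ceph.conf for
    # the monitor addresses
    ceph-fuse --id admin /mnt/cephfs-fuse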

1/ Are there any options or optimizations that anyone has used or can suggest 
to increase ceph-fuse performance?
2/ The reason for looking at ceph-fuse is the benefit of cephfs quotas 
(currently not enabled). Will it ever be possible to enable quotas on the 
kernel mount, or is this something not possible with the current 
implementation of quotas?

From the few articles I've read online about ceph-fuse, some have found the 
performance to be the other way around.

Cheers