Re: [ceph-users] Terrible RBD performance with Jewel

2016-07-22 Thread Anthony D'Atri
> FWIW, the xfs -n size=64k option is probably not a good idea. Agreed, moreover it’s a really bad idea. You get memory allocation slowdowns as described in the linked post, and eventually the OSD dies. It can be mitigated somewhat by periodically (say every 2 hours, ymmv) flushing the
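For context, the option in question is the XFS directory block size chosen at mkfs time; a minimal sketch of the two choices (the device path is a placeholder):

    # Option under discussion: 64 KiB directory blocks. Being larger than the
    # 4 KiB page size, they require multi-page (high-order) allocations, which
    # is what leads to the slowdowns described above on long-running OSDs.
    mkfs.xfs -n size=64k /dev/sdX1

    # Default: leave -n size alone (4 KiB directory blocks).
    mkfs.xfs /dev/sdX1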

Re: [ceph-users] Terrible RBD performance with Jewel

2016-07-14 Thread Adrian Saul
To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Terrible RBD performance with Jewel Try increasing the following, to say osd_op_num_shards = 10 and filestore_fd_cache_size = 128. I hope you introduced the following only after I told you, so it shouldn't be the cause, it seems (?): filestore_odsync_wr
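A minimal sketch of where those two tunables would sit in ceph.conf, assuming they are applied to all OSDs; the values are simply the ones suggested above, not recommendations:

    [osd]
        osd_op_num_shards       = 10    # shards for the OSD op work queue
        filestore_fd_cache_size = 128   # file descriptor cache entries kept by filestore

After changing these, restart the OSDs or inject the values at runtime with "ceph tell osd.* injectargs".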

Re: [ceph-users] Terrible RBD performance with Jewel

2016-07-14 Thread Garg, Pankaj
Disregard the last msg. Still getting long 0 IOPS periods. From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Garg, Pankaj Sent: Thursday, July 14, 2016 10:05 AM To: Somnath Roy; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Terrible RBD performance with Jewel

Re: [ceph-users] Terrible RBD performance with Jewel

2016-07-14 Thread Somnath Roy
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Garg, Pankaj Sent: Wednesday, July 13, 2016 4:57 PM To: ceph-users@lists.ceph.com Subject: [ceph-users] Terrible RBD performance with Jewel Hi, I just installed jewel on a small cluster of

Re: [ceph-users] Terrible RBD performance with Jewel

2016-07-14 Thread Garg, Pankaj
2016 5:47 PM To: Garg, Pankaj; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Terrible RBD performance with Jewel Pankaj, Could be related to the new throttle parameter introduced in jewel. By default these throttles are off, you need to tweak it acco
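For reference, the throttles introduced in jewel are most likely the new filestore/journal queue throttles; a hedged ceph.conf sketch, with placeholder values that are not taken from the thread:

    [osd]
        # Backoff only starts once the expected-throughput targets are set;
        # with the defaults these throttles are effectively off.
        filestore_expected_throughput_ops   = 500
        filestore_expected_throughput_bytes = 536870912
        filestore_queue_low_threshhold      = 0.3
        filestore_queue_high_threshhold     = 0.9
        filestore_queue_high_delay_multiple = 2
        filestore_queue_max_delay_multiple  = 10

(The doubled "h" in "threshhold" is how the option names are actually spelled in ceph.)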

Re: [ceph-users] Terrible RBD performance with Jewel

2016-07-14 Thread Nick Fisk
To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Terrible RBD performance with Jewel As Somnath mentioned, you've got a lot of tunables set there. Are you sure those are all doing what you think they are doing? FWIW, the xfs -n size=64k option is probably not a go

Re: [ceph-users] Terrible RBD performance with Jewel

2016-07-13 Thread Mark Nelson
2016 5:47 PM To: Garg, Pankaj; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Terrible RBD performance with Jewel Pankaj, Could be related to the new throttle parameter introduced in jewel. By default these throttles are off, you need to tweak it

Re: [ceph-users] Terrible RBD performance with Jewel

2016-07-13 Thread Somnath Roy
Behalf Of Somnath Roy Sent: Wednesday, July 13, 2016 5:47 PM To: Garg, Pankaj; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Terrible RBD performance with Jewel Pankaj, Could be related to the new throttle parameter introduced in jewel. By default the

Re: [ceph-users] Terrible RBD performance with Jewel

2016-07-13 Thread Garg, Pankaj
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Garg, Pankaj Sent: Wednesday, July 13, 2016 4:57 PM To: ceph-users@lists.ceph.com Subject: [ceph-users] Terrible RBD performance with Jewel Hi, I just installed jewel on a small cluster of 3 machines with 4 SSDs each. I creat

Re: [ceph-users] Terrible RBD performance with Jewel

2016-07-13 Thread Somnath Roy
To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Terrible RBD performance with Jewel Pankaj, Could be related to the new throttle parameter introduced in jewel. By default these throttles are off, you need to tweak it according to your setup. What is your journal size and fio block siz

Re: [ceph-users] Terrible RBD performance with Jewel

2016-07-13 Thread Garg, Pankaj
Also increase the following: filestore_op_threads From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Somnath Roy Sent: Wednesday, July 13, 2016 5:47 PM To: Garg, Pankaj; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Terrible RBD perf

Re: [ceph-users] Terrible RBD performance with Jewel

2016-07-13 Thread Somnath Roy
ceph-users@lists.ceph.com Subject: Re: [ceph-users] Terrible RBD performance with Jewel Pankaj, Could be related to the new throttle parameter introduced in jewel. By default these throttles are off, you need to tweak it according to your setup. What is your journal size and fio block size? If it is
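The journal size being asked about is the per-OSD filestore journal; a hedged sketch of where it is configured and how to read it back (the value is illustrative, not a recommendation):

    [osd]
        osd_journal_size = 10240   # MB, used when the journal is created

    # Read the running value from an OSD admin socket:
    #   ceph daemon osd.0 config get osd_journal_size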

Re: [ceph-users] Terrible RBD performance with Jewel

2016-07-13 Thread Somnath Roy
Also increase the following.. filestore_op_threads From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Somnath Roy Sent: Wednesday, July 13, 2016 5:47 PM To: Garg, Pankaj; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Terrible RBD performance with Jewel Pankaj
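A hedged sketch of that knob alongside the earlier suggestions, assuming it goes in the [osd] section (the value is illustrative; jewel's default for filestore_op_threads is 2):

    [osd]
        filestore_op_threads = 8   # more parallel filestore operations; default is 2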

Re: [ceph-users] Terrible RBD performance with Jewel

2016-07-13 Thread Somnath Roy
Garg, Pankaj Sent: Wednesday, July 13, 2016 4:57 PM To: ceph-users@lists.ceph.com Subject: [ceph-users] Terrible RBD performance with Jewel Hi, I just installed jewel on a small cluster of 3 machines with 4 SSDs each. I created 8 RBD images, and use a single client, with 8 threads, to do random writes (u

[ceph-users] Terrible RBD performance with Jewel

2016-07-13 Thread Garg, Pankaj
Hi, I just installed jewel on a small cluster of 3 machines with 4 SSDs each. I created 8 RBD images and use a single client, with 8 threads, to do random writes (using FIO with the RBD engine) on the images (1 thread per image). The cluster has 3X replication and 10G cluster and client networks.
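For context, a minimal fio job file matching that description might look like the sketch below; the pool name, image names, block size, queue depth, and runtime are assumptions, since the thread does not give them:

    # randwrite-rbd.fio -- repeat the [imageN] section once per RBD image (8 in total)
    [global]
    ioengine=rbd
    clientname=admin
    pool=rbd
    rw=randwrite
    bs=4k
    iodepth=32
    time_based=1
    runtime=300

    [image1]
    rbdname=image1

Run with "fio randwrite-rbd.fio"; each [imageN] section becomes one fio job writing to its own image, matching the 8-images / 8-threads setup described above.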