> FWIW, the xfs -n size=64k option is probably not a good idea.

Agreed; moreover, it's a really bad idea. You get memory allocation slowdowns
as described in the linked post, and eventually the OSD dies.
It can be mitigated somewhat by periodically (say every 2 hours, YMMV) flushing
the system caches.
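A periodic flush of this sort could be scripted roughly as follows. This is a sketch, not from the thread: the use of /proc/sys/vm/drop_caches and the value 2 are assumptions about what "flushing the system caches" means here.

```shell
#!/bin/sh
# Sketch of the periodic-flush mitigation discussed above: sync dirty
# pages, then drop reclaimable slab objects (dentries/inodes) so that
# XFS 64k directory-block allocations stall less on fragmented memory.
# The sysctl file is a parameter so the function can be exercised
# without root; on a real host pass /proc/sys/vm/drop_caches.
flush_system_caches() {
    target="$1"
    sync                 # write dirty pages out first
    echo 2 > "$target"   # 2 = free slab objects only (not page cache)
}
```

Called as `flush_system_caches /proc/sys/vm/drop_caches` from a root cron job, e.g. every two hours as suggested above.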
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Terrible RBD performance with Jewel
Try increasing the following to, say, 10:
osd_op_num_shards = 10
filestore_fd_cache_size = 128
I hope you introduced the following only after I told you, so it shouldn't be
the cause, it seems (?):
filestore_odsy
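In ceph.conf these suggestions would look roughly like this (a sketch; the two values are the ones given in the thread, the section placement is an assumption):

```ini
; Sketch of the suggested OSD tuning from the thread; apply on each
; OSD host and restart the OSDs for the change to take effect.
[osd]
osd_op_num_shards = 10
filestore_fd_cache_size = 128
```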
Disregard the last msg. Still getting long 0 IOPS periods.
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Garg,
Pankaj
Sent: Thursday, July 14, 2016 10:05 AM
To: Somnath Roy; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Terrible RBD performance with Jewel
Regards,
Somnath
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Garg,
Pankaj
Sent: Wednesday, July 13, 2016 4:57 PM
To: ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
Subject: [ceph-users] Terrible RBD performance with Jewel
> Sent: 2016 03:34
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Terrible RBD performance with Jewel
>
> As Somnath mentioned, you've got a lot of tunables set there. Are you sure
> those are all doing what you think they are doing?
>
> FWIW, the xfs -n size=64k option is probably not a good idea.
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Somnath Roy
Sent: Wednesday, July 13, 2016 5:47 PM
To: Garg, Pankaj; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Terrible RBD performance with Jewel
Pankaj,
Could be related to the new throttle parameters introduced in Jewel. By default
these throttles are off; you need to tweak them according to your setup.
What is your journal size and fio block size?
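The Jewel-era throttles being referred to are presumably the new filestore dynamic-throttle options (an assumption; the thread does not name them). A ceph.conf sketch with purely illustrative values:

```ini
; Sketch of Jewel's filestore dynamic throttle knobs (option names
; as of Jewel, including the "threshhold" spelling); the values are
; illustrative, not recommendations.
[osd]
filestore_expected_throughput_ops = 500
filestore_expected_throughput_bytes = 536870912   ; ~512 MB/s
filestore_queue_low_threshhold = 0.3
filestore_queue_high_threshhold = 0.9
filestore_queue_high_delay_multiple = 2
filestore_queue_max_delay_multiple = 10
```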
Also increase the following:
filestore_op_threads
Hi,
I just installed Jewel on a small cluster of 3 machines with 4 SSDs each. I
created 8 RBD images and use a single client, with 8 threads, to do random
writes (using fio with the RBD engine) on the images (1 thread per image).
The cluster has 3x replication and 10G cluster and client networks.
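The described workload could be expressed as a fio job file along these lines (a sketch; the pool name, image names, block size, and queue depth are not stated in the thread and are assumptions):

```ini
; Sketch of the described test: random writes through fio's RBD
; engine, one job (thread) per image. Shown for 2 of the 8 images;
; repeat the per-image section for img3..img8.
[global]
ioengine=rbd
clientname=admin
pool=rbd            ; assumed pool name
rw=randwrite
bs=4k               ; block size not given in the thread
iodepth=32          ; assumed queue depth
runtime=300
time_based

[img1]
rbdname=img1        ; assumed image name

[img2]
rbdname=img2
```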