I'm pretty new to RGW, but I need to get max performance as well. Have you
tried moving your RGW metadata pools to NVMe? Carve out a bit of NVMe
space, give it its own device class, and pin the pool to that class in
CRUSH so the small metadata ops aren't sitting on slow media.
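
A rough sketch of what I mean (untested on your cluster; the rule name is a
placeholder and the pool names assume the default RGW zone):

    # replicated rule restricted to OSDs in the nvme device class
    ceph osd crush rule create-replicated rgw-meta-nvme default host nvme

    # point the RGW metadata/index pools at the new rule
    ceph osd pool set default.rgw.meta crush_rule rgw-meta-nvme
    ceph osd pool set default.rgw.log crush_rule rgw-meta-nvme
    ceph osd pool set default.rgw.buckets.index crush_rule rgw-meta-nvme

You can verify the mapping afterwards with "ceph osd pool get <pool>
crush_rule".
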
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Wed, Jul 17, 2019 at 5:59 PM Ravi Patel <r...@kheironmed.com> wrote:

> Hello,
>
> We have deployed a Ceph cluster and are trying to debug a massive drop in
> performance between the RADOS layer and the RGW layer.
>
> ## Cluster config
> 4 OSD nodes (12 drives each, NVMe journals, 1 SSD drive), 40GbE NIC
> 2 RGW nodes (DNS round-robin load balancing), 40GbE NIC
> 3 MON nodes, 1GbE NIC
>
> ## Pool configuration
> RGW data pool - replicated 3x, 4M stripe (HDD)
> RGW metadata pool - replicated 3x (SSD)
>
> ## Benchmarks
> 4K read performance using rados bench: ~48,000 IOPS (invocation sketched
> below)
> 4K read performance via the S3 interface (RGW): ~130 IOPS
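> (For reference, a 4K rados bench read run looks roughly like the
> following; the exact data pool name here is an assumption:
>
>     rados bench -p default.rgw.buckets.data 60 write -b 4096 --no-cleanup
>     rados bench -p default.rgw.buckets.data 60 rand
>
> i.e. populate the pool with 4K objects first, then read them back
> randomly.)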
>
> We are really trying to understand how to debug this issue. None of the
> nodes ever exceed 15% CPU utilization, and there is plenty of RAM. The one
> pathological issue in our cluster is that the MON nodes are currently on
> VMs sitting behind a single 1GbE NIC. (We are in the process of moving
> them, but are unsure whether that will fix the issue.)
>
> What metrics should we be looking at to debug the RGW layer? Where do we
> need to look?
>
> ---
>
> Ravi Patel, PhD
> Machine Learning Systems Lead
> Email: r...@kheironmed.com
>
>
> *Kheiron Medical Technologies*
>
> kheironmed.com | supporting radiologists with deep learning
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
