Interesting. We've seen some issues with aio_submit and NVMe cards on the 3.10 kernel, but haven't seen any issues with spinning disks.

Mark

On 05/07/2016 01:00 PM, Roozbeh Shafiee wrote:
Thank you, Mark, for your response.

The problem was caused by a kernel issue. I had installed the Jewel release
on CentOS 7 with the stock 3.10 kernel, and it seems 3.10 is too old for
Ceph Jewel; after upgrading to kernel 4.5.2, everything was fixed and works
perfectly.
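
In case anyone hits the same thing, here is a quick way to sanity-check the
running kernel before deploying Jewel (a minimal Python sketch; the 4.5
threshold simply reflects what worked for me, not an official requirement):

    # Sanity-check the running kernel version.  The stock CentOS 7 kernel
    # (3.10) was problematic here; 4.5.2 worked.  The (4, 5) threshold is
    # just what worked in my case, not an official requirement.
    import platform

    release = platform.release()          # e.g. "4.5.2-1.el7.elrepo.x86_64"
    major, minor = (int(x) for x in release.split(".")[:2])
    if (major, minor) < (4, 5):
        print("kernel %s may be too old for Ceph Jewel" % release)
    else:
        print("kernel %s looks OK" % release)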

Regards,
Roozbeh

On May 3, 2016 21:13, "Mark Nelson" <mnel...@redhat.com> wrote:

    Hi Roozbeh,

    There isn't nearly enough information here regarding your benchmark
    and test parameters to be able to tell why you are seeing
    performance swings.  It could be anything from network hiccups, to
    throttling in the Ceph stack, to unlucky randomness in object
    distribution, to vibrations in the rack causing your disk heads to
    resync, to fragmentation of the underlying filesystem (especially
    important for sequential reads).

    Generally speaking, if you want to isolate the source of the
    problem, it's best to find a way to make the issue repeatable on
    demand, then set up your tests so you can record system metrics
    (device queue/service times, throughput stalls, network oddities,
    etc.) and start systematically tracking down when and why slowdowns
    occur.  Sometimes you may even be able to reproduce the issue
    outside of Ceph (network problems are a common source).
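
    For example, here is a minimal sketch of the kind of recorder I mean
    (Python; it polls /proc/diskstats once a second and prints per-device
    throughput and utilization; the device names are placeholders for
    your OSD data disks):

        # Poll /proc/diskstats once a second and print per-device
        # read/write throughput and utilization.  DEVICES is a
        # placeholder; list your actual OSD data disks.
        import time

        DEVICES = ("sda", "sdb", "sdc", "sdd")
        SECTOR = 512   # /proc/diskstats counts 512-byte sectors

        def snapshot():
            stats = {}
            with open("/proc/diskstats") as f:
                for line in f:
                    p = line.split()
                    if p[2] in DEVICES:
                        # sectors read, sectors written, ms doing I/O
                        stats[p[2]] = (int(p[5]), int(p[9]), int(p[12]))
            return stats

        prev = snapshot()
        while True:
            time.sleep(1)
            cur = snapshot()
            for dev in sorted(cur):
                rd = (cur[dev][0] - prev[dev][0]) * SECTOR / 1e6  # MB/s read
                wr = (cur[dev][1] - prev[dev][1]) * SECTOR / 1e6  # MB/s written
                util = (cur[dev][2] - prev[dev][2]) / 10.0        # % busy
                print("%s  rd %7.2f MB/s  wr %7.2f MB/s  util %5.1f%%"
                      % (dev, rd, wr, util))
            prev = cur

    Logging that alongside your benchmark makes it much easier to see
    whether the stalls line up with one particular device or host.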

    It might also be worth looking at your PG and data distribution,
    i.e., if you have some clumpiness you might see variation in
    performance as some OSDs starve for I/O while others are overloaded.
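
    A quick way to check is to parse `ceph osd df` and look at the
    spread of per-OSD utilization.  A rough sketch (the JSON field names
    assume the Jewel-era output; verify against your cluster):

        # Report the spread of per-OSD utilization from `ceph osd df`.
        # Field names ("nodes", "utilization", "name") assume the
        # Jewel-era JSON output; verify against your cluster.
        import json
        import subprocess

        out = subprocess.check_output(
            ["ceph", "osd", "df", "--format", "json"])
        nodes = json.loads(out.decode())["nodes"]

        utils = [n["utilization"] for n in nodes]
        mean = sum(utils) / len(utils)
        print("OSDs: %d  min %.1f%%  max %.1f%%  mean %.1f%%"
              % (len(nodes), min(utils), max(utils), mean))
        for n in nodes:
            # flag OSDs more than 20% away from the mean utilization
            if mean and abs(n["utilization"] - mean) / mean > 0.2:
                print("  %s looks unbalanced: %.1f%%"
                      % (n["name"], n["utilization"]))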

    Good luck!

    Mark

    On 05/03/2016 11:16 AM, Roozbeh Shafiee wrote:

        Hi,

        I have a test Ceph cluster in my lab which will be the storage
        backend for one of my projects.
        This cluster is my first experience with CentOS 7, but I have
        recently used Ceph on Ubuntu 14.04 as well.

        Everything works fine and the cluster is fully functional, but
        the main problem is read and write performance.  Throughput
        swings wildly, anywhere between 60 KB/s and 70 MB/s, especially
        on reads.
        How can I tune this cluster to be a stable storage backend for
        my use case?
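
        For reference, read throughput can be sampled on the client
        with something like the following rough sketch (the /dev/rbd0
        path is an assumption; note that it reads through the page
        cache, so drop caches first):

            # Sample sequential read throughput from the mapped RBD
            # device once per second.  /dev/rbd0 is an assumption
            # (use whatever `rbd map` returned).  Reads go through the
            # page cache, so `echo 3 > /proc/sys/vm/drop_caches` first.
            import os
            import time

            DEV = "/dev/rbd0"
            CHUNK = 4 * 1024 * 1024   # 4 MB reads

            fd = os.open(DEV, os.O_RDONLY)
            buf = True
            while buf:
                t0, n = time.time(), 0
                while time.time() - t0 < 1.0:
                    buf = os.read(fd, CHUNK)
                    if not buf:       # end of device
                        break
                    n += len(buf)
                print("%.2f MB/s" % (n / 1e6))
            os.close(fd)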

        More information:

            OSD nodes: 5 physical servers, each with 4x 4 TB disks,
            16 GB of RAM, and a Core i7 CPU
            Monitors: 1 virtual machine with 180 GB on SSD and 16 GB of
            RAM, running on KVM
            All operating systems: CentOS 7.2 with the default 3.10 kernel
            All filesystems: XFS
            Ceph version: 10.2 (Jewel)
            Private network switch: D-Link DGS-1008D (8-port Gigabit)
            NICs: 2x 1 Gb/s per server
            Client block device: Linux kernel RBD module


        Thank you
        Roozbeh



_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
