On 10/14/2015 10:43 PM, Mohamed Pakkeer wrote:
Hi Pranith,

Will this patch improve the heal performance on distributed disperse volumes? Currently we are getting 10 MB/s heal performance on a 10G network. The SHD daemon takes 5 days to complete the heal operation for a single 4 TB (3.5 TB of data) disk failure.
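(For scale, the arithmetic lines up: ~3.5 TB at 10 MB/s is about 350,000 seconds of pure transfer, i.e. roughly 4 days before any per-file overhead, so 5 days end to end is consistent.)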
I will try to see if the patch for replication can be generalized for disperse volume as well. Keep watching the mailing list for updates :-)

Pranith

Regards,
Backer

On Wed, Oct 14, 2015 at 9:08 PM, Ben Turner <btur...@redhat.com> wrote:

    ----- Original Message -----
    > From: "Pranith Kumar Karampuri" <pkara...@redhat.com
    <mailto:pkara...@redhat.com>>
    > To: "Ben Turner" <btur...@redhat.com
    <mailto:btur...@redhat.com>>, "Humble Devassy Chirammal"
    <humble.deva...@gmail.com <mailto:humble.deva...@gmail.com>>,
    "Atin Mukherjee"
    > <atin.mukherje...@gmail.com <mailto:atin.mukherje...@gmail.com>>
    > Cc: "gluster-users" <gluster-users@gluster.org
    <mailto:gluster-users@gluster.org>>
    > Sent: Wednesday, October 14, 2015 1:39:14 AM
    > Subject: Re: [Gluster-users] Speed up heal performance
    >
    >
    >
    > On 10/13/2015 07:11 PM, Ben Turner wrote:
    > > ----- Original Message -----
    > >> From: "Humble Devassy Chirammal" <humble.deva...@gmail.com
    <mailto:humble.deva...@gmail.com>>
    > >> To: "Atin Mukherjee" <atin.mukherje...@gmail.com
    <mailto:atin.mukherje...@gmail.com>>
    > >> Cc: "Ben Turner" <btur...@redhat.com
    <mailto:btur...@redhat.com>>, "gluster-users"
    > >> <gluster-users@gluster.org <mailto:gluster-users@gluster.org>>
    > >> Sent: Tuesday, October 13, 2015 6:14:46 AM
    > >> Subject: Re: [Gluster-users] Speed up heal performance
    > >>
    > >>> Good news is we already have a WIP patch
    > >>> (review.glusterd.org/10851) to introduce a multi-threaded SHD.
    > >>> Credits to Richard/Shreyas from Facebook for this. IIRC, we also
    > >>> have a BZ for the same.
    > >> Isn't it the same bugzilla
    > >> (https://bugzilla.redhat.com/show_bug.cgi?id=1221737) mentioned in
    > >> the commit log?
    > > @Lindsay - No need for a BZ, the above BZ should suffice.
    > >
    > > @Anyone - In the commit I see:
    > >
    > >          { .key        = "cluster.shd-max-threads",
    > >            .voltype    = "cluster/replicate",
    > >            .option     = "shd-max-threads",
    > >            .op_version = 1,
    > >            .flags      = OPT_FLAG_CLIENT_OPT
    > >          },
    > >          { .key        = "cluster.shd-thread-batch-size",
    > >            .voltype    = "cluster/replicate",
    > >            .option     = "shd-thread-batch-size",
    > >            .op_version = 1,
    > >            .flags      = OPT_FLAG_CLIENT_OPT
    > >          },
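    > > (Assuming these land as regular volume options, I'd expect them to
    > > be settable with the usual CLI, something like:
    > >
    > >     gluster volume set <volname> cluster.shd-max-threads 4
    > >     gluster volume set <volname> cluster.shd-thread-batch-size 8
    > >
    > > The values here are made up; I haven't tested the WIP patch.)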
    > >
    > > So we can tune max threads and thread batch size?  I understand max
    > > threads, but what is batch size?  In my testing on 10G NICs with a
    > > backend that will service 10G throughput I see about 1.5 GB per
    > > minute of SH throughput.  To Lindsay's other point, will this patch
    > > improve SH throughput?  My systems can write at 1.5 GB / sec and
    > > NICs can do 1.2 GB / sec, but I only see ~1.5 GB per _minute_ of SH
    > > throughput.  If we can not only make SH multi threaded, but also
    > > improve the performance of a single thread, that would be awesome.
    > > Super bonus points if we can have some sort of tunable that can
    > > limit the bandwidth each thread can consume.  It would be great to
    > > be able to crank things up when the systems aren't busy and slow
    > > things down when load increases.
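    > > (Doing the math on that gap: 1.5 GB per minute is only ~25 MB/s,
    > > about 2% of the ~1.2 GB/s the NIC can move, so there is roughly 50x
    > > headroom for SH throughput on this hardware.)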
    > This patch is not merged because I thought we needed the throttling
    > feature to go in first, for better control of the self-heal speed.
    > We are doing that for 3.8, so expect to see both of these in 3.8.

    Great news!  You da man Pranith, next time I am on your side of
    the world beers are on me :)

    -b

    >
    > Pranith
    > >
    > > -b
    > >
    > >
    > >> --Humble
    > >>
    > >>
    > >> On Tue, Oct 13, 2015 at 7:26 AM, Atin Mukherjee
    > >> <atin.mukherje...@gmail.com> wrote:
    > >>
    > >>> -Atin
    > >>> Sent from one plus one
    > >>> On Oct 13, 2015 3:16 AM, "Ben Turner" <btur...@redhat.com> wrote:
    > >>>> ----- Original Message -----
    > >>>>> From: "Lindsay Mathieson" <lindsay.mathie...@gmail.com
    <mailto:lindsay.mathie...@gmail.com>>
    > >>>>> To: "gluster-users" <gluster-users@gluster.org
    <mailto:gluster-users@gluster.org>>
    > >>>>> Sent: Friday, October 9, 2015 9:18:11 AM
    > >>>>> Subject: [Gluster-users] Speed up heal performance
    > >>>>>
    > >>>>> Is there any way to max out heal performance? My cluster is
    > >>>>> unused overnight, and lightly used at lunchtimes, so it would
    > >>>>> be handy to speed up a heal.
    > >>>>>
    > >>>>> The only tuneable I found was cluster.self-heal-window-size,
    > >>>>> which doesn't seem to make much difference.
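    > >>>>> (What I tried was along the lines of:
    > >>>>>
    > >>>>>     gluster volume set <volname> cluster.self-heal-window-size 32
    > >>>>>
    > >>>>> with no obvious change in heal speed.)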
    > >>>> I don't know of any way to speed this up, maybe someone else
    > >>>> could chime in here that knows the heal daemon better than me.
    > >>>> Maybe you could open an RFE on this?  In my testing I only see 2
    > >>>> files getting healed at a time per replica pair.  I would like
    > >>>> to see this be multi threaded (if it's not already) with the
    > >>>> ability to tune it to control resource usage (similar to what we
    > >>>> did in the rebalance refactoring done recently).  If you let me
    > >>>> know the BZ # I'll add my data + suggestions; I have been
    > >>>> testing this pretty extensively in recent weeks and have good
    > >>>> data + some ideas on how to speed things up.
    > >>> Good news is we already have a WIP patch
    > >>> (review.glusterd.org/10851) to introduce a multi-threaded SHD.
    > >>> Credits to Richard/Shreyas from Facebook for this. IIRC, we also
    > >>> have a BZ for the same, but the patch is in RFC as of now. AFAIK,
    > >>> this is a candidate to land in 3.8 as well; Vijay can correct me
    > >>> otherwise.
    > >>>> -b
    > >>>>
    > >>>>> thanks,
    > >>>>> --
    > >>>>> Lindsay
    > >>>>>
    >
    >

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
