"Gluster Devel" <gluster-devel@gluster.org>, "Raghavendra Gowdappa"
<rgowd...@redhat.com>, "Venky Shankar" <vshan...@redhat.com>, "Pranith Kumar
Karampuri" <pkara...@redhat.com>, "Shyamsundar Ranganathan"
<srang...@redhat.
Kumar Karampuri"
>> <pkara...@redhat.com>
>> Cc: gluster-devel@gluster.org
>> Sent: Wednesday, September 2, 2015 8:12:37 PM
>> Subject: Re: [Gluster-devel] FOP ratelimit?
>>
>> Raghavendra Gowdappa <rgowd...@redhat.com> wrote:
>>
>> &
----- Original Message -----
From: <..@redhat.com>
To: "Raghavendra Gowdappa" <rgowd...@redhat.com>
Cc: "Gluster Devel" <gluster-devel@gluster.org>
Sent: Thursday, September 10, 2015 12:16:41 PM
Subject: Re: [Gluster-devel] FOP ratelimit?

On Thu, Sep 3, 2015 at 11:36 AM, Raghavendra Gowdappa
<rgowd...@redhat.com> wrote:
> Have we given thought to other IO scheduling algorithms like the mClock
> algorithm [1], used by VMware for their QoS solution?
> Another point to keep in mind here is the distributed nature of the
> solution: it's easier to think of a brick
> controlling the throughput for a client or a
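For readers unfamiliar with mClock: each client's requests are stamped with three tags derived from a reservation (minimum rate), a weight (proportional share), and a limit (maximum rate); the scheduler first serves any request whose reservation tag is due, and otherwise picks the smallest weight tag among clients still under their limit. A minimal illustrative tagger, not Gluster code — the class names and the simplified two-phase pick are assumptions:

```python
class MClockClient:
    """Per-client mClock-style tagging: reservation, limit, weight rates."""
    def __init__(self, reservation, weight, limit):
        self.reservation = reservation
        self.weight = weight
        self.limit = limit
        self.r_tag = self.l_tag = self.p_tag = 0.0

    def tag(self, now):
        # Tag spacing is the inverse of the rate parameter; idle clients
        # are snapped forward to `now` so they cannot hoard history.
        self.r_tag = max(self.r_tag + 1.0 / self.reservation, now)
        self.l_tag = max(self.l_tag + 1.0 / self.limit, now)
        self.p_tag = max(self.p_tag + 1.0 / self.weight, now)
        return (self.r_tag, self.l_tag, self.p_tag)

def pick(queues, now):
    """queues: client -> list of pending (r, l, p) tags, oldest first."""
    # Constraint phase: serve overdue reservation tags first.
    due = [(t[0][0], c) for c, t in queues.items() if t and t[0][0] <= now]
    if due:
        return min(due)[1]
    # Weight phase: among clients under their limit, smallest p-tag wins.
    ok = [(t[0][2], c) for c, t in queues.items() if t and t[0][1] <= now]
    return min(ok)[1] if ok else None
```

The real algorithm also adjusts tags when idle clients become active again; that bookkeeping is omitted here.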
----- Original Message -----
> From: "Emmanuel Dreyfus" <m...@netbsd.org>
> To: "Raghavendra Gowdappa" <rgowd...@redhat.com>, "Pranith Kumar Karampuri"
> <pkara...@redhat.com>
> Cc: gluster-devel@gluster.org
> Sent: Wednesday,
On Wed, Sep 02, 2015 at 02:04:32PM +0530, Pranith Kumar Karampuri wrote:
> >And more generally, do we have a way to ratelimit FOPs per client, so
> >that one client cannot make the cluster unusable for the others?
> Do you have profile data?
No, it was on a production setup and I was too focused
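A per-client FOP rate limit of the kind asked about above is commonly built from a token bucket: each client refills tokens at a fixed rate up to a burst cap, and an operation proceeds only if a token is available. A sketch only, not a Gluster translator — the class and parameter names are invented:

```python
import time
from collections import defaultdict

class TokenBucket:
    """Allow up to `rate` FOPs/sec per client, with bursts up to `burst`."""
    def __init__(self, rate, burst):
        self.rate = float(rate)
        self.burst = float(burst)
        self.tokens = defaultdict(lambda: self.burst)  # client -> tokens
        self.stamp = defaultdict(float)                # client -> last refill

    def allow(self, client, now=None):
        now = time.monotonic() if now is None else now
        elapsed = now - self.stamp[client]
        self.stamp[client] = now
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens[client] = min(self.burst,
                                  self.tokens[client] + elapsed * self.rate)
        if self.tokens[client] >= 1.0:
            self.tokens[client] -= 1.0
            return True
        return False  # over budget: queue or delay the FOP, don't drop it
```

In a server-side deployment the "client" key would be whatever identity the brick sees per connection; returning False should translate into queueing rather than an error, or well-behaved applications would break.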
Hi

Yesterday I experienced the problem of a single user bringing a
glusterfs cluster to its knees because of a high amount of rename
operations.

I understand rename on DHT can be very costly because data really have
to be moved from one brick to another just for a file name change.
Is there a workaround for this behavior?
On Wed, Sep 02, 2015 at 02:05:03PM +0530, Venky Shankar wrote:
> > I understand rename on DHT can be very costly because data really have
> > to be moved from one brick to another just for a file name change.
> > Is there a workaround for this behavior?
>
> Not really. DHT uses pointer files
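To illustrate the pointer-file point: in DHT a file's name hashes to one brick, and renaming to a name that hashes elsewhere leaves the data where it is, placing a small linkto file on the newly hashed brick that points back to the data's brick. A toy model — the brick names, the md5 stand-in hash, and the dict layout are assumptions, not Gluster's actual implementation:

```python
import hashlib

BRICKS = ["brick0", "brick1", "brick2", "brick3"]

def hashed_brick(name):
    # Stand-in for DHT's real hash over the file name.
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    return BRICKS[h % len(BRICKS)]

def rename(volume, old, new):
    """volume: name -> {'data': brick holding the bytes, 'linkto': brick
    holding a pointer file, or None}."""
    entry = volume.pop(old)
    data_brick = entry["data"]
    target = hashed_brick(new)
    # The data never moves; if the new name hashes to a different brick,
    # that brick gets a zero-byte pointer (linkto) file instead.
    volume[new] = {"data": data_brick,
                   "linkto": target if target != data_brick else None}
    return volume
```

Lookups on the new name then hit the linkto file first and get redirected, which is why heavy rename workloads add a lookup cost even though no data is copied.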
----- Original Message -----
> From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> To: "Emmanuel Dreyfus" <m...@netbsd.org>, gluster-devel@gluster.org
> Sent: Wednesday, September 2, 2015 2:04:32 PM
> Subject: Re: [Gluster-devel] FOP ratelimit?
> Do you have any ideas here on QoS? Can it be provided as a use case for
> the multi-tenancy you were working on earlier?
My interpretation of QoS would include rate limiting, but more per
*activity* (e.g. self-heal, rebalance, user I/O) or per *tenant* rather
than per *client*. Also, it's easier
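Per-activity shaping of the kind suggested above could, for instance, use deficit round robin over activity queues, with each activity's quantum acting as its weight. Purely illustrative — the activity names, quanta, and per-op costs are made up:

```python
from collections import deque

# Quantum per round is proportional to the activity's share.
QUANTUM = {"user-io": 300, "self-heal": 100, "rebalance": 100}

def drr_schedule(queues, rounds=1):
    """queues: activity -> deque of (op, cost). Returns dispatch order."""
    deficit = {a: 0 for a in queues}
    out = []
    for _ in range(rounds):
        for activity, q in queues.items():
            if not q:
                continue
            deficit[activity] += QUANTUM[activity]
            # Dispatch ops while the activity's accumulated credit covers them.
            while q and q[0][1] <= deficit[activity]:
                op, cost = q.popleft()
                deficit[activity] -= cost
                out.append(op)
            if not q:
                deficit[activity] = 0  # empty queues don't bank credit
    return out
```

With the quanta above, user I/O gets roughly three times the bandwidth of self-heal when both queues are backlogged, without ever starving either.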
Raghavendra Gowdappa wrote:
> It's helpful if you can give some pointers on what parameters (like
> latency, throughput etc.) you want us to consider for QoS.

Full-blown QoS would be nice, but a first line of defense against
resource hogs seems badly needed.

A bare