Re: [Gluster-devel] NSR design document

2015-10-15 Thread Manoj Pillai
- Original Message -
> October 14 2015 3:11 PM, "Manoj Pillai" <mpil...@redhat.com> wrote:
> > E.g. 3x number of bricks could be a problem if workload has
> > operations that don't scale well with brick count.
> > Fortunately we have DHT2 t

Re: [Gluster-devel] NSR design document

2015-10-14 Thread Manoj Pillai
- Original Message -
> > "The reads will also be sent to, and processed by the current
> > leader."
> >
> > So, at any given time, only one brick in the replica group is
> > handling read requests? For a read-only workload-phase,
> > all except one will be idle in any given term?
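
To put rough numbers on the concern (illustrative arithmetic, not from the thread; assumes no read off-loading to non-leaders): with 3-way replication and leader-only reads, a read-only phase exercises 1 of 3 bricks per replica group, so aggregate read throughput in any given term is capped at roughly 1/3 of the combined brick read bandwidth.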

Re: [Gluster-devel] NSR design document

2015-10-14 Thread Manoj Pillai
- Original Message -
> From: "Avra Sengupta"
> To: "Gluster Devel"
> Sent: Wednesday, October 14, 2015 2:10:33 PM
> Subject: [Gluster-devel] NSR design document
>
> Hi,
>
> Please find attached the NSR design document. It captures the

Re: [Gluster-devel] performance issues Manoj found in EC testing

2016-06-25 Thread Manoj Pillai
- Original Message -
> From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> To: "Xavier Hernandez" <xhernan...@datalab.es>
> Cc: "Manoj Pillai" <mpil...@redhat.com>, "Gluster Devel"
> <gluster-devel@gluster.org>

Re: [Gluster-devel] Fragment size in Systematic erasure code

2016-03-14 Thread Manoj Pillai
Hi Xavi,

- Original Message -
> From: "Xavier Hernandez" <xhernan...@datalab.es>
> To: "Ashish Pandey" <aspan...@redhat.com>
> Cc: "Gluster Devel" <gluster-devel@gluster.org>, "Manoj Pillai"
> <mpil...@redhat.com>

Re: [Gluster-devel] performance issues Manoj found in EC testing

2016-06-27 Thread Manoj Pillai
> > On Mon, Jun 27, 2016 at 2:38 PM, Manoj Pillai <mpil...@redhat.com> wrote:
> > Thanks, folks! As a quick update, throughput on a single client test jumped
> > from ~180 MB/s to 700+ MB/s after enabling client-io-threads. Throughput is

Re: [Gluster-devel] performance issues Manoj found in EC testing

2016-06-27 Thread Manoj Pillai
Thanks, folks! As a quick update, throughput on a single client test jumped from ~180 MB/s to 700+ MB/s after enabling client-io-threads. Throughput is now more in line with what is expected for this workload based on back-of-the-envelope calculations. Are there any reservations about recommending client-io-threads
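
For anyone reproducing this, a minimal sketch of enabling the option (the volume name "testvol" is a placeholder):

    # Enable client-side io-threads on an existing volume
    gluster volume set testvol performance.client-io-threads on
    # Confirm the current value
    gluster volume get testvol performance.client-io-threads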

Re: [Gluster-devel] CFP for Gluster Developer Summit

2016-08-19 Thread Manoj Pillai
Here's a proposal ...

Title: State of Gluster Performance
Theme: Stability and Performance

I hope to achieve the following in this talk:
* present a brief overview of current performance for the broad workload
  classes: large-file sequential and random workloads, small-file and

Re: [Gluster-devel] Performance experiments with io-stats translator

2017-06-08 Thread Manoj Pillai
> filename=/perf8/iotest/fio_8
>
> I have 3 vms reading from one mount, and each of these vms is running the
> above job in parallel.
>
> -Krutika
>
> On Tue, Jun 6, 2017 at 9:14 PM, Manoj Pillai <mpil...@redhat.com> wrote:
>> On Tue, Jun 6,
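
For context, the kind of fio invocation being described; a minimal sketch in which everything except the filename is an assumption for illustration (block size, queue depth, engine, and runtime are not from the thread):

    # Hypothetical random-read job against a file on the gluster mount
    fio --name=fio_8 --rw=randread --filename=/perf8/iotest/fio_8 \
        --ioengine=libaio --direct=1 --bs=4k --iodepth=8 \
        --runtime=60 --time_based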

Re: [Gluster-devel] Performance experiments with io-stats translator

2017-06-06 Thread Manoj Pillai
On Tue, Jun 6, 2017 at 5:05 PM, Krutika Dhananjay wrote:
> Hi,
>
> As part of identifying performance bottlenecks within gluster stack for VM
> image store use-case, I loaded io-stats at multiple points on the client
> and brick stack and ran randrd test using fio from
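
For reference, loading io-stats at a point in the stack means adding a debug/io-stats stanza to the volfile above the translator of interest; a minimal sketch, in which the stanza and subvolume names are hypothetical:

    volume iostats-above-client-0
        type debug/io-stats
        option latency-measurement on
        option count-fop-hits on
        subvolumes testvol-client-0
    end-volume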

Re: [Gluster-devel] Performance experiments with io-stats translator

2017-06-09 Thread Manoj Pillai
> - end of brick stack
>
> Will continue reading code and get back when I find sth concrete.
>
> -Krutika
>
> On Thu, Jun 8, 2017 at 12:22 PM, Manoj Pillai <mpil...@redhat.com> wrote:
>> Thanks. So I was suggesting a repeat

Re: [Gluster-devel] 2 way with Arbiter degraded behavior

2018-02-21 Thread Manoj Pillai
On Wed, Feb 21, 2018 at 9:13 PM, Jeff Applewhite wrote:
> Hi All
>
> When you have a setup with 2 way replication + Arbiter backed by two
> large RAID 6 volumes what happens when there is a disk failure and
> rebuild in progress in one of those RAID sets from a client
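
For reference, the kind of configuration being asked about; a minimal sketch with placeholder hostnames and brick paths:

    # 2-way replication + arbiter: file data on the first two bricks,
    # metadata only on the arbiter brick
    gluster volume create testvol replica 3 arbiter 1 \
        server1:/raid6-a/brick server2:/raid6-b/brick server3:/arbiter/brick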

Re: [Gluster-devel] optimizing gluster fuse

2018-04-09 Thread Manoj Pillai
On Tue, Apr 10, 2018 at 10:02 AM, riya khanna wrote:
> On Mon, Apr 9, 2018 at 10:42 PM, Raghavendra Gowdappa wrote:
>> +Manoj.
>>
>> On Mon, Apr 9, 2018 at 10:18 PM, riya khanna wrote:
>>> Hi All,
>>>
>>> I'm

Re: [Gluster-devel] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Manoj Pillai
On Wed, Feb 13, 2019 at 10:51 AM Raghavendra Gowdappa wrote:
> On Tue, Feb 12, 2019 at 5:38 PM Raghavendra Gowdappa wrote:
>> All,
>>
>> We've found perf xlators io-cache and read-ahead not adding any
>> performance improvement. At best read-ahead is redundant due to kernel
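
For anyone wanting to test the effect themselves, a minimal sketch of disabling the two xlators on a volume ("testvol" is a placeholder):

    gluster volume set testvol performance.read-ahead off
    gluster volume set testvol performance.io-cache off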