- Original Message -
> October 14 2015 3:11 PM, "Manoj Pillai" <mpil...@redhat.com> wrote:
> > E.g. 3x number of bricks could be a problem if workload has
> > operations that don't scale well with brick count.
>
> Fortunately we have DHT2 t
- Original Message -
> > "The reads will also be sent to, and processed by the current
> > leader."
> >
> > So, at any given time, only one brick in the replica group is
> > handling read requests? For a read-only workload-phase,
> > all except one will be idle in any given term?
>
>
- Original Message -
> From: "Avra Sengupta"
> To: "Gluster Devel"
> Sent: Wednesday, October 14, 2015 2:10:33 PM
> Subject: [Gluster-devel] NSR design document
>
> Hi,
>
> Please find attached the NSR design document. It captures the
>
- Original Message -
> From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> To: "Xavier Hernandez" <xhernan...@datalab.es>
> Cc: "Manoj Pillai" <mpil...@redhat.com>, "Gluster Devel"
> <gluster-devel@gluster.org>
Hi Xavi,
- Original Message -
> From: "Xavier Hernandez" <xhernan...@datalab.es>
> To: "Ashish Pandey" <aspan...@redhat.com>
> Cc: "Gluster Devel" <gluster-devel@gluster.org>, "Manoj Pillai"
> <mpil...@redhat.com>
in EC testing
>
> On Mon, Jun 27, 2016 at 2:38 PM, Manoj Pillai <mpil...@redhat.com> wrote:
> > Thanks, folks! As a quick update, throughput on a single client test jumped
> > from ~180 MB/s to 700+MB/s after enabling client-io-threads. Throughput is
> >
Thanks, folks! As a quick update, throughput on a single client test jumped
from ~180 MB/s to 700+MB/s after enabling client-io-threads. Throughput is
now more in line with what is expected for this workload based on
back-of-the-envelope calculations.
Are there any reservations about recommending client-io-threads
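For reference, client-io-threads is toggled with an ordinary volume-set
option (performance.client-io-threads). A minimal sketch in Python wrapping
the gluster CLI, assuming a hypothetical volume named "testvol" (the thread
does not name the volume under test):

    import subprocess

    def set_volume_option(volume, option, value):
        # Runs "gluster volume set <volume> <option> <value>"; assumes the
        # gluster CLI is installed and the volume already exists.
        subprocess.run(["gluster", "volume", "set", volume, option, value],
                       check=True)

    # Hypothetical volume name, for illustration only.
    set_volume_option("testvol", "performance.client-io-threads", "on")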
Here's a proposal ...
Title: State of Gluster Performance
Theme: Stability and Performance
I hope to achieve the following in this talk:
* present a brief overview of current performance for the broad
workload classes: large-file sequential and random workloads,
small-file and
> filename=/perf8/iotest/fio_8
>
> I have 3 vms reading from one mount, and each of these vms is running the
> above job in parallel.
>
> -Krutika
>
> On Tue, Jun 6, 2017 at 9:14 PM, Manoj Pillai <mpil...@redhat.com> wrote:
>
>>
>>
>> On Tue, Jun 6,
On Tue, Jun 6, 2017 at 5:05 PM, Krutika Dhananjay
wrote:
> Hi,
>
> As part of identifying performance bottlenecks within gluster stack for VM
> image store use-case, I loaded io-stats at multiple points on the client
> and brick stack and ran randrd test using fio from
- end of brick stack
>
> Will continue reading code and get back when I find sth concrete.
>
> -Krutika
>
>
> On Thu, Jun 8, 2017 at 12:22 PM, Manoj Pillai <mpil...@redhat.com> wrote:
>
>> Thanks. So I was suggesting a repeat
On Wed, Feb 21, 2018 at 9:13 PM, Jeff Applewhite
wrote:
> Hi All
>
> When you have a setup with 2 way replication + Arbiter backed by two
> large RAID 6 volumes what happens when there is a disk failure and
> rebuild in progress in one of those RAID sets from a client
>
On Tue, Apr 10, 2018 at 10:02 AM, riya khanna
wrote:
> On Mon, Apr 9, 2018 at 10:42 PM, Raghavendra Gowdappa wrote:
>
>> +Manoj.
>>
>> On Mon, Apr 9, 2018 at 10:18 PM, riya khanna
>> wrote:
>>
>>> Hi All,
>>>
>>> I'm
On Wed, Feb 13, 2019 at 10:51 AM Raghavendra Gowdappa
wrote:
>
>
> On Tue, Feb 12, 2019 at 5:38 PM Raghavendra Gowdappa
> wrote:
>
>> All,
>>
>> We've found perf xlators io-cache and read-ahead not adding any
>> performance improvement. At best read-ahead is redundant due to kernel
>>
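A minimal sketch of turning those two xlators off for a comparison run,
again via the gluster CLI from Python and assuming a hypothetical volume
named "testvol" (not named in the thread):

    import subprocess

    def disable_option(volume, option):
        # Runs "gluster volume set <volume> <option> off"; assumes the
        # gluster CLI is installed and the volume already exists.
        subprocess.run(["gluster", "volume", "set", volume, option, "off"],
                       check=True)

    # performance.io-cache and performance.read-ahead are the client-side
    # caching/read-ahead xlators discussed above; "testvol" is hypothetical.
    for opt in ("performance.io-cache", "performance.read-ahead"):
        disable_option("testvol", opt)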