On Sat, 30 Mar 2019 at 08:06, Vijay Bellur wrote:
>
>
> On Fri, Mar 29, 2019 at 6:42 AM Atin Mukherjee wrote:
>
>> All,
>>
>> As many of you already know, the design logic with which GlusterD
>> (hereafter referred to as GD1) was implemented has some fundamental
>> scalability bottlenecks
On Fri, Mar 29, 2019 at 6:42 AM Atin Mukherjee wrote:
> All,
>
> As many of you already know, the design logic with which GlusterD
> (hereafter referred to as GD1) was implemented has some fundamental
> scalability bottlenecks at the design level, especially around its way of
> handshaking conf
Hello,
Yes, I did find some hits on this in the following logs. We started seeing
failures after upgrading to 5.3 from 4.6. If you want me to check for
something else, let me know. Thank you all on the Gluster team for finding
and fixing that problem, whatever it was!
[root@lonbaknode3 g
Hello Nithya,
I removed several options that I admit I didn't quite understand and had
added from Google searches. It was unwise of me to add them in the first
place without understanding them.
One of these options apparently was causing directory listings to take about
7 seconds vs. when I cut d
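A minimal sketch, assuming the standard gluster CLI, of how such options can
be reviewed and backed out; "myvol" and the option placeholder are
illustrative, not taken from this thread:

    # list every option and its current value for the volume
    gluster volume get myvol all
    # revert one option to its default value
    gluster volume reset myvol <option-name>
    # or revert all reconfigured options at once
    gluster volume reset myvol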
On Fri, Mar 29, 2019, 10:03 PM Jim Kinney wrote:
> Currently running 3.12 on CentOS 7.6. Doing cleanups on split-brain and
> out-of-sync files that need healing.
>
> We need to migrate the three replica servers to Gluster v5 or 6. We will
> also need to upgrade about 80 clients. Given that a comp
Currently running 3.12 on CentOS 7.6. Doing cleanups on split-brain and
out-of-sync files that need healing.
We need to migrate the three replica servers to Gluster v5 or 6. We will
also need to upgrade about 80 clients. Given that a complete
removal of gluster will not touch the 200+TB of data on
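A minimal sketch, assuming the 3.12-era gluster CLI, of the split-brain
inspection and resolution commands; the volume name, brick, and file path are
placeholders, not taken from this thread:

    # list entries pending heal, and those in split-brain
    gluster volume heal myvol info
    gluster volume heal myvol info split-brain
    # resolve one file by keeping the copy with the latest mtime
    gluster volume heal myvol split-brain latest-mtime /path/to/file
    # or choose a specific brick's copy as the source
    gluster volume heal myvol split-brain source-brick server1:/bricks/b1 /path/to/file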
All,
As many of you already know, the design logic with which GlusterD (hereafter
referred to as GD1) was implemented has some fundamental scalability
bottlenecks at the design level, especially around its way of handshaking
configuration metadata and replicating it across all the peers. Whil
Hi,
I have added some more info that was missed earlier.
The disconnect issue being minor, we are working on it at a lower priority.
But yes, it will be fixed soon.
The bug to track this is: https://bugzilla.redhat.com/show_bug.cgi?id=1694010
The workaround to get over this if it happens is to,
On Fri, Mar 29, 2019 at 12:47 PM Krutika Dhananjay wrote:
> Questions/comments inline ...
>
> On Thu, Mar 28, 2019 at 10:18 PM wrote:
>
>> Dear All,
>>
>> I wanted to share my experience upgrading from 4.2.8 to 4.3.1. While
>> previous upgrades from 4.1 to 4.2 etc. went rather smoothly, this one w
Questions/comments inline ...
On Thu, Mar 28, 2019 at 10:18 PM wrote:
> Dear All,
>
> I wanted to share my experience upgrading from 4.2.8 to 4.3.1. While
> previous upgrades from 4.1 to 4.2 etc. went rather smoothly, this one was a
> different experience. After first trying a test upgrade on a 3