Re: [Gluster-users] Increasing replica count from 2 to 3

2016-12-29 Thread Jackie Tung
shd ..."), you can restart that with "gluster volume start > ... force" (even if the volume is already started). > > > On 12/29/2016 02:27 PM, Jackie Tung wrote: > > Ravi, > > Got it thanks. I’ve kicked this off, it seems be doing OK. > > I am a little concerned

Re: [Gluster-users] Increasing replica count from 2 to 3

2016-12-29 Thread Jackie Tung
=1400927 <https://bugzilla.redhat.com/show_bug.cgi?id=1400927> Even without doing the upgrade, I may need to restart glusterfs-server anyway to reset memory usage. Thanks, Jackie > On Dec 28, 2016, at 9:40 PM, Ravishankar N <ravishan...@redhat.com> wrote: > > On 12/29/2016
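
A hedged sketch of the service restart being considered, assuming the Ubuntu Xenial packaging's unit name (upstream packages call the unit glusterd instead):

    sudo systemctl restart glusterfs-server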

Re: [Gluster-users] Increasing replica count from 2 to 3

2016-12-28 Thread Jackie Tung
Version is 3.8.7 on Ubuntu Xenial. On Dec 28, 2016 5:56 PM, "Jackie Tung" <jac...@drive.ai> wrote: > If someone has experience to share in this area, I'd be grateful. I have > an existing distributed replicated volume, 2x16. > > We have a third server ready to go.

[Gluster-users] Increasing replica count from 2 to 3

2016-12-28 Thread Jackie Tung
If someone has experience to share in this area, I'd be grateful. I have an existing distributed replicated volume, 2x16. We have a third server ready to go. Red Hat docs say to just run "add-brick replica 3", then run rebalance. The rebalance step feels a bit off to me. Isn't some kind of heal
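
A hedged sketch of the documented procedure, with placeholder names (going from replica 2 to replica 3 needs one new brick for each of the 16 existing replica pairs):

    # Raise the replica count while adding the third server's bricks:
    gluster volume add-brick myvol replica 3 \
        node_C:/brick1 node_C:/brick2 [...] node_C:/brick16
    # As the replies in this thread suggest, it is self-heal (not rebalance)
    # that copies existing data onto the new bricks; a full heal kicks it off:
    gluster volume heal myvol full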

Re: [Gluster-users] bitrot log messages

2016-10-27 Thread Jackie Tung
===== > > > Thanks and Regards, > Kotresh H R > > - Original Message - >> From: "Jackie Tung" <jac...@drive.ai> >> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>

Re: [Gluster-users] bitrot log messages

2016-10-26 Thread Jackie Tung
the command... I had missed the 'scrub' keyword. > > "gluster vol bitrot scrub status" > > Thanks and Regards, > Kotresh H R > > - Original Message - >> From: "Kotresh Hiremath Ravishankar" <khire...@redhat.com> >> To: "Jackie Tung
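
The corrected invocation from this thread, spelled out with a placeholder volume name:

    gluster volume bitrot myvol scrub status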

[Gluster-users] bitrot log messages

2016-10-25 Thread Jackie Tung
Hi, Red Hat documentation says that things will get logged to bitd.log and scrub.log. These files are pretty big - even when we only take the "E" log level lines. https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Detecting_Data_Corruption.html
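
One way to pull only the error-level lines the poster mentions, assuming the default log location /var/log/glusterfs (paths may differ per install):

    # Gluster log lines carry a single-letter severity field; " E " marks errors:
    grep ' E ' /var/log/glusterfs/bitd.log /var/log/glusterfs/scrub.log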

[Gluster-users] gluster brick daemon segfaulted in pairs

2016-10-24 Thread Jackie Tung
Hi, We are running a distributed replicated volume: 16 pairs of bricks (rep count 2), 2 nodes. On Friday, 2 pairs of brick daemons segfaulted within minutes of each other, leading to 2 subvolumes down (no replicas left). We tried to bring them up again by doing a "volume start force", which

Re: [Gluster-users] trashcan file size limit

2016-10-20 Thread Jackie Tung
Thanks for the quick response here. How does this make it to a release? Should I hope for it in 3.8.6? > On Oct 20, 2016, at 11:48 AM, Jiffin Tony Thottan <jthot...@redhat.com> wrote: > > > > On 19/10/16 20:54, Jackie Tung wrote: >> Thanks Jiffin, filed

Re: [Gluster-users] trashcan file size limit

2016-10-19 Thread Jackie Tung
eter would be preferable in my humble opinion. > On Oct 19, 2016, at 2:02 AM, Jiffin Tony Thottan <jthot...@redhat.com> wrote: > > Hi Jackie, > > On 18/10/16 23:48, Jackie Tung wrote: >> Hi all, >> >> Documentation says: >> https://gluster.readthed

[Gluster-users] trashcan file size limit

2016-10-18 Thread Jackie Tung
Hi all, Documentation says: https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Trash/ gluster volume set <volname> features.trash-max-filesize <size> This command can be used to filter files entering the trash directory based
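
A hedged example of the option being discussed (placeholder volume name and size limit):

    # Files larger than the limit are deleted outright instead of being
    # moved to the trash directory:
    gluster volume set myvol features.trash on
    gluster volume set myvol features.trash-max-filesize 500MB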

Re: [Gluster-users] Rebalancing after adding larger bricks

2016-10-11 Thread Jackie Tung
xpensive/> which might make it > clear why the end result is not what you expected. > > By setting cluster.min-free-disk (defaults to 10%) you can, at least, ensure > that your new bricks are utilized as needed to prevent overfilling your > smaller bricks. > On 10/10/2016 10:13
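
A hedged example of the setting suggested here (placeholder volume name):

    # Keep at least 15% free per brick; once a brick crosses the threshold,
    # DHT places new data files on bricks with more free space:
    gluster volume set myvol cluster.min-free-disk 15%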

[Gluster-users] Rebalancing after adding larger bricks

2016-10-10 Thread Jackie Tung
Hi, We have a 2 node, distributed replicated setup (11 bricks on each node). Each of these bricks is 6 TB in size. node_A:/brick1 replicates node_B:/brick1 node_A:/brick2 replicates node_B:/brick2 node_A:/brick3 replicates node_B:/brick3 … … node_A:/brick11 replicates node_B:/brick11 We
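
For reference, a sketch of the rebalance commands under discussion ("myvol" is a placeholder volume name):

    gluster volume rebalance myvol start
    # Per-node counters of files scanned and bytes moved:
    gluster volume rebalance myvol status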

[Gluster-users] gluster profile mode - showing unexpected WRITE fops

2016-08-08 Thread Jackie Tung
Hi, I’m doing some benchmarking against our trial GlusterFS setup (distributed replicated, 20 bricks configured as 10 pairs). I’m running 3.6.9 currently. Our benchmarking load involves a large number of concurrent readers that continuously pick random files/offsets to read. No writes are ever
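
The profile-mode commands being used, sketched with a placeholder volume name:

    gluster volume profile myvol start
    # Cumulative and interval per-brick fop statistics (including WRITE counts):
    gluster volume profile myvol info
    gluster volume profile myvol stop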