Hello
What is the right way to replace a dead brick with glusterfs 3.6?
The old brick is listed in gluster volume status as being unavailable.
If I try to remove it I get an error, because the volume is replicated
and I should remove bricks in pairs.
# gluster volume remove-brick tmp cubi:/wd1
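(For what it's worth, remove-brick may not be the tool for this case: in 3.6 the
usual path for a replicated volume is replace-brick with commit force, followed
by a full heal. A minimal sketch, assuming the volume is named tmp as above and
that a spare brick newhost:/wd1, a hypothetical name, stands in for the dead one:

# gluster volume replace-brick tmp cubi:/wd1 newhost:/wd1 commit force
# gluster volume heal tmp full

Self-heal should then copy the data from the surviving replica onto the new brick.)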
On Fri, Feb 06, 2015 at 05:06:38PM, ML mail wrote:
Hello,
I read in the Gluster Getting Started leaflet
(https://lists.gnu.org/archive/html/gluster-devel/2014-01/pdf3IS0tQgBE0.pdf)
that the maximum recommended brick size is 100 TB.
Once my storage server nodes filled up with
Hello,
I'm new to glusterfs and I don't understand the
'performance.write-behind-window-size' option. I'm using glusterfs 3.5.2
and set up a volume over two nodes connected over 1 GbE.
Currently the write performance is about 58 MB/s. This means glusterfs
writes the data simultaneously on both
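For context, performance.write-behind-window-size sets how much write data the
write-behind translator may buffer per file before flushing it to the bricks
(the default is 1MB). Tuning it is a single volume-set call; a minimal sketch,
assuming a volume named gv0 (a hypothetical name) and a 4MB window:

# gluster volume set gv0 performance.write-behind-window-size 4MB

Note that on a replica-2 volume the client sends each write to both bricks over
the same 1 GbE link, so roughly half of the ~117 MB/s wire speed, i.e. about
58 MB/s, is the expected ceiling regardless of this option.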
Is there a way of setting the initial permissions on a volume's root dir on
creation? I'm seeing 755 on mine, and that causes write permission
problems for anyone other than the owner when done through Samba. There
seems to be an option to set the owner uid and gid but nothing for the
create
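The uid/gid knobs are storage.owner-uid and storage.owner-gid; as far as I know
there is no equivalent option for the root directory's mode, so a common
workaround is to mount the volume once and chmod the root, which persists
because the mode is stored on the bricks. A sketch, assuming a volume gv0 and
uid/gid 1000 (both hypothetical):

# gluster volume set gv0 storage.owner-uid 1000
# gluster volume set gv0 storage.owner-gid 1000
# mount -t glusterfs localhost:/gv0 /mnt
# chmod 0775 /mnt
# umount /mnt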
Thank you Niels for your input, that definitely makes me more curious... Now
let me tell you a bit more about my intended setup. The first major
difference is that I will not be using XFS but ZFS. The second major
difference is that I will not be using any hardware RAID card but a single HBA