[Gluster-users] Changing the initial permission on a volume's root dir

2015-02-07 Thread Raghuram BK
Is there a way of setting the initial permissions on a volume's root dir at
creation time? I'm seeing 755 on mine, which causes write permission
problems for anyone other than the owner when access goes through Samba. There
seems to be an option to set the owner uid and gid, but nothing for the
create mask.
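
For reference, the owner options hinted at above are presumably
storage.owner-uid and storage.owner-gid, while the create mask itself would
normally be handled on the Samba side. A minimal sketch, assuming a volume
called "myvol" mounted at /mnt/myvol and uid/gid 1000 (all placeholders, not a
tested recipe):

# gluster volume set myvol storage.owner-uid 1000
# gluster volume set myvol storage.owner-gid 1000
# mount -t glusterfs localhost:/myvol /mnt/myvol
# chmod 0775 /mnt/myvol

and, hypothetically, in the smb.conf share definition:

[gluster-share]
    path = /mnt/myvol
    create mask = 0664
    directory mask = 0775

The chmod on the mounted root is stored on the bricks themselves, so it should
persist across remounts.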

Re: [Gluster-users] replace brick

2015-02-07 Thread Vijay Bellur

On 02/07/2015 10:01 AM, Emmanuel Dreyfus wrote:

Hello

What is the right way to replace a dead brick with glusterfs 3.6?

The old brick is listed in gluster volume status as being unavailable.

If I try to remove it, I get an error, because the volume is replicated
and I should remove bricks in pairs.
# gluster volume remove-brick tmp  cubi:/wd1 force
Removing brick(s) can result in data loss. Do you want to Continue?
(y/n) y
volume remove-brick commit force: failed: Remove brick incorrect brick
count of 1 for replica 2

If I try to replace it with itself, I get an error because the brick
already exists.
# gluster volume replace-brick tmp cubi:/wd1 cubi:/wd1 commit force
volume replace-brick: failed: Brick: cubi:/wd1 not available. Brick may
be containing or be contained by an existing brick



Performing a replace-brick commit force with an alternate brick and 
letting self-heal move the data onto the new brick is the way to go as of now.
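
As a sketch of that approach, using the volume from the example above (the
replacement brick path cubi:/wd2 is just a placeholder):

# gluster volume replace-brick tmp cubi:/wd1 cubi:/wd2 commit force
# gluster volume heal tmp full

Triggering a full self-heal afterwards is optional, but it starts populating
the new brick right away instead of waiting for files to be healed as they are
accessed.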


-Vijay



Re: [Gluster-users] Max recommended brick size of 100 TB

2015-02-07 Thread ML mail
Thank you Niels for your input, that definitely makes me more curious... Now 
let me tell you a bit more about my intended setup. First of all, the major 
difference is that I will not be using XFS but ZFS. The second major 
difference is that I will not be using any hardware RAID card but a single HBA 
(LSI 3008 chip). My intended ZFS setup would consist of one ZFS pool per node. 
This pool will have 3 virtual devices of 12 disks each (6 TB per disk), each 
using RAIDZ-2 (equivalent to RAID 6) for integrity. This gives me a total of 
36 disks and roughly 180 TB of usable capacity.



I will then create one big 180 TB ZFS data set (virtual device, file system or 
whatever you want to call it) for my GlusterFS brick. Now, as mentioned, I could 
also have two bricks by creating two ZFS data sets of around 90 TB each. But as 
everything is behind the same HBA and the same ZFS pool, there will not be any 
gain in performance or availability from the ZFS side.
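
For what it is worth, a rough sketch of that layout in ZFS terms; the pool,
dataset and device names are placeholders (in practice stable /dev/disk/by-id
names are preferable, and the brace expansion is a bash-ism):

# zpool create tank \
    raidz2 disk{0..11} \
    raidz2 disk{12..23} \
    raidz2 disk{24..35}
# zfs create -o xattr=sa tank/brick1

Setting xattr=sa on ZFS on Linux is often recommended for Gluster's heavy use
of extended attributes, but verify that for your platform. Splitting the pool
into two bricks would then simply mean creating a second dataset, e.g.
tank/brick2, on the same pool.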

On the other hand, you mention in your mail that having two bricks per node 
means having two glusterfsd processes running and allows me to handle more 
clients. Can you tell me more about that? Will I also see any general 
performance gain, for example in terms of MB/s throughput? Also, are there 
maybe any disadvantages of running two bricks on the same node, especially in 
my case?
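
For context, each brick is served by its own glusterfsd process, so with two
bricks on a node you would see two brick processes; something like the
following shows one PID per brick (the volume name "myvol" is a placeholder):

# gluster volume status myvol
# ps -C glusterfsd -o pid,args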




On Saturday, February 7, 2015 10:24 AM, Niels de Vos wrote:
On Fri, Feb 06, 2015 at 05:06:38PM +, ML mail wrote:

> Hello,
> 
> I read in the Gluster Getting Started leaflet
> (https://lists.gnu.org/archive/html/gluster-devel/2014-01/pdf3IS0tQgBE0.pdf)
> that the max recommended brick size should be 100 TB.
> 
> Once my storage server nodes filled up with disks they will have in
> total 192 TB of storage space, does this mean I should create two
> bricks per storage server node?
> 
> Note here that these two bricks would still be on the same controller
> so I don't really see the point or advantage of having two 100 TB
> bricks instead of one single brick of 200 TB per node. But maybe
> someone can explain the rationale here?

This is based on the recommendation that RHEL has for the maximum size of
XFS filesystems. They might have adjusted the size with more recent
releases, though.

However, having multiple bricks per server can help with other things
too. Multiple processes (one per brick) could handle more clients at the
same time. Depending on how you configure your RAID for the bricks, you
could possibly reduce the performance loss while a RAID-set gets rebuilt
after a disk loss.

Best practice seems to be to use 12 disks per RAID-set; mostly RAID10 or
RAID6 is advised.

HTH,
Niels 


[Gluster-users] increase write performance in gfs 3.5.2

2015-02-07 Thread Peter Prusi

Hello,

I'm new to glusterfs and I don't understand the 
'performance.write-behind-window-size' option. I'm using glusterfs 3.5.2 
and have set up a volume over two nodes connected over 1 GbE.
Currently the write performance is about 58 MB/s. This means glusterfs 
writes the data simultaneously to both nodes. I'm trying to increase the 
write speed, and I thought this option would cache the data on one node 
and then distribute it to the other, so I could double the speed. I played 
around with this option, but it has no effect on the write speed.
What am I doing wrong, or is there another tweak to increase the write 
speed?
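
For reference, the option is set per volume and takes a size value (the
default is 1MB); the volume name "myvol" below is a placeholder:

# gluster volume set myvol performance.write-behind-window-size 4MB
# gluster volume info myvol

The second command shows the value under "Options Reconfigured", which is a
quick way to confirm the setting was actually applied.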


Thank you


Re: [Gluster-users] Max recommended brick size of 100 TB

2015-02-07 Thread Niels de Vos
On Fri, Feb 06, 2015 at 05:06:38PM +, ML mail wrote:
> Hello,
> 
> I read in the Gluster Getting Started leaflet
> (https://lists.gnu.org/archive/html/gluster-devel/2014-01/pdf3IS0tQgBE0.pdf)
> that the max recommended brick size should be 100 TB.
> 
> Once my storage server nodes filled up with disks they will have in
> total 192 TB of storage space, does this mean I should create two
> bricks per storage server node?
> 
> Note here that these two bricks would still be on the same controller
> so I don't really see the point or advantage of having two 100 TB
> bricks instead of one single brick of 200 TB per node. But maybe
> someone can explain the rationale here?

This is based on the recommendation that RHEL has for the maximum size of
XFS filesystems. They might have adjusted the size with more recent
releases, though.

However, having multiple bricks per server can help with other things
too. Multiple processes (one per brick) could handle more clients at the
same time. Depending on how you configure your RAID for the bricks, you
could possibly reduce the performance loss while a RAID-set gets rebuilt
after a disk loss.

Best practice seems to be to use 12 disks per RAID-set; mostly RAID10 or
RAID6 is advised.
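
As a sketch of what two bricks per server could look like (volume, server and
path names are placeholders), a 2x2 distribute-replicate volume where each
server contributes two bricks, each served by its own glusterfsd process:

# gluster volume create bigvol replica 2 \
    server1:/bricks/b1/data server2:/bricks/b1/data \
    server1:/bricks/b2/data server2:/bricks/b2/data

With replica 2 the bricks are paired in the order given, so the two b1 bricks
form one replica pair and the two b2 bricks the other, and files are
distributed across the two pairs.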

HTH,
Niels



[Gluster-users] replace brick

2015-02-07 Thread Emmanuel Dreyfus
Hello

What is the right way to replace a dead brick with glusterfs 3.6?

The old brick is listed in gluster volume status as being unavailable.

If I try to remove it, I get an error, because the volume is replicated
and I should remove bricks in pairs.
# gluster volume remove-brick tmp  cubi:/wd1 force 
Removing brick(s) can result in data loss. Do you want to Continue?
(y/n) y
volume remove-brick commit force: failed: Remove brick incorrect brick
count of 1 for replica 2

If I try to replace it with itself, I get an error because the brick
already exists.
# gluster volume replace-brick tmp cubi:/wd1 cubi:/wd1 commit force 
volume replace-brick: failed: Brick: cubi:/wd1 not available. Brick may
be containing or be contained by an existing brick

I guess there is a trick but I do not see what it is.

-- 
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org