Re: [Gluster-users] Gluster 4.1.2 performance tuning as Vmware datastore

2018-08-08 Thread Jonathan Archer
 Edy,
I have been working on this on and off for some time now, but have yet to find a 
working configuration.
Upon failover I always end up with an inaccessible datastore within VMware — have 
you seen this?
But to answer your question: have you looked at sharding? It stores large files as 
smaller chunks to reduce the sync times between nodes.
Jon
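The sharding suggestion above is enabled per volume via the `features.shard` option. A minimal sketch for the `gv0` volume from this thread (the 64MB block size is an illustrative choice, not a value recommended anywhere in the thread):

```shell
# Enable sharding on gv0 so large VM images are stored as
# fixed-size chunks instead of single huge files. This reduces
# the amount of data self-heal must copy after a node outage.
gluster volume set gv0 features.shard on

# Optionally set the shard size; larger shards mean fewer backend
# files, smaller shards mean finer-grained (faster) heals.
gluster volume set gv0 features.shard-block-size 64MB
```

Note that sharding only applies to files created after it is enabled, and disabling it again on a volume that already holds sharded files is generally unsafe, so it is best set before the datastore is populated.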
On Wednesday, 8 August 2018, 15:11:12 BST, Pui Edylie  
wrote:  
 
 Dear All,

Recently I set up GlusterFS 4.1.2 with 3 nodes and use 
nfs-ganesha with storhaug to export the NFS service to VMware 6.7 as 
a datastore.

The following is my volume configuration:

Volume Name: gv0
Type: Replicate
Volume ID: b1b57ff2-b81f-4625-846a-87064023cf22
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.0.3:/brick1685/gv0
Brick2: 192.168.0.2:/brick1684/gv0
Brick3: 192.168.0.1:/brick1683/gv0
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
network.ping-timeout: 1
cluster.enable-shared-storage: enable


Do you have any suggestions for tuning the volume to optimise it for NFSv3 and 
for use as a VMware ESXi 6.7 datastore?
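For VM-image workloads, one commonly cited starting point is Gluster's predefined "virt" option group, which applies a set of volume options tuned for virtualisation in one step. A sketch against the `gv0` volume above (the exact options the group sets vary between Gluster versions, so inspect the result afterwards):

```shell
# Apply the predefined "virt" option group to gv0. It enables
# options commonly recommended for VM image storage, such as
# eager locking, remote-dio, and server-side quorum settings.
gluster volume set gv0 group virt

# Review which options the group actually changed on this version.
gluster volume info gv0
```

Separately, `network.ping-timeout: 1` in the configuration above is far below the usual default (42 seconds); a very short timeout makes clients declare bricks dead on any transient network hiccup, which may itself contribute to failover problems.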

Thank you!

Regards,
Edy

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] is Gluster 4.1 the current "stable" LTM build?

2018-08-08 Thread Amye Scavarda
Release Schedule will always have your back on this,
http://gluster.org/release-schedule -- I'll go update the /install page.
Thanks for the reminder!



On Tue, Aug 7, 2018 at 12:06 AM wkmail  wrote:

> Website says 3.10 is the stable version
>
> https://www.gluster.org/install/
>
> The Ubuntu PPA page says:
>
> "NOTE: 3.10 (LTM), 3.11 (STM), 3.13 (STM), and 4.0 (STM) have reached
> EOL; the 3.11 repo will be deleted on 31 August, and the 3.13 repo on 31
> October."
>
> We have some 3.12 installs, but I'm waiting for that memory leak to be
> fixed, so I don't want to go there as we use fuse.
>
> So.
>
> -wk
>


-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead