[Gluster-users] glusterfs as vmware datastore in production

2018-05-25 Thread joao
Hi,

Does anyone have GlusterFS working as a VMware datastore in production, in a 
real-world case? How do you serve the GlusterFS cluster: as iSCSI or NFS?

Thanks in advance
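
For context, the two routes that come up most often for ESXi are NFS (served 
by NFS-Ganesha, since ESXi speaks NFSv3/4.1 but has no native Gluster client) 
and iSCSI (served by gluster-block). A minimal sketch of an NFS-Ganesha 
export for a Gluster volume, assuming a volume named vmstore (all names and 
paths here are illustrative):

    EXPORT {
        Export_Id = 1;
        Path = "/vmstore";
        Pseudo = "/vmstore";
        Access_Type = RW;
        Squash = No_root_squash;
        Protocols = 3, 4;
        FSAL {
            Name = GLUSTER;
            Hostname = "gluster-node1";
            Volume = "vmstore";
        }
    }

The ESXi host would then mount gluster-node1:/vmstore as an NFS datastore.
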
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] New 3.12.7 possible split-brain on replica 3

2018-05-25 Thread mabi
Thanks Ravi. Let me know when you have time to have a look. It happens around 
once or twice per week, but today it was 24 files in one go which are unsynced 
and for which I need to manually reset the xattrs on the arbiter node.
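
A minimal sketch of that kind of reset, assuming the usual trusted.afr 
pending xattrs on the arbiter brick (volume name, client indices and file 
path are illustrative):

    # on the arbiter node, clear the pending changelog xattrs on the
    # brick path of an affected file
    setfattr -n trusted.afr.myvol-private-client-0 \
        -v 0x000000000000000000000000 /srv/glusterfs/myvol-private/brick/path/to/file
    setfattr -n trusted.afr.myvol-private-client-1 \
        -v 0x000000000000000000000000 /srv/glusterfs/myvol-private/brick/path/to/file
    # then trigger a heal so the file is picked up again
    gluster volume heal myvol-private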

By the way, on this volume I use quotas, which I set on specific directories. 
I don't know if this is relevant or not, but I thought I would mention it.
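
For reference, the quota setup is along these lines, using the stock gluster 
quota CLI (volume name, directory and limit are illustrative):

    gluster volume quota myvol-private enable
    gluster volume quota myvol-private limit-usage /cloud/data 100GB
    gluster volume quota myvol-private list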

‐‐‐ Original Message ‐‐‐

On May 23, 2018 9:25 AM, Ravishankar N  wrote:

> 
> On 05/23/2018 12:47 PM, mabi wrote:
> 
> > Hello,
> > 
> > I just wanted to ask if you had time to look into this bug I am 
> > encountering and if there is anything else I can do?
> > 
> > For now, in order to get rid of these 3 unsynced files, shall I use the 
> > same method that was suggested to me in this thread?
> 
> Sorry Mabi, I haven't had a chance to dig deeper into this. The
> workaround of resetting xattrs should be fine though.
> 
> Thanks,
> Ravi
> 
> > Thanks,
> > 
> > Mabi
> > 
> > ‐‐‐ Original Message ‐‐‐
> > 
> > On May 17, 2018 11:07 PM, mabi m...@protonmail.ch wrote:
> > 
> > > Hi Ravi,
> > > 
> > > Please find below the answers to your questions:
> > > 
> > > 1.  I have never touched the cluster.quorum-type option. Currently it is 
> > > set as follows for this volume:
> > > 
> > > Option                Value
> > > ------                -----
> > > cluster.quorum-type   none
> > > 
> > > 2.  The .shareKey files are not supposed to be empty. They should be 512 
> > > bytes in size and contain binary data (a PGP secret sub-key). I cannot 
> > > say why in this specific case the file is only 0 bytes, nor whether it 
> > > is the fault of the software (Nextcloud) or of GlusterFS. I can only say 
> > > that I have another file server, a simple NFS server with another 
> > > Nextcloud installation, and there I have never seen any 0-byte .shareKey 
> > > files being created.
> > > 
> > > 3.  It seems to be quite random, and I am not the person who uses the 
> > > Nextcloud software, so I can't say what it was doing at that specific 
> > > time; I guess uploading files or moving files around. Basically I use 
> > > GlusterFS to store the files/data of the Nextcloud web application, 
> > > where I have it mounted using a FUSE mount (mount -t glusterfs); see 
> > > the sketch below.
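> > > 
> > > A minimal sketch of such a mount, where the hostname, volume name and 
> > > mountpoint are illustrative:
> > > 
> > > # manual mount
> > > mount -t glusterfs node1:/myvol-private /var/www/nextcloud/data
> > > # equivalent /etc/fstab entry
> > > node1:/myvol-private /var/www/nextcloud/data glusterfs defaults,_netdev 0 0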
> > > 
> > > 
> > > Regarding the logs, I have attached the mount log file from the client, 
> > > and below are the relevant log entries from the brick log files of all 
> > > 3 nodes. Let me know if you need any other log files. Also, if you know 
> > > of any "log file sanitizer tool" which can replace sensitive file names 
> > > with random ones in log files, I would like to use it, as right now I 
> > > have to do that manually.
> > > 
> > > NODE 1 brick log:
> > > 
> > > [2018-05-15 06:54:20.176679] E [MSGID: 113015] 
> > > [posix.c:1211:posix_opendir] 0-myvol-private-posix: opendir failed on 
> > > /data/myvol-private/brick/cloud/data/admin/files_encryption/keys/files/dir/dir/anotherdir/dir/OC_DEFAULT_MODULE
> > >  [No such file or directory]
> > > 
> > > NODE 2 brick log:
> > > 
> > > [2018-05-15 06:54:20.176415] E [MSGID: 113015] 
> > > [posix.c:1211:posix_opendir] 0-myvol-private-posix: opendir failed on 
> > > /data/myvol-private/brick/cloud/data/admin/files_encryption/keys/files/dir/dir/anotherdir/dir/OC_DEFAULT_MODULE
> > >  [No such file or directory]
> > > 
> > > NODE 3 (arbiter) brick log:
> > > 
> > > [2018-05-15 06:54:19.898981] W [MSGID: 113103] [posix.c:285:posix_lookup] 
> > > 0-myvol-private-posix: Found stale gfid handle 
> > > /srv/glusterfs/myvol-private/brick/.glusterfs/f0/65/f065a5e7-ac06-445f-add0-83acf8ce4155,
> > >  removing it. [Stale file handle]
> > > 
> > > [2018-05-15 06:54:20.056196] W [MSGID: 113103] [posix.c:285:posix_lookup] 
> > > 0-myvol-private-posix: Found stale gfid handle 
> > > /srv/glusterfs/myvol-private/brick/.glusterfs/8f/a1/8fa15dbd-cd5c-4900-b889-0fe7fce46a13,
> > >  removing it. [Stale file handle]
> > > 
> > > [2018-05-15 06:54:20.172823] I [MSGID: 115056] 
> > > [server-rpc-fops.c:485:server_rmdir_cbk] 0-myvol-private-server: 
> > > 14740125: RMDIR 
> > > /cloud/data/admin/files_encryption/keys/files/dir/dir/anotherdir/dir/OC_DEFAULT_MODULE
> > >  (f065a5e7-ac06-445f-add0-83acf8ce4155/OC_DEFAULT_MODULE), client: 
> > > nextcloud.domain.com-7972-2018/05/10-20:31:46:163206-myvol-private-client-2-0-0,
> > >  error-xlator: myvol-private-posix [Directory not empty]
> > > 
> > > [2018-05-15 06:54:20.190911] I [MSGID: 115056] 
> > > [server-rpc-fops.c:485:server_rmdir_cbk] 0-myvol-private-server: 
> > > 14740141: RMDIR 
> > > /cloud/data/admin/files_encryption/keys/files/dir/dir/anotherdir/dir 
> > > (72a1613e-2ac0-48bd-8ace-f2f723f3796c/2016.03.15 
> > > AVB_Photovoltaik-Versicherung 2013.pdf), client: 
> > > nextcloud.domain.com-7972-2018/05/10-20:31:46:163206-myvol-private-client-2-0-0,
> > >  error-xlator: myvol-private-posix [Directory not empty]
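> > > 
> > > To check which files are still pending heal after entries like these, 
> > > the stock heal CLI should do; a minimal sketch (volume name is 
> > > illustrative):
> > > 
> > > gluster volume heal myvol-private info
> > > gluster volume heal myvol-private info split-brain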
> > > 
> > > Best regards,
> > > 
> > > Mabi
> > > 
> > > ‐‐‐ Original M