[Gluster-users] Gluster Summit BOF - Testing

2017-11-07 Thread Jonathan Holloway
Hi all,

We had a BoF about Upstream Testing and increasing coverage.

Discussion included:
 - More docs on using the gluster-specific libraries. 
 - Templates, examples, and test-case scripts with common functionality as a 
jumping-off point for creating a new test script.
 - Reduce the number of systems required by existing libraries (but scale as 
needed), e.g., two instead of eight.
 - Providing scripts, etc. for leveraging Docker, Vagrant, virsh, etc. to 
easily create test environments on laptops, workstations, and servers (see 
the sketch after this list).
 - Access to logs for Jenkins tests.
 - Access to systems for live debugging.
 - What do we test? Maybe we need to create upstream test plans.
 - Ongoing discussion here on gluster-users and updates in the testing section 
of the community meeting agenda.
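
For the Docker item above, here is a rough sketch of what a throwaway
two-node test environment might look like; the image name and options are
assumptions on my part, not an agreed recipe:

  # two privileged containers running glusterd (image name is an assumption)
  docker run -d --privileged --name gluster-n1 gluster/gluster-centos
  docker run -d --privileged --name gluster-n2 gluster/gluster-centos
  # peer them and start building a small test volume from inside node 1
  docker exec gluster-n1 gluster peer probe <ip-of-gluster-n2>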

Since returning from Gluster Summit, some of these are already being worked on. 
:-)

Thank you to all the birds of a feather that participated in the discussion!!!
Sweta, did I miss anything in that list?

Cheers,
Jonathan


[Gluster-users] Enabling Halo sets volume RO

2017-11-07 Thread Jon Cope
Hi all, 

I'm taking a stab at deploying a storage cluster to explore the Halo AFR 
feature and running into some trouble. In GCE, I have 4 instances, each with 
one 10 GB brick. 2 instances are in the US and the other 2 are in Asia (with 
the hope that this drives up latency sufficiently). The bricks make up a 
replica-4 volume. Before I enable halo, I can mount the volume and read/write 
files. 

The issue is that when I set `cluster.halo-enabled yes`, I can no longer write 
to the volume: 

[root@jcope-rhs-g2fn vol]# touch /mnt/vol/test1 
touch: setting times of ‘test1’: Read-only file system 

This can be fixed by turning halo off again. While halo is enabled and writes 
return the above message, the mount still shows it to be r/w: 

[root@jcope-rhs-g2fn vol]# mount 
gce-node1:gv0 on /mnt/vol type fuse.glusterfs 
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
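
In case it helps anyone looking at this: my next guess (unverified) is that
client quorum is what flips the volume to read-only once halo marks the two
far bricks as down. These are the knobs I plan to poke at, using the volume
name gv0 from the hook-script log below:

  # see what the halo/quorum options are currently set to
  gluster volume get gv0 all | grep -E 'halo|quorum'
  # and, if quorum turns out to be the culprit, the options to experiment with
  gluster volume set gv0 cluster.halo-min-replicas 2
  gluster volume set gv0 cluster.quorum-type fixed
  gluster volume set gv0 cluster.quorum-count 2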
 


Thanks in advance, 
-Jon 


Setup info 
CentOS Linux release 7.4.1708 (Core) 
4 GCE Instances (2 US, 2 Asia) 
1 10gb Brick/Instance 
replica 4 volume 

Packages: 


glusterfs-client-xlators-3.12.1-2.el7.x86_64 
glusterfs-cli-3.12.1-2.el7.x86_64 
python2-gluster-3.12.1-2.el7.x86_64 
glusterfs-3.12.1-2.el7.x86_64 
glusterfs-api-3.12.1-2.el7.x86_64 
glusterfs-fuse-3.12.1-2.el7.x86_64 
glusterfs-server-3.12.1-2.el7.x86_64 
glusterfs-libs-3.12.1-2.el7.x86_64 
glusterfs-geo-replication-3.12.1-2.el7.x86_64 

Logs, beginning when halo is enabled: 


[2017-11-07 22:20:15.029298] W [MSGID: 101095] [xlator.c:213:xlator_dynload] 
0-xlator: /usr/lib64/glusterfs/3.12.1/xlator/nfs/server.so: cannot open shared 
object file: No such file or directory 
[2017-11-07 22:20:15.204241] W [MSGID: 101095] 
[xlator.c:162:xlator_volopt_dynload] 0-xlator: 
/usr/lib64/glusterfs/3.12.1/xlator/nfs/server.so: cannot open shared object 
file: No such file or directory 
[2017-11-07 22:20:15.232176] I [MSGID: 106600] 
[glusterd-nfs-svc.c:163:glusterd_nfssvc_reconfigure] 0-management: 
nfs/server.so xlator is not installed 
[2017-11-07 22:20:15.235481] I [MSGID: 106132] 
[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already 
stopped 
[2017-11-07 22:20:15.235512] I [MSGID: 106568] 
[glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: quotad service is 
stopped 
[2017-11-07 22:20:15.235572] I [MSGID: 106132] 
[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped 
[2017-11-07 22:20:15.235585] I [MSGID: 106568] 
[glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: bitd service is 
stopped 
[2017-11-07 22:20:15.235638] I [MSGID: 106132] 
[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already 
stopped 
[2017-11-07 22:20:15.235650] I [MSGID: 106568] 
[glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: scrub service is 
stopped 
[2017-11-07 22:20:15.250297] I [run.c:190:runner_log] 
(-->/usr/lib64/glusterfs/3.12.1/xlator/mgmt/glusterd.so(+0xde17a) 
[0x7fc23442117a] 
-->/usr/lib64/glusterfs/3.12.1/xlator/mgmt/glusterd.so(+0xddc3d) 
[0x7fc234420c3d] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fc23f915da5] 
) 0-management: Ran script: /var/lib 
/glusterd/hooks/1/set/post/S30samba-set.sh --volname=gv0 -o 
cluster.halo-enabled=yes --gd-workdir=/var/lib/glusterd 
[2017-11-07 22:20:15.255777] I [run.c:190:runner_log] 
(-->/usr/lib64/glusterfs/3.12.1/xlator/mgmt/glusterd.so(+0xde17a) 
[0x7fc23442117a] 
-->/usr/lib64/glusterfs/3.12.1/xlator/mgmt/glusterd.so(+0xddc3d) 
[0x7fc234420c3d] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fc23f915da5] 
) 0-management: Ran script: /var/lib 
/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh --volname=gv0 -o 
cluster.halo-enabled=yes --gd-workdir=/var/lib/glusterd 
[2017-11-07 22:20:47.420098] W [MSGID: 101095] [xlator.c:213:xlator_dynload] 
0-xlator: /usr/lib64/glusterfs/3.12.1/xlator/nfs/server.so: cannot open shared 
object file: No such file or directory 
[2017-11-07 22:20:47.595960] W [MSGID: 101095] 
[xlator.c:162:xlator_volopt_dynload] 0-xlator: 
/usr/lib64/glusterfs/3.12.1/xlator/nfs/server.so: cannot open shared object 
file: No such file or directory 
[2017-11-07 22:20:47.631833] I [MSGID: 106600] 
[glusterd-nfs-svc.c:163:glusterd_nfssvc_reconfigure] 0-management: 
nfs/server.so xlator is not installed 
[2017-11-07 22:20:47.635109] I [MSGID: 106132] 
[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already 
stopped 
[2017-11-07 22:20:47.635136] I [MSGID: 106568] 
[glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: quotad service is 
stopped 
[2017-11-07 22:20:47.635201] I [MSGID: 106132] 
[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped 
[2017-11-07 22:20:47.635216] I [MSGID: 106568] 
[glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: bitd service is 
stopped 
[2017-11-07 22:20:47.635284] I [MSGID: 106132] 
[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already 
stopped 
[2017-11-07 22:20:47.635297] I [MSGID: 106568] 

Re: [Gluster-users] Problem with getting restapi up

2017-11-07 Thread InterNetX - Juergen Gotteswinter
Never mind, I fixed it and created a pull request on GitHub.

It was a filename / makefile problem.
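
For anyone hitting the same traceback: gunicorn imports the application
named in the config ("main:app" in the dump below) relative to its chdir,
so a quick sanity check is whether that module was actually installed under
that filename (path taken from the config dump below; adjust if yours
differs):

  # does the module gunicorn is told to import actually exist there?
  ls /usr/libexec/glusterfs/glusterrest/main.py
  # if the makefile installed it under a different name, the
  # "ImportError: No module named main" below is exactly what you get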

On 07.11.2017 at 10:51, InterNetX - Juergen Gotteswinter wrote:
> Hi,
> 
> I am currently struggling with the gluster restapi (not heketi) and am a
> bit stuck. During startup of the glusterrestd service it throws some Python
> errors; here is the error log output with increased log level.
> 
> Maybe someone can give me a hint on how to fix this.
> 
> -- snip --
> [2017-11-07 10:29:04 +] [30982] [DEBUG] Current configuration:
>   proxy_protocol: False
>   worker_connections: 1000
>   statsd_host: None
>   max_requests_jitter: 0
>   post_fork: 
>   errorlog: /var/log/glusterrest/errors.log
>   enable_stdio_inheritance: False
>   worker_class: sync
>   ssl_version: 2
>   suppress_ragged_eofs: True
>   syslog: False
>   syslog_facility: user
>   when_ready: 
>   pre_fork: 
>   cert_reqs: 0
>   preload_app: False
>   keepalive: 2
>   accesslog: /var/log/glusterrest/access.log
>   group: 0
>   graceful_timeout: 30
>   do_handshake_on_connect: False
>   spew: False
>   workers: 2
>   proc_name: None
>   sendfile: None
>   pidfile: /var/run/glusterrest.pid
>   umask: 0
>   on_reload: 
>   pre_exec: 
>   worker_tmp_dir: None
>   limit_request_fields: 100
>   pythonpath: None
>   on_exit: 
>   config: /usr/local/etc/glusterrest/gunicorn_config.py
>   logconfig: None
>   check_config: False
>   statsd_prefix:
>   secure_scheme_headers: {'X-FORWARDED-PROTOCOL': 'ssl',
> 'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-SSL': 'on'}
>   reload_engine: auto
>   proxy_allow_ips: ['127.0.0.1']
>   pre_request: 
>   post_request: 
>   forwarded_allow_ips: ['127.0.0.1']
>   worker_int: 
>   raw_paste_global_conf: []
>   threads: 1
>   max_requests: 0
>   chdir: /usr/libexec/glusterfs/glusterrest
>   daemon: False
>   user: 0
>   limit_request_line: 4094
>   access_log_format: %(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s"
> "%(a)s"
>   certfile: None
>   on_starting: 
>   post_worker_init: 
>   child_exit: 
>   worker_exit: 
>   paste: None
>   default_proc_name: main:app
>   syslog_addr: udp://localhost:514
>   syslog_prefix: None
>   ciphers: TLSv1
>   worker_abort: 
>   loglevel: debug
>   bind: [':8080']
>   raw_env: []
>   initgroups: False
>   capture_output: False
>   reload: False
>   limit_request_field_size: 8190
>   nworkers_changed: 
>   timeout: 30
>   keyfile: None
>   ca_certs: None
>   tmp_upload_dir: None
>   backlog: 2048
>   logger_class: gunicorn.glogging.Logger
> [2017-11-07 10:29:04 +] [30982] [INFO] Starting gunicorn 19.7.1
> [2017-11-07 10:29:04 +] [30982] [DEBUG] Arbiter booted
> [2017-11-07 10:29:04 +] [30982] [INFO] Listening at:
> http://0.0.0.0:8080 (30982)
> [2017-11-07 10:29:04 +] [30982] [INFO] Using worker: sync
> [2017-11-07 10:29:04 +] [30991] [INFO] Booting worker with pid: 30991
> [2017-11-07 10:29:04 +] [30991] [ERROR] Exception in worker process
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/gunicorn/arbiter.py", line 578,
> in spawn_worker
> worker.init_process()
>   File "/usr/lib/python2.7/site-packages/gunicorn/workers/base.py", line
> 126, in init_process
> self.load_wsgi()
>   File "/usr/lib/python2.7/site-packages/gunicorn/workers/base.py", line
> 135, in load_wsgi
> self.wsgi = self.app.wsgi()
>   File "/usr/lib/python2.7/site-packages/gunicorn/app/base.py", line 67,
> in wsgi
> self.callable = self.load()
>   File "/usr/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line
> 65, in load
> return self.load_wsgiapp()
>   File "/usr/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line
> 52, in load_wsgiapp
> return util.import_app(self.app_uri)
>   File "/usr/lib/python2.7/site-packages/gunicorn/util.py", line 352, in
> import_app
> __import__(module)
> ImportError: No module named main
> [2017-11-07 10:29:04 +] [30991] [INFO] Worker exiting (pid: 30991)
> [2017-11-07 10:29:04 +] [30982] [INFO] Shutting down: Master
> [2017-11-07 10:29:04 +] [30993] [INFO] Booting worker with pid: 30993
> [2017-11-07 10:29:04 +] [30982] [INFO] Reason: Worker failed to boot.
> [2017-11-07 10:29:04 +] [30993] [ERROR] Exception in worker process
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/gunicorn/arbiter.py", line 578,
> in spawn_worker
> worker.init_process()
>   File "/usr/lib/python2.7/site-packages/gunicorn/workers/base.py", line
> 126, in init_process
> self.load_wsgi()
>   File "/usr/lib/python2.7/site-packages/gunicorn/workers/base.py", line
> 135, in load_wsgi
> self.wsgi = self.app.wsgi()
>   File "/usr/lib/python2.7/site-packages/gunicorn/app/base.py", line 67,
> in wsgi
> self.callable = self.load()
>   File "/usr/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line
> 65, in load
> return self.load_wsgiapp()
>   File "/usr/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line
> 52, in load_wsgiapp
> return 

Re: [Gluster-users] Gluster clients can't see directories that exist or are created within a mounted volume, but can enter them.

2017-11-07 Thread Sam McLeod

> On 6 Nov 2017, at 3:32 pm, Laura Bailey wrote:
> 
> Do the users have permission to see/interact with the directories, in 
> addition to the files?

Yes, full access to directories and files.
Also testing using the root user.
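
Something else I plan to rule out on our side (just a hunch on my part, not
confirmed anywhere in this thread) is the client-side readdir tuning; the
options below are standard volume options, and the volume name is taken from
the mount output quoted further down:

  # see which readdir-related options are currently in effect
  gluster volume get gluster_vol all | grep -i readdir
  # and, purely as an experiment, switch the parallel readdir optimisation off
  gluster volume set gluster_vol performance.parallel-readdir off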

> 
> On Mon, Nov 6, 2017 at 1:55 PM, Nithya Balachandran wrote:
> Hi,
> 
> Please provide the gluster volume info. Do you see any errors in the client 
> mount log file (/var/log/glusterfs/var-lib-mountedgluster.log)?


root@int-gluster-01:/var/log/glusterfs  # grep 'dev_static' *.log|grep -v 
cmd_history

glusterd.log:[2017-11-05 22:37:06.934787] W 
[glusterd-locks.c:675:glusterd_mgmt_v3_unlock] 
(-->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0x22e5a) 
[0x7f5047169e5a] 
-->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0x2cdc8) 
[0x7f5047173dc8] 
-->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0xe372a) 
[0x7f504722a72a] ) 0-management: Lock for vol dev_static not held
glusterd.log:[2017-11-05 22:37:06.934806] W [MSGID: 106118] 
[glusterd-handler.c:6309:__glusterd_peer_rpc_notify] 0-management: Lock not 
released for dev_static
glusterd.log:[2017-11-05 22:39:49.924472] W 
[glusterd-locks.c:675:glusterd_mgmt_v3_unlock] 
(-->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0x22e5a) 
[0x7fde97921e5a] 
-->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0x2cdc8) 
[0x7fde9792bdc8] 
-->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0xe372a) 
[0x7fde979e272a] ) 0-management: Lock for vol dev_static not held
glusterd.log:[2017-11-05 22:39:49.924494] W [MSGID: 106118] 
[glusterd-handler.c:6309:__glusterd_peer_rpc_notify] 0-management: Lock not 
released for dev_static
glusterd.log:[2017-11-05 22:41:42.565123] W 
[glusterd-locks.c:675:glusterd_mgmt_v3_unlock] 
(-->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0x22e5a) 
[0x7fde97921e5a] 
-->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0x2cdc8) 
[0x7fde9792bdc8] 
-->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0xe372a) 
[0x7fde979e272a] ) 0-management: Lock for vol dev_static not held
glusterd.log:[2017-11-05 22:41:42.565227] W [MSGID: 106118] 
[glusterd-handler.c:6309:__glusterd_peer_rpc_notify] 0-management: Lock not 
released for dev_static
glusterd.log:[2017-11-05 22:42:06.931060] W 
[glusterd-locks.c:675:glusterd_mgmt_v3_unlock] 
(-->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0x22e5a) 
[0x7fde97921e5a] 
-->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0x2cdc8) 
[0x7fde9792bdc8] 
-->/usr/lib64/glusterfs/3.12.2/xlator/mgmt/glusterd.so(+0xe372a) 
[0x7fde979e272a] ) 0-management: Lock for vol dev_static not held
glusterd.log:[2017-11-05 22:42:06.931090] W [MSGID: 106118] 
[glusterd-handler.c:6309:__glusterd_peer_rpc_notify] 0-management: Lock not 
released for dev_static

> 
> 
> Thanks,
> Nithya
> 
> On 6 November 2017 at 05:13, Sam McLeod wrote:
> We've got an issue with Gluster (3.12.x) where clients can't see directories 
> that exist or are created within a mounted volume.
> 
> 
> We can see files, but not directories (with ls, find etc...)
> We can enter (cd) into directories, even if we can't see them.
> 
> - Host topology is: 2 replicas, 1 arbiter.
> - Volumes are: replicated and running on XFS on the hosts.
> - Clients are: GlusterFS native fuse client (mount.glusterfs), the same 
> version and op-version as the hosts.
> - Gluster server and client version: 3.12.2 (also found on 3.12.1, unsure 
> about previous versions) running on CentOS 7.
> 
> 
> Examples:
> 
> 
> mount:
> 192.168.0.151:/gluster_vol on /var/lib/mountedgluster type fuse.glusterfs 
> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>  
> root@gluster-client:/var/lib/mountedgluster  # ls -la
> total 0
>  
> (note no . or .. directories)
>  
> root@gluster-client:/var/lib/mountedgluster  # touch test
> root@gluster-client:/var/lib/mountedgluster  # ls -la
> total 0
> -rw-r--r--. 1 root root 0 Nov  6 10:10 test
>  
> ("test" file shows up. Still no . or .. directories.)
>  
> root@gluster-client:/var/lib/mountedgluster  # mkdir testdir
> root@gluster-client:/var/lib/mountedgluster  # ls -la
> total 0
> -rw-r--r--. 1 root root 0 Nov  6 10:10 test
>  
> (directory was made, but doesn't show in ls)
>  
> root@gluster-client:/var/lib/mountedgluster  # cd testdir
> root@gluster-client:/var/lib/mountedgluster/testdir  # ls -la
> total 0
>  
> (cd works, no . or .. shown in ls though)
>  
> root@gluster-client:/var/lib/mountedgluster/testdir  # touch test
> root@gluster-client:/var/lib/mountedgluster/testdir  # ls -la
> total 0
> -rw-r--r--. 1 root root 0 Nov  6 10:10 test
>  
> (can create test file in testdir)
>  
>  
> root@gluster-client:/var/lib/mountedgluster/testdir  # cd ..
> root@gluster-client:/var/lib/mountedgluster  # ls -ld testdir
> drwxr-xr-x. 2 root root 4096 Nov  6 10:10 testdir
>  
> (going back to parent 

Re: [Gluster-users] Change IP address of few nodes in GFS 3.8

2017-11-07 Thread Hemant Mamtora
Thanks Atin.

Peer probes were done using FQDN and I was able to make these changes.

The only thing I had to do on the rest of the nodes was to flush the nscd 
cache; after that everything was good, and I did not have to restart the 
gluster services on those nodes.

- Hemant

On 10/30/17 11:46 AM, Atin Mukherjee wrote:
If the gluster nodes are peer probed through FQDNs then you're good. If they're 
done through IPs then for every node you'd need to replace the old IP with the 
new IP in all the files in /var/lib/glusterd, rename the filenames that contain 
the old IP, and restart all gluster services. I used to have a script for this 
which I shared earlier on the users ML; I need to dig through my mailbox and 
find it.
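
For reference, a rough, untested sketch of what that per-node procedure boils
down to (OLD_IP/NEW_IP are placeholders, and the systemd service name may
differ on your distribution):

  systemctl stop glusterd
  # replace the old IP inside every file under /var/lib/glusterd
  grep -rl 'OLD_IP' /var/lib/glusterd | xargs sed -i 's/OLD_IP/NEW_IP/g'
  # rename any files that carry the old IP in their filename
  find /var/lib/glusterd -name '*OLD_IP*' | while read f; do
      mv "$f" "${f//OLD_IP/NEW_IP}"
  done
  systemctl start glusterd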

On Sun, 29 Oct 2017 at 18:45, Hemant Mamtora wrote:
Folks,

We have a 12-node replicated gluster setup (2 x 6).

I need to change the IP address on 6 of the 12 nodes, keeping the
hostnames the same. The 6 nodes that I plan to change IPs on are part of
the 3 sub-volumes.

Is this possible, and if so, is there a formal process for telling gluster
that an IP address has changed?

My requirement is to keep the volume up and not bring it down. I can
change the IP address on the nodes one at a time. When an IP change is
done, that node will be rebooted as well.

Many thanks in advance.

--
- Hemant Mamtora
--
- Atin (atinm)

--
- Hemant Mamtora

[Gluster-users] Oracle DB write archive log files on glusterfs

2017-11-07 Thread Hemant Mamtora
Folks,

Currently we have the archive log files from an Oracle DB being written to 
a NAS location using NFS. We are planning to move that NAS location to a 
glusterfs file system. Has anybody been able to write the archive log files 
of an Oracle DB onto a glusterfs file system/mount point?

If it has not been done, do you all think that it can be done?

I am planning to perform some testing and wanted to have a better 
understanding of this before I start.
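
For the test I have in mind something as simple as the following (server
name, volume name, and paths below are made up for illustration):

  # fuse-mount the gluster volume where the DB writes its archive logs
  mount -t glusterfs gnode1:/archvol /u01/app/oracle/archivelog
  # or persistently via fstab:
  # gnode1:/archvol  /u01/app/oracle/archivelog  glusterfs  defaults,_netdev  0 0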

-- 
- Hemant Mamtora


[Gluster-users] Ignore failed connection messages during copying files with tiering

2017-11-07 Thread Paul
Hi, All,


We created a GlusterFS cluster with tiering. The hot tier is a
distributed-replicated volume on SSDs. The cold tier is an n*(6+2) disperse
volume. When copying millions of files to the cluster, we find these logs:


W [socket.c:3292:socket_connect] 0-tierd: Ignore failed connection attempt
on /var/run/gluster/39668fb028de4b1bb6f4880e7450c064.socket, (No such file
or directory)
W [socket.c:3292:socket_connect] 0-tierd: Ignore failed connection attempt
on /var/run/gluster/39668fb028de4b1bb6f4880e7450c064.socket, (No such file
or directory)
W [socket.c:3292:socket_connect] 0-tierd: Ignore failed connection attempt
on /var/run/gluster/39668fb028de4b1bb6f4880e7450c064.socket, (No such file
or directory)
…

And then the copy process seems to be suspended. The df command hangs on
the client machine. But if I restart glusterd, then the copy process
continues. However, several minutes later the problem happens again. Later
we found that the problem seems to happen when creating directories.
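
The socket in the warnings looks like the tier daemon's, so the next time it
happens we also plan to check whether tierd is still running (the volume name
below is a placeholder):

  gluster volume tier <volname> status
  gluster volume status <volname>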

The GlusterFS version is 3.11.0. Does anyone know what the problem is? Is
it related to tiering?

Thanks,
Paul

[Gluster-users] error logged in fuse-mount log file

2017-11-07 Thread Amudhan P
Hi,

I am using glusterfs 3.10.1 and I am seeing the messages below in the
fuse-mount log file.

What do these errors mean? Should I worry about them, and how do I resolve
them?

[2017-11-07 11:59:17.218973] W [MSGID: 109005]
[dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory
selfheal failed : 1 subvolumes have unrecoverable errors. path =
/fol1/fol2/fol3/fol4/fol5, gfid =3f856ab3-f538-43ee-b408-53dd3da617fb
[2017-11-07 11:59:17.218935] I [MSGID: 109063]
[dht-layout.c:713:dht_layout_normalize] 0-glustervol-dht: Found anomalies
in /fol1/fol2/fol3/fol4/fol5 (gfid = 3f856ab3-f538-43ee-b408-53dd3da617fb).
Holes=1 overlaps=0
[2017-11-07 11:59:17.199917] W [MSGID: 122019]
[ec-helpers.c:413:ec_loc_gfid_check] 0-glustervol-disperse-5: Mismatching
GFID's in loc

[2017-11-07 11:03:08.999769] W [MSGID: 109005]
[dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory
selfheal failed : 1 subvolumes have unrecoverable errors. path =
/sec_fol1/sec_fol2/sec_fol3/sec_fol4/sec_fol5, gfid =
59b9762e-f419-4d56-9fa2-eea9ebc055b2
[2017-11-07 11:03:08.999749] I [MSGID: 109063]
[dht-layout.c:713:dht_layout_normalize] 0-glustervol-dht: Found anomalies
in /sec_fol1/sec_fol2/sec_fol3/sec_fol4/sec_fol5 (gfid =
59b9762e-f419-4d56-9fa2-eea9ebc055b2). Holes=1 overlaps=0
[2017-11-07 11:03:08.980275] W [MSGID: 122019]
[ec-helpers.c:413:ec_loc_gfid_check] 0-glustervol-disperse-7: Mismatching
GFID's in loc


[2017-11-07 12:58:43.569801] I [MSGID: 109069]
[dht-common.c:1324:dht_lookup_unlink_cbk] 0-glustervol-dht: lookup_unlink
returned with op_ret -> 0 and op-errno -> 0 for
/thi_fol1/thi_fol2/thi_fol3/thi_fol4/thi_fol5/thi_fol6/thi_file1
[2017-11-07 12:58:43.528844] I [MSGID: 109045]
[dht-common.c:2012:dht_lookup_everywhere_cbk] 0-glustervol-dht: attempting
deletion of stale linkfile
/thi_fol1/thi_fol2/thi_fol3/thi_fol4/thi_fol5/thi_fol6/thi_file1 on
glustervol-disperse-77 (hashed subvol is glustervol-disperse-106)
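
For what it's worth, the first checks I plan to run are the standard ones
(volume name glustervol taken from the log prefixes above):

  gluster volume heal glustervol info
  gluster volume status glustervol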

regards
Amudhan P

[Gluster-users] Subvolume failure

2017-11-07 Thread Hans Henrik Happe
Hi,

I have always found the default behaviour of subvolume failure a bit
confusing for users or applications that need deterministic failure
behaviour.

When a subvolume fails, clients can still use and access files on the
other subvolumes. However, accessing files on the failed subvolume returns
an error saying the files cannot be found.

It would be better if access blocked until the subvolume comes back online.

An alternative would be for all access to fail when any subvolume is
down.

Is there a way to configure this?

Cheers,
Hans Henrik


[Gluster-users] Gluster Summit BOF - Encryption

2017-11-07 Thread Ivan Rossi
We had a BOF about how to do file-level volume encryption.

Coupled with geo-replication, this feature would be useful for secure
off-site archiving/backup/disaster-recovery of Gluster volumes.

TLDR: It might be possible using the EncFS stacked file system on top of a
Gluster mount, but it is experimental and untested. At the moment, you are
on your own.

- The built-in encryption translator is strongly deprecated and it may be
  removed altogether from the code base in the future.

- The kernel-based ecryptfs (http://ecryptfs.org/) stacked file system has a
  known bug with NFS and possibly other network file systems.

- Stacking EncFS (https://github.com/vgough/encfs) on top of a Gluster mount
  should, in principle, work with both native and NFS mounts (see the sketch
  after this list). Performance is going to be low, but still workable in
  some of the use cases of interest.

- Long-term solution: a client-side translator based on the EncFS code. At
  the moment there is no plan to develop it.
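
For reference, the EncFS stacking boils down to something like this (paths
and volume name are illustrative only, and again: experimental, untested,
use at your own risk):

  # fuse-mount the Gluster volume as usual
  mount -t glusterfs server:/vol /mnt/gluster
  mkdir -p /mnt/gluster/.encrypted /mnt/clear
  # EncFS keeps the ciphertext in the first directory (on Gluster) and
  # exposes the cleartext view only on the local mount point
  encfs /mnt/gluster/.encrypted /mnt/clear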

Hope it is useful to others too.

Ivan