[Gluster-users] mount failing client to gluster cluster.

2018-05-07 Thread Thing
Hi,

On a debian 9 client,


root@kvm01:/var/lib/libvirt# dpkg -l glusterfs-client
8><---
ii  glusterfs-client  3.8.8-1   amd64
   clustered file-system (client package)
root@kvm01:/var/lib/libvirt#
===

I am trying to do a mount to a CentOS 7 gluster setup,

===
[root@glustep1 libvirt]# rpm -q glusterfs
glusterfs-4.0.2-1.el7.x86_64
[root@glustep1 libvirt]#
===

mount -t glusterfs glusterp1.graywitch.co.nz:/gv0/kvm01/images
/var/lib/libvirt/images
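
For reference, the /etc/fstab equivalent of this mount would be something along
these lines (the options shown are just an illustration):

glusterp1.graywitch.co.nz:/gv0/kvm01/images  /var/lib/libvirt/images  glusterfs  defaults,_netdev  0 0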


the logs are telling me,
=
root@kvm01:/var/lib/libvirt# >
/var/log/glusterfs/var-lib-libvirt-images.log
root@kvm01:/var/lib/libvirt# mount -t glusterfs
glusterp1.graywitch.co.nz:/gv0/kvm01/images/
/var/lib/libvirt/images
Mount failed. Please check the log file for more details.
root@kvm01:/var/lib/libvirt# more
/var/log/glusterfs/var-lib-libvirt-images.log
[2018-05-08 03:33:48.989219] I [MSGID: 100030] [glusterfsd.c:2454:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.8
(args: /usr/sbin/glusterfs --volfile-server=glusterp1.graywitch.co.nz
--volfile-id=/gv0/kvm01/images/ /var/lib/libvirt/images)
[2018-05-08 03:33:48.996244] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2018-05-08 03:33:48.998694] E [glusterfsd-mgmt.c:1590:mgmt_getspec_cbk]
0-glusterfs: failed to get the 'volume file' from server
[2018-05-08 03:33:48.998721] E [glusterfsd-mgmt.c:1690:mgmt_getspec_cbk]
0-mgmt: failed to fetch volume file (key:/gv0/kvm01/images/)
[2018-05-08 03:33:48.998891] W [glusterfsd.c:1327:cleanup_and_exit]
(-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_handle_reply+0x90) [0x7f25cd69ba20]
-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x494) [0x558e7de496f4]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x54) [0x558e7de43444] ) 0-:
received signum (0), shutting down
[2018-05-08 03:33:48.998923] I [fuse-bridge.c:5794:fini] 0-fuse: Unmounting
'/var/lib/libvirt/images'.
[2018-05-08 03:33:49.019805] W [glusterfsd.c:1327:cleanup_and_exit]
(-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x7494) [0x7f25cc72c494]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xf5) [0x558e7de435e5]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x54) [0x558e7de43444] ) 0-:
received signum (15), shutting down
root@kvm01:/var/lib/libvirt#
==

though,

mount -t glusterfs glusterp1.graywitch.co.nz:/gv0 /isos

works fine.

===
root@kvm01:/var/lib/libvirt# df -h
Filesystem  Size  Used Avail Use% Mounted on
udev7.8G 0  7.8G   0% /dev
tmpfs   1.6G  9.2M  1.6G   1% /run
/dev/mapper/kvm01--vg-root   23G  3.8G   18G  18% /
tmpfs   7.8G 0  7.8G   0% /dev/shm
tmpfs   5.0M  4.0K  5.0M   1% /run/lock
tmpfs   7.8G 0  7.8G   0% /sys/fs/cgroup
/dev/mapper/kvm01--vg-home  243G   61M  231G   1% /home
/dev/mapper/kvm01--vg-tmp   1.8G  5.6M  1.7G   1% /tmp
/dev/mapper/kvm01--vg-var   9.2G  302M  8.4G   4% /var
/dev/sda1   236M   63M  161M  28% /boot
tmpfs   1.6G  4.0K  1.6G   1% /run/user/115
tmpfs   1.6G 0  1.6G   0% /run/user/1000
glusterp1.graywitch.co.nz:/gv0  932G  247G  685G  27% /isos


also, I can mount the sub-directory fine on the gluster cluster itself,

===
[root@glustep1 libvirt]# df -h
Filesystem                                     Size  Used Avail Use% Mounted on
/dev/mapper/centos-root                         20G  3.4G   17G  17% /
devtmpfs                                       3.8G     0  3.8G   0% /dev
tmpfs                                          3.8G  6.1M  3.8G   1% /dev/shm
tmpfs                                          3.8G  9.0M  3.8G   1% /run
tmpfs                                          3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/sda1                                      969M  206M  713M  23% /boot
/dev/mapper/centos-tmp                         3.9G   33M  3.9G   1% /tmp
/dev/mapper/centos-home                         50G  4.3G   46G   9% /home
/dev/mapper/centos-var                          20G  341M   20G   2% /var
/dev/mapper/centos-data1                       120G   36M  120G   1% /data1
/dev/mapper/centos00-var_lib                   9.4G  179M  9.2G   2% /var/lib
/dev/mapper/vg--gluster--prod1-gluster--prod1  932G  233G  699G  25% /bricks/brick1
tmpfs                                          771M   12K  771M   1% /run/user/42
tmpfs                                          771M   32K  771M   1% /run/user/1000
glusterp1:gv0/glusterp1/images                 932G  247G  685G  27% /var/lib/libvirt/images
glusterp1:gv0                                  932G  247G  685G  27% /isos
[root@glustep1 libvirt]#


Is this a version mismatch thing, or what am I doing wrong, please?
___
Gluster-users mailing list
Gluster-users@gluster.org

Re: [Gluster-users] volume start: gv01: failed: Quorum not met. Volume operation not allowed.

2018-05-07 Thread TomK

On 4/11/2018 11:54 AM, Alex K wrote:

Hey Guys,

Returning to this topic, after disabling the quorum:

cluster.quorum-type: none
cluster.server-quorum-type: none
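(i.e., set with something like:
gluster volume set gv01 cluster.quorum-type none
gluster volume set gv01 cluster.server-quorum-type none)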

I've run into a number of gluster errors (see below).

I'm using gluster as the backend for my NFS storage.  I have gluster
running on two nodes, nfs01 and nfs02.  It's mounted on /n on each host.
The path /n is in turn shared out by NFS Ganesha.  It's a two-node
setup with quorum disabled, as noted below:


[root@nfs02 ganesha]# mount|grep gv01
nfs02:/gv01 on /n type fuse.glusterfs 
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)


[root@nfs01 glusterfs]# mount|grep gv01
nfs01:/gv01 on /n type fuse.glusterfs 
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)


Gluster always reports as working no matter when I type the two
commands below:


[root@nfs01 glusterfs]# gluster volume info

Volume Name: gv01
Type: Replicate
Volume ID: e5ccc75e-5192-45ac-b410-a34ebd777666
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: nfs01:/bricks/0/gv01
Brick2: nfs02:/bricks/0/gv01
Options Reconfigured:
cluster.server-quorum-type: none
cluster.quorum-type: none
server.event-threads: 8
client.event-threads: 8
performance.readdir-ahead: on
performance.write-behind-window-size: 8MB
performance.io-thread-count: 16
performance.cache-size: 1GB
nfs.trusted-sync: on
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
[root@nfs01 glusterfs]# gluster status
unrecognized word: status (position 0)
[root@nfs01 glusterfs]# gluster volume status
Status of volume: gv01
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick nfs01:/bricks/0/gv01  49152 0  Y 1422
Brick nfs02:/bricks/0/gv01  49152 0  Y 1422
Self-heal Daemon on localhost   N/A   N/AY 1248
Self-heal Daemon on nfs02.nix.my.dom   N/A   N/AY   1251

Task Status of Volume gv01
--
There are no active volume tasks

[root@nfs01 glusterfs]#

[root@nfs01 glusterfs]# rpm -aq|grep -Ei gluster
glusterfs-3.13.2-2.el7.x86_64
glusterfs-devel-3.13.2-2.el7.x86_64
glusterfs-fuse-3.13.2-2.el7.x86_64
glusterfs-api-devel-3.13.2-2.el7.x86_64
centos-release-gluster313-1.0-1.el7.centos.noarch
python2-gluster-3.13.2-2.el7.x86_64
glusterfs-client-xlators-3.13.2-2.el7.x86_64
glusterfs-server-3.13.2-2.el7.x86_64
libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.9.x86_64
glusterfs-cli-3.13.2-2.el7.x86_64
centos-release-gluster312-1.0-1.el7.centos.noarch
python2-glusterfs-api-1.1-1.el7.noarch
glusterfs-libs-3.13.2-2.el7.x86_64
glusterfs-extra-xlators-3.13.2-2.el7.x86_64
glusterfs-api-3.13.2-2.el7.x86_64
[root@nfs01 glusterfs]#

The short of it is that everything works and mounts on guests work as 
long as I don't try to write to the NFS share from my clients.  If I try 
to write to the share, everything comes apart like this:


-sh-4.2$ pwd
/n/my.dom/tom
-sh-4.2$ ls -altri
total 6258
11715278280495367299 -rw---. 1 t...@my.dom t...@my.dom     231 Feb 17 20:15 .bashrc
10937819299152577443 -rw---. 1 t...@my.dom t...@my.dom     193 Feb 17 20:15 .bash_profile
10823746994379198104 -rw---. 1 t...@my.dom t...@my.dom      18 Feb 17 20:15 .bash_logout
10718721668898812166 drwxr-xr-x. 3 root        root       4096 Mar  5 02:46 ..
12008425472191154054 drwx--. 2 t...@my.dom t...@my.dom    4096 Mar 18 03:07 .ssh
13763048923429182948 -rw-rw-r--. 1 t...@my.dom t...@my.dom 6359568 Mar 25 22:38 opennebula-cores.tar.gz
11674701370106210511 -rw-rw-r--. 1 t...@my.dom t...@my.dom       4 Apr  9 23:25 meh.txt
 9326637590629964475 -rw-r--r--. 1 t...@my.dom t...@my.dom   24970 May  1 01:30 nfs-trace-working.dat.gz
 9337343577229627320 -rw---. 1 t...@my.dom t...@my.dom    3734 May  1 23:38 .bash_history
11438151930727967183 drwx--. 3 t...@my.dom t...@my.dom    4096 May  1 23:58 .
 9865389421596220499 -rw-r--r--. 1 t...@my.dom t...@my.dom    4096 May  1 23:58 .meh.txt.swp

-sh-4.2$ touch test.txt
-sh-4.2$ vi test.txt
-sh-4.2$ ls -altri
ls: cannot open directory .: Permission denied
-sh-4.2$ ls -altri
ls: cannot open directory .: Permission denied
-sh-4.2$ ls -altri

This is followed by a slew of other errors in apps using the gluster 
volume.  These errors include:


02/05/2018 23:10:52 : epoch 5aea7bd5 : nfs02.nix.my.dom : 
ganesha.nfsd-5891[svc_12] nfs_rpc_process_request :DISP :INFO :Could not 
authenticate request... rejecting with AUTH_STAT=RPCSEC_GSS_CREDPROBLEM



==> ganesha-gfapi.log <==
[2018-05-03 04:32:18.009245] I [MSGID: 114021] [client.c:2369:notify] 
0-gv01-client-0: current graph is no longer active, destroying rpc_client
[2018-05-03 04:32:18.009338] I [MSGID: 114021] 

Re: [Gluster-users] Compiling 3.13.2 under FreeBSD 11.1?

2018-05-07 Thread Kaleb S. KEITHLEY
On 05/07/2018 04:29 AM, Roman Serbski wrote:
> Hello,
> 
> Has anyone managed to successfully compile the latest 3.13.2 under
> FreeBSD 11.1? ./autogen.sh and ./configure seem to work but make
> fails:

See https://review.gluster.org/19974

3.13 reached EOL with 4.0. There will be a fix posted for 4.0 soon. In
the meantime, I believe your specific problem with 3.13.2 should be
resolved with this:

diff --git a/api/src/glfs.c b/api/src/glfs.c
index 2a7ae2f39..8a9659766 100644
--- a/api/src/glfs.c
+++ b/api/src/glfs.c
@@ -1569,8 +1569,8 @@ out:
 GFAPI_SYMVER_PUBLIC_DEFAULT(glfs_sysrq, 3.10.0);

 int
-glfs_upcall_register (struct glfs *fs, uint32_t event_list,
-  glfs_upcall_cbk cbk, void *data)
+pub_glfs_upcall_register (struct glfs *fs, uint32_t event_list,
+  glfs_upcall_cbk cbk, void *data)
 {
 int ret = 0;

@@ -1618,9 +1618,11 @@ out:
 invalid_fs:
 return ret;
 }
+
 GFAPI_SYMVER_PUBLIC_DEFAULT(glfs_upcall_register, 3.13.0);

-int glfs_upcall_unregister (struct glfs *fs, uint32_t event_list)
+int
+pub_glfs_upcall_unregister (struct glfs *fs, uint32_t event_list)
 {
 int ret = 0;
 /* list of supported upcall events */
@@ -1663,4 +1665,5 @@ out:
 invalid_fs:
 return ret;
 }
+
 GFAPI_SYMVER_PUBLIC_DEFAULT(glfs_upcall_unregister, 3.13.0);
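
If you want to try it, a rough sketch of applying it (assuming the diff above
is saved as glfs-upcall.patch in the top-level 3.13.2 source directory; the
filename is just an example):

cd glusterfs-3.13.2
patch -p1 < glfs-upcall.patch
./autogen.sh && ./configure && make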



> 
> Making all in src
>   CC   glfs.lo
> cc: warning: argument unused during compilation: '-rdynamic'
> [-Wunused-command-line-argument]
> cc: warning: argument unused during compilation: '-rdynamic'
> [-Wunused-command-line-argument]
> fatal error: error in backend: A @@ version cannot be undefined
> cc: error: clang frontend command failed with exit code 70 (use -v to
> see invocation)
> FreeBSD clang version 4.0.0 (tags/RELEASE_400/final 297347) (based on
> LLVM 4.0.0)
> Target: x86_64-unknown-freebsd11.1
> 
> # uname -a
> FreeBSD int-smtp-03 11.1-RELEASE-p8 FreeBSD 11.1-RELEASE-p8 #0
> r330926: Wed Mar 14 13:45:45 CET 2018
> root@int-build:/usr/obj/usr/src/sys/BSD112017110501VM  amd64
> 
> # pkg info
> argp-standalone-1.3_3  Standalone version of arguments parsing
> functions from GLIBC
> autoconf-2.69_1Automatically configure source code on
> many Un*x platforms
> autoconf-wrapper-20131203  Wrapper script for GNU autoconf
> automake-1.15.1GNU Standards-compliant Makefile generator
> automake-wrapper-20131203  Wrapper script for GNU automake
> bison-3.0.4,1  Parser generator from FSF, (mostly)
> compatible with Yacc
> ca_root_nss-3.36.1 Root certificate bundle from the Mozilla 
> Project
> curl-7.59.0Command line tool and library for
> transferring data with URLs
> cyrus-sasl-2.1.26_13   RFC  SASL (Simple Authentication
> and Security Layer)
> gettext-runtime-0.19.8.1_1 GNU gettext runtime libraries and programs
> glib-2.50.3_2,1Some useful routines of C programming
> (current stable version)
> indexinfo-0.3.1Utility to regenerate the GNU info page index
> libedit-3.1.20170329_2,1   Command line editor library
> libevent-2.1.8_1   API for executing callback functions on
> events or timeouts
> libffi-3.2.1_2 Foreign Function Interface
> libiconv-1.14_11   Character set conversion library
> liblz4-1.8.1.2,1   LZ4 compression library, lossless and very fast
> libnghttp2-1.31.1  HTTP/2.0 C Library
> libtool-2.4.6  Generic shared library support script
> liburcu-0.10.0 Userspace read-copy-update (RCU) data
> synchronization library
> m4-1.4.18,1GNU M4
> mysql57-client-5.7.22_1Multithreaded SQL database (client)
> pcre-8.40_1Perl Compatible Regular Expressions library
> perl5-5.26.2   Practical Extraction and Report Language
> pkg-1.10.5 Package manager
> pkgconf-1.4.2,1Utility to help to configure compiler
> and linker flags
> protobuf-3.5.2 Data interchange format library
> python2-2_3The "meta-port" for version 2 of the
> Python interpreter
> python27-2.7.14_1  Interpreted object-oriented programming 
> language
> readline-7.0.3_1   Library for editing command lines as
> they are typed
> sqlite3-3.23.1 SQL database engine in a C library
> 
> # clang -v
> FreeBSD clang version 4.0.0 (tags/RELEASE_400/final 297347) (based on
> LLVM 4.0.0)
> Target: x86_64-unknown-freebsd11.1
> Thread model: posix
> InstalledDir: /usr/bin
> 
> ./autogen.sh > https://pastebin.com/BJ16SmTM
> 
> ./configure > https://pastebin.com/4SybcRTZ
> 
> make > https://pastebin.com/12YLjPid
> 
> glfs-8a2844.sh > https://pastebin.com/q3q0vWVS
> 
> glfs-8a2844.c > is too big. Please let me know whether you'd be
> interested to see it as well.
> 
> Thank you in advance.
> 

[Gluster-users] Gluster Monthly Newsletter, April 2018

2018-05-07 Thread Amye Scavarda
Announcing mountpoint, August 27-28, 2018
Our inaugural software-defined storage conference combining Gluster,
Ceph and other projects! More details at:
http://lists.gluster.org/pipermail/gluster-users/2018-May/034039.html
CFP at: http://mountpoint.io/

Out of cycle updates for all maintained Gluster versions: New updates
for 3.10, 3.12 and 4.0
http://lists.gluster.org/pipermail/announce/2018-April/98.html

Project Technical Leadership Council Announced
http://lists.gluster.org/pipermail/announce/2018-April/94.html

Gluster and Red Hat Summit:
Gluster’s pairing with oVirt in Community Central - come by!

Talks: (links to recordings provided in May newsletter)
Gluster Colonizer with Ansible: A hands-on workshop:
https://agenda.summit.redhat.com/SessionDetail.aspx?id=154077
Container-native storage and Red Hat Gluster Storage roadmap:
https://agenda.summit.redhat.com/SessionDetail.aspx?id=153766
Red Hat Hyperconverged Infrastructure: Your open hyperconverged solution
https://agenda.summit.redhat.com/SessionDetail.aspx?id=153974

Red Hat Summit: Community Happy Hour
Tuesday, May 8, 2018 (6:30pm - 7:30pm)
https://rhsummithappyhour.eventbrite.com/

Want swag for your meetup? https://www.gluster.org/events/ has a
contact form for us to let us know about your Gluster meetup! We’d
love to hear about Gluster presentations coming up, conference talks
and gatherings. Let us know!

Top Contributing Companies:  Red Hat, Comcast, DataLab, Gentoo Linux,
Facebook, BioDec, Samsung
Top Contributors in April: Nigel Babu, Nithya Balachandran,
Shyamsundar Ranganathan, Xavi Hernandez, Kotresh HR, Pranith Kumar
Karampuri

Noteworthy threads:
[Gluster-users] Proposal to make Design Spec and Document for a
feature mandatory.
http://lists.gluster.org/pipermail/gluster-users/2018-April/033799.html
[Gluster-users] Gluster's proposal to adopt GPL cure enforcement
http://lists.gluster.org/pipermail/gluster-users/2018-April/033954.html
[Gluster-devel] tendrl-release v1.6.3 (milestone-5 2018) is available
http://lists.gluster.org/pipermail/gluster-devel/2018-April/054755.html


Upcoming CFPs:
Mountpoint:
CFP closes June 15th

-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Finding performance bottlenecks

2018-05-07 Thread Ben Turner
- Original Message -
> From: "Tony Hoyle" 
> To: "Gluster Users" 
> Sent: Tuesday, May 1, 2018 5:38:38 AM
> Subject: Re: [Gluster-users] Finding performance bottlenecks
> 
> On 01/05/2018 02:27, Thing wrote:
> > Hi,
> > 
> > So is it KVM or VMware as the host(s)?  I basically have the same setup,
> > ie 3 x 1TB "raid1" nodes and VMs, but 1gb networking.  I do notice that with
> > vmware using NFS, disk was pretty slow (40% of a single disk), but this
> > was over 1gb networking which was clearly saturating.  Hence I am moving
> > to KVM to use glusterfs, hoping for better performance and bonding; it
> > will be interesting to see which host type runs faster.
> 
> 1gb will always be the bottleneck in that situation - that's going to
> max out at the speed of a single disk or lower.  You need at minimum to
> bond interfaces, and preferably go to 10gb, to do that.
> 
> Our NFS actually ends up faster than local disk because the read speed
> of the raid is faster than the read speed of the local disk.
> 
> > Which operating system is gluster on?
> 
> Debian Linux.  Supermicro motherboards, 24 core i7 with 128GB of RAM on
> the VM hosts.
> 
> > Did you do iperf between all nodes?
> 
> Yes, around 9.7Gb/s
> 
> It doesn't appear to be raw read speed but iowait.  Under nfs load with
> multiple VMs I get an iowait of around 0.3%.  Under gluster, never less
> than 10% and glusterfsd is often the top of the CPU usage.  This causes
> a load average of ~12 compared to 3 over NFS, and absolutely kills VMs
> esp. Windows ones - one machine I set booting and it was still booting
> 30 minutes later!

Are you properly aligned?  This sounds like the xattr reads / writes used by
gluster may be eating your IOPs; this is exacerbated when storage is misaligned.
I suggest getting on the latest version of oVirt (I have seen this help) and
evaluating your storage stack.

https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.1/html/administration_guide/formatting_and_mounting_bricks

pvcreate --dataalignment <full stripe>    (full stripe = RAID stripe unit * number of data disks)
vgcreate --physicalextentsize <full stripe>
lvcreate as normal
mkfs.xfs -f -i size=512 -n size=8192 -d su=<stripe unit>,sw=<number of data disks> DEVICE
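
As a worked example (the numbers here are purely illustrative): for a RAID 6
set with 10 data disks and a 128 KiB stripe unit, the full stripe is
128 KiB * 10 = 1280 KiB, so the above would look something like:

pvcreate --dataalignment 1280k /dev/sdb
vgcreate --physicalextentsize 1280k rhs_vg /dev/sdb
lvcreate -L 900g -n rhs_lv rhs_vg
mkfs.xfs -f -i size=512 -n size=8192 -d su=128k,sw=10 /dev/rhs_vg/rhs_lv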

And mount with:

/dev/rhs_vg/rhs_lv  /mountpoint  xfs  rw,inode64,noatime,nouuid  1 2

I normally use the rhgs-random-io tuned profile and the gluster volume set
group for virtualization.
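
Roughly (the rhgs-random-io profile ships with RHGS, so on a community install
the tuned profile name may differ; the gluster group file is named "virt"):

tuned-adm profile rhgs-random-io
gluster volume set <VOLNAME> group virt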

HTH

-b

> 
> Tony
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Finding performance bottlenecks

2018-05-07 Thread Ben Turner
- Original Message -
> From: "Darrell Budic" 
> To: "Vincent Royer" , t...@hoyle.me.uk
> Cc: gluster-users@gluster.org
> Sent: Thursday, May 3, 2018 5:24:53 PM
> Subject: Re: [Gluster-users] Finding performance bottlenecks
> 
> Tony’s performance sounds significantly sub par from my experience. I did
> some testing with gluster 3.12 and Ovirt 3.9, on my running production
> cluster when I enabled the glfsapi, even my pre numbers are significantly
> better than what Tony is reporting:
> 
> ———
> Before using gfapi:
> 
> ]# dd if=/dev/urandom of=test.file bs=1M count=1024
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 90.1843 s, 11.9 MB/s
> # echo 3 > /proc/sys/vm/drop_caches
> # dd if=test.file of=/dev/null
> 2097152+0 records in
> 2097152+0 records out
> 1073741824 bytes (1.1 GB) copied, 3.94715 s, 272 MB/s

This is nowhere near what I would expect.  With VMs I am able to saturate a
10G interface if I run enough IOs from enough VMs and use LVM striping (8 files
/ PVs) inside the VMs.  So that's 1200 MB/sec of aggregate throughput, and each
VM will do 200-300+ MB/sec writes and 300-400+ MB/sec reads.

I have seen this issue before, though: once it was resolved by an upgrade of
oVirt, and another time I fixed the alignment of the RAID / LVM / XFS stack.
There is one instance I haven't figured out yet :/  I want to build on a fresh
HW stack.  Make sure you have everything aligned in the storage stack,
writeback cache on the RAID controller, jumbo frames, the gluster VM group set,
and a random-IO tuned profile.  If you want to tinker with LVM striping inside
the VM, I have had success with that as well.
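
A rough sketch of that striping inside a guest (device names and sizes are
made up; -i is the number of stripes, -I the stripe size):

# eight virtual disks presented to the VM as /dev/vdb through /dev/vdi
pvcreate /dev/vd{b..i}
vgcreate vg_data /dev/vd{b..i}
lvcreate -n lv_data -i 8 -I 64k -l 100%FREE vg_data
mkfs.xfs /dev/vg_data/lv_data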

Also note:

Using /dev/urandom will significantly lower perf; it depends on how fast your
CPU can create random data.  Try /dev/zero, or FIO / IOzone / smallfile
(https://github.com/bengland2/smallfile); that will eliminate the CPU as a
bottleneck.
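
For example (a sketch; oflag=direct keeps the page cache out of the write path,
and the fio job parameters are just an illustration):

dd if=/dev/zero of=test.file bs=1M count=1024 oflag=direct
fio --name=seqwrite --rw=write --bs=1M --size=1g --numjobs=4 --direct=1 --group_reporting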

Also remember that VMs are a heavy random-IO workload; you need IOPs on your
disks to see good perf.  Also, since gluster doesn't have a MD server, that
metadata lives in xattrs on the files themselves.  This is a bit of a
double-edged sword, as those xattr operations take IOPs as well, and if the
backend is not properly aligned this can double or triple the IOPs overhead
of the small reads and writes that gluster uses in place of a MD server.

HTH

-b

> 
> # hdparm -tT /dev/vda
> 
> /dev/vda:
> Timing cached reads: 17322 MB in 2.00 seconds = 8673.49 MB/sec
> Timing buffered disk reads: 996 MB in 3.00 seconds = 331.97 MB/sec
> 
> # bonnie++ -d . -s 8G -n 0 -m pre-glapi -f -b -u root
> 
> Version 1.97 --Sequential Output-- --Sequential Input- --Random-
> Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
> pre-glapi 8G 196245 30 105331 15 962775 49 1638 34
> Latency 1578ms 1383ms 201ms 301ms
> 
> Version 1.97 --Sequential Output-- --Sequential Input- --Random-
> Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
> pre-glapi 8G 155937 27 102899 14 1030285 54 1763 45
> Latency 694ms 1333ms 114ms 229ms
> 
> (note, sequential reads seem to have been influenced by caching somewhere…)
> 
> After switching to gfapi:
> 
> # dd if=/dev/urandom of=test.file bs=1M count=1024
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 80.8317 s, 13.3 MB/s
> # echo 3 > /proc/sys/vm/drop_caches
> # dd if=test.file of=/dev/null
> 2097152+0 records in
> 2097152+0 records out
> 1073741824 bytes (1.1 GB) copied, 3.3473 s, 321 MB/s
> 
> # hdparm -tT /dev/vda
> 
> /dev/vda:
> Timing cached reads: 17112 MB in 2.00 seconds = 8568.86 MB/sec
> Timing buffered disk reads: 1406 MB in 3.01 seconds = 467.70 MB/sec
> 
> #bonnie++ -d . -s 8G -n 0 -m glapi -f -b -u root
> 
> Version 1.97 --Sequential Output-- --Sequential Input- --Random-
> Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
> glapi 8G 359100 59 185289 24 489575 31 2079 67
> Latency 160ms 355ms 36041us 185ms
> 
> Version 1.97 --Sequential Output-- --Sequential Input- --Random-
> Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
> glapi 8G 341307 57 180546 24 472572 35 2655 61
> Latency 153ms 394ms 101ms 116ms
> 
> So excellent improvement in write throughput, but the significant improvement
> in latency is what was most noticed by users. Anecdotal reports of 2x+
> performance improvements, with one remarking that it’s like having dedicated
> disks :)
> 
> This system is on my production cluster, so it’s not getting exclusive disk
> access, but this VM is not doing anything else itself. The cluster is 3 xeon
> E5-2609 v3 @ 

Re: [Gluster-users] arbiter node on client?

2018-05-07 Thread Ben Turner
One thing to remember with arbiters is that they need IOPs more than capacity.
With a VM use case this is less impactful, but workloads with lots of
smallfiles can become heavily bottlenecked at the arbiter.  Arbiters only
store metadata, not data, but metadata needs lots of small reads and writes.
I have seen many instances where the arbiter had considerably fewer IOPs than
the other bricks and it led to perf issues.  With VMs you don't have thousands
of files, so it's probably not a big deal, but in more general-purpose
workloads it's important to remember this.

HTH!

-b

- Original Message -
> From: "Dave Sherohman" 
> To: gluster-users@gluster.org
> Sent: Monday, May 7, 2018 7:21:49 AM
> Subject: Re: [Gluster-users] arbiter node on client?
> 
> On Sun, May 06, 2018 at 11:15:32AM +, Gandalf Corvotempesta wrote:
> > is possible to add an arbiter node on the client?
> 
> I've been running in that configuration for a couple months now with no
> problems.  I have 6 data + 3 arbiter bricks hosting VM disk images and
> all three of my arbiter bricks are on one of the kvm hosts.
> 
> > Can I use multiple arbiter for the same volume ? In example, one arbiter on
> > each client.
> 
> I'm pretty sure that you can only have one arbiter per subvolume, and
> I'm not even sure what the point of multiple arbiters over the same data
> would be.
> 
> In my case, I have three subvolumes (three replica pairs), which means I
> need three arbiters and those could be spread across multiple nodes, of
> course, but I don't think saying "I want 12 arbiters instead of 3!"
> would be supported.
> 
> --
> Dave Sherohman
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] arbiter node on client?

2018-05-07 Thread Gandalf Corvotempesta
On Mon, 7 May 2018 at 13:22, Dave Sherohman wrote:
> I'm pretty sure that you can only have one arbiter per subvolume, and
> I'm not even sure what the point of multiple arbiters over the same data
> would be.

Multiple arbiters add availability. I can safely shut down one hypervisor
node (where an arbiter is located) and still have a 100% working cluster
with quorum.

Is it possible to add an arbiter on the fly, or must it be configured during
volume creation?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] arbiter node on client?

2018-05-07 Thread Dave Sherohman
On Sun, May 06, 2018 at 11:15:32AM +, Gandalf Corvotempesta wrote:
> is possible to add an arbiter node on the client?

I've been running in that configuration for a couple months now with no
problems.  I have 6 data + 3 arbiter bricks hosting VM disk images and
all three of my arbiter bricks are on one of the kvm hosts.

> Can I use multiple arbiter for the same volume ? In example, one arbiter on
> each client.

I'm pretty sure that you can only have one arbiter per subvolume, and
I'm not even sure what the point of multiple arbiters over the same data
would be.

In my case, I have three subvolumes (three replica pairs), which means I
need three arbiters and those could be spread across multiple nodes, of
course, but I don't think saying "I want 12 arbiters instead of 3!"
would be supported.
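
(For illustration, a layout like mine - three replica pairs plus three
arbiters on one host - would have been created with something along these
lines; the host and brick names here are made up:

gluster volume create vmstore replica 3 arbiter 1 \
  host1:/bricks/b1 host2:/bricks/b1 arbhost:/bricks/arb1 \
  host3:/bricks/b2 host4:/bricks/b2 arbhost:/bricks/arb2 \
  host5:/bricks/b3 host6:/bricks/b3 arbhost:/bricks/arb3

Every third brick in the list becomes the arbiter for its replica set.)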

-- 
Dave Sherohman
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Compiling 3.13.2 under FreeBSD 11.1?

2018-05-07 Thread Roman Serbski
Hello,

Has anyone managed to successfully compile the latest 3.13.2 under
FreeBSD 11.1? ./autogen.sh and ./configure seem to work but make
fails:

Making all in src
  CC   glfs.lo
cc: warning: argument unused during compilation: '-rdynamic'
[-Wunused-command-line-argument]
cc: warning: argument unused during compilation: '-rdynamic'
[-Wunused-command-line-argument]
fatal error: error in backend: A @@ version cannot be undefined
cc: error: clang frontend command failed with exit code 70 (use -v to
see invocation)
FreeBSD clang version 4.0.0 (tags/RELEASE_400/final 297347) (based on
LLVM 4.0.0)
Target: x86_64-unknown-freebsd11.1

# uname -a
FreeBSD int-smtp-03 11.1-RELEASE-p8 FreeBSD 11.1-RELEASE-p8 #0
r330926: Wed Mar 14 13:45:45 CET 2018
root@int-build:/usr/obj/usr/src/sys/BSD112017110501VM  amd64

# pkg info
argp-standalone-1.3_3  Standalone version of arguments parsing
functions from GLIBC
autoconf-2.69_1Automatically configure source code on
many Un*x platforms
autoconf-wrapper-20131203  Wrapper script for GNU autoconf
automake-1.15.1GNU Standards-compliant Makefile generator
automake-wrapper-20131203  Wrapper script for GNU automake
bison-3.0.4,1  Parser generator from FSF, (mostly)
compatible with Yacc
ca_root_nss-3.36.1 Root certificate bundle from the Mozilla Project
curl-7.59.0Command line tool and library for
transferring data with URLs
cyrus-sasl-2.1.26_13   RFC  SASL (Simple Authentication
and Security Layer)
gettext-runtime-0.19.8.1_1 GNU gettext runtime libraries and programs
glib-2.50.3_2,1Some useful routines of C programming
(current stable version)
indexinfo-0.3.1Utility to regenerate the GNU info page index
libedit-3.1.20170329_2,1   Command line editor library
libevent-2.1.8_1   API for executing callback functions on
events or timeouts
libffi-3.2.1_2 Foreign Function Interface
libiconv-1.14_11   Character set conversion library
liblz4-1.8.1.2,1   LZ4 compression library, lossless and very fast
libnghttp2-1.31.1  HTTP/2.0 C Library
libtool-2.4.6  Generic shared library support script
liburcu-0.10.0 Userspace read-copy-update (RCU) data
synchronization library
m4-1.4.18,1GNU M4
mysql57-client-5.7.22_1Multithreaded SQL database (client)
pcre-8.40_1Perl Compatible Regular Expressions library
perl5-5.26.2   Practical Extraction and Report Language
pkg-1.10.5 Package manager
pkgconf-1.4.2,1Utility to help to configure compiler
and linker flags
protobuf-3.5.2 Data interchange format library
python2-2_3The "meta-port" for version 2 of the
Python interpreter
python27-2.7.14_1  Interpreted object-oriented programming language
readline-7.0.3_1   Library for editing command lines as
they are typed
sqlite3-3.23.1 SQL database engine in a C library

# clang -v
FreeBSD clang version 4.0.0 (tags/RELEASE_400/final 297347) (based on
LLVM 4.0.0)
Target: x86_64-unknown-freebsd11.1
Thread model: posix
InstalledDir: /usr/bin

./autogen.sh > https://pastebin.com/BJ16SmTM

./configure > https://pastebin.com/4SybcRTZ

make > https://pastebin.com/12YLjPid

glfs-8a2844.sh > https://pastebin.com/q3q0vWVS

glfs-8a2844.c > is too big. Please let me know whether you'd be
interested to see it as well.

Thank you in advance.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users