Re: [Gluster-users] Can I do SSL with Gluster v3.4.2 ?

2017-02-15 Thread Kaushal M
On Thu, Feb 16, 2017 at 3:48 AM, dev  wrote:
> I'm trying to set up SSL transport with glusterfs, following the guide
> here: http://blog.gluster.org/author/zbyszek/
>
> I've copied the resulting ca, pem and key files to my server
> (to /etc/ssl) as well as a copy on my gluster client. The link
> above does not explain the proper mount options for mounting the
> volume on the client however.
>
> I've tried searching for the correct options to add to the mount
> command, however nothing has turned up yet. I have found some
> options to place in a volume file such as:
>
>    option transport.socket.ssl-enabled on
>    option transport tcp
>    option direct-io-mode disable
>    option transport.socket.ssl-own-cert /etc/ssl/glusterfs.pem
>    option transport.socket.ssl-private-key /etc/ssl/glusterfs.key
>    option transport.socket.ssl-ca-list /etc/ssl/glusterfs.ca
>
> but mounting with:
>
>    glusterfs -f /etc/gluster-pm-vol /mnt/ib-data/hydra
>
> only gives an error in the logfile such as:
>    ...
>    [socket.c:3594:socket_init] 0-pm1-dump: could not load our cert
>    ...
>
> I've started to investigate ACLs on the server, but attempting to
> set auth.ssl-allow results in an error as well.
>
>   # gluster volume info
>   Volume Name: pm1-dump
>   ...
>   client.ssl: on
>   ...
>
> # gluster volume set pm1-dump auth.ssl-allow foo
> volume set: failed: option : auth.ssl-allow does not exist
> Did you mean auth.allow?
>
> # gluster --version
> glusterfs 3.4.2 built on Jan 14 2014 18:05:37
>
>
> Is this version too old (Ubuntu 14.04) to use SSL, or am I missing
> something?

This version is just too old. You can get up-to-date packages for
Ubuntu from the Gluster community PPA at https://launchpad.net/~gluster .
I suggest you use glusterfs-3.8, which is the latest version to have
packages for Trusty.
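
For reference, on 3.8 the TLS setup is done through volume options
rather than a hand-edited volfile. A rough sketch, assuming the
certificate layout from that blog post (the volume name and the allowed
CNs below are placeholders):

    # default certificate locations on every server and client:
    #   /etc/ssl/glusterfs.pem, /etc/ssl/glusterfs.key, /etc/ssl/glusterfs.ca
    gluster volume set pm1-dump client.ssl on
    gluster volume set pm1-dump server.ssl on

    # auth.ssl-allow exists in 3.8 and takes a list of certificate CNs
    gluster volume set pm1-dump auth.ssl-allow 'client1,client2'

    # optional: encrypt the management path too (create on all nodes,
    # then restart glusterd)
    touch /var/lib/glusterd/secure-access

    # clients then mount as usual; no special mount option is needed
    mount -t glusterfs server1:/pm1-dump /mnt/ib-data/hydra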

>
> Thanks in advance


[Gluster-users] GlusterFS + Ubuntu + Infiniband RDMA

2017-02-15 Thread Deepak Naidu
Hello,

Does anyone have a working setup of GlusterFS on Ubuntu 14.04.5 LTS using
InfiniBand & RDMA?

I am planning to use InfiniBand (IPoIB) for the cluster interconnect; how
would RDMA be configured? Any info is appreciated.
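
For reference, this is the rough sketch I have in mind so far (untested;
hostnames, brick paths, and the volume name are placeholders, and it
assumes the glusterfs-rdma package is installed on all nodes):

    # create a volume that registers both TCP and RDMA transports
    gluster volume create gv0 transport tcp,rdma \
        server1:/bricks/b1/gv0 server2:/bricks/b1/gv0
    gluster volume start gv0

    # mount it over RDMA from a client
    mount -t glusterfs -o transport=rdma server1:/gv0 /mnt/gv0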

--
Deepak


[Gluster-users] Can I do SSL with Gluster v3.4.2 ?

2017-02-15 Thread dev
I'm trying to set up SSL transport with glusterfs, following the guide
here: http://blog.gluster.org/author/zbyszek/

I've copied the resulting ca, pem and key files to my server
(to /etc/ssl) as well as a copy on my gluster client. The link
above does not explain the proper mount options for mounting the
volume on the client however.

I've tried searching for the correct options to add to the mount
command, however nothing has turned up yet. I have found some
options to place in a volume file such as:

   option transport.socket.ssl-enabled on
   option transport tcp
   option direct-io-mode disable
   option transport.socket.ssl-own-cert /etc/ssl/glusterfs.pem
   option transport.socket.ssl-private-key /etc/ssl/glusterfs.key
   option transport.socket.ssl-ca-list /etc/ssl/glusterfs.ca

but mounting with:

   glusterfs -f /etc/gluster-pm-vol /mnt/ib-data/hydra

only gives an error in the logfile such as:
   ...
   [socket.c:3594:socket_init] 0-pm1-dump: could not load our cert
   ...

I've started to investigate ACLs on the server, but attempting to
set auth.ssl-allow results in an error as well.

  # gluster volume info
  Volume Name: pm1-dump
  ...
  client.ssl: on
  ...

# gluster volume set pm1-dump auth.ssl-allow foo
volume set: failed: option : auth.ssl-allow does not exist
Did you mean auth.allow?

# gluster --version
glusterfs 3.4.2 built on Jan 14 2014 18:05:37


Is this version too old (Ubuntu 14.04) to use SSL, or am I missing
something?

Thanks in advance


[Gluster-users] 90 Brick/Server suggestions?

2017-02-15 Thread Serkan Çoban
Hi,

We are evaluating the Dell DSS7000 chassis with 90 disks.
Has anyone used that many bricks per server?
Any suggestions or advice?

Thanks,
Serkan


[Gluster-users] GlusterFS Performance Test

2017-02-15 Thread David Spisla
Hello Gluster-Community,

we want to run some performance tests with Gluster (4 nodes, also with HA),
and now there is a discussion about the hardware configuration.
Does anyone have experience with this? Is it better to have more CPU cores,
a higher CPU clock rate, or more RAM?

AFAIK the brick processes use a lot of RAM, and I have also read that the
gluster daemon uses a lot of CPU capacity.
But maybe some of you have more specific information.

We also had a discussion with Kaushal at FOSDEM about this issue.
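
For concreteness, a rough sketch of how I'd watch the daemons' RAM and
CPU during a test run (the volume name is a placeholder):

    # per-brick memory usage as reported by gluster itself
    gluster volume status myvol mem

    # live CPU and resident memory of the brick daemons
    top -p "$(pgrep -d, -f glusterfsd)"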

Greetings

David Spisla
Software Developer
david.spi...@iternity.com
www.iTernity.com
Tel:   +49 761-590 34 841


iTernity GmbH
Heinrich-von-Stephan-Str. 21
79100 Freiburg - Germany
---
You can reach our technical support at +49 761-387 36 66
---
Managing Director: Ralf Steinemann
Registered at the district court of Freiburg: HRB no. 701332
VAT ID de-24266431



[Gluster-users] Optimal shard size & self-heal algorithm for VM hosting?

2017-02-15 Thread Gambit15
Hey guys,
 I keep seeing different recommendations for the best shard sizes for VM
images, from 64MB to 512MB.

What's the benefit of smaller vs. larger shards?
I'm guessing smaller shards are quicker to heal, but larger shards will
provide better sequential I/O for single clients? Anything else?

I also usually see that "cluster.data-self-heal-algorithm: full" is generally
recommended in these cases. Why not "diff"? Is it simply to reduce CPU load
when there's plenty of excess network capacity?
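
For reference, these are the two knobs I mean, as I'd set them (the
volume name is a placeholder, and the values are just the ones
mentioned above):

    # sharding must be enabled before the VM images are created
    gluster volume set myvol features.shard on
    gluster volume set myvol features.shard-block-size 64MB

    # "full" copies whole shards on heal; "diff" checksums ranges and
    # copies only changed blocks (less network, more CPU)
    gluster volume set myvol cluster.data-self-heal-algorithm full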

Thanks in advance,
Doug

Re: [Gluster-users] CentOS Storage SIG Repo for 3.8 is behind 3 versions (3.8.5). Is it still being used and can I help?

2017-02-15 Thread Niels de Vos
On Wed, Feb 15, 2017 at 02:53:26PM +0100, Pavel Szalbot wrote:
> Hi, tested it with 3.8.8 on client (CentOS) and server (Ubuntu) and
> everything is OK now.

Awesome, many thanks for testing and reporting back the results.

Niels



Re: [Gluster-users] CentOS Storage SIG Repo for 3.8 is behind 3 versions (3.8.5). Is it still being used and can I help?

2017-02-15 Thread Pavel Szalbot
Hi, tested it with 3.8.8 on client (CentOS) and server (Ubuntu) and
everything is OK now.

-ps


Re: [Gluster-users] CentOS Storage SIG Repo for 3.8 is behind 3 versions (3.8.5). Is it still being used and can I help?

2017-02-15 Thread Pavel Szalbot
Hi Daryl,

I must have missed your reply and found out about it when reading about
3.8.9 and searching in gluster-users history.

I will test the same setup with gluster 3.8.8, i.e. libvirt
2.0.0-10.el7_3.4 and glusterfs 3.8.8-1.el7 on the client and gluster
3.8.8 on the servers (Ubuntu), and let you know.

This is the libvirt log for an instance that used the gluster storage
backend (libvirt 2.0.0, gluster client 3.8.5 and later 3.8.7, probably
3.8.5 on the servers, not sure):

[2017-01-03 17:10:58.155566] I [MSGID: 104045] [glfs-master.c:91:notify]
0-gfapi: New graph 6e6f6465-342d-6d69-6372-6f312e707267 (0) coming up
[2017-01-03 17:10:58.155615] I [MSGID: 114020] [client.c:2356:notify]
0-gv_openstack_0-client-6: parent translators are ready, attempting connect
on transport
[2017-01-03 17:10:58.186043] I [MSGID: 114020] [client.c:2356:notify]
0-gv_openstack_0-client-7: parent translators are ready, attempting connect
on transport
[2017-01-03 17:10:58.186518] I [rpc-clnt.c:1947:rpc_clnt_reconfig]
0-gv_openstack_0-client-6: changing port to 49156 (from 0)
[2017-01-03 17:10:58.215411] I [rpc-clnt.c:1947:rpc_clnt_reconfig]
0-gv_openstack_0-client-7: changing port to 49153 (from 0)
[2017-01-03 17:10:58.243706] I [MSGID: 114057]
[client-handshake.c:1446:select_server_supported_programs]
0-gv_openstack_0-client-6: Using Program GlusterFS 3.3, Num (1298437),
Version (330)
[2017-01-03 17:10:58.244215] I [MSGID: 114046]
[client-handshake.c:1222:client_setvolume_cbk] 0-gv_openstack_0-client-6:
Connected to gv_openstack_0-client-6, attached to remote volume
'/export/gfs_0/gv_openstack_0_brick'.
[2017-01-03 17:10:58.244235] I [MSGID: 114047]
[client-handshake.c:1233:client_setvolume_cbk] 0-gv_openstack_0-client-6:
Server and Client lk-version numbers are not same, reopening the fds
[2017-01-03 17:10:58.244318] I [MSGID: 108005]
[afr-common.c:4301:afr_notify] 0-gv_openstack_0-replicate-0: Subvolume
'gv_openstack_0-client-6' came back up; going online.
[2017-01-03 17:10:58.244437] I [MSGID: 114035]
[client-handshake.c:201:client_set_lk_version_cbk]
0-gv_openstack_0-client-6: Server lk version = 1
[2017-01-03 17:10:58.246940] I [MSGID: 114057]
[client-handshake.c:1446:select_server_supported_programs]
0-gv_openstack_0-client-7: Using Program GlusterFS 3.3, Num (1298437),
Version (330)
[2017-01-03 17:10:58.247252] I [MSGID: 114046]
[client-handshake.c:1222:client_setvolume_cbk] 0-gv_openstack_0-client-7:
Connected to gv_openstack_0-client-7, attached to remote volume
'/export/gfs_0/gv_openstack_0_brick'.
[2017-01-03 17:10:58.247273] I [MSGID: 114047]
[client-handshake.c:1233:client_setvolume_cbk] 0-gv_openstack_0-client-7:
Server and Client lk-version numbers are not same, reopening the fds
[2017-01-03 17:10:58.257855] I [MSGID: 114035]
[client-handshake.c:201:client_set_lk_version_cbk]
0-gv_openstack_0-client-7: Server lk version = 1
[2017-01-03 17:10:58.259641] I [MSGID: 104041]
[glfs-resolve.c:885:__glfs_active_subvol] 0-gv_openstack_0: switched to
graph 6e6f6465-342d-6d69-6372-6f312e707267 (0)
[2017-01-03 17:10:58.439897] I [MSGID: 104045] [glfs-master.c:91:notify]
0-gfapi: New graph 6e6f6465-342d-6d69-6372-6f312e707267 (0) coming up
[2017-01-03 17:10:58.439929] I [MSGID: 114020] [client.c:2356:notify]
0-gv_openstack_0-client-6: parent translators are ready, attempting connect
on transport
[2017-01-03 17:10:58.519082] I [MSGID: 114020] [client.c:2356:notify]
0-gv_openstack_0-client-7: parent translators are ready, attempting connect
on transport
[2017-01-03 17:10:58.519527] I [rpc-clnt.c:1947:rpc_clnt_reconfig]
0-gv_openstack_0-client-6: changing port to 49156 (from 0)
[2017-01-03 17:10:58.550482] I [MSGID: 114057]
[client-handshake.c:1446:select_server_supported_programs]
0-gv_openstack_0-client-6: Using Program GlusterFS 3.3, Num (1298437),
Version (330)
[2017-01-03 17:10:58.550997] I [MSGID: 114046]
[client-handshake.c:1222:client_setvolume_cbk] 0-gv_openstack_0-client-6:
Connected to gv_openstack_0-client-6, attached to remote volume
'/export/gfs_0/gv_openstack_0_brick'.
[2017-01-03 17:10:58.551021] I [MSGID: 114047]
[client-handshake.c:1233:client_setvolume_cbk] 0-gv_openstack_0-client-6:
Server and Client lk-version numbers are not same, reopening the fds
[2017-01-03 17:10:58.551089] I [MSGID: 108005]
[afr-common.c:4301:afr_notify] 0-gv_openstack_0-replicate-0: Subvolume
'gv_openstack_0-client-6' came back up; going online.
[2017-01-03 17:10:58.551199] I [MSGID: 114035]
[client-handshake.c:201:client_set_lk_version_cbk]
0-gv_openstack_0-client-6: Server lk version = 1
[2017-01-03 17:10:58.554413] I [rpc-clnt.c:1947:rpc_clnt_reconfig]
0-gv_openstack_0-client-7: changing port to 49153 (from 0)
[2017-01-03 17:10:58.600956] I [MSGID: 114057]
[client-handshake.c:1446:select_server_supported_programs]
0-gv_openstack_0-client-7: Using Program GlusterFS 3.3, Num (1298437),
Version (330)
[2017-01-03 17:10:58.601276] I [MSGID: 114046]
[client-handshake.c:1222:client_setvolume_cbk] 0-gv_openstack_0-client-7:
Connected to gv_openstack_0-client-7, attached to 

Re: [Gluster-users] CentOS Storage SIG Repo GlusterFS 3.8.9 testing results

2017-02-15 Thread Niels de Vos
On Tue, Feb 14, 2017 at 10:00:18AM -0800, Daryl lee wrote:
> Niels,
> Thanks for submitting the GlusterFS 3.8.9 package to the testing repo.  I
> have my testing feedback to provide, but I'm not sure where you want me to
> post this (should I subscribe to the package mailing list and post it
> there?). For now I'll just post it in gluster-users in case anyone is
> curious about the results.

Many thanks, Daryl!
Testing feedback is much appreciated; the most important thing to me is
that the update does not behave any worse (hopefully better!) than the
previous release.

I'll mark the packages for release to the stable repository when I send
out the release announcement. Our mighty Kaleb normally takes care of
packaging the releases for almost all non-CentOS distributions. He seems
to be on holiday, and with few (or no?) other volunteers the other
distributions will have to wait a little longer (HINT!).

Also, if anyone is interested in writing automated tests for the
releases, please let us know! The CentOS Storage SIG packages can be
tested in the CentOS CI in a real multi-server environment. See
https://wiki.centos.org/SpecialInterestGroup/Storage/Gluster/CI for more
details.

Thanks again,
Niels


> Basic testing results for GlusterFS v3.8.9 from the centos-gluster38-test
> repository of the CentOS Storage SIG.
> Deployment Targets:  5 test servers running CentOS Linux release 7.3.1611.
> 
> Testing Breakdown: 
> QTY 3 - GlusterFS Brick/Volume servers running REPLICA 3 ARBITER 1 updated
> the following packages:
> 
> 
> ======================================================================
> Updating:
>  glusterfs                 x86_64   3.8.9-1.el7   centos-gluster38-test   510 k
>  glusterfs-api             x86_64   3.8.9-1.el7   centos-gluster38-test    89 k
>  glusterfs-cli             x86_64   3.8.9-1.el7   centos-gluster38-test   183 k
>  glusterfs-client-xlators  x86_64   3.8.9-1.el7   centos-gluster38-test   782 k
>  glusterfs-fuse            x86_64   3.8.9-1.el7   centos-gluster38-test   133 k
>  glusterfs-libs            x86_64   3.8.9-1.el7   centos-gluster38-test   379 k
>  glusterfs-server          x86_64   3.8.9-1.el7   centos-gluster38-test   1.4 M
> ======================================================================
> Tests:
> # Package DOWNLOAD/UPDATE/CLEANUP from repo - OK w/ warnings*
> *  warning during update of glusterfs-server-3.8.9-1.el7.x86_64:
> existing gluster .vol files were backed up to .rpmsave.  This is expected.
> # Bricks on all 3 servers started - OK
> # Self-Heal Daemon on all 3 servers started - OK
> # First replica Self-Heal - OK
> # Second replica Self-Heal - OK
> # Arbiter replica Self-Heal - OK
> # No errors in GlusterFS service and brick logs.
> 
> 
> QTY 2 - GlusterFS clients deployed the following packages:
> 
> 
> ======================================================================
> Updating:
>  glusterfs                 x86_64   3.8.9-1.el7   centos-gluster38-test   510 k
>  glusterfs-api             x86_64   3.8.9-1.el7   centos-gluster38-test    89 k
>  glusterfs-client-xlators  x86_64   3.8.9-1.el7   centos-gluster38-test   782 k
>  glusterfs-fuse            x86_64   3.8.9-1.el7   centos-gluster38-test   133 k
>  glusterfs-libs            x86_64   3.8.9-1.el7   centos-gluster38-test   379 k
> ======================================================================
> Tests:
> # Package DOWNLOAD/UPDATE/CLEANUP from repo - OK 
> # Basic FUSE mount RW test to remote GlusterFS volume - OK 
> # No errors in QEMU or GlusterFS FUSE logs 
> # basic functionality test of libvirt gfapi based KVM Virtual Machine
> (Ubuntu & Windows) - OK 
> # basic VM performance test using DD and hdparm (Linux) - OK 
> 
> 
> 
> 
> -Daryl
> 
> 
> 


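For anyone who wants to reproduce the basic client-side checks from
Daryl's report, a rough sketch (the volume name, mount point, and guest
disk device are placeholders):

    # basic FUSE mount read/write test
    mount -t glusterfs server1:/testvol /mnt/gluster
    echo ok > /mnt/gluster/smoke && cat /mnt/gluster/smoke
    rm /mnt/gluster/smoke

    # rough sequential-write and read checks inside a VM, as in the report
    dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=1024 oflag=direct
    hdparm -tT /dev/vda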