Re: [Gluster-users] df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)

2018-02-28 Thread Nithya Balachandran
Hi Jose,

On 28 February 2018 at 22:31, Jose V. Carrión  wrote:

> Hi Nithya,
>
> My initial setup was composed of 2 similar nodes: stor1data and stor2data.
> A month ago I expanded both volumes with a new node: stor3data (2 bricks
> per volume).
> Of course, after adding the new peer with its bricks I ran the 'rebalance
> force' operation. The task finished successfully (you can see the info below)
> and the number of files on the 3 nodes was very similar.
>
> For volumedisk1 I only have files of 500MB and they are continuously
> written in sequential mode. The filename pattern of the written files is:
>
> run.node1..rd
> run.node2..rd
> run.node1.0001.rd
> run.node2.0001.rd
> run.node1.0002.rd
> run.node2.0002.rd
> ...
> ...
> run.node1.X.rd
> run.node2.X.rd
>
> (  X ranging from  to infinite )
>
> Curiously, stor1data and stor2data maintain similar usage in bytes:
>
> Filesystem   1K-blocks     Used         Available    Use%  Mounted on
> /dev/sdc1    52737613824   17079174264  35658439560  33%   /mnt/glusterfs/vol1 -> stor1data
> /dev/sdc1    52737613824   17118810848  35618802976  33%   /mnt/glusterfs/vol1 -> stor2data
>
> However, the usage on stor3data differs by about 1 TB:
> Filesystem   1K-blocks     Used         Available    Use%  Mounted on
> /dev/sdc1    52737613824   15479191748  37258422076  30%   /mnt/disk_c/glusterfs/vol1 -> stor3data
> /dev/sdd1    52737613824   15566398604  37171215220  30%   /mnt/disk_d/glusterfs/vol1 -> stor3data
>
> Thinking in inodes:
>
> Filesystem   Inodes       IUsed    IFree       IUse%  Mounted on
> /dev/sdc1    5273970048   851053   5273118995  1%     /mnt/glusterfs/vol1 -> stor1data
> /dev/sdc1    5273970048   849388   5273120660  1%     /mnt/glusterfs/vol1 -> stor2data
>
> /dev/sdc1    5273970048   846877   5273123171  1%     /mnt/disk_c/glusterfs/vol1 -> stor3data
> /dev/sdd1    5273970048   845250   5273124798  1%     /mnt/disk_d/glusterfs/vol1 -> stor3data
>
> 851053 (stor1) - 845250 (stor3) = a difference of 5803 files!
>

The inode numbers are a little misleading here - gluster uses some to
create its own internal files and directory structures. Based on the
average file size, I think this would actually work out to a difference of
around 2000 files.
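
If you want to compare the data file counts directly, counting regular files on
each brick while skipping gluster's internal .glusterfs directory is more telling
than the raw inode numbers (a rough sketch; the brick paths are taken from the
volume status output below, run the matching command on each node):

# find /mnt/glusterfs/vol1/brick1 -path '*/.glusterfs' -prune -o -type f -print | wc -l
# find /mnt/disk_c/glusterfs/vol1/brick1 -path '*/.glusterfs' -prune -o -type f -print | wc -l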


>
> In addition, correct me if I'm wrong, but stor3data should have a 50%
> probability of storing any new file (even taking into account the DHT
> algorithm and the filename patterns).
>
Theoretically yes, but again, it depends on the filenames and their hash
distribution.
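
If you are curious how the hash ranges ended up being split, the DHT layout for a
directory can be read on the brick side with getfattr (a sketch; <directory> is a
placeholder for the same directory path checked on each brick, and the raw hex
ranges need a little decoding):

# getfattr -n trusted.glusterfs.dht -e hex /mnt/glusterfs/vol1/brick1/<directory>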

Please send us the output of the following for the volume:

gluster volume rebalance <VOLNAME> status
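
For example, with the volume names from your status output, that would be:

# gluster volume rebalance volumedisk0 status
# gluster volume rebalance volumedisk1 status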

Regards,
Nithya


> Thanks,
> Greetings.
>
> Jose V.
>
> Status of volume: volumedisk0
> Gluster process                                     TCP Port  RDMA Port  Online  Pid
> -------------------------------------------------------------------------------------
> Brick stor1data:/mnt/glusterfs/vol0/brick1          49152     0          Y       13533
> Brick stor2data:/mnt/glusterfs/vol0/brick1          49152     0          Y       13302
> Brick stor3data:/mnt/disk_b1/glusterfs/vol0/brick1  49152     0          Y       17371
> Brick stor3data:/mnt/disk_b2/glusterfs/vol0/brick1  49153     0          Y       17391
> NFS Server on localhost                             N/A       N/A        N       N/A
> NFS Server on stor3data                             N/A       N/A        N       N/A
> NFS Server on stor2data                             N/A       N/A        N       N/A
>
> Task Status of Volume volumedisk0
> -------------------------------------------------------------------------------------
> Task                 : Rebalance
> ID                   : 7f5328cb-ed25-4627-9196-fb3e29e0e4ca
> Status               : completed
>
> Status of volume: volumedisk1
> Gluster process                                     TCP Port  RDMA Port  Online  Pid
> -------------------------------------------------------------------------------------
> Brick stor1data:/mnt/glusterfs/vol1/brick1          49153     0          Y       13579
> Brick stor2data:/mnt/glusterfs/vol1/brick1          49153     0          Y       13344
> Brick stor3data:/mnt/disk_c/glusterfs/vol1/brick1   49154     0          Y       17439
> Brick stor3data:/mnt/disk_d/glusterfs/vol1/brick1   49155     0          Y       17459
> NFS Server on localhost                             N/A       N/A        N       N/A
> NFS Server on stor3data                             N/A       N/A        N       N/A
> NFS Server on stor2data                             N/A       N/A        N       N/A
>
> Task Status of Volume volumedisk1
> -------------------------------------------------------------------------------------
> Task 

Re: [Gluster-users] Intermittent mount disconnect due to socket poller error

2018-02-28 Thread Raghavendra Gowdappa
Is it possible to attach the log files of the problematic client and the bricks?
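
Something along these lines should collect them, assuming the default log
locations shown in your excerpts (just a sketch, adjust the paths if you log
elsewhere). On the client:

# tar czf VOL-client-log.tar.gz /var/log/glusterfs/mnt-VOL.log

and on each server:

# tar czf VOL-brick-logs.tar.gz /var/log/glusterfs/bricks/*.log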

On Thu, Mar 1, 2018 at 3:00 AM, Ryan Lee  wrote:

> We've been on the Gluster 3.7 series for several years with things pretty
> stable.  Given that it's reached EOL, yesterday I upgraded to 3.13.2.
> Every Gluster mount and server was disabled then brought back up after the
> upgrade, changing the op-version to 31302 and then trying it all out.
>
> It went poorly.  Every sizable read and write (hundreds of MB) led to
> 'Transport endpoint not connected' errors on the command line and immediate
> unavailability of the mount.  After unsuccessfully trying to search for
> similar problems with solutions, I ended up downgrading to 3.12.6 and
> changing the op-version to 31202.  That brought us back to usability with
> the majority of those operations succeeding enough to consider it online,
> but there are still occasional mount disconnects that we never saw with 3.7
> - about 6 in the past 18 hours.  It seems these disconnects never recover
> on their own unless manually re-mounted.  Manually remounting
> reconnects immediately.  They only disconnect the affected client, though
> some simultaneous disconnects have occurred due to simultaneous activity.
> The lower-level log info seems to indicate a socket problem, potentially
> broken on the client side based on timing (but the timing is narrow, and I
> won't claim the clocks are that well synchronized across all our servers).
> The client and one server claim a socket polling error with no data
> available, and the other server claims a writev error.  This seems to lead
> the client to the 'all subvolumes are down' state, even though all other
> clients are still connected.  Has anybody run into this?  Did I miss
> anything moving so many versions ahead?
>
> I've included the output of volume info and some excerpts from the logs.
>  We have two servers running glusterd and two replica volumes with a brick
> on each server.  Both experience disconnects; there are about 10 clients
> for each, with one using both.  We use SSL over internal IPv4. Names in all
> caps were replaced, as were IP addresses.
>
> Let me know if there's anything else I can provide.
>
> % gluster v info VOL
> Volume Name: VOL
> Type: Replicate
> Volume ID: 3207155f-02c6-447a-96c4-5897917345e0
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: SERVER1:/glusterfs/VOL-brick1/data
> Brick2: SERVER2:/glusterfs/VOL-brick2/data
> Options Reconfigured:
> config.transport: tcp
> features.selinux: off
> transport.address-family: inet
> nfs.disable: on
> client.ssl: on
> performance.readdir-ahead: on
> auth.ssl-allow: [NAMES, including CLIENT]
> server.ssl: on
> ssl.certificate-depth: 3
>
> Log excerpts (there was nothing related in glusterd.log):
>
> CLIENT:/var/log/glusterfs/mnt-VOL.log
> [2018-02-28 19:35:58.378334] E [socket.c:2648:socket_poller]
> 0-VOL-client-1: socket_poller SERVER2:49153 failed (No data available)
> [2018-02-28 19:35:58.477154] E [MSGID: 108006]
> [afr-common.c:5164:__afr_handle_child_down_event] 0-VOL-replicate-0: All
> subvolumes are down. Going offline until atleast one of them comes back up.
> [2018-02-28 19:35:58.486146] E [MSGID: 101046]
> [dht-common.c:1501:dht_lookup_dir_cbk] 0-VOL-dht: dict is null <67 times>
> 
> [2018-02-28 19:38:06.428607] E [socket.c:2648:socket_poller]
> 0-VOL-client-1: socket_poller SERVER2:24007 failed (No data available)
> [2018-02-28 19:40:12.548650] E [socket.c:2648:socket_poller]
> 0-VOL-client-1: socket_poller SERVER2:24007 failed (No data available)
>
> 
>
>
> SERVER2:/var/log/glusterfs/bricks/VOL-brick2.log
> [2018-02-28 19:35:58.379953] E [socket.c:2632:socket_poller]
> 0-tcp.VOL-server: poll error on socket
> [2018-02-28 19:35:58.380530] I [MSGID: 115036]
> [server.c:527:server_rpc_notify] 0-VOL-server: disconnecting connection
> from CLIENT-30688-2018/02/28-03:11:39:784734-VOL-client-1-0-0
> [2018-02-28 19:35:58.380932] I [socket.c:3672:socket_submit_reply]
> 0-tcp.VOL-server: not connected (priv->connected = -1)
> [2018-02-28 19:35:58.380960] E [rpcsvc.c:1364:rpcsvc_submit_generic]
> 0-rpc-service: failed to submit message (XID: 0xa4e, Program: GlusterFS
> 3.3, ProgVers: 330, Proc: 25) to rpc-transport (tcp.uploads-server)
> [2018-02-28 19:35:58.381124] E [server.c:195:server_submit_reply]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.12.6/xlator/debug/io-stats.so(+0x1ae6a)
> [0x7f97bd37ee6a] -->/usr/lib/x86_64-linux-gnu/g
> lusterfs/3.12.6/xlator/protocol/server.so(+0x1d4c8) [0x7f97bcf1f4c8]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/3.12.6/xlator/protocol/server.so(+0x8bd5)
> [0x7f97bcf0abd5] ) 0-: Reply submission failed
> [2018-02-28 19:35:58.381196] I [MSGID: 101055]
> [client_t.c:443:gf_client_unref] 0-VOL-server: Shutting down connection
> CLIENT-30688-2018/02/28-03:11:39:784734-VOL-client-1-0-0
> [2018-02-28 19:40:58.351350] I [addr.c:55:compare_addr_and_update]
> 

[Gluster-users] Gluster Monthly Newsletter, February 2018

2018-02-28 Thread Amye Scavarda
Gluster Monthly Newsletter, February 2018


Special thanks to all of our contributors working to get Gluster 4.0 out
into the wild.

Over the coming weeks, we’ll be posting on the blog about some of the new
improvements coming out in Gluster 4.0, so watch for that!


Glustered: A Gluster Community Gathering is happening on March 8, in
connection with Incontro DevOps 2018. More details here:
http://www.incontrodevops.it/events/glustered-2018/


Event planning for next year:

As part of the Community Working Group issue queue, we’re asking for
recommendations of events that Gluster should be focusing on.
https://github.com/gluster/community/issues/7 has more details, we’re
planning from March 2018 through March 2019 and we’d like your input! Where
should Gluster be at?


Want swag for your meetup? https://www.gluster.org/events/ has a contact
form to let us know about your Gluster meetup! We'd love to hear
about Gluster presentations coming up, conference talks and gatherings. Let
us know!


Top Contributing Companies:  Red Hat,  Gluster, Inc.,  Facebook, Gentoo
Linux, Samsung

Top Contributors in February: N Balachandran, Poornima G, Amar Tumballi,
Atin Mukherjee, Vitalii Koriakov


Noteworthy threads:

[Gluster-users] [FOSDEM'18] Optimizing Software Defined Storage for the Age
of Flash

http://lists.gluster.org/pipermail/gluster-users/2018-February/033512.html

[Gluster-users] Heketi v6.0.0 available for download

http://lists.gluster.org/pipermail/gluster-users/2018-February/033559.html

[Gluster-users] Glustered 2018 schedule

http://lists.gluster.org/pipermail/gluster-users/2018-February/033630.html

[Gluster-devel] New web proxy for our web application

http://lists.gluster.org/pipermail/gluster-devel/2018-February/054501.html

[Gluster-devel] Announcing Softserve- serve yourself a VM

http://lists.gluster.org/pipermail/gluster-devel/2018-February/054513.html


Upcoming CFPs:

LinuxCon China - March 4, 2018

https://www.lfasiallc.com/linuxcon-containercon-cloudopen-china/cfp

Open Source Summit North America - Sunday, April 29, 2018

https://events.linuxfoundation.org/events/open-source-summit-north-america-2018/program/cfp/


--
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Intermittent mount disconnect due to socket poller error

2018-02-28 Thread Ryan Lee
We've been on the Gluster 3.7 series for several years with things 
pretty stable.  Given that it's reached EOL, yesterday I upgraded to 
3.13.2.  Every Gluster mount and server was disabled then brought back 
up after the upgrade, changing the op-version to 31302 and then trying 
it all out.


It went poorly.  Every sizable read and write (hundreds of MB) led to 
'Transport endpoint not connected' errors on the command line and 
immediate unavailability of the mount.  After unsuccessfully trying to 
search for similar problems with solutions, I ended up downgrading to 
3.12.6 and changing the op-version to 31202.  That brought us back to 
usability with the majority of those operations succeeding enough to 
consider it online, but there are still occasional mount disconnects 
that we never saw with 3.7 - about 6 in the past 18 hours.  It seems 
these disconnects never recover on their own unless manually 
re-mounted.  Manually remounting reconnects immediately.  They only 
disconnect the affected client, though some simultaneous disconnects 
have occurred due to simultaneous activity.  The lower-level log info 
seems to indicate a socket problem, potentially broken on the client 
side based on timing (but the timing is narrow, and I won't claim the 
clocks are that well synchronized across all our servers).  The client 
and one server claim a socket polling error with no data available, and 
the other server claims a writev error.  This seems to lead the client 
to the 'all subvolumes are down' state, even though all other clients 
are still connected.  Has anybody run into this?  Did I miss anything 
moving so many versions ahead?


I've included the output of volume info and some excerpts from the logs. 
  We have two servers running glusterd and two replica volumes with a 
brick on each server.  Both experience disconnects; there are about 10 
clients for each, with one using both.  We use SSL over internal IPv4. 
Names in all caps were replaced, as were IP addresses.


Let me know if there's anything else I can provide.

% gluster v info VOL
Volume Name: VOL
Type: Replicate
Volume ID: 3207155f-02c6-447a-96c4-5897917345e0
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: SERVER1:/glusterfs/VOL-brick1/data
Brick2: SERVER2:/glusterfs/VOL-brick2/data
Options Reconfigured:
config.transport: tcp
features.selinux: off
transport.address-family: inet
nfs.disable: on
client.ssl: on
performance.readdir-ahead: on
auth.ssl-allow: [NAMES, including CLIENT]
server.ssl: on
ssl.certificate-depth: 3

Log excerpts (there was nothing related in glusterd.log):

CLIENT:/var/log/glusterfs/mnt-VOL.log
[2018-02-28 19:35:58.378334] E [socket.c:2648:socket_poller] 
0-VOL-client-1: socket_poller SERVER2:49153 failed (No data available)
[2018-02-28 19:35:58.477154] E [MSGID: 108006] 
[afr-common.c:5164:__afr_handle_child_down_event] 0-VOL-replicate-0: All 
subvolumes are down. Going offline until atleast one of them comes back up.
[2018-02-28 19:35:58.486146] E [MSGID: 101046] 
[dht-common.c:1501:dht_lookup_dir_cbk] 0-VOL-dht: dict is null <67 times>


[2018-02-28 19:38:06.428607] E [socket.c:2648:socket_poller] 
0-VOL-client-1: socket_poller SERVER2:24007 failed (No data available)
[2018-02-28 19:40:12.548650] E [socket.c:2648:socket_poller] 
0-VOL-client-1: socket_poller SERVER2:24007 failed (No data available)





SERVER2:/var/log/glusterfs/bricks/VOL-brick2.log
[2018-02-28 19:35:58.379953] E [socket.c:2632:socket_poller] 
0-tcp.VOL-server: poll error on socket
[2018-02-28 19:35:58.380530] I [MSGID: 115036] 
[server.c:527:server_rpc_notify] 0-VOL-server: disconnecting connection 
from CLIENT-30688-2018/02/28-03:11:39:784734-VOL-client-1-0-0
[2018-02-28 19:35:58.380932] I [socket.c:3672:socket_submit_reply] 
0-tcp.VOL-server: not connected (priv->connected = -1)
[2018-02-28 19:35:58.380960] E [rpcsvc.c:1364:rpcsvc_submit_generic] 
0-rpc-service: failed to submit message (XID: 0xa4e, Program: GlusterFS 
3.3, ProgVers: 330, Proc: 25) to rpc-transport (tcp.uploads-server)
[2018-02-28 19:35:58.381124] E [server.c:195:server_submit_reply] 
(-->/usr/lib/x86_64-linux-gnu/glusterfs/3.12.6/xlator/debug/io-stats.so(+0x1ae6a) 
[0x7f97bd37ee6a] 
-->/usr/lib/x86_64-linux-gnu/glusterfs/3.12.6/xlator/protocol/server.so(+0x1d4c8) 
[0x7f97bcf1f4c8] 
-->/usr/lib/x86_64-linux-gnu/glusterfs/3.12.6/xlator/protocol/server.so(+0x8bd5) 
[0x7f97bcf0abd5] ) 0-: Reply submission failed
[2018-02-28 19:35:58.381196] I [MSGID: 101055] 
[client_t.c:443:gf_client_unref] 0-VOL-server: Shutting down connection 
CLIENT-30688-2018/02/28-03:11:39:784734-VOL-client-1-0-0
[2018-02-28 19:40:58.351350] I [addr.c:55:compare_addr_and_update] 
0-/glusterfs/uploads-brick2/data: allowed = "*", received addr = "CLIENT"
[2018-02-28 19:40:58.351684] I [login.c:34:gf_auth] 0-auth/login: 
connecting user name: CLIENT


SERVER1:/var/log/glusterfs/bricks/VOL-brick1.log
[2018-02-28 19:35:58.509713] W [socket.c:593:__socket_rwv] 

Re: [Gluster-users] [Gluster-devel] [Gluster-Maintainers] Release 4.0: RC1 tagged

2018-02-28 Thread Javier Romero
Hi all,

Have tested on CentOS Linux release 7.4.1708 (Core) with Kernel
3.10.0-693.17.1.el7.x86_64

This package works ok
http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm

# yum install 
http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm
# yum install glusterfs-server
# systemctl start glusterd
# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled;
vendor preset: disabled)
   Active: active (running) since Wed 2018-02-28 09:18:46 -03; 53s ago
 Main PID: 2070 (glusterd)
   CGroup: /system.slice/glusterd.service
   └─2070 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

Feb 28 09:18:46 centos-7 systemd[1]: Starting GlusterFS, a clustered
file-system server...
Feb 28 09:18:46 centos-7 systemd[1]: Started GlusterFS, a clustered
file-system server.
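
As a quick functional check on top of the package install, a minimal
single-brick volume can be created and mounted like this (a sketch; the
hostname comes from the systemd output above, while the brick path, volume
name and mount point are made up for illustration):

# gluster volume create testvol centos-7:/srv/gluster/testvol-brick force
# gluster volume start testvol
# mkdir -p /mnt/testvol
# mount -t glusterfs centos-7:/testvol /mnt/testvol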




This one fails 
http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm

# yum install -y
https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-4.0/glusterfs-4.0.0-0.1.rc1.el7.x86_64.rpm
Loaded plugins: fastestmirror, langpacks
glusterfs-4.0.0-0.1.rc1.el7.x86_64.rpm
  | 571 kB  00:00:00
Examining /var/tmp/yum-root-BIfa9_/glusterfs-4.0.0-0.1.rc1.el7.x86_64.rpm:
glusterfs-4.0.0-0.1.rc1.el7.x86_64
Marking /var/tmp/yum-root-BIfa9_/glusterfs-4.0.0-0.1.rc1.el7.x86_64.rpm
as an update to glusterfs-3.12.6-1.el7.x86_64
Resolving Dependencies
--> Running transaction check
---> Package glusterfs.x86_64 0:3.12.6-1.el7 will be updated
--> Processing Dependency: glusterfs = 3.12.6-1.el7 for package:
glusterfs-server-3.12.6-1.el7.x86_64
base
  | 3.6 kB  00:00:00
centos-gluster312
  | 2.9 kB  00:00:00
extras
  | 3.4 kB  00:00:00
purpleidea-vagrant-libvirt
  | 3.0 kB  00:00:00
updates
  | 3.4 kB  00:00:00
centos-gluster312/7/x86_64/primary_db
  |  87 kB  00:00:00
Loading mirror speeds from cached hostfile
 * base: centos.xfree.com.ar
 * extras: centos.xfree.com.ar
 * updates: centos.xfree.com.ar
--> Processing Dependency: glusterfs = 3.12.6-1.el7 for package:
glusterfs-api-3.12.6-1.el7.x86_64
--> Processing Dependency: glusterfs = 3.12.6-1.el7 for package:
glusterfs-fuse-3.12.6-1.el7.x86_64
---> Package glusterfs.x86_64 0:4.0.0-0.1.rc1.el7 will be an update
--> Processing Dependency: glusterfs-libs = 4.0.0-0.1.rc1.el7 for
package: glusterfs-4.0.0-0.1.rc1.el7.x86_64
--> Finished Dependency Resolution
Error: Package: glusterfs-server-3.12.6-1.el7.x86_64 (@centos-gluster312-test)
   Requires: glusterfs = 3.12.6-1.el7
   Removing: glusterfs-3.12.6-1.el7.x86_64 (@centos-gluster312-test)
   glusterfs = 3.12.6-1.el7
   Updated By: glusterfs-4.0.0-0.1.rc1.el7.x86_64
(/glusterfs-4.0.0-0.1.rc1.el7.x86_64)
   glusterfs = 4.0.0-0.1.rc1.el7
   Available: glusterfs-3.8.4-18.4.el7.centos.x86_64 (base)
   glusterfs = 3.8.4-18.4.el7.centos
   Available: glusterfs-3.12.0-1.el7.x86_64 (centos-gluster312)
   glusterfs = 3.12.0-1.el7
   Available: glusterfs-3.12.1-1.el7.x86_64 (centos-gluster312)
   glusterfs = 3.12.1-1.el7
   Available: glusterfs-3.12.1-2.el7.x86_64 (centos-gluster312)
   glusterfs = 3.12.1-2.el7
   Available: glusterfs-3.12.3-1.el7.x86_64 (centos-gluster312)
   glusterfs = 3.12.3-1.el7
   Available: glusterfs-3.12.4-1.el7.x86_64 (centos-gluster312)
   glusterfs = 3.12.4-1.el7
   Available: glusterfs-3.12.5-2.el7.x86_64 (centos-gluster312)
   glusterfs = 3.12.5-2.el7
Error: Package: glusterfs-api-3.12.6-1.el7.x86_64 (@centos-gluster312-test)
   Requires: glusterfs = 3.12.6-1.el7
   Removing: glusterfs-3.12.6-1.el7.x86_64 (@centos-gluster312-test)
   glusterfs = 3.12.6-1.el7
   Updated By: glusterfs-4.0.0-0.1.rc1.el7.x86_64
(/glusterfs-4.0.0-0.1.rc1.el7.x86_64)
   glusterfs = 4.0.0-0.1.rc1.el7
   Available: glusterfs-3.8.4-18.4.el7.centos.x86_64 (base)
   glusterfs = 3.8.4-18.4.el7.centos
   Available: glusterfs-3.12.0-1.el7.x86_64 (centos-gluster312)
   glusterfs = 3.12.0-1.el7
   Available: glusterfs-3.12.1-1.el7.x86_64 (centos-gluster312)
   glusterfs = 3.12.1-1.el7
   Available: glusterfs-3.12.1-2.el7.x86_64 (centos-gluster312)
   glusterfs = 3.12.1-2.el7
   Available: glusterfs-3.12.3-1.el7.x86_64 (centos-gluster312)
   glusterfs = 3.12.3-1.el7
   Available: 
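
For what it's worth, the dependency errors above look like what happens when
only the base glusterfs package is updated while the installed 3.12.6
sub-packages (glusterfs-server, -api, -fuse, -libs) stay behind. Updating them
all together through the repo that the working centos-release-gluster40 package
enables should avoid the conflict (a sketch only; it assumes the 4.0 RC
packages are actually published in that repo):

# yum install http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm
# yum update glusterfs\*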

Re: [Gluster-users] [Gluster-devel] Release 4.0: RC1 tagged

2018-02-28 Thread Javier Romero
Hi Shyam,

Let me know once it can be tested on the latest release of CentOS 7. I can
help with this and provide some feedback.

Regards,



Javier Romero



2018-02-26 16:03 GMT-03:00 Shyam Ranganathan :

> Hi,
>
> RC1 is tagged in the code, and the request for packaging the same is on
> its way.
>
> We should have packages as early as today, and request the community to
> test the same and return some feedback.
>
> We have about 3-4 days (till Thursday) for any pending fixes and the
> final release to happen, so shout out in case you face any blockers.
>
> The RC1 packages should land here:
> https://download.gluster.org/pub/gluster/glusterfs/qa-releases/4.0rc1/
> and like so for CentOS,
> CentOS7:
>   # yum install
> http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-
> gluster40-0.9-1.el7.centos.x86_64.rpm
>   # yum install glusterfs-server
>
> Thanks,
> Gluster community
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)

2018-02-28 Thread Jose V. Carrión
Hi Nithya,

My initial setup was composed of 2 similar nodes: stor1data and stor2data.
A month ago I expanded both volumes with a new node: stor3data (2 bricks
per volume).
Of course, after adding the new peer with its bricks I ran the 'rebalance
force' operation. The task finished successfully (you can see the info below)
and the number of files on the 3 nodes was very similar.

For volumedisk1 I only have files of 500MB and they are continuously written
in sequential mode. The filename pattern of the written files is:

run.node1..rd
run.node2..rd
run.node1.0001.rd
run.node2.0001.rd
run.node1.0002.rd
run.node2.0002.rd
...
...
run.node1.X.rd
run.node2.X.rd

(  X ranging from  to infinite )

Curiously, stor1data and stor2data maintain similar usage in bytes:

Filesystem   1K-blocks     Used         Available    Use%  Mounted on
/dev/sdc1    52737613824   17079174264  35658439560  33%   /mnt/glusterfs/vol1 -> stor1data
/dev/sdc1    52737613824   17118810848  35618802976  33%   /mnt/glusterfs/vol1 -> stor2data

However, the usage on stor3data differs by about 1 TB:
Filesystem   1K-blocks     Used         Available    Use%  Mounted on
/dev/sdc1    52737613824   15479191748  37258422076  30%   /mnt/disk_c/glusterfs/vol1 -> stor3data
/dev/sdd1    52737613824   15566398604  37171215220  30%   /mnt/disk_d/glusterfs/vol1 -> stor3data

Thinking in inodes:

Filesystem   Inodes       IUsed    IFree       IUse%  Mounted on
/dev/sdc1    5273970048   851053   5273118995  1%     /mnt/glusterfs/vol1 -> stor1data
/dev/sdc1    5273970048   849388   5273120660  1%     /mnt/glusterfs/vol1 -> stor2data

/dev/sdc1    5273970048   846877   5273123171  1%     /mnt/disk_c/glusterfs/vol1 -> stor3data
/dev/sdd1    5273970048   845250   5273124798  1%     /mnt/disk_d/glusterfs/vol1 -> stor3data

851053 (stor1) - 845250 (stor3) = a difference of 5803 files!

In addition, correct me if I'm wrong, but stor3data should have a 50%
probability of storing any new file (even taking into account the DHT
algorithm and the filename patterns).

Thanks,
Greetings.

Jose V.

Status of volume: volumedisk0
Gluster process                                     TCP Port  RDMA Port  Online  Pid
-------------------------------------------------------------------------------------
Brick stor1data:/mnt/glusterfs/vol0/brick1          49152     0          Y       13533
Brick stor2data:/mnt/glusterfs/vol0/brick1          49152     0          Y       13302
Brick stor3data:/mnt/disk_b1/glusterfs/vol0/brick1  49152     0          Y       17371
Brick stor3data:/mnt/disk_b2/glusterfs/vol0/brick1  49153     0          Y       17391
NFS Server on localhost                             N/A       N/A        N       N/A
NFS Server on stor3data                             N/A       N/A        N       N/A
NFS Server on stor2data                             N/A       N/A        N       N/A

Task Status of Volume volumedisk0
-------------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : 7f5328cb-ed25-4627-9196-fb3e29e0e4ca
Status               : completed

Status of volume: volumedisk1
Gluster process                                     TCP Port  RDMA Port  Online  Pid
-------------------------------------------------------------------------------------
Brick stor1data:/mnt/glusterfs/vol1/brick1          49153     0          Y       13579
Brick stor2data:/mnt/glusterfs/vol1/brick1          49153     0          Y       13344
Brick stor3data:/mnt/disk_c/glusterfs/vol1/brick1   49154     0          Y       17439
Brick stor3data:/mnt/disk_d/glusterfs/vol1/brick1   49155     0          Y       17459
NFS Server on localhost                             N/A       N/A        N       N/A
NFS Server on stor3data                             N/A       N/A        N       N/A
NFS Server on stor2data                             N/A       N/A        N       N/A

Task Status of Volume volumedisk1
-------------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : d0048704-beeb-4a6a-ae94-7e7916423fd3
Status               : completed


2018-02-28 15:40 GMT+01:00 Nithya Balachandran :

> Hi Jose,
>
> On 28 February 2018 at 18:28, Jose V. Carrión  wrote:
>
>> Hi Nithya,
>>
>> I applied the workaround for this bug and now df shows the right size:
>>
>> That is good to hear.
>
>
>
>> [root@stor1 ~]# df -h
>> FilesystemSize  Used Avail Use% Mounted on
>> /dev/sdb1  26T  1,1T   25T   4% /mnt/glusterfs/vol0
>> /dev/sdc1  50T   16T   34T  33% /mnt/glusterfs/vol1
>> stor1data:/volumedisk0
>>   101T  3,3T   97T   4% /volumedisk0
>> stor1data:/volumedisk1
>>  

Re: [Gluster-users] df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)

2018-02-28 Thread Nithya Balachandran
Hi Jose,

On 28 February 2018 at 18:28, Jose V. Carrión  wrote:

> Hi Nithya,
>
> I applied the workaround for this bug and now df shows the right size:
>
That is good to hear.



> [root@stor1 ~]# df -h
> FilesystemSize  Used Avail Use% Mounted on
> /dev/sdb1  26T  1,1T   25T   4% /mnt/glusterfs/vol0
> /dev/sdc1  50T   16T   34T  33% /mnt/glusterfs/vol1
> stor1data:/volumedisk0
>   101T  3,3T   97T   4% /volumedisk0
> stor1data:/volumedisk1
>   197T   61T  136T  31% /volumedisk1
>
>
> [root@stor2 ~]# df -h
> FilesystemSize  Used Avail Use% Mounted on
> /dev/sdb1  26T  1,1T   25T   4% /mnt/glusterfs/vol0
> /dev/sdc1  50T   16T   34T  33% /mnt/glusterfs/vol1
> stor2data:/volumedisk0
>   101T  3,3T   97T   4% /volumedisk0
> stor2data:/volumedisk1
>   197T   61T  136T  31% /volumedisk1
>
>
> [root@stor3 ~]# df -h
> FilesystemSize  Used Avail Use% Mounted on
> /dev/sdb1  25T  638G   24T   3% /mnt/disk_b1/glusterfs/vol0
> /dev/sdb2  25T  654G   24T   3% /mnt/disk_b2/glusterfs/vol0
> /dev/sdc1  50T   15T   35T  30% /mnt/disk_c/glusterfs/vol1
> /dev/sdd1  50T   15T   35T  30% /mnt/disk_d/glusterfs/vol1
> stor3data:/volumedisk0
>   101T  3,3T   97T   4% /volumedisk0
> stor3data:/volumedisk1
>   197T   61T  136T  31% /volumedisk1
>
>
> However, I'm concerned because, as you can see, volumedisk0 on
> stor3data is composed of 2 bricks on the same disk but on different
> partitions (/dev/sdb1 and /dev/sdb2).
> After applying the workaround, the shared-brick-count parameter was
> set to 1 on all the bricks and all the servers (see below). Could
> this be an issue?
>
No, this is correct. The shared-brick-count will be > 1 only if multiple
bricks share the same partition.
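
If you ever want to re-check it after a glusterd restart or an upgrade, a quick
summary of the values in the generated volfiles is enough (a sketch for
volumedisk0; substitute the volume name, the same check applies to the
volumedisk1 grep output further down):

# grep -h 'shared-brick-count' /var/lib/glusterd/vols/volumedisk0/*.vol | sort | uniq -c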



> Also, I can see that stor3data is now unbalanced with respect to stor1data and
> stor2data. The three nodes have the same brick sizes, but the stor3data bricks
> have used about 1 TB less than stor1data and stor2data:
>


This does not necessarily indicate a problem. The distribution need not be
exactly equal and depends on the filenames. Can you provide more
information on the kind of dataset (how many files, their sizes, etc.) on this
volume? Did you create the volume with all 4 bricks or add some later?
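
If it is easy to run, something like this per brick gives a quick picture of
the data while skipping gluster's internal .glusterfs directory (a rough
sketch; shown for one stor3data brick path from your status output, the same
command works for the other bricks):

# find /mnt/disk_c/glusterfs/vol1/brick1 -path '*/.glusterfs' -prune -o -type f -printf '%s\n' | awk '{n++; s+=$1} END {print n, "files,", s/1024/1024/1024, "GiB"}'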

Regards,
Nithya

>
> stor1data:
> /dev/sdb1  26T  1,1T   25T   4% /mnt/glusterfs/vol0
> /dev/sdc1  50T   16T   34T  33% /mnt/glusterfs/vol1
>
> stor2data bricks:
> /dev/sdb1  26T  1,1T   25T   4% /mnt/glusterfs/vol0
> /dev/sdc1  50T   16T   34T  33% /mnt/glusterfs/vol1
>
> stor3data bricks:
>   /dev/sdb1  25T  638G   24T   3% /mnt/disk_b1/glusterfs/vol0
>   /dev/sdb2  25T  654G   24T   3% /mnt/disk_b2/glusterfs/vol0
>   /dev/sdc1  50T   15T   35T  30% /mnt/disk_c/glusterfs/vol1
>   /dev/sdd1  50T   15T   35T  30% /mnt/disk_d/glusterfs/vol1
>
>
> [root@stor1 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.
> stor1data.mnt-glusterfs-vol1-brick1.vol:3:option shared-brick-count 1
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.
> stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:option
> shared-brick-count 1
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.
> stor2data.mnt-glusterfs-vol1-brick1.vol:3:option shared-brick-count 1
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.
> stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:option
> shared-brick-count 0
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3:
>option shared-brick-count 1
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-
> glusterfs-vol1-brick1.vol.rpmsave:3:option shared-brick-count 0
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3:
>option shared-brick-count 1
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-
> glusterfs-vol1-brick1.vol.rpmsave:3:option shared-brick-count 0
>
> [root@stor2 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.
> stor1data.mnt-glusterfs-vol1-brick1.vol:3:option shared-brick-count 1
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.
> stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:option
> shared-brick-count 0
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.
> stor2data.mnt-glusterfs-vol1-brick1.vol:3:option shared-brick-count 1
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.
> stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:option
> shared-brick-count 1
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3:
>option shared-brick-count 1
> 

[Gluster-users] Glustered 2018 schedule

2018-02-28 Thread Ivan Rossi
Today we published the program for the "Glustered 2018" meeting (Bologna,
Italy, 2018-03-08).
Hope to see some of you there.

 http://www.incontrodevops.it/events/glustered-2018/

Ivan
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-Maintainers] [Gluster-devel] Release 4.0: RC1 tagged

2018-02-28 Thread Pranith Kumar Karampuri
I found the following memory leak present in 3.13, 4.0 and master:
https://bugzilla.redhat.com/show_bug.cgi?id=1550078

I will clone/port to 4.0 as soon as the patch is merged.
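
For anyone who wants to check whether they are seeing the same growth, a
statedump of the client process shows the per-translator memory accounting (a
sketch; <mountpoint> is a placeholder for your mount path, and the dumps
normally land under /var/run/gluster):

# kill -USR1 $(pgrep -f 'glusterfs.*<mountpoint>')
# ls -lrt /var/run/gluster/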

On Wed, Feb 28, 2018 at 5:55 PM, Javier Romero  wrote:

> Hi all,
>
> Have tested on CentOS Linux release 7.4.1708 (Core) with Kernel
> 3.10.0-693.17.1.el7.x86_64
>
> This package works ok
> http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-
> gluster40-0.9-1.el7.centos.x86_64.rpm
>
> # yum install http://cbs.centos.org/kojifiles/work/tasks/1548/
> 311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm
> # yum install glusterfs-server
> # systemctl start glusterd
> # systemctl status glusterd
> ● glusterd.service - GlusterFS, a clustered file-system server
>Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled;
> vendor preset: disabled)
>Active: active (running) since Wed 2018-02-28 09:18:46 -03; 53s ago
>  Main PID: 2070 (glusterd)
>CGroup: /system.slice/glusterd.service
>└─2070 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level
> INFO
>
> Feb 28 09:18:46 centos-7 systemd[1]: Starting GlusterFS, a clustered
> file-system server...
> Feb 28 09:18:46 centos-7 systemd[1]: Started GlusterFS, a clustered
> file-system server.
>
>
>
>
> This one fails http://cbs.centos.org/kojifiles/work/tasks/1548/
> 311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm
>
> # yum install -y
> https://buildlogs.centos.org/centos/7/storage/x86_64/
> gluster-4.0/glusterfs-4.0.0-0.1.rc1.el7.x86_64.rpm
> Loaded plugins: fastestmirror, langpacks
> glusterfs-4.0.0-0.1.rc1.el7.x86_64.rpm
>   | 571 kB  00:00:00
> Examining /var/tmp/yum-root-BIfa9_/glusterfs-4.0.0-0.1.rc1.el7.x86_64.rpm:
> glusterfs-4.0.0-0.1.rc1.el7.x86_64
> Marking /var/tmp/yum-root-BIfa9_/glusterfs-4.0.0-0.1.rc1.el7.x86_64.rpm
> as an update to glusterfs-3.12.6-1.el7.x86_64
> Resolving Dependencies
> --> Running transaction check
> ---> Package glusterfs.x86_64 0:3.12.6-1.el7 will be updated
> --> Processing Dependency: glusterfs = 3.12.6-1.el7 for package:
> glusterfs-server-3.12.6-1.el7.x86_64
> base
>   | 3.6 kB  00:00:00
> centos-gluster312
>   | 2.9 kB  00:00:00
> extras
>   | 3.4 kB  00:00:00
> purpleidea-vagrant-libvirt
>   | 3.0 kB  00:00:00
> updates
>   | 3.4 kB  00:00:00
> centos-gluster312/7/x86_64/primary_db
>   |  87 kB  00:00:00
> Loading mirror speeds from cached hostfile
>  * base: centos.xfree.com.ar
>  * extras: centos.xfree.com.ar
>  * updates: centos.xfree.com.ar
> --> Processing Dependency: glusterfs = 3.12.6-1.el7 for package:
> glusterfs-api-3.12.6-1.el7.x86_64
> --> Processing Dependency: glusterfs = 3.12.6-1.el7 for package:
> glusterfs-fuse-3.12.6-1.el7.x86_64
> ---> Package glusterfs.x86_64 0:4.0.0-0.1.rc1.el7 will be an update
> --> Processing Dependency: glusterfs-libs = 4.0.0-0.1.rc1.el7 for
> package: glusterfs-4.0.0-0.1.rc1.el7.x86_64
> --> Finished Dependency Resolution
> Error: Package: glusterfs-server-3.12.6-1.el7.x86_64
> (@centos-gluster312-test)
>Requires: glusterfs = 3.12.6-1.el7
>Removing: glusterfs-3.12.6-1.el7.x86_64
> (@centos-gluster312-test)
>glusterfs = 3.12.6-1.el7
>Updated By: glusterfs-4.0.0-0.1.rc1.el7.x86_64
> (/glusterfs-4.0.0-0.1.rc1.el7.x86_64)
>glusterfs = 4.0.0-0.1.rc1.el7
>Available: glusterfs-3.8.4-18.4.el7.centos.x86_64 (base)
>glusterfs = 3.8.4-18.4.el7.centos
>Available: glusterfs-3.12.0-1.el7.x86_64 (centos-gluster312)
>glusterfs = 3.12.0-1.el7
>Available: glusterfs-3.12.1-1.el7.x86_64 (centos-gluster312)
>glusterfs = 3.12.1-1.el7
>Available: glusterfs-3.12.1-2.el7.x86_64 (centos-gluster312)
>glusterfs = 3.12.1-2.el7
>Available: glusterfs-3.12.3-1.el7.x86_64 (centos-gluster312)
>glusterfs = 3.12.3-1.el7
>Available: glusterfs-3.12.4-1.el7.x86_64 (centos-gluster312)
>glusterfs = 3.12.4-1.el7
>Available: glusterfs-3.12.5-2.el7.x86_64 (centos-gluster312)
>glusterfs = 3.12.5-2.el7
> Error: Package: glusterfs-api-3.12.6-1.el7.x86_64
> (@centos-gluster312-test)
>Requires: glusterfs = 3.12.6-1.el7
>Removing: glusterfs-3.12.6-1.el7.x86_64
> (@centos-gluster312-test)
>glusterfs = 3.12.6-1.el7
>Updated By: glusterfs-4.0.0-0.1.rc1.el7.x86_64
> (/glusterfs-4.0.0-0.1.rc1.el7.x86_64)
>glusterfs = 4.0.0-0.1.rc1.el7
>Available: glusterfs-3.8.4-18.4.el7.centos.x86_64 (base)
>glusterfs = 3.8.4-18.4.el7.centos
>

Re: [Gluster-users] df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)

2018-02-28 Thread Jose V. Carrión
Hi Nithya,

I applied the workaround for this bug and now df shows the right size:

[root@stor1 ~]# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/sdb1  26T  1,1T   25T   4% /mnt/glusterfs/vol0
/dev/sdc1  50T   16T   34T  33% /mnt/glusterfs/vol1
stor1data:/volumedisk0
  101T  3,3T   97T   4% /volumedisk0
stor1data:/volumedisk1
  197T   61T  136T  31% /volumedisk1


[root@stor2 ~]# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/sdb1  26T  1,1T   25T   4% /mnt/glusterfs/vol0
/dev/sdc1  50T   16T   34T  33% /mnt/glusterfs/vol1
stor2data:/volumedisk0
  101T  3,3T   97T   4% /volumedisk0
stor2data:/volumedisk1
  197T   61T  136T  31% /volumedisk1


[root@stor3 ~]# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/sdb1  25T  638G   24T   3% /mnt/disk_b1/glusterfs/vol0
/dev/sdb2  25T  654G   24T   3% /mnt/disk_b2/glusterfs/vol0
/dev/sdc1  50T   15T   35T  30% /mnt/disk_c/glusterfs/vol1
/dev/sdd1  50T   15T   35T  30% /mnt/disk_d/glusterfs/vol1
stor3data:/volumedisk0
  101T  3,3T   97T   4% /volumedisk0
stor3data:/volumedisk1
  197T   61T  136T  31% /volumedisk1


However, I'm concerned because, as you can see, volumedisk0 on stor3data
is composed of 2 bricks on the same disk but on different partitions
(/dev/sdb1 and /dev/sdb2).
After applying the workaround, the shared-brick-count parameter was
set to 1 on all the bricks and all the servers (see below). Could
this be an issue?

Also, I can see that stor3data is now unbalanced with respect to stor1data and
stor2data. The three nodes have the same brick sizes, but the stor3data bricks
have used about 1 TB less than stor1data and stor2data:

stor1data:
/dev/sdb1  26T  1,1T   25T   4% /mnt/glusterfs/vol0
/dev/sdc1  50T   16T   34T  33% /mnt/glusterfs/vol1

stor2data bricks:
/dev/sdb1  26T  1,1T   25T   4% /mnt/glusterfs/vol0
/dev/sdc1  50T   16T   34T  33% /mnt/glusterfs/vol1

stor3data bricks:
  /dev/sdb1  25T  638G   24T   3% /mnt/disk_b1/glusterfs/vol0
  /dev/sdb2  25T  654G   24T   3% /mnt/disk_b2/glusterfs/vol0
  /dev/sdc1  50T   15T   35T  30% /mnt/disk_c/glusterfs/vol1
  /dev/sdd1  50T   15T   35T  30% /mnt/disk_d/glusterfs/vol1


[root@stor1 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:
   option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:
   option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3:
   option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:
   option shared-brick-count 0
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3:
   option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3:
   option shared-brick-count 0
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3:
   option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3:
   option shared-brick-count 0

[root@stor2 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:
   option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:
   option shared-brick-count 0
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3:
   option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:
   option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3:
   option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3:
   option shared-brick-count 0
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3:
   option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3:
   option shared-brick-count 0

[root@stor3t ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:
   option shared-brick-count 1
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:
   option shared-brick-count 1