Re: [Gluster-users] [ovirt-users] How do you oVirt?

2017-03-21 Thread Ramesh Nachimuthu
Hi Gluster Users,

  The oVirt team is running a survey to get insights into oVirt deployments. I know 
many Gluster users run oVirt with Gluster storage, so if you are running oVirt 
with Gluster, please take this survey and let us know more about 
your oVirt+Gluster deployment.

Survey link: 
https://docs.google.com/forms/d/e/1FAIpQLSdloxiIP2HrW2HguU0UVbNtKgpSBaJXj-Z9lxyNAR7B9_S0Zg/viewform?usp=fb_send_twt

The survey will close on April 15th.


Thanks with Regards,
Ramesh




- Original Message -
> From: "Sandro Bonazzola" 
> To: annou...@ovirt.org, "users" 
> Sent: Tuesday, March 21, 2017 9:25:19 PM
> Subject: [ovirt-users] How do you oVirt?
> 
> As we continue to develop oVirt 4.2 and future releases, the Development and
> Integration teams at Red Hat would value
> insights on how you are deploying the oVirt environment.
> Please help us to hit the mark by completing this short survey. Survey will
> close on April 15th
> 
> Here's the link to the survey:
> https://docs.google.com/forms/d/e/1FAIpQLSdloxiIP2HrW2HguU0UVbNtKgpSBaJXj-Z9lxyNAR7B9_S0Zg/viewform?usp=fb_send_twt
> 
> Thanks,
> 
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
> 
> ___
> Users mailing list
> us...@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [ovirt-users] Replicated Glusterfs on top of ZFS

2017-03-05 Thread Ramesh Nachimuthu

+gluster-users 


Regards,
Ramesh

- Original Message -
> From: "Arman Khalatyan" 
> To: "Juan Pablo" 
> Cc: "users" , "FERNANDO FREDIANI" 
> Sent: Friday, March 3, 2017 8:32:31 PM
> Subject: Re: [ovirt-users] Replicated Glusterfs on top of ZFS
> 
> The problem itself is not streaming-data performance, and dd from /dev/zero
> does not help much on a production ZFS dataset running with compression.
> The main problem comes when Gluster starts to do something with that data: it
> uses xattrs, and accessing extended attributes inside ZFS is probably slower
> than on XFS.
> Even a primitive find or an ls -l in the .glusterfs folders takes ages.
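Gluster keeps its replication and heal metadata in extended attributes on every
brick file, which is what the xattr access above refers to. A quick way to see
what gets read on a brick (a sketch; the brick directory and file name are
placeholders, the mountpoint is the one shown further down in this thread):

# run on the brick host against any file stored on the brick
getfattr -d -m . -e hex /zclei22/01/<brick-dir>/<some-file>
# typical keys: trusted.gfid, trusted.afr.*, trusted.glusterfs.* -- each one is
# an extended-attribute read against the underlying filesystem (ZFS here)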
> 
> Now I can see that the arbiter host has almost 100% cache misses during the
> rebuild, which is natural because it is always reading new datasets:
> [root@clei26 ~]# arcstat.py 1
>     time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
> 15:57:31    29    29    100    29  100     0    0    29  100   685M   31G
> 15:57:32   530   476     89   476   89     0    0   457   89   685M   31G
> 15:57:33   480   467     97   467   97     0    0   463   97   685M   31G
> 15:57:34   452   443     98   443   98     0    0   435   97   685M   31G
> 15:57:35   582   547     93   547   93     0    0   536   94   685M   31G
> 15:57:36   439   417     94   417   94     0    0   393   94   685M   31G
> 15:57:38   435   392     90   392   90     0    0   374   89   685M   31G
> 15:57:39   364   352     96   352   96     0    0   352   96   685M   31G
> 15:57:40   408   375     91   375   91     0    0   360   91   685M   31G
> 15:57:41   552   539     97   539   97     0    0   539   97   685M   31G
> 
> It looks like we cannot have both performance and reliability in the same
> system :(
> The simple final conclusion is that with a single disk + SSD, even ZFS does
> not help to speed up GlusterFS healing.
> I will stop here :)
> 
> 
> 
> 
> On Fri, Mar 3, 2017 at 3:35 PM, Juan Pablo < pablo.localh...@gmail.com >
> wrote:
> 
> 
> 
> cd into the pool path,
> then run: dd if=/dev/zero of=test.tt bs=1M
> Leave it running for 5-10 minutes,
> then hit Ctrl+C and paste the result here.
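A minimal version of that test, spelled out (a sketch; the mountpoint is the one
shown later in the thread, and as noted earlier, on a dataset with compression
enabled zeros mostly measure the compressor, so a non-compressible source is
worth comparing too):

cd /zclei22/01                                   # the dataset mountpoint
dd if=/dev/zero of=test.tt bs=1M                 # streaming-write test as suggested
dd if=/dev/urandom of=test.rnd bs=1M count=4096  # non-compressible comparison
rm -f test.tt test.rnd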
> 
> 2017-03-03 11:30 GMT-03:00 Arman Khalatyan < arm2...@gmail.com > :
> 
> 
> 
> No, I have one pool made of the one disk, with the SSD as cache and log device.
> I have 3 GlusterFS bricks on 3 separate hosts; volume type Replicate (Arbiter) =
> replica 2+1!
> That is as much as you can push into the compute nodes (they have only 3 disk
> slots).
> 
> 
> On Fri, Mar 3, 2017 at 3:19 PM, Juan Pablo < pablo.localh...@gmail.com >
> wrote:
> 
> 
> 
> OK, you have 3 pools (zclei22, logs and cache); that's wrong. You should have 1
> pool, with zlog+cache, if you are looking for performance.
> Also, don't mix drives.
> What is the performance issue you are facing?
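For reference, the single-pool layout being recommended here, built from the
devices that appear later in this thread, would look roughly like this (a sketch
only, not a command that was actually run; device paths are approximate):

# one data vdev plus a small SLOG and a larger L2ARC on the SSD
zpool create zclei22 \
    /dev/disk/by-id/ata-HGST_HUS724040ALA640_PN2334PBJ4SV6T1 \
    log   /dev/vg_cache/lv_slog \
    cache /dev/vg_cache/lv_cache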
> 
> 
> regards,
> 
> 2017-03-03 11:00 GMT-03:00 Arman Khalatyan < arm2...@gmail.com > :
> 
> 
> 
> This is CentOS 7.3 ZoL version 0.6.5.9-1
> 
> [root@clei22 ~]# lsscsi
> [2:0:0:0]  disk  ATA  INTEL SSDSC2CW24  400i  /dev/sda
> [3:0:0:0]  disk  ATA  HGST HUS724040AL  AA70  /dev/sdb
> [4:0:0:0]  disk  ATA  WDC WD2002FYPS-0  1G01  /dev/sdc
> 
> [root@clei22 ~]# pvs ;vgs;lvs
>   PV                                                  VG            Fmt  Attr PSize   PFree
>   /dev/mapper/INTEL_SSDSC2CW240A3_CVCV306302RP240CGN  vg_cache      lvm2 a--  223.57g      0
>   /dev/sdc2                                           centos_clei22 lvm2 a--    1.82t 64.00m
> 
>   VG            #PV #LV #SN Attr   VSize   VFree
>   centos_clei22   1   3   0 wz--n-   1.82t 64.00m
>   vg_cache        1   2   0 wz--n- 223.57g      0
> 
>   LV       VG            Attr   LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
>   home     centos_clei22 -wi-ao   1.74t
>   root     centos_clei22 -wi-ao  50.00g
>   swap     centos_clei22 -wi-ao  31.44g
>   lv_cache vg_cache      -wi-ao 213.57g
>   lv_slog  vg_cache      -wi-ao  10.00g
> 
> 
> 
> 
> [root@clei22 ~]# zpool status -v
>   pool: zclei22
>  state: ONLINE
>   scan: scrub repaired 0 in 0h0m with 0 errors on Tue Feb 28 14:16:07 2017
> config:
> 
>     NAME                                    STATE   READ WRITE CKSUM
>     zclei22                                 ONLINE     0     0     0
>       HGST_HUS724040ALA640_PN2334PBJ4SV6T1  ONLINE     0     0     0
>     logs
>       lv_slog                               ONLINE     0     0     0
>     cache
>       lv_cache                              ONLINE     0     0     0
> 
> errors: No known data errors
> 
> ZFS config:
> 
> [root@clei22 ~]# zfs get all zclei22/01
> NAME        PROPERTY       VALUE                  SOURCE
> zclei22/01  type           filesystem             -
> zclei22/01  creation       Tue Feb 28 14:06 2017  -
> zclei22/01  used           389G                   -
> zclei22/01  available      3.13T                  -
> zclei22/01  referenced     389G                   -
> zclei22/01  compressratio  1.01x                  -
> zclei22/01  mounted        yes                    -
> zclei22/01  quota          none                   default
> zclei22/01  reservation    none                   default
> zclei22/01  recordsize     128K                   local
> zclei22/01  mountpoint     /zclei22/01            default
> zclei22/01  sharenfs       off                    default
> zclei22/01  checksum       on                     default
> zclei22/01  compression    off                    local
> zclei22/01  atime          on                     default
> zclei22/01  devices        on                     default
> zclei22/01  exec           on                     default
> zclei22/01  setuid         on                     default
> zclei22/01  readonly       off                    default
> zclei22/01  zoned          off                    default
> zclei22/01  snapdir        hidden                 default
> zclei22/01  aclinherit     restricted             default
> zclei22/01  canmount       on                     default
> zclei22/01  xattr          sa                     local
> zclei22/01  copies         1                      default
> zclei22/01  version        5                      -
> zclei22/01  utf8only       off                    -
> zclei22/01  normalization  none                   -
> zclei22/01

Re: [Gluster-users] install Gluster 3.9 on CentOS

2017-01-02 Thread Ramesh Nachimuthu




- Original Message -
> From: "Niels de Vos" 
> To: "Kaleb Keithley" 
> Cc: "Ramesh Nachimuthu" , "Grant Ridder" 
> , gluster-users@gluster.org
> Sent: Monday, January 2, 2017 5:28:34 PM
> Subject: Re: [Gluster-users] install Gluster 3.9 on CentOS
> 
> On Wed, Dec 28, 2016 at 06:40:35AM -0500, Kaleb Keithley wrote:
> > Hi,
> > 
> > Just send Niels (nde...@redhat.com) an email telling him you've tested it.
> 
> Yes, that's correct. There is no web interface for giving karma to
> packages like there is for Fedora. It is best to inform the 3.9 release
> maintainers on one of the lists and put me in CC. Once someone other
> than me has checked the packages and is happy with them, I can mark them for
> signing and release by the CentOS release engineering team.
> 

Thanks Niels. I'm not sure who the 3.9 maintainer is, but I have personally 
verified basic things like volume creation on 3.9 and it works. Maybe I should 
include Kasturi and Sas so they can give their feedback on Gluster 3.9.

Kasturi, Sas: Are you using Gluster 3.9 in your testing? If not, can you 
include it in some tests and share your feedback with Niels?
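For anyone else who wants to add a quick check, a basic smoke test along those
lines could look roughly like this (a sketch; host names and brick paths are
placeholders, and the 3.9 candidate packages come from the buildlogs repo
mentioned below):

# after installing the 3.9 packages from
# http://buildlogs.centos.org/centos/7/storage/x86_64/ on all three nodes:
gluster volume create testvol replica 3 \
    host1:/bricks/testvol/brick host2:/bricks/testvol/brick host3:/bricks/testvol/brick
gluster volume start testvol
gluster volume info testvol
gluster volume status testvol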

Regards,
Ramesh

> Thanks,
> Niels
> 
> 
> > 
> > Thanks
> > 
> > 
> > - Original Message -
> > > From: "Ramesh Nachimuthu" 
> > > To: "Kaleb S. KEITHLEY" 
> > > Cc: "Grant Ridder" , gluster-users@gluster.org
> > > Sent: Wednesday, December 28, 2016 12:26:03 AM
> > > Subject: Re: [Gluster-users] install Gluster 3.9 on CentOS
> > > 
> > > Hi Kaleb,
> > > 
> > > I can give the karma. Do you know where these builds are listed in Centos
> > > Update system?
> > > 
> > > 
> > > Regards,
> > > Ramesh
> > > 
> > > 
> > > 
> > > - Original Message -
> > > > From: "Kaleb S. KEITHLEY" 
> > > > To: "Grant Ridder" , gluster-users@gluster.org
> > > > Sent: Wednesday, December 21, 2016 12:06:33 AM
> > > > Subject: Re: [Gluster-users] install Gluster 3.9 on CentOS
> > > > 
> > > > On 12/20/2016 12:19 PM, Grant Ridder wrote:
> > > > > Hi,
> > > > >
> > > > > I am not seeing 3.9 in the Storage SIG for CentOS 6 or 7
> > > > > http://mirror.centos.org/centos/7.2.1511/storage/x86_64/
> > > > > http://mirror.centos.org/centos/6.8/storage/x86_64/
> > > > >
> > > > > However, i do see it
> > > > > here: http://buildlogs.centos.org/centos/7/storage/x86_64/
> > > > >
> > > > > Is that expected?
> > > > 
> > > > Yes.
> > > > 
> > > > > did the Storage SIG repo change locations?
> > > > 
> > > > No.
> > > > 
> > > > Until someone tests and gives positive feedback they remain in
> > > > buildlogs.
> > > > 
> > > > Much the same way Fedora RPMs remain in Updates-Testing until they
> > > > receive +3 karma (or wait for 14 days).
> > > > 
> > > > --
> > > > 
> > > > Kaleb
> > > > 
> > > > 
> > > > ___
> > > > Gluster-users mailing list
> > > > Gluster-users@gluster.org
> > > > http://www.gluster.org/mailman/listinfo/gluster-users
> > > > 
> > > 
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] install Gluster 3.9 on CentOS

2016-12-27 Thread Ramesh Nachimuthu
Hi Kaleb,

I can give the karma. Do you know where these builds are listed in the CentOS 
update system?


Regards,
Ramesh



- Original Message -
> From: "Kaleb S. KEITHLEY" 
> To: "Grant Ridder" , gluster-users@gluster.org
> Sent: Wednesday, December 21, 2016 12:06:33 AM
> Subject: Re: [Gluster-users] install Gluster 3.9 on CentOS
> 
> On 12/20/2016 12:19 PM, Grant Ridder wrote:
> > Hi,
> >
> > I am not seeing 3.9 in the Storage SIG for CentOS 6 or 7
> > http://mirror.centos.org/centos/7.2.1511/storage/x86_64/
> > http://mirror.centos.org/centos/6.8/storage/x86_64/
> >
> > However, i do see it
> > here: http://buildlogs.centos.org/centos/7/storage/x86_64/
> >
> > Is that expected?
> 
> Yes.
> 
> > did the Storage SIG repo change locations?
> 
> No.
> 
> Until someone tests and gives positive feedback they remain in buildlogs.
> 
> Much the same way Fedora RPMs remain in Updates-Testing until they
> receive +3 karma (or wait for 14 days).
> 
> --
> 
> Kaleb
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [ovirt-users] Python stack trace for VDSM while monitoring GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17

2016-12-20 Thread Ramesh Nachimuthu




- Original Message -
> From: "Giuseppe Ragusa" 
> To: "Ramesh Nachimuthu" 
> Cc: us...@ovirt.org, gluster-users@gluster.org, "Ravishankar Narayanankutty" 
> 
> Sent: Tuesday, December 20, 2016 4:15:18 AM
> Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring 
> GlusterFS volumes in HC HE oVirt 3.6.7 /
> GlusterFS 3.7.17
> 
> On Fri, Dec 16, 2016, at 05:44, Ramesh Nachimuthu wrote:
> > - Original Message -
> > > From: "Giuseppe Ragusa" 
> > > To: "Ramesh Nachimuthu" 
> > > Cc: us...@ovirt.org
> > > Sent: Friday, December 16, 2016 2:42:18 AM
> > > Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring
> > > GlusterFS volumes in HC HE oVirt 3.6.7 /
> > > GlusterFS 3.7.17
> > > 
> > > Giuseppe Ragusa has shared a OneDrive file. To view it, click the
> > > following link.
> > > 
> > > vols.tar.gz <https://1drv.ms/u/s!Am_io8oW4r10bw5KMtEtKgpcRoI>
> > > 
> > > 
> > > 
> > > From: Ramesh Nachimuthu 
> > > Sent: Monday, December 12, 2016, 09:32
> > > To: Giuseppe Ragusa
> > > Cc: us...@ovirt.org
> > > Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring
> > > GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17
> > > 
> > > On 12/09/2016 08:50 PM, Giuseppe Ragusa wrote:
> > > > Hi all,
> > > >
> > > > I'm writing to ask about the following problem (in a HC HE oVirt 3.6.7
> > > > GlusterFS 3.7.17 3-hosts-replica-with-arbiter sharded-volumes setup all
> > > > on
> > > > CentOS 7.2):
> > > >
> > > >  From /var/log/messages:
> > > >
> > > > Dec  9 15:27:46 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR
> > > > Internal
> > > > server error#012Traceback (most recent call last):#012  File
> > > > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > > > _serveRequest#012res = method(**params)#012  File
> > > > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012
> > > > result
> > > > = fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py",
> > > > line
> > > > 117, in status#012return self._gluster.volumeStatus(volumeName,
> > > > brick,
> > > > statusOption)#012  File "/usr/share/vdsm/gluster/api.py", line 86, in
> > > > wrapper#012rv = func(*args, **kwargs)#012  File
> > > > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > > > statusOption)#012  File "/usr/share/vdsm/supervdsm.py", line 50, in
> > > > __call__#012return callMethod()#012  File
> > > > "/usr/share/vdsm/supervdsm.py", line 48, in #012
> > > > **kwargs)#012
> > > > File "", line 2, in glusterVolumeStatus#012  File
> > > > "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
> > > >   llmethod#012raise convert_to_error(kind, result)#012KeyError:
> > > >   'device'
> > > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Extracting
> > > > Engine
> > > > VM OVF from the OVF_STORE
> > > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:OVF_STORE volume
> > > > path:
> > > > /rhev/data-center/mnt/glusterSD/shockley.gluster.private:_enginedomain/1d60fd45-507d-4a78-8294-d642b3178ea3/images/22a172de-698e-4cc5-bff0-082882fb3347/8738287c-8a25-4a2a-a53a-65c366a972a1
> > > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Found
> > > > an OVF for HE VM, trying to convert
> > > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Got
> > > > vm.conf from OVF_STORE
> > > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Current
> > > > state
> > > > En

Re: [Gluster-users] [ovirt-users] Python stack trace for VDSM while monitoring GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17

2016-12-15 Thread Ramesh Nachimuthu




- Original Message -
> From: "Giuseppe Ragusa" 
> To: "Ramesh Nachimuthu" 
> Cc: us...@ovirt.org
> Sent: Friday, December 16, 2016 2:42:18 AM
> Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring 
> GlusterFS volumes in HC HE oVirt 3.6.7 /
> GlusterFS 3.7.17
> 
> Giuseppe Ragusa has shared a OneDrive file. To view it, click the
> following link.
> 
> vols.tar.gz <https://1drv.ms/u/s!Am_io8oW4r10bw5KMtEtKgpcRoI>
> 
> 
> 
> From: Ramesh Nachimuthu 
> Sent: Monday, December 12, 2016, 09:32
> To: Giuseppe Ragusa
> Cc: us...@ovirt.org
> Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring
> GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17
> 
> On 12/09/2016 08:50 PM, Giuseppe Ragusa wrote:
> > Hi all,
> >
> > I'm writing to ask about the following problem (in a HC HE oVirt 3.6.7
> > GlusterFS 3.7.17 3-hosts-replica-with-arbiter sharded-volumes setup all on
> > CentOS 7.2):
> >
> >  From /var/log/messages:
> >
> > Dec  9 15:27:46 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR Internal
> > server error#012Traceback (most recent call last):#012  File
> > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > _serveRequest#012res = method(**params)#012  File
> > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012result
> > = fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py", line
> > 117, in status#012return self._gluster.volumeStatus(volumeName, brick,
> > statusOption)#012  File "/usr/share/vdsm/gluster/api.py", line 86, in
> > wrapper#012rv = func(*args, **kwargs)#012  File
> > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > statusOption)#012  File "/usr/share/vdsm/supervdsm.py", line 50, in
> > __call__#012return callMethod()#012  File
> > "/usr/share/vdsm/supervdsm.py", line 48, in #012**kwargs)#012
> > File "", line 2, in glusterVolumeStatus#012  File
> > "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
> >   llmethod#012raise convert_to_error(kind, result)#012KeyError:
> >   'device'
> > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Extracting Engine
> > VM OVF from the OVF_STORE
> > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:OVF_STORE volume
> > path:
> > /rhev/data-center/mnt/glusterSD/shockley.gluster.private:_enginedomain/1d60fd45-507d-4a78-8294-d642b3178ea3/images/22a172de-698e-4cc5-bff0-082882fb3347/8738287c-8a25-4a2a-a53a-65c366a972a1
> > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Found
> > an OVF for HE VM, trying to convert
> > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Got
> > vm.conf from OVF_STORE
> > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Current state
> > EngineUp (score: 3400)
> > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Best remote
> > host read.mgmt.private (id: 2, score: 3400)
> > Dec  9 15:27:48 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR Internal
> > server error#012Traceback (most recent call last):#012  File
> > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > _serveRequest#012res = method(**params)#012  File
> > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012result
> > = fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py", line
> > 117, in status#012return self._gluster.volumeStatus(volumeName, brick,
> > statusOption)#012  File "/usr/share/vdsm/gluster/api.py", line 86, in
> > wrapper#012rv = func(*args, **kwargs)#012  File
> > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > statusOption)#012  File "/usr/share/vdsm/supervdsm.py", line 50, in
> > __call__#012return callMethod()#012  File
> > "/usr/share/vdsm/supervdsm.py", line 48, in #012**kwargs)#012
> > File "", line 

Re: [Gluster-users] [ovirt-users] have to run gluster peer detach to remove a host form a gluster cluster.

2016-01-17 Thread Ramesh Nachimuthu

Hi Nathanaël Blanchet,

This could be because of a recent change we made for Gluster clusters: 
we now stop all gluster processes when a gluster host is moved to Maintenance. 
Can you attach the engine log for further analysis?


Regards,
Ramesh

On 01/15/2016 09:28 PM, Nathanaël Blanchet wrote:



Hi all,

When I want to remove a host from a gluster cluster, the engine tells me
that it fails to remove the host.
Once I run a manual gluster peer detach, the host is successfully removed.
It seems to be a bug, doesn't it?



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] nagios-gluster plugin

2015-11-30 Thread Ramesh Nachimuthu
You are hitting the NRPE payload size issue: NRPE currently supports only 1024 
bytes of payload, so the payload size has to be increased. This issue is being 
tracked in the Nagios tracker at http://tracker.nagios.org/view.php?id=564. In the 
meantime, you can rebuild nrpe with the patch 
http://tracker.nagios.org/file_download.php?file_id=269&type=bug and try again.

Note: you have to update nrpe on the storage nodes and nrpe-plugins on the 
Nagios server side after rebuilding nrpe with the above patch.
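A rough outline of that rebuild on an RPM-based system (a sketch; exact
spec/patch handling can differ between nrpe versions):

# on a build host with rpm-build and yum-utils installed
yumdownloader --source nrpe
rpm -ivh nrpe-*.src.rpm
# add the payload patch from the tracker to ~/rpmbuild/SOURCES and reference it
# in the spec file (it raises the packet buffer limit from 1024 to 8192 bytes)
rpmbuild -ba ~/rpmbuild/SPECS/nrpe.spec
# then install the rebuilt nrpe on the storage nodes and the rebuilt
# check_nrpe plugin package on the Nagios server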

Regards,
Ramesh

- Original Message -
> From: "Amudhan P" 
> To: gluster-users@gluster.org
> Sent: Monday, November 30, 2015 2:37:40 PM
> Subject: [Gluster-users] nagios-gluster plugin
> 
> Hi,
> 
> I am trying to use nagios-gluster plugin to monitor my gluster test setup in
> Ubuntu 14.04 server.
> 
> OS : Ubuntu 14.04
> Gluster version : 3.7.6
> Nagios version : core 3.5.1
> 
> My current setup.
> 
> node 1 = nagios monitor server
> node 2 = gluster data node with 10 brick (172.16.5.66)
> node 3 = gluster data node with 10 brick
> 
> 
> A normal Nagios NRPE command works fine:
> 
> root@node1:~$ /usr/lib/nagios/plugins/check_nrpe -H 172.16.5.66 -c check_load
> OK - load average: 0.00, 0.01, 0.05|load1=0.000;15.000;30.000;0;
> load5=0.010;10.000;25.000;0; load15=0.050;5.000;20.000;
> 
> But when I try to run discovery.py, I am getting the error below:
> 
> root@node1:~$ /usr/local/lib/nagios/plugins/gluster/discovery.py -c vmgfstst
> -H 172.16.5.66
> Traceback (most recent call last):
> File "/usr/local/lib/nagios/plugins/gluster/discovery.py", line 541, in
> 
> clusterdata = discoverCluster(args.hostip, args.cluster, args.timeout)
> File "/usr/local/lib/nagios/plugins/gluster/discovery.py", line 90, in
> discoverCluster
> componentlist = discoverVolumes(hostip, timeout)
> File "/usr/local/lib/nagios/plugins/gluster/discovery.py", line 58, in
> discoverVolumes
> timeout=timeout)
> File "/usr/local/lib/nagios/plugins/gluster/server_utils.py", line 118, in
> execNRPECommand
> resultDict = json.loads(outputStr)
> File "/usr/lib/python2.7/json/__init__.py", line 338, in loads
> return _default_decoder.decode(s)
> File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
> obj, end = self.raw_decode(s, idx=_w(s, 0).end())
> File "/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode
> obj, end = self.scan_once(s, idx)
> ValueError: ('Invalid control character at: line 1 column 1024 (char 1023)',
> '{"vmgfsvol1": {"name": "vmgfsvol1", "disperseCount": "10", "bricks":
> [{"brickpath": "/media/disk1", "brickaddress": "172.16.5.66", "hostUuid":
> "9408184a-298e-4b6c-9273-d1a460ca139a"}, {"brickpath": "/media/disk2",
> "brickaddress": "172.16.5.66", "hostUuid":
> "9408184a-298e-4b6c-9273-d1a460ca139a"}, {"brickpath": "/media/disk3",
> "brickaddress": "172.16.5.66", "hostUuid":
> "9408184a-298e-4b6c-9273-d1a460ca139a"}, {"brickpath": "/media/disk4",
> "brickaddress": "172.16.5.66", "hostUuid":
> "9408184a-298e-4b6c-9273-d1a460ca139a"}, {"brickpath": "/media/disk5",
> "brickaddress": "172.16.5.66", "hostUuid":
> "9408184a-298e-4b6c-9273-d1a460ca139a"}, {"brickpath": "/media/disk6",
> "brickaddress": "172.16.5.66", "hostUuid":
> "9408184a-298e-4b6c-9273-d1a460ca139a"}, {"brickpath": "/media/disk7",
> "brickaddress": "172.16.5.66", "hostUuid":
> "9408184a-298e-4b6c-9273-d1a460ca139a"}, {"brickpath": "/media/disk8",
> "brickaddress": "172.16.5.66", "hostUuid":
> "9408184a-298e-4b6c-9273-d1a460ca139a"}, {"brickpath": "/media/disk9",
> "brickaddre\n')
> 
> 
> But when I run the discover volume list command, it works:
> root@node1:~$ /usr/lib/nagios/plugins/check_nrpe -H 172.16.5.66 -c
> discover_volume_list
> {"vmgfsvol1": {"type": "DISTRIBUTED_DISPERSE", "name": "vmgfsvol1"}}
> 
> 
> Looking for help to solve this issue.
> 
> 
> regards
> Amudhan P
> 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster & Nagios False Alerts

2015-10-20 Thread Ramesh Nachimuthu



On 10/20/2015 04:53 PM, Gary Armstrong wrote:

Hi Gluster experts,

I have a three-brick Gluster setup that I'm monitoring, which 
throws a warning when 2 bricks are detected and a critical when 1 or 
fewer are detected.


Every now and again, seemingly randomly, I will get a critical warning 
on one of the three servers, saying that no bricks are found.  This is 
incorrect as when you log onto the server and check gluster vol status 
all three bricks are online and healthy. After a few minutes Nagios 
returns to reporting a healthy volume.


Does anyone else monitor their gluster volume with Nagios, and see 
random critical alerts?




We have never experienced this issue in our test setup. Can you post the 
Nagios log from /var/log/nagios/nagios.log?


Regards,
Ramesh


Cheers,
Gary



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster-Nagios

2015-10-09 Thread Ramesh Nachimuthu



On 10/09/2015 11:44 AM, Punit Dambiwal wrote:

Hi Sahina,

I have done the same but still the same result...



Please update the nrpe package on the storage nodes and 
nagios-plugins-nrpe on the Nagios server side with the new build, and 
restart the nrpe service on all the nodes. Also run 
/usr/lib64/nagios/plugins/check_nrpe -H <host> to check that basic 
NRPE checks are working.
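Once both sides are updated, rerunning the commands from earlier in this thread
is the quickest way to confirm the fix:

# from the Nagios server
/usr/lib64/nagios/plugins/check_nrpe -H stor1 -c discover_volume_list
/usr/lib64/nagios/plugins/gluster/discovery.py -c ssd -H stor1
# if the returned JSON is no longer cut off around 1024 bytes, the new
# payload size is in effect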


Regards,
Ramesh



On Fri, Oct 9, 2015 at 12:23 PM, Sahina Bose <sab...@redhat.com> wrote:


You can update the packages with the ones built from source.
You will need to update both the client and server nrpe packages
with the modified payload limit to resolve this
- nagios-plugins-nrpe
- nrpe

Have you done that?


On 10/09/2015 07:17 AM, Punit Dambiwal wrote:

Hi Ramesh,

Even after recompiling nrpe with the increased value, it's still the same
issue...

Thanks,
Punit

On Fri, Oct 9, 2015 at 9:21 AM, Punit Dambiwal <hypu...@gmail.com> wrote:

Hi Ramesh,

Thanks for the update... as I have installed nagios and nrpe via
yum, should I remove nrpe and reinstall it from the source
package?

Thanks,
Punit

On Thu, Oct 8, 2015 at 6:49 PM, Ramesh Nachimuthu
<rnach...@redhat.com> wrote:

Looks like you are hitting the NRPE Payload issue.
Standard NRPE packages from epel/fedora has 1024 bytes
payload limit. We have to increment this to 8192 to fix
this. You can see more info at

http://serverfault.com/questions/613288/truncating-return-data-as-it-is-bigger-then-nrpe-allows.


Let me know if u need any more info.

Regards,
Ramesh


On 10/08/2015 02:48 PM, Punit Dambiwal wrote:

Hi,

I am getting the following error :-


[root@monitor-001 yum.repos.d]#
/usr/lib64/nagios/plugins/gluster/discovery.py -c ssd -H
stor1
Traceback (most recent call last):
  File "/usr/lib64/nagios/plugins/gluster/discovery.py",
line 510, in 
clusterdata = discoverCluster(args.hostip,
args.cluster, args.timeout)
  File "/usr/lib64/nagios/plugins/gluster/discovery.py",
line 88, in discoverCluster
componentlist = discoverVolumes(hostip, timeout)
  File "/usr/lib64/nagios/plugins/gluster/discovery.py",
line 56, in discoverVolumes
timeout=timeout)
  File
"/usr/lib64/nagios/plugins/gluster/server_utils.py",
line 107, in execNRPECommand
resultDict = json.loads(outputStr)
  File "/usr/lib64/python2.6/json/__init__.py", line
307, in loads
return _default_decoder.decode(s)
  File "/usr/lib64/python2.6/json/decoder.py", line 319,
in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib64/python2.6/json/decoder.py", line 336,
in raw_decode
obj, end = self._scanner.iterscan(s, **kw).next()
  File "/usr/lib64/python2.6/json/scanner.py", line 55,
in iterscan
rval, next_pos = action(m, context)
  File "/usr/lib64/python2.6/json/decoder.py", line 183,
in JSONObject
value, end = iterscan(s, idx=end,
context=context).next()
  File "/usr/lib64/python2.6/json/scanner.py", line 55,
in iterscan
rval, next_pos = action(m, context)
  File "/usr/lib64/python2.6/json/decoder.py", line 183,
in JSONObject
value, end = iterscan(s, idx=end,
context=context).next()
  File "/usr/lib64/python2.6/json/scanner.py", line 55,
in iterscan
rval, next_pos = action(m, context)
  File "/usr/lib64/python2.6/json/decoder.py", line 217,
in JSONArray
value, end = iterscan(s, idx=end,
context=context).next()
  File "/usr/lib64/python2.6/json/scanner.py", line 55,
in iterscan
rval, next_pos = action(m, context)
  File "/usr/lib64/python2.6/json/decoder.py", line 183,
in JSONObject
value, end = iterscan(s, idx=end,
context=context).next()
  File "/usr/lib64/python2.6/json/scanner.py", line 55,
in iterscan
rval, next_pos = action(m, context)
  File "/usr/lib64/python2.6/json/decoder.py", line 155,
in JSONString
return scanstring(match.string, matc

Re: [Gluster-users] Gluster-Nagios

2015-10-08 Thread Ramesh Nachimuthu
Looks like you are hitting the NRPE payload issue. Standard NRPE 
packages from EPEL/Fedora have a 1024-byte payload limit; we have to 
increase it to 8192 to fix this. You can see more info at 
http://serverfault.com/questions/613288/truncating-return-data-as-it-is-bigger-then-nrpe-allows.

Let me know if you need any more info.

Regards,
Ramesh

On 10/08/2015 02:48 PM, Punit Dambiwal wrote:

Hi,

I am getting the following error :-


[root@monitor-001 yum.repos.d]# 
/usr/lib64/nagios/plugins/gluster/discovery.py -c ssd -H stor1

Traceback (most recent call last):
  File "/usr/lib64/nagios/plugins/gluster/discovery.py", line 510, in 


clusterdata = discoverCluster(args.hostip, args.cluster, args.timeout)
  File "/usr/lib64/nagios/plugins/gluster/discovery.py", line 88, in 
discoverCluster

componentlist = discoverVolumes(hostip, timeout)
  File "/usr/lib64/nagios/plugins/gluster/discovery.py", line 56, in 
discoverVolumes

timeout=timeout)
  File "/usr/lib64/nagios/plugins/gluster/server_utils.py", line 107, 
in execNRPECommand

resultDict = json.loads(outputStr)
  File "/usr/lib64/python2.6/json/__init__.py", line 307, in loads
return _default_decoder.decode(s)
  File "/usr/lib64/python2.6/json/decoder.py", line 319, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib64/python2.6/json/decoder.py", line 336, in raw_decode
obj, end = self._scanner.iterscan(s, **kw).next()
  File "/usr/lib64/python2.6/json/scanner.py", line 55, in iterscan
rval, next_pos = action(m, context)
  File "/usr/lib64/python2.6/json/decoder.py", line 183, in JSONObject
value, end = iterscan(s, idx=end, context=context).next()
  File "/usr/lib64/python2.6/json/scanner.py", line 55, in iterscan
rval, next_pos = action(m, context)
  File "/usr/lib64/python2.6/json/decoder.py", line 183, in JSONObject
value, end = iterscan(s, idx=end, context=context).next()
  File "/usr/lib64/python2.6/json/scanner.py", line 55, in iterscan
rval, next_pos = action(m, context)
  File "/usr/lib64/python2.6/json/decoder.py", line 217, in JSONArray
value, end = iterscan(s, idx=end, context=context).next()
  File "/usr/lib64/python2.6/json/scanner.py", line 55, in iterscan
rval, next_pos = action(m, context)
  File "/usr/lib64/python2.6/json/decoder.py", line 183, in JSONObject
value, end = iterscan(s, idx=end, context=context).next()
  File "/usr/lib64/python2.6/json/scanner.py", line 55, in iterscan
rval, next_pos = action(m, context)
  File "/usr/lib64/python2.6/json/decoder.py", line 155, in JSONString
return scanstring(match.string, match.end(), encoding, strict)
ValueError: ('Invalid control character at: line 1 column 1023 (char 
1023)', '{"ssd": {"name": "ssd", "disperseCount": "0", "bricks": 
[{"brickpath": "/bricks/b/vol1", "brickaddress": "stor1", "hostUuid": 
"5fcb5150-f0a5-4af8-b383-11fa5d3f82f0"}, {"brickpath": 
"/bricks/b/vol1", "brickaddress": "stor2", "hostUuid": 
"b78d42c1-6ad7-4044-b900-3ccfe915859f"}, {"brickpath": 
"/bricks/b/vol1", "brickaddress": "stor3", "hostUuid": 
"40500a9d-418d-4cc0-aec5-6efbfb3c24e5"}, {"brickpath": 
"/bricks/b/vol1", "brickaddress": "stor4", "hostUuid": 
"5886ef94-df5e-4845-a54c-0e01546d66ea"}, {"brickpath": 
"/bricks/c/vol1", "brickaddress": "stor1", "hostUuid": 
"5fcb5150-f0a5-4af8-b383-11fa5d3f82f0"}, {"brickpath": 
"/bricks/c/vol1", "brickaddress": "stor2", "hostUuid": 
"b78d42c1-6ad7-4044-b900-3ccfe915859f"}, {"brickpath": 
"/bricks/c/vol1", "brickaddress": "stor3", "hostUuid": 
"40500a9d-418d-4cc0-aec5-6efbfb3c24e5"}, {"brickpath": 
"/bricks/c/vol1", "brickaddress": "stor4", "hostUuid": 
"5886ef94-df5e-4845-a54c-0e01546d66ea"}, {"brickpath": 
"/bricks/d/vol1", "brickaddress": "stor1", "hostUuid": 
"5fcb5150-f0a5-4a\n')

[root@monitor-001 yum.repos.d]#
-

--
[root@monitor-001 yum.repos.d]# /usr/lib64/nagios/plugins/check_nrpe 
-H stor1 -c discover_volume_list
{"ssd": {"type": "DISTRIBUTED_REPLICATE", "name": "ssd"}, "lockvol": 
{"type": "REPLICATE", "name": "lockvol"}}

[root@monitor-001 yum.repos.d]#
--

Please help me to solve this issue...

Thanks,
Punit

On Fri, Oct 2, 2015 at 12:15 AM, Sahina Bose wrote:


The gluster-nagios packages have not been tested on Ubuntu

Looking at the error below, it looks like the rpm has not updated
the nrpe.cfg correctly. You may need to edit the spec file for the
config file paths on Ubuntu and rebuild.


On 10/01/2015 05:45 PM, Amudhan P wrote:

The "OSError: [Errno 2] No such file or directory" is now sorted out
by changing NRPE_PATH in "constants.py".

Now if I run discovery.py:

testusr@gfsovirt:/usr/local/lib/nagios/plugins/gluster$ sudo
python discovery.py -c vm-gfs -H 192.168.1.11
Failed to execute NRPE command 'discover_volume_list' in host
'192.168.1.11'
Error : NRPE: Command 'discover_volume_list' not defined
Make sure NR

Re: [Gluster-users] Gluster-Nagios

2015-09-28 Thread Ramesh Nachimuthu

Oops, it's not a web URL. You can run 'git clone' with those URLs.

Regards,
Ramesh

On 09/28/2015 05:54 PM, Mathieu Chateau wrote:

Hello,

from internet I get "Not Found" for gluster-nagios* url
Is it published ?

Cordialement,
Mathieu CHATEAU
http://www.lotp.fr

2015-09-28 14:21 GMT+02:00 Ramesh Nachimuthu <rnach...@redhat.com>:




On 09/24/2015 10:21 PM, André Bauer wrote:

I would also love to see packages for Ubuntu.

Are the sources of the Nagios plugins available somewhere?


Yes. You can find them at http://review.gluster.org/

gluster-nagios-common :
http://review.gluster.org/gluster-nagios-common
gluster-nagios-addons: http://review.gluster.org/gluster-nagios-addons
nagios-server-addons: http://review.gluster.org/nagios-server-addons


Regards
André

Am 20.09.2015 um 11:02 schrieb Prof. Dr. Michael Schefczyk:

Dear All,

In June 2014, the gluster-nagios team (thanks!) published the availability 
of gluster-nagios-common and gluster-nagios-addons on this list. As far as I 
can tell, this quite extensive gluster nagios monitoring tool is available for 
el6 only. Are there known plans to make this available for el7 outside the 
RHEL-repos 
(http://ftp.redhat.de/pub/redhat/linux/enterprise/7Server/en/RHS/SRPMS/), e.g. 
for use with oVirt / Centos 7 also? It would be good to be able to monitor 
gluster without playing around with scripts from sources other than a rpm repo.

Regards,

Michael
___
Gluster-users mailing list
Gluster-users@gluster.org <mailto:Gluster-users@gluster.org>
http://www.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org <mailto:Gluster-users@gluster.org>
http://www.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster-Nagios

2015-09-28 Thread Ramesh Nachimuthu



On 09/24/2015 10:21 PM, André Bauer wrote:

I would also love to see packages for Ubuntu.

Are the sources of the Nagios plugins available somewhere?


Yes. You can find them at http://review.gluster.org/

gluster-nagios-common : http://review.gluster.org/gluster-nagios-common
gluster-nagios-addons: http://review.gluster.org/gluster-nagios-addons
nagios-server-addons: http://review.gluster.org/nagios-server-addons


Regards
André

Am 20.09.2015 um 11:02 schrieb Prof. Dr. Michael Schefczyk:

Dear All,

In June 2014, the gluster-nagios team (thanks!) published the availability of 
gluster-nagios-common and gluster-nagios-addons on this list. As far as I can 
tell, this quite extensive gluster nagios monitoring tool is available for el6 
only. Are there known plans to make this available for el7 outside the 
RHEL-repos 
(http://ftp.redhat.de/pub/redhat/linux/enterprise/7Server/en/RHS/SRPMS/), e.g. 
for use with oVirt / Centos 7 also? It would be good to be able to monitor 
gluster without playing around with scripts from sources other than a rpm repo.

Regards,

Michael
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Quorum For replica 3 storage

2015-07-20 Thread Ramesh Nachimuthu



On 07/20/2015 12:44 PM, Punit Dambiwal wrote:

Hi Atin,

Earlier I was adding the bricks and creating the volume through oVirt 
itself... this time I did it through the command line. I added all 15 
bricks (5 x 3) through the command line.

It seems everything was OK before "optimize for virt storage", but after 
that, quorum kicked in and failed the whole thing...




"Optimize for Virt" action in oVirt sets the following volume options.

group = > virt
storage.owner-uid => 36
storage.owner-gid => 36
server.allow-insecure => on
network.ping-timeout => 10
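If you ever need to apply the same settings by hand from the gluster CLI, it
would look roughly like this (replace <volname> with your volume name):

gluster volume set <volname> group virt
gluster volume set <volname> storage.owner-uid 36
gluster volume set <volname> storage.owner-gid 36
gluster volume set <volname> server.allow-insecure on
gluster volume set <volname> network.ping-timeout 10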

Regards,
Ramesh

On Mon, Jul 20, 2015 at 3:07 PM, Atin Mukherjee wrote:


In one of your earlier email you mentioned that after adding a brick
volume status stopped working. Can you point me to the glusterd
log for
that transaction?

~Atin

On 07/20/2015 12:11 PM, Punit Dambiwal wrote:
> Hi Atin,
>
> Please find the below details :-
>
> [image: Inline image 1]
>
> [image: Inline image 2]
>
> Now when i set the optimize for the virt storage under ovirt and
restart
> glusterd service on any node...it start failing the quorum..
>
> [image: Inline image 3]
>
> [image: Inline image 4]
>
> Thanks,
> Punit
>
> On Mon, Jul 20, 2015 at 10:44 AM, Punit Dambiwal <hypu...@gmail.com> wrote:
>
>> HI Atin,
>>
>> Apologies for the delay response...
>>
>> 1. When you added the brick was the command successful?
 Yes..it was successful..
>> 2. If volume status is failing what's output its throwing in
the console
>> and how about the glusterd log?
 I will reproduce the issue again and update you..
>>
>> On Mon, Jul 13, 2015 at 11:46 AM, Atin Mukherjee
>> <amukh...@redhat.com>
>> wrote:
>>
>>>
>>>
>>> On 07/13/2015 05:19 AM, Punit Dambiwal wrote:
 Hi Sathees,

 With 3 bricks i can get the gluster volume statusbut
after added
>>> more
 brickscan not get gluster volume status
>>> The information is still incomplete in respect to analyze the
problem.
>>> Further questions:
>>>
>>> 1. When you added the brick was the command successful?
>>> 2. If volume status is failing what's output its throwing in
the console
>>> and how about the glusterd log?
>>>
>>> ~Atin

 On Sun, Jul 12, 2015 at 11:09 AM, SATHEESARAN
<sasun...@redhat.com>
>>> wrote:

> On 07/11/2015 02:46 PM, Atin Mukherjee wrote:
>
>>
>> On 07/10/2015 03:03 PM, Punit Dambiwal wrote:
>>
>>> Hi,
>>>
>>> I have deployed one replica 3 storage...but i am facing
some issue
>>> with
>>> quorum...
>>>
>>> Let me elaborate more :-
>>>
>>> 1. I have 3 node machines and every machine has 5
HDD(Bricks)...No
>>> RAID...Just JBOD...
>>> 2. Gluster working fine when just add 3 HDD as below :-
>>>
>>> B HDD from server 1
>>> B HDD from server 2
>>> B HDD from server 3
>>>
>>> But when i add more bricks as below :-
>>>
>>> ---
>>> [root@stor1 ~]# gluster volume info
>>>
>>> Volume Name: 3TB
>>> Type: Distributed-Replicate
>>> Volume ID: 5be9165c-3402-4083-b3db-b782da2fb8d8
>>> Status: Stopped
>>> Number of Bricks: 5 x 3 = 15
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: stor1:/bricks/b/vol1
>>> Brick2: stor2:/bricks/b/vol1
>>> Brick3: stor3:/bricks/b/vol1
>>> Brick4: stor1:/bricks/c/vol1
>>> Brick5: stor2:/bricks/c/vol1
>>> Brick6: stor3:/bricks/c/vol1
>>> Brick7: stor1:/bricks/d/vol1
>>> Brick8: stor2:/bricks/d/vol1
>>> Brick9: stor3:/bricks/d/vol1
>>> Brick10: stor1:/bricks/e/vol1
>>> Brick11: stor2:/bricks/e/vol1
>>> Brick12: stor3:/bricks/e/vol1
>>> Brick13: stor1:/bricks/f/vol1
>>> Brick14: stor2:/bricks/f/vol1
>>> Brick15: stor3:/bricks/f/vol1
>>> Options Reconfigured:
>>> nfs.disable: off
>>> user.cifs: enable
>>> auth.allow: *
>>> performance.quick-read: off
>>> performance.read-ahead: off
>>> performance.io-cache: off
>>> performance.stat-prefetch: off
>>> cluster.eager-lock: enable
>>> network.remote-dio: enable
>>> cluster.quorum-type: auto
>>> cluster.server-quorum-type: server
>>> storage.owner-uid: 36
>>> storage.owner-gid: 36
>>> 
>>>
>>> Brick added successfully without any error but after 1 min
quorum
>>> failed
>>> and gluster stop working...
>>>
>> Punit,
>
> And what do