Re: [Gluster-users] [Gluster-infra] Which version of GlusterFS do you recommend?

2014-11-19 Thread Pranith Kumar Karampuri


On 11/20/2014 04:25 AM, Vince Loschiavo wrote:
I'm running 3.6.1 in pre-production right now.  So far so good.  No 
critical bugs found.

What tests do you run?

Pranith

Centos 6.5,
QEMU/KVM
Fuse Mount

On Wed, Nov 19, 2014 at 1:39 PM, Joe Julian wrote:



On 11/19/2014 01:34 PM, Justin Clift wrote:

On Wed, 19 Nov 2014 17:26:15 +0100
Andreas Hollaus <andreas.holl...@ericsson.com> wrote:

Hi,

I'm curious about the different 'families' of GlusterFS (3.4, 3.5 &
3.6). What are the differences between them and how do I know which one
will be most suitable for my application (depending on if I
prioritize robustness or lots of features)?

Hmm, this might help from the robustness/features perspective: :)

  * 3.4.x series has been around for ages now, so is pretty battle
    tested.  We still release patch versions for this for important
    bugs which show up.

3.4 has some, imho, critical known bugs with fixes that have
already been applied to 3.5 and were not backported. For this lack
of support I no longer recommend 3.4.

  * 3.5.x series has been around a while as well, and is also pretty
    well tested by now.  It has more features and several internal
    optimisations/improvements over the 3.4.x series.  We release
    patches for this series too for important bugs that show up.

This is the version I currently recommend.

  * 3.6.x series just came out.  It's our latest and greatest feature
    set, but may be a bit "bleeding edge" until the next patch release
    (3.6.2), which should be coming out soon.

I'm still waiting on significant reports of success before I'll
recommend 3.6. I also watch for bugs that can only be fixed in
this release, or lack of support for prior releases, or
significant improvements in usability before I upgrade my
recommendations.

This makes me realise we really need a version/features table on the
website, with ticks and crosses to show which version of GlusterFS
added what. :D

+ Justin


___
Gluster-users mailing list
Gluster-users@gluster.org 
http://supercolony.gluster.org/mailman/listinfo/gluster-users




--
-Vince Loschiavo



Re: [Gluster-users] v3.6.1 vs v3.5.2 self heal - help (Nagios related)

2014-11-19 Thread Nishanth Thomas
Hi Vince,

Thank you for the quick response.

For the time being, to reduce the frequency of the alerts, please check
whether flap detection is enabled. If not, please go ahead and enable it.
It will suppress the alerts if there is a frequent change in the service status.
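In Nagios that is a per-service switch; a minimal sketch of a service definition with flap detection turned on (the host, service, and command names are made-up placeholders, and the thresholds are illustrative):

```
define service {
    use                       generic-service
    host_name                 gluster-node1          ; placeholder host
    service_description       Volume Self-Heal       ; placeholder service
    check_command             check_vol_heal_status  ; placeholder command
    flap_detection_enabled    1
    low_flap_threshold        5.0    ; % state-change rate below which flapping ends
    high_flap_threshold       20.0   ; % state-change rate above which flapping starts
}
```

While a service is flagged as flapping, Nagios suppresses the notifications rather than the checks themselves.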

Currently the plugin checks the number of un-synced entries and, if it is
greater than 0, changes the state of the service, which sends the alert. Probably
this part requires a change: we may have to introduce some thresholds that can
be used to decide whether or not to change the state of the service.
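A minimal sketch of that thresholding, as a shell filter over the heal-info output (the WARN/CRIT values and the exact output format assumed here are illustrative, not the plugin's actual defaults):

```shell
#!/bin/sh
# Sketch: count un-synced entries from `gluster volume heal <vol> info`
# output and map them to Nagios states via thresholds, instead of alerting
# on any count greater than 0. Thresholds are assumptions; tune per volume.
WARN=${WARN:-10}
CRIT=${CRIT:-50}

count_unsynced() {
  # Sums the "Number of entries: N" lines that heal info prints per brick.
  awk -F': ' '/^Number of entries:/ { total += $2 } END { print total + 0 }'
}

heal_state() {
  entries=$(count_unsynced)
  if [ "$entries" -ge "$CRIT" ]; then
    echo "CRITICAL - $entries un-synced entries"; return 2
  elif [ "$entries" -ge "$WARN" ]; then
    echo "WARNING - $entries un-synced entries"; return 1
  fi
  echo "OK - $entries un-synced entries"; return 0
}

# Usage: gluster volume heal volumename info | heal_state
```

Dropping something like this between the heal-info call and the state mapping would keep transient single-digit counts from flipping the service state on every check.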

Regarding why the upgrade is causing the files to go out of sync, someone else
needs to answer.

Thanks,
Nishanth  

- Original Message -
From: "Vince Loschiavo" 
To: "Nishanth Thomas" 
Cc: "Humble Devassy Chirammal" , 
"gluster-users@gluster.org" , "Sahina Bose" 

Sent: Wednesday, November 19, 2014 11:46:28 PM
Subject: Re: [Gluster-users] v3.6.1 vs v3.5.2 self heal - help (Nagios related)

Thank you!

I think we may need some sort of dampening method and more specific input
into Nagios.  i.e. Details on which files are out-of-sync, versus just the
number of files out-of-sync.

I'm using these:  http://download.gluster.org/pub/gluster/glusterfs-nagios/


On Wed, Nov 19, 2014 at 10:14 AM, Nishanth Thomas 
wrote:

> Hi Vince,
>
> Are you referring to the monitoring scripts mentioned in the blog(
> http://gopukrish.wordpress.com/2014/11/16/monitor-glusterfs-using-nagios-plugin/)
> or the scripts that are part of gluster(
> http://gluster.org/pipermail/gluster-users.old/2014-June/017819.html)?
> Please confirm?
>
> Thanks,
> Nishanth
>
> - Original Message -
> From: "Humble Devassy Chirammal" 
> To: "Vince Loschiavo" 
> Cc: "gluster-users@gluster.org" , "Sahina
> Bose" , ntho...@redhat.com
> Sent: Wednesday, November 19, 2014 11:22:18 PM
> Subject: Re: [Gluster-users] v3.6.1 vs v3.5.2 self heal - help (Nagios
> related)
>
> Hi Vince,
> It could be a behavioural change in how the heal process output is captured
> in the latest GlusterFS. If that is the case, we may tune the interval at
> which Nagios collects the heal info output, or some other settings, to avoid
> continuous alerts. I am CCing the gluster-nagios devs.
>
> --Humble
>
>
> On Wed, Nov 19, 2014 at 9:50 PM, Vince Loschiavo 
> wrote:
>
> >
> > Hello Gluster Community,
> >
> > I have been using the Nagios monitoring scripts, mentioned in the below
> > thread, on 3.5.2 with great success. The most useful of these is the self
> > heal.
> >
> > However, I've just upgraded to 3.6.1 on the lab and the self heal daemon
> > has become quite aggressive.  I continually get alerts/warnings on 3.6.1
> > that virt disk images need self heal, then they clear.  This is not the
> > case on 3.5.2.
> >
> > Configuration:
> > 2 node, 2 brick replicated volume with 2x1GB LAG network between the
> peers
> > using this volume as a QEMU/KVM virt image store through the fuse mount
> on
> > Centos 6.5.
> >
> > Example:
> > on 3.5.2:
> > *gluster volume heal volumename info:  *shows the bricks and number of
> > entries to be healed: 0
> >
> > On v3.5.2 - During normal gluster operations, I can run this command over
> > and over again, 2-4 times per second, and it will always show 0 entries
> to
> > be healed.  I've used this as an indicator that the bricks are
> > synchronized.
> >
> > Last night, I upgraded to 3.6.1 in lab and I'm seeing different behavior.
> > Running *gluster volume heal volumename info*, during normal operations,
> > will show a file out-of-sync, seemingly between every block written to
> disk
> > then synced to the peer.  I can run the command over and over again, 2-4
> > times per second, and it will almost always show something out of sync.
> > The individual files change, meaning:
> >
> > Example:
> > 1st Run: shows file1 out of sync
> > 2nd run: shows file 2 and file 3 out of sync but file 1 is now in sync
> > (not in the list)
> > 3rd run: shows file 3 and file 4 out of sync but file 1 and 2 are in sync
> > (not in the list).
> > ...
> > nth run: shows 0 files out of sync
> > nth+1 run: shows file 3 and 12 out of sync.
> >
> > From looking at the virtual machines running off this gluster volume,
> it's
> > obvious that gluster is working well.  However, this obviously plays
> havoc
> > with Nagios and alerts.  Nagios will run the heal info and get different
> > and non-useful results each time, and will send alerts.
> >
> > Is this behavior change (3.5.2 vs 3.6.1) expected?  Is there a way to
> tune
> > the settings or change the monitoring method to get better results into
> > Nagios.
> >
> > Thank you,
> >
> > --
> > -Vince Loschiavo
> >
> >
> > On Wed, Nov 19, 2014 at 4:35 AM, Humble Devassy Chirammal <
> > humble.deva...@gmail.com> wrote:
> >
> >> Hi Gopu,
> >>
> >> Awesome !!
> >>
> >> We can  have a Gluster blog about this implementation.
> >>
> >> --Humble
> >>
> >>
> >>
> >>
> >> On Wed, Nov 19, 2014 at 5:38 PM, Gopu Krishnan <
> gopukrishn

Re: [Gluster-users] NFS crashes - bug 1010241

2014-11-19 Thread Shawn Heisey
On 11/19/2014 6:53 PM, Ravishankar N wrote:
> Heterogeneous op-version cluster is not supported. You would need to upgrade 
> all servers.
> 
> http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.5

I would be running 3.4.2 bricks with a later 3.4.x release on the NFS
peers, not different minor versions.  I was hoping that at least would
be a setup that is likely to work.  I would not expect things to work
right on a long-term basis if I mixed 3.4.2 bricks with 3.5 or 3.6 NFS
servers.

I could really use a fixed 3.4.x, but having just read Joe Julian's
message saying that he no longer recommends 3.4 because of the large
number of bugfixes that have not been backported, I am not holding my
breath.  My monitor/restart script manages the problem fairly
effectively, and we won't be using Gluster for longer than a few more
months.

I would be willing to try patching the 3.4.2 source and installing new
binaries, if someone can tell me exactly how to obtain the proper source
and how to build new RPM packages (CentOS 6).  I installed 3.4.2 using
the glusterfs-epel.repo file from download.gluster.org when 3.4.2 was new.

Thanks,
Shawn



Re: [Gluster-users] NFS crashes - bug 1010241

2014-11-19 Thread Paul Robert Marino
In my experience this usually happens because of NFS lockd trying to traverse a firewall. Turn off NFS locking on the source host and you will be fine. The root cause is not a problem with Gluster; it's actually a deficiency in the NFS RFCs about RPC which has never been properly addressed.

-- Sent from my HP Pre3

On Nov 19, 2014 10:45 PM, Alex Crow wrote:

Also if OP is on non-supported gluster 3.4.x rather than RHSS or at
least 3.5.x, and given sufficient space, how about taking enough hosts 
out of the cluster to bring fully up to date and store the data, syncing 
the data across, updating the originals, syncing back and then adding 
back the hosts you took out to do the first backup?
On 20/11/14 01:53, Ravishankar N wrote:
> On 11/19/2014 10:11 PM, Shawn Heisey wrote:
>> We are running into this crash stacktrace on 3.4.2.
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1010241
>>
>> The NFS process dies with no predictability.  I've written a shell
>> script that detects the crash and runs a process to completely kill all
>> gluster processes and restart glusterd, which has eliminated
>> customer-facing fallout from these problems.
>
> No kill required. `gluster volume start <volname> force` should re-spawn the dead processes.
>
>
>> Because of continual stability problems from day one, the gluster
>> storage is being phased out, but there are many terabytes of data still
>> used there.  It would be nice to have it remain stable while we still
>> use it.  As soon as we can fully migrate all data to another storage
>> solution, the gluster machines will be decommissioned.
>>
>> That BZ id is specific to version 3.6, and it's always difficult for
>> mere mortals to determine which fixes have been backported to earlier
>> releases.
>>
> A (not so?) easy way is to clone the source, check out the desired branch, and grep the git log for the commit message you're interested in.
>
>> Has the fix for bug 1010241 been backported to any 3.4 release?
> I just did the grep and no, it's not. I don't know if a backport is possible (CC'ed the respective devs). The (two) fixes are present in 3.5, though.
>
>
>If so,
>> is it possible for me to upgrade my servers without being concerned
>> about the distributed+replicated volume going offline?  When we upgraded
>> from 3.3 to 3.4, the volume was not fully functional as soon as we
>> upgraded one server, and did not become fully functional until all
>> servers were upgraded and rebooted.
>>
>> Assuming again that there is a 3.4 version with the fix ... the gluster
>> peers that I use for NFS do not have any bricks.  Would I need to
>> upgrade ALL the servers, or could I get away with just upgrading the
>> servers that are being used for NFS?
>
> Heterogeneous op-version cluster is not supported. You would need to upgrade all servers.
>
> http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.5
>
> Thanks,
> Ravi
>
>> Thanks,
>> Shawn

-- 
This message is intended only for the addressee and may contain
confidential information. Unless you are that person, you may not
disclose its contents or use it in any way and are requested to delete
the message along with any attachments and notify us immediately.
"Transact" is operated by Integrated Financial Arrangements plc. 29
Clement's Lane, London EC4N 7AE. Tel: (020) 7608 4900 Fax: (020) 7608
5300. (Registered office: as above; Registered in England and Wales
under number: 3727592). Authorised and regulated by the Financial
Conduct Authority (entered on the Financial Services Register; no. 190856).


Re: [Gluster-users] NFS crashes - bug 1010241

2014-11-19 Thread Alex Crow
Also if OP is on non-supported gluster 3.4.x rather than RHSS or at 
least 3.5.x, and given sufficient space, how about taking enough hosts 
out of the cluster to bring fully up to date and store the data, syncing 
the data across, updating the originals, syncing back and then adding 
back the hosts you took out to do the first backup?





On 20/11/14 01:53, Ravishankar N wrote:

On 11/19/2014 10:11 PM, Shawn Heisey wrote:

We are running into this crash stacktrace on 3.4.2.

https://bugzilla.redhat.com/show_bug.cgi?id=1010241

The NFS process dies with no predictability.  I've written a shell
script that detects the crash and runs a process to completely kill all
gluster processes and restart glusterd, which has eliminated
customer-facing fallout from these problems.


No kill required. `gluster volume start <volname> force` should re-spawn the
dead processes.



Because of continual stability problems from day one, the gluster
storage is being phased out, but there are many terabytes of data still
used there.  It would be nice to have it remain stable while we still
use it.  As soon as we can fully migrate all data to another storage
solution, the gluster machines will be decommissioned.

That BZ id is specific to version 3.6, and it's always difficult for
mere mortals to determine which fixes have been backported to earlier
releases.


A (not so?) easy way is to clone the source, check out the desired branch,
and grep the git log for the commit message you're interested in.


Has the fix for bug 1010241 been backported to any 3.4 release?

I just did the grep and no, it's not. I don't know if a backport is
possible (CC'ed the respective devs). The (two) fixes are present in 3.5, though.


   If so,

is it possible for me to upgrade my servers without being concerned
about the distributed+replicated volume going offline?  When we upgraded
from 3.3 to 3.4, the volume was not fully functional as soon as we
upgraded one server, and did not become fully functional until all
servers were upgraded and rebooted.

Assuming again that there is a 3.4 version with the fix ... the gluster
peers that I use for NFS do not have any bricks.  Would I need to
upgrade ALL the servers, or could I get away with just upgrading the
servers that are being used for NFS?


Heterogeneous op-version cluster is not supported. You would need to upgrade 
all servers.

http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.5

Thanks,
Ravi


Thanks,
Shawn






Re: [Gluster-users] NFS crashes - bug 1010241

2014-11-19 Thread Ravishankar N
On 11/19/2014 10:11 PM, Shawn Heisey wrote:
> We are running into this crash stacktrace on 3.4.2.
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1010241
> 
> The NFS process dies with no predictability.  I've written a shell
> script that detects the crash and runs a process to completely kill all
> gluster processes and restart glusterd, which has eliminated
> customer-facing fallout from these problems.


No kill required. `gluster volume start <volname> force` should re-spawn the
dead processes.
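A monitor script built on that could then just re-spawn rather than kill everything and restart glusterd; a minimal sketch, where the volume name and the way the NFS daemon is detected (the pgrep pattern) are assumptions to adapt to your setup:

```shell
#!/bin/sh
# Sketch: re-spawn dead volume daemons with `gluster volume start
# <volname> force` instead of a full kill/restart cycle.

nfs_alive() {
  # Assumption: the gluster NFS server shows up as a glusterfs process
  # with "nfs" in its command line.
  pgrep -f 'glusterfs.*nfs' >/dev/null 2>&1
}

respawn_if_dead() {
  vol=$1
  if ! nfs_alive; then
    gluster volume start "$vol" force
  fi
}

# Run from cron, e.g. every minute:
#   * * * * * /usr/local/sbin/gluster-nfs-watchdog myvol
# where the script body is just: respawn_if_dead "$1"
```

This keeps the client-facing recovery automatic without tearing down healthy brick processes.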


> 
> Because of continual stability problems from day one, the gluster
> storage is being phased out, but there are many terabytes of data still
> used there.  It would be nice to have it remain stable while we still
> use it.  As soon as we can fully migrate all data to another storage
> solution, the gluster machines will be decommissioned.
> 
> That BZ id is specific to version 3.6, and it's always difficult for
> mere mortals to determine which fixes have been backported to earlier
> releases.
> 

A (not so?) easy way is to clone the source, check out the desired branch,
and grep the git log for the commit message you're interested in.
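As a sketch, that search can be wrapped in a tiny helper (the clone URL and release-branch names follow the public glusterfs repository and are assumptions):

```shell
# Sketch: check whether a fix was backported to a release branch by
# grepping that branch's log for the bug ID or commit subject.
find_backport() {
  repo=$1; ref=$2; pattern=$3
  git -C "$repo" log --oneline "$ref" | grep -i "$pattern"
}

# Usage (branch naming per the public glusterfs repo; an assumption):
#   git clone https://github.com/gluster/glusterfs.git
#   find_backport glusterfs origin/release-3.4 '1010241'   # no output -> not backported
#   find_backport glusterfs origin/release-3.5 '1010241'
```

An empty result only means the commit message doesn't mention the pattern, so it is worth trying both the Bugzilla ID and a few words from the fix's subject line.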

> Has the fix for bug 1010241 been backported to any 3.4 release?

I just did the grep and no, it's not. I don't know if a backport is
possible (CC'ed the respective devs). The (two) fixes are present in 3.5, though.


  If so,
> is it possible for me to upgrade my servers without being concerned
> about the distributed+replicated volume going offline?  When we upgraded
> from 3.3 to 3.4, the volume was not fully functional as soon as we
> upgraded one server, and did not become fully functional until all
> servers were upgraded and rebooted.
> 
> Assuming again that there is a 3.4 version with the fix ... the gluster
> peers that I use for NFS do not have any bricks.  Would I need to
> upgrade ALL the servers, or could I get away with just upgrading the
> servers that are being used for NFS?


Heterogeneous op-version cluster is not supported. You would need to upgrade 
all servers.

http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.5

Thanks,
Ravi

> 
> Thanks,
> Shawn



Re: [Gluster-users] socket time out in fio benchmark

2014-11-19 Thread Justin Clift
On Fri, 14 Nov 2014 10:12:30 +0800 (CST)
"shuau li"  wrote:

> Hi all,
> 
> Now I have tried glusterfs 3.6.0, using the same configuration to test.
> But the situation has not improved; several hours later, the bricks' logs
> still say "socket disconnection". I suspect this problem may be
> caused by epoll, so I patched the system using
> http://review.gluster.org/#/c/3842/13//COMMIT_MSG, but it seems it does
> not work.

OK, sounds like a problem that needs looking into then.  Out of
curiosity, which operating system are you running this all on?

We'll probably need to collect the logs from the nodes too, in order
to investigate.

I personally don't have the skill to do that :(, but hopefully one
of the other guys can assist. (?)

Regards and best wishes,

Justin Clift

-- 
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift


Re: [Gluster-users] [Gluster-infra] Which version of GlusterFS do you recommend?

2014-11-19 Thread Vince Loschiavo
I'm running 3.6.1 in pre-production right now.  So far so good.  No
critical bugs found.
Centos 6.5,
QEMU/KVM
Fuse Mount

On Wed, Nov 19, 2014 at 1:39 PM, Joe Julian  wrote:

>
> On 11/19/2014 01:34 PM, Justin Clift wrote:
>
>> On Wed, 19 Nov 2014 17:26:15 +0100
>> Andreas Hollaus  wrote:
>>
>>> Hi,
>>>
>>> I'm curious about the different 'families' of GlusterFS (3.4, 3.5 &
>>> 3.6). What are the differences between them and how do I know which one
>>> will be most suitable for my application (depending on if I
>>> prioritize robustness or lots of features)?
>>>
>> Hmm, this might help from the robustness/features perspective: :)
>>
>>   * 3.4.x series has been around for ages now, so is pretty battle
>> tested.  We still release patch versions for this for important
>> bugs which show up.
>>
> 3.4 has some, imho, critical known bugs with fixes that have already been
> applied to 3.5 and were not backported. For this lack of support I no
> longer recommend 3.4.
>
>>
>>   * 3.5.x series has been around a while as well, and is also pretty
>> well tested by now.  It has more features and several internal
>> optimisations/improvements over the 3.4.x series.  We release
>> patches for this series too for important bugs that show up.
>>
> This is the version I currently recommend.
>
>>
>>   * 3.6.x series just came out.  It's our latest and greatest feature
>> set, but may be a bit "bleeding edge" until the next patch release
>> (3.6.2), which should be coming out soon.
>>
> I'm still waiting on significant reports of success before I'll recommend
> 3.6. I also watch for bugs that can only be fixed in this release, or lack
> of support for prior releases, or significant improvements in usability
> before I upgrade my recommendations.
>
>>
>> This makes me realise we really need a version/features table on the
>> website, with ticks and crosses to show which version of GlusterFS
>> added what. :D
>>
>> + Justin
>>
>>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>



-- 
-Vince Loschiavo

Re: [Gluster-users] [Gluster-infra] Which version of GlusterFS do you recommend?

2014-11-19 Thread Joe Julian


On 11/19/2014 01:34 PM, Justin Clift wrote:

On Wed, 19 Nov 2014 17:26:15 +0100
Andreas Hollaus  wrote:

Hi,

I'm curious about the different 'families' of GlusterFS (3.4, 3.5 &
3.6). What are the differences between them and how do I know which one
will be most suitable for my application (depending on if I
prioritize robustness or lots of features)?

Hmm, this might help from the robustness/features perspective: :)

  * 3.4.x series has been around for ages now, so is pretty battle
tested.  We still release patch versions for this for important
bugs which show up.
3.4 has some, imho, critical known bugs with fixes that have already 
been applied to 3.5 and were not backported. For this lack of support I 
no longer recommend 3.4.


  * 3.5.x series has been around a while as well, and is also pretty
well tested by now.  It has more features and several internal
optimisations/improvements over the 3.4.x series.  We release
patches for this series too for important bugs that show up.

This is the version I currently recommend.


  * 3.6.x series just came out.  It's our latest and greatest feature
set, but may be a bit "bleeding edge" until the next patch release
(3.6.2), which should be coming out soon.
I'm still waiting on significant reports of success before I'll 
recommend 3.6. I also watch for bugs that can only be fixed in this 
release, or lack of support for prior releases, or significant 
improvements in usability before I upgrade my recommendations.


This makes me realise we really need a version/features table on the
website, with ticks and crosses to show which version of GlusterFS
added what. :D

+ Justin





Re: [Gluster-users] Which version of GlusterFS do you recommend?

2014-11-19 Thread Justin Clift
On Wed, 19 Nov 2014 17:26:15 +0100
Andreas Hollaus  wrote:
> Hi,
> 
> I'm curious about the different 'families' of GlusterFS (3.4, 3.5 &
> 3.6). What are the differences between them and how do I know which one
> will be most suitable for my application (depending on if I
> prioritize robustness or lots of features)?

Hmm, this might help from the robustness/features perspective: :)

 * 3.4.x series has been around for ages now, so is pretty battle
   tested.  We still release patch versions for this for important
   bugs which show up.

 * 3.5.x series has been around a while as well, and is also pretty
   well tested by now.  It has more features and several internal
   optimisations/improvements over the 3.4.x series.  We release
   patches for this series too for important bugs that show up.

 * 3.6.x series just came out.  It's our latest and greatest feature
   set, but may be a bit "bleeding edge" until the next patch release
   (3.6.2), which should be coming out soon.

This makes me realise we really need a version/features table on the
website, with ticks and crosses to show which version of GlusterFS
added what. :D

+ Justin



Re: [Gluster-users] v3.6.1 vs v3.5.2 self heal - help (Nagios related)

2014-11-19 Thread Vince Loschiavo
Thank you!

I think we may need some sort of dampening method and more specific input
into Nagios.  i.e. Details on which files are out-of-sync, versus just the
number of files out-of-sync.

I'm using these:  http://download.gluster.org/pub/gluster/glusterfs-nagios/


On Wed, Nov 19, 2014 at 10:14 AM, Nishanth Thomas 
wrote:

> Hi Vince,
>
> Are you referring to the monitoring scripts mentioned in the blog(
> http://gopukrish.wordpress.com/2014/11/16/monitor-glusterfs-using-nagios-plugin/)
> or the scripts that are part of gluster(
> http://gluster.org/pipermail/gluster-users.old/2014-June/017819.html)?
> Please confirm?
>
> Thanks,
> Nishanth
>
> - Original Message -
> From: "Humble Devassy Chirammal" 
> To: "Vince Loschiavo" 
> Cc: "gluster-users@gluster.org" , "Sahina
> Bose" , ntho...@redhat.com
> Sent: Wednesday, November 19, 2014 11:22:18 PM
> Subject: Re: [Gluster-users] v3.6.1 vs v3.5.2 self heal - help (Nagios
> related)
>
> Hi Vince,
> It could be a behavioural change in how the heal process output is captured
> in the latest GlusterFS. If that is the case, we may tune the interval at
> which Nagios collects the heal info output, or some other settings, to avoid
> continuous alerts. I am CCing the gluster-nagios devs.
>
> --Humble
>
>
> On Wed, Nov 19, 2014 at 9:50 PM, Vince Loschiavo 
> wrote:
>
> >
> > Hello Gluster Community,
> >
> > I have been using the Nagios monitoring scripts, mentioned in the below
> > thread, on 3.5.2 with great success. The most useful of these is the self
> > heal.
> >
> > However, I've just upgraded to 3.6.1 on the lab and the self heal daemon
> > has become quite aggressive.  I continually get alerts/warnings on 3.6.1
> > that virt disk images need self heal, then they clear.  This is not the
> > case on 3.5.2.
> >
> > Configuration:
> > 2 node, 2 brick replicated volume with 2x1GB LAG network between the
> peers
> > using this volume as a QEMU/KVM virt image store through the fuse mount
> on
> > Centos 6.5.
> >
> > Example:
> > on 3.5.2:
> > *gluster volume heal volumename info:  *shows the bricks and number of
> > entries to be healed: 0
> >
> > On v3.5.2 - During normal gluster operations, I can run this command over
> > and over again, 2-4 times per second, and it will always show 0 entries
> to
> > be healed.  I've used this as an indicator that the bricks are
> > synchronized.
> >
> > Last night, I upgraded to 3.6.1 in lab and I'm seeing different behavior.
> > Running *gluster volume heal volumename info*, during normal operations,
> > will show a file out-of-sync, seemingly between every block written to
> disk
> > then synced to the peer.  I can run the command over and over again, 2-4
> > times per second, and it will almost always show something out of sync.
> > The individual files change, meaning:
> >
> > Example:
> > 1st Run: shows file1 out of sync
> > 2nd run: shows file 2 and file 3 out of sync but file 1 is now in sync
> > (not in the list)
> > 3rd run: shows file 3 and file 4 out of sync but file 1 and 2 are in sync
> > (not in the list).
> > ...
> > nth run: shows 0 files out of sync
> > nth+1 run: shows file 3 and 12 out of sync.
> >
> > From looking at the virtual machines running off this gluster volume,
> it's
> > obvious that gluster is working well.  However, this obviously plays
> havoc
> > with Nagios and alerts.  Nagios will run the heal info and get different
> > and non-useful results each time, and will send alerts.
> >
> > Is this behavior change (3.5.2 vs 3.6.1) expected?  Is there a way to
> tune
> > the settings or change the monitoring method to get better results into
> > Nagios.
> >
> > Thank you,
> >
> > --
> > -Vince Loschiavo
> >
> >
> > On Wed, Nov 19, 2014 at 4:35 AM, Humble Devassy Chirammal <
> > humble.deva...@gmail.com> wrote:
> >
> >> Hi Gopu,
> >>
> >> Awesome !!
> >>
> >> We can  have a Gluster blog about this implementation.
> >>
> >> --Humble
> >>
> >>
> >>
> >>
> >> On Wed, Nov 19, 2014 at 5:38 PM, Gopu Krishnan <
> gopukrishnan...@gmail.com
> >> > wrote:
> >>
> >>> Thanks for all your help... I was able to configure nagios using the
> >>> glusterfs plugin. Following link shows how I configured it. Hope it
> helps
> >>> someone else.:
> >>>
> >>>
> >>>
> http://gopukrish.wordpress.com/2014/11/16/monitor-glusterfs-using-nagios-plugin/
> >>>
> >>> On Sun, Nov 16, 2014 at 11:44 AM, Humble Devassy Chirammal <
> >>> humble.deva...@gmail.com> wrote:
> >>>
>  Hi,
> 
>  Please look at this thread
>  http://gluster.org/pipermail/gluster-users.old/2014-June/017819.html
> 
>  Btw,  if you are around, we have a talk on same topic in upcoming
>  GlusterFS India meetup.
> 
>  Details can be fetched from:
>   http://www.meetup.com/glusterfs-India/
> 
>  --Humble
> 
> 
>  On Sun, Nov 16, 2014 at 11:23 AM, Gopu Krishnan <
>  gopukrishnan...@gmail.com> wrote:
> 
> > How can we monitor the glusters and alert us if something h

Re: [Gluster-users] RHEL6.6 provides Gluster 3.6.0-28.2 ?

2014-11-19 Thread Prasun Gera
This is again broken after RHBA-2014:1875 on RHSS 3. It was fixed for a
while after Niels' last post. Tried to run update today, and got a bunch of
libvirt conflicts.

On Sun, Nov 2, 2014 at 3:04 PM, Niels de Vos  wrote:

> On Sun, Nov 02, 2014 at 07:06:14PM +, Alan Orth wrote:
> > What's the prevailing wisdom on this?  For now I've just done the
> following
> > on my (CentOS) GlusterFS client:
> >
> > yum update -x "glusterfs*"
>
> (1) The main problem is that RHEL/CentOS now provides the glusterfs
> client packages in a version that is higher than the community GlusterFS
> packages. RHEL (or rather Red Hat Storage) took a pre-release of
> glusterfs-3.6 and included that in RHEL-6.6. This means that on
> updating, yum will find the newer RHEL packages, but there is no
> matching glusterfs-server. If you have glusterfs-server installed from
> the gluster.org repository, yum can not update glusterfs-server, and
> dependency errors are the result.
>
> (2) There is another complication in solving this issue. The 3.6
> release from the community provides libgfapi.so.7, whereas the RHEL
> packages provide libgfapi.so.0. Any applications (like qemu and
> samba-vfs-gluster) that link against the library would need to be
> rebuilt when the community packages are used.
>
> The first problem is going to be solved by releasing glusterfs-3.6.1 in
> the coming days. This gives the community packages a higher version
> number than the ones in RHEL/CentOS, so yum will prefer them and can
> resolve all dependencies, including glusterfs-server.
>
> The second problem is a little more complicated to solve. We have been
> discussing reverting the change that causes the library version
> to increase, and replacing it with more fine-grained symbol versioning.
> A rebuild of the related packages would then not be needed, and the
> libgfapi.so library will get loaded without issue by the existing
> binaries in the RHEL/CentOS repositories/channels.
>
> We'll be updating the mailinglists about progress on these topics, and I
> think a blog post will be published too.
>
> Cheers,
> Niels
>
> >
> > Cheers,
> >
> >
> > On Thu Oct 30 2014 at 7:05:48 PM Chad Feller  wrote:
> >
> > > On 10/27/2014 07:05 AM, Prasun Gera wrote:
> > > > Just wanted to check if this issue was reproduced by RedHat. I don't
> > > > see any updates on Satellite yet that resolve this.
> > >
> > > To exacerbate this issue, CentOS 6.6 has now dropped, including
> > > GlusterFS 3.6.0, making the problem more widespread.
> > >
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
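Until the 3.6.1 packages land, Alan's one-off `yum update -x "glusterfs*"` exclusion can be made permanent with an `exclude=` line in the distro repo files, so only the gluster.org repo can ever supply glusterfs packages. A minimal sketch (the repo file name and section headers below are illustrative, and it operates on a scratch copy so it can be tried without root):

```shell
# Make the gluster.org repo the only source of glusterfs packages by
# adding "exclude=glusterfs*" to the distro repos.  Shown against a
# scratch copy of a repo file; point $repo at e.g.
# /etc/yum.repos.d/CentOS-Base.repo (as root) to apply it for real.
repo=$(mktemp)
printf '[base]\nname=CentOS-Base\n\n[updates]\nname=CentOS-Updates\n' > "$repo"

# GNU sed: append the exclude line right after each section header.
sed -i '/^\[base\]/a exclude=glusterfs*' "$repo"
sed -i '/^\[updates\]/a exclude=glusterfs*' "$repo"

grep -n 'exclude=glusterfs' "$repo"
```

After this, a plain `yum update` will skip the RHEL/CentOS glusterfs builds entirely.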

Re: [Gluster-users] v3.6.1 vs v3.5.2 self heal - help (Nagios related)

2014-11-19 Thread Nishanth Thomas
Hi Vince,

Are you referring to the monitoring scripts mentioned in the blog
(http://gopukrish.wordpress.com/2014/11/16/monitor-glusterfs-using-nagios-plugin/)
or to the scripts that are part of gluster
(http://gluster.org/pipermail/gluster-users.old/2014-June/017819.html)?
Please confirm.

Thanks,
Nishanth

- Original Message -
From: "Humble Devassy Chirammal" 
To: "Vince Loschiavo" 
Cc: "gluster-users@gluster.org" , "Sahina Bose" 
, ntho...@redhat.com
Sent: Wednesday, November 19, 2014 11:22:18 PM
Subject: Re: [Gluster-users] v3.6.1 vs v3.5.2 self heal - help (Nagios related)

Hi Vince,
It could be a behavioural change in how the heal process output is captured
in the latest GlusterFS. If that is the case, we could tune the interval at
which Nagios collects the heal info output, or adjust other settings, to
avoid continuous alerts. I am CCing the gluster-nagios devs.

--Humble




Re: [Gluster-users] v3.6.1 vs v3.5.2 self heal - help (Nagios related)

2014-11-19 Thread Humble Devassy Chirammal
Hi Vince,
It could be a behavioural change in how the heal process output is captured
in the latest GlusterFS. If that is the case, we could tune the interval at
which Nagios collects the heal info output, or adjust other settings, to
avoid continuous alerts. I am CCing the gluster-nagios devs.

--Humble



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] NFS crashes - bug 1010241

2014-11-19 Thread Shawn Heisey
We are running into this crash stacktrace on 3.4.2.

https://bugzilla.redhat.com/show_bug.cgi?id=1010241

The NFS process dies with no predictability.  I've written a shell
script that detects the crash and runs a process to completely kill all
gluster processes and restart glusterd, which has eliminated
customer-facing fallout from these problems.
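Shawn's actual script wasn't posted, so purely as a sketch, a watchdog along those lines might look like the following (the pidfile path and restart command are assumptions to adjust for your layout):

```shell
#!/bin/sh
# Crude watchdog: if the Gluster NFS server is gone, kill every gluster
# process and restart glusterd, which respawns the daemons it manages.
# PIDFILE path and "service" command are illustrative, not canonical.
PIDFILE=/var/lib/glusterd/nfs/run/nfs.pid

# Returns 0 (restart needed) when the PID is empty or no longer running.
nfs_needs_restart() {
    pid="$1"
    [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null && return 1
    return 0
}

if [ -e "$PIDFILE" ] && nfs_needs_restart "$(cat "$PIDFILE")"; then
    pkill -f glusterfs
    pkill -x glusterd
    sleep 2
    service glusterd start
fi
```

Run from cron every minute, it does nothing as long as the NFS process stays alive.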

Because of continual stability problems from day one, the gluster
storage is being phased out, but there are many terabytes of data still
used there.  It would be nice to have it remain stable while we still
use it.  As soon as we can fully migrate all data to another storage
solution, the gluster machines will be decommissioned.

That BZ id is specific to version 3.6, and it's always difficult for
mere mortals to determine which fixes have been backported to earlier
releases.

Has the fix for bug 1010241 been backported to any 3.4 release?  If so,
is it possible for me to upgrade my servers without being concerned
about the distributed+replicated volume going offline?  When we upgraded
from 3.3 to 3.4, the volume stopped being fully functional as soon as we
upgraded one server, and did not become fully functional again until all
servers were upgraded and rebooted.

Assuming again that there is a 3.4 version with the fix ... the gluster
peers that I use for NFS do not have any bricks.  Would I need to
upgrade ALL the servers, or could I get away with just upgrading the
servers that are being used for NFS?

Thanks,
Shawn
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Which version of GlusterFS do you recommend?

2014-11-19 Thread Andreas Hollaus
Hi,

I'm curious about the different 'families' of GlusterFS (3.4, 3.5 & 3.6).
What are the differences between them, and how do I know which one will be
most suitable for my application (depending on whether I prioritize
robustness or lots of features)?

-- 
Regards Andreas

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] v3.6.1 vs v3.5.2 self heal - help (Nagios related)

2014-11-19 Thread Vince Loschiavo
Hello Gluster Community,

I have been using the Nagios monitoring scripts, mentioned in the below
thread, on 3.5.2 with great success. The most useful of these is the self
heal.

However, I've just upgraded to 3.6.1 in the lab, and the self-heal daemon
has become quite aggressive.  I continually get alerts/warnings on 3.6.1
that virt disk images need self-heal, and then they clear.  This is not the
case on 3.5.2.

Configuration:
A 2-node, 2-brick replicated volume with a 2x1Gb LAG network between the
peers, using this volume as a QEMU/KVM virt image store through the FUSE
mount on CentOS 6.5.

Example:
on 3.5.2:
gluster volume heal volumename info: shows the bricks and the number of
entries to be healed: 0

On v3.5.2 - During normal gluster operations, I can run this command over
and over again, 2-4 times per second, and it will always show 0 entries to
be healed.  I've used this as an indicator that the bricks are
synchronized.

Last night, I upgraded to 3.6.1 in the lab and I'm seeing different
behavior.  Running gluster volume heal volumename info during normal
operations will show a file out of sync, seemingly between every block
written to disk and its sync to the peer.  I can run the command over and
over again, 2-4 times per second, and it will almost always show something
out of sync.  The individual files change, meaning:

Example:
1st Run: shows file1 out of sync
2nd run: shows file 2 and file 3 out of sync but file 1 is now in sync (not
in the list)
3rd run: shows file 3 and file 4 out of sync but file 1 and 2 are in sync
(not in the list).
...
nth run: shows 0 files out of sync
nth+1 run: shows file 3 and 12 out of sync.

From looking at the virtual machines running off this gluster volume, it's
obvious that gluster is working well.  However, this obviously plays havoc
with Nagios and alerts.  Nagios will run the heal info and get different
and non-useful results each time, and will send alerts.

Is this behavior change (3.5.2 vs 3.6.1) expected?  Is there a way to tune
the settings or change the monitoring method to get better results in
Nagios?

Thank you,

-- 
-Vince Loschiavo
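One way to keep Nagios useful despite these transient entries is to sample heal info several times and alert only on entries that stay unhealed in every sample, so in-flight writes cancel out. A rough sketch, assuming unhealed entries are the heal-info output lines starting with "/" (verify against your own output):

```shell
#!/bin/sh
# Alert only on entries present in *every* heal-info sample, so the
# transient out-of-sync files seen on 3.6.1 do not trigger Nagios.

# Intersection of two newline-separated lists.
intersect() {
    a=$(mktemp); b=$(mktemp)
    printf '%s\n' "$1" | sort -u > "$a"
    printf '%s\n' "$2" | sort -u > "$b"
    comm -12 "$a" "$b"
    rm -f "$a" "$b"
}

check_heal() {
    vol="$1"
    # Assumption: unhealed entries are the output lines starting with "/".
    persistent=$(gluster volume heal "$vol" info | grep '^/')
    for i in 1 2 3 4; do
        [ -z "$persistent" ] && break
        sleep 2
        sample=$(gluster volume heal "$vol" info | grep '^/')
        persistent=$(intersect "$persistent" "$sample")
    done
    if [ -n "$persistent" ]; then
        echo "CRITICAL: persistently unhealed: $persistent"
        return 2
    fi
    echo "OK: no persistently unhealed entries"
    return 0
}

# Only meaningful on a machine with the gluster CLI installed.
command -v gluster >/dev/null 2>&1 && check_heal "${1:-volumename}"
```

The exit codes (0/2) follow the usual Nagios plugin convention, so the script can be dropped in as a check command.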


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Monitoring Nagios

2014-11-19 Thread Humble Devassy Chirammal
Hi Gopu,

Awesome !!

We could have a Gluster blog post about this implementation.

--Humble


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Monitoring Nagios

2014-11-19 Thread Gopu Krishnan
Thanks for all your help... I was able to configure nagios using the
glusterfs plugin. The following link shows how I configured it. Hope it
helps someone else:

http://gopukrish.wordpress.com/2014/11/16/monitor-glusterfs-using-nagios-plugin/
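For reference, wiring such a plugin into Nagios is just an ordinary command/service pair. The plugin path, flags, and host name below are placeholders, not the actual names from the post above, so check them against the plugin you installed:

```
# Hypothetical Nagios object definitions for a gluster heal check.
define command {
    command_name  check_gluster_heal
    command_line  /usr/lib64/nagios/plugins/check_gluster.sh -v $ARG1$
}

define service {
    use                   generic-service
    host_name             gluster-node1
    service_description   GlusterFS self-heal
    check_command         check_gluster_heal!volumename
    check_interval        5   ; normal_check_interval on Nagios 3.x
}
```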

On Sun, Nov 16, 2014 at 11:44 AM, Humble Devassy Chirammal <
humble.deva...@gmail.com> wrote:

> Hi,
>
> Please look at this thread
> http://gluster.org/pipermail/gluster-users.old/2014-June/017819.html
>
> Btw, if you are around, there is a talk on the same topic at the upcoming
> GlusterFS India meetup.
>
> Details can be fetched from:
>  http://www.meetup.com/glusterfs-India/
>
> --Humble
>
>
>
> On Sun, Nov 16, 2014 at 11:23 AM, Gopu Krishnan wrote:
>
>> How can we monitor the gluster volumes and get alerted if something goes
>> wrong? I found some nagios plugins, but they haven't worked so far. I am
>> still experimenting with those. Any suggestions would be much appreciated.
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users