Re: [Gluster-users] Gluster 3.4.2 on Redhat 6.5

2014-03-27 Thread Justin Clift
On 27/03/2014, at 6:58 PM, Jeff Darcy wrote:
> I see two separate bugs there.
>
> 1. A missing package requirement
> 2. The process hanging in a reproducible way.
>
> I've submitted a fix for #2.
>
> http://review.gluster.org/#/c/7360/

Sounds like this should be a late candidate for 3.4.3? If

Re: [Gluster-users] Gluster 3.4.2 on Redhat 6.5

2014-03-27 Thread Jeff Darcy
- Original Message -
> I see two separate bugs there.
> 1. A missing package requirement
> 2. The process hanging in a reproducible way.

I've submitted a fix for #2.

http://review.gluster.org/#/c/7360/

Re: [Gluster-users] Gluster 3.4.2 on Redhat 6.5

2014-03-26 Thread Joe Julian
> Cheers,
> Steve
>
> From: Carlos Capriotti [mailto:capriotti.car...@gmail.com]
> Sent: 25 March 2014 12:30
> To: Steve Thomas
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Gluster 3.4.2 on Redhat 6.5
>
> Steve:
>
> Tested that myself - not the nagi

Re: [Gluster-users] Gluster 3.4.2 on Redhat 6.5

2014-03-26 Thread Steve Thomas
zombie processes are no longer being "created".

Cheers,
Steve

From: Carlos Capriotti [mailto:capriotti.car...@gmail.com]
Sent: 25 March 2014 12:30
To: Steve Thomas
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster 3.4.2 on Redhat 6.5

Steve:

Tested that myself - not

Re: [Gluster-users] Gluster 3.4.2 on Redhat 6.5

2014-03-25 Thread Carlos Capriotti
Steve:

Tested that myself - not the nagios part, but the gluster commands you posted later - and no errors or zombies. Somebody else reported the same, so, sounds consistent. There must be another process there biting your gluster, turning it into a haunted scenario.

Cheers,
Carlos

On Thu, M

Re: [Gluster-users] Gluster 3.4.2 on Redhat 6.5

2014-03-24 Thread Viktor Villafuerte
> let $((bricksfound++))
> else
>   errors=("${errors[@]}" "$brick offline")
> fi
> ;;
> esac
> done < <( sudo gluster volume status ${VOLUME} detail)
>
> Anyone spot why this woul
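[An aside, not from the thread: with `done < <( ... )` the gluster CLI runs via process substitution, so its exit status is never visible to the script. A variant that captures the output first keeps the loop variables in the current shell and lets the check bail out cleanly if the CLI fails. A sketch, assuming bash; the exit code 3 (Nagios UNKNOWN) is my addition:

# Run the CLI to completion first so its exit status can be checked;
# with `done < <(...)` that status is not visible.
status=$(sudo gluster volume status ${VOLUME} detail) || {
    echo "UNKNOWN: gluster CLI failed"
    exit 3    # Nagios UNKNOWN
}
while read -r line; do
    : # same Brick/Online parsing as in the quoted fragment
done <<< "$status"
]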

Re: [Gluster-users] Gluster 3.4.2 on Redhat 6.5

2014-03-24 Thread Steve Thomas
From: gluster-users-boun...@gluster.org [mailto:gluster-users-boun...@gluster.org] On Behalf Of Steve Thomas
Sent: 24 March 2014 11:55
To: Carlos Capriotti
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster 3.4.2 on Redhat 6.5

Hi Carlos,

Thanks for coming back to me... in response to your querie

Re: [Gluster-users] Gluster 3.4.2 on Redhat 6.5

2014-03-24 Thread Steve Thomas
else
  errors=("${errors[@]}" "$brick offline")
fi
;;
esac
done < <( sudo gluster volume status ${VOLUME} detail)

Anyone spot why this would be an issue?

Thanks,
Steve

From: Carlos Capriotti [mailto:capriotti.car...@gmail.com]
Sen
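[For context, the quoted lines are the tail of a parse loop over `gluster volume status <VOLUME> detail`. A minimal reconstruction of what the full loop likely looks like, based only on the visible fragment - the Brick/Online case labels and the variable initialisation are assumptions:

#!/bin/bash
VOLUME=${1:-myvol}   # volume name is an assumption
bricksfound=0
errors=()

while read -r line; do
    field=($line)                  # split the status line into words
    case ${field[0]} in
    Brick)
        brick=${field[@]:2}        # remember the brick path
        ;;
    Online)
        if [ "${field[2]}" = "Y" ]; then
            let $((bricksfound++)) # kept verbatim from the fragment
        else
            errors=("${errors[@]}" "$brick offline")
        fi
        ;;
    esac
done < <( sudo gluster volume status ${VOLUME} detail )

echo "bricks online: $bricksfound"
for e in "${errors[@]}"; do echo "WARNING: $e"; done

Each run of this check invokes the gluster CLI once via process substitution, which matters later in the thread when zombie processes start to accumulate.]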

Re: [Gluster-users] Gluster 3.4.2 on Redhat 6.5

2014-03-22 Thread Carlos Capriotti
Ok, let's see if we can gather more info. I am not a specialist, but you know... another pair of eyes. My system has a single glusterd process and it has a pretty low PID, meaning it has not crashed and been respawned. What is the PID of your glusterd? How many zombie processes does top report? I've
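[For anyone following along, the numbers Carlos asks about can be pulled with standard tools; a sketch, assuming procps-style ps as shipped on RHEL 6:

# Show glusterd's PID and start time; a recent start time would
# suggest it has crashed and been respawned
ps -o pid,lstart,etime,args -C glusterd

# Count zombie (defunct) processes - the figure top shows as "zombie"
ps -eo stat | grep -c '^Z'
]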

Re: [Gluster-users] Gluster 3.4.2 on Redhat 6.5

2014-03-21 Thread Steve Thomas
Hi all... Further investigation shows in excess of 500 glusterd zombie processes, and the count is continuing to climb on the box... Any suggestions? I'm happy to provide logs etc. to get to the bottom of this.

From: Steve Thomas
Sent: 21 March 2014 13:21
To: '
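[Not from the thread, but one way to narrow down where 500+ zombies come from is to group them by parent PID, since a zombie persists only until its parent calls wait(). A sketch:

# List zombies with their parent PID and name
ps -eo stat,pid,ppid,comm | awk '$1 ~ /^Z/ {print $2, $3, $4}'

# Tally zombies per parent; whichever parent tops the list is the
# process failing to reap its children
ps -eo stat,ppid | awk '$1 ~ /^Z/ {n[$2]++} END {for (p in n) print n[p], p}' | sort -rn | head

# Then inspect the suspect parent's command line
ps -o pid,args -p <PPID>
]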

[Gluster-users] Gluster 3.4.2 on Redhat 6.5

2014-03-21 Thread Steve Thomas
Hi, I'm running Gluster 3.4.2 on Redhat 6.5 with 4 servers and a brick on each. This brick is mounted locally and used by Apache to serve audio files for an IVR system. Each of these audio files is typically around 80-100 KB. The system appears to be working OK in terms of health and status via g
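[Not stated in the post, but for concreteness, a setup like the one described could be built roughly as follows. Volume and path names are invented, and "replica 4" is an assumption - the post doesn't say whether the volume is replicated or distributed:

# One brick per server, 4-way replicated volume
gluster volume create ivr-audio replica 4 \
  server1:/export/brick server2:/export/brick \
  server3:/export/brick server4:/export/brick
gluster volume start ivr-audio

# On each server, mount the volume locally for Apache to read from
mount -t glusterfs localhost:/ivr-audio /var/www/audio
]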
