Re: [Gluster-users] glusterd crashing

2015-10-16 Thread Gene Liverman
Bug filed https://bugzilla.redhat.com/show_bug.cgi?id=1272436





--
*Gene Liverman*
Systems Integration Architect
Information Technology Services
University of West Georgia
glive...@westga.edu
678.839.5492

ITS: Making Technology Work for You!




On Thu, Oct 8, 2015 at 1:48 PM, Vijay Bellur <vbel...@redhat.com> wrote:

> On Thursday 08 October 2015 11:01 PM, Gene Liverman wrote:
>
>> Happy to do so... what all info should go in the bug report?
>>
>>
> Guidelines for logging a bug are available at [1]. Please try to provide
> relevant data requested in the Package Information, Cluster Information,
> Volume Information and Logs section of the guidelines page. Information
> pertaining to bricks, volume details & statedump can be skipped as it is
> the management daemon that is crashing here. Attaching the entire glusterd
> log would be helpful.
>
> Regards,
> Vijay
>
> [1]
> http://www.gluster.org/community/documentation/index.php/Bug_reporting_guidelines
>
>
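
For reference, the kind of commands that typically gather the package and
cluster information requested above look something like this (a minimal
sketch; exact log paths vary by install):

    rpm -qa | grep gluster          # package information
    gluster peer status             # cluster information
    gluster volume info             # volume information
    ls /var/log/glusterfs/          # default location of the glusterd log
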
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Suggested method for replacing an entire node

2015-10-08 Thread Gene Liverman
Thanks for all the replies! Just to make sure I have this right, the
following should work for *both* machines with a currently populated brick
and machines without one, provided the name and IP stay the same:

   - reinstall os
   - reinstall gluster software
   - start gluster

Do I need to do any peer probing or anything else? Do I need to do any
brick removal / adding (I'm thinking no but want to make sure)?
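
A rough post-reinstall sanity check, assuming the hostname and IP really do
stay the same and using the gv0 volume name from elsewhere in this thread,
might look like:

    service glusterd start          # or systemctl start glusterd on EL7
    gluster peer status             # the other nodes should list this peer again
    gluster volume status gv0       # the local brick should come back online
    gluster volume heal gv0 info    # watch self-heal catch the node up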




Thanks,
*Gene Liverman*
Systems Integration Architect
Information Technology Services
University of West Georgia
glive...@westga.edu

ITS: Making Technology Work for You!



On Thu, Oct 8, 2015 at 9:52 AM, Alastair Neil <ajneil.t...@gmail.com> wrote:

> Ahh that is good to know.
>
> On 8 October 2015 at 09:50, Atin Mukherjee <atin.mukherje...@gmail.com>
> wrote:
>
>> -Atin
>> Sent from one plus one
>> On Oct 8, 2015 7:17 PM, "Alastair Neil" <ajneil.t...@gmail.com> wrote:
>> >
>> > I think you should back up /var/lib/glusterd and then restore it after
>> the reinstall and installation of glusterfs packages.  Assuming the node
>> will have the same hostname and ip addresses and you are installing the
>> same version gluster bits, I think it should be fine.  I am assuming you
>> are not using SSL for the connections; if you are, you will need to back up
>> the keys for that too.
>> If the same machine is used without a hostname/IP change, backing up the
>> glusterd configuration *is not* needed as syncing the configuration will be
>> taken care of by peer handshaking.
>>
>> >
>> > -Alastair
>> >
>> > On 8 October 2015 at 00:12, Atin Mukherjee <amukh...@redhat.com> wrote:
>> >>
>> >>
>> >>
>> >> On 10/07/2015 10:28 PM, Gene Liverman wrote:
>> >> > I want to replace my existing CentOS 6 nodes with CentOS 7 ones. Is
>> >> > there a recommended way to go about this from the perspective of
>> >> > Gluster? I am running a 3 node replicated cluster (3 servers each
>> with 1
>> >> > brick). In case it makes a difference, my bricks are on separate
>> drives
>> >> > formatted as XFS so it is possible that I can do my OS reinstall
>> without
>> >> > wiping out the data on two nodes (the third had a hardware failure
>> so it
>> >> > will be fresh from the ground up).
>> >> That's possible. You could do the re-installation one node at a time. Once
>> >> the node comes back online, the self-heal daemon will take care of healing
>> >> the data. The AFR team can correct me if I am wrong.
>> >>
>> >> Thanks,
>> >> Atin
>> >> >
>> >> >
>> >> >
>> >> >
>> >> > Thanks,
>> >> > *Gene Liverman*
>> >> > Systems Integration Architect
>> >> > Information Technology Services
>> >> > University of West Georgia
>> >> > glive...@westga.edu <mailto:glive...@westga.edu>
>> >> >
>> >> > ITS: Making Technology Work for You!
>> >> >
>> >> >
>> >> > ___
>> >> > Gluster-users mailing list
>> >> > Gluster-users@gluster.org
>> >> > http://www.gluster.org/mailman/listinfo/gluster-users
>> >> >
>> >> ___
>> >> Gluster-users mailing list
>> >> Gluster-users@gluster.org
>> >> http://www.gluster.org/mailman/listinfo/gluster-users
>> >
>> >
>> >
>> > ___
>> > Gluster-users mailing list
>> > Gluster-users@gluster.org
>> > http://www.gluster.org/mailman/listinfo/gluster-users
>>
>>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] glusterd crashing

2015-10-08 Thread Gene Liverman
Happy to do so... what all info should go in the bug report?





--
*Gene Liverman*
Systems Integration Architect
Information Technology Services
University of West Georgia
glive...@westga.edu
678.839.5492

ITS: Making Technology Work for You!




On Thu, Oct 8, 2015 at 1:04 PM, Vijay Bellur <vbel...@redhat.com> wrote:

> On Wednesday 07 October 2015 09:20 PM, Atin Mukherjee wrote:
>
>> This looks like a glibc corruption to me. Which distribution platform
>> are you running Gluster on?
>>
>>
> A crash in glibc would mostly be due to memory corruption caused by the
> application. Can we please open a tracking bug if not done yet?
>
> Thanks,
> Vijay
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] How to replace a dead brick? (3.6.5)

2015-10-08 Thread Gene Liverman
So... this kinda applies to me too and I want to get some clarification: I
have the following setup

# gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: fc50d049-cebe-4a3f-82a6-748847226099
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: eapps-gluster01:/export/sdb1/gv0
Brick2: eapps-gluster02:/export/sdb1/gv0
Brick3: eapps-gluster03:/export/sdb1/gv0
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
nfs.drc: off

eapps-gluster03 had a hard drive failure so I replaced it, formatted the
drive and now need gluster to be happy again. Gluster put a .glusterfs
folder in /export/sdb1/gv0 but nothing else has shown up and the brick is
offline. I read the docs on replacing a brick but seem to be missing
something and would appreciate some help. Thanks!
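
For what it's worth, the approach described in the reply quoted below would
look roughly like this for the volume above; the new brick path is purely
illustrative, since replace-brick generally wants a path different from the
old one:

    gluster volume replace-brick gv0 \
        eapps-gluster03:/export/sdb1/gv0 \
        eapps-gluster03:/export/sdb1/gv0-new \
        commit force
    gluster volume heal gv0 info    # self-heal then repopulates the new brick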





--
*Gene Liverman*
Systems Integration Architect
Information Technology Services
University of West Georgia
glive...@westga.edu

ITS: Making Technology Work for You!



On Thu, Oct 8, 2015 at 2:46 PM, Pranith Kumar Karampuri <pkara...@redhat.com
> wrote:

> On 3.7.4, all you need to do is execute "gluster volume replace-brick
>  commit force" and the rest will be taken care of by AFR. We are in the
> process of coming up with new commands like "gluster volume reset-brick
>  start/commit" for wiping/re-formatting of the disk. So wait just
> a little longer :-).
>
> Pranith
>
>
> On 10/08/2015 11:26 AM, Lindsay Mathieson wrote:
>
>
> On 8 October 2015 at 07:19, Joe Julian <j...@julianfamily.org> wrote:
>
>> I documented this on my blog at
>> https://joejulian.name/blog/replacing-a-brick-on-glusterfs-340/ which is
>> still accurate for the latest version.
>>
>> The bug report I filed for this was closed without resolution. I assume
>> there's no plans for ever making this easy for administrators.
>> https://bugzilla.redhat.com/show_bug.cgi?id=991084
>>
>
>
> Yes, it's the sort of workaround one can never remember in an emergency;
> you'd have to google it ...
>
> In the case I was working with, probably easier and quicker to do a
> remove-brick/add-brick.
>
> thanks,
>
>
> --
> Lindsay
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] glusterd crashing

2015-10-07 Thread Gene Liverman
#0  0x003b91432625 in raise (sig=) at
../nptl/sysdeps/unix/sysv/linux/raise.c:64
#1  0x003b91433e05 in abort () at abort.c:92
#2  0x003b91470537 in __libc_message (do_abort=2, fmt=0x3b915588c0 "***
glibc detected *** %s: %s: 0x%s ***\n") at
../sysdeps/unix/sysv/linux/libc_fatal.c:198

#3  0x003b91475f4e in malloc_printerr (action=3, str=0x3b9155687d
"corrupted double-linked list", ptr=, ar_ptr=) at malloc.c:6350
#4  0x003b914763d3 in malloc_consolidate (av=0x7fee9020) at
malloc.c:5216
#5  0x003b91479c28 in _int_malloc (av=0x7fee9020, bytes=) at malloc.c:4415
#6  0x003b9147a7ed in __libc_calloc (n=,
elem_size=) at malloc.c:4093
#7  0x003b9345c81f in __gf_calloc (nmemb=,
size=, type=59, typestr=0x7fee9ed2d708
"gf_common_mt_rpc_trans_t") at mem-pool.c:117
#8  0x7fee9ed2830b in socket_server_event_handler (fd=, idx=, data=0xf3eca0, poll_in=1, poll_out=,
poll_err=) at socket.c:2622
#9  0x003b9348b0a0 in event_dispatch_epoll_handler (data=0xf408b0) at
event-epoll.c:575
#10 event_dispatch_epoll_worker (data=0xf408b0) at event-epoll.c:678
#11 0x003b91807a51 in start_thread (arg=0x7fee9db3b700) at
pthread_create.c:301
#12 0x003b914e893d in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:115







--
*Gene Liverman*
Systems Integration Architect
Information Technology Services
University of West Georgia
glive...@westga.edu
678.839.5492

ITS: Making Technology Work for You!




On Wed, Oct 7, 2015 at 12:06 AM, Atin Mukherjee <amukh...@redhat.com> wrote:

>
>
> On 10/07/2015 09:34 AM, Atin Mukherjee wrote:
> >
> >
> > On 10/06/2015 08:15 PM, Gene Liverman wrote:
> >> Sorry for the delay... the joys of multiple proverbial fires at once.
> >> In /var/log/messages I found this for our most recent crash:
> >>
> >> Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]:
> >> pending frames:
> >> Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]:
> >> patchset: git://git.gluster.com/glusterfs.git
> >> <http://git.gluster.com/glusterfs.git>
> >> Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]:
> >> signal received: 6
> >> Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]: time
> >> of crash:
> >> Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]:
> >> 2015-10-03 04:26:21
> >> Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]:
> >> configuration details:
> >> Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]: argp
> 1
> >> Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]:
> >> backtrace 1
> >> Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]:
> dlfcn 1
> >> Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]:
> >> libpthread 1
> >> Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]:
> >> llistxattr 1
> >> Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]:
> setfsid 1
> >> Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]:
> >> spinlock 1
> >> Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]:
> epoll.h 1
> >> Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]:
> xattr.h 1
> >> Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]:
> >> st_atim.tv_nsec 1
> >> Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]:
> >> package-string: glusterfs 3.7.4
> >> Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]:
> -
> >>
> >>
> >> I have posted etc-glusterfs-glusterd.vol.log
> >> to http://pastebin.com/Pzq1j5J3. I also put the core file and an
> >> sosreport on my web server for you but don't want to leave them there
> >> for long so I'd appreciate it if you'd let me know once you get them.
> >> They are at the following url's:
> >> http://www.westga.edu/~gliverma/tmp-files/core.36992
> > Could you get the backtrace and share with us with the following
> commands:
> >
> > $ gdb glusterd2 
> > $ bt
> Also "t a a bt" output in gdb might help.
> >
> >>
> http://www.westga.edu/~gliverma/tmp-files/sosreport-gliverman.gluster-crashing-20151006101239.tar.xz
> >>
> http://www.westga.edu/~gliverma/tmp-files/sosreport-gliverman.gluster-crashing-20151006101239.tar.xz.md5
> >>
> >>
> >>
> >>
> >> Thanks again for the help!
> >> *Gene Liverman*
> >> Systems Integration Architect
> >> Information Technolo

Re: [Gluster-users] glusterd crashing

2015-10-07 Thread Gene Liverman
There are a couple of answers to that question...

   - The core dump is from a fully patched RHEL 6 box. This is my primary box.
   - The other two nodes are fully patched CentOS 6.






--
*Gene Liverman*
Systems Integration Architect
Information Technology Services
University of West Georgia
glive...@westga.edu
678.839.5492

ITS: Making Technology Work for You!




On Wed, Oct 7, 2015 at 11:50 AM, Atin Mukherjee <atin.mukherje...@gmail.com>
wrote:

> This looks like a glibc corruption to me. Which distribution platform are
> you running Gluster on?
>
> -Atin
> Sent from one plus one
> On Oct 7, 2015 9:12 PM, "Gene Liverman" <glive...@westga.edu> wrote:
>
>> Both of the requested trace commands are below:
>>
>> Core was generated by `/usr/sbin/glusterd
>> --pid-file=/var/run/glusterd.pid'.
>> Program terminated with signal 6, Aborted.
>> #0  0x003b91432625 in raise (sig=) at
>> ../nptl/sysdeps/unix/sysv/linux/raise.c:64
>> 64return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
>>
>>
>>
>> (gdb) bt
>> #0  0x003b91432625 in raise (sig=) at
>> ../nptl/sysdeps/unix/sysv/linux/raise.c:64
>> #1  0x003b91433e05 in abort () at abort.c:92
>> #2  0x003b91470537 in __libc_message (do_abort=2, fmt=0x3b915588c0
>> "*** glibc detected *** %s: %s: 0x%s ***\n") at
>> ../sysdeps/unix/sysv/linux/libc_fatal.c:198
>> #3  0x003b91475f4e in malloc_printerr (action=3, str=0x3b9155687d
>> "corrupted double-linked list", ptr=, ar_ptr=> optimized out>) at malloc.c:6350
>> #4  0x003b914763d3 in malloc_consolidate (av=0x7fee9020) at
>> malloc.c:5216
>> #5  0x003b91479c28 in _int_malloc (av=0x7fee9020, bytes=> optimized out>) at malloc.c:4415
>> #6  0x003b9147a7ed in __libc_calloc (n=,
>> elem_size=) at malloc.c:4093
>> #7  0x003b9345c81f in __gf_calloc (nmemb=,
>> size=, type=59, typestr=0x7fee9ed2d708
>> "gf_common_mt_rpc_trans_t") at mem-pool.c:117
>> #8  0x7fee9ed2830b in socket_server_event_handler (fd=> optimized out>, idx=, data=0xf3eca0, poll_in=1,
>> poll_out=,
>> poll_err=) at socket.c:2622
>> #9  0x003b9348b0a0 in event_dispatch_epoll_handler (data=0xf408b0) at
>> event-epoll.c:575
>> #10 event_dispatch_epoll_worker (data=0xf408b0) at event-epoll.c:678
>> #11 0x003b91807a51 in start_thread (arg=0x7fee9db3b700) at
>> pthread_create.c:301
>> #12 0x003b914e893d in clone () at
>> ../sysdeps/unix/sysv/linux/x86_64/clone.S:115
>>
>>
>>
>>
>> (gdb) t a a bt
>>
>> Thread 9 (Thread 0x7fee9e53c700 (LWP 37122)):
>> #0  pthread_cond_wait@@GLIBC_2.3.2 () at
>> ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:183
>> #1  0x7fee9fffcf93 in hooks_worker (args=) at
>> glusterd-hooks.c:534
>> #2  0x003b91807a51 in start_thread (arg=0x7fee9e53c700) at
>> pthread_create.c:301
>> #3  0x003b914e893d in clone () at
>> ../sysdeps/unix/sysv/linux/x86_64/clone.S:115
>>
>> Thread 8 (Thread 0x7feea0c99700 (LWP 36996)):
>> #0  pthread_cond_timedwait@@GLIBC_2.3.2 () at
>> ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:239
>> #1  0x003b9346cbdb in syncenv_task (proc=0xefa8c0) at syncop.c:607
>> #2  0x003b93472cb0 in syncenv_processor (thdata=0xefa8c0) at
>> syncop.c:699
>> #3  0x003b91807a51 in start_thread (arg=0x7feea0c99700) at
>> pthread_create.c:301
>> #4  0x003b914e893d in clone () at
>> ../sysdeps/unix/sysv/linux/x86_64/clone.S:115
>>
>> Thread 7 (Thread 0x7feea209b700 (LWP 36994)):
>> #0  do_sigwait (set=, sig=0x7feea209ae5c) at
>> ../sysdeps/unix/sysv/linux/sigwait.c:65
>> #1  __sigwait (set=, sig=0x7feea209ae5c) at
>> ../sysdeps/unix/sysv/linux/sigwait.c:100
>> #2  0x00405dfb in glusterfs_sigwaiter (arg=)
>> at glusterfsd.c:1989
>> #3  0x003b91807a51 in start_thread (arg=0x7feea209b700) at
>> pthread_create.c:301
>> #4  0x003b914e893d in clone () at
>> ../sysdeps/unix/sysv/linux/x86_64/clone.S:115
>>
>> Thread 6 (Thread 0x7feea2a9c700 (LWP 36993)):
>> #0  0x003b9180efbd in nanosleep () at
>> ../sysdeps/unix/syscall-template.S:82
>> #1  0x003b934473ea in gf_timer_proc (ctx=0xecc010) at timer.c:205
>> #2  0x003b91807a51 in start_thread (arg=0x7feea2a9c700) at
>> pthread_create.c:301
>> #3  0x003b914e893d in clone () at
>> ../sysdeps/unix/sysv/linux/x86_64/clone.S:115
>>
>> Thread 5 (Thread 0x7feea9e04740 (LWP 36992)):
>> #0  0x003b918082ad in pthread_join (threadid=1

[Gluster-users] Suggested method for replacing an entire node

2015-10-07 Thread Gene Liverman
I want to replace my existing CentOS 6 nodes with CentOS 7 ones. Is there a
recommended way to go about this from the perspective of Gluster? I am
running a 3 node replicated cluster (3 servers each with 1 brick). In case
it makes a difference, my bricks are on separate drives formatted as XFS so
it is possible that I can do my OS reinstall without wiping out the data on
two nodes (the third had a hardware failure so it will be fresh from the
ground up).




Thanks,
*Gene Liverman*
Systems Integration Architect
Information Technology Services
University of West Georgia
glive...@westga.edu

ITS: Making Technology Work for You!
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] glusterd crashing

2015-10-06 Thread Gene Liverman
Sorry for the delay... the joys of multiple proverbial fires at once. In
/var/log/messages I found this for our most recent crash:

Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]: pending
frames:
Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]:
patchset: git://git.gluster.com/glusterfs.git
Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]: signal
received: 6
Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]: time of
crash:
Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]:
2015-10-03 04:26:21
Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]:
configuration details:
Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]: argp 1
Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]:
backtrace 1
Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]: dlfcn 1
Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]:
libpthread 1
Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]:
llistxattr 1
Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]: setfsid 1
Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]: spinlock
1
Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]: epoll.h 1
Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]: xattr.h 1
Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]:
st_atim.tv_nsec 1
Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]:
package-string: glusterfs 3.7.4
Oct  3 00:26:21 eapps-gluster01 etc-glusterfs-glusterd.vol[36992]: -


I have posted etc-glusterfs-glusterd.vol.log to http://pastebin.com/Pzq1j5J3.
I also put the core file and an sosreport on my web server for you but
don't want to leave them there for long so I'd appreciate it if you'd let
me know once you get them. They are at the following url's:
http://www.westga.edu/~gliverma/tmp-files/core.36992
http://www.westga.edu/~gliverma/tmp-files/sosreport-gliverman.gluster-crashing-20151006101239.tar.xz
http://www.westga.edu/~gliverma/tmp-files/sosreport-gliverman.gluster-crashing-20151006101239.tar.xz.md5




Thanks again for the help!
*Gene Liverman*
Systems Integration Architect
Information Technology Services
University of West Georgia
glive...@westga.edu

ITS: Making Technology Work for You!




On Fri, Oct 2, 2015 at 11:18 AM, Gaurav Garg <gg...@redhat.com> wrote:

> >> Pulling those logs now but how do I generate the core file you are
> asking
> for?
>
> When there is a crash, a core file is automatically generated based on your
> *ulimit* settings. You can find the core file in your root or current
> working directory, or wherever you have set your core dump file location.
> The core file gives you information about the crash: where exactly it
> happened.
> You can find the appropriate core file by looking at the crash time in the
> glusterd logs, searching for the "crash" keyword. You can also paste a few
> lines from just above the latest "crash" entry in the glusterd logs.
>
> Just for your curiosity, if you are willing to look at where it crashed, you
> can debug it with #gdb -c  glusterd
>
> Thank you...
>
> Regards,
> Gaurav
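
A minimal sketch of making sure a usable core is produced before the next
crash (the paths and limits here are illustrative and distribution dependent):

    ulimit -c unlimited                 # allow unlimited-size core dumps
    sysctl kernel.core_pattern          # shows where cores get written
    # once a core exists, load it together with the binary:
    gdb -c /path/to/core /usr/sbin/glusterd
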
>
> - Original Message -
> From: "Gene Liverman" <glive...@westga.edu>
> To: "Gaurav Garg" <gg...@redhat.com>
> Cc: "gluster-users" <gluster-users@gluster.org>
> Sent: Friday, October 2, 2015 8:28:49 PM
> Subject: Re: [Gluster-users] glusterd crashing
>
> Pulling those logs now but how do I generate the core file you are asking
> for?
>
>
>
>
>
> --
> *Gene Liverman*
> Systems Integration Architect
> Information Technology Services
> University of West Georgia
> glive...@westga.edu
> 678.839.5492
>
> ITS: Making Technology Work for You!
>
>
>
>
> On Fri, Oct 2, 2015 at 2:25 AM, Gaurav Garg <gg...@redhat.com> wrote:
>
> > Hi Gene,
> >
> > you have pasted the glustershd log; we asked you to paste the glusterd log.
> > glusterd and glustershd are different processes. With this information
> > we can't find out why your glusterd crashed. Could you paste the *glusterd*
> > logs (/var/log/glusterfs/usr-local-etc-glusterfs-glusterd.vol.log*) in
> > pastebin (not in this mail thread) and give the link to the pastebin in this
> > mail thread? Can you also attach the core file, or paste a backtrace of
> > that core dump file?
> > It would be great if you could give us an sos report of the node where the
> > crash happened.
> >
> > Thanx,
> >
> > ~Gaurav
> >
> > - Original Message -
> > From: "Gene Liverman" <glive...@westga.edu>
> > To: "gluster-users" <gluster-users@gluster.org>
> > Sent: Friday, October 2

Re: [Gluster-users] glusterd crashing

2015-10-02 Thread Gene Liverman
Pulling those logs now but how do I generate the core file you are asking
for?





--
*Gene Liverman*
Systems Integration Architect
Information Technology Services
University of West Georgia
glive...@westga.edu
678.839.5492

ITS: Making Technology Work for You!




On Fri, Oct 2, 2015 at 2:25 AM, Gaurav Garg <gg...@redhat.com> wrote:

> Hi Gene,
>
> you have pasted the glustershd log; we asked you to paste the glusterd log.
> glusterd and glustershd are different processes. With this information
> we can't find out why your glusterd crashed. Could you paste the *glusterd*
> logs (/var/log/glusterfs/usr-local-etc-glusterfs-glusterd.vol.log*) in
> pastebin (not in this mail thread) and give the link to the pastebin in this
> mail thread? Can you also attach the core file, or paste a backtrace of
> that core dump file?
> It would be great if you could give us an sos report of the node where the
> crash happened.
>
> Thanx,
>
> ~Gaurav
>
> - Original Message -
> From: "Gene Liverman" <glive...@westga.edu>
> To: "gluster-users" <gluster-users@gluster.org>
> Sent: Friday, October 2, 2015 4:47:00 AM
> Subject: Re: [Gluster-users] glusterd crashing
>
> Sorry for the delay. Here is what's installed:
> # rpm -qa | grep gluster
> glusterfs-geo-replication-3.7.4-2.el6.x86_64
> glusterfs-client-xlators-3.7.4-2.el6.x86_64
> glusterfs-3.7.4-2.el6.x86_64
> glusterfs-libs-3.7.4-2.el6.x86_64
> glusterfs-api-3.7.4-2.el6.x86_64
> glusterfs-fuse-3.7.4-2.el6.x86_64
> glusterfs-server-3.7.4-2.el6.x86_64
> glusterfs-cli-3.7.4-2.el6.x86_64
>
> The cmd_history.log file is attached.
> In gluster.log I have filtered out a bunch of lines like the one below
> to make them more readable. I had a node down for multiple days due to
> maintenance and another one went down due to a hardware failure during that
> time too.
> [2015-10-01 00:16:09.643631] W [MSGID: 114031]
> [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-gv0-client-0: remote
> operation failed. Path: 
> (31f17f8c-6c96-4440-88c0-f813b3c8d364) [No such file or directory]
>
> I also filtered out a boat load of self heal lines like these two:
> [2015-10-01 15:14:14.851015] I [MSGID: 108026]
> [afr-self-heal-metadata.c:56:__afr_selfheal_metadata_do] 0-gv0-replicate-0:
> performing metadata selfheal on f78a47db-a359-430d-a655-1d217eb848c3
> [2015-10-01 15:14:14.856392] I [MSGID: 108026]
> [afr-self-heal-common.c:651:afr_log_selfheal] 0-gv0-replicate-0: Completed
> metadata selfheal on f78a47db-a359-430d-a655-1d217eb848c3. source=0 sinks=1
>
>
> [root@eapps-gluster01 glusterfs]# cat glustershd.log |grep -v 'remote
> operation failed' |grep -v 'self-heal'
> [2015-09-27 08:46:56.893125] E [rpc-clnt.c:201:call_bail] 0-glusterfs:
> bailing out frame type(GlusterFS Handshake) op(GETSPEC(2)) xid = 0x6 sent =
> 2015-09-27 08:16:51.742731. timeout = 1800 for 127.0.0.1:24007
> [2015-09-28 12:54:17.524924] W [socket.c:588:__socket_rwv] 0-glusterfs:
> readv on 127.0.0.1:24007 failed (Connection reset by peer)
> [2015-09-28 12:54:27.844374] I [glusterfsd-mgmt.c:1512:mgmt_getspec_cbk]
> 0-glusterfs: No change in volfile, continuing
> [2015-09-28 12:57:03.485027] W [socket.c:588:__socket_rwv] 0-gv0-client-2:
> readv on 160.10.31.227:24007 failed (Connection reset by peer)
> [2015-09-28 12:57:05.872973] E [socket.c:2278:socket_connect_finish]
> 0-gv0-client-2: connection to 160.10.31.227:24007 failed (Connection
> refused)
> [2015-09-28 12:57:38.490578] W [socket.c:588:__socket_rwv] 0-glusterfs:
> readv on 127.0.0.1:24007 failed (No data available)
> [2015-09-28 12:57:49.054475] I [glusterfsd-mgmt.c:1512:mgmt_getspec_cbk]
> 0-glusterfs: No change in volfile, continuing
> [2015-09-28 13:01:12.062960] W [glusterfsd.c:1219:cleanup_and_exit]
> (-->/lib64/libpthread.so.0() [0x3c65e07a51]
> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x405e4d]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x65) [0x4059b5] ) 0-: received
> signum (15), shutting down
> [2015-09-28 13:01:12.981945] I [MSGID: 100030] [glusterfsd.c:2301:main]
> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.4
> (args: /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
> /var/lib/glusterd/glustershd/run/glustershd.pid -l
> /var/log/glusterfs/glustershd.log -S
> /var/run/gluster/9a9819e90404187e84e67b01614bbe10.socket --xlator-option
> *replicate*.node-uuid=416d712a-06fc-4b3c-a92f-8c82145626ff)
> [2015-09-28 13:01:13.009171] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 1
> [2015-09-28 13:01:13.092483] I [graph.c:269:gf_add_cmdline_options]
> 0-gv0-replicate-0: adding option 'node-uuid' for volume 'gv0-replicate-0'
> with value '416d712a-06fc-4b3c

[Gluster-users] glusterd crashing

2015-09-30 Thread Gene Liverman
In the last few days I've started having issues with my glusterd service 
crashing. When it goes down it seems to do so on all nodes in my replicated 
volume. How can I troubleshoot this? I'm on a mix of CentOS 6 and RHEL 6. 
Thanks!



Gene Liverman 
Systems Integration Architect 
Information Technology Services 
University of West Georgia 
glive...@westga.edu 


Sent from Outlook on my iPhone
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Repos for EPEL 5

2015-06-16 Thread Gene Liverman
I have servers set to pull from
http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-5Server/x86_64
yet when I go there and work back up the path to the EPEL.repo folder I
only see 6 and 7 now. Is this a mistake, or was support for EPEL 5 dropped?




Thanks,
*Gene Liverman*
Systems Integration Architect
Information Technology Services
University of West Georgia
glive...@westga.edu

ITS: Making Technology Work for You!
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Introducing gdash - A simple GlusterFS dashboard

2014-12-04 Thread Gene Liverman
Very nice! I see a small Puppet module and a Vagrant setup in my immediate
future for using this. Thanks for sharing!

--
Gene Liverman
Systems Administrator
Information Technology Services
University of West Georgia
glive...@westga.edu

ITS: Making Technology Work for You!

This e-mail and any attachments may contain confidential and privileged
information. If you are not the intended recipient, please notify the
sender immediately by return mail, delete this message, and destroy any
copies. Any dissemination or use of this information by a person other than
the intended recipient is unauthorized and may be illegal or actionable by
law.

On Dec 3, 2014 9:03 PM, Aravinda m...@aravindavk.in wrote:

 Hi All,

 I created a small local installable web app called gdash, a simple
 dashboard for GlusterFS.

 gdash is a super-young project, which shows GlusterFS volume information
 about local and remote clusters. This app is based on GlusterFS's capability
 of executing gluster volume info and gluster volume status commands for a
 remote server using the --remote-host option.
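
As a quick illustration of that capability (the hostname is hypothetical):

    gluster --remote-host=gluster01.example.com volume info
    gluster --remote-host=gluster01.example.com volume status
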

 It is very easy to install using pip or easy_install.

 Check my blog post for more in detail(with screenshots).
 http://aravindavk.in/blog/introducing-gdash/

 Comments and Suggestions Welcome.

 --
 regards
 Aravinda
 http://aravindavk.in


 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] RHEL 5 Repo Broken

2014-11-11 Thread Gene Liverman
Awesome, thanks!





--
*Gene Liverman*
Systems Administrator
Information Technology Services
University of West Georgia
glive...@westga.edu
678.839.5492

ITS: Making Technology Work for You!



This e-mail and any attachments may contain confidential and privileged
information. If you are not the intended recipient, please notify the
sender immediately by return mail, delete this message, and destroy any
copies. Any dissemination or use of this information by a person other than
the intended recipient is unauthorized and may be illegal or actionable by
law.


On Tue, Nov 11, 2014 at 11:19 AM, Niels de Vos nde...@redhat.com wrote:

 On Mon, Nov 10, 2014 at 01:49:41PM -0500, Gene Liverman wrote:
  On a RHEL 5 server I run the line
 
  baseurl=
 http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/
 
  gets translated to
 
  baseurl=
 http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-5Server/x86_64/
 
 
  http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/ is
  missing the directories for epel-5Client, epel-5Server, and
  epel-5Workstation
 
  Can someone take a look at this?

 This should have been fixed now. Please let us know if there are still
 issues.

 In future, please send an email to gluster-in...@gluster.org to get it
 to the people that can take care of it.

 Alternatively, you can file a bug against the project-infrastructure
 component in Bugzilla (sends email automatically too):
 -
  https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&component=project-infrastructure

 Thanks,
 Niels

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] SNMP monitoring

2014-11-10 Thread Gene Liverman
I'd like to second this request for suggestions. I'm not as far along, so I
still need to do some operational monitoring too. Unlike Paul, I don't use
Nagios but instead use Zabbix. Any and all tips would be appreciated.

--
Gene Liverman
Systems Administrator
Information Technology Services
University of West Georgia
glive...@westga.edu

ITS: Making Technology Work for You!

This e-mail and any attachments may contain confidential and privileged
information. If you are not the intended recipient, please notify the
sender immediately by return mail, delete this message, and destroy any
copies. Any dissemination or use of this information by a person other than
the intended recipient is unauthorized and may be illegal or actionable by
law.

On Nov 10, 2014 11:58 AM, Osborne, Paul (paul.osbo...@canterbury.ac.uk) 
paul.osbo...@canterbury.ac.uk wrote:

 Hi,

 I have a feeling that this may have been chewed over before, so apologies
 if that is the case.

 I would like to do some SNMP monitoring of the gluster cluster that I now
 have operational for performance, usage etc specifically for trend analysis
 rather than operational monitoring which I already have in place with
 Nagios. Disk usage is relatively easy via the Linux SNMP queries anyway -
 however GFS performance, failover etc are somewhat non-obvious to me.

 Also management like shiny graphs...

 Some reading and googling reveal a couple of projects on GitHub that may
 be doing what I need, however rather than just try what could be random
 code, is there anything that the users here can recommend?

 Thanks

 Paul

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] RHEL 5 Repo Broken

2014-11-10 Thread Gene Liverman
On a RHEL 5 server I run, the line

baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/

gets translated to

baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-5Server/x86_64/


http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/ is
missing the directories for epel-5Client, epel-5Server, and
epel-5Workstation

Can someone take a look at this?




--
*Gene Liverman*
Systems Administrator
Information Technology Services
University of West Georgia
glive...@westga.edu

ITS: Making Technology Work for You!



This e-mail and any attachments may contain confidential and privileged
information. If you are not the intended recipient, please notify the
sender immediately by return mail, delete this message, and destroy any
copies. Any dissemination or use of this information by a person other than
the intended recipient is unauthorized and may be illegal or actionable by
law.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] How about replacing old versions in Bugzilla by deprecated

2014-11-09 Thread Gene Liverman
I think it's a good idea.

--
Gene Liverman
Systems Administrator
Information Technology Services
University of West Georgia
glive...@westga.edu

ITS: Making Technology Work for You!

This e-mail and any attachments may contain confidential and privileged
information. If you are not the intended recipient, please notify the
sender immediately by return mail, delete this message, and destroy any
copies. Any dissemination or use of this information by a person other than
the intended recipient is unauthorized and may be illegal or actionable by
law.

On Nov 9, 2014 6:54 AM, Niels de Vos nde...@redhat.com wrote:

 Hi,

 we've been working on triaging bugs against unsupported versions. I
 think we would do our community users a favour if we have a deprecated
 or old unsupported version in Bugzilla. There should be no need for
 users to select old versions that we do not update anymore.

 There are still many bugs that need checking and some form of an update
 in them. Every bug that is <= 3.3 could be moved to the unsupported
 version to at least get a generic message out.

 http://goo.gl/IA7zaq contains a report of all open bugs/versions. Many
 of the old bugs are feature requests, and just need to be labeled as
 such and moved to the mainline version. Others could possibly get
 closed as a duplicate in case there is a fix in the current releases.

 Do you think that this is a good, or bad idea? Please let us know
 before, or during the next Bug Triage meeting on Tuesday 12:00 UTC:
 - https://public.pad.fsfe.org/p/gluster-bug-triage

 Responses by email are welcome, additional visitors that voice their
 opinion during the IRC meeting would be appreciated too.

 Thanks,
 Niels
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Gluster on SPARC / Solaris

2014-10-27 Thread Gene Liverman
I have a SPARC server that I'd like to utilize Gluster on and was wondering
if there is any support for that architecture?  I am game to run Linux or
Solaris or whatever on the box. Thanks!




--
*Gene Liverman*
Systems Administrator
Information Technology Services
University of West Georgia
glive...@westga.edu

ITS: Making Technology Work for You!



This e-mail and any attachments may contain confidential and privileged
information. If you are not the intended recipient, please notify the
sender immediately by return mail, delete this message, and destroy any
copies. Any dissemination or use of this information by a person other than
the intended recipient is unauthorized and may be illegal or actionable by
law.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] NFS not start on localhost

2014-10-23 Thread Gene Liverman
Could you also provide the output of this command:
$ mount | column -t

--
Gene Liverman
Systems Administrator
Information Technology Services
University of West Georgia
glive...@westga.edu

ITS: Making Technology Work for You!

This e-mail and any attachments may contain confidential and privileged
information. If you are not the intended recipient, please notify the
sender immediately by return mail, delete this message, and destroy any
copies. Any dissemination or use of this information by a person other than
the intended recipient is unauthorized and may be illegal or actionable by
law.

On Oct 23, 2014 10:07 AM, Niels de Vos nde...@redhat.com wrote:

  The only way I can manage to hit this issue too is by mounting an
  NFS-export on the Gluster server that starts the Gluster/NFS process.
  There is no crash happening on my side; Gluster/NFS just fails to
 start.

 Steps to reproduce:
 1. mount -t nfs nas.example.net:/export /mnt
 2. systemctl start glusterd

 After this, the error about being unable to register NLM4 is in
 /var/log/glusterfs/nfs.log.

 This is expected, because the Linux kernel NFS-server requires an NLM
 service in portmap/rpcbind (nlockmgr). You can verify what process
 occupies the service slot in rpcbind like this:

 1. list the rpc-programs and their port numbers

 # rpcinfo -p

 2. check the process that listens on the TCP-port for nlockmgr (port
32770 was returned by the command from point 1)

 # netstat -nlpt | grep -w 32770

 If the right column in the output lists 'glusterfs', then the
 Gluster/NFS process could register successfully and is handling the NLM4
  calls. However, if the right column contains a single '-', the Linux
 kernel module 'lockd' is handling the NLM4 calls. Gluster/NFS can not
 work together with the Linux kernel NFS-client (mountpoint) or the Linux
 kernel NFS-server.
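
Tying those two checks together, still using the example port from above:

    rpcinfo -p | grep nlockmgr          # note the TCP port it reports
    netstat -nlpt | grep -w 32770       # see which process owns that port
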

  Does this help? If something is unclear, post the output of the above
 commands and tell us what further details you want to see clarified.

 Cheers,
 Niels


 On Mon, Oct 20, 2014 at 12:53:46PM +0200, Demeter Tibor wrote:
 
  Hi,
 
  Thank you for you reply.
 
  I did your recommendations, but there are no changes.
 
  In the nfs.log there are no new things.
 
 
  [root@node0 glusterfs]# reboot
  Connection to 172.16.0.10 closed by remote host.
  Connection to 172.16.0.10 closed.
  [tdemeter@sirius-31 ~]$ ssh root@172.16.0.10
  root@172.16.0.10's password:
  Last login: Mon Oct 20 11:02:13 2014 from 192.168.133.106
  [root@node0 ~]# systemctl status nfs.target
  nfs.target - Network File System Server
 Loaded: loaded (/usr/lib/systemd/system/nfs.target; disabled)
 Active: inactive (dead)
 
  [root@node0 ~]# gluster volume status engine
  Status of volume: engine
  Gluster process   Port
 Online  Pid
 
 --
  Brick gs00.itsmart.cloud:/gluster/engine0 50160   Y
  3271
  Brick gs01.itsmart.cloud:/gluster/engine1 50160   Y   595
  NFS Server on localhost   N/A N
  N/A
  Self-heal Daemon on localhost N/A Y
  3286
  NFS Server on gs01.itsmart.cloud  2049Y
  6951
  Self-heal Daemon on gs01.itsmart.cloudN/A Y
  6958
 
  Task Status of Volume engine
 
 --
  There are no active volume tasks
 
  [root@node0 ~]# systemctl status
  Display all 262 possibilities? (y or n)
  [root@node0 ~]# systemctl status nfs-lock
  nfs-lock.service - NFS file locking service.
 Loaded: loaded (/usr/lib/systemd/system/nfs-lock.service; enabled)
 Active: inactive (dead)
 
  [root@node0 ~]# systemctl stop nfs-lock
  [root@node0 ~]# systemctl restart gluster
  glusterd.serviceglusterfsd.service  gluster.mount
  [root@node0 ~]# systemctl restart gluster
  glusterd.serviceglusterfsd.service  gluster.mount
  [root@node0 ~]# systemctl restart glusterfsd.service
  [root@node0 ~]# systemctl restart glusterd.service
  [root@node0 ~]# gluster volume status engine
  Status of volume: engine
  Gluster process   Port
 Online  Pid
 
 --
  Brick gs00.itsmart.cloud:/gluster/engine0 50160   Y
  5140
  Brick gs01.itsmart.cloud:/gluster/engine1 50160   Y
  2037
  NFS Server on localhost   N/A N
  N/A
  Self-heal Daemon on localhost N/A N   N/A
  NFS Server on gs01.itsmart.cloud  2049Y
  6951
  Self-heal Daemon on gs01.itsmart.cloudN/A Y
  6958
 
 
  Any other idea?
 
  Tibor
 
 
 
 
 
 
 
 
  - Eredeti üzenet -
   On Mon, Oct 20, 2014 at 09:04:2.8AM +0200, Demeter Tibor wrote

Re: [Gluster-users] Jumbo frames

2014-10-21 Thread Gene Liverman
Personally, I think all replication benefits from a 9000 MTU. Bigger frame
equals faster replication.
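
For example, bumping an interface to 9000-byte frames looks like this; the
interface name is only an example, and every NIC and switch port in the path
has to be configured to match:

    ip link set dev eth0 mtu 9000
    ip link show eth0 | grep mtu    # confirm the new MTU took effect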

--
Gene Liverman
Systems Administrator
Information Technology Services
University of West Georgia
glive...@westga.edu

ITS: Making Technology Work for You!

This e-mail and any attachments may contain confidential and privileged
information. If you are not the intended recipient, please notify the
sender immediately by return mail, delete this message, and destroy any
copies. Any dissemination or use of this information by a person other than
the intended recipient is unauthorized and may be illegal or actionable by
law.

On Oct 21, 2014 6:26 AM, Jean R. Franco jfra...@maila.net.br wrote:

 Dear all,

 We're using Huawei's S5300 manageable switches in our deployment.
 Would we benefit from higher size jumbo frames?
 Right now it's set at 1600.

 Many thanks,

 Jean

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] RHEL6.6 provides Gluster 3.6.0-28.2 ?

2014-10-16 Thread Gene Liverman
Adding the priorities fixed it for me. Thanks!
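
For anyone hitting the same thing, a minimal sketch of the workaround
described below (repo section name and baseurl as used elsewhere in this
thread; other repo options omitted):

    yum install yum-plugin-priorities

    # then add a priority line to each section of glusterfs.repo, e.g.:
    # [glusterfs-epel]
    # baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/
    # priority=50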





--
*Gene Liverman*
Systems Administrator
Information Technology Services
University of West Georgia
glive...@westga.edu


ITS: Making Technology Work for You!



On Wed, Oct 15, 2014 at 6:00 PM, Prasun Gera prasun.g...@gmail.com wrote:

 I am affected by this too, although I am not using the community packages.
 There seem to be conflicts between the storage server and regular channels
 within RHN.

 On Tue, Oct 14, 2014 at 2:14 PM, Chad Feller fel...@unr.edu wrote:

 I was hit by this too this morning:

  What you can do is install the yum-plugin-priorities package, and then
  add a weight (say, priority=50) to each section of your glusterfs.repo
  file.  That will weight the community glusterfs packages higher, causing
  yum to ignore the newer version available via rhn.

  If you're using a configuration management tool like puppet, pushing out
  this workaround to all your servers is incredibly simple and should take
  but a couple of minutes.

 -Chad



 On 10/14/2014 09:56 AM, daryl herzmann wrote:

 Howdy,

 I'm getting RPM conflicts as the RHEL6.6 update attempts to update
 gluster client RPMs, but the latest release from gluster.org is 3.5.2 ?

 Am I missing something obvious here?  Seems strange for RHEL to include
 a version that isn't available upstream yet :)

 https://bugzilla.redhat.com/show_bug.cgi?id=1095604

 Is there a suggested workaround to this or simply wait for upstream to
 release a newer 3.6.0 ?  I have `yum  update --skip-broken` for now...

 thanks,
   daryl

 Error: Package: glusterfs-server-3.5.2-1.el6.x86_64 (@glusterfs-epel)
Requires: glusterfs = 3.5.2-1.el6
Removing: glusterfs-3.5.2-1.el6.x86_64 (@glusterfs-epel)
glusterfs = 3.5.2-1.el6
Updated By: glusterfs-3.6.0.29-2.el6.x86_64
 (rhel-x86_64-server-6)
glusterfs = 3.6.0.29-2.el6
Available: glusterfs-3.4.0.36rhs-1.el6.x86_64
 (rhel-x86_64-server-6)
glusterfs = 3.4.0.36rhs-1.el6
Available: glusterfs-3.4.0.57rhs-1.el6_5.x86_64
 (rhel-x86_64-server-6)
glusterfs = 3.4.0.57rhs-1.el6_5
Available: glusterfs-3.6.0.28-2.el6.x86_64
 (rhel-x86_64-server-6)
glusterfs = 3.6.0.28-2.el6
 Error: Package: glusterfs-server-3.5.2-1.el6.x86_64 (@glusterfs-epel)
Requires: glusterfs-cli = 3.5.2-1.el6
Removing: glusterfs-cli-3.5.2-1.el6.x86_64 (@glusterfs-epel)
glusterfs-cli = 3.5.2-1.el6
Updated By: glusterfs-cli-3.6.0.29-2.el6.x86_64
 (rhel-x86_64-server-optional-6)
glusterfs-cli = 3.6.0.29-2.el6
Available: glusterfs-cli-3.6.0.28-2.el6.x86_64
 (rhel-x86_64-server-optional-6)
glusterfs-cli = 3.6.0.28-2.el6
 Error: Package: glusterfs-server-3.5.2-1.el6.x86_64 (@glusterfs-epel)
Requires: glusterfs-fuse = 3.5.2-1.el6
Removing: glusterfs-fuse-3.5.2-1.el6.x86_64 (@glusterfs-epel)
glusterfs-fuse = 3.5.2-1.el6
Updated By: glusterfs-fuse-3.6.0.29-2.el6.x86_64
 (rhel-x86_64-server-6)
glusterfs-fuse = 3.6.0.29-2.el6
Available: glusterfs-fuse-3.4.0.36rhs-1.el6.x86_64
 (rhel-x86_64-server-6)
glusterfs-fuse = 3.4.0.36rhs-1.el6
Available: glusterfs-fuse-3.4.0.57rhs-1.el6_5.x86_64
 (rhel-x86_64-server-6)
glusterfs-fuse = 3.4.0.57rhs-1.el6_5
Available: glusterfs-fuse-3.6.0.28-2.el6.x86_64
 (rhel-x86_64-server-6)
glusterfs-fuse = 3.6.0.28-2.el6
 Error: Package: glusterfs-server-3.5.2-1.el6.x86_64 (@glusterfs-epel)
Requires: glusterfs-libs = 3.5.2-1.el6
Removing: glusterfs-libs-3.5.2-1.el6.x86_64 (@glusterfs-epel)
glusterfs-libs = 3.5.2-1.el6
Updated By: glusterfs-libs-3.6.0.29-2.el6.x86_64
 (rhel-x86_64-server-6)
glusterfs-libs = 3.6.0.29-2.el6
Available: glusterfs-libs-3.4.0.36rhs-1.el6.x86_64
 (rhel-x86_64-server-6)
glusterfs-libs = 3.4.0.36rhs-1.el6
Available: glusterfs-libs-3.4.0.57rhs-1.el6_5.x86_64
 (rhel-x86_64-server-6)
glusterfs-libs = 3.4.0.57rhs-1.el6_5
Available: glusterfs-libs-3.6.0.28-2.el6.x86_64
 (rhel-x86_64-server-6)
glusterfs-libs = 3.6.0.28-2.el6

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users


 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users



 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http

[Gluster-users] Gluster Failed on RPM Update

2014-08-06 Thread Gene Liverman
When updates were applied a couple of nights ago all my Gluster nodes went down
and service glusterd status reported it dead on all 3 nodes in my
replicated setup. This seems very similar to a bug that was recently fixed
(https://bugzilla.redhat.com/show_bug.cgi?id=1113543). Any ideas what's up
with this?

[root@eapps-gluster01 ~]# rpm -qa |grep gluster
glusterfs-libs-3.5.2-1.el6.x86_64
glusterfs-cli-3.5.2-1.el6.x86_64
glusterfs-geo-replication-3.5.2-1.el6.x86_64
glusterfs-3.5.2-1.el6.x86_64
glusterfs-fuse-3.5.2-1.el6.x86_64
glusterfs-server-3.5.2-1.el6.x86_64
glusterfs-api-3.5.2-1.el6.x86_64





--
*Gene Liverman*
Systems Administrator
Information Technology Services
University of West Georgia
glive...@westga.edu

ITS: Making Technology Work for You!



This e-mail and any attachments may contain confidential and privileged
information. If you are not the intended recipient, please notify the
sender immediately by return mail, delete this message, and destroy any
copies. Any dissemination or use of this information by a person other than
the intended recipient is unauthorized and may be illegal or actionable by
law.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster Failed on RPM Update

2014-08-06 Thread Gene Liverman
Bug updated.





--
*Gene Liverman*
Systems Administrator
Information Technology Services
University of West Georgia
glive...@westga.edu
678.839.5492

ITS: Making Technology Work for You!



This e-mail and any attachments may contain confidential and privileged
information. If you are not the intended recipient, please notify the
sender immediately by return mail, delete this message, and destroy any
copies. Any dissemination or use of this information by a person other than
the intended recipient is unauthorized and may be illegal or actionable by
law.



On Wed, Aug 6, 2014 at 11:40 AM, Lalatendu Mohanty lmoha...@redhat.com
wrote:

 On 08/06/2014 06:44 PM, Gene Liverman wrote:

  When updates were applied a couple of nights ago all my Gluster nodes went
 down and service glusterd status reported it dead on all 3 nodes in my
 replicated setup. This seems very similar to a bug that was recently fixed (
 https://bugzilla.redhat.com/show_bug.cgi?id=1113543)  Any ideas what's
 up with this?

 [root@eapps-gluster01 ~]# rpm -qa |grep gluster
 glusterfs-libs-3.5.2-1.el6.x86_64
 glusterfs-cli-3.5.2-1.el6.x86_64
 glusterfs-geo-replication-3.5.2-1.el6.x86_64
 glusterfs-3.5.2-1.el6.x86_64
 glusterfs-fuse-3.5.2-1.el6.x86_64
 glusterfs-server-3.5.2-1.el6.x86_64
 glusterfs-api-3.5.2-1.el6.x86_64

  Hey Gene,

  Please update the bug with your comments. Not sure why the fix didn't work.

 Thanks,
 Lala

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Monitoring with Zabbix

2014-06-27 Thread Gene Liverman
Anyone got any good tips for monitoring Gluster via Zabbix?




--
*Gene Liverman*
Systems Administrator
Information Technology Services
University of West Georgia
glive...@westga.edu
678.839.5492

ITS: Making Technology Work for You!



This e-mail and any attachments may contain confidential and privileged
information. If you are not the intended recipient, please notify the
sender immediately by return mail, delete this message, and destroy any
copies. Any dissemination or use of this information by a person other than
the intended recipient is unauthorized and may be illegal or actionable by
law.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Auto Update did not restart services

2014-06-26 Thread Gene Liverman
My systems are set to automatically install updates. Since I installed
Gluster from the repos this means I went from 3.5 to 3.5.1 automatically.
 The problem is this: the installer did not stop or restart the service so
when I checked the status via /sbin/service on RHEL6 it said it was dead. A
simple service restart fixed this but, until I found the problem, it was
offline.  Is this a bug, or do I need to disable the repo and manually
check for updates to Gluster or what? Thanks!
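
For anyone else caught by this, a quick check after an unattended update
(EL6 syntax, matching the rest of this thread):

    service glusterd status     # reported the daemon as dead in this case
    service glusterd restart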




--
*Gene Liverman*
Systems Administrator
Information Technology Services
University of West Georgia
glive...@westga.edu


ITS: Making Technology Work for You!



This e-mail and any attachments may contain confidential and privileged
information. If you are not the intended recipient, please notify the
sender immediately by return mail, delete this message, and destroy any
copies. Any dissemination or use of this information by a person other than
the intended recipient is unauthorized and may be illegal or actionable by
law.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS-3.4.4 RPMs on download.gluster.org

2014-06-16 Thread Gene Liverman
Makes sense. Thanks!





--
*Gene Liverman*
Systems Administrator
Information Technology Services
University of West Georgia
glive...@westga.edu
678.839.5492

ITS: Making Technology Work for You!



This e-mail and any attachments may contain confidential and privileged
information. If you are not the intended recipient, please notify the
sender immediately by return mail, delete this message, and destroy any
copies. Any dissemination or use of this information by a person other than
the intended recipient is unauthorized and may be illegal or actionable by
law.



On Mon, Jun 16, 2014 at 9:34 AM, Kaleb S. KEITHLEY kkeit...@redhat.com
wrote:

 On 06/16/2014 09:31 AM, Gene Liverman wrote:

 How well does Gluster work on Pidora? Does the Raspberry Pi's limited
 RAM hinder it any?


 It seems to work well enough. I've heard of several people who have built
 clusters of pis running GlusterFS.

 It's certainly not going to set any speed records though.

 --

 Kaleb

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] RHEL 5.8 mount fails cannot open /dev/fuse

2014-06-12 Thread Gene Liverman
https://bugzilla.redhat.com/show_bug.cgi?id=1108669





--
*Gene Liverman*
Systems Administrator
Information Technology Services
University of West Georgia
glive...@westga.edu
678.839.5492

ITS: Making Technology Work for You!



This e-mail and any attachments may contain confidential and privileged
information. If you are not the intended recipient, please notify the
sender immediately by return mail, delete this message, and destroy any
copies. Any dissemination or use of this information by a person other than
the intended recipient is unauthorized and may be illegal or actionable by
law.



On Fri, Jun 6, 2014 at 3:57 AM, Niels de Vos nde...@redhat.com wrote:

 On Sun, Jun 01, 2014 at 03:02:22PM -0400, Gene Liverman wrote:
  Running the script did indeed fix things, thanks! Personally, I count
 this
  as a bug since it is not required on later versions of RHEL... maybe the
  installer should check that the fuse module is loaded and load it if it's
  not found. Just a thought.

 Could you file a bug for this, and add a reference to this thread on
 a mailinglist archive?

 -
  https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&component=build

 We can add a check in the rpm installation scripts for EL5 and load the
 fuse module if it is available (it may not be available during initial
 installation through anaconda).

 Thanks,
 Niels

 
 
 
 
 
  --
  *Gene Liverman*
  Systems Administrator
  Information Technology Services
  University of West Georgia
 
  ITS: Making Technology Work for You!
 
 
 
  This e-mail and any attachments may contain confidential and privileged
  information. If you are not the intended recipient, please notify the
  sender immediately by return mail, delete this message, and destroy any
  copies. Any dissemination or use of this information by a person other
 than
  the intended recipient is unauthorized and may be illegal or actionable
 by
  law.
 
 
 
  On Sun, Jun 1, 2014 at 10:24 AM, Niels de Vos nde...@redhat.com wrote:
 
   On Sun, Jun 01, 2014 at 09:30:53AM -0400, Gene Liverman wrote:
Ahh, so should I just run that script manually and be done with it?
 Does
that mean that when we eventually do reboot that it will
 automatically
   load
itself from then on? Thanks!
  
   Yes, you can run the script manually (it only does 'modprobe fuse').
   After a reboot the fuse module should be loaded very early during the
   boot process. There should be no issues related to fuse when mounting
 is
   done afterwards.
  
   Cheers,
   Niels
  
   
--
Gene Liverman
Systems Administrator
Information Technology Services
University of West Georgia
glive...@westga.edu
   
ITS: Making Technology Work for You!
   
This e-mail and any attachments may contain confidential and
 privileged
information. If you are not the intended recipient, please notify the
sender immediately by return mail, delete this message, and destroy
 any
copies. Any dissemination or use of this information by a person
 other
   than
the intended recipient is unauthorized and may be illegal or
 actionable
   by
law.
   
On Jun 1, 2014 3:47 AM, Niels de Vos nde...@redhat.com wrote:
   
 On Sun, Jun 01, 2014 at 03:51:58AM +, Franco Broi wrote:
   Doesn't work for me either on CentOS 5, have to modprobe fuse.
 
  On 1 Jun 2014 10:46, Gene Liverman glive...@westga.edu wrote:
  Just setup my first Gluster share (replicated on 3 nodes) and it
   works
  fine on RHEL 6 but when trying to mount it on RHEL 5.8 I get the
  following in my logs:
 
  [2014-06-01 02:01:29.580163] I [glusterfsd.c:1959:main]
 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version
   3.5.0
 (/usr/sbin/glusterfs --volfile-server=eapps-gluster01
 --volfile-id=/gv0
 /eapps/shared)
  [2014-06-01 02:01:29.581147] E [mount.c:267:gf_fuse_mount]
 0-glusterfs-fuse: cannot open /dev/fuse (No such file or directory)
  [2014-06-01 02:01:29.581207] E [xlator.c:403:xlator_init] 0-fuse:
 Initialization of volume 'fuse' failed, review your volfile again
 
  I installed glusterfs and glusterfs-fuse from

  
 http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-epel.repo.el5
  and currently have the following:
  # rpm -qa |grep -i gluster
  glusterfs-libs-3.5.0-2.el5
  glusterfs-3.5.0-2.el5
  glusterfs-fuse-3.5.0-2.el5
 
  Everything I have read online says that I should not have to
 manually
  install fuse or manually run modprobe or manually make /dev/fuse
 so
  I am lost as to why this is not working.  Any tips on what to do
   next?

 Indeed, that should not be needed. I guess you installed the
 gluster-fuse package after a basic installation and the system did
 not
 reboot yet with glusterfs-fuse installed. On EL5, udev (I think)
 does
 not automatically load the fuse module if /dev/fuse is accessed.
  The glusterfs-fuse package contains a script that loads the module when
  the system boots (/etc/sysconfig/modules/glusterfs-fuse.modules).

[Gluster-users] NFS to Gluster Hangs

2014-06-10 Thread Gene Liverman
Twice now I have had my nfs connection to a replicated gluster volume stop
responding. On both servers that connect to the system I have the following
symptoms:

   1. Accessing the mount with the native client is still working fine (the
   volume is mounted both that way and via nfs. One app requires the nfs
   version)
   2. The logs have messages stating the following: kernel: nfs: server
   my-servers-name not responding, still trying

How can I fix this?



Thanks,
Gene
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] NFS to Gluster Hangs

2014-06-10 Thread Gene Liverman
Thanks! I turned off drc as suggested and will have to wait and see how
that works. Here are the packages I have installed via yum:
# rpm -qa |grep -i gluster
glusterfs-cli-3.5.0-2.el6.x86_64
glusterfs-libs-3.5.0-2.el6.x86_64
glusterfs-fuse-3.5.0-2.el6.x86_64
glusterfs-server-3.5.0-2.el6.x86_64
glusterfs-3.5.0-2.el6.x86_64
glusterfs-geo-replication-3.5.0-2.el6.x86_64

The NFS server service was showing as running even when stuff wasn't
working.  This is the status output from while it was broken:

# gluster volume status
Status of volume: gv0
Gluster process                                     Port    Online  Pid
------------------------------------------------------------------------------
Brick eapps-gluster01.my.domain:/export/sdb1/gv0    49152   Y       39593
Brick eapps-gluster02.my.domain:/export/sdb1/gv0    49152   Y       2472
Brick eapps-gluster03.my.domain:/export/sdb1/gv0    49152   Y       1866
NFS Server on localhost                             2049    Y       39603
Self-heal Daemon on localhost                       N/A     Y       39610
NFS Server on eapps-gluster03.my.domain             2049    Y       35125
Self-heal Daemon on eapps-gluster03.my.domain       N/A     Y       35132
NFS Server on eapps-gluster02.my.domain             2049    Y       37103
Self-heal Daemon on eapps-gluster02.my.domain       N/A     Y       37110

Task Status of Volume gv0
---


Running 'service glusterd restart' on the NFS server made things start
working again after this.
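
A crude cron-able watchdog along the lines of that workaround might look
like the sketch below. This is an assumption rather than something from the
thread; it relies on the showmount tool from nfs-utils, on Gluster's
built-in NFS server answering MOUNT requests on localhost, and on EL6-style
init scripts.

  #!/bin/sh
  # Sketch: restart glusterd if the Gluster NFS export list cannot be read.
  if ! showmount -e localhost >/dev/null 2>&1; then
      logger "Gluster NFS not responding; restarting glusterd"
      service glusterd restart
  fi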


-- Gene



On Tue, Jun 10, 2014 at 12:10 PM, Niels de Vos nde...@redhat.com wrote:

 On Tue, Jun 10, 2014 at 11:32:50AM -0400, Gene Liverman wrote:
  Twice now I have had my nfs connection to a replicated gluster volume
 stop
  responding. On both servers that connect to the system I have the
 following
  symptoms:
 
 1. Accessing the mount with the native client is still working fine
 (the
 volume is mounted both that way and via nfs. One app requires the nfs
 version)
 2. The logs have messages stating the following: kernel: nfs: server
 my-servers-name not responding, still trying
 
  How can I fix this?

 You should check if the NFS-server (a glusterfs process) is still
 running:

 # gluster volume status

 If the NFS-server is not running anymore, you can start it with:

 # gluster volume start $VOLUME force
 (you only need to do that for one volume)
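
 For a volume named gv0 (the name used elsewhere in this thread), the
 check-and-start steps above could be scripted roughly as follows; note
 that in Gene's case the status output still showed the NFS server as
 online, so a check like this will not catch every hang:

   #!/bin/sh
   # Sketch: force-start the volume if its NFS server is not listed as
   # online. Assumes the volume is called gv0 and glusterd is running.
   if ! gluster volume status gv0 | grep 'NFS Server on localhost' | grep -q ' Y '; then
       gluster volume start gv0 force
   fi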


 In case this is with GlusterFS 3.5, you may be hitting a memory leak in
 the DRC (Duplicate Request Cache) implementation of the NFS-server. You
 can disable DRC with this:

 # gluster volume set $VOLUME nfs.drc off

 In glusterfs-3.5.1 DRC will be disabled by default, there have been too
 many issues with DRC to enable it for everyone. We need to do more tests
 and fix DRC in the current development (master) branch.
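
 If in doubt whether the option took effect, a quick sanity check (again
 assuming the volume from this thread is named gv0):

   gluster volume set gv0 nfs.drc off
   gluster volume info gv0   # 'nfs.drc: off' should appear under Options Reconfigured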

 HTH,
 Niels

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] NFS to Gluster Hangs

2014-06-10 Thread Gene Liverman
No firewalls in this case...

--
Gene Liverman
Systems Administrator
Information Technology Services
University of West Georgia
glive...@westga.edu


On Jun 10, 2014 12:57 PM, Paul Robert Marino prmari...@gmail.com wrote:

 Ive also seen this happen when there is a firewall in the middle and
 nfslockd malfunctioned because of it.


 On Tue, Jun 10, 2014 at 12:20 PM, Gene Liverman glive...@westga.edu
 wrote:
  Thanks! I turned off drc as suggested and will have to wait and see how
 that
  works. Here are the packages I have installed via yum:
  # rpm -qa |grep -i gluster
  glusterfs-cli-3.5.0-2.el6.x86_64
  glusterfs-libs-3.5.0-2.el6.x86_64
  glusterfs-fuse-3.5.0-2.el6.x86_64
  glusterfs-server-3.5.0-2.el6.x86_64
  glusterfs-3.5.0-2.el6.x86_64
  glusterfs-geo-replication-3.5.0-2.el6.x86_64
 
  The nfs server service was showing to be running even when stuff wasn't
  working.  This is from while it was broken:
 
  # gluster volume status
  Status of volume: gv0
  Gluster process                                     Port    Online  Pid
  ------------------------------------------------------------------------------
  Brick eapps-gluster01.my.domain:/export/sdb1/gv0    49152   Y       39593
  Brick eapps-gluster02.my.domain:/export/sdb1/gv0    49152   Y       2472
  Brick eapps-gluster03.my.domain:/export/sdb1/gv0    49152   Y       1866
  NFS Server on localhost                             2049    Y       39603
  Self-heal Daemon on localhost                       N/A     Y       39610
  NFS Server on eapps-gluster03.my.domain             2049    Y       35125
  Self-heal Daemon on eapps-gluster03.my.domain       N/A     Y       35132
  NFS Server on eapps-gluster02.my.domain             2049    Y       37103
  Self-heal Daemon on eapps-gluster02.my.domain       N/A     Y       37110
 
  Task Status of Volume gv0
 
 ---
 
 
  Running 'service glusterd restart' on the NFS server made things start
  working again after this.
 
 
  -- Gene
 
 
 
 
  On Tue, Jun 10, 2014 at 12:10 PM, Niels de Vos nde...@redhat.com
 wrote:
 
  On Tue, Jun 10, 2014 at 11:32:50AM -0400, Gene Liverman wrote:
   Twice now I have had my nfs connection to a replicated gluster volume
   stop
   responding. On both servers that connect to the system I have the
   following
   symptoms:
  
  1. Accessing the mount with the native client is still working fine
   (the
  volume is mounted both that way and via nfs. One app requires the
 nfs
  version)
  2. The logs have messages stating the following: kernel: nfs:
 server
  my-servers-name not responding, still trying
  
   How can I fix this?
 
  You should check if the NFS-server (a glusterfs process) is still
  running:
 
  # gluster volume status
 
  If the NFS-server is not running anymore, you can start it with:
 
  # gluster volume start $VOLUME force
  (you only need to do that for one volume)
 
 
  In case this is with GlusterFS 3.5, you may be hitting a memory leak in
  the DRC (Duplicate Request Cache) implementation of the NFS-server. You
  can disable DRC with this:
 
  # gluster volume set $VOLUME nfs.drc off
 
  In glusterfs-3.5.1 DRC will be disabled by default, there have been too
  many issues with DRC to enable it for everyone. We need to do more tests
  and fix DRC in the current development (master) branch.
 
  HTH,
  Niels
 
 
 
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] RHEL 5.8 mount fails cannot open /dev/fuse

2014-06-01 Thread Gene Liverman
Running the script did indeed fix things, thanks! Personally, I count this
as a bug since it is not required on later versions of RHEL... maybe the
installer should check that the fuse module is loaded and load it if it's
not found. Just a thought.





--
*Gene Liverman*
Systems Administrator
Information Technology Services
University of West Georgia

ITS: Making Technology Work for You!



This e-mail and any attachments may contain confidential and privileged
information. If you are not the intended recipient, please notify the
sender immediately by return mail, delete this message, and destroy any
copies. Any dissemination or use of this information by a person other than
the intended recipient is unauthorized and may be illegal or actionable by
law.



On Sun, Jun 1, 2014 at 10:24 AM, Niels de Vos nde...@redhat.com wrote:

 On Sun, Jun 01, 2014 at 09:30:53AM -0400, Gene Liverman wrote:
  Ahh, so should I just run that script manually and be done with it? Does
  that mean that when we eventually do reboot that it will automatically
 load
  itself from then on? Thanks!

 Yes, you can run the script manually (it only does 'modprobe fuse').
 After a reboot the fuse module should be loaded very early during the
 boot process. There should be no issues related to fuse when mounting is
 done afterwards.

 Cheers,
 Niels
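
 In practice the one-off fix on the EL5 client amounts to the following
 (the mount target is taken from the log earlier in this thread):

   modprobe fuse                # what the boot script does anyway
   lsmod | grep fuse            # confirm the module is loaded
   ls -l /dev/fuse              # the device node should now exist
   mount -t glusterfs eapps-gluster01:/gv0 /eapps/shared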

 
  --
  Gene Liverman
  Systems Administrator
  Information Technology Services
  University of West Georgia
  glive...@westga.edu
 
  ITS: Making Technology Work for You!
 
  This e-mail and any attachments may contain confidential and privileged
  information. If you are not the intended recipient, please notify the
  sender immediately by return mail, delete this message, and destroy any
  copies. Any dissemination or use of this information by a person other
 than
  the intended recipient is unauthorized and may be illegal or actionable
 by
  law.
 
  On Jun 1, 2014 3:47 AM, Niels de Vos nde...@redhat.com wrote:
 
   On Sun, Jun 01, 2014 at 03:51:58AM +, Franco Broi wrote:
Doesn't work for me either on CentOS 5, have to modprobe fuse.
   
On 1 Jun 2014 10:46, Gene Liverman glive...@westga.edu wrote:
Just setup my first Gluster share (replicated on 3 nodes) and it
 works
fine on RHEL 6 but when trying to mount it on RHEL 5.8 I get the
following in my logs:
   
[2014-06-01 02:01:29.580163] I [glusterfsd.c:1959:main]
   0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version
 3.5.0
   (/usr/sbin/glusterfs --volfile-server=eapps-gluster01 --volfile-id=/gv0
   /eapps/shared)
[2014-06-01 02:01:29.581147] E [mount.c:267:gf_fuse_mount]
   0-glusterfs-fuse: cannot open /dev/fuse (No such file or directory)
[2014-06-01 02:01:29.581207] E [xlator.c:403:xlator_init] 0-fuse:
   Initialization of volume 'fuse' failed, review your volfile again
   
I installed glusterfs and glusterfs-fuse from
  
 http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-epel.repo.el5
    and currently have the following:
# rpm -qa |grep -i gluster
glusterfs-libs-3.5.0-2.el5
glusterfs-3.5.0-2.el5
glusterfs-fuse-3.5.0-2.el5
   
Everything I have read online says that I should not have to manually
install fuse or manually run modprobe or manually make /dev/fuse so
I am lost as to why this is not working.  Any tips on what to do
 next?
  
   Indeed, that should not be needed. I guess you installed the
   gluster-fuse package after a basic installation and the system did not
   reboot yet with glusterfs-fuse installed. On EL5, udev (I think) does
   not automatically load the fuse module if /dev/fuse is accessed. The
   glusterfs-fuse package contains a script that loads the module when the
   system boots (/etc/sysconfig/modules/glusterfs-fuse.modules).
  
   I am not sure if it would be better to load the module immediately when
   glusterfs-fuse gets installed.
  
   Niels
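
    For reference, the boot-time script mentioned above is essentially just
    a module load; its contents are approximately:

      #!/bin/sh
      # /etc/sysconfig/modules/glusterfs-fuse.modules (approximate contents;
      # per the discussion above it effectively just runs 'modprobe fuse').
      modprobe fuse >/dev/null 2>&1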
  

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] RHEL 5.8 mount fails cannot open /dev/fuse

2014-05-31 Thread Gene Liverman
Just set up my first Gluster share (replicated on 3 nodes) and it works fine
on RHEL 6, but when trying to mount it on RHEL 5.8 I get the following in my
logs:

[2014-06-01 02:01:29.580163] I [glusterfsd.c:1959:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.5.0
(/usr/sbin/glusterfs --volfile-server=eapps-gluster01 --volfile-id=/gv0
/eapps/shared)
[2014-06-01 02:01:29.581147] E [mount.c:267:gf_fuse_mount]
0-glusterfs-fuse: cannot open /dev/fuse (No such file or directory)
[2014-06-01 02:01:29.581207] E [xlator.c:403:xlator_init] 0-fuse:
Initialization of volume 'fuse' failed, review your volfile again

I installed glusterfs and glusterfs-fuse from
http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-epel.repo.el5
and currently have the following:
# rpm -qa |grep -i gluster
glusterfs-libs-3.5.0-2.el5
glusterfs-3.5.0-2.el5
glusterfs-fuse-3.5.0-2.el5

Everything I have read online says that I should not have to manually
install fuse, run modprobe, or create /dev/fuse, so I am lost as to why this
is not working.  Any tips on what to do next?



Thanks,
Gene
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users