Bug filed https://bugzilla.redhat.com/show_bug.cgi?id=1272436
--
*Gene Liverman*
Systems Integration Architect
Information Technology Services
University of West Georgia
glive...@westga.edu
678.839.5492
ITS: Making Technology Work for You!
On Thu, Oct 8, 2015 at 1:48 PM, Vijay Bellur
probing or anything else? Do I need to do any
brick removal / adding (I'm thinking no but want to make sure)?
Thanks,
*Gene Liverman*
On Thu, Oct 8
Happy to do so... what all info should go in the bug report?
--
*Gene Liverman*
On Thu, Oct 8, 2015 at 1:04 PM, Vijay Bellur
gluster to be happy again. Gluster put a .glusterfs
folder in /export/sdb1/gv0 but nothing else has shown up and the brick is
offline. I read the docs on replacing a brick but seem to be missing
something and would appreciate some help. Thanks!
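For reference, a minimal sketch of the usual 3.x-era recovery for a rebuilt brick; gv0 and /export/sdb1/gv0 come from the message above, while the hostname node3 and the `-new` path are made up for illustration:

```shell
# Sketch only: replace the dead brick with a fresh path on the same host,
# then trigger a full self-heal so the replicas repopulate it.
# "node3" and the -new path are hypothetical names; guarded so this is a
# no-op on machines without the gluster CLI.
if command -v gluster >/dev/null 2>&1; then
  gluster volume replace-brick gv0 \
    node3:/export/sdb1/gv0 node3:/export/sdb1/gv0-new commit force
  gluster volume heal gv0 full
  status=attempted
else
  status=skipped
fi
echo "$status"
```

No peer probing or brick removal/adding should be needed for an in-place replacement like this; the self-heal repopulates the data.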
--
*Gene Liverman*
event_dispatch_epoll_handler (data=0xf408b0) at event-epoll.c:575
#10 event_dispatch_epoll_worker (data=0xf408b0) at event-epoll.c:678
#11 0x003b91807a51 in start_thread (arg=0x7fee9db3b700) at pthread_create.c:301
#12 0x003b914e893d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S
There are a couple of answers to that question...
- The core dump is from a fully patched RHEL 6 box. This is my primary box.
- The other two nodes are fully patched CentOS 6.
--
*Gene Liverman*
it is possible that I can do my OS reinstall without wiping out the data on
two nodes (the third had a hardware failure so it will be fresh from the
ground up).
Thanks,
*Gene Liverman*
/~gliverma/tmp-files/sosreport-gliverman.gluster-crashing-20151006101239.tar.xz.md5
Thanks again for the help!
*Gene Liverman*
On Fri, Oct 2, 2015
Pulling those logs now but how do I generate the core file you are asking
for?
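A minimal sketch of the usual way to get a core on RHEL/CentOS 6; the paths are common defaults, not taken from this thread:

```shell
# Allow cores of unlimited size for processes started from this shell
# (put the equivalent in /etc/security/limits.conf to make it stick).
ulimit -c unlimited || true
# Optionally pick a predictable location/name for core files (needs root).
if [ -w /proc/sys/kernel/core_pattern ]; then
  echo '/tmp/core.%e.%p' > /proc/sys/kernel/core_pattern
fi
# To snapshot a running glusterd without waiting for a crash, gdb's gcore
# works too:  gcore -o /tmp/glusterd.core $(pidof glusterd)
ulimit -c
```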
--
*Gene Liverman*
On Fri, Oct 2, 2015 at 2:25 AM
In the last few days I've started having issues with my glusterd service
crashing. When it goes down it seems to do so on all nodes in my replicated
volume. How can I troubleshoot this? I'm on a mix of CentOS 6 and RHEL 6.
Thanks!
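As a starting point, a sketch of where crash evidence usually lands on RHEL/CentOS 6; the log filename is the stock default and may differ on a customized install:

```shell
# glusterd's own log is usually named after its volfile:
for f in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log; do
  if [ -f "$f" ]; then
    tail -n 50 "$f"   # look for the last backtrace / "crashed" lines
  fi
done
# A crash normally drops a core file in glusterd's working directory
# (often / when started by the init script):
cores=$(ls /core.* 2>/dev/null || true)
echo "${cores:-no core files found}"
```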
Gene Liverman
I have servers set to pull from
http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-5Server/x86_64
yet when I go there and work back up the path to the EPEL.repo folder I only
see 6 and 7 now. Is this a mistake, or was support for EPEL 5 dropped?
Thanks,
*Gene Liverman*
Very nice! I see a small Puppet module and a Vagrant setup in my immediate
future for using this. Thanks for sharing!
--
Gene Liverman
Awesome, thanks!
--
*Gene Liverman*
I'd like to second this request for suggestions. I'm not as far along so I
need to do some operational monitoring too still. Unlike Paul, I don't use
Nagios but instead use Zabbix. Any and all tips would be appreciated.
--
Gene Liverman
/glusterfs/LATEST/EPEL.repo/ is
missing the directories for epel-5Client, epel-5Server, and
epel-5Workstation
Can someone take a look at this?
--
*Gene Liverman*
I think it's a good idea.
--
Gene Liverman
I have a SPARC server that I'd like to utilize Gluster on and was wondering
if there is any support for that architecture? I am game to run Linux or
Solaris or whatever on the box. Thanks!
--
*Gene Liverman*
Could you also provide the output of this command:
$ mount | column -t
--
Gene Liverman
Personally, I think all replication benefits from a 9000 MTU. Bigger frames
mean fewer packets, and less per-packet overhead, for the same amount of
data, so replication runs faster.
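For anyone wanting to try it, a sketch; the interface name repl0 is a placeholder for whatever NIC carries replication traffic, and every switch port in between must also allow jumbo frames:

```shell
# Placeholder interface name; change to your replication NIC. Guarded so
# nothing happens on machines where that interface does not exist.
IFACE=repl0
if [ -e "/sys/class/net/$IFACE" ] && command -v ip >/dev/null 2>&1; then
  ip link set dev "$IFACE" mtu 9000   # requires root
fi
# Read back the effective MTU (shown here for loopback as a stand-in):
mtu=$(cat /sys/class/net/lo/mtu 2>/dev/null || echo 1500)
echo "$mtu"
```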
--
Gene Liverman
Adding the priorities fixed it for me. Thanks!
--
*Gene Liverman*
On Wed, Oct 15, 2014 at 6:00 PM, Prasun Gera prasun.g...@gmail.com wrote:
I am
--
*Gene Liverman*
Bug updated.
--
*Gene Liverman*
Anyone got any good tips for monitoring Gluster via Zabbix?
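One common pattern, as a sketch, is a Zabbix agent UserParameter that shells out to the gluster CLI; the item key, config path, and script name below are made up for illustration:

```shell
# In e.g. /etc/zabbix/zabbix_agentd.d/gluster.conf (hypothetical file):
#   UserParameter=gluster.peers.disconnected,/usr/local/bin/gluster_peers.sh
# The script body: count peers that are not connected. Guarded so it
# reports 0 where the gluster CLI is not installed.
count=$({ command -v gluster >/dev/null 2>&1 && gluster peer status; } \
          2>/dev/null | grep -c 'Disconnected' || true)
echo "$count"
```

A trigger on the item being greater than 0 then alerts on split peers; similar wrappers around `gluster volume status` can cover brick health.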
--
*Gene Liverman*
A simple service restart fixed this but, until I found the problem, it was
offline. Is this a bug, or do I need to disable the repo and manually
check for updates to Gluster, or what? Thanks!
--
*Gene Liverman*
Makes sense. Thanks!
--
*Gene Liverman*
https://bugzilla.redhat.com/show_bug.cgi?id=1108669
--
*Gene Liverman*
Twice now I have had my NFS connection to a replicated Gluster volume stop
responding. On both servers that connect to the system I have the following
symptoms:
1. Accessing the mount with the native client is still working fine (the
volume is mounted both that way and via NFS). One app
---
Running 'service glusterd restart' on the NFS server made things start
working again after this.
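For the next occurrence, a sketch of checks that distinguish a dead Gluster NFS process from an export that stopped answering; gv0 stands in for the real volume name:

```shell
# Guarded so each check only runs where the tool exists, and failures
# are tolerated (a hung NFS server is exactly what we are probing for).
if command -v gluster >/dev/null 2>&1; then
  gluster volume status gv0 nfs || true   # is the built-in NFS server online?
fi
if command -v showmount >/dev/null 2>&1; then
  showmount -e localhost || true          # does the export list still answer?
fi
nfs_checked=yes
```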
-- Gene
On Tue, Jun 10, 2014 at 12:10 PM, Niels de Vos nde...@redhat.com wrote:
On Tue, Jun 10, 2014 at 11:32:50AM -0400, Gene Liverman
No firewalls in this case...
--
Gene Liverman
On Jun 10, 2014 12:57 PM, Paul Robert Marino prmari...@gmail.com wrote:
I've also seen this happen when there is a firewall in the middle
Running the script did indeed fix things, thanks! Personally, I count this
as a bug since it is not required on later versions of RHEL... maybe the
installer should check that the fuse module is loaded and load it if it's
not found. Just a thought.
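The suggested check could look something like this; a sketch assuming the stock module name fuse and root privileges for modprobe:

```shell
# Load the fuse kernel module if it is not already present.
if command -v lsmod >/dev/null 2>&1; then
  if ! lsmod | grep -qw fuse; then
    modprobe fuse 2>/dev/null || echo "could not load fuse (need root?)"
  fi
fi
fuse_checked=yes
```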
--
*Gene Liverman*
Just set up my first Gluster share (replicated on 3 nodes) and it works fine
on RHEL 6, but when trying to mount it on RHEL 5.8 I get the following in my
logs:
[2014-06-01 02:01:29.580163] I [glusterfsd.c:1959:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.5.0