Re: [Gluster-users] Gluster EPEL _5_ packages not signed

2014-03-06 Thread Grant Byers
Hi Kaleb. Yes, it was just EL5. Apologies. I discovered this after I posted. Are you sure yum is barfing on the signature? Yum on EL5 will barf if your repo uses anything stronger than sha1 (sha) for checksums. The default is sha256 when using createrepo to build the metadata. FWIW, I sign all
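
For reference, a minimal sketch of building EL5-compatible repo metadata (the repo path below is just a placeholder); createrepo's -s/--checksum flag selects the digest type, and "sha" (SHA-1) is what EL5's yum can parse:

    # Regenerate repodata with SHA-1 checksums so EL5 yum can read it
    createrepo -s sha /path/to/el5/repo

    # Verify which checksum type the metadata actually uses
    grep -o 'checksum type="[^"]*"' /path/to/el5/repo/repodata/repomd.xml | sort -u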

Re: [Gluster-users] Gluster EPEL _5_ packages not signed

2014-03-06 Thread Kaleb Keithley
> I saw that this issue has been raised before for staging packages, but I want to bring to the attention of the relevant people/person that the LATEST Gluster stable packages are also not signed. There are no contact details within the package headers (see below), so I can't simply

[Gluster-users] Gluster EPEL packages not signed

2014-03-06 Thread Grant Byers
Hi, I saw that this issue has been raised before for staging packages, but I want to bring to the attention of the relevant people/person that the LATEST Gluster stable packages are also not signed. There are no contact details within the package headers (see below), so I can't simply emai
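
For anyone who wants to check this locally, a quick sketch (the package filename is only an example): rpm -qpi prints the header fields (Packager, Vendor, Signature) and rpm -K checks digests and the GPG signature:

    # Show header fields, including Packager and Signature
    rpm -qpi glusterfs-server-3.4.2-1.el6.x86_64.rpm

    # Check digests and GPG signature (no "pgp" in the output means unsigned)
    rpm -K glusterfs-server-3.4.2-1.el6.x86_64.rpm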

Re: [Gluster-users] One node goes offline, the other node loses its connection to its local Gluster volume

2014-03-06 Thread Greg Scott
> In your real-life concern, the interconnect would not interfere with the existence of either machine's IP address, so after the ping-timeout, operations would resume in a split-brain configuration. As long as no changes were made to the same file on both volumes, when the connect
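
For context, the timeout being discussed here is the volume's network.ping-timeout option (42 seconds by default). A sketch of tuning it, with a hypothetical volume name:

    # Lower the client/server ping timeout (default is 42 seconds)
    gluster volume set myvol network.ping-timeout 10

    # Changed options show up under "Options Reconfigured:"
    gluster volume info myvol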

Re: [Gluster-users] Is there a way to manually clear the heal-failed/split-brain lists?

2014-03-06 Thread Joe Julian
Restart glusterd on your servers. On March 6, 2014 3:58:00 PM PST, Shawn Heisey wrote: > On 3/6/2014 2:14 PM, Michael Peek wrote: >> I've noticed that once I've taken care of a problem, the heal-failed and split-brain lists don't get smaller or go away. Is there a way to manually reset the
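
A sketch of the commands involved (the volume name is hypothetical); on EL-style init systems glusterd is restarted with the service command:

    # Restart the management daemon on each server
    service glusterd restart

    # Re-check the lists afterwards
    gluster volume heal myvol info heal-failed
    gluster volume heal myvol info split-brain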

Re: [Gluster-users] Is there a way to manually clear the heal-failed/split-brain lists?

2014-03-06 Thread Shawn Heisey
On 3/6/2014 2:14 PM, Michael Peek wrote: > I've noticed that once I've taken care of a problem, the heal-failed and split-brain lists don't get smaller or go away. Is there a way to manually reset them? I'd like to know the answer to that question too. There is a bug filed on the problem alr

Re: [Gluster-users] One node goes offline, the other node loses its connection to its local Gluster volume

2014-03-06 Thread Joe Julian
On 02/22/2014 05:44 PM, Greg Scott wrote: I have 2 nodes named fw1 and fw2. When I ifdown the NIC I'm using for Gluster on either node, that node cannot see its Gluster volume, but the other node can see it after a timeout. As soon as I ifup that NIC, everyone can see everything again.

Re: [Gluster-users] One node goes offline, the other node loses its connection to its local Gluster volume

2014-03-06 Thread Greg Scott
Sorry Anirban, I didn't mean to disappear into a black hole a couple weeks ago. I've been away from this for a while and I just now have a chance to look at the replies. One suggestion was to try an iptables rule instead of ifdown to simulate my outage and I'll try that in a little while and
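
A sketch of the iptables approach, assuming GlusterFS 3.4's default ports (24007/24008 for glusterd, bricks from 49152 up; older releases used 24009+), so adjust to your setup:

    # Block Gluster traffic to simulate a failed interconnect
    iptables -I INPUT -p tcp --dport 24007:24008 -j DROP
    iptables -I INPUT -p tcp --dport 49152:49251 -j DROP

    # Restore connectivity
    iptables -D INPUT -p tcp --dport 24007:24008 -j DROP
    iptables -D INPUT -p tcp --dport 49152:49251 -j DROP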

[Gluster-users] Is there a way to manually clear the heal-failed/split-brain lists?

2014-03-06 Thread Michael Peek
I've noticed that once I've taken care of a problem, the heal-failed and split-brain lists don't get smaller or go away. Is there a way to manually reset them? Michael

Re: [Gluster-users] Gluster 3.4.2 on Ubuntu 12.04 LTS Server - Upstart No Go

2014-03-06 Thread Ray Powell
I get around the problem on our gluster clusters by putting "nobootwait" and/or "noauto" in my fstab and then having rc.local run the mount command for the gluster mount points. Do not know if that is more or less elegant than your workaround. It at
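
A sketch of that setup, with hypothetical server/volume/mountpoint names; nobootwait is Upstart-specific (Ubuntu), while noauto simply skips the mount at boot:

    # /etc/fstab -- don't block boot on the gluster mount
    server1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,nobootwait  0 0

    # /etc/rc.local -- mount it once the network and glusterd are up
    mount -t glusterfs server1:/gv0 /mnt/gv0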

Re: [Gluster-users] 3.3.0 -> 3.4.2 / Rolling upgrades with no downtime

2014-03-06 Thread Bryan Whitehead
If you're not sure, just pick one and save the other. Steps I did: save one of the qcow2 split-brain copies (copy it from the brick to another name), remove the qcow2 file that you just "backed up" (Gluster will heal it from the other one), and restart the VM. If the VM recovers after an fsck then just delete the saved qco
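
A rough sketch of those steps on the brick whose copy you are discarding (paths, volume name, and file names are hypothetical; on 3.3/3.4 the gfid hard link under .glusterfs usually has to be removed as well, so work carefully and keep backups):

    # Back up the copy you decided to discard
    cp /export/brick1/vms/guest.qcow2 /root/guest.qcow2.splitbrain-backup

    # Find the gfid hard link under .glusterfs, then remove it and the file
    GFID_LINK=$(find /export/brick1/.glusterfs -samefile /export/brick1/vms/guest.qcow2)
    rm -f "$GFID_LINK" /export/brick1/vms/guest.qcow2

    # Trigger a heal from the surviving copy
    gluster volume heal myvol full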

[Gluster-users] Gluster Spotlight: Apache CloudStack, Pydio, and OpenNebula

2014-03-06 Thread John Mark Walker
This week's spotlight will be all about software integrated with storage services. GFAPI has opened the floodgates for this type of integration with GlusterFS. In this spotlight, we'll hear from people who have been actively working on integrations with Apache CloudStack, Pydio, and OpenNebula.

[Gluster-users] [noob] iops testing in the mounted gluster folder, need a little advice

2014-03-06 Thread Kim Holmebakken
Hi, I'm trying to test the IOPS on the glusterfs folder from a client machine running Ubuntu 13.10, and was wondering, since this is a directory, what is the best way to proceed? I made a small block device file within the mounted folder and tried to test the IOPS with fio to that locati
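
A sketch of a simple fio run against an ordinary file inside the mount (the path and sizes are just examples); there is no need for a block device, fio can create and exercise a regular file:

    fio --name=glustertest --directory=/mnt/glustervol \
        --size=1G --bs=4k --rw=randrw --rwmixread=70 \
        --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 \
        --group_reporting --runtime=60 --time_based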

Re: [Gluster-users] Distributed runs out of Space

2014-03-06 Thread Lars Ellenberg
On Tue, Mar 04, 2014 at 10:09:24AM +0100, Dragon wrote: > Hello, some time has gone by since I reported that problem. I watched this and found out that if I run a rebalance, which evens out the free space across all 3 bricks, I can copy new files onto the volume. Now I have this situation again
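
For reference, a sketch of the rebalance commands and a quick way to watch per-brick usage (volume name and brick path are hypothetical):

    # Even out file placement across the bricks
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status

    # Check free space on each brick (run on each server)
    df -h /export/brick1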

[Gluster-users] Information about .glusterfs/indices/xattrop

2014-03-06 Thread Chiku
Hello, is there any documentation about the xattrop folder? I read it's related to self-heal. Right now self-heal info doesn't work. I have a lot of files inside this folder. These file names look like gfids, but I don't find any gfid file with these names, and they don't match any regular files' trusted.g
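
For what it's worth, a sketch of mapping a gfid-looking name back to a file on the brick (the brick path and gfid are examples): the backend link for gfid aabbccdd-... lives at .glusterfs/aa/bb/aabbccdd-..., and for regular files it is a hard link you can chase with find -samefile:

    GFID=aabbccdd-1122-3344-5566-77889900aabb   # example name from indices/xattrop
    BRICK=/export/brick1

    # The gfid backend path (first two and next two hex chars are the subdirs)
    ls -l $BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID

    # For a regular file, find the real path via the shared inode
    find $BRICK -samefile $BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID -not -path "*/.glusterfs/*"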

Re: [Gluster-users] 3.3.0 -> 3.4.2 / Rolling upgrades with no downtime

2014-03-06 Thread João Pagaime
Thanks, but which qcow2/FVM file should be chosen for deletion? Maybe there is some known current best practice for maximum VM stability. If the VM is frozen, the decision may be to delete the oldest qcow2/FVM file, or to choose at random if there is no difference. Best regards, --joão On 05-03-2014
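
One way to make that choice less random (a sketch with hypothetical paths): compare the two brick copies before deleting anything, e.g. timestamps and sizes with stat, and image integrity with qemu-img check:

    # On each server, inspect the brick copy directly
    stat /export/brick1/vms/guest.qcow2
    md5sum /export/brick1/vms/guest.qcow2

    # Sanity-check the image structure (read-only)
    qemu-img check /export/brick1/vms/guest.qcow2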