Hi Kaleb
Yes, it was just EL5. Apologies. I discovered this after I posted.
Are you sure yum is barfing on the signature? Yum on EL5 will barf if your repo
uses anything stronger than sha1 (sha) for checksums. The default is sha256
when using createrepo to build the metadata.
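If the checksum type is indeed the problem, regenerating the metadata with sha1 usually fixes it for EL5 clients. A minimal sketch, assuming the repository lives under /srv/repo (adjust the path for your layout):

  # rebuild repodata with EL5-compatible sha1 checksums
  createrepo -s sha1 /srv/repo
  # then clear the cached metadata on the EL5 client
  yum clean metadata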
FWIW, I sign all
>
> I saw that this issue has been raised before for staging packages, but I'm
> wanting to bring to the attention of the relevant people/person that the
> LATEST Gluster stable packages are also not signed. There are no contact
> details within the package headers (see below), so I can't simply
Hi,
I saw that this issue has been raised before for staging packages, but I'm
wanting to bring to the attention of the relevant people/person that the LATEST
Gluster stable packages are also not signed. There are no contact details
within the package headers (see below), so I can't simply email
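For reference, a quick way to see whether a downloaded package is signed at all (the filename below is only an example):

  # an unsigned package shows "(none)" in the Signature field
  rpm -qpi glusterfs-server.rpm | grep Signature
  # or verify digests and signatures in one pass
  rpm -K glusterfs-server.rpm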
> In your real-life concern, the interconnect would not interfere with the
> existence of either machine's IP address, so after the ping-timeout,
> operations would resume in a split-brain configuration. As long as no
> changes were made to the same file on both volumes, when the
> connect
Restart glusterd on your servers.
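On EL-family hosts that is roughly the following (assuming the sysvinit service name; adjust for systemd):

  # restart the management daemon on each server; brick processes and
  # existing client mounts keep running
  service glusterd restart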
On March 6, 2014 3:58:00 PM PST, Shawn Heisey wrote:
>On 3/6/2014 2:14 PM, Michael Peek wrote:
>> I've noticed that once I've taken care of a problem, the heal-failed and
>> split-brain lists don't get smaller or go away. Is there a way to
>> manually reset the
On 3/6/2014 2:14 PM, Michael Peek wrote:
> I've noticed that once I've taken care of a problem, the heal-failed and
> split-brain lists don't get smaller or go away. Is there a way to
> manually reset them?
I'd like to know the answer to that question too. There is a bug filed
on the problem already
On 02/22/2014 05:44 PM, Greg Scott wrote:
I have 2 nodes named fw1 and fw2. When I ifdown the NIC I'm using for
Gluster on either node, that node cannot see its Gluster volume, but
the other node can see it after a timeout. As soon as I ifup that
NIC, everyone can see everything again.
Sorry Anirban, I didn't mean to disappear into a black hole a couple of weeks ago.
I've been away from this for a while and only now have a chance to look at
the replies. One suggestion was to try an iptables rule instead of ifdown to
simulate my outage; I'll try that in a little while and
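A minimal sketch of such an iptables test, with 192.168.1.2 standing in for the other node's Gluster address (an example value only):

  # drop all traffic to/from the peer to simulate a link failure
  # without touching the NIC itself
  iptables -A INPUT  -s 192.168.1.2 -j DROP
  iptables -A OUTPUT -d 192.168.1.2 -j DROP
  # remove the rules afterwards to restore connectivity
  iptables -D INPUT  -s 192.168.1.2 -j DROP
  iptables -D OUTPUT -d 192.168.1.2 -j DROP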
I've noticed that once I've taken care of a problem, the heal-failed and
split-brain lists don't get smaller or go away. Is there a way to
manually reset them?
Michael
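For context, the lists in question are the ones reported by the heal info commands, e.g. (volume name is a placeholder):

  gluster volume heal myvol info heal-failed
  gluster volume heal myvol info split-brain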
I get around the problem on our gluster clusters by putting
"nobootwait" and/or "noauto" in my fstab and then having rc.local run
the mount command for the gluster mount points. I don't know if that is
more or less elegant than your workaround. It at
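A minimal sketch of that setup, with gluster1:/gv0 and /mnt/gluster as placeholder names:

  # /etc/fstab: keep the entry defined but don't block boot on it
  gluster1:/gv0  /mnt/gluster  glusterfs  defaults,_netdev,noauto  0 0

  # /etc/rc.local: mount it explicitly once networking and glusterd are up
  mount /mnt/gluster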
If you're not sure, just pick one and save the other.
Steps I did:
1. Save one of the split-brained qcow2 files (copy it from the brick to another name).
2. Remove the qcow2 file that you just "backed up"; Gluster will heal with
   the other one.
3. Restart the VM.
4. If the VM recovers after an fsck, just delete the saved qcow2
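A rough sketch of those steps on the brick side, assuming /bricks/b1 is the brick path, vm1.qcow2 the affected image, and myvol the volume (all names are examples):

  # keep a backup copy of the replica you decided to discard
  cp /bricks/b1/vms/vm1.qcow2 /root/vm1.qcow2.bak
  # remove the bad copy so the other brick's replica wins the heal
  # (depending on the Gluster version you may also need to remove the
  # matching hard link under .glusterfs/ on that brick)
  rm /bricks/b1/vms/vm1.qcow2
  # trigger and watch the heal from any server
  gluster volume heal myvol
  gluster volume heal myvol info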
This week's spotlight will be all about software integrated with storage
services. GFAPI has opened the floodgates for this type of integration with
GlusterFS. In this spotlight, we'll hear from people who have been actively
working on integrations with Apache CloudStack, Pydio, and OpenNebula.
Hi,
I'm trying to test the IOPS on the GlusterFS-mounted folder from a client machine
running Ubuntu 13.10, and since this is a directory I was wondering what the best
way to proceed with this is.
I made a small block device file within the mounted folder and tried to test
the IOPS with fio against that location
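In case it helps, a typical fio run against a directory on the mount point looks something like this (path and sizing are only examples):

  # 4k random read/write against files created inside the Gluster mount
  mkdir -p /mnt/gluster/fio
  fio --name=glustertest --directory=/mnt/gluster/fio \
      --size=1G --bs=4k --rw=randrw --rwmixread=70 \
      --ioengine=libaio --iodepth=16 --numjobs=4 \
      --runtime=60 --time_based --group_reporting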
On Tue, Mar 04, 2014 at 10:09:24AM +0100, Dragon wrote:
> Hello,
> some time has passed since I reported that problem. I have been watching this
> and found out that if I run a rebalance, which evens out the free space across
> all 3 bricks, I can copy new files onto the volume.
> Now I have this situation again
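For reference, the rebalance mentioned above is along these lines (volume name is a placeholder):

  gluster volume rebalance myvol start
  gluster volume rebalance myvol status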
Hello
Is there any documentation about the xattrop folder?
I read it's about self-heal.
Right now self-heal info doesn't work.
I have a lot of files inside this folder.
These file names look like GFIDs, but I don't find any gfid file with
these names.
And they don't match any regular file's trusted.g
Thanks,
but which qcow2/FVM file should I choose for deletion? Maybe there is some
known current best practice for maximum VM stability.
If the VM is frozen, the decision might be to delete the oldest qcow2/FVM
file, or to choose at random if there is no difference.
Best regards,
--joão
On 05-03-2014