I was able to get this working by deleting the geo-replication session
and recreating it. Not sure why it broke in the first place, but it is
working now.
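For reference, deleting and recreating a session roughly amounts to the
sequence below; "mastervol" and "slavehost::slavevol" are placeholders rather
than the poster's actual volume and slave:

    gluster volume geo-replication mastervol slavehost::slavevol stop
    gluster volume geo-replication mastervol slavehost::slavevol delete
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem force
    gluster volume geo-replication mastervol slavehost::slavevol start

The trailing "force" on create may or may not be needed, depending on how much
of the old session state is still lying around.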
On 04/28/2017 03:08 PM, Michael Watters wrote:
> I've just upgraded my gluster hosts from gluster 3.8 to 3.10 and it
> appears that geo-replication on my volume is now broken.
I've just upgraded my gluster hosts from gluster 3.8 to 3.10 and it
appears that geo-replication on my volume is now broken. Here are the
log entries from the master. I've tried restarting the geo-replication
process several times, which did not help. Is there any way to resolve this?
2017-04-28
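A rough way to see where such a session stands before restarting it again; the
volume and slave names are placeholders:

    gluster volume geo-replication mastervol slavehost::slavevol status detail

The per-session logs on the master usually live under
/var/log/glusterfs/geo-replication/ and tend to carry more detail than the
summary status output.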
On Fri, Apr 28, 2017, at 10:57 AM, Jan Wrona wrote:
> I've been struggling with NUFA for a while now and I know very well what
> the "option local-volume-name brick" in the volfile does. In fact, I've
> been using a filter to force gluster to use the local subvolume I want
> instead of the first local subvolume it finds,
Hi,
I've been struggling with NUFA for a while now and I know very well what
the "option local-volume-name brick" in the volfile does. In fact, I've
been using a filter to force gluster to use the local subvolume I want
instead of the first local subvolume it finds, but filters are very
unreliable
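For context, the NUFA setup being described boils down, roughly, to a volfile
fragment along these lines; the volume and subvolume names are made up for
illustration:

    volume testvol-nufa
        type cluster/nufa
        option local-volume-name testvol-client-2   # pin new file creation to this subvolume
        subvolumes testvol-client-0 testvol-client-1 testvol-client-2
    end-volume

Hand-editing or filtering this generated fragment is exactly the fragile part,
since volume-set operations regenerate the volfiles.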
Hi,
We see an issue when using a volume that has the default setting for
performance.write-behind, so we have been running
    gluster volume set <volname> performance.write-behind off
manually. How would I go about forcing all new volumes to have that set? We
are trying to use heketi for dynamic volume provisioning
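One way to at least cover all existing volumes is a simple loop over the CLI;
whether the option can also be applied at create time depends on whether the
heketi/provisioner version in use can pass gluster volume options through:

    # apply the setting to every volume glusterd currently knows about
    for vol in $(gluster volume list); do
        gluster volume set "$vol" performance.write-behind off
    done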
Hello,
I have problems with tuning Gluster for optimal small files performance.
My usage scenario is, as I've learned, the worst possible one, but
it's not up to me to change it:
- small 1KB files
- at least 20M of those
- approx. 10 files/directory
- mostly writes
- average speed 1000 files/sec
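For a workload like the one listed above, a common starting point (option
names as of the 3.9/3.10 series; VOLNAME is a placeholder) is the
md-cache/invalidation and lookup tuning, roughly:

    gluster volume set VOLNAME features.cache-invalidation on
    gluster volume set VOLNAME features.cache-invalidation-timeout 600
    gluster volume set VOLNAME performance.stat-prefetch on
    gluster volume set VOLNAME performance.cache-invalidation on
    gluster volume set VOLNAME performance.md-cache-timeout 600
    gluster volume set VOLNAME network.inode-lru-limit 200000
    gluster volume set VOLNAME cluster.lookup-optimize on

How much this helps a write-dominated workload varies; it mainly cuts down on
the lookup/stat round trips that dominate small-file traffic.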
Good morning guys,
We’re using GlusterFS 3.6.7 on RHEL 6.7 on AWS using multiple 1TB EBS GP2 disks
as bricks.
We have two nodes with several volumes of type Replicate, each with two bricks.
One brick belongs to server #1 and, of course, the other one to server #2.
Transport is over TCP and the only o
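For reference, a two-node replica volume of the kind described is typically
created along these lines; the server and brick paths are invented for
illustration:

    gluster volume create datavol replica 2 \
        server1:/bricks/ebs1/brick server2:/bricks/ebs1/brick
    gluster volume start datavol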
I'd just like to post an update on my latest findings on this.
Googling further, I ended up reading this article:
https://community.rackspace.com/developers/f/7/t/4858
Comparing it with the docs
(https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Resolving%20Peer%20Rejected/) an
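The procedure in that doc, roughly, is to reset the rejected peer's local
state while keeping its UUID, e.g.:

    # on the rejected node only; glusterd.info holds the node UUID and must stay
    systemctl stop glusterd          # or: service glusterd stop
    cd /var/lib/glusterd
    find . -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
    systemctl start glusterd
    gluster peer probe <a-good-node>
    systemctl restart glusterd
    gluster peer status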
On Fri, Apr 28, 2017 at 3:44 PM, Szymon Miotk wrote:
> Dear Gluster community,
>
> I have problems with tuning Gluster for small files performance on SSD.
>
> My usage scenario is, as I've learned, the worst possible one, but
> it's not up to me to change it:
> - small 1KB files
> - at least 20M of those
Dear Gluster community,
I have problems with tuning Gluster for small files performance on SSD.
My usage scenario is, as I've learned, the worst possible one, but
it's not up to me to change it:
- small 1KB files
- at least 20M of those
- approx. 10 files/directory
- mostly writes
- average speed 1000 files/sec
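For a write-heavy small-file load on SSD, the network/CPU side is often the
first bottleneck, so beyond the md-cache options it may be worth trying the
thread-count knobs; VOLNAME is a placeholder:

    gluster volume set VOLNAME client.event-threads 4
    gluster volume set VOLNAME server.event-threads 4
    gluster volume set VOLNAME performance.client-io-threads on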
Of course. Please find attached. Hope they can shed some light on this.
Thanks,
Seva
28.04.2017, 12:41, "Mohammed Rafi K C":
> Can you share the glusterd logs from the three nodes?
>
> Rafi KC
>
> On 04/28/2017 02:34 PM, Seva Gluschenko wrote:
>> Dear Community,
>>
>> I call for your wisdom, as it appears that googling for keywords doesn't help much.
Can you share the glusterd logs from the three nodes?
Rafi KC
On 04/28/2017 02:34 PM, Seva Gluschenko wrote:
> Dear Community,
>
>
> I call for your wisdom, as it appears that googling for keywords doesn't help
> much.
>
> I have a glusterfs volume with replica count 2, and I tried to perform the online upgrade procedure described in the docs.
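The logs being asked for are glusterd's own logs, which sit on each node under
/var/log/glusterfs/ (named glusterd.log on recent releases,
etc-glusterfs-glusterd.vol.log on older ones), e.g.:

    tar czf glusterd-logs-$(hostname).tar.gz /var/log/glusterfs/*glusterd*.log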
Dear Community,
I call for your wisdom, as it appears that googling for keywords doesn't help
much.
I have a glusterfs volume with replica count 2, and I tried to perform the
online upgrade procedure described in the docs
(http://gluster.readthedocs.io/en/latest/Upgrade-Guide/upgrade_to_3.10/)
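For a replica volume, the linked guide's online path boils down, roughly, to
upgrading one replica at a time; VOLNAME is a placeholder:

    # on one node at a time
    killall glusterfs glusterfsd glusterd    # stop all gluster processes
    yum update 'glusterfs*'                  # or the distro's equivalent upgrade step
    systemctl start glusterd
    gluster volume heal VOLNAME              # kick off self-heal
    gluster volume heal VOLNAME info         # repeat until the heal count reaches zero
    # only then move on to the other replica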