On Wed, Oct 16, 2019 at 11:36 PM Strahil wrote:
> By the way,
>
> I have been left with the impression that data is transferred via
> 'rsync' and not via FUSE.
> Am I wrong?
>
Rsync syncs data from the Master FUSE mount to the Slave/Remote FUSE mount.
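Conceptually, per changed file, it is something like the sketch below. This
is an illustration only, not the exact gsyncd invocation (which batches
changelog entries, runs over SSH, and uses additional rsync flags); the
mount paths are placeholders for the auxiliary mounts geo-rep maintains:

    # illustrative only; geo-rep manages its own auxiliary FUSE mounts
    rsync -a --inplace /mnt/master-aux-mount/dir/file \
        slave-host:/mnt/slave-aux-mount/dir/file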
>
> Best Regards,
> Strahil Nikolov
Got it.
Geo-replication uses the slave nodes' IPs in the following cases:
- Verification during session creation - it tries to mount the slave volume
using the hostname/IP provided in the Geo-rep create command. Try Geo-rep
create by specifying the external IP which is accessible from the master
node (see the sketch below).
-
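For example, a session create against the externally reachable address
would look like this (volume names and the host are placeholders):

    gluster volume geo-replication mastervol external-host::slavevol \
        create push-pem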
On Wed, Oct 16, 2019 at 2:47 PM deepu srinivasan wrote:
> Hi Users,
> How will GeoReplication resume if there is a network disturbance or
> network failure between the two data centres? What will happen if an rsync
> session for a file fails? Will the rsync session restart for the failed
> file again?
On Wed, Oct 16, 2019 at 11:08 PM deepu srinivasan wrote:
> Hi Users
> Is there a single point of failure in GeoReplication for gluster?
> My Case:
> I use 3 nodes in both the master and slave volumes.
> Master volume: Node1, Node2, Node3
> Slave volume: Node4, Node5, Node6
> I tried to recreate the
I did explore Ceph a bit, and that might be an option as well; I'm still
exploring gluster. Hopefully no one hates you for making the suggestion.
I haven't tried NFS Ganesha yet. I was under the impression it was still a
little unstable, and found the docs a little limited for
Most probably the current version never supported such elasticity (maybe
there was no such need until now), and the only option is to use
highly-available NFS Ganesha, as the built-in NFS is deprecated. What about
scaling on the same system? Nowadays, servers have a lot of hot-plug disk
slots and
Yeah, there are somewhat dirty ways to work around it, and I hadn't thought
of this one. Another option for us is to try to tag certain instances as
volfile servers, and always prevent the autoscaler from removing them. It
would be nice, though, if this behavior could be added to gluster itself.
Yes, this makes the issue less likely, but doesn't make it impossible for
something that is fully elastic.
For instance, if I had instead just started with A, B, C and then scaled out
and in twice, all volfile servers could potentially have been destroyed and
replaced.
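As a stopgap, clients can list fallback volfile servers at mount time
(node names and the volume are placeholders here); note this only helps
while at least one listed server still exists, which is exactly the
limitation described above:

    mount -t glusterfs -o backup-volfile-servers=nodeB:nodeC \
        nodeA:/myvol /mnt/myvol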
I think the problem is that
By the way,
I have been left with the impression that data is transferred via 'rsync'
and not via FUSE.
Am I wrong?
Best Regards,
Strahil Nikolov
On Oct 16, 2019 19:59, Alexander Iliev wrote:
Hi Aravinda,
All volume bricks on the slave volume are up and the volume seems functional.
Your suggestion about trying to mount the slave volume on a master node
brings up my question about network connectivity again - the GlusterFS
documentation[1] says:
> The server specified in the mount
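A minimal check along those lines (host, volume, and mount point are
placeholders) would be to mount the slave volume directly from a master
node and verify it is readable:

    mount -t glusterfs slave-host:/slavevol /mnt/slavevol-test
    ls /mnt/slavevol-test
    umount /mnt/slavevol-test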
Hi,
We have an old Gluster cluster setup, running replica 2 across two
datacenters, and currently on version 4.1.5.
I need to add an arbiter to this setup, but I'm concerned about the performance
impact of this on the volumes.
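For what it's worth, assuming your version supports live conversion, the
usual way to convert replica 2 to arbiter is a single add-brick (host and
brick path are placeholders); the arbiter stores only metadata, so it mainly
adds a third set of metadata operations rather than full data traffic:

    gluster volume add-brick myvol replica 3 arbiter 1 \
        arbiter-host:/bricks/arbiter/myvol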
I recently set up a new cluster, for a different purpose, and
Thank you for your response!
It does not reply, and none of the other servers respond on port 24007
either. I tried this from each server to every other server - no reply at
all. Even with the firewall disabled, it does not reply.
root@diufnas22:/home/diuf-sysadmin# netstat -tulpe |grep 24007
tcp        0      0
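Note that ICMP ping does not exercise a TCP port. To test 24007 itself,
something like this (using the hostname from the prompt above):

    # from a peer, check the TCP port rather than ICMP
    nc -zv diufnas22 24007
    # on the server itself, confirm glusterd is listening
    ss -tlnp | grep 24007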
Hi,
I am keeping Raghvendra in the loop and hope he can comment on the "Read
being scheduled as slow fop" issue.
Other than that, I would request you to provide the following information
to debug this issue.
1 - Profile information of the volume. You can find the steps here -
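Since the link above is cut off, for reference the profile workflow is
(volume name is a placeholder):

    gluster volume profile myvol start
    # run the workload for a while, then:
    gluster volume profile myvol info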