Hi Sunny,
Thanks for your response. Yes, '/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py' was missing at the slave.
I have installed the glusterfs-geo-replication.x86_64 RPM and the session is Active now.
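For anyone following along, a quick way to confirm the session state after installing the package is the geo-replication status command; the volume, user and host names below are placeholders, not the actual ones from this setup:

gluster volume geo-replication mastervol geoaccount@slavehost::slavevol status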
But now I am struggling with the indexing issue. Files larger than 5 GB in the master vol
Hello,
Regarding the issue:
Bug 1642638 - Log-file rotation on a Disperse Volume while a failed
brick results in files that cannot be healed.
https://bugzilla.redhat.com/show_bug.cgi?id=1642638
Could anybody who has GlusterFS 4.1.x installed check whether this problem exists there?
If anyone knows
Hi,
How can I use the NFS exports from my storage as the peer's replicated volume? Any tips?
Regards.
--
Oğuz Yarımtepe
http://about.me/oguzy
Hi all,
I am testing the geo-replication service in Gluster version 3.10.12 on CentOS Linux release 7.5.1804, and my session remains in a Faulty state. On Gluster 3.12 we can run the following command to solve the problem:
gluster vol geo-replication mastervol geoaccount@servere
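Independent of that option, a useful first step when a 3.10 session stays Faulty is to look at the session status and the worker log on the master side; a rough sketch, with the volume, user and host names as placeholders:

gluster volume geo-replication mastervol geoaccount@slavehost::slavevol status detail
# the worker/monitor logs for the session usually live under:
ls /var/log/glusterfs/geo-replication/mastervol/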
Hi,
I have set up a GlusterFS volume gv0 as distributed-replicated:
root@pm1:~# gluster volume info gv0
Volume Name: gv0
Type: Distributed-Replicate
Volume ID: 64651501-6df2-4106-b330-fdb3e1fbcdf4
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: 19
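For reference, a volume with this layout (3 x 2 = 6 bricks, replica 2) would typically have been created along these lines; apart from pm1, the host names and brick paths here are just placeholders, not the actual ones:

gluster volume create gv0 replica 2 \
    pm1:/data/brick1/gv0 pm2:/data/brick1/gv0 \
    pm3:/data/brick2/gv0 pm4:/data/brick2/gv0 \
    pm5:/data/brick3/gv0 pm6:/data/brick3/gv0
gluster volume start gv0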
Hi Krishna,
Please check whether this file exists at the slave:
'/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py'
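Something along these lines on the slave node should tell you quickly (glusterfs-geo-replication is the package that provides gsyncd.py on RPM-based systems, as mentioned elsewhere in this thread):

ls -l /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py
rpm -q glusterfs-geo-replication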
- Sunny
On Wed, Oct 24, 2018 at 4:36 PM Krishna Verma wrote:
> Hi Everyone,
> I have created a 4*4 distributed Gluster volume, but when I start the session it fails with the errors below.
Anyone?
I would really like to be able to install GlusterFS 4.1.x on Debian 8 (jessie).
Debian 8 is still widely in use, and IMHO there should be a GlusterFS package for it.
Many thanks in advance for your consideration.
‐‐‐ Original Message ‐‐‐
On Friday, October 19, 2
On 10/24/2018 05:16 PM, Hoggins! wrote:
Thank you, it's working as expected.
I guess it's only safe to put cluster.data-self-heal back on when I get
an updated version of GlusterFS?
Yes, correct. Also, you would still need to restart the shd whenever you hit this issue, until you upgrade.
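For completeness, once you are on the fixed version, re-enabling the option is just the usual volume set; the volume name below is a placeholder:

gluster volume set myvol cluster.data-self-heal on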
-Ravi
Hi Everyone,
I have created a 4*4 distributed Gluster volume, but when I start the session it fails with the errors below.
[2018-10-24 10:02:03.857861] I [gsyncdstatus(monitor):245:set_worker_status]
GeorepStatus: Worker Status Change status=Initializing...
[2018-10-24 10:02:03.858
On 10/24/2018 02:38 PM, Hoggins! wrote:
Thanks, that's helping a lot, I will do that.
One more question: should the glustershd restart be performed on the
arbiter only, or on each node of the cluster?
If you do a 'gluster volume start volname force' it will restart the shd
on all nodes.
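In other words, something like this, assuming the volume is called gv0 (placeholder name):

gluster volume start gv0 force    # re-spawns shd on all nodes without touching bricks that are already running
gluster volume status gv0         # the Self-heal Daemon entries should show Online on every node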
-Ravi
Dear Gluster team,
Since January 2018 I have been running GlusterFS with 4 nodes.
The storage is attached to an oVirt system and has been running happily so far.
I have three volumes:
Gv0_she – triple-replicated volume for the oVirt Self-Hosted Engine (it's a requirement)
Gv1_vmpool – distributed volume