Hi Shwetha,
thank you for your reply...
I ran a few tests in debug mode but found no real indication of the
problem. After each start of the geo-replication, some files are
transferred at the beginning, but then no further transfers happen.
A few minutes after the start, the amount of changelog files in lo
umed?
Regards,
Felix
On 03/03/2021 17:28, Dietmar Putz wrote:
Hi,
I'm having a problem with geo-replication. A short summary...
About two months ago I added two further nodes to a distributed
replicated volume. For that purpose I stopped the geo-replication,
added two nodes on mvol and svol, and started a rebalance process on
both sides (a command-level sketch of this procedure follows below).
Once the
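As a sketch of that procedure (volume names, node names and brick paths
here are placeholders, not necessarily the ones actually used):

    # stop geo-replication before changing the volume layout
    gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> stop
    # add the two new bricks (a multiple of the replica count)
    gluster volume add-brick <MASTERVOL> node5:/brick1/mvol1 node6:/brick1/mvol1
    # rebalance the volume and watch progress
    gluster volume rebalance <MASTERVOL> start
    gluster volume rebalance <MASTERVOL> status
    # repeated analogously on the slave volume (svol)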
Hi Andreas,
recently I was faced with the same fault. I'm pretty sure you speak
German, so a translation should not be necessary.
I found the reason by tracing a certain process, which pointed me to
the gsyncd.log, and by looking backward from the error until I found
some lgetxat
Hi Christos,
a few months ago I had a similar problem, but on Ubuntu 16.04. At that
time Kotresh gave me a hint:
https://www.spinics.net/lists/gluster-users/msg33694.html
gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> \
    config access_mount true
This hint solved my problem on Ubuntu 16.04; hope that helps.
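For reference, the current value can be checked by listing the whole
session config (a sketch; the session names are placeholders):

    # 'config' without an option name prints all config values
    gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> \
        config | grep access_mount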
t /root/tmp/get_dir_gfid.out | grep aa761834caf1"
Host : gl-node1
brick1/mvol1/.trashcan/test 0x4f59c0686c7740f2b556aa761834caf1
-
Host : gl-node2
brick1/mvol1/.trashcan/test 0x4f59c0686c7740f2b556aa761834caf1
--------
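A hedged sketch of how such a per-host listing can be produced:
trusted.gfid is readable on the brick backend (not through the mount),
so on each host something like the following should print the gfid
shown above:

    # run on the brick, as root; -e hex prints the raw 16-byte gfid
    getfattr -n trusted.gfid -e hex /brick1/mvol1/.trashcan/test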
I will come back with a more comprehensive and reproducible description
of that issue...
Thanks,
Kotresh HR
On Mon, Mar 12, 2018 at 10:13 PM, Dietmar Putz <dietmar.p...@3qsdn.com> wrote:
Hello,
in regard to
https://bugzilla.redhat.com/show_bug.cgi?id=1434066
st one file which is stored on gl-node3
and gl-node4, while node1 and node2 are in geo-replication error.
Since the filesize limitation of the trashcan is obsolete, I'm really
interested in using the trashcan feature, but I'm concerned it will
interrupt the geo-replication entirely.
Does anybody
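For context, the trashcan feature referred to here is switched per
volume; a sketch using the upstream trash-xlator option names (the size
value is just an example):

    # enable the trash translator on the master volume
    gluster volume set <MASTERVOL> features.trash on
    # limit the size of files that get moved into .trashcan
    gluster volume set <MASTERVOL> features.trash-max-filesize 200MB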
at 14:06, Dietmar Putz wrote:
Hi Kotresh,
thanks for your response...
I have made further tests based on Ubuntu 16.04.3 (latest upgrades)
and GlusterFS 3.12.5 with the following rsync versions:
1. ii rsync 3.1.1-3ubuntu1
2. ii rsync 3.1.1
-Kotresh HR
On Wed, Jan 24, 2018 at 10:29 PM, Dietmar Putz <dietmar.p...@3qsdn.com> wrote:
Hi all,
I have made some tests on the latest Ubuntu 16.04.3 server image.
Upgrades were disabled...
The configuration was always the same... a distributed replicated
volume
= ?
30743 23:34:47 +++ exited with 3 +++
On 19.01.2018 at 17:27, Joe Julian wrote:
ubuntu 16.04
--
Dietmar Putz
3Q GmbH
Kurfürstendamm 102
D-10711 Berlin
Mobile: +49 171 / 90 160 39
Mail: dietmar.p...@3qsdn.com
Dear All,
we are running a distributed replicated volume on 4 nodes, including
geo-replication to another location.
The geo-replication was running fine for months.
Since 18th January the geo-replication has been faulty. The geo-rep log
on the master shows the following error in a loop, while the logs on
the slave just
Hello Anoop,
thank you for your reply
answers inline...
best regards
Dietmar
On 29.06.2017 10:48, Anoop C S wrote:
On Wed, 2017-06-28 at 14:42 +0200, Dietmar Putz wrote:
Hello,
recently we twice had a partial gluster outage followed by a total
outage of all four nodes. Looking
var/crash/_usr_sbin_glusterfsd.0.crash
---------
--
Dietmar Putz
3Q GmbH
Wetzlarer Str. 86
D-14482 Potsdam
Fax: +49 (0)331 / 2797 866 - 1
Phone: +49 (0)331 / 2797 866 - 8
Mobile: +49 171 / 90 160 39
:41:06.401189] E [resource(/brick1/mvol1):238:logerr]
Popen: ssh> ssh: connect to host gl-slave-01-int port 22: Connection refused
Somehow it looks like port 22 is hard-coded...
Does anybody know how to successfully change the SSH port for a
geo-replication session...?
Any hint would be appreciated.
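For what it's worth, later GlusterFS releases expose the SSH port as a
geo-replication option; a sketch, assuming a session and a custom port
2222 (both placeholders), and not verified against the version used
above:

    # create the session against a non-standard sshd port
    gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> \
        create ssh-port 2222 push-pem
    # or change it on an existing session
    gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> \
        config ssh-port 2222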
mox cluster, hosting VMs. Based on
Debian Jessie, no issues. gluster.org always has up-to-date Debian
packages.
Thanks, Lindsay. Good to know.
-j
When you decide to upgrade to 3.6, it is pretty certain that you
will be faced with the problem that clients cannot mount the volume
(solved by: glusterd --xlator-option *.upgrade=on -N)
https://bugzilla.redhat.com/show_bug.cgi?id=1191176
br
dietmar
On 12.02.2016 at 14:26, Dietmar Putz wrote:
Hi Dave,
I have to mention that I always followed the 'scheduling a downtime'
part, as described in:
http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.5
hope that helps...
best regards
dietmar
On 11.02.2016 23:05, Dave Warren wrote:
On 2016-02-11 04:27, Dietmar Putz wrote:
and i s
Hi Dave,
first of all, I'm not a developer, just a user like you, and recently I
did a gluster update (6 bricks in a distributed replicated
configuration, plus the same as slave for the geo-replication)
on Ubuntu, from 12.04 LTS / GFS 3.4.7 to 14.04 LTS, stepping through
3.5.x, 3.6.7 and 3.7.6.
Most of the problems I had were regarding
Hi all,
once again I need some help to get our geo-replication running again...
Master and slave are 6-node distributed replicated volumes running
Ubuntu 14.04 and GlusterFS 3.7.6 from the Ubuntu PPA.
The master volume already contains about 45 TByte of data; the slave
volume was created from s
: hard-link rename issues on changelog replay --
http://review.gluster.org/13189
I'll post info about the fix propagation plan for the 3.6.x series later.
--
Milind
On Wed, Jan 20, 2016 at 11:23 PM, Dietmar Putz <p...@3qmedien.net> wrote:
Hi Milind,
thank you f
g the issue if it's not resolved by the alternative
mechanism mentioned above.
Also, crawl status 'Hybrid Crawl' is not an entirely bad thing. It
could just mean that there are a lot of entries being processed.
However, if things don't return to the normal state after trying o
ory might be helpful...
But do I have to execute the above-shown setfattr commands on the
master, or do they just speed up synchronization?
Usually sync should start automatically, or could there be a problem
because the crawl status is still 'hybrid crawl'...?
Thanks in advance...
best regards
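One way to watch the crawl state per node is 'status detail' (a sketch;
the session names are placeholders):

    # the CRAWL STATUS column shows Hybrid Crawl vs. Changelog Crawl
    gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> \
        status detail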
if required)
In the brick backend, crawl and look for files with a link count of
less than 2, excluding the .glusterfs and .trashcan directories (a
find(1) sketch follows after this mail).
regards
Aravinda
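A find(1) sketch of the crawl Aravinda describes, assuming the brick
root is /brick1/mvol1 (a regular file with only one link on a brick is
missing its .glusterfs hard link):

    find /brick1/mvol1 \
        -path /brick1/mvol1/.glusterfs -prune -o \
        -path /brick1/mvol1/.trashcan -prune -o \
        -type f -links 1 -print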
On 01/02/2016 09:56 PM, Dietmar Putz wrote:
Hello all,
one more time I need some help with a geo-replication problem.
Recently I started a new geo-replication. The master volume contains
about 45 TB of data, and the slave volume was newly created before the
geo-replication setup was done.
Master and slave are 6-node distributed replicated volumes
gluster-wien-02 the trusted.gfid was
missing on four nodes, but at least on the remaining two nodes the gfid
for 1050 was the same as on the master volume.
I'll try it again on wien-02..
best regards
dietmar
On 22.12.2015 at 11:47, Dietmar Putz wrote:
Hi Saravana,
thanks for your reply...
On 21.12.2015 at 08:08, Saravanakumar Arumugam wrote:
Hi,
Replies inline..
Thanks,
Saravana
On 12/18/2015 10:02 PM, Dietmar Putz wrote:
Hello again...
After having some big trouble with an XFS issue in kernels 3.13.0-x and
3.19.0-39, which was 'solved' by downgrading to 3.8.4
(http://comments.gmane.org/gmane.comp.file-systems.xfs.general/71629),
we decided to start a new geo-replication attempt from scratch...
we have deleted
Hello all,
on 1st December I upgraded two 6-node clusters from GlusterFS 3.5.6 to
3.6.7. All of them are equal in hardware, OS and patch level, currently
running Ubuntu 14.04 LTS via a do-release-upgrade from 12.04 LTS (this
was done before the GFS upgrade to 3.5.6, not directly before upgrading
to 3.6.7).
19294 Oct 18 2014
/gluster-export/thumbs/2014/2485/272648/rfvg2cmFNJ8Xt9HG.png
ls-lisa-gluster.gluster-wien-05.out:82278502871 612 -rw-rw-rw- 2 root
root 619294 Oct 18 2014
/gluster-export/thumbs/2014/2485/272648/rfvg2cmFNJ8Xt9HG.png
tron@dp-server:~/gluster-9$
--
Dietmar Putz
3Q Medien GmbH
W
ion_cbk]
0-ger-ber-01-client-3: Server lk version = 1
[2015-11-12 15:58:31.504657] I [fuse-bridge.c:3959:fuse_init]
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22
kernel 7.22
[ 16:13:28 ] - root@gluster-ger-ber-09
/var/log/glusterfs/geo-replication/ger-ber-01 $
On 12.11.2
ing=no -i
/var/lib/glusterd/geo-replication/secret.pem gluster-wien-02 gluster
--xml --remote-host=localhost volume info aut-wien-01" returned with
255, saying:
[2015-11-09 12:45:40.61755] E [resource(monitor):207:logerr] Popen: ssh>
ssh: connect to host gluster-wien-02 port 2503:
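At this point it can help to reproduce the connection by hand, using
the same key, host and port from the log above, to separate a gluster
problem from a plain ssh/firewall problem:

    # should print the remote hostname if sshd is reachable on 2503
    ssh -p 2503 -i /var/lib/glusterd/geo-replication/secret.pem \
        gluster-wien-02 hostname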
On 11/11/2015 3:20 AM, Dietmar Putz wrote:
Hi all,
I need some help with a geo-replication issue...
Recently I upgraded two 6-node distributed-replicated clusters from
Ubuntu 12.04.5 LTS to 14.04.3 LTS and GlusterFS 3.4.7 to 3.5.6;
since then the geo-replication does not
fb1e2c8e4b
trusted.glusterfs.6a071cfa-b150-4f0b-b1ed-96ab5d4bd671.xtime=0x54bff5c40008dd7f
-----
...
putz@sdn-de-gate-01:~/central$
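A hedged sketch of how such an xtime value can be read: the xattr lives
on the brick backend, so on a brick directory (the brick path here is
hypothetical, the volume UUID is the one from the output above):

    getfattr -n trusted.glusterfs.6a071cfa-b150-4f0b-b1ed-96ab5d4bd671.xtime \
        -e hex /gluster-export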
--
Dietmar Putz
3Q Medien GmbH
Wetzlarer Str. 86
D-14482 Potsdam
Fax: +49 (0)331 / 2797 866 - 1
Phone: +49 (0)331 / 2792 866 - 8
Mobile: +49 171 / 90 160 39
Hello all,
we have a problem on a geo-replicated volume after an upgrade from
GlusterFS 3.3.2 to 3.4.6 on Ubuntu 12.04.5 LTS.
For example, an 'ls -l' on the mounted geo-replicated volume does not
show the entire content, while the same command on the underlying
bricks shows the entire content.
the ev
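A quick, hedged way to quantify such a discrepancy (the mount point and
brick path below are hypothetical):

    # on a client, through the glusterfs mount
    ls /mnt/<volume> | wc -l
    # on each node, directly on the brick
    ls /gluster-export | wc -l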