On Mon, Apr 20, 2020 at 9:37 PM Yaniv Kaul wrote:
>
>
> On Mon, Apr 20, 2020 at 5:38 PM Dmitry Antipov
> wrote:
>
>> # gluster volume info
>>
>> Volume Name: TEST0
>> Type: Distributed-Replicate
>> Volume ID: ca63095f-58dd-4ba8-82d6-7149a58c1423
>> Status: Created
>> Snapshot Count: 0
>> Number
Could you try disabling syncing of xattrs and check?
gluster vol geo-rep <mastervol> <slavehost>::<slavevol> config sync-xattrs
false
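To verify the change afterwards, querying the option without a value should print the current setting (a sketch; <mastervol>, <slavehost> and <slavevol> are placeholders):
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config sync-xattrs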
On Fri, Mar 13, 2020 at 1:42 AM Strahil Nikolov
wrote:
> On March 12, 2020 9:41:45 AM GMT+02:00, "Etem Bayoğlu" <
> etembayo...@gmail.com> wrote:
> >Hello again,
> >
> >These are gsyncd.log from
All '.processed' directories (under working_dir and working_dir/.history)
contain processed changelogs and are no longer required by geo-replication,
except for debugging
purposes. Those directories can be cleaned up if they are consuming too much space.
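For example, a minimal cleanup sketch, assuming the default working directory layout (the session and brick directory names are placeholders; verify the paths before deleting anything):
# find /var/lib/misc/gluster/gsyncd/<mastervol>_<slavehost>_<slavevol>/<brick-dir>/.processed -type f -delete
# find /var/lib/misc/gluster/gsyncd/<mastervol>_<slavehost>_<slavevol>/<brick-dir>/.history/.processed -type f -delete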
On Wed, Feb 12, 2020 at 11:36 PM Sunny Kumar wrote
.c(1189) [sender=3.1.3]
>
>
>
> The data is synced over to the other machine when I view the file there
>
> [root@pgsotc10 mnt]# cat file1
>
> testdata
>
> [root@pgsotc10 mnt]#
>
>
>
> *From:* Kotresh Hiremath Ravishankar
> *Sent:* Wednesday, November 27, 2019 5
sync protocol data stream (code 12) at io.c(226)
> [sender=3.1.3]
>
> [root@jfsotc22 mnt]#
>
>
>
> *From:* Kotresh Hiremath Ravishankar
> *Sent:* Tuesday, November 26, 2019 7:22 PM
> *To:* Tan, Jian Chern
> *Cc:* gluster-users@gluster.org
> *Subject:* Re: [Gluster-users
otocol
> version 31, so both are up to date as far as I know.
>
> Gluster version on both machines are glusterfs 5.10
>
> OS on both machines are Fedora 29 Server Edition
>
>
>
> *From:* Kotresh Hiremath Ravishankar
> *Sent:* Tuesday, November 26, 2019 3:04 PM
>
The error code 14 is related to IPC, where a pipe/fork fails in the rsync code.
Please upgrade rsync if you have not already. Also check that the rsync versions
on master and slave are the same.
Which version of gluster are you using?
What's the host OS?
What's the rsync version?
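For example, a quick way to compare them (the slave node name is a placeholder):
# rsync --version | head -n1
# ssh <slave-node> 'rsync --version | head -n1'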
On Tue, Nov 26, 2019 at 11:34 AM
a.redhat.com/show_bug.cgi?id=1709248).
>
> On Tue, Nov 19, 2019 at 11:22 AM Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>> Which version of gluster are you using?
>>
>> On Tue, Nov 19, 2019 at 11:00 AM deepu srinivasan
>> wrote:
>>
Which version of gluster are you using?
On Tue, Nov 19, 2019 at 11:00 AM deepu srinivasan
wrote:
> Hi kotresh
> Is there a stable release in 6.x series?
>
>
> On Tue, Nov 19, 2019, 10:44 AM Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>> This i
This issue has recently been fixed with the following patch and should be
available in the latest gluster-6.x release.
https://review.gluster.org/#/c/glusterfs/+/23570/
On Tue, Nov 19, 2019 at 10:26 AM deepu srinivasan
wrote:
>
> Hi Aravinda
> *The below logs are from master end:*
>
> [2019-11-16 17:29:43.5
The session is moved from "history crawl" to "changelog crawl". After this
point, there are no changelogs to be synced as per the logs.
Please check in the ".processing" directories whether there are any pending
changelogs still to be synced, at
"/var/lib/misc/gluster/gsyncd/<session>/<brick-dir>/.processing"
If there are no pendi
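For example (session and brick directory names are placeholders):
# ls -l /var/lib/misc/gluster/gsyncd/<mastervol>_<slavehost>_<slavevol>/<brick-dir>/.processing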
You should be looking into the other log file (changes-<brick>.log)
for the actual failure.
In your case that is "changes-home-sas-gluster-data-code-misc.log".
On Tue, Jul 2, 2019 at 12:33 PM deepu srinivasan wrote:
> Any Update on this issue ?
>
> On Mon, Jul 1, 2019 at 4:19 PM deepu srinivasan
> wrote:
>
>> Hi
>
restarting, GlusterFS
>> geo-replication begins synchronizing all the data. All files are compared
>> using checksum, which can be a lengthy and high resource utilization
>> operation on large data sets.
>>
>>
> On Fri, Jun 14, 2019 at 12:30 PM Kotresh Hiremath Ra
61419] and [2019-06-05 08:52:34.019758]
>>>
>>> [2019-06-05 08:52:44.426839] I [MSGID: 106496]
>>> [glusterd-handler.c:3187:__glusterd_handle_mount] 0-glusterd: Received
>>> mount req
>>>
>>> [2019-06-05 08:52:44.426886] E [MSGID: 106061]
>>
Ccing Sunny, who was investigating a similar issue.
On Tue, Jun 4, 2019 at 5:46 PM deepu srinivasan wrote:
> Have already added the path in bashrc . Still in faulty state
>
> On Tue, Jun 4, 2019, 5:27 PM Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>>
gluster/data/code-misc):809:logerr] Popen:
>> /usr/sbin/gluster> 2 : failed with this errno (No such file or directory)
>
>
> On Tue, Jun 4, 2019 at 5:10 PM deepu srinivasan
> wrote:
>
>> Hi
>> As discussed I have upgraded gluster from 4.1 to 6.2 version. But the Geo
listed in ps aux. Only when i set rsync option to
> " " and restart all the process the rsync process is listed in ps aux.
>
>
> On Fri, May 31, 2019 at 4:23 PM Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>> Yes, rsync config option should
result .
>
>> 1559298781.338234 write(2, "rsync: link_stat
>> \"/tmp/gsyncd-aux-mount-EEJ_sY/.gfid/3fa6aed8-802e-4efe-9903-8bc171176d88\"
>> failed: No such file or directory (2)", 128
>
> seems like a file is missing ?
>
> On Fri, May 31, 2019 at 3:25 PM Kotresh H
:16 PM deepu srinivasan
> wrote:
>
>> Hi Kotresh
>> We have tried the above-mentioned rsync option and we are planning to
>> have the version upgrade to 6.0.
>>
>> On Fri, May 31, 2019 at 11:04 AM Kotresh Hiremath Ravishankar <
>> khire...@redhat.com> wrote
Hi,
This looks like a hang because the stderr buffer filled up with error
messages and nothing was reading it.
I think this issue is fixed in the latest releases. As a workaround, you can do the
following and check if it works.
Prerequisite:
rsync version should be > 3.1.0
Workaround:
gluster volume geo-repl
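Presumably this is the rsync-options setting suggested elsewhere in this thread, which requires rsync >= 3.1.0; a sketch with placeholder session names:
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config rsync-options "--ignore-missing-args"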
.conf
>>
>> [2018-09-25 14:10:39.650958] I [resource(slave
>> master/bricks/brick1/brick):1096:connect] GLUSTER: Mounting gluster volume
>> locally...
>>
>> [2018-09-25 14:10:40.729355] I [resource(slave
>> master/bricks/brick1/brick):1119:connect] GLUSTER:
no fds and they already had a signature. I don't know the
> reason for this. Maybe the client still keeps the fd open? I opened a bug for
> this:
> https://bugzilla.redhat.com/show_bug.cgi?id=1685023
>
> Regards
> David
>
> Written on Fri, 1 Mar 2019 at 18:29 by
Interesting observation! But as discussed in the thread, the bitrot signing
process depends on a 2-minute timeout (by default) after the last fd closes. It
doesn't have any correlation with the size of the file.
Did you happen to verify whether the fd was still open for the large files for
some reason?
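One quick way to check on the brick node is to look at the open fds of the brick process; a rough sketch (the brick path and file name are placeholders, and the pgrep pattern may need tightening on your system):
# BRICK_PID=$(pgrep -f 'glusterfsd.*<brick-path>' | head -n1)
# ls -l /proc/$BRICK_PID/fd | grep '<file-name-or-gfid>'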
On Fri, Mar 1,
Hi,
I tried setting up Elasticsearch on Gluster. Here is a note on it. Hope
it helps someone trying to set up an ELK stack on Gluster.
https://hrkscribbles.blogspot.com/2018/11/elastic-search-on-gluster.html
--
Thanks and Regards,
Kotresh H R
mountbroker-root/user1300/mtpt-geoaccount-ARDW1E
>
> [2018-09-24 13:51:16.116595] W [glusterfsd.c:1514:cleanup_and_exit]
> (-->/lib64/libpthread.so.0(+0x7e25) [0x7fafbc9eee25]
> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x55d5dac5dd65]
> -->/usr/sbin/glusterfs(c
The problem occurred on the slave side, and its error is propagated to the master.
Usually, any traceback involving repce is related to a problem on the slave.
Just check a few lines above in the log to find the slave node the crashed
worker is connected to, and get the geo-replication logs from there to debug further.
On Fri
You can ignore this error. It is fixed and should be available in the next
4.1.x release.
On Sat, 22 Sep 2018, 07:07 Pedro Costa, wrote:
> Forgot to mention, I’m running all VM’s with 16.04.1-Ubuntu, Kernel
> 4.15.0-1023-azure #24
>
>
>
>
>
> *From:* Pedro Costa
> *Sent:* 21 September 2018 10:16
Answer inline.
On Tue, Sep 11, 2018 at 4:19 PM, Kotte, Christian (Ext) <
christian.ko...@novartis.com> wrote:
> Hi all,
>
>
>
> I use glusterfs 4.1.3 non-root user geo-replication in a cascading setup.
> The gsyncd.log on the master is fine, but I have some strange changelog
> warnings and errors
Hi Nico,
glusterd has crashed on this node. Please raise a bug with the core file attached.
If you are finding the geo-rep setup steps difficult, please bring glusterd back
up and use the following tool [1] to set up geo-rep, and let us know if
it still crashes.
[1] http://aravindavk.in/blog/intro
:4903:
> glusterd_get_gsync_status_mst_slv] 0-management: /var/lib/glusterd/geo-
> replication/glusterdist_gluster-poc-sj_gluster/monitor.status statefile
> not present. [No such file or directory]
>
>
>
> /Krishna
>
>
>
> *From:* Kotresh Hiremath Ravishankar
>
st_slv] 0-management: geo-replication status
> glusterdist gluster-poc-sj::gluster : session is not active
>
> [2018-09-06 07:56:38.486229] I [MSGID: 106028] [glusterd-geo-rep.c:4903:
> glusterd_get_gsync_status_mst_slv] 0-management: /var/lib/glusterd/geo-
> replication/glusterdist_gl
TestInt18.08-b001.t.Z
>
> du: cannot access ‘/repvol/rflowTestInt18.08-b001.t.Z’: No such file or
> directory
>
> [root@gluster-poc-sj ~]#
>
>
>
> File not reached at slave.
>
>
>
> /Krishna
>
>
>
> *From:* Krishna Verma
> *Sent:* Monday, Se
Hi Krishna,
The glusterd log file would help here.
Thanks,
Kotresh HR
On Thu, Sep 6, 2018 at 1:02 PM, Krishna Verma wrote:
> Hi All,
>
>
>
> I am getting issue in geo-replication distributed gluster volume. In a
> session status it shows only peer node instead of 2. And I am also not able
> to dele
[gsyncd(config-get):297:main] : Using
> session config file path=/var/lib/glusterd/geo-replication/glusterdist_
> gluster-poc-sj_glusterdist/gsyncd.conf
>
> [2018-09-03 07:28:01.803858] I [gsyncd(status):297:main] : Using
> session config file path=/var/lib/glusterd/geo-
> r
t you to please have a look.
>
>
>
> /Krishna
>
>
>
>
>
>
>
> *From:* Kotresh Hiremath Ravishankar
> *Sent:* Monday, September 3, 2018 10:19 AM
>
> *To:* Krishna Verma
> *Cc:* Sunny Kumar ; Gluster Users <
> gluster-users@gluster.org&g
boun...@gluster.org gluster.org> on behalf of Marcus Pedersén
> *Sent:* 31 August 2018 16:09
> *To:* khire...@redhat.com
>
> *Cc:* gluster-users@gluster.org
> *Subject:* Re: [Gluster-users] Was: Upgrade to 4.1.2 geo-replication does
> not work Now: Upgraded to 4.1.3 geo nod
on
>
> geo-replication.indexing: on
>
> transport.address-family: inet
>
> nfs.disable: on
>
> [root@gluster-poc-noida distvol]#
>
>
>
> Please help to fix, I believe its not a normal behavior of gluster rsync.
>
>
>
> /Krishna
>
> *From:* Krishn
Hi Marcus,
Could you attach the full logs? Is the same traceback happening repeatedly? It
would be helpful if you attach the corresponding mount log as well.
What's the rsync version you are using?
Thanks,
Kotresh HR
On Fri, Aug 31, 2018 at 12:16 PM, Marcus Pedersén
wrote:
> Hi all,
>
> I had proble
istribution and there is only one brick participating in syncing.
Could you retest and confirm?
>
>
> /Krishna
>
>
>
>
>
> *From:* Kotresh Hiremath Ravishankar
> *Sent:* Thursday, August 30, 2018 3:20 PM
>
> *To:* Krishna Verma
> *Cc:* Sunny Kumar
distribute count like 3*3 or
> 4*3 :- Are you referring to creating a distributed volume with 3 master
> nodes and 3 slave nodes?
>
Yes, that's correct. Please do the test with this. I recommend running
the actual workload for which you are planning to use gluster instead of
copyin
gt; noi-poc-gluster glusterep /data/gluster/gv0root
> ssh://gluster-poc-sj::glusterepgluster-poc-sjPassive
> N/AN/A
>
> [root@gluster-poc-noida gluster]#
>
>
>
> Thanks in advance for your all time support.
>
>
>
> /Krishna
>
>
s-family: inet
>
> nfs.disable: on
>
> performance.client-io-threads: off
>
> geo-replication.indexing: on
>
> geo-replication.ignore-pid-check: on
>
> changelog.changelog: on
>
> [root@gluster-poc-noida glusterfs]#
>
>
>
> Could you please help me in th
a ~]#
>
>
>
> Does it look like what we exactly need, or do I need to create any more links,
> or how do I get the "libgfchangelog.so" file if it is missing?
>
>
>
> /Krishna
>
>
>
> *From:* Kotresh Hiremath Ravishankar
> *Sent:* Tuesday, August 28, 2018 4:22 PM
> *To:*
--
>
> gluster-poc-noidaglusterep /data/gluster/gv0root
> gluster-poc-sj::glusterepN/A FaultyN/A N/A
>
> noi-poc-gluster glusterep /data/gluster/gv0root
>gluster-poc-sj::glusterepN/A Faulty
> N/A N/
Hi Krishna,
Since your libraries are in /usr/lib64, you should be doing
#ldconfig /usr/lib64
Confirm that the below command lists the library:
#ldconfig -p | grep libgfchangelog
On Tue, Aug 28, 2018 at 3:52 PM, Sunny Kumar wrote:
> can you do ldconfig /usr/local/lib and share the output of ldco
efresh-timeout: 10
> performance.read-ahead: off
> performance.write-behind-window-size: 4MB
> performance.write-behind: on
> storage.build-pgfid: on
> auth.ssl-allow: *
> client.ssl: off
> server.ssl: off
> changelog.changelog: on
> features.bitrot: on
> features.scrub: Active
> features.scrub-freq: daily
> cluster
Hi David,
The feature is to provide consistent time attributes (atime, ctime, mtime)
across the replica set.
The feature is enabled with the following two options:
gluster vol set <VOLNAME> utime on
gluster vol set <VOLNAME> ctime on
The feature currently does not honour mount options related to time
attributes such as 'no
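To confirm the options afterwards, something like the following should show their values (the volume name is a placeholder):
# gluster volume get <VOLNAME> all | grep -E 'ctime|utime'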
##
> Sent from my phone
> ####
>
> On 2 Aug 2018 08:07, Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
> Could you look of any rsync processes hung in master or slave?
>
> On Thu, Aug 2, 2018 at 11:18 AM, Marcus Pedersén
> wrote:
>
&g
##
> Marcus Pedersén
> Systemadministrator
> Interbull Centre
>
> Sent from my phone
> ####
>
>
> On 2 Aug 2018 06:13, Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
> Hi Marcus,
>
> What's the rs
Hi Marcus,
What's the rsync version being used?
Thanks,
Kotresh HR
On Thu, Aug 2, 2018 at 1:48 AM, Marcus Pedersén
wrote:
> Hi all!
>
> I upgraded from 3.12.9 to 4.1.1 and had problems with geo-replication.
>
> With help from the list with some sym links and so on (handled in another
> thread)
Hi Pablo,
The geo-rep status should go to Faulty if the connection to the peer is broken.
Do the node log files fail with the same error? Are these logs repeating?
Does stopping and starting geo-rep give the same error?
Thanks,
Kotresh HR
On Tue, Jul 24, 2018 at 1:47 AM, Pablo J Rebollo Sosa wrote:
> Hi,
>
Looks like gsyncd on the slave is failing for some reason.
Please run the below command on the master:
#ssh -i /var/lib/glusterd/geo-replication/secret.pem georep@gluster-4.glstr
It should run gsyncd on the slave. If there is an error, it should be fixed.
Please share the output of the above command.
Regards,
Ko
-command-dir
Thanks,
Kotresh HR
On Wed, Jul 18, 2018 at 9:28 AM, Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:
> Hi Marcus,
>
> I am testing out 4.1 myself and I will have some update today.
> For this particular traceback, gsyncd is not able to find the librar
0
> [2018-07-16 19:35:16.828056] I [gsyncd(worker /urd-gds/gluster):297:main]
> : Using session config file path=/var/lib/glusterd/geo-
> replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf
> [2018-07-16 19:35:16.828066] I [gsyncd(agent /urd-gds/gluster):297:main]
&
!
>
>
> Regards
>
> Marcus
>
>
> --
> *From:* gluster-users-boun...@gluster.org gluster.org> on behalf of Marcus Pedersén
> *Sent:* 12 July 2018 08:51
> *To:* Kotresh Hiremath Ravishankar
> *Cc:* gluster-users@gluster.org
> *Subject:
Hi Marcus,
I think the fix [1] is needed in 4.1.
Could you please try this out and let us know if it works for you?
[1] https://review.gluster.org/#/c/20207/
Thanks,
Kotresh HR
On Thu, Jul 12, 2018 at 1:49 AM, Marcus Pedersén
wrote:
> Hi all,
>
> I have upgraded from 3.12.9 to 4.1.1 and been fol
Hi Mabi,
You can safely delete old files under /var/lib/misc/glusterfsd.
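For example, a minimal sketch (list the contents first; the old-session directory name is a placeholder):
# ls /var/lib/misc/glusterfsd
# rm -rf /var/lib/misc/glusterfsd/<old-geo-rep-session-dir>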
Thanks,
Kotresh
On Mon, Jun 25, 2018 at 7:30 PM, mabi wrote:
> Hi,
>
> In the past I was using geo-replication but unconfigured it on my two
> volumes by using:
>
> gluster volume geo-replication ... stop
> gluster volume
: Axel Gruber, Anton Gruber
>
> Tax number: 141/151/51801
>
>
> On Mon, 18 Jun 2018 at 11:30, Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>> Hi Alex,
>>
>> Sorry, I lost the context.
>>
>> Which gluster version are
>> this when the system failed. The system was totally unresponsive and
>> required a cold power off and then power on in order to recover the server.
>>
>> Many thanks for your help.
>>
>> Mark Betham.
>>
>> On 11 June 2018 at 05:53, Kotresh Hirema
Hi Alex,
Sorry, I lost the context.
Which gluster version are you using?
Thanks,
Kotresh HR
On Sat, Jun 16, 2018 at 2:57 PM, Axel Gruber wrote:
> Hello
>
> i think its better to open a new Thread:
>
>
> I tried to install Geo Replication again - set up SSH key - prepared
> session Broker and s
Hi Axel,
No, geo-replication can't be used without SSH. It's not configurable.
Geo-rep master nodes connect to the slave and transfer data over SSH.
I assume you have created the geo-rep session before starting it.
In the command above, the syntax is incorrect. It should use "::" and not
":/"
gluster volu
Hi Axel,
You don't need a single server with 140 TB capacity for replication. The
slave (backup) is also a gluster volume, similar to the master volume.
So create the slave (backup) gluster volume with 4 or more nodes to meet
the capacity of the master, and set up geo-rep between these two volumes.
Geo-replic
sterfs-cli-3.12.9-1.el7.x86_64
>> python2-gluster-3.12.9-1.el7.x86_64
>> glusterfs-rdma-3.12.9-1.el7.x86_64
>> glusterfs-fuse-3.12.9-1.el7.x86_64
>>
>> I have also attached another screenshot showing the memory usage from the
>> Gluster slave for the last 48 hou
Hi Mark,
A few questions.
1. Is this traceback consistently hit? I just wanted to confirm whether
it's transient, occurring once in a while and then getting back to normal.
2. Please upload the complete geo-rep logs from both master and slave.
3. Are the gluster versions the same across master and slave?
T
Hi John Hearns,
Thanks for considering gluster. The feature you are requesting is
Active-Active, and it is not available with geo-replication in 4.0.
So the use case can't be achieved using a single gluster volume. But your
use case can be achieved if we keep two volumes,
one for the analysis files and the other
Hi Dietmar,
I am trying to understand the problem and have a few questions.
1. Is trashcan enabled only on the master volume?
2. Is the 'rm -rf' done on the master volume synced to the slave?
3. If trashcan is disabled, does the issue go away?
The geo-rep error just says that it failed to create the directory
Hi,
It is failing to get the virtual xattr value of
"trusted.glusterfs.volume-mark" at the master volume root.
Could you share the geo-replication logs under
/var/log/glusterfs/geo-replication/*.gluster.log ?
I think if there are any transient errors, stopping geo-rep and restarting
the master volume shou
s that a problem
> (Master: dispersed-distributed, slave: dispersed-distributed)?
>
> Many thanks in advance!
>
> Best regards
> Marcus
>
>
> On Thu, Feb 08, 2018 at 02:57:48PM +0530, Kotresh Hiremath Ravishankar
> wrote:
> > Answers inline
> >
> > On Th
Hi,
Thanks for reporting the issue. This seems to be a bug.
Could you please raise a bug at https://bugzilla.redhat.com/ under
community/glusterfs ?
We will take a look at it and fix it.
Thanks,
Kotresh HR
On Wed, Feb 21, 2018 at 2:01 PM, Marcus Pedersén
wrote:
> Hi all,
> I use gluster 3.12 o
Ccing glusterd team for information
On Thu, Feb 8, 2018 at 10:02 AM, Alvin Starr wrote:
> That makes for an interesting problem.
>
> I cannot open port 24007 to allow RPC access.
>
> On 02/07/2018 11:29 PM, Kotresh Hiremath Ravishankar wrote:
>
> Hi Alvin,
>
> Yes, g
Hi Alvin,
Yes, geo-replication sync happens via SSH. The server port 24007 belongs to
glusterd.
glusterd will be listening on this port, and all volume management
communication
happens via RPC.
Thanks,
Kotresh HR
On Wed, Feb 7, 2018 at 8:29 PM, Alvin Starr wrote:
> I am running gluster 3.8.9 and tr
gt;
> Many thanks in advance!
>
> Regards
> Marcus
>
>
> On Wed, Feb 07, 2018 at 06:39:20PM +0530, Kotresh Hiremath Ravishankar
> wrote:
> > We are happy to help you out. Please find the answers inline.
> >
> > On Tue, Feb 6, 2018 at 4:39 PM, Marcus Peders
Hi,
When S3 is added to the master volume from a new node, the following commands should
be run to generate and distribute the ssh keys:
1. Generate ssh keys from the new node
#gluster system:: execute gsec_create
2. Push those ssh keys of the new node to the slave
#gluster vol geo-rep <mastervol> <slavehost>::<slavevol> create
push-pem fo
Answers in line.
On Tue, Feb 6, 2018 at 6:24 PM, Marcus Pedersén
wrote:
> Hi again,
> I made some more tests and the behavior I get is that if any of
> the slaves are down the geo-replication stops working.
> Is this the way distributed volumes work, if one server goes down
> the entire system s
We are happy to help you out. Please find the answers inline.
On Tue, Feb 6, 2018 at 4:39 PM, Marcus Pedersén
wrote:
> Hi all,
>
> I am planning my new gluster system and tested things out in
> a bunch of virtual machines.
> I need a bit of help to understand how geo-replication behaves.
>
> I h
Hi,
As a quick workaround for geo-replication to work, please configure the
following option:
gluster vol geo-replication <mastervol> <slavehost>::<slavevol> config
access_mount true
The above option will not do the lazy umount and, as a result, all the
master and slave volume mounts
maintained by geo-replication can be accesse
Hi,
Geo-replication expects the gfid (a unique identifier, similar to an inode
number in backend file systems) to be the same
for a file on both the master and slave gluster volumes. If the data is copied
directly by means other than geo-replication,
the gfids will be different. The crashes you are seeing is b
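For reference, the gfid of a file can be compared on the master and slave mounts via the virtual xattr (a sketch; mount points and file name are placeholders):
# getfattr -n glusterfs.gfid.string /mnt/<master-mount>/<file>
# getfattr -n glusterfs.gfid.string /mnt/<slave-mount>/<file>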
It is clear that rsync is failing. Are the rsync versions on all master
and slave nodes the same?
I have seen that cause problems sometimes.
-Kotresh HR
On Wed, Jan 24, 2018 at 10:29 PM, Dietmar Putz
wrote:
> Hi all,
> i have made some tests on the latest Ubuntu 16.04.3 server image. Upgrades
ed again.
>
> It had no effect on syncing of trusted.gfid.
>
>
>
> How critical is it to have duplicated gfids? Can volume data be corrupted
> in this case somehow?
>
>
>
> Best regards,
>
>
>
> Viktor Nosov
>
>
>
> *From:* Kotr
Hi Viktor,
Answers inline
On Wed, Jan 17, 2018 at 3:46 AM, Viktor Nosov wrote:
> Hi,
>
> I'm looking for glusterfs feature that can be used to transform data
> between
> volumes of different types provisioned on the same nodes.
> It could be, for example, transformation from disperse to distrib
Hi Amudhan,
Please go through the following; it should clarify upgrade concerns
about moving from DHT to RIO in 4.0.
1. RIO would not deprecate DHT. Both DHT and RIO would co-exist.
2. DHT volumes would not be migrated to RIO. DHT volumes would still be
using DHT code.
3. The new volume creat
Hi,
No, gluster doesn't support active-active geo-replication. It's not planned
for the near future. We will let you know when it's planned.
Thanks,
Kotresh HR
On Tue, Oct 24, 2017 at 11:19 AM, atris adam wrote:
> hi everybody,
>
> Have glusterfs released a feature named active-active georeplicatio
Hi Amudhan,
Sorry for the late response, as I was busy with other things. You are right,
bitrot uses sha256 for the checksum.
If file-1 and file-2 are marked bad, the I/O should error out with EIO.
If that is not happening, we need
to look further into it. But what are the file contents of file-1 and fi
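If it helps, the scrub status and the bad-file marker can be checked with something like the following (volume name, brick path and file name are placeholders):
# gluster volume bitrot <VOLNAME> scrub status
# getfattr -m . -d -e hex /<brick-path>/file-1   (a bad file carries a trusted.bit-rot.bad-file xattr)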
Hi Felipe,
All the observations you have made are correct. AFR is synchronous replication,
where the client replicates the data and is limited by the speed of the
slowest
node (in your case the HDD node). AFR replicates each brick and is part of a
single
volume. At the end, you will have si
is is an RFE, it would be available from 3.11 and would not
be back ported to 3.10.x
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Serkan Çoban"
> To: "Kotresh Hiremath Ravishankar"
> Cc: "Shyam" , "Gluster Users"
> , &
Hi
https://github.com/gluster/glusterfs/issues/188 is merged in master
and needs to go in 3.11
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Kaushal M"
> To: "Shyam"
> Cc: gluster-users@gluster.org, "Gluster Devel"
> Sent: Thursday, April 20, 2017 12:16:39 PM
> Subject
t. If it's impossible to upgrade to the
latest version, at least 3.7.20 would do. It has minimal
conflicts. I can help you out with that.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "ABHISHEK PALIWAL"
> To: "Kotresh Hiremath Ravishankar"
> Cc
enabled, then you won't see any setxattr/getxattrs
related to bitrot.
The fix would be available in 3.11.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "ABHISHEK PALIWAL"
> To: "Pranith Kumar Karampuri"
> Cc: "Gluster Devel&qu
Answers inline.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "mabi"
> To: "Kotresh Hiremath Ravishankar"
> Cc: "Gluster Users"
> Sent: Thursday, April 13, 2017 8:51:29 PM
> Subject: Re: [Gluster-users] Geo replication stuck
, you might need to clean up the problematic
directory on the slave from the backend.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "mabi"
> To: "Kotresh Hiremath Ravishankar"
> Cc: "Gluster Users"
> Sent: Thursday, April 13, 2017
Hi,
Then please set the following rsync config and let us know if it helps:
gluster vol geo-rep <mastervol> <slavehost>::<slavevol> config rsync-options
"--ignore-missing-args"
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "mabi"
> To: "Kotresh Hiremath Ravishankar&
Hi Mabi,
What's the rsync version being used?
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "mabi"
> To: "Gluster Users"
> Sent: Saturday, April 8, 2017 4:20:25 PM
> Subject: [Gluster-users] Geo replication stuck (rsync: link_stat
> "(unreachable)")
>
> Hello,
>
ptables on both master and slave nodes and check again?
#iptables -F
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Jeremiah Rothschild"
> To: "Kotresh Hiremath Ravishankar"
> Cc: gluster-users@gluster.org
> Sent: Thursday, March 30, 2017 1:1
Hi Jeremiah,
That's really strange. Please enable DEBUG logs for geo-replication as below
and send
us the logs under "/var/log/glusterfs/geo-replication/<session>/*.log" from the
master node:
gluster vol geo-rep <mastervol> <slavehost>::<slavevol> config log-level DEBUG
Geo-rep has two ways to detect changes.
1. changelog (Changelog Craw
This could happen if two copies of the same ssh pub key, one with "command=..."
and one
without, were distributed to the slave's ~/.ssh/authorized_keys. Please check and
remove the one
without "command=..".
It should then work. For a passwordless SSH connection, a separate ssh key pair
should be created.
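A quick way to inspect the entries on the slave (run as the geo-rep user; a sketch):
# grep -n 'command=' ~/.ssh/authorized_keys
# wc -l ~/.ssh/authorized_keys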
Thanks and Regard
Hi lejeczek,
Try stop force.
gluster vol geo-rep <mastervol> <slavehost>::<slavevol> stop force
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "lejeczek"
> To: "Kotresh Hiremath Ravishankar"
> Cc: gluster-users@gluster.org
> Sent: Tues
Hi,
The following steps need to be followed when a brick is added from a new node
on the master.
1. Stop geo-rep
2. Run the following command on the master node where passwordless SSH
connection is configured, in order to create a common pem pub file.
# gluster system:: execute gsec_create
3
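For reference, the remaining steps typically re-create the session and restart it, roughly as follows (a sketch based on the standard geo-rep setup flow; volume and slave names are placeholders):
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> create push-pem force
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start force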
Answers inline
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "lejeczek"
> To: gluster-users@gluster.org
> Sent: Wednesday, February 1, 2017 5:48:55 PM
> Subject: [Gluster-users] geo repl status: faulty & errors
>
> hi everone,
>
> trying geo-repl first, I've followed tha
s
reclaimed. After which everything
is syncing. So geo-rep might miss syncing a few files in an ENOSPC scenario.
Thanks and Regards,
Kotresh H R
----- Original Message -
> From: "Viktor Nosov"
> To: "Kotresh Hiremath Ravishankar"
> Cc: gluster-users@gluster.org
>