Re: [Gluster-users] gverify.sh purpose

2017-08-21 Thread Saravanakumar Arumugam



On Saturday 19 August 2017 02:05 AM, mabi wrote:

Hi,

When creating a geo-replication session, is gverify.sh used or run,
respectively?


Yes, it is executed as part of geo-replication session creation.

Or is gverify.sh just an ad-hoc command to test manually whether creating a
geo-replication session would succeed?




No, there is no need to run it separately.
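
For reference, a minimal sketch of the create command that triggers the check
(volume and host names below are placeholders, not taken from this thread):

# creating the session runs gverify.sh internally to validate the slave side
gluster volume geo-replication <master_vol> <slave_host>::<slave_vol> create push-pem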


~
Saravana


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Minutes from Gluster Bug Triage meeting today

2017-04-04 Thread Saravanakumar Arumugam

Hi,

Thanks all who joined!

Next week at the same time (Tuesday 12:00 UTC) we will have another bug
triage meeting to catch the bugs that have not yet been handled by developers
and maintainers. We'll keep repeating this meeting as a safety net so that
bugs get initial attention and developers can immediately start
working on the issues that were reported.

Bug triaging (in general, there is no need to do it only during the meeting) is
intended to help developers, in the hope that developers can focus on
writing bug fixes instead of spending much of their valuable time
troubleshooting incorrectly or incompletely reported bugs.

More details about bug triaging can be found here:
http://gluster.readthedocs.io/en/latest/Contributors-Guide/Bug-Triage/

Meeting minutes below.

Thanks,
Saravana



Meeting summary

agenda:https://github.com/gluster/glusterfs/wiki/Bug-Triage-Meeting 
(Saravanakmr, 12:01:02)


Roll call (Saravanakmr, 12:01:08)

Next week’s meeting host (Saravanakmr, 12:03:18)
ACTION: skumar will host next week's meeting (Saravanakmr, 
12:04:12)


ndevos need to decide on how to provide/use debug builds 
(Saravanakmr, 12:04:42)
ACTION: ndevos need to decide on how to provide/use debug 
builds (Saravanakmr, 12:05:17)


jiffin needs to send the changes to check-bugs.py also 
(Saravanakmr, 12:05:26)
ACTION: jiffin needs to send the changes to check-bugs.py also 
(Saravanakmr, 12:06:01)

http://bit.ly/gluster-bugs-to-triage (Saravanakmr, 12:06:14)

open floor (Saravanakmr, 12:14:04)


Meeting ended at 12:14:30 UTC (full logs).

Action items

skumar will host next week's meeting
ndevos need to decide on how to provide/use debug builds
jiffin needs to send the changes to check-bugs.py also



Action items, by person

jiffin
jiffin needs to send the changes to check-bugs.py also
ndevos
ndevos need to decide on how to provide/use debug builds
skumar
skumar will host next week's meeting


People present (lines said)

Saravanakmr (35)
rafi (8)
skumar (6)
zodbot (5)
jiffin (4)
ndevos (3)
kkeithley (3)
hgowtham (1)

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (today)

2017-04-04 Thread Saravanakumar Arumugam

Hi,

This meeting is scheduled for anyone who is interested in learning more
about, or assisting with, the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
  (in your terminal, run: date -d "12:00 UTC")
- Agenda:https://github.com/gluster/glusterfs/wiki/Bug-Triage-Meeting

Currently the following items are listed:

* Roll Call
* Status of last week's action items
* Group Triage
 - http://bit.ly/gluster-bugs-to-triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,
Saravana

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Gluster Volume as object storage with S3 interface

2017-03-08 Thread Saravanakumar Arumugam


On 03/08/2017 04:55 PM, Gandalf Corvotempesta wrote:

I'm really interested in this.

cool.

Let me know if I understood properly: is it now possible to access a
Gluster volume as object storage via the S3 API?

Yes. It is possible.

Authentication is currently turned off.  You can expect updates on 
Authentication soon.

Is Gluster-swift (and with that, the rings, auth and so on coming from
OpenStack) still needed?
You are right, gluster-swift is still needed, but it is part of the Docker
container. All gluster-swift processes run inside the Docker container in
order to provide the object interface.

The Docker container accesses the Gluster volume to get/put objects.

We are working on a custom solution which will avoid gluster-swift
altogether.

We will update here once it is ready. Stay tuned.
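
To illustrate the S3-style access described above, a minimal sketch using curl;
the host, the port 8080 and the bucket/object names are assumptions for
illustration only (authentication is off, as noted above):

# create a bucket on the gluster-object endpoint
curl -i -X PUT http://<container-host>:8080/bucket1
# upload an object into it
curl -i -X PUT -T ./hello.txt http://<container-host>:8080/bucket1/hello.txt
# read it back
curl -i http://<container-host>:8080/bucket1/hello.txt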

2017-03-08 9:53 GMT+01:00 Saravanakumar Arumugam :

Hi,

I have posted a blog about accessing Gluster volume via S3 interface.[1]



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Gluster Volume as object storage with S3 interface

2017-03-08 Thread Saravanakumar Arumugam

Hi,

I have posted a blog about accessing Gluster volume via S3 interface.[1]


Here, a Gluster volume is exposed as object storage.

Object storage functionality is implemented with changes to Swift storage, and
the swift3 plugin is used to expose the S3 interface. [4]

gluster-object is available on Docker Hub [2], and the corresponding GitHub
link is [3].

You can expect further updates on this, to provide object storage on the
Kubernetes/OpenShift platform.


Thanks to Prashanth Pai for all his help.


[1] 
https://uyirpodiru.blogspot.in/2017/03/building-gluster-object-in-docker.html


[2] https://hub.docker.com/r/gluster/gluster-object/

[3] https://github.com/SaravanaStorageNetwork/docker-gluster-s3

[4] https://github.com/gluster/gluster-swift
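
If you want to try the image from [2], pulling it is a one-liner; the run
options below (port and mount path) are only an assumed sketch, the exact
instructions are in the repository at [3]:

# pull the published image from Docker Hub
docker pull gluster/gluster-object
# run it, exposing the proxy port (port and volume path are assumptions)
docker run -d -p 8080:8080 -v /mnt/gluster-object:/mnt/gluster-object gluster/gluster-object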


Thanks,

Saravana

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Minutes from Gluster Bug Triage meeting today

2017-02-21 Thread Saravanakumar Arumugam

Hi,

Thanks all who joined!

Next week at the same time (Tuesday 12:00 UTC) we will have another bug
triage meeting to catch the bugs that have not yet been handled by developers
and maintainers. We'll keep repeating this meeting as a safety net so that
bugs get initial attention and developers can immediately start
working on the issues that were reported.

Bug triaging (in general, there is no need to do it only during the meeting) is
intended to help developers, in the hope that developers can focus on
writing bug fixes instead of spending much of their valuable time
troubleshooting incorrectly or incompletely reported bugs.

More details about bug triaging can be found here:
http://gluster.readthedocs.io/en/latest/Contributors-Guide/Bug-Triage/

Meeting minutes below.

Thanks,
Saravana



Meeting summary


Agenda:https://github.com/gluster/glusterfs/wiki/Bug-Triage-Meeting 
(Saravanakmr, 12:00:38)


Roll call (Saravanakmr, 12:00:44)
Next week’s meeting host (Saravanakmr, 12:01:52)
ACTION: jiffin will host on 28 February (Saravanakmr, 12:04:03)

Action items (Saravanakmr, 12:04:36)
ndevos need to decide on how to provide/use debug builds 
(Saravanakmr, 12:04:48)
ACTION: ndevos need to decide on how to provide/use debug 
builds (Saravanakmr, 12:05:04)


jiffin needs to send the changes to check-bugs.py also 
(Saravanakmr, 12:05:09)
ACTION: jiffin needs to send the changes to check-bugs.py also 
(Saravanakmr, 12:05:30)


Group Triage (Saravanakmr, 12:05:38)
you can find the bugs to triage here in 
http://bit.ly/gluster-bugs-to-triage (Saravanakmr, 12:05:43)
https://gluster.readthedocs.io/en/latest/Contributors-Guide/Bug-Triage/ 
(Saravanakmr, 12:05:49)


Open Floor (Saravanakmr, 12:15:11)



Meeting ended at 12:16:12 UTC (full logs).

Action items

jiffin will host on 28 February
ndevos need to decide on how to provide/use debug builds
jiffin needs to send the changes to check-bugs.py also



Action items, by person

jiffin
jiffin will host on 28 February
jiffin needs to send the changes to check-bugs.py also
ndevos
ndevos need to decide on how to provide/use debug builds



People present (lines said)

Saravanakmr (34)
jiffin (5)
ndevos (5)
zodbot (3)
kkeithley (3)
rafi (3)
skoduri (1)


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (today)

2017-02-21 Thread Saravanakumar Arumugam

Hi,

This meeting is scheduled for anyone who is interested in learning more
about, or assisting with, the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
  (in your terminal, run: date -d "12:00 UTC")
- Agenda:https://github.com/gluster/glusterfs/wiki/Bug-Triage-Meeting

Currently the following items are listed:

* Roll Call
* Status of last week's action items
* Group Triage
 - http://bit.ly/gluster-bugs-to-triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,
Saravana

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Minutes from Gluster Bug Triage meeting today

2017-01-17 Thread Saravanakumar Arumugam

Hi,

Thanks all who joined!

Next week at the same time (Tuesday 12:00 UTC) we will have another bug
triage meeting to catch the bugs that have not yet been handled by developers
and maintainers. We'll keep repeating this meeting as a safety net so that
bugs get initial attention and developers can immediately start
working on the issues that were reported.

Bug triaging (in general, there is no need to do it only during the meeting) is
intended to help developers, in the hope that developers can focus on
writing bug fixes instead of spending much of their valuable time
troubleshooting incorrectly or incompletely reported bugs.

More details about bug triaging can be found here:
http://gluster.readthedocs.io/en/latest/Contributors-Guide/Bug-Triage/

Meeting minutes below.

Thanks,
Saravana



Meeting summary
agenda:https://github.com/gluster/glusterfs/wiki/Bug-Triage-Meeting 
(Saravanakmr, 12:00:24)


Roll call (Saravanakmr, 12:00:31)
Next week’s meeting host (Saravanakmr, 12:03:39)

ndevos need to decide on how to provide/use debug builds (Saravanakmr, 
12:06:15)
ACTION: ndevos need to decide on how to provide/use debug builds 
(Saravanakmr, 12:06:27)


jiffin needs to send the changes to check-bugs.py also (Saravanakmr, 
12:06:47)
ACTION: jiffin needs to send the changes to check-bugs.py also 
(Saravanakmr, 12:07:23)


Group Triage (Saravanakmr, 12:08:13)
you can find the bugs to triage here in 
http://bit.ly/gluster-bugs-to-triage (Saravanakmr, 12:08:21)

http://bit.ly/gluster-bugs-to-triage (Saravanakmr, 12:08:53)

Open Floor (Saravanakmr, 12:16:01)


Meeting ended at 12:18:53 UTC (full logs).

Action items
ndevos need to decide on how to provide/use debug builds
jiffin needs to send the changes to check-bugs.py also


Action items, by person
jiffin needs to send the changes to check-bugs.py also
ndevos need to decide on how to provide/use debug builds


People present (lines said)
Saravanakmr (40)
ndevos (9)
kkeithley (5)
skoduri (3)
jiffin (3)
zodbot (3)

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (today)

2017-01-17 Thread Saravanakumar Arumugam

Hi,

This meeting is scheduled for anyone who is interested in learning more
about, or assisting with, the Bug Triage.

This is the first bug triage meeting using hackmd.io (as
http://public.pad.fsfe.org/ is going to be decommissioned).

Agenda and Group Triage links have been updated.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting  )
- date: every Tuesday
- time: 12:00 UTC
  (in your terminal, run: date -d "12:00 UTC")
- Agenda:https://github.com/gluster/glusterfs/wiki/Bug-Triage-Meeting

Currently the following items are listed:
* Roll Call
* Status of last week's action items
* Group Triage
 - http://bit.ly/gluster-bugs-to-triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.


Thanks,
Saravana

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Minutes from Gluster Bug Triage meeting today

2016-12-06 Thread Saravanakumar Arumugam

Hi,

Thanks all who joined!

Next week at the same time (Tuesday 12:00 UTC) we will have another bug
triage meeting to catch the bugs that have not yet been handled by developers
and maintainers. We'll keep repeating this meeting as a safety net so that
bugs get initial attention and developers can immediately start
working on the issues that were reported.

Bug triaging (in general, there is no need to do it only during the meeting) is
intended to help developers, in the hope that developers can focus on
writing bug fixes instead of spending much of their valuable time
troubleshooting incorrectly or incompletely reported bugs.

More details about bug triaging can be found here:
http://gluster.readthedocs.io/en/latest/Contributors-Guide/Bug-Triage/

Meeting minutes below.

Thanks,
Saravana


Meeting summary :

agenda: https://public.pad.fsfe.org/p/gluster-bug-triage (Saravanakmr, 12:00:20)


1. Roll call (Saravanakmr, 12:00:30)
2. Next week’s meeting host (Saravanakmr, 12:03:19)
3. ACTION: ndevos need to decide on how to provide/use debug
   builds (Saravanakmr, 12:06:11)
4. ACTION: jiffin will try to add an error for bug ownership to
   check-bugs.py (Saravanakmr, 12:07:14)
5. hgowtham to close all 3.6 bugs as 3.9 is out; after the discussion
   on the devel list (Saravanakmr, 12:07:45)
   1. http://www.gluster.org/pipermail/gluster-devel/2016-December/051623.html
      (Saravanakmr, 12:09:25)
6. Group Triage (Saravanakmr, 12:10:07)
   1. you can find the bugs to triage here in
      https://public.pad.fsfe.org/p/gluster-bugs-to-triage
      (Saravanakmr, 12:10:16)
7. Open Floor (Saravanakmr, 12:17:47)


Meeting ended at 12:18:32 UTC (full logs).


Action items

1. ndevos need to decide on how to provide/use debug builds
2. jiffin will try to add an error for bug ownership to check-bugs.py


Action items, by person

1. jiffin
   1. jiffin will try to add an error for bug ownership to check-bugs.py


People present (lines said)

1. Saravanakmr (38)
2. hgowtham (14)
3. anraj (5)
4. zodbot (3)
5. ankitraj (2)
6. jiffin (2)

Meeting ended Tue Dec  6 12:18:32 2016 UTC. Information about
MeetBot at http://wiki.debian.org/MeetBot .
Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-12-06/gluster_bug_triage.2016-12-06-12.00.html
Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-12-06/gluster_bug_triage.2016-12-06-12.00.txt
Log:
https://meetbot.fedoraproject.org/gluster-meeting/2016-12-06/gluster_bug_triage.2016-12-06-12.00.log.html




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (today)

2016-12-06 Thread Saravanakumar Arumugam

Hi,

This meeting is scheduled for anyone who is interested in learning more
about, or assisting with, the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting  )
- date: every Tuesday
- time: 12:00 UTC
  (in your terminal, run: date -d "12:00 UTC")
- agenda:https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last week's action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,
Saravana

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] How to enable shared_storage?

2016-11-19 Thread Saravanakumar Arumugam



On 11/19/2016 04:13 PM, Alexandr Porunov wrote:

It still doesn't work..

I have created that dir:
# mkdir -p /var/run/gluster/shared_storage

and then:
# mount -t glusterfs 127.0.0.1:gluster_shared_storage 
/var/run/gluster/shared_storage

Mount failed. Please check the log file for more details.

Where can I find the proper log file to read? Because
"/var/log/glusterfs/" has a lot of log files.


You can find the mount log inside /var/log/glusterfs; the file is named
after the mount point, like "<directory_mounted>.log".

There is some issue in your setup. Check this log and share it here.
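
As a guess based on how mount.glusterfs names its log files (path separators
become dashes), the file for your mount point would look like the one below;
adjust if your file is named differently:

# likely log for the mount point /var/run/gluster/shared_storage
tail -n 50 /var/log/glusterfs/var-run-gluster-shared_storage.log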


Sincerely,
Alexandr

On Sat, Nov 19, 2016 at 11:16 AM, Saravanakumar Arumugam 
mailto:sarum...@redhat.com>> wrote:



On 11/19/2016 01:39 AM, Alexandr Porunov wrote:

Hello,

I try to enable shared storage for Geo-Replication but I am
not sure that I do it properly.

Here is what I do:
# gluster volume set all cluster.enable-shared-storage enable
volume set: success

# mount -t glusterfs 127.0.0.1:gluster_shared_storage
/var/run/gluster/shared_storage
ERROR: Mount point does not exist
Please specify a mount point
Usage:
man 8 /sbin/mount.glusterfs


This error means the /var/run/gluster/shared_storage directory does
NOT exist.

However, running the command (gluster volume set all
cluster.enable-shared-storage enable)
should carry out the mounting automatically, so there is no need
to mount it manually.

After running "gluster volume set all
cluster.enable-shared-storage enable", check:
1. gluster volume info
2. that a glusterfs process was started with volfile-id
gluster_shared_storage.

Thanks,
Saravana




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] How to enable shared_storage?

2016-11-19 Thread Saravanakumar Arumugam


On 11/19/2016 01:39 AM, Alexandr Porunov wrote:

Hello,

I try to enable shared storage for Geo-Replication but I am not sure 
that I do it properly.


Here is what I do:
# gluster volume set all cluster.enable-shared-storage enable
volume set: success

# mount -t glusterfs 127.0.0.1:gluster_shared_storage 
/var/run/gluster/shared_storage

ERROR: Mount point does not exist
Please specify a mount point
Usage:
man 8 /sbin/mount.glusterfs



This error means the /var/run/gluster/shared_storage directory does NOT exist.

However, running the command (gluster volume set all
cluster.enable-shared-storage enable)
should carry out the mounting automatically, so there is no need to
mount it manually.


After running "gluster volume set all
cluster.enable-shared-storage enable", check (see the quick commands below):

1. gluster volume info
2. that a glusterfs process was started with volfile-id
gluster_shared_storage.
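
A minimal sketch of those checks (assuming the default mount point; not taken
verbatim from this thread):

# 1. the internal volume should now exist
gluster volume info gluster_shared_storage
# 2. a glusterfs client process should be running with this volfile-id
ps aux | grep gluster_shared_storage
# and the volume should be mounted at the default location
mount | grep /var/run/gluster/shared_storage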


Thanks,
Saravana

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS geo-replication brick KeyError

2016-11-15 Thread Saravanakumar Arumugam



On 11/15/2016 03:46 AM, Shirwa Hersi wrote:

Hi,

I'm using glusterfs geo-replication on version 3.7.11, one of the 
bricks becomes faulty and does not replicated to slave bricks after i 
start geo-replication session.
Following are the logs related to the faulty brick, can someone please 
advice me on how to resolve this issue.


[2016-06-11 09:41:17.359086] E 
[syncdutils(/var/glusterfs/gluster_b2/brick):276:log_raise_exception] : 
FAIL:
Traceback (most recent call last):
   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 166, in main
 main_i()
   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 663, in 
main_i
 local.service_loop(*[r for r in [remote] if r])
   File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1497, in 
service_loop
 g3.crawlwrap(oneshot=True)
   File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 571, in 
crawlwrap
 self.crawl()
   File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1201, in 
crawl
 self.changelogs_batch_process(changes)
   File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1107, in 
changelogs_batch_process
 self.process(batch)
   File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 984, in 
process
 self.datas_in_batch.remove(unlinked_gfid)
KeyError: '.gfid/757b0ad8-b6f5-44da-b71a-1b1c25a72988'
The bug mentioned is fixed upstream; refer to this link:
http://www.gluster.org/pipermail/bugs/2016-June/061785.html

You can update gluster to get the fix. Alternatively, you can try to restart
the geo-rep session using "start force" to overcome the error (see the sketch
below), but updating is better.

Thanks,
Saravana
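
PS: a sketch of the restart workaround (volume and slave names are
placeholders):

# stop and start the session with force to recover the faulty worker
gluster volume geo-replication <master_vol> <slave_host>::<slave_vol> stop force
gluster volume geo-replication <master_vol> <slave_host>::<slave_vol> start force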
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Is it possible to turn an existing filesystem (with data) into a GlusterFS brick ?

2016-11-12 Thread Saravanakumar Arumugam



On 11/11/2016 09:09 PM, Sander Eikelenboom wrote:

Friday, November 11, 2016, 4:28:36 PM, you wrote:


Feature requests go in Bugzilla anyway.
Create your volume with the populated brick as brick one. Start it and run
"heal full".

gluster> volume create testvolume transport tcp 192.168.1.1:/mnt/glusterfs/testdata/brick force
volume create: private: success: please start the volume to access data
gluster> volume heal testvolume full
Launching heal operation to perform full self heal on volume testvolume has 
been unsuccessful on bricks that are down. Please check if all brick processes 
are running.
gluster> volume start testvolume
volume start: testvolume: success
gluster> volume heal testvolume full
Launching heal operation to perform full self heal on volume testvolume has 
been unsuccessful on bricks that are down. Please check if all brick processes 
are running.

So it seems healing only works on volumes with 2 or more bricks.
So that doesn't seem to work out very well.

Thanks for sharing the result.

Are you restricted to using only a single-brick volume?
If not, you can keep the data in the single brick, add another empty
brick, and try the above step (see the sketch below).

(I am guessing at what is possible here; I have not actually tried this myself.)
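
A rough sketch of that idea, assuming a second host 192.168.1.2 with an empty
brick path (both hypothetical), converting the volume to replica 2 and then
triggering a full heal:

# add an empty brick and turn the single-brick volume into a replica pair
gluster volume add-brick testvolume replica 2 192.168.1.2:/mnt/glusterfs/empty/brick force
# trigger and monitor the full self-heal
gluster volume heal testvolume full
gluster volume heal testvolume info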

Anyway, I think this is a very interesting problem to solve.

Ref document for full heal : 
https://github.com/gluster/glusterfs/blob/master/doc/developer-guide/afr-self-heal-daemon.md


Thanks,
Saravana

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] FSFE pads to github wiki / alternative etherpad - info. required

2016-11-10 Thread Saravanakumar Arumugam

Hi,

I am working on moving the FSFE pads to the GitHub wiki (as discussed in the
Gluster community meeting yesterday).

I have identified the following links present in the FSFE etherpad.

I need your help to check whether a link (maybe created by you) needs
to be moved to the GitHub wiki.

Also, let me know if there is any other link you wish to add.

Note:
Only items which will "not change" will be moved to the GitHub wiki
(for example, meeting status and the meeting template).

Items which need to be updated (read: real-time collaboration) will NOT be
moved to the GitHub wiki
(for example, bugs to triage, updated in real time by multiple users).
We need to identify an alternative "etherpad" for these.

Now the links:
=
https://public.pad.fsfe.org/p/gluster-bug-triage

https://public.pad.fsfe.org/p/gluster-bugs-to-triage

https://public.pad.fsfe.org/p/gluster-community-meetings

https://public.pad.fsfe.org/p/gluster-3.7-hangouts

https://public.pad.fsfe.org/p/glusterfs-release-process-201606

https://public.pad.fsfe.org/p/glusterfs-compound-fops

https://public.pad.fsfe.org/p/glusterfs-3.8-release-notes

https://public.pad.fsfe.org/p/gluster-spurious-failures

https://public.pad.fsfe.org/p/gluster-automated-bug-workflow

https://public.pad.fsfe.org/p/gluster-3.8-features

https://public.pad.fsfe.org/p/gluster-login-issues

https://public.pad.fsfe.org/p/dht_lookup_optimize

https://public.pad.fsfe.org/p/gluster-gerrit-migration

https://public.pad.fsfe.org/p/gluster-component-release-checklist

https://public.pad.fsfe.org/p/glusterfs-bitrot-notes

https://public.pad.fsfe.org/p/review-for-glusterfs-3.7

https://public.pad.fsfe.org/p/gluster-xattr-categorization

https://public.pad.fsfe.org/p/Snapshots_in_glusterfs

https://public.pad.fsfe.org/p/gluster-gd2-kaushal

https://public.pad.fsfe.org/p/gluster-events

https://public.pad.fsfe.org/p/gluster-slogans

https://public.pad.fsfe.org/p/gluster-weekly-news

https://public.pad.fsfe.org/p/gluster-next-planning

https://public.pad.fsfe.org/p/gluster-heketi
==


Thanks,
Saravanakumar




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Minutes: Gluster Community Bug Triage meeting (8th November 2016)

2016-11-08 Thread Saravanakumar Arumugam

Hi all,

The minutes of today's meeting:

Meeting summary

agenda: https://public.pad.fsfe.org/p/gluster-bug-triage 
(Saravanakmr, 12:01:19)


Roll call (Saravanakmr, 12:01:28)
Next week’s meeting host (Saravanakmr, 12:05:25)
ACTION: skoduri will host bug triage meeting on 15 November 
(Saravanakmr, 12:06:54)


ndevos need to decide on how to provide/use debug builds 
(Saravanakmr, 12:07:37)
ACTION: ndevos need to decide on how to provide/use debug 
builds (Saravanakmr, 12:08:25)


jiffin will try to add an error for bug ownership to check-bugs.py 
(Saravanakmr, 12:08:40)
ACTION: jiffin will try to add an error for bug ownership to 
check-bugs.py (Saravanakmr, 12:09:06)


Group Triage (Saravanakmr, 12:09:39)
you can find the bugs to triage here in 
https://public.pad.fsfe.org/p/gluster-bugs-to-triage (Saravanakmr, 12:09:48)
http://www.gluster.org/community/documentation/index.php/Bug_triage 
(Saravanakmr, 12:09:59)


Open Floor (Saravanakmr, 12:22:57)

Action items

skoduri will host bug triage meeting on 15 November
ndevos need to decide on how to provide/use debug builds
jiffin will try to add an error for bug ownership to check-bugs.py


Action items, by person

jiffin
jiffin will try to add an error for bug ownership to check-bugs.py
skoduri
skoduri will host bug triage meeting on 15 November


People present (lines said)

Saravanakmr (35)
skoduri (6)
zodbot (3)
jiffin (2)
sathees (2)
kkeithley (2)


Thanks,
Saravanakumar
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting (Today)

2016-11-08 Thread Saravanakumar Arumugam

Hi,

This meeting is scheduled for anyone who is interested in learning more
about, or assisting with, the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting  )
- date: every Tuesday
- time: 12:00 UTC
  (in your terminal, run: date -d "12:00 UTC")
- agenda:https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last week's action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,
Saravana

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Minutes from Gluster Bug Triage meeting today

2016-09-13 Thread Saravanakumar Arumugam

Hi,

Thanks all who joined!

Next week at the same time (Tuesday 12:00 UTC) we will have another bug
triage meeting to catch the bugs that have not yet been handled by developers
and maintainers. We'll keep repeating this meeting as a safety net so that
bugs get initial attention and developers can immediately start
working on the issues that were reported.

Bug triaging (in general, there is no need to do it only during the meeting) is
intended to help developers, in the hope that developers can focus on
writing bug fixes instead of spending much of their valuable time
troubleshooting incorrectly or incompletely reported bugs.

More details about bug triaging can be found here:
http://gluster.readthedocs.io/en/latest/Contributors-Guide/Bug-Triage/

Meeting minutes below.

Thanks,
Saravana

==
Meeting summary :

agenda: https://public.pad.fsfe.org/p/gluster-bug-triage 
(Saravanakmr, 12:00:50)


Roll call (Saravanakmr, 12:01:02)
Next weeks meeting host (Saravanakmr, 12:04:25)
ACTION: Next weeks meeting host hgowtham (Saravanakmr, 12:06:11)
skoduri hosts the meeting on 27 September (ndevos, 12:06:12)

ndevos need to decide on how to provide/use debug builds 
(Saravanakmr, 12:07:16)
ACTION: ndevos need to decide on how to provide/use debug 
builds (Saravanakmr, 12:07:57)


jiffin will try to add an error for bug ownership to check-bugs.py 
(Saravanakmr, 12:08:15)
ACTION: jiffin will try to add an error for bug ownership to 
check-bugs.py (Saravanakmr, 12:09:03)


Group Triage (Saravanakmr, 12:09:23)
you can find the bugs to triage here in 
https://public.pad.fsfe.org/p/gluster-bugs-to-triage (Saravanakmr, 12:09:32)
http://www.gluster.org/community/documentation/index.php/Bug_triage Bug 
triage guidelines can be found here ^^ (Saravanakmr, 12:09:44)


Open Floor (Saravanakmr, 12:21:14)



Meeting ended at 12:22:13 UTC (full logs).

Action items

Next weeks meeting host hgowtham
ndevos need to decide on how to provide/use debug builds
jiffin will try to add an error for bug ownership to check-bugs.py



Action items, by person

hgowtham
Next weeks meeting host hgowtham
jiffin
jiffin will try to add an error for bug ownership to check-bugs.py
ndevos
ndevos need to decide on how to provide/use debug builds



People present (lines said)

Saravanakmr (34)
skoduri (8)
ndevos (7)
hgowtham (7)
zodbot (3)
jiffin (1)
kkeithley (1)
==
Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-09-13/gluster_bug_triage.2016-09-13-12.00.html
Minutes (text): 
https://meetbot.fedoraproject.org/gluster-meeting/2016-09-13/gluster_bug_triage.2016-09-13-12.00.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-09-13/gluster_bug_triage.2016-09-13-12.00.log.html




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (today)

2016-09-13 Thread Saravanakumar Arumugam

Hi,

This meeting is scheduled for anyone who is interested in learning more
about, or assisting with, the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting  )
- date: every Tuesday
- time: 12:00 UTC
  (in your terminal, run: date -d "12:00 UTC")
- agenda:https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last week's action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,
Saravana


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] CFP for Gluster Developer Summit

2016-08-31 Thread Saravanakumar Arumugam

Hi,

I'd like to talk about:

Title: Gluster and Bareos Integration - an Open-source Backup Solution

Theme: Experience - Description of real-world experience and feedback
from:
   b> Developers integrating Gluster with
other ecosystems


Agenda Planned:
- Bareos Introduction
- Bareos integration with Glusterfs
- Leverage Glusterfs functionality for Bareos
- Demo on integrated solution

Thanks,
Saravana

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Distribute only volume access during failure

2016-08-23 Thread Saravanakumar Arumugam

Hi,

On 08/23/2016 03:09 PM, Beard Lionel (BOSTON-STORAGE) wrote:


Hi,

I have noticed that when using a distribute volume, if a brick is not
accessible, the volume is still accessible in read-write mode, but some
files can’t be created (depending on the filename).


Is it possible to force a distribute volume to be put in read-only
mode during a brick failure, to avoid random access errors?



You can either remount the volume as read-only (mount with the "ro" option),
OR
set the volume itself read-only (gluster volume set <volname> read-only on);
see the minimal examples below.
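
A minimal sketch of both options (server, volume name and mount point are
placeholders):

# client side: remount the volume read-only
umount /mnt/<volname>
mount -t glusterfs -o ro <server>:/<volname> /mnt/<volname>

# server side: mark the whole volume read-only, and revert later with "off"
gluster volume set <volname> read-only on
gluster volume set <volname> read-only off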


I’m afraid of the application behavior in this case, when only some files
are not accessible. I prefer to have the volume not writable when a brick
has failed.


It is better to configure a distributed-replicate volume, so that the volume
is accessible (and writable) even if one node is down.


Thanks,
Saravana
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] What is op-version?

2016-08-08 Thread Saravanakumar Arumugam


On 08/08/2016 08:59 PM, Atin Mukherjee wrote:



On Mon, Aug 8, 2016 at 3:18 PM, Niels de Vos > wrote:


On Mon, Aug 08, 2016 at 02:37:43PM +0530, Saravanakumar Arumugam
wrote:
>
> On 08/07/2016 04:17 PM, ML mail wrote:
> > Hi,
> >
> > Can someone explain me what is the op-version everybody is
speaking about on the mailing list?
> op-version is a way to determine which gluster version you are
running.
>
> This is quite useful during upgrade process, to check for backward
> compatibility.
>
> FYI -
>

https://github.com/gluster/glusterfs/blob/master/libglusterfs/src/globals.h#L21

<https://github.com/gluster/glusterfs/blob/master/libglusterfs/src/globals.h#L21>

Maybe there should be a page about the op-version at our upgrade guide
(just got recently informed we have that page):
http://gluster.readthedocs.io/en/latest/Upgrade-Guide/README/
<http://gluster.readthedocs.io/en/latest/Upgrade-Guide/README/>


Agreed, we should highlight op-version when a new feature is introduced
which requires a cluster op-version bump.
Probably having a metric of release vs. op-version would help users
understand what op-version they should run with. Any takers for
this change?


I have raised an issue in glusterdocs and will send a pull request.
https://github.com/gluster/glusterdocs/issues/143




Niels

___
Gluster-users mailing list
Gluster-users@gluster.org

http://www.gluster.org/mailman/listinfo/gluster-users
<http://www.gluster.org/mailman/listinfo/gluster-users>




--

--Atin


--
--Atin


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] What is op-version?

2016-08-08 Thread Saravanakumar Arumugam


On 08/07/2016 04:17 PM, ML mail wrote:

Hi,

Can someone explain to me what the op-version everybody is speaking about on
the mailing list is?

op-version is a way to determine which gluster version you are running.

This is quite useful during the upgrade process, to check for backward
compatibility.


FYI -
https://github.com/gluster/glusterfs/blob/master/libglusterfs/src/globals.h#L21
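
For a quick look on a running cluster, a sketch (the op-version value below is
only an example):

# current cluster op-version as recorded by glusterd
grep operating-version /var/lib/glusterd/glusterd.info
# once every node is upgraded, the cluster op-version can be bumped, e.g.:
gluster volume set all cluster.op-version 30712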



Cheers
ML



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Geo-replication configuration issue

2016-07-24 Thread Saravanakumar Arumugam


On 07/25/2016 10:29 AM, Saravanakumar Arumugam wrote:

Hi,

1.
Can you check /root/.ssh/authorized_keys (on the master host)?


Sorry, typo: this is on the slave host (ks4 in your case).


It should contain only entries starting with "command=".
If there is any duplicate entry without "command=", delete it
and check the geo-rep status again.


2.
This is to confirm the ssh connection between master and slave:
when you run the following, you should get the gsyncd prompt (from master to
slave).


ssh -i /var/lib/glusterd/geo-replication/secret.pem root@slave


The slave is ks4 in your case.


3.
Check your firewall settings.


Thanks,
Saravana

On 07/25/2016 02:24 AM, Alexandre Besnard wrote:

Anybody, any clue ?

On 20 Jul 2016, at 01:59, Alexandre Besnard 
mailto:besnard.alexan...@gmail.com>> 
wrote:


Hello

I deleted the content of /root/.ssh/authorized_keys on the slave (ks4)

Then I configured passwordless authentication from the host (ks16):

/ssh-copy-id root@ks4/
/
/
Also did from the host:

/gluster system:: execute gsec_create/

which created a file 
in /var/lib/glusterd/geo-replication/common_secret.pem.pub


Then created the geo-replicated volume successfully:

/gluster volume geo-replication backupvol ks4::backupvol create 
push-pem force/


but still getting the same errors in the log after starting the volume:

/[2016-07-19 23:52:14.900583] I [monitor(monitor):266:monitor] 
Monitor: /
/[2016-07-19 23:52:14.900752] I [monitor(monitor):267:monitor] 
Monitor: starting gsyncd worker/
/[2016-07-19 23:52:14.958281] I 
[gsyncd(/gluster/backupvol):710:main_i] : syncing: 
gluster://localhost:backupvol -> 
ssh://root@ks4:gluster://localhost:backupvol/
/[2016-07-19 23:52:14.958520] I [changelogagent(agent):73:__init__] 
ChangelogAgent: Agent listining.../
/[2016-07-19 23:52:15.81407] E 
[syncdutils(/gluster/backupvol):252:log_raise_exception] : 
connection to peer is broken/
/[2016-07-19 23:52:15.81645] E 
[resource(/gluster/backupvol):226:errlog] Popen: command "ssh 
-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i 
*/var/lib/glusterd/geo-replication/secret.pem *-p 22 
-oControlMaster=auto -S 
/tmp/gsyncd-aux-ssh-Aux8JK/b63292d563144e7818235d683516731d.sock 
root@ks4 /nonexistent/gsyncd --session-owner 
3281242a-ab45-4a0d-99e5-2965b4ac5840 -N --listen --timeout 120 
gluster://localhost:backupvol" returned with 255, saying:/
/[2016-07-19 23:52:15.81733] E 
[resource(/gluster/backupvol):230:logerr] Popen: ssh> 
key_load_public: invalid format/
/[2016-07-19 23:52:15.81804] E 
[resource(/gluster/backupvol):230:logerr] Popen: ssh> Permission 
denied (publickey,password)./
/[2016-07-19 23:52:15.81947] I 
[syncdutils(/gluster/backupvol):220:finalize] : exiting./
/[2016-07-19 23:52:15.82798] I [repce(agent):92:service_loop] 
RepceServer: terminating on reaching EOF./
/[2016-07-19 23:52:15.82946] I [syncdutils(agent):220:finalize] 
: exiting./
/[2016-07-19 23:52:15.82858] I [monitor(monitor):333:monitor] 
Monitor: worker(/gluster/backupvol) died before establishing connection/






Any thoughts ?


On 19 Jul 2016, at 06:33, Aravinda <mailto:avish...@redhat.com>> wrote:


Hi,

Looks like Master Pem keys are not copied to Slave nodes properly, 
Please cleanup /root/.ssh/authorized_keys in Slave nodes and run 
Geo-rep create force again.


gluster volume geo-replication  :: 
create push-pem force


Do you observe any errors related to hook scripts in glusterd log file?

regards
Aravinda

On 07/18/2016 10:11 PM, Alexandre Besnard wrote:

Hello

On a fresh Gluster 3.8 install, I am not able to configure a 
geo-replicated volume. Everything works fine up to starting of the 
volume however Gluster reports a faulty status.


When looking at the logs (gluster_error):

[2016-07-18 16:30:04.371686] I [cli.c:730:main] 0-cli: Started 
running gluster with version 3.8.0
[2016-07-18 16:30:04.435854] I [MSGID: 101190] 
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started 
thread with index 1
[2016-07-18 16:30:04.435921] I 
[socket.c:2468:socket_event_handler] 0-transport: disconnecting now
[2016-07-18 16:30:04.997986] I [input.c:31:cli_batch] 0-: Exiting 
with: 0




From the geo-replicated logs, it seems I have a SSH configuration 
issue:


2016-07-18 16:35:28.293524] I [monitor(monitor):266:monitor] 
Monitor: 
[2016-07-18 16:35:28.293740] I [monitor(monitor):267:monitor] 
Monitor: starting gsyncd worker
[2016-07-18 16:35:28.352266] I 
[gsyncd(/gluster/backupvol):710:main_i] : syncing: 
gluster://localhost:backupvol -> 
ssh://root@ks4:gluster://localhost:backupvol
[2016-07-18 16:35:28.352489] I [changelogagent(agent):73:__init__] 
ChangelogAgent: Agent listining...
[2016-07-18 16:35:28.492474] E 
[syncdutils(/gluster/backupvol):252:log_raise_exception] : 
connection to peer is broken
[2016-07-18 16:35:28.492706] E 
[res

Re: [Gluster-users] Geo-replication configuration issue

2016-07-24 Thread Saravanakumar Arumugam

Hi,

1.
Can you check /root/.ssh/authorized_keys (on the master host)?
It should contain only entries starting with "command=".
If there is any duplicate entry without "command=", delete it
and check the geo-rep status again.


2.
This is to confirm the ssh connection between master and slave:
when you run the following, you should get the gsyncd prompt (from master to
slave).


ssh -i /var/lib/glusterd/geo-replication/secret.pem root@slave

3.
Check your firewall settings (a quick check for items 1 and 2 is sketched below).
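
A quick sanity check for items 1 and 2 (generic commands; ks4 is the slave
host from this thread):

# on the slave: count lines that do NOT start with "command=" (should be 0)
grep -vc '^command=' /root/.ssh/authorized_keys
# from the master: the pem-based login should land in the restricted gsyncd prompt
ssh -i /var/lib/glusterd/geo-replication/secret.pem root@ks4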


Thanks,
Saravana

On 07/25/2016 02:24 AM, Alexandre Besnard wrote:

Anybody, any clue ?

On 20 Jul 2016, at 01:59, Alexandre Besnard 
mailto:besnard.alexan...@gmail.com>> wrote:


Hello

I deleted the content of /root/.ssh/authorized_keys on the slave (ks4)

Then I configured passwordless authentication from the host (ks16):

/ssh-copy-id root@ks4/
/
/
Also did from the host:

/gluster system:: execute gsec_create/

which created a file 
in /var/lib/glusterd/geo-replication/common_secret.pem.pub


Then created the geo-replicated volume successfully:

/gluster volume geo-replication backupvol ks4::backupvol create 
push-pem force/


but still getting the same errors in the log after starting the volume:

/[2016-07-19 23:52:14.900583] I [monitor(monitor):266:monitor] 
Monitor: /
/[2016-07-19 23:52:14.900752] I [monitor(monitor):267:monitor] 
Monitor: starting gsyncd worker/
/[2016-07-19 23:52:14.958281] I 
[gsyncd(/gluster/backupvol):710:main_i] : syncing: 
gluster://localhost:backupvol -> 
ssh://root@ks4:gluster://localhost:backupvol/
/[2016-07-19 23:52:14.958520] I [changelogagent(agent):73:__init__] 
ChangelogAgent: Agent listining.../
/[2016-07-19 23:52:15.81407] E 
[syncdutils(/gluster/backupvol):252:log_raise_exception] : 
connection to peer is broken/
/[2016-07-19 23:52:15.81645] E 
[resource(/gluster/backupvol):226:errlog] Popen: command "ssh 
-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i 
*/var/lib/glusterd/geo-replication/secret.pem *-p 22 
-oControlMaster=auto -S 
/tmp/gsyncd-aux-ssh-Aux8JK/b63292d563144e7818235d683516731d.sock 
root@ks4 /nonexistent/gsyncd --session-owner 
3281242a-ab45-4a0d-99e5-2965b4ac5840 -N --listen --timeout 120 
gluster://localhost:backupvol" returned with 255, saying:/
/[2016-07-19 23:52:15.81733] E 
[resource(/gluster/backupvol):230:logerr] Popen: ssh> 
key_load_public: invalid format/
/[2016-07-19 23:52:15.81804] E 
[resource(/gluster/backupvol):230:logerr] Popen: ssh> Permission 
denied (publickey,password)./
/[2016-07-19 23:52:15.81947] I 
[syncdutils(/gluster/backupvol):220:finalize] : exiting./
/[2016-07-19 23:52:15.82798] I [repce(agent):92:service_loop] 
RepceServer: terminating on reaching EOF./
/[2016-07-19 23:52:15.82946] I [syncdutils(agent):220:finalize] 
: exiting./
/[2016-07-19 23:52:15.82858] I [monitor(monitor):333:monitor] 
Monitor: worker(/gluster/backupvol) died before establishing connection/






Any thoughts ?


On 19 Jul 2016, at 06:33, Aravinda > wrote:


Hi,

Looks like Master Pem keys are not copied to Slave nodes properly, 
Please cleanup /root/.ssh/authorized_keys in Slave nodes and run 
Geo-rep create force again.


gluster volume geo-replication  :: 
create push-pem force


Do you observe any errors related to hook scripts in glusterd log file?

regards
Aravinda

On 07/18/2016 10:11 PM, Alexandre Besnard wrote:

Hello

On a fresh Gluster 3.8 install, I am not able to configure a 
geo-replicated volume. Everything works fine up to starting of the 
volume however Gluster reports a faulty status.


When looking at the logs (gluster_error):

[2016-07-18 16:30:04.371686] I [cli.c:730:main] 0-cli: Started 
running gluster with version 3.8.0
[2016-07-18 16:30:04.435854] I [MSGID: 101190] 
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started 
thread with index 1
[2016-07-18 16:30:04.435921] I [socket.c:2468:socket_event_handler] 
0-transport: disconnecting now
[2016-07-18 16:30:04.997986] I [input.c:31:cli_batch] 0-: Exiting 
with: 0




From the geo-replicated logs, it seems I have a SSH configuration 
issue:


2016-07-18 16:35:28.293524] I [monitor(monitor):266:monitor] 
Monitor: 
[2016-07-18 16:35:28.293740] I [monitor(monitor):267:monitor] 
Monitor: starting gsyncd worker
[2016-07-18 16:35:28.352266] I 
[gsyncd(/gluster/backupvol):710:main_i] : syncing: 
gluster://localhost:backupvol -> 
ssh://root@ks4:gluster://localhost:backupvol
[2016-07-18 16:35:28.352489] I [changelogagent(agent):73:__init__] 
ChangelogAgent: Agent listining...
[2016-07-18 16:35:28.492474] E 
[syncdutils(/gluster/backupvol):252:log_raise_exception] : 
connection to peer is broken
[2016-07-18 16:35:28.492706] E 
[resource(/gluster/backupvol):226:errlog] Popen: command "ssh 
-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i 
/var/lib/glusterd/geo-replication/secret.pem -p 22 
-oControlMaster=auto -S 
/tmp/gsyncd-aux-ssh-Fs2XND/b

Re: [Gluster-users] Error 404 ?

2016-07-11 Thread Saravanakumar Arumugam



On 07/11/2016 03:59 PM, Kaleb Keithley wrote:

Starting with the 3.8 releases EPEL packages are in the CentOS Storage SIG 
repos.

If you want to stay on 3.7, edit your /etc/yum.repos.d/glusterfs-epel.repo file 
and change .../LATEST/... to .../3.7/LATEST/...

(There have been several emails to gluster-users and gluster-devel mailing 
lists about this.)

See http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/EPEL.README 
for more info.

I think you mean this link:
https://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.README
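
If it helps, a sketch of the repo edit Kaleb describes (the sed pattern is an
assumption about your repo file's layout; verify the 3.7 directory exists on
download.gluster.org first):

# pin the repo to the 3.7 series instead of LATEST
sed -i 's|/LATEST/|/3.7/LATEST/|g' /etc/yum.repos.d/glusterfs-epel.repo
yum clean metadata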




- Original Message -

From: "Nicolas Ecarnot" 
To: "gluster-users" 
Sent: Monday, July 11, 2016 6:18:36 AM
Subject: [Gluster-users] Error 404 ?

Hello,

When trying a yum upgrade, I see that :

https://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-epel.repo

is leading to :

Not Found

The requested URL /pub/gluster/glusterfs/LATEST/RHEL/glusterfs-epel.repo
was not found on this server.

What did I do wrong?

(it was working for years...)

Thx

--
Nicolas ECARNOT
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Minutes from todays Gluster Bug Triage meeting

2016-07-05 Thread Saravanakumar Arumugam

Hi,

Thanks all who joined!

Next week at the same time (Tuesday 12:00 UTC) we will have another bug
triage meeting to catch the bugs that have not yet been handled by developers
and maintainers. We'll keep repeating this meeting as a safety net so that
bugs get initial attention and developers can immediately start
working on the issues that were reported.

Bug triaging (in general, there is no need to do it only during the meeting) is
intended to help developers, in the hope that developers can focus on
writing bug fixes instead of spending much of their valuable time
troubleshooting incorrectly or incompletely reported bugs.

More details about bug triaging can be found here:
http://gluster.readthedocs.io/en/latest/Contributors-Guide/Bug-Triage/

Meeting minutes below.

Thanks,
Saravana


#gluster-meeting: Gluster Bug Triage
Meeting started by Saravanakmr at 12:01:03 UTC (full logs).

Meeting summary

agenda: https://public.pad.fsfe.org/p/gluster-bug-triage 
(Saravanakmr, 12:01:13)


Roll call (Saravanakmr, 12:01:20)
Next weeks meeting host (Saravanakmr, 12:04:08)
ACTION: skoduri will host July 12 meeting (Saravanakmr, 12:05:14)

Action Items (Saravanakmr, 12:06:13)
ndevos need to decide on how to provide/use debug builds 
(Saravanakmr, 12:06:36)
ACTION: ndevos need to decide on how to provide/use debug 
builds (Saravanakmr, 12:07:24)


ndevos to propose some test-cases for minimal libgfapi test 
(Saravanakmr, 12:07:42)
skoduri to remind the developers working on test-automation to 
triage their own bugs (Saravanakmr, 12:12:29)
http://nongnu.13855.n7.nabble.com/Reminder-Triaging-and-Updating-Bug-status-td213287.html 
(Saravanakmr, 12:15:06)


jiffin will try to add an error for bug ownership to check-bugs.py 
(Saravanakmr, 12:16:18)
ACTION: jiffin will try to add an error for bug ownership to 
check-bugs.py (Saravanakmr, 12:17:02)


Group Triage (Saravanakmr, 12:17:34)
bugs to triage have been added to 
https://public.pad.fsfe.org/p/gluster-bugs-to-triage (Saravanakmr, 12:17:44)


Open Floor (Saravanakmr, 12:23:23)



Meeting ended at 12:24:58 UTC (full logs).

Action items

skoduri will host July 12 meeting
ndevos need to decide on how to provide/use debug builds
jiffin will try to add an error for bug ownership to check-bugs.py



Action items, by person

ndevos
ndevos need to decide on how to provide/use debug builds
skoduri
skoduri will host July 12 meeting



People present (lines said)

Saravanakmr (55)
ndevos (17)
skoduri (14)
kkeithley (10)
zodbot (3)
hgowtham (2)

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] REMINDER: Gluster Bug Triage starts at 12:00 UTC

2016-07-05 Thread Saravanakumar Arumugam

Hi all,

The Gluster Bug Triage Meeting will start in approx. 1 hour 30 minutes from now.
Please join if you are interested in getting a decent status of bugs
that have recently been filed and that maintainers/developers have not picked
up yet.

The meeting also includes a little bit about testing and other misc
stuff related to bugs.

See you there!

Thanks,
Saravanakumar

Agenda:https://public.pad.fsfe.org/p/gluster-bug-triage
Location: #gluster-meeting on Freenode IRC 
-https://webchat.freenode.net/?channels=gluster-meeting
Date: Tuesday July 5, 2016
Time: 12:00 UTC, 13:00 CET, 7:00 EST (to get your local time, run: date -d "12:00 
UTC")
Chair: Saravanakumar


1. Agenda
  -  Roll Call

2. Action Items

1. ndevos need to decide on how to provide/use debug builds

2. ndevos to propose some test-cases for minimal libgfapi test

3. Manikandan and gem to wait until Nigel gives access to test the scripts

4. skoduri to remind the developers working on test-automation to
   triage their own bugs

5. jiffin will try to add an error for bug ownership to check-bugs.py

3. Group Triage

4. Open Floor

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Minutes : Gluster Community Bug Triage meeting (Today)

2016-06-14 Thread Saravanakumar Arumugam


Hi,


Please find the minutes of the June 14 Bug Triage meeting.



Meeting summary

1. agenda: https://public.pad.fsfe.org/p/gluster-bug-triage
   (Saravanakmr, 12:00:37)
2. Roll call (Saravanakmr, 12:00:49)
3. kkeithley Saravanakmr will set up Coverity, clang, etc. on a public
   facing machine and run it regularly (Saravanakmr, 12:04:59)
   1. http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2016-06-13-e0b057cf/cov-errors.txt
      looks pretty full to me (kkeithley, 12:08:39)
4. ndevos need to decide on how to provide/use debug builds
   (Saravanakmr, 12:11:54)
   1. ACTION: ndevos need to decide on how to provide/use debug builds
      (Saravanakmr, 12:12:19)
   2. ACTION: ndevos to propose some test-cases for minimal libgfapi
      test (Saravanakmr, 12:12:42)
5. Manikandan and gem to followup with kshlm/misc/nigelb to get access
   to gluster-infra (Saravanakmr, 12:12:58)
   1. ACTION: Manikandan and gem to followup with kshlm/misc/nigelb to
      get access to gluster-infra (Saravanakmr, 12:14:21)
   2. ACTION: Manikandan will host bug triage meeting on June 21st
      2016 (Saravanakmr, 12:14:30)
   3. ACTION: ndevos will host bug triage meeting on June 28th 2016
      (Saravanakmr, 12:14:38)
6. Group Triage (Saravanakmr, 12:15:05)
   1. you can find the bugs to triage here in
      https://public.pad.fsfe.org/p/gluster-bugs-to-triage
      (Saravanakmr, 12:15:12)
   2. http://www.gluster.org/community/documentation/index.php/Bug_triage
      (Saravanakmr, 12:15:17)
   3. https://public.pad.fsfe.org/p/gluster-bugs-to-triage
      (Saravanakmr, 12:15:38)
7. Open Floor (Saravanakmr, 12:23:40)


Meeting ended at 12:29:29 UTC (full logs).



 Action items

1. ndevos need to decide on how to provide/use debug builds
2. ndevos to propose some test-cases for minimal libgfapi test
3. Manikandan and gem to followup with kshlm/misc/nigelb to get access
   to gluster-infra
4. Manikandan will host bug triage meeting on June 21st 2016
5. ndevos will host bug triage meeting on June 28th 2016



 Action items, by person

1. ndevos
1. ndevos need to decide on how to provide/use debug builds
2. ndevos to propose some test-cases for minimal libgfapi test
3. ndevos will host bug triage meeting on June 28th 2016
2. UNASSIGNED
1. Manikandan and gem to followup with kshlm/misc/nigelb to get
   access to gluster-infra
2. Manikandan will host bug triage meeting on June 21st 2016


 People present (lines said)

1. Saravanakmr (45)
2. kkeithley (20)
3. ndevos (10)
4. hgowtham (3)
5. zodbot (3)
6. skoduri (2)
7. jiffin (1)
8. partner (1)


Thanks,
Saravana

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC

2016-06-14 Thread Saravanakumar Arumugam

Hi,


This meeting is scheduled for anyone who is interested in learning more
about, or assisting with, the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting  )
- date: every Tuesday
- time: 12:00 UTC
  (in your terminal, run: date -d "12:00 UTC")
- agenda:https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last week's action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.


Thanks,
Saravana

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Meeting minutes: Gluster Community Bug Triage

2016-05-24 Thread Saravanakumar Arumugam

Hi,

Please find the meeting minutes and summary:


 Minutes:

 Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-05-24/gluster_bug_triage.2016-05-24-12.00.html
 Minutes (text): 
https://meetbot.fedoraproject.org/gluster-meeting/2016-05-24/gluster_bug_triage.2016-05-24-12.00.txt
 Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-05-24/gluster_bug_triage.2016-05-24-12.00.log.html



Meeting summary:

1. Roll call (Saravanakmr, 12:01:10)
2. msvbhat will look into lalatenduM's automated Coverity setup in
   Jenkins which needs assistance from an admin with more permissions
   (Saravanakmr, 12:04:57)
   1. ACTION: msvbhat will look into lalatenduM's automated Coverity
      setup in Jenkins which needs assistance from an admin with more
      permissions (Saravanakmr, 12:06:06)
3. ndevos need to decide on how to provide/use debug builds
   (Saravanakmr, 12:06:18)
   1. ACTION: ndevos need to decide on how to provide/use debug
      builds (Saravanakmr, 12:07:10)
4. Manikandan and gem to followup with kshlm to get access to
   gluster-infra (Saravanakmr, 12:08:34)
   1. ACTION: Manikandan and gem to followup with kshlm to get
      access to gluster-infra (Saravanakmr, 12:09:25)
5. ndevos to propose some test-cases for minimal libgfapi test
   (Saravanakmr, 12:09:42)
   1. ACTION: ndevos to propose some test-cases for minimal libgfapi
      test (Saravanakmr, 12:09:51)
6. Group Triage (Saravanakmr, 12:10:08)
   1. you can find the bugs to triage here in
      https://public.pad.fsfe.org/p/gluster-bugs-to-triage
      (Saravanakmr, 12:10:15)
7. Open Floor (Saravanakmr, 12:34:49)


Meeting ended at 12:37:02 UTC (full logs).




 Action items

1. msvbhat will look into lalatenduM's automated Coverity setup in
   Jenkins which need assistance from an admin with more permissions
2. ndevos need to decide on how to provide/use debug builds
3. Manikandan and gem to followup with kshlm to get access to gluster-infra
4. ndevos to propose some test-cases for minimal libgfapi test


 Action items, by person

1. ndevos
1. ndevos need to decide on how to provide/use debug builds
2. ndevos to propose some test-cases for minimal libgfapi test
2. *UNASSIGNED*
1. msvbhat will look into lalatenduM's automated Coverity setup in
   Jenkins which need assistance from an admin with more permissions
2. Manikandan and gem to followup with kshlm to get access to
   gluster-infra


 People present (lines said)

1. Saravanakmr (31)
2. jiffin (3)
3. kkeithley (3)
4. zodbot (3)
5. hgowtham (3)
6. skoduri (1)
7. ndevos (1)

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting today at 12:00 UTC ~(in 2.0 hours)

2016-05-24 Thread Saravanakumar Arumugam
Hi,

This meeting is scheduled for anyone interested in learning more
about, or assisting with, the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
  (https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
  (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

Highly appreciate your participation.

Thanks,
Saravana


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Reply: Reply: geo-replication status partial faulty

2016-05-19 Thread Saravanakumar Arumugam

Hi,
+geo-rep team.

Can you get the gluster version you are using?

# For example:
rpm -qa | grep gluster

I hope you have the same gluster version installed everywhere.
Please double-check and share the output.

Thanks,
Saravana

On 05/19/2016 01:37 PM, vyyy杨雨阳 wrote:


Hi, Saravana

I have changed log level to DEBUG. Then start geo-replication with 
log-file option, attached the file.


gluster volume geo-replication filews 
glusterfs01.sh3.ctripcorp.com::filews_slave start --log-file=geo.log


I have checked /root/.ssh/authorized_keys on
glusterfs01.sh3.ctripcorp.com. It has the entries from
/var/lib/glusterd/geo-replication/common_secret.pem.pub, and I have
removed the lines not starting with “command=”.


ssh -i /var/lib/glusterd/geo-replication/secret.pem root@glusterfs01.sh3.ctripcorp.com


I can see gsyncd messages and no ssh error.

Attached etc-glusterfs-glusterd.vol.log from faulty node, it shows :

[2016-05-19 06:39:23.405974] I 
[glusterd-geo-rep.c:3516:glusterd_read_status_file] 0-: Using passed 
config 
template(/var/lib/glusterd/geo-replication/filews_glusterfs01.sh3.ctripcorp.com_filews_slave/gsyncd.conf).


[2016-05-19 06:39:23.541169] E 
[glusterd-geo-rep.c:3200:glusterd_gsync_read_frm_status] 0-: Unable to 
read gsyncd status file


[2016-05-19 06:39:23.541210] E 
[glusterd-geo-rep.c:3603:glusterd_read_status_file] 0-: Unable to read 
the statusfile for /export/sdb/filews brick for filews(master), 
glusterfs01.sh3.ctripcorp.com::filews_slave(slave) session


[2016-05-19 06:39:29.472047] I 
[glusterd-geo-rep.c:1835:glusterd_get_statefile_name] 0-: Using passed 
config 
template(/var/lib/glusterd/geo-replication/filews_glusterfs01.sh3.ctripcorp.com_filews_slave/gsyncd.conf).


[2016-05-19 06:39:34.939709] I 
[glusterd-geo-rep.c:3516:glusterd_read_status_file] 0-: Using passed 
config 
template(/var/lib/glusterd/geo-replication/filews_glusterfs01.sh3.ctripcorp.com_filews_slave/gsyncd.conf).


[2016-05-19 06:39:35.058520] E 
[glusterd-geo-rep.c:3200:glusterd_gsync_read_frm_status] 0-: Unable to 
read gsyncd status file


/var/log/glusterfs/geo-replication/filews/ 
ssh%3A%2F%2Froot%4010.15.65.66%3Agluster%3A%2F%2F127.0.0.1%3Afilews_slave.log 
 shows as following:


[2016-05-19 15:11:37.307755] I [monitor(monitor):215:monitor] Monitor: 



[2016-05-19 15:11:37.308059] I [monitor(monitor):216:monitor] Monitor: 
starting gsyncd worker


[2016-05-19 15:11:37.423320] D [gsyncd(agent):627:main_i] : 
rpc_fd: '7,11,10,9'


[2016-05-19 15:11:37.423882] I [changelogagent(agent):72:__init__] 
ChangelogAgent: Agent listining...


[2016-05-19 15:11:37.423906] I [monitor(monitor):267:monitor] Monitor: 
worker(/export/sdb/filews) died before establishing connection


[2016-05-19 15:11:37.424151] I [repce(agent):92:service_loop] 
RepceServer: terminating on reaching EOF.


[2016-05-19 15:11:37.424335] I [syncdutils(agent):214:finalize] : 
exiting.


Best Regards

Yuyang Yang

From: Saravanakumar Arumugam [mailto:sarum...@redhat.com]
Sent: Thursday, May 19, 2016 1:59 PM
To: vyyy杨雨阳 ; Gluster-users@gluster.org
Subject: Re: [Gluster-users] Reply: geo-replication status partial faulty

Hi,

There seems to be some issue in glusterfs01.sh3.ctripcorp.com slave node.
Can you share the complete logs ?

You can increase verbosity of debug messages like this:
gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config log-level DEBUG
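
For example, with the volume names used in this thread (set it back the
same way with "config log-level INFO" once you are done):

gluster volume geo-replication filews glusterfs01.sh3.ctripcorp.com::filews_slave config log-level DEBUG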



Also, check  /root/.ssh/authorized_keys in glusterfs01.sh3.ctripcorp.com
It should have entries in 
/var/lib/glusterd/geo-replication/common_secret.pem.pub (present in 
master node).


Have a look at this one for example:
https://www.gluster.org/pipermail/gluster-users/2015-August/023174.html
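
A quick way to cross-check the two (default paths shown; adjust for your setup):

# On the master node: public keys generated for this geo-rep session
cat /var/lib/glusterd/geo-replication/common_secret.pem.pub

# On the slave node glusterfs01: each of those keys should appear here,
# prefixed with a command="..." entry pointing to gsyncd
grep "command=" /root/.ssh/authorized_keys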

Thanks,
Saravana

On 05/19/2016 07:53 AM, vyyy杨雨阳 wrote:

Hello,

I have tried to config a geo-replication volume , all the master
nodes configuration are the same, When I start this volume, the
status shows partial faulty as following:

gluster volume geo-replication filews
glusterfs01.sh3.ctripcorp.com::filews_slave status

MASTER NODE      MASTER VOL    MASTER BRICK          SLAVE                                          STATUS     CHECKPOINT STATUS    CRAWL STATUS
-------------------------------------------------------------------------------------------------------------------------------------------------
SVR8048HW2285    filews        /export/sdb/filews    glusterfs01.sh3.ctripcorp.com::filews_slave    faulty     N/A                  N/A
SVR8050HW2285    filews        /export/sdb/filews    glusterfs03.sh3.ctripcorp.com::filews_slave    Passive    N/A                  N/A
SVR8047HW2285    filews        /export/sdb/filews    glusterfs01.sh3.ctripcorp.com::filews_slave    Active     N/A                  Hybrid Crawl
SVR8049HW2285    filews        /export/sdb/filews    glusterfs05.sh3.ctripcorp.com::filews_slave    Active     N/A                  Hybrid Crawl
SH02SVR5951      filews        /expo

Re: [Gluster-users] Reply: geo-replication status partial faulty

2016-05-18 Thread Saravanakumar Arumugam

Hi,

There seems to be some issue in glusterfs01.sh3.ctripcorp.com slave node.
Can you share the complete logs ?

You can increase verbosity of debug messages like this:
gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config log-level DEBUG



Also, check  /root/.ssh/authorized_keys in glusterfs01.sh3.ctripcorp.com
It should have entries in 
/var/lib/glusterd/geo-replication/common_secret.pem.pub (present in 
master node).


Have a look at this one for example:
https://www.gluster.org/pipermail/gluster-users/2015-August/023174.html

Thanks,
Saravana

On 05/19/2016 07:53 AM, vyyy杨雨阳 wrote:


Hello,

I have tried to config a geo-replication volume , all the master nodes 
configuration are the same, When I start this volume, the status shows 
partial faulty as following:


gluster volume geo-replication filews 
glusterfs01.sh3.ctripcorp.com::filews_slave status


MASTER NODE      MASTER VOL    MASTER BRICK          SLAVE                                          STATUS     CHECKPOINT STATUS    CRAWL STATUS
-------------------------------------------------------------------------------------------------------------------------------------------------
SVR8048HW2285    filews        /export/sdb/filews    glusterfs01.sh3.ctripcorp.com::filews_slave    faulty     N/A                  N/A
SVR8050HW2285    filews        /export/sdb/filews    glusterfs03.sh3.ctripcorp.com::filews_slave    Passive    N/A                  N/A
SVR8047HW2285    filews        /export/sdb/filews    glusterfs01.sh3.ctripcorp.com::filews_slave    Active     N/A                  Hybrid Crawl
SVR8049HW2285    filews        /export/sdb/filews    glusterfs05.sh3.ctripcorp.com::filews_slave    Active     N/A                  Hybrid Crawl
SH02SVR5951      filews        /export/sdb/brick1    glusterfs06.sh3.ctripcorp.com::filews_slave    Passive    N/A                  N/A
SH02SVR5953      filews        /export/sdb/brick1    glusterfs01.sh3.ctripcorp.com::filews_slave    faulty     N/A                  N/A
SVR6995HW2285    filews        /export/sdb/filews    glusterfs01.sh3.ctripcorp.com::filews_slave    faulty     N/A                  N/A
SH02SVR5954      filews        /export/sdb/brick1    glusterfs01.sh3.ctripcorp.com::filews_slave    faulty     N/A                  N/A
SVR6994HW2285    filews        /export/sdb/filews    glusterfs02.sh3.ctripcorp.com::filews_slave    Passive    N/A                  N/A
SVR6993HW2285    filews        /export/sdb/filews    glusterfs01.sh3.ctripcorp.com::filews_slave    faulty     N/A                  N/A
SH02SVR5952      filews        /export/sdb/brick1    glusterfs01.sh3.ctripcorp.com::filews_slave    faulty     N/A                  N/A
SVR6996HW2285    filews        /export/sdb/filews    glusterfs04.sh3.ctripcorp.com::filews_slave    Passive    N/A                  N/A


On the faulty node, log file /var/log/glusterfs/geo-replication/filews 
shows worker(/export/sdb/filews) died before establishing connection


[2016-05-18 16:55:46.402622] I [monitor(monitor):215:monitor] Monitor: 



[2016-05-18 16:55:46.402930] I [monitor(monitor):216:monitor] Monitor: 
starting gsyncd worker


[2016-05-18 16:55:46.517460] I [changelogagent(agent):72:__init__] 
ChangelogAgent: Agent listining...


[2016-05-18 16:55:46.518066] I [repce(agent):92:service_loop] 
RepceServer: terminating on reaching EOF.


[2016-05-18 16:55:46.518279] I [syncdutils(agent):214:finalize] : 
exiting.


[2016-05-18 16:55:46.518194] I [monitor(monitor):267:monitor] Monitor: 
worker(/export/sdb/filews) died before establishing connection


[2016-05-18 16:55:56.697036] I [monitor(monitor):215:monitor] Monitor: 



Any advice and suggestions will be greatly appreciated.

Best Regards

Yuyang Yang



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] mount.glusterfs able to return errors directly to caller rather than default log

2016-04-28 Thread Saravanakumar Arumugam


On 04/28/2016 07:26 PM, Scott Creeley wrote:

Is there a way to set an option to return errors directly to the caller?  
Looking at the man-page doesn't appear so...but wondering if there is some 
trick to accomplish this or how hard that would be to implement.

For example, current behavior:

1.  mount.glusterfs fails - returns exit code  (exit status 1) and the true 
errors are logged to log-file parameter (or default)

 [2016-04-27 19:25:53.429657] E [socket.c:2332:socket_connect_finish] 
0-glusterfs: connection to 192.168.121.222:24007 failed (Connection timed out)
 [2016-04-27 19:25:53.429733] E 
[glusterfsd-mgmt.c:1819:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect 
with remote-host: 192.168.121.222 (Transport endpoint is not connected)

2.  user or program needs to access the log file to find the root cause of the 
issue


wondering how hard it would be to return the errors directly to the caller with 
an additional option, something like  verbose or -v

This seems to be a work in progress; check this:
http://review.gluster.org/#/c/11469/

Thanks,
Saravana
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Minutes from todays Gluster Community Bug Triage meeting (2016-04-19)

2016-04-19 Thread Saravanakumar Arumugam

Hi,
Thanks for the participation.  Please find meeting summary below.

Meeting ended Tue Apr 19 12:58:58 2016 UTC. Information about MeetBot at 
http://wiki.debian.org/MeetBot .
Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-04-19/gluster_bug_triage.2016-04-19-12.00.html
Minutes (text): 
https://meetbot.fedoraproject.org/gluster-meeting/2016-04-19/gluster_bug_triage.2016-04-19-12.00.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-04-19/gluster_bug_triage.2016-04-19-12.00.log.html


Meeting started by Saravanakmr at 12:00:36 UTC (full logs).


 Meeting summary

1. agenda: https://public.pad.fsfe.org/p/gluster-bug-triage
   (Saravanakmr, 12:01:01)

2. *Roll Call* (Saravanakmr, 12:01:13)
3. *msvbhat will look into lalatenduM's automated Coverity setup in
   Jenkins which need assistance from an admin with more permissions*
   (Saravanakmr, 12:07:44)
   1. /ACTION/: msvbhat will look into lalatenduM's automated Coverity
      setup in Jenkins which need assistance from an admin with more
      permissions (Saravanakmr, 12:09:08)

4. *ndevos need to decide on how to provide/use debug builds*
   (Saravanakmr, 12:09:31)
   1. /ACTION/: ndevos need to decide on how to provide/use debug
      builds (Saravanakmr, 12:10:13)

5. *Manikandan to followup with kashlm to get access to gluster-infra*
   (Saravanakmr, 12:10:33)
   1. /ACTION/: Manikandan to followup with kashlm to get access to
      gluster-infra (Saravanakmr, 12:11:44)
   2. /ACTION/: Manikandan and Nandaja will update on bug automation
      (Saravanakmr, 12:11:54)

6. *msvbhat provide a simple step/walk-through on how to provide
   testcases for the nightly rpm tests* (Saravanakmr, 12:12:08)
   1. /ACTION/: msvbhat provide a simple step/walk-through on how to
      provide testcases for the nightly rpm tests (Saravanakmr, 12:12:18)
   2. /ACTION/: ndevos to propose some test-cases for minimal libgfapi
      test (Saravanakmr, 12:12:27)

7. *rafi needs to followup on #bug 1323895* (Saravanakmr, 12:12:36)
   1. /ACTION/: rafi needs to followup on #bug 1323895 (Saravanakmr,
      12:14:04)

8. *need to discuss about writing a script to update bug assignee from
   gerrit patch* (Saravanakmr, 12:14:27)
   1. /ACTION/: ndevos need to discuss about writing a script to
      update bug assignee from gerrit patch (Saravanakmr, 12:18:29)

9. *hari to send a request asking developers to setup notification for
   bugs being filed* (Saravanakmr, 12:18:52)
   1. http://www.spinics.net/lists/gluster-devel/msg19169.html
      (Saravanakmr, 12:22:20)

10. *Group Triage* (Saravanakmr


[Gluster-users] [Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC ~(in 3 hours)

2016-04-19 Thread Saravanakumar Arumugam

Hi,

This meeting is scheduled for anyone, who is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
 (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,
Saravana
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] geo-replication unprivileged user error

2016-03-30 Thread Saravanakumar Arumugam

Hi,
Replies inline.

Thanks,
Saravana

On 03/31/2016 04:00 AM, Gmail wrote:
I’ve rebuilt the cluster again, making a fresh installation. And now 
the error is different.






MASTER NODE             MASTER VOL    MASTER BRICK              SLAVE USER    SLAVE                            SLAVE NODE      STATUS     CRAWL STATUS    LAST_SYNCED
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
master-host01.me.com    geotest       /gpool/brick03/geotest    guser         guser@slave-host01::geotestdr    N/A             Faulty     N/A             N/A
master-host02.me.com    geotest       /gpool/brick03/geotest    guser         guser@slave-host01::geotestdr    slave-host01    Passive    N/A             N/A
master-host03.me.com    geotest       /gpool/brick03/geotest    guser         guser@slave-host01::geotestdr    slave-host03    Passive    N/A             N/A



There seems to be an issue with the geo-rep setup.

 - All the master bricks appear to be the same, which should not be the case.

What type of volume is this?
Can you share "gluster volume status" and "gluster volume info" for both
the master and the slave volume?


Also, please share all the commands you executed to set up this geo-rep session.
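
For example (volume names taken from your status output):

# On one of the master nodes
gluster volume status geotest
gluster volume info geotest

# On the slave node
gluster volume status geotestdr
gluster volume info geotestdr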







[2016-03-30 22:09:31.326898] I [monitor(monitor):221:monitor] Monitor: 

[2016-03-30 22:09:31.327461] I [monitor(monitor):222:monitor] Monitor: 
starting gsyncd worker
[2016-03-30 22:09:31.544631] I 
[gsyncd(/gpool/brick03/geotest):649:main_i] : syncing: 
gluster://localhost:geotest -> 
ssh://guser@slave-host02:gluster://localhost:geotestdr
[2016-03-30 22:09:31.547542] I [changelogagent(agent):75:__init__] 
ChangelogAgent: Agent listining...
[2016-03-30 22:09:31.830554] E 
[syncdutils(/gpool/brick03/geotest):252:log_raise_exception] : 
connection to peer is broken
[2016-03-30 22:09:31.831017] W 
[syncdutils(/gpool/brick03/geotest):256:log_raise_exception] : 
!
[2016-03-30 22:09:31.831258] W 
[syncdutils(/gpool/brick03/geotest):257:log_raise_exception] : 
!!! getting "No such file or directory" errors is most likely due to 
MISCONFIGURATION, please consult 
https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Geo_Rep-Preparation-Settingup_Environment.html
[2016-03-30 22:09:31.831502] W 
[syncdutils(/gpool/brick03/geotest):265:log_raise_exception] : 
!
[2016-03-30 22:09:31.836395] E 
[resource(/gpool/brick03/geotest):222:errlog] Popen: command "ssh 
-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i 
/var/lib/glusterd/geo-replication/secret.pem -oControlMaster=auto -S 
/tmp/gsyncd-aux-ssh-SfXvbB/de372ce5774b5d259c58c5c9522ffc8f.sock 
guser@slave-host02 /nonexistent/gsyncd --session-owner 
ec473e17-b933-4bf7-9eed-4c393f7aaf5d -N --listen --timeout 120 
gluster://localhost:geotestdr" returned with 127, saying:
[2016-03-30 22:09:31.836694] E 
[resource(/gpool/brick03/geotest):226:logerr] Popen: ssh> bash: 
/nonexistent/gsyncd: No such file or directory
[2016-03-30 22:09:31.837193] I 
[syncdutils(/gpool/brick03/geotest):220:finalize] : exiting.
[2016-03-30 22:09:31.840569] I [repce(agent):92:service_loop] 
RepceServer: terminating on reaching EOF.
[2016-03-30 22:09:31.840993] I [syncdutils(agent):220:finalize] : 
exiting.
[2016-03-30 22:09:31.840742] I [monitor(monitor):274:monitor] Monitor: 
worker(/gpool/brick03/geotest) died before establishing connection
[2016-03-30 22:09:42.130866] I [monitor(monitor):221:monitor] Monitor: 

[2016-03-30 22:09:42.131448] I [monitor(monitor):222:monitor] Monitor: 
starting gsyncd worker
[2016-03-30 22:09:42.348165] I 
[gsyncd(/gpool/brick03/geotest):649:main_i] : syncing: 
gluster://localhost:geotest -> 
ssh://guser@slave-host02:gluster://localhost:geotestdr
[2016-03-30 22:09:42.349118] I [changelogagent(agent):75:__init__] 
ChangelogAgent: Agent listining...
[2016-03-30 22:09:42.653141] E 
[syncdutils(/gpool/brick03/geotest):252:log_raise_exception] : 
connection to peer is broken
[2016-03-30 22:09:42.653656] W 
[syncdutils(/gpool/brick03/geotest):256:log_raise_exception] : 
!
[2016-03-30 22:09:42.653898] W 
[syncdutils(/gpool/brick03/geotest):257:log_raise_exception] : 
!!! getting "No such file or directory" errors is most likely due to 
MISCONFIGURATION, please consult 
https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Geo_Rep-Preparation-Settingup_Environment.html
[2016-03-30 22:09:42.654129] W 
[syncdutils(/gpool/brick03/geotest):265:log_raise_exception] : 
!
[2016-03-30 22:09:42.659329] E 
[resource(/gpool/brick03/geotest):222:errlog] Popen: command "ssh 
-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i 
/var/lib/glusterd/geo-replication

[Gluster-users] Minutes of today's Gluster Community Bug Triage meeting (Mar 22 2016)

2016-03-22 Thread Saravanakumar Arumugam

Hi,

Please find the minutes of today's Gluster Community Bug Triage meeting 
below. Thanks to everyone who have attended the meeting.


Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-03-22/gluster_bug_triage.2016-03-22-12.00.html
Minutes (text): 
https://meetbot.fedoraproject.org/gluster-meeting/2016-03-22/gluster_bug_triage.2016-03-22-12.00.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-03-22/gluster_bug_triage.2016-03-22-12.00.log.html



#gluster-meeting: Gluster Bug Triage

Meeting started by Saravanakmr at 12:00:03 UTC (full logs).


 Meeting summary

1. agenda: https://public.pad.fsfe.org/p/gluster-bug-triage
   (Saravanakmr, 12:00:19)

2. *Roll Call* (Saravanakmr, 12:00:28)
3. *kkeithley_ will come up with a proposal to reduce the number of
   bugs against "mainline" in NEW state* (Saravanakmr, 12:04:42)
4. *hagarth start/sync email on regular (nightly) automated tests*
   (Saravanakmr, 12:05:36)
5. *msvbhat will look into using nightly builds for automated testing,
   and will report issues/success to the mailinglist* (Saravanakmr,
   12:06:57)
6. *msvbhat will look into lalatenduM's automated Coverity setup in
   Jenkins which need assistance from an admin with more permissions*
   (Saravanakmr, 12:09:41)
   1. /ACTION/: msvbhat will look into lalatenduM's automated Coverity
      setup in Jenkins which need assistance from an admin with more
      permissions (Saravanakmr, 12:13:42)

7. *msvbhat and ndevos need to think about and decide how to
   provide/use debug builds* (Saravanakmr, 12:13:55)
   1. /ACTION/: ndevos need to think about and decide how to
      provide/use debug builds (Saravanakmr, 12:17:02)

8. *ndevos to propose some test-cases for minimal libgfapi test*
   (Saravanakmr, 12:17:15)
9. *Manikandan and Nandaja will update on bug automation* (Saravanakmr,
   12:19:16)
   1. /ACTION/: Manikandan and Nandaja will update on bug automation
      (Saravanakmr, 12:20:17)
   2. /ACTION/: kkeithley_ will come up with a proposal to reduce the
      number of bugs against "mainline" in NEW state (Saravanakmr,
      12:23:53)
   3. /ACTION/: hagarth start/sync email on regular (nightly)
      automated tests (Saravanakmr, 12:24:03)
   4. https://public.pad.fsfe.org/p/gluster-bugs-to-triage
      (Saravanakmr, 12:25:04)
   5. http://www.gluster.org/community/documentation/index.php/Bug_triage
      (Manikandan, 12:26:37)

10. *Open Floor* (Saravanakmr, 12:39:37)



Meeting ended at 12:41:49 UTC (full logs).




 Action items

1. msvbhat will look into lalatenduM's automated Coverity setup in
   Jenkins which need assistance from an admin with more p

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC

2016-03-22 Thread Saravanakumar Arumugam

Hi,

This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
 (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,
Saravana
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs build questions

2016-03-09 Thread Saravanakumar Arumugam

Hi,

On 03/09/2016 01:06 PM, jayakrishnan mm wrote:

Hi,

I have installed the  3.7.6  version from .deb on my Ubuntu14.04 PC. 
It is working fine.


Now  I have  built  the 3.7.6..from sources as follows.


1. ./autogen.sh
2. ./configure --enable-debug
3. make
4. sudo make install  --  DESTDIR=/

The .so  files  are  in /usr/local/lib/glusterfs and  the libs 
(libglusterfs.so.0.0.1,etc)  are in /usr/local/lib/. (The glusterfsd 
is in /usr/local/sbin)


But  the Initial installation keeps .so files  in 
/usr/lib/i386-linux-gnu/glusterfs and  libs in /usr/lib


and  daemon in /usr/sbin.



You can set the path to install while doing configure.

Check this link :
https://www.gluster.org/pipermail/gluster-devel/2016-January/047981.html
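
As a sketch, something like the following keeps the rebuilt binaries and
libraries where the packaged ones were, instead of under /usr/local (the
flags are illustrative; pick the paths that match how the .deb packages
were laid out on your system):

./autogen.sh
./configure --enable-debug \
    --prefix=/usr \
    --sysconfdir=/etc \
    --localstatedir=/var \
    --libdir=/usr/lib/i386-linux-gnu
make
sudo make install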

Now  I am unable to mount the volume from client, giving the  below  
error. (from glustershd.log)


Attached the  full log.

How to   completely remove the previous installation dependencies(.so, 
libs, daemon, etc) ?


Is the below  error  due  to the mismatch between the .deb 
installation and source built components?


--JK




-
[2016-03-09 06:54:19.303329] I [MSGID: 100030] 
[glusterfsd.c:2318:main] 0-/usr/local/sbin/glusterfs: Started running 
/usr/local/sbin/glusterfs version 3.7.6 (args: 
/usr/local/sbin/glusterfs -s localhost --volfile-id gluster/glustershd 
-p /var/lib/glusterd/glustershd/run/glustershd.pid -l 
/var/log/glusterfs/glustershd.log -S 
/var/run/gluster/7d1a316c684c230d5e5088906dfb4d96.socket 
--xlator-option 
*replicate*.node-uuid=021ae21f-cfd9-4569-8eae-b3f7183c414e)
[2016-03-09 06:54:19.311801] I [MSGID: 101190] 
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started 
thread with index 1
[2016-03-09 06:54:19.318582] D [MSGID: 0] [ec.c:546:init] 
0-ecvol-disperse-0: JK_DEBUG: In init


[2016-03-09 06:54:19.318632] D [MSGID: 0] [ec.c:56:ec_parse_options] 
0-ecvol-disperse-0: JK_DEBUG: In ec_parse_options


[2016-03-09 06:54:19.318666] D [MSGID: 0] 
[options.c:1217:xlator_option_init_int32] 0-ecvol-disperse-0: option 
redundancy using set value 1
[2016-03-09 06:54:19.318696] D [MSGID: 0] [ec.c:88:ec_parse_options] 
0-ec: JK_DEBUG:Initialized with: nodes=3, fragments=2, 
stripe_size=1024, node_mask=7
[2016-03-09 06:54:19.318738] D 
[logging.c:1764:gf_log_flush_extra_msgs] 0-logging-infra: Log buffer 
size reduced. About to flush 5 extra log messages
[2016-03-09 06:54:19.318770] D 
[logging.c:1767:gf_log_flush_extra_msgs] 0-logging-infra: Just flushed 
5 extra log messages

pending frames:
frame : type(0) op(0)
patchset: git://git.gluster.com/glusterfs.git 


signal received: 11
time of crash:
2016-03-09 06:54:19
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.7.6
/usr/lib/i386-linux-gnu/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xf1)[0xb7614cab]
/usr/lib/i386-linux-gnu/libglusterfs.so.0(gf_print_trace+0x253)[0xb762f052]
/usr/local/sbin/glusterfs(glusterfsd_print_trace+0x1a)[0x8050a29]
[0xb76e9404]
/usr/local/lib/glusterfs/3.7.6/xlator/cluster/disperse.so(ec_method_initialize+0x1c)[0xb3994b99]
/usr/local/lib/glusterfs/3.7.6/xlator/cluster/disperse.so(init+0x3d2)[0xb3911d6a]
/usr/lib/i386-linux-gnu/libglusterfs.so.0(+0x1e06f)[0xb761106f]
/usr/lib/i386-linux-gnu/libglusterfs.so.0(xlator_init+0x11b)[0xb76111a0]
/usr/lib/i386-linux-gnu/libglusterfs.so.0(glusterfs_graph_init+0x39)[0xb7664ab2]
/usr/lib/i386-linux-gnu/libglusterfs.so.0(glusterfs_graph_activate+0xa5)[0xb7665592]
/usr/local/sbin/glusterfs(glusterfs_process_volfp+0x14f)[0x8051043]



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Per-client prefered server?

2016-03-03 Thread Saravanakumar Arumugam



On 03/03/2016 05:38 PM, Yannick Perret wrote:

Hello,

I can't find if it is possible to set a prefered server on a 
per-client basis for replica volumes, so I ask the question here.


The context: we have 2 storage servers, each in one building. We also 
have several virtual machines on each building, and they can migrate 
from one building to an other (depending on load, maintenance…).


So (for testing at this time) I setup a x2 replica volume, one replica 
on each storage server of course. As most of our volumes are "many 
reads - few writes" it would be better for bandwidth that each client 
uses the "nearest" storage server (local building switch) - for 
reading, of course. The 2 buildings have a good netlink but we prefer 
to minimize - when not needed - data transferts beetween them (this 
link is shared).


Can you see a solution for this kind of tuning? As far as I understand 
geo-replica is not really what I need, no?


Yes, geo-replication "cannot" be used as you wish, since that would require
carrying out "write" operations on the slave side.




It exists "cluster.read-subvolume" option of course but we can have 
clients on both building so a per-volume option is not what we need. 
An per-client equivalent of this option should be nice.


I tested by myself a small patch to perform this - and it seems to 
work fine as far as I can see - but 1. before continuing in this way I 
would first check if it exists an other way and 2. I'm not familiar 
with the whole code so I'm not sure that my tests are in the 
"state-of-the-art" for glusterfs.


maybe you should share that interesting patch :) and get better feedback 
about your test case.



Thanks in advance for any help.

Regards,
--
Y.




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Setting gfid failed on slave geo-rep node

2016-02-04 Thread Saravanakumar Arumugam

Hi,

On 02/03/2016 08:09 PM, ML mail wrote:

Dear Aravinda,

Thank you for the analysis and submitting a patch for this issue. I hope it can 
make it into the next GlusterFS release 3.7.7.


As suggested I ran the find_gfid_issues.py script on my brick on the two master 
nodes and slave nodes but the only output it shows to me is the following:

You need to run the script only on the slave.


NO GFID(DIR) : /data/myvolume-geo/brick/test
NO GFID(DIR) : /data/myvolume-geo/brick/data
NO GFID(DIR) : /data/myvolume-geo/brick/data/files_encryption
NO GFID(DIR) : /data/myvolume-geo/brick/data/username


As you can see there are no files at all. So I am still left with 394 files of 
0 kBytes on my geo-rep slave node. Do you have any suggestion how to cleanup 
this mess?
Do you mean to say the script shows only 4 directories, but there are 394
such files on the slave node?


OK, as of now there is no automatic way of cleaning up these files, and you
need to remove them manually.


You can follow these steps:

1. stop geo-replication session.

2. Get the list of all 0 kByte files and delete them (see the example
command after these steps).
It is important to ensure that no source file exists on the master
for those files.
( For example, logo-login-09.svg.ocTransferId1789604916.part is a 0 kByte
file; ensure no such source file exists on the master.

  Otherwise, you may end up deleting files which are still in sync progress.)

3. Start geo-replication session.
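
For step 2, something like this can help build the list first (brick path
taken from your earlier mails; it only prints, it does not delete):

find /data/myvolume-geo/brick -name .glusterfs -prune -o -type f -size 0 -print

Cross-check every file it reports against the master volume before removing it.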

With the patch coming in, these errors should not be encountered in future.

Thanks,
Saravana



Best regards
ML



On Tuesday, February 2, 2016 7:59 AM, Aravinda  wrote:
Hi ML,

We analyzed the issue. Looks like Changelog is replayed may be because
of Geo-rep worker crash or Active/Passive switch or both Geo-rep workers
becoming active.

 From changelogs,

CREATE  logo-login-04.svg.part
RENAME logo-login-04.svg.part logo-login-04.svg

When it is replayed,
CREATE  logo-login-04.svg.part
RENAME logo-login-04.svg.part logo-login-04.svg
CREATE  logo-login-04.svg.part
RENAME logo-login-04.svg.part logo-login-04.svg

During replay backend GFID link is broken and Geo-rep failed to cleanup.
Milind is working on the patch to fix the same. Patches are in review
and expected to be available in 3.7.8 release.

http://review.gluster.org/#/c/13316/
http://review.gluster.org/#/c/13189/

Following script can be used to find problematic file in each Brick backend.
https://gist.github.com/aravindavk/29f673f13c2f8963447e

regards
Aravinda

On 02/01/2016 08:45 PM, ML mail wrote:

Sure, I will just send it to you through an encrypted cloud storage app and 
send you the password via private mail.

Regards
ML



On Monday, February 1, 2016 3:14 PM, Saravanakumar Arumugam 
 wrote:


On 02/01/2016 07:22 PM, ML mail wrote:

I just found out I needed to run the getfattr on a mount and not on the 
glusterfs server directly. So here are the additional output you asked for:


# getfattr -n glusterfs.gfid.string  -m .  logo-login-09.svg
# file: logo-login-09.svg
glusterfs.gfid.string="1c648409-e98b-4544-a7fa-c2aef87f92ad"

# grep 1c648409-e98b-4544-a7fa-c2aef87f92ad 
/data/myvolume/brick/.glusterfs/changelogs -rn
Binary file /data/myvolume/brick/.glusterfs/changelogs/CHANGELOG.1454278219 
matches

Great!  Can you share the CHANGELOG ?  ( It contains various fops
carried out on this gfid)


Regards
ML



On Monday, February 1, 2016 1:30 PM, Saravanakumar Arumugam 
 wrote:
Hi,

On 02/01/2016 02:14 PM, ML mail wrote:

Hello,

I just set up distributed geo-replication to a slave on my 2 nodes' replicated 
volume and noticed quite a few error messages (around 70 of them) in the 
slave's brick log file:

The exact log file is: /var/log/glusterfs/bricks/data-myvolume-geo-brick.log

[2016-01-31 22:19:29.524370] E [MSGID: 113020] [posix.c:1221:posix_mknod] 
0-myvolume-geo-posix: setting gfid on 
/data/myvolume-geo/brick/data/username/files/shared/logo-login-09.svg.ocTransferId1789604916.part
 failed
[2016-01-31 22:19:29.535478] W [MSGID: 113026] [posix.c:1338:posix_mkdir] 
0-myvolume-geo-posix: mkdir 
(/data/username/files_encryption/keys/files/shared/logo-login-09.svg.ocTransferId1789604916.part):
 gfid (15bbcec6-a332-4c21-81e4-c52472b1e13d) isalready associated with 
directory 
(/data/myvolume-geo/brick/.glusterfs/49/5d/495d6868-4844-4632-8ff9-ad9646a878fe/logo-login-09.svg).
 Hence,both directories will share same gfid and thiscan lead to 
inconsistencies.

Can you grep for this gfid(of the corresponding files) in changelogs and
share those files ?

{
For example:

1. Get gfid of the files like this:

# getfattr -n glusterfs.gfid.string  -m .  /mnt/slave/file456
getfattr: Removing leading '/' from absolute path names
# file: mnt/slave/file456
glusterfs.gfid.string="05b22446-de9e-42df-a63e-399c24d690c4"

2. grep for the corresponding gfid in brick back end like below:

[root@gfvm3 changelogs]# grep 05b22446-de9e-42df-a63e-399c24d690c4
/opt/volume_test/tv_2/b1/.glusterfs/changelogs/ -rn
Binary file
/opt/v

Re: [Gluster-users] Setting gfid failed on slave geo-rep node

2016-02-01 Thread Saravanakumar Arumugam



On 02/01/2016 07:22 PM, ML mail wrote:

I just found out I needed to run the getfattr on a mount and not on the 
glusterfs server directly. So here are the additional output you asked for:


# getfattr -n glusterfs.gfid.string  -m .  logo-login-09.svg
# file: logo-login-09.svg
glusterfs.gfid.string="1c648409-e98b-4544-a7fa-c2aef87f92ad"

# grep 1c648409-e98b-4544-a7fa-c2aef87f92ad 
/data/myvolume/brick/.glusterfs/changelogs -rn
Binary file /data/myvolume/brick/.glusterfs/changelogs/CHANGELOG.1454278219 
matches
Great!  Can you share the CHANGELOG ?  ( It contains various fops 
carried out on this gfid)

Regards
ML



On Monday, February 1, 2016 1:30 PM, Saravanakumar Arumugam 
 wrote:
Hi,

On 02/01/2016 02:14 PM, ML mail wrote:

Hello,

I just set up distributed geo-replication to a slave on my 2 nodes' replicated 
volume and noticed quite a few error messages (around 70 of them) in the 
slave's brick log file:

The exact log file is: /var/log/glusterfs/bricks/data-myvolume-geo-brick.log

[2016-01-31 22:19:29.524370] E [MSGID: 113020] [posix.c:1221:posix_mknod] 
0-myvolume-geo-posix: setting gfid on 
/data/myvolume-geo/brick/data/username/files/shared/logo-login-09.svg.ocTransferId1789604916.part
 failed
[2016-01-31 22:19:29.535478] W [MSGID: 113026] [posix.c:1338:posix_mkdir] 
0-myvolume-geo-posix: mkdir 
(/data/username/files_encryption/keys/files/shared/logo-login-09.svg.ocTransferId1789604916.part):
 gfid (15bbcec6-a332-4c21-81e4-c52472b1e13d) isalready associated with 
directory 
(/data/myvolume-geo/brick/.glusterfs/49/5d/495d6868-4844-4632-8ff9-ad9646a878fe/logo-login-09.svg).
 Hence,both directories will share same gfid and thiscan lead to 
inconsistencies.

Can you grep for this gfid(of the corresponding files) in changelogs and
share those files ?

{
For example:

1. Get gfid of the files like this:

# getfattr -n glusterfs.gfid.string  -m .  /mnt/slave/file456
getfattr: Removing leading '/' from absolute path names
# file: mnt/slave/file456
glusterfs.gfid.string="05b22446-de9e-42df-a63e-399c24d690c4"

2. grep for the corresponding gfid in brick back end like below:

[root@gfvm3 changelogs]# grep 05b22446-de9e-42df-a63e-399c24d690c4
/opt/volume_test/tv_2/b1/.glusterfs/changelogs/ -rn
Binary file
/opt/volume_test/tv_2/b1/.glusterfs/changelogs/CHANGELOG.1454135265 matches
Binary file
/opt/volume_test/tv_2/b1/.glusterfs/changelogs/CHANGELOG.1454135476 matches

}
This will help in understanding what operations are carried out in
master volume, which leads to this inconsistency.

Also, get the following:
gluster version
gluster volume info
gluster volume geo-replication status


This doesn't look good at all because the file mentioned in the error message (
logo-login-09.svg.ocTransferId1789604916.part) is left there with 0 kbytes and does not 
get deleted or cleaned up by glusterfs, leaving my geo-rep slave node in an inconsistent 
state which does not reflect the reality from the master nodes. The master nodes don't 
have that file anymore (which is correct). Here below is an "ls" of the 
concerned file with the correct file on top.


-rw-r--r-- 2 www-data www-data   24312 Jan  6  2014 logo-login-09.svg
-rw-r--r-- 1 root root   0 Jan 31 23:19 
logo-login-09.svg.ocTransferId1789604916.part

Rename issues in geo-replication are fixed lately. This looks similar to

one.

Thanks,
Saravana
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Setting gfid failed on slave geo-rep node

2016-02-01 Thread Saravanakumar Arumugam

Hi,

On 02/01/2016 02:14 PM, ML mail wrote:

Hello,

I just set up distributed geo-replication to a slave on my 2 nodes' replicated 
volume and noticed quite a few error messages (around 70 of them) in the 
slave's brick log file:

The exact log file is: /var/log/glusterfs/bricks/data-myvolume-geo-brick.log

[2016-01-31 22:19:29.524370] E [MSGID: 113020] [posix.c:1221:posix_mknod] 
0-myvolume-geo-posix: setting gfid on 
/data/myvolume-geo/brick/data/username/files/shared/logo-login-09.svg.ocTransferId1789604916.part
 failed
[2016-01-31 22:19:29.535478] W [MSGID: 113026] [posix.c:1338:posix_mkdir] 
0-myvolume-geo-posix: mkdir 
(/data/username/files_encryption/keys/files/shared/logo-login-09.svg.ocTransferId1789604916.part):
 gfid (15bbcec6-a332-4c21-81e4-c52472b1e13d) isalready associated with 
directory 
(/data/myvolume-geo/brick/.glusterfs/49/5d/495d6868-4844-4632-8ff9-ad9646a878fe/logo-login-09.svg).
 Hence,both directories will share same gfid and thiscan lead to 
inconsistencies.
Can you grep for this gfid(of the corresponding files) in changelogs and 
share those files ?


{
For example:

1. Get gfid of the files like this:

# getfattr -n glusterfs.gfid.string  -m .  /mnt/slave/file456
getfattr: Removing leading '/' from absolute path names
# file: mnt/slave/file456
glusterfs.gfid.string="05b22446-de9e-42df-a63e-399c24d690c4"

2. grep for the corresponding gfid in brick back end like below:

[root@gfvm3 changelogs]# grep 05b22446-de9e-42df-a63e-399c24d690c4 
/opt/volume_test/tv_2/b1/.glusterfs/changelogs/ -rn
Binary file 
/opt/volume_test/tv_2/b1/.glusterfs/changelogs/CHANGELOG.1454135265 matches
Binary file 
/opt/volume_test/tv_2/b1/.glusterfs/changelogs/CHANGELOG.1454135476 matches


}
This will help in understanding what operations are carried out in 
master volume, which leads to this inconsistency.


Also, get the following:
gluster version
gluster volume info
gluster volume geo-replication status
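
For example, run from one of the master nodes (volume name taken from your logs):

gluster --version
gluster volume info myvolume
gluster volume geo-replication myvolume status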



This doesn't look good at all because the file mentioned in the error message (
logo-login-09.svg.ocTransferId1789604916.part) is left there with 0 kbytes and does not 
get deleted or cleaned up by glusterfs, leaving my geo-rep slave node in an inconsistent 
state which does not reflect the reality from the master nodes. The master nodes don't 
have that file anymore (which is correct). Here below is an "ls" of the 
concerned file with the correct file on top.


-rw-r--r-- 2 www-data www-data   24312 Jan  6  2014 logo-login-09.svg
-rw-r--r-- 1 root root   0 Jan 31 23:19 
logo-login-09.svg.ocTransferId1789604916.part
Rename issues in geo-replication are fixed lately. This looks similar to 
one.


Thanks,
Saravana

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] geo-replication 3.6.7 - no trusted.gfid on some slave nodes - stale file handle

2015-12-20 Thread Saravanakumar Arumugam

Hi,
Replies inline..

Thanks,
Saravana

On 12/18/2015 10:02 PM, Dietmar Putz wrote:

Hello again...

after having some big trouble with an xfs issue in kernel 3.13.0-x and 
3.19.0-39 which has been 'solved' by downgrading to 3.8.4 
(http://comments.gmane.org/gmane.comp.file-systems.xfs.general/71629)

we decided to start a new geo-replication attempt from scratch...
we have deleted the former geo-replication session and started a new 
one as described in :

http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.6

master and slave is a distributed replicated volume running on gluster 
3.6.7 / ubuntu 14.04.
setup worked as described but unfortunately geo-replication isn't 
syncing files and remains in the below shown status.


in the ~geo-replication-slaves/...gluster.log i can found on all slave 
nodes messages like :


[2015-12-16 15:06:46.837748] W [dht-layout.c:180:dht_layout_search] 
0-aut-wien-01-dht: no subvolume for hash (value) = 1448787070
[2015-12-16 15:06:46.837789] W [fuse-bridge.c:1261:fuse_err_cbk] 
0-glusterfs-fuse: 74203: SETXATTR() 
/.gfid/d4815ee4-3348-4105-9136-d0219d956ed8 => -1 (No such file or 
directory)
[2015-12-16 15:06:47.090212] I [dht-layout.c:663:dht_layout_normalize] 
0-aut-wien-01-dht: Found anomalies in (null) (gfid = 
d4815ee4-3348-4105-9136-d0219d956ed8). Holes=1 overlaps=0


[2015-12-16 20:25:55.327874] W [fuse-bridge.c:1967:fuse_create_cbk] 
0-glusterfs-fuse: 199968: /.gfid/603de79d-8d41-44bd-845e-3727cf64a617 
=> -1 (Operation not permitted)
[2015-12-16 20:25:55.617016] W [fuse-bridge.c:1967:fuse_create_cbk] 
0-glusterfs-fuse: 199971: /.gfid/8622fb7d-8909-42de-adb5-c67ed6f006c0 
=> -1 (Operation not permitted)
Please check whether SELinux is enabled on both master and slave; I remember
seeing such errors when SELinux is enabled.
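
A quick check (run on every master and slave node; standard SELinux tools,
shown here only as a sketch):

getenforce        # "Enforcing" means SELinux is active
setenforce 0      # optionally switch to Permissive until reboot, to rule SELinux out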




this is found only on gluster-wien-03-int which is in 'Hybrid Crawl' :
[2015-12-16 17:17:07.219939] W [fuse-bridge.c:1261:fuse_err_cbk] 
0-glusterfs-fuse: 123841: SETXATTR() 
/.gfid/----0001 => -1 (File exists)
[2015-12-16 17:17:07.220658] W 
[client-rpc-fops.c:306:client3_3_mkdir_cbk] 0-aut-wien-01-client-3: 
remote operation failed: File exists. Path: /2301
[2015-12-16 17:17:07.220702] W 
[client-rpc-fops.c:306:client3_3_mkdir_cbk] 0-aut-wien-01-client-2: 
remote operation failed: File exists. Path: /2301



Some errors like "file exists" can be ignored.


But first of all i would like to have a look at this message, found 
about 6000 times on gluster-wien-05-int and ~07-int which are in 
'History Crawl':
[2015-12-16 13:03:25.658359] W [fuse-bridge.c:483:fuse_entry_cbk] 
0-glusterfs-fuse: 119569: LOOKUP() 
/.gfid/d4815ee4-3348-4105-9136-d0219d956ed8/.dstXXXfDyaP9 => -1 (Stale 
file handle)


The gfid d4815ee4-3348-4105-9136-d0219d956ed8 
1050="d4815ee4-3348-4105-9136-d0219d956ed8" belongs as shown to the 
folder 1050 in the brick-directory.


any brick in the master volume looks like this one ...:
Host : gluster-ger-ber-12-int
# file: gluster-export/1050
trusted.afr.dirty=0x
trusted.afr.ger-ber-01-client-0=0x
trusted.afr.ger-ber-01-client-1=0x
trusted.afr.ger-ber-01-client-2=0x
trusted.afr.ger-ber-01-client-3=0x
trusted.gfid=0xd4815ee4334841059136d0219d956ed8
trusted.glusterfs.6a071cfa-b150-4f0b-b1ed-96ab5d4bd671.1c31dc4d-7ee3-423b-8577-c7b0ce2e356a.stime=0x5660629c7e4e 

trusted.glusterfs.6a071cfa-b150-4f0b-b1ed-96ab5d4bd671.xtime=0x567428e42116 


trusted.glusterfs.dht=0x0001aaa9

on the slave volume just the brick of wien-02 and wien-03 have the 
same trusted.gfid

Host : gluster-wien-03
# file: gluster-export/1050
trusted.afr.aut-wien-01-client-0=0x
trusted.afr.aut-wien-01-client-1=0x
trusted.gfid=0xd4815ee4334841059136d0219d956ed8
trusted.glusterfs.6a071cfa-b150-4f0b-b1ed-96ab5d4bd671.xtime=0x5638bfb5000379c0 


trusted.glusterfs.dht=0x00015554

all nodes in 'History Crawl' haven't this trusted.gfid assigned.
Host : gluster-wien-05
# file: gluster-export/1050
trusted.afr.aut-wien-01-client-2=0x
trusted.afr.aut-wien-01-client-3=0x
trusted.glusterfs.6a071cfa-b150-4f0b-b1ed-96ab5d4bd671.xtime=0x5638bfb5000379c0 


trusted.glusterfs.dht=0x0001

I'm not sure if it is normal or if that trusted.gfid should have been 
assigned on all slave nodes by the slave-upgrade.sh script.


As per the doc, it applies gfid on all slave nodes.

bash slave-upgrade.sh localhost: 
/tmp/master_gfid_file.txt $PWD/gsync-sync-gfid was running on wien-02 
which has password less login for any other slave node.
as i could see in the process list slave-upgrade.sh was running on 
each slave node and starts as far as i can remember with a 'rm -rf 
~/.glusterfs/...'
so the mentioned gfid should disappeared by the slave-upgrade.sh but 
shou

Re: [Gluster-users] after upgrade to 3.6.7 : Internal error xfs_attr3_leaf_write_verify

2015-12-06 Thread Saravanakumar Arumugam

Hi,
This seems like an XFS filesystem issue.
Can you report this error to the XFS mailing list?

Thanks,
Saravana

On 12/06/2015 05:23 AM, Julius Thomas wrote:

Dear Gluster Users,

after fixing the problem in the last mail from my colleague by 
upgrading to kernel 3.19.0-39-generic in case of changes with this bug 
in the xfs tree,

the xfs filesystem crashes again after 4 - 5 hours on several peers.

Has anyone recommendations for fixing this problems?
Are there known issues with xfs and ubuntu 14.04?

What is the latest stable release of gluster3, v3.6.3?


You can find latest gluster here.
http://download.gluster.org/pub/gluster/glusterfs/LATEST/

and follow the link here for Ubuntu:
http://download.gluster.org/pub/gluster/glusterfs/LATEST/Ubuntu/

Dec 5 21:14:48 gluster-ger-ber-11 kernel: [16564.018838] XFS (sdc1): 
Metadata corruption detected at xfs_attr3_leaf_write_verify+0xe5/0x100 
[xfs], block 0x44458e670
Dec  5 21:14:48 gluster-ger-ber-11 kernel: [16564.018879] XFS (sdc1): 
Unmount and run xfs_repair
Dec  5 21:14:48 gluster-ger-ber-11 kernel: [16564.018895] XFS (sdc1): 
First 64 bytes of corrupted metadata buffer:
Dec  5 21:14:48 gluster-ger-ber-11 kernel: [16564.018916] 
880417ff3000: 00 00 00 00 00 00 00 00 fb ee 00 00 00 00 00 00 

Dec  5 21:14:48 gluster-ger-ber-11 kernel: [16564.018956] 
880417ff3010: 10 00 00 00 00 20 0f e0 00 00 00 00 00 00 00 00 
. ..
Dec  5 21:14:48 gluster-ger-ber-11 kernel: [16564.018984] 
880417ff3020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 

Dec  5 21:14:48 gluster-ger-ber-11 kernel: [16564.019011] 
880417ff3030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 

Dec  5 21:14:48 gluster-ger-ber-11 kernel: [16564.019041] XFS (sdc1): 
xfs_do_force_shutdown(0x8) called from line 1249 of file 
/build/linux-lts-vivid-1jarlV/linux-lts-vivid-3.19.0/fs/xfs/xfs_buf.c. 
Return address = 0xc02bbd22
Dec  5 21:14:48 gluster-ger-ber-11 kernel: [16564.019044] XFS (sdc1): 
Corruption of in-memory data detected.  Shutting down filesystem
Dec  5 21:14:48 gluster-ger-ber-11 kernel: [16564.019069] XFS (sdc1): 
Please umount the filesystem and rectify the problem(s)
Dec  5 21:14:48 gluster-ger-ber-11 kernel: [16564.069906] XFS (sdc1): 
xfs_log_force: error -5 returned.
Dec  5 21:15:08 gluster-ger-ber-11 gluster-export[4447]: [2015-12-05 
21:15:08.797327] M 
[posix-helpers.c:1559:posix_health_check_thread_proc] 
0-ger-ber-01-posix: health-check failed, going down
Dec  5 21:15:18 gluster-ger-ber-11 kernel: [16594.068660] XFS (sdc1): 
xfs_log_force: error -5 returned.
Dec  5 21:15:38 gluster-ger-ber-11 gluster-export[4447]: [2015-12-05 
21:15:38.797422] M 
[posix-helpers.c:1564:posix_health_check_thread_proc] 
0-ger-ber-01-posix: still alive! -> SIGTERM
Dec  5 21:15:48 gluster-ger-ber-11 kernel: [16624.119428] XFS (sdc1): 
xfs_log_force: error -5 returned.
Dec  5 21:16:18 gluster-ger-ber-11 kernel: [16654.170134] XFS (sdc1): 
xfs_log_force: error -5 returned.
Dec  5 21:16:48 gluster-ger-ber-11 kernel: [16684.220834] XFS (sdc1): 
xfs_log_force: error -5 returned.
Dec  5 21:17:01 gluster-ger-ber-11 CRON[17656]: (root) CMD (   cd / && 
run-parts --report /etc/cron.hourly)
Dec  5 21:17:18 gluster-ger-ber-11 kernel: [16714.271507] XFS (sdc1): 
xfs_log_force: error -5 returned.
Dec  5 21:17:48 gluster-ger-ber-11 kernel: [16744.322244] XFS (sdc1): 
xfs_log_force: error -5 returned.
Dec  5 21:18:18 gluster-ger-ber-11 kernel: [16774.372948] XFS (sdc1): 
xfs_log_force: error -5 returned.
Dec  5 21:18:48 gluster-ger-ber-11 kernel: [16804.423650] XFS (sdc1): 
xfs_log_force: error -5 returned.
Dec  5 21:19:18 gluster-ger-ber-11 kernel: [16834.474365] XFS (sdc1): 
xfs_log_force: error -5 returned.
Dec  5 21:19:48 gluster-ger-ber-11 kernel: [16864.525082] XFS (sdc1): 
xfs_log_force: error -5 returned.
Dec  5 21:20:18 gluster-ger-ber-11 kernel: [16894.575778] XFS (sdc1): 
xfs_log_force: error -5 returned.
Dec  5 21:20:49 gluster-ger-ber-11 kernel: [16924.626464] XFS (sdc1): 
xfs_log_force: error -5 returned.
Dec  5 21:21:19 gluster-ger-ber-11 kernel: [16954.677161] XFS (sdc1): 
xfs_log_force: error -5 returned.
Dec  5 21:21:49 gluster-ger-ber-11 kernel: [16984.727791] XFS (sdc1): 
xfs_log_force: error -5 returned.
Dec  5 21:22:19 gluster-ger-ber-11 kernel: [17014.778570] XFS (sdc1): 
xfs_log_force: error -5 returned.
Dec  5 21:22:49 gluster-ger-ber-11 kernel: [17044.829240] XFS (sdc1): 
xfs_log_force: error -5 returned.
Dec  5 21:23:19 gluster-ger-ber-11 kernel: [17074.880003] XFS (sdc1): 
xfs_log_force: error -5 returned.
Dec  5 21:23:49 gluster-ger-ber-11 kernel: [17104.930643] XFS (sdc1): 
xfs_log_force: error -5 returned.
Dec  5 21:24:19 gluster-ger-ber-11 kernel: [17134.981336] XFS (sdc1): 
xfs_log_force: error -5 returned.
Dec  5 21:24:49 gluster-ger-ber-11 kernel: [17165.032049] XFS (sdc1): 
xfs_log_force: error -5 returned.
Dec  5 21:25:19 gluster-ger-ber-11 kernel: [17195.082689] XFS (sdc1): 
xfs_log_forc

Re: [Gluster-users] Geo-rep failing initial sync

2015-10-19 Thread Saravanakumar Arumugam

Hi Wade,

There seems to be some issue in syncing the existing data in the volume
using the Xsync crawl.
(To give some background: when geo-rep is started it performs a filesystem
crawl (Xsync) and syncs all the existing data to the slave, and then the
session switches to CHANGELOG mode.)


We are looking in to this.

Any specific reason to go for a Stripe volume? Stripe volumes have not been
extensively tested with geo-rep.


Thanks,
Saravana

On 10/19/2015 08:24 AM, Wade Fitzpatrick wrote:
The relevant portions of the log appear to be as follows. Everything 
seemed fairly normal (though quite slow) until


[2015-10-08 15:31:26.471216] I 
[master(/data/gluster1/static/brick1):1249:crawl] _GMaster: finished 
hybrid crawl syncing, stime: (1444278018, 482251)
[2015-10-08 15:31:34.39248] I 
[syncdutils(/data/gluster1/static/brick1):220:finalize] : exiting.
[2015-10-08 15:31:34.40934] I [repce(agent):92:service_loop] 
RepceServer: terminating on reaching EOF.
[2015-10-08 15:31:34.41220] I [syncdutils(agent):220:finalize] : 
exiting.
[2015-10-08 15:31:35.615353] I [monitor(monitor):362:distribute] 
: slave bricks: [{'host': 'palace', 'dir': 
'/data/gluster1/static/brick1'}, {'host': 'madonna', 'dir'

: '/data/gluster1/static/brick2'}]
[2015-10-08 15:31:35.616558] I [monitor(monitor):383:distribute] 
: worker specs: [('/data/gluster1/static/brick1', 
'ssh://root@palace:gluster://localhost:static', 1)]
[2015-10-08 15:31:35.748434] I [monitor(monitor):221:monitor] Monitor: 

[2015-10-08 15:31:35.748775] I [monitor(monitor):222:monitor] Monitor: 
starting gsyncd worker
[2015-10-08 15:31:35.837651] I [changelogagent(agent):75:__init__] 
ChangelogAgent: Agent listining...
[2015-10-08 15:31:35.841150] I 
[gsyncd(/data/gluster1/static/brick1):649:main_i] : syncing: 
gluster://localhost:static -> ssh://root@palace:gluster://localhost:static
[2015-10-08 15:31:38.543379] I 
[master(/data/gluster1/static/brick1):83:gmaster_builder] : 
setting up xsync change detection mode
[2015-10-08 15:31:38.543802] I 
[master(/data/gluster1/static/brick1):401:__init__] _GMaster: using 
'tar over ssh' as the sync engine
[2015-10-08 15:31:38.544673] I 
[master(/data/gluster1/static/brick1):83:gmaster_builder] : 
setting up xsync change detection mode
[2015-10-08 15:31:38.544924] I 
[master(/data/gluster1/static/brick1):401:__init__] _GMaster: using 
'tar over ssh' as the sync engine
[2015-10-08 15:31:38.546163] I 
[master(/data/gluster1/static/brick1):83:gmaster_builder] : 
setting up xsync change detection mode
[2015-10-08 15:31:38.546406] I 
[master(/data/gluster1/static/brick1):401:__init__] _GMaster: using 
'tar over ssh' as the sync engine
[2015-10-08 15:31:38.548989] I 
[master(/data/gluster1/static/brick1):1220:register] _GMaster: xsync 
temp directory: 
/var/lib/misc/glusterfsd/static/ssh%3A%2F%2Froot%40palace%3Agluster%3A%2F%2F127.0.0.1%3Astatic/5f45950b672b0d32fa97e00350eca862/xsync
[2015-10-08 15:31:38.549267] I 
[master(/data/gluster1/static/brick1):1220:register] _GMaster: xsync 
temp directory: 
/var/lib/misc/glusterfsd/static/ssh%3A%2F%2Froot%40palace%3Agluster%3A%2F%2F127.0.0.1%3Astatic/5f45950b672b0d32fa97e00350eca862/xsync
[2015-10-08 15:31:38.549467] I 
[master(/data/gluster1/static/brick1):1220:register] _GMaster: xsync 
temp directory: 
/var/lib/misc/glusterfsd/static/ssh%3A%2F%2Froot%40palace%3Agluster%3A%2F%2F127.0.0.1%3Astatic/5f45950b672b0d32fa97e00350eca862/xsync
[2015-10-08 15:31:38.549632] I 
[resource(/data/gluster1/static/brick1):1432:service_loop] GLUSTER: 
Register time: 1444278698
[2015-10-08 15:31:38.582277] I 
[master(/data/gluster1/static/brick1):530:crawlwrap] _GMaster: primary 
master with volume id 3f9f810d-a988-4914-a5ca-5bd7b251a273 ...
[2015-10-08 15:31:38.584099] I 
[master(/data/gluster1/static/brick1):539:crawlwrap] _GMaster: crawl 
interval: 60 seconds
[2015-10-08 15:31:38.587405] I 
[master(/data/gluster1/static/brick1):1242:crawl] _GMaster: starting 
hybrid crawl..., stime: (1444278018, 482251)
[2015-10-08 15:31:38.588735] I 
[master(/data/gluster1/static/brick1):1249:crawl] _GMaster: finished 
hybrid crawl syncing, stime: (1444278018, 482251)
[2015-10-08 15:31:38.590116] I 
[master(/data/gluster1/static/brick1):530:crawlwrap] _GMaster: primary 
master with volume id 3f9f810d-a988-4914-a5ca-5bd7b251a273 ...
[2015-10-08 15:31:38.591582] I 
[master(/data/gluster1/static/brick1):539:crawlwrap] _GMaster: crawl 
interval: 60 seconds
[2015-10-08 15:31:38.593844] I 
[master(/data/gluster1/static/brick1):1242:crawl] _GMaster: starting 
hybrid crawl..., stime: (1444278018, 482251)
[2015-10-08 15:31:38.594832] I 
[master(/data/gluster1/static/brick1):1249:crawl] _GMaster: finished 
hybrid crawl syncing, stime: (1444278018, 482251)
[2015-10-08 15:32:38.641908] I 
[master(/data/gluster1/static/brick1):552:crawlwrap] _GMaster: 1 
crawls, 0 turns
[2015-10-08 15:32:38.644370] I 
[master(/data/gluster1/static/brick1):1242:crawl] _GMaster: starting 
hybrid crawl..., stime: 

Re: [Gluster-users] Geo-rep failing initial sync

2015-10-19 Thread Saravanakumar Arumugam

Please check 'gluster volume status'.

If the corresponding brick is down or its glusterfsd process has crashed,
the Faulty state is observed.
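
For example, with the volumes from this thread (both the master and the
slave volume are named "static", so run the first command on each cluster):

gluster volume status static
gluster volume geo-replication static gluster-b1::static status detail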


Thanks,
Saravana

On 10/19/2015 09:59 AM, Wade Fitzpatrick wrote:
I have now tried to re-initialise the whole geo-rep setup but the 
replication slave went Faulty immediately. Any help here would be 
appreciated, I cannot even find how to recover a faulty node without 
recreating the geo-rep.


root@james:~# gluster volume geo-replication static gluster-b1::static 
stop
Stopping geo-replication session between static & gluster-b1::static 
has been successful
root@james:~# gluster volume geo-replication static gluster-b1::static 
delete
Deleting geo-replication session between static & gluster-b1::static 
has been successful


I then destroyed the volume and re-created bricks on 
gluster-b1::static slave volume.


root@palace:~# gluster volume stop static
Stopping volume will make its data inaccessible. Do you want to 
continue? (y/n) y

volume stop: static: success
root@palace:~# gluster volume delete static
Deleting volume will erase all information about the volume. Do you 
want to continue? (y/n) y

volume delete: static: success

root@palace:~# gluster volume create static stripe 2 transport tcp 
palace:/data/gluster1/static/brick1 madonna:/data/gluster1/static/brick2

volume create: static: success: please start the volume to access data
root@palace:~# gluster volume info

Volume Name: static
Type: Stripe
Volume ID: dc14cd83-2736-4faf-8e11-c6d711ff8f56
Status: Created
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: palace:/data/gluster1/static/brick1
Brick2: madonna:/data/gluster1/static/brick2
Options Reconfigured:
performance.readdir-ahead: on
root@palace:~# gluster volume start static
volume start: static: success


Then I established the geo-rep sync again:

root@james:~# gluster volume geo-replication static 
ssh://gluster-b1::static create
Creating geo-replication session between static & 
ssh://gluster-b1::static has been successful
root@james:~# gluster volume geo-replication static 
ssh://gluster-b1::static config use_meta_volume true

geo-replication config updated successfully
root@james:~# gluster volume geo-replication static 
ssh://gluster-b1::static config use-tarssh true

geo-replication config updated successfully
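
Note that use_meta_volume expects the shared meta volume (gluster_shared_storage) 
to exist and be mounted on the master nodes. If it is not set up yet, something 
like the following should create it (a sketch, assuming the cluster-wide 
shared-storage option is available in this release):

# Creates and mounts the gluster_shared_storage volume across the cluster
gluster volume set all cluster.enable-shared-storage enable
# Verify it exists and is started
gluster volume info gluster_shared_storage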

root@james:~# gluster volume geo-replication static 
ssh://gluster-b1::static config

special_sync_mode: partial
state_socket_unencoded: 
/var/lib/glusterd/geo-replication/static_gluster-b1_static/ssh%3A%2F%2Froot%40palace%3Agluster%3A%2F%2F127.0.0.1%3Astatic.socket
gluster_log_file: 
/var/log/glusterfs/geo-replication/static/ssh%3A%2F%2Froot%40palace%3Agluster%3A%2F%2F127.0.0.1%3Astatic.gluster.log
ssh_command: ssh -oPasswordAuthentication=no 
-oStrictHostKeyChecking=no -i 
/var/lib/glusterd/geo-replication/secret.pem

use_tarssh: true
ignore_deletes: false
change_detector: changelog
gluster_command_dir: /usr/sbin/
state_file: 
/var/lib/glusterd/geo-replication/static_gluster-b1_static/ssh%3A%2F%2Froot%40palace%3Agluster%3A%2F%2F127.0.0.1%3Astatic.status

remote_gsyncd: /nonexistent/gsyncd
log_file: 
/var/log/glusterfs/geo-replication/static/ssh%3A%2F%2Froot%40palace%3Agluster%3A%2F%2F127.0.0.1%3Astatic.log
changelog_log_file: 
/var/log/glusterfs/geo-replication/static/ssh%3A%2F%2Froot%40palace%3Agluster%3A%2F%2F127.0.0.1%3Astatic-changes.log

socketdir: /var/run/gluster
working_dir: 
/var/lib/misc/glusterfsd/static/ssh%3A%2F%2Froot%40palace%3Agluster%3A%2F%2F127.0.0.1%3Astatic
state_detail_file: 
/var/lib/glusterd/geo-replication/static_gluster-b1_static/ssh%3A%2F%2Froot%40palace%3Agluster%3A%2F%2F127.0.0.1%3Astatic-detail.status

use_meta_volume: true
ssh_command_tar: ssh -oPasswordAuthentication=no 
-oStrictHostKeyChecking=no -i 
/var/lib/glusterd/geo-replication/tar_ssh.pem
pid_file: 
/var/lib/glusterd/geo-replication/static_gluster-b1_static/ssh%3A%2F%2Froot%40palace%3Agluster%3A%2F%2F127.0.0.1%3Astatic.pid
georep_session_working_dir: 
/var/lib/glusterd/geo-replication/static_gluster-b1_static/

gluster_params: aux-gfid-mount acl

root@james:~# gluster volume geo-replication static 
ssh://gluster-b1::static start
Geo-replication session between static and ssh://gluster-b1::static 
does not exist.

geo-replication command failed
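
One thing worth trying at this point (a sketch, not verified against this exact 
setup): re-create the session with push-pem, adding force if the pem files are 
already in place, and then start it:

gluster volume geo-replication static gluster-b1::static create push-pem force
gluster volume geo-replication static gluster-b1::static start
gluster volume geo-replication static gluster-b1::static status detail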
root@james:~# gluster volume geo-replication static 
ssh://gluster-b1::static status detail


MASTER NODE    MASTER VOL    MASTER BRICK                    SLAVE USER    SLAVE                  SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED    ENTRY    DATA    META    FAILURES    CHECKPOINT TIME    CHECKPOINT COMPLETED    CHECKPOINT COMPLETION TIME
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
james          static        /data/gluster1/static/brick1    root          ssh://gluster-b1::s

Re: [Gluster-users] problems with geo-replication on 3.7.4

2015-09-22 Thread Saravanakumar Arumugam

Hi,
Replies inline.

Thanks,
Saravana

On 09/21/2015 03:56 PM, ML mail wrote:

That's right, the earlier error I posted with ZFS actually only appeared 
during the setup of the geo-replication and does not appear anymore. In fact 
ZFS does not report an inode size the way ext4/XFS do, so I guess you would 
need to adapt the GlusterFS code to check whether the FS is ZFS or not.

Now regarding the "/nonexistent/gsyncd: No such file or directory" error: I have 
manually fixed it by editing the gsyncd_template.conf file on all nodes; I guess creating 
a symlink as you suggest would have also worked. Shouldn't this work out of the box, btw?

Yes, it should work without any issues. I think these issues crop up due to 
differences in environment.


Anyway, we will look into this.
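
For reference, instead of editing gsyncd_template.conf on every node, the remote 
gsyncd path can usually be overridden per session through geo-rep config (a 
sketch; the path below is the typical packaged location and may differ on Debian):

gluster volume geo-replication reptest gfs1geo::reptest config remote-gsyncd /usr/libexec/glusterfs/gsyncd
gluster volume geo-replication reptest gfs1geo::reptest start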

Another informational error message I have seen in the log file on my slave 
(/var/log/glusterfs/geo-replication-slaves) is the following:

[2015-09-21 10:21:12.646161] I [dict.c:473:dict_get] 
(-->/usr/lib/x86_64-linux-gnu/glusterfs/3.7.4/xlator/system/posix-acl.so(posix_acl_setxattr_cbk+0x26)
 [0x7fa8a24e7166] 
-->/usr/lib/x86_64-linux-gnu/glusterfs/3.7.4/xlator/system/posix-acl.so(handling_other_acl_related_xattr+0xb0)
 [0x7fa8a24e70f0] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get+0x93) 
[0x7fa8a96c9093] ) 0-dict: !this || key=system.posix_acl_default [Invalid argument]

It appears every minute, and I wanted to check whether this is a bug, or how 
bad it is.
Is the synchronization happening properly? (Have you copied some files into the master 
and verified the contents on the slave side?)

Please verify it.
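
For example (a quick check, assuming the master and slave volumes are mounted 
at /mnt/master and /mnt/slave):

cp /etc/hosts /mnt/master/
sleep 60        # give the changelog crawl a cycle to pick the file up
ls -l /mnt/slave/hosts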

Please share the complete log. This may be an issue with ZFS 
(again, I am only guessing).


Finally, I have set up everything as described in the GitHub documentation, using the mountbroker and a 
separate user for replication, but when I run "gluster volume geo-replication status" I still see 
"root" under "SLAVE USER". Is this normal?

So to summarise, I've got geo-replication set up, but it's quite patchy and messy 
and does not run under the dedicated replication user I wanted it to run under.

To my knowledge, it should display the specific user you have set up.
Please share the complete command details and logs. (Also, review all your 
commands to check whether everything is set up as documented.)


On Monday, September 21, 2015 8:07 AM, Saravanakumar Arumugam 
 wrote:
Replies inline.

On 09/19/2015 03:37 PM, ML mail wrote:

So yes indeed I am using ZFS on Linux v.0.6.5 as filesystem behind Gluster. As 
operating system I use Debian 8.2 GNU/Linux.


I also followed that documentation you mention in order to enable POSIX acltype 
for example on my ZFS volume.

I checked and on my two bricks as well as my slave I have the coreutils package 
with its stat util. I have read quite a few posts of people using ZFS with 
Gluster and this should not be a problem. Or is this maybe a new bug in 
GlusterFS?

By checking the log file
/var/log/glusterfs/geo-replication/reptest/ssh%3A%2F%2Froot%40192.168.40.3%3Agluster%3A%2F%2F127.0.0.1%3Areptest.log
 I saw the following error message which might help to debug this issue:

[2015-09-18 23:41:09.646944] E [resource(/data/reptest/brick):226:logerr] Popen: 
ssh> bash: /nonexistent/gsyncd: No such file or directory

Does this ring any bells?


Do you mean to say the earlier error "could not find (null) to
getinode size for data" is gone and now you are getting this error?

Check whether these steps help you:
http://irclog.perlgeek.de/gluster/2015-01-08#i_9903500

Please share the complete log if you still face any issues.

Also, report back if it helps, so that we can fix it here.



On Saturday, September 19, 2015 6:18 AM, Saravanakumar Arumugam 
 wrote:
Hi,

The underlying filesystem you use seems to be ZFS.

I don't have much experience with ZFS. You may want to check this link:
http://www.gluster.org/community/documentation/index.php/GlusterOnZFS

As far as the error is concerned, it is trying to use the stat command to
get the inode details.
(The stat command is provided by coreutils, which is quite a basic package.)

Could you share your system details? Is it a Linux system?

PS:
XFS is the recommended and widely tested filesystem for GlusterFS.

Thanks,
Saravana



On 09/19/2015 03:03 AM, ML mail wrote:

Hello,

I am trying in vain to set up geo-replication, now on version 3.7.4 of GlusterFS, 
but it still does not seem to work. I have at least managed to run georepsetup 
successfully using the following command:


georepsetup reptest gfsgeo@gfs1geo reptest

But as soon as I run:


gluster volume geo-replication reptest gfs1geo::reptest start

I see the following error messages every 2 minutes in 
/var/log/glusterfs/etc-glusterfs-glusterd.vol.log:

[2015-09-18 21:27:26.341524] I [MSGID: 106488] 
[glusterd-handler.c:1463:__glusterd_handle_cli_get_volume] 0-glusterd: Received 
get vol req
[2015-09-18 21:27:26.474240] I [

Re: [Gluster-users] problems with geo-replication on 3.7.4

2015-09-20 Thread Saravanakumar Arumugam

Replies inline.

On 09/19/2015 03:37 PM, ML mail wrote:

So yes indeed I am using ZFS on Linux v.0.6.5 as filesystem behind Gluster. As 
operating system I use Debian 8.2 GNU/Linux.


I also followed that documentation you mention in order to enable POSIX acltype 
for example on my ZFS volume.

I checked and on my two bricks as well as my slave I have the coreutils package 
with its stat util. I have read quite a few posts of people using ZFS with 
Gluster and this should not be a problem. Or is this maybe a new bug in 
GlusterFS?

By checking the log file
/var/log/glusterfs/geo-replication/reptest/ssh%3A%2F%2Froot%40192.168.40.3%3Agluster%3A%2F%2F127.0.0.1%3Areptest.log
 I saw the following error message which might help to debug this issue:

[2015-09-18 23:41:09.646944] E [resource(/data/reptest/brick):226:logerr] Popen: 
ssh> bash: /nonexistent/gsyncd: No such file or directory

Does this ring any bells?

Do you mean to say the earlier error "could not find (null) to 
getinode size for data" is gone and now you are getting this error?


Check whether these steps help you:
http://irclog.perlgeek.de/gluster/2015-01-08#i_9903500

Please share the complete log if you still face any issues.

Also, report back if it helps, so that we can fix it here.



On Saturday, September 19, 2015 6:18 AM, Saravanakumar Arumugam 
 wrote:
Hi,

The underlying filesystem you use seems to be ZFS.

I don't have much experience with ZFS. You may want to check this link:
http://www.gluster.org/community/documentation/index.php/GlusterOnZFS

As far as the error is concerned, it is trying to use the stat command to
get the inode details.
(The stat command is provided by coreutils, which is quite a basic package.)

Could you share your system details? Is it a Linux system?

PS:
XFS is the recommended and widely tested filesystem for GlusterFS.

Thanks,
Saravana



On 09/19/2015 03:03 AM, ML mail wrote:

Hello,

I am trying in vain to set up geo-replication, now on version 3.7.4 of GlusterFS, 
but it still does not seem to work. I have at least managed to run georepsetup 
successfully using the following command:


georepsetup reptest gfsgeo@gfs1geo reptest

But as soon as I run:


gluster volume geo-replication reptest gfs1geo::reptest start

I see the following error messages every 2 minutes in 
/var/log/glusterfs/etc-glusterfs-glusterd.vol.log:

[2015-09-18 21:27:26.341524] I [MSGID: 106488] 
[glusterd-handler.c:1463:__glusterd_handle_cli_get_volume] 0-glusterd: Received 
get vol req
[2015-09-18 21:27:26.474240] I [MSGID: 106499] 
[glusterd-handler.c:4258:__glusterd_handle_status_volume] 0-management: 
Received status volume req for volume reptest
[2015-09-18 21:27:26.475231] E [MSGID: 106419] 
[glusterd-utils.c:4972:glusterd_add_inode_size_to_dict] 0-management: could not 
find (null) to getinode size for data/reptest (zfs): (null) package missing?


and nothing really happens.

Does anyone have an idea what's wrong now?

Regards
ML
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] problems with geo-replication on 3.7.4

2015-09-18 Thread Saravanakumar Arumugam

Hi,

The underlying filesystem you use seems to be ZFS.

I don't have much experience with ZFS. You may want to check this link:
http://www.gluster.org/community/documentation/index.php/GlusterOnZFS

As far as the error is concerned, it is trying to use the stat command to 
get the inode details.

(The stat command is provided by coreutils, which is quite a basic package.)

Could you share your system details? Is it a Linux system?

PS:
XFS is the recommended and widely tested filesystem for GlusterFS.
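
If it does turn out to be ZFS, a couple of checks that may help (a sketch; 
<pool>/<dataset> is a placeholder for the dataset backing the brick):

# Confirm the filesystem type backing the brick path
stat -f -c %T /data/reptest/brick
# ZFS datasets used as bricks generally need POSIX ACLs and xattrs enabled
zfs set acltype=posixacl <pool>/<dataset>
zfs set xattr=sa <pool>/<dataset>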

Thanks,
Saravana


On 09/19/2015 03:03 AM, ML mail wrote:

Hello,

I am trying in vain to set up geo-replication, now on version 3.7.4 of GlusterFS, 
but it still does not seem to work. I have at least managed to run georepsetup 
successfully using the following command:


georepsetup reptest gfsgeo@gfs1geo reptest

But as soon as I run:


gluster volume geo-replication reptest gfs1geo::reptest start

I see the following error messages every 2 minutes in 
/var/log/glusterfs/etc-glusterfs-glusterd.vol.log:

[2015-09-18 21:27:26.341524] I [MSGID: 106488] 
[glusterd-handler.c:1463:__glusterd_handle_cli_get_volume] 0-glusterd: Received 
get vol req
[2015-09-18 21:27:26.474240] I [MSGID: 106499] 
[glusterd-handler.c:4258:__glusterd_handle_status_volume] 0-management: 
Received status volume req for volume reptest
[2015-09-18 21:27:26.475231] E [MSGID: 106419] 
[glusterd-utils.c:4972:glusterd_add_inode_size_to_dict] 0-management: could not 
find (null) to getinode size for data/reptest (zfs): (null) package missing?


and nothing really happens.

Does anyone have an idea what's wrong now?

Regards
ML
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Setting up geo replication with GlusterFS 3.6.5

2015-09-15 Thread Saravanakumar Arumugam

Hi,
Replies inline.


On 09/16/2015 01:34 AM, ML mail wrote:

Thanks for your detailed example. Based on that, it looks like my issue is 
SSH-based. Now I have the following two SSH-related questions:

1) For the passwordless SSH account on the slave, does it need to use 
the same SSH public key as stored by GlusterFS in the 
/var/lib/glusterd/geo-replication directory, or can I simply generate my own 
with ssh-keygen?

You can simply generate your own with ssh-keygen.

You should be able to log in from the master node to the slave node without a password.
(# ssh root@ ) That's it.
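
For example (a minimal sketch; replace <slave-host> with your slave node):

ssh-keygen                        # on the master node, accept the defaults
ssh-copy-id root@<slave-host>     # copy the public key to the slave
ssh root@<slave-host> hostname    # should work without a password prompt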



2) Is it possible to use a user other than root for geo-replication with 
GlusterFS v3.6?
It is supported in 3.6.5. It involves more steps in addition to the ones 
mentioned below (which are for the root user).
Please refer to the link you 
mentioned (http://www.gluster.org/pipermail/gluster-users.old/2015-January/020080.html).




Regards
ML



On Tuesday, September 15, 2015 9:16 AM, Saravanakumar Arumugam 
 wrote:
Hi,
You are right, this tool may not be compatible with 3.6.5.

I tried it myself with 3.6.5, but hit this error.
==
georepsetup tv1 gfvm3 tv2
Geo-replication session will be established between tv1 and gfvm3::tv2
Root password of gfvm3 is required to complete the setup. NOTE: Password
will not be stored.

root@gfvm3's password:
[OK] gfvm3 is Reachable(Port 22)
[OK] SSH Connection established root@gfvm3
[OK] Master Volume and Slave Volume are compatible (Version: 3.6.5)
[OK] Common secret pub file present at
/var/lib/glusterd/geo-replication/common_secret.pem.pub
[OK] common_secret.pem.pub file copied to gfvm3
[OK] Master SSH Keys copied to all Up Slave nodes
[OK] Updated Master SSH Keys to all Up Slave nodes authorized_keys file
[NOT OK] Failed to Establish Geo-replication Session
Command type not found while handling geo-replication options
[root@gfvm3 georepsetup]#
==
So, some more changes are required in this tool.


Coming back to your question:

I have set up geo-replication using the commands in 3.6.5.
Please recheck all the commands (with the necessary changes at your end).


[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# cat /etc/redhat-release
Fedora release 21 (Twenty One)
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# rpm -qa | grep glusterfs
glusterfs-devel-3.6.5-1.fc21.x86_64
glusterfs-3.6.5-1.fc21.x86_64
glusterfs-rdma-3.6.5-1.fc21.x86_64
glusterfs-fuse-3.6.5-1.fc21.x86_64
glusterfs-server-3.6.5-1.fc21.x86_64
glusterfs-debuginfo-3.6.5-1.fc21.x86_64
glusterfs-libs-3.6.5-1.fc21.x86_64
glusterfs-extra-xlators-3.6.5-1.fc21.x86_64
glusterfs-geo-replication-3.6.5-1.fc21.x86_64
glusterfs-api-3.6.5-1.fc21.x86_64
glusterfs-api-devel-3.6.5-1.fc21.x86_64
glusterfs-cli-3.6.5-1.fc21.x86_64
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# service glusterd start
Redirecting to /bin/systemctl start  glusterd.service
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# service glusterd status
Redirecting to /bin/systemctl status  glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
 Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled)
 Active: active (running) since Tue 2015-09-15 12:19:32 IST; 4s ago
Process: 2778 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
(code=exited, status=0/SUCCESS)
   Main PID: 2779 (glusterd)
 CGroup: /system.slice/glusterd.service
 └─2779 /usr/sbin/glusterd -p /var/run/glusterd.pid
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# ps aux | grep glus
root  2779  0.0  0.4 448208 17288 ?Ssl  12:19   0:00
/usr/sbin/glusterd -p /var/run/glusterd.pid
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# gluster volume create tv1
gfvm3:/opt/volume_test/tv_1/b1 gfvm3:/opt/volume_test/tv_1/b2 force
volume create: tv1: success: please start the volume to access data
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# gluster volume create tv2
gfvm3:/opt/volume_test/tv_2/b1 gfvm3:/opt/volume_test/tv_2/b2 force
volume create: tv2: success: please start the volume to access data
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#  gluster volume start tv1
volume start: tv1: success
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# gluster volume start tv2
volume start: tv2: success
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# mount -t glusterfs gfvm3:/tv1 /mnt/master/
[root@gfvm3 georepsetup]# mount -t glusterfs gfvm3:/tv2 /mnt/slave/
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# gluster system:: execute gsec_create
Common secret pub file present at
/var/lib/glusterd/geo-replication/common_secret.pem.pub

Re: [Gluster-users] Setting up geo replication with GlusterFS 3.6.5

2015-09-15 Thread Saravanakumar Arumugam
3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# gluster volume geo-replication tv1 gfvm3::tv2 
status


MASTER NODE    MASTER VOL    MASTER BRICK                SLAVE         STATUS    CHECKPOINT STATUS    CRAWL STATUS
-------------------------------------------------------------------------------------------------------------------
gfvm3          tv1           /opt/volume_test/tv_1/b1    gfvm3::tv2    Active    N/A                  Changelog Crawl
gfvm3          tv1           /opt/volume_test/tv_1/b2    gfvm3::tv2    Active    N/A                  Changelog Crawl

[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# cp /etc/hosts
hosts  hosts.allow  hosts.deny
[root@gfvm3 georepsetup]# cp /etc/hosts* /mnt/master; sleep 20; ls 
/mnt/slave/

hosts  hosts.allow  hosts.deny
[root@gfvm3 georepsetup]# ls /mnt/master
hosts  hosts.allow  hosts.deny
[root@gfvm3 georepsetup]# ls /mnt/slave/
hosts  hosts.allow  hosts.deny
[root@gfvm3 georepsetup]#

One important step which I have NOT mentioned: you need to set up 
passwordless SSH.

You need to use ssh-keygen and ssh-copy-id to get a passwordless setup 
from one node on the master to one node on the slave.
This needs to be carried out before this step: "gluster system:: execute gsec_create", 
and it has to be done on the same node (on the master) where you execute the 
geo-rep create command.
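
Put together, the order is roughly as follows (a sketch using the tv1/tv2 names 
from this session; <slave-node> is a placeholder):

ssh-keygen                                       # on the master node, if no key exists yet
ssh-copy-id root@<slave-node>                    # passwordless SSH from master to slave
gluster system:: execute gsec_create             # on that same master node
gluster volume geo-replication tv1 gfvm3::tv2 create push-pem
gluster volume geo-replication tv1 gfvm3::tv2 start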


You can find geo-replication related logs here 
:/var/log/glusterfs/geo-replication/

Please share the logs if you still face any issues.

Thanks,
Saravana


On 09/14/2015 11:23 PM, ML mail wrote:

Yes, I can ping the slave node by its name and IP address; I've even entered 
its name manually in /etc/hosts.

Does this nice python script also work for Gluster 3.6? The blog post only 
speaks about 3.7...

Regards
ML




On Monday, September 14, 2015 9:38 AM, Saravanakumar Arumugam 
 wrote:
Hi,

<< Unable to fetch slave volume details. Please check the slave cluster
and slave volume. geo-replication command failed
Have you checked whether you are able to reach the Slave node from the
master node?

There is a super simple way of setting up geo-rep written by Aravinda.
Refer:
http://blog.gluster.org/2015/09/introducing-georepsetup-gluster-geo-replication-setup-tool-2/

Refer to the README for both the usual (root-user-based) and
mountbroker (non-root) setup details here:
https://github.com/aravindavk/georepsetup/blob/master/README.md

Thanks,
Saravana



On 09/13/2015 09:46 PM, ML mail wrote:

Hello,

I am using the following documentation in order to set up geo-replication 
between two sites:
http://www.gluster.org/pipermail/gluster-users.old/2015-January/020080.html

Unfortunately the step:

gluster volume geo-replication myvolume gfs...@gfs1geo.domain.com::myvolume 
create push-pem

Fails with the following error:

Unable to fetch slave volume details. Please check the slave cluster and slave 
volume.
geo-replication command failed

Any ideas?

btw: the documentation
http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Geo%20Replication/index.html
does not seem to work with GlusterFS 3.6.5, which is why I am using the other 
documentation mentioned above. It fails at the mountbroker step (
gluster system:: execute mountbroker opt mountbroker-root /var/mountbroker-root).

Regards
ML


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Setting up geo replication with GlusterFS 3.6.5

2015-09-14 Thread Saravanakumar Arumugam

Hi,

<< Unable to fetch slave volume details. Please check the slave cluster 
and slave volume. geo-replication command failed
Have you checked whether you are able to reach the Slave node from the 
master node?


There is a super simple way of setting up geo-rep written by Aravinda.
Refer:
http://blog.gluster.org/2015/09/introducing-georepsetup-gluster-geo-replication-setup-tool-2/

Refer to the README for both the usual (root-user-based) and 
mountbroker (non-root) setup details here:

https://github.com/aravindavk/georepsetup/blob/master/README.md

Thanks,
Saravana


On 09/13/2015 09:46 PM, ML mail wrote:

Hello,

I am using the following documentation in order to set up geo-replication 
between two sites:
http://www.gluster.org/pipermail/gluster-users.old/2015-January/020080.html

Unfortunately the step:

gluster volume geo-replication myvolume gfs...@gfs1geo.domain.com::myvolume 
create push-pem

Fails with the following error:

Unable to fetch slave volume details. Please check the slave cluster and slave 
volume.
geo-replication command failed

Any ideas?

btw: the documentation
http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Geo%20Replication/index.html
does not seem to work with GlusterFS 3.6.5, which is why I am using the other 
documentation mentioned above. It fails at the mountbroker step (
gluster system:: execute mountbroker opt mountbroker-root /var/mountbroker-root).

Regards
ML

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Plans for Gluster 3.8

2015-08-14 Thread Saravanakumar Arumugam

Hi Atin/Kaushal,
I am interested in taking up the "selective read-only mode" feature (Bug #829042).
I will look into this and talk to you further.

Thanks,
Saravana

On 08/13/2015 08:58 PM, Atin Mukherjee wrote:


Can we have some volunteers of these BZs?

-Atin
Sent from one plus one

On Aug 12, 2015 12:34 PM, "Kaushal M" wrote:


Hi Csaba,

These are the updates regarding the requirements, after our meeting
last week. The specific updates on the requirements are inline.

In general, we feel that the requirements for selective read-only mode
and immediate disconnection of clients on access revocation are doable
for GlusterFS-3.8. The only problem right now is that we do not have
any volunteers for it.

> 1.Bug 829042 - [FEAT] selective read-only mode
> https://bugzilla.redhat.com/show_bug.cgi?id=829042
>
>   absolutely necessary for not getting tarred & feathered in
Tokyo ;)
>   either resurrect http://review.gluster.org/3526
>   and _find out integration with auth mechanism for special
>   mounts_, or come up with a completely different concept
>

With the availability of client_t, implementing this should become
easier. The server xlator would store the incoming connection's common
name or address in the client_t associated with the connection. The
read-only xlator could then make use of this information to
selectively allow read-only clients. The read-only xlator would need
to implement a new option for selective read-only, which would be
populated with lists of common-names and addresses of clients which
would get read-only access.
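
For comparison, the read-only mode that exists today is a plain volume option; 
the selective variant discussed here would extend it with per-client lists 
('myvol' below is just a placeholder):

# Existing behaviour: the whole volume becomes read-only for every client
gluster volume set myvol features.read-only on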

> 2.Bug 1245380 - [RFE] Render all mounts of a volume defunct
upon access revocation
> https://bugzilla.redhat.com/show_bug.cgi?id=1245380
>
>   necessary to let us enable a watershed scalability
>   enhancement
>

Currently, when auth.allow/reject and auth.ssl-allow options are
changed, the server xlator does a reconfigure to reload its access
list. It just does a reload, and doesn't affect any existing
connections. To bring this feature in, the server xlator would need to
iterate through its xprt_list and check every connection for
authorization again on a reconfigure. Those connections which have
lost authorization would be disconnected.
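
For illustration, these are the kind of options whose reconfigure would need to 
trigger the re-check (placeholder volume name and address patterns):

gluster volume set myvol auth.allow 192.168.10.*
gluster volume set myvol auth.reject 10.0.0.*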

> 3.Bug 1226776 – [RFE] volume capability query
> https://bugzilla.redhat.com/show_bug.cgi?id=1226776
>
>   eventually we'll be choking in spaghetti if we don't get
>   this feature. The ugly version checks we need to do against
>   GlusterFS as in
>
>

https://review.openstack.org/gitweb?p=openstack/manila.git;a=commitdiff;h=29456c#patch3
>
>   will proliferate and eat the guts of the code out of its
>   living body if this is not addressed.
>

This requires some more thought to figure out the correct solution.
One possible way to get the capabilities of the cluster would be to
look at the clusters running op-version. This can be obtained using
`gluster volume get all cluster.op-version` (the volume get command is
available in glusterfs-3.6 and above). But this doesn't provide much
improvement over the existing checks being done in the driver.
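
For example:

# Returns the cluster-wide operating version (available from glusterfs-3.6 onwards)
gluster volume get all cluster.op-version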

___
Gluster-devel mailing list
gluster-de...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Looking for Fedora Package Maintainers.

2015-06-24 Thread Saravanakumar Arumugam

<humble.deva...@gmail.com> wrote:


Hi All,

As we maintain 3 releases (currently 3.5, 3.6 and 3.7) of
GlusterFS, with an average of one release per week, we need
more helping hands for this task.

The responsibility includes building Fedora and EPEL RPMs using the Koji
build system and deploying the RPMs to download.gluster.org [1]
after signing them and creating the repos.

If anyone is interested in helping us maintain Fedora GlusterFS
packaging, please let us (kkeithley, ndevos or myself) know.


I'm interested in helping with / maintaining Gluster packaging.

Best Regards,
Vishwanath


[1] http://download.gluster.org/pub/gluster/glusterfs/

--Humble


Add my name to the list of volunteers.

Raghavendra Talur


Hi Humble,
You can count me in too for any related help.

Thanks,
Saravana

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] geo-replication & directory move/rename

2015-06-23 Thread Saravanakumar Arumugam

Thanks Vijay and Gabriel.
Sorry, I verified different hash/cache scenarios but missed 
verifying directory rename.

<< The issue only seems to affect directories.
Directories are created on all bricks and are different from files (where 
the cache/hash scenario applies).


--
Saravanakumar

On 06/23/2015 04:41 AM, Vijay Bellur wrote:

On Monday 22 June 2015 06:09 PM, Gabriel Kuri wrote:

OK, so I just tested this in 3.6.3 and directory moves/renames seem to
be working fine, so it seems like something happened somewhere between
3.6.3 and 3.7.2 that broke it?



Possibly this commit broke it in 3.7:

commit f1ac02a52f4019e7890ce501af7e825ef703d14d
Author: Saravanakumar Arumugam 
Date:   Tue May 5 17:03:39 2015 +0530

geo-rep: rename handling in dht volume(changelog changes)


I have sent out a patch for review [1] which should address the problem.
Thanks for the report!

-Vijay

[1] http://review.gluster.org/11356

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users