Re: [Gluster-devel] [master] FAILED: bug-1303028-Rebalance-glusterd-rpc-connection-issue.t

2016-03-07 Thread Raghavendra Gowdappa
+rafi.

Rafi, can you do an initial analysis of this?

regards,
Raghavendra.

- Original Message -
> From: "Milind Changire" 
> To: gluster-devel@gluster.org
> Sent: Tuesday, March 8, 2016 12:53:27 PM
> Subject: [Gluster-devel] [master] FAILED: 
> bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
> 
> ==
> Running tests in file
> ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
> [07:27:48]
> ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
> ..
> not ok 11 Got "1" instead of "0"
> not ok 14 Got "1" instead of "0"
> not ok 15 Got "1" instead of "0"
> not ok 16 Got "1" instead of "0"
> Failed 4/16 subtests
> [07:27:48]
> 
> Test Summary Report
> ---
> ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
> (Wstat: 0 Tests: 16 Failed: 4)
>   Failed tests:  11, 14-16
> Files=1, Tests=16, 23 wallclock secs ( 0.02 usr  0.00 sys +  1.13 cusr  0.39
> csys =  1.54 CPU)
> Result: FAIL
> End of test
> ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
> ==
> 
> Please advise.
> 
> --
> Milind
> 


[Gluster-devel] [master] FAILED: bug-1303028-Rebalance-glusterd-rpc-connection-issue.t

2016-03-07 Thread Milind Changire
==
Running tests in file 
./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
[07:27:48] 
./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t .. 
not ok 11 Got "1" instead of "0"
not ok 14 Got "1" instead of "0"
not ok 15 Got "1" instead of "0"
not ok 16 Got "1" instead of "0"
Failed 4/16 subtests 
[07:27:48]

Test Summary Report
---
./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t 
(Wstat: 0 Tests: 16 Failed: 4)
  Failed tests:  11, 14-16
Files=1, Tests=16, 23 wallclock secs ( 0.02 usr  0.00 sys +  1.13 cusr  0.39 
csys =  1.54 CPU)
Result: FAIL
End of test 
./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
==

Please advise.

--
Milind



Re: [Gluster-devel] [ANNOUNCE] Maintainer Update

2016-03-07 Thread Kotresh Hiremath Ravishankar
Congrats and all the best, Aravinda!

Thanks and Regards,
Kotresh H R

- Original Message -
> From: "Venky Shankar" 
> To: "Gluster Devel" 
> Cc: maintain...@gluster.org
> Sent: Tuesday, March 8, 2016 10:49:46 AM
> Subject: [Gluster-devel] [ANNOUNCE] Maintainer Update
> 
> Hey folks,
> 
> As of yesterday, Aravinda has taken over the maintainership of
> Geo-replication. Over
> the past year or so, he has been actively involved in its development -
> introducing
> new features, fixing bugs, reviewing patches and helping out the community.
> Needless
> to say, he's the go-to guy for anything related to Geo-replication.
> 
> Although this shift should have been in effect far earlier, it's better late
> than
> never (and before Aravinda could change his mind, Jeff took the liberty of
> merging
> the maintainer update patch ;)).
> 
> Congrats and all the best with the new role, Aravinda.
> 
> Thanks!
> 
> Venky


[Gluster-devel] CentOS Regression generated core by .tests/basic/tier/tier-file-create.t

2016-03-07 Thread Kotresh Hiremath Ravishankar
Hi All,

The regression run generated a core for the patch below.

https://build.gluster.org/job/rackspace-regression-2GB-triggered/18859/console

From the initial analysis, it's a tiered setup where the ec sub-volume is the
cold tier and afr is the hot tier. The crash happened during lookup: the lookup
was wound to the cold tier and, since the file is not present there, dht issued
a discover onto the hot tier; while serializing the dictionary, it found that
the 'data' for the key 'trusted.ec.size' had been freed.
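
To illustrate where this blows up, here is a stripped-down sketch of that kind
of serialization loop -- this is NOT the real dict_serialize_lk() from dict.c,
just an illustration (with made-up struct and function names) of why a freed or
corrupted 'data' behind "trusted.ec.size" turns into a bad memcpy like the one
in frames #0/#1 below:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct fake_data {                      /* stand-in for data_t      */
        int32_t  len;
        char    *data;
};

struct fake_pair {                      /* stand-in for data_pair_t */
        char             *key;
        struct fake_data *value;
        struct fake_pair *next;
};

/* Copy each key and value into buf, the way a dict serializer would. */
static size_t fake_serialize(struct fake_pair *head, char *buf)
{
        size_t off = 0;

        for (struct fake_pair *p = head; p; p = p->next) {
                size_t klen = strlen(p->key) + 1;

                memcpy(buf + off, p->key, klen);
                off += klen;
                /* If p->value has been freed by the time we get here (or its
                 * len field is garbage), this memcpy reads through a stale
                 * pointer -- which is exactly what the backtrace shows for
                 * the 'trusted.ec.size' pair. */
                memcpy(buf + off, p->value->data, p->value->len);
                off += p->value->len;
        }
        return off;
}

int main(void)
{
        char              ec_size[8] = { 0 };
        struct fake_data  val  = { .len = sizeof(ec_size), .data = ec_size };
        struct fake_pair  pair = { .key = "trusted.ec.size",
                                   .value = &val, .next = NULL };
        char              buf[64];

        printf("serialized %zu bytes\n", fake_serialize(&pair, buf));
        return 0;
}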

(gdb) bt
#0  0x7fe059df9772 in memcpy () from ./lib64/libc.so.6
#1  0x7fe05b209902 in dict_serialize_lk (this=0x7fe04809f7dc, 
buf=0x7fe0480a2b7c "") at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/dict.c:2533
#2  0x7fe05b20a182 in dict_allocate_and_serialize (this=0x7fe04809f7dc, 
buf=0x7fe04ef6bb08, length=0x7fe04ef6bb00) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/dict.c:2780
#3  0x7fe04e3492de in client3_3_lookup (frame=0x7fe0480a22dc, 
this=0x7fe048008c00, data=0x7fe04ef6bbe0) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/protocol/client/src/client-rpc-fops.c:3368
#4  0x7fe04e32c8c8 in client_lookup (frame=0x7fe0480a22dc, 
this=0x7fe048008c00, loc=0x7fe0480a4354, xdata=0x7fe04809f7dc) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/protocol/client/src/client.c:417
#5  0x7fe04dbdaf5f in afr_lookup_do (frame=0x7fe04809f6dc, 
this=0x7fe048029e00, err=0) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/cluster/afr/src/afr-common.c:2422
#6  0x7fe04dbdb4bb in afr_lookup (frame=0x7fe04809f6dc, 
this=0x7fe048029e00, loc=0x7fe03c0082f4, xattr_req=0x7fe03c00810c) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/cluster/afr/src/afr-common.c:2532
#7  0x7fe04de3c2b8 in dht_lookup (frame=0x7fe0480a0a3c, 
this=0x7fe04802c580, loc=0x7fe03c0082f4, xattr_req=0x7fe03c00810c) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/cluster/dht/src/dht-common.c:2429
#8  0x7fe04d91f07e in dht_lookup_everywhere (frame=0x7fe03c0081ec, 
this=0x7fe04802d450, loc=0x7fe03c0082f4) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/cluster/dht/src/dht-common.c:1803
#9  0x7fe04d920953 in dht_lookup_cbk (frame=0x7fe03c0081ec, 
cookie=0x7fe03c00902c, this=0x7fe04802d450, op_ret=-1, op_errno=2, inode=0x0, 
stbuf=0x0, xattr=0x0, postparent=0x0)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/cluster/dht/src/dht-common.c:2056
#10 0x7fe04de35b94 in dht_lookup_everywhere_done (frame=0x7fe03c00902c, 
this=0x7fe0480288a0) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/cluster/dht/src/dht-common.c:1338
#11 0x7fe04de38281 in dht_lookup_everywhere_cbk (frame=0x7fe03c00902c, 
cookie=0x7fe04809ed2c, this=0x7fe0480288a0, op_ret=-1, op_errno=2, inode=0x0, 
buf=0x0, xattr=0x0, postparent=0x0)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/cluster/dht/src/dht-common.c:1768
#12 0x7fe05b27 in default_lookup_cbk (frame=0x7fe04809ed2c, 
cookie=0x7fe048099ddc, this=0x7fe048027590, op_ret=-1, op_errno=2, inode=0x0, 
buf=0x0, xdata=0x0, postparent=0x0) at defaults.c:1188
#13 0x7fe04e0a4861 in ec_manager_lookup (fop=0x7fe048099ddc, state=-5) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/cluster/ec/src/ec-generic.c:864
#14 0x7fe04e0a0b3a in __ec_manager (fop=0x7fe048099ddc, error=2) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/cluster/ec/src/ec-common.c:2098
#15 0x7fe04e09c912 in ec_resume (fop=0x7fe048099ddc, error=0) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/cluster/ec/src/ec-common.c:289
#16 0x7fe04e09caf8 in ec_complete (fop=0x7fe048099ddc) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/cluster/ec/src/ec-common.c:362
#17 0x7fe04e0a41a8 in ec_lookup_cbk (frame=0x7fe04800107c, cookie=0x5, 
this=0x7fe048027590, op_ret=-1, op_errno=2, inode=0x7fe03c00152c, 
buf=0x7fe04ef6c860, xdata=0x0, postparent=0x7fe04ef6c7f0)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/cluster/ec/src/ec-generic.c:758
#18 0x7fe04e348239 in client3_3_lookup_cbk (req=0x7fe04809dd4c, 
iov=0x7fe04809dd8c, count=1, myframe=0x7fe04809964c)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/protocol/client/src/client-rpc-fops.c:3028
#19 0x7fe05afd83e6 in rpc_clnt_handle_reply (clnt=0x7fe048066350, 
pollin=0x7fe0480018f0) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-clnt.c:759
#20 0x7fe05afd8884 in rpc_clnt_notify (trans=0x7fe0480667f0, 
mydata=0x7fe048066380, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7fe0480018f0)
at 

[Gluster-devel] [ANNOUNCE] Maintainer Update

2016-03-07 Thread Venky Shankar
Hey folks,

As of yesterday, Aravinda has taken over the maintainership of Geo-replication. 
Over
the past year or so, he has been actively involved in its development -
introducing
new features, fixing bugs, reviewing patches and helping out the community. 
Needless
to say, he's the go-to guy for anything related to Geo-replication.

Although this shift should have been in effect far earlier, it's better late 
than
never (and before Aravinda could change his mind, Jeff took the liberty of 
merging
the maintainer update patch ;)).

Congrats and all the best with the new role, Aravinda.

Thanks!

Venky


Re: [Gluster-devel] [Gluster-infra] tests/basic/tier/tier-file-create.t dumping core on Linux

2016-03-07 Thread Krutika Dhananjay
It has been failing rather frequently.
Have reported a bug at https://bugzilla.redhat.com/show_bug.cgi?id=1315560
For now, have moved it to bad tests here:
http://review.gluster.org/#/c/13632/1

-Krutika

On Mon, Mar 7, 2016 at 4:17 PM, Krutika Dhananjay 
wrote:

> +Pranith
>
> -Krutika
>
>
> On Sat, Mar 5, 2016 at 11:34 PM, Dan Lambright 
> wrote:
>
>>
>>
>> - Original Message -
>> > From: "Dan Lambright" 
>> > To: "Shyam" 
>> > Cc: "Krutika Dhananjay" , "Gluster Devel" <
>> gluster-devel@gluster.org>, "Rafi Kavungal Chundattu
>> > Parambil" , "Nithya Balachandran" <
>> nbala...@redhat.com>, "Joseph Fernandes"
>> > , "gluster-infra" 
>> > Sent: Friday, March 4, 2016 9:51:18 AM
>> > Subject: Re: [Gluster-infra] tests/basic/tier/tier-file-create.t
>> dumping core on Linux
>> >
>> >
>> >
>> > - Original Message -
>> > > From: "Shyam" 
>> > > To: "Krutika Dhananjay" , "Gluster Devel"
>> > > , "Rafi Kavungal Chundattu
>> > > Parambil" , "Nithya Balachandran"
>> > > , "Joseph Fernandes"
>> > > , "Dan Lambright" 
>> > > Cc: "gluster-infra" 
>> > > Sent: Friday, March 4, 2016 9:45:17 AM
>> > > Subject: Re: [Gluster-infra] tests/basic/tier/tier-file-create.t
>> dumping
>> > > core on Linux
>> > >
>> > > Facing the same problem in the following runs as well,
>> > >
>> > > 1)
>> > >
>> https://build.gluster.org/job/rackspace-regression-2GB-triggered/18767/console
>> > > 2) https://build.gluster.org/job/regression-test-burn-in/546/console
>> > > 3) https://build.gluster.org/job/regression-test-burn-in/547/console
>> > > 4) https://build.gluster.org/job/regression-test-burn-in/549/console
>> > >
>> > > Last successful burn-in was: 545 (but do not see the test having been
>> > > run here, so this is inconclusive)
>> > >
>> > > burn-in test 544 is hung on the same test here,
>> > > https://build.gluster.org/job/regression-test-burn-in/544/console
>> > >
>> > > (and at this point I am stopping the hunt for when this last
>> succeeded :) )
>> > >
>> > > Let's know if anyone is taking a peek at the cores.
>> >
>> > hm. Not familiar with this test. Written by Pranith? I'll look.
>>
>> We are doing lookup everywhere, and building up a dict of the extended
>> attributes of a file as we traverse each sub volume across the hot and cold
>> tiers. The length field of one of the EC keys is corrupted.
>>
>> Not clear why this is happening. I see no tiering relationship as of
>> yet; it's possible the file is being demoted in parallel to the foreground
>> script operation.
>>
>> The test runs fine on my machines.  Does this reproduce consistently on
>> one of the Jenkins machines? If so, getting onto it would be the next step.
>> I think that would be preferable to masking this test case.
>>
>>
>> >
>> > >
>> > > Thanks,
>> > > Shyam
>> > >
>> > >
>> > >
>> > > On 03/04/2016 07:40 AM, Krutika Dhananjay wrote:
>> > > > Could someone from tiering dev team please take a look?
>> > > >
>> > > >
>> https://build.gluster.org/job/rackspace-regression-2GB-triggered/18793/console
>> > > >
>> > > > -Krutika
>> > > >
>> > > >
>> > >
>> >
>>
>
>

Re: [Gluster-devel] Regression: Bitrot core generated by distribute/bug-1117851.t

2016-03-07 Thread FNU Raghavendra Manjunath
Hi,

I have raised a bug for it (
https://bugzilla.redhat.com/show_bug.cgi?id=1315465).

A patch has been sent for review (http://review.gluster.org/#/c/13628/).

Regards,
Raghavendra


On Mon, Mar 7, 2016 at 11:04 AM, Poornima Gurusiddaiah 
wrote:

> Hi,
>
> I see a bitrot crash caused by a dht test case
> ./tests/bugs/distribute/bug-1117851.t
> Regression link:
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/18858/console
>
> Backtrace:
>
> /build/install/lib/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xf2)[0x7f8d873d93f6]
> /build/install/lib/libglusterfs.so.0(gf_print_trace+0x22b)[0x7f8d873df381]
> /build/install/sbin/glusterfsd(glusterfsd_print_trace+0x1f)[0x409962]
> /lib64/libc.so.6(+0x326a0)[0x7f8d85f6b6a0]
>
> /build/install/lib/glusterfs/3.8dev/xlator/features/bitrot-stub.so(+0x6829)[0x7f8d79409829]
>
> /build/install/lib/glusterfs/3.8dev/xlator/features/bitrot-stub.so(+0x11751)[0x7f8d79414751]
>
> /build/install/lib/glusterfs/3.8dev/xlator/features/changelog.so(+0x505c)[0x7f8d7961f05c]
>
> /build/install/lib/glusterfs/3.8dev/xlator/features/changetimerecorder.so(+0xa8aa)[0x7f8d79ceb8aa]
>
> /build/install/lib/glusterfs/3.8dev/xlator/features/trash.so(+0xa544)[0x7f8d79f07544]
>
> /build/install/lib/glusterfs/3.8dev/xlator/storage/posix.so(+0x10650)[0x7f8d7a737650]
>
> /build/install/lib/glusterfs/3.8dev/xlator/features/trash.so(+0xd489)[0x7f8d79f0a489]
>
> /build/install/lib/glusterfs/3.8dev/xlator/features/changetimerecorder.so(+0xb179)[0x7f8d79cec179]
>
> /build/install/lib/glusterfs/3.8dev/xlator/features/changelog.so(+0x5d1c)[0x7f8d7961fd1c]
>
> /build/install/lib/glusterfs/3.8dev/xlator/features/bitrot-stub.so(+0x11a72)[0x7f8d79414a72]
>
> /build/install/lib/glusterfs/3.8dev/xlator/features/access-control.so(+0x7d3f)[0x7f8d791f9d3f]
> /build/install/lib/libglusterfs.so.0(default_unlink+0xa8)[0x7f8d8746d114]
>
> /build/install/lib/glusterfs/3.8dev/xlator/features/upcall.so(+0x4eab)[0x7f8d78dc4eab]
>
> /build/install/lib/libglusterfs.so.0(default_unlink_resume+0x1dd)[0x7f8d87469ef7]
>
> /build/install/lib/libglusterfs.so.0(call_resume_wind+0x321)[0x7f8d873f6c4c]
> /build/install/lib/libglusterfs.so.0(call_resume+0xd2)[0x7f8d873ffa1a]
>
> /build/install/lib/glusterfs/3.8dev/xlator/performance/io-threads.so(+0x4727)[0x7f8d78bb9727]
> /lib64/libpthread.so.0(+0x7aa1)[0x7f8d866b8aa1]
> /lib64/libc.so.6(clone+0x6d)[0x7f8d8602193d]
>
> Can you please look into it? Do I need to raise a BZ for this?
>
>
> Regards,
> Poornima
>

Re: [Gluster-devel] Query on healing process

2016-03-07 Thread ABHISHEK PALIWAL
On Fri, Mar 4, 2016 at 5:31 PM, Ravishankar N 
wrote:

> On 03/04/2016 12:10 PM, ABHISHEK PALIWAL wrote:
>
> Hi Ravi,
>
> 3. On the rebooted node, do you have ssl enabled by any chance? There is a
> bug for "Not able to fetch volfile" when ssl is enabled:
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1258931
>
> -> I have checked: ssl is disabled, but I am still getting these errors
>
> # gluster volume heal c_glusterfs info
> c_glusterfs: Not able to fetch volfile from glusterd
> Volume heal failed.
>
>
> Ok, just to confirm: glusterd and other brick processes are running after
> this node rebooted?
> When you run the above command, you need to check the
> /var/log/glusterfs/glfsheal-volname.log log for errors. Setting
> client-log-level to DEBUG would give you a more verbose message.
>
Yes, glusterd and the other brick processes are running fine. I have checked
the /var/log/glusterfs/glfsheal-volname.log file (without setting log-level to
DEBUG). Here are the logs from that file:

[2016-03-02 13:51:39.059440] I [MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2016-03-02 13:51:39.072172] W [MSGID: 101012]
[common-utils.c:2776:gf_get_reserved_ports] 0-glusterfs: could not open the
file /proc/sys/net/ipv4/ip_local_reserved_ports for getting reserved ports
info [No such file or directory]
[2016-03-02 13:51:39.072228] W [MSGID: 101081]
[common-utils.c:2810:gf_process_reserved_ports] 0-glusterfs: Not able to
get reserved ports, hence there is a possibility that glusterfs may consume
reserved port
[2016-03-02 13:51:39.072583] E [socket.c:2278:socket_connect_finish]
0-gfapi: connection to 127.0.0.1:24007 failed (Connection refused)
[2016-03-02 13:51:39.072663] E [MSGID: 104024]
[glfs-mgmt.c:738:mgmt_rpc_notify] 0-glfs-mgmt: failed to connect with
remote-host: localhost (Transport endpoint is not connected) [Transport
endpoint is not connected]
[2016-03-02 13:51:39.072700] I [MSGID: 104025]
[glfs-mgmt.c:744:mgmt_rpc_notify] 0-glfs-mgmt: Exhausted all volfile
servers [Transport endpoint is not connected]

> # gluster volume heal c_glusterfs info split-brain
> c_glusterfs: Not able to fetch volfile from glusterd
> Volume heal failed.
>
>
>
>
> And based on your observation I understood that this is not a problem of
> split-brain, but is there any way to find out which files are not in
> split-brain and yet not in sync?
>
>
> `gluster volume heal c_glusterfs info split-brain`  should give you files
> that need heal.
>

I have run the "gluster volume heal c_glusterfs info split-brain" command, but
it does not show the file which is out of sync. That is the issue: the file is
not in sync on both bricks, yet the split-brain command does not list it as
needing heal.

That is why I am asking whether there is any command, other than this
split-brain command, that can find the files which require a heal operation
but are not displayed in the output of "gluster volume heal c_glusterfs info
split-brain".
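
For reference, this is how I am reading the trusted.afr changelog values in the
getfattr output quoted below. My understanding (an assumption on my part, not
taken from the sources) is that the 24 hex digits are three 32-bit counters --
pending data, metadata and entry operations against the other brick -- and a
non-zero counter means an unhealed change. A tiny illustrative decoder, using a
made-up example value:

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
        /* hypothetical example value; pass a real one as argv[1] */
        const char   *hex = (argc > 1) ? argv[1]
                                       : "0x000000060000000000000000";
        unsigned int  counters[3] = { 0, 0, 0 };

        if (strncmp(hex, "0x", 2) == 0)
                hex += 2;
        if (strlen(hex) != 24) {
                fprintf(stderr, "expected 24 hex digits\n");
                return 1;
        }
        for (int i = 0; i < 3; i++)
                sscanf(hex + i * 8, "%8x", &counters[i]);

        /* e.g. data=6 metadata=0 entry=0 would mean six pending data
         * operations, i.e. the file needs a data self-heal */
        printf("data=%u metadata=%u entry=%u\n",
               counters[0], counters[1], counters[2]);
        return 0;
}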

>
>
> # getfattr -m . -d -e hex
> /opt/lvmdir/c2/brick/logfiles/availability/CELLO_AVAILABILITY2_LOG.xml
> getfattr: Removing leading '/' from absolute path names
> # file:
> opt/lvmdir/c2/brick/logfiles/availability/CELLO_AVAILABILITY2_LOG.xml
> trusted.afr.c_glusterfs-client-0=0x
> trusted.afr.c_glusterfs-client-2=0x
> trusted.afr.c_glusterfs-client-4=0x
> trusted.afr.c_glusterfs-client-6=0x
> trusted.afr.c_glusterfs-client-8=0x0006  // because client-8 is the latest
> client in our case, and the starting 8 digits 0006 are saying that there is
> something in the changelog data section
> trusted.afr.dirty=0x
> trusted.bit-rot.version=0x001356d86c0c000217fd
> trusted.gfid=0x9f5e354ecfda40149ddce7d5ffe760ae
>
> # lhsh 002500 getfattr -m . -d -e hex
> /opt/lvmdir/c2/brick/logfiles/availability/CELLO_AVAILABILITY2_LOG.xml
> getfattr: Removing leading '/' from absolute path names
> # file:
> opt/lvmdir/c2/brick/logfiles/availability/CELLO_AVAILABILITY2_LOG.xml
> trusted.afr.c_glusterfs-client-1=0x  // and here we can say that there is no
> split-brain, but the file is out of sync
> trusted.afr.dirty=0x
> trusted.bit-rot.version=0x001156d86c290005735c
> trusted.gfid=0x9f5e354ecfda40149ddce7d5ffe760ae
>
> # gluster volume info
>
> Volume Name: c_glusterfs
> Type: Replicate
> Volume ID: c6a61455-d378-48bf-ad40-7a3ce897fc9c
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
> Brick2: 10.32.1.144:/opt/lvmdir/c2/brick
> Options Reconfigured:
> performance.readdir-ahead: on
> network.ping-timeout: 4
> nfs.disable: on
>
>
> # gluster volume info
>
> Volume Name: c_glusterfs
> Type: Replicate
> Volume 

Re: [Gluster-devel] Default quorum for 2 way replication

2016-03-07 Thread Shyam

On 03/05/2016 05:26 AM, Pranith Kumar Karampuri wrote:

That is the point. There is an illusion of choice between data integrity
and HA. But we are not *really* giving HA, are we? HA will be there only
if the second brick in the replica pair goes down. In your typical


@Pranith, can you elaborate on this? I am not so AFR savvy, so I am unable
to comprehend why HA is available only when the second brick goes down and
not when the first does. It just helps in understanding the issue at hand.


Because it is client-side replication, there is a fixed *leader*, i.e. the
1st brick.


Ah! good to know, thank you.
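
For anyone following along, here is a rough sketch of the rule as I understand
it -- a paraphrase of the "auto" client-quorum behaviour, not the actual afr
code: quorum is met when more than half of the bricks are up, or when exactly
half are up and the first brick is among them. For a 1x2 replica that means
the volume stays writable only as long as the first brick is up.

#include <stdbool.h>
#include <stdio.h>

/* Paraphrase of the "auto" client-quorum rule (my understanding, not the
 * real afr_has_quorum() implementation). */
static bool quorum_met(int up_children, int replica_count, bool first_child_up)
{
        if (up_children > replica_count / 2)
                return true;                    /* clear majority           */
        if (replica_count % 2 == 0 &&
            up_children == replica_count / 2 && first_child_up)
                return true;                    /* tie broken by 1st brick  */
        return false;
}

int main(void)
{
        /* replica 2: only the case where the *second* brick is down keeps
         * quorum -- hence "HA only if the second brick goes down" */
        printf("both up     -> quorum: %d\n", quorum_met(2, 2, true));
        printf("first down  -> quorum: %d\n", quorum_met(1, 2, false));
        printf("second down -> quorum: %d\n", quorum_met(1, 2, true));
        return 0;
}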


[Gluster-devel] Using geo-replication as backup solution using gluster volume snapshot!

2016-03-07 Thread Kotresh Hiremath Ravishankar
Hi All,

Here is the idea: we can use geo-replication as a backup solution by taking
gluster volume snapshots on the slave side. One of the drawbacks of
geo-replication is that it is continuous asynchronous replication, so it would
not help in getting last week's or yesterday's data. But if we take gluster
snapshots at the slave end, we can use those snapshots to get last week's or
yesterday's data, making it a candidate for a backup solution. The limitation
is that the snapshots at the slave end can't be restored, as that would break
the running geo-replication; they can, however, be mounted to access the data
as of the time each snapshot was taken. It's just a naive idea.
Any suggestions and use cases are worth discussing. :)


Thanks and Regards,
Kotresh H R
