Hi Sungsik,

We have filed a bug to track this issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1287996

Thanks,
Vijay


On Thu, Dec 3, 2015 at 12:31 PM, 박성식 <mulg...@gmail.com> wrote:

> Thanks for your answer, Vijay.
>
> I already know about copying the volume 'info' and 'cksum' files.
>
> I wanted to know whether there is another solution for the 'quota-version'
> issue.
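>
> A quick way to see whether that value has diverged, as a minimal check
> assuming the volume name 'tvol0001' from the test case below, is to run the
> following on each node and compare the output:
>
> $ grep quota-version /var/lib/glusterd/vols/tvol0001/info
>
> The value should be identical on every peer.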
>
> We are now planning to create test automation scripts for the full set of
> basic features.
>
> I'll post again when another issue is detected.
>
> Thank you once again for your answers.
>
> Sungsik, Park.
>
>
>
> ------------------------------------------------------------------------------------------
>
> Sung Sik, Park [박성식, 朴宬植]
>
> Email: mulg...@gmail.com <syp...@wisetodd.com>
>
> KR: +82 10-4518-2647
>
>
> ------------------------------------------------------------------------------------------
>
> On Thu, Dec 3, 2015 at 2:56 PM, Vijaikumar Mallikarjuna <
> vmall...@redhat.com> wrote:
>
>> Hi Sung Sik Park,
>>
>> Please find my comments inline.
>>
>>
>>
>> On Thu, Dec 3, 2015 at 7:19 AM, 박성식 <mulg...@gmail.com> wrote:
>>
>>> Hi all
>>>
>>> In Gluster 3.7.6, a 'Peer Rejected' problem occurs in the following test
>>> case (it does not occur if the volume quota is not enabled).
>>>
>>> The problem was introduced by commit
>>> '3d3176958b7da48dbacb1a5a0fedf26322a38297'.
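>>>
>>> For reference, that commit can be inspected in the checkout from step 1 of
>>> the test procedure below, e.g.:
>>>
>>> $ git show --stat 3d3176958b7da48dbacb1a5a0fedf26322a38297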
>>>
>>> Can you help with the following problem?
>>>
>>> # Test environment
>>>
>>> 1. Ubuntu 14.04 Server
>>>    (1) c00002-dn00001: 192.16.63.21
>>>    (2) c00002-dn00002: 192.16.63.22
>>>    (3) c00002-dn00003: 192.16.63.23
>>> 2. Glusterfs-3.7.6
>>>
>>> # Test procedure
>>>
>>> 1. Gluster installation
>>>
>>> $ git clone git://git.gluster.com/glusterfs.git git.gluster.com
>>> $ cd git.gluster.com/
>>> $ git checkout v3.7.6
>>> $ ./autogen.sh ; ./configure ; make ; make install ; ldconfig
>>>
>>> 2. Glusterd startup
>>>
>>> $ /etc/init.d/glusterd start
>>>
>>> 3. Gluster peer: 2 Server peer
>>>
>>> [c00002-dn00001] $ gluster peer probe 192.16.63.22
>>> [c00002-dn00001] $ gluster volume create tvol0001
>>> c00002-dn00001:/brick/tvol0001 c00002-dn00002:/brick/tvol0001 force
>>> [c00002-dn00001] $ gluster volume start tvol0001
>>> [c00002-dn00001] $ gluster volume quota tvol0001 enable
>>>
>>> 4. Another server peer
>>>
>>> [c00002-dn00001] $ gluster peer probe 192.16.63.23
>>> peer probe: success.
>>>
>>> [c00002-dn00001] $ gluster peer status
>>> Uuid: dd749b9b-d4e1-4b94-8f36-c03064bc5a57
>>> State: Peer Rejected (Connected)
>>>
>>> # c00002-dn00001 log
>>> [2015-12-03 00:37:52.464600] I [MSGID: 106487]
>>> [glusterd-handler.c:1422:__glusterd_handle_cli_list_friends] 0-glusterd:
>>> Received cli list req
>>> [2015-12-03 00:38:51.035018] I [MSGID: 106487]
>>> [glusterd-handler.c:1301:__glusterd_handle_cli_deprobe] 0-glusterd:
>>> Received CLI deprobe req
>>> [2015-12-03 00:38:51.036338] I [MSGID: 106493]
>>> [glusterd-rpc-ops.c:599:__glusterd_friend_remove_cbk] 0-glusterd: Received
>>> ACC from uuid: dd749b9b-d4e1-4b94-8f36-c03064bc5a57, host: c00002-dn00001,
>>> port: 0
>>> [2015-12-03 00:38:51.041859] I [MSGID: 106493]
>>> [glusterd-rpc-ops.c:695:__glusterd_friend_update_cbk] 0-management:
>>> Received ACC from uuid: cb22d272-d563-4bda-802b-c93798130108
>>> [2015-12-03 00:38:56.441189] I [MSGID: 106487]
>>> [glusterd-handler.c:1189:__glusterd_handle_cli_probe] 0-glusterd: Received
>>> CLI probe req 192.16.63.23 24007
>>> [2015-12-03 00:38:56.441828] I [MSGID: 106129]
>>> [glusterd-handler.c:3611:glusterd_probe_begin] 0-glusterd: Unable to find
>>> peerinfo for host: 192.16.63.23 (24007)
>>> [2015-12-03 00:38:56.445471] I [rpc-clnt.c:984:rpc_clnt_connection_init]
>>> 0-management: setting frame-timeout to 600
>>> [2015-12-03 00:38:56.445954] W [socket.c:869:__socket_keepalive]
>>> 0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 23, Invalid
>>> argument
>>> [2015-12-03 00:38:56.445977] E [socket.c:2965:socket_connect]
>>> 0-management: Failed to set keep-alive: Invalid argument
>>> [2015-12-03 00:38:56.446232] I [MSGID: 106498]
>>> [glusterd-handler.c:3539:glusterd_friend_add] 0-management: connect
>>> returned 0
>>> [2015-12-03 00:38:56.457933] I [MSGID: 106511]
>>> [glusterd-rpc-ops.c:257:__glusterd_probe_cbk] 0-management: Received probe
>>> resp from uuid: dd749b9b-d4e1-4b94-8f36-c03064bc5a57, host: 192.16.63.23
>>> [2015-12-03 00:38:56.457992] I [MSGID: 106511]
>>> [glusterd-rpc-ops.c:417:__glusterd_probe_cbk] 0-glusterd: Received resp to
>>> probe req
>>> [2015-12-03 00:38:56.519048] I [MSGID: 106493]
>>> [glusterd-rpc-ops.c:481:__glusterd_friend_add_cbk] 0-glusterd: Received ACC
>>> from uuid: dd749b9b-d4e1-4b94-8f36-c03064bc5a57, host: 192.16.63.23, port: 0
>>> [2015-12-03 00:38:56.524556] I [MSGID: 106163]
>>> [glusterd-handshake.c:1193:__glusterd_mgmt_hndsk_versions_ack]
>>> 0-management: using the op-version 30707
>>> [2015-12-03 00:38:56.527701] I [MSGID: 106490]
>>> [glusterd-handler.c:2895:__glusterd_handle_probe_query] 0-glusterd:
>>> Received probe from uuid: dd749b9b-d4e1-4b94-8f36-c03064bc5a57
>>> [2015-12-03 00:38:56.528598] I [MSGID: 106493]
>>> [glusterd-handler.c:2958:__glusterd_handle_probe_query] 0-glusterd:
>>> Responded to c00002-dn00003, op_ret: 0, op_errno: 0, ret: 0
>>> [2015-12-03 00:38:56.529154] I [MSGID: 106490]
>>> [glusterd-handler.c:2550:__glusterd_handle_incoming_friend_req] 0-glusterd:
>>> Received probe from uuid: dd749b9b-d4e1-4b94-8f36-c03064bc5a57
>>> [2015-12-03 00:38:56.529743] E [MSGID: 106010]
>>> [glusterd-utils.c:2727:glusterd_compare_friend_volume] 0-management:
>>> Version of Cksums tvol0001 differ. local cksum = 870323923, remote cksum =
>>> 870586066 on peer 192.16.63.23
>>> [2015-12-03 00:38:56.529905] I [MSGID: 106493]
>>> [glusterd-handler.c:3791:glusterd_xfer_friend_add_resp] 0-glusterd:
>>> Responded to 192.16.63.23 (0), ret: 0
>>> [2015-12-03 00:38:59.243087] I [MSGID: 106487]
>>> [glusterd-handler.c:1422:__glusterd_handle_cli_list_friends] 0-glusterd:
>>> Received cli list req
>>>
>>> # c00002-dn00003 log
>>> [2015-12-03 00:38:45.556047] I [MSGID: 106491]
>>> [glusterd-handler.c:2600:__glusterd_handle_incoming_unfriend_req]
>>> 0-glusterd: Received unfriend from uuid:
>>> 090e2d18-7a6a-4d11-89cd-7c583180cba3
>>> [2015-12-03 00:38:45.556835] I [MSGID: 106493]
>>> [glusterd-handler.c:3758:glusterd_xfer_friend_remove_resp] 0-glusterd:
>>> Responded to c00002-dn00001 (0), ret: 0
>>> [2015-12-03 00:38:45.556863] I [MSGID: 106132]
>>> [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management:  already stopped
>>> [2015-12-03 00:38:45.556873] I [MSGID: 106510]
>>> [glusterd-sm.c:637:glusterd_peer_detach_cleanup] 0-management: Deleting
>>> stale volume tvol0001
>>> [2015-12-03 00:38:45.571032] W [socket.c:588:__socket_rwv] 0-nfs: readv
>>> on /var/run/gluster/ce041cf16b3a2a739b2381df8b2eb141.socket failed (No data
>>> available)
>>> [2015-12-03 00:38:46.557936] I [MSGID: 106132]
>>> [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: glustershd
>>> already stopped
>>> [2015-12-03 00:38:46.563409] W [socket.c:588:__socket_rwv] 0-quotad:
>>> readv on /var/run/gluster/a3c5c6ab5ef0542c80d4dcee7b6f6b00.socket failed
>>> (No data available)
>>> [2015-12-03 00:38:47.559114] I [MSGID: 106132]
>>> [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already
>>> stopped
>>> [2015-12-03 00:38:47.559217] I [MSGID: 106132]
>>> [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already
>>> stopped
>>> [2015-12-03 00:38:47.565412] I [MSGID: 101053]
>>> [mem-pool.c:616:mem_pool_destroy] 0-management: size=588 max=3 total=6
>>> [2015-12-03 00:38:47.565589] I [MSGID: 101053]
>>> [mem-pool.c:616:mem_pool_destroy] 0-management: size=124 max=3 total=6
>>> [2015-12-03 00:38:50.969101] I [MSGID: 106163]
>>> [glusterd-handshake.c:1193:__glusterd_mgmt_hndsk_versions_ack]
>>> 0-management: using the op-version 30707
>>> [2015-12-03 00:38:50.972274] I [MSGID: 106490]
>>> [glusterd-handler.c:2895:__glusterd_handle_probe_query] 0-glusterd:
>>> Received probe from uuid: 090e2d18-7a6a-4d11-89cd-7c583180cba3
>>> [2015-12-03 00:38:50.973833] I [MSGID: 106129]
>>> [glusterd-handler.c:2930:__glusterd_handle_probe_query] 0-glusterd: Unable
>>> to find peerinfo for host: c00002-dn00001 (24007)
>>> [2015-12-03 00:38:50.976661] I [rpc-clnt.c:984:rpc_clnt_connection_init]
>>> 0-management: setting frame-timeout to 600
>>> [2015-12-03 00:38:50.977655] W [socket.c:869:__socket_keepalive]
>>> 0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 12, Invalid
>>> argument
>>> [2015-12-03 00:38:50.977679] E [socket.c:2965:socket_connect]
>>> 0-management: Failed to set keep-alive: Invalid argument
>>> [2015-12-03 00:38:50.978170] I [MSGID: 106498]
>>> [glusterd-handler.c:3539:glusterd_friend_add] 0-management: connect
>>> returned 0
>>> [2015-12-03 00:38:50.978682] I [MSGID: 106493]
>>> [glusterd-handler.c:2958:__glusterd_handle_probe_query] 0-glusterd:
>>> Responded to c00002-dn00001, op_ret: 0, op_errno: 0, ret: 0
>>> [2015-12-03 00:38:50.979553] I [MSGID: 106490]
>>> [glusterd-handler.c:2550:__glusterd_handle_incoming_friend_req] 0-glusterd:
>>> Received probe from uuid: 090e2d18-7a6a-4d11-89cd-7c583180cba3
>>> [2015-12-03 00:38:50.987658] E [MSGID: 106187]
>>> [glusterd-utils.c:4541:glusterd_brick_start] 0-management: Could not find
>>> peer on which brick c00002-dn00002:/brick/tvol0001 resides
>>> [2015-12-03 00:38:50.987678] E [MSGID: 106005]
>>> [glusterd-op-sm.c:2184:glusterd_start_bricks] 0-management: Failed to start
>>> c00002-dn00002:/brick/tvol0001 for tvol0001
>>> [2015-12-03 00:38:51.017178] I [MSGID: 106132]
>>> [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already
>>> stopped
>>> [2015-12-03 00:38:51.026769] W [socket.c:3009:socket_connect] 0-nfs:
>>> Ignore failed connection attempt on
>>> /var/run/gluster/ce041cf16b3a2a739b2381df8b2eb141.socket, (No such file or
>>> directory)
>>> [2015-12-03 00:38:51.026992] I [MSGID: 106132]
>>> [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: glustershd
>>> already stopped
>>> [2015-12-03 00:38:51.032213] I [MSGID: 106132]
>>> [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already
>>> stopped
>>> [2015-12-03 00:38:51.036689] W [socket.c:3009:socket_connect] 0-quotad:
>>> Ignore failed connection attempt on
>>> /var/run/gluster/a3c5c6ab5ef0542c80d4dcee7b6f6b00.socket, (No such file or
>>> directory)
>>> [2015-12-03 00:38:51.036778] I [MSGID: 106132]
>>> [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already
>>> stopped
>>> [2015-12-03 00:38:51.036828] I [MSGID: 106132]
>>> [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already
>>> stopped
>>> [2015-12-03 00:38:51.039228] I [MSGID: 106493]
>>> [glusterd-handler.c:3791:glusterd_xfer_friend_add_resp] 0-glusterd:
>>> Responded to c00002-dn00001 (0), ret: 0
>>> [2015-12-03 00:38:51.044296] W [socket.c:588:__socket_rwv] 0-quotad:
>>> readv on /var/run/gluster/a3c5c6ab5ef0542c80d4dcee7b6f6b00.socket failed
>>> (Invalid argument)
>>> [2015-12-03 00:38:51.044652] I [MSGID: 106006]
>>> [glusterd-svc-mgmt.c:323:glusterd_svc_common_rpc_notify] 0-management:
>>> quotad has disconnected from glusterd.
>>> [2015-12-03 00:38:51.044868] W [socket.c:588:__socket_rwv] 0-nfs: readv
>>> on /var/run/gluster/ce041cf16b3a2a739b2381df8b2eb141.socket failed (Invalid
>>> argument)
>>> [2015-12-03 00:38:51.044902] I [MSGID: 106006]
>>> [glusterd-svc-mgmt.c:323:glusterd_svc_common_rpc_notify] 0-management: nfs
>>> has disconnected from glusterd.
>>> [2015-12-03 00:38:51.049332] I [MSGID: 106511]
>>> [glusterd-rpc-ops.c:257:__glusterd_probe_cbk] 0-management: Received probe
>>> resp from uuid: 090e2d18-7a6a-4d11-89cd-7c583180cba3, host: c00002-dn00001
>>> [2015-12-03 00:38:51.049365] I [MSGID: 106511]
>>> [glusterd-rpc-ops.c:417:__glusterd_probe_cbk] 0-glusterd: Received resp to
>>> probe req
>>> [2015-12-03 00:38:51.050941] I [MSGID: 106493]
>>> [glusterd-rpc-ops.c:481:__glusterd_friend_add_cbk] 0-glusterd: Received RJT
>>> from uuid: 090e2d18-7a6a-4d11-89cd-7c583180cba3, host: c00002-dn00001,
>>> port: 0
>>>
>>> # c00002-dn00001 /var/lib/glusterd/vols/tvol0001/info
>>> type=0
>>> count=2
>>> status=1
>>> sub_count=0
>>> stripe_count=1
>>> replica_count=1
>>> disperse_count=0
>>> redundancy_count=0
>>> version=3
>>> transport-type=0
>>> volume-id=cb9864b7-0bb4-439d-b60b-595d2e1e0cc3
>>> username=eb29fba6-a51a-4e65-abbd-8ef03d7bb22d
>>> password=c987bfd7-8da3-4379-abe1-7dcbcda7d49d
>>> op-version=3
>>> client-op-version=3
>>> quota-version=1
>>> parent_volname=N/A
>>> restored_from_snap=00000000-0000-0000-0000-000000000000
>>> snap-max-hard-limit=256
>>> features.quota-deem-statfs=on
>>> features.inode-quota=on
>>> features.quota=on
>>> performance.readdir-ahead=on
>>> brick-0=c00002-dn00001:-brick-tvol0001
>>> brick-1=c00002-dn00002:-brick-tvol0001
>>>
>>> # c00002-dn00003 /var/lib/glusterd/vols/tvol0001/info
>>> type=0
>>> count=2
>>> status=1
>>> sub_count=0
>>> stripe_count=1
>>> replica_count=1
>>> disperse_count=0
>>> redundancy_count=0
>>> version=3
>>> transport-type=0
>>> volume-id=cb9864b7-0bb4-439d-b60b-595d2e1e0cc3
>>> username=eb29fba6-a51a-4e65-abbd-8ef03d7bb22d
>>> password=c987bfd7-8da3-4379-abe1-7dcbcda7d49d
>>> op-version=3
>>> client-op-version=3
>>> quota-version=0
>>>
>>
>> Somehow, the 'quota-version' parameter is not updated here. I will try to
>> reproduce this problem in-house and fix it ASAP.
>> As a workaround, you can try the steps below (a command sketch follows):
>> 1) stop glusterd on 'c00002-dn00003'
>> 2) copy the volume 'info' and 'cksum' files from 'c00002-dn00001' to
>> 'c00002-dn00003'
>> 3) start glusterd
>> 4) retry the peer probe
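>>
>> A minimal shell sketch of those steps, assuming the volume name 'tvol0001'
>> from your test and that the files can be copied over SSH as root (adjust
>> hosts and paths to your setup):
>>
>> [c00002-dn00003] $ /etc/init.d/glusterd stop
>> # copy the volume 'info' and 'cksum' files to the rejected peer
>> [c00002-dn00001] $ scp /var/lib/glusterd/vols/tvol0001/info \
>>     /var/lib/glusterd/vols/tvol0001/cksum \
>>     c00002-dn00003:/var/lib/glusterd/vols/tvol0001/
>> [c00002-dn00003] $ /etc/init.d/glusterd start
>> [c00002-dn00001] $ gluster peer probe 192.16.63.23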
>>
>> Thanks,
>> Vijay
>>
>>
>>> parent_volname=N/A
>>> restored_from_snap=00000000-0000-0000-0000-000000000000
>>> snap-max-hard-limit=256
>>> features.quota-deem-statfs=on
>>> features.inode-quota=on
>>> features.quota=on
>>> performance.readdir-ahead=on
>>> brick-0=c00002-dn00001:-brick-tvol0001
>>> brick-1=c00002-dn00002:-brick-tvol0001
>>>
>>>
>>> ------------------------------------------------------------------------------------------
>>>
>>> Sung Sik, Park [박성식, 朴宬植]
>>>
>>> Email: mulg...@gmail.com <syp...@wisetodd.com>
>>>
>>>
>>> ------------------------------------------------------------------------------------------
>>>
>>>
>>
>>
>
_______________________________________________
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
