Hello Nithya and Atin,

I was able to remove the attributes, so I no longer get that message, but the 
add-brick still fails. Below is the relevant section of 
etc-glusterfs-glusterd.vol.log.
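
For reference, this is roughly what I ran to clear the attributes (the 
standard trusted.glusterfs.volume-id / trusted.gfid xattrs; from memory, so 
the exact invocation may differ slightly), on both nodes for the refused 
brick:

setfattr -x trusted.glusterfs.volume-id /gdata/brick2/scratch
setfattr -x trusted.gfid /gdata/brick2/scratch
rm -rf /gdata/brick2/scratch/.glusterfs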

[2018-01-12 16:51:41.758928] I [MSGID: 106482] 
[glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received 
add brick req
[2018-01-12 16:51:41.758973] I [MSGID: 106578] 
[glusterd-brick-ops.c:499:__glusterd_handle_add_brick] 0-management: 
replica-count is 2
[2018-01-12 16:51:41.791000] I [run.c:191:runner_log] 
(-->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0x33045) 
[0x7fa642d04045] 
-->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0xcbd85) 
[0x7fa642d9cd85] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fa64e6381e5] 
) 0-management: Ran script: 
/var/lib/glusterd/hooks/1/add-brick/pre/S28Quota-enable-root-xattr-heal.sh 
--volname=scratch --version=1 --volume-op=add-brick 
--gd-workdir=/var/lib/glusterd
[2018-01-12 16:51:41.791080] I [MSGID: 106578] 
[glusterd-brick-ops.c:1352:glusterd_op_perform_add_bricks] 0-management: 
replica-count is set 0
[2018-01-12 16:51:41.791102] I [MSGID: 106578] 
[glusterd-brick-ops.c:1362:glusterd_op_perform_add_bricks] 0-management: type 
is set 0, need to change it
[2018-01-12 16:51:42.070229] I [MSGID: 106143] 
[glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick 
/gdata/brick2/scratch on port 49154
[2018-01-12 16:51:42.070366] I [MSGID: 106143] 
[glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick 
/gdata/brick2/scratch.rdma on port 49155
[2018-01-12 16:51:42.070947] E [MSGID: 106005] 
[glusterd-utils.c:4877:glusterd_brick_start] 0-management: Unable to start 
brick gluster01ib:/gdata/brick2/scratch
[2018-01-12 16:51:42.071005] E [MSGID: 106074] 
[glusterd-brick-ops.c:2493:glusterd_op_add_brick] 0-glusterd: Unable to add 
bricks
[2018-01-12 16:51:42.071023] E [MSGID: 106123] 
[glusterd-mgmt.c:294:gd_mgmt_v3_commit_fn] 0-management: Add-brick commit 
failed.
[2018-01-12 16:51:42.071036] E [MSGID: 106123] 
[glusterd-mgmt.c:1427:glusterd_mgmt_v3_commit] 0-management: Commit failed for 
operation Add brick on local node
[2018-01-12 16:51:42.071053] E [MSGID: 106123] 
[glusterd-mgmt.c:2018:glusterd_mgmt_v3_initiate_all_phases] 0-management: 
Commit Op Failed
[2018-01-12 16:50:09.901594] I [MSGID: 106488] 
[glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: 
Received get vol req
[2018-01-12 16:53:26.610769] I [MSGID: 106488] 
[glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: 
Received get vol req
[2018-01-12 16:53:53.481757] I [MSGID: 106499] 
[glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management: 
Received status volume req for volume scratch
[2018-01-12 16:53:26.611769] I [MSGID: 106488] 
[glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: 
Received get vol req

—— /var/log/glusterfs/bricks/gdata-brick1-scratch.log ——

[2018-01-12 16:46:08.196624] I [MSGID: 100030] [glusterfsd.c:2454:main] 
0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.8.15 
(args: /usr/sbin/glusterfsd -s gluster01ib --volfile-id 
scratch.gluster01ib.gdata-brick1-scratch -p 
/var/lib/glusterd/vols/scratch/run/gluster01ib-gdata-brick1-scratch.pid -S 
/var/run/gluster/55beb2e433599e4960bdbedf427e0451.socket --brick-name 
/gdata/brick1/scratch -l /var/log/glusterfs/bricks/gdata-brick1-scratch.log 
--xlator-option *-posix.glusterd-uuid=39ede14b-884d-40dd-a478-1fe44aef2050 
--brick-port 49152 --xlator-option 
scratch-server.transport.rdma.listen-port=49153 --xlator-option 
scratch-server.listen-port=49152 --volfile-server-transport=socket,rdma)
[2018-01-12 16:46:08.218278] I [MSGID: 101190] 
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with 
index 1
[2018-01-12 16:46:08.235389] I [MSGID: 101173] 
[graph.c:269:gf_add_cmdline_options] 0-scratch-server: adding option 
'listen-port' for volume 'scratch-server' with value '49152'
[2018-01-12 16:46:08.235423] I [MSGID: 101173] 
[graph.c:269:gf_add_cmdline_options] 0-scratch-server: adding option 
'transport.rdma.listen-port' for volume 'scratch-server' with value '49153'
[2018-01-12 16:46:08.235483] I [MSGID: 101173] 
[graph.c:269:gf_add_cmdline_options] 0-scratch-posix: adding option 
'glusterd-uuid' for volume 'scratch-posix' with value 
'39ede14b-884d-40dd-a478-1fe44aef2050'
[2018-01-12 16:46:08.236021] I [MSGID: 115034] 
[server.c:398:_check_for_auth_option] 0-scratch-decompounder: skip format check 
for non-addr auth option auth.login./gdata/brick1/scratch.allow
[2018-01-12 16:46:08.236051] I [MSGID: 115034] 
[server.c:398:_check_for_auth_option] 0-scratch-decompounder: skip format check 
for non-addr auth option 
auth.login.407f9224-14cd-4d8d-a50e-b278e41c915f.password
[2018-01-12 16:46:08.236041] I [MSGID: 101190] 
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with 
index 2
[2018-01-12 16:46:08.237687] I [rpcsvc.c:2243:rpcsvc_set_outstanding_rpc_limit] 
0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
[2018-01-12 16:46:08.237856] W [MSGID: 101002] [options.c:954:xl_opt_validate] 
0-scratch-server: option 'listen-port' is deprecated, preferred is 
'transport.socket.listen-port', continuing with correction
[2018-01-12 16:46:08.238132] E [rpc-transport.c:287:rpc_transport_load] 
0-rpc-transport: /usr/lib64/glusterfs/3.8.15/rpc-transport/rdma.so: cannot open 
shared object file: No such file or directory
[2018-01-12 16:46:08.238151] W [rpc-transport.c:291:rpc_transport_load] 
0-rpc-transport: volume 'rdma.scratch-server': transport-type 'rdma' is not 
valid or not found on this machine
[2018-01-12 16:46:08.238165] W [rpcsvc.c:1667:rpcsvc_create_listener] 
0-rpc-service: cannot create listener, initing the transport failed
[2018-01-12 16:46:08.238180] E [MSGID: 115045] [server.c:1069:init] 
0-scratch-server: creation of 1 listeners failed, continuing with succeeded 
transport
[2018-01-12 16:46:08.347992] I [MSGID: 121050] 
[ctr-helper.c:259:extract_ctr_options] 0-gfdbdatastore: CTR Xlator is disabled.
[2018-01-12 16:46:08.348045] W [MSGID: 101105] 
[gfdb_sqlite3.h:234:gfdb_set_sql_params] 0-scratch-changetimerecorder: Failed 
to retrieve sql-db-pagesize from params.Assigning default value: 4096
[2018-01-12 16:46:08.348067] W [MSGID: 101105] 
[gfdb_sqlite3.h:234:gfdb_set_sql_params] 0-scratch-changetimerecorder: Failed 
to retrieve sql-db-journalmode from params.Assigning default value: wal
[2018-01-12 16:46:08.348086] W [MSGID: 101105] 
[gfdb_sqlite3.h:234:gfdb_set_sql_params] 0-scratch-changetimerecorder: Failed 
to retrieve sql-db-sync from params.Assigning default value: off
[2018-01-12 16:46:08.348102] W [MSGID: 101105] 
[gfdb_sqlite3.h:234:gfdb_set_sql_params] 0-scratch-changetimerecorder: Failed 
to retrieve sql-db-autovacuum from params.Assigning default value: none
[2018-01-12 16:46:08.348651] I [trash.c:2408:init] 0-scratch-trash: no option 
specified for 'eliminate', using NULL
[2018-01-12 16:46:08.370469] W [MSGID: 101174] 
[graph.c:360:_log_if_unknown_option] 0-scratch-server: option 
'rpc-auth.auth-glusterfs' is not recognized
[2018-01-12 16:46:08.370548] W [MSGID: 101174] 
[graph.c:360:_log_if_unknown_option] 0-scratch-server: option 
'rpc-auth.auth-unix' is not recognized
[2018-01-12 16:46:08.370599] W [MSGID: 101174] 
[graph.c:360:_log_if_unknown_option] 0-scratch-server: option 
'rpc-auth.auth-null' is not recognized
[2018-01-12 16:46:08.370686] W [MSGID: 101174] 
[graph.c:360:_log_if_unknown_option] 0-scratch-server: option 'auth-path' is 
not recognized
[2018-01-12 16:46:08.370733] W [MSGID: 101174] 
[graph.c:360:_log_if_unknown_option] 0-scratch-quota: option 'timeout' is not 
recognized
[2018-01-12 16:46:08.370846] W [MSGID: 101174] 
[graph.c:360:_log_if_unknown_option] 0-scratch-trash: option 'brick-path' is 
not recognized
[2018-01-12 16:46:08.387913] W [MSGID: 113026] [posix.c:1534:posix_mkdir] 
0-scratch-posix: mkdir (/.trashcan/): gfid 
(00000000-0000-0000-0000-000000000005) is already associated with directory 
(/gdata/brick1/scratch/.glusterfs/00/00/00000000-0000-0000-0000-000000000001/.trashcan).
 Hence, both directories will share same gfid and this can lead to 
inconsistencies.
[2018-01-12 16:46:08.387972] E [MSGID: 113027] [posix.c:1641:posix_mkdir] 
0-scratch-posix: mkdir of /gdata/brick1/scratch/.trashcan/ failed [File exists]
[2018-01-12 16:46:08.388236] W [MSGID: 113026] [posix.c:1534:posix_mkdir] 
0-scratch-posix: mkdir (/.trashcan/internal_op): gfid 
(00000000-0000-0000-0000-000000000006) is already associated with directory 
(/gdata/brick1/scratch/.glusterfs/00/00/00000000-0000-0000-0000-000000000005/internal_op).
 Hence, both directories will share same gfid and this can lead to 
inconsistencies.
[2018-01-12 16:46:08.388279] E [MSGID: 113027] [posix.c:1641:posix_mkdir] 
0-scratch-posix: mkdir of /gdata/brick1/scratch/.trashcan/internal_op failed 
[File exists]
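
One more thing I notice in the brick log above: the rdma transport fails to 
load (rdma.so: No such file or directory), so only the tcp listener comes up. 
I am not sure this is related, but I will check whether the rdma package is 
installed (glusterfs-rdma is the usual package name on CentOS/RHEL; that 
package name is my assumption):

rpm -q glusterfs-rdma
yum install glusterfs-rdma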

---------------------------------
Jose Sanchez
Center of Advanced Research Computing
Albuquerque, NM 87131-0001
carc.unm.edu



> On Jan 12, 2018, at 8:46 AM, Nithya Balachandran <nbala...@redhat.com> wrote:
> 
> 
> ---------- Forwarded message ----------
> From: Jose Sanchez <joses...@carc.unm.edu>
> Date: 11 January 2018 at 22:05
> Subject: Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks each.
> To: Nithya Balachandran <nbala...@redhat.com>
> Cc: gluster-users <gluster-users@gluster.org>
> 
> 
> Hi Nithya
> 
> Thanks for helping me with this. I understand now, but I have a few 
> questions.
> 
> When I had it set up as a replica (just 2 nodes with 2 bricks) and tried to 
> add the new bricks, it failed:
> 
>> [root@gluster01 ~]# gluster volume add-brick scratch replica 2 
>> gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
>> volume add-brick: failed: /gdata/brick2/scratch is already part of a volume
> 
> Did you try the add brick operation several times with the same bricks? If 
> yes, that could be the cause as Gluster sets xattrs on the brick root 
> directory.
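> 
> A quick way to verify is to dump the xattrs on the brick root on both nodes 
> (getfattr comes from the attr package):
> 
> getfattr -d -m . -e hex /gdata/brick2/scratch
> 
> If trusted.glusterfs.volume-id or trusted.gfid are still present, the brick 
> will be rejected as "already part of a volume".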
> 
> After that, I ran the status and info commands on it. In the status I get 
> just the two bricks:
> 
>> Brick gluster01ib:/gdata/brick1/scratch     49152     49153      Y       3140
>> Brick gluster02ib:/gdata/brick1/scratch     49153     49154      Y       2634
> 
> but in the info I get all 4 (2 x 2). Is this normal behavior?
> 
> So the brick count does not match for the same volume in the gluster volume 
> status and gluster volume info commands? No, that is not normal.
> 
>> Bricks:
>> Brick1: gluster01ib:/gdata/brick1/scratch
>> Brick2: gluster02ib:/gdata/brick1/scratch
>> Brick3: gluster01ib:/gdata/brick2/scratch
>> Brick4: gluster02ib:/gdata/brick2/scratch
> 
> 
>  
> Now when I try to mount it, I still get only 14TB and not 28TB. Am I doing 
> something wrong? Also, when I stop and start the services, the cluster goes 
> back to Replicate mode from Distributed-Replicate.
> 
> If the fuse mount sees only 2 bricks, that would explain the 14TB.
> 
> gluster01ib:/scratch   14T   34M   14T   1% /mnt/gluster_test
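> 
> One quick test, assuming the client simply has a stale volume graph: 
> unmount and remount, then check the size again:
> 
> umount /mnt/gluster_test
> mount -t glusterfs gluster01ib:/scratch /mnt/gluster_test
> df -h /mnt/gluster_test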
> 
> —— Gluster mount log file ——
> 
> [2018-01-11 16:06:44.963043] I [MSGID: 114046] 
> [client-handshake.c:1216:client_setvolume_cbk] 0-scratch-client-1: Connected 
> to scratch-client-1, attached to remote volume '/gdata/brick1/scratch'.
> [2018-01-11 16:06:44.963065] I [MSGID: 114047] 
> [client-handshake.c:1227:client_setvolume_cbk] 0-scratch-client-1: Server and 
> Client lk-version numbers are not same, reopening the fds
> [2018-01-11 16:06:44.968291] I [MSGID: 114035] 
> [client-handshake.c:202:client_set_lk_version_cbk] 0-scratch-client-1: Server 
> lk version = 1
> [2018-01-11 16:06:44.968404] I [fuse-bridge.c:4147:fuse_init] 
> 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 
> 7.22
> [2018-01-11 16:06:44.968438] I [fuse-bridge.c:4832:fuse_graph_sync] 0-fuse: 
> switched to graph 0
> [2018-01-11 16:06:44.969544] I [MSGID: 108031] 
> [afr-common.c:2166:afr_local_discovery_cbk] 0-scratch-replicate-0: selecting 
> local read_child scratch-client-0
> 
> —— CLI Log File ——
> 
> [root@gluster01 glusterfs]# tail cli.log
> [2018-01-11 15:54:14.468122] I [socket.c:2403:socket_event_handler] 
> 0-transport: disconnecting now
> [2018-01-11 15:54:14.468737] I [cli-rpc-ops.c:817:gf_cli_get_volume_cbk] 
> 0-cli: Received resp to get vol: 0
> [2018-01-11 15:54:14.469462] I [cli-rpc-ops.c:817:gf_cli_get_volume_cbk] 
> 0-cli: Received resp to get vol: 0
> [2018-01-11 15:54:14.469530] I [input.c:31:cli_batch] 0-: Exiting with: 0
> [2018-01-11 16:03:40.422568] I [cli.c:728:main] 0-cli: Started running 
> gluster with version 3.8.15
> [2018-01-11 16:03:40.430195] I 
> [cli-cmd-volume.c:1828:cli_check_gsync_present] 0-: geo-replication not 
> installed
> [2018-01-11 16:03:40.430492] I [MSGID: 101190] 
> [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with 
> index 1
> [2018-01-11 16:03:40.430568] I [socket.c:2403:socket_event_handler] 
> 0-transport: disconnecting now
> [2018-01-11 16:03:40.485256] I [cli-rpc-ops.c:2244:gf_cli_set_volume_cbk] 
> 0-cli: Received resp to set
> [2018-01-11 16:03:40.485497] I [input.c:31:cli_batch] 0-: Exiting with: 0
> 
> —— etc-glusterfs-glusterd.vol.log ——
> 
> [2018-01-10 14:59:23.676814] I [MSGID: 106499] 
> [glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management: 
> Received status volume req for volume scratch
> [2018-01-10 15:00:29.516071] I [MSGID: 106488] 
> [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: 
> Received get vol req
> [2018-01-10 15:01:09.872082] I [MSGID: 106482] 
> [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received 
> add brick req
> [2018-01-10 15:01:09.872128] I [MSGID: 106578] 
> [glusterd-brick-ops.c:499:__glusterd_handle_add_brick] 0-management: 
> replica-count is 2
> [2018-01-10 15:01:09.876763] E [MSGID: 106451] 
> [glusterd-utils.c:6207:glusterd_is_path_in_use] 0-management: 
> /gdata/brick2/scratch is already part of a volume [File exists]
> [2018-01-10 15:01:09.876807] W [MSGID: 106122] 
> [glusterd-mgmt.c:188:gd_mgmt_v3_pre_validate_fn] 0-management: ADD-brick 
> prevalidation failed.
> [2018-01-10 15:01:09.876822] E [MSGID: 106122] 
> [glusterd-mgmt.c:884:glusterd_mgmt_v3_pre_validate] 0-management: Pre 
> Validation failed for operation Add brick on local node
> [2018-01-10 15:01:09.876834] E [MSGID: 106122] 
> [glusterd-mgmt.c:2009:glusterd_mgmt_v3_initiate_all_phases] 0-management: Pre 
> Validation Failed
> [2018-01-10 15:01:16.005881] I [run.c:191:runner_log] 
> (-->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0x33045) 
> [0x7f1066d15045] 
> -->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0xcbd85) 
> [0x7f1066dadd85] -->/lib64/libglusterfs.so.0(runner_log+0x115) 
> [0x7f10726491e5] ) 0-management: Ran script: 
> /var/lib/glusterd/hooks/1/add-brick/pre/S28Quota-enable-root-xattr-heal.sh 
> --volname=scratch --version=1 --volume-op=add-brick 
> --gd-workdir=/var/lib/glusterd
> [2018-01-10 15:01:15.982929] E [MSGID: 106451] 
> [glusterd-utils.c:6207:glusterd_is_path_in_use] 0-management: 
> /gdata/brick2/scratch is already part of a volume [File exists]
> [2018-01-10 15:01:16.005959] I [MSGID: 106578] 
> [glusterd-brick-ops.c:1352:glusterd_op_perform_add_bricks] 0-management: 
> replica-count is set 0
> 
> Atin, is this correct? It looks like it tries to add the bricks even though 
> the prevalidation failed.
> 
> 
> [2018-01-10 15:01:16.006018] I [MSGID: 106578] 
> [glusterd-brick-ops.c:1362:glusterd_op_perform_add_bricks] 0-management: type 
> is set 0, need to change it
> [2018-01-10 15:01:16.062001] I [MSGID: 106143] 
> [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick 
> /gdata/brick2/scratch on port 49154
> [2018-01-10 15:01:16.062137] I [MSGID: 106143] 
> [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick 
> /gdata/brick2/scratch.rdma on port 49155
> [2018-01-10 15:01:16.062673] E [MSGID: 106005] 
> [glusterd-utils.c:4877:glusterd_brick_start] 0-management: Unable to start 
> brick gluster01ib:/gdata/brick2/scratch
> [2018-01-10 15:01:16.062715] E [MSGID: 106074] 
> [glusterd-brick-ops.c:2493:glusterd_op_add_brick] 0-glusterd: Unable to add 
> bricks
> [2018-01-10 15:01:16.062729] E [MSGID: 106123] 
> [glusterd-mgmt.c:294:gd_mgmt_v3_commit_fn] 0-management: Add-brick commit 
> failed.
> [2018-01-10 15:01:16.062741] E [MSGID: 106123] 
> [glusterd-mgmt.c:1427:glusterd_mgmt_v3_commit] 0-management: Commit failed 
> for operation Add brick on local node
> [2018-01-10 15:01:16.062754] E [MSGID: 106123] 
> [glusterd-mgmt.c:2018:glusterd_mgmt_v3_initiate_all_phases] 0-management: 
> Commit Op Failed
> [2018-01-10 15:01:35.914090] I [MSGID: 106499] 
> [glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management: 
> Received status volume req for volume scratch
> [2018-01-10 15:01:15.979236] I [MSGID: 106482] 
> [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received 
> add brick req
> [2018-01-10 15:01:15.979250] I [MSGID: 106578] 
> [glusterd-brick-ops.c:499:__glusterd_handle_add_brick] 0-management: 
> replica-count is 2
> The message "I [MSGID: 106488] 
> [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: 
> Received get vol req" repeated 3 times between [2018-01-10 15:00:29.516071] 
> and [2018-01-10 15:01:39.652014]
> [2018-01-10 16:16:42.776653] I [MSGID: 106488] 
> [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: 
> Received get vol req
> [2018-01-10 16:16:42.777614] I [MSGID: 106488] 
> [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: 
> Received get vol req
> [2018-01-11 15:45:09.023393] I [MSGID: 106488] 
> [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: 
> Received get vol req
> [2018-01-11 15:45:19.916301] I [MSGID: 106499] 
> [glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management: 
> Received status volume req for volume scratch
> [2018-01-11 15:45:09.024217] I [MSGID: 106488] 
> [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: 
> Received get vol req
> [2018-01-11 15:54:10.172137] I [MSGID: 106499] 
> [glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management: 
> Received status volume req for volume scratch
> [2018-01-11 15:54:14.468529] I [MSGID: 106488] 
> [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: 
> Received get vol req
> [2018-01-11 15:54:14.469408] I [MSGID: 106488] 
> [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: 
> Received get vol req
> 
> Thanks
> 
> Jose
> 
> 
> 
> 
> 
> 
> ---------------------------------
> Jose Sanchez
> Center of Advanced Research Computing
> Albuquerque, NM 87131
> 
> 
>> On Jan 10, 2018, at 9:02 PM, Nithya Balachandran <nbala...@redhat.com> wrote:
>> 
>> Hi Jose,
>> 
>> Gluster is working as expected. The Distributed-Replicate type just means 
>> that there are now 2 replica sets and files will be distributed across them. 
>> A volume of type Replicate (1 x n, where n is the number of bricks in the 
>> replica set) indicates there is no distribution (all files on the volume 
>> will be present on all the bricks in the volume).
>> 
>> A volume of type Distributed-Replicate indicates the volume is both 
>> distributed (files will only be created on one of the replica sets) and 
>> replicated. So in the above example, a file will exist on either Brick1 
>> and Brick2, or on Brick3 and Brick4.
>> 
>> 
>> After the add brick, the volume will have a total capacity of 28TB and store 
>> 2 copies of every file. Let me know if that is not what you are looking for.
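>> 
>> Once the add-brick goes through, you will likely also want a rebalance so 
>> that existing files are spread across both replica sets (new files are 
>> distributed either way):
>> 
>> gluster volume rebalance scratch start
>> gluster volume rebalance scratch status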
>> 
>> 
>> Regards,
>> Nithya
>> 
>> 
>> On 10 January 2018 at 20:40, Jose Sanchez <joses...@carc.unm.edu> wrote:
>> 
>> 
>> Hi Nithya
>> 
>> This is what I have so far. I have peered both cluster nodes together and 
>> created a replica from the first brick on each node. Now when I try to add 
>> the second pair of bricks, I get the error that the path is already part of 
>> a volume, and when I run gluster volume info, I see that it has switched to 
>> Distributed-Replicate.
>> 
>> Thanks
>> 
>> Jose
>> 
>> 
>> 
>> 
>> 
>> [root@gluster01 ~]# gluster volume status
>> Status of volume: scratch
>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick gluster01ib:/gdata/brick1/scratch     49152     49153      Y       3140
>> Brick gluster02ib:/gdata/brick1/scratch     49153     49154      Y       2634
>> Self-heal Daemon on localhost               N/A       N/A        Y       3132
>> Self-heal Daemon on gluster02ib             N/A       N/A        Y       2626
>>  
>> Task Status of Volume scratch
>> ------------------------------------------------------------------------------
>> There are no active volume tasks
>>  
>> [root@gluster01 ~]#
>> 
>> [root@gluster01 ~]# gluster volume info
>>  
>> Volume Name: scratch
>> Type: Replicate
>> Volume ID: a6e20f7d-13ed-4293-ab8b-d783d1748246
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp,rdma
>> Bricks:
>> Brick1: gluster01ib:/gdata/brick1/scratch
>> Brick2: gluster02ib:/gdata/brick1/scratch
>> Options Reconfigured:
>> performance.readdir-ahead: on
>> nfs.disable: on
>> [root@gluster01 ~]#
>> 
>> 
>> -------------------------------------
>> 
>> [root@gluster01 ~]# gluster volume add-brick scratch replica 2 
>> gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
>> volume add-brick: failed: /gdata/brick2/scratch is already part of a volume
>> 
>> 
>> [root@gluster01 ~]# gluster volume status
>> Status of volume: scratch
>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick gluster01ib:/gdata/brick1/scratch     49152     49153      Y       3140
>> Brick gluster02ib:/gdata/brick1/scratch     49153     49154      Y       2634
>> Self-heal Daemon on gluster02ib             N/A       N/A        Y       2626
>> Self-heal Daemon on localhost               N/A       N/A        Y       3132
>>  
>> Task Status of Volume scratch
>> ------------------------------------------------------------------------------
>> There are no active volume tasks
>>  
>> [root@gluster01 ~]# gluster volume info
>>  
>> Volume Name: scratch
>> Type: Distributed-Replicate
>> Volume ID: a6e20f7d-13ed-4293-ab8b-d783d1748246
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 2 x 2 = 4
>> Transport-type: tcp,rdma
>> Bricks:
>> Brick1: gluster01ib:/gdata/brick1/scratch
>> Brick2: gluster02ib:/gdata/brick1/scratch
>> Brick3: gluster01ib:/gdata/brick2/scratch
>> Brick4: gluster02ib:/gdata/brick2/scratch
>> Options Reconfigured:
>> performance.readdir-ahead: on
>> nfs.disable: on
>> [root@gluster01 ~]# 
>> 
>> 
>> 
>> --------------------------------
>> Jose Sanchez
>> Center of Advanced Research Computing
>> Albuquerque, NM 87131-0001
>> carc.unm.edu
>> 
>> 
>>> On Jan 9, 2018, at 9:04 PM, Nithya Balachandran <nbala...@redhat.com> wrote:
>>> 
>>> Hi,
>>> 
>>> Please let us know what commands you ran so far and the output of the 
>>> gluster volume info command.
>>> 
>>> Thanks,
>>> Nithya
>>> 
>>> On 9 January 2018 at 23:06, Jose Sanchez <joses...@carc.unm.edu> wrote:
>>> Hello
>>> 
>>> We are trying to set up Gluster for our project/scratch storage on an HPC 
>>> machine, using replicated mode with 2 nodes and 2 bricks each (14TB each).
>>> 
>>> Our goal is to have a replicated system between nodes 1 and 2 (the A 
>>> bricks) and then add 2 more bricks (the B bricks) from the same 2 nodes, 
>>> so we end up with a total of 28TB in replicated mode:
>>> 
>>> Node 1 [ (Brick A) (Brick B) ]
>>> Node 2 [ (Brick A) (Brick B) ]
>>> --------------------------------------------
>>>             14TB + 14TB = 28TB
>>> 
>>> At this point I was able to create the replica between nodes 1 and 2 
>>> (brick A), but I have not been able to grow the replica itself: Gluster 
>>> switches to Distributed-Replicate when I add the second pair of bricks, 
>>> and the mount still shows only 14TB.
>>> 
>>> Any help will be appreciated.
>>> 
>>> Thanks
>>> 
>>> Jose
>>> 
>>> ---------------------------------
>>> Jose Sanchez
>>> Center of Advanced Research Computing
>>> Albuquerque, NM 87131
>>> carc.unm.edu
>>> 
>>> 
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>> 
>> 
>> 
> 
> 

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
