Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks each.

2018-01-16 Thread Jose Sanchez
Hello Atin

I have tried the force option at the end of the add-brick command, and it still 
says that it failed. The bricks show up in the status output but not in the info 
output. I'm running 3.8.15.
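
For reference, a quick sketch of how the two views can be cross-checked, and of 
how to see whether the second brick pair still carries Gluster metadata from the 
earlier failed attempts (this assumes the same brick paths quoted further down 
in the thread; getfattr comes from the attr package):

# what glusterd reports for the volume
gluster volume info scratch
gluster volume status scratch
# what the brick root itself carries; look for trusted.glusterfs.volume-id and trusted.gfid
getfattr -d -m . -e hex /gdata/brick2/scratch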

Any suggestions will be appreciated

Thanks

Jose





-
Jose Sanchez
Systems/Network Analyst 1
Center of Advanced Research Computing
1601 Central Ave.
MSC 01 1190
Albuquerque, NM 87131-0001
carc.unm.edu <http://carc.unm.edu/>
575.636.4232

> On Jan 15, 2018, at 6:06 AM, Atin Mukherjee <amukh...@redhat.com> wrote:
> 
> 
> On Fri, 12 Jan 2018 at 21:16, Nithya Balachandran <nbala...@redhat.com 
> <mailto:nbala...@redhat.com>> wrote:
> -- Forwarded message --
> From: Jose Sanchez <joses...@carc.unm.edu <mailto:joses...@carc.unm.edu>>
> Date: 11 January 2018 at 22:05
> Subject: Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks 
> each.
> To: Nithya Balachandran <nbala...@redhat.com <mailto:nbala...@redhat.com>>
> Cc: gluster-users <gluster-users@gluster.org 
> <mailto:gluster-users@gluster.org>>
> 
> 
> Hi Nithya
> 
> Thanks for helping me with this. I understand now, but I have a few 
> questions.
> 
> When I had it set up as a replica (just 2 nodes with 2 bricks) and tried to 
> add a brick, it failed.
> 
>> [root@gluster01 ~]# gluster volume add-brick scratch replica 2 
>> gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
>> volume add-brick: failed: /gdata/brick2/scratch is already part of a volume
> 
> Did you try the add brick operation several times with the same bricks? If 
> yes, that could be the cause as Gluster sets xattrs on the brick root 
> directory.
> 
> After that, I ran the status and info commands on it; the status shows just 
> the two bricks
> 
>> Brick gluster01ib:/gdata/brick1/scratch 49152 49153  Y   3140
>> Brick gluster02ib:/gdata/brick1/scratch 49153 49154  Y   2634
> 
> but the info shows all 4 (2 x 2). Is this normal behavior?
> 
> So the brick count does not match for the same volume in the gluster volume 
> status and gluster volume info commands? No, that is not normal.
> 
>> Bricks:
>> Brick1: gluster01ib:/gdata/brick1/scratch
>> Brick2: gluster02ib:/gdata/brick1/scratch
>> Brick3: gluster01ib:/gdata/brick2/scratch
>> Brick4: gluster02ib:/gdata/brick2/scratch
> 
> 
>  
> Now when I try to mount it, I still get only 14 TB and not 28. Am I doing 
> something wrong? Also, when I start/stop services, the cluster goes back to 
> replicated mode from distributed-replicate.
> 
> If the fuse mount sees only 2 bricks, that would explain the 14TB.
> 
> gluster01ib:/scratch   14T   34M   14T   1% /mnt/gluster_test
> 
> —— Gluster mount log file ——
> 
> [2018-01-11 16:06:44.963043] I [MSGID: 114046] 
> [client-handshake.c:1216:client_setvolume_cbk] 0-scratch-client-1: Connected 
> to scratch-client-1, attached to remote volume '/gdata/brick1/scratch'.
> [2018-01-11 16:06:44.963065] I [MSGID: 114047] 
> [client-handshake.c:1227:client_setvolume_cbk] 0-scratch-client-1: Server and 
> Client lk-version numbers are not same, reopening the fds
> [2018-01-11 16:06:44.968291] I [MSGID: 114035] 
> [client-handshake.c:202:client_set_lk_version_cbk] 0-scratch-client-1: Server 
> lk version = 1
> [2018-01-11 16:06:44.968404] I [fuse-bridge.c:4147:fuse_init] 
> 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 
> 7.22
> [2018-01-11 16:06:44.968438] I [fuse-bridge.c:4832:fuse_graph_sync] 0-fuse: 
> switched to graph 0
> [2018-01-11 16:06:44.969544] I [MSGID: 108031] 
> [afr-common.c:2166:afr_local_discovery_cbk] 0-scratch-replicate-0: selecting 
> local read_child scratch-client-0
> 
> —— CLI  Log File  ——
> 
> [root@gluster01 glusterfs]# tail cli.log
> [2018-01-11 15:54:14.468122] I [socket.c:2403:socket_event_handler] 
> 0-transport: disconnecting now
> [2018-01-11 15:54:14.468737] I [cli-rpc-ops.c:817:gf_cli_get_volume_cbk] 
> 0-cli: Received resp to get vol: 0
> [2018-01-11 15:54:14.469462] I [cli-rpc-ops.c:817:gf_cli_get_volume_cbk] 
> 0-cli: Received resp to get vol: 0
> [2018-01-11 15:54:14.469530] I [input.c:31:cli_batch] 0-: Exiting with: 0
> [2018-01-11 16:03:40.422568] I [cli.c:728:main] 0-cli: Started running 
> gluster with version 3.8.15
> [2018-01-11 16:03:40.430195] I 
> [cli-cmd-volume.c:1828:cli_check_gsync_present] 0-: geo-replication not 
> installed
> [2018-01-11 16:03:40.430492] I [MSGID: 101190] 
> [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with 
> index 1
> [2018-01-11 16:03:40.430568] I [socket.c:2403:socket_event_handler] 
> 0-transport: disconnecting now

Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks each.

2018-01-15 Thread Atin Mukherjee
On Fri, 12 Jan 2018 at 21:16, Nithya Balachandran <nbala...@redhat.com>
wrote:

> -- Forwarded message --
> From: Jose Sanchez <joses...@carc.unm.edu>
> Date: 11 January 2018 at 22:05
> Subject: Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks
> each.
> To: Nithya Balachandran <nbala...@redhat.com>
> Cc: gluster-users <gluster-users@gluster.org>
>
>
> Hi Nithya
>
> Thanks for helping me with this. I understand now, but I have a few
> questions.
>
> When I had it set up as a replica (just 2 nodes with 2 bricks) and tried to
> add a brick, it failed.
>
> [root@gluster01 ~]# gluster volume add-brick scratch replica 2
>> gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
>> volume add-brick: failed: /gdata/brick2/scratch is already part of a
>> volume
>>
>
> Did you try the add brick operation several times with the same bricks? If
> yes, that could be the cause as Gluster sets xattrs on the brick root
> directory.
>
> After that, I ran the status and info commands on it; the status shows
> just the two bricks
>
> Brick gluster01ib:/gdata/brick1/scratch 49152 49153  Y   3140
>> Brick gluster02ib:/gdata/brick1/scratch 49153 49154  Y   2634
>>
>
> but the info shows all 4 (2 x 2). Is this normal behavior?
>
> So the brick count does not match for the same volume in the gluster
> volume status and gluster volume info commands? No, that is not normal.
>
> Bricks:
>> Brick1: gluster01ib:/gdata/brick1/scratch
>> Brick2: gluster02ib:/gdata/brick1/scratch
>> Brick3: gluster01ib:/gdata/brick2/scratch
>> Brick4: gluster02ib:/gdata/brick2/scratch
>>
>
>
> Now when I try to mount it, I still get only 14 TB and not 28. Am I doing
> something wrong? Also, when I start/stop services, the cluster goes back to
> replicated mode from distributed-replicate.
>
> If the fuse mount sees only 2 bricks, that would explain the 14TB.
>
> gluster01ib:/scratch   14T   34M   14T   1% /mnt/gluster_test
>
> —— Gluster mount log file ——
>
> [2018-01-11 16:06:44.963043] I [MSGID: 114046]
> [client-handshake.c:1216:client_setvolume_cbk] 0-scratch-client-1:
> Connected to scratch-client-1, attached to remote volume
> '/gdata/brick1/scratch'.
> [2018-01-11 16:06:44.963065] I [MSGID: 114047]
> [client-handshake.c:1227:client_setvolume_cbk] 0-scratch-client-1: Server
> and Client lk-version numbers are not same, reopening the fds
> [2018-01-11 16:06:44.968291] I [MSGID: 114035]
> [client-handshake.c:202:client_set_lk_version_cbk] 0-scratch-client-1:
> Server lk version = 1
> [2018-01-11 16:06:44.968404] I [fuse-bridge.c:4147:fuse_init]
> 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel
> 7.22
> [2018-01-11 16:06:44.968438] I [fuse-bridge.c:4832:fuse_graph_sync]
> 0-fuse: switched to graph 0
> [2018-01-11 16:06:44.969544] I [MSGID: 108031]
> [afr-common.c:2166:afr_local_discovery_cbk] 0-scratch-replicate-0:
> selecting local read_child scratch-client-0
>
> —— CLI  Log File  ——
>
> [root@gluster01 glusterfs]# tail cli.log
> [2018-01-11 15:54:14.468122] I [socket.c:2403:socket_event_handler]
> 0-transport: disconnecting now
> [2018-01-11 15:54:14.468737] I [cli-rpc-ops.c:817:gf_cli_get_volume_cbk]
> 0-cli: Received resp to get vol: 0
> [2018-01-11 15:54:14.469462] I [cli-rpc-ops.c:817:gf_cli_get_volume_cbk]
> 0-cli: Received resp to get vol: 0
> [2018-01-11 15:54:14.469530] I [input.c:31:cli_batch] 0-: Exiting with: 0
> [2018-01-11 16:03:40.422568] I [cli.c:728:main] 0-cli: Started running
> gluster with version 3.8.15
> [2018-01-11 16:03:40.430195] I
> [cli-cmd-volume.c:1828:cli_check_gsync_present] 0-: geo-replication not
> installed
> [2018-01-11 16:03:40.430492] I [MSGID: 101190]
> [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 1
> [2018-01-11 16:03:40.430568] I [socket.c:2403:socket_event_handler]
> 0-transport: disconnecting now
> [2018-01-11 16:03:40.485256] I [cli-rpc-ops.c:2244:gf_cli_set_volume_cbk]
> 0-cli: Received resp to set
> [2018-01-11 16:03:40.485497] I [input.c:31:cli_batch] 0-: Exiting with: 0
>
> —— etc-glusterfs-glusterd.vol.log —
>
> [2018-01-10 14:59:23.676814] I [MSGID: 106499]
> [glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management:
> Received status volume req for volume scratch
> [2018-01-10 15:00:29.516071] I [MSGID: 106488]
> [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management:
> Received get vol req
> [2018-01-10 15:01:09.872082] I [MSGID: 106482]
> [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management:
> Received add brick req

Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks each.

2018-01-12 Thread Jose Sanchez
c-service: Configured rpc.outstanding-rpc-limit with value 64
[2018-01-12 16:46:08.237856] W [MSGID: 101002] [options.c:954:xl_opt_validate] 
0-scratch-server: option 'listen-port' is deprecated, preferred is 
'transport.socket.listen-port', continuing with correction
[2018-01-12 16:46:08.238132] E [rpc-transport.c:287:rpc_transport_load] 
0-rpc-transport: /usr/lib64/glusterfs/3.8.15/rpc-transport/rdma.so: cannot open 
shared object file: No such file or directory
[2018-01-12 16:46:08.238151] W [rpc-transport.c:291:rpc_transport_load] 
0-rpc-transport: volume 'rdma.scratch-server': transport-type 'rdma' is not 
valid or not found on this machine
[2018-01-12 16:46:08.238165] W [rpcsvc.c:1667:rpcsvc_create_listener] 
0-rpc-service: cannot create listener, initing the transport failed
[2018-01-12 16:46:08.238180] E [MSGID: 115045] [server.c:1069:init] 
0-scratch-server: creation of 1 listeners failed, continuing with succeeded 
transport
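
The rdma messages above just mean the volume is configured with transport 
tcp,rdma but the RDMA transport module is not present on this node. A hedged 
way to confirm that, assuming an RPM-based install of 3.8.15:

ls /usr/lib64/glusterfs/3.8.15/rpc-transport/
rpm -qa | grep glusterfs-rdma
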
[2018-01-12 16:46:08.347992] I [MSGID: 121050] 
[ctr-helper.c:259:extract_ctr_options] 0-gfdbdatastore: CTR Xlator is disabled.
[2018-01-12 16:46:08.348045] W [MSGID: 101105] 
[gfdb_sqlite3.h:234:gfdb_set_sql_params] 0-scratch-changetimerecorder: Failed 
to retrieve sql-db-pagesize from params.Assigning default value: 4096
[2018-01-12 16:46:08.348067] W [MSGID: 101105] 
[gfdb_sqlite3.h:234:gfdb_set_sql_params] 0-scratch-changetimerecorder: Failed 
to retrieve sql-db-journalmode from params.Assigning default value: wal
[2018-01-12 16:46:08.348086] W [MSGID: 101105] 
[gfdb_sqlite3.h:234:gfdb_set_sql_params] 0-scratch-changetimerecorder: Failed 
to retrieve sql-db-sync from params.Assigning default value: off
[2018-01-12 16:46:08.348102] W [MSGID: 101105] 
[gfdb_sqlite3.h:234:gfdb_set_sql_params] 0-scratch-changetimerecorder: Failed 
to retrieve sql-db-autovacuum from params.Assigning default value: none
[2018-01-12 16:46:08.348651] I [trash.c:2408:init] 0-scratch-trash: no option 
specified for 'eliminate', using NULL
[2018-01-12 16:46:08.370469] W [MSGID: 101174] 
[graph.c:360:_log_if_unknown_option] 0-scratch-server: option 
'rpc-auth.auth-glusterfs' is not recognized
[2018-01-12 16:46:08.370548] W [MSGID: 101174] 
[graph.c:360:_log_if_unknown_option] 0-scratch-server: option 
'rpc-auth.auth-unix' is not recognized
[2018-01-12 16:46:08.370599] W [MSGID: 101174] 
[graph.c:360:_log_if_unknown_option] 0-scratch-server: option 
'rpc-auth.auth-null' is not recognized
[2018-01-12 16:46:08.370686] W [MSGID: 101174] 
[graph.c:360:_log_if_unknown_option] 0-scratch-server: option 'auth-path' is 
not recognized
[2018-01-12 16:46:08.370733] W [MSGID: 101174] 
[graph.c:360:_log_if_unknown_option] 0-scratch-quota: option 'timeout' is not 
recognized
[2018-01-12 16:46:08.370846] W [MSGID: 101174] 
[graph.c:360:_log_if_unknown_option] 0-scratch-trash: option 'brick-path' is 
not recognized
[2018-01-12 16:46:08.387913] W [MSGID: 113026] [posix.c:1534:posix_mkdir] 
0-scratch-posix: mkdir (/.trashcan/): gfid 
(----0005) is already associated with directory 
(/gdata/brick1/scratch/.glusterfs/00/00/----0001/.trashcan).
 Hence, both directories will share same gfid and this can lead to 
inconsistencies.
[2018-01-12 16:46:08.387972] E [MSGID: 113027] [posix.c:1641:posix_mkdir] 
0-scratch-posix: mkdir of /gdata/brick1/scratch/.trashcan/ failed [File exists]
[2018-01-12 16:46:08.388236] W [MSGID: 113026] [posix.c:1534:posix_mkdir] 
0-scratch-posix: mkdir (/.trashcan/internal_op): gfid 
(----0006) is already associated with directory 
(/gdata/brick1/scratch/.glusterfs/00/00/----0005/internal_op).
 Hence, both directories will share same gfid and this can lead to 
inconsistencies.
[2018-01-12 16:46:08.388279] E [MSGID: 113027] [posix.c:1641:posix_mkdir] 
0-scratch-posix: mkdir of /gdata/brick1/scratch/.trashcan/internal_op failed 
[File exists]
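
As a side note, a hedged way to inspect the gfid that an existing brick 
directory already carries (paths taken from the log lines above) is:

getfattr -n trusted.gfid -e hex /gdata/brick1/scratch/.trashcan
getfattr -d -m . -e hex /gdata/brick1/scratch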

-
Jose Sanchez
Center of Advanced Research Computing
Albuquerque, NM 87131-0001
carc.unm.edu <http://carc.unm.edu/>



> On Jan 12, 2018, at 8:46 AM, Nithya Balachandran <nbala...@redhat.com> wrote:
> 
> 
> -- Forwarded message --
> From: Jose Sanchez <joses...@carc.unm.edu <mailto:joses...@carc.unm.edu>>
> Date: 11 January 2018 at 22:05
> Subject: Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks 
> each.
> To: Nithya Balachandran <nbala...@redhat.com <mailto:nbala...@redhat.com>>
> Cc: gluster-users <gluster-users@gluster.org 
> <mailto:gluster-users@gluster.org>>
> 
> 
> Hi Nithya
> 
> Thanks for helping me with this. I understand now, but I have a few 
> questions.
> 
> When I had it set up as a replica (just 2 nodes with 2 bricks) and tried to 
> add a brick, it failed.
> 
>> [root@gluster01 ~]# gluster volume add-brick scratch replica 2 
>> gluster01ib:/gdata/brick2/scratch

Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks each.

2018-01-12 Thread Nithya Balachandran
-- Forwarded message --
From: Jose Sanchez <joses...@carc.unm.edu>
Date: 11 January 2018 at 22:05
Subject: Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks
each.
To: Nithya Balachandran <nbala...@redhat.com>
Cc: gluster-users <gluster-users@gluster.org>


Hi Nithya

Thanks for helping me with this. I understand now, but I have a few
questions.

When I had it set up as a replica (just 2 nodes with 2 bricks) and tried to
add a brick, it failed.

[root@gluster01 ~]# gluster volume add-brick scratch replica 2
> gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
> volume add-brick: failed: /gdata/brick2/scratch is already part of a volume
>

Did you try the add brick operation several times with the same bricks? If
yes, that could be the cause as Gluster sets xattrs on the brick root
directory.
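
If that is what happened, a minimal sketch of how the leftover metadata is
usually inspected and cleared (assuming /gdata/brick2/scratch is the unused
brick from the failed add-brick and holds no data you need) would be:

# show the xattrs Gluster left behind on the brick root
getfattr -d -m . -e hex /gdata/brick2/scratch
# remove the volume-id and gfid xattrs plus the internal .glusterfs directory,
# then retry the add-brick
setfattr -x trusted.glusterfs.volume-id /gdata/brick2/scratch
setfattr -x trusted.gfid /gdata/brick2/scratch
rm -rf /gdata/brick2/scratch/.glusterfs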

After that, I ran the status and info commands on it; the status shows
just the two bricks

Brick gluster01ib:/gdata/brick1/scratch 49152 49153  Y   3140
> Brick gluster02ib:/gdata/brick1/scratch 49153 49154  Y   2634
>

but the info shows all 4 (2 x 2). Is this normal behavior?

So the brick count does not match for the same volume in the gluster volume
status and gluster volume info commands? No, that is not normal.

Bricks:
> Brick1: gluster01ib:/gdata/brick1/scratch
> Brick2: gluster02ib:/gdata/brick1/scratch
> Brick3: gluster01ib:/gdata/brick2/scratch
> Brick4: gluster02ib:/gdata/brick2/scratch
>


Now when I try to mount it, I still get only 14 TB and not 28. Am I doing
something wrong? Also, when I start/stop services, the cluster goes back to
replicated mode from distributed-replicate.

If the fuse mount sees only 2 bricks, that would explain the 14TB.
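
A hedged way to check what the client graph actually contains is to grep the
mount log for the connected client subvolumes (the log path below is an
assumption derived from the /mnt/gluster_test mount point):

grep "Connected to scratch-client" /var/log/glusterfs/mnt-gluster_test.log

A 2 x 2 volume should show scratch-client-0 through scratch-client-3; seeing
only client-0 and client-1 would match the 14TB reported by the mount.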

gluster01ib:/scratch   14T   34M   14T   1% /mnt/gluster_test

—— Gluster mount log file ——

[2018-01-11 16:06:44.963043] I [MSGID: 114046]
[client-handshake.c:1216:client_setvolume_cbk] 0-scratch-client-1:
Connected to scratch-client-1, attached to remote volume
'/gdata/brick1/scratch'.
[2018-01-11 16:06:44.963065] I [MSGID: 114047]
[client-handshake.c:1227:client_setvolume_cbk] 0-scratch-client-1: Server
and Client lk-version numbers are not same, reopening the fds
[2018-01-11 16:06:44.968291] I [MSGID: 114035]
[client-handshake.c:202:client_set_lk_version_cbk] 0-scratch-client-1:
Server lk version = 1
[2018-01-11 16:06:44.968404] I [fuse-bridge.c:4147:fuse_init]
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel
7.22
[2018-01-11 16:06:44.968438] I [fuse-bridge.c:4832:fuse_graph_sync] 0-fuse:
switched to graph 0
[2018-01-11 16:06:44.969544] I [MSGID: 108031]
[afr-common.c:2166:afr_local_discovery_cbk] 0-scratch-replicate-0:
selecting local read_child scratch-client-0

—— CLI  Log File  ——

[root@gluster01 glusterfs]# tail cli.log
[2018-01-11 15:54:14.468122] I [socket.c:2403:socket_event_handler]
0-transport: disconnecting now
[2018-01-11 15:54:14.468737] I [cli-rpc-ops.c:817:gf_cli_get_volume_cbk]
0-cli: Received resp to get vol: 0
[2018-01-11 15:54:14.469462] I [cli-rpc-ops.c:817:gf_cli_get_volume_cbk]
0-cli: Received resp to get vol: 0
[2018-01-11 15:54:14.469530] I [input.c:31:cli_batch] 0-: Exiting with: 0
[2018-01-11 16:03:40.422568] I [cli.c:728:main] 0-cli: Started running
gluster with version 3.8.15
[2018-01-11 16:03:40.430195] I [cli-cmd-volume.c:1828:cli_check_gsync_present]
0-: geo-replication not installed
[2018-01-11 16:03:40.430492] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2018-01-11 16:03:40.430568] I [socket.c:2403:socket_event_handler]
0-transport: disconnecting now
[2018-01-11 16:03:40.485256] I [cli-rpc-ops.c:2244:gf_cli_set_volume_cbk]
0-cli: Received resp to set
[2018-01-11 16:03:40.485497] I [input.c:31:cli_batch] 0-: Exiting with: 0

—— etc-glusterfs-glusterd.vol.log —

[2018-01-10 14:59:23.676814] I [MSGID: 106499]
[glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management:
Received status volume req for volume scratch
[2018-01-10 15:00:29.516071] I [MSGID: 106488]
[glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management:
Received get vol req
[2018-01-10 15:01:09.872082] I [MSGID: 106482]
[glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management:
Received add brick req
[2018-01-10 15:01:09.872128] I [MSGID: 106578]
[glusterd-brick-ops.c:499:__glusterd_handle_add_brick] 0-management:
replica-count is 2
[2018-01-10 15:01:09.876763] E [MSGID: 106451]
[glusterd-utils.c:6207:glusterd_is_path_in_use] 0-management:
/gdata/brick2/scratch is already part of a volume [File exists]
[2018-01-10 15:01:09.876807] W [MSGID: 106122]
[glusterd-mgmt.c:188:gd_mgmt_v3_pre_validate_fn] 0-management: ADD-brick
prevalidation failed.
[2018-01-10 15:01:09.876822] E [MSGID: 106122]
[glusterd-mgmt.c:884:glusterd_mgmt_v3_pre_validate] 0-management: Pre
Validation failed for operation Add brick on local node
[2018-01-

Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks each.

2018-01-11 Thread Jose Sanchez
Hi Nithya

Thanks for helping me with this. I understand now, but I have a few questions.

When I had it set up as a replica (just 2 nodes with 2 bricks) and tried to add
a brick, it failed.

> [root@gluster01 ~]# gluster volume add-brick scratch replica 2 
> gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
> volume add-brick: failed: /gdata/brick2/scratch is already part of a volume

After that, I ran the status and info commands on it; the status shows just 
the two bricks

> Brick gluster01ib:/gdata/brick1/scratch 49152 49153  Y   3140 
> Brick gluster02ib:/gdata/brick1/scratch 49153 49154  Y   2634

but the info shows all 4 (2 x 2). Is this normal behavior?

> Bricks:
> Brick1: gluster01ib:/gdata/brick1/scratch
> Brick2: gluster02ib:/gdata/brick1/scratch
> Brick3: gluster01ib:/gdata/brick2/scratch
> Brick4: gluster02ib:/gdata/brick2/scratch


 
Now when I try to mount it, I still get only 14 TB and not 28. Am I doing 
something wrong? Also, when I start/stop services, the cluster goes back to 
replicated mode from distributed-replicate.

gluster01ib:/scratch   14T   34M   14T   1% /mnt/gluster_test

—— Gluster mount log file ——

[2018-01-11 16:06:44.963043] I [MSGID: 114046] 
[client-handshake.c:1216:client_setvolume_cbk] 0-scratch-client-1: Connected to 
scratch-client-1, attached to remote volume '/gdata/brick1/scratch'.
[2018-01-11 16:06:44.963065] I [MSGID: 114047] 
[client-handshake.c:1227:client_setvolume_cbk] 0-scratch-client-1: Server and 
Client lk-version numbers are not same, reopening the fds
[2018-01-11 16:06:44.968291] I [MSGID: 114035] 
[client-handshake.c:202:client_set_lk_version_cbk] 0-scratch-client-1: Server 
lk version = 1
[2018-01-11 16:06:44.968404] I [fuse-bridge.c:4147:fuse_init] 0-glusterfs-fuse: 
FUSE inited with protocol versions: glusterfs 7.24 kernel 7.22
[2018-01-11 16:06:44.968438] I [fuse-bridge.c:4832:fuse_graph_sync] 0-fuse: 
switched to graph 0
[2018-01-11 16:06:44.969544] I [MSGID: 108031] 
[afr-common.c:2166:afr_local_discovery_cbk] 0-scratch-replicate-0: selecting 
local read_child scratch-client-0

—— CLI  Log File  ——

[root@gluster01 glusterfs]# tail cli.log
[2018-01-11 15:54:14.468122] I [socket.c:2403:socket_event_handler] 
0-transport: disconnecting now
[2018-01-11 15:54:14.468737] I [cli-rpc-ops.c:817:gf_cli_get_volume_cbk] 0-cli: 
Received resp to get vol: 0
[2018-01-11 15:54:14.469462] I [cli-rpc-ops.c:817:gf_cli_get_volume_cbk] 0-cli: 
Received resp to get vol: 0
[2018-01-11 15:54:14.469530] I [input.c:31:cli_batch] 0-: Exiting with: 0
[2018-01-11 16:03:40.422568] I [cli.c:728:main] 0-cli: Started running gluster 
with version 3.8.15
[2018-01-11 16:03:40.430195] I [cli-cmd-volume.c:1828:cli_check_gsync_present] 
0-: geo-replication not installed
[2018-01-11 16:03:40.430492] I [MSGID: 101190] 
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with 
index 1
[2018-01-11 16:03:40.430568] I [socket.c:2403:socket_event_handler] 
0-transport: disconnecting now
[2018-01-11 16:03:40.485256] I [cli-rpc-ops.c:2244:gf_cli_set_volume_cbk] 
0-cli: Received resp to set
[2018-01-11 16:03:40.485497] I [input.c:31:cli_batch] 0-: Exiting with: 0

—— etc-glusterfs-glusterd.vol.log — 

[2018-01-10 14:59:23.676814] I [MSGID: 106499] 
[glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management: 
Received status volume req for volume scratch
[2018-01-10 15:00:29.516071] I [MSGID: 106488] 
[glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: 
Received get vol req
[2018-01-10 15:01:09.872082] I [MSGID: 106482] 
[glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received 
add brick req
[2018-01-10 15:01:09.872128] I [MSGID: 106578] 
[glusterd-brick-ops.c:499:__glusterd_handle_add_brick] 0-management: 
replica-count is 2
[2018-01-10 15:01:09.876763] E [MSGID: 106451] 
[glusterd-utils.c:6207:glusterd_is_path_in_use] 0-management: 
/gdata/brick2/scratch is already part of a volume [File exists]
[2018-01-10 15:01:09.876807] W [MSGID: 106122] 
[glusterd-mgmt.c:188:gd_mgmt_v3_pre_validate_fn] 0-management: ADD-brick 
prevalidation failed.
[2018-01-10 15:01:09.876822] E [MSGID: 106122] 
[glusterd-mgmt.c:884:glusterd_mgmt_v3_pre_validate] 0-management: Pre 
Validation failed for operation Add brick on local node
[2018-01-10 15:01:09.876834] E [MSGID: 106122] 
[glusterd-mgmt.c:2009:glusterd_mgmt_v3_initiate_all_phases] 0-management: Pre 
Validation Failed
[2018-01-10 15:01:16.005881] I [run.c:191:runner_log] 
(-->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0x33045) 
[0x7f1066d15045] 
-->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0xcbd85) 
[0x7f1066dadd85] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f10726491e5] 
) 0-management: Ran script: 
/var/lib/glusterd/hooks/1/add-brick/pre/S28Quota-enable-root-xattr-heal.sh 
--volname=scratch --version=1 --volume-op=add-brick 
--gd-workdir=/var/lib/glusterd
[2018-01-10 15:01:15.982929] E [MSGID: 106451] 

Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks each.

2018-01-10 Thread Nithya Balachandran
Hi Jose,

Gluster is working as expected. The Distributed-Replicate type just means
that there are now 2 replica sets, and files will be distributed across
them.

A volume of type Replicate (1 x n, where n is the number of bricks in the
replica set) indicates there is no distribution (all files on the
volume will be present on all the bricks in the volume).


A volume of type Distributed-Replicate indicates the volume is both
distributed (as in files will only be created on one of the replica
sets) and replicated. So in the above example, a file will exist on either
Brick1 and Brick2, or Brick3 and Brick4.
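
To make the pairing concrete, here is a purely illustrative sketch (not a
command to run against the existing volume): with replica 2, bricks are grouped
into replica sets in the order they are listed, so a one-shot create of this
layout would pair the two brick1 paths and the two brick2 paths:

gluster volume create scratch replica 2 transport tcp,rdma \
    gluster01ib:/gdata/brick1/scratch gluster02ib:/gdata/brick1/scratch \
    gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch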


After the add-brick, the volume will have a total capacity of 28 TB and
store 2 copies of every file. Let me know if that is not what you are
looking for.
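
One follow-up worth noting: once the add-brick succeeds, new files are placed
across both replica pairs automatically, but spreading the existing data onto
the new pair normally requires a rebalance, along these lines:

gluster volume rebalance scratch start
gluster volume rebalance scratch status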


Regards,
Nithya


On 10 January 2018 at 20:40, Jose Sanchez  wrote:

>
>
> Hi Nithya
>
> This is what I have so far: I have peered both cluster nodes together as a
> replica, using bricks 1A and 1B. Now when I try to add the second pair of
> bricks, I get the error that it is already part of a volume, and when I run
> gluster volume info, I see that it has switched to distributed-replicate.
>
> Thanks
>
> Jose
>
>
>
>
>
> [root@gluster01 ~]# gluster volume status
> Status of volume: scratch
> Gluster process TCP Port  RDMA Port  Online  Pid
> --
> Brick gluster01ib:/gdata/brick1/scratch 49152 49153  Y   3140
> Brick gluster02ib:/gdata/brick1/scratch 49153 49154  Y   2634
> Self-heal Daemon on localhost   N/A   N/A   Y   3132
> Self-heal Daemon on gluster02ib N/A   N/A   Y   2626
>
>
> Task Status of Volume scratch
> 
> --
> There are no active volume tasks
>
>
> [root@gluster01 ~]#
>
> [root@gluster01 ~]# gluster volume info
>
>
> Volume Name: scratch
> Type: *Replicate*
> Volume ID: a6e20f7d-13ed-4293-ab8b-d783d1748246
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp,rdma
> Bricks:
> Brick1: gluster01ib:/gdata/brick1/scratch
> Brick2: gluster02ib:/gdata/brick1/scratch
> Options Reconfigured:
> performance.readdir-ahead: on
> nfs.disable: on
> [root@gluster01 ~]#
>
>
> -
>
> [root@gluster01 ~]# gluster volume add-brick scratch replica 2
> gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
> volume add-brick: failed: /gdata/brick2/scratch is already part of a volume
>
>
> [root@gluster01 ~]# gluster volume status
> Status of volume: scratch
> Gluster process TCP Port  RDMA Port  Online  Pid
> --
> Brick gluster01ib:/gdata/brick1/scratch 49152 49153  Y   3140
> Brick gluster02ib:/gdata/brick1/scratch 49153 49154  Y   2634
> Self-heal Daemon on gluster02ib N/A   N/A   Y   2626
> Self-heal Daemon on localhost   N/A   N/A   Y   3132
>
>
> Task Status of Volume scratch
> 
> --
> There are no active volume tasks
>
>
> [root@gluster01 ~]# gluster volume info
>
>
> Volume Name: scratch
> Type: *Distributed-Replicate*
> Volume ID: a6e20f7d-13ed-4293-ab8b-d783d1748246
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp,rdma
> Bricks:
> Brick1: gluster01ib:/gdata/brick1/scratch
> Brick2: gluster02ib:/gdata/brick1/scratch
> Brick3: gluster01ib:/gdata/brick2/scratch
> Brick4: gluster02ib:/gdata/brick2/scratch
> Options Reconfigured:
> performance.readdir-ahead: on
> nfs.disable: on
> [root@gluster01 ~]#
>
>
>
> 
> Jose Sanchez
> Center of Advanced Research Computing
> Albuquerque, NM 87131-0001
> carc.unm.edu
>
>
> On Jan 9, 2018, at 9:04 PM, Nithya Balachandran 
> wrote:
>
> Hi,
>
> Please let us know what commands you ran so far and the output of the *gluster
> volume info* command.
>
> Thanks,
> Nithya
>
> On 9 January 2018 at 23:06, Jose Sanchez  wrote:
>
>> Hello
>>
>> We are trying to set up Gluster for our project/scratch storage HPC
>> machine using a replicated mode with 2 nodes, 2 bricks each (14 TB each).
>>
>> Our goal is to have a replicated system between nodes 1 and 2 (the A
>> bricks) and then add an additional 2 bricks (the B bricks) from the 2 nodes, so
>> we can have a total of 28 TB in replicated mode.
>>
>> Node 1 [ (Brick A) (Brick B) ]
>> Node 2 [ (Brick A) (Brick B) ]
>> 
>> 14Tb + 14Tb = 28Tb
>>
>> So far I was able to create the replica between nodes 1 and 2 (brick A),
>> but I have not been able to add the second pair to the replica; Gluster
>> switches to distributed-replicate when I add it, and I still see only 14 TB.
>>
>> Any help will be appreciated.

Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks each.

2018-01-10 Thread Jose Sanchez


Hi Nithya

This is what I have so far: I have peered both cluster nodes together as a replica, 
using bricks 1A and 1B. Now when I try to add the second pair of bricks, I get the 
error that it is already part of a volume, and when I run gluster volume info, I 
see that it has switched to distributed-replicate.

Thanks

Jose





[root@gluster01 ~]# gluster volume status
Status of volume: scratch
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick gluster01ib:/gdata/brick1/scratch 49152 49153  Y   3140 
Brick gluster02ib:/gdata/brick1/scratch 49153 49154  Y   2634 
Self-heal Daemon on localhost   N/A   N/A   Y   3132 
Self-heal Daemon on gluster02ib N/A   N/A   Y   2626 
 
Task Status of Volume scratch
--
There are no active volume tasks
 
[root@gluster01 ~]#

[root@gluster01 ~]# gluster volume info
 
Volume Name: scratch
Type: Replicate
Volume ID: a6e20f7d-13ed-4293-ab8b-d783d1748246
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp,rdma
Bricks:
Brick1: gluster01ib:/gdata/brick1/scratch
Brick2: gluster02ib:/gdata/brick1/scratch
Options Reconfigured:
performance.readdir-ahead: on
nfs.disable: on
[root@gluster01 ~]#


-

[root@gluster01 ~]# gluster volume add-brick scratch replica 2 
gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
volume add-brick: failed: /gdata/brick2/scratch is already part of a volume


[root@gluster01 ~]# gluster volume status
Status of volume: scratch
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick gluster01ib:/gdata/brick1/scratch 49152 49153  Y   3140 
Brick gluster02ib:/gdata/brick1/scratch 49153 49154  Y   2634 
Self-heal Daemon on gluster02ib N/A   N/A   Y   2626 
Self-heal Daemon on localhost   N/A   N/A   Y   3132 
 
Task Status of Volume scratch
--
There are no active volume tasks
 
[root@gluster01 ~]# gluster volume info
 
Volume Name: scratch
Type: Distributed-Replicate
Volume ID: a6e20f7d-13ed-4293-ab8b-d783d1748246
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp,rdma
Bricks:
Brick1: gluster01ib:/gdata/brick1/scratch
Brick2: gluster02ib:/gdata/brick1/scratch
Brick3: gluster01ib:/gdata/brick2/scratch
Brick4: gluster02ib:/gdata/brick2/scratch
Options Reconfigured:
performance.readdir-ahead: on
nfs.disable: on
[root@gluster01 ~]# 




Jose Sanchez
Center of Advanced Research Computing
Albuquerque, NM 87131-0001
carc.unm.edu 


> On Jan 9, 2018, at 9:04 PM, Nithya Balachandran  wrote:
> 
> Hi,
> 
> Please let us know what commands you ran so far and the output of the gluster 
> volume info command.
> 
> Thanks,
> Nithya
> 
> On 9 January 2018 at 23:06, Jose Sanchez  > wrote:
> Hello
> 
> We are trying to set up Gluster for our project/scratch storage HPC machine 
> using a replicated mode with 2 nodes, 2 bricks each (14 TB each). 
> 
> Our goal is to have a replicated system between nodes 1 and 2 (the A 
> bricks) and then add an additional 2 bricks (the B bricks) from the 2 nodes, so we 
> can have a total of 28 TB in replicated mode. 
> 
> Node 1 [ (Brick A) (Brick B) ]
> Node 2 [ (Brick A) (Brick B) ]
> 
>   14Tb + 14Tb = 28Tb
> 
> So far I was able to create the replica between nodes 1 and 2 (brick 
> A), but I have not been able to add the second pair to the replica; Gluster switches to 
> distributed-replicate when I add it, and I still see only 14 TB.
> 
> Any help will be appreciated.
> 
> Thanks
> 
> Jose
> 
> -
> Jose Sanchez
> Center of Advanced Research Computing
> Albuquerque, NM 87131
> carc.unm.edu 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org 
> http://lists.gluster.org/mailman/listinfo/gluster-users 
> 
> 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks each.

2018-01-09 Thread Nithya Balachandran
Hi,

Please let us know what commands you ran so far and the output of the *gluster
volume info* command.

Thanks,
Nithya

On 9 January 2018 at 23:06, Jose Sanchez  wrote:

> Hello
>
> We are trying to set up Gluster for our project/scratch storage HPC machine
> using a replicated mode with 2 nodes, 2 bricks each (14 TB each).
>
> Our goal is to have a replicated system between nodes 1 and 2 (the A
> bricks) and then add an additional 2 bricks (the B bricks) from the 2 nodes, so we
> can have a total of 28 TB in replicated mode.
>
> Node 1 [ (Brick A) (Brick B) ]
> Node 2 [ (Brick A) (Brick B) ]
> 
> 14Tb + 14Tb = 28Tb
>
> So far I was able to create the replica between nodes 1 and 2 (brick A),
> but I have not been able to add the second pair to the replica; Gluster
> switches to distributed-replicate when I add it, and I still see only 14 TB.
>
> Any help will be appreciated.
>
> Thanks
>
> Jose
>
> -
> Jose Sanchez
> Center of Advanced Research Computing
> Albuquerque, NM 87131
> carc.unm.edu
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users