Re: [ovirt-users] How to force glusterfs to use RDMA?

2017-03-02 Thread Sahina Bose
[Adding gluster users to help with error]

[2017-03-02 11:49:47.828996] W [MSGID: 103071]
[rdma.c:4589:__gf_rdma_ctx_create]
0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]
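
This error usually means the client cannot see any RDMA device at all, i.e. a
missing driver or kernel module rather than a Gluster misconfiguration. A
minimal checklist for the mounting host (a sketch, assuming the usual
libibverbs-utils tooling is installed):

# Is the RDMA connection manager module loaded?
lsmod | grep -E 'rdma_cm|ib_core'
# If not, load it (HCA driver and package names vary by distribution):
modprobe rdma_cm
# Does libibverbs see an HCA? ibv_devinfo should report a port in PORT_ACTIVE.
ibv_devices
ibv_devinfo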

On Thu, Mar 2, 2017 at 5:36 PM, Arman Khalatyan wrote:

> [quoted messages trimmed; Arman's original posts appear in full further down this thread]

Re: [ovirt-users] How to force glusterfs to use RDMA?

2017-03-02 Thread Misak Khachatryan
Hello,

Hmm, I see that I'm not using RDMA. How can I safely enable it? I have a
3-server setup with GlusterFS:

[root@virt1 ~]# gluster volume info

Volume Name: data
Type: Replicate
Volume ID: d53c2202-0dba-4973-960e-4642d41bcdd8
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: virt1:/gluster/brick2/data
Brick2: virt2:/gluster/brick2/data
Brick3: virt3:/gluster/brick2/data (arbiter)
Options Reconfigured:
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 1
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on

ovirt 4.1

Thanks in advance.


Best regards,
Misak Khachatryan
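
To add RDMA to an existing tcp-only volume like the one above, the Gluster
RDMA docs describe changing the volume's transport, roughly as follows (a
sketch, untested here: it needs the volume stopped and all clients unmounted,
so migrate or shut down the VMs on it first, and it assumes working RDMA
hardware on all three hosts):

# with all clients unmounted:
gluster volume stop data
gluster volume set data config.transport tcp,rdma
gluster volume start data
# clients can then mount with -o transport=rdma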


On Thu, Mar 2, 2017 at 4:06 PM, Arman Khalatyan wrote:

> [quoted messages trimmed; Arman's original posts appear in full further down this thread]

Re: [ovirt-users] How to force glusterfs to use RDMA?

2017-03-02 Thread Arman Khalatyan
BTW RDMA is working as expected:
[root@clei26 ~]# qperf clei22.vib tcp_bw tcp_lat
tcp_bw:
bw  =  475 MB/sec
tcp_lat:
latency  =  52.8 us
[root@clei26 ~]#

thank you beforehand.
Arman.
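
A note on the qperf test above: tcp_bw and tcp_lat measure plain TCP (here
presumably TCP over IPoIB on the .vib interface), not the RDMA verbs path
itself. qperf can exercise verbs directly; a sketch, assuming a bare qperf
server is left running on clei22:

[root@clei26 ~]# qperf clei22.vib rc_bw rc_lat
# the rc_* tests use an RDMA reliable connection, so they fail
# if the verbs stack is broken even when IPoIB works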


On Thu, Mar 2, 2017 at 12:54 PM, Arman Khalatyan wrote:

> [quoted messages trimmed; the original posts appear in full further down this thread]

Re: [ovirt-users] How to force glusterfs to use RDMA?

2017-03-02 Thread Arman Khalatyan
just for reference:
 gluster volume info

Volume Name: GluReplica
Type: Replicate
Volume ID: ee686dfe-203a-4caa-a691-26353460cc48
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp,rdma
Bricks:
Brick1: 10.10.10.44:/zclei22/01/glu
Brick2: 10.10.10.42:/zclei21/01/glu
Brick3: 10.10.10.41:/zclei26/01/glu (arbiter)
Options Reconfigured:
network.ping-timeout: 30
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.data-self-heal-algorithm: full
features.shard: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
nfs.disable: on



[root@clei21 ~]# gluster volume status
Status of volume: GluReplica
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.10.10.44:/zclei22/01/glu           49158     49159      Y       15870
Brick 10.10.10.42:/zclei21/01/glu           49156     49157      Y       17473
Brick 10.10.10.41:/zclei26/01/glu           49153     49154      Y       18897
Self-heal Daemon on localhost               N/A       N/A        Y       17502
Self-heal Daemon on 10.10.10.41             N/A       N/A        Y       13353
Self-heal Daemon on 10.10.10.44             N/A       N/A        Y       32745

Task Status of Volume GluReplica
------------------------------------------------------------------------------
There are no active volume tasks


On Thu, Mar 2, 2017 at 12:52 PM, Arman Khalatyan wrote:

> [quoted message trimmed; the full mount log appears in the next message below]

Re: [ovirt-users] How to force glusterfs to use RDMA?

2017-03-02 Thread Arman Khalatyan
I am not able to mount with RDMA over the CLI.
Are there some volfile parameters that need to be tuned?
/usr/bin/mount  -t glusterfs  -o
backup-volfile-servers=10.10.10.44:10.10.10.42:10.10.10.41,transport=rdma
10.10.10.44:/GluReplica /mnt

[2017-03-02 11:49:47.795511] I [MSGID: 100030] [glusterfsd.c:2454:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.9
(args: /usr/sbin/glusterfs --volfile-server=10.10.10.44
--volfile-server=10.10.10.44 --volfile-server=10.10.10.42
--volfile-server=10.10.10.41 --volfile-server-transport=rdma
--volfile-id=/GluReplica.rdma /mnt)
[2017-03-02 11:49:47.812699] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2017-03-02 11:49:47.825210] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 2
[2017-03-02 11:49:47.828996] W [MSGID: 103071]
[rdma.c:4589:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event
channel creation failed [No such device]
[2017-03-02 11:49:47.829067] W [MSGID: 103055] [rdma.c:4896:init]
0-GluReplica-client-2: Failed to initialize IB Device
[2017-03-02 11:49:47.829080] W [rpc-transport.c:354:rpc_transport_load]
0-rpc-transport: 'rdma' initialization failed
[2017-03-02 11:49:47.829272] W [rpc-clnt.c:1070:rpc_clnt_connection_init]
0-GluReplica-client-2: loading of new rpc-transport failed
[2017-03-02 11:49:47.829325] I [MSGID: 101053]
[mem-pool.c:641:mem_pool_destroy] 0-GluReplica-client-2: size=588 max=0
total=0
[2017-03-02 11:49:47.829371] I [MSGID: 101053]
[mem-pool.c:641:mem_pool_destroy] 0-GluReplica-client-2: size=124 max=0
total=0
[2017-03-02 11:49:47.829391] E [MSGID: 114022]
[client.c:2530:client_init_rpc] 0-GluReplica-client-2: failed to initialize
RPC
[2017-03-02 11:49:47.829413] E [MSGID: 101019] [xlator.c:433:xlator_init]
0-GluReplica-client-2: Initialization of volume 'GluReplica-client-2'
failed, review your volfile again
[2017-03-02 11:49:47.829425] E [MSGID: 101066]
[graph.c:324:glusterfs_graph_init] 0-GluReplica-client-2: initializing
translator failed
[2017-03-02 11:49:47.829436] E [MSGID: 101176]
[graph.c:673:glusterfs_graph_activate] 0-graph: init failed
[2017-03-02 11:49:47.830003] W [glusterfsd.c:1327:cleanup_and_exit]
(-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x3c1) [0x7f524c9dbeb1]
-->/usr/sbin/glusterfs(glusterfs_process_volfp+0x172) [0x7f524c9d65d2]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f524c9d5b4b] ) 0-:
received signum (1), shutting down
[2017-03-02 11:49:47.830053] I [fuse-bridge.c:5794:fini] 0-fuse: Unmounting
'/mnt'.
[2017-03-02 11:49:47.831014] W [glusterfsd.c:1327:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7dc5) [0x7f524b343dc5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f524c9d5cd5]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f524c9d5b4b] ) 0-:
received signum (15), shutting down
[2017-03-02 11:49:47.831014] W [glusterfsd.c:1327:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7dc5) [0x7f524b343dc5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f524c9d5cd5]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f524c9d5b4b] ) 0-:
received signum (15), shutting down
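
The args line at the top of this log shows the client translated
transport=rdma into --volfile-id=/GluReplica.rdma, so the mount syntax is
being honoured; what fails is rdma_cm event channel creation ("No such
device"), i.e. no usable RDMA device on the mounting host. Once one is
visible there, either of these equivalent forms should request the RDMA
transport (a sketch, not verified here):

/usr/bin/mount -t glusterfs -o transport=rdma 10.10.10.44:/GluReplica /mnt
# or address the RDMA volfile directly via the ".rdma" volume-name suffix:
/usr/bin/mount -t glusterfs 10.10.10.44:/GluReplica.rdma /mnt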




On Thu, Mar 2, 2017 at 12:11 PM, Sahina Bose wrote:

> [quoted messages trimmed; Sahina's reply and the original question appear in full below]


Re: [ovirt-users] How to force glusterfs to use RDMA?

2017-03-02 Thread Sahina Bose
You will need to pass additional mount options while creating the storage domain (transport=rdma).


Please let us know if this works.
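
For a GlusterFS storage domain this presumably goes into the Mount Options
field of the New Domain dialog; the host-side result should then look roughly
like the following vdsm-issued mount (a sketch assembled from the mounts shown
elsewhere in this thread, not verified):

mount -t glusterfs \
  -o backup-volfile-servers=10.10.10.42:10.10.10.41,transport=rdma \
  10.10.10.44:/GluReplica /rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica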

On Thu, Mar 2, 2017 at 2:42 PM, Arman Khalatyan wrote:

> [quoted message trimmed; the original question appears in full below]
>


[ovirt-users] How to force glusterfs to use RDMA?

2017-03-02 Thread Arman Khalatyan
Hi,
Is there a way to force the connections over RDMA only?
If I check the host mounts I cannot see an rdma mount option:
 mount -l | grep gluster
10.10.10.44:/GluReplica on
/rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica
type fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

I have glusterized 3 nodes:

GluReplica
Volume ID: ee686dfe-203a-4caa-a691-26353460cc48
Volume Type: Replicate (Arbiter)
Replica Count: 2 + 1
Number of Bricks: 3
Transport Types: TCP, RDMA
Maximum no of snapshots: 256
Capacity: 3.51 TiB total, 190.56 GiB used, 3.33 TiB free
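
Note that the FUSE mount options never show the transport; the glusterfs
client process arguments do, since the volfile id carries a ".rdma" suffix
for RDMA mounts (visible in the mount logs earlier in this thread). A quick
check, as a sketch:

ps ax | grep '[g]lusterfs.*GluReplica'
# a TCP mount shows --volfile-id=/GluReplica, an RDMA mount --volfile-id=/GluReplica.rdma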