Re: [Gluster-users] Secured mount in GlusterFS using keys

2017-03-21 Thread Deepak Naidu
Hi Joe, I know how TLS works, but is the MTLS in your email reference anything different from TLS in terms of config?

--
Deepak

On Mar 21, 2017, at 2:51 AM, Joseph Lorenzini <jalo...@gmail.com> wrote:

Hi Deepak,

The starting point would be that link you initially provided. In terms of help, could you elaborate more on what you are looking for? Do you need a high level primer on how to create a chain-of-trust with openssl? Certificate management? Or are you looking for more on how to properly provision the TLS certificates in gluster?
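
For what it's worth, a minimal sketch of the chain-of-trust piece (the CN value and the list of peer certs are placeholders; the file locations are the defaults from the gluster SSL guide linked earlier):

# generate a private key and a self-signed certificate on each server and client
openssl genrsa -out /etc/ssl/glusterfs.key 2048
openssl req -new -x509 -key /etc/ssl/glusterfs.key -subj "/CN=client1" -days 365 -out /etc/ssl/glusterfs.pem
# every endpoint's glusterfs.ca must contain the certs (or the signing CA) of the peers it should trust
cat server1.pem server2.pem client1.pem > /etc/ssl/glusterfs.ca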

Joe

On Sun, Mar 19, 2017 at 11:52 AM, Deepak Naidu <dna...@nvidia.com> wrote:
Thanks Joe for your inputs. I guess comparing client -- glusterServer IO 
performance over MTLS and non-MTLS should give me some idea on the 
client/server IO overhead.

Also, any URL related to setting up & configuring MTLS is appreciated.



--
Deepak

On Mar 19, 2017, at 7:00 AM, Joseph Lorenzini <jalo...@gmail.com> wrote:

Hi Deepak,

Sorta. I think it depends on what we mean by I/O path and performance.

If we are referring to disk I/O for gluster servers, then no. If we are referring to the network I/O between a gluster client and server, then yes, there will by definition be some additional overhead. That, however, is true of any security layer one chooses for any application, especially a distributed system. In practice, security of any kind, whether it's encryption, ACLs, or even iptables, will degrade the performance of an application. And since distributed systems by definition handle their state through network I/O, that means security + distributed system = network latency. There's a reason people say security is where performance goes to die. :)

Now, all that said, frequently the issue is not whether there will be network latency, but how much, and does it matter? Moreover, what are the specific performance requirements for your gluster pool, and have they been weighed against the costs of meeting those requirements? Additionally, how does meeting those performance requirements weigh against all your other requirements, like, for example, having basic network security around a distributed system?

I would be quite surprised if openssl MTLS were any slower than some other key-based authentication scheme. Most of the cost of TLS is in the TLS handshake, which is a one-time hit when the gluster client mounts the volume. Since the client maintains a persistent TLS connection, most of the remaining overhead is openssl code performing symmetric encryption, which openssl, despite all its warts, is really really good at doing really really fast (understand this is all relative to an arbitrary baseline :). Bottom line: with modern hardware, the performance impact of MTLS should be negligible. IMHO, if your performance requirement can't tolerate MTLS, then it's in practice preventing you from implementing any reasonable security scheme at all. In that case, you'd be better off just setting up an isolated network and skipping any type of authentication.

I'd recommend setting up MTLS with gluster and running your performance tests against it. That will definitively answer your question of whether the performance is acceptable. The MTLS setup is not that hard, and the gluster documentation is reasonable, though it could be improved (I need to submit some PRs against it). If you have any questions about setup and configuration, I am sure I can help.
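
To make the test concrete, a minimal sketch of enabling MTLS on a volume (volume name myvol and CN client1 are placeholders; the certs/keys go in /etc/ssl per the gluster SSL guide):

gluster volume stop myvol
gluster volume set myvol client.ssl on
gluster volume set myvol server.ssl on
gluster volume set myvol auth.ssl-allow 'client1'   # certificate CNs permitted to mount
touch /var/lib/glusterd/secure-access               # optional: encrypt the management path too
gluster volume start myvol
# then mount on the client and compare fio numbers against a run with client.ssl/server.ssl off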

Joe

On Sat, Mar 18, 2017 at 2:25 PM, Deepak Naidu <dna...@nvidia.com> wrote:
Hi Joe, thanks for taking the time to explain. I have a basic set of requirements, with IO performance as a key factor; my reply below should clarify what I am trying to achieve.

>>If I am understanding your use case properly, you want to ensure that a 
>>client may only mount a gluster volume if and only if it presents a key or 
>>secret that attests to the client's identity, which the gluster server can 
>>use to verify that client's identity.

Yes, this is the exact use case for my requirements.



>>That's exactly what gluster MTLS is doing since the gluster server performs 
>>chain-of-trust validation on the client's leaf certificate.

That's good, but my confusion here is: does this MTLS also encrypt IO traffic like TLS? If yes, then it's not what I am looking for. The reason is that IO encryption/decryption is extra overhead for my use case, as IO performance is also a factor in why we are looking at GlusterFS, unless my understanding is incorrect and IO encryption has no overhead.



>> I don't understand why I/O path encryption is something you want to avoid -- 
>> seems like an essential part of basic network security that you get for 
>> "free" with the authentication.

If I understand the term IO path encryption correctly, all the storage IO will go through the extra latency of encryption & decryption, which is not needed for my requirements as it introduces extra IO latency; that is why I am trying to avoid IO path encryption & just need basic secret-based authentication.

Re: [Gluster-users] Secured mount in GlusterFS using keys

2017-03-19 Thread Deepak Naidu
Thanks Joe for your inputs. I guess comparing client -- glusterServer IO 
performance over MTLS and non-MTLS should give me some idea on the 
client/server IO overhead.

Also, any URL related to setting up & configuring MTLS is appreciated.



--
Deepak

On Mar 19, 2017, at 7:00 AM, Joseph Lorenzini <jalo...@gmail.com> wrote:

Hi Deepak,

Sorta. I think it depends on what we mean by I/O path and performance.

If we are referring to disk I/O for gluster servers, then no. If we are referring to the network I/O between a gluster client and server, then yes, there will by definition be some additional overhead. That, however, is true of any security layer one chooses for any application, especially a distributed system. In practice, security of any kind, whether it's encryption, ACLs, or even iptables, will degrade the performance of an application. And since distributed systems by definition handle their state through network I/O, that means security + distributed system = network latency. There's a reason people say security is where performance goes to die. :)

Now, all that said, frequently the issue is not whether there will be network latency, but how much, and does it matter? Moreover, what are the specific performance requirements for your gluster pool, and have they been weighed against the costs of meeting those requirements? Additionally, how does meeting those performance requirements weigh against all your other requirements, like, for example, having basic network security around a distributed system?

I would be quite surprised if openssl MTLS were any slower than some other key-based authentication scheme. Most of the cost of TLS is in the TLS handshake, which is a one-time hit when the gluster client mounts the volume. Since the client maintains a persistent TLS connection, most of the remaining overhead is openssl code performing symmetric encryption, which openssl, despite all its warts, is really really good at doing really really fast (understand this is all relative to an arbitrary baseline :). Bottom line: with modern hardware, the performance impact of MTLS should be negligible. IMHO, if your performance requirement can't tolerate MTLS, then it's in practice preventing you from implementing any reasonable security scheme at all. In that case, you'd be better off just setting up an isolated network and skipping any type of authentication.

I'd recommend setting up MTLS with gluster and running your performance tests against it. That will definitively answer your question of whether the performance is acceptable. The MTLS setup is not that hard, and the gluster documentation is reasonable, though it could be improved (I need to submit some PRs against it). If you have any questions about setup and configuration, I am sure I can help.

Joe

On Sat, Mar 18, 2017 at 2:25 PM, Deepak Naidu <dna...@nvidia.com> wrote:
Hi Joe, thanks for taking the time to explain. I have a basic set of requirements, with IO performance as a key factor; my reply below should clarify what I am trying to achieve.

>>If I am understanding your use case properly, you want to ensure that a 
>>client may only mount a gluster volume if and only if it presents a key or 
>>secret that attests to the client's identity, which the gluster server can 
>>use to verify that client's identity.

Yes, this is the exact use case for my requirements.



>>That's exactly what gluster MTLS is doing since the gluster server performs 
>>chain-of-trust validation on the client's leaf certificate.

That's good, but my confusion here is: does this MTLS also encrypt IO traffic like TLS? If yes, then it's not what I am looking for. The reason is that IO encryption/decryption is extra overhead for my use case, as IO performance is also a factor in why we are looking at GlusterFS, unless my understanding is incorrect and IO encryption has no overhead.



>> I don't understand why I/O path encryption is something you want to avoid -- 
>> seems like an essential part of basic network security that you get for 
>> "free" with the authentication.

If I understand the term IO path encryption correctly, all the storage IO will go through the extra latency of encryption & decryption, which is not needed for my requirements as it introduces extra IO latency; that is why I am trying to avoid IO path encryption & just need basic secret-based authentication.




--
Deepak

> On Mar 18, 2017, at 10:46 AM, Joseph Lorenzini <jalo...@gmail.com> wrote:
>
> I am little confused about what you are trying to accomplish here. If I am 
> understanding your use case properly, you want to ensure that a client may 
> only mount a gluster volume if and only if it presents a key or secret that 
> attests to the client's identity, which the gluster server can use to verify
> that client's identity.

Re: [Gluster-users] Secured mount in GlusterFS using keys

2017-03-18 Thread Deepak Naidu
Hi Joe, thanks for taking the time to explain. I have a basic set of requirements, with IO performance as a key factor; my reply below should clarify what I am trying to achieve.

>>If I am understanding your use case properly, you want to ensure that a 
>>client may only mount a gluster volume if and only if it presents a key or 
>>secret that attests to the client's identity, which the gluster server can 
>>use to verify that client's identity. 

Yes, this is the exact use case for my requirements.



>>That's exactly what gluster MTLS is doing since the gluster server performs 
>>chain-of-trust validation on the client's leaf certificate.

That's good, but my confusion here is: does this MTLS also encrypt IO traffic like TLS? If yes, then it's not what I am looking for. The reason is that IO encryption/decryption is extra overhead for my use case, as IO performance is also a factor in why we are looking at GlusterFS, unless my understanding is incorrect and IO encryption has no overhead.



>> I don't understand why I/O path encryption is something you want to avoid -- 
>> seems like an essential part of basic network security that you get for 
>> "free" with the authentication. 

If I understand the term IO path encryption correctly, all the storage IO will go through the extra latency of encryption & decryption, which is not needed for my requirements as it introduces extra IO latency; that is why I am trying to avoid IO path encryption & just need basic secret-based authentication.




--
Deepak

> On Mar 18, 2017, at 10:46 AM, Joseph Lorenzini  wrote:
> 
> I am a little confused about what you are trying to accomplish here. If I am 
> understanding your use case properly, you want to ensure that a client may 
> only mount a gluster volume if and only if it presents a key or secret that 
> attests to the client's identity, which the gluster server can use to verify 
> that client's identity. That's exactly what gluster MTLS is doing since the 
> gluster server performs chain-of-trust validation on the client's leaf 
> certificate.
> 
> Of course this will necessarily force encryption of the I/O path, since it's 
> TLS. I don't understand why I/O path encryption is something you want to 
> avoid -- seems like an essential part of basic network security that you get 
> for "free" with the authentication. It is true that if you want the key-based 
> authentication of a gluster client, you will need gluster MTLS. You could 
> treat encryption as the "cost" of getting authentication, if you will.
> 
> Now, I am personally pretty negative on X.509 and chain-of-trust in general, 
> since the trust model has been proven not to scale and is frequently broken 
> by malicious and incompetent CAs. I also think it's a completely inappropriate 
> security model for something like gluster, where all endpoints are known and 
> controlled by a single entity, forcing a massive amount of unnecessary 
> certificate-management complexity with no real added security. I have 
> thought about making a feature request that gluster support simple 
> public-key authentication implemented like SSH. But all that said, MTLS is 
> a well-tested, well-known security protocol, and the gluster team built it on 
> top of openssl, so it does get the security job done in an acceptable way. The 
> fact that the I/O path is encrypted is not the thing that bothers me about 
> the implementation, though.


Re: [Gluster-users] Secured mount in GlusterFS using keys

2017-03-18 Thread Deepak Naidu
Thanks Joseph for the info.

>>In addition, gluster uses MTLS (each endpoint validate's the other's 
>>chain-of-trust), so you get authentication as well.

Does it only do authentication of mounts? I am not interested at this moment in IO path encryption; I am only looking for authentication.

>>you can set the auth.allow and auth.reject options to whitelist and blacklist 
>>clients based on their source IPs.

I have used this, but unfortunately I don't see IP-based / host-based ACLs as a mature method, unless GlusterFS supports Kerberos completely. The reason I am looking for key- or secret-based secured mounts is that an entire subnet will be granted to the service, and a more elegant way to allow only particular clients on that subnet to mount gluster would be keys/secrets, since a client might get a different IP on the next cycle/reboot. I can find workarounds related to IP, but it seems really weird that gluster can use SSL to encrypt IO traffic but not use the same for authenticated mounts.



--
Deepak

> On Mar 18, 2017, at 9:14 AM, Joseph Lorenzini  wrote:
> 
> 
> Hi Deepak,
> 
> Here's the TLDR
> 
> If you don't want the I/O path to be encrypted but you want to control access 
> to a gluster volume, you can set the auth.allow and auth.reject options to 
> whitelist and blacklist clients based on their source IPs. There's also 
> always iptables rules if you don't want to do that.
> 
> Note this only addresses a client's (i.e. a system where multiple unix users can 
> exist) ability to mount a gluster volume. This does not address access by different 
> unix users on that mounted gluster volume -- that's a separate and much more 
> complicated issue. I can elaborate on that more if you want. 
> 
> Here's the longer explanation on the TLS piece. 
> 
> So there are a couple of different security layers here. TLS will in fact 
> encrypt the I/O path -- that's one of its key selling points, which is to 
> ensure confidentiality of the data sent between the gluster server and 
> gluster client. In addition, gluster uses MTLS (each endpoint validates the 
> other's chain-of-trust), so you get authentication as well. Additionally, if 
> you set the auth.ssl-allow option on the gluster volume, you can specify 
> whether an authenticated TLS client is permitted to access the volume based on 
> the common name in the client's certificate. This provides an inflexible but 
> reasonably strong form of authorization.
> 
> 
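
For reference, a minimal sketch of the options mentioned above (volume name myvol, the IP patterns, and the CN list are placeholders):

gluster volume set myvol auth.allow '10.0.0.*'             # whitelist clients by source IP
gluster volume set myvol auth.reject '10.0.1.*'            # blacklist clients by source IP
gluster volume set myvol auth.ssl-allow 'client1,client2'  # with TLS enabled, allow only these certificate CNs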


Re: [Gluster-users] Secured mount in GlusterFS using keys

2017-03-17 Thread Deepak Naidu
Any info, guys?

--
Deepak

From: Deepak Naidu
Sent: Friday, March 17, 2017 12:32 AM
To: gluster-users@gluster.org
Subject: Secured mount in GlusterFS using keys

Hello,

Is there a way, like CephFS, where a keyring can be passed for the mount? I see SSL in GlusterFS provides something like a secured mount based on pem & key files, but I am a bit confused whether these are only for mount authentication or also for IO path encryption. I only want authorized GlusterFS mounts based on keys or certs or a secret, with "no encryption of the IO path". Is there a way possible with GlusterFS?

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/SSL/




--
Deepak


[Gluster-users] Secured mount in GlusterFS using keys

2017-03-17 Thread Deepak Naidu
Hello,

Is there a way, like CephFS, where a keyring can be passed for the mount? I see SSL in GlusterFS provides something like a secured mount based on pem & key files, but I am a bit confused whether these are only for mount authentication or also for IO path encryption. I only want authorized GlusterFS mounts based on keys or certs or a secret, with "no encryption of the IO path". Is there a way possible with GlusterFS?

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/SSL/




--
Deepak


Re: [Gluster-users] Increase or performance tune READ perf for glusterfs distributed volume

2017-03-08 Thread Deepak Naidu
Hi Karan,

>>Are you reading a small-file data-set or a large-file data-set, and secondly, which protocol is the volume mounted with?

I am using 1mb block size to test using RDMA transport.

--
Deepak

> On Mar 8, 2017, at 2:48 AM, Karan Sandha  wrote:
> 
> Are you reading a small-file data-set or a large-file data-set, and secondly, 
> which protocol is the volume mounted with?


[Gluster-users] Increase or performance tune READ perf for glusterfs distributed volume

2017-03-07 Thread Deepak Naidu
Are there any tuning params for READ that I need to set to get maximum throughput for glusterfs distributed volume reads? Currently, I am trying to compare this with my local SSD disk performance.


* My local SSD (/dev/sdb) can random-read 6.3TB in 56 minutes on an XFS filesystem.

* I have a 2x-node distributed glusterfs volume. When I read the same workload, it takes around 63 minutes.

* The network is IPoIB using RDMA. The InfiniBand network is 1x 100 Gb/sec (4X EDR).

Any suggestion is appreciated.
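
A sketch of read-side knobs to experiment with (volume name and values are placeholders, not recommendations):

gluster volume set <volname> performance.client-io-threads on   # parallelize the fuse client
gluster volume set <volname> performance.read-ahead on
gluster volume set <volname> performance.io-cache on
gluster volume set <volname> performance.cache-size 1GB
gluster volume set <volname> performance.io-thread-count 32     # server-side io-threads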

--
Deepak


Re: [Gluster-users] GlusterFS Multitenancy -- supports multi-tenancy by partitioning users or groups into logical volumes on shared storage

2017-03-06 Thread Deepak Naidu
>>The idea of multi-tenancy is to have multiple tenants on the same volume. Maybe I didn't understand your idea completely

First, if you can help me understand how GlusterFS defines and does multi-tenancy, that would be helpful.


Second, multi-tenancy should have complete isolation of resources: disk, network & access. If I use the same volume for multiple tenants, how do I isolate resources? I need that understanding for gluster. How can I guarantee that failure of a brick in that volume does not affect all tenants (if I accept your logic) sharing the same volume?


--
Deepak

> On Mar 5, 2017, at 11:51 PM, Pranith Kumar Karampuri  
> wrote:
> 
> The idea of multi-tenancy is to have multiple tenants on the same volume. Maybe I 
> didn't understand your idea completely


Re: [Gluster-users] GlusterFS Multitenancy -- supports multi-tenancy by partitioning users or groups into logical volumes on shared storage

2017-03-05 Thread Deepak Naidu
Anyone on how multi-tenancy works on gluster?


https://gluster.readthedocs.io/en/latest/Administrator%20Guide/GlusterFS%20Introduction/

GlusterFS. It supports multi-tenancy by partitioning users or groups into 
logical volumes on shared storage.



--
Deepak

On Mar 2, 2017, at 3:38 PM, Deepak Naidu <dna...@nvidia.com> wrote:

Hello,

I have been reading the below statement in GlusterFS docs & articles regarding multi-tenancy. Is this statement related to virtual environments, i.e. VMs? How valid is "partitioning users or groups into logical volumes"? Can someone explain what it really means?
Is it that I can associate a user/group (UID/GID), like NFS, with a GlusterFS volume?

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/GlusterFS%20Introduction/

GlusterFS. It supports multi-tenancy by partitioning users or groups into 
logical volumes on shared storage.


My thought was that I can do multi-tenancy at the volume level as below.


* Create a distributed volume named data1 for tenant1 from StorageNode1-5 using Disk1 (raided) over the NIC-1 network

* Similarly, create a distributed volume named data2 for tenant2 from StorageNode1-5 using Disk2 (raided) over the NIC-2 network

Is my understanding correct? How does the user/group come into the picture?
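
A sketch of that volume-level layout (two nodes shown; node names, brick paths and subnets are placeholders):

gluster volume create data1 StorageNode1:/bricks/disk1/data1 StorageNode2:/bricks/disk1/data1
gluster volume set data1 auth.allow '10.1.0.*'   # tenant1 subnet only
gluster volume create data2 StorageNode1:/bricks/disk2/data2 StorageNode2:/bricks/disk2/data2
gluster volume set data2 auth.allow '10.2.0.*'   # tenant2 subnet only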


--
Deepak


Re: [Gluster-users] [ovirt-users] Hot to force glusterfs to use RDMA?

2017-03-03 Thread Deepak Naidu
>> As you can see from my previous email, the RDMA connection was tested with 
>> qperf.
I think you have the wrong command. You're testing TCP, not RDMA. Also check whether you have the RDMA & IB modules loaded on your hosts.
root@clei26 ~]# qperf clei22.vib  tcp_bw tcp_lat
tcp_bw:
bw  =  475 MB/sec
tcp_lat:
latency  =  52.8 us
[root@clei26 ~]#

Please run below command to test RDMA

[root@storageN2 ~]# qperf storageN1 ud_lat ud_bw
ud_lat:
latency  =  7.51 us
ud_bw:
send_bw  =  9.21 GB/sec
recv_bw  =  9.21 GB/sec
[root@sc-sdgx-202 ~]#

Read qperf man pages for more info.

* To run a TCP bandwidth and latency test:
qperf myserver tcp_bw tcp_lat
* To run a UDP latency test and then cause the server to terminate:
qperf myserver udp_lat quit
* To measure the RDMA UD latency and bandwidth:
qperf myserver ud_lat ud_bw
* To measure RDMA RC bi-directional bandwidth:
qperf myserver rc_bi_bw
* To get a range of TCP latencies with a message size from 1 to 64K
qperf myserver -oo msg_size:1:64K:*2 -vu tcp_lat


Check if you have RDMA & IB modules loaded

lsmod | grep -i ib

lsmod | grep -i rdma



--
Deepak



From: Arman Khalatyan [mailto:arm2...@gmail.com]
Sent: Thursday, March 02, 2017 10:57 PM
To: Deepak Naidu
Cc: Rafi Kavungal Chundattu Parambil; gluster-users@gluster.org; users; Sahina 
Bose
Subject: RE: [Gluster-users] [ovirt-users] Hot to force glusterfs to use RDMA?

Dear Deepak, thank you for the hints. Which gluster are you using?
As you can see from my previous email, the RDMA connection was tested with qperf. It is working as expected. In my case the clients are servers as well; they are hosts for the oVirt. Disabling selinux is not recommended by oVirt, but I will give it a try.

On 03.03.2017 at 7:50 AM, "Deepak Naidu" <dna...@nvidia.com> wrote:
I have been testing glusterfs over RDMA & below is the command I use. Reading the logs, it looks like your IB (InfiniBand) device is not being initialized. I am not sure if you have an issue on the client IB or the storage server IB. Also, have you configured your IB devices correctly? I am using IPoIB.
Can you check your firewall and disable selinux? I think you might have checked these already.

mount -t glusterfs -o transport=rdma storageN1:/vol0 /mnt/vol0



• The below error appears if you have an issue starting your volume. I had this issue when my transport was set to tcp,rdma; I had to force start my volume. If I set the volume to tcp only, the volume would start easily.

[2017-03-02 11:49:47.829391] E [MSGID: 114022] [client.c:2530:client_init_rpc] 
0-GluReplica-client-2: failed to initialize RPC
[2017-03-02 11:49:47.829413] E [MSGID: 101019] [xlator.c:433:xlator_init] 
0-GluReplica-client-2: Initialization of volume 'GluReplica-client-2' failed, 
review your volfile again
[2017-03-02 11:49:47.829425] E [MSGID: 101066] 
[graph.c:324:glusterfs_graph_init] 0-GluReplica-client-2: initializing 
translator failed
[2017-03-02 11:49:47.829436] E [MSGID: 101176] 
[graph.c:673:glusterfs_graph_activate] 0-graph: init failed
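
The force start mentioned above amounts to this (volume name taken from this thread):

gluster volume start GluReplica force
gluster volume status GluReplica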


• The below error appears if you have an issue with the IB device, or if it is not configured properly.

[2017-03-02 11:49:47.828996] W [MSGID: 103071] 
[rdma.c:4589:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel 
creation failed [No such device]
[2017-03-02 11:49:47.829067] W [MSGID: 103055] [rdma.c:4896:init] 
0-GluReplica-client-2: Failed to initialize IB Device
[2017-03-02 11:49:47.829080] W [rpc-transport.c:354:rpc_transport_load] 
0-rpc-transport: 'rdma' initialization failed


--
Deepak


From: gluster-users-boun...@gluster.org [mailto:gluster-users-boun...@gluster.org] On Behalf Of Sahina Bose
Sent: Thursday, March 02, 2017 10:26 PM
To: Arman Khalatyan; gluster-users@gluster.org; Rafi Kavungal Chundattu Parambil
Cc: users
Subject: Re: [Gluster-users] [ovirt-users] Hot to force glusterfs to use RDMA?

[Adding gluster users to help with error]

[2017-03-02 11:49:47.828996] W [MSGID: 103071] 
[rdma.c:4589:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel 
creation failed [No such device]

On Thu, Mar 2, 2017 at 5:36 PM, Arman Khalatyan <arm2...@gmail.com> wrote:
BTW RDMA is working as expected:
root@clei26 ~]# qperf clei22.vib  tcp_bw tcp_lat
tcp_bw:
bw  =  475 MB/sec
tcp_lat:
latency  =  52.8 us
[root@clei26 ~]#
thank you beforehand.
Arman.

On Thu, Mar 2, 2017 at 12:54 PM, Arman Khalatyan <arm2...@gmail.com> wrote:
just for reference:
 gluster volume info

Volume Name: GluReplica
Type: Replicate
Volume ID: ee686dfe-203a-4caa-a691-26353460cc48
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp,rdma
Bricks:
Brick1: 10.10.10.44:/zclei22/01/glu
Brick2: 

Re: [Gluster-users] [ovirt-users] Hot to force glusterfs to use RDMA?

2017-03-02 Thread Deepak Naidu
I have been testing glusterfs over RDMA & below is the command I use. Reading the logs, it looks like your IB (InfiniBand) device is not being initialized. I am not sure if you have an issue on the client IB or the storage server IB. Also, have you configured your IB devices correctly? I am using IPoIB.
Can you check your firewall and disable selinux? I think you might have checked these already.

mount -t glusterfs -o transport=rdma storageN1:/vol0 /mnt/vol0



· The below error appears if you have an issue starting your volume. I had this issue when my transport was set to tcp,rdma; I had to force start my volume. If I set the volume to tcp only, the volume would start easily.

[2017-03-02 11:49:47.829391] E [MSGID: 114022] [client.c:2530:client_init_rpc] 
0-GluReplica-client-2: failed to initialize RPC
[2017-03-02 11:49:47.829413] E [MSGID: 101019] [xlator.c:433:xlator_init] 
0-GluReplica-client-2: Initialization of volume 'GluReplica-client-2' failed, 
review your volfile again
[2017-03-02 11:49:47.829425] E [MSGID: 101066] 
[graph.c:324:glusterfs_graph_init] 0-GluReplica-client-2: initializing 
translator failed
[2017-03-02 11:49:47.829436] E [MSGID: 101176] 
[graph.c:673:glusterfs_graph_activate] 0-graph: init failed



· The below error appears if you have an issue with the IB device, or if it is not configured properly.

[2017-03-02 11:49:47.828996] W [MSGID: 103071] 
[rdma.c:4589:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel 
creation failed [No such device]
[2017-03-02 11:49:47.829067] W [MSGID: 103055] [rdma.c:4896:init] 
0-GluReplica-client-2: Failed to initialize IB Device
[2017-03-02 11:49:47.829080] W [rpc-transport.c:354:rpc_transport_load] 
0-rpc-transport: 'rdma' initialization failed



--
Deepak


From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Sahina Bose
Sent: Thursday, March 02, 2017 10:26 PM
To: Arman Khalatyan; gluster-users@gluster.org; Rafi Kavungal Chundattu Parambil
Cc: users
Subject: Re: [Gluster-users] [ovirt-users] Hot to force glusterfs to use RDMA?

[Adding gluster users to help with error]

[2017-03-02 11:49:47.828996] W [MSGID: 103071] 
[rdma.c:4589:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel 
creation failed [No such device]

On Thu, Mar 2, 2017 at 5:36 PM, Arman Khalatyan <arm2...@gmail.com> wrote:
BTW RDMA is working as expected:
root@clei26 ~]# qperf clei22.vib  tcp_bw tcp_lat
tcp_bw:
bw  =  475 MB/sec
tcp_lat:
latency  =  52.8 us
[root@clei26 ~]#
thank you beforehand.
Arman.

On Thu, Mar 2, 2017 at 12:54 PM, Arman Khalatyan <arm2...@gmail.com> wrote:
just for reference:
 gluster volume info

Volume Name: GluReplica
Type: Replicate
Volume ID: ee686dfe-203a-4caa-a691-26353460cc48
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp,rdma
Bricks:
Brick1: 10.10.10.44:/zclei22/01/glu
Brick2: 10.10.10.42:/zclei21/01/glu
Brick3: 10.10.10.41:/zclei26/01/glu (arbiter)
Options Reconfigured:
network.ping-timeout: 30
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.data-self-heal-algorithm: full
features.shard: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
nfs.disable: on



[root@clei21 ~]# gluster volume status
Status of volume: GluReplica
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick 10.10.10.44:/zclei22/01/glu   49158 49159  Y   15870
Brick 10.10.10.42:/zclei21/01/glu   49156 49157  Y   17473
Brick 10.10.10.41:/zclei26/01/glu   49153 49154  Y   18897
Self-heal Daemon on localhost   N/A   N/AY   17502
Self-heal Daemon on 10.10.10.41 N/A   N/AY   13353
Self-heal Daemon on 10.10.10.44 N/A   N/AY   32745

Task Status of Volume GluReplica
--
There are no active volume tasks

On Thu, Mar 2, 2017 at 12:52 PM, Arman Khalatyan <arm2...@gmail.com> wrote:
I am not able to mount with RDMA over the CLI.
Are there some volfile parameters that need to be tuned?
/usr/bin/mount  -t glusterfs  -o 
backup-volfile-servers=10.10.10.44:10.10.10.42:10.10.10.41,transport=rdma 
10.10.10.44:/GluReplica /mnt

[2017-03-02 11:49:47.795511] I [MSGID: 100030] [glusterfsd.c:2454:main] 
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.9 (args: 
/usr/sbin/glusterfs --volfile-server=10.10.10.44 --volfile-server=10.10.10.44 
--volfile-server=10.10.10.42 --volfile-server=10.10.10.41 
--volfile-server-transport=rdma --volfile-id=/GluReplica.rdma /mnt)
[2017-03-02 11:49:47.

[Gluster-users] GlusterFS Multitenancy -- supports multi-tenancy by partitioning users or groups into logical volumes on shared storage

2017-03-02 Thread Deepak Naidu
Hello,

I have been reading the below statement in GlusterFS docs & articles regarding multi-tenancy. Is this statement related to virtual environments, i.e. VMs? How valid is "partitioning users or groups into logical volumes"? Can someone explain what it really means?
Is it that I can associate a user/group (UID/GID), like NFS, with a GlusterFS volume?

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/GlusterFS%20Introduction/

GlusterFS. It supports multi-tenancy by partitioning users or groups into 
logical volumes on shared storage.


My thought was that I can do multi-tenancy at the volume level as below.


* Create a distributed volume named data1 for tenant1 from StorageNode1-5 using Disk1 (raided) over the NIC-1 network

* Similarly, create a distributed volume named data2 for tenant2 from StorageNode1-5 using Disk2 (raided) over the NIC-2 network

Is my understanding correct? How does the user/group come into the picture?


--
Deepak


Re: [Gluster-users] volume start: data0: failed: Commit failed on localhost.

2017-02-25 Thread Deepak Naidu
>>So in this case, although the volume status shows that the volume is not started, the brick process(es) actually do start. As a workaround, please use volume start force one more time.

Thanks Atin for providing the bug info.

--
Deepak

> On Feb 25, 2017, at 7:16 AM, Atin Mukherjee  wrote:
> 
> .


[Gluster-users] volume start: data0: failed: Commit failed on localhost.

2017-02-24 Thread Deepak Naidu
I keep on getting this error when my config.transport is set to both tcp,rdma. 
The volume doesn't start. I get the below error during volume start.

To get around this, I end up deleting the volume, then configuring either only rdma or only tcp. Maybe I am missing something; I am just trying to get the volume up.

root@hostname:~# gluster volume start data0
volume start: data0: failed: Commit failed on localhost. Please check log file 
for details.
root@hostname:~#

root@hostname:~# gluster volume status data0
Staging failed on storageN2. Error: Volume data0 is not started
root@hostname:~#

=
[2017-02-24 08:00:29.923516] I [MSGID: 106499] 
[glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management: 
Received status volume req for volume data0
[2017-02-24 08:00:29.926140] E [MSGID: 106153] 
[glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Staging failed on 
storageN2. Error: Volume data0 is not started
[2017-02-24 08:00:33.770505] I [MSGID: 106499] 
[glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management: 
Received status volume req for volume data0
[2017-02-24 08:00:33.772824] E [MSGID: 106153] 
[glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Staging failed on 
storageN2. Error: Volume data0 is not started
=
[2017-02-24 08:01:36.305165] E [MSGID: 106537] 
[glusterd-volume-ops.c:1660:glusterd_op_stage_start_volume] 0-management: 
Volume data0 already started
[2017-02-24 08:01:36.305191] W [MSGID: 106122] 
[glusterd-mgmt.c:198:gd_mgmt_v3_pre_validate_fn] 0-management: Volume start 
prevalidation failed.
[2017-02-24 08:01:36.305198] E [MSGID: 106122] 
[glusterd-mgmt.c:884:glusterd_mgmt_v3_pre_validate] 0-management: Pre 
Validation failed for operation Start on local node
[2017-02-24 08:01:36.305205] E [MSGID: 106122] 
[glusterd-mgmt.c:2009:glusterd_mgmt_v3_initiate_all_phases] 0-management: Pre 
Validation Failed


--
Deepak



[Gluster-users] GlusterFS throughput inconsistent

2017-02-22 Thread Deepak Naidu
Hello,

I have GlusterFS 3.8.8. I am using IB RDMA. I have noticed that during writes or reads the throughput doesn't seem consistent for the same workload (fio command). Sometimes I get higher throughput; sometimes it quickly drops to half and stays there.

I cannot predict consistent behavior every time I run the same workload, and the time to complete varies. Is there any log file or something else I can look into to understand this behavior? I am a single client (fuse) running 32 threads, 1MB block size, creating 200GB or reading 200GB of files randomly with direct IO.
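
A sketch of the kind of fio job described (directory, per-job size and iodepth are placeholders; assumes the libaio engine):

fio --name=randwrite-test --directory=/mnt/vol0 --rw=randwrite --bs=1m \
    --numjobs=32 --size=6g --direct=1 --ioengine=libaio --iodepth=4 --group_reporting
# 32 jobs x 6g is roughly the 200GB workload described above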

--
Deepak


Re: [Gluster-users] Different network for server and client

2017-02-22 Thread Deepak Naidu

>>I thought about this, but if servers are also clients, then this would not work.
>>Well, I suppose this is not possible or not required.

In that case, I am not sure why you would need a separate network?


>>Is it really the client which replicates the data and distributes it to the 
>>different nodes?

Yes, as far as I understand; folks can correct me here. When you mount GlusterFS via fuse, the client gets the list of storage nodes which are part of the volume's bricks. When you write data on the client, the GlusterFS fuse client sends it to the servers according to whatever translator is chosen, for example a replicated, distributed, or dispersed volume.
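
A sketch of that flow (node names and brick paths are placeholders):

gluster volume create gv0 replica 3 node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1
gluster volume start gv0
mount -t glusterfs node1:/gv0 /mnt/gv0
# the fuse client fetches the volfile from node1, then writes each file to all three bricks itself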

--
Deepak

> On Feb 22, 2017, at 9:44 AM, Alessandro Briosi  wrote:
> 
> 
> I thought about this, but if servers are also clients, then this would not 
> work.
> Well, I suppose this is not possible or not required.
> 
> Is it really the client which replicates the data and distributes it to the 
> different nodes?


Re: [Gluster-users] Different network for server and client

2017-02-22 Thread Deepak Naidu
I have a setup where storage nodes use network-1 & client nodes use network-2.

In both server & client, I use /etc/hosts entries to define the storage node names, for example node1, node2, node3, etc.

When a client resolves the node1 hostname it gets network-2, & when a storage node resolves node1 it gets network-1.

I don't think GlusterFS has as many scrubbing jobs as other SDS that need a cluster interconnect; it's mostly a client-server replication/translator model. Another use case where a "different" network is helpful is if you have remote (geo) replication going across the pipe.
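
A sketch of the split-horizon /etc/hosts trick (addresses are placeholders):

# /etc/hosts on the storage nodes (network-1)
10.10.1.11  node1
10.10.1.12  node2

# /etc/hosts on the clients (network-2)
10.10.2.11  node1
10.10.2.12  node2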

--
Deepak

On Feb 22, 2017, at 9:19 AM, David Gossage <dgoss...@carouselchecks.com> wrote:


On Wed, Feb 22, 2017 at 9:29 AM, Alessandro Briosi <a...@metalit.com> wrote:
On 22/02/2017 13:54, Gandalf Corvotempesta wrote:
> I don't think it would be possible, because it is the client that writes to
> all servers.
> The replication is done by the client, not by the server.


I really hope this is not true.

Not sure if it's a great idea, but gluster, if I recall, binds on all interfaces, so if you use DNS names to connect and create your gluster volumes/peers, you can have those DNS names point to whatever IP you want.

Alessandro

Re: [Gluster-users] glusterfs + performance-tuning + infiniband + rdma

2017-02-21 Thread Deepak Naidu
Anyone who has tuned glusterfs performance for read and write IO?

--
Deepak

On Feb 20, 2017, at 9:24 AM, Deepak Naidu <dna...@nvidia.com> wrote:

Hello,

I tried some performance tuning options like performance.client-io-threads etc., & my throughput increased by more than 50%. Since then I have been trying to find the performance tuning parameters to increase write throughput.

The logic goes: if I get  MBps using a local SSD, then if I run the same test on GlusterFS (2x distribute), do I get 2x the throughput, or (1/2) the time of the local SSD? I know writes can't be near local SSD speed. But I am using RDMA & I feel there are some GlusterFS tunables to increase write perf, as I was already able to increase write perf by 50%.

Anyone, sharing some basic guidelines is appreciated.
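
A sketch of write-side knobs to experiment with (volume name and values are placeholders, not recommendations):

gluster volume set <volname> performance.client-io-threads on
gluster volume set <volname> performance.write-behind on
gluster volume set <volname> performance.write-behind-window-size 4MB
gluster volume set <volname> client.event-threads 4
gluster volume set <volname> server.event-threads 4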

--
Deepak


[Gluster-users] glusterfs + performance-tuning + infiniband + rdma

2017-02-20 Thread Deepak Naidu
Hello,

I tried some performance tuning options like performance.client-io-threads etc., & my throughput increased by more than 50%. Since then I have been trying to find the performance tuning parameters to increase write throughput.

The logic goes: if I get  MBps using a local SSD, then if I run the same test on GlusterFS (2x distribute), do I get 2x the throughput, or ½ the time of the local SSD? I know writes can't be near local SSD speed. But I am using RDMA & I feel there are some GlusterFS tunables to increase write perf, as I was already able to increase write perf by 50%.

Anyone, sharing some basic guidelines is appreciated.

--
Deepak


[Gluster-users] GlusterFS + Ubuntu + Infiniband RDMA

2017-02-15 Thread Deepak Naidu
Hello,

Does anyone have a working setup of GlusterFS on Ubuntu 14.04.5 LTS using 
Infiniband & RDMA ?

I am planning to use InfiniBand (IPoIB) for the cluster interconnect; how would RDMA be configured? Any info is appreciated.
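
A sketch of the RDMA side (node names, brick paths and volume name are placeholders):

gluster volume create vol0 transport tcp,rdma storageN1:/bricks/b1 storageN2:/bricks/b1
gluster volume start vol0
mount -t glusterfs -o transport=rdma storageN1:/vol0 /mnt/vol0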

--
Deepak


[Gluster-users] GlusterFS Volume for HPC workload

2017-02-09 Thread Deepak Naidu
Folks,

Wanted to get some input on the type of GlusterFS volume best suited for an HPC workload which is throughput intensive. Anyone using GlusterFS in their env for HPC workloads?

I want to keep a balance of data usage & redundancy. I want to try an erasure-coded (dispersed) volume, but I am not sure whether it suits throughput-intensive work or not.

PS: I don't have any GlusterFS env running; still in the thought process.
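
A sketch of the dispersed layout in question (six bricks tolerating two failures; names are placeholders):

gluster volume create hpcvol disperse 6 redundancy 2 \
    node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1 \
    node4:/bricks/b1 node5:/bricks/b1 node6:/bricks/b1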

--
Deepak


Re: [Gluster-users] Linux (ls -l) command pauses/slow on GlusterFS mounts

2016-08-11 Thread Deepak Naidu
>> Also can you please mention which version of ganesha and details of 
>> ganesha.conf. Latest stable release for
I am using ganesha.nfsd Release = V2.3.2

ganesha.conf is the standard glusterfs FSAL.

>> I am wondering why it is sending a getattr call on "security.selinux".
Fyi, selinux is disabled both on server & client.


>> Check /var/log/ganesha.log and /var/log/ganesha-gfapi.log for more clues
They are kosher. I replied to my earlier post. I guess the delay/pause in the ls -l or rm -rf command is something to do with my VM setup/network, as I see the pause/delay with kernel NFS also.


--
Deepak


-Original Message-
From: Jiffin Tony Thottan [mailto:jthot...@redhat.com] 
Sent: Thursday, August 11, 2016 10:36 PM
To: Deepak Naidu; Vijay Bellur; gluster-users@gluster.org
Subject: Re: [Gluster-users] Linux (ls -l) command pauses/slow on GlusterFS 
mounts



On 12/08/16 07:23, Deepak Naidu wrote:
> I tried more things to figure out the issue, like upgrading NFS-ganesha to 
> the latest version (as the earlier version had some bug regarding crashing); 
> that helped a bit.
>
> But still the ls -l or rm -rf of files were hanging, though not as much as 
> earlier. So the upgrade of NFS-ganesha to the stable version did help a bit.
>
> I did strace again; it looks like it's pausing/hanging at "lstat". I had to 
> [ctrl+c] to get the exact hang/pausing line.
>
> lgetxattr("/mnt/gluster/rand.26.0", "security.selinux", 0x1990a00, 255) = -1 ENODATA (No data available)
> lstat("/mnt/gluster/rand.25.0", {st_mode=S_IFREG|0644, st_size=2147483648, ...}) = 0
> lgetxattr("/mnt/gluster/rand.25.0", "security.selinux", 0x1990a20, 255) = -1 ENODATA (No data available)
> lstat("/mnt/gluster/rand.24.0", ^C
>
>
> NOTE: I am running fio to generate some write operation & hangs are seen when 
> issuing ls during write operation.
>
> Next thing I might try is to use an NFS mount rather than GlusterFS fuse, to 
> see if it's related to the fuse client.
>
> strace of ls -l /mnt/gluster/==
>
> munmap(0x7efebec71000, 4096)= 0
> openat(AT_FDCWD, "/mnt/gluster/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
> getdents(3, /* 14 entries */, 32768)= 464
> lstat("/mnt/gluster/9e50d562-5846-4a60-ad75-e95dcbe0e38a.vhd", {st_mode=S_IFREG|0644, st_size=19474461184, ...}) = 0
> lgetxattr("/mnt/gluster/9e50d562-5846-4a60-ad75-e95dcbe0e38a.vhd", "security.selinux", 0x1990900, 255) = -1 ENODATA (No data available)
> lstat("/mnt/gluster/file1", {st_mode=S_IFREG|0644, st_size=19474461184, ...}) = 0
> lgetxattr("/mnt/gluster/file1", "security.selinux", 0x1990940, 255) = -1 ENODATA (No data available)
> lstat("/mnt/gluster/rand.0.0", {st_mode=S_IFREG|0644, st_size=2147483648, ...}) = 0
> lgetxattr("/mnt/gluster/rand.0.0", "security.selinux", 0x1990940, 255) = -1 ENODATA (No data available)
> lstat("/mnt/gluster/rand.31.0", {st_mode=S_IFREG|0644, st_size=2147483648, ...}) = 0
> lgetxattr("/mnt/gluster/rand.31.0", "security.selinux", 0x1990960, 255) = -1 ENODATA (No data available)
> lstat("/mnt/gluster/rand.30.0", {st_mode=S_IFREG|0644, st_size=2147483648, ...}) = 0
> lgetxattr("/mnt/gluster/rand.30.0", "security.selinux", 0x1990980, 255) = -1 ENODATA (No data available)
> lstat("/mnt/gluster/rand.29.0", {st_mode=S_IFREG|0644, st_size=2147483648, ...}) = 0
> lgetxattr("/mnt/gluster/rand.29.0", "security.selinux", 0x19909a0, 255) = -1 ENODATA (No data available)
> lstat("/mnt/gluster/rand.28.0", {st_mode=S_IFREG|0644, st_size=2147483648, ...}) = 0
> lgetxattr("/mnt/gluster/rand.28.0", "security.selinux", 0x19909c0, 255) = -1 ENODATA (No data available)
> lstat("/mnt/gluster/rand.27.0", {st_mode=S_IFREG|0644, st_size=2147483648, ...}) = 0
> lgetxattr("/mnt/gluster/rand.27.0", "security.selinux", 0x19909e0, 255) = -1 ENODATA (No data available)
> lstat("/mnt/gluster/rand.26.0", {st_mode=S_IFREG|0644, st_size=2147483648, ...}) = 0
> lgetxattr("/mnt/gluster/rand.26.0", "security.selinux", 0x1990a00, 255) = -1 ENODATA (No data available)
> lstat("/mnt/gluster/rand.25.0", {st_mode=S_IFREG|0644, st_size=2147483648, ...}) = 0
> lgetxattr("/mnt/gluster/rand.25.0", "security.selinux", 0x1990a20, 255) = -1 ENODATA (No data available)
> lstat("/mnt/gluster/rand.24.0", ^C
>
> strace of end -  ls -l /mnt/gluster/==


I am wondering why it is sending a getattr call 

Re: [Gluster-users] Linux (ls -l) command pauses/slow on GlusterFS mounts

2016-08-11 Thread Deepak Naidu
OK, I tried kernel NFS (i.e. no NFS-ganesha) & it hangs as well. So, as you said Vijay, the issue might be with my virtual setup or on the network side.

--
Deepak

-Original Message-
From: Deepak Naidu 
Sent: Thursday, August 11, 2016 6:54 PM
To: 'Vijay Bellur'; 'gluster-users@gluster.org'
Subject: RE: [Gluster-users] Linux (ls -l) command pauses/slow on GlusterFS 
mounts

I tried more things to figure out the issue, like upgrading NFS-ganesha to the latest version (as the earlier version had some bug regarding crashing); that helped a bit.

But still the ls -l or rm -rf of files were hanging, though not as much as earlier. So the upgrade of NFS-ganesha to the stable version did help a bit.

I did strace again; it looks like it's pausing/hanging at "lstat". I had to [ctrl+c] to get the exact hang/pausing line.

lgetxattr("/mnt/gluster/rand.26.0", "security.selinux", 0x1990a00, 255) = -1 ENODATA (No data available)
lstat("/mnt/gluster/rand.25.0", {st_mode=S_IFREG|0644, st_size=2147483648, ...}) = 0
lgetxattr("/mnt/gluster/rand.25.0", "security.selinux", 0x1990a20, 255) = -1 ENODATA (No data available)
lstat("/mnt/gluster/rand.24.0", ^C


NOTE: I am running fio to generate some write operation & hangs are seen when 
issuing ls during write operation.

Next thing I might try is to use an NFS mount rather than GlusterFS fuse, to see if it's related to the fuse client.

strace of ls -l /mnt/gluster/==

munmap(0x7efebec71000, 4096)= 0
openat(AT_FDCWD, "/mnt/gluster/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
getdents(3, /* 14 entries */, 32768)= 464
lstat("/mnt/gluster/9e50d562-5846-4a60-ad75-e95dcbe0e38a.vhd", {st_mode=S_IFREG|0644, st_size=19474461184, ...}) = 0
lgetxattr("/mnt/gluster/9e50d562-5846-4a60-ad75-e95dcbe0e38a.vhd", "security.selinux", 0x1990900, 255) = -1 ENODATA (No data available)
lstat("/mnt/gluster/file1", {st_mode=S_IFREG|0644, st_size=19474461184, ...}) = 0
lgetxattr("/mnt/gluster/file1", "security.selinux", 0x1990940, 255) = -1 ENODATA (No data available)
lstat("/mnt/gluster/rand.0.0", {st_mode=S_IFREG|0644, st_size=2147483648, ...}) = 0
lgetxattr("/mnt/gluster/rand.0.0", "security.selinux", 0x1990940, 255) = -1 ENODATA (No data available)
lstat("/mnt/gluster/rand.31.0", {st_mode=S_IFREG|0644, st_size=2147483648, ...}) = 0
lgetxattr("/mnt/gluster/rand.31.0", "security.selinux", 0x1990960, 255) = -1 ENODATA (No data available)
lstat("/mnt/gluster/rand.30.0", {st_mode=S_IFREG|0644, st_size=2147483648, ...}) = 0
lgetxattr("/mnt/gluster/rand.30.0", "security.selinux", 0x1990980, 255) = -1 ENODATA (No data available)
lstat("/mnt/gluster/rand.29.0", {st_mode=S_IFREG|0644, st_size=2147483648, ...}) = 0
lgetxattr("/mnt/gluster/rand.29.0", "security.selinux", 0x19909a0, 255) = -1 ENODATA (No data available)
lstat("/mnt/gluster/rand.28.0", {st_mode=S_IFREG|0644, st_size=2147483648, ...}) = 0
lgetxattr("/mnt/gluster/rand.28.0", "security.selinux", 0x19909c0, 255) = -1 ENODATA (No data available)
lstat("/mnt/gluster/rand.27.0", {st_mode=S_IFREG|0644, st_size=2147483648, ...}) = 0
lgetxattr("/mnt/gluster/rand.27.0", "security.selinux", 0x19909e0, 255) = -1 ENODATA (No data available)
lstat("/mnt/gluster/rand.26.0", {st_mode=S_IFREG|0644, st_size=2147483648, ...}) = 0
lgetxattr("/mnt/gluster/rand.26.0", "security.selinux", 0x1990a00, 255) = -1 ENODATA (No data available)
lstat("/mnt/gluster/rand.25.0", {st_mode=S_IFREG|0644, st_size=2147483648, ...}) = 0
lgetxattr("/mnt/gluster/rand.25.0", "security.selinux", 0x1990a20, 255) = -1 ENODATA (No data available)
lstat("/mnt/gluster/rand.24.0", ^C

strace of end -  ls -l /mnt/gluster/==

-Original Message-
From: Deepak Naidu
Sent: Wednesday, August 10, 2016 2:25 PM
To: Vijay Bellur
Cc: gluster-users@gluster.org
Subject: RE: [Gluster-users] Linux (ls -l) command pauses/slow on GlusterFS 
mounts

To be more precise, the hang is clearly seen when there is some IO (write) to the mount point. Even rm -rf takes time to clear the files.

Below, the time command shows the delay. Typically it should take less than a second, but glusterfs takes more than 5 seconds just to list 32x 2GB files.

[root@client-host ~]# time ls -l /mnt/gluster/
total 34575680
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.0.0
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.1.0
-rw-r--r--. 1 root root 2147454976 Aug 10 12:23 rand.10.0
-rw-r--r--. 1 root root 2147463168 Aug 10 12:23 rand.11.0
-rw-r--r--. 1 root root 2147467264 Aug 10 12:23 rand.12.0
-rw-r--r--. 1 root root 2147475456 Aug 10 12:23 rand.13.0
-rw-r

Re: [Gluster-users] Linux (ls -l) command pauses/slow on GlusterFS mounts

2016-08-11 Thread Deepak Naidu
I tried more things to figure out the issue, like upgrading NFS-ganesha to the latest version (as the earlier version had some bug regarding crashing); that helped a bit.

But still the ls -l or rm -rf of files were hanging, though not as much as earlier. So the upgrade of NFS-ganesha to the stable version did help a bit.

I did strace again, looks like its pausing/hanging at "lstat" I had to [crtl+c] 
to get the exact hang/pausing line.

lgetxattr("/mnt/gluster/rand.26.0", "security.selinux", 0x1990a00, 255) = -1 
ENODATA (No data available)
lstat("/mnt/gluster/rand.25.0", {st_mode=S_IFREG|0644, st_size=2147483648, 
...}) = 0
lgetxattr("/mnt/gluster/rand.25.0", "security.selinux", 0x1990a20, 255) = -1 
ENODATA (No data available)
lstat("/mnt/gluster/rand.24.0", ^C
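
To quantify which calls actually stall, a quick sketch (the trace file path is
just an example): -T appends the time spent in each syscall, and -c prints a
per-syscall summary.

strace -T -e trace=lstat,lgetxattr -o /tmp/ls.trace ls -l /mnt/gluster/
sort -t'<' -k2 -rn /tmp/ls.trace | head    # slowest calls first
strace -c -e trace=lstat,lgetxattr ls -l /mnt/gluster/ > /dev/null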


NOTE: I am running fio to generate write load, and the hangs are seen when
issuing ls during the writes.

Next, I might try an NFS mount rather than the GlusterFS FUSE mount to see if
the issue is related to the FUSE client.
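
A rough sketch of that comparison (host, volume, and export path are
placeholders; assumes NFS-Ganesha exports the volume over NFSv4):

mkdir -p /mnt/gluster-nfs
mount -t nfs -o vers=4 gluster-node1:/testvol /mnt/gluster-nfs
time ls -l /mnt/gluster-nfs/    # compare against the FUSE mount timing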

=== strace of ls -l /mnt/gluster/ ===

munmap(0x7efebec71000, 4096)= 0
openat(AT_FDCWD, "/mnt/gluster/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
getdents(3, /* 14 entries */, 32768)= 464
lstat("/mnt/gluster/9e50d562-5846-4a60-ad75-e95dcbe0e38a.vhd", 
{st_mode=S_IFREG|0644, st_size=19474461184, ...}) = 0
lgetxattr("/mnt/gluster/9e50d562-5846-4a60-ad75-e95dcbe0e38a.vhd", 
"security.selinux", 0x1990900, 255) = -1 ENODATA (No data available)
lstat("/mnt/gluster/file1", {st_mode=S_IFREG|0644, st_size=19474461184, ...}) = 0
lgetxattr("/mnt/gluster/file1", "security.selinux", 0x1990940, 255) = -1 
ENODATA (No data available)
lstat("/mnt/gluster/rand.0.0", {st_mode=S_IFREG|0644, st_size=2147483648, ...}) 
= 0
lgetxattr("/mnt/gluster/rand.0.0", "security.selinux", 0x1990940, 255) = -1 
ENODATA (No data available)
lstat("/mnt/gluster/rand.31.0", {st_mode=S_IFREG|0644, st_size=2147483648, 
...}) = 0
lgetxattr("/mnt/gluster/rand.31.0", "security.selinux", 0x1990960, 255) = -1 
ENODATA (No data available)
lstat("/mnt/gluster/rand.30.0", {st_mode=S_IFREG|0644, st_size=2147483648, 
...}) = 0
lgetxattr("/mnt/gluster/rand.30.0", "security.selinux", 0x1990980, 255) = -1 
ENODATA (No data available)
lstat("/mnt/gluster/rand.29.0", {st_mode=S_IFREG|0644, st_size=2147483648, 
...}) = 0
lgetxattr("/mnt/gluster/rand.29.0", "security.selinux", 0x19909a0, 255) = -1 
ENODATA (No data available)
lstat("/mnt/gluster/rand.28.0", {st_mode=S_IFREG|0644, st_size=2147483648, 
...}) = 0
lgetxattr("/mnt/gluster/rand.28.0", "security.selinux", 0x19909c0, 255) = -1 
ENODATA (No data available)
lstat("/mnt/gluster/rand.27.0", {st_mode=S_IFREG|0644, st_size=2147483648, 
...}) = 0
lgetxattr("/mnt/gluster/rand.27.0", "security.selinux", 0x19909e0, 255) = -1 
ENODATA (No data available)
lstat("/mnt/gluster/rand.26.0", {st_mode=S_IFREG|0644, st_size=2147483648, 
...}) = 0
lgetxattr("/mnt/gluster/rand.26.0", "security.selinux", 0x1990a00, 255) = -1 
ENODATA (No data available)
lstat("/mnt/gluster/rand.25.0", {st_mode=S_IFREG|0644, st_size=2147483648, 
...}) = 0
lgetxattr("/mnt/gluster/rand.25.0", "security.selinux", 0x1990a20, 255) = -1 
ENODATA (No data available)
lstat("/mnt/gluster/rand.24.0", ^C

=== end of strace - ls -l /mnt/gluster/ ===
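
Side note on the trace: ls issues an lgetxattr for "security.selinux" on every
entry, and on a FUSE mount each of those is a round-trip into the glusterfs
client process. If those xattr lookups turn out to be part of the delay, one
thing worth trying - assuming your mount helper passes SELinux context options
through to the kernel (verify on your version) - is pinning a context at mount
time so the kernel answers the lookups locally (host/volume names are
placeholders):

mount -t glusterfs -o context="system_u:object_r:fusefs_t:s0" gluster-node1:/testvol /mnt/gluster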

-Original Message-
From: Deepak Naidu 
Sent: Wednesday, August 10, 2016 2:25 PM
To: Vijay Bellur
Cc: gluster-users@gluster.org
Subject: RE: [Gluster-users] Linux (ls -l) command pauses/slow on GlusterFS 
mounts

To be more precise, the hang is clearly seen when there is some I/O (writes) to
the mount point. Even rm -rf takes time to clear the files.

Below, the time command shows the delay. It should typically take less than a
second, but glusterfs takes more than 5 seconds just to list 32x 2GB files.

[root@client-host ~]# time ls -l /mnt/gluster/
total 34575680
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.0.0
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.1.0
-rw-r--r--. 1 root root 2147454976 Aug 10 12:23 rand.10.0
-rw-r--r--. 1 root root 2147463168 Aug 10 12:23 rand.11.0
-rw-r--r--. 1 root root 2147467264 Aug 10 12:23 rand.12.0
-rw-r--r--. 1 root root 2147475456 Aug 10 12:23 rand.13.0
-rw-r--r--. 1 root root 2147479552 Aug 10 12:23 rand.14.0
-rw-r--r--. 1 root root 2147479552 Aug 10 12:23 rand.15.0
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.16.0
-rw-r--r--. 1 root root 2147479552 Aug 10 12:23 rand.17.0
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.18.0
-rw-r--r--. 1 root root 2147467264 Aug 10 12:23 rand.19.0
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.2.0
-

Re: [Gluster-users] Linux (ls -l) command pauses/slow on GlusterFS mounts

2016-08-10 Thread Deepak Naidu
To be more precise, the hang is clearly seen when there is some I/O (writes) to
the mount point. Even rm -rf takes time to clear the files.

Below, the time command shows the delay. It should typically take less than a
second, but glusterfs takes more than 5 seconds just to list 32x 2GB files.

[root@client-host ~]# time ls -l /mnt/gluster/
total 34575680
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.0.0
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.1.0
-rw-r--r--. 1 root root 2147454976 Aug 10 12:23 rand.10.0
-rw-r--r--. 1 root root 2147463168 Aug 10 12:23 rand.11.0
-rw-r--r--. 1 root root 2147467264 Aug 10 12:23 rand.12.0
-rw-r--r--. 1 root root 2147475456 Aug 10 12:23 rand.13.0
-rw-r--r--. 1 root root 2147479552 Aug 10 12:23 rand.14.0
-rw-r--r--. 1 root root 2147479552 Aug 10 12:23 rand.15.0
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.16.0
-rw-r--r--. 1 root root 2147479552 Aug 10 12:23 rand.17.0
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.18.0
-rw-r--r--. 1 root root 2147467264 Aug 10 12:23 rand.19.0
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.2.0
-rw-r--r--. 1 root root 2147475456 Aug 10 12:23 rand.20.0
-rw-r--r--. 1 root root 2147479552 Aug 10 12:23 rand.21.0
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.22.0
-rw-r--r--. 1 root root 2147459072 Aug 10 12:23 rand.23.0
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.24.0
-rw-r--r--. 1 root root 2147471360 Aug 10 12:23 rand.25.0
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.26.0
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.27.0
-rw-r--r--. 1 root root 2147479552 Aug 10 12:23 rand.28.0
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.29.0
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.3.0
-rw-r--r--. 1 root root 2147442688 Aug 10 12:23 rand.30.0
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.31.0
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.4.0
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.5.0
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.6.0
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.7.0
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.8.0
-rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.9.0

real0m7.478s
user0m0.001s
sys 0m0.005s
 [root@client-host ~]#

--
Deepak

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Deepak Naidu
Sent: Wednesday, August 10, 2016 2:18 PM
To: Vijay Bellur
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Linux (ls -l) command pauses/slow on GlusterFS 
mounts

I did strace & it's waiting on I/O.

--
Deepak

-Original Message-
From: Vijay Bellur [mailto:vbel...@redhat.com]
Sent: Wednesday, August 10, 2016 2:17 PM
To: Deepak Naidu
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Linux (ls -l) command pauses/slow on GlusterFS 
mounts

On 08/10/2016 05:12 PM, Deepak Naidu wrote:
> Before we try physical, we wanted a POC on VMs.
>
> Just a note: the VMs are decently powerful - 18 CPUs, a 10GbE NIC, 45GB RAM,
> and 1TB SSD drives. This is the per-node spec.
>
> I don't see the ls -l command hanging when I try to list the files from the
> gluster-node VMs themselves, hence the question.

The reason I alluded to a physical setup was to remove the variables that can 
affect performance in a virtual setup. The behavior is not usual for the scale 
of deployment that you mention. You could use strace in conjunction with 
gluster volume profile to determine where the latency is stemming from.
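
For reference, the profile workflow is roughly as follows (testvol is a
placeholder volume name):

gluster volume profile testvol start
time ls -l /mnt/gluster/                # reproduce the slow listing
gluster volume profile testvol info     # per-brick FOP counts and latencies
gluster volume profile testvol stop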

Regards,
Vijay

>
> --
> Deepak
>
>> On Aug 10, 2016, at 2:01 PM, Vijay Bellur  wrote:
>>
>>> On 08/10/2016 04:54 PM, Deepak Naidu wrote:
>>> Has anyone seen this issue in their environment?
>>
>>
>>> --
>>> Deepak
>>>
>>> -Original Message-----
>>> From: gluster-users-boun...@gluster.org 
>>> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Deepak Naidu
>>> Sent: Tuesday, August 09, 2016 9:14 PM
>>> To: gluster-users@gluster.org
>>> Subject: [Gluster-users] Linux (ls -l) command pauses/slow on 
>>> GlusterFS mounts
>>>
>>> Greetings,
>>>
>>> I have a 3-node GlusterFS setup on VMs for a POC; each node has 2x bricks
>>> of 200GB. Regardless of the type of volume I create, listing files under a
>>> directory with the ls command makes the GlusterFS mount hang/pause for a
>>> few seconds. This is the same whether there are 2-5 files of 19GB each or
>>> 2GB each. There are fewer than 10 files under the GlusterFS mount.
>>>
>>> I am using NFS-Ganesha as the NFS server with GlusterFS, and the Linux
>>> client is mounted using the GlusterFS FUSE mount with direct-io enabled.
>>>
>>> GlusterFS version 3.8(latest)
>>>
>&g

Re: [Gluster-users] Linux (ls -l) command pauses/slow on GlusterFS mounts

2016-08-10 Thread Deepak Naidu
I did strace & it's waiting on I/O.

--
Deepak

-Original Message-
From: Vijay Bellur [mailto:vbel...@redhat.com] 
Sent: Wednesday, August 10, 2016 2:17 PM
To: Deepak Naidu
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Linux (ls -l) command pauses/slow on GlusterFS 
mounts

On 08/10/2016 05:12 PM, Deepak Naidu wrote:
> Before we try physical, we wanted a POC on VMs.
>
> Just a note: the VMs are decently powerful - 18 CPUs, a 10GbE NIC, 45GB RAM,
> and 1TB SSD drives. This is the per-node spec.
>
> I don't see the ls -l command hanging when I try to list the files from the
> gluster-node VMs themselves, hence the question.

The reason I alluded to a physical setup was to remove the variables that can 
affect performance in a virtual setup. The behavior is not usual for the scale 
of deployment that you mention. You could use strace in conjunction with 
gluster volume profile to determine where the latency is stemming from.

Regards,
Vijay

>
> --
> Deepak
>
>> On Aug 10, 2016, at 2:01 PM, Vijay Bellur  wrote:
>>
>>> On 08/10/2016 04:54 PM, Deepak Naidu wrote:
>>> Has anyone seen this issue in their environment?
>>
>>
>>> --
>>> Deepak
>>>
>>> -Original Message-----
>>> From: gluster-users-boun...@gluster.org 
>>> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Deepak Naidu
>>> Sent: Tuesday, August 09, 2016 9:14 PM
>>> To: gluster-users@gluster.org
>>> Subject: [Gluster-users] Linux (ls -l) command pauses/slow on 
>>> GlusterFS mounts
>>>
>>> Greetings,
>>>
>>> I have a 3-node GlusterFS setup on VMs for a POC; each node has 2x bricks
>>> of 200GB. Regardless of the type of volume I create, listing files under a
>>> directory with the ls command makes the GlusterFS mount hang/pause for a
>>> few seconds. This is the same whether there are 2-5 files of 19GB each or
>>> 2GB each. There are fewer than 10 files under the GlusterFS mount.
>>>
>>> I am using NFS-Ganesha as the NFS server with GlusterFS, and the Linux
>>> client is mounted using the GlusterFS FUSE mount with direct-io enabled.
>>>
>>> GlusterFS version 3.8(latest)
>>>
>>>
>>> Any insight is appreciated.
>>
>> This does not seem usual for the deployment that you describe. Can you try 
>> on a physical setup to see if the same behavior is observed?
>>
>> -Vijay
>>
>>



Re: [Gluster-users] Linux (ls -l) command pauses/slow on GlusterFS mounts

2016-08-10 Thread Deepak Naidu
Before we try physical, we wanted a POC on VMs.

Just a note: the VMs are decently powerful - 18 CPUs, a 10GbE NIC, 45GB RAM,
and 1TB SSD drives. This is the per-node spec.

I don't see the ls -l command hanging when I try to list the files from the
gluster-node VMs themselves, hence the question.
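
If that local listing was against the brick directories rather than a client
mount, it skips the FUSE and network path entirely, which would point at the
per-entry client round-trips. A quick comparison (the brick path is a
placeholder):

time ls -l /bricks/brick1/testvol/    # on a server node, straight off the brick
time ls -l /mnt/gluster/              # on the client, through FUSE + network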

--
Deepak

> On Aug 10, 2016, at 2:01 PM, Vijay Bellur  wrote:
> 
>> On 08/10/2016 04:54 PM, Deepak Naidu wrote:
>> Has anyone seen this issue in their environment?
> 
> 
>> --
>> Deepak
>> 
>> -Original Message-
>> From: gluster-users-boun...@gluster.org 
>> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Deepak Naidu
>> Sent: Tuesday, August 09, 2016 9:14 PM
>> To: gluster-users@gluster.org
>> Subject: [Gluster-users] Linux (ls -l) command pauses/slow on GlusterFS 
>> mounts
>> 
>> Greetings,
>> 
>> I have 3node GlusterFS on VM for POC each node has 2x bricks of 200GB. 
>> Regardless of what type of volume I create when listing files under 
>> directory using ls command the GlusterFS mount hangs pauses for few seconds. 
>> This is same if there're 2-5 19gb file each or 2gb file each. There are less 
>> than  10 files under the GlusterFS mount.
>> 
>> I am using NFS-Ganesha for NFS server with GlusterFS and the Linux client is 
>> mounted using GlusterFS fuse mount with direct-io enabled.
>> 
>> GlusterFS version 3.8(latest)
>> 
>> 
>> Any insight is appreciated.
> 
> This does not seem usual for the deployment that you describe. Can you try on 
> a physical setup to see if the same behavior is observed?
> 
> -Vijay
> 
> 


Re: [Gluster-users] Linux (ls -l) command pauses/slow on GlusterFS mounts

2016-08-10 Thread Deepak Naidu
Has anyone seen this issue in their environment?

--
Deepak

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Deepak Naidu
Sent: Tuesday, August 09, 2016 9:14 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] Linux (ls -l) command pauses/slow on GlusterFS mounts

Greetings,

I have a 3-node GlusterFS setup on VMs for a POC; each node has 2x bricks of
200GB. Regardless of the type of volume I create, listing files under a
directory with the ls command makes the GlusterFS mount hang/pause for a few
seconds. This is the same whether there are 2-5 files of 19GB each or 2GB each.
There are fewer than 10 files under the GlusterFS mount.

I am using NFS-Ganesha as the NFS server with GlusterFS, and the Linux client
is mounted using the GlusterFS FUSE mount with direct-io enabled.

GlusterFS version 3.8 (latest)


Any insight is appreciated.

--
Deepak


[Gluster-users] Linux (ls -l) command pauses/slow on GlusterFS mounts

2016-08-09 Thread Deepak Naidu
Greetings,

I have a 3-node GlusterFS setup on VMs for a POC; each node has 2x bricks of
200GB. Regardless of the type of volume I create, listing files under a
directory with the ls command makes the GlusterFS mount hang/pause for a few
seconds. This is the same whether there are 2-5 files of 19GB each or 2GB each.
There are fewer than 10 files under the GlusterFS mount.

I am using NFS-Ganesha as the NFS server with GlusterFS, and the Linux client
is mounted using the GlusterFS FUSE mount with direct-io enabled.
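
Since the client is mounted with direct-io enabled, one easy variable to
isolate is re-testing with it disabled (host/volume names are placeholders;
direct-io-mode is a standard mount.glusterfs option):

mount -t glusterfs -o direct-io-mode=disable gluster-node1:/testvol /mnt/gluster
time ls -l /mnt/gluster/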

GlusterFS version 3.8 (latest)


Any insight is appreciated.

--
Deepak
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users