Joe
On Sun, Mar 19, 2017 at 11:52 AM, Deepak Naidu
<dna...@nvidia.com<mailto:dna...@nvidia.com>> wrote:
Thanks Joe for your inputs. I guess comparing client -- glusterServer IO
performance over MTLS and non-MTLS should give me some idea of the
client/server IO overhead.
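For reference, a rough A/B test might look like the sketch below. The volume
name "testvol" is an assumption (no volume name appears in this thread), and
the ssl options generally need a volume stop/start plus a client remount to
take effect:

  # TLS off (the default) -- baseline run:
  gluster volume stop testvol
  gluster volume set testvol client.ssl off
  gluster volume set testvol server.ssl off
  gluster volume start testvol
  # remount on the client, run the workload, record throughput

  # TLS on -- comparison run:
  gluster volume stop testvol
  gluster volume set testvol client.ssl on
  gluster volume set testvol server.ssl on
  gluster volume start testvol
  # remount, rerun the identical workload, compare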
Also
The documentation is reasonable, though it could be improved (I need to submit
some PRs against it). If you have any questions about setup and configuration,
I am sure I can help.
Joe
On Sat, Mar 18, 2017 at 2:25 PM, Deepak Naidu
<dna...@nvidia.com<mailto:dna...@nvidia.com>> wrote:
Hi Joe, thanks for taking the time to explain. I have a basic set of
requirements, with IO performance as a key factor; my reply below should
clarify what I am trying to achieve.
>>If I am understanding your use case properly, you want to ensure that a
>>client may only mount a gluster
Thanks Joseph for the info.
>>In addition, gluster uses MTLS (each endpoint validates the other's
>>chain-of-trust), so you get authentication as well.
Does it only do authentication of mounts? I am not interested at this moment in
IO-path encryption; I am only looking for authentication.
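For what it's worth, two hedged examples of mount authorization on a
hypothetical volume "testvol": plain IP-based authorization needs no TLS at
all, while certificate-identity authorization rides on the SSL options (which
also encrypt the IO path):

  # IP-based mount authorization, no TLS involved:
  gluster volume set testvol auth.allow '192.168.10.*'

  # Certificate-CN-based authorization; requires client.ssl/server.ssl on,
  # so the IO path is encrypted as a side effect:
  gluster volume set testvol auth.ssl-allow 'client1,client2'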
>>you can
Any info, guys?
--
Deepak
From: Deepak Naidu
Sent: Friday, March 17, 2017 12:32 AM
To: gluster-users@gluster.org
Subject: Secured mount in GlusterFS using keys
Hello,
Is there a way, as with CephFS, where a keyring can be passed for the mount? I
see SSL in GlusterFS gives some kind of secured mount based on pem & key files,
but I am a bit confused whether these are only for mount authentication or for
IO-path encryption. I only want authorized GlusterFS mounts based on
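For context, a minimal sketch of the usual GlusterFS TLS identity setup, per
the upstream SSL guide (the CN value "client1" is an assumed example):

  # Generate a key and a self-signed cert on each node/client:
  openssl genrsa -out /etc/ssl/glusterfs.key 2048
  openssl req -new -x509 -key /etc/ssl/glusterfs.key \
          -subj "/CN=client1" -out /etc/ssl/glusterfs.pem
  # Concatenate every trusted peer's .pem into the CA file on each node:
  cat client1.pem server1.pem server2.pem > /etc/ssl/glusterfs.ca
  # Optionally enable TLS on the management path as well:
  touch /var/lib/glusterd/secure-access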
Hi Karan,
>>Are you reading a small-file data-set or a large-file data-set, and secondly,
>>which protocol is the volume mounted with?
I am using a 1MB block size to test, over the RDMA transport.
--
Deepak
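A typical 1MB-block fio read test against a gluster mount (not necessarily the
exact command used in this thread; the mount point, sizes and job count are
assumptions):

  fio --name=seqread --directory=/mnt/gluster --rw=read --bs=1M \
      --size=10G --numjobs=4 --ioengine=libaio --direct=1 --group_reporting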
> On Mar 8, 2017, at 2:48 AM, Karan Sandha wrote:
>
> Are you reading a
Are there any tuning params for READ that I need to set to get maximum read
throughput on a GlusterFS distributed volume? Currently, I am trying to compare
this with my local SSD disk performance.
* My local SSD (/dev/sdb) can random-read 6.3TB in 56 minutes on an XFS
filesystem.
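No specific tunables were confirmed in this thread; a hedged sample of the
read-side options commonly experimented with (volume name assumed, values are
examples rather than recommendations):

  gluster volume set testvol performance.read-ahead on
  gluster volume set testvol performance.read-ahead-page-count 16
  gluster volume set testvol performance.io-cache on
  gluster volume set testvol performance.cache-size 1GB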
>>The idea of multi-tenancy is to have multiple tenants on the same volume.
>>Maybe I didn't understand your idea completely
First, if you can help me understand how GlusterFS defines and implements
multi-tenancy, it would be helpful.
Second, multi-tenancy should have complete isolation of resources from
--
Deepak
From: Arman Khalatyan [mailto:arm2...@gmail.com]
Sent: Thursday, March 02, 2017 10:57 PM
To: Deepak Naidu
Cc: Rafi Kavungal Chundattu Parambil; gluster-users@gluster.org; users; Sahina
Bose
Subject: RE: [Gluster-users] [ovirt-users] Hot to force glusterfs to use RDMA?
Dear Deepak, thank
I have been testing GlusterFS over RDMA & below is the command I use. Reading
through the logs, it looks like your IB (InfiniBand) device is not being
initialized. I am not sure if you have an issue on the client IB or the storage
server IB. Also, have you configured your IB devices correctly? I am using
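A typical RDMA mount takes one of these forms (not necessarily the sender's
exact command; server and volume names assumed):

  mount -t glusterfs -o transport=rdma server1:/testvol /mnt/gluster
  # or, equivalently, via the rdma sub-volume name:
  mount -t glusterfs server1:/testvol.rdma /mnt/gluster
  # sanity-check the IB device first; the port should be PORT_ACTIVE:
  ibv_devinfo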
Hello,
I have been reading the below statement in GlusterFS docs & articles regarding
multi-tenancy. Is this statement related to virtual environments, i.e. VMs? How
valid is "partitioning users or groups into logical volumes"? Can someone
explain what it really means.
Is it that I can associate
>>So in this case, although the volume status shows that the volume is not
>>started, the brick process(es) actually do start. As a workaround, please use
>>volume start force one more time.
Thanks Atin for providing the bug info.
--
Deepak
> On Feb 25, 2017, at 7:16 AM, Atin Mukherjee
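A sketch of the sequence being described, with assumed volume and brick names
(the "force" start is the workaround Atin refers to above):

  gluster volume create testvol transport tcp,rdma \
          node1:/bricks/brick1 node2:/bricks/brick1
  gluster volume start testvol          # may report a failure per the bug
  gluster volume start testvol force    # workaround until the fix lands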
I keep getting this error when my config.transport is set to both tcp,rdma. The
volume doesn't start; I get the below error during volume start.
To get around this, I end up deleting the volume, then configuring either only
rdma or only tcp. Maybe I am missing something; I am just trying to get the
Hello,
I have GlusterFS 3.8.8 and am using IB RDMA. I have noticed that during writes
or reads the throughput doesn't seem consistent for the same workload (fio
command). Sometimes I get higher throughput; sometimes it quickly drops to half
and stays there.
I cannot predict a consistent behavior every
>>I thought about this, but if servers are also clients, then this would not
>>work.
Well, I suppose this is not possible or not required.
In that case, I am not sure why you would need a separate network?
>>Is it really the client which replicates the data and distributes it to the
>>different
I have a setup where storage nodes use network-1 & client nodes use network-2.
On both server and client, I use /etc/hosts entries to define the storage node
names, for example node1, node2, node3, etc.
When a client uses the node1 hostname it resolves to network-2, & when a
storage node uses node1 it resolves to network-1.
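Illustrative /etc/hosts entries for this kind of split (the addresses are made
up):

  # On the storage nodes -- peers resolve over network-1:
  10.1.0.1  node1
  10.1.0.2  node2
  10.1.0.3  node3

  # On the clients -- the same names resolve over network-2:
  10.2.0.1  node1
  10.2.0.2  node2
  10.2.0.3  node3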
Has anyone tuned GlusterFS performance for read and write IO?
--
Deepak
On Feb 20, 2017, at 9:24 AM, Deepak Naidu
<dna...@nvidia.com<mailto:dna...@nvidia.com>> wrote:
Hello,
I tried some performance tuning options like performance.client-io-threads
etc. & my throughput performance increased by more than 50%. Since then I have
been trying to find which performance tuning parameters increase the write
throughput.
The logic goes: If I get MBps using local
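The specific options were not all listed in this thread; a hedged sample of the
kind being referred to (volume name assumed, values illustrative):

  gluster volume set testvol performance.client-io-threads on
  gluster volume set testvol performance.io-thread-count 32
  gluster volume set testvol performance.write-behind-window-size 4MB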
Hello,
Does anyone have a working setup of GlusterFS on Ubuntu 14.04.5 LTS using
InfiniBand & RDMA?
I am planning to use InfiniBand (IPoIB) for the cluster interconnect; how would
RDMA be configured? Any info is appreciated.
--
Deepak
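No recipe was posted in this thread; a rough IPoIB checklist for Ubuntu
(package names per the Ubuntu archives, the address is an example):

  apt-get install infiniband-diags ibverbs-utils
  modprobe ib_ipoib                      # exposes the ib0 interface
  ip addr add 10.10.10.1/24 dev ib0
  ip link set ib0 up
  ibstat                                 # port should show State: Active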
Folks,
Wanted to get some inputs on the type of GlusterFS volume best suited for an
HPC workload that is throughput intensive. Is anyone using GlusterFS in their
env for HPC workloads?
I want to keep a balance of usable capacity & redundancy. I want to try an
erasure-coded (dispersed) volume, not sure if
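For illustration, a dispersed volume with 6 bricks and 2-brick redundancy
stores data on 4 of every 6 bricks, so roughly 67% of raw capacity is usable
while tolerating 2 brick failures (all names here are assumptions):

  gluster volume create hpcvol disperse 6 redundancy 2 \
          node{1..6}:/bricks/brick1
  gluster volume start hpcvol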
…the pause/delay in the kernel NFS also.
--
Deepak
-Original Message-
From: Jiffin Tony Thottan [mailto:jthot...@redhat.com]
Sent: Thursday, August 11, 2016 10:36 PM
To: Deepak Naidu; Vijay Bellur; gluster-users@gluster.org
Subject: Re: [Gluster-users] Linux (ls -l) command pauses/slow
OK, I tried kernel NFS (i.e. no …) & it hangs as well. So as you said, Vijay,
the issue might be with my virtual setup or on the network side.
--
Deepak
-Original Message-
From: Deepak Naidu
Sent: Thursday, August 11, 2016 6:54 PM
To: 'Vijay Bellur'; 'gluster-users@gluster.org'
Subject
…", 0x1990a00, 255) = -1 ENODATA (No data available)
lstat("/mnt/gluster/rand.25.0", {st_mode=S_IFREG|0644, st_size=2147483648, ...}) = 0
lgetxattr("/mnt/gluster/rand.25.0", "security.selinux", 0x1990a20, 255) = -1 ENODATA (No data available)
lstat("/mnt/gluster/r
On Behalf Of Deepak Naidu
Sent: Wednesday, August 10, 2016 2:18 PM
To: Vijay Bellur
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Linux (ls -l) command pauses/slow on GlusterFS
mounts
I did strace & it's waiting on IO.
--
Deepak
-Original Message-
From: Vijay Bellur [mailto:vbel...@redhat.com]
Sent: Wednesday, August 10, 2016 2:17 PM
To: Deepak Naidu
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Linux (ls -l) command pauses/slow on GlusterFS
mo
Aug 10, 2016, at 2:01 PM, Vijay Bellur <vbel...@redhat.com> wrote:
>
>> On 08/10/2016 04:54 PM, Deepak Naidu wrote:
>> Has anyone seen the issue in their env?
Has anyone seen the issue in their env?
--
Deepak
-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Deepak Naidu
Sent: Tuesday, August 09, 2016 9:14 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] Linux (ls
Greetings,
I have a 3-node GlusterFS setup on VMs for a POC; each node has 2x bricks of
200GB. Regardless of what type of volume I create, when listing files under a
directory using the ls command, the GlusterFS mount hangs/pauses for a few
seconds. This is the same whether there are 2-5 files of 19GB each or of 2GB
each. There