Re: [Gluster-users] NFS-Ganesha lo traffic

2016-08-09 Thread Soumya Koduri
On 08/09/2016 09:06 PM, Mahdi Adnan wrote: Hi, Thank you for your reply. The traffic is related to GlusterFS; 18:31:20.419056 IP 192.168.208.134.49058 > 192.168.208.134.49153: Flags [.], ack 3876, win 24576, options [nop,nop,TS val 247718812 ecr 247718772], length 0 18:31:20.419080 IP

[Gluster-users] Linux (ls -l) command pauses/slow on GlusterFS mounts

2016-08-09 Thread Deepak Naidu
Greetings, I have a 3-node GlusterFS setup on VMs for a POC; each node has 2x bricks of 200GB. Regardless of what type of volume I create, when listing files under a directory using the ls command, the GlusterFS mount hangs/pauses for a few seconds. This is the same whether there are 2-5 files of 19gb each or 2gb each. There

Re: [Gluster-users] Nfs-ganesha...

2016-08-09 Thread Corey Kovacs
Thanks to everyone who responded. I pulled my head out of a very dark place and realized, after backtracking, that ganesha wasn't properly enabled. In fact it fails. So, sorry for wasting your time, but the information and help will undoubtedly come in handy. Thanks again! Corey (going to

Re: [Gluster-users] Nfs-ganesha...

2016-08-09 Thread Ben Werthmann
However, logs from nfs-ganesha and ganesha-gfapi would be the most helpful at this time. On Tue, Aug 9, 2016 at 3:38 PM, Ben Werthmann wrote: > I've found this useful for ensuring that Ganesha is building what you've > asked it to build. If you want Ceph or ZFS, you need to

Re: [Gluster-users] Nfs-ganesha...

2016-08-09 Thread Ben Werthmann
I've found this useful for ensuring that Ganesha is building what you've asked it to build. If you want Ceph or ZFS, you need to install the required libraries. cmake $ganesha_src_dir -DCMAKE_BUILD_TYPE=Maintainer -DSTRICT_PACKAGE=ON -DUSE_FSAL_CEPH=NO -DUSE_FSAL_ZFS=NO On Tue, Aug 9, 2016 at
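The cmake invocation above can be written out so that a missing Gluster FSAL fails the configure step instead of being skipped silently. A minimal sketch, assuming nfs-ganesha's USE_FSAL_GLUSTER build switch and using $ganesha_src_dir as a placeholder for the source checkout:

```shell
# Configure an nfs-ganesha source tree; STRICT_PACKAGE makes the build fail
# loudly if a requested FSAL's dependencies are missing rather than skipping it.
# $ganesha_src_dir is a placeholder for your nfs-ganesha source directory.
cmake "$ganesha_src_dir" \
    -DCMAKE_BUILD_TYPE=Maintainer \
    -DSTRICT_PACKAGE=ON \
    -DUSE_FSAL_GLUSTER=ON \
    -DUSE_FSAL_CEPH=NO \
    -DUSE_FSAL_ZFS=NO
```

After configuring, check cmake's summary output to confirm FSAL_GLUSTER is listed as enabled before building.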

Re: [Gluster-users] Gluster not saturating 10gb network

2016-08-09 Thread Alex Crow
Your replica 2 result is pretty damn good IMHO; you would always expect at the very most 1/2 the write speed of a local write to brick storage. Not sure why a 1-brick volume doesn't approach your native speed though - it could be that FUSE overhead caps you at <1GB/s in your setup. AFAIK there is
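The "at most 1/2" ceiling follows from simple arithmetic: a replica-N client sends every write to all N bricks, so the best case is the raw single-copy speed divided by N. A sketch with illustrative numbers, not measurements:

```shell
# Back-of-envelope: a replica-N client writes each byte N times, so the best
# case is raw speed divided by N. The numbers are illustrative only.
raw_mb_s=1400          # direct write to one brick's filesystem
replica=2
best_case=$(( raw_mb_s / replica ))
echo "replica-${replica} ceiling: ${best_case} MB/s"   # replica-2 ceiling: 700 MB/s
```

Observed throughput lands below this ceiling once network round-trips and FUSE context switches are added, which matches results like 613 MB/s against a 700 MB/s ceiling.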

Re: [Gluster-users] Nfs-ganesha...

2016-08-09 Thread Corey Kovacs
Thanks Ben, It's a package built by me, (https://download.gluster.org/pub/gluster/nfs-ganesha/2.3.0/CentOS/epel-7.1/SRPMS/) the fsal appears to have been built (the library is in place). I'll take a look at that bug, refactor my config and see how it goes. Thanks for your help! On Tue, Aug 9,

Re: [Gluster-users] Need help to design a data storage

2016-08-09 Thread Gandalf Corvotempesta
On 9 Aug 2016 at 19:57, "Ashish Pandey" wrote: > Yes, redundant data spread across multiple servers. In my example I mentioned 6 different nodes, each with one brick. > Point is that for 4+2 you can lose any 2 bricks. It could be because of node failure or brick failure. >

Re: [Gluster-users] Nfs-ganesha...

2016-08-09 Thread Ben Werthmann
Which nfs-ganesha package are you using? I recall someone on my team saying that there's an nfs-ganesha package floating around which did not have the Gluster FSAL built. Gluster's nfs-ganesha packages are located here: https://download.gluster.org/pub/gluster/nfs-ganesha/ What does your

Re: [Gluster-users] Nfs-ganesha...

2016-08-09 Thread Mahdi Adnan
Please, post the ganesha log file and the output of "showmount -e" -- Respectfully Mahdi A. Mahdi From: corey.kov...@gmail.com Date: Tue, 9 Aug 2016 12:10:29 -0600 Subject: Re: [Gluster-users] Nfs-ganesha... To: mahdi.ad...@outlook.com Mahdi, Thanks for the quick response. EXPORT

Re: [Gluster-users] Need help to design a data storage

2016-08-09 Thread Ashish Pandey
What about EC? Is redundant data spread across multiple servers? If not, multiple replicas would be placed on the same server. I can lose 2 bricks (2 disks), but what if I lose the whole server with both bricks on it? And when a server fails, multiple bricks are affected. -

Re: [Gluster-users] Nfs-ganesha...

2016-08-09 Thread Mahdi Adnan
Hi, Please post ganesha configuration file. -- Respectfully Mahdi A. Mahdi From: corey.kov...@gmail.com Date: Tue, 9 Aug 2016 11:24:58 -0600 To: gluster-users@gluster.org Subject: [Gluster-users] Nfs-ganesha... If not an appropriate place to ask, my apologies. I have been trying

Re: [Gluster-users] Need help to design a data storage

2016-08-09 Thread Gandalf Corvotempesta
On 9 Aug 2016 at 19:20, "Ashish Pandey" wrote: > 3 - EC with redundancy 2, that is 4+2 > The overall storage space you get is 4TB and any 2 bricks can be down at any point of time. So it is as good as replica 3 but provides more space. Not really. With replica 3 I can
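The capacity claim in this exchange can be checked with shell arithmetic: a k+m disperse (EC) volume stores k data fragments per stripe, so usable space is k/(k+m) of raw capacity, while replica 3 keeps 1/3. A sketch for the 6 x 1TB example used in the thread:

```shell
# Usable capacity of a k+m disperse (EC) volume vs replica 3, for 6 x 1TB bricks.
bricks=6; brick_tb=1
k=4; m=2                                   # 4+2: any 2 bricks may be down
ec_usable=$(( bricks * brick_tb * k / (k + m) ))
replica3_usable=$(( bricks * brick_tb / 3 ))
echo "EC ${k}+${m} usable: ${ec_usable} TB"        # 4 TB
echo "replica 3 usable: ${replica3_usable} TB"     # 2 TB
```

Both layouts survive two failed bricks, which is why EC is pitched as "as good as replica 3 but with more space"; the thread's disagreement is about which *combinations* of failures (e.g. a whole server holding several bricks) each layout survives.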

Re: [Gluster-users] Gluster not saturating 10gb network

2016-08-09 Thread Дмитрий Глушенок
Hi, Same problem on 3.8.1. Even on the loopback interface (traffic does not leave the gluster node): Writing locally to a replica 2 volume (each brick is a separate local RAID6): 613 MB/sec. Writing locally to a 1-brick volume: 877 MB/sec. Writing locally to the brick itself (directly to XFS): 1400 MB/sec. Tests

[Gluster-users] Nfs-ganesha...

2016-08-09 Thread Corey Kovacs
If not an appropriate place to ask, my apologies. I have been trying to get Nfs-ganesha 2.3 to work with gluster 3.8. It never seems to load the fsal. Are there known issues with this combination? Corey

Re: [Gluster-users] Need help to design a data storage

2016-08-09 Thread Ashish Pandey
Yes Gandalf, I think you are missing a point: the way we configure EC. To explain that, I would like to take a smaller number of disks. Let's say you have 6 disks of 1TB each on 6 different nodes. 1- Replica 2 using gluster: there will be 3 subvolumes of replica - afr-1, afr-2, afr-3, each with a pair

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-09 Thread Mahdi Adnan
Okay, so after migrating a few VMs to the new cluster, the native NFS did NOT crash again; it has been running for two days straight. My workload does not involve high throughput, but high IOPS, averaging around 100 IOPS for each brick. I will try to recreate this workload on a VM and see if it crashes

[Gluster-users] Convert replica 2 to replica 3 (arbiter) volume

2016-08-09 Thread ML mail
Hi, I just finished reading the documentation about arbiter (https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/) and would like to convert my existing replica 2 volumes to replica 3 volumes. How do I proceed? Unfortunately, I did not find any
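Conversions like this are usually done by adding one arbiter brick per replica pair with add-brick. A minimal sketch assuming 3.8-era syntax; the volume name, host, and brick path are placeholders, so verify against the arbiter documentation for your exact version:

```shell
# Convert a replica 2 volume to replica 3 arbiter 1 by adding an arbiter brick.
# "myvol" and arbiter-host:/bricks/myvol/arb are placeholders.
gluster volume add-brick myvol replica 3 arbiter 1 \
    arbiter-host:/bricks/myvol/arb

# Let self-heal populate the new arbiter brick with metadata:
gluster volume heal myvol full
```

For a distributed-replicate volume you must supply one arbiter brick per replica subvolume, listed in order, in the same add-brick command.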

Re: [Gluster-users] Need help to design a data storage

2016-08-09 Thread Gandalf Corvotempesta
On 9 Aug 2016 at 10:06 AM, "Ashish Pandey" wrote: > If your main concern is data redundancy, I would suggest you go for the erasure coded volume provided by gluster. Anyway, EC volumes have a lower redundancy level than standard replicated volumes. Let's assume a 9-node

Re: [Gluster-users] Need help to design a data storage

2016-08-09 Thread Gandalf Corvotempesta
On 9 Aug 2016 at 10:06 AM, "Ashish Pandey" wrote: > If your main concern is data redundancy, I would suggest you go for the erasure coded volume provided by gluster. > An erasure coded (EC) volume or disperse volume can provide you redundancy without wasting too much storage.

Re: [Gluster-users] Change underlying brick on node

2016-08-09 Thread David Gossage
On Mon, Aug 8, 2016 at 5:24 PM, Joe Julian wrote: > > > On 08/08/2016 02:56 PM, David Gossage wrote: > > On Mon, Aug 8, 2016 at 4:37 PM, David Gossage > wrote: > >> On Mon, Aug 8, 2016 at 4:23 PM, Joe Julian wrote: >>

Re: [Gluster-users] NFS-Ganesha lo traffic

2016-08-09 Thread Soumya Koduri
On 08/09/2016 03:33 PM, Mahdi Adnan wrote: Hi, I'm using NFS-Ganesha to access my volume. It's working fine for now, but I'm seeing lots of traffic on the loopback interface; in fact it's the same amount of traffic as on the bonding interface. Can anyone please explain to me why this is happening?

Re: [Gluster-users] incorrect usage value on a directory

2016-08-09 Thread Manikandan Selvaganesh
Hi Sergei, When quota is enabled, quota-deem-statfs should be set to ON (by default with the recent versions). But apparently from your 'gluster v info' output, it looks like quota-deem-statfs is not on. Could you please check and confirm the same in /var/lib/glusterd/vols//info. If you do not find
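A minimal sketch of checking and enabling the option from the CLI; "myvol" is a placeholder, and the option name and command availability should be confirmed for your gluster version:

```shell
# Enable quota-deem-statfs so df on the mount reflects quota limits
# rather than the raw brick filesystem sizes.
gluster volume set myvol quota-deem-statfs on

# Confirm the option took effect:
gluster volume get myvol quota-deem-statfs
```

If the option does not appear in /var/lib/glusterd/vols/<volname>/info afterwards, re-check that quota itself is enabled on the volume.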

Re: [Gluster-users] incorrect usage value on a directory

2016-08-09 Thread Sergei Gerasenko
Hi, The gluster version is 3.7.12. Here's the output of `gluster info`: Volume Name: ftp_volume Type: Distributed-Replicate Volume ID: SOME_VOLUME_ID Status: Started Number of Bricks: 3 x 2 = 6 Transport-type: tcp Bricks: Brick1: host03:/data/ftp_gluster_brick Brick2:

Re: [Gluster-users] incorrect usage value on a directory

2016-08-09 Thread Manikandan Selvaganesh
Hi, Sorry, I missed the mail. May I know which version of gluster you are using and please paste the output of gluster v info? On Sat, Aug 6, 2016 at 8:19 AM, Sergei Gerasenko wrote: > Hi, > > I'm playing with quotas and the quota list command on one of the > directories

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting (Today)

2016-08-09 Thread Muthu Vigneshwaran
Hi all, The weekly Gluster bug triage is about to take place in an hour Meeting details: - location: #gluster-meeting on Freenode IRC ( https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC (in your terminal, run: date -d "12:00
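The truncated date command above expands to something like the following, which converts the 12:00 UTC meeting time to your local timezone (GNU date assumed):

```shell
# Show what 12:00 UTC is in your local timezone:
date -d "12:00 UTC"

# Or print just the local hour and minute:
date -d "12:00 UTC" +%H:%M
```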

[Gluster-users] Meeting Minutes of last week's Gluster Community Bug Triage Meeting

2016-08-09 Thread Muthu Vigneshwaran
Hi all, Here are the minutes of Last week's Gluster Community Bug Triage Meeting. Sorry for the delay. Minutes: https://meetbot.fedoraproject.org/gluster-meeting/2016-08-02/gluster_community_bug_triage_meeting.2016-08-02-12.03.html Minutes (text):

Re: [Gluster-users] ec heal questions

2016-08-09 Thread Serkan Çoban
Does increasing any of the below values help EC heal speed? performance.io-thread-count 16 performance.high-prio-threads 16 performance.normal-prio-threads 16 performance.low-prio-threads 16 performance.least-prio-threads 1 client.event-threads 8 server.event-threads 8 On Mon, Aug 8, 2016 at 2:48
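Options like those are tuned per volume with `gluster volume set`. A sketch using the option names quoted in the message; "myvol" and the values are illustrative examples, not recommendations:

```shell
# Raise thread counts that commonly gate heal throughput on an EC volume.
# "myvol" is a placeholder; adjust values to your hardware and workload.
gluster volume set myvol client.event-threads 8
gluster volume set myvol server.event-threads 8
gluster volume set myvol performance.io-thread-count 32
```

Whether these help EC heal specifically depends on where the bottleneck is; measuring heal progress before and after each change is the only reliable way to tell.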

[Gluster-users] NFS-Ganesha lo traffic

2016-08-09 Thread Mahdi Adnan
Hi, I'm using NFS-Ganesha to access my volume. It's working fine for now, but I'm seeing lots of traffic on the loopback interface; in fact it's the same amount of traffic as on the bonding interface. Can anyone please explain to me why this is happening? Also, I got the following error in the
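One way to see what the loopback traffic is: Ganesha talks to bricks through its embedded gfapi client, so when a brick runs on the same node, that client-to-brick traffic is routed over lo. A hedged sketch, assuming the usual defaults of 24007 for glusterd and 49152+ for brick processes; adjust the ports to what `gluster volume status` reports:

```shell
# Capture loopback traffic on typical GlusterFS ports to confirm it is
# gfapi-to-brick traffic (24007 = glusterd, 49152+ = brick processes).
tcpdump -i lo -nn 'tcp and (port 24007 or portrange 49152-49251)'
```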

Re: [Gluster-users] Change underlying brick on node

2016-08-09 Thread David Gossage
On Tue, Aug 9, 2016 at 2:18 AM, Lindsay Mathieson < lindsay.mathie...@gmail.com> wrote: > On 9 August 2016 at 12:23, David Gossage > wrote: > > Since my dev is now on 3.8 and has granular enabled I'm feeling too lazy > to > > roll back so will just wait till 3.8.2 is

Re: [Gluster-users] Change underlying brick on node

2016-08-09 Thread Lindsay Mathieson
On 9 August 2016 at 12:23, David Gossage wrote: > Since my dev is now on 3.8 and has granular enabled I'm feeling too lazy to > roll back so will just wait till 3.8.2 is released in few days that fixes > the bugs mentioned to me and then test this few times on my dev.

Re: [Gluster-users] What is op-version?

2016-08-09 Thread Atin Mukherjee
On Tue, Aug 9, 2016 at 10:58 AM, Saravanakumar Arumugam wrote: > > On 08/08/2016 08:59 PM, Atin Mukherjee wrote: > > > > On Mon, Aug 8, 2016 at 3:18 PM, Niels de Vos wrote: > >> On Mon, Aug 08, 2016 at 02:37:43PM +0530, Saravanakumar Arumugam wrote: >> >