On 08/09/2016 09:06 PM, Mahdi Adnan wrote:
Hi,
Thank you for your reply.
The traffic is related to GlusterFS;
18:31:20.419056 IP 192.168.208.134.49058 > 192.168.208.134.49153: Flags
[.], ack 3876, win 24576, options [nop,nop,TS val 247718812 ecr
247718772], length 0
18:31:20.419080 IP
Greetings,
I have a 3-node GlusterFS setup on VMs for a POC; each node has 2 bricks of 200GB.
Regardless of what type of volume I create, when listing files under a directory
with the ls command, the GlusterFS mount pauses for a few seconds. This is the same
whether there are 2-5 files of 19GB each or of 2GB each. There
Thanks to everyone who responded. I pulled my head out of a very dark place
and realized, after backtracking, that ganesha wasn't properly
enabled. In fact, it fails. So, sorry for wasting your time, but the
information and help will undoubtedly come in handy.
Thanks again!
Corey (going to
However, logs from nfs-ganesha and ganesha-gfapi would be the most helpful
at this time.
On Tue, Aug 9, 2016 at 3:38 PM, Ben Werthmann wrote:
> I've found this useful for ensuring that Ganesha is building what you've
> asked it to build. If you want Ceph or ZFS, you need to
I've found this useful for ensuring that Ganesha is building what you've
asked it to build. If you want Ceph or ZFS, you need to install the
required libraries.
cmake $ganesha_src_dir -DCMAKE_BUILD_TYPE=Maintainer -DSTRICT_PACKAGE=ON
-DUSE_FSAL_CEPH=NO -DUSE_FSAL_ZFS=NO
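If the Gluster FSAL was built and installed, the shared object should be
present on disk. A quick way to confirm, assuming the usual CentOS/EPEL
plugin path (it may differ on other distros):

ls /usr/lib64/ganesha/libfsalgluster.so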
On Tue, Aug 9, 2016 at
Your replica 2 result is pretty damn good IMHO; you would always expect
at the very most half the write speed of a local write to brick
storage, and your 613 MB/sec against 1400 MB/sec direct to XFS is about 44%,
close to that ceiling. Not sure why a 1-brick volume doesn't approach your
native speed though - it could be that FUSE overhead caps you at <1 GB/sec in your setup.
AFAIK there is
Thanks Ben,
It's a package built by me (https://download.gluster.org/pub/gluster/nfs-ganesha/2.3.0/CentOS/epel-7.1/SRPMS/);
the FSAL appears to have been built (the library is in place). I'll take a look
at that bug, refactor my config, and see how it goes. Thanks for your help!
On Tue, Aug 9,
On 09 Aug 2016 19:57, "Ashish Pandey" wrote:
> Yes, redundant data spread across multiple servers. In my example I
mentioned 6 different nodes each with one brick.
> The point is that with 4+2 you can lose any 2 bricks. It could be because of
node failure or brick failure.
>
Which nfs-ganesha package are you using? I recall someone on my team saying
that there's an nfs-ganesha package floating around which did not have the
Gluster FSAL built.
Gluster's nfs-ganesha packages are located here:
https://download.gluster.org/pub/gluster/nfs-ganesha/
What does your
Please post the ganesha log file and the output of "showmount -e".
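On a working setup, showmount should list the exported volume. Something
along these lines (the volume name here is illustrative):

showmount -e localhost
Export list for localhost:
/testvol (everyone)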
--
Respectfully
Mahdi A. Mahdi
From: corey.kov...@gmail.com
Date: Tue, 9 Aug 2016 12:10:29 -0600
Subject: Re: [Gluster-users] Nfs-ganesha...
To: mahdi.ad...@outlook.com
Mahdi, Thanks for the quick response. EXPORT
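For reference, a minimal EXPORT block for the Gluster FSAL in ganesha.conf
generally looks like the sketch below; the export id, paths, and volume name
are placeholders, not values from this thread:

EXPORT {
    Export_Id = 1;               # placeholder id
    Path = "/testvol";           # exported path (placeholder)
    Pseudo = "/testvol";         # NFSv4 pseudo path (placeholder)
    Access_Type = RW;
    FSAL {
        Name = GLUSTER;
        Hostname = "localhost";  # a gluster server in the trusted pool
        Volume = "testvol";      # gluster volume name (placeholder)
    }
}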
What about EC? Is the redundant data spread across multiple servers? If not,
multiple replicas would be placed on the same server. I can lose 2 bricks (2
disks), but what if I lose the whole server with both bricks on it? When
a server fails, multiple bricks are affected.
-
Hi,
Please post ganesha configuration file.
--
Respectfully
Mahdi A. Mahdi
From: corey.kov...@gmail.com
Date: Tue, 9 Aug 2016 11:24:58 -0600
To: gluster-users@gluster.org
Subject: [Gluster-users] Nfs-ganesha...
If not an appropriate place to ask, my apologies.
I have been trying
On 09 Aug 2016 19:20, "Ashish Pandey" wrote:
> 3 - EC with redundancy 2 that is 4+2
> The overall storage space you get is 4TB and any 2 bricks can be down at
any point of time. So it is as good as replica 3 but provides more space.
Not really.
With replica 3 I can
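For reference, a 4+2 disperse volume across 6 nodes, as in Ashish's example,
would be created along these lines (node names and brick paths are
placeholders):

gluster volume create ecvol disperse 6 redundancy 2 \
    node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1 \
    node4:/bricks/b1 node5:/bricks/b1 node6:/bricks/b1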
Hi,
Same problem on 3.8.1, even on the loopback interface (the traffic does not
leave the gluster node):
Writing locally to replica 2 volume (each brick is separate local RAID6): 613
MB/sec
Writing locally to 1-brick volume: 877 MB/sec
Writing locally to the brick itself (directly to XFS): 1400 MB/sec
Tests
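For context, sequential-write figures like these are commonly gathered with
something like the following; this is illustrative only, not necessarily the
exact test that was run:

dd if=/dev/zero of=/mnt/testvol/ddfile bs=1M count=4096 oflag=direct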
If not an appropriate place to ask, my apologies.
I have been trying to get Nfs-ganesha 2.3 to work with gluster 3.8. It
never seems to load the fsal. Are there known issues with this combination?
Corey
Yes Gandalf, I think you are missing a point: the way we configure EC.
To explain that, I would like to use a smaller number of disks. Let's say you
have 6 disks of 1TB each on 6 different nodes.
1 - Replica 2 using gluster
There will be 3 replica subvolumes - afr-1, afr-2, afr-3 - each with a pair
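For reference, that layout would be created roughly as follows, yielding a
3 x 2 distributed-replicate volume with 3TB usable out of 6TB raw (node names
and brick paths are placeholders):

gluster volume create repvol replica 2 \
    node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1 \
    node4:/bricks/b1 node5:/bricks/b1 node6:/bricks/b1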
Okay, so after migrating a few VMs to the new cluster, the native NFS did NOT
crash again; it's been running for two days straight. My workload does not
involve high throughput but high IOPS; it averages around 100 IOPS for each
brick. I will try to recreate this workload on a VM and see if it crashes
Hi,
I just finished reading the documentation about arbiter
(https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/)
and would like to convert my existing replica 2 volumes to replica 3 volumes.
How do I proceed? Unfortunately, I did not find any
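For what it's worth, the arbiter guide describes converting an existing
replica 2 volume by adding one arbiter brick per replica pair via add-brick,
roughly like this (host and brick path are placeholders; check that your
gluster version supports adding an arbiter to an existing volume):

gluster volume add-brick myvol replica 3 arbiter 1 arbiter-host:/bricks/arb1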
On 09 Aug 2016 10:06 AM, "Ashish Pandey" wrote:
> If your main concern is data redundancy, I would suggest you to go for
erasure coded volume provided by gluster.
Anyway, EC volumes have a lower redundancy level than standard replicated
volumes.
Let's assume 9 nodes
On 09 Aug 2016 10:06 AM, "Ashish Pandey" wrote:
> If your main concern is data redundancy, I would suggest you to go for
erasure coded volume provided by gluster.
> Erasure coded (EC) volume or disperse volume can provide you redundancy
without wasting too much storage.
On Mon, Aug 8, 2016 at 5:24 PM, Joe Julian wrote:
>
>
> On 08/08/2016 02:56 PM, David Gossage wrote:
>
>> On Mon, Aug 8, 2016 at 4:37 PM, David Gossage wrote:
>
>> On Mon, Aug 8, 2016 at 4:23 PM, Joe Julian wrote:
>>
On 08/09/2016 03:33 PM, Mahdi Adnan wrote:
Hi,
I'm using NFS-Ganesha to access my volume. It's working fine for now, but
I'm seeing lots of traffic on the loopback interface - in fact, it's the
same amount of traffic as on the bonding interface. Can anyone please
explain to me why this is happening?
Hi Sergei,
When quota is enabled, quota-deem-statfs should be set to ON (it is by default
in recent versions). But apparently,
from your 'gluster v info' output, it looks like quota-deem-statfs is not on.
Could you please check and confirm the same in
/var/lib/glusterd/vols//info.
If you do not find
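If it turns out to be off, it can be enabled with a volume set (the volume
name here is a placeholder):

gluster volume set testvol quota-deem-statfs on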
Hi,
The gluster version is 3.7.12. Here’s the output of `gluster info`:
Volume Name: ftp_volume
Type: Distributed-Replicate
Volume ID: SOME_VOLUME_ID
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: host03:/data/ftp_gluster_brick
Brick2:
Hi,
Sorry, I missed the mail. May I know which version of gluster you are using?
Please also paste the output of
gluster v info.
On Sat, Aug 6, 2016 at 8:19 AM, Sergei Gerasenko wrote:
> Hi,
>
> I'm playing with quotas and the quota list command on one of the
> directories
Hi all,
The weekly Gluster bug triage is about to take place in an hour
Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
(in your terminal, run: date -d "12:00 UTC")
Hi all,
Here are the minutes of Last week's Gluster Community Bug Triage Meeting.
Sorry for the delay.
Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-08-02/gluster_community_bug_triage_meeting.2016-08-02-12.03.html
Minutes (text):
Does increasing any of the values below help EC heal speed? (A sample set command follows the list.)
performance.io-thread-count 16
performance.high-prio-threads 16
performance.normal-prio-threads 16
performance.low-prio-threads 16
performance.least-prio-threads 1
client.event-threads 8
server.event-threads 8
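Each of these is tuned per volume with volume set, for example (the volume
name here is a placeholder):

gluster volume set ecvol client.event-threads 8
gluster volume set ecvol server.event-threads 8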
On Mon, Aug 8, 2016 at 2:48
Hi,
I'm using NFS-Ganesha to access my volume. It's working fine for now, but I'm
seeing lots of traffic on the loopback interface - in fact, it's the same
amount of traffic as on the bonding interface. Can anyone please explain to me
why this is happening? Also, I got the following error in the
On Tue, Aug 9, 2016 at 2:18 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 9 August 2016 at 12:23, David Gossage
> wrote:
> > Since my dev is now on 3.8 and has granular enabled I'm feeling too lazy
> to
> > roll back so will just wait till 3.8.2 is
On 9 August 2016 at 12:23, David Gossage wrote:
> Since my dev is now on 3.8 and has granular enabled I'm feeling too lazy to
> roll back so will just wait till 3.8.2 is released in few days that fixes
> the bugs mentioned to me and then test this few times on my dev.
On Tue, Aug 9, 2016 at 10:58 AM, Saravanakumar Arumugam wrote:
>
> On 08/08/2016 08:59 PM, Atin Mukherjee wrote:
>
>
>
> On Mon, Aug 8, 2016 at 3:18 PM, Niels de Vos wrote:
>
>> On Mon, Aug 08, 2016 at 02:37:43PM +0530, Saravanakumar Arumugam wrote:
>> >