To utilize FUSE's readdirplus, we need at least
http://review.gluster.org/3905 and a couple more dependent patches.
They had slipped down the priority list in the short term. If you are
interested in acting as an early tester, I can refresh those patches for you.
Avati
On Tue, Dec 11, 2012 at 7:05 PM
I have compiled and installed the 3.7 kernel with this patch, and I do not
see a difference, at least at a client-only level with glusterfs 3.2.7. I
will attempt to do some client/server testing with some repurposed machines
in the next day or two. I hope to test with 3.2.7 and 3.3.1; the only thing
In response to Whit, this patch hasn't made it into a kernel release.
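A quick way to check whether a given kernel could even carry the patch is a
version comparison against 3.7, the first version the patch applies to cleanly
per this thread. This is only a sketch: it inspects the version string, and
cannot prove the patch is actually present in the running build.

```shell
#!/bin/sh
# Returns success if the given kernel version string is >= 3.7, the first
# release the FUSE READDIRPLUS patch applies to cleanly per this thread.
# This is a version check only -- it cannot confirm the patch is in the build.
has_readdirplus_kernel() {
    min="3.7"
    lowest=$(printf '%s\n%s\n' "$min" "$1" | sort -V | head -n 1)
    [ "$lowest" = "$min" ]
}

if has_readdirplus_kernel "$(uname -r | cut -d- -f1)"; then
    echo "kernel version is >= 3.7: patch could be present"
else
    echo "kernel predates 3.7: patch would need backporting"
fi
```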
As a partial response to myself: For others looking to patch their own
kernel source, it won't apply to anything older than 3.7-rc1 without the
incorporation of a much larger uapi patchset (probably all history here,
and then so
Also, if we were to apply this patch to an existing kernel tree, would
GlusterFS automatically make use of it? Or would we need a special mount
option?
--
Adam
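Whether a mount option is needed is exactly the open question above; for
reference, later GlusterFS releases grew a use-readdirp mount option, so a
client mount enabling it might look like the sketch below. The option name is
taken from those later releases and is not confirmed for the 3.2.x/3.3.x
versions discussed in this thread; the server and volume names are
placeholders.

```shell
# Sketch only: the use-readdirp option name comes from later GlusterFS
# releases and is NOT confirmed for the versions discussed in this thread.
# server1:/testvol and /mnt/testvol are hypothetical.
mount -t glusterfs -o use-readdirp=yes server1:/testvol /mnt/testvol
```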
On Tue, Dec 4, 2012 at 6:48 PM, Whit Blauvelt wrote:
> Avati,
>
> For those of us willing to compile kernels when there's a distinct advantage,
Avati,
For those of us willing to compile kernels when there's a distinct advantage,
has this patch made it into a kernel release? If so, which?
Thanks,
Whit
On Tue, Dec 04, 2012 at 04:35:39PM -0800, Anand Avati wrote:
> Support for READDIRPLUS in FUSE improves directory listing performance
> significantly
> I think performance.cache-refresh-timeout *m
t files older than X, and etc...
Thanks,
Michael
From: Bryan Whitehead [mailto:dri...@megahappy.net]
Sent: Tuesday, December 04, 2012 5:36 PM
To: Kushnir, Michael (NIH/NLM/LHC) [C]
Cc: Andrew Holway; gluster-users@gluster.org
Subject: Re: [Gluster-users] Does brick fs play a large role on listing files
client side?
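The performance.cache-refresh-timeout tunable mentioned above is set per
volume; a sketch follows, with the volume name testvol as a placeholder. The
value is in seconds, and 1 second is the default.

```shell
# Sketch (volume name is hypothetical): raise the cache revalidation window
# from the default 1 second; larger values mean fewer revalidations but
# staler cached data.
gluster volume set testvol performance.cache-refresh-timeout 10
gluster volume info testvol   # confirm the option is now listed
```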
y mounting the volume over
> NFS? Something else?
>
> Thanks,
> Michael
>
>
> -----Original Message-----
> From: Andrew Holway [mailto:a.hol...@syseleven.de]
> Sent: Tuesday, December 04, 2012 4:47 PM
> To: Kushnir, Michael (NIH/NLM/LHC) [C]
> Cc: gluster-users@gluster.org
>
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Does brick fs play a large role on listing files
client side?
On Dec 4, 2012, at 5:30 PM, Kushnir, Michael (NIH/NLM/LHC) [C] wrote:
> My GlusterFS deployment right now is 8 x 512GB OCZ Vertex 4 (no RAID)
> connected to Dell PERC H710, f
On Dec 4, 2012, at 5:30 PM, Kushnir, Michael (NIH/NLM/LHC) [C] wrote:
> My GlusterFS deployment right now is 8 x 512GB OCZ Vertex 4 (no RAID)
> connected to Dell PERC H710, formatted as XFS and put together into a
> distributed volume.
Hi,
Are you just using a single brick? Gluster is a scale
Hello everyone,
We are in the process of evaluating GlusterFS to replace an ocfs2 deployment as
backing storage for a web application server cluster.
My ocfs2 deployment was 8 x 512GB Crucial M4 SSDs in RAID0 connected to an LSI
9260-8i with DRBD protocol C pushed out as an iSCSI LUN via IET.
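For context, a layout like the one under evaluation (8 SSD bricks, XFS, one
distributed volume) would be assembled roughly as sketched below. The hostname
server1, the device and brick paths, and the volume name webvol are all
assumptions; -i size=512 reflects the XFS inode size commonly recommended for
GlusterFS bricks at the time.

```shell
# Sketch (hostname, paths, and volume name are hypothetical): format each SSD
# as XFS and build one distributed volume from the 8 bricks.
mkfs.xfs -i size=512 /dev/sdb1          # repeat for each SSD
mkdir -p /bricks/ssd1                   # one mount point per brick
mount /dev/sdb1 /bricks/ssd1

gluster volume create webvol \
  server1:/bricks/ssd1 server1:/bricks/ssd2 \
  server1:/bricks/ssd3 server1:/bricks/ssd4 \
  server1:/bricks/ssd5 server1:/bricks/ssd6 \
  server1:/bricks/ssd7 server1:/bricks/ssd8
gluster volume start webvol
```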