On 2014-10-03 16:23, Niels de Vos wrote:
On Fri, Oct 03, 2014 at 03:26:04PM +0200, Peter Haraldson wrote:
Hi all!
Hi Peter!
I'm rather new to glusterfs, trying it out for redundant storage for my very
small company.
I have a minimal setup of glusterfs, 2 servers (storage1, storage2) with
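For readers following along, a minimal two-server replicated volume of the kind Peter describes is typically set up roughly like this (a sketch only; the volume name gv0 and the brick paths are assumptions, not taken from his mail):
# on storage1, with storage2 reachable by hostname
gluster peer probe storage2
# two-way replicated volume, one brick per server (brick paths are hypothetical)
gluster volume create gv0 replica 2 storage1:/data/brick1/gv0 storage2:/data/brick1/gv0
gluster volume start gv0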
Hi Tom,
Which version of Gluster are you running? I talked with my operations team, and
they don't recall seeing a log entry for afr_dir_exclusive_crawl, but AFR
suggests it relates to self-heal.
I therefore suspect you're using Gluster in a very similar way to how we do,
which means a lot of file entries in
Hello,
The http://gluster.org/community/documentation/index.php/Using_the_Gluster_Test_Framework
page has been updated with Ubuntu 14.04 steps, so you can now run the test
suite on Ubuntu.
I ran the test suite and the results are below.
Gluster version: v3.4.5
OS: Ubuntu 14.04 LTS
Test Summary Report
Test case tests/bugs/bug-887145.t fails due to a permission issue.
Here is a snippet:
root@fractal-0025:/home/kiran/glusterfs# touch /mnt/glusterfs/0/dir/file
touch: cannot touch '/mnt/glusterfs/0/dir/file': Permission denied
root@fractal-0025:/home/kiran/glusterfs#
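For reference, a single test from the framework can be re-run on its own while debugging a failure like this, and bug-887145.t appears to exercise the server.root-squash option, which would explain root getting "Permission denied"; a sketch, with VOLNAME as a placeholder:
# re-run just the failing test with verbose output from the glusterfs source tree
prove -vf tests/bugs/bug-887145.t
# if root-squash turns out to be the cause, the option can be toggled on a test volume
gluster volume set VOLNAME server.root-squash off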
When I installed the 3.5.3beta on my HPC cluster, I got the following
warnings during the mounts:
WARNING: getfattr not found, certain checks will be skipped..
I do not have attr installed on my compute nodes. Is this something
that I need in order for gluster to work properly or can this
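For what it's worth, the warning only means the mount script could not find the getfattr binary, which Gluster's scripts use to inspect extended attributes; a rough sketch of adding it on RHEL/CentOS-style nodes (package name from memory, please verify for your distribution):
# install the attr userland tools that provide getfattr/setfattr
yum install -y attr
# confirm the tool is present
getfattr --version
# example of the kind of xattr inspection Gluster relies on (brick path is hypothetical)
getfattr -d -m . -e hex /data/brick1/gv0/somefile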
Hi all,
I have two Red Hat EL6 servers and I installed glusterfs:
[~]# rpm -qa | grep gluster |sort
glusterfs-3.5.1-1.el6.x86_64
glusterfs-api-3.5.1-1.el6.x86_64
glusterfs-cli-3.5.1-1.el6.x86_64
glusterfs-fuse-3.5.1-1.el6.x86_64
glusterfs-libs-3.5.1-1.el6.x86_64
glusterfs-server-3.5.1-1.el6.x86_64
I create
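The message is truncated here; given the glusterfs-fuse package in the list above, the usual client-side step once a volume exists is a native FUSE mount, roughly like this (server and volume names are assumptions):
# native FUSE mount of an assumed volume gv0 from an assumed server1
mount -t glusterfs server1:/gv0 /mnt/gv0
# or persistently via /etc/fstab:
# server1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev  0 0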
On Mon, Oct 06, 2014 at 02:30:11PM +, David F. Robinson wrote:
When I installed the 3.5.3beta on my HPC cluster, I got the following
warnings during the mounts:
WARNING: getfattr not found, certain checks will be skipped..
I do not have attr installed on my compute nodes. Is this
You are correct... Typo on my part. It happened when I installed
3.6.0-beta3.
I'll file the bug report so that fuse installation is dependent on attr
being installed... Thanks...
David
-- Original Message --
From: Niels de Vos nde...@redhat.com
To: David F. Robinson
Hello,
My glusterfs-3.4.2-1.el6 installation is having a performance issue. It was working fine
until the 100TB file system hit ~90% full. I was seeing around 90Mb/s for the
last 10 months. This then dropped to 40Mb/s. Since nothing changed on the
system, I focused on the transition to the 90% full file
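One way to get hard numbers on where the time goes (not something the poster mentions trying) is Gluster's built-in profiler; a sketch, with the volume name assumed:
gluster volume profile VOLNAME start
# run the workload that now only reaches ~40Mb/s, then inspect per-brick stats:
gluster volume profile VOLNAME info
gluster volume profile VOLNAME stop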
Yup, pretty common for us. Once we hit ~90% on either of our two
production clusters (107 TB usable each), performance takes a beating.
I don't consider this a problem, per se. Most file systems (clustered
or otherwise) are the same. I consider a high water mark for any
production file system
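For anyone who wants to enforce such a high water mark rather than just monitor it, there is a per-brick reserve option; a sketch (volume name assumed; note this only steers where DHT places new files, it does not recover lost performance):
# refuse to place new files on bricks with less than 20% free space
gluster volume set VOLNAME cluster.min-free-disk 20%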
Same here, we try to keep them under 80% too.
2014-10-06 19:40 GMT-03:00 Dan Mons dm...@cuttingedge.com.au:
Yup, pretty common for us. Once we hit ~90% on either of our two
production clusters (107 TB usable each), performance takes a beating.
I don't consider this a problem, per se. Most
On 7 October 2014 08:56, Jeff Darcy jda...@redhat.com wrote:
I can't think of a good reason for such a steep drop-off in GlusterFS.
Sure, performance should degrade somewhat due to fragmenting, but not
suddenly. It's not like Lustre, which would do massive preallocation
and fall apart when
Not an issue for us, we're at 92% on an 800TB distributed volume, 16
bricks spread across 4 servers. Lookups can be a bit slow but raw IO
hasn't changed.
On Tue, 2014-10-07 at 09:16 +1000, Dan Mons wrote:
On 7 October 2014 08:56, Jeff Darcy jda...@redhat.com wrote:
I can't think of a good
We have 6 nodes with one brick per node (2x3 replicate-distribute).
35TB per brick, for 107TB total usable.
Not sure if our low brick count (or maybe the large brick size per node?)
contributes to the slowdown when full.
We're looking to add more nodes by the end of the year. After that,
I'll look this
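A quick way to see whether a small number of large bricks is filling unevenly is the detailed volume status; a sketch with an assumed volume name:
gluster volume status VOLNAME detail
# lists, per brick, free and total disk space plus inode counts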
Our bricks are 50TB each, running ZoL (ZFS on Linux), 16 disks in raidz2. Works OK with Gluster
now that they fixed xattrs.
8k writes with fsync: 170 MB/s, reads: 335 MB/s.
On Tue, 2014-10-07 at 14:24 +1000, Dan Mons wrote:
We have 6 nodes with one brick per node (2x3 replicate-distribute).
35TB per brick, for
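The exact benchmark command is not shown in the post; figures like "8k writes with fsync" are commonly produced with something along these lines using fio (purely illustrative, all parameters assumed):
# 8 KiB sequential writes against the Gluster mount, fsync after every write
fio --name=8k-fsync-write --directory=/mnt/gv0 --rw=write --bs=8k --size=1g --fsync=1
# matching sequential read pass
fio --name=8k-read --directory=/mnt/gv0 --rw=read --bs=8k --size=1g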