Hi Brian, I'm just wondering if you had any luck with figuring out performance
limitations of your setup. I'm testing a similar configuration, so any tips or
recommendations would be much appreciated. Thanks, --Alex
Final point. I tried remounting the volume using an undocumented setting I
saw in another posting:
mount -o direct-io-mode=enable -t glusterfs dev-storage1:/single1 /gluster/single1
But with that, and KVM also using cache=none, the VM simply hung on startup.
This looks like a bug to me.
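For what it's worth, a quick way to check whether a given filesystem accepts
O_DIRECT opens (the flag that cache=none makes KVM use) is a small probe. This
is a generic sketch, not Gluster-specific; the function name and the /tmp path
are my own examples, so substitute the glusterfs mount point to test the case
above:

```python
import errno
import os
import tempfile

def supports_o_direct(path):
    """Probe whether the filesystem holding `path` accepts O_DIRECT opens.

    Returns False when open() fails with EINVAL, which is how Linux
    reports a filesystem that does not support direct I/O.
    """
    try:
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o600)
    except OSError as e:
        if e.errno == errno.EINVAL:
            return False
        raise
    os.close(fd)
    os.unlink(path)
    return True

# Example probe on /tmp; point this at e.g. /gluster/single1/somefile
# to test the mount discussed above.
print(supports_o_direct(tempfile.mktemp(dir="/tmp")))
```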
On Sat, Jun 09, 2012 at 09:53:05AM +0100, Brian Candler wrote:
> So clearly cache='none' (O_DIRECT) makes a big difference when using a
> local filesystem, so I'd very much like to be able to test it with gluster.
Aha, O_DIRECT is in 3.4+:
http://comments.gmane.org/gmane.comp.file-systems.gluster.
On Fri, Jun 08, 2012 at 09:30:19PM +0100, Brian Candler wrote:
> ubuntu@lucidtest:~$ dd if=/dev/zero of=/var/tmp/test.zeros2 bs=1024k count=100
> 100+0 records in
> 100+0 records out
> 104857600 bytes (105 MB) copied, 14.5182 s, 7.2 MB/s
>
> And this is after live-migrating the VM to dev-storage2:
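As a sanity check on the dd summary line quoted above, the reported rate does
follow from the byte count and elapsed time (dd's "MB" is decimal megabytes):

```python
# Recompute dd's reported throughput from its own summary line:
# "104857600 bytes (105 MB) copied, 14.5182 s, 7.2 MB/s"
bytes_copied = 104_857_600                      # 100 blocks of 1 MiB
seconds = 14.5182
mb_per_s = bytes_copied / seconds / 1_000_000   # dd reports decimal MB/s
print(f"{mb_per_s:.1f} MB/s")
```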
On Fri, Jun 08, 2012 at 05:46:42PM +0100, Brian Candler wrote:
> The VM boots with io='native' and bus='virtio', but performance is still
> very poor:
>
> ubuntu@lucidtest:~$ dd if=/dev/zero of=/var/tmp/test.zeros bs=1024k
> count=100
> 100+0 records in
> 100+0 records out
> 10485
On Fri, Jun 08, 2012 at 02:23:57PM -0400, olav johansen wrote:
>This is a single thread trying to process a sequential task, where the
>latency really becomes a problem. With ls -aR I get similar speed:
That's interesting.
>[@web1 files]# time ls -aR|wc -l
>1968316
>real 27m23.432s
Hi Brian,
This is a single thread trying to process a sequential task, where the
latency really becomes a problem. With ls -aR I get similar speed:
[@web1 files]# time ls -aR|wc -l
1968316
real 27m23.432s
user 0m5.523s
sys 0m35.369s
[@web1 files]# time ls -aR|wc -l
1968316
real 26m2.72
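Working the numbers from the run above: 1,968,316 entries in real 27m23.432s
is roughly 1,200 entries per second, i.e. under a millisecond per entry, which
is about what one network round-trip per stat over gigabit Ethernet would
predict. A quick sketch of the arithmetic:

```python
# Per-entry latency implied by the `ls -aR | wc -l` timing above.
entries = 1_968_316
wall_seconds = 27 * 60 + 23.432          # real 27m23.432s

rate = entries / wall_seconds            # entries per second
per_entry_ms = 1000.0 / rate             # milliseconds per entry

print(f"{rate:.0f} entries/s, {per_entry_ms:.3f} ms/entry")
```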
On Fri, Jun 08, 2012 at 05:46:42PM +0100, Brian Candler wrote:
> but glusterfs objected to the cache='none' option (possibly this opens the
> file with O_DIRECT?)
Yes that's definitely the problem, as I can see if I strace the kvm process:
stat("/gluster/safe/images/lucidtest/tmpaJqTD9.qcow2", {s
On Thu, Jun 07, 2012 at 02:36:26PM +0100, Brian Candler wrote:
> I'm interested in understanding this, especially the split-brain scenarios
> (better to understand them *before* you're stuck in a problem :-)
>
> BTW I'm in the process of building a 2-node 3.3 test cluster right now.
FYI, I have g
On Fri, Jun 08, 2012 at 12:19:58AM -0400, olav johansen wrote:
># mount -t glusterfs fs1:/data-storage /storage
>I've copied over my data to it again and doing a ls several times,
>takes ~0.5 seconds:
>[@web1 files]# time ls -all|wc -l
Like I said before, please also try without th
>.stat
>.readlink
>.getxattr
>.fgetxattr
>.readv
>
> Pranith.
> - Original Message -
> From: "Brian Candler"
> To: "Pranith Kumar Karampuri"
> Cc: "olav johansen", gluster-users@gluster.org, "Fernando Frediani (Qube)"
> Sent: Thursday, June 7, 2012 7:06:26 PM
> Subject: Re: [Gluster-users] Performance optimization tips Gluster 3.3? (small
> files / directory listings)
On Thu, Jun 07, 2012 at 08:34:56AM -0400, Pranith Kumar Karampuri wrote:
> Brian,
> Small correction: 'sending queries to *both* servers to check they are in
> sync - even read accesses.' Read fops like stat/getxattr etc are sent to only
> one brick.
Is that new behaviour for 3.3? My understan
Cc: "olav johansen" , "gluster-users@gluster.org"
Sent: Thursday, June 7, 2012 4:24:37 PM
Subject: Re: [Gluster-users] Performance optimization tips Gluster 3.3? (small
files / directory listings)
On Thu, Jun 07, 2012 at 10:10:03AM +, Fernando Frediani (Qube) wrote:
> Sent: Thursday, June 7, 2012 7:00:14 AM
> Subject: Re: [Gluster-users] Performance optimization tips Gluster 3.3?
> (small files / directory listings)
>
> Hello there.
>
>
> That's really interesting, because we are thinking about using GlusterFS
> too with a
>
I can't find the link right now.
- Original Message -
From: "olav johansen"
To: gluster-users@gluster.org
Sent: Thursday, June 7, 2012 8:02:14 AM
Subject: [Gluster-users] Performance optimization tips Gluster 3.3? (small
files / directory listings)
Hi,
I'm using Gluster 3.3.0-1.el6.x86_64, on two storage nodes, replicated mode
(fs1, fs2)
Node specs: CentOS 6.2 Intel Quad Core 2.8GHz, 4Gb ram, 3ware raid, 2x500GB
sata 7200rpm (RAID1 for os), 6x1TB sata 7200rpm (RAID10 for /data), 1Gbit
network
I've mounted the data partition to web1, a Dual Qu
On Thu, Jun 07, 2012 at 10:10:03AM +, Fernando Frediani (Qube) wrote:
>Sorry this reply won’t be of any help to your problem, but I am too
>curious to understand how it can be even slower if mounting using the
>Gluster client, which I would expect always to be quicker than NFS or
>anythi