On Fri, 2 Mar 2012, Brian Candler wrote:

> On Fri, Mar 02, 2012 at 02:41:30PM +0200, Harald Hannelius wrote:
> So next is back to the four-node setup you had before. I would expect that
> to perform about the same.

So would I expect, too. But:

# time dd if=/dev/zero bs=1M count=20000 of=/gluster/testfile
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 1058.22 s, 19.8 MB/s

real    17m38.357s
user    0m0.040s
sys     0m12.501s

> Right, so we know:
>
> - replic of aethra and alcippe is fast
> - distrib/replic across all four nodes is slow
>
> So chopping further, what about:
>
> - replic of adraste and helen?

The pattern for me starts to look like this:

  max-write-speed ~= <link speed>/nodes.

Volume Name: test
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: adraste:/data/single
Brick2: helen:/data/single

# time dd if=/dev/zero bs=1M count=10000 of=/mnt/testfile
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 195.816 s, 53.5 MB/s

real    3m15.821s
user    0m0.016s
sys     0m8.169s
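Back-of-the-envelope, that pattern can be sanity-checked with a quick calculation. This is only a sketch: it assumes a 1 Gbit/s link (roughly 119 MiB/s raw; real TCP payload is a bit lower), which isn't stated in the thread:

```shell
# Rough check of the pattern max-write-speed ~= <link speed>/nodes.
# link_mbs is an assumed 1 Gbit/s link in MiB/s; adjust for your NICs.
link_mbs=119
for nodes in 2 4; do
    awk -v l="$link_mbs" -v n="$nodes" \
        'BEGIN { printf "%d bricks: expect <= %.1f MB/s\n", n, l / n }'
done
```

That predicts roughly 59.5 MB/s for two bricks and 29.8 MB/s for four, in the same ballpark as the 53.5 MB/s and 19.8 MB/s measured above.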


> This would show whether one of these nodes is at fault.

At least I got double-figure readings this time. Sometimes I get
write speeds of 5-6 MB/s.

> Well, I'm a bit lost when you start talking about VMs. Is this a production
> environment, and are you doing these dd/cp tests *in addition* to the
> production load of VM traffic? Or are you doing tests on an unloaded
> system?

I have some systems running in the background, yes. They are not really
production machines.

> Note: mail servers have a nasty habit of doing fsync() all the time, for
> every single received message.

It looks like OpenLDAP's slapadd uses some kind of sync as well. The load
average on the KVM host was up at 9.00 while slapadd was running.
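The cost of syncing every write can be seen with dd alone. This is only an illustration (the file path is arbitrary, and /tmp may be tmpfs, which makes syncs cheap): conv=fsync syncs once at the end, while oflag=dsync syncs after every write, which is closer to what a mail server or slapadd does.

```shell
# Buffered-with-final-sync vs sync-after-every-write, same data volume.
# The dsync run issues 256 separate syncs; expect it to be much slower
# on real disks, and dramatically slower on a replicated gluster mount.
dd if=/dev/zero of=/tmp/sync-test bs=4k count=256 conv=fsync 2>&1 | tail -n1
dd if=/dev/zero of=/tmp/sync-test bs=4k count=256 oflag=dsync 2>&1 | tail -n1
rm -f /tmp/sync-test
```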

> Tools which might be useful to observe the production load:
>
>  iostat 1
>  # shows the count of I/O requests and KB read/written per second

iotop is handy too.

>  btrace /dev/sdb | grep ' [DC] '
>  # shows the actual I/O operations dispatched (D) and completed (C)
>  # to the drive
>
> There are also gluster-layer tools but I've not tried them:
> http://download.gluster.com/pub/gluster/glusterfs/3.2/Documentation/AG/html/chap-Gluster_Administration_Guide-Monitor_Workload.html
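If sysstat or blktrace aren't installed on the bricks, the raw counters behind iostat can be read straight from /proc/diskstats (Linux-specific; the field layout is documented in the kernel's iostats documentation). A minimal sketch; the sd/vd device-name pattern is an assumption, adjust for your disks:

```shell
# Reads/writes completed per whole disk, straight from the kernel.
# /proc/diskstats fields: $3 = device name, $4 = reads completed,
# $8 = writes completed (cumulative since boot).
awk '$3 ~ /^(sd[a-z]+|vd[a-z]+)$/ {
    printf "%s: %s reads, %s writes completed\n", $3, $4, $8
}' /proc/diskstats
```

Sampling it twice a second apart gives roughly what `iostat 1` reports.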

> Regards,
>
> Brian.



--

Harald Hannelius | harald.hannelius/a\arcada.fi | +358 50 594 1020
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
