[Gluster-users] The ProxMox/ARM challenge...

2011-08-09 Thread Charles Williams
So ladies and gentlemen. I am looking for a bit of
insight/input/advice/rants/raves and/or general insanity that may or may
not lead me to the solution I am searching for.


Here is the situation.


I have a small farm of ProxMox servers. Standard installs (Debian Lenny
based Host systems with mixed Containers). As per the standard install, the
largest portion of the drive is handed over to /dev/mapper/pve-data,
which is mounted under /var/lib/vz. The standard configuration of OpenVZ and
its containers lives in /etc/vz.


I have a few QNAP T412 ARM-based NAS boxes installed and waiting, with 12
TB per box, on a private net connected to the ProxMox boxes via their
second NIC. Backup PC is installed and backing up all host systems and
Containers separately. On the QNAP boxes the data is stored in
/share/MD0_DATA.


The challenge is getting the data on the Host systems mirrored to the
QNAP boxes.


Host node      -->  QNAP node

/etc           -->  /share/MD0_DATA/Host/etc

/var/lib/vz    -->  /share/MD0_DATA/Host/vz


Gluster was the first solution that went through my mind. The problem is
that, from what I have been able to piece together, Gluster has problems
with ARM-based systems. How would you go about getting this setup
working? And be advised, rsync is not an option unless you can couple it
with something that catches changes on the systems (iwatch maybe) and
can trigger an rsync update of the affected file(s) as soon as they
change.
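
For what it's worth, a minimal sketch of the watch-and-sync idea described
above, assuming inotify-tools on the host and rsync-over-ssh access to the
QNAP box (the hostname and target paths are placeholders):

#!/bin/sh
# Re-sync /etc and /var/lib/vz to the QNAP box whenever something changes.
# 'qnap-host' and the destination paths are illustrative only.
while inotifywait -r -q -e modify,create,delete,move /etc /var/lib/vz; do
    rsync -az --delete /etc/        qnap-host:/share/MD0_DATA/Host/etc/
    rsync -az --delete /var/lib/vz/ qnap-host:/share/MD0_DATA/Host/vz/
done

Changes that arrive while an rsync is still running are only picked up on
the next trigger; a monitoring-mode watcher (inotifywait -m) or a tool such
as lsyncd avoids re-registering the watches on every change.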


If you need more information just let me know.


Have fun with the challenge,

Chuck
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] High memory usage GlusterFS

2011-08-09 Thread Marius S. Eriksrud

I have a replicated server setup: 2 servers and 3 volumes.
Each server has 4GB of RAM, and gluster is using over 50% of it.
I have tried echo 2 > /proc/sys/vm/drop_caches, which frees up a few
hundred MB, but it does not take long before it is back. Any ideas?


Memory consumption on server 1:
29256 glusterfs --volume-name=vmail /kit/vmail1
52968 glusterfs --volume-name=global /kit/global
166860 /usr/sbin/glusterfsd -p /var/run/glusterfsd.pid -f 
/etc/glusterfs/glusterfsd.vol --log-file 
/var/log/glusterfs/glusterfsd.vol.log

2101844 glusterfs --volume-name=data /kit/data1

Memory consumption on server 2:
132784 glusterfs --volume-name=global /kit/global
196996 /usr/sbin/glusterfsd -p /var/run/glusterfsd.pid -f 
/etc/glusterfs/glusterfsd.vol --log-file 
/var/log/glusterfs/glusterfsd.vol.log

557648 glusterfs --volume-name=vmail /kit/vmail1
1586040 glusterfs --volume-name=data /kit/data1
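
For reference, a per-process listing like the above (resident memory in kB
followed by the command line) can be produced with something along these
lines; the exact formatting will differ:

# RSS (kB) and command line for all gluster processes on this host
ps -o rss=,args= -C glusterfs,glusterfsd | sort -n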

--

glusterfs.vol:
http://pastebin.com/tQJKYhKC

glusterfsd.vol
http://pastebin.com/4BJfq3a5

--
Best regards,
Marius S. Eriksrud
Web developer
Tel: 99 603 888
mar...@konsept-it.no
www.konsept-it.no

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] High memory usage GlusterFS

2011-08-09 Thread Amar Tumballi
Did you do 'gluster volume set' or multiple add-brick operations on the
volume?

Also, you can do 'kill -USR1 PID' to get details on what would be
consuming memory (check /tmp/glusterdump.PID after kill).
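
A concrete sketch of the above (the pgrep pattern is just one way to find
the PID and is illustrative only):

# Ask the client process for the 'data' volume for a statedump,
# then inspect the dump file it writes under /tmp.
PID=$(pgrep -f 'volume-name=data')
kill -USR1 "$PID"
less /tmp/glusterdump.$PID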

Again, please share which version of glusterfs you are using; that helps
us check whether there were any known leaks in that particular version.

Regards,
Amar


 Memory consumption on server 1:
 29256 glusterfs --volume-name=vmail /kit/vmail1
 52968 glusterfs --volume-name=global /kit/global
 166860 /usr/sbin/glusterfsd -p /var/run/glusterfsd.pid -f
 /etc/glusterfs/glusterfsd.vol --log-file /var/log/glusterfs/glusterfsd.vol.log
 2101844 glusterfs --volume-name=data /kit/data1

 Memory consumption on server 2:
 132784 glusterfs --volume-name=global /kit/global
 196996 /usr/sbin/glusterfsd -p /var/run/glusterfsd.pid -f
 /etc/glusterfs/glusterfsd.vol --log-file /var/log/glusterfs/glusterfsd.vol.log
 557648 glusterfs --volume-name=vmail /kit/vmail1
 1586040 glusterfs --volume-name=data /kit/data1

 --
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] 3.3 Beta with Granular Lock Code

2011-08-09 Thread John Mark Walker
Greetings,

Community member sysconfig (http://community.gluster.org/u/sysconfig/) was kind 
enough to take Avati's granular locking code for the 3.3 beta and roll it into 
RPMs for CentOS 6:


http://community.gluster.org/p/centos-6-rpms-for-3-3beta-with-granular-locking/

Which was built from Avati's sources here:

http://shell.gluster.com/~avati/

If you have feedback, please let us know. For some users, it fixed a problem 
with VM hosting as it relates to self-heal.

NOTE: THIS IS NOT EVEN BETA CODE. IT MIGHT (AND PROBABLY WILL) BREAK. USE AT 
YOUR OWN RISK.

You may wish to wait until we migrate the code into trunk, which should be in a 
couple of days, but I'm passing on this info in the hopes that you will test it.

Thanks,
John Mark Walker
Gluster Community Guy
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster client performance

2011-08-09 Thread Jesse Stroik

Pavan,

Thank you for your help.  We wanted to get back to you with our results 
and observations.  I'm cc'ing gluster-users for posterity.


We did experiment with enable-trickling-writes.  That was one of the 
translator tunables we wanted to know the precise syntax for so that we 
could be certain we were disabling it.  As hoped, disabling trickling 
writes improved performance somewhat.


We are definitely interested in any other undocumented write-buffer 
related tunables.  We've tested the documented tuning parameters.


Performance improved significantly when we switched clients to the mainline 
kernel (2.6.35-13).  We also updated to OFED 1.5.3, but it wasn't 
responsible for the performance improvement.


Our findings with 32KB block size (cp) write performance:

250-300MB/sec single stream performance
400MB/sec multiple-stream per client performance

This is much higher than we observed with the 2.6.18 kernel series.  Using 
the 2.6.18 line, we also observed virtually no difference between 
single-stream and multi-stream tests, suggesting a 
bottleneck with the fabric.


Both 2.6.18 and 2.6.35-13 performed very well (about 600MB/sec) when 
writing 128KB blocks.
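
As a side note, for anyone wanting to reproduce a streaming write at a fixed
block size, a dd run along these lines is one option (the mount point below
is a placeholder):

# ~2 GB streaming write at a 32 KB block size; repeat with bs=128k to compare
dd if=/dev/zero of=/mnt/gluster/ddtest.img bs=32k count=65536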


When I disabled write-behind on the 2.6.18 series of kernels as a test, 
performance plummeted to a few MB/sec when writing block sizes smaller 
than 128KB.  We did not test this extensively.


Disabling enable-trickling-writes gave us approximately a 20% boost, 
reflected in the numbers above, for single-stream writes.  We observed 
no significant difference with several streams per client due to 
disabling that tunable.


For reference, we are running another cluster file system on the same 
underlying hardware/software.  With both the old kernel (2.6.18.x) and 
the new kernel (2.6.35-13) we get approximately:


450-550MB/sec single stream performance
1200MB+/sec multiple stream per client performance

We set the test directory to write entire files to a single LUN, which is 
how we configured gluster, in an effort to mitigate differences.


It is treacherous to speculate why we might be more limited with gluster 
over RDMA than the other cluster file system without a significant 
amount of analysis.  That said, I wonder if there may be an 
issue with the way fuse handles write buffers, causing a 
bottleneck for RDMA.


The bottom line is that our observed performance was poor using the 
2.6.18 RHEL 5 kernel line relative to the mainline (2.6.35) kernels. 
Updating to the newer kernels was well worth the testing and downtime. 
Hopefully this information can help others.


Best,
Jesse Stroik
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.3 Beta with Granular Lock Code

2011-08-09 Thread Anand Avati
On Tue, Aug 9, 2011 at 11:22 PM, John Mark Walker jwal...@gluster.com wrote:

  Greetings,

 Community member sysconfig (http://community.gluster.org/u/sysconfig/) was
 kind enough to take Avati's granular locking code for the 3.3 beta and roll
 it into RPMs for CentOS 6:


 http://community.gluster.org/p/centos-6-rpms-for-3-3beta-with-granular-locking/

 Which was built from Avati's sources here:

 http://shell.gluster.com/~avati/


Most of the credit goes to Pranith for hammering it down!
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster client performance

2011-08-09 Thread Pavan T C

On Wednesday 10 August 2011 12:11 AM, Jesse Stroik wrote:

Pavan,

Thank you for your help. We wanted to get back to you with our results
and observations. I'm cc'ing gluster-users for posterity.

We did experiment with enable-trickling-writes. That was one of the
translator tunables we wanted to know the precise syntax for so that we
could be certain we were disabling it. As hoped, disabling trickling
writes improved performance somewhat.

We are definitely interested in any other undocumented write-buffer
related tunables. We've tested the documented tuning parameters.

Performance improved significantly when we switched clients to the mainline
kernel (2.6.35-13). We also updated to OFED 1.5.3, but it wasn't
responsible for the performance improvement.

Our findings with 32KB block size (cp) write performance:

250-300MB/sec single stream performance
400MB/sec multiple-stream per client performance


OK. Let's see if we can improve this further. Please use the following 
tunables, as suggested below:


For write-behind -
option cache-size 16MB

For read-ahead -
option page-count 16

For io-cache -
option cache-size 64MB

You will need to place these lines in the client volume file, restart 
the server, and remount the volume on the clients.
Your client (fuse) volume file sections will then look like the following 
(with your own volume name in place of testvol) -


volume testvol-write-behind
type performance/write-behind
option cache-size 16MB
subvolumes testvol-client-0
end-volume

volume testvol-read-ahead
type performance/read-ahead
option page-count 16
subvolumes testvol-write-behind
end-volume

volume testvol-io-cache
type performance/io-cache
option cache-size 64MB
subvolumes testvol-read-ahead
end-volume

Run your copy command with these tunables. For now, let's keep the 
default setting for trickling writes, which is 'enabled'; you can simply 
remove that tunable from the volume file to get the default behaviour.
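
For the remount step, assuming a volfile-based fuse mount (the volfile path
and mount point below are placeholders; adjust them to your layout):

# Restart the glusterfsd server processes first (how depends on your distro),
# then on each client unmount and mount again so the new options take effect.
umount /mnt/testvol
glusterfs -f /etc/glusterfs/testvol-fuse.vol /mnt/testvol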


Pavan


This is much higher than we observed with kernel 2.6.18 series. Using
the 2.6.18 line, we also observed virtually no difference between
running single stream tests and multi stream tests suggesting a
bottleneck with the fabric.

Both 2.6.18 and 2.6.35-13 performed very well (about 600MB/sec) when
writing 128KB blocks.

When I disabled write-behind on the 2.6.18 series of kernels as a test,
performance plummeted to a few MB/sec when writing block sizes smaller
than 128KB. We did not test this extensively.

Disabling enable-trickling-writes gave us approximately a 20% boost,
reflected in the numbers above, for single-stream writes. We observed no
significant difference with several streams per client due to disabling
that tunable.

For reference, we are running another cluster file system on the same
underlying hardware/software. With both the old kernel (2.6.18.x) and
the new kernel (2.6.35-13) we get approximately:

450-550MB/sec single stream performance
1200MB+/sec multiple stream per client performance

We set the test directory to write entire files to a single LUN which is
how we configured gluster in an effort to mitigate differences.

It is treacherous to speculate why we might be more limited with gluster
over RDMA than the other cluster file system without a significant
amount of analysis. That said, I wonder if there may be an
issue with the way fuse handles write buffers, causing a
bottleneck for RDMA.

The bottom line is that our observed performance was poor using the
2.6.18 RHEL 5 kernel line relative to the mainline (2.6.35) kernels.
Updating to the newer kernels was well worth the testing and downtime.
Hopefully this information can help others.

Best,
Jesse Stroik


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] 3.2.2 Performance Issue

2011-08-09 Thread Joey McDonald
Hello all,

I've configured 4 bricks over a GigE network; however, I'm getting very slow
performance when writing to my gluster share.

Just set this up this week, and here's what I'm seeing:

[root@vm-container-0-0 ~]# gluster --version | head -1
glusterfs 3.2.2 built on Jul 14 2011 13:34:25

[root@vm-container-0-0 pifs]# gluster volume info

Volume Name: pifs
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: vm-container-0-0:/gluster
Brick2: vm-container-0-1:/gluster
Brick3: vm-container-0-2:/gluster
Brick4: vm-container-0-3:/gluster

The 4 systems are each storage bricks and storage clients, mounting gluster
like so:

[root@vm-container-0-1 ~]# df -h  /pifs/
FilesystemSize  Used Avail Use% Mounted on
glusterfs#127.0.0.1:pifs
  1.8T  848M  1.7T   1% /pifs

iperf shows network throughput looking good:

[root@vm-container-0-0 pifs]# iperf -c vm-container-0-1

Client connecting to vm-container-0-1, TCP port 5001
TCP window size: 16.0 KByte (default)

[  3] local 10.19.127.254 port 53441 connected with 10.19.127.253 port 5001
[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec


Then, writing to the local disk is pretty fast:

[root@vm-container-0-0 pifs]# dd if=/dev/zero of=/root/dd_test.img bs=1M
count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 4.8066 seconds, 436 MB/s

However, writes to the gluster share are abysmally slow:

[root@vm-container-0-0 pifs]# dd if=/dev/zero of=/pifs/dd_test.img bs=1M
count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 241.866 seconds, 8.7 MB/s

Other than the fact that it's quite slow, it seems to be very stable.

iozone testing shows about the same results.

Any help troubleshooting would be much appreciated. Thanks!

   --joey
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.2.2 Performance Issue

2011-08-09 Thread Pavan T C

On Wednesday 10 August 2011 02:56 AM, Joey McDonald wrote:

Hello all,

I've configured 4 bricks over a GigE network, however I'm getting very
slow performance for writing to my gluster share.

Just set this up this week, and here's what I'm seeing:


A few questions -

1. Are these bare-metal systems or are they virtual machines?

2. What is the amount of RAM in each of these systems?

3. How many CPUs do they have?

4. Can you also perform the dd on /gluster as opposed to /root, to check 
the backend performance?

5. What is your disk backend? Is it direct attached or is it an array?

6. What is the backend filesystem?

7. Can you run a simple scp of about 10M between any two of these 
systems and report the speed?


Pavan



[root@vm-container-0-0 ~]# gluster --version | head -1
glusterfs 3.2.2 built on Jul 14 2011 13:34:25

[root@vm-container-0-0 pifs]# gluster volume info

Volume Name: pifs
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: vm-container-0-0:/gluster
Brick2: vm-container-0-1:/gluster
Brick3: vm-container-0-2:/gluster
Brick4: vm-container-0-3:/gluster

The 4 systems are each storage bricks and storage clients, mounting
gluster like so:

[root@vm-container-0-1 ~]# df -h /pifs/
Filesystem Size Used Avail Use% Mounted on
glusterfs#127.0.0.1:pifs
1.8T 848M 1.7T 1% /pifs

iperf shows network throughput looking good:

[root@vm-container-0-0 pifs]# iperf -c vm-container-0-1

Client connecting to vm-container-0-1, TCP port 5001
TCP window size: 16.0 KByte (default)

[ 3] local 10.19.127.254 port 53441 connected with 10.19.127.253 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.10 GBytes 941 Mbits/sec


Then, writing to the local disk is pretty fast:

[root@vm-container-0-0 pifs]# dd if=/dev/zero of=/root/dd_test.img bs=1M
count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 4.8066 seconds, 436 MB/s

However, writes to the gluster share are abysmally slow:

[root@vm-container-0-0 pifs]# dd if=/dev/zero of=/pifs/dd_test.img bs=1M
count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 241.866 seconds, 8.7 MB/s

Other than the fact that it's quite slow, it seems to be very stable.

iozone testing shows about the same results.

Any help troubleshooting would be much appreciated. Thanks!

--joey






___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.2.2 Performance Issue

2011-08-09 Thread Joey McDonald
On Tue, Aug 9, 2011 at 5:40 PM, Joey McDonald j...@scare.org wrote:

 Hi Pavan,

 Thanks for your quick reply, comments inline:


 1. Are these baremetal systems or are they Virtual machines ?


 Bare metal systems.



 2. What is the amount of RAM of each of these systems ?


 They all have 4194304 kB of memory.



 3. How many CPUs do they have ?


 They each have 8 procs.


 4. Can you also perform the dd on /gluster as opposed to /root to check
 the backend performance ?


 Sure, here is that output:

 [root@vm-container-0-0 ~]# dd if=/dev/zero of=/gluster/dd_test.img bs=1M
 count=2000
 2000+0 records in
 2000+0 records out
 2097152000 bytes (2.1 GB) copied, 6.65193 seconds, 315 MB/s



 5. What is your disk backend ? Is it direct attached or is it an array ?


 Direct attached, /gluster is /dev/sdb1, 1TB SATA drive (as is /dev/sda):

 [root@vm-container-0-0 ~]# hdparm -i /dev/sdb

 /dev/sdb:

  Model=WDC WD1002FBYS-02A6B0   , FwRev=03.00C06, SerialNo=
 WD-WMATV5311442
  Config={ HardSect NotMFM HdSw15uSec SpinMotCtl Fixed DTR5Mbs FmtGapReq }
  RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=50
  BuffType=unknown, BuffSize=32767kB, MaxMultSect=16, MultSect=?0?
  CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=268435455
  IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
  PIO modes:  pio0 pio3 pio4
  DMA modes:  mdma0 mdma1 mdma2
  UDMA modes: udma0 udma1 udma2
  AdvancedPM=no WriteCache=enabled
  Drive conforms to: Unspecified:  ATA/ATAPI-1 ATA/ATAPI-2 ATA/ATAPI-3
 ATA/ATAPI-4 ATA/ATAPI-5 ATA/ATAPI-6 ATA/ATAPI-7

  * signifies the current active mode



 6. What is the backend filesystem ?


 ext3


 7. Can you run a simple scp of about 10M between any two of these systems
 and report the speed ?


  Sure, output:

 [root@vm-container-0-1 ~]# scp vm-container-0-0:/gluster/dd_test.img .
 Warning: Permanently added 'vm-container-0-0' (RSA) to the list of known
 hosts.
 root@vm-container-0-0's password:
 dd_test.img
 100% 2000MB
  39.2MB/s   00:51


--joey




___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Gluster and Read-Only Disks

2011-08-09 Thread L W
I've run into a problem with gluster in a small NAS setup, just trying to
serve a couple of disks on a LAN. Is it possible to export a filesystem I
have mounted read-only on the server? I have an ext2 disk that I need to
access read-only, but creating a volume with that mountpoint fails with an
"extended attributes not supported" error.  If that mountpoint is a
subdirectory of the volume export point, everything except the read-only
subtree works.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Problem with Gluster Geo Replication, status faulty

2011-08-09 Thread Andre Mosin

I overlooked the minimum requirements page. Thank you very much, Vijay!

Andre
Date: Sun, 7 Aug 2011 14:24:55 +0530
Subject: Re: [Gluster-users] Problem with Gluster Geo Replication, status faulty
From: vi...@gluster.com
To: al_dan...@hotmail.com
CC: kaushi...@gluster.com; zhourong.m...@flixlab.com; gluster-users@gluster.org



On Thu, Aug 4, 2011 at 10:24 PM, Andre Mosin al_dan...@hotmail.com wrote:






Hi Kaushik,

Thanks for the detailed explanation!  I was messing with it yesterday and it 
seems that gluster-core and fuse packages have to be installed on the slave 
server as well. Could you confirm that?



Yes, GlusterFS installation is necessary on the slave. Even if you intend to 
have a non-Gluster directory as the slave, glusterfs-core and glusterfs-fuse 
packages need to be installed on the slave. The minimum requirements for 
Geo-replication can be found here:


http://www.gluster.com/community/documentation/index.php/Gluster_3.2:_Checking_Geo-replication_Minimum_Requirements
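
On an RPM-based slave that amounts to something like the following (the
package names are from the requirements above; the package manager is an
assumption about your distribution):

yum install glusterfs-core glusterfs-fuse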


Regards,
Vijay
  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster and Read-Only Disks

2011-08-09 Thread Mohammed Junaid
Currently it's not possible to use an export point that is mounted read-only,
because the server process checks for extended attribute support in the
filesystem by creating a dummy extended attribute (which is removed right
after it is created). Since the export point is mounted read-only, setting
the extended attribute fails and the process exits.
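
The same failure can be reproduced from the shell, assuming the attr tools
are installed (the attribute name and path below are illustrative only):

# On a writable export this succeeds and cleans up after itself; on a
# read-only mount the first command fails, which is what the brick
# process runs into.
setfattr -n trusted.glusterfs.test -v working /share/export \
  && setfattr -x trusted.glusterfs.test /share/export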

On Mon, Aug 8, 2011 at 9:48 AM, L W laserspewpew...@gmail.com wrote:

 I've run into a problem with gluster in a small NAS setup, just trying to
 serve a couple of disks on a LAN. Is it possible to export a filesystem I
 have mounted read-only on the server? I have an ext2 disk that I need
 to access read-only, but creating a volume with that mountpoint fails with
 an "extended attributes not supported" error.  If that mountpoint is a
 subdirectory of the volume export point, everything except the read-only
 subtree works.

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.2.2 Performance Issue

2011-08-09 Thread Mohit Anchlia
And can you also give the mount options of the gluster fs?
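
Something like the following would do; it just pulls the relevant lines out
of the mount table:

grep -i gluster /proc/mounts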

On Tue, Aug 9, 2011 at 4:41 PM, Joey McDonald j...@scare.org wrote:


 On Tue, Aug 9, 2011 at 5:40 PM, Joey McDonald j...@scare.org wrote:

 Hi Pavan,
 Thanks for your quick reply, comments inline:


 1. Are these baremetal systems or are they Virtual machines ?

 Bare metal systems.


 2. What is the amount of RAM of each of these systems ?

 They all have 4194304 kB of memory.


 3. How many CPUs do they have ?

 They each have 8 procs.


 4. Can you also perform the dd on /gluster as opposed to /root to check
 the backend performance ?

 Sure, here is that output:

 [root@vm-container-0-0 ~]# dd if=/dev/zero of=/gluster/dd_test.img bs=1M
 count=2000
 2000+0 records in
 2000+0 records out
 2097152000 bytes (2.1 GB) copied, 6.65193 seconds, 315 MB/s


 5. What is your disk backend ? Is it direct attached or is it an array ?

 Direct attached, /gluster is /dev/sdb1, 1TB SATA drive (as is /dev/sda):
 [root@vm-container-0-0 ~]# hdparm -i /dev/sdb
 /dev/sdb:
  Model=WDC WD1002FBYS-02A6B0                   , FwRev=03.00C06, SerialNo=
     WD-WMATV5311442
  Config={ HardSect NotMFM HdSw15uSec SpinMotCtl Fixed DTR5Mbs FmtGapReq
 }
  RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=50
  BuffType=unknown, BuffSize=32767kB, MaxMultSect=16, MultSect=?0?
  CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=268435455
  IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
  PIO modes:  pio0 pio3 pio4
  DMA modes:  mdma0 mdma1 mdma2
  UDMA modes: udma0 udma1 udma2
  AdvancedPM=no WriteCache=enabled
  Drive conforms to: Unspecified:  ATA/ATAPI-1 ATA/ATAPI-2 ATA/ATAPI-3
 ATA/ATAPI-4 ATA/ATAPI-5 ATA/ATAPI-6 ATA/ATAPI-7
  * signifies the current active mode


 6. What is the backend filesystem ?

 ext3


 7. Can you run a simple scp of about 10M between any two of these systems
 and report the speed ?

  Sure, output:
 [root@vm-container-0-1 ~]# scp vm-container-0-0:/gluster/dd_test.img .
 Warning: Permanently added 'vm-container-0-0' (RSA) to the list of known
 hosts.
 root@vm-container-0-0's password:
 dd_test.img
                                                               100% 2000MB
  39.2MB/s   00:51

    --joey




 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users