Re: [Gluster-users] No performance difference using libgfapi?

2014-04-04 Thread Humble Devassy Chirammal
Hi David,

Regarding hdparm:

 'hdparm' is meant to be used against SATA/IDE devices.

--snip--
   hdparm - get/set SATA/IDE device parameters

   hdparm provides a command line interface to various kernel interfaces
   supported by the Linux SATA/PATA/SAS libata subsystem and the older IDE
   driver subsystem.  Many newer (2008 and later) USB drive enclosures now
   also support SAT (SCSI-ATA Command Translation) and therefore may also
   work with hdparm.  E.g. recent WD Passport models and recent NexStar-3
   enclosures.  Some options may work correctly only with the latest kernels.

--/snip--

 Here, in your guest, the disk is a virtio disk (/dev/vd{a,b,c,...}), which
sits on the virtio bus; virtio-blk is not an ATA device, so this looks like
an incorrect way of using 'hdparm'.

Also, most virtualization software now lets you use virtio-scsi (the disk
shown inside the guest will then be sd{a,b,...}), where most of the feature
set of the SCSI protocol is honored; you may want to look into that as
well.
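
As a sketch of that suggestion, a virtio-scsi disk backed by the same gluster volume might look like this in libvirt domain XML (the host, volume, and path come from the thread; the attribute values are illustrative assumptions, not a tested configuration):

```xml
<!-- virtio-scsi controller; the guest then sees the disk as /dev/sda -->
<controller type='scsi' model='virtio-scsi'/>
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='gluster' name='gfsvol/tester1/img'>
    <host name='gfs-00'/>
  </source>
  <target dev='sda' bus='scsi'/>
</disk>
```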

--Humble





On Thu, Apr 3, 2014 at 3:35 PM, Dave Christianson 
davidchristians...@gmail.com wrote:

 [...]

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] No performance difference using libgfapi?

2014-04-04 Thread Dan Lambright
A possible reason that you do not see a performance difference is the buffer
cache on the client. This cache is unavailable to libgfapi, but may be
utilized if you use FUSE.

Take a look at the mount option fopen-keep-cache; if you have the source, you
can find it in fuse-bridge.c. By default it is enabled. Gluster invalidates
cache entries (through FUSE interfaces) when it detects changes via STAT.

You could try to disable that mount option to see what difference it makes.
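
The exact syntax for toggling fopen-keep-cache varies by version, so one rough way to test the same hypothesis is to take the kernel page cache out of the picture entirely with the direct-io-mode mount option. A sketch (mount point and volume name taken from the thread):

```shell
# Remount the FUSE client with direct I/O so reads and writes bypass the
# kernel page cache, making the comparison with libgfapi like-for-like.
umount /var/lib/libvirt/images
mount -t glusterfs -o direct-io-mode=enable gfs-00:/gfsvol /var/lib/libvirt/images
```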

- Original Message -
From: Humble Devassy Chirammal humble.deva...@gmail.com
To: Dave Christianson davidchristians...@gmail.com
Cc: gluster-users@gluster.org
Sent: Friday, April 4, 2014 3:05:20 AM
Subject: Re: [Gluster-users] No performance difference using libgfapi?

[...]



[Gluster-users] No performance difference using libgfapi?

2014-04-03 Thread Dave Christianson
Good Morning,

In my earlier experience invoking a VM using qemu/libgfapi, I reported that
it was noticeably faster than the same VM invoked from libvirt using a FUSE
mount; however, this was erroneous, as the qemu/libgfapi-invoked image had
been created with 2x the RAM and CPUs...

So, invoking the image by both methods with consistent specs of 2 GB RAM
and 2 CPUs, I attempted to check drive performance using the following
commands:

(For regular FUSE mount I have the gluster volume mounted at
/var/lib/libvirt/images.)

(For libgfapi I specify -drive file=gluster://gfs-00/gfsvol/tester1/img.)

Using libvirt/FUSE mount:
[root@tester1 ~]# hdparm -Tt /dev/vda1
/dev/vda1:
 Timing cached reads:   11346 MB in 2.00 seconds = 5681.63 MB/sec
 Timing buffered disk reads:  36 MB in 3.05 seconds = 11.80 MB/sec
[root@tester1 ~]# dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f
/tmp/output
10240+0 records in
10240+0 records out
41943040 bytes (42MB) copied, 0.0646241 s, 649 MB/sec

Using qemu/libgfapi:
[root@tester1 ~]# hdparm -Tt /dev/vda1
/dev/vda1:
 Timing cached reads:   11998 MB in 2.00 seconds = 6008.57 MB/sec
 Timing buffered disk reads:  36 MB in 3.03 seconds = 11.89 MB/sec
[root@tester1 ~]# dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f
/tmp/output
10240+0 records in
10240+0 records out
41943040 bytes (42MB) copied, 0.0621412 s, 675 MB/sec

Should I be seeing a bigger difference, or am I doing something wrong?
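
(One caveat about the dd test itself: without a flush, dd is largely timing writes into the guest's page cache, which masks the transport underneath. A variant that includes the flush, assuming GNU dd:)

```shell
# conv=fdatasync makes dd call fdatasync() before reporting, so the
# throughput figure includes the cost of getting data to the backing store.
dd if=/dev/zero of=/tmp/output bs=8k count=10k conv=fdatasync
rm -f /tmp/output
```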

I'm also curious whether people have found that the performance difference
is greater as the size of the gluster cluster scales up.

Thanks,
David

Re: [Gluster-users] No performance difference using libgfapi?

2014-04-03 Thread Josh Boon
Hey David, 

Can you provide the qemu command used to run each of them? What does your
gluster/disk/network layout look like?

Depending on your disk and network setup, you may be hitting a bottleneck
there that would prevent gfapi from performing at capacity. There are lots
of options here that could impact things.
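
For reference, the two invocations being compared presumably differ only in the -drive argument; a sketch follows (the binary name, if=virtio, and the FUSE-side image path are assumptions; only the 2 GB / 2 CPU specs and the gluster URL come from the thread):

```shell
# 1) Image reached through the FUSE mount:
qemu-system-x86_64 -m 2048 -smp 2 \
  -drive file=/var/lib/libvirt/images/tester1.img,if=virtio

# 2) Same image reached directly through libgfapi:
qemu-system-x86_64 -m 2048 -smp 2 \
  -drive file=gluster://gfs-00/gfsvol/tester1/img,if=virtio
```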


- Original Message -

From: Dave Christianson davidchristians...@gmail.com 
To: gluster-users@gluster.org 
Sent: Thursday, April 3, 2014 6:05:51 AM 
Subject: [Gluster-users] No performance difference using libgfapi? 

[...]


Re: [Gluster-users] No performance difference using libgfapi?

2014-04-03 Thread Carlos Capriotti
I second that comment!

Boy, I was chasing ghosts until very recently, regarding disk
configurations (physical RAID).

I had 4 x 1 TB SAS disks, configured as RAID 5 on a Dell PERC 6, and
thought that would be good enough.

My network throughput would not go over 280 Mbps and I was blaming Linux.

Then I finally decided to test disk speed with:

dd if=/dev/zero of=/localvolume bs=512k count=17000

Much to my surprise, it reported something like 35-40 MB/s.

I then changed my layout to RAID 10, and the same test bumped results to
400 MB/s. Average network throughput is now around 600 Mbps, with spikes
of 750 Mbps.

Ah... You might want to make sure your network is set to use jumbo frames.
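
A minimal sketch of enabling and verifying jumbo frames (the interface name is an example, the peer host comes from the thread; every NIC and switch along the path must allow the larger MTU):

```shell
# Raise the MTU on the storage-facing interface.
ip link set dev eth0 mtu 9000

# Verify end to end: 8972 bytes of payload + 20 (IPv4) + 8 (ICMP) = 9000;
# -M do forbids fragmentation, so this fails if any hop lacks jumbo frames.
ping -M do -s 8972 -c 3 gfs-00
```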

Cheers,

Carlos



On Thu, Apr 3, 2014 at 3:52 PM, Josh Boon glus...@joshboon.com wrote:

 [...]
