Re: CloudStack 4.1.0 + GlusterFS

2013-09-12 Thread Chiradeep Vittal
John, care to write a HOWTO use GlusterFS in the wiki?


On 9/11/13 5:27 PM, John Skinner john.skin...@appcore.com wrote:

I ran each test independently for each block size.

iozone -I -i 0 -i 1 -i 2 -r 4k -s 1G

In order: -I to specify direct I/O, -i 0 to specify write/rewrite, -i 1 to
specify read/re-read, -i 2 to specify random read/write, -r to specify block
size, -s to specify file size.

The report is pretty thorough. What I put in the email was different, I
just took the throughput from each test and put it into a table outside
of the full report.

- Original Message -

From: Rafael Weingartner rafaelweingart...@gmail.com
To: users@cloudstack.apache.org
Sent: Wednesday, September 11, 2013 4:16:23 PM
Subject: Re: CloudStack 4.1.0 + GlusterFS

I have never used iozone before.

How did you get that report?

I tried: iozone -s 1024m -t 1 -R

But the report was pretty different from yours.


Re: CloudStack 4.1.0 + GlusterFS

2013-09-11 Thread Jake G.
I've been reading that it is not possible to use CloudStack with GlusterFS +
VMware hosts?

Can anyone confirm this?




 From: Jake G. dj_dark_jungl...@yahoo.com
To: users@cloudstack.apache.org users@cloudstack.apache.org 
Sent: Tuesday, September 10, 2013 7:26 PM
Subject: CloudStack 4.1.0 + GlusterFS
 

Hi all,

I have a working CloudStack environment on CentOS 6.4. I also have a working
GlusterFS cluster volume on two other servers.
I was able to mount the GlusterFS volume to a folder via the CentOS CLI, but
when trying to add the volume as primary or secondary storage via the
CloudStack GUI it failed.

Anyone know how to do this or have any useful links?

Thank you in advance!
Jake

Re: CloudStack 4.1.0 + GlusterFS

2013-09-11 Thread Rafael Weingartner
I think that CS will not even know that it is a distributed FS, since you
can mount the distributed volume in a folder and export it over NFS.

The question should be: is a distributed FS interesting to use in a
cloud environment? There is a cost to this kind of model, and it
may or may not have performance problems, since read and write speed are
crucial for the whole system.

As an example, here we use a RAID controller as the basis for the storage of
our cloud, and we achieve around 60 MB/s+ of write and read speed, which is
pretty good. What would Gluster's performance be in your environment? Do all
of your storage servers have gigabit connections?
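
For a rough, comparable number on any mounted storage, a plain dd run works;
a sketch, with the mount path as a placeholder (oflag=direct bypasses the
page cache so the storage itself is measured):

# write 1 GB straight to the mounted storage, bypassing the page cache
dd if=/dev/zero of=/mnt/primary/ddtest bs=1M count=1024 oflag=direct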


2013/9/11 Jake G. dj_dark_jungl...@yahoo.com

 I've been reading that it is not possible to use CloudStack with GlusterFS
 + VMware hosts?

 Can anyone confirm this?



 




-- 
Rafael Weingartner


Re: CloudStack 4.1.0 + GlusterFS

2013-09-11 Thread Tomasz Zięba
Hi,

Some time ago I tried to use GlusterFS as storage for CloudStack, but I
noticed that CloudStack uses the default settings for the mount command.

By default the mount command uses the UDP protocol, but GlusterFS's NFS
server works only over TCP.
I think that if the CloudStack developers added -o proto=tcp to the code,
GlusterFS should work.

For example:
/bin/mount -t nfs -o proto=tcp IP:/share /mnt/gluster/
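
As an aside: if the proto option is the only blocker, mounting with the
native GlusterFS FUSE client avoids the NFS path entirely, since the Gluster
protocol itself runs over TCP. A minimal sketch, with IP and share again as
placeholders for the Gluster server and volume name:

# native FUSE mount; no NFS, so no proto=tcp needed
/bin/mount -t glusterfs IP:/share /mnt/gluster/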

If you are using Citrix XenServer you should mount the share and turn it
into an SR. For CloudStack this is straightforward, because you should use
the PreSetup option when creating the primary storage.
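
On the XenServer side the SR can also be created from the CLI; a sketch,
assuming an NFS export from the Gluster servers, with the uuid, IP and path
values as placeholders:

# create a shared NFS SR pointing at the Gluster NFS export
xe sr-create host-uuid=<host-uuid> type=nfs content-type=user shared=true \
  name-label=gluster-nfs device-config:server=IP device-config:serverpath=/share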

Personally, I doubt that using GlusterFS as primary storage is a good
solution, but for secondary storage it should be very useful.


-- 
Regards,
Tomasz Zięba
Twitter: @TZieba
LinkedIn: http://pl.linkedin.com/pub/tomasz-zi%C4%99ba-ph-d/3b/7a8/ab6/


2013/9/11 Jake G. dj_dark_jungl...@yahoo.com

 I've been reading that it is not possible to use CloudStack with GlusterFS
 + VMware hosts?

 Can anyone confirm this?



 


Re: CloudStack 4.1.0 + GlusterFS

2013-09-11 Thread Shanker Balan
On 11-Sep-2013, at 5:14 PM, Tomasz Zięba t.a.zi...@gmail.com wrote:

 Personally, I doubt that using GlusterFS as primary storage is a good
 solution, but for secondary storage it should be very useful.


And maybe as a Swift backend.



--
@shankerbalan

M: +91 98860 60539 | O: +91 (80) 67935867
shanker.ba...@shapeblue.com | www.shapeblue.com | Twitter:@shapeblue
ShapeBlue Services India LLP, 22nd floor, Unit 2201A, World Trade Centre, 
Bangalore - 560 055



Re: CloudStack 4.1.0 + GlusterFS

2013-09-11 Thread Rafael Weingartner
I totally agree with Tomasz: I do not think that using a distributed FS as
primary storage is a good idea, but as secondary storage it sounds interesting.

But, of course, you can try ;)


2013/9/11 Shanker Balan shanker.ba...@shapeblue.com


 And maybe as a Swift backend.







-- 
Rafael Weingartner


Re: CloudStack 4.1.0 + GlusterFS

2013-09-11 Thread Rafael Weingartner
Right now I can think of three main reasons:

The first reason is performance (I do not know Gluster, its performance,
or whether its read and write speeds are satisfactory). If you can run a
test, please post the results.

Second, consistency. I do not know Gluster, but Swift, which is also a
distributed storage system, is not strongly consistent, and they make that
pretty clear on their page (http://docs.openstack.org/developer/swift/):

Swift is a highly available, distributed, eventually consistent
object/blob store

I would not accept storing my VM images on a FS that is eventually
consistent.

Third, the network. I haven't used this kind of FS, but I can imagine that it
uses a lot of bandwidth to keep the data synchronized, managed and secured.
So, managing the networking would be a pain.



2013/9/11 Olivier Mauras oliv...@core-hosting.net



 Hi,

 Those thinking that it's not a good idea, do you mind explaining your
 point of view? GlusterFS seems like the only real alternative to a highly
 priced SAN for the primary storage...


 Thanks,
 Olivier




-- 
Rafael Weingartner


Re: CloudStack 4.1.0 + GlusterFS

2013-09-11 Thread John Skinner
I currently have GlusterFS deployed into an 8-node KVM cluster running on
CloudStack 4.1 for primary storage. Gluster is deployed on 28 1 TB drives
across 2 separate storage appliances using a distributed-replicated volume
with the replica count set to 2. The storage network is 10 Gb copper.

These are the options I have configured for the volume in Gluster; most of
them are from a Red Hat document on configuring Red Hat Enterprise Storage
for VM hosting:



performance.io-thread-count: 32 
performance.cache-size: 1024MB 
performance.write-behind-window-size: 5MB 
performance.write-behind: on 
network.remote-dio: on 
cluster.eager-lock: enable 
performance.stat-prefetch: off 
performance.io-cache: on 
performance.read-ahead: on 
performance.quick-read: on 
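
These can be applied one at a time with gluster volume set; a brief sketch,
using vmstore as a placeholder volume name:

# each tunable is set individually on the volume
gluster volume set vmstore network.remote-dio on
gluster volume set vmstore performance.write-behind-window-size 5MB
gluster volume set vmstore performance.stat-prefetch off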

Here are some of the numbers I was getting when benchmarking the storage from
the KVM node directly (not a VM).

The table below is in KB/s. The test is a single-stream 1 GB file using
Direct I/O (no cache). I used iozone to run the benchmark.

Write 4k             45729
Read 4k              10189
Random Write 4k      31983
Random Read 4k        9859
Write 16k           182246
Read 16k             37146
Random Write 16k    113026
Random Read 16k      37237
Write 64k           420908
Read 64k            125315
Random Write 64k    383848
Random Read 64k     125218
Write 256k          567501
Read 256k           218413
Random Write 256k   508650
Random Read 256k    229117

In the above results, I have the volume mounted to each KVM host as a FUSE
glusterfs file system. They are added to CloudStack as a shared mount point.
In the future it would be great to utilize the GlusterFS qemu/libvirt
integration with libgfapi so I could bypass FUSE altogether. However, that
would require adding code to CloudStack to support it.
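
For a sense of what that buys: qemu (1.3 and later, as I understand it) can
open images directly over gluster:// URLs through libgfapi, with no FUSE
mount in the path. A sketch, with server, volname and the image path as
placeholders:

# create a qcow2 disk directly on the Gluster volume via libgfapi
qemu-img create -f qcow2 gluster://server/volname/images/vm-disk.qcow2 20G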

I have maybe 15 or so VMs running from the storage now and it is still pretty
snappy. I need to do some more testing though and really get it loaded.

- Original Message -

From: Rafael Weingartner rafaelweingart...@gmail.com 
To: users@cloudstack.apache.org 
Sent: Wednesday, September 11, 2013 8:48:07 AM 
Subject: Re: CloudStack 4.1.0 + GlusterFS


Re: CloudStack 4.1.0 + GlusterFS

2013-09-11 Thread Rafael Weingartner
I have never used iozone before.

How did you get that report?

I tried: iozone -s 1024m -t 1 -R

But the report was pretty different from yours.


2013/9/11 John Skinner john.skin...@appcore.com

 I currently have GlusterFS deployed into an 8-node KVM cluster running on
 CloudStack 4.1 for primary storage. Gluster is deployed on 28 1 TB drives
 across 2 separate storage appliances using a distributed-replicated volume
 with the replica count set to 2. The storage network is 10 Gb copper.


Re: CloudStack 4.1.0 + GlusterFS

2013-09-11 Thread John Skinner
I ran each test independently for each block size. 

iozone -I -i 0 -i 1 -i 2 -r 4k -s 1G 

In order: -I to specify direct I/O, -i 0 to specify write/rewrite, -i 1 to
specify read/re-read, -i 2 to specify random read/write, -r to specify block
size, -s to specify file size.

The report is pretty thorough. What I put in the email was different, I just 
took the throughput from each test and put it into a table outside of the full 
report. 
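
To build the table, the runs can be scripted over each block size; a sketch,
with the test-file path as a placeholder (-f points iozone at a file on the
mounted Gluster volume):

# one independent iozone run per block size, all with direct I/O
for bs in 4k 16k 64k 256k; do
    iozone -I -i 0 -i 1 -i 2 -r $bs -s 1G -f /mnt/gluster/iozone.tmp
done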

- Original Message -

From: Rafael Weingartner rafaelweingart...@gmail.com 
To: users@cloudstack.apache.org 
Sent: Wednesday, September 11, 2013 4:16:23 PM 
Subject: Re: CloudStack 4.1.0 + GlusterFS

I have never used iozone before.

How did you get that report? 

I tried: iozone -s 1024m -t 1 -R 

But the report was pretty different from yours. 



Re: CloudStack 4.1.0 + GlusterFS

2013-09-11 Thread Rafael Weingartner
Cool, thanks.



2013/9/11 John Skinner john.skin...@appcore.com

 I ran each test independently for each block size.

 iozone -I -i 0 -i 1 -i 2 -r 4k -s 1G

 In order: -I to specify direct I/O, -i 0 to specify write/rewrite, -i 1 to
 specify read/re-read, -i 2 to specify random read/write, -r to specify block
 size, -s to specify file size.

 The report is pretty thorough. What I put in the email was different, I
 just took the throughput from each test and put it into a table outside of
 the full report.


Re: CloudStack 4.1.0 + GlusterFS

2013-09-11 Thread Olivier Mauras
 

It's always interesting seeing people advising against a solution without
actually having tested it :)

Thanks John for sharing your results.
I guess that you're using XFS as the underlying FS? Your write-behind
options clearly show a performance improvement for write speed.
Have you tried with VM cache enabled? What about read performance?

I also hope that qemu native integration will be supported in CloudStack
soon; performance is definitely boosted :)

BTW, are you happy with the CloudStack KVM integration? Reading this list,
it seems that there are some needed features that don't work very well with
KVM - HA, VM snapshots, ...

Cheers,
Olivier

On 2013-09-11 16:38, John Skinner wrote:

 I currently have GlusterFS deployed into an 8-node KVM cluster running on
 CloudStack 4.1 for primary storage. Gluster is deployed on 28 1 TB drives
 across 2 separate storage appliances using a distributed-replicated volume
 with the replica count set to 2. The storage network is 10 Gb copper.
 