It's always interesting to see people advising against a solution without
actually having tested it :)

Thanks John for sharing your results.
I guess you're using XFS as the underlying FS? Your write-behind options
clearly show a performance improvement for write speed.
Have you tried with the VM cache enabled? What about read performance?

I also hope that the native qemu integration will soon be supported in
CloudStack; performance is definitely boosted :)

BTW, are you happy with the CloudStack KVM integration? Reading this list,
it seems there are some "needed" features that don't work very well with
KVM - HA, VM snapshots, ...

Cheers,
Olivier

On 2013-09-11 16:38, John Skinner wrote:

> I currently have GlusterFS deployed into an 8 node KVM cluster running on
> CloudStack 4.1 for primary storage. Gluster is deployed on 28 1TB drives
> across 2 separate storage appliances using a distributed-replicated volume
> with the replica set to 2. The storage network is 10Gb copper.
> 
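John's exact commands aren't shown in the thread, but a distributed-replicated
volume with replica 2 is typically created along these lines; the hostnames,
volume name and brick paths below are placeholders, and the real volume would
list all 28 bricks in replica pairs:

  # bricks are grouped into replica sets in the order given, so each
  # consecutive pair below forms one mirrored pair across the two appliances
  gluster volume create primary-vol replica 2 \
      storage1:/bricks/b01 storage2:/bricks/b01 \
      storage1:/bricks/b02 storage2:/bricks/b02
  gluster volume start primary-vol
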
> These are the options I have configured for the volume in Gluster, most of
> them are from a Red Hat document on configuring Red Hat Enterprise Storage
> for VM hosting:
> 
> performance.io-thread-count: 32
> performance.cache-size: 1024MB
> performance.write-behind-window-size: 5MB
> performance.write-behind: on
> network.remote-dio: on
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: on
> performance.read-ahead: on
> performance.quick-read: on
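Each of those settings is applied with "gluster volume set"; a minimal sketch,
assuming the volume is named primary-vol (the name is a placeholder):

  gluster volume set primary-vol performance.io-thread-count 32
  gluster volume set primary-vol performance.cache-size 1024MB
  gluster volume set primary-vol performance.write-behind-window-size 5MB
  gluster volume set primary-vol performance.write-behind on
  gluster volume set primary-vol network.remote-dio on
  gluster volume set primary-vol cluster.eager-lock enable
  gluster volume set primary-vol performance.stat-prefetch off
  gluster volume set primary-vol performance.io-cache on
  gluster volume set primary-vol performance.read-ahead on
  gluster volume set primary-vol performance.quick-read on
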
> 
> Here are some of the numbers I was getting when benchmarking the storage
> from the KVM node directly (not a VM).
> 
> The table below is in KB/s. The test is a single-stream 1GB file using
> Direct I/O (no cache). I used iozone to run the benchmark.
> 
> Block size    Write     Read      Random Write   Random Read
> 4k              45729     10189          31983          9859
> 16k            182246     37146         113026         37237
> 64k            420908    125315         383848        125218
> 256k           567501    218413         508650        229117
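John's exact iozone command line isn't given; a run matching that description
(direct I/O, single stream, 1GB file) would look roughly like this, with the
record size varied per row and the file path a placeholder:

  # -I = use O_DIRECT, -s = file size, -r = record size,
  # -i 0/1/2 = sequential write, sequential read, random read/write
  iozone -I -s 1g -r 4k -i 0 -i 1 -i 2 -f /mnt/primary/iozone.tmp
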
> 
> In the above results, I have the volume mounted on each KVM host as a FUSE
> glusterfs file system; the mounts are added to CloudStack as a shared mount
> point. In the future it would be great to use the GlusterFS qemu/libvirt
> integration with libgfapi so I could bypass FUSE altogether. However, that
> would require adding support for it to CloudStack.
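For reference, the FUSE mount described above and the native libgfapi access
John is hoping for look roughly like the following; the hostname, volume name
and image path are placeholders:

  # FUSE mount on each KVM host (added to CloudStack as a shared mount point)
  mount -t glusterfs storage1:/primary-vol /mnt/primary

  # native libgfapi access, e.g. creating a disk image directly on the volume
  # through a gluster:// URL (requires qemu built with GlusterFS support)
  qemu-img create -f qcow2 gluster://storage1/primary-vol/vm-disk.qcow2 20G
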
> 
> I have maybe 15 or so VMs running from the storage now and it is still
> pretty snappy. I need to do some more testing though and really get it
> loaded.
> 
> ----- Original Message -----
> 
> From: "Rafael Weingartner" <rafaelweingart...@gmail.com>
> To: users@cloudstack.apache.org
> Sent: Wednesday, September 11, 2013 8:48:07 AM
> Subject: Re: Cloustack 4.1.0 + GlusterFS
> 
> Right now I can think of three main reasons:
> 
> The first reason is performance (I do not know Gluster or whether its
> read and write speeds are satisfactory). If you can run a test, please
> post the results.
> 
> Second, consistency. I do not know Gluster, but Swift, which is also a
> distributed storage system, is not strongly consistent, and they make it
> pretty clear on their page (http://docs.openstack.org/developer/swift/):
> 
> "Swift is a highly available, distributed, eventually consistent
> object/blob store...".
> 
> I would not accept storing my VM images on a FS that is only eventually
> consistent.
> 
> Third, network. I haven't used this kind of FS, but I can imagine that it
> uses a lot of bandwidth to keep synchronizing, managing and securing the
> data, so managing the networking would be a pain.
> 
> 2013/9/11 Olivier Mauras <oliv...@core-hosting.net>
> 
>> Hi,
>> 
>> Those thinking that it's not a good idea, do you mind explaining your
>> point of view? GlusterFS seems like the only real alternative to a highly
>> priced SAN for the primary storage...
>> 
>> Thanks,
>> Olivier
>> 
>> On 2013-09-11 15:08, Rafael Weingartner wrote:
>> 
>>> I totally agree with Tomasz, I do not think that using a distributed FS
>>> as primary storage is a good idea, but as a secondary it sounds
>>> interesting. And maybe as a Swift backend.
> 
> -- 
> Rafael Weingartner

 
