Sure, I could look at writing up a howto for Gluster. It is pretty 
straightforward. 

As for the other questions: yes, I am using XFS as the underlying filesystem 
for Gluster. As for KVM, I love it. I haven't really noticed any significant 
issues; what are these issues regarding HA and snapshotting? On 4.1 I haven't 
had any issues creating a snapshot of a KVM VM, turning it into a template, 
and launching it again. Live migration seems to work pretty well on KVM too; 
however, I haven't tested failing KVM nodes yet to see how CloudStack deals 
with the outage... I assume that it will just restart the VM on an available 
KVM node in the cluster. 
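
Off the top of my head, the core of such a howto is only a handful of 
commands. A minimal sketch (the hostnames, device, and volume name below are 
placeholders, not my actual configuration):

# on each storage node: create an XFS brick and mount it
mkfs.xfs -i size=512 /dev/sdb
mkdir -p /export/brick1
mount /dev/sdb /export/brick1

# from one node: form the trusted pool and build a replica-2 volume
gluster peer probe storage2
gluster volume create vmstore replica 2 \
    storage1:/export/brick1 storage2:/export/brick1
gluster volume start vmstore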


----- Original Message -----

From: "Chiradeep Vittal" <chiradeep.vit...@citrix.com> 
To: users@cloudstack.apache.org 
Sent: Thursday, September 12, 2013 11:25:49 AM 
Subject: Re: Cloustack 4.1.0 + GlusterFS 

John, care to write a HOWTO use GlusterFS in the wiki? 


On 9/11/13 5:27 PM, "John Skinner" <john.skin...@appcore.com> wrote: 

>I ran each test independently for each block size. 
> 
>iozone -I -i 0 -i 1 -i 2 -r 4k -s 1G 
> 
>In order: -I to specify direct I/O, -i 0 to specify write/rewrite, -i 1 to 
>specify read/re-read, -i 2 to specify random read/write, -r to specify 
>block size, -s to specify file size. 
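>
>For reference, the full sweep can be scripted; something like this (the 
>target file path is just an example, point it at your Gluster mount): 
>
>for bs in 4k 16k 64k 256k; do 
>    iozone -I -i 0 -i 1 -i 2 -r $bs -s 1G -f /mnt/gluster/iozone.tmp 
>done 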
> 
>The report is pretty thorough. What I put in the email was different; I 
>just took the throughput from each test and put it into a table outside 
>of the full report. 
> 
>----- Original Message ----- 
> 
>From: "Rafael Weingartner" <rafaelweingart...@gmail.com> 
>To: users@cloudstack.apache.org 
>Sent: Wednesday, September 11, 2013 4:16:23 PM 
>Subject: Re: Cloustack 4.1.0 + GlusterFS 
> 
>I have never used iozone before. How did you get that report? 
> 
>I tried: iozone -s 1024m -t 1 -R 
> 
>But the report was pretty different from yours. 
> 
> 
>2013/9/11 John Skinner <john.skin...@appcore.com> 
> 
>> I currently have GlusterFS deployed into an 8 node KVM cluster running on 
>> CloudStack 4.1 for primary storage. Gluster is deployed on 28 1TB drives 
>> across 2 separate storage appliances using a distributed-replicated volume 
>> with the replica set to 2. The storage network is 10Gb copper. 
>> 
>> These are the options I have configured for the volume in Gluster; most of 
>> them are from a Red Hat document on configuring Red Hat Enterprise Storage 
>> for VM hosting: 
>> 
>> 
>> 
>> performance.io-thread-count: 32 
>> performance.cache-size: 1024MB 
>> performance.write-behind-window-size: 5MB 
>> performance.write-behind: on 
>> network.remote-dio: on 
>> cluster.eager-lock: enable 
>> performance.stat-prefetch: off 
>> performance.io-cache: on 
>> performance.read-ahead: on 
>> performance.quick-read: on 
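>> 
>> For anyone wanting to replicate this, each option above is applied with 
>> "gluster volume set" (the volume name here is a placeholder): 
>> 
>> gluster volume set vmstore performance.io-thread-count 32 
>> gluster volume set vmstore network.remote-dio on 
>> gluster volume info vmstore    # shows the reconfigured options 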
>> 
>> Here are some of the numbers I was getting when benchmarking the storage 
>> from the KVM node directly (not a VM). 
>> 
>> The table below is in KB/s. The test is a single stream 1GB file utilizing 
>> direct I/O (no cache). I used iozone to run the benchmark. 
>> 
>>                 4k       16k      64k      256k 
>> Write           45729    182246   420908   567501 
>> Read            10189    37146    125315   218413 
>> Random Write    31983    113026   383848   508650 
>> Random Read     9859     37237    125218   229117 
>> 
>> In the above results, I have the volume mounted to each KVM host as a FUSE 
>> glusterfs file system. They are added to CloudStack as a shared mount 
>> point. In the future it would be great to utilize the GlusterFS qemu/libvirt 
>> integration with libgfapi so I could bypass FUSE altogether. However, that 
>> would require adding support for it to CloudStack. 
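>> 
>> For reference, the FUSE mount on each host is just (server and volume 
>> names are placeholders): 
>> 
>> mount -t glusterfs storage1:/vmstore /mnt/gluster 
>> 
>> With libgfapi, qemu would open the image directly instead, via a libvirt 
>> disk definition roughly like this (image name is hypothetical): 
>> 
>> <disk type='network' device='disk'> 
>>   <driver name='qemu' type='qcow2'/> 
>>   <source protocol='gluster' name='vmstore/vm-disk.qcow2'> 
>>     <host name='storage1' port='24007'/> 
>>   </source> 
>>   <target dev='vda' bus='virtio'/> 
>> </disk> 
>> 
>> but as mentioned, CloudStack would need code changes to generate that. 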
>> 
>> I have maybe 15 or so VMs running from the storage now and it is still 
>> pretty snappy. Need to do some more testing though and really get it 
>> loaded. 
>> 
>> ----- Original Message ----- 
> 
>> 
>> From: "Rafael Weingartner" <rafaelweingart...@gmail.com> 
>> To: users@cloudstack.apache.org 
>> Sent: Wednesday, September 11, 2013 8:48:07 AM 
>> Subject: Re: Cloustack 4.1.0 + GlusterFS 
>> 
>> Right now I can think of three main reasons: 
>> 
>> The first is performance (I do not know Gluster or whether its read and 
>> write speeds are satisfactory). If you can run a test, please post the 
>> results. 
>> 
>> Second, consistency. I do not know Gluster, but Swift, which is also a 
>> distributed storage system, is not strongly consistent, and they make that 
>> pretty clear on their page (http://docs.openstack.org/developer/swift/): 
>> 
>> "Swift is a highly available, distributed, eventually consistent 
>> object/blob store...". 
>> 
>> I would not accept storing my VM images on a file system that is only 
>> eventually consistent. 
>> 
>> Third, network. I haven't used this kind of FS, but I can imagine that it 
>> uses a lot of bandwidth to keep the data synchronized, managed, and 
>> secured. So managing the networking could be a pain. 
>> 
>> 
>> 
>> 2013/9/11 Olivier Mauras <oliv...@core-hosting.net> 
>> 
>> > 
>> > 
>> > Hi, 
>> > 
>> > For those who think that it's not a good idea, do you mind explaining 
>> > your point of view? GlusterFS seems like the only real alternative to a 
>> > highly priced SAN for primary storage... 
>> > 
>> > 
>> > Thanks, 
>> > Olivier 
>> > 
>> > On 2013-09-11 15:08, Rafael Weingartner wrote: 
>> > 
>> > > I totally agree with Tomasz, I do not think that using a distributed FS 
>> > > as primary storage is a good idea, but as a secondary it sounds 
>> > > interesting. 
>> > > 
>> > > But, of course, you can try ;) 
>> > > 
>> > > 2013/9/11 Shanker Balan <shanker.ba...@shapeblue.com> 
>> > > 
>> > >> On 11-Sep-2013, at 5:14 PM, Tomasz Zięba <t.a.zi...@gmail.com> wrote: 
>> > >> 
>> > >>> Hi, some time ago I tried to use GlusterFS as storage for CloudStack, 
>> > >>> but I noticed that CloudStack uses the default settings for the mount 
>> > >>> command. By default the mount command uses the UDP protocol, but 
>> > >>> GlusterFS works only over TCP. I think that if the CloudStack 
>> > >>> developers added "-o proto=tcp" to the code, GlusterFS should work. 
>> > >>> For example: /bin/mount -t nfs -o proto=tcp IP:/share /mnt/gluster/ 
>> > >>> If you are using Citrix XenServer you should mount the share and make 
>> > >>> it an SR. For CloudStack it is clear, because you should use the 
>> > >>> PreSetup option when creating primary storage. Personally, I doubt 
>> > >>> that using GlusterFS as primary storage is a good solution, but for 
>> > >>> secondary storage it should be very useful. 
>> > >> 
>> > >> And maybe as a Swift backend. 
>> > >> 
>> > >> -- 
>> > >> @shankerbalan M: +91 98860 60539 | O: +91 (80) 67935867 
>> > >> shanker.ba...@shapeblue.com | www.shapeblue.com | Twitter: @shapeblue 
>> > >> ShapeBlue Services India LLP, 22nd floor, Unit 2201A, World Trade 
>> > >> Centre, Bangalore - 560 055 
>> > > 
>> > > -- Rafael Weingartner 
>> > 
>> > 
>> > 
>> > 
>> 
>> 
>> 
>> -- 
>> Rafael Weingartner 
>> 
>> 
> 
> 
>-- 
>Rafael Weingartner 
> 

