Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-11 Thread Hieu LE
Thank you, Mike and Tim.

I will follow this guide to submit code ASAP.

https://cwiki.apache.org/confluence/display/CLOUDSTACK/Git



On Tue, Jun 10, 2014 at 4:33 AM, Mike Tutkowski 
mike.tutkow...@solidfire.com wrote:

 Yes, I was going to mention what Tim said about using the term XenServer
 instead of Xen as Tim has done a bunch of work recently to separate the
 two.

 I made a few changes in your Wiki when I saw a reference to Xen instead
 of to XenServer.


Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-09 Thread Mike Tutkowski
Thanks, Hieu!

I have reviewed your design (making only minor changes to your Wiki).

Please feel free to have me review your code when you are ready.

Also, do you have a plan for integration testing? It would be great if you
could update your Wiki page to include what your plans are in this regard.

Thanks!
Mike


On Mon, Jun 9, 2014 at 4:24 AM, Hieu LE hieul...@gmail.com wrote:

 Hi guys,

 I have updated this proposal wiki[1] and included diagrams for VM
 migration, volume migration and snapshots.

 Please review and give feedback.

 [1]:

 https://cwiki.apache.org/confluence/display/CLOUDSTACK/Golden+Primary+Storage



Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-09 Thread Tim Mackey
Hieu,

I made a couple of minor edits to your design to ensure everything is
XenServer based.  If you haven't done so already, please also fetch the
most recent master and base off of that.  I refactored the old Xen plugin
into a XenServer specific one since Xen Project isn't currently supported,
and files have moved.  Also please ensure you don't use the term Xen in
your code/docs to avoid any future confusion when the Xen Project work
starts to materialize.

Looking forward to seeing your work!!

-tim


On Mon, Jun 9, 2014 at 4:31 PM, Mike Tutkowski mike.tutkow...@solidfire.com
 wrote:

 Thanks, Hieu!

 I have reviewed your design (making only minor changes to your Wiki).

 Please feel free to have me review your code when you are ready.

 Also, do you have a plan for integration testing? It would be great if you
 could update your Wiki page to include what your plans are in this regard.

 Thanks!
 Mike




Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-09 Thread Mike Tutkowski
Yes, I was going to mention what Tim said about using the term XenServer
instead of Xen as Tim has done a bunch of work recently to separate the
two.

I made a few changes in your Wiki when I saw a reference to Xen instead
of to XenServer.


On Mon, Jun 9, 2014 at 2:53 PM, Tim Mackey tmac...@gmail.com wrote:

 Hieu,

 I made a couple of minor edits to your design to ensure everything is
 XenServer based.  If you haven't done so already, please also fetch the
 most recent master and base off of that.  I refactored the old Xen plugin
 into a XenServer specific one since Xen Project isn't currently supported,
 and files have moved.  Also please ensure you don't use the term Xen in
 your code/docs to avoid any future confusion when the Xen Project work
 starts to materialize.

 Looking forward to seeing your work!!

 -tim



Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-06 Thread Hieu LE
Hi Mike,

Done - I have added a new FAQ section.

In addition, I have tested that a volume can take a snapshot, that a volume
can be created from that snapshot, and that it attaches back to a VM
normally. Currently I am testing volume migration.


On Fri, Jun 6, 2014 at 11:11 AM, Mike Tutkowski 
mike.tutkow...@solidfire.com wrote:

 Hi Hieu,

 Would you be able to place these questions and answers in your design doc
 so that we can more easily track them?

 Thanks!
 Mike



Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-06 Thread Todd Pigram
Sorry - I thought you were, based off the link you provided in this reply.

In our case, we are using CloudStack integrated into a VDI solution to
provide the pooled VM type[1]. So maybe my approach can bring a better UX
for users, with lower boot time ...

A short summary of the design changes follows:
- A VM will be deployed with golden primary storage if the primary storage
is marked golden and the VM's template is also marked as golden.
- Choose the best deploy destination for both the golden primary storage
and the normal root-volume primary storage. The chosen host can also access
both storage pools.
- A new XenServer plug-in for modifying the VHD parent id.

Is there some place for me to submit my design and code? Can I write a new
proposal in the CS wiki?

[1]:
http://support.citrix.com/proddocs/topic/xendesktop-rho/cds-choose-scheme-type-rho.html
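For the last design bullet (the plug-in that modifies the VHD parent id): on XenServer the parent pointer of a VHD can be rewritten with `vhd-util modify -p`. A minimal sketch of how such a plug-in might build that call - the SR paths, file names and the helper function are illustrative assumptions, not CloudStack code:

```python
# Hypothetical helper for the proposed XenServer plug-in: repoint a child
# VHD (the ROOT volume on the normal primary storage) at the golden parent
# VHD (the template on the SSD primary storage). Paths below are made up.

def build_reparent_command(child_vhd, golden_parent_vhd):
    """Build the vhd-util invocation that rewrites the child's parent pointer."""
    return ["vhd-util", "modify", "-n", child_vhd, "-p", golden_parent_vhd]

cmd = build_reparent_command(
    "/var/run/sr-mount/root-sr-uuid/root-volume.vhd",
    "/var/run/sr-mount/golden-sr-uuid/golden-template.vhd",
)
print(" ".join(cmd))
```

On a real host this command would run in dom0 (e.g. via the XenServer plug-in framework) after the cloned VHD lands on the ROOT SR; error handling and SR rescans are omitted here.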
 


On Thu, Jun 5, 2014 at 11:55 PM, Hieu LE hieul...@gmail.com wrote:

 Hi Todd,


 On Fri, Jun 6, 2014 at 9:17 AM, Todd Pigram t...@toddpigram.com wrote:

  Hieu,
 
  I assume you are using MCS for your golden image? What version of XD?
  Given you are using pooled desktops, have you thought about using a PVS
  BDM iso and mounting it within your 1000 VMs? This way you can stagger
  reboots via PVS console or Studio. This would require a change to your
  delivery group.
 
 
 Sorry, but I did not use MCS or XenDesktop in my company :-)


 
  On Thu, Jun 5, 2014 at 9:28 PM, Mike Tutkowski 
  mike.tutkow...@solidfire.com wrote:
 
   6) The copy_vhd_from_secondarystorage XenServer plug-in is not used
   when you're using XenServer + XS62ESP1 + XS62ESP1004. In that case,
   please refer to the copyTemplateToPrimaryStorage(CopyCommand) method
   in the Xenserver625StorageProcessor class.
 
 Thanks Mike, I will take note of that.


   
   On Thu, Jun 5, 2014 at 1:56 PM, Mike Tutkowski 
   mike.tutkow...@solidfire.com wrote:
  
    Other than going through a for loop and deploying VM after VM, I don't
    think CloudStack currently supports a bulk-VM-deploy operation.
   
    It would be nice if CS did so at some point in the future; however,
    that is probably a separate proposal from Hieu's.
   
   

Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-05 Thread Amit Das
Hi Hieu,

Would it be good to include a bulk version of this operation? In addition,
does XenServer support parallel execution of these operations?

Regards,
Amit
*CloudByte Inc.* http://www.cloudbyte.com/


On Thu, Jun 5, 2014 at 8:59 AM, Hieu LE hieul...@gmail.com wrote:

 Mike, Punith,

 Please review Golden Primary Storage proposal. [1]

 Thank you.

 [1]:
 https://cwiki.apache.org/confluence/display/CLOUDSTACK/Golden+Primary+Storage


 On Wed, Jun 4, 2014 at 10:32 PM, Mike Tutkowski 
 mike.tutkow...@solidfire.com wrote:

 Daan helped out with this. You should be good to go now.


 On Tue, Jun 3, 2014 at 8:50 PM, Hieu LE hieul...@gmail.com wrote:

  Hi Mike,
 
  Could you please give me edit/create permission on the ASF Jira/Wiki
  Confluence? I cannot add a new Wiki page.
 
  My Jira ID: hieulq
  Wiki: hieulq89
  Review Board: hieulq
 
  Thanks !
 
 
  On Wed, Jun 4, 2014 at 9:17 AM, Mike Tutkowski 
  mike.tutkow...@solidfire.com
   wrote:
 
   Hi,
  
   Yes, please feel free to add a new Wiki page for your design.
  
   Here is a link to applicable design info:
  
   https://cwiki.apache.org/confluence/display/CLOUDSTACK/Design
  
   Also, feel free to ask more questions and have me review your design.
  
   Thanks!
   Mike
  
  
   On Tue, Jun 3, 2014 at 7:29 PM, Hieu LE hieul...@gmail.com wrote:
  
    Hi Mike,
   
    You are right, performance will decrease over time because write IOPS
    will always end up on the slower storage pool.
   
    In our case, we are using CloudStack integrated into a VDI solution to
    provide the pooled VM type[1]. So maybe my approach can bring a better
    UX for users, with lower boot time ...
   
    A short summary of the design changes follows:
    - A VM will be deployed with golden primary storage if the primary
    storage is marked golden and the VM's template is also marked as golden.
    - Choose the best deploy destination for both the golden primary
    storage and the normal root-volume primary storage. The chosen host can
    also access both storage pools.
    - A new XenServer plug-in for modifying the VHD parent id.
   
    Is there some place for me to submit my design and code? Can I write a
    new proposal in the CS wiki?
   
    [1]:
    http://support.citrix.com/proddocs/topic/xendesktop-rho/cds-choose-scheme-type-rho.html
   
   
    On Mon, Jun 2, 2014 at 9:04 PM, Mike Tutkowski 
    mike.tutkow...@solidfire.com wrote:
   
     It is an interesting idea. If the constraints you face at your company
     can be corrected somewhat by implementing this, then you should go for
     it.
    
     It sounds like writes will be placed on the slower storage pool. This
     means as you update OS components, those updates will be placed on the
     slower storage pool. As such, your performance is likely to somewhat
     decrease over time (as more and more writes end up on the slower
     storage pool).
    
     That may be OK for your use case(s), though.
    
     You'll have to update the storage-pool orchestration logic to take
     this new scheme into account.
    
     Also, we'll have to figure out how this ties into storage tagging (if
     at all).
    
     I'd be happy to review your design and code.


     On Mon, Jun 2, 2014 at 1:54 AM, Hieu LE hieul...@gmail.com wrote:
    
      Thanks Mike and Punith for the quick reply.
     
      Both solutions you gave here are absolutely correct. But as I
      mentioned in the first email, I want another, better solution for the
      current infrastructure at my company.
     
      Creating a high-IOPS primary storage using storage tags is good, but
      it will waste a lot of disk capacity. For example, if I only have a
      1TB SSD and deploy 100 VMs from a 100GB template.
     
      So I am thinking about a solution where a high-IOPS primary storage
      only stores the golden image (master image), and the child image of
      each VM is stored in another normal (NFS, iSCSI...) storage. In this
      case, with a 1TB SSD primary storage I can store as many golden
      images as I need.
     
      I have also tested this with a 256 GB SSD mounted on XenServer 6.2.0,
      with 2TB local storage 1RPM and 6TB NFS shared storage over a 1GB
      network. The IOPS of VMs which have the golden image (master image)
      on SSD and the child image on NFS increased more than 30-40% compared
      with VMs which have both the golden image and the child image on NFS.
      The boot time of each VM also decreased ('cause the golden image on
      SSD only reduced READ IOPS).
     
      Do you think this approach is OK ?
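The capacity argument in the quoted message can be sanity-checked with quick arithmetic (the template size, VM count and SSD size are taken from the example above):

```python
# Full clones: every VM carries the whole 100GB image, so a 1TB SSD cannot
# hold 100 of them. Golden layout: the SSD holds only parent (golden)
# images, while the child delta VHDs live on cheaper NFS/iSCSI storage.

TEMPLATE_GB = 100
VM_COUNT = 100
SSD_GB = 1000  # the 1TB SSD from the example

full_clone_need_gb = VM_COUNT * TEMPLATE_GB   # 10000 GB - 10x the SSD
golden_images_on_ssd = SSD_GB // TEMPLATE_GB  # 10 golden images fit on SSD

print(full_clone_need_gb, golden_images_on_ssd)  # 10000 10
```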
 
 
      On Mon, Jun 2, 2014 at 12:50 PM, Mike Tutkowski 
      mike.tutkow...@solidfire.com wrote:
     
       Thanks, Punith - this is similar to what I was going to say.
      
       Any time a set of CloudStack volumes share IOPS from a common pool,
       you cannot guarantee IOPS to a given CloudStack volume at a 

Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-05 Thread Punith S
hi Hieu,

after going through your Golden Primary Storage proposal, from my
understanding you are creating an SSD golden PS for holding the parent
VHD (nothing but the template that got copied from secondary storage) and a
normal primary storage for the ROOT volumes (child VHDs) of the
corresponding VMs.

from the flowchart, I have the following questions:

1. since you are having a problem with slow boot time of the VMs, will the
booting of the VMs happen in the golden PS, i.e. while cloning?
   if so, the spawning of the VMs will always be fast.

but I see you are starting the VM after moving the cloned VHD to the
ROOT PS and pointing the child VHD to its parent VHD on the GOLDEN PS;
hence, there will be network traffic between these two
primary storages, which will obviously slow down the VMs' performance
forever.

2. what if someone removes the golden primary storage containing the
parent VHD (template) that all the child VHDs in the root primary storage
point to?
   if so, all running VMs will crash immediately, since their child
VHDs' parent is removed.

thanks


On Thu, Jun 5, 2014 at 8:59 AM, Hieu LE hieul...@gmail.com wrote:

 Mike, Punith,

 Please review Golden Primary Storage proposal. [1]

 Thank you.

 [1]:
 https://cwiki.apache.org/confluence/display/CLOUDSTACK/Golden+Primary+Storage


 On Wed, Jun 4, 2014 at 10:32 PM, Mike Tutkowski 
 mike.tutkow...@solidfire.com wrote:

 Daan helped out with this. You should be good to go now.


 On Tue, Jun 3, 2014 at 8:50 PM, Hieu LE hieul...@gmail.com wrote:

  Hi Mike,
 
  Could you please give edit/create permission on ASF Jira/Wiki
 confluence ?
  I can not add a new Wiki page.
 
  My Jira ID: hieulq
  Wiki: hieulq89
  Review Board: hieulq
 
  Thanks !
 
 
  On Wed, Jun 4, 2014 at 9:17 AM, Mike Tutkowski 
  mike.tutkow...@solidfire.com
   wrote:
 
   Hi,
  
   Yes, please feel free to add a new Wiki page for your design.
  
   Here is a link to applicable design info:
  
   https://cwiki.apache.org/confluence/display/CLOUDSTACK/Design
  
   Also, feel free to ask more questions and have me review your design.
  
   Thanks!
   Mike
  
  
   On Tue, Jun 3, 2014 at 7:29 PM, Hieu LE hieul...@gmail.com wrote:
  
 Hi Mike,

 You are right, performance will decrease over time because write IOPS
 will always end up on the slower storage pool.

 In our case, we are using CloudStack integrated in a VDI solution to
 provide the pooled VM type[1]. So maybe my approach can bring a better
 UX for users, with a lower boot time...

 The short changes in the design are as follows:
 - A VM will be deployed with golden primary storage if a primary storage
 is marked golden and the VM's template is also marked as golden.
 - Choose the best deploy destination for both the golden primary storage
 and the normal root-volume primary storage. The chosen host must also be
 able to access both storage pools.
 - A new XenServer plug-in for modifying the VHD parent id.

 Is there some place for me to submit my design and code? Can I write a
 new proposal in the CS wiki?

 [1]:
 http://support.citrix.com/proddocs/topic/xendesktop-rho/cds-choose-scheme-type-rho.html
   
   

Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-05 Thread Tim Mackey
Hieu,

If I understand the objective correctly, you are trying to reduce the
IO associated with a desktop start-of-day boot storm.  In your
proposal, you're effectively wanting to move the CloudStack secondary
storage concept to include a locally attached storage device which is
SSD based.  While that seems viable in concept, in practice with
XenServer your proposed flow could cause a bunch of issues.  Some of
the challenges I see include:

- XenServer hosts with multiple independent local storage are very
rare.  See this KB article covering how to create such storage:
http://support.citrix.com/article/CTX121313
- By default local storage is LVM based, but to enable thin
provisioning you'll want EXT3.  See this blog for how to convert to
EXT3: 
http://thinworldblog.blogspot.com/2011/08/enabling-thin-provisioning-on-existing.html
- It seems like you're planning on using Storage XenMotion to move the
VHD from the golden primary storage to normal primary storage, but
that's going to move the entire VHD chain and it will do so over the
network.  Here's a blog article describing a bit about how it works:
http://blogs.citrix.com/2012/08/24/storage_xenmotion/.  I'm reasonably
certain the design parameters didn't include local-local without
network.
- If someone wants to take a snapshot of the VM, will that snapshot
then go to normal secondary storage or back to the golden master?
- To Punith's point, I *think* VM start will occur post-clone, so the
clone will consume network bandwidth, and the VM will then start on
local storage.

The big test I'd like to see first would be creating the golden master
and from it creating a few VMs.  Then once you have those VMs run some
normal XenServer operations like moving a VM within a pool, moving
that VM across pools and assigning a home server.  If those pass, then
things might work out, but if those fail then you'll need to sort
things out within the XenServer code first. If these basic tests do
work, then I'd look at the network usage to see if things did indeed
get better.

-tim
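The "XenServer plug-in for modifying VHD parent id" that Hieu's design calls for would presumably perform the re-parenting step discussed here: pointing the child VHD on the ROOT SR at the parent VHD on the golden SR. A minimal sketch, assuming the plugin shells out to `vhd-util modify` (the function names, paths, and plugin shape below are illustrative assumptions, not the actual CloudStack/XenServer plug-in API):

```python
import subprocess

def build_reparent_cmd(child_vhd, parent_vhd):
    """Build the vhd-util invocation that rewrites a child's parent locator."""
    return ["vhd-util", "modify", "-n", child_vhd, "-p", parent_vhd]

def reparent(child_vhd, parent_vhd, runner=subprocess.check_call):
    # runner is injectable so the command can be exercised without a
    # XenServer host; in production this would run on dom0 as root.
    runner(build_reparent_cmd(child_vhd, parent_vhd))
```

Whether this survives the pool operations above (migration, home-server assignment) is exactly what the proposed basic tests would need to confirm.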

On Thu, Jun 5, 2014 at 8:11 AM, Punith S punit...@cloudbyte.com wrote:
 hi Hieu,

 after going through your Golden Primary Storage proposal, from my
 understanding you are creating an SSD golden PS for holding the parent
 VHD (nothing but the template that got copied from secondary storage) and
 a normal primary storage for the ROOT volumes (child VHDs) of the
 corresponding VMs.

 from the flowchart, I have the following questions:

 1. since you are having a problem with slow boot time of the VMs, will the
 booting of the VMs happen in the golden PS, i.e. while cloning?
    if so, the spawning of the VMs will always be fast.

 but I see you are starting the VM after moving the cloned VHD to the
 ROOT PS and pointing the child VHD to its parent VHD on the GOLDEN PS;
 hence, there will be network traffic between these two
 primary storages, which will obviously slow down the VMs' performance
 forever.

 2. what if someone removes the golden primary storage containing the
 parent VHD (template) that all the child VHDs in the root primary storage
 point to?
    if so, all running VMs will crash immediately, since their child
 VHDs' parent is removed.

 thanks



 On Thu, Jun 5, 2014 at 8:59 AM, Hieu LE hieul...@gmail.com wrote:

 Mike, Punith,

 Please review Golden Primary Storage proposal. [1]

 Thank you.

 [1]:
 https://cwiki.apache.org/confluence/display/CLOUDSTACK/Golden+Primary+Storage



Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-05 Thread Mike Tutkowski
Hi Hieu,

Thanks for sending a link to your proposal.

Some items we should consider:

1) We need to make sure that CloudStack does not delete your golden
template in the background. As it stands today with XenServer, if a
template resides on a primary storage and no VDI is referencing it, the
template will eventually get deleted. We would need to make sure that -
even though another VDI on another SR is referencing your golden template -
it does not get deleted (i.e. that CloudStack understands not to delete the
template due to this new use case). Also, the reverse should still work: if
no VDI on any SR is referencing this template, the template should get
deleted in a similar fashion to how this works today.

2) Is it true that you are proposing that a given primary storage be
dedicated to hosting only golden templates? In other words, it cannot also
be used for traditional template/root disks?

3) I recommend you diagram how VM migration would work in this new model.

4) I recommend you diagram how a VM snapshot and backup/restore would work
in this new model.

Thanks!
Mike
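Item 1 above amounts to a cross-SR reference check before garbage collection. A minimal sketch under assumed data shapes (XenServer exposes a VHD's parent via the VDI's `sm_config`; the plain dicts below stand in for real XenAPI calls, so the field names are assumptions for illustration):

```python
def template_in_use(template_uuid, vdis):
    """True if any VDI on any SR still chains back to the template."""
    return any(v.get("sm_config", {}).get("vhd-parent") == template_uuid
               for v in vdis)

def eligible_for_gc(template_uuid, vdis):
    # CloudStack's background cleanup must skip golden templates that are
    # still referenced from another SR, and may delete unreferenced ones,
    # mirroring today's behavior for same-SR references.
    return not template_in_use(template_uuid, vdis)
```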



On Thu, Jun 5, 2014 at 6:11 AM, Punith S punit...@cloudbyte.com wrote:

 hi Hieu,

 after going through your Golden Primary Storage proposal, from my
 understanding you are creating an SSD golden PS for holding the parent
 VHD (nothing but the template that got copied from secondary storage) and
 a normal primary storage for the ROOT volumes (child VHDs) of the
 corresponding VMs.

 from the flowchart, I have the following questions:

 1. since you are having a problem with slow boot time of the VMs, will the
 booting of the VMs happen in the golden PS, i.e. while cloning?
    if so, the spawning of the VMs will always be fast.

 but I see you are starting the VM after moving the cloned VHD to the
 ROOT PS and pointing the child VHD to its parent VHD on the GOLDEN PS;
 hence, there will be network traffic between these two
 primary storages, which will obviously slow down the VMs' performance
 forever.

 2. what if someone removes the golden primary storage containing the
 parent VHD (template) that all the child VHDs in the root primary storage
 point to?
    if so, all running VMs will crash immediately, since their child
 VHDs' parent is removed.

 thanks


 On Thu, Jun 5, 2014 at 8:59 AM, Hieu LE hieul...@gmail.com wrote:

 Mike, Punith,

 Please review Golden Primary Storage proposal. [1]

 Thank you.

 [1]:
 https://cwiki.apache.org/confluence/display/CLOUDSTACK/Golden+Primary+Storage



Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-05 Thread Mike Tutkowski
5) We need to understand how this new model impacts storage tagging, if at
all.


On Thu, Jun 5, 2014 at 12:50 PM, Mike Tutkowski 
mike.tutkow...@solidfire.com wrote:

 Hi Hieu,

 Thanks for sending a link to your proposal.

 Some items we should consider:

 1) We need to make sure that CloudStack does not delete your golden
 template in the background. As it stands today with XenServer, if a
 template resides on a primary storage and no VDI is referencing it, the
 template will eventually get deleted. We would need to make sure that -
 even though another VDI on another SR is referencing your golden template -
 it does not get deleted (i.e. that CloudStack understands not to delete the
 template due to this new use case). Also, the reverse should still work: if
 no VDI on any SR is referencing this template, the template should get
 deleted in a similar fashion to how this works today.

 2) Is it true that you are proposing that a given primary storage be
 dedicated to hosting only golden templates? In other words, it cannot also
 be used for traditional template/root disks?

 3) I recommend you diagram how VM migration would work in this new model.

 4) I recommend you diagram how a VM snapshot and backup/restore would work
 in this new model.

 Thanks!
 Mike




Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-05 Thread Mike Tutkowski
To follow up on the storage tagging question I raised, I think it could
work this way:

The storage tag field could still be employed and it would be in reference
to the primary storage that houses the root disks (and VM snapshots)...not
in reference to the golden primary storage that is used to house the golden
templates.

When executing a Compute Offering with a storage tag of, say, XYZ, the
orchestration logic would have to find a primary storage tagged as XYZ that
is accessible to a given host AND that host would have to also be able to
access a golden primary storage where the golden image could be placed
(specifying a storage tag or tags for a golden primary storage probably
would not be useful).
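The selection rule described above can be sketched as a simple filter: pick a (root pool, golden pool) pair such that the root pool carries the requested storage tag and the candidate host can reach both. The names and pool shapes below are hypothetical, not CloudStack's actual StoragePoolAllocator API:

```python
def pick_pools(host, pools, tag):
    # Root-disk pools must carry the Compute Offering's storage tag and be
    # visible to the host; golden pools only need to be visible to the host
    # (tags on golden primary storage are not consulted).
    root = [p for p in pools
            if tag in p["tags"] and not p.get("golden") and host in p["hosts"]]
    golden = [p for p in pools
              if p.get("golden") and host in p["hosts"]]
    if root and golden:
        return root[0], golden[0]
    return None  # this host cannot satisfy both requirements
```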


On Thu, Jun 5, 2014 at 12:51 PM, Mike Tutkowski 
mike.tutkow...@solidfire.com wrote:

 5) We need to understand how this new model impacts storage tagging, if at
 all.



Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-05 Thread Mike Tutkowski
Other than going through a for loop and deploying VM after VM, I don't
think CloudStack currently supports a bulk-VM-deploy operation.

It would be nice if CS did so at some point in the future; however, that is
probably a separate proposal from Hieu's.


On Thu, Jun 5, 2014 at 12:13 AM, Amit Das amit@cloudbyte.com wrote:

 Hi Hieu,

 Will it be good to include bulk operation of this feature? In addition,
 does Xen support parallel execution of these operations ?

 Regards,
 Amit
 *CloudByte Inc.* http://www.cloudbyte.com/


 On Thu, Jun 5, 2014 at 8:59 AM, Hieu LE hieul...@gmail.com wrote:

  Mike, Punith,
 
  Please review Golden Primary Storage proposal. [1]
 
  Thank you.
 
  [1]:
 
 https://cwiki.apache.org/confluence/display/CLOUDSTACK/Golden+Primary+Storage
 
 
Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-05 Thread Mike Tutkowski
6) The copy_vhd_from_secondarystorage XenServer plug-in is not used when
you're using XenServer + XS62ESP1 + XS62ESP1004. In that case, please refer
to copyTemplateToPrimaryStorage(CopyCommand) method in the
Xenserver625StorageProcessor class.


On Thu, Jun 5, 2014 at 1:56 PM, Mike Tutkowski mike.tutkow...@solidfire.com
 wrote:

 Other than going through a for loop and deploying VM after VM, I don't
 think CloudStack currently supports a bulk-VM-deploy operation.

 It would be nice if CS did so at some point in the future; however, that
 is probably a separate proposal from Hieu's.


   Both solutions you gave here are absolutely correct. But as I
mentioned
  in
   the first email, I want another better solution for current
  infrastructure
   at my company.
  
   Creating a high IOPS primary storage using storage tags is
 good
  but
it
  will
   be very waste of disk capacity. For example, if I only have
 1TB
  SSD
and
   deploy 100 VM from a 100GB template.
  
   So I think about a solution where a high IOPS primary storage
  can
only
   store golden image (master image), and a child image of this
 VM
   will
be
   stored in another normal (NFS, ISCSI...) storage. In this
 case,
   with
 1TB
   SSD Primary Storage I can store as much golden image as I
 need.
  
   I have also tested it 

Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-05 Thread Todd Pigram
Hieu,

I assume you are using MCS for your golden image? What version of XD? Given
you are using pooled desktops, have you thought about using a PVS BDM ISO
and mounting it within your 1,000 VMs? That way you can stagger reboots via
the PVS console or Studio. This would require a change to your delivery group.


On Thu, Jun 5, 2014 at 9:28 PM, Mike Tutkowski mike.tutkow...@solidfire.com
 wrote:

 6) The copy_vhd_from_secondarystorage XenServer plug-in is not used when
 you're using XenServer + XS62ESP1 + XS62ESP1004. In that case, please refer
 to copyTemplateToPrimaryStorage(CopyCommand) method in the
 Xenserver625StorageProcessor class.


 On Thu, Jun 5, 2014 at 1:56 PM, Mike Tutkowski 
 mike.tutkow...@solidfire.com
  wrote:

  Other than going through a for loop and deploying VM after VM, I don't
  think CloudStack currently supports a bulk-VM-deploy operation.
 
  It would be nice if CS did so at some point in the future; however, that
  is probably a separate proposal from Hieu's.
 
 
  On Thu, Jun 5, 2014 at 12:13 AM, Amit Das amit@cloudbyte.com
 wrote:
 
  Hi Hieu,
 
  Will it be good to include bulk operation of this feature? In addition,
  does Xen support parallel execution of these operations ?
 
  Regards,
  Amit
  *CloudByte Inc.* http://www.cloudbyte.com/
 
 
  On Thu, Jun 5, 2014 at 8:59 AM, Hieu LE hieul...@gmail.com wrote:
 
   Mike, Punith,
  
   Please review Golden Primary Storage proposal. [1]
  
   Thank you.
  
   [1]:
  
 
 https://cwiki.apache.org/confluence/display/CLOUDSTACK/Golden+Primary+Storage
  
  
   On Wed, Jun 4, 2014 at 10:32 PM, Mike Tutkowski 
   mike.tutkow...@solidfire.com wrote:
  
   Daan helped out with this. You should be good to go now.
  
  
   On Tue, Jun 3, 2014 at 8:50 PM, Hieu LE hieul...@gmail.com wrote:
  
Hi Mike,
   
Could you please give edit/create permission on ASF Jira/Wiki
   confluence ?
I can not add a new Wiki page.
   
My Jira ID: hieulq
Wiki: hieulq89
Review Board: hieulq
   
Thanks !
   
   
On Wed, Jun 4, 2014 at 9:17 AM, Mike Tutkowski 
mike.tutkow...@solidfire.com
 wrote:
   
 Hi,

 Yes, please feel free to add a new Wiki page for your design.

 Here is a link to applicable design info:

 https://cwiki.apache.org/confluence/display/CLOUDSTACK/Design

 Also, feel free to ask more questions and have me review your
  design.

 Thanks!
 Mike


 On Tue, Jun 3, 2014 at 7:29 PM, Hieu LE hieul...@gmail.com
  wrote:

  Hi Mike,
 
  You are right, performance will be decreased over time because
   writes
 IOPS
  will always end up on slower storage pool.
 
  In our case, we are using CloudStack integrated in VDI solution
  to
 provived
  pooled VM type[1]. So may be my approach can bring better UX
 for
   user
 with
  lower bootime ...
 
  A short change in design are followings
  - VM will be deployed with golden primary storage if primary
   storage is
  marked golden and this VM template is also marked as golden.
  - Choosing the best deploy destionation for both golden primary
   storage
 and
  normal root volume primary storage. Chosen host can also access
  both
  storage pools.
  - New Xen Server plug-in for modifying VHD parent id.
 
  Is there some place for me to submit my design and code. Can I
   write a
 new
  proposal in CS wiki ?
 
  [1]:
 
 

   
  
 
 http://support.citrix.com/proddocs/topic/xendesktop-rho/cds-choose-scheme-type-rho.html
 
 
  On Mon, Jun 2, 2014 at 9:04 PM, Mike Tutkowski 
  mike.tutkow...@solidfire.com
   wrote:
 
   It is an interesting idea. If the constraints you face at
 your
company
  can
   be corrected somewhat by implementing this, then you should
 go
  for
it.
  
   It sounds like writes will be placed on the slower storage
  pool.
   This
  means
   as you update OS components, those updates will be placed on
  the
slower
   storage pool. As such, your performance is likely to somewhat
decrease
  over
   time (as more and more writes end up on the slower storage
  pool).
  
   That may be OK for your use case(s), though.
  
   You'll have to update the storage-pool orchestration logic to
  take
this
  new
   scheme into account.
  
   Also, we'll have to figure out how this ties into storage
  tagging
   (if
 at
   all).
  
   I'd be happy to review your design and code.
  
  
   On Mon, Jun 2, 2014 at 1:54 AM, Hieu LE hieul...@gmail.com
   wrote:
  
Thanks Mike and Punith for quick reply.
   
Both solutions you gave here are absolutely correct. But
 as I
 mentioned
   in
the first email, I want another better solution for current
   infrastructure
at my company.
   
Creating a high IOPS 

Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-05 Thread Hieu LE
Hi guys,

Hmm, lots of problems and questions; I will try to resolve them one by one.


On Fri, Jun 6, 2014 at 1:51 AM, Mike Tutkowski mike.tutkow...@solidfire.com
 wrote:

 5) We need to understand how this new model impacts storage tagging, if at
 all.


 On Thu, Jun 5, 2014 at 12:50 PM, Mike Tutkowski 
 mike.tutkow...@solidfire.com wrote:

 Hi Hieu,

 Thanks for sending a link to your proposal.

 Some items we should consider:

 1) We need to make sure that CloudStack does not delete your golden
 template in the background. As it stands today with XenServer, if a
 template resides on a primary storage and no VDI is referencing it, the
 template will eventually get deleted. We would need to make sure that -
 even though another VDI on another SR is referencing your golden template -
 it does not get deleted (i.e. that CloudStack understands not to delete the
 template due to this new use case). Also, the reverse should still work: if
 no VDI on any SR is referencing this template, the template should get
 deleted in a similar fashion to how this works today.


I have tested this and can confirm that CloudStack did not delete the golden
template in the background, and vice versa. Just clone a VDI from any VM, move
(copy) this VDI to another SR under a new UUID name, and point it to the parent
image in the different SR. After starting/stopping the VM, XenServer deleted
neither the parent nor the child image.
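The repointing step described above can be sketched as follows. This is a minimal, hypothetical helper assuming the vhd-util tool shipped in XenServer dom0 and its "modify" subcommand; the binary path and function names are illustrative, not CloudStack code:

```python
import subprocess

VHD_UTIL = "/usr/bin/vhd-util"  # typical dom0 path; an assumption here


def build_reparent_cmd(child_vhd, golden_parent_vhd):
    """Build the vhd-util invocation that rewrites a child VHD's parent
    locator so it points at the golden image living on a different SR."""
    return [VHD_UTIL, "modify", "-n", child_vhd, "-p", golden_parent_vhd]


def reparent(child_vhd, golden_parent_vhd):
    # Must run in dom0, with both SRs plugged on this host.
    subprocess.check_call(build_reparent_cmd(child_vhd, golden_parent_vhd))
```

After repointing, the SR would still need a rescan so XAPI picks up the changed chain.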





2) Is it true that you are proposing that a given primary storage be
 dedicated to hosting only golden templates? In other words, it cannot also
 be used for traditional template/root disks?


Yes, because high-IOPS partitions such as SSDs will always have lower capacity
compared to normal storage solutions. So I think the golden primary storage
should be dedicated to storing only golden templates.




  3) I recommend you diagram how VM migration would work in this new model.

 4) I recommend you diagram how a VM snapshot and backup/restore would
 work in this new model.

 Thanks!
 Mike


Sure, I will diagram them soon.



 On Thu, Jun 5, 2014 at 6:11 AM, Punith S punit...@cloudbyte.com wrote:

 hi Hieu,

 after going through your Golden Primary Storage proposal, from my
 understanding you are creating an SSD golden PS for holding the parent
 VHD (nothing but the template that got copied from secondary storage) and a
 normal primary storage for the ROOT volumes (child VHDs) of the corresponding
 VMs.

 from the following flowchart , i have the following questions,

 1. since you are having problem with slow boot time of the vm's, will
 the booting of the vm's happen in golden PS, ie while cloning ?
  if so, the spawning of the vm's will be always fast .

 but i see you are starting the vm after moving the cloned vhd to the
 ROOT PS and pointing the child vhd to its parent vhd on the GOLDEN PS,
 hence , there will be a network traffic between these two
 primary storages, which will obviously slow down the vm's performance
 forever.


Yes, as Tim said, the VM will start in the post-clone process and consume
network traffic while booting. Based on the idea of XenServer IntelliCache,
while a VM is booting and running there will always be network traffic between
the shared storage (holding the base image) and the local storage (holding the
base-image cache). But instead of partially copying the base image from shared
storage to local storage and putting all READ/WRITE IOPS on local storage, I
*think* my approach is a little easier to customize and can perform better
than IntelliCache (because the IOPS are divided between the golden PS and the
normal PS).




 2. what if someone removes the golden primary storage containing the the
 parent VHD(template) where all the child VDH's in the root primary storage
 are been pointed to ?
if so, all vm's running will be crashed immediately. since its child
 vhd's parent is removed.

 thanks


Yes, all VMs running against this PS would crash, so I think this should become
a condition to check when someone wants to remove the golden primary storage.
I will take note of that problem.



 On Thu, Jun 5, 2014 at 8:59 AM, Hieu LE hieul...@gmail.com wrote:

 Mike, Punith,

 Please review Golden Primary Storage proposal. [1]

 Thank you.

 [1]:
 https://cwiki.apache.org/confluence/display/CLOUDSTACK/Golden+Primary+Storage


 On Wed, Jun 4, 2014 at 10:32 PM, Mike Tutkowski 
 mike.tutkow...@solidfire.com wrote:

 Daan helped out with this. You should be good to go now.


 On Tue, Jun 3, 2014 at 8:50 PM, Hieu LE hieul...@gmail.com wrote:

  Hi Mike,
 
  Could you please give edit/create permission on ASF Jira/Wiki
 confluence ?
  I can not add a new Wiki page.
 
  My Jira ID: hieulq
  Wiki: hieulq89
  Review Board: hieulq
 
  Thanks !
 
 
  On Wed, Jun 4, 2014 at 9:17 AM, Mike Tutkowski 
  mike.tutkow...@solidfire.com
   wrote:
 
   Hi,
  
   Yes, please feel free to add a new Wiki page for your design.
  
   Here is a link to applicable design info:
  
   https://cwiki.apache.org/confluence/display/CLOUDSTACK/Design
  
   Also, feel free to ask more 

Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-05 Thread Hieu LE
Hi Tim,


On Fri, Jun 6, 2014 at 1:39 AM, Tim Mackey tmac...@gmail.com wrote:

 Hieu,

 If I understand the objective correctly, you are trying to reduce the
 IO associated with a desktop start of day boot storm.  In your
 proposal, you're effectively wanting to move the CloudStack secondary
 storage concept to include a locally attached storage device which is
 SSD based.  While that seems viable in concept, in practice with
 XenServer your proposed flow could cause a bunch of issues.  Some of
 the challenges I see include:

 - XenServer hosts with multiple independent local storage are very
 rare.  See this KB article covering how to create such storage:
 http://support.citrix.com/article/CTX121313


Yes, I have been following this guide to attach new SSD storage to a XenServer
host.



 - By default local storage is LVM based, but to enable thin
 provisioning you'll want EXT3.  See this blog for how to convert to
 EXT3:
 http://thinworldblog.blogspot.com/2011/08/enabling-thin-provisioning-on-existing.html


Thank you. I did not test with an LVM storage repository, so I will give this
approach a try with an LVM-based repo.



 - It seems like you're planning on using Storage XenMotion to move the
 VHD from the golden primary storage to normal primary storage, but
 that's going to move the entire VHD chain and it will do so over the
 network.  Here's a blog article describing a bit about how it works:
 http://blogs.citrix.com/2012/08/24/storage_xenmotion/.  I'm reasonably
 certain the design parameters didn't include local-local without
 network.


No, I did not use Storage XenMotion to move the VHD from the golden PS to the
normal PS, just a simple Linux cp command to avoid moving the whole VHD chain.
This idea is borrowed from the OpenStack XAPI plugins, which import a VHD from
the staging area into an SR:
https://github.com/openstack/nova/blob/master/plugins/xenserver/xenapi/etc/xapi.d/plugins/utils.py
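That single-file copy can be sketched as below. This is a hypothetical helper, not the OpenStack plugin itself; the renaming-to-a-fresh-UUID convention follows how VHDs are named inside file-based SRs, and the follow-up steps are assumptions:

```python
import os
import shutil
import uuid


def copy_child_vhd(src_vhd, dst_sr_mount):
    """Copy only the child (leaf) VHD into the destination SR's mount
    point under a fresh UUID name, leaving the golden parent and the
    rest of the chain untouched.

    Sketch only: a real implementation must also rewrite the parent
    locator (e.g. via vhd-util modify) and rescan the SR so XAPI
    registers the new VDI.
    """
    new_name = "%s.vhd" % uuid.uuid4()
    dst_path = os.path.join(dst_sr_mount, new_name)
    shutil.copyfile(src_vhd, dst_path)  # plain copy, no chain traversal
    return dst_path
```

Because only the leaf is copied, the transfer cost stays proportional to the child's delta, not to the full chain that Storage XenMotion would move.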


 - If someone wants to take a snapshot of the VM, will that snapshot
 then got to normal secondary storage or back to the golden master?
 - To Punith's point, I *think* VM start will occur post clone, so the
 clone will consume network to occur and then will start on local
 storage.

 The big test I'd like to see first would be creating the golden master
 and from it creating a few VMs.  Then once you have those VMs run some
 normal XenServer operations like moving a VM within a pool, moving
 that VM across pools and assigning a home server.  If those pass, then
 things might work out, but if those fail then you'll need to sort
 things out within the XenServer code first. If these basic tests do
 work, then I'd look at the network usage to see if things did indeed
 get better.

 -tim


I have tested it and have a few running VMs created from the same golden image
in another SR. I will test live migration and cross-pool migration and report
back to you soon.



 On Thu, Jun 5, 2014 at 8:11 AM, Punith S punit...@cloudbyte.com wrote:
  hi Hieu,
 
  after going through your  Golden Primary Storage proposal , from my
  understanding you are creating a SSD golden PS for holding parent
  VDH(nothing but the template which go copied from secondary storage) and
 a
  normal primary storage for ROOT volumes(child VHD) for the corresponding
  vm's.
 
  from the following flowchart , i have the following questions,
 
  1. since you are having problem with slow boot time of the vm's, will the
  booting of the vm's happen in golden PS, ie while cloning ?
   if so, the spawning of the vm's will be always fast .
 
  but i see you are starting the vm after moving the cloned vhd to the
  ROOT PS and pointing the child vhd to its parent vhd on the GOLDEN PS,
  hence , there will be a network traffic between these two
  primary storages, which will obviously slow down the vm's performance
  forever.
 
  2. what if someone removes the golden primary storage containing the the
  parent VHD(template) where all the child VDH's in the root primary
 storage
  are been pointed to ?
 if so, all vm's running will be crashed immediately. since its child
  vhd's parent is removed.
 
  thanks
 
 
  On Thu, Jun 5, 2014 at 8:59 AM, Hieu LE hieul...@gmail.com wrote:
 
  Mike, Punith,
 
  Please review Golden Primary Storage proposal. [1]
 
  Thank you.
 
  [1]:
 
 https://cwiki.apache.org/confluence/display/CLOUDSTACK/Golden+Primary+Storage
 
 
  On Wed, Jun 4, 2014 at 10:32 PM, Mike Tutkowski 
  mike.tutkow...@solidfire.com wrote:
 
  Daan helped out with this. You should be good to go now.
 
 
  On Tue, Jun 3, 2014 at 8:50 PM, Hieu LE hieul...@gmail.com wrote:
 
   Hi Mike,
  
   Could you please give edit/create permission on ASF Jira/Wiki
  confluence ?
   I can not add a new Wiki page.
  
   My Jira ID: hieulq
   Wiki: hieulq89
   Review Board: hieulq
  
   Thanks !
  
  
   On Wed, Jun 4, 2014 at 9:17 AM, Mike Tutkowski 
   mike.tutkow...@solidfire.com
wrote:
  
Hi,
   
Yes, please feel free to add a new Wiki page for your design.
   
Here 

Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-05 Thread Hieu LE
Hi Todd,


On Fri, Jun 6, 2014 at 9:17 AM, Todd Pigram t...@toddpigram.com wrote:

 Hieu,

 I assume you are using MCS for you golden image? What version of XD? Given
 you are using pooled desktops, have you thought about using a PVS BDM iso
 and mount it with in your 1000 VMs? This way you can stagger reboots via
 PVS console or Studio. This would require a change to your delivery group.


Sorry, but I do not use MCS or XenDesktop at my company :-)



 On Thu, Jun 5, 2014 at 9:28 PM, Mike Tutkowski 
 mike.tutkow...@solidfire.com
  wrote:

  6) The copy_vhd_from_secondarystorage XenServer plug-in is not used when
  you're using XenServer + XS62ESP1 + XS62ESP1004. In that case, please
 refer
  to copyTemplateToPrimaryStorage(CopyCommand) method in the
  Xenserver625StorageProcessor class.
 


Thanks, Mike, I will take note of that.


  
  On Thu, Jun 5, 2014 at 1:56 PM, Mike Tutkowski 
  mike.tutkow...@solidfire.com
   wrote:
 
   Other than going through a for loop and deploying VM after VM, I
 don't
   think CloudStack currently supports a bulk-VM-deploy operation.
  
   It would be nice if CS did so at some point in the future; however,
 that
   is probably a separate proposal from Hieu's.
  
  
   On Thu, Jun 5, 2014 at 12:13 AM, Amit Das amit@cloudbyte.com
  wrote:
  
   Hi Hieu,
  
   Will it be good to include bulk operation of this feature? In
 addition,
   does Xen support parallel execution of these operations ?
  
   Regards,
   Amit
   *CloudByte Inc.* http://www.cloudbyte.com/
  
  
   On Thu, Jun 5, 2014 at 8:59 AM, Hieu LE hieul...@gmail.com wrote:
  
Mike, Punith,
   
Please review Golden Primary Storage proposal. [1]
   
Thank you.
   
[1]:
   
  
 
 https://cwiki.apache.org/confluence/display/CLOUDSTACK/Golden+Primary+Storage
   
   
On Wed, Jun 4, 2014 at 10:32 PM, Mike Tutkowski 
mike.tutkow...@solidfire.com wrote:
   
Daan helped out with this. You should be good to go now.
   
   
On Tue, Jun 3, 2014 at 8:50 PM, Hieu LE hieul...@gmail.com
 wrote:
   
 Hi Mike,

 Could you please give edit/create permission on ASF Jira/Wiki
confluence ?
 I can not add a new Wiki page.

 My Jira ID: hieulq
 Wiki: hieulq89
 Review Board: hieulq

 Thanks !


 On Wed, Jun 4, 2014 at 9:17 AM, Mike Tutkowski 
 mike.tutkow...@solidfire.com
  wrote:

  Hi,
 
  Yes, please feel free to add a new Wiki page for your design.
 
  Here is a link to applicable design info:
 
  https://cwiki.apache.org/confluence/display/CLOUDSTACK/Design
 
  Also, feel free to ask more questions and have me review your
   design.
 
  Thanks!
  Mike
 
 
  On Tue, Jun 3, 2014 at 7:29 PM, Hieu LE hieul...@gmail.com
   wrote:
 
   Hi Mike,
  
   You are right, performance will be decreased over time
 because
writes
  IOPS
   will always end up on slower storage pool.
  
   In our case, we are using CloudStack integrated in VDI
 solution
   to
  provived
   pooled VM type[1]. So may be my approach can bring better UX
  for
user
  with
   lower bootime ...
  
   A short change in design are followings
   - VM will be deployed with golden primary storage if primary
storage is
   marked golden and this VM template is also marked as golden.
   - Choosing the best deploy destionation for both golden
 primary
storage
  and
   normal root volume primary storage. Chosen host can also
 access
   both
   storage pools.
   - New Xen Server plug-in for modifying VHD parent id.
  
   Is there some place for me to submit my design and code. Can
 I
write a
  new
   proposal in CS wiki ?
  
   [1]:
  
  
 

   
  
 
 http://support.citrix.com/proddocs/topic/xendesktop-rho/cds-choose-scheme-type-rho.html
  
  
   On Mon, Jun 2, 2014 at 9:04 PM, Mike Tutkowski 
   mike.tutkow...@solidfire.com
wrote:
  
It is an interesting idea. If the constraints you face at
  your
 company
   can
be corrected somewhat by implementing this, then you should
  go
   for
 it.
   
It sounds like writes will be placed on the slower storage
   pool.
This
   means
as you update OS components, those updates will be placed
 on
   the
 slower
storage pool. As such, your performance is likely to
 somewhat
 decrease
   over
time (as more and more writes end up on the slower storage
   pool).
   
That may be OK for your use case(s), though.
   
You'll have to update the storage-pool orchestration logic
 to
   take
 this
   new
scheme into account.
   
Also, we'll have to figure out how this ties into storage
   tagging
(if
  at
all).
   
I'd be happy to review your design and code.
   
   

Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-05 Thread Mike Tutkowski
Hi Hieu,

Would you be able to place these questions and answers in your design doc
so that we can more easily track them?

Thanks!
Mike


On Thu, Jun 5, 2014 at 9:55 PM, Hieu LE hieul...@gmail.com wrote:

 Hi Todd,


 On Fri, Jun 6, 2014 at 9:17 AM, Todd Pigram t...@toddpigram.com wrote:

  Hieu,
 
  I assume you are using MCS for you golden image? What version of XD?
 Given
  you are using pooled desktops, have you thought about using a PVS BDM iso
  and mount it with in your 1000 VMs? This way you can stagger reboots via
  PVS console or Studio. This would require a change to your delivery
 group.
 
 
 Sorry but I did not use MCS or XenDesktop in my company :-)


 
  On Thu, Jun 5, 2014 at 9:28 PM, Mike Tutkowski 
  mike.tutkow...@solidfire.com
   wrote:
 
   6) The copy_vhd_from_secondarystorage XenServer plug-in is not used
 when
   you're using XenServer + XS62ESP1 + XS62ESP1004. In that case, please
  refer
   to copyTemplateToPrimaryStorage(CopyCommand) method in the
   Xenserver625StorageProcessor class.
  
 

 Thank Mike, I will take note of that.


   
   On Thu, Jun 5, 2014 at 1:56 PM, Mike Tutkowski 
   mike.tutkow...@solidfire.com
wrote:
  
Other than going through a for loop and deploying VM after VM, I
  don't
think CloudStack currently supports a bulk-VM-deploy operation.
   
It would be nice if CS did so at some point in the future; however,
  that
is probably a separate proposal from Hieu's.
   
   
On Thu, Jun 5, 2014 at 12:13 AM, Amit Das amit@cloudbyte.com
   wrote:
   
Hi Hieu,
   
Will it be good to include bulk operation of this feature? In
  addition,
does Xen support parallel execution of these operations ?
   
Regards,
Amit
*CloudByte Inc.* http://www.cloudbyte.com/
   
   
On Thu, Jun 5, 2014 at 8:59 AM, Hieu LE hieul...@gmail.com wrote:
   
 Mike, Punith,

 Please review Golden Primary Storage proposal. [1]

 Thank you.

 [1]:

   
  
 
 https://cwiki.apache.org/confluence/display/CLOUDSTACK/Golden+Primary+Storage


 On Wed, Jun 4, 2014 at 10:32 PM, Mike Tutkowski 
 mike.tutkow...@solidfire.com wrote:

 Daan helped out with this. You should be good to go now.


 On Tue, Jun 3, 2014 at 8:50 PM, Hieu LE hieul...@gmail.com
  wrote:

  Hi Mike,
 
  Could you please give edit/create permission on ASF Jira/Wiki
 confluence ?
  I can not add a new Wiki page.
 
  My Jira ID: hieulq
  Wiki: hieulq89
  Review Board: hieulq
 
  Thanks !
 
 
  On Wed, Jun 4, 2014 at 9:17 AM, Mike Tutkowski 
  mike.tutkow...@solidfire.com
   wrote:
 
   Hi,
  
   Yes, please feel free to add a new Wiki page for your design.
  
   Here is a link to applicable design info:
  
  
 https://cwiki.apache.org/confluence/display/CLOUDSTACK/Design
  
   Also, feel free to ask more questions and have me review your
design.
  
   Thanks!
   Mike
  
  
   On Tue, Jun 3, 2014 at 7:29 PM, Hieu LE hieul...@gmail.com
wrote:
  
Hi Mike,
   
You are right, performance will be decreased over time
  because
 writes
   IOPS
will always end up on slower storage pool.
   
In our case, we are using CloudStack integrated in VDI
  solution
to
   provived
pooled VM type[1]. So may be my approach can bring better
 UX
   for
 user
   with
lower bootime ...
   
A short change in design are followings
- VM will be deployed with golden primary storage if
 primary
 storage is
marked golden and this VM template is also marked as
 golden.
- Choosing the best deploy destionation for both golden
  primary
 storage
   and
normal root volume primary storage. Chosen host can also
  access
both
storage pools.
- New Xen Server plug-in for modifying VHD parent id.
   
Is there some place for me to submit my design and code.
 Can
  I
 write a
   new
proposal in CS wiki ?
   
[1]:
   
   
  
 

   
  
 
 http://support.citrix.com/proddocs/topic/xendesktop-rho/cds-choose-scheme-type-rho.html
   
   
On Mon, Jun 2, 2014 at 9:04 PM, Mike Tutkowski 
mike.tutkow...@solidfire.com
 wrote:
   
 It is an interesting idea. If the constraints you face at
   your
  company
can
 be corrected somewhat by implementing this, then you
 should
   go
for
  it.

 It sounds like writes will be placed on the slower
 storage
pool.
 This
means
 as you update OS components, those updates will be placed
  on
the
  slower
 storage pool. As such, your performance is likely to
  somewhat
  decrease
over
 time (as more and more writes end up on the slower
 

Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-04 Thread Mike Tutkowski
Daan helped out with this. You should be good to go now.


On Tue, Jun 3, 2014 at 8:50 PM, Hieu LE hieul...@gmail.com wrote:

 Hi Mike,

 Could you please give edit/create permission on ASF Jira/Wiki confluence ?
 I can not add a new Wiki page.

 My Jira ID: hieulq
 Wiki: hieulq89
 Review Board: hieulq

 Thanks !


 On Wed, Jun 4, 2014 at 9:17 AM, Mike Tutkowski 
 mike.tutkow...@solidfire.com
  wrote:

  Hi,
 
  Yes, please feel free to add a new Wiki page for your design.
 
  Here is a link to applicable design info:
 
  https://cwiki.apache.org/confluence/display/CLOUDSTACK/Design
 
  Also, feel free to ask more questions and have me review your design.
 
  Thanks!
  Mike
 
 
  On Tue, Jun 3, 2014 at 7:29 PM, Hieu LE hieul...@gmail.com wrote:
 
   Hi Mike,
  
   You are right, performance will be decreased over time because writes
  IOPS
   will always end up on slower storage pool.
  
   In our case, we are using CloudStack integrated in VDI solution to
  provived
   pooled VM type[1]. So may be my approach can bring better UX for user
  with
   lower bootime ...
  
   A short change in design are followings
   - VM will be deployed with golden primary storage if primary storage is
   marked golden and this VM template is also marked as golden.
   - Choosing the best deploy destionation for both golden primary storage
  and
   normal root volume primary storage. Chosen host can also access both
   storage pools.
   - New Xen Server plug-in for modifying VHD parent id.
  
   Is there some place for me to submit my design and code. Can I write a
  new
   proposal in CS wiki ?
  
   [1]:
  
  
 
 http://support.citrix.com/proddocs/topic/xendesktop-rho/cds-choose-scheme-type-rho.html
  
  
   On Mon, Jun 2, 2014 at 9:04 PM, Mike Tutkowski 
   mike.tutkow...@solidfire.com
wrote:
  
It is an interesting idea. If the constraints you face at your
 company
   can
be corrected somewhat by implementing this, then you should go for
 it.
   
It sounds like writes will be placed on the slower storage pool. This
   means
as you update OS components, those updates will be placed on the
 slower
storage pool. As such, your performance is likely to somewhat
 decrease
   over
time (as more and more writes end up on the slower storage pool).
   
That may be OK for your use case(s), though.
   
You'll have to update the storage-pool orchestration logic to take
 this
   new
scheme into account.
   
Also, we'll have to figure out how this ties into storage tagging (if
  at
all).
   
I'd be happy to review your design and code.
   
   
On Mon, Jun 2, 2014 at 1:54 AM, Hieu LE hieul...@gmail.com wrote:
   
 Thanks Mike and Punith for quick reply.

 Both solutions you gave here are absolutely correct. But as I
  mentioned
in
 the first email, I want another better solution for current
infrastructure
 at my company.

 Creating a high IOPS primary storage using storage tags is good but
  it
will
 be very waste of disk capacity. For example, if I only have 1TB SSD
  and
 deploy 100 VM from a 100GB template.

 So I think about a solution where a high IOPS primary storage can
  only
 store golden image (master image), and a child image of this VM
 will
  be
 stored in another normal (NFS, ISCSI...) storage. In this case,
 with
   1TB
 SSD Primary Storage I can store as much golden image as I need.

 I have also tested it with 256 GB SSD mounted on Xen Server 6.2.0
  with
2TB
 local storage 1RPM, 6TB NFS share storage with 1GB network. The
   IOPS
of
 VMs which have golden image (master image) in SSD and child image
 in
   NFS
 increate more than 30-40% compare with VMs which have both golden
  image
and
 child image in NFS. The boot time of each VM is also decrease.
  ('cause
 golden image in SSD only reduced READ IOPS).

 Do you think this approach OK ?


 On Mon, Jun 2, 2014 at 12:50 PM, Mike Tutkowski 
 mike.tutkow...@solidfire.com wrote:

  Thanks, Punith - this is similar to what I was going to say.
 
  Any time a set of CloudStack volumes share IOPS from a common
 pool,
   you
  cannot guarantee IOPS to a given CloudStack volume at a given
 time.
 
  Your choices at present are:
 
  1) Use managed storage (where you can create a 1:1 mapping
 between
  a
  CloudStack volume and a volume on a storage system that has QoS).
  As
 Punith
  mentioned, this requires that you purchase storage from a vendor
  who
  provides guaranteed QoS on a volume-by-volume bases AND has this
 integrated
  into CloudStack.
 
  2) Create primary storage in CloudStack that is not managed, but
  has
   a
 high
  number of IOPS (ex. using SSDs). You can then storage tag this
   primary
  storage and create Compute and Disk Offerings that use this
 storage
   tag
 to
  make sure 

Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-04 Thread Hieu LE
Mike, Punith,

Please review Golden Primary Storage proposal. [1]

Thank you.

[1]:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Golden+Primary+Storage


On Wed, Jun 4, 2014 at 10:32 PM, Mike Tutkowski 
mike.tutkow...@solidfire.com wrote:

 Daan helped out with this. You should be good to go now.


 On Tue, Jun 3, 2014 at 8:50 PM, Hieu LE hieul...@gmail.com wrote:

  Hi Mike,
 
  Could you please give edit/create permission on ASF Jira/Wiki confluence
 ?
  I can not add a new Wiki page.
 
  My Jira ID: hieulq
  Wiki: hieulq89
  Review Board: hieulq
 
  Thanks !
 
 
  On Wed, Jun 4, 2014 at 9:17 AM, Mike Tutkowski 
  mike.tutkow...@solidfire.com
   wrote:
 
   Hi,
  
   Yes, please feel free to add a new Wiki page for your design.
  
   Here is a link to applicable design info:
  
   https://cwiki.apache.org/confluence/display/CLOUDSTACK/Design
  
   Also, feel free to ask more questions and have me review your design.
  
   Thanks!
   Mike
  
  
   On Tue, Jun 3, 2014 at 7:29 PM, Hieu LE hieul...@gmail.com wrote:
  
Hi Mike,
   
You are right, performance will be decreased over time because writes
   IOPS
will always end up on slower storage pool.
   
In our case, we are using CloudStack integrated in VDI solution to
   provived
pooled VM type[1]. So may be my approach can bring better UX for user
   with
lower bootime ...
   
A short change in design are followings
- VM will be deployed with golden primary storage if primary storage
 is
marked golden and this VM template is also marked as golden.
- Choosing the best deploy destionation for both golden primary
 storage
   and
normal root volume primary storage. Chosen host can also access both
storage pools.
- New Xen Server plug-in for modifying VHD parent id.
   
Is there some place for me to submit my design and code. Can I write
 a
   new
proposal in CS wiki ?
   
[1]:
   
   
  
 
 http://support.citrix.com/proddocs/topic/xendesktop-rho/cds-choose-scheme-type-rho.html
   
   
On Mon, Jun 2, 2014 at 9:04 PM, Mike Tutkowski 
mike.tutkow...@solidfire.com
 wrote:
   
 It is an interesting idea. If the constraints you face at your
  company
can
 be corrected somewhat by implementing this, then you should go for
  it.

 It sounds like writes will be placed on the slower storage pool.
 This
means
 as you update OS components, those updates will be placed on the
  slower
 storage pool. As such, your performance is likely to somewhat
  decrease
over
 time (as more and more writes end up on the slower storage pool).

 That may be OK for your use case(s), though.

 You'll have to update the storage-pool orchestration logic to take
  this
new
 scheme into account.

 Also, we'll have to figure out how this ties into storage tagging
 (if
   at
 all).

 I'd be happy to review your design and code.


 On Mon, Jun 2, 2014 at 1:54 AM, Hieu LE hieul...@gmail.com
 wrote:

  Thanks Mike and Punith for quick reply.
 
  Both solutions you gave here are absolutely correct. But as I
   mentioned
 in
  the first email, I want another better solution for current
 infrastructure
  at my company.
 
  Creating a high IOPS primary storage using storage tags is good
 but
   it
 will
  be very waste of disk capacity. For example, if I only have 1TB
 SSD
   and
  deploy 100 VM from a 100GB template.
 
  So I think about a solution where a high IOPS primary storage can
   only
  store golden image (master image), and a child image of this VM
  will
   be
  stored in another normal (NFS, ISCSI...) storage. In this case,
  with
1TB
  SSD Primary Storage I can store as much golden image as I need.
 
  I have also tested it with 256 GB SSD mounted on Xen Server 6.2.0
   with
 2TB
  local storage 1RPM, 6TB NFS share storage with 1GB network.
 The
IOPS
 of
  VMs which have golden image (master image) in SSD and child image
  in
NFS
  increate more than 30-40% compare with VMs which have both golden
   image
 and
  child image in NFS. The boot time of each VM is also decrease.
   ('cause
  golden image in SSD only reduced READ IOPS).
 
  Do you think this approach OK ?
 
 
  On Mon, Jun 2, 2014 at 12:50 PM, Mike Tutkowski 
  mike.tutkow...@solidfire.com wrote:
 
   Thanks, Punith - this is similar to what I was going to say.
  
   Any time a set of CloudStack volumes share IOPS from a common
  pool,
you
   cannot guarantee IOPS to a given CloudStack volume at a given
  time.
  
   Your choices at present are:
  
   1) Use managed storage (where you can create a 1:1 mapping
  between
   a
   CloudStack volume and a volume on a storage system that has
 QoS).
   As
  Punith
   mentioned, this requires that you 

Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-03 Thread Hieu LE
Hi Mike,

You are right, performance will decrease over time because write IOPS
will always end up on the slower storage pool.

In our case, we are using CloudStack integrated in a VDI solution to provide
the pooled VM type [1]. So maybe my approach can bring a better UX for users,
with lower boot times...

A short summary of the design changes follows:
- A VM will be deployed with golden primary storage if the primary storage is
marked golden and the VM's template is also marked as golden.
- Choose the best deploy destination for both the golden primary storage and
the normal root-volume primary storage. The chosen host must be able to
access both storage pools.
- A new XenServer plug-in for modifying the VHD parent ID.
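The VHD re-parenting the last bullet refers to could, as a rough sketch, be done with XenServer's vhd-util from a dom0 plug-in. Everything below is hypothetical (helper names, paths, and the dry-run behavior are mine, not from the proposal); the function only builds the command so the idea is visible:

```python
# Hypothetical sketch of the proposed XenServer plug-in step that re-parents
# a child VHD onto the golden image on another SR. Names and paths are
# illustrative; the real plug-in would run inside dom0 and resolve VHD
# locations through the SR.
import subprocess

def build_reparent_command(child_vhd, golden_parent_vhd):
    """Build the vhd-util invocation that points a child VHD at a new parent.

    vhd-util's 'modify' subcommand can rewrite the parent locator of a
    differencing VHD: -n names the child file, -p the new parent.
    """
    return ["vhd-util", "modify", "-n", child_vhd, "-p", golden_parent_vhd]

def reparent(child_vhd, golden_parent_vhd, dry_run=True):
    cmd = build_reparent_command(child_vhd, golden_parent_vhd)
    if dry_run:
        return cmd  # let callers inspect the command without touching disks
    subprocess.check_call(cmd)  # would actually rewrite the VHD header
    return cmd
```

For example, `reparent("/var/run/sr-mount/nfs-sr/child.vhd", "/var/run/sr-mount/ssd-sr/golden.vhd")` returns the command that would be executed.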

Is there somewhere I can submit my design and code? Can I write a new
proposal in the CS wiki?

[1]:
http://support.citrix.com/proddocs/topic/xendesktop-rho/cds-choose-scheme-type-rho.html


On Mon, Jun 2, 2014 at 9:04 PM, Mike Tutkowski mike.tutkow...@solidfire.com
 wrote:

 It is an interesting idea. If the constraints you face at your company can
 be corrected somewhat by implementing this, then you should go for it.

 It sounds like writes will be placed on the slower storage pool. This means
 as you update OS components, those updates will be placed on the slower
 storage pool. As such, your performance is likely to somewhat decrease over
 time (as more and more writes end up on the slower storage pool).

 That may be OK for your use case(s), though.

 You'll have to update the storage-pool orchestration logic to take this new
 scheme into account.

 Also, we'll have to figure out how this ties into storage tagging (if at
 all).

 I'd be happy to review your design and code.


 On Mon, Jun 2, 2014 at 1:54 AM, Hieu LE hieul...@gmail.com wrote:

  Thanks Mike and Punith for quick reply.
 
  Both solutions you gave here are absolutely correct. But as I mentioned
 in
  the first email, I want another better solution for current
 infrastructure
  at my company.
 
  Creating a high IOPS primary storage using storage tags is good but it
 will
  be very waste of disk capacity. For example, if I only have 1TB SSD and
  deploy 100 VM from a 100GB template.
 
  So I think about a solution where a high IOPS primary storage can only
  store golden image (master image), and a child image of this VM will be
  stored in another normal (NFS, ISCSI...) storage. In this case, with 1TB
  SSD Primary Storage I can store as much golden image as I need.
 
  I have also tested it with 256 GB SSD mounted on Xen Server 6.2.0 with
 2TB
  local storage 1RPM, 6TB NFS share storage with 1GB network. The IOPS
 of
  VMs which have golden image (master image) in SSD and child image in NFS
  increate more than 30-40% compare with VMs which have both golden image
 and
  child image in NFS. The boot time of each VM is also decrease. ('cause
  golden image in SSD only reduced READ IOPS).
 
  Do you think this approach OK ?
 
 
  On Mon, Jun 2, 2014 at 12:50 PM, Mike Tutkowski 
  mike.tutkow...@solidfire.com wrote:
 
   Thanks, Punith - this is similar to what I was going to say.
  
   Any time a set of CloudStack volumes share IOPS from a common pool, you
   cannot guarantee IOPS to a given CloudStack volume at a given time.
  
   Your choices at present are:
  
   1) Use managed storage (where you can create a 1:1 mapping between a
   CloudStack volume and a volume on a storage system that has QoS). As
  Punith
   mentioned, this requires that you purchase storage from a vendor who
   provides guaranteed QoS on a volume-by-volume bases AND has this
  integrated
   into CloudStack.
  
   2) Create primary storage in CloudStack that is not managed, but has a
  high
   number of IOPS (ex. using SSDs). You can then storage tag this primary
   storage and create Compute and Disk Offerings that use this storage tag
  to
   make sure their volumes end up on this storage pool (primary storage).
  This
   will still not guarantee IOPS on a CloudStack volume-by-volume basis,
 but
   it will at least place the CloudStack volumes that need a better chance
  of
   getting higher IOPS on a storage pool that could provide the necessary
   IOPS. A big downside here is that you want to watch how many CloudStack
   volumes get deployed on this primary storage because you'll need to
   essentially over-provision IOPS in this primary storage to increase the
   probability that each and every CloudStack volume that uses this
 primary
   storage gets the necessary IOPS (and isn't as likely to suffer from the
   Noisy Neighbor Effect). You should be able to tell CloudStack to only
  use,
   say, 80% (or whatever) of the storage you're providing to it (so as to
   increase your effective IOPS per GB ratio). This over-provisioning of
  IOPS
   to control Noisy Neighbors is avoided in option 1. In that situation,
 you
   only provision the IOPS and capacity you actually need. It is a much
 more
   sophisticated approach.
  
   Thanks,
   Mike
  
  
   On Sun, Jun 1, 2014 at 11:36 

Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-03 Thread Mike Tutkowski
Hi,

Yes, please feel free to add a new Wiki page for your design.

Here is a link to applicable design info:

https://cwiki.apache.org/confluence/display/CLOUDSTACK/Design

Also, feel free to ask more questions and have me review your design.

Thanks!
Mike


On Tue, Jun 3, 2014 at 7:29 PM, Hieu LE hieul...@gmail.com wrote:

 Hi Mike,

 You are right, performance will be decreased over time because writes IOPS
 will always end up on slower storage pool.

 In our case, we are using CloudStack integrated in VDI solution to provived
 pooled VM type[1]. So may be my approach can bring better UX for user with
 lower bootime ...

 A short change in design are followings
 - VM will be deployed with golden primary storage if primary storage is
 marked golden and this VM template is also marked as golden.
 - Choosing the best deploy destionation for both golden primary storage and
 normal root volume primary storage. Chosen host can also access both
 storage pools.
 - New Xen Server plug-in for modifying VHD parent id.

 Is there some place for me to submit my design and code. Can I write a new
 proposal in CS wiki ?

 [1]:

 http://support.citrix.com/proddocs/topic/xendesktop-rho/cds-choose-scheme-type-rho.html


 On Mon, Jun 2, 2014 at 9:04 PM, Mike Tutkowski 
 mike.tutkow...@solidfire.com
  wrote:

  It is an interesting idea. If the constraints you face at your company
 can
  be corrected somewhat by implementing this, then you should go for it.
 
  It sounds like writes will be placed on the slower storage pool. This
 means
  as you update OS components, those updates will be placed on the slower
  storage pool. As such, your performance is likely to somewhat decrease
 over
  time (as more and more writes end up on the slower storage pool).
 
  That may be OK for your use case(s), though.
 
  You'll have to update the storage-pool orchestration logic to take this
 new
  scheme into account.
 
  Also, we'll have to figure out how this ties into storage tagging (if at
  all).
 
  I'd be happy to review your design and code.
 
 
  On Mon, Jun 2, 2014 at 1:54 AM, Hieu LE hieul...@gmail.com wrote:
 
   Thanks Mike and Punith for quick reply.
  
   Both solutions you gave here are absolutely correct. But as I mentioned
  in
   the first email, I want another better solution for current
  infrastructure
   at my company.
  
   Creating a high IOPS primary storage using storage tags is good but it
  will
   be very waste of disk capacity. For example, if I only have 1TB SSD and
   deploy 100 VM from a 100GB template.
  
   So I think about a solution where a high IOPS primary storage can only
   store golden image (master image), and a child image of this VM will be
   stored in another normal (NFS, ISCSI...) storage. In this case, with
 1TB
   SSD Primary Storage I can store as much golden image as I need.
  
   I have also tested it with 256 GB SSD mounted on Xen Server 6.2.0 with
  2TB
   local storage 1RPM, 6TB NFS share storage with 1GB network. The
 IOPS
  of
   VMs which have golden image (master image) in SSD and child image in
 NFS
   increate more than 30-40% compare with VMs which have both golden image
  and
   child image in NFS. The boot time of each VM is also decrease. ('cause
   golden image in SSD only reduced READ IOPS).
  
   Do you think this approach OK ?
  
  
   On Mon, Jun 2, 2014 at 12:50 PM, Mike Tutkowski 
   mike.tutkow...@solidfire.com wrote:
  
Thanks, Punith - this is similar to what I was going to say.
   
Any time a set of CloudStack volumes share IOPS from a common pool,
 you
cannot guarantee IOPS to a given CloudStack volume at a given time.
   
Your choices at present are:
   
1) Use managed storage (where you can create a 1:1 mapping between a
CloudStack volume and a volume on a storage system that has QoS). As
   Punith
mentioned, this requires that you purchase storage from a vendor who
provides guaranteed QoS on a volume-by-volume bases AND has this
   integrated
into CloudStack.
   
2) Create primary storage in CloudStack that is not managed, but has
 a
   high
number of IOPS (ex. using SSDs). You can then storage tag this
 primary
storage and create Compute and Disk Offerings that use this storage
 tag
   to
make sure their volumes end up on this storage pool (primary
 storage).
   This
will still not guarantee IOPS on a CloudStack volume-by-volume basis,
  but
it will at least place the CloudStack volumes that need a better
 chance
   of
getting higher IOPS on a storage pool that could provide the
 necessary
IOPS. A big downside here is that you want to watch how many
 CloudStack
volumes get deployed on this primary storage because you'll need to
essentially over-provision IOPS in this primary storage to increase
 the
probability that each and every CloudStack volume that uses this
  primary
storage gets the necessary IOPS (and isn't as likely to suffer from
 the
Noisy Neighbor 

Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-03 Thread Hieu LE
Hi Mike,

Could you please give me edit/create permission on the ASF Jira/Wiki
Confluence? I cannot add a new Wiki page.

My Jira ID: hieulq
Wiki: hieulq89
Review Board: hieulq

Thanks !


On Wed, Jun 4, 2014 at 9:17 AM, Mike Tutkowski mike.tutkow...@solidfire.com
 wrote:

 Hi,

 Yes, please feel free to add a new Wiki page for your design.

 Here is a link to applicable design info:

 https://cwiki.apache.org/confluence/display/CLOUDSTACK/Design

 Also, feel free to ask more questions and have me review your design.

 Thanks!
 Mike


 On Tue, Jun 3, 2014 at 7:29 PM, Hieu LE hieul...@gmail.com wrote:

  Hi Mike,
 
  You are right, performance will be decreased over time because writes
 IOPS
  will always end up on slower storage pool.
 
  In our case, we are using CloudStack integrated in VDI solution to
 provived
  pooled VM type[1]. So may be my approach can bring better UX for user
 with
  lower bootime ...
 
  A short change in design are followings
  - VM will be deployed with golden primary storage if primary storage is
  marked golden and this VM template is also marked as golden.
  - Choosing the best deploy destionation for both golden primary storage
 and
  normal root volume primary storage. Chosen host can also access both
  storage pools.
  - New Xen Server plug-in for modifying VHD parent id.
 
  Is there some place for me to submit my design and code. Can I write a
 new
  proposal in CS wiki ?
 
  [1]:
 
 
 http://support.citrix.com/proddocs/topic/xendesktop-rho/cds-choose-scheme-type-rho.html
 
 
  On Mon, Jun 2, 2014 at 9:04 PM, Mike Tutkowski 
  mike.tutkow...@solidfire.com
   wrote:
 
   It is an interesting idea. If the constraints you face at your company
  can
   be corrected somewhat by implementing this, then you should go for it.
  
   It sounds like writes will be placed on the slower storage pool. This
  means
   as you update OS components, those updates will be placed on the slower
   storage pool. As such, your performance is likely to somewhat decrease
  over
   time (as more and more writes end up on the slower storage pool).
  
   That may be OK for your use case(s), though.
  
   You'll have to update the storage-pool orchestration logic to take this
  new
   scheme into account.
  
   Also, we'll have to figure out how this ties into storage tagging (if
 at
   all).
  
   I'd be happy to review your design and code.
  
  
   On Mon, Jun 2, 2014 at 1:54 AM, Hieu LE hieul...@gmail.com wrote:
  
Thanks Mike and Punith for quick reply.
   
Both solutions you gave here are absolutely correct. But as I
 mentioned
   in
the first email, I want another better solution for current
   infrastructure
at my company.
   
Creating a high IOPS primary storage using storage tags is good but
 it
   will
be very waste of disk capacity. For example, if I only have 1TB SSD
 and
deploy 100 VM from a 100GB template.
   
So I think about a solution where a high IOPS primary storage can
 only
store golden image (master image), and a child image of this VM will
 be
stored in another normal (NFS, ISCSI...) storage. In this case, with
  1TB
SSD Primary Storage I can store as much golden image as I need.
   
I have also tested it with 256 GB SSD mounted on Xen Server 6.2.0
 with
   2TB
local storage 1RPM, 6TB NFS share storage with 1GB network. The
  IOPS
   of
VMs which have golden image (master image) in SSD and child image in
  NFS
increate more than 30-40% compare with VMs which have both golden
 image
   and
child image in NFS. The boot time of each VM is also decrease.
 ('cause
golden image in SSD only reduced READ IOPS).
   
Do you think this approach OK ?
   
   
On Mon, Jun 2, 2014 at 12:50 PM, Mike Tutkowski 
mike.tutkow...@solidfire.com wrote:
   
 Thanks, Punith - this is similar to what I was going to say.

 Any time a set of CloudStack volumes share IOPS from a common pool,
  you
 cannot guarantee IOPS to a given CloudStack volume at a given time.

 Your choices at present are:

 1) Use managed storage (where you can create a 1:1 mapping between
 a
 CloudStack volume and a volume on a storage system that has QoS).
 As
Punith
 mentioned, this requires that you purchase storage from a vendor
 who
 provides guaranteed QoS on a volume-by-volume bases AND has this
integrated
 into CloudStack.

 2) Create primary storage in CloudStack that is not managed, but
 has
  a
high
 number of IOPS (ex. using SSDs). You can then storage tag this
  primary
 storage and create Compute and Disk Offerings that use this storage
  tag
to
 make sure their volumes end up on this storage pool (primary
  storage).
This
 will still not guarantee IOPS on a CloudStack volume-by-volume
 basis,
   but
 it will at least place the CloudStack volumes that need a better
  chance
of
 getting higher IOPS on a storage pool that could provide 

Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-02 Thread Hieu LE
Thanks Mike and Punith for quick reply.

Both solutions you gave here are absolutely correct. But as I mentioned in
the first email, I am looking for a better solution for the current
infrastructure at my company.

Creating a high-IOPS primary storage using storage tags is good, but it
wastes a lot of disk capacity. For example, suppose I only have a 1TB SSD and
deploy 100 VMs from a 100GB template.
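To put rough numbers on that example (the template size, VM count, and SSD size are from the email; the comparison assumes child images are thin-provisioned differencing disks):

```python
# Back-of-envelope capacity comparison for the 1TB SSD example above.
TEMPLATE_GB = 100
NUM_VMS = 100
SSD_GB = 1000  # 1TB SSD primary storage

# Storage-tag approach: every VM carries a full copy of the template on SSD.
flat_clones_gb = NUM_VMS * TEMPLATE_GB       # 10,000 GB -- far beyond 1TB

# Golden-image approach: one shared parent on SSD, thin children on NFS.
golden_on_ssd_gb = TEMPLATE_GB               # 100 GB regardless of VM count
max_goldens_on_ssd = SSD_GB // TEMPLATE_GB   # room for ~10 distinct templates

print(flat_clones_gb, golden_on_ssd_gb, max_goldens_on_ssd)  # 10000 100 10
```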

So I am thinking about a solution where a high-IOPS primary storage stores
only the golden image (master image), and the child image of each VM is
stored on another normal (NFS, iSCSI...) storage. In this case, with a 1TB
SSD primary storage I can store as many golden images as I need.

I have also tested this with a 256 GB SSD mounted on XenServer 6.2.0, with
2TB local storage 1RPM and 6TB NFS shared storage over a 1GB network. The
IOPS of VMs whose golden image (master image) is on SSD and child image on
NFS increased by more than 30-40% compared with VMs that have both the
golden image and the child image on NFS. The boot time of each VM also
decreased (because the golden image on SSD only reduces READ IOPS).

Do you think this approach is OK?


On Mon, Jun 2, 2014 at 12:50 PM, Mike Tutkowski 
mike.tutkow...@solidfire.com wrote:

 Thanks, Punith - this is similar to what I was going to say.

 Any time a set of CloudStack volumes share IOPS from a common pool, you
 cannot guarantee IOPS to a given CloudStack volume at a given time.

 Your choices at present are:

 1) Use managed storage (where you can create a 1:1 mapping between a
 CloudStack volume and a volume on a storage system that has QoS). As Punith
 mentioned, this requires that you purchase storage from a vendor who
 provides guaranteed QoS on a volume-by-volume basis AND has this integrated
 into CloudStack.

 2) Create primary storage in CloudStack that is not managed, but has a high
 number of IOPS (ex. using SSDs). You can then storage tag this primary
 storage and create Compute and Disk Offerings that use this storage tag to
 make sure their volumes end up on this storage pool (primary storage). This
 will still not guarantee IOPS on a CloudStack volume-by-volume basis, but
 it will at least place the CloudStack volumes that need a better chance of
 getting higher IOPS on a storage pool that could provide the necessary
 IOPS. A big downside here is that you want to watch how many CloudStack
 volumes get deployed on this primary storage because you'll need to
 essentially over-provision IOPS in this primary storage to increase the
 probability that each and every CloudStack volume that uses this primary
 storage gets the necessary IOPS (and isn't as likely to suffer from the
 Noisy Neighbor Effect). You should be able to tell CloudStack to only use,
 say, 80% (or whatever) of the storage you're providing to it (so as to
 increase your effective IOPS per GB ratio). This over-provisioning of IOPS
 to control Noisy Neighbors is avoided in option 1. In that situation, you
 only provision the IOPS and capacity you actually need. It is a much more
 sophisticated approach.
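 To make the 80% figure concrete with assumed numbers (these are
 illustrative, not from this thread): capping the capacity CloudStack may
 use raises the effective IOPS available per provisioned GB.

```python
# Illustrative over-provisioning arithmetic: pool_iops and pool_gb are
# assumed figures, not measurements from this discussion.
pool_iops = 100_000
pool_gb = 10_000

iops_per_gb_full = pool_iops / pool_gb       # 10.0 IOPS per provisioned GB
usable_gb = pool_gb * 0.80                   # tell CloudStack to use only 80%
iops_per_gb_capped = pool_iops / usable_gb   # 12.5 IOPS per provisioned GB
```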

 Thanks,
 Mike


 On Sun, Jun 1, 2014 at 11:36 PM, Punith S punit...@cloudbyte.com wrote:

  Hi Hieu,

  Your problem is the bottleneck we see as storage vendors in the cloud:
  VMs in the cloud are not guaranteed IOPS from the primary storage. In
  your case, I'm assuming you are running 1000 VMs on a Xen cluster whose
  VM disks all lie on the same primary NFS storage mounted to the cluster,
  so you won't get dedicated IOPS for each VM since every VM shares the
  same storage. To solve this issue in CloudStack, we third-party vendors
  have implemented plugins (namely CloudByte, SolidFire, etc.) to support
  managed storage (dedicated volumes with guaranteed QoS for each VM),
  where we map each root disk (VDI) or data disk of a VM to one NFS or
  iSCSI share coming out of a pool. We are also proposing a new feature in
  4.5 to change volume IOPS on the fly, so you can increase or decrease
  your root disk IOPS at boot or at peak times. But to use this plugin you
  have to buy our storage solution.

  If not, you can try creating an NFS share out of an SSD storage pool,
  create a primary storage in CloudStack from it (a golden primary
  storage) with a specific tag like gold, and create a compute offering
  for your template with the storage tag gold; all the VMs you create will
  then sit on this gold primary storage with high IOPS, with other data
  disks on other primary storage. But even here you cannot guarantee QoS
  at the VM level.

  Thanks
 
 
  On Mon, Jun 2, 2014 at 10:12 AM, Hieu LE hieul...@gmail.com wrote:
 
  Hi all,
 
  There are some problems while deploying a large amount of VMs in my
  company
  with CloudStack. All VMs are deployed from same template (e.g: Windows
 7)
  and the quantity is approximately ~1000VMs. The problems here is low
 IOPS,
  low performance of VM (about ~10-11 IOPS, boot time is very 

Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-02 Thread Mike Tutkowski
It is an interesting idea. If the constraints you face at your company can
be corrected somewhat by implementing this, then you should go for it.

It sounds like writes will be placed on the slower storage pool. This means
as you update OS components, those updates will be placed on the slower
storage pool. As such, your performance is likely to somewhat decrease over
time (as more and more writes end up on the slower storage pool).

That may be OK for your use case(s), though.

You'll have to update the storage-pool orchestration logic to take this new
scheme into account.
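As a very rough sketch of what that orchestration change implies (the data
structures and function are hypothetical, not CloudStack's real allocator
interfaces): pick a golden pool for the parent image and a normal pool for
the child, constrained to a host that can reach both.

```python
# Hypothetical placement sketch: hosts maps a host name to the set of pool
# names it can access; pools maps a pool name to its attributes, including
# an assumed 'golden' flag. CloudStack's real allocators are far more
# involved than this.
def pick_destination(hosts, pools, template_is_golden):
    golden_pools = [p for p, info in pools.items() if info.get("golden")]
    normal_pools = [p for p, info in pools.items() if not info.get("golden")]
    if not template_is_golden or not golden_pools:
        # Ordinary placement: any host paired with a normal pool it can see.
        for host, reachable in hosts.items():
            for p in normal_pools:
                if p in reachable:
                    return host, None, p
        return None
    # Golden placement: the host must see BOTH the golden pool (parent
    # image) and the normal pool (child image).
    for host, reachable in hosts.items():
        for gp in golden_pools:
            for np in normal_pools:
                if gp in reachable and np in reachable:
                    return host, gp, np  # (host, golden pool, child pool)
    return None
```

With `hosts = {"h1": {"ssd", "nfs"}, "h2": {"nfs"}}` and `pools = {"ssd": {"golden": True}, "nfs": {"golden": False}}`, a golden template lands on `("h1", "ssd", "nfs")`; if no host can reach both pools, placement fails.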

Also, we'll have to figure out how this ties into storage tagging (if at
all).

I'd be happy to review your design and code.


On Mon, Jun 2, 2014 at 1:54 AM, Hieu LE hieul...@gmail.com wrote:

 Thanks Mike and Punith for quick reply.

 Both solutions you gave here are absolutely correct. But as I mentioned in
 the first email, I want another better solution for current infrastructure
 at my company.

 Creating a high IOPS primary storage using storage tags is good but it will
 be very waste of disk capacity. For example, if I only have 1TB SSD and
 deploy 100 VM from a 100GB template.

 So I think about a solution where a high IOPS primary storage can only
 store golden image (master image), and a child image of this VM will be
 stored in another normal (NFS, ISCSI...) storage. In this case, with 1TB
 SSD Primary Storage I can store as much golden image as I need.

 I have also tested it with 256 GB SSD mounted on Xen Server 6.2.0 with 2TB
 local storage 1RPM, 6TB NFS share storage with 1GB network. The IOPS of
 VMs which have golden image (master image) in SSD and child image in NFS
 increate more than 30-40% compare with VMs which have both golden image and
 child image in NFS. The boot time of each VM is also decrease. ('cause
 golden image in SSD only reduced READ IOPS).

 Do you think this approach OK ?


 On Mon, Jun 2, 2014 at 12:50 PM, Mike Tutkowski 
 mike.tutkow...@solidfire.com wrote:

  Thanks, Punith - this is similar to what I was going to say.
 
  Any time a set of CloudStack volumes share IOPS from a common pool, you
  cannot guarantee IOPS to a given CloudStack volume at a given time.
 
  Your choices at present are:
 
  1) Use managed storage (where you can create a 1:1 mapping between a
  CloudStack volume and a volume on a storage system that has QoS). As
 Punith
  mentioned, this requires that you purchase storage from a vendor who
  provides guaranteed QoS on a volume-by-volume bases AND has this
 integrated
  into CloudStack.
 
  2) Create primary storage in CloudStack that is not managed, but has a
 high
  number of IOPS (ex. using SSDs). You can then storage tag this primary
  storage and create Compute and Disk Offerings that use this storage tag
 to
  make sure their volumes end up on this storage pool (primary storage).
 This
  will still not guarantee IOPS on a CloudStack volume-by-volume basis, but
  it will at least place the CloudStack volumes that need a better chance
 of
  getting higher IOPS on a storage pool that could provide the necessary
  IOPS. A big downside here is that you want to watch how many CloudStack
  volumes get deployed on this primary storage because you'll need to
  essentially over-provision IOPS in this primary storage to increase the
  probability that each and every CloudStack volume that uses this primary
  storage gets the necessary IOPS (and isn't as likely to suffer from the
  Noisy Neighbor Effect). You should be able to tell CloudStack to only
 use,
  say, 80% (or whatever) of the storage you're providing to it (so as to
  increase your effective IOPS per GB ratio). This over-provisioning of
 IOPS
  to control Noisy Neighbors is avoided in option 1. In that situation, you
  only provision the IOPS and capacity you actually need. It is a much more
  sophisticated approach.
 
  Thanks,
  Mike
 
 
  On Sun, Jun 1, 2014 at 11:36 PM, Punith S punit...@cloudbyte.com
 wrote:
 
   hi hieu,
  
   your problem is the bottle neck we see as a storage vendors in the
 cloud,
   meaning all the vms in the cloud have not been guaranteed iops from the
   primary storage, because in your case i'm assuming you are running
  1000vms
   on a xen cluster whose all vm's disks are lying on a same primary nfs
   storage mounted to the cluster,
   hence you won't get the dedicated iops for each vm since every vm is
   sharing the same storage. to solve this issue in cloudstack we the
 third
   party vendors have implemented the plugin(namely cloudbyte , solidfire
  etc)
   to support managed storage(dedicated volumes with guaranteed qos for
 each
   vms) , where we are mapping each root disk(vdi) or data disk of a vm
 with
   one nfs or iscsi share coming out of a pool, also we are proposing the
  new
   feature to change volume iops on fly in 4.5, where you can increase or
   decrease your root disk iops while booting or at peak times. but to use
   this plugin you have to buy our storage 

Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-02 Thread Mike Tutkowski
Also, give some thought in your design to how VM migration will work.

Thanks!

On Monday, June 2, 2014, Mike Tutkowski mike.tutkow...@solidfire.com
wrote:

 It is an interesting idea. If the constraints you face at your company can
 be corrected somewhat by implementing this, then you should go for it.

 It sounds like writes will be placed on the slower storage pool. This
 means as you update OS components, those updates will be placed on the
 slower storage pool. As such, your performance is likely to somewhat
 decrease over time (as more and more writes end up on the slower storage
 pool).

 That may be OK for your use case(s), though.

 You'll have to update the storage-pool orchestration logic to take this
 new scheme into account.

 Also, we'll have to figure out how this ties into storage tagging (if at
 all).

 I'd be happy to review your design and code.


 On Mon, Jun 2, 2014 at 1:54 AM, Hieu LE hieul...@gmail.com wrote:

 Thanks Mike and Punith for quick reply.

 Both solutions you gave here are absolutely correct. But as I mentioned in
 the first email, I want another better solution for current infrastructure
 at my company.

 Creating a high IOPS primary storage using storage tags is good but it will
 be very waste of disk capacity. For example, if I only have 1TB SSD and
 deploy 100 VM from a 100GB template.

 So I think about a solution where a high IOPS primary storage can only
 store golden image (master image), and a child image of this VM will be
 stored in another normal (NFS, ISCSI...) storage. In this case, with 1TB
 SSD Primary Storage I can store as much golden image as I need.

 I have also tested it with 256 GB SSD mounted on Xen Server 6.2.0 with 2TB
 local storage 1RPM, 6TB NFS share storage with 1GB network. The IOPS of
 VMs which have golden image (master image) in SSD and child image in NFS
 increate more than 30-40% compare with VMs which have both golden image and
 child image in NFS. The boot time of each VM is also decrease. ('cause
 golden image in SSD only reduced READ IOPS).

 Do you think this approach OK ?


 On Mon, Jun 2, 2014 at 12:50 PM, Mike Tutkowski 
 mike.tutkow...@solidfire.com wrote:

  Thanks, Punith - this is similar to what I was going to say.
 
  Any time a set of CloudStack volumes share IOPS from a common pool, you
  cannot guarantee IOPS to a given CloudStack volume at a given time.
 
  Your choices at present are:
 
  1) Use managed storage (where you can create a 1:1 mapping between a
  CloudStack volume and a volume on a storage system that has QoS). As
 Punith
  mentioned, this requires that you purchase storage from a vendor who
  provides guaranteed QoS on a volume-by-volume basis AND has this
 integrated
  into CloudStack.
 
  2) Create primary storage in CloudStack that is not managed, but has a
 high
  number of IOPS (ex. using SSDs). You can then storage tag this primary
  storage and create Compute and Disk Offerings that use this storage tag
 to
  make sure their volumes end up on this storage pool (primary storage).
 This
  will still not guarantee IOPS on a CloudStack volume-by-volume basis, but
  it will at least place the CloudStack volumes that need a better chance
 of
  getting higher IOPS on a storage pool that could provide the necessary
  IOPS. A big downside here is that you want to watch how many CloudStack
  volumes get deployed on this primary storage because you'll need to
  essentially over-provision IOPS in this primary storage to increase the
  probability that each and every CloudStack volume that uses this primary
  storage gets the necessary IOPS (and isn't as likely to suffer from the
  Noisy Neighbor Effect). You should be able to tell CloudStack to only
 use,
  say, 80% (or whatever) of the storage you're providing to it (so as to
  increase your effective IOPS per GB ratio). This over-provisioning of
 IOPS
  to control Noisy Neighbors is avoided in option 1. In that situation, you
  only provision the IOPS and capacity you actually need. It is a much more
  sophisticated approach.
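The "use only, say, 80%" advice comes down to simple arithmetic: capping the usable capacity of a fixed-IOPS pool raises the IOPS available per provisioned GB. A small sketch, with invented pool numbers:

```python
# Rough arithmetic behind over-provisioning IOPS by capping capacity:
# a pool's aggregate IOPS is fixed, so provisioning fewer GB from it
# leaves more IOPS per provisioned GB. Numbers are invented.

pool_iops = 100_000
pool_gb = 4096

def iops_per_gb(usable_fraction):
    return pool_iops / (pool_gb * usable_fraction)

full = iops_per_gb(1.0)    # pool fully provisioned
capped = iops_per_gb(0.8)  # only 80% of capacity handed to CloudStack

print(round(full, 1), round(capped, 1))  # 24.4 30.5
```

This improves the odds against noisy neighbors but, as noted above, still guarantees nothing per volume; option 1 avoids the waste by provisioning exactly the IOPS and capacity needed.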
 
  Thanks,
  Mike
 
 
  On Sun, Jun 1, 2014 at 11:36 PM, Punith S punit...@cloudbyte.com
 wrote:
 
   hi hieu,
  
   your problem is the bottle neck we see as a storage vendors in the
 cloud,
   meaning all the vms in the cloud have not been guaranteed iops from the
   primary storage, because in your case i'm assuming you are running
  1000vms
   on a xen cluster whose all vm's disks are lying on a same primary nfs
   storage mounted to the cluster,
   hence you won't get the dedicated iops for each vm since every vm is
   sharing the same storage. to solve this issue in cloudstack we the
 third
   party vendors have implemented the plugin(namely cloudbyte , solidfire
  etc)
   to support managed storage(dedicated volumes with guaranteed qos for
 each
   vms) , where we are mapping each root disk(vdi) or data disk of a vm
 with
   one nfs or iscsi share coming out of a pool, also we are proposing the

 --
 *Mike Tutkowski*
 

[DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-01 Thread Hieu LE
Hi all,

There are some problems when deploying a large number of VMs in my company
with CloudStack. All VMs are deployed from the same template (e.g. Windows 7)
and the quantity is approximately 1000 VMs. The problems here are low IOPS
and low VM performance (about 10-11 IOPS; boot time is very high). The
storage at my company is SAN/NAS with NFS and Xen Server 6.2.0. All Xen
Server nodes have standard server HDD RAID.

I have found some solutions for this such as:

   - Enable Xen Server IntelliCache, with some tweaks in the CloudStack code
   to deploy and start VMs in IntelliCache mode. But this solution transfers
   all IOPS from shared storage to local storage, hence affecting and
   limiting some CloudStack features.
   - Buy some expensive storage and network solutions to increase IOPS.
   Nah..

So, I am thinking about a new feature that may increase the IOPS and
performance of VMs:

   1. Separate the golden image onto a high IOPS partition: buy a new SSD,
   plug it into Xen Server, and deploy new VMs in NFS storage WITH the golden
   image on this new SSD partition. This can reduce READ IOPS on the shared
   storage and decrease VM boot time. (Currently, a VM deployed on Xen Server
   always has its master image (golden image, in VMware terms) in the same
   storage repository as its differencing image (child image).) We can do
   this trick by tweaking the VHD header file with a new Xen Server plug-in.
   2. Create a golden primary storage, and VM templates that enable this
   feature.
   3. All VMs deployed from a template with this feature enabled will then
   have their golden image stored in the golden primary storage (an SSD or
   other high IOPS partition) and their child image stored on another normal
   primary storage.
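The "tweak the VHD header" step in point 1 amounts to repointing a child VHD's parent locator at the golden copy on the SSD storage repository. XenServer ships a `vhd-util` tool whose `modify` subcommand can rewrite the parent pointer; the sketch below only builds the command line. All paths and SR names are made up, and a real implementation would run inside a XenServer plug-in and keep SR metadata consistent, which this does not attempt:

```python
# Illustrative sketch of repointing a child VHD at a golden image on a
# different SR. Paths/UUIDs are invented; running this for real requires
# the VDI to be detached and the SR metadata to be updated by the plug-in.
import subprocess  # used only in the commented-out real invocation below

def build_reparent_cmd(child_vhd, golden_vhd):
    # XenServer's vhd-util can rewrite the parent locator with `modify -p`.
    return ["vhd-util", "modify", "-n", child_vhd, "-p", golden_vhd]

cmd = build_reparent_cmd(
    "/var/run/sr-mount/nfs-sr-uuid/child.vhd",   # child image on the NFS SR
    "/var/run/sr-mount/ssd-sr-uuid/golden.vhd",  # golden image on the SSD SR
)
print(" ".join(cmd))
# On an actual XenServer host one would then run:
# subprocess.check_call(cmd)
```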

This new feature does not transfer all IOPS from shared storage to local
storage (because the high IOPS partition can be another high IOPS shared
storage) and requires less money than buying a new storage solution.

What do you think? If possible, may I write a proposal in the CloudStack wiki?

BRs.

Hieu Lee

-- 
-BEGIN GEEK CODE BLOCK-
Version: 3.1
GCS/CM/IT/M/MU d-@? s+(++):+(++) !a C()$ ULC(++)$ P L++(+++)$ E
!W N* o+ K w O- M V- PS+ PE++ Y+ PGP+ t 5 X R tv+ b+(++)+++ DI- D+ G
e++(+++) h-- r(++)+++ y-
--END GEEK CODE BLOCK--


Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-01 Thread Punith S
hi hieu,

your problem is the bottleneck we see as storage vendors in the cloud:
the VMs in the cloud are not guaranteed IOPS from the primary storage. In
your case I'm assuming you are running 1000 VMs on a Xen cluster whose VM
disks all lie on the same primary NFS storage mounted to the cluster, so
you won't get dedicated IOPS for each VM, since every VM is sharing the
same storage. To solve this issue in CloudStack, we third-party vendors
have implemented plugins (namely CloudByte, SolidFire etc.) to support
managed storage (dedicated volumes with guaranteed QoS for each VM), where
we map each root disk (VDI) or data disk of a VM to one NFS or iSCSI share
coming out of a pool. We are also proposing a new feature in 4.5 to change
volume IOPS on the fly, where you can increase or decrease your root disk
IOPS while booting or at peak times. But to use this plugin you have to
buy our storage solution.

If not, you can try creating an NFS share out of an SSD-backed pool and
creating a primary storage in CloudStack out of it, named as a golden
primary storage with a specific tag like gold, then create a compute
offering for your template with the storage tag gold. All the VMs you
create will then sit on this gold primary storage with high IOPS, with
other data disks on other primary storage. But even here you cannot
guarantee QoS at the VM level.

thanks


On Mon, Jun 2, 2014 at 10:12 AM, Hieu LE hieul...@gmail.com wrote:

 Hi all,

 There are some problems while deploying a large amount of VMs in my company
 with CloudStack. All VMs are deployed from same template (e.g: Windows 7)
 and the quantity is approximately ~1000VMs. The problems here is low IOPS,
 low performance of VM (about ~10-11 IOPS, boot time is very high). The
 storage of my company is SAN/NAS with NFS and Xen Server 6.2.0. All Xen
 Server nodes have standard server HDD disk raid.

 I have found some solutions for this such as:

- Enable Xen Server Intellicache and some tweaks in CloudStack codes to
deploy and start VM in Intellicache mode. But this solution will
 transfer
all IOPS from shared storage to all local storage, hence affect and
 limit
some CloudStack features.
- Buying some expensive storage solutions and network to increase IOPS.
Nah..

 So, I am thinking about a new feature that (may be) increasing IOPS and
 performance of VMs:

1. Separate golden image in high IOPS partition: buying new SSD, plug in
Xen Server and deployed a new VM in NFS storage WITH golden image in
 this
new SSD partition. This can reduce READ IOPS in shared storage and
 decrease
boot time of VM. (Currenty, VM deployed in Xen Server always have a
 master
image (golden image - in VMWare) always in the same storage repository
 with
different image (child image)). We can do this trick by tweaking in VHD
header file with new Xen Server plug-in.
2. Create golden primary storage and VM template that enable this
feature.
3. So, all VMs deployed from template that had enabled this feature will
have a golden image stored in golden primary storage (SSD or some high
 IOPS
partition), and different image (child image) stored in other normal
primary storage.

 This new feature will not transfer all IOPS from shared storage to local
 storage (because high IOPS partition can be another high IOPS shared
 storage) and require less money than buying new storage solution.

 What do you think ? If possible, may I write a proposal in CloudStack wiki
 ?

 BRs.

 Hieu Lee

 --
 -BEGIN GEEK CODE BLOCK-
 Version: 3.1
 GCS/CM/IT/M/MU d-@? s+(++):+(++) !a C()$ ULC(++)$ P L++(+++)$
 E
 !W N* o+ K w O- M V- PS+ PE++ Y+ PGP+ t 5 X R tv+ b+(++)+++ DI- D+ G
 e++(+++) h-- r(++)+++ y-
 --END GEEK CODE BLOCK--




-- 
regards,

punith s
cloudbyte.com


Re: [DISCUSS] Increasing VM IOPS by separating golden image in high IOPS partition in Xen Server ?

2014-06-01 Thread Mike Tutkowski
Thanks, Punith - this is similar to what I was going to say.

Any time a set of CloudStack volumes share IOPS from a common pool, you
cannot guarantee IOPS to a given CloudStack volume at a given time.

Your choices at present are:

1) Use managed storage (where you can create a 1:1 mapping between a
CloudStack volume and a volume on a storage system that has QoS). As Punith
mentioned, this requires that you purchase storage from a vendor who
provides guaranteed QoS on a volume-by-volume basis AND has this integrated
into CloudStack.

2) Create primary storage in CloudStack that is not managed, but has a high
number of IOPS (ex. using SSDs). You can then storage tag this primary
storage and create Compute and Disk Offerings that use this storage tag to
make sure their volumes end up on this storage pool (primary storage). This
will still not guarantee IOPS on a CloudStack volume-by-volume basis, but
it will at least place the CloudStack volumes that need a better chance of
getting higher IOPS on a storage pool that could provide the necessary
IOPS. A big downside here is that you want to watch how many CloudStack
volumes get deployed on this primary storage because you'll need to
essentially over-provision IOPS in this primary storage to increase the
probability that each and every CloudStack volume that uses this primary
storage gets the necessary IOPS (and isn't as likely to suffer from the
Noisy Neighbor Effect). You should be able to tell CloudStack to only use,
say, 80% (or whatever) of the storage you're providing to it (so as to
increase your effective IOPS per GB ratio). This over-provisioning of IOPS
to control Noisy Neighbors is avoided in option 1. In that situation, you
only provision the IOPS and capacity you actually need. It is a much more
sophisticated approach.

Thanks,
Mike


On Sun, Jun 1, 2014 at 11:36 PM, Punith S punit...@cloudbyte.com wrote:

 hi hieu,

 your problem is the bottle neck we see as a storage vendors in the cloud,
 meaning all the vms in the cloud have not been guaranteed iops from the
 primary storage, because in your case i'm assuming you are running 1000vms
 on a xen cluster whose all vm's disks are lying on a same primary nfs
 storage mounted to the cluster,
 hence you won't get the dedicated iops for each vm since every vm is
 sharing the same storage. to solve this issue in cloudstack we the third
 party vendors have implemented the plugin(namely cloudbyte , solidfire etc)
 to support managed storage(dedicated volumes with guaranteed qos for each
 vms) , where we are mapping each root disk(vdi) or data disk of a vm with
 one nfs or iscsi share coming out of a pool, also we are proposing the new
 feature to change volume iops on fly in 4.5, where you can increase or
 decrease your root disk iops while booting or at peak times. but to use
 this plugin you have to buy our storage solution.

 if not , you can try creating a nfs share out of ssd pool storage and
 create a primary storage in cloudstack out of it named as golden primary
 storage with specific tag like gold, and create a compute offering for your
 template with the storage tag as gold, hence all the vm's you create will
 sit on this gold primary storage with high iops. and other data disks on
 other primary storage but still here you cannot guarantee the qos at vm
 level.

 thanks


 On Mon, Jun 2, 2014 at 10:12 AM, Hieu LE hieul...@gmail.com wrote:

 Hi all,

 There are some problems while deploying a large amount of VMs in my
 company
 with CloudStack. All VMs are deployed from same template (e.g: Windows 7)
 and the quantity is approximately ~1000VMs. The problems here is low IOPS,
 low performance of VM (about ~10-11 IOPS, boot time is very high). The
 storage of my company is SAN/NAS with NFS and Xen Server 6.2.0. All Xen
 Server nodes have standard server HDD disk raid.

 I have found some solutions for this such as:

- Enable Xen Server Intellicache and some tweaks in CloudStack codes to
deploy and start VM in Intellicache mode. But this solution will
 transfer
all IOPS from shared storage to all local storage, hence affect and
 limit
some CloudStack features.
- Buying some expensive storage solutions and network to increase IOPS.
Nah..

 So, I am thinking about a new feature that (may be) increasing IOPS and
 performance of VMs:

1. Separate golden image in high IOPS partition: buying new SSD, plug
 in
Xen Server and deployed a new VM in NFS storage WITH golden image in
 this
new SSD partition. This can reduce READ IOPS in shared storage and
 decrease
boot time of VM. (Currenty, VM deployed in Xen Server always have a
 master
image (golden image - in VMWare) always in the same storage repository
 with
different image (child image)). We can do this trick by tweaking in VHD
header file with new Xen Server plug-in.
2. Create golden primary storage and VM template that enable this
feature.
3. So, all VMs deployed from template that had