Hi,

Yeah, Alex, I think that's the way we were planning to go (with storage
tags). I believe John was just throwing out the idea that - in addition
to storage tags - we could have the allocators prefer storage QoS and
fall back to hypervisor QoS when storage QoS is not available but
hypervisor QoS is.
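
To make that concrete, here is a rough sketch of the kind of allocator
preference I understand John to be describing (purely illustrative; none
of these type or method names come from the actual CloudStack allocator
code):

import java.util.List;

// Illustrative only: not the real CloudStack allocator API.
class QosPreferenceSketch {

    enum QosSupport { STORAGE, HYPERVISOR, NONE }

    interface CandidatePool {
        QosSupport qosSupport();
    }

    // Prefer a pool whose driver honors storage QoS natively; otherwise
    // fall back to the first pool whose hypervisor can throttle I/O;
    // otherwise return null (neither form of QoS is available).
    static CandidatePool choose(List<CandidatePool> candidates) {
        CandidatePool hypervisorFallback = null;
        for (CandidatePool pool : candidates) {
            if (pool.qosSupport() == QosSupport.STORAGE) {
                return pool;                       // storage QoS preferred
            }
            if (hypervisorFallback == null
                    && pool.qosSupport() == QosSupport.HYPERVISOR) {
                hypervisorFallback = pool;         // first usable fallback
            }
        }
        return hypervisorFallback;
    }
}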

I think John's concern is that you can enter values for Wei's feature
and mine that are not honored by other vendors (at least not yet), so he
was hoping - in addition to storage tags - that the allocators would
prefer these vendors when those fields are filled in. As it stands today
in CloudStack, we already have this kind of issue with other features
(fields in dialogs for features that not all vendors support). Perhaps
post 4.2 we could look into generic name/value pairs (this is how
OpenStack addresses the issue).
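
For what it's worth, the name/value idea might look roughly like this:
an opaque details map on the offering that each storage driver inspects
for keys it understands. The key names below are made up for
illustration, not an existing CloudStack or SolidFire convention.

import java.util.HashMap;
import java.util.Map;

// Sketch only: vendor-specific settings travel as opaque name/value
// pairs, and a driver acts solely on the keys it recognizes.
class OfferingDetailsSketch {
    public static void main(String[] args) {
        Map<String, String> details = new HashMap<>();
        details.put("solidfire.minIops", "500");    // hypothetical keys
        details.put("solidfire.maxIops", "1000");
        details.put("solidfire.burstIops", "1500");

        String burst = details.get("solidfire.burstIops");
        System.out.println(burst != null
                ? "Driver applies burst IOPS of " + burst
                : "Driver ignores keys it does not understand");
    }
}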

Honestly, I think we're too late in the game (two weeks until code freeze)
to go too deeply down that path in 4.2.

It's probably better if we - at least for 4.2 - keep Wei's fields and
my fields as they are, make sure that only one of the two features (or
neither) has data entered for it, and call it good.
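
Something along these lines is all I have in mind for the 4.2 check
(parameter names are only illustrative - Wei's fields and mine may be
named differently):

// Sketch of the mutual-exclusion check proposed for 4.2: reject an
// offering that sets both the storage QoS fields (min/max IOPS) and the
// hypervisor I/O throttling fields. Parameter names are illustrative.
static void validateQosFields(Long minIops, Long maxIops,
                              Long bytesReadRate, Long bytesWriteRate) {
    boolean storageQos = (minIops != null) || (maxIops != null);
    boolean hypervisorQos = (bytesReadRate != null) || (bytesWriteRate != null);

    if (storageQos && hypervisorQos) {
        throw new IllegalArgumentException(
                "Specify either storage QoS or hypervisor I/O throttling "
                + "for a disk offering, not both.");
    }
}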

Then let's step back and look into a more general-purpose design that can
be applied throughout CloudStack where we have these situations.

What do you think?



On Wed, Jun 12, 2013 at 5:21 PM, John Burwell <jburw...@basho.com> wrote:

> Mike,
>
> I just published my review @ https://reviews.apache.org/r/11479/.
>
> I apologize for the delay,
> -John
>
> On Jun 12, 2013, at 12:43 PM, Mike Tutkowski <mike.tutkow...@solidfire.com>
> wrote:
>
> > No problem, John.
> >
> > I still want to re-review it myself before coming up with a new patch
> > file.
> >
> > Also, maybe I should first wait for Wei's changes to be checked in and
> > merge those into mine before generating a new patch file?
> >
> >
> > On Wed, Jun 12, 2013 at 10:40 AM, John Burwell <jburw...@basho.com>
> wrote:
> >
> >> Mike,
> >>
> >> I just realized that I forgot to publish my review.  I am offline ATM,
> >> but I will publish it in the next couple of hours.
> >>
> >> Do you plan to update your patch in Review Board?
> >>
> >> Sorry for the oversight,
> >> -John
> >>
> >>
> >>
> >>
> >> On Jun 12, 2013, at 2:26 AM, Mike Tutkowski
> >> <mike.tutkow...@solidfire.com> wrote:
> >>
> >>> Hi Edison, John, and Wei (and whoever else is reading this :) ),
> >>>
> >>> Just an FYI that I believe I have implemented all the areas we wanted
> >>> addressed.
> >>>
> >>> I plan to review the code again tomorrow morning or afternoon, then
> >>> send in another patch.
> >>>
> >>> Thanks for all the work on this everyone!
> >>>
> >>>
> >>> On Tue, Jun 11, 2013 at 12:29 PM, Mike Tutkowski <
> >>> mike.tutkow...@solidfire.com> wrote:
> >>>
> >>>> Sure, that sounds good.
> >>>>
> >>>>
> >>>> On Tue, Jun 11, 2013 at 12:11 PM, Wei ZHOU <ustcweiz...@gmail.com>
> >> wrote:
> >>>>
> >>>>> Hi Mike,
> >>>>>
> >>>>> It looks like the two features do not have many conflicts in the
> >>>>> Java code, except for the CloudStack UI.
> >>>>> If you do not mind, I will merge the disk_io_throttling branch into
> >>>>> master this week, so that you can develop based on it.
> >>>>>
> >>>>> -Wei
> >>>>>
> >>>>>
> >>>>> 2013/6/11 Mike Tutkowski <mike.tutkow...@solidfire.com>
> >>>>>
> >>>>>> Hey John,
> >>>>>>
> >>>>>> The SolidFire patch does not depend on the object_store branch,
> >>>>>> but - as Edison mentioned - it might be easier if we merge the
> >>>>>> SolidFire branch into the object_store branch before object_store
> >>>>>> goes into master.
> >>>>>>
> >>>>>> I'm not sure how the disk_io_throttling branch fits into this
> >>>>>> merge strategy. Perhaps Wei can chime in on that.
> >>>>>>
> >>>>>>
> >>>>>> On Tue, Jun 11, 2013 at 11:07 AM, John Burwell <jburw...@basho.com>
> >>>>> wrote:
> >>>>>>
> >>>>>>> Mike,
> >>>>>>>
> >>>>>>> We have a delicate merge dance to perform.  The disk_io_throttling,
> >>>>>>> solidfire, and object_store appear to have a number of overlapping
> >>>>>>> elements.  I understand the dependencies between the patches to
> >>>>>>> be as follows:
> >>>>>>>
> >>>>>>>       object_store <- solidfire -> disk_io_throttling
> >>>>>>>
> >>>>>>> Am I correct that the device management aspects of SolidFire are
> >>>>>>> additive to the object_store branch, or is there a circular
> >>>>>>> dependency between the branches?  Once we understand the
> >>>>>>> dependency graph, we can determine the best approach to land the
> >>>>>>> changes in master.
> >>>>>>>
> >>>>>>> Thanks,
> >>>>>>> -John
> >>>>>>>
> >>>>>>>
> >>>>>>> On Jun 10, 2013, at 11:10 PM, Mike Tutkowski <
> >>>>>> mike.tutkow...@solidfire.com>
> >>>>>>> wrote:
> >>>>>>>
> >>>>>>>> Also, if we are good with Edison merging my code into his
> >>>>>>>> branch before going into master, I am good with that.
> >>>>>>>>
> >>>>>>>> We can remove the StoragePoolType.Dynamic code after his merge
> >>>>>>>> and we can deal with Burst IOPS then, as well.
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> On Mon, Jun 10, 2013 at 9:08 PM, Mike Tutkowski <
> >>>>>>>> mike.tutkow...@solidfire.com> wrote:
> >>>>>>>>
> >>>>>>>>> Let me make sure I follow where we're going here:
> >>>>>>>>>
> >>>>>>>>> 1) There should be NO references to hypervisor code in the
> >>>>>>>>> storage plug-ins code (this includes the default storage
> >>>>>>>>> plug-in, which currently sends several commands to the
> >>>>>>>>> hypervisor in use (although it does not know which hypervisor
> >>>>>>>>> (XenServer, ESX(i), etc.) is actually in use)).
> >>>>>>>>>
> >>>>>>>>> 2) managed=true or managed=false can be placed in the url field
> >>>>>>>>> (if not present, we default to false). This info is stored in
> >>>>>>>>> the storage_pool_details table.
> >>>>>>>>>
> >>>>>>>>> 3) When the "attach" command is sent to the hypervisor in
> >>>>>>>>> question, we pass the managed property along (this takes the
> >>>>>>>>> place of the StoragePoolType.Dynamic check).
> >>>>>>>>>
> >>>>>>>>> 4) execute(AttachVolumeCommand) in the hypervisor checks for
> >>>>>>>>> the managed property. If true for an attach, the necessary
> >>>>>>>>> hypervisor data structure is created and the rest of the attach
> >>>>>>>>> command executes to attach the volume.
> >>>>>>>>>
> >>>>>>>>> 5) When execute(AttachVolumeCommand) is invoked to detach a
> >>>>>>>>> volume, the same check is made. If managed, the hypervisor data
> >>>>>>>>> structure is removed.
> >>>>>>>>>
> >>>>>>>>> 6) I do not see a clear way to support Burst IOPS in 4.2 unless
> >>>>>>>>> it is stored in the volumes and disk_offerings tables. If we
> >>>>>>>>> have some idea, that'd be cool.
> >>>>>>>>>
> >>>>>>>>> Thanks!
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> On Mon, Jun 10, 2013 at 8:58 PM, Mike Tutkowski <
> >>>>>>>>> mike.tutkow...@solidfire.com> wrote:
> >>>>>>>>>
> >>>>>>>>>> "+1 -- Burst IOPS can be implemented while avoiding
> implementation
> >>>>>>>>>> attributes.  I always wondered about the details field.  I think
> >>>>> we
> >>>>>>> should
> >>>>>>>>>> beef up the description in the documentation regarding the
> >>>>> expected
> >>>>>>> format
> >>>>>>>>>> of the field.  In 4.1, I noticed that the details are not
> >>>>> returned on
> >>>>>>> the
> >>>>>>>>>> createStoratePool updateStoragePool, or listStoragePool
> response.
> >>>>>> Why
> >>>>>>>>>> don't we return it?  It seems like it would be useful for
> clients
> >>>>> to
> >>>>>> be
> >>>>>>>>>> able to inspect the contents of the details field."
> >>>>>>>>>>
> >>>>>>>>>> Not sure how this would work storing Burst IOPS here.
> >>>>>>>>>>
> >>>>>>>>>> Burst IOPS need to be variable on a Disk Offering-by-Disk
> Offering
> >>>>>>>>>> basis. For each Disk Offering created, you have to be able to
> >>>>>> associate
> >>>>>>>>>> unique Burst IOPS. There is a disk_offering_details table. Maybe
> >>>>> it
> >>>>>>> could
> >>>>>>>>>> go there?
> >>>>>>>>>>
> >>>>>>>>>> I'm also not sure how you would accept the Burst IOPS in the GUI
> >>>>> if
> >>>>>>> it's
> >>>>>>>>>> not stored like the Min and Max fields are in the DB.
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> --
> >>>>>>>>> *Mike Tutkowski*
> >>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>>>>> e: mike.tutkow...@solidfire.com
> >>>>>>>>> o: 303.746.7302
> >>>>>>>>> Advancing the way the world uses the cloud<
> >>>>>>> http://solidfire.com/solution/overview/?video=play>
> >>>>>>>>> *™*
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> --
> >>>>>>>> *Mike Tutkowski*
> >>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>>>> e: mike.tutkow...@solidfire.com
> >>>>>>>> o: 303.746.7302
> >>>>>>>> Advancing the way the world uses the
> >>>>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
> >>>>>>>> *™*
> >>>>>>
> >>>>>>
> >>>>>> --
> >>>>>> *Mike Tutkowski*
> >>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>>> e: mike.tutkow...@solidfire.com
> >>>>>> o: 303.746.7302
> >>>>>> Advancing the way the world uses the
> >>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
> >>>>>> *™*
> >>>>
> >>>>
> >>>>
> >>>> --
> >>>> *Mike Tutkowski*
> >>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>> e: mike.tutkow...@solidfire.com
> >>>> o: 303.746.7302
> >>>> Advancing the way the world uses the cloud<
> >> http://solidfire.com/solution/overview/?video=play>
> >>>> *™*
> >>>
> >>>
> >>>
> >>> --
> >>> *Mike Tutkowski*
> >>> *Senior CloudStack Developer, SolidFire Inc.*
> >>> e: mike.tutkow...@solidfire.com
> >>> o: 303.746.7302
> >>> Advancing the way the world uses the
> >>> cloud<http://solidfire.com/solution/overview/?video=play>
> >>> *™*
> >>
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkow...@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud<http://solidfire.com/solution/overview/?video=play>
> > *™*
>
>


-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*
