Mike,

Are the checks only implemented in the UI?
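
If they are UI-only, a server-side guard might look something like this
(just a sketch; the field names are hypothetical):

    // Hypothetical check in the disk offering validation path:
    // hypervisor-side I/O throttling (Wei's feature) and storage-side
    // QoS (Mike's feature) should not both be set at once.
    if (bytesReadRate != null && minIops != null) {
        throw new InvalidParameterValueException(
            "Enable either hypervisor I/O throttling or storage QoS, not both.");
    }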

Thanks,
-John




On Jun 12, 2013, at 1:02 PM, Mike Tutkowski
<mike.tutkow...@solidfire.com> wrote:

> Hi John,
>
> Wei and I have discussed making the two features mutually exclusive. We
> agree with you that only one should be active at a time. We plan to
> implement a mechanism in the GUI (maybe radio buttons) to turn his feature
> on and mine off, and vice versa.
>
> I was thinking that if I wait until he checks in his code and then update
> and merge, I will be the person resolving the merge conflicts in the
> JavaScript code (there shouldn't be a problem in the Java code), as
> opposed to putting that work on someone else.
>
> Let me know what you think.
>
> Oh, I was going to ask you what "FS" stands for here.
>
> Thanks!
>
>
> On Wed, Jun 12, 2013 at 10:56 AM, John Burwell <jburw...@basho.com> wrote:
>
>> Mike,
>>
>> How have Wei and you resolved the issue of conflicting QoS mechanisms
>> between the Hypervisor and Storage layers?  Have the affected FSs been
>> updated with that decision?
>>
>> In terms of merge timing, can you describe the dependencies between the
>> patches?
>>
>> Thanks,
>> -John
>>
>>
>>
>>
>> On Jun 12, 2013, at 12:43 PM, Mike Tutkowski
>> <mike.tutkow...@solidfire.com> wrote:
>>
>>> No problem, John.
>>>
>>> I still want to re-review it by myself before coming up with a new patch
>>> file.
>>>
>>> Also, maybe I should first wait for Wei's changes to be checked in and
>>> merge those into mine before generating a new patch file?
>>>
>>>
>>> On Wed, Jun 12, 2013 at 10:40 AM, John Burwell <jburw...@basho.com>
>> wrote:
>>>
>>>> Mike,
>>>>
>>>> I just realized that I forgot to publish my review.  I am offline ATM,
>>>> but I will publish it in the next couple of hours.
>>>>
>>>> Do you plan to update your patch in Review Board?
>>>>
>>>> Sorry for the oversight,
>>>> -John
>>>>
>>>>
>>>>
>>>>
>>>> On Jun 12, 2013, at 2:26 AM, Mike Tutkowski
>>>> <mike.tutkow...@solidfire.com> wrote:
>>>>
>>>>> Hi Edison, John, and Wei (and whoever else is reading this :) ),
>>>>>
>>>>> Just an FYI that I believe I have implemented all the areas we wanted
>>>>> addressed.
>>>>>
>>>>> I plan to review the code again tomorrow morning or afternoon, then
>>>>> send in another patch.
>>>>>
>>>>> Thanks for all the work on this everyone!
>>>>>
>>>>>
>>>>> On Tue, Jun 11, 2013 at 12:29 PM, Mike Tutkowski <
>>>>> mike.tutkow...@solidfire.com> wrote:
>>>>>
>>>>>> Sure, that sounds good.
>>>>>>
>>>>>>
>>>>>> On Tue, Jun 11, 2013 at 12:11 PM, Wei ZHOU <ustcweiz...@gmail.com>
>>>> wrote:
>>>>>>
>>>>>>> Hi Mike,
>>>>>>>
>>>>>>> It looks like the two features do not have many conflicts in the
>>>>>>> Java code, except for the CloudStack UI.
>>>>>>> If you do not mind, I will merge the disk_io_throttling branch into
>>>>>>> master this week, so that you can develop based on it.
>>>>>>>
>>>>>>> -Wei
>>>>>>>
>>>>>>>
>>>>>>> 2013/6/11 Mike Tutkowski <mike.tutkow...@solidfire.com>
>>>>>>>
>>>>>>>> Hey John,
>>>>>>>>
>>>>>>>> The SolidFire patch does not depend on the object_store branch,
>>>>>>>> but - as Edison mentioned - it might be easier if we merge the
>>>>>>>> SolidFire branch into the object_store branch before object_store
>>>>>>>> goes into master.
>>>>>>>>
>>>>>>>> I'm not sure how the disk_io_throttling branch fits into this merge
>>>>>>>> strategy. Perhaps Wei can chime in on that.
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Jun 11, 2013 at 11:07 AM, John Burwell <jburw...@basho.com>
>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Mike,
>>>>>>>>>
>>>>>>>>> We have a delicate merge dance to perform.  The disk_io_throttling,
>>>>>>>>> solidfire, and object_store branches appear to have a number of
>>>>>>>>> overlapping elements.  I understand the dependencies between the
>>>>>>>>> patches to be as follows:
>>>>>>>>>
>>>>>>>>>      object_store <- solidfire -> disk_io_throttling
>>>>>>>>>
>>>>>>>>> Am I correct that the device management aspects of SolidFire are
>>>>>>>>> additive to the object_store branch, or is there a circular
>>>>>>>>> dependency between the branches?  Once we understand the dependency
>>>>>>>>> graph, we can determine the best approach to land the changes in
>>>>>>>>> master.
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> -John
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Jun 10, 2013, at 11:10 PM, Mike Tutkowski <
>>>>>>>> mike.tutkow...@solidfire.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Also, if we are good with Edison merging my code into his branch
>>>>>>>>>> before going into master, I am good with that.
>>>>>>>>>>
>>>>>>>>>> We can remove the StoragePoolType.Dynamic code after his merge and
>>>>>>>>>> we can deal with Burst IOPS then, as well.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Mon, Jun 10, 2013 at 9:08 PM, Mike Tutkowski <
>>>>>>>>>> mike.tutkow...@solidfire.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Let me make sure I follow where we're going here:
>>>>>>>>>>>
>>>>>>>>>>> 1) There should be NO references to hypervisor code in the
>>>>>>>>>>> storage plug-in code (this includes the default storage plug-in,
>>>>>>>>>>> which currently sends several commands to the hypervisor in use,
>>>>>>>>>>> although it does not know which hypervisor (XenServer, ESX(i),
>>>>>>>>>>> etc.) is actually in use).
>>>>>>>>>>>
>>>>>>>>>>> 2) managed=true or managed=false can be placed in the url field
>>>>>>>>>>> (if not present, we default to false). This info is stored in the
>>>>>>>>>>> storage_pool_details table.
>>>>>>>>>>>
>>>>>>>>>>> 3) When the "attach" command is sent to the hypervisor in
>>>>>>>>>>> question, we pass the managed property along (this takes the
>>>>>>>>>>> place of the StoragePoolType.Dynamic check).
>>>>>>>>>>>
>>>>>>>>>>> 4) execute(AttachVolumeCommand) in the hypervisor checks for the
>>>>>>>>>>> managed property. If true for an attach, the necessary hypervisor
>>>>>>>>>>> data structure is created and the rest of the attach command
>>>>>>>>>>> executes to attach the volume (see the sketch after this list).
>>>>>>>>>>>
>>>>>>>>>>> 5) When execute(AttachVolumeCommand) is invoked to detach a
>>>>>>>>>>> volume, the same check is made. If managed, the hypervisor data
>>>>>>>>>>> structure is removed.
>>>>>>>>>>>
>>>>>>>>>>> 6) I do not see a clear way to support Burst IOPS in 4.2 unless
>>>>>>>>>>> it is stored in the volumes and disk_offerings tables. If anyone
>>>>>>>>>>> has an idea, that'd be cool.
>>>>>>>>>>>
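>>>>>>>>>>> To make (4) and (5) concrete, here is a rough sketch of the
>>>>>>>>>>> managed check (the accessor and helper names are hypothetical;
>>>>>>>>>>> the real command class may differ):
>>>>>>>>>>>
>>>>>>>>>>>     // Hypothetical sketch of the hypervisor-side handler.
>>>>>>>>>>>     public Answer execute(AttachVolumeCommand cmd) {
>>>>>>>>>>>         // managed flag carried over from storage_pool_details
>>>>>>>>>>>         boolean managed = cmd.isManaged();
>>>>>>>>>>>         if (cmd.getAttach()) {
>>>>>>>>>>>             if (managed) {
>>>>>>>>>>>                 // create the hypervisor data structure (e.g. an
>>>>>>>>>>>                 // SR on XenServer) before the attach proceeds
>>>>>>>>>>>                 createHypervisorStorage(cmd);
>>>>>>>>>>>             }
>>>>>>>>>>>             return attachVolume(cmd);
>>>>>>>>>>>         } else {
>>>>>>>>>>>             Answer answer = detachVolume(cmd);
>>>>>>>>>>>             if (managed) {
>>>>>>>>>>>                 // detach path: tear the data structure down
>>>>>>>>>>>                 removeHypervisorStorage(cmd);
>>>>>>>>>>>             }
>>>>>>>>>>>             return answer;
>>>>>>>>>>>         }
>>>>>>>>>>>     }
>>>>>>>>>>>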
>>>>>>>>>>> Thanks!
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Jun 10, 2013 at 8:58 PM, Mike Tutkowski <
>>>>>>>>>>> mike.tutkow...@solidfire.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> "+1 -- Burst IOPS can be implemented while avoiding
>> implementation
>>>>>>>>>>>> attributes.  I always wondered about the details field.  I think
>>>>>>> we
>>>>>>>>> should
>>>>>>>>>>>> beef up the description in the documentation regarding the
>>>>>>> expected
>>>>>>>>> format
>>>>>>>>>>>> of the field.  In 4.1, I noticed that the details are not
>>>>>>> returned on
>>>>>>>>> the
>>>>>>>>>>>> createStoratePool updateStoragePool, or listStoragePool
>> response.
>>>>>>>> Why
>>>>>>>>>>>> don't we return it?  It seems like it would be useful for
>> clients
>>>>>>> to
>>>>>>>> be
>>>>>>>>>>>> able to inspect the contents of the details field."
>>>>>>>>>>>>
>>>>>>>>>>>> I'm not sure how storing Burst IOPS here would work.
>>>>>>>>>>>>
>>>>>>>>>>>> Burst IOPS need to be variable on a Disk Offering-by-Disk
>>>>>>>>>>>> Offering basis. For each Disk Offering created, you have to be
>>>>>>>>>>>> able to associate unique Burst IOPS with it. There is a
>>>>>>>>>>>> disk_offering_details table. Maybe it could go there?
>>>>>>>>>>>>
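>>>>>>>>>>>> If it did go there, the write might look something like this
>>>>>>>>>>>> (just a sketch; the VO/DAO names are hypothetical):
>>>>>>>>>>>>
>>>>>>>>>>>>     // Hypothetical: persist Burst IOPS as a row in
>>>>>>>>>>>>     // disk_offering_details, keyed by the disk offering id.
>>>>>>>>>>>>     DiskOfferingDetailVO detail = new DiskOfferingDetailVO(
>>>>>>>>>>>>         diskOfferingId, "burstIops", String.valueOf(burstIops));
>>>>>>>>>>>>     diskOfferingDetailsDao.persist(detail);
>>>>>>>>>>>>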
>>>>>>>>>>>> I'm also not sure how you would accept the Burst IOPS in the
>>>>>>>>>>>> GUI if it's not stored like the Min and Max fields are in the DB.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>>> e: mike.tutkow...@solidfire.com
>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>> Advancing the way the world uses the cloud
>>>>>>>>>>> <http://solidfire.com/solution/overview/?video=play>
>>>>>>>>>>> *™*
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>> e: mike.tutkow...@solidfire.com
>>>>>>>>>> o: 303.746.7302
>>>>>>>>>> Advancing the way the world uses the
>>>>>>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>>>>> *™*
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> *Mike Tutkowski*
>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>> e: mike.tutkow...@solidfire.com
>>>>>>>> o: 303.746.7302
>>>>>>>> Advancing the way the world uses the
>>>>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>>> *™*
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> *Mike Tutkowski*
>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> e: mike.tutkow...@solidfire.com
>>>>>> o: 303.746.7302
>>>>>> Advancing the way the world uses the cloud
>>>>>> <http://solidfire.com/solution/overview/?video=play>
>>>>>> *™*
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> *Mike Tutkowski*
>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>> e: mike.tutkow...@solidfire.com
>>>>> o: 303.746.7302
>>>>> Advancing the way the world uses the
>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
>>>>> *™*
>>>
>>>
>>>
>>> --
>>> *Mike Tutkowski*
>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> e: mike.tutkow...@solidfire.com
>>> o: 303.746.7302
>>> Advancing the way the world uses the
>>> cloud<http://solidfire.com/solution/overview/?video=play>
>>> *™*
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
