Mike,
That is one possibility. The other possibility is that the hypervisor is
going to throttle I/O on all attached disks. Therefore, we need answers to
the following questions:
1. Is I/O throttling applied to the root disk or all disks attached to
the VM?
2. If I/O throttling is applied to all disks, how is the throttling
distributed amongst the disks if only one read/write value is defined?
Thanks,
-John
On Jun 12, 2013, at 5:13 PM, Mike Tutkowski <[email protected]>
wrote:
> Hey John,
>
> Perhaps I don't fully understand how Wei's feature works.
>
> I guess I thought if you choose Hypervisor QoS, you do so on Compute
> Offerings (root disks) and/or Disk Offerings (data disks).
>
> My thinking was that your root disk could be under Hypervisor QoS, but your
> data disk could be under Storage QoS.
>
> Is that incorrect?
>
> Thanks
>
>
> On Wed, Jun 12, 2013 at 3:10 PM, John Burwell <[email protected]> wrote:
>
>> Mike,
>>
>> As I understand these two patches, the throttled I/O settings are applied
>> from the hypervisor side, and possibly defined in a compute offering,
>> whereas provisioned IOPS are defined on the storage side through disk
>> offerings. I don't see how the management server could enforce this mutual
>> exclusion between provisioned IOPS and throttled I/O until a compute and a
>> disk offering have been selected. These selections come together when we
>> deploy a VM. Based on these assumptions, I would expect to see enforcement
>> of this rule in the UI at the time of VM creation/definition and on the
>> server side, as part of the VM creation. It feels like any attempt to
>> enforce this rule when defining an offering would be premature, and would
>> unnecessarily limit flexibility. Are my assumptions and understanding
>> correct?
>>
>> Thanks,
>> -John
>>
>> On Jun 12, 2013, at 5:04 PM, Mike Tutkowski <[email protected]>
>> wrote:
>>
>>> Hi John,
>>>
>>> So, maybe I'm wrong about this, but what I was thinking is that we'd
>>> build two radio buttons into the Add Disk Offering dialog (we can ignore
>>> Compute Offerings for 4.2 since my feature doesn't yet support them).
>>>
>>> Let's just call these radio buttons 1) Hypervisor QoS and 2) Storage QoS
>>> for the purposes of this e-mail.
>>>
>>> By default, neither radio button is selected and no QoS fields are
>>> visible.
>>>
>>> If the user selects the Storage QoS radio button, then the Custom IOPS
>>> checkbox and the Min and Max text fields are made visible.
>>>
>>> If the user changes his mind and selects the Hypervisor QoS radio button,
>>> then the Custom IOPS checkbox and the Min and Max text fields disappear
>>> and are replaced by the two Hypervisor QoS text fields.
>>>
>>> This way, the user can choose neither QoS option, or one or the other,
>>> but never both.
>>>
>>> On the API side, I was thinking of having validation logic in place so
>>> that when a request comes in to create a Disk Offering, we confirm these
>>> fields are set the way we want them.
>>>
>>> Once we know the Disk Offering is in order, a user can create a data disk
>>> from it. Since we checked the validity of the Disk Offering when it was
>>> created, the VM should never be asked to use Hypervisor QoS when Storage
>>> QoS is being utilized.
>>>
>>> Does that make sense or did I miss something?
>>>
>>> Thanks
>>>
>>>
>>> On Wed, Jun 12, 2013 at 2:54 PM, John Burwell <[email protected]> wrote:
>>>
>>>> Mike,
>>>>
>>>> Looking through the code, I am trying to understand how
>>>> CreateDiskOfferingCmd would have the context to identify the conflict.
>>>> Naively, it seems to me that this rule would need to be enforced when a
>>>> virtual machine is being deployed. It seems like we should add a private
>>>> validateStorageQoS method to com.cloud.vm.UserVmManagerImpl to check
>>>> this condition and throw a ResourceAllocationException when the QoS
>>>> definitions are inconsistent. We would then add calls to it from each of
>>>> the VM creation methods in the service. Does this type of approach sound
>>>> reasonable?
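>>>>
>>>> For illustration only, here is a rough, self-contained sketch of the
>>>> mutual-exclusion check I have in mind (the names and parameters are
>>>> hypothetical; in CloudStack the method would live in UserVmManagerImpl,
>>>> pull the values from the selected compute and disk offerings, and throw
>>>> ResourceAllocationException rather than IllegalArgumentException):
>>>>
>>>>   public final class QosValidationSketch {
>>>>
>>>>       // Hypervisor-side throttling (bytes/IOPS rates) and storage-side
>>>>       // provisioned IOPS (min/max) must not both be set for a volume.
>>>>       static void validateStorageQoS(Long bytesRate, Long iopsRate,
>>>>                                      Long minIops, Long maxIops) {
>>>>           boolean hypervisorQoS = isSet(bytesRate) || isSet(iopsRate);
>>>>           boolean storageQoS = isSet(minIops) || isSet(maxIops);
>>>>           if (hypervisorQoS && storageQoS) {
>>>>               throw new IllegalArgumentException("Hypervisor I/O throttling and "
>>>>                   + "storage provisioned IOPS cannot both apply to the same volume");
>>>>           }
>>>>       }
>>>>
>>>>       private static boolean isSet(Long value) {
>>>>           return value != null && value > 0;
>>>>       }
>>>>
>>>>       public static void main(String[] args) {
>>>>           validateStorageQoS(null, 1000L, null, null);  // OK: hypervisor QoS only
>>>>           validateStorageQoS(null, 1000L, 500L, 2000L); // throws: both QoS types set
>>>>       }
>>>>   }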
>>>>
>>>> Thanks,
>>>> -John
>>>>
>>>> On Jun 12, 2013, at 4:30 PM, Mike Tutkowski <[email protected]>
>>>> wrote:
>>>>
>>>>> Hi John,
>>>>>
>>>>> So, here's what I was planning to do. Of course, feel free to correct
>>>>> me on this approach.
>>>>>
>>>>> I think it's OK if Wei merges his code into master; then I can pull
>>>>> from the main repo and merge master into my branch locally.
>>>>>
>>>>> 1) Once I get Wei's code and merge it, I plan to add a little GUI code
>>>>> to make it user friendly (a toggle between these features on the Add
>>>>> Disk Offering window).
>>>>>
>>>>> 2) I plan to write validation logic for the create-disk-offering API
>>>>> command which throws an exception if the rules are not followed (this
>>>>> should never be triggered from the GUI since the GUI will have controls
>>>>> in place to toggle between the one feature and the other).
>>>>>
>>>>> I'm not sure about documentation. I haven't had much experience with
>>>>> it on CloudStack projects yet.
>>>>>
>>>>> Thanks!
>>>>>
>>>>>
>>>>> On Wed, Jun 12, 2013 at 2:21 PM, John Burwell <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> Mike,
>>>>>>
>>>>>> Yes, these server-side rails need to be defined and implemented before
>>>>>> either patch can be merged. From my perspective, I would like to see
>>>>>> the rule implemented in the hypervisor as part of the validation of
>>>>>> the virtual machine definition. We also need to make sure that this
>>>>>> mutual exclusion is documented. Do we usually include this type of
>>>>>> documentation with patches of this nature?
>>>>>>
>>>>>> Thanks,
>>>>>> -John
>>>>>>
>>>>>> On Jun 12, 2013, at 2:18 PM, Mike Tutkowski <[email protected]>
>>>>>> wrote:
>>>>>>
>>>>>>> Currently they are not yet implemented.
>>>>>>>
>>>>>>> We have to make sure they are implemented in the GUI from a usability
>>>>>>> standpoint, but the API must check for consistency and throw an
>>>>>>> exception if necessary.
>>>>>>>
>>>>>>>
>>>>>>> On Wed, Jun 12, 2013 at 11:03 AM, John Burwell <[email protected]>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Mike,
>>>>>>>>
>>>>>>>> Are the checks only implemented in the UI?
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> -John
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Jun 12, 2013, at 1:02 PM, Mike Tutkowski
>>>>>>>> <[email protected]> wrote:
>>>>>>>>
>>>>>>>>> Hi John,
>>>>>>>>>
>>>>>>>>> Wei and I have discussed making the two features mutually
>>>>>>>>> exclusive. We agree with you that only one should be active at a
>>>>>>>>> time. We plan to implement a mechanism in the GUI (maybe radio
>>>>>>>>> buttons) to turn his feature on and mine off, and vice versa.
>>>>>>>>>
>>>>>>>>> I was thinking that if I wait until he checks in his code, and then
>>>>>>>>> update and merge, I will be the person resolving the merge conflicts
>>>>>>>>> in the JavaScript code (there shouldn't be a problem in the Java
>>>>>>>>> code), as opposed to putting that work on someone else.
>>>>>>>>>
>>>>>>>>> Let me know what you think.
>>>>>>>>>
>>>>>>>>> Oh, I was going to ask you what "FS" stands for here.
>>>>>>>>>
>>>>>>>>> Thanks!
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Wed, Jun 12, 2013 at 10:56 AM, John Burwell <[email protected]>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Mike,
>>>>>>>>>>
>>>>>>>>>> How have Wei and you resolved the issue of conflicting QoS
>>>>>>>>>> mechanisms between the Hypervisor and Storage layers? Have the
>>>>>>>>>> affected FSs been updated with that decision?
>>>>>>>>>>
>>>>>>>>>> In terms of merge timing, can you describe the dependencies
>>>>>>>>>> between the patches?
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> -John
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Jun 12, 2013, at 12:43 PM, Mike Tutkowski
>>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>>
>>>>>>>>>>> No problem, John.
>>>>>>>>>>>
>>>>>>>>>>> I still want to re-review it by myself before coming up with a
>>>>>>>>>>> new patch file.
>>>>>>>>>>>
>>>>>>>>>>> Also, maybe I should first wait for Wei's changes to be checked
>>>>>>>>>>> in and merge those into mine before generating a new patch file?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Wed, Jun 12, 2013 at 10:40 AM, John Burwell <[email protected]>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Mike,
>>>>>>>>>>>>
>>>>>>>>>>>> I just realized that I forgot to publish my review. I am offline
>>>>>>>>>>>> ATM, but I will publish it in the next couple of hours.
>>>>>>>>>>>>
>>>>>>>>>>>> Do you plan to update your patch in Review Board?
>>>>>>>>>>>>
>>>>>>>>>>>> Sorry for the oversight,
>>>>>>>>>>>> -John
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Jun 12, 2013, at 2:26 AM, Mike Tutkowski
>>>>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi Edison, John, and Wei (and whoever else is reading this :) ),
>>>>>>>>>>>>>
>>>>>>>>>>>>> Just an FYI that I believe I have implemented all the areas we
>>>>>>>>>>>>> wanted addressed.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I plan to review the code again tomorrow morning or afternoon,
>>>>>>>>>>>>> then send in another patch.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks for all the work on this everyone!
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Tue, Jun 11, 2013 at 12:29 PM, Mike Tutkowski <
>>>>>>>>>>>>> [email protected]> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Sure, that sounds good.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Tue, Jun 11, 2013 at 12:11 PM, Wei ZHOU <[email protected]>
>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi Mike,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> It looks like the two features do not have many conflicts in
>>>>>>>>>>>>>>> the Java code, except in the CloudStack UI.
>>>>>>>>>>>>>>> If you do not mind, I will merge the disk_io_throttling
>>>>>>>>>>>>>>> branch into master this week, so that you can develop based
>>>>>>>>>>>>>>> on it.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> -Wei
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 2013/6/11 Mike Tutkowski <[email protected]>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Hey John,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> The SolidFire patch does not depend on the object_store
>>>>>>>>>>>>>>>> branch, but - as Edison mentioned - it might be easier if we
>>>>>>>>>>>>>>>> merge the SolidFire branch into the object_store branch
>>>>>>>>>>>>>>>> before object_store goes into master.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I'm not sure how the disk_io_throttling branch fits into
>>>>>>>>>>>>>>>> this merge strategy. Perhaps Wei can chime in on that.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Tue, Jun 11, 2013 at 11:07 AM, John Burwell <[email protected]>
>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Mike,
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> We have a delicate merge dance to perform. The
>>>>>>>>>>>>>>>>> disk_io_throttling, solidfire, and object_store branches
>>>>>>>>>>>>>>>>> appear to have a number of overlapping elements. I
>>>>>>>>>>>>>>>>> understand the dependencies between the patches to be as
>>>>>>>>>>>>>>>>> follows:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> object_store <- solidfire -> disk_io_throttling
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Am I correct that the device management aspects of
>>>>>>>>>>>>>>>>> SolidFire are additive to the object_store branch, or is
>>>>>>>>>>>>>>>>> there a circular dependency between the branches? Once we
>>>>>>>>>>>>>>>>> understand the dependency graph, we can determine the best
>>>>>>>>>>>>>>>>> approach to land the changes in master.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>>>> -John
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Jun 10, 2013, at 11:10 PM, Mike Tutkowski <[email protected]>
>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Also, if we are good with Edison merging my code into his
>>>>>>>>>>>>>>>>>> branch before going into master, I am good with that.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> We can remove the StoragePoolType.Dynamic code after his
>>>>>>>>>>>>>>>>>> merge, and we can deal with Burst IOPS then as well.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Mon, Jun 10, 2013 at 9:08 PM, Mike Tutkowski <
>>>>>>>>>>>>>>>>>> [email protected]> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Let me make sure I follow where we're going here:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> 1) There should be NO references to hypervisor code in
>>>>>>>>>>>>>>>>>>> the storage plug-in code (this includes the default
>>>>>>>>>>>>>>>>>>> storage plug-in, which currently sends several commands
>>>>>>>>>>>>>>>>>>> to the hypervisor in use, although it does not know which
>>>>>>>>>>>>>>>>>>> hypervisor (XenServer, ESX(i), etc.) is actually in use).
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> 2) managed=true or managed=false can be placed in the url
>>>>>>>>>>>>>>>>>>> field (if not present, we default to false). This info is
>>>>>>>>>>>>>>>>>>> stored in the storage_pool_details table.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> 3) When the "attach" command is sent to the hypervisor in
>>>>>>>>>>>>>>>>>>> question, we pass the managed property along (this takes
>>>>>>>>>>>>>>>>>>> the place of the StoragePoolType.Dynamic check).
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> 4) execute(AttachVolumeCommand) in the hypervisor checks
>>>>>>>>>>>>>>>>>>> for the managed property. If true for an attach, the
>>>>>>>>>>>>>>>>>>> necessary hypervisor data structure is created and the
>>>>>>>>>>>>>>>>>>> rest of the attach command executes to attach the volume
>>>>>>>>>>>>>>>>>>> (see the sketch after this list).
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> 5) When execute(AttachVolumeCommand) is invoked to detach
>>>>>>>>>>>>>>>>>>> a volume, the same check is made. If managed, the
>>>>>>>>>>>>>>>>>>> hypervisor data structure is removed.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> 6) I do not see a clear way to support Burst IOPS in 4.2
>>>>>>>>>>>>>>>>>>> unless it is stored in the volumes and disk_offerings
>>>>>>>>>>>>>>>>>>> tables. If we have some other idea, that'd be cool.
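>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Purely as an illustration of items 3) through 5), here is
>>>>>>>>>>>>>>>>>>> a rough, self-contained sketch of the managed check (the
>>>>>>>>>>>>>>>>>>> class and method names are hypothetical, not the actual
>>>>>>>>>>>>>>>>>>> CloudStack AttachVolumeCommand/resource code):
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>   final class AttachVolumeSketch {
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>       // "managed" is parsed from the url field and defaults to false.
>>>>>>>>>>>>>>>>>>>       static void execute(boolean attach, boolean managed, String iqn) {
>>>>>>>>>>>>>>>>>>>           if (attach) {
>>>>>>>>>>>>>>>>>>>               if (managed) {
>>>>>>>>>>>>>>>>>>>                   // Managed storage: create the hypervisor-side data
>>>>>>>>>>>>>>>>>>>                   // structure (e.g. an SR/datastore) before attaching.
>>>>>>>>>>>>>>>>>>>                   createHypervisorDataStructure(iqn);
>>>>>>>>>>>>>>>>>>>               }
>>>>>>>>>>>>>>>>>>>               attachVolume(iqn);
>>>>>>>>>>>>>>>>>>>           } else {
>>>>>>>>>>>>>>>>>>>               detachVolume(iqn);
>>>>>>>>>>>>>>>>>>>               if (managed) {
>>>>>>>>>>>>>>>>>>>                   // Managed storage: tear the structure down again.
>>>>>>>>>>>>>>>>>>>                   removeHypervisorDataStructure(iqn);
>>>>>>>>>>>>>>>>>>>               }
>>>>>>>>>>>>>>>>>>>           }
>>>>>>>>>>>>>>>>>>>       }
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>       private static void createHypervisorDataStructure(String iqn) { /* ... */ }
>>>>>>>>>>>>>>>>>>>       private static void removeHypervisorDataStructure(String iqn) { /* ... */ }
>>>>>>>>>>>>>>>>>>>       private static void attachVolume(String iqn) { /* ... */ }
>>>>>>>>>>>>>>>>>>>       private static void detachVolume(String iqn) { /* ... */ }
>>>>>>>>>>>>>>>>>>>   }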
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Thanks!
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Mon, Jun 10, 2013 at 8:58 PM, Mike Tutkowski <
>>>>>>>>>>>>>>>>>>> [email protected]> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> "+1 -- Burst IOPS can be implemented while avoiding
>>>>>>>>>>>>>>>>>>>> implementation attributes. I always wondered about the
>>>>>>>>>>>>>>>>>>>> details field. I think we should beef up the description
>>>>>>>>>>>>>>>>>>>> in the documentation regarding the expected format of the
>>>>>>>>>>>>>>>>>>>> field. In 4.1, I noticed that the details are not
>>>>>>>>>>>>>>>>>>>> returned in the createStoragePool, updateStoragePool, or
>>>>>>>>>>>>>>>>>>>> listStoragePool responses. Why don't we return it? It
>>>>>>>>>>>>>>>>>>>> seems like it would be useful for clients to be able to
>>>>>>>>>>>>>>>>>>>> inspect the contents of the details field."
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Not sure how this would work storing Burst IOPS here.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Burst IOPS need to be variable on a Disk Offering-by-Disk
>>>>>>>>>>>>>>>>>>>> Offering basis. For each Disk Offering created, you have
>>>>>>>>>>>>>>>>>>>> to be able to associate unique Burst IOPS. There is a
>>>>>>>>>>>>>>>>>>>> disk_offering_details table. Maybe it could go there?
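>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Hypothetically, it would just be a per-offering name/value
>>>>>>>>>>>>>>>>>>>> detail row. A minimal sketch of the idea (the column names
>>>>>>>>>>>>>>>>>>>> here are assumptions about the disk_offering_details
>>>>>>>>>>>>>>>>>>>> schema, and the real code would go through CloudStack's
>>>>>>>>>>>>>>>>>>>> DAO layer rather than raw JDBC):
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>   import java.sql.Connection;
>>>>>>>>>>>>>>>>>>>>   import java.sql.DriverManager;
>>>>>>>>>>>>>>>>>>>>   import java.sql.PreparedStatement;
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>   public class BurstIopsDetailSketch {
>>>>>>>>>>>>>>>>>>>>       // Persist Burst IOPS as a name/value detail row keyed by the offering id.
>>>>>>>>>>>>>>>>>>>>       public static void saveBurstIops(long diskOfferingId, long burstIops)
>>>>>>>>>>>>>>>>>>>>               throws Exception {
>>>>>>>>>>>>>>>>>>>>           try (Connection conn = DriverManager.getConnection(
>>>>>>>>>>>>>>>>>>>>                    "jdbc:mysql://localhost:3306/cloud", "cloud", "cloud");
>>>>>>>>>>>>>>>>>>>>                PreparedStatement ps = conn.prepareStatement(
>>>>>>>>>>>>>>>>>>>>                    "INSERT INTO disk_offering_details (offering_id, name, value) "
>>>>>>>>>>>>>>>>>>>>                        + "VALUES (?, ?, ?)")) {
>>>>>>>>>>>>>>>>>>>>               ps.setLong(1, diskOfferingId);
>>>>>>>>>>>>>>>>>>>>               ps.setString(2, "burstIops");
>>>>>>>>>>>>>>>>>>>>               ps.setString(3, Long.toString(burstIops));
>>>>>>>>>>>>>>>>>>>>               ps.executeUpdate();
>>>>>>>>>>>>>>>>>>>>           }
>>>>>>>>>>>>>>>>>>>>       }
>>>>>>>>>>>>>>>>>>>>   }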
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> I'm also not sure how you would accept the Burst IOPS in
>>>>>>>>>>>>>>>>>>>> the GUI if it's not stored in the DB like the Min and Max
>>>>>>>>>>>>>>>>>>>> fields are.
>>
>>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: [email protected]
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*