Mike,

Please see my responses in-line below.

Thanks,
-John

On Jun 10, 2013, at 11:08 PM, Mike Tutkowski <mike.tutkow...@solidfire.com> 
wrote:

> Let me make sure I follow where we're going here:
> 
> 1) There should be NO references to hypervisor code in the storage plug-ins
> code (this includes the default storage plug-in, which currently sends
> several commands to the hypervisor in use (although it does not know which
> hypervisor (XenServer, ESX(i), etc.) is actually in use))

The Storage->Hypervisor dependencies have been in CloudStack for some time.  My 
goal is to eventually eliminate them, and as part of that evolution, I don't want 
to see any more such dependencies added.  Additionally, inverting the 
dependency in new code will lay the foundation for removing the existing 
Storage->Hypervisor dependencies.
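To make the inversion concrete, here is a minimal sketch of the idea: the storage driver raises a notification that a hypervisor-side listener handles, so the storage code never references hypervisor classes. All names here (StorageLifecycleListener, StorageDriver, etc.) are illustrative, not actual CloudStack APIs.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical callback interface owned by the storage layer.
// Hypervisor plugins implement it; the storage code depends only on
// this abstraction, never on XenServer/ESXi classes directly.
interface StorageLifecycleListener {
    void onVolumeAttached(String volumeUuid, String hostUuid);
}

class StorageDriver {
    private final List<StorageLifecycleListener> listeners = new ArrayList<>();

    void register(StorageLifecycleListener l) {
        listeners.add(l);
    }

    // The driver knows nothing about which hypervisor is in use;
    // it only notifies whoever registered an interest.
    void attachVolume(String volumeUuid, String hostUuid) {
        for (StorageLifecycleListener l : listeners) {
            l.onVolumeAttached(volumeUuid, hostUuid);
        }
    }
}
```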

> 
> 2) managed=true or managed=false can be placed in the url field (if not
> present, we default to false). This info is stored in the
> storage_pool_details table.

As I understand the data model, storage_pool_details holds implementation-specific 
properties.  I see the managed flag as a common value for all storage pools, 
calculated as follows:

        - If the associated driver does not support device management, the 
value is always set to false.
        - If the associated driver supports device management, the value 
defaults to true, but can be overridden when the device definition is created.

As such, it seems to me that it should be a new column on the storage_pool 
table.
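The two rules above boil down to a small decision function. This is just a sketch of the proposed semantics (the method and parameter names are hypothetical, not the actual CloudStack schema or API):

```java
// Illustrative computation of the proposed per-pool "managed" flag.
class ManagedFlag {
    // overrideFromDeviceDefinition is null when no explicit value was
    // supplied at device-creation time.
    static boolean resolve(boolean driverSupportsDeviceManagement,
                           Boolean overrideFromDeviceDefinition) {
        if (!driverSupportsDeviceManagement) {
            return false;  // always false without driver support
        }
        // defaults to true, but an explicit override wins
        return overrideFromDeviceDefinition == null
            || overrideFromDeviceDefinition;
    }
}
```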

> 
> 3) When the "attach" command is sent to the hypervisor in question, we pass
> the managed property along (this takes the place of the
> StoragePoolType.Dynamic check).
> 
> 4) execute(AttachVolumeCommand) in the hypervisor checks for the managed
> property. If true for an attach, the necessary hypervisor data structure is
> created and the rest of the attach command executes to attach the volume.

> 5) When execute(AttachVolumeCommand) is invoked to detach a volume, the
> same check is made. If managed, the hypervisor data structure is removed.

Sounds reasonable to me.  Will there be a new method added to the Hypervisor 
plugin to create this data structure (e.g. createVMStorage(VolumeTO))?
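For illustration, the branch on the managed flag in points 4 and 5 might look roughly like the sketch below, replacing the StoragePoolType.Dynamic check. The operation names (createVMStorage, removeVMStorage) echo the suggestion above but are hypothetical; the sketch just records which calls would be made in which order.

```java
// Sketch of how execute(AttachVolumeCommand) could branch on the
// managed flag for both the attach and detach paths.
class AttachVolumeSketch {
    static String execute(boolean managed, boolean attach, String volume) {
        StringBuilder ops = new StringBuilder();
        if (attach) {
            if (managed) {
                // e.g. create the SR (XenServer) / datastore (ESXi) first
                ops.append("createVMStorage(").append(volume).append(");");
            }
            ops.append("attach(").append(volume).append(");");
        } else {
            ops.append("detach(").append(volume).append(");");
            if (managed) {
                // tear the hypervisor data structure back down
                ops.append("removeVMStorage(").append(volume).append(");");
            }
        }
        return ops.toString();
    }
}
```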

> 
> 6) I do not see a clear way to support Burst IOPS in 4.2 unless it is
> stored in the volumes and disk_offerings table. If we have some idea,
> that'd be cool.

Shucks.  It sounds like StoragePool#details won't be sufficient.  I think we 
need to address extended data in a number of places (e.g. hypervisor, storage, 
and network drivers, compute and disk offerings, etc.).  I propose that we 
address it broadly in 4.3 in a manner that provides a mechanism to store, 
describe, validate, and render these values.
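As a strawman for what "store, describe, validate, and render" could mean, each extended-data key could carry its own description and validator, so values like Burst IOPS are handled uniformly wherever they appear. None of these names exist in CloudStack; this is just the shape of the idea.

```java
import java.util.function.Predicate;

// Hypothetical descriptor for one extended-data key: a name for
// storage, a description for rendering, and a validator for input.
class DetailKey {
    final String name;
    final String description;
    final Predicate<String> validator;

    DetailKey(String name, String description, Predicate<String> validator) {
        this.name = name;
        this.description = description;
        this.validator = validator;
    }

    boolean accepts(String value) {
        return validator.test(value);
    }
}
```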

> 
> Thanks!
> 
> 
> On Mon, Jun 10, 2013 at 8:58 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com> wrote:
> 
>> "+1 -- Burst IOPS can be implemented while avoiding implementation
>> attributes.  I always wondered about the details field.  I think we should
>> beef up the description in the documentation regarding the expected format
>> of the field.  In 4.1, I noticed that the details are not returned on the
>> createStoragePool, updateStoragePool, or listStoragePool response.  Why
>> don't we return it?  It seems like it would be useful for clients to be
>> able to inspect the contents of the details field."
>> 
>> Not sure how this would work storing Burst IOPS here.
>> 
>> Burst IOPS need to be variable on a Disk Offering-by-Disk Offering basis.
>> For each Disk Offering created, you have to be able to associate unique
>> Burst IOPS. There is a disk_offering_details table. Maybe it could go there?
>> 
>> I'm also not sure how you would accept the Burst IOPS in the GUI if it's
>> not stored like the Min and Max fields are in the DB.
>> 
> 
> 
> 
> -- 
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*