So, the flow is as follows:

* The admin registers the SolidFire driver (a so-called Dynamic storage
type). Once this is done, a new Primary Storage shows up in the applicable
zone.

* The admin creates a Disk Offering that references the storage tag of the
newly created Primary Storage.

* The end user creates a CloudStack volume. This leads to a new row in the
cloud.volumes table.

* The end user attaches the CloudStack volume to a VM (attach disk). This
leads to the storage framework calling the plug-in to create a new volume
on its storage system (in my case, a SAN). The plug-in also updates the
cloud.volumes row with applicable data (like the IQN of the SAN volume).
This plug-in code is only invoked if the CloudStack volume is in the
'Allocated' state. After the attach, the volume will be in the 'Ready'
state (even after a detach disk) and the plug-in code will not be called
again to create this SAN volume.

* The hypervisor-attach logic is run and detects that the CloudStack volume
to attach needs "assistance" in the form of a hypervisor data structure (ex.
an SR on XenServer). A short sketch of this flow follows.
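
To make the state guard concrete, here is a minimal, self-contained sketch
of the framework-side logic (all names below are hypothetical, not the
actual CloudStack classes):

    // Hypothetical sketch: the plug-in is asked to create the SAN volume
    // only on the first attach, while the CloudStack volume is 'Allocated'.
    public class AttachFlowSketch {
        enum VolumeState { ALLOCATED, READY }

        static class CloudStackVolume {
            VolumeState state = VolumeState.ALLOCATED;
            String iqn; // recorded in the cloud.volumes row by the plug-in
        }

        interface StoragePlugin {
            // Creates the volume on the storage system (the SAN, in my
            // case) and returns the IQN of the new SAN volume.
            String createSanVolume(CloudStackVolume volume);
        }

        static void attach(CloudStackVolume volume, StoragePlugin plugin) {
            if (volume.state == VolumeState.ALLOCATED) {
                volume.iqn = plugin.createSanVolume(volume);
                volume.state = VolumeState.READY; // stays 'Ready' even after a detach
            }
            // ...the hypervisor-attach logic runs next (ex. create an SR)
        }
    }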


On Tue, Jun 4, 2013 at 12:54 PM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:

> "To ensure that we are in sync on terminology, volume, in these
> definitions, refers to the physical allocation on the device, correct?"
>
> Yes...when I say 'volume', I try to mean 'SAN volume'.
>
> To refer to the 'volume' the end user can make in CloudStack, I try to use
> 'CloudStack volume'.
>
>
> On Tue, Jun 4, 2013 at 12:50 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com> wrote:
>
>> Hi John,
>>
>> What you say here may very well make sense, but I'm having a hard time
>> envisioning it.
>>
>> Perhaps we should draw Edison in on this conversation as he was the
>> initial person to suggest the approach I took.
>>
>> What do you think?
>>
>> Thanks!
>>
>>
>> On Tue, Jun 4, 2013 at 12:42 PM, John Burwell <jburw...@basho.com> wrote:
>>
>>> Mike,
>>>
>>> It feels like we are combining two distinct concepts -- storage device
>>> management and storage protocols.  In both cases, we are communicating with
>>> iSCSI, but one allows the system to create/delete volumes (Dynamic) on the
>>> device while the other requires the volume to be managed outside of the
>>> CloudStack context.  To ensure that we are in sync on terminology, volume,
>>> in these definitions, refers to the physical allocation on the device,
>>> correct?  Minimally, we must be able to communicate with a storage device
>>> to move bits from one place to another, read bits, delete bits, etc.
>>> Optionally, a storage device may be able to be managed by CloudStack.
>>> Therefore, we can have an unmanaged iSCSI device onto which we store a Xen
>>> SR, and we can have a managed SolidFire iSCSI device on which CloudStack is
>>> capable of allocating LUNs and storing volumes.  Finally, while CloudStack
>>> may be able to manage a device, an operator may choose to leave it
>>> unmanaged by CloudStack (e.g. the device is shared by many services, and
>>> the operator has chosen to dedicate only a portion of it to CloudStack).
>>> Does my reasoning make sense?
>>>
>>> Assuming my thoughts above are reasonable, it seems appropriate to strip
>>> the management concerns from StoragePoolType, add the notion of a storage
>>> device with an attached driver that indicates whether or not it is managed
>>> by CloudStack, and establish an abstraction representing a physical
>>> allocation on a device, separate from but associated with a volume.  With
>>> these notions in place, hypervisor drivers can declare which protocols
>>> they support and, when they encounter a device managed by CloudStack,
>>> utilize the management operations exposed by the driver to automate
>>> allocation.  If these thoughts/concepts make sense, then we can sit down
>>> and drill down to a more detailed design.
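>>>
>>> To illustrate, here is a rough sketch of what I have in mind (these names
>>> are mine, not an existing API):
>>>
>>>     // Protocol support, declared separately from management capability.
>>>     enum StorageProtocol { NFS, ISCSI, RBD, FC }
>>>
>>>     interface StorageDevice {
>>>         StorageProtocol getProtocol();
>>>         // True when CloudStack may create/delete allocations on the
>>>         // device; an operator can still choose to run a capable device
>>>         // unmanaged.
>>>         boolean isManaged();
>>>     }
>>>
>>>     // A physical allocation on a device, associated with -- but distinct
>>>     // from -- a CloudStack volume.
>>>     interface DeviceAllocation {
>>>         String getPath(); // ex. the IQN of an iSCSI LUN
>>>         long getSizeInBytes();
>>>     }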
>>>
>>> Thanks,
>>> -John
>>>
>>> On Jun 3, 2013, at 5:25 PM, Mike Tutkowski <mike.tutkow...@solidfire.com>
>>> wrote:
>>>
>>> > Here is the difference between the current iSCSI type and the Dynamic
>>> > type:
>>> >
>>> > iSCSI type: The admin has to go in and create a Primary Storage based on
>>> > the iSCSI type. At this point in time, the iSCSI volume must exist on the
>>> > storage system (it is pre-allocated). Future CloudStack volumes are
>>> > created as VDIs on the SR that was created behind the scenes.
>>> >
>>> > Dynamic type: The admin has to go in and create Primary Storage based on
>>> > a plug-in that will create and delete volumes on its storage system
>>> > dynamically (as is enabled via the storage framework). When a user wants
>>> > to attach a CloudStack volume that was created, the framework tells the
>>> > plug-in to create a new volume. After this is done, the attach logic for
>>> > the hypervisor in question is called. No hypervisor data structure exists
>>> > at this point because the volume was just created. The hypervisor data
>>> > structure must be created.
>>> >
>>> >
>>> > On Mon, Jun 3, 2013 at 3:21 PM, Mike Tutkowski <
>>> mike.tutkow...@solidfire.com
>>> >> wrote:
>>> >
>>> >> These are new terms, so I should probably have defined them up front
>>> >> for you. :)
>>> >>
>>> >> Static storage: Storage that is pre-allocated (ex. an admin creates a
>>> >> volume on a SAN), then a hypervisor data structure is created to consume
>>> >> the storage (ex. XenServer SR), then that hypervisor data structure is
>>> >> consumed by CloudStack. Disks (VDI) are later placed on this hypervisor
>>> >> data structure as needed. In these cases, the attach logic assumes the
>>> >> hypervisor data structure is already in place and simply attaches the
>>> >> VDI on the hypervisor data structure to the VM in question.
>>> >>
>>> >> Dynamic storage: Storage that is not pre-allocated. Instead of
>>> >> pre-existent storage, this could be a SAN (not a volume on a SAN, but
>>> >> the SAN itself). The hypervisor data structure must be created when an
>>> >> attach volume is performed because these types of volumes have not been
>>> >> pre-hooked up to such a hypervisor data structure by an admin. Once the
>>> >> attach logic creates, say, an SR on XenServer for this volume, it
>>> >> attaches the one and only VDI within the SR to the VM in question.
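>>> >>
>>> >> As a rough sketch (the helper names here are hypothetical, just to show
>>> >> the shape of the branching in the attach logic):
>>> >>
>>> >>     interface AttachHelper {
>>> >>         // Dynamic path: build the SR for the just-created SAN volume,
>>> >>         // then attach its one and only VDI to the VM.
>>> >>         void createSrAndAttachOnlyVdi(String iqn, String vmUuid);
>>> >>         // Static path: the SR already exists; just attach the VDI.
>>> >>         void attachExistingVdi(String vdiUuid, String vmUuid);
>>> >>     }
>>> >>
>>> >>     class AttachLogicSketch {
>>> >>         void attach(boolean dynamicStorage, String iqn, String vdiUuid,
>>> >>                 String vmUuid, AttachHelper helper) {
>>> >>             if (dynamicStorage) {
>>> >>                 helper.createSrAndAttachOnlyVdi(iqn, vmUuid);
>>> >>             } else {
>>> >>                 helper.attachExistingVdi(vdiUuid, vmUuid);
>>> >>             }
>>> >>         }
>>> >>     }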
>>> >>
>>> >>
>>> >> On Mon, Jun 3, 2013 at 3:13 PM, John Burwell <jburw...@basho.com>
>>> wrote:
>>> >>
>>> >>> Mike,
>>> >>>
>>> >>> The current implementation of the Dynamic type attach behavior works
>>> >>> in terms of Xen iSCSI, which is why I ask about the difference.
>>> >>> Another way to ask the question -- what is the definition of a Dynamic
>>> >>> storage pool type?
>>> >>>
>>> >>> Thanks,
>>> >>> -John
>>> >>>
>>> >>> On Jun 3, 2013, at 5:10 PM, Mike Tutkowski <
>>> mike.tutkow...@solidfire.com>
>>> >>> wrote:
>>> >>>
>>> >>>> As far as I know, the iSCSI type is uniquely used by XenServer when
>>> >>>> you want to set up Primary Storage that is directly based on an iSCSI
>>> >>>> target. This allows you to skip the step of going to the hypervisor
>>> >>>> and creating a storage repository based on that iSCSI target, as
>>> >>>> CloudStack does that part for you. I think this is only supported for
>>> >>>> XenServer. For all other hypervisors, you must first go to the
>>> >>>> hypervisor and perform this step manually.
>>> >>>>
>>> >>>> I don't really know what RBD is.
>>> >>>>
>>> >>>>
>>> >>>> On Mon, Jun 3, 2013 at 2:13 PM, John Burwell <jburw...@basho.com>
>>> >>> wrote:
>>> >>>>
>>> >>>>> Mike,
>>> >>>>>
>>> >>>>> Reading through the code, what is the difference between the ISCSI
>>> >>>>> and Dynamic types?  Why isn't RBD considered Dynamic?
>>> >>>>>
>>> >>>>> Thanks,
>>> >>>>> -John
>>> >>>>>
>>> >>>>> On Jun 3, 2013, at 3:46 PM, Mike Tutkowski <
>>> >>> mike.tutkow...@solidfire.com>
>>> >>>>> wrote:
>>> >>>>>
>>> >>>>>> This new type of storage is defined in the Storage.StoragePoolType
>>> >>>>>> class (called Dynamic):
>>> >>>>>>
>>> >>>>>> public static enum StoragePoolType {
>>> >>>>>>     Filesystem(false), // local directory
>>> >>>>>>     NetworkFilesystem(true), // NFS or CIFS
>>> >>>>>>     IscsiLUN(true), // shared LUN, with a clusterfs overlay
>>> >>>>>>     Iscsi(true), // for e.g., ZFS Comstar
>>> >>>>>>     ISO(false), // for iso image
>>> >>>>>>     LVM(false), // XenServer local LVM SR
>>> >>>>>>     CLVM(true),
>>> >>>>>>     RBD(true),
>>> >>>>>>     SharedMountPoint(true),
>>> >>>>>>     VMFS(true), // VMware VMFS storage
>>> >>>>>>     PreSetup(true), // for XenServer, Storage Pool is set up by customers
>>> >>>>>>     EXT(false), // XenServer local EXT SR
>>> >>>>>>     OCFS2(true),
>>> >>>>>>     Dynamic(true); // dynamic, zone-wide storage (ex. SolidFire)
>>> >>>>>>
>>> >>>>>>     boolean shared;
>>> >>>>>>
>>> >>>>>>     StoragePoolType(boolean shared) {
>>> >>>>>>         this.shared = shared;
>>> >>>>>>     }
>>> >>>>>>
>>> >>>>>>     public boolean isShared() {
>>> >>>>>>         return shared;
>>> >>>>>>     }
>>> >>>>>> }
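>>> >>>>>>
>>> >>>>>> So, for instance, other code can key off this type (illustrative
>>> >>>>>> only; assume 'pool' is a StoragePool):
>>> >>>>>>
>>> >>>>>>     StoragePoolType type = pool.getPoolType();
>>> >>>>>>     if (type == StoragePoolType.Dynamic && type.isShared()) {
>>> >>>>>>         // zone-wide storage: create the hypervisor data structure
>>> >>>>>>         // (ex. a XenServer SR) as part of the attach
>>> >>>>>>     }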
>>> >>>>>>
>>> >>>>>>
>>> >>>>>> On Mon, Jun 3, 2013 at 1:41 PM, Mike Tutkowski <
>>> >>>>> mike.tutkow...@solidfire.com
>>> >>>>>>> wrote:
>>> >>>>>>
>>> >>>>>>> For example, let's say another storage company wants to implement
>>> >>>>>>> a plug-in to leverage its Quality of Service feature. It would be
>>> >>>>>>> dynamic, zone-wide storage, as well. They would need only implement
>>> >>>>>>> a storage plug-in, as I've made the necessary changes to the
>>> >>>>>>> hypervisor-attach logic to support their plug-in.
>>> >>>>>>>
>>> >>>>>>>
>>> >>>>>>> On Mon, Jun 3, 2013 at 1:39 PM, Mike Tutkowski <
>>> >>>>>>> mike.tutkow...@solidfire.com> wrote:
>>> >>>>>>>
>>> >>>>>>>> Oh, sorry to imply the XenServer code is SolidFire specific. It
>>> >>>>>>>> is not.
>>> >>>>>>>>
>>> >>>>>>>> The XenServer attach logic is now aware of dynamic, zone-wide
>>> >>>>>>>> storage (and SolidFire is an implementation of this kind of
>>> >>>>>>>> storage). This kind of storage is new to 4.2 with Edison's storage
>>> >>>>>>>> framework changes.
>>> >>>>>>>>
>>> >>>>>>>> Edison created a new framework that supported the creation and
>>> >>>>>>>> deletion of volumes dynamically. However, when I visited with him
>>> >>>>>>>> in Portland back in April, we realized that it was not complete.
>>> >>>>>>>> We realized there was nothing CloudStack could do with these
>>> >>>>>>>> volumes unless the attach logic was changed to recognize this new
>>> >>>>>>>> type of storage and create the appropriate hypervisor data
>>> >>>>>>>> structure.
>>> >>>>>>>>
>>> >>>>>>>>
>>> >>>>>>>> On Mon, Jun 3, 2013 at 1:28 PM, John Burwell <
>>> jburw...@basho.com>
>>> >>>>> wrote:
>>> >>>>>>>>
>>> >>>>>>>>> Mike,
>>> >>>>>>>>>
>>> >>>>>>>>> It is generally odd to me that any operation in the Storage
>>> >>>>>>>>> layer would understand or care about hypervisor details.  I
>>> >>>>>>>>> expect to see the Storage services expose a set of operations
>>> >>>>>>>>> that can be composed/driven by the Hypervisor implementations to
>>> >>>>>>>>> allocate space/create structures per their needs.  If we don't
>>> >>>>>>>>> invert this dependency, we are going to end up with a massive
>>> >>>>>>>>> n-to-n problem that will make the system increasingly difficult
>>> >>>>>>>>> to maintain and enhance.  Am I understanding correctly that the
>>> >>>>>>>>> Xen-specific SolidFire code is located in the CitrixResourceBase
>>> >>>>>>>>> class?
>>> >>>>>>>>>
>>> >>>>>>>>> Thanks,
>>> >>>>>>>>> -John
>>> >>>>>>>>>
>>> >>>>>>>>>
>>> >>>>>>>>> On Mon, Jun 3, 2013 at 3:12 PM, Mike Tutkowski <
>>> >>>>>>>>> mike.tutkow...@solidfire.com
>>> >>>>>>>>>> wrote:
>>> >>>>>>>>>
>>> >>>>>>>>>> To delve into this in a bit more detail:
>>> >>>>>>>>>>
>>> >>>>>>>>>> Prior to 4.2, and aside from one setup method for XenServer, the
>>> >>>>>>>>>> admin had to first create a volume on the storage system, then
>>> >>>>>>>>>> go into the hypervisor to set up a data structure to make use of
>>> >>>>>>>>>> the volume (ex. a storage repository on XenServer or a datastore
>>> >>>>>>>>>> on ESX(i)). VMs and data disks then shared this storage system's
>>> >>>>>>>>>> volume.
>>> >>>>>>>>>>
>>> >>>>>>>>>> With Edison's new storage framework, storage need no longer be
>>> >>>>>>>>>> so static, and you can easily create a 1:1 relationship between
>>> >>>>>>>>>> a storage system's volume and the VM's data disk (necessary for
>>> >>>>>>>>>> storage Quality of Service).
>>> >>>>>>>>>>
>>> >>>>>>>>>> You can now write a plug-in that is called to dynamically create
>>> >>>>>>>>>> and delete volumes as needed.
>>> >>>>>>>>>>
>>> >>>>>>>>>> The problem that the storage framework did not address is in
>>> >>>>>>>>>> creating and deleting the hypervisor-specific data structure
>>> >>>>>>>>>> when performing an attach/detach.
>>> >>>>>>>>>>
>>> >>>>>>>>>> That being the case, I've been enhancing it to do so. I've got
>>> >>>>>>>>>> XenServer worked out and submitted. I've got ESX(i) in my
>>> >>>>>>>>>> sandbox and can submit this if we extend the 4.2 freeze date.
>>> >>>>>>>>>>
>>> >>>>>>>>>> Does that help a bit? :)
>>> >>>>>>>>>>
>>> >>>>>>>>>>
>>> >>>>>>>>>> On Mon, Jun 3, 2013 at 1:03 PM, Mike Tutkowski <
>>> >>>>>>>>>> mike.tutkow...@solidfire.com
>>> >>>>>>>>>>> wrote:
>>> >>>>>>>>>>
>>> >>>>>>>>>>> Hi John,
>>> >>>>>>>>>>>
>>> >>>>>>>>>>> The storage plug-in - by itself - is hypervisor agnostic.
>>> >>>>>>>>>>>
>>> >>>>>>>>>>> The issue is with the volume-attach logic (in the agent code).
>>> >>>>>>>>>>> The storage framework calls into the plug-in to have it create
>>> >>>>>>>>>>> a volume as needed, but when the time comes to attach the
>>> >>>>>>>>>>> volume to a hypervisor, the attach logic has to be smart enough
>>> >>>>>>>>>>> to recognize it's being invoked on zone-wide storage (where the
>>> >>>>>>>>>>> volume has just been created) and create, say, a storage
>>> >>>>>>>>>>> repository (for XenServer) or a datastore (for VMware) to make
>>> >>>>>>>>>>> use of the volume that was just created.
>>> >>>>>>>>>>>
>>> >>>>>>>>>>> I've been spending most of my time recently making the attach
>>> >>>>>>>>>>> logic work in the agent code.
>>> >>>>>>>>>>>
>>> >>>>>>>>>>> Does that clear it up?
>>> >>>>>>>>>>>
>>> >>>>>>>>>>> Thanks!
>>> >>>>>>>>>>>
>>> >>>>>>>>>>>
>>> >>>>>>>>>>> On Mon, Jun 3, 2013 at 12:48 PM, John Burwell <
>>> >>> jburw...@basho.com>
>>> >>>>>>>>>> wrote:
>>> >>>>>>>>>>>
>>> >>>>>>>>>>>> Mike,
>>> >>>>>>>>>>>>
>>> >>>>>>>>>>>> Can you explain why the storage driver is hypervisor specific?
>>> >>>>>>>>>>>>
>>> >>>>>>>>>>>> Thanks,
>>> >>>>>>>>>>>> -John
>>> >>>>>>>>>>>>
>>> >>>>>>>>>>>> On Jun 3, 2013, at 1:21 PM, Mike Tutkowski <
>>> >>>>>>>>>> mike.tutkow...@solidfire.com>
>>> >>>>>>>>>>>> wrote:
>>> >>>>>>>>>>>>
>>> >>>>>>>>>>>>> Yes, ultimately I would like to support all hypervisors that
>>> >>>>>>>>>>>>> CloudStack supports. I think I'm just out of time for 4.2 to
>>> >>>>>>>>>>>>> get KVM in.
>>> >>>>>>>>>>>>>
>>> >>>>>>>>>>>>> Right now this plug-in supports XenServer. Depending on what
>>> >>>>>>>>>>>>> we do with regards to the 4.2 feature freeze, I have it
>>> >>>>>>>>>>>>> working for VMware in my sandbox, as well.
>>> >>>>>>>>>>>>>
>>> >>>>>>>>>>>>> Also, just to be clear, this is all in regards to Disk
>>> >>>>>>>>>>>>> Offerings. I plan to support Compute Offerings post 4.2.
>>> >>>>>>>>>>>>>
>>> >>>>>>>>>>>>>
>>> >>>>>>>>>>>>> On Mon, Jun 3, 2013 at 11:14 AM, Kelcey Jamison Damage <
>>> >>>>>>>>>> kel...@bbits.ca
>>> >>>>>>>>>>>>> wrote:
>>> >>>>>>>>>>>>>
>>> >>>>>>>>>>>>>> Is there any plan on supporting KVM in the patch cycle post
>>> >>>>>>>>>>>>>> 4.2?
>>> >>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>> ----- Original Message -----
>>> >>>>>>>>>>>>>> From: "Mike Tutkowski" <mike.tutkow...@solidfire.com>
>>> >>>>>>>>>>>>>> To: dev@cloudstack.apache.org
>>> >>>>>>>>>>>>>> Sent: Monday, June 3, 2013 10:12:32 AM
>>> >>>>>>>>>>>>>> Subject: Re: [MERGE] disk_io_throttling to MASTER
>>> >>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>> I agree on merging Wei's feature first, then mine.
>>> >>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>> If his feature is for KVM only, then it is a non-issue, as I
>>> >>>>>>>>>>>>>> don't support KVM in 4.2.
>>> >>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>> On Mon, Jun 3, 2013 at 8:55 AM, Wei ZHOU <
>>> >>> ustcweiz...@gmail.com>
>>> >>>>>>>>>>>> wrote:
>>> >>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>> John,
>>> >>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>> For the billing, as no one works on billing now, users need
>>> >>>>>>>>>>>>>>> to calculate the billing by themselves. They can get the
>>> >>>>>>>>>>>>>>> service_offering and disk_offering of VMs and volumes for
>>> >>>>>>>>>>>>>>> the calculation. Of course, it would be better to tell the
>>> >>>>>>>>>>>>>>> user the exact limitation value of an individual volume,
>>> >>>>>>>>>>>>>>> and the network rate limitation for NICs as well. I can
>>> >>>>>>>>>>>>>>> work on it later. Do you think it is a part of I/O
>>> >>>>>>>>>>>>>>> throttling?
>>> >>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>> Sorry, I misunderstood the second question.
>>> >>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>> Agree with what you said about the two features.
>>> >>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>> -Wei
>>> >>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>> 2013/6/3 John Burwell <jburw...@basho.com>
>>> >>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>> Wei,
>>> >>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>> On Jun 3, 2013, at 2:13 AM, Wei ZHOU <
>>> ustcweiz...@gmail.com
>>> >>>>
>>> >>>>>>>>>> wrote:
>>> >>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>> Hi John, Mike
>>> >>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>> I hope Mike's answer helps you. I am trying to add more.
>>> >>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>> (1) I think billing should depend on IO statistics rather
>>> >>>>>>>>>>>>>>>>> than IOPS limitation. Please review disk_io_stat if you
>>> >>>>>>>>>>>>>>>>> have time. disk_io_stat can get the IO statistics,
>>> >>>>>>>>>>>>>>>>> including bytes/iops read/write, for an individual
>>> >>>>>>>>>>>>>>>>> virtual machine.
>>> >>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>> Going by the AWS model, customers are billed more for
>>> >>>>>>>>>>>>>>>> volumes with provisioned IOPS, as well as for those
>>> >>>>>>>>>>>>>>>> operations (http://aws.amazon.com/ebs/).  I would imagine
>>> >>>>>>>>>>>>>>>> our users would like the option to employ similar cost
>>> >>>>>>>>>>>>>>>> models.  Could an operator implement such a billing model
>>> >>>>>>>>>>>>>>>> in the current patch?
>>> >>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>> (2) Do you mean an IOPS runtime change? KVM supports
>>> >>>>>>>>>>>>>>>>> setting IOPS/BPS limitations for a running virtual
>>> >>>>>>>>>>>>>>>>> machine through the command line. However, CloudStack
>>> >>>>>>>>>>>>>>>>> does not support changing the parameters of a created
>>> >>>>>>>>>>>>>>>>> offering (compute offering or disk offering).
>>> >>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>> I meant at the Java interface level.  I apologize for
>>> >>>>>>>>>>>>>>>> being unclear.  Can we further generalize allocation
>>> >>>>>>>>>>>>>>>> algorithms with a set of interfaces that describe the
>>> >>>>>>>>>>>>>>>> service guarantees provided by a resource?
>>> >>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>> (3) It is a good question. Maybe it is better to commit
>>> >>>>>>>>>>>>>>>>> Mike's patch after disk_io_throttling, as Mike needs to
>>> >>>>>>>>>>>>>>>>> consider the limitation in hypervisor type, I think.
>>> >>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>> I will expand on my thoughts in a later response to Mike
>>> >>>>>>>>>>>>>>>> regarding the touch points between these two features.  I
>>> >>>>>>>>>>>>>>>> think that disk_io_throttling will need to be merged before
>>> >>>>>>>>>>>>>>>> SolidFire, but I think we need closer coordination between
>>> >>>>>>>>>>>>>>>> the branches (possibly have solidfire track
>>> >>>>>>>>>>>>>>>> disk_io_throttling) to coordinate on this issue.
>>> >>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>> - Wei
>>> >>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>> 2013/6/3 John Burwell <jburw...@basho.com>
>>> >>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>> Mike,
>>> >>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>> The things I want to understand are the following:
>>> >>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>> 1) Is there value in capturing IOPS policies in a common
>>> >>>>>>>>>>>>>>>>>> data model (e.g. for billing/usage purposes, expressing
>>> >>>>>>>>>>>>>>>>>> offerings)?
>>> >>>>>>>>>>>>>>>>>> 2) Should there be a common interface model for
>>> >>>>>>>>>>>>>>>>>> reasoning about IOP provisioning at runtime?
>>> >>>>>>>>>>>>>>>>>> 3) How are conflicting provisioned IOPS configurations
>>> >>>>>>>>>>>>>>>>>> between a hypervisor and a storage device reconciled?
>>> >>>>>>>>>>>>>>>>>> In particular, a scenario where a user is led to believe
>>> >>>>>>>>>>>>>>>>>> (and billed) that more IOPS are configured for a VM than
>>> >>>>>>>>>>>>>>>>>> the storage device has been configured to deliver.
>>> >>>>>>>>>>>>>>>>>> Another scenario could be a consistent configuration
>>> >>>>>>>>>>>>>>>>>> between a VM and a storage device at creation time, but
>>> >>>>>>>>>>>>>>>>>> a later modification to the storage device introduces a
>>> >>>>>>>>>>>>>>>>>> logical inconsistency.
>>> >>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>> Thanks,
>>> >>>>>>>>>>>>>>>>>> -John
>>> >>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>> On Jun 2, 2013, at 8:38 PM, Mike Tutkowski <
>>> >>>>>>>>>>>>>>>> mike.tutkow...@solidfire.com>
>>> >>>>>>>>>>>>>>>>>> wrote:
>>> >>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>> Hi John,
>>> >>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>> I believe Wei's feature deals with controlling the max
>>> >>>>>>>>>>>>>>>>>> number of IOPS from the hypervisor side.
>>> >>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>> My feature is focused on controlling IOPS from the
>>> >>>>>>>>>>>>>>>>>> storage system side.
>>> >>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>> I hope that helps. :)
>>> >>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>> On Sun, Jun 2, 2013 at 6:35 PM, John Burwell <
>>> >>>>>>>>> jburw...@basho.com
>>> >>>>>>>>>>>
>>> >>>>>>>>>>>>>>>> wrote:
>>> >>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Wei,
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> My opinion is that no features should be merged until
>>> >>>>>>>>>>>>>>>>>>> all functional issues have been resolved and it is
>>> >>>>>>>>>>>>>>>>>>> ready to turn over to test.  Until the total Ops vs
>>> >>>>>>>>>>>>>>>>>>> discrete read/write ops issue is addressed and
>>> >>>>>>>>>>>>>>>>>>> re-reviewed by Wido, I don't think this criterion has
>>> >>>>>>>>>>>>>>>>>>> been satisfied.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Also, how does this work intersect/complement the
>>> >>>>>>>>>>>>>>>>>>> SolidFire patch (https://reviews.apache.org/r/11479/)?
>>> >>>>>>>>>>>>>>>>>>> As I understand it, that work also involves provisioned
>>> >>>>>>>>>>>>>>>>>>> IOPS.  I would like to ensure we don't have a scenario
>>> >>>>>>>>>>>>>>>>>>> where provisioned IOPS in KVM and SolidFire are
>>> >>>>>>>>>>>>>>>>>>> unnecessarily incompatible.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Thanks,
>>> >>>>>>>>>>>>>>>>>>> -John
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> On Jun 1, 2013, at 6:47 AM, Wei ZHOU <
>>> >>> ustcweiz...@gmail.com
>>> >>>>>>>>>>
>>> >>>>>>>>>>>>>> wrote:
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Wido,
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Sure. I will change it next week.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> -Wei
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> 2013/6/1 Wido den Hollander <w...@widodh.nl>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Hi Wei,
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> On 06/01/2013 08:24 AM, Wei ZHOU wrote:
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Wido,
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Exactly. I have pushed the features into master.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> If anyone objects to them for technical reasons by
>>> >>>>>>>>>>>>>>>>>>> Monday, I will revert them.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> For the sake of clarity I just want to mention again
>>> >>>>>>>>>>>>>>>>>>> that we should change the total IOps to R/W IOps asap
>>> >>>>>>>>>>>>>>>>>>> so that we never release a version with only total
>>> >>>>>>>>>>>>>>>>>>> IOps.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> You laid the groundwork for the I/O throttling and
>>> >>>>>>>>>>>>>>>>>>> that's great! We should, however, prevent creating
>>> >>>>>>>>>>>>>>>>>>> legacy from day #1.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Wido
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> -Wei
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> 2013/5/31 Wido den Hollander <w...@widodh.nl>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> On 05/31/2013 03:59 PM, John Burwell wrote:
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Wido,
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> +1 -- this enhancement must discretely support read and
>>> >>>>>>>>>>>>>>>>>>> write IOPS.  I don't see how it could be fixed later
>>> >>>>>>>>>>>>>>>>>>> because I don't see how we could correctly split total
>>> >>>>>>>>>>>>>>>>>>> IOPS into read and write.  Therefore, we would be stuck
>>> >>>>>>>>>>>>>>>>>>> with a total unless/until we decided to break backwards
>>> >>>>>>>>>>>>>>>>>>> compatibility.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> What Wei meant was merging it into master now so that
>>> >>>>>>>>>>>>>>>>>>> it will go in the 4.2 branch, and add Read / Write IOps
>>> >>>>>>>>>>>>>>>>>>> before the 4.2 release so that 4.2 will be released
>>> >>>>>>>>>>>>>>>>>>> with Read and Write instead of Total IOps.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> This is to make the May 31st feature freeze date. But
>>> >>>>>>>>>>>>>>>>>>> if the window moves (see other threads) then it won't
>>> >>>>>>>>>>>>>>>>>>> be necessary to do that.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Wido
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> I also completely agree that there is no association
>>> >>>>>>>>>>>>>>>>>>> between network and disk I/O.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Thanks,
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> -John
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> On May 31, 2013, at 9:51 AM, Wido den Hollander <
>>> >>>>>>>>> w...@widodh.nl
>>> >>>>>>>>>>>
>>> >>>>>>>>>>>>>>>> wrote:
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Hi Wei,
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> On 05/31/2013 03:13 PM, Wei ZHOU wrote:
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Hi Wido,
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Thanks. Good question.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> I thought about it at the beginning. In the end I
>>> >>>>>>>>>>>>>>>>>>> decided to ignore the difference between read and
>>> >>>>>>>>>>>>>>>>>>> write, mainly because the network throttling did not
>>> >>>>>>>>>>>>>>>>>>> distinguish between sent and received bytes either.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> That reasoning seems odd. Networking and disk I/O are
>>> >>>>>>>>>>>>>>>>>>> completely different.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Disk I/O is much more expensive in most situations than
>>> >>>>>>>>>>>>>>>>>>> network bandwidth.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Implementing it will be some copy-paste work. It could
>>> >>>>>>>>>>>>>>>>>>> be implemented in a few days. Given the deadline of
>>> >>>>>>>>>>>>>>>>>>> feature freeze, I will implement it after that, if
>>> >>>>>>>>>>>>>>>>>>> needed.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> I think it's a feature we can't miss. But if it goes
>>> >>>>>>>>>>>>>>>>>>> into the 4.2 window we have to make sure we don't
>>> >>>>>>>>>>>>>>>>>>> release with only total IOps and fix it in 4.3; that
>>> >>>>>>>>>>>>>>>>>>> will confuse users.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Wido
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> -Wei
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> 2013/5/31 Wido den Hollander <w...@widodh.nl>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Hi Wei,
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> On 05/30/2013 06:03 PM, Wei ZHOU wrote:
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Hi,
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> I would like to merge the disk_io_throttling branch
>>> >>>>>>>>>>>>>>>>>>> into master.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> If nobody objects, I will merge it into master in 48
>>> >>>>>>>>>>>>>>>>>>> hours.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> The purpose is :
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Virtual machines are running on the same storage device
>>> >>>>>>>>>>>>>>>>>>> (local storage or shared storage). Because of the rate
>>> >>>>>>>>>>>>>>>>>>> limitation of the device (such as iops), if one VM has
>>> >>>>>>>>>>>>>>>>>>> large disk operations, it may affect the disk
>>> >>>>>>>>>>>>>>>>>>> performance of other VMs running on the same storage
>>> >>>>>>>>>>>>>>>>>>> device.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> It is necessary to set the maximum rate and limit the
>>> >>>>>>>>>>>>>>>>>>> disk I/O of VMs.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Looking at the code I see you make no distinction
>>> >>>>>>>>>>>>>>>>>>> between Read and Write IOps.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Qemu and libvirt support setting a different rate for
>>> >>>>>>>>>>>>>>>>>>> Read and Write IOps, which could benefit a lot of
>>> >>>>>>>>>>>>>>>>>>> users.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> It's also strange that on the polling side you collect
>>> >>>>>>>>>>>>>>>>>>> both the Read and Write IOps, but on the throttling
>>> >>>>>>>>>>>>>>>>>>> side you only go for a global value.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Write IOps are usually much more expensive than Read
>>> >>>>>>>>>>>>>>>>>>> IOps, so it seems like a valid use-case where an admin
>>> >>>>>>>>>>>>>>>>>>> would set a lower value for write IOps vs Read IOps.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Since this only supports KVM at this point I think it
>>> >>>>>>>>>>>>>>>>>>> would be of great value to at least have the mechanism
>>> >>>>>>>>>>>>>>>>>>> in place to support both; implementing this later would
>>> >>>>>>>>>>>>>>>>>>> be a lot of work.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> If a hypervisor doesn't support setting different
>>> >>>>>>>>>>>>>>>>>>> values for read and write you can always sum both up
>>> >>>>>>>>>>>>>>>>>>> and set that as the total limit.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Can you explain why you implemented it this way?
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Wido
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> The feature includes:
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> (1) set the maximum rate of VMs (in disk_offering, and
>>> >>>>>>>>>>>>>>>>>>> global configuration)
>>> >>>>>>>>>>>>>>>>>>> (2) change the maximum rate of VMs
>>> >>>>>>>>>>>>>>>>>>> (3) limit the disk rate (total bps and iops)
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> JIRA ticket:
>>> >>>>>>>>>>>>>>>>>>> https://issues.apache.org/jira/browse/CLOUDSTACK-1192
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> FS (I will update later) :
>>> >>>>>>>>>>>>>>>>>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Throttling
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Merge check list :-
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> * Did you check the branch's RAT execution success?
>>> >>>>>>>>>>>>>>>>>>> Yes
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> * Are there new dependencies introduced?
>>> >>>>>>>>>>>>>>>>>>> No
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> * What automated testing (unit and integration) is
>>> >>>>>>>>>>>>>>>>>>> included in the new feature?
>>> >>>>>>>>>>>>>>>>>>> Unit tests are added.
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> * What testing has been done to check for potential
>>> >>>>>>>>>>>>>>>>>>> regressions?
>>> >>>>>>>>>>>>>>>>>>> (1) set the bytes rate and IOPS rate on the CloudStack UI
>>> >>>>>>>>>>>>>>>>>>> (2) VM operations, including deploy, stop, start,
>>> >>>>>>>>>>>>>>>>>>> reboot, destroy, expunge, migrate, restore
>>> >>>>>>>>>>>>>>>>>>> (3) Volume operations, including attach, detach
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> To review the code, you can try
>>> >>>>>>>>>>>>>>>>>>> git diff c30057635d04a2396f84c588127d7ebe42e503a7
>>> >>>>>>>>>>>>>>>>>>> f2e5591b710d04cc86815044f5823e73a4a58944
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Best regards,
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> Wei
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> [1]
>>> >>>>>>>>>>>>>>>>>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Throttling
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> [2] refs/heads/disk_io_throttling
>>> >>>>>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>>>>> [3]
>>> >>>>>>>>>>>>>>>>>>> https://issues.apache.org/jira/browse/CLOUDSTACK-1301
>>> >>>>>>>>>>>>>>>>>>> https://issues.apache.org/jira/browse/CLOUDSTACK-2071
>>> >>>>>>>>>>>>>>>>>>> (CLOUDSTACK-1301 - VM Disk I/O Throttling)



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*
