Yeah, I can verify that my createPhysicalDisk method is not being
called at all in 4.2. The create happens on the mgmt server in your
driver (createAsync), then the LUN info is passed on to KVM in
attachVolume (or StartCommand when a VM is being started), and the
agent assumes the LUN will already be there on the SAN when it calls
your connectPhysicalDisk implementation to log into the target and do
your setup.
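
To illustrate, here's a rough sketch of that agent-side step for an
iSCSI-backed adaptor. The signature is along the lines we discussed
below; the iscsiadm invocation and the pool accessors are assumptions,
not actual plugin code:

// Hypothetical StorageAdaptor.connectPhysicalDisk for an iSCSI plugin.
// The LUN (its IQN carried in volumePath) is assumed to already exist,
// created earlier by the plugin's createAsync on the management server;
// the agent only logs into the target so the block device appears locally.
public boolean connectPhysicalDisk(String volumePath, KVMStoragePool pool) {
    // assumed accessors; the real pool object may expose these differently
    String portal = pool.getSourceHost() + ":" + pool.getSourcePort();
    try {
        run("iscsiadm", "-m", "node", "-T", volumePath, "-p", portal, "-o", "new");
        run("iscsiadm", "-m", "node", "-T", volumePath, "-p", portal, "--login");
        return true;
    } catch (Exception e) {
        return false;
    }
}

private void run(String... cmd) throws Exception {
    Process p = new ProcessBuilder(cmd).inheritIO().start();
    if (p.waitFor() != 0) {
        throw new RuntimeException("failed: " + String.join(" ", cmd));
    }
}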

On Fri, Sep 27, 2013 at 5:00 PM, Marcus Sorensen <shadow...@gmail.com> wrote:
> On Fri, Sep 27, 2013 at 4:22 PM, Mike Tutkowski
> <mike.tutkow...@solidfire.com> wrote:
>> Sure, sounds good - let me know when it's up on Review Board and I can take
>> a look.
>>
>> I made most of the changes you and I talked about:
>>
>> https://github.com/mike-tutkowski/incubator-cloudstack/commit/eb9b2edfc9062f9ca7961fecd5379b180ca3aed1
>>
>> I have a new idea, though, that I think will simplify this:
>>
>> The main "weirdness" right now is that when attachVolume is called, the
>> original logic assumed createVolume had already been called.
>>
>> In my case, this doesn't apply, so we had to place extra logic in
>> attachVolume to essentially "create" a volume. We decided to make a connect
>> method, which establishes the iSCSI connection and creates a
>> KVMPhysicalDisk that can be returned when attachVolume calls
>> getPhysicalDisk.
>>
>> The "normal" place where you'd create a KVMPhysicalDisk, however, would be
>> in the createVolume method. Since I don't currently "create" a volume, my
>> only chance to note the size of the volume is in the connect method.
>
> I don't think createVolume applies to plugins. My impression was that
> createAsync is called on the mgmt server side. If createVolume IS
> being called, that's weird. The idea here is that mgmt server creates
> the LUN, and then on the KVM side attach is called (or StartCommand if
> it's a root volume and vm is being started), and it assumes that the
> LUN is already there, so we call connectPhysicalDisk to attach it to
> the KVM host.
>
>>
>> It ends up being kind of weird to pass a size into the connect method, as
>> you've noted.
>>
>> What if we essentially left the attachVolume and detachVolume methods alone
>> (as in how they were before my changes)? We could have
>> VolumeApiServiceImpl, before sending the AttachCommand, detect if the
>> storage in question is managed. If it is, VolumeApiServiceImpl could send a
>> CreateObjectCommand. I would then implement createPhysicalDisk to connect
>> my iSCSI target and create a KVMPhysicalDisk.
>>
>> On the reverse side, VolumeApiServiceImpl, after sending the DetachCommand,
>> could detect if the storage in question is managed. If it is,
>> VolumeApiServiceImpl could send a DeleteCommand. I would then implement the
>> deletePhysicalDisk method to disconnect my iSCSI session.
>>
>> What do you think?
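>>
>> Roughly, what I'm picturing on the attach side is something like the
>> following (a sketch only - the helper name, the isManaged() check, and
>> the send call are illustrative, not actual code):
>>
>> // Hypothetical helper in VolumeApiServiceImpl: before sending the
>> // AttachCommand for managed storage, send a CreateObjectCommand so the
>> // agent's createPhysicalDisk can connect the iSCSI target first.
>> private void prepareManagedVolumeOnHost(long hostId, StoragePoolVO pool, DataTO volumeTO) {
>>     if (!pool.isManaged()) {
>>         return; // non-managed storage keeps the existing behavior
>>     }
>>     Answer answer = _agentMgr.easySend(hostId, new CreateObjectCommand(volumeTO));
>>     if (answer == null || !answer.getResult()) {
>>         throw new CloudRuntimeException("Could not prepare managed volume on host " + hostId);
>>     }
>> }
>>
>> The reverse (a DeleteCommand after the DetachCommand) would mirror this.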
>
> Maybe I'm just confused, but I thought the create and delete on the
> KVM side only apply to the default storage plugin, which has to pass
> everything on to the agent. I thought the creation/deletion of LUNs
> occurred via createAsync and deleteAsync in your plugin.
>
>>
>>
>> On Fri, Sep 27, 2013 at 3:21 PM, Marcus Sorensen <shadow...@gmail.com>wrote:
>>
>>> Ok, I've got our plugin working against 4.2. Tested start vm, stop vm,
>>> migrate vm, attach volume, detach volume.  Other functions that we
>>> already had in our StorageAdaptor implementation, such as copying
>>> templates to primary storage, just worked without any modification
>>> from our 4.1 version.
>>>
>>> I'll post a patch to reviewboard with the applicable changes. I was
>>> correct that attachVolume and dettachVolume only apply to
>>> adding/removing disks from running VMs, so there were some more
>>> changes to LibvirtComputingResource. I don't intend for this patch to
>>> be applied (for one it's against 4.2), but I want you to take a look
>>> and see if it will work for you as well. If it does, then it's a good
>>> indicator that it should work for other plugins too, or if it needs to
>>> be tweaked we can work it out.
>>>
>>> The gist is that we needed a connectPhysicalDisk call that can accept
>>> the pool/volume info (which we've discussed), but also a version of
>>> connectPhysicalDisk that can take a vm specification
>>> (VirtualMachineTO) and figure out which pools/disks are needed and
>>> attach them. I largely copied the code we had custom inserted into our
>>> 4.1 and put it into KVMStoragePoolManager so that it will be adaptor
>>> agnostic.
>>>
>>> Same goes for disconnectPhysicalDisk.
>>>
>>> We also needed to pass the VirtualMachineTO in a few other places, like
>>> MigrateCommand and StopCommand; it's otherwise hard to know which
>>> storage adaptors we need to deal with when all we have is a VM name or
>>> something like that.
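>>>
>>> To give a feel for the VM-spec variant, it's roughly along these lines
>>> (just a sketch - the DiskTO/PrimaryDataStoreTO accessors are from
>>> memory and may not match the patch exactly):
>>>
>>> // Rough sketch of the adaptor-agnostic helper in KVMStoragePoolManager:
>>> // walk the VM's disks, look up the right adaptor per pool type, and let
>>> // each adaptor decide whether it needs to connect anything.
>>> public boolean connectPhysicalDisksViaVmSpec(VirtualMachineTO vmSpec) {
>>>     for (DiskTO disk : vmSpec.getDisks()) {
>>>         if (disk.getType() == Volume.Type.ISO) {
>>>             continue; // nothing to connect for ISOs
>>>         }
>>>         DataStoreTO store = disk.getData().getDataStore();
>>>         if (!(store instanceof PrimaryDataStoreTO)) {
>>>             continue;
>>>         }
>>>         PrimaryDataStoreTO primary = (PrimaryDataStoreTO) store;
>>>         StorageAdaptor adaptor = getStorageAdaptor(primary.getPoolType());
>>>         KVMStoragePool pool = adaptor.getStoragePool(primary.getUuid());
>>>         if (!adaptor.connectPhysicalDisk(disk.getData().getPath(), pool)) {
>>>             return false;
>>>         }
>>>     }
>>>     return true;
>>> }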
>>>
>>> On Fri, Sep 27, 2013 at 12:56 AM, Mike Tutkowski
>>> <mike.tutkow...@solidfire.com> wrote:
>>> > Thanks for the clarification on how that works.
>>> >
>>> > Also, yeah, I think CHAP only grants you access to a volume. If multiple
>>> > hosts are using the CHAP credentials for a single volume, it's up to
>>> > those hosts to make sure they don't step on each other's toes (and this
>>> > is - to my understanding - how it works with XenServer and VMware).
>>> >
>>> >
>>> > On Fri, Sep 27, 2013 at 12:45 AM, Marcus Sorensen <shadow...@gmail.com
>>> >wrote:
>>> >
>>> >> On Fri, Sep 27, 2013 at 12:21 AM, Mike Tutkowski
>>> >> <mike.tutkow...@solidfire.com> wrote:
>>> >> > Maybe I should seek a little clarification as to how live migration
>>> >> > works in CS with KVM.
>>> >> >
>>> >> > Before we do a live migration of VM 1 from Host 1 to Host 2, do we
>>> >> > detach all disks from VM1?
>>> >> >
>>> >> > If so, then we're good to go there.
>>> >> >
>>> >> > I'm not as clear with HA.
>>> >>
>>> >> During live migration, this is what we currently do in our modified
>>> >> 4.1. I'm not sure if the new framework is set up for this, but it
>>> >> should be made to do so if not.
>>> >>
>>> >> PrepareForMigrationCommand is called on the destination host. In its
>>> >> handler we added a few lines to call connectPhysicalDisk, so the
>>> >> destination host attaches the SAN disks and then creates a paused VM.
>>> >>
>>> >> MigrateCommand is called on the source host. This sends the proper
>>> >> command to transfer VM memory and then atomically cuts over to the
>>> >> destination host. During this time the disks are attached on both
>>> >> sides, but the VM is still the only thing using them. There's no
>>> >> caching on the host (qemu is using directio), so this is safe.
>>> >>
>>> >> After MigrateCommand completes its VM handoff, we detach the disks
>>> >> before returning.
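>>> >>
>>> >> On the destination side it really is just a few extra lines in the
>>> >> PrepareForMigrationCommand handler, roughly (paraphrased from memory,
>>> >> not the exact code; the helper name is illustrative):
>>> >>
>>> >> // Connect every disk of the incoming VM on the destination host
>>> >> // before the paused domain is defined.
>>> >> VirtualMachineTO vm = cmd.getVirtualMachine();
>>> >> if (!_storagePoolMgr.connectPhysicalDisksViaVmSpec(vm)) {
>>> >>     return new PrepareForMigrationAnswer(cmd,
>>> >>             "failed to connect physical disks on destination host");
>>> >> }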
>>> >>
>>> >> >
>>> >> > If VM 1 goes down because Host 1 crashes, is the attach-volume command
>>> >> > invoked as many times as need be (depending on how many volumes need
>>> >> > to be attached) when VM 1 is restarted on Host 2?
>>> >>
>>> >> From what I can tell, the attachVolume and dettachVolume seemed to
>>> >> only be for attaching disks to existing, running VMs (i.e. inserting
>>> >> new XML into an existing domain definition).  Normally when starting a
>>> >> vm from scratch the vm definition, along with any currently attached
>>> >> disks, is passed in to StartCommand (which would also be called during
>>> >> HA restart of a VM). In our 4.1 branch we also have a call to
>>> >> connectPhysicalDisk here, where we loop through the disk definitions
>>> >> that were passed.
>>> >>
>>> >> Again, I should be able to flesh out the differences in 4.2 and how to
>>> >> go about making this suitable for everyone in the coming days, so long
>>> >> as you and anyone else writing plugins agree with the changes.
>>> >>
>>> >> These processes would make sure the disks are available on the hosts
>>> >> they need to be, but they don't really provide locking or ensure that
>>> >> only the necessary hosts can write to or see the disks at any given
>>> >> time. I don't think CHAP does that either. We currently generate ACLs
>>> >> via our SAN api during connectPhysicalDisk as a safety measure, but if
>>> >> CloudStack is working properly it will be in charge of controlling
>>> >> that the disks are only being used where they should be. The ACLs just
>>> >> ensure that if the VM somehow gets started in two different places
>>> >> (e.g. HA malfunction), only one of them will have access to the disks.
>>> >>
>>> >> >
>>> >> >
>>> >> > On Fri, Sep 27, 2013 at 12:17 AM, Mike Tutkowski <
>>> >> > mike.tutkow...@solidfire.com> wrote:
>>> >> >
>>> >> >> Let me clarify this line a bit:
>>> >> >>
>>> >> >> "We get away without this with XenServer and VMware because - as far
>>> as
>>> >> I
>>> >> >> know - CS delegates HA and live migration to those clusters and they
>>> >> handle
>>> >> >> it most likely with some kind of locking protocol on the
>>> SR/datastore."
>>> >> >>
>>> >> >> When I set up a XenServer or a VMware cluster, all nodes in the
>>> >> >> cluster have the proper CHAP credentials and can access a shared
>>> >> >> SR/datastore.
>>> >> >>
>>> >> >> HA and live migrations are OK here because the cluster controls
>>> >> >> access to the VDI on the SR (or VMDK on the datastore) with some
>>> >> >> kind of locking protocol, I expect.
>>> >> >>
>>> >> >> Since KVM isn't really in a cluster outside of the CloudStack world,
>>> >> >> CloudStack has to handle these intricacies.
>>> >> >>
>>> >> >> In my case, I'm just presenting a raw disk to a VM on a KVM host.
>>> >> >>
>>> >> >> In that case, HA and live migration depend on the storage plug-in
>>> >> >> being able to grant and revoke access to the volume for hosts as
>>> >> >> needed.
>>> >> >>
>>> >> >> I'd actually rather not even bother with CHAP when using KVM.
>>> >> >>
>>> >> >>
>>> >> >> On Fri, Sep 27, 2013 at 12:06 AM, Mike Tutkowski <
>>> >> >> mike.tutkow...@solidfire.com> wrote:
>>> >> >>
>>> >> >>> Hey Marcus,
>>> >> >>>
>>> >> >>> I agree that CHAP does not fulfill the same role as fencing.
>>> >> >>>
>>> >> >>> I think we're going to have trouble with HA and live migrations on
>>> >> >>> KVM if the storage plug-in doesn't have a way of knowing when a host
>>> >> >>> wants to access a volume and when we want to revoke access to that
>>> >> >>> volume.
>>> >> >>>
>>> >> >>> We get away without this with XenServer and VMware because - as far
>>> >> >>> as I know - CS delegates HA and live migration to those clusters and
>>> >> >>> they handle it most likely with some kind of locking protocol on the
>>> >> >>> SR/datastore.
>>> >> >>>
>>> >> >>> As far as the path field is concerned, I should be able to populate
>>> >> >>> it with the IQN of the volume in question.
>>> >> >>>
>>> >> >>> One problem I do see, however, is in the getPhysicalDisk method.
>>> >> >>>
>>> >> >>> How are you envisioning I keep track of KVMPhysicalDisks that I
>>> >> >>> create in my connect method?
>>> >> >>>
>>> >> >>> Initially I was thinking I'd just keep them in a map: storage pool
>>> >> >>> UUID to KVMPhysicalDisks.
>>> >> >>>
>>> >> >>> The problem is, how do I reconstruct that map if the agent is
>>> >> >>> restarted (say the host crashes or is rebooted)?
>>> >> >>>
>>> >> >>> For storage pools, we get a message (ModifyStoragePoolCommand) from
>>> >> >>> the CS MS to tell us about all of the relevant storage pools. With
>>> >> >>> this message, I can reconstruct my cache of storage pools. No
>>> >> >>> problem there.
>>> >> >>>
>>> >> >>> But how will I know which volumes belong to a given storage pool if
>>> >> >>> I have to rebuild that map? How will I even know which volumes are
>>> >> >>> in use at all?
>>> >> >>>
>>> >> >>> Thanks
>>> >> >>>
>>> >> >>>
>>> >> >>> On Thu, Sep 26, 2013 at 11:37 PM, Marcus Sorensen <
>>> shadow...@gmail.com
>>> >> >wrote:
>>> >> >>>
>>> >> >>>> On Thu, Sep 26, 2013 at 10:23 PM, Mike Tutkowski
>>> >> >>>> <mike.tutkow...@solidfire.com> wrote:
>>> >> >>>> > My comments are inline:
>>> >> >>>> >
>>> >> >>>> >
>>> >> >>>> > On Thu, Sep 26, 2013 at 9:10 PM, Marcus Sorensen <
>>> >> shadow...@gmail.com
>>> >> >>>> >wrote:
>>> >> >>>> >
>>> >> >>>> >> Ok, let me digest this a bit. I got the GitHub responses, but
>>> >> >>>> >> I'd also like to keep it on-list.
>>> >> >>>> >>
>>> >> >>>> >>  My initial thoughts are:
>>> >> >>>> >>
>>> >> >>>> >> 1) I don't think disk format and size are necessary parameters
>>> >> >>>> >> for connectPhysicalDisk, as the format can be determined by the
>>> >> >>>> >> adaptor, and the size is set during the createAsync call in the
>>> >> >>>> >> plugin. We really just need the disk path and the pool.
>>> >> >>>> > [Mike T.] I agree, format is not needed. The only reason I have
>>> >> >>>> > size passed in is because I need to create a KVMPhysicalDisk at
>>> >> >>>> > the end of the connect method. Since this KVMPhysicalDisk is (in
>>> >> >>>> > the code on GitHub) being used to create our XML to attach the
>>> >> >>>> > disk, I figured we'd need that size. The KVMPhysicalDisk I produce
>>> >> >>>> > from my implementation of getPhysicalDisk is not as good because I
>>> >> >>>> > don't know the size of the disk at this point (I don't keep that
>>> >> >>>> > information around). The reason I don't keep that info around is
>>> >> >>>> > because I don't have a good way to reproduce that info if the KVM
>>> >> >>>> > host is rebooted. We get info about storage pools in the form of a
>>> >> >>>> > ModifyStoragePoolCommand, but nothing about the volumes inside of
>>> >> >>>> > the storage pool.
>>> >> >>>>
>>> >> >>>> getPhysicalDisk is called in a bunch of places. I'd rely on the
>>> >> >>>> connectPhysicalDisk to only make the block device appear on the
>>> >> >>>> host, and then getPhysicalDisk to find that block device and fill
>>> >> >>>> out things like disk size and path (the real path to the local
>>> >> >>>> block device) for passing and creating the disk XML. Trust me,
>>> >> >>>> unless things have changed significantly you need to be able to
>>> >> >>>> identify a given device as a specific local disk by whatever you
>>> >> >>>> are setting the 'path' attribute to be. getPhysicalDisk will be
>>> >> >>>> called on your storage pool with simply the path attribute, and via
>>> >> >>>> your adaptor with the pool and path.
>>> >> >>>>
>>> >> >>>> So you may set path as some combination of iqn and target/pool
>>> >> >>>> info, or if iqn is enough to identify a unique block device (in
>>> >> >>>> /dev/disk/by-id maybe?) on a host then just use that. Path just
>>> >> >>>> needs to be something, anything, to identify the disk on the host.
>>> >> >>>> In getPhysicalDisk, identify the local block device matching the
>>> >> >>>> info, create a new KVMPhysicalDisk with the local path, size, etc.,
>>> >> >>>> and return it.
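>>> >> >>>>
>>> >> >>>> As a sketch (the /dev/disk/by-path naming and the pool accessor
>>> >> >>>> here are assumptions to show the idea, not your actual layout):
>>> >> >>>>
>>> >> >>>> // Illustrative getPhysicalDisk: resolve the block device that the
>>> >> >>>> // earlier connectPhysicalDisk made appear, then report its real
>>> >> >>>> // local path and size.
>>> >> >>>> public KVMPhysicalDisk getPhysicalDisk(String volumePath, KVMStoragePool pool) {
>>> >> >>>>     // assumed udev layout: /dev/disk/by-path/ip-<host>:3260-iscsi-<iqn>-lun-0
>>> >> >>>>     String device = "/dev/disk/by-path/ip-" + pool.getSourceHost()
>>> >> >>>>             + ":3260-iscsi-" + volumePath + "-lun-0";
>>> >> >>>>     KVMPhysicalDisk disk = new KVMPhysicalDisk(device, volumePath, pool);
>>> >> >>>>     disk.setFormat(PhysicalDiskFormat.RAW);
>>> >> >>>>     long size = getBlockDeviceSize(device);
>>> >> >>>>     disk.setSize(size);
>>> >> >>>>     disk.setVirtualSize(size);
>>> >> >>>>     return disk;
>>> >> >>>> }
>>> >> >>>>
>>> >> >>>> // helper: block device size from sysfs (sectors * 512 bytes)
>>> >> >>>> private long getBlockDeviceSize(String device) {
>>> >> >>>>     try {
>>> >> >>>>         String name = Paths.get(device).toRealPath().getFileName().toString();
>>> >> >>>>         String sectors = new String(Files.readAllBytes(
>>> >> >>>>                 Paths.get("/sys/class/block/" + name + "/size"))).trim();
>>> >> >>>>         return Long.parseLong(sectors) * 512L;
>>> >> >>>>     } catch (Exception e) {
>>> >> >>>>         return 0;
>>> >> >>>>     }
>>> >> >>>> }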
>>> >> >>>>
>>> >> >>>> >
>>> >> >>>> >>
>>> >> >>>> >> 2) I thought this access group thing you mention was the
>>> >> >>>> >> grantAccess and revokeAccess calls in the storage plugin 2.0
>>> >> >>>> >> design doc. Was that not implemented?
>>> >> >>>> >>
>>> >> >>>> > [Mike T.] Yeah, as I mentioned in an e-mail way back, those
>>> >> >>>> > methods were never implemented in 4.2. I think you said you were
>>> >> >>>> > going to get around them not being implemented by keeping certain
>>> >> >>>> > logic that talks to the SAN in the agent. I don't think we want
>>> >> >>>> > any SolidFire-specific code in the agent, however, so I can't go
>>> >> >>>> > that route. If those methods do not get implemented in 4.3, then I
>>> >> >>>> > will need to use CHAP credentials for KVM (just like I did with
>>> >> >>>> > XenServer and VMware).
>>> >> >>>>
>>> >> >>>> I initially figured your StorageAdaptor implementation would be all
>>> >> >>>> solidfire specific, just like the mgmt server side plugin is. If it
>>> >> >>>> can be generic to all iscsi storage then that's great. I agree that
>>> >> >>>> ideally the agent wouldn't be making API calls to your SAN. I don't
>>> >> >>>> think it should be necessary given that you're not going to use the
>>> >> >>>> ACL route. I'm not sure CHAP fills the same purpose of fencing.
>>> >> >>>>
>>> >> >>>> >
>>> >> >>>> >>
>>> >> >>>> >> I see you've added getters/setters for the attach cmd to pass
>>> >> >>>> >> the iscsi info you need. Would it perhaps be possible to send a
>>> >> >>>> >> details Map<String, String> instead? Then any plugin implementer
>>> >> >>>> >> could attach arbitrary data they need. So it might be
>>> >> >>>> >> connectPhysicalDisk(StoragePoolType type, String poolUuid,
>>> >> >>>> >> String volPath, Map<String, String> details)? I'll have to look
>>> >> >>>> >> and see where those cmd attributes are set; ideally it would be
>>> >> >>>> >> all the way back in the plugin, to avoid custom code for every
>>> >> >>>> >> adaptor that wants to set details.
>>> >> >>>> >>
>>> >> >>>> > [Mike T.] If I'm not using the volumes.path field for anything, I
>>> >> >>>> > can stick the IQN in volumes.path (as well as leaving it in
>>> >> >>>> > volumes.iscsi_name, which is used elsewhere). That way we only
>>> >> >>>> > have to ask for getPath().
>>> >> >>>>
>>> >> >>>> Yeah, whatever it is that you'd need to find the right block device
>>> >> >>>> should go in the path. If you look through LibvirtComputingResource
>>> >> >>>> you'll see stuff like this sprinkled around:
>>> >> >>>>
>>> >> >>>>                 KVMPhysicalDisk volume =
>>> >> >>>>                         primaryPool.getPhysicalDisk(cmd.getVolumePath());
>>> >> >>>>
>>> >> >>>>
>>> >> >>>> or:
>>> >> >>>>         String volid = cmd.getPath();
>>> >> >>>>          KVMPhysicalDisk vol = pool.getPhysicalDisk(volid);
>>> >> >>>>
>>> >> >>>> or:
>>> >> >>>>
>>> >> >>>>                     KVMPhysicalDisk physicalDisk =
>>> >> >>>>                             _storagePoolMgr.getPhysicalDisk(store.getPoolType(),
>>> >> >>>>                                     store.getUuid(), data.getPath());
>>> >> >>>>
>>> >> >>>> Maybe some of it is short-circuited by the new KVMStorageProcessor,
>>> >> >>>> but I'd still implement a working one, and then attachVolume can
>>> >> >>>> call getPhysicalDisk after connectPhysicalDisk, even on your
>>> >> >>>> adaptor/pool.
>>> >> >>>>
>>> >> >>>> >
>>> >> >>>> >>
>>> >> >>>> >> On Thu, Sep 26, 2013 at 7:35 PM, Mike Tutkowski
>>> >> >>>> >> <mike.tutkow...@solidfire.com> wrote:
>>> >> >>>> >> > Also, if we went the non-CHAP route, before attaching a volume
>>> >> >>>> >> > to a VM, we'd have to tell the plug-in to set up a volume
>>> >> >>>> >> > access group.
>>> >> >>>> >> >
>>> >> >>>> >> > When a volume is detached from a VM, we'd have to tell the
>>> >> >>>> >> > plug-in to delete the volume access group.
>>> >> >>>> >> >
>>> >> >>>> >> >
>>> >> >>>> >> > On Thu, Sep 26, 2013 at 7:32 PM, Mike Tutkowski <
>>> >> >>>> >> > mike.tutkow...@solidfire.com> wrote:
>>> >> >>>> >> >
>>> >> >>>> >> >> I mention this in my comments on GitHub, as well, but CHAP
>>> >> >>>> >> >> info is associated with an account - not a storage pool.
>>> >> >>>> >> >>
>>> >> >>>> >> >> Ideally we could do without CHAP info entirely if we had a
>>> >> >>>> >> >> reliable way to tell the storage plug-in that a given host
>>> >> >>>> >> >> wants to access a given volume. In this case, my storage
>>> >> >>>> >> >> plug-in could create what we call a Volume Access Group on
>>> >> >>>> >> >> the SAN. It would essentially say, "The host with IQN <x> can
>>> >> >>>> >> >> access the volume with IQN <y> without using CHAP
>>> >> >>>> >> >> credentials." Of course we'd need a way to revoke this
>>> >> >>>> >> >> privilege in the event of a live migration of a VM. Right
>>> >> >>>> >> >> now, I do not believe such a facility is supported with the
>>> >> >>>> >> >> storage plug-ins.
>>> >> >>>> >> >>
>>> >> >>>> >> >>
>>> >> >>>> >> >> On Thu, Sep 26, 2013 at 4:56 PM, Marcus Sorensen <
>>> >> >>>> shadow...@gmail.com
>>> >> >>>> >> >wrote:
>>> >> >>>> >> >>
>>> >> >>>> >> >>> Looking at your code, is the CHAP info stored with the pool,
>>> >> >>>> >> >>> so we could pass the pool to the adaptor? That would be more
>>> >> >>>> >> >>> agnostic; anyone implementing a plugin could pull the
>>> >> >>>> >> >>> specifics they need out of the pool on the adaptor side,
>>> >> >>>> >> >>> rather than creating custom signatures.
>>> >> >>>> >> >>>
>>> >> >>>> >> >>> Also, I think we may want to consider implementing
>>> >> >>>> >> >>> connect/disconnect as just dummy methods in
>>> >> >>>> >> >>> LibvirtStorageAdaptor, so we don't have to be picky about
>>> >> >>>> >> >>> which adaptors/types we check in every single place we may
>>> >> >>>> >> >>> want to connect/disconnect (in 4.1 there were several; I'm
>>> >> >>>> >> >>> not sure if everything goes through this in 4.2). We can
>>> >> >>>> >> >>> just call adaptor.connectPhysicalDisk and the adaptor can
>>> >> >>>> >> >>> decide if it needs to do anything.
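>>> >> >>>> >> >>>
>>> >> >>>> >> >>> A minimal sketch of what I mean by a dummy method (signature
>>> >> >>>> >> >>> as discussed; the no-op body is the whole point):
>>> >> >>>> >> >>>
>>> >> >>>> >> >>> // No-op in LibvirtStorageAdaptor: libvirt-managed pools need
>>> >> >>>> >> >>> // no extra connect step, so callers never have to
>>> >> >>>> >> >>> // special-case pool types before calling it.
>>> >> >>>> >> >>> @Override
>>> >> >>>> >> >>> public boolean connectPhysicalDisk(String volumeUuid, KVMStoragePool pool) {
>>> >> >>>> >> >>>     return true;
>>> >> >>>> >> >>> }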
>>> >> >>>> >> >>>
>>> >> >>>> >> >>> Comments are attached to your commit; I just wanted to echo
>>> >> >>>> >> >>> them here on-list.
>>> >> >>>> >> >>>
>>> >> >>>> >> >>> On Thu, Sep 26, 2013 at 4:32 PM, Mike Tutkowski
>>> >> >>>> >> >>> <mike.tutkow...@solidfire.com> wrote:
>>> >> >>>> >> >>> > Oh, SnapshotTestWithFakeData is just modified because the
>>> >> >>>> >> >>> > code wasn't building until I corrected this. It has nothing
>>> >> >>>> >> >>> > really to do with my real changes.
>>> >> >>>> >> >>> >
>>> >> >>>> >> >>> >
>>> >> >>>> >> >>> > On Thu, Sep 26, 2013 at 4:31 PM, Mike Tutkowski <
>>> >> >>>> >> >>> > mike.tutkow...@solidfire.com> wrote:
>>> >> >>>> >> >>> >
>>> >> >>>> >> >>> >> Hey Marcus,
>>> >> >>>> >> >>> >>
>>> >> >>>> >> >>> >> I implemented your recommendations regarding adding
>>> >> >>>> >> >>> >> connect and disconnect methods. It is not yet checked in
>>> >> >>>> >> >>> >> (as you know, having trouble with my KVM environment),
>>> >> >>>> >> >>> >> but it is on GitHub here:
>>> >> >>>> >> >>> >>
>>> >> >>>> >> >>> >> https://github.com/mike-tutkowski/incubator-cloudstack/commit/f2897c65689012e6157c0a0c2ed7e5355900c59a
>>> >> >>>> >> >>> >>
>>> >> >>>> >> >>> >> Please let me know if you have any more comments.
>>> >> >>>> >> >>> >>
>>> >> >>>> >> >>> >> Thanks!
>>> >> >>>> >> >>> >>
>>> >> >>>> >> >>> >>
>>> >> >>>> >> >>> >> On Thu, Sep 26, 2013 at 4:05 PM, Marcus Sorensen <
>>> >> >>>> >> shadow...@gmail.com
>>> >> >>>> >> >>> >wrote:
>>> >> >>>> >> >>> >>
>>> >> >>>> >> >>> >>> Mike, everyone,
>>> >> >>>> >> >>> >>>    As I've mentioned on the board, I'm working on getting
>>> >> >>>> >> >>> >>> our own internal KVM storage plugin working on 4.2. In
>>> >> >>>> >> >>> >>> the interest of making it forward compatible, I just
>>> >> >>>> >> >>> >>> wanted to confirm what you were doing with the solidfire
>>> >> >>>> >> >>> >>> plugin as far as attaching your iscsi LUNs. We had
>>> >> >>>> >> >>> >>> discussed a new connectPhysicalDisk method for the
>>> >> >>>> >> >>> >>> StorageAdaptor class, something perhaps like:
>>> >> >>>> >> >>> >>>
>>> >> >>>> >> >>> >>> public boolean connectPhysicalDisk(String volumeUuid, KVMStoragePool pool);
>>> >> >>>> >> >>> >>>
>>> >> >>>> >> >>> >>> then added to KVMStoragePoolManager:
>>> >> >>>> >> >>> >>>
>>> >> >>>> >> >>> >>> public boolean connectPhysicalDisk(StoragePoolType type,
>>> >> >>>> >> >>> >>>         String poolUuid, String volPath) {
>>> >> >>>> >> >>> >>>     StorageAdaptor adaptor = getStorageAdaptor(type);
>>> >> >>>> >> >>> >>>     KVMStoragePool pool = adaptor.getStoragePool(poolUuid);
>>> >> >>>> >> >>> >>>     return adaptor.connectPhysicalDisk(volPath, pool);
>>> >> >>>> >> >>> >>> }
>>> >> >>>> >> >>> >>>
>>> >> >>>> >> >>> >>> Something similar to this for disconnect as well. Then in
>>> >> >>>> >> >>> >>> the KVMStorageProcessor these can be called as needed for
>>> >> >>>> >> >>> >>> attach/detach. We can probably stub out one in
>>> >> >>>> >> >>> >>> LibvirtStorageAdaptor so there's no need to switch or
>>> >> >>>> >> >>> >>> if/else for pool types, for example in
>>> >> >>>> >> >>> >>> KVMStorageProcessor.attachVolume.
>>> >> >>>> >> >>> >>>
>>> >> >>>> >> >>> >>> I have debated whether or not it should just be rolled
>>> >> >>>> >> >>> >>> into getPhysicalDisk, having it connect the disk if it's
>>> >> >>>> >> >>> >>> not already connected. getPhysicalDisk is called a lot,
>>> >> >>>> >> >>> >>> and I'm not sure it always needs to connect the disk when
>>> >> >>>> >> >>> >>> it does. In past iterations getPhysicalDisk has simply
>>> >> >>>> >> >>> >>> spoken to our SAN API and returned the disk details,
>>> >> >>>> >> >>> >>> nothing more. So it seemed more flexible and granular to
>>> >> >>>> >> >>> >>> do the connectPhysicalDisk (we have one now in our 4.1
>>> >> >>>> >> >>> >>> version).
>>> >> >>>> >> >>> >>>
>>> >> >>>> >> >>> >>
>>> >> >>>> >> >>> >>
>>> >> >>>> >> >>> >>
>>> >> >>>> >> >>> >>
>>> >> >>>> >> >>> >
>>> >> >>>> >> >>> >
>>> >> >>>> >> >>> >
>>> >> >>>> >> >>>
>>> >> >>>> >> >>
>>> >> >>>> >> >>
>>> >> >>>> >> >>
>>> >> >>>> >> >>
>>> >> >>>> >> >
>>> >> >>>> >> >
>>> >> >>>> >> >
>>> >> >>>> >>
>>> >> >>>> >
>>> >> >>>> >
>>> >> >>>> >
>>> >> >>>>
>>> >> >>>
>>> >> >>>
>>> >> >>>
>>> >> >>>
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >
>>> >> >
>>> >> >
>>> >>
>>> >
>>> >
>>> >
>>>
>>
>>
>>
