That's right
On Sep 16, 2013 12:31 PM, "Mike Tutkowski" <mike.tutkow...@solidfire.com>
wrote:

> I understand what you're saying now, Marcus.
>
> I wasn't sure if the Libvirt iSCSI Storage Pool was still an option
> (looking into that still), but I see what you mean: If it is, we don't need
> a new adaptor; otherwise, we do.
>
> If Libvirt's iSCSI Storage Pool does work, I could update the current
> adaptor, if need be, to make use of it.
>
>
> On Mon, Sep 16, 2013 at 12:24 PM, Marcus Sorensen <shadow...@gmail.com> wrote:
>
>> Well, you'd use neither of the two pool types, because you are not
>> letting libvirt handle the pool; you are doing it with your own pool and
>> adaptor class. Libvirt will be unaware of everything but the disk XML you
>> attach to a VM. You'd only use those if libvirt's functions were
>> advantageous, i.e. if it already did everything you want. Since neither of
>> those seems to provide both iSCSI and the 1:1 mapping you want, that's why
>> we are talking about your own pool/adaptor.
>>
>> You can log in to the target via your implementation of getPhysicalDisk
>> (as you mention, from AttachVolumeCommand), or log in during your
>> implementation of createStoragePool and simply rescan for LUNs in
>> getPhysicalDisk. Presumably, in most cases the host will already be logged
>> in and new LUNs will have been created in the meantime.
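
As a hedged illustration of that flow (a sketch only, not actual CloudStack or
SolidFire code; the class, the constructor, the portal and the by-path device
layout are assumptions), a custom adaptor's getPhysicalDisk() could shell out
to iscsiadm to log in (or rescan) and then hand back the resulting block device:

    // Sketch only: hypothetical getPhysicalDisk() that logs in to the iSCSI
    // target (or rescans an existing session) and returns the block device.
    // These would be methods of the custom adaptor class.
    public KVMPhysicalDisk getPhysicalDisk(String volumeIqn, KVMStoragePool pool) {
        String portal = "192.168.1.10:3260";   // placeholder; would come from the pool's host/port

        run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);
        run("iscsiadm", "-m", "node", "-T", volumeIqn, "-p", portal, "--login");
        // If the host logged in earlier (e.g. in createStoragePool), a rescan
        // is enough to pick up LUNs created since then:
        // run("iscsiadm", "-m", "session", "--rescan");

        // Prefer a stable by-path name over /dev/sdX.
        String device = "/dev/disk/by-path/ip-" + portal + "-iscsi-" + volumeIqn + "-lun-0";
        return new KVMPhysicalDisk(device, volumeIqn, pool);   // constructor per 4.2-era agent code, from memory
    }

    private void run(String... cmd) {
        try {
            int rc = new ProcessBuilder(cmd).inheritIO().start().waitFor();
            if (rc != 0) {
                throw new RuntimeException("command failed: " + String.join(" ", cmd));
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
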
>> On Sep 16, 2013 12:09 PM, "Mike Tutkowski" <mike.tutkow...@solidfire.com>
>> wrote:
>>
>>> Hey Marcus,
>>>
>>> Thanks for that clarification.
>>>
>>> Sorry if this is a redundant question:
>>>
>>> When the AttachVolumeCommand comes in, it sounds like we thought the
>>> best approach would be for me to discover and log in to the iSCSI target
>>> using iscsiadm.
>>>
>>> This will create a new device: /dev/sdX.
>>>
>>> We would then pass this new device into the VM (passing XML into the
>>> appropriate Libvirt API).
>>>
>>> If this is an accurate understanding, can you tell me: Do you think we
>>> should be using a Disk Storage Pool or an iSCSI Storage Pool?
>>>
>>> I believe I recall you leaning toward a Disk Storage Pool because we
>>> will have already discovered the iSCSI target and, as such, will already
>>> have a device to pass into the VM.
>>>
>>> It seems like either way would work.
>>>
>>> Maybe I need to study Libvirt's iSCSI Storage Pools more to understand
>>> if they would do the work of discovering the iSCSI target for me (and maybe
>>> avoid me having to use iscsiadm).
>>>
>>> Thanks for the clarification! :)
>>>
>>>
>>> On Mon, Sep 16, 2013 at 11:08 AM, Marcus Sorensen 
>>> <shadow...@gmail.com>wrote:
>>>
>>>> It will still register the pool.  You still have a primary storage
>>>> pool that you registered, whether it's local, cluster or zone wide.
>>>> NFS is optionally zone wide as well (I'm assuming customers can choose
>>>> to add your storage only cluster-wide if they want that for resource
>>>> partitioning), but it registers the pool in Libvirt prior to use.
>>>>
>>>> Here's a better explanation of what I meant.  AttachVolumeCommand gets
>>>> both pool and volume info. It first looks up the pool:
>>>>
>>>>     KVMStoragePool primary = _storagePoolMgr.getStoragePool(
>>>>                     cmd.getPooltype(),
>>>>                     cmd.getPoolUuid());
>>>>
>>>> Then it looks up the disk from that pool:
>>>>
>>>>     KVMPhysicalDisk disk = primary.getPhysicalDisk(cmd.getVolumePath());
>>>>
>>>> Most of the commands only pass volume info like this (getVolumePath
>>>> generally means the uuid of the volume), since the pool is looked up
>>>> separately. If you don't save the pool info in a map in your custom
>>>> class when createStoragePool is called, then getStoragePool won't be
>>>> able to find it. This is a simple thing to do in your implementation of
>>>> createStoragePool; I just thought I'd mention it because it is key. Just
>>>> create a map of pool uuid to pool object and save entries in it so they
>>>> are available across all of the methods of that class.
>>>>
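
For what it's worth, a minimal sketch of that uuid-to-pool bookkeeping might
look like the following; the package names are from memory, the
SolidFireStoragePool class is a placeholder, and the createStoragePool
signature is assumed from the 4.2-era StorageAdaptor interface:

    // Sketch of the pool bookkeeping described above; only the static Map
    // idea matters here.
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    import com.cloud.storage.Storage.StoragePoolType;          // package names from memory
    import com.cloud.hypervisor.kvm.storage.KVMStoragePool;
    import com.cloud.hypervisor.kvm.storage.StorageAdaptor;

    public class SolidFireStorageAdaptor implements StorageAdaptor {

        // Shared across calls so later volume commands can look the pool up by uuid.
        private static final Map<String, KVMStoragePool> s_pools =
                new ConcurrentHashMap<String, KVMStoragePool>();

        public KVMStoragePool createStoragePool(String uuid, String host, int port,
                String path, String userInfo, StoragePoolType type) {
            KVMStoragePool pool = s_pools.get(uuid);
            if (pool == null) {
                // optionally log the host in to the SAN/iSCSI target here,
                // then remember the pool for later lookups
                pool = new SolidFireStoragePool(uuid, host, port, path, this);  // placeholder class
                s_pools.put(uuid, pool);
            }
            return pool;
        }

        public KVMStoragePool getStoragePool(String uuid) {
            return s_pools.get(uuid);   // volume commands only carry the pool uuid
        }

        // remaining StorageAdaptor methods (getPhysicalDisk, deleteStoragePool, ...)
        // omitted in this sketch
    }
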
>>>> On Mon, Sep 16, 2013 at 10:43 AM, Mike Tutkowski
>>>> <mike.tutkow...@solidfire.com> wrote:
>>>> > Thanks, Marcus
>>>> >
>>>> > About this:
>>>> >
>>>> > "When the agent connects to the
>>>> > management server, it registers all pools in the cluster with the
>>>> > agent."
>>>> >
>>>> > So, my plug-in allows you to create zone-wide primary storage. This
>>>> > just means that any cluster can use the SAN (the SAN was registered
>>>> > as primary storage as opposed to a preallocated volume from the SAN).
>>>> > Once you create a primary storage based on this plug-in, the storage
>>>> > framework will invoke the plug-in, as needed, to create and delete
>>>> > volumes on the SAN. For example, you could have one SolidFire primary
>>>> > storage (zone wide) and currently have 100 volumes created on the SAN
>>>> > to support it.
>>>> >
>>>> > In this case, what will the management server be registering with the
>>>> > agent in ModifyStoragePool? If only the storage pool (primary storage)
>>>> > is passed in, that will be too vague as it does not contain
>>>> > information on what volumes have been created for the agent.
>>>> >
>>>> > Thanks
>>>> >
>>>> >
>>>> > On Sun, Sep 15, 2013 at 11:53 PM, Marcus Sorensen <
>>>> shadow...@gmail.com>
>>>> > wrote:
>>>> >>
>>>> >> Yes, see my previous email from the 13th. You can create your own
>>>> >> KVMStoragePool class, and StorageAdaptor class, like the libvirt ones
>>>> >> have. The previous email outlines how to add your own StorageAdaptor
>>>> >> alongside LibvirtStorageAdaptor to take over all of the calls
>>>> >> (createStoragePool, getStoragePool, etc). As mentioned,
>>>> >> getPhysicalDisk I believe will be the one you use to actually
>>>> >> attach a LUN.
>>>> >>
>>>> >> Ignore CreateStoragePoolCommand. When the agent connects to the
>>>> >> management server, it registers all pools in the cluster with the
>>>> >> agent. It will call ModifyStoragePoolCommand, passing your storage
>>>> >> pool object (with all of the settings for your SAN). This in turn
>>>> >> calls _storagePoolMgr.createStoragePool, which will route through
>>>> >> KVMStoragePoolManager to your storage adapter that you've registered.
>>>> >> The last argument to createStoragePool is the pool type, which is
>>>> >> used to select a StorageAdaptor.
>>>> >>
>>>> >> From then on, most calls will only pass the volume info, and the
>>>> >> volume will have the uuid of the storage pool. For this reason, your
>>>> >> adaptor class needs to have a static Map variable that contains pool
>>>> >> uuid and pool object. Whenever they call createStoragePool on your
>>>> >> adaptor you add that pool to the map so that subsequent volume calls
>>>> >> can look up the pool details for the volume by pool uuid. With the
>>>> >> Libvirt adaptor, libvirt keeps track of that for you.
>>>> >>
>>>> >> When createStoragePool is called, you can log into the iscsi target
>>>> >> (or make sure you are already logged in, as it can be called over
>>>> >> again at any time), and when attach volume commands are fired off,
>>>> >> you can attach individual LUNs that are asked for, or rescan (say
>>>> >> that the plugin created a new ACL just prior to calling attach), or
>>>> >> whatever is necessary.
>>>> >>
>>>> >> KVM is a bit more work, but you can do anything you want. Actually, I
>>>> >> think you can call host scripts with Xen, but having the agent there
>>>> >> that runs your own code gives you the flexibility to do whatever.
>>>> >>
>>>> >> On Sun, Sep 15, 2013 at 10:44 PM, Mike Tutkowski
>>>> >> <mike.tutkow...@solidfire.com> wrote:
>>>> >> > I see right now LibvirtComputingResource.java has the following
>>>> >> > method that I might be able to leverage (it's probably not called
>>>> >> > at present and would need to be implemented in my case to discover
>>>> >> > my iSCSI target and log in to it):
>>>> >> >
>>>> >> >     protected Answer execute(CreateStoragePoolCommand cmd) {
>>>> >> >
>>>> >> >         return new Answer(cmd, true, "success");
>>>> >> >
>>>> >> >     }
>>>> >> >
>>>> >> > I would probably be able to call the KVMStorageManager to have it
>>>> >> > use my StorageAdaptor to do what's necessary here.
>>>> >> >
>>>> >> >
>>>> >> >
>>>> >> >
>>>> >> > On Sun, Sep 15, 2013 at 10:37 PM, Mike Tutkowski
>>>> >> > <mike.tutkow...@solidfire.com> wrote:
>>>> >> >>
>>>> >> >> Hey Marcus,
>>>> >> >>
>>>> >> >> When I implemented support in the XenServer and VMware plug-ins
>>>> for
>>>> >> >> "managed" storage, I started at the execute(AttachVolumeCommand)
>>>> >> >> methods in
>>>> >> >> both plug-ins.
>>>> >> >>
>>>> >> >> The code there was changed to check the AttachVolumeCommand
>>>> instance
>>>> >> >> for a
>>>> >> >> "managed" property.
>>>> >> >>
>>>> >> >> If managed was false, the normal attach/detach logic would just
>>>> run and
>>>> >> >> the volume would be attached or detached.
>>>> >> >>
>>>> >> >> If managed was true, new 4.2 logic would run to create (let's talk
>>>> >> >> XenServer here) a new SR and a new VDI inside of that SR (or to
>>>> >> >> reattach an
>>>> >> >> existing VDI inside an existing SR, if this wasn't the first time
>>>> the
>>>> >> >> volume
>>>> >> >> was attached). If managed was true and we were detaching the
>>>> volume,
>>>> >> >> the SR
>>>> >> >> would be detached from the XenServer hosts.
>>>> >> >>
>>>> >> >> I am currently walking through the execute(AttachVolumeCommand) in
>>>> >> >> LibvirtComputingResource.java.
>>>> >> >>
>>>> >> >> I see how the XML is constructed to describe whether a disk
>>>> should be
>>>> >> >> attached or detached. I also see how we call in to get a
>>>> StorageAdapter
>>>> >> >> (and
>>>> >> >> how I will likely need to write a new one of these).
>>>> >> >>
>>>> >> >> So, talking in XenServer terminology again, I was wondering if you
>>>> >> >> think
>>>> >> >> the approach we took in 4.2 with creating and deleting SRs in the
>>>> >> >> execute(AttachVolumeCommand) method would work here or if there
>>>> is some
>>>> >> >> other way I should be looking at this for KVM?
>>>> >> >>
>>>> >> >> As it is right now for KVM, storage has to be set up ahead of
>>>> time.
>>>> >> >> Assuming this is the case, there probably isn't currently a place
>>>> I can
>>>> >> >> easily inject my logic to discover and log in to iSCSI targets.
>>>> This is
>>>> >> >> why
>>>> >> >> we did it as needed in the execute(AttachVolumeCommand) for
>>>> XenServer
>>>> >> >> and
>>>> >> >> VMware, but I wanted to see if you have an alternative way that
>>>> might
>>>> >> >> be
>>>> >> >> better for KVM.
>>>> >> >>
>>>> >> >> One possible way to do this would be to modify VolumeManagerImpl
>>>> (or
>>>> >> >> whatever its equivalent is in 4.3) before it issues an
>>>> attach-volume
>>>> >> >> command
>>>> >> >> to KVM to check to see if the volume is to be attached to managed
>>>> >> >> storage.
>>>> >> >> If it is, then (before calling the attach-volume command in KVM)
>>>> call
>>>> >> >> the
>>>> >> >> create-storage-pool command in KVM (or whatever it might be
>>>> called).
>>>> >> >>
>>>> >> >> Just wanted to get some of your thoughts on this.
>>>> >> >>
>>>> >> >> Thanks!
>>>> >> >>
>>>> >> >>
>>>> >> >> On Sat, Sep 14, 2013 at 12:07 AM, Mike Tutkowski
>>>> >> >> <mike.tutkow...@solidfire.com> wrote:
>>>> >> >>>
>>>> >> >>> Yeah, I remember that StorageProcessor stuff being put in the
>>>> codebase
>>>> >> >>> and having to merge my code into it in 4.2.
>>>> >> >>>
>>>> >> >>> Thanks for all the details, Marcus! :)
>>>> >> >>>
>>>> >> >>> I can start digging into what you were talking about now.
>>>> >> >>>
>>>> >> >>>
>>>> >> >>> On Sat, Sep 14, 2013 at 12:02 AM, Marcus Sorensen
>>>> >> >>> <shadow...@gmail.com>
>>>> >> >>> wrote:
>>>> >> >>>>
>>>> >> >>>> Looks like things might be slightly different now in 4.2, with
>>>> >> >>>> KVMStorageProcessor.java in the mix. This looks more or less like
>>>> >> >>>> some of the commands were ripped out verbatim from
>>>> >> >>>> LibvirtComputingResource and placed here, so in general what I've
>>>> >> >>>> said is probably still true, just that the location of things
>>>> >> >>>> like AttachVolumeCommand might be different, in this file rather
>>>> >> >>>> than LibvirtComputingResource.java.
>>>> >> >>>>
>>>> >> >>>> On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen
>>>> >> >>>> <shadow...@gmail.com>
>>>> >> >>>> wrote:
>>>> >> >>>> > Ok, KVM will be close to that, of course, because only the
>>>> >> >>>> > hypervisor classes differ, the rest is all mgmt server.
>>>> >> >>>> > Creating a volume is just a db entry until it's deployed for
>>>> >> >>>> > the first time. AttachVolumeCommand on the agent side
>>>> >> >>>> > (LibvirtStorageAdaptor.java is analogous to
>>>> >> >>>> > CitrixResourceBase.java) will do the iscsiadm commands (via a
>>>> >> >>>> > KVM StorageAdaptor) to log in the host to the target and then
>>>> >> >>>> > you have a block device. Maybe libvirt will do that for you,
>>>> >> >>>> > but my quick read made it sound like the iscsi libvirt pool
>>>> >> >>>> > type is actually a pool, not a lun or volume, so you'll need to
>>>> >> >>>> > figure out if that works or if you'll have to use iscsiadm
>>>> >> >>>> > commands.
>>>> >> >>>> >
>>>> >> >>>> > If you're NOT going to use LibvirtStorageAdaptor (because
>>>> >> >>>> > Libvirt doesn't really manage your pool the way you want),
>>>> >> >>>> > you're going to have to create a version of the KVMStoragePool
>>>> >> >>>> > class and a StorageAdaptor class (see LibvirtStoragePool.java
>>>> >> >>>> > and LibvirtStorageAdaptor.java), implementing all of the
>>>> >> >>>> > methods, then in KVMStorageManager.java there's a
>>>> >> >>>> > "_storageMapper" map. This is used to select the correct
>>>> >> >>>> > adaptor; you can see in this file that every call first pulls
>>>> >> >>>> > the correct adaptor out of this map via getStorageAdaptor. You
>>>> >> >>>> > can also see a comment in this file that says "add other
>>>> >> >>>> > storage adaptors here", where it puts to this map; this is
>>>> >> >>>> > where you'd register your adaptor.
>>>> >> >>>> >
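
Roughly speaking, that registration and selection could look like the sketch
below. This is illustrative only, not the actual KVMStoragePoolManager code;
the "solidfire" key, the pool type check, and the adaptor classes are
placeholders, and the package names are from memory.

    // Illustrative sketch of the "_storageMapper" idea: register a custom
    // adaptor next to the libvirt one and pick it by pool type.
    import java.util.HashMap;
    import java.util.Map;

    import com.cloud.storage.Storage.StoragePoolType;            // package names from memory
    import com.cloud.hypervisor.kvm.storage.StorageAdaptor;

    public class StorageAdaptorRegistry {
        private final Map<String, StorageAdaptor> _storageMapper =
                new HashMap<String, StorageAdaptor>();

        public StorageAdaptorRegistry(StorageAdaptor libvirtAdaptor) {
            _storageMapper.put("libvirt", libvirtAdaptor);
            // "add other storage adaptors here"
            _storageMapper.put("solidfire", new SolidFireStorageAdaptor());   // hypothetical
        }

        public StorageAdaptor getStorageAdaptor(StoragePoolType type) {
            // route the plug-in's pool type to the custom adaptor, default to libvirt
            if (type == StoragePoolType.Iscsi) {    // or whatever type the plug-in's pools use
                return _storageMapper.get("solidfire");
            }
            return _storageMapper.get("libvirt");
        }
    }
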
>>>> >> >>>> > So, referencing StorageAdaptor.java, createStoragePool
>>>> accepts all
>>>> >> >>>> > of
>>>> >> >>>> > the pool data (host, port, name, path) which would be used to
>>>> log
>>>> >> >>>> > the
>>>> >> >>>> > host into the initiator. I *believe* the method
>>>> getPhysicalDisk
>>>> >> >>>> > will
>>>> >> >>>> > need to do the work of attaching the lun.  AttachVolumeCommand
>>>> >> >>>> > calls
>>>> >> >>>> > this and then creates the XML diskdef and attaches it to the
>>>> VM.
>>>> >> >>>> > Now,
>>>> >> >>>> > one thing you need to know is that createStoragePool is called
>>>> >> >>>> > often,
>>>> >> >>>> > sometimes just to make sure the pool is there. You may want to
>>>> >> >>>> > create
>>>> >> >>>> > a map in your adaptor class and keep track of pools that have
>>>> been
>>>> >> >>>> > created, LibvirtStorageAdaptor doesn't have to do this
>>>> because it
>>>> >> >>>> > asks
>>>> >> >>>> > libvirt about which storage pools exist. There are also calls
>>>> to
>>>> >> >>>> > refresh the pool stats, and all of the other calls can be
>>>> seen in
>>>> >> >>>> > the
>>>> >> >>>> > StorageAdaptor as well. There's a createPhysical disk, clone,
>>>> etc,
>>>> >> >>>> > but
>>>> >> >>>> > it's probably a hold-over from 4.1, as I have the vague idea
>>>> that
>>>> >> >>>> > volumes are created on the mgmt server via the plugin now, so
>>>> >> >>>> > whatever
>>>> >> >>>> > doesn't apply can just be stubbed out (or optionally
>>>> >> >>>> > extended/reimplemented here, if you don't mind the hosts
>>>> talking to
>>>> >> >>>> > the san api).
>>>> >> >>>> >
>>>> >> >>>> > There is a difference between attaching new volumes and
>>>> launching a
>>>> >> >>>> > VM
>>>> >> >>>> > with existing volumes.  In the latter case, the VM definition
>>>> that
>>>> >> >>>> > was
>>>> >> >>>> > passed to the KVM agent includes the disks, (StartCommand).
>>>> >> >>>> >
>>>> >> >>>> > I'd be interested in how your pool is defined for Xen, I
>>>> imagine it
>>>> >> >>>> > would need to be kept the same. Is it just a definition to
>>>> the SAN
>>>> >> >>>> > (ip address or some such, port number) and perhaps a volume
>>>> pool
>>>> >> >>>> > name?
>>>> >> >>>> >
>>>> >> >>>> >> If there is a way for me to update the ACL list on the SAN
>>>> to have
>>>> >> >>>> >> only a
>>>> >> >>>> >> single KVM host have access to the volume, that would be
>>>> ideal.
>>>> >> >>>> >
>>>> >> >>>> > That depends on your SAN API.  I was under the impression
>>>> that the
>>>> >> >>>> > storage plugin framework allowed for acls, or for you to do
>>>> >> >>>> > whatever
>>>> >> >>>> > you want for create/attach/delete/snapshot, etc. You'd just
>>>> call
>>>> >> >>>> > your
>>>> >> >>>> > SAN API with the host info for the ACLs prior to when the
>>>> disk is
>>>> >> >>>> > attached (or the VM is started).  I'd have to look more at the
>>>> >> >>>> > framework to know the details, in 4.1 I would do this in
>>>> >> >>>> > getPhysicalDisk just prior to connecting up the LUN.
>>>> >> >>>> >
>>>> >> >>>> >
>>>> >> >>>> > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
>>>> >> >>>> > <mike.tutkow...@solidfire.com> wrote:
>>>> >> >>>> >> OK, yeah, the ACL part will be interesting. That is a bit
>>>> >> >>>> >> different
>>>> >> >>>> >> from how
>>>> >> >>>> >> it works with XenServer and VMware.
>>>> >> >>>> >>
>>>> >> >>>> >> Just to give you an idea how it works in 4.2 with XenServer:
>>>> >> >>>> >>
>>>> >> >>>> >> * The user creates a CS volume (this is just recorded in the
>>>> >> >>>> >> cloud.volumes
>>>> >> >>>> >> table).
>>>> >> >>>> >>
>>>> >> >>>> >> * The user attaches the volume as a disk to a VM for the
>>>> first
>>>> >> >>>> >> time
>>>> >> >>>> >> (if the
>>>> >> >>>> >> storage allocator picks the SolidFire plug-in, the storage
>>>> >> >>>> >> framework
>>>> >> >>>> >> invokes
>>>> >> >>>> >> a method on the plug-in that creates a volume on the
>>>> SAN...info
>>>> >> >>>> >> like
>>>> >> >>>> >> the IQN
>>>> >> >>>> >> of the SAN volume is recorded in the DB).
>>>> >> >>>> >>
>>>> >> >>>> >> * CitrixResourceBase's execute(AttachVolumeCommand) is
>>>> executed.
>>>> >> >>>> >> It
>>>> >> >>>> >> determines based on a flag passed in that the storage in
>>>> question
>>>> >> >>>> >> is
>>>> >> >>>> >> "CloudStack-managed" storage (as opposed to "traditional"
>>>> >> >>>> >> preallocated
>>>> >> >>>> >> storage). This tells it to discover the iSCSI target. Once
>>>> >> >>>> >> discovered
>>>> >> >>>> >> it
>>>> >> >>>> >> determines if the iSCSI target already contains a storage
>>>> >> >>>> >> repository
>>>> >> >>>> >> (it
>>>> >> >>>> >> would if this were a re-attach situation). If it does
>>>> contain an
>>>> >> >>>> >> SR
>>>> >> >>>> >> already,
>>>> >> >>>> >> then there should already be one VDI, as well. If there is
>>>> no SR,
>>>> >> >>>> >> an
>>>> >> >>>> >> SR is
>>>> >> >>>> >> created and a single VDI is created within it (that takes up
>>>> about
>>>> >> >>>> >> as
>>>> >> >>>> >> much
>>>> >> >>>> >> space as was requested for the CloudStack volume).
>>>> >> >>>> >>
>>>> >> >>>> >> * The normal attach-volume logic continues (it depends on the
>>>> >> >>>> >> existence of
>>>> >> >>>> >> an SR and a VDI).
>>>> >> >>>> >>
>>>> >> >>>> >> The VMware case is essentially the same (mainly just
>>>> substitute
>>>> >> >>>> >> datastore
>>>> >> >>>> >> for SR and VMDK for VDI).
>>>> >> >>>> >>
>>>> >> >>>> >> In both cases, all hosts in the cluster have discovered the
>>>> iSCSI
>>>> >> >>>> >> target,
>>>> >> >>>> >> but only the host that is currently running the VM that is
>>>> using
>>>> >> >>>> >> the
>>>> >> >>>> >> VDI (or
>>>> >> >>>> >> VMKD) is actually using the disk.
>>>> >> >>>> >>
>>>> >> >>>> >> Live Migration should be OK because the hypervisors
>>>> communicate
>>>> >> >>>> >> with
>>>> >> >>>> >> whatever metadata they have on the SR (or datastore).
>>>> >> >>>> >>
>>>> >> >>>> >> I see what you're saying with KVM, though.
>>>> >> >>>> >>
>>>> >> >>>> >> In that case, the hosts are clustered only in CloudStack's
>>>> eyes.
>>>> >> >>>> >> CS
>>>> >> >>>> >> controls
>>>> >> >>>> >> Live Migration. You don't really need a clustered filesystem
>>>> on
>>>> >> >>>> >> the
>>>> >> >>>> >> LUN. The
>>>> >> >>>> >> LUN could be handed over raw to the VM using it.
>>>> >> >>>> >>
>>>> >> >>>> >> If there is a way for me to update the ACL list on the SAN
>>>> to have
>>>> >> >>>> >> only a
>>>> >> >>>> >> single KVM host have access to the volume, that would be
>>>> ideal.
>>>> >> >>>> >>
>>>> >> >>>> >> Also, I agree I'll need to use iscsiadm to discover and log
>>>> in to
>>>> >> >>>> >> the
>>>> >> >>>> >> iSCSI
>>>> >> >>>> >> target. I'll also need to take the resultant new device and
>>>> pass
>>>> >> >>>> >> it
>>>> >> >>>> >> into the
>>>> >> >>>> >> VM.
>>>> >> >>>> >>
>>>> >> >>>> >> Does this sound reasonable? Please call me out on anything I
>>>> seem
>>>> >> >>>> >> incorrect
>>>> >> >>>> >> about. :)
>>>> >> >>>> >>
>>>> >> >>>> >> Thanks for all the thought on this, Marcus!
>>>> >> >>>> >>
>>>> >> >>>> >>
>>>> >> >>>> >> On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen
>>>> >> >>>> >> <shadow...@gmail.com>
>>>> >> >>>> >> wrote:
>>>> >> >>>> >>>
>>>> >> >>>> >>> Perfect. You'll have a domain def (the VM) and a disk def,
>>>> >> >>>> >>> and then you attach the disk def to the VM. You may need to
>>>> >> >>>> >>> do your own StorageAdaptor and run iscsiadm commands to
>>>> >> >>>> >>> accomplish that, depending on how the libvirt iscsi works. My
>>>> >> >>>> >>> impression is that a 1:1:1 pool/lun/volume isn't how it works
>>>> >> >>>> >>> on Xen at the moment, nor is it ideal.
>>>> >> >>>> >>>
>>>> >> >>>> >>> Your plugin will handle acls as far as which host can see
>>>> which
>>>> >> >>>> >>> luns
>>>> >> >>>> >>> as
>>>> >> >>>> >>> well, I remember discussing that months ago, so that a disk
>>>> won't
>>>> >> >>>> >>> be
>>>> >> >>>> >>> connected until the hypervisor has exclusive access, so it
>>>> will
>>>> >> >>>> >>> be
>>>> >> >>>> >>> safe and
>>>> >> >>>> >>> fence the disk from rogue nodes that cloudstack loses
>>>> >> >>>> >>> connectivity
>>>> >> >>>> >>> with. It
>>>> >> >>>> >>> should revoke access to everything but the target host...
>>>> Except
>>>> >> >>>> >>> for
>>>> >> >>>> >>> during
>>>> >> >>>> >>> migration but we can discuss that later, there's a
>>>> migration prep
>>>> >> >>>> >>> process
>>>> >> >>>> >>> where the new host can be added to the acls, and the old
>>>> host can
>>>> >> >>>> >>> be
>>>> >> >>>> >>> removed
>>>> >> >>>> >>> post migration.
>>>> >> >>>> >>>
>>>> >> >>>> >>> On Sep 13, 2013 8:16 PM, "Mike Tutkowski"
>>>> >> >>>> >>> <mike.tutkow...@solidfire.com>
>>>> >> >>>> >>> wrote:
>>>> >> >>>> >>>>
>>>> >> >>>> >>>> Yeah, that would be ideal.
>>>> >> >>>> >>>>
>>>> >> >>>> >>>> So, I would still need to discover the iSCSI target, log
>>>> in to
>>>> >> >>>> >>>> it,
>>>> >> >>>> >>>> then
>>>> >> >>>> >>>> figure out what /dev/sdX was created as a result (and
>>>> leave it
>>>> >> >>>> >>>> as
>>>> >> >>>> >>>> is - do
>>>> >> >>>> >>>> not format it with any file system...clustered or not). I
>>>> would
>>>> >> >>>> >>>> pass that
>>>> >> >>>> >>>> device into the VM.
>>>> >> >>>> >>>>
>>>> >> >>>> >>>> Kind of accurate?
>>>> >> >>>> >>>>
>>>> >> >>>> >>>>
>>>> >> >>>> >>>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen
>>>> >> >>>> >>>> <shadow...@gmail.com>
>>>> >> >>>> >>>> wrote:
>>>> >> >>>> >>>>>
>>>> >> >>>> >>>>> Look in LibvirtVMDef.java (I think) for the disk
>>>> definitions.
>>>> >> >>>> >>>>> There are
>>>> >> >>>> >>>>> ones that work for block devices rather than files. You
>>>> can
>>>> >> >>>> >>>>> piggy
>>>> >> >>>> >>>>> back off
>>>> >> >>>> >>>>> of the existing disk definitions and attach it to the vm
>>>> as a
>>>> >> >>>> >>>>> block device.
>>>> >> >>>> >>>>> The definition is an XML string per libvirt XML format.
>>>> You may
>>>> >> >>>> >>>>> want to use
>>>> >> >>>> >>>>> an alternate path to the disk rather than just /dev/sdx
>>>> like I
>>>> >> >>>> >>>>> mentioned,
>>>> >> >>>> >>>>> there are by-id paths to the block devices, as well as
>>>> other
>>>> >> >>>> >>>>> ones
>>>> >> >>>> >>>>> that will
>>>> >> >>>> >>>>> be consistent and easier for management, not sure how
>>>> familiar
>>>> >> >>>> >>>>> you
>>>> >> >>>> >>>>> are with
>>>> >> >>>> >>>>> device naming on Linux.
>>>> >> >>>> >>>>>
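
To make the block-device attach concrete, here is a hedged, standalone example
using the libvirt Java bindings; the VM name, portal, IQN, and by-path device
are all made up, and inside the agent this XML would normally be built through
LibvirtVMDef's disk definitions rather than by hand:

    // Example: attach a raw iSCSI-backed block device to a running VM via the
    // libvirt Java bindings. All names and paths below are placeholders.
    import org.libvirt.Connect;
    import org.libvirt.Domain;

    public class AttachBlockDiskExample {
        public static void main(String[] args) throws Exception {
            Connect conn = new Connect("qemu:///system");
            Domain dom = conn.domainLookupByName("i-2-10-VM");   // hypothetical VM

            // A stable by-path device name is used instead of /dev/sdX.
            String diskXml =
                "<disk type='block' device='disk'>" +
                "  <driver name='qemu' type='raw' cache='none'/>" +
                "  <source dev='/dev/disk/by-path/ip-192.168.1.10:3260-iscsi-iqn.2013-09.com.example:vol1-lun-0'/>" +
                "  <target dev='vdb' bus='virtio'/>" +
                "</disk>";

            dom.attachDevice(diskXml);
            conn.close();
        }
    }
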
>>>> >> >>>> >>>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
>>>> >> >>>> >>>>> <shadow...@gmail.com>
>>>> >> >>>> >>>>> wrote:
>>>> >> >>>> >>>>>>
>>>> >> >>>> >>>>>> No, as that would rely on virtualized network/iscsi
>>>> initiator
>>>> >> >>>> >>>>>> inside
>>>> >> >>>> >>>>>> the vm, which also sucks. I mean attach /dev/sdx (your
>>>> lun on
>>>> >> >>>> >>>>>> hypervisor) as
>>>> >> >>>> >>>>>> a disk to the VM, rather than attaching some image file
>>>> that
>>>> >> >>>> >>>>>> resides on a
>>>> >> >>>> >>>>>> filesystem, mounted on the host, living on a target.
>>>> >> >>>> >>>>>>
>>>> >> >>>> >>>>>> Actually, if you plan on the storage supporting live
>>>> migration
>>>> >> >>>> >>>>>> I
>>>> >> >>>> >>>>>> think
>>>> >> >>>> >>>>>> this is the only way. You can't put a filesystem on it
>>>> and
>>>> >> >>>> >>>>>> mount
>>>> >> >>>> >>>>>> it in two
>>>> >> >>>> >>>>>> places to facilitate migration unless its a clustered
>>>> >> >>>> >>>>>> filesystem,
>>>> >> >>>> >>>>>> in which
>>>> >> >>>> >>>>>> case you're back to shared mount point.
>>>> >> >>>> >>>>>>
>>>> >> >>>> >>>>>> As far as I'm aware, the xenserver SR style is basically
>>>> LVM
>>>> >> >>>> >>>>>> with
>>>> >> >>>> >>>>>> a xen
>>>> >> >>>> >>>>>> specific cluster management, a custom CLVM. They don't
>>>> use a
>>>> >> >>>> >>>>>> filesystem
>>>> >> >>>> >>>>>> either.
>>>> >> >>>> >>>>>>
>>>> >> >>>> >>>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
>>>> >> >>>> >>>>>> <mike.tutkow...@solidfire.com> wrote:
>>>> >> >>>> >>>>>>>
>>>> >> >>>> >>>>>>> When you say, "wire up the lun directly to the vm," do
>>>> you
>>>> >> >>>> >>>>>>> mean
>>>> >> >>>> >>>>>>> circumventing the hypervisor? I didn't think we could
>>>> do that
>>>> >> >>>> >>>>>>> in
>>>> >> >>>> >>>>>>> CS.
>>>> >> >>>> >>>>>>> OpenStack, on the other hand, always circumvents the
>>>> >> >>>> >>>>>>> hypervisor,
>>>> >> >>>> >>>>>>> as far as I
>>>> >> >>>> >>>>>>> know.
>>>> >> >>>> >>>>>>>
>>>> >> >>>> >>>>>>>
>>>> >> >>>> >>>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen
>>>> >> >>>> >>>>>>> <shadow...@gmail.com>
>>>> >> >>>> >>>>>>> wrote:
>>>> >> >>>> >>>>>>>>
>>>> >> >>>> >>>>>>>> Better to wire up the lun directly to the vm unless
>>>> there is
>>>> >> >>>> >>>>>>>> a
>>>> >> >>>> >>>>>>>> good
>>>> >> >>>> >>>>>>>> reason not to.
>>>> >> >>>> >>>>>>>>
>>>> >> >>>> >>>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen"
>>>> >> >>>> >>>>>>>> <shadow...@gmail.com>
>>>> >> >>>> >>>>>>>> wrote:
>>>> >> >>>> >>>>>>>>>
>>>> >> >>>> >>>>>>>>> You could do that, but as mentioned I think its a
>>>> mistake
>>>> >> >>>> >>>>>>>>> to
>>>> >> >>>> >>>>>>>>> go to
>>>> >> >>>> >>>>>>>>> the trouble of creating a 1:1 mapping of CS volumes
>>>> to luns
>>>> >> >>>> >>>>>>>>> and then putting
>>>> >> >>>> >>>>>>>>> a filesystem on it, mounting it, and then putting a
>>>> QCOW2
>>>> >> >>>> >>>>>>>>> or
>>>> >> >>>> >>>>>>>>> even RAW disk
>>>> >> >>>> >>>>>>>>> image on that filesystem. You'll lose a lot of iops
>>>> along
>>>> >> >>>> >>>>>>>>> the
>>>> >> >>>> >>>>>>>>> way, and have
>>>> >> >>>> >>>>>>>>> more overhead with the filesystem and its journaling,
>>>> etc.
>>>> >> >>>> >>>>>>>>>
>>>> >> >>>> >>>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
>>>> >> >>>> >>>>>>>>> <mike.tutkow...@solidfire.com> wrote:
>>>> >> >>>> >>>>>>>>>>
>>>> >> >>>> >>>>>>>>>> Ah, OK, I didn't know that was such new ground in
>>>> KVM with
>>>> >> >>>> >>>>>>>>>> CS.
>>>> >> >>>> >>>>>>>>>>
>>>> >> >>>> >>>>>>>>>> So, the way people use our SAN with KVM and CS today
>>>> is by
>>>> >> >>>> >>>>>>>>>> selecting SharedMountPoint and specifying the
>>>> location of
>>>> >> >>>> >>>>>>>>>> the
>>>> >> >>>> >>>>>>>>>> share.
>>>> >> >>>> >>>>>>>>>>
>>>> >> >>>> >>>>>>>>>> They can set up their share using Open iSCSI by
>>>> >> >>>> >>>>>>>>>> discovering
>>>> >> >>>> >>>>>>>>>> their
>>>> >> >>>> >>>>>>>>>> iSCSI target, logging in to it, then mounting it
>>>> somewhere
>>>> >> >>>> >>>>>>>>>> on
>>>> >> >>>> >>>>>>>>>> their file
>>>> >> >>>> >>>>>>>>>> system.
>>>> >> >>>> >>>>>>>>>>
>>>> >> >>>> >>>>>>>>>> Would it make sense for me to just do that discovery,
>>>> >> >>>> >>>>>>>>>> logging
>>>> >> >>>> >>>>>>>>>> in,
>>>> >> >>>> >>>>>>>>>> and mounting behind the scenes for them and letting
>>>> the
>>>> >> >>>> >>>>>>>>>> current code manage
>>>> >> >>>> >>>>>>>>>> the rest as it currently does?
>>>> >> >>>> >>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>
>>>> >> >>>> >>>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
>>>> >> >>>> >>>>>>>>>> <shadow...@gmail.com> wrote:
>>>> >> >>>> >>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>> Oh, hypervisor snapshots are a bit different. I
>>>> need to
>>>> >> >>>> >>>>>>>>>>> catch up
>>>> >> >>>> >>>>>>>>>>> on the work done in KVM, but this is basically just
>>>> disk
>>>> >> >>>> >>>>>>>>>>> snapshots + memory
>>>> >> >>>> >>>>>>>>>>> dump. I still think disk snapshots would preferably
>>>> be
>>>> >> >>>> >>>>>>>>>>> handled by the SAN,
>>>> >> >>>> >>>>>>>>>>> and then memory dumps can go to secondary storage or
>>>> >> >>>> >>>>>>>>>>> something else. This is
>>>> >> >>>> >>>>>>>>>>> relatively new ground with CS and KVM, so we will
>>>> want to
>>>> >> >>>> >>>>>>>>>>> see how others are
>>>> >> >>>> >>>>>>>>>>> planning theirs.
>>>> >> >>>> >>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen"
>>>> >> >>>> >>>>>>>>>>> <shadow...@gmail.com>
>>>> >> >>>> >>>>>>>>>>> wrote:
>>>> >> >>>> >>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>> Let me back up and say I don't think you'd use a
>>>> vdi
>>>> >> >>>> >>>>>>>>>>>> style
>>>> >> >>>> >>>>>>>>>>>> on an
>>>> >> >>>> >>>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW
>>>> >> >>>> >>>>>>>>>>>> format.
>>>> >> >>>> >>>>>>>>>>>> Otherwise you're
>>>> >> >>>> >>>>>>>>>>>> putting a filesystem on your lun, mounting it,
>>>> creating
>>>> >> >>>> >>>>>>>>>>>> a
>>>> >> >>>> >>>>>>>>>>>> QCOW2 disk image,
>>>> >> >>>> >>>>>>>>>>>> and that seems unnecessary and a performance
>>>> killer.
>>>> >> >>>> >>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>> So probably attaching the raw iscsi lun as a disk
>>>> to the
>>>> >> >>>> >>>>>>>>>>>> VM, and
>>>> >> >>>> >>>>>>>>>>>> handling snapshots on the SAN side via the storage
>>>> >> >>>> >>>>>>>>>>>> plugin
>>>> >> >>>> >>>>>>>>>>>> is best. My
>>>> >> >>>> >>>>>>>>>>>> impression from the storage plugin refactor was
>>>> that
>>>> >> >>>> >>>>>>>>>>>> there
>>>> >> >>>> >>>>>>>>>>>> was a snapshot
>>>> >> >>>> >>>>>>>>>>>> service that would allow the SAN to handle
>>>> snapshots.
>>>> >> >>>> >>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen"
>>>> >> >>>> >>>>>>>>>>>> <shadow...@gmail.com>
>>>> >> >>>> >>>>>>>>>>>> wrote:
>>>> >> >>>> >>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>> Ideally volume snapshots can be handled by the
>>>> SAN back
>>>> >> >>>> >>>>>>>>>>>>> end, if
>>>> >> >>>> >>>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server
>>>> could
>>>> >> >>>> >>>>>>>>>>>>> call
>>>> >> >>>> >>>>>>>>>>>>> your plugin for
>>>> >> >>>> >>>>>>>>>>>>> volume snapshot and it would be hypervisor
>>>> agnostic. As
>>>> >> >>>> >>>>>>>>>>>>> far as space, that
>>>> >> >>>> >>>>>>>>>>>>> would depend on how your SAN handles it. With
>>>> ours, we
>>>> >> >>>> >>>>>>>>>>>>> carve out luns from a
>>>> >> >>>> >>>>>>>>>>>>> pool, and the snapshot space comes from the pool and is
>>>> and is
>>>> >> >>>> >>>>>>>>>>>>> independent of the
>>>> >> >>>> >>>>>>>>>>>>> LUN size the host sees.
>>>> >> >>>> >>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
>>>> >> >>>> >>>>>>>>>>>>> <mike.tutkow...@solidfire.com> wrote:
>>>> >> >>>> >>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>> Hey Marcus,
>>>> >> >>>> >>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>> I wonder if the iSCSI storage pool type for
>>>> libvirt
>>>> >> >>>> >>>>>>>>>>>>>> won't
>>>> >> >>>> >>>>>>>>>>>>>> work
>>>> >> >>>> >>>>>>>>>>>>>> when you take into consideration hypervisor
>>>> snapshots?
>>>> >> >>>> >>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>> On XenServer, when you take a hypervisor
>>>> snapshot, the
>>>> >> >>>> >>>>>>>>>>>>>> VDI for
>>>> >> >>>> >>>>>>>>>>>>>> the snapshot is placed on the same storage
>>>> repository
>>>> >> >>>> >>>>>>>>>>>>>> as
>>>> >> >>>> >>>>>>>>>>>>>> the volume is on.
>>>> >> >>>> >>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>> Same idea for VMware, I believe.
>>>> >> >>>> >>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>> So, what would happen in my case (let's say for
>>>> >> >>>> >>>>>>>>>>>>>> XenServer
>>>> >> >>>> >>>>>>>>>>>>>> and
>>>> >> >>>> >>>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
>>>> >> >>>> >>>>>>>>>>>>>> snapshots in 4.2) is I'd
>>>> >> >>>> >>>>>>>>>>>>>> make an iSCSI target that is larger than what
>>>> the user
>>>> >> >>>> >>>>>>>>>>>>>> requested for the
>>>> >> >>>> >>>>>>>>>>>>>> CloudStack volume (which is fine because our SAN
>>>> >> >>>> >>>>>>>>>>>>>> thinly
>>>> >> >>>> >>>>>>>>>>>>>> provisions volumes,
>>>> >> >>>> >>>>>>>>>>>>>> so the space is not actually used unless it
>>>> needs to
>>>> >> >>>> >>>>>>>>>>>>>> be).
>>>> >> >>>> >>>>>>>>>>>>>> The CloudStack
>>>> >> >>>> >>>>>>>>>>>>>> volume would be the only "object" on the SAN
>>>> volume
>>>> >> >>>> >>>>>>>>>>>>>> until
>>>> >> >>>> >>>>>>>>>>>>>> a hypervisor
>>>> >> >>>> >>>>>>>>>>>>>> snapshot is taken. This snapshot would also
>>>> reside on
>>>> >> >>>> >>>>>>>>>>>>>> the
>>>> >> >>>> >>>>>>>>>>>>>> SAN volume.
>>>> >> >>>> >>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>> If this is also how KVM behaves and there is no
>>>> >> >>>> >>>>>>>>>>>>>> creation
>>>> >> >>>> >>>>>>>>>>>>>> of
>>>> >> >>>> >>>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which,
>>>> even
>>>> >> >>>> >>>>>>>>>>>>>> if
>>>> >> >>>> >>>>>>>>>>>>>> there were support
>>>> >> >>>> >>>>>>>>>>>>>> for this, our SAN currently only allows one LUN
>>>> per
>>>> >> >>>> >>>>>>>>>>>>>> iSCSI
>>>> >> >>>> >>>>>>>>>>>>>> target), then I
>>>> >> >>>> >>>>>>>>>>>>>> don't see how using this model will work.
>>>> >> >>>> >>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>> Perhaps I will have to go enhance the current
>>>> way this
>>>> >> >>>> >>>>>>>>>>>>>> works
>>>> >> >>>> >>>>>>>>>>>>>> with DIR?
>>>> >> >>>> >>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>> What do you think?
>>>> >> >>>> >>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>> Thanks
>>>> >> >>>> >>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
>>>> >> >>>> >>>>>>>>>>>>>> <mike.tutkow...@solidfire.com> wrote:
>>>> >> >>>> >>>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>>> That appears to be the way it's used for iSCSI
>>>> access
>>>> >> >>>> >>>>>>>>>>>>>>> today.
>>>> >> >>>> >>>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>>> I suppose I could go that route, too, but I
>>>> might as
>>>> >> >>>> >>>>>>>>>>>>>>> well
>>>> >> >>>> >>>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
>>>> >> >>>> >>>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
>>>> >> >>>> >>>>>>>>>>>>>>> <shadow...@gmail.com> wrote:
>>>> >> >>>> >>>>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> To your question about SharedMountPoint, I
>>>> believe
>>>> >> >>>> >>>>>>>>>>>>>>>> it
>>>> >> >>>> >>>>>>>>>>>>>>>> just
>>>> >> >>>> >>>>>>>>>>>>>>>> acts like a
>>>> >> >>>> >>>>>>>>>>>>>>>> 'DIR' storage type or something similar to
>>>> that. The
>>>> >> >>>> >>>>>>>>>>>>>>>> end-user
>>>> >> >>>> >>>>>>>>>>>>>>>> is
>>>> >> >>>> >>>>>>>>>>>>>>>> responsible for mounting a file system that
>>>> all KVM
>>>> >> >>>> >>>>>>>>>>>>>>>> hosts can
>>>> >> >>>> >>>>>>>>>>>>>>>> access,
>>>> >> >>>> >>>>>>>>>>>>>>>> and CloudStack is oblivious to what is
>>>> providing the
>>>> >> >>>> >>>>>>>>>>>>>>>> storage.
>>>> >> >>>> >>>>>>>>>>>>>>>> It could
>>>> >> >>>> >>>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered
>>>> >> >>>> >>>>>>>>>>>>>>>> filesystem,
>>>> >> >>>> >>>>>>>>>>>>>>>> cloudstack just
>>>> >> >>>> >>>>>>>>>>>>>>>> knows that the provided directory path has VM
>>>> >> >>>> >>>>>>>>>>>>>>>> images.
>>>> >> >>>> >>>>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus
>>>> Sorensen
>>>> >> >>>> >>>>>>>>>>>>>>>> <shadow...@gmail.com> wrote:
>>>> >> >>>> >>>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all
>>>> at the
>>>> >> >>>> >>>>>>>>>>>>>>>> > same
>>>> >> >>>> >>>>>>>>>>>>>>>> > time.
>>>> >> >>>> >>>>>>>>>>>>>>>> > Multiples, in fact.
>>>> >> >>>> >>>>>>>>>>>>>>>> >
>>>> >> >>>> >>>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike
>>>> Tutkowski
>>>> >> >>>> >>>>>>>>>>>>>>>> > <mike.tutkow...@solidfire.com> wrote:
>>>> >> >>>> >>>>>>>>>>>>>>>> >> Looks like you can have multiple storage
>>>> pools:
>>>> >> >>>> >>>>>>>>>>>>>>>> >>
>>>> >> >>>> >>>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>>>> >> >>>> >>>>>>>>>>>>>>>> >> Name                 State      Autostart
>>>> >> >>>> >>>>>>>>>>>>>>>> >> -----------------------------------------
>>>> >> >>>> >>>>>>>>>>>>>>>> >> default              active     yes
>>>> >> >>>> >>>>>>>>>>>>>>>> >> iSCSI                active     no
>>>> >> >>>> >>>>>>>>>>>>>>>> >>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>
>>>> >> >>>> >>>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike
>>>> Tutkowski
>>>> >> >>>> >>>>>>>>>>>>>>>> >> <mike.tutkow...@solidfire.com> wrote:
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> I see what you're saying now.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage
>>>> pool
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> based on
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> an iSCSI target.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only
>>>> have one
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> LUN, so
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> there would only
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in
>>>> the
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> (libvirt)
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> storage pool.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> iSCSI
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> targets/LUNs on the
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> libvirt
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> does
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> not support
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to
>>>> see
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> if
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> libvirt
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> supports
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you
>>>> mentioned,
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> since
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> each one of its
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> targets/LUNs).
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike
>>>> Tutkowski
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> <mike.tutkow...@solidfire.com> wrote:
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>     public enum poolType {
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> RBD("rbd");
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         String _poolType;
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>             _poolType = poolType;
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         }
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         @Override
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         public String toString() {
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>             return _poolType;
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         }
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>     }
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> currently
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> being
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> used, but I'm
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> understanding more what you were getting
>>>> at.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> someone
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> selects the
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with
>>>> iSCSI,
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> is
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> that
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> the "netfs" option
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> above or is that just for NFS?
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> Thanks!
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> Sorensen
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> <shadow...@gmail.com>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> wrote:
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> Take a look at this:
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>> http://libvirt.org/storage.html#StorageBackendISCSI
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the
>>>> iSCSI
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> server, and
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> cannot be
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> believe
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> your
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> plugin will take
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of
>>>> logging
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> in
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> and
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> hooking it up to
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that
>>>> work
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> in
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> the Xen
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> stuff).
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> provides
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> a 1:1
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> mapping, or if
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi
>>>> device
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> as
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> a
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> pool. You may need
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit
>>>> more
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> about
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> this. Let us know.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to
>>>> write your
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> own
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> storage adaptor
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> rather than changing
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> LibvirtStorageAdaptor.java.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> We
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> can cross that
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> bridge when we get there.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see
>>>> the
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> java
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> bindings doc.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> Normally,
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> you'll see a
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> connection object be made, then calls
>>>> made to
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> that
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> 'conn' object. You
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to
>>>> see
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> how
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> that
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> is done for
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some
>>>> test
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> java
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> code
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> to see if you
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> can interface with libvirt and register
>>>> iscsi
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> storage
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> pools before you
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> get started.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
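
As a starting point for that kind of test (the portal, IQN, and pool name
below are placeholders), something like this defines a transient libvirt iSCSI
pool from XML and lists the LUNs it exposes as volumes:

    // Small test: register a libvirt iSCSI storage pool and list its volumes.
    // Portal address, IQN, and pool name are placeholders.
    import org.libvirt.Connect;
    import org.libvirt.StoragePool;

    public class IscsiPoolTest {
        public static void main(String[] args) throws Exception {
            Connect conn = new Connect("qemu:///system");

            String poolXml =
                "<pool type='iscsi'>" +
                "  <name>sf-vol-1</name>" +
                "  <source>" +
                "    <host name='192.168.1.10'/>" +
                "    <device path='iqn.2013-09.com.example:vol1'/>" +
                "  </source>" +
                "  <target><path>/dev/disk/by-path</path></target>" +
                "</pool>";

            // Transient pool: libvirt logs in to the target and exposes each
            // LUN as a storage volume of the pool.
            StoragePool pool = conn.storagePoolCreateXML(poolXml, 0);
            pool.refresh(0);
            for (String vol : pool.listVolumes()) {
                System.out.println("volume: " + vol);
            }
            pool.destroy();   // tear the transient pool down (logs out)
            conn.close();
        }
    }

If each SolidFire volume really is its own target with a single LUN, each such
pool would contain exactly one volume, which speaks to the 1:1 mapping question
raised above.
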
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> Tutkowski
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> <mike.tutkow...@solidfire.com> wrote:
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate
>>>> libvirt
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > more,
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > but
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > you figure it
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > supports
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > targets,
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > right?
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > Tutkowski
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > <mike.tutkow...@solidfire.com> wrote:
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> I am currently looking through some
>>>> of the
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> classes
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> you pointed out
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> last
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> week or so.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM,
>>>> Marcus
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> Sorensen
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> <shadow...@gmail.com>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> wrote:
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need
>>>> the
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> iscsi
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> initiator utilities
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> packages
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> for
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> any distro. Then
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> you'd call
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> initiator
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> login.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> See the info I
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> sent
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> previously about
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> LibvirtStorageAdaptor.java
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> and
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> storage type
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike
>>>> Tutkowski"
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> <mike.tutkow...@solidfire.com>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> wrote:
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Hi,
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> release
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> I
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the
>>>> storage
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> framework
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> at the necessary
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> times
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create
>>>> and
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> delete
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes on the
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can
>>>> establish a
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> 1:1
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> mapping
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> between a
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> CloudStack
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for
>>>> QoS.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always
>>>> expected
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> admin
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> to create large
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those
>>>> volumes
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> would
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> likely house many
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> root and
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS
>>>> friendly).
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme
>>>> work, I
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> needed to
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> modify logic in
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so
>>>> they
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> could
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen
>>>> with
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> KVM.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> might
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> work on
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> still
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know
>>>> how I
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> will need
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> to interact with
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I
>>>> have to
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> expect
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use
>>>> it for
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> this to
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> work?
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> --
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer,
>>>> SolidFire
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Inc.
>>>> >> >>>> >>>>>>
>>>>
>>> ...
