I see where you're coming from.

John Burwell and I took a different approach for this kind of storage.

If you want to add capacity and/or IOPS to primary storage that's based on
my plug-in, you invoke the updateStoragePool API command and pass in the
new capacity and/or IOPS.
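
For example, the call would look roughly like this (a hedged sketch: the
capacitybytes and capacityiops parameter names are from the 4.2 API as I
recall them, so double-check against your build; auth/signing is omitted):

    // Hedged sketch: grow a plug-in-backed pool's capacity and IOPS via
    // the updateStoragePool API (parameter names per the 4.2 API as I
    // recall them; authentication/signing omitted for brevity).
    class UpdatePoolExample {
        static String updatePoolUrl(String poolUuid, long newBytes, long newIops) {
            return "http://mgmt-server:8080/client/api"
                + "?command=updateStoragePool"
                + "&id=" + poolUuid              // UUID of the primary storage
                + "&capacitybytes=" + newBytes   // bytes CS may use on the SAN
                + "&capacityiops=" + newIops;    // IOPS CS may use on the SAN
        }
    }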

Your existing volumes - the ones in use by the hypervisor - are another
story, though.

I have a JIRA ticket to look into how to modify, via CS, the capacity and
IOPS of a volume that's based on the SolidFire SAN.

Changing IOPS is no problem. Changing the size of the volume is a bit more
involved.


On Tue, Sep 17, 2013 at 11:03 PM, Marcus Sorensen <shadow...@gmail.com> wrote:

> OK.  Most other storage types interrogate the storage for the capacity,
> whether directly or through the hypervisor. This makes it dynamic (the
> user could add capacity and CloudStack notices), and provides accurate
> accounting for things like thin provisioning. I would be surprised if
> Edison didn't allow for this in the new storage framework.
> On Sep 17, 2013 10:34 PM, "Mike Tutkowski" <mike.tutkow...@solidfire.com>
> wrote:
>
> > This should answer your question, I believe:
> >
> > * When you add primary storage that is based on the SolidFire plug-in,
> > you specify info like host, port, number of bytes from the SAN that CS
> > can use, number of IOPS from the SAN that CS can use, among other info.
> >
> > * When a volume is attached for the first time and the storage framework
> > asks my plug-in to create a volume (LUN) on the SAN, my plug-in
> > increments the used_bytes field of the cloud.storage_pool table. If the
> > used_bytes would go above the capacity_bytes, then the allocator would
> > not have selected my plug-in to back the storage. Additionally, if the
> > required IOPS would bring the SolidFire SAN above the number of IOPS
> > that were dedicated to CS, the allocator would not have selected my
> > plug-in to back the storage.
> >
> > * When a CS volume that uses my plug-in is deleted, the storage
> > framework asks my plug-in to delete the volume (LUN) on the SAN. My
> > plug-in decrements the used_bytes field of the cloud.storage_pool table.
> >
> > So, it just boils down to the fact that we don't require the accounting
> > of space and IOPS to take place on the hypervisor side.
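> >
> > In pseudo-Java, the allocator-side check amounts to this (a sketch; the
> > method name is hypothetical, the fields are the cloud.storage_pool ones
> > above):
> >
> >     class AllocatorCheckSketch {
> >         // returns true if the pool can absorb the new volume's bytes
> >         // and IOPS without exceeding what was dedicated to CS
> >         static boolean canHost(long usedBytes, long capacityBytes,
> >                 long newBytes, long usedIops, long capacityIops,
> >                 long newIops) {
> >             return usedBytes + newBytes <= capacityBytes
> >                 && usedIops + newIops <= capacityIops;
> >         }
> >     }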
> >
> >
> > On Tue, Sep 17, 2013 at 10:13 PM, Marcus Sorensen <shadow...@gmail.com>
> > wrote:
> >
> > > Ok, on most storage pools it shows how many GB free/used when listing
> > > the pool, both via API and in the UI. I'm guessing those are empty then
> > > for the SolidFire storage, but it seems like the user should have to
> > > define some sort of pool that the LUNs get carved out of, and you
> > > should be able to get the stats for that, right? Or is a SolidFire
> > > appliance only one pool per appliance? This isn't about billing, but
> > > just so CloudStack itself knows whether or not there is space left on
> > > the storage device, so CloudStack can go on allocating from a
> > > different primary storage as this one fills up. There are also
> > > notifications and things. It seems like there should be a call you can
> > > handle for this; maybe Edison knows.
> > >
> > > On Tue, Sep 17, 2013 at 8:46 PM, Marcus Sorensen <shadow...@gmail.com>
> > > wrote:
> > > > You respond to more than attach and detach, right? Don't you create
> > > > LUNs as well? Or are you just referring to the hypervisor stuff?
> > > >
> > > > On Sep 17, 2013 7:51 PM, "Mike Tutkowski" <
> > mike.tutkow...@solidfire.com>
> > > > wrote:
> > > >>
> > > >> Hi Marcus,
> > > >>
> > > >> I never need to respond to a CreateStoragePool call for either
> > > >> XenServer or VMware.
> > > >>
> > > >> What happens is I respond only to the Attach- and Detach-volume
> > > >> commands.
> > > >>
> > > >> Let's say an attach comes in:
> > > >>
> > > >> In this case, I check to see if the storage is "managed." Talking
> > > >> XenServer here: if it is, I log in to the LUN that is the disk we
> > > >> want to attach. Then, if this is the first time attaching this disk,
> > > >> I create an SR and a VDI within the SR. If it is not the first time
> > > >> attaching this disk, the LUN already has the SR and VDI on it.
> > > >>
> > > >> Once this is done, I let the normal "attach" logic run because this
> > > >> logic expects an SR and a VDI, and now it has them.
> > > >>
> > > >> It's the same thing for VMware: just substitute datastore for SR and
> > > >> VMDK for VDI.
> > > >>
> > > >> Does that make sense?
> > > >>
> > > >> Thanks!
> > > >>
> > > >>
> > > >> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen
> > > >> <shadow...@gmail.com> wrote:
> > > >>
> > > >> > What do you do with Xen? I imagine the user enters the SAN details
> > > >> > when registering the pool? And the pool details are basically just
> > > >> > instructions on how to log into a target, correct?
> > > >> >
> > > >> > You can choose to log a KVM host in to the target during
> > > >> > createStoragePool and save the pool in a map, or just save the pool
> > > >> > info in a map for future reference by uuid, for when you do need to
> > > >> > log in. The createStoragePool then just becomes a way to save the
> > > >> > pool info to the agent. Personally, I'd log in on the pool create
> > > >> > and look/scan for specific LUNs when they're needed, but I haven't
> > > >> > thought it through thoroughly. I just say that mainly because login
> > > >> > only happens once, the first time the pool is used, and every other
> > > >> > storage command is about discovering new LUNs or maybe
> > > >> > deleting/disconnecting LUNs no longer needed. On the other hand,
> > > >> > you could do all of the above: log in on pool create, then also
> > > >> > check if you're logged in on other commands and log in if you've
> > > >> > lost connection.
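> > > >> >
> > > >> > As a rough sketch of the "save the pool info in a map" idea (class
> > > >> > and method names here are hypothetical, not the actual framework
> > > >> > API):
> > > >> >
> > > >> >     import java.util.Map;
> > > >> >     import java.util.concurrent.ConcurrentHashMap;
> > > >> >
> > > >> >     // Hedged sketch: cache pool info by uuid so that repeated
> > > >> >     // createStoragePool calls are cheap, and log in lazily later.
> > > >> >     class PoolInfo {
> > > >> >         final String host; final int port; final String path;
> > > >> >         PoolInfo(String host, int port, String path) {
> > > >> >             this.host = host; this.port = port; this.path = path;
> > > >> >         }
> > > >> >     }
> > > >> >
> > > >> >     class PoolCache {
> > > >> >         private final Map<String, PoolInfo> pools =
> > > >> >             new ConcurrentHashMap<String, PoolInfo>();
> > > >> >
> > > >> >         // createStoragePool becomes "remember this pool for later"
> > > >> >         PoolInfo register(String uuid, String host, int port, String path) {
> > > >> >             PoolInfo existing = pools.get(uuid);
> > > >> >             if (existing != null) {
> > > >> >                 return existing; // called often; treat as a no-op
> > > >> >             }
> > > >> >             PoolInfo info = new PoolInfo(host, port, path);
> > > >> >             pools.put(uuid, info);
> > > >> >             return info;
> > > >> >         }
> > > >> >
> > > >> >         // later commands look the pool up by uuid before logging in
> > > >> >         PoolInfo get(String uuid) { return pools.get(uuid); }
> > > >> >     }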
> > > >> >
> > > >> > With Xen, what does your registered pool show in the UI for
> > > >> > avail/used capacity, and how does it get that info? I assume there
> > > >> > is some sort of disk pool that the LUNs are carved from, and that
> > > >> > your plugin is called to talk to the SAN and expose to the user how
> > > >> > much of that pool has been allocated. Knowing how you already solve
> > > >> > these problems with Xen will help figure out what to do with KVM.
> > > >> >
> > > >> > If this is the case, I think the plugin can continue to handle it
> > > >> > rather than getting details from the agent. I'm not sure if that
> > > >> > means nulls are OK for these on the agent side or what; I need to
> > > >> > look at the storage plugin arch more closely.
> > > >> > On Sep 17, 2013 7:08 PM, "Mike Tutkowski"
> > > >> > <mike.tutkow...@solidfire.com> wrote:
> > > >> >
> > > >> > > Hey Marcus,
> > > >> > >
> > > >> > > I'm reviewing your e-mails as I implement the necessary methods
> in
> > > new
> > > >> > > classes.
> > > >> > >
> > > >> > > "So, referencing StorageAdaptor.java, createStoragePool accepts
> > all
> > > of
> > > >> > > the pool data (host, port, name, path) which would be used to
> log
> > > the
> > > >> > > host into the initiator."
> > > >> > >
> > > >> > > Can you tell me: in my case, since a storage pool (primary
> > > >> > > storage) is actually the SAN, I wouldn't really be logging in to
> > > >> > > anything at this point, correct?
> > > >> > >
> > > >> > > Also, what kind of capacity, available, and used bytes make
> sense
> > to
> > > >> > report
> > > >> > > for KVMStoragePool (since KVMStoragePool represents the SAN in
> my
> > > case
> > > >> > and
> > > >> > > not an individual LUN)?
> > > >> > >
> > > >> > > Thanks!
> > > >> > >
> > > >> > >
> > > >> > > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen
> > > >> > > <shadow...@gmail.com> wrote:
> > > >> > >
> > > >> > > > Ok, KVM will be close to that, of course, because only the
> > > >> > > > hypervisor classes differ; the rest is all mgmt server. Creating
> > > >> > > > a volume is just a db entry until it's deployed for the first
> > > >> > > > time. AttachVolumeCommand on the agent side
> > > >> > > > (LibvirtStorageAdaptor.java is analogous to
> > > >> > > > CitrixResourceBase.java) will do the iscsiadm commands (via a
> > > >> > > > KVM StorageAdaptor) to log the host in to the target, and then
> > > >> > > > you have a block device. Maybe libvirt will do that for you, but
> > > >> > > > my quick read made it sound like the iscsi libvirt pool type is
> > > >> > > > actually a pool, not a LUN or volume, so you'll need to figure
> > > >> > > > out if that works or if you'll have to use iscsiadm commands.
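> > > >> > > >
> > > >> > > > For reference, the underlying iscsiadm calls would look roughly
> > > >> > > > like this if driven from the agent (a hedged sketch; the portal
> > > >> > > > and IQN values are placeholders):
> > > >> > > >
> > > >> > > >     import java.io.IOException;
> > > >> > > >     import java.util.Arrays;
> > > >> > > >
> > > >> > > >     // Hedged sketch: discover and log in to an iSCSI target.
> > > >> > > >     class IscsiLoginSketch {
> > > >> > > >         static void run(String... cmd) throws IOException, InterruptedException {
> > > >> > > >             Process p = new ProcessBuilder(cmd).inheritIO().start();
> > > >> > > >             if (p.waitFor() != 0) {
> > > >> > > >                 throw new IOException("failed: " + Arrays.toString(cmd));
> > > >> > > >             }
> > > >> > > >         }
> > > >> > > >
> > > >> > > >         public static void main(String[] args) throws Exception {
> > > >> > > >             String portal = "192.168.0.100:3260";             // placeholder SAN host:port
> > > >> > > >             String iqn = "iqn.2010-01.com.solidfire:example"; // placeholder IQN
> > > >> > > >             run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);
> > > >> > > >             run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");
> > > >> > > >             // after login the LUN appears as a block device
> > > >> > > >             // (e.g. under /dev/disk/by-path/) to hand to the VM
> > > >> > > >         }
> > > >> > > >     }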
> > > >> > > >
> > > >> > > > If you're NOT going to use LibvirtStorageAdaptor (because
> > > >> > > > libvirt doesn't really manage your pool the way you want),
> > > >> > > > you're going to have to create a version of the KVMStoragePool
> > > >> > > > class and a StorageAdaptor class (see LibvirtStoragePool.java
> > > >> > > > and LibvirtStorageAdaptor.java), implementing all of the
> > > >> > > > methods. Then, in KVMStorageManager.java there's a
> > > >> > > > "_storageMapper" map. This is used to select the correct
> > > >> > > > adaptor; you can see in this file that every call first pulls
> > > >> > > > the correct adaptor out of this map via getStorageAdaptor.
> > > >> > > > There's a comment in this file that says "add other storage
> > > >> > > > adaptors here", where it puts to this map; this is where you'd
> > > >> > > > register your adaptor.
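> > > >> > > >
> > > >> > > > Registration would presumably look something like this (a
> > > >> > > > self-contained sketch; the map key and adaptor classes are
> > > >> > > > hypothetical):
> > > >> > > >
> > > >> > > >     import java.util.HashMap;
> > > >> > > >     import java.util.Map;
> > > >> > > >
> > > >> > > >     // Hedged sketch of the _storageMapper idea: map a pool type
> > > >> > > >     // to the adaptor that handles it, and look it up per call.
> > > >> > > >     interface StorageAdaptor { /* createStoragePool, getPhysicalDisk, ... */ }
> > > >> > > >
> > > >> > > >     class StorageManagerSketch {
> > > >> > > >         private final Map<String, StorageAdaptor> storageMapper =
> > > >> > > >             new HashMap<String, StorageAdaptor>();
> > > >> > > >
> > > >> > > >         StorageManagerSketch() {
> > > >> > > >             // "add other storage adaptors here"
> > > >> > > >             storageMapper.put("libvirt", new StorageAdaptor() {});
> > > >> > > >             storageMapper.put("solidfire", new StorageAdaptor() {});
> > > >> > > >         }
> > > >> > > >
> > > >> > > >         StorageAdaptor getStorageAdaptor(String poolType) {
> > > >> > > >             return storageMapper.get(poolType);
> > > >> > > >         }
> > > >> > > >     }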
> > > >> > > >
> > > >> > > > So, referencing StorageAdaptor.java, createStoragePool accepts
> > > >> > > > all of the pool data (host, port, name, path) which would be
> > > >> > > > used to log the host in to the target. I *believe* the method
> > > >> > > > getPhysicalDisk will need to do the work of attaching the LUN.
> > > >> > > > AttachVolumeCommand calls this and then creates the XML diskdef
> > > >> > > > and attaches it to the VM. Now, one thing you need to know is
> > > >> > > > that createStoragePool is called often, sometimes just to make
> > > >> > > > sure the pool is there. You may want to create a map in your
> > > >> > > > adaptor class and keep track of pools that have been created;
> > > >> > > > LibvirtStorageAdaptor doesn't have to do this because it asks
> > > >> > > > libvirt about which storage pools exist. There are also calls
> > > >> > > > to refresh the pool stats, and all of the other calls can be
> > > >> > > > seen in the StorageAdaptor as well. There's a
> > > >> > > > createPhysicalDisk, clone, etc., but it's probably a hold-over
> > > >> > > > from 4.1, as I have the vague idea that volumes are created on
> > > >> > > > the mgmt server via the plugin now, so whatever doesn't apply
> > > >> > > > can just be stubbed out (or optionally extended/reimplemented
> > > >> > > > here, if you don't mind the hosts talking to the SAN API).
> > > >> > > >
> > > >> > > > There is a difference between attaching new volumes and
> > > >> > > > launching a VM with existing volumes. In the latter case, the
> > > >> > > > VM definition that was passed to the KVM agent includes the
> > > >> > > > disks (StartCommand).
> > > >> > > >
> > > >> > > > I'd be interested in how your pool is defined for Xen, I
> imagine
> > > it
> > > >> > > > would need to be kept the same. Is it just a definition to the
> > SAN
> > > >> > > > (ip address or some such, port number) and perhaps a volume
> pool
> > > >> > > > name?
> > > >> > > >
> > > >> > > > > If there is a way for me to update the ACL list on the SAN
> to
> > > have
> > > >> > > only a
> > > >> > > > > single KVM host have access to the volume, that would be
> > ideal.
> > > >> > > >
> > > >> > > > That depends on your SAN API. I was under the impression that
> > > >> > > > the storage plugin framework allowed for ACLs, or for you to do
> > > >> > > > whatever you want for create/attach/delete/snapshot, etc. You'd
> > > >> > > > just call your SAN API with the host info for the ACLs prior to
> > > >> > > > when the disk is attached (or the VM is started). I'd have to
> > > >> > > > look more at the framework to know the details; in 4.1 I would
> > > >> > > > do this in getPhysicalDisk just prior to connecting up the LUN.
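> > > >> > > >
> > > >> > > > In other words, something shaped like this (the SAN client and
> > > >> > > > its methods are hypothetical stand-ins for whatever the
> > > >> > > > SolidFire API exposes):
> > > >> > > >
> > > >> > > >     // Hedged sketch: restrict the LUN's ACL to the one host
> > > >> > > >     // that will attach it, before connecting it up.
> > > >> > > >     class SanApiClient {
> > > >> > > >         void restrictAccess(String lunIqn, String hostInitiatorIqn) {
> > > >> > > >             // call the SAN's management API here (hypothetical)
> > > >> > > >         }
> > > >> > > >     }
> > > >> > > >
> > > >> > > >     class AclBeforeAttachSketch {
> > > >> > > >         private final SanApiClient san = new SanApiClient();
> > > >> > > >
> > > >> > > >         void connectPhysicalDisk(String lunIqn, String hostInitiatorIqn) {
> > > >> > > >             san.restrictAccess(lunIqn, hostInitiatorIqn); // ACL first
> > > >> > > >             // ...then iscsiadm login and hand the device to the VM
> > > >> > > >         }
> > > >> > > >     }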
> > > >> > > >
> > > >> > > >
> > > >> > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
> > > >> > > > <mike.tutkow...@solidfire.com> wrote:
> > > >> > > > > OK, yeah, the ACL part will be interesting. That is a bit
> > > >> > > > > different
> > > >> > > from
> > > >> > > > how
> > > >> > > > > it works with XenServer and VMware.
> > > >> > > > >
> > > >> > > > > Just to give you an idea how it works in 4.2 with XenServer:
> > > >> > > > >
> > > >> > > > > * The user creates a CS volume (this is just recorded in the
> > > >> > > > cloud.volumes
> > > >> > > > > table).
> > > >> > > > >
> > > >> > > > > * The user attaches the volume as a disk to a VM for the
> first
> > > >> > > > > time
> > > >> > (if
> > > >> > > > the
> > > >> > > > > storage allocator picks the SolidFire plug-in, the storage
> > > >> > > > > framework
> > > >> > > > invokes
> > > >> > > > > a method on the plug-in that creates a volume on the
> > SAN...info
> > > >> > > > > like
> > > >> > > the
> > > >> > > > IQN
> > > >> > > > > of the SAN volume is recorded in the DB).
> > > >> > > > >
> > > >> > > > > * CitrixResourceBase's execute(AttachVolumeCommand) is
> > executed.
> > > >> > > > > It
> > > >> > > > > determines based on a flag passed in that the storage in
> > > question
> > > >> > > > > is
> > > >> > > > > "CloudStack-managed" storage (as opposed to "traditional"
> > > >> > preallocated
> > > >> > > > > storage). This tells it to discover the iSCSI target. Once
> > > >> > > > > discovered
> > > >> > > it
> > > >> > > > > determines if the iSCSI target already contains a storage
> > > >> > > > > repository
> > > >> > > (it
> > > >> > > > > would if this were a re-attach situation). If it does
> contain
> > an
> > > >> > > > > SR
> > > >> > > > already,
> > > >> > > > > then there should already be one VDI, as well. If there is
> no
> > > SR,
> > > >> > > > > an
> > > >> > SR
> > > >> > > > is
> > > >> > > > > created and a single VDI is created within it (that takes up
> > > about
> > > >> > > > > as
> > > >> > > > much
> > > >> > > > > space as was requested for the CloudStack volume).
> > > >> > > > >
> > > >> > > > > * The normal attach-volume logic continues (it depends on
> the
> > > >> > existence
> > > >> > > > of
> > > >> > > > > an SR and a VDI).
> > > >> > > > >
> > > >> > > > > The VMware case is essentially the same (mainly just
> > substitute
> > > >> > > datastore
> > > >> > > > > for SR and VMDK for VDI).
> > > >> > > > >
> > > >> > > > > In both cases, all hosts in the cluster have discovered the
> > > >> > > > > iSCSI target, but only the host that is currently running the
> > > >> > > > > VM that is using the VDI (or VMDK) is actually using the disk.
> > > >> > > > >
> > > >> > > > > Live Migration should be OK because the hypervisors
> > communicate
> > > >> > > > > with
> > > >> > > > > whatever metadata they have on the SR (or datastore).
> > > >> > > > >
> > > >> > > > > I see what you're saying with KVM, though.
> > > >> > > > >
> > > >> > > > > In that case, the hosts are clustered only in CloudStack's
> > eyes.
> > > >> > > > > CS
> > > >> > > > controls
> > > >> > > > > Live Migration. You don't really need a clustered filesystem
> > on
> > > >> > > > > the
> > > >> > > LUN.
> > > >> > > > The
> > > >> > > > > LUN could be handed over raw to the VM using it.
> > > >> > > > >
> > > >> > > > > If there is a way for me to update the ACL list on the SAN
> to
> > > have
> > > >> > > only a
> > > >> > > > > single KVM host have access to the volume, that would be
> > ideal.
> > > >> > > > >
> > > >> > > > > Also, I agree I'll need to use iscsiadm to discover and log
> in
> > > to
> > > >> > > > > the
> > > >> > > > iSCSI
> > > >> > > > > target. I'll also need to take the resultant new device and
> > pass
> > > >> > > > > it
> > > >> > > into
> > > >> > > > the
> > > >> > > > > VM.
> > > >> > > > >
> > > >> > > > > Does this sound reasonable? Please call me out on anything I
> > > seem
> > > >> > > > incorrect
> > > >> > > > > about. :)
> > > >> > > > >
> > > >> > > > > Thanks for all the thought on this, Marcus!
> > > >> > > > >
> > > >> > > > >
> > > >> > > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <
> > > >> > shadow...@gmail.com>
> > > >> > > > > wrote:
> > > >> > > > >>
> > > >> > > > >> Perfect. You'll have a domain def (the VM), a disk def, and
> > > >> > > > >> then attach the disk def to the VM. You may need to do your
> > > >> > > > >> own StorageAdaptor and run iscsiadm commands to accomplish
> > > >> > > > >> that, depending on how the libvirt iscsi support works. My
> > > >> > > > >> impression is that a 1:1:1 pool/LUN/volume isn't how it works
> > > >> > > > >> on Xen at the moment, nor is it ideal.
> > > >> > > > >>
> > > >> > > > >> Your plugin will handle ACLs as far as which host can see
> > > >> > > > >> which LUNs as well; I remember discussing that months ago. A
> > > >> > > > >> disk won't be connected until the hypervisor has exclusive
> > > >> > > > >> access, so it will be safe, and the disk will be fenced from
> > > >> > > > >> rogue nodes that CloudStack loses connectivity with. It
> > > >> > > > >> should revoke access to everything but the target host...
> > > >> > > > >> except during migration, but we can discuss that later;
> > > >> > > > >> there's a migration prep process where the new host can be
> > > >> > > > >> added to the ACLs, and the old host can be removed post
> > > >> > > > >> migration.
> > > >> > > > >>
> > > >> > > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
> > > >> > > mike.tutkow...@solidfire.com
> > > >> > > > >
> > > >> > > > >> wrote:
> > > >> > > > >>>
> > > >> > > > >>> Yeah, that would be ideal.
> > > >> > > > >>>
> > > >> > > > >>> So, I would still need to discover the iSCSI target, log
> in
> > to
> > > >> > > > >>> it,
> > > >> > > then
> > > >> > > > >>> figure out what /dev/sdX was created as a result (and
> leave
> > it
> > > >> > > > >>> as
> > > >> > is
> > > >> > > -
> > > >> > > > do
> > > >> > > > >>> not format it with any file system...clustered or not). I
> > > would
> > > >> > pass
> > > >> > > > that
> > > >> > > > >>> device into the VM.
> > > >> > > > >>>
> > > >> > > > >>> Kind of accurate?
> > > >> > > > >>>
> > > >> > > > >>>
> > > >> > > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
> > > >> > > shadow...@gmail.com>
> > > >> > > > >>> wrote:
> > > >> > > > >>>>
> > > >> > > > >>>> Look in LibvirtVMDef.java (I think) for the disk
> > > >> > > > >>>> definitions. There are ones that work for block devices
> > > >> > > > >>>> rather than files. You can piggyback off of the existing
> > > >> > > > >>>> disk definitions and attach it to the VM as a block device.
> > > >> > > > >>>> The definition is an XML string per libvirt XML format. You
> > > >> > > > >>>> may want to use an alternate path to the disk rather than
> > > >> > > > >>>> just /dev/sdx like I mentioned; there are by-id paths to
> > > >> > > > >>>> the block devices, as well as other ones that will be
> > > >> > > > >>>> consistent and easier for management. Not sure how familiar
> > > >> > > > >>>> you are with device naming on Linux.
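> > > >> > > > >>>>
> > > >> > > > >>>> As a rough illustration, the XML string for a raw
> > > >> > > > >>>> block-device disk would be something like this, built in
> > > >> > > > >>>> Java since the agent deals in XML strings (the by-id path
> > > >> > > > >>>> is a placeholder):
> > > >> > > > >>>>
> > > >> > > > >>>>     // Hedged sketch: a libvirt disk def for a raw block
> > > >> > > > >>>>     // device, using a stable /dev/disk/by-id path.
> > > >> > > > >>>>     class BlockDiskDefSketch {
> > > >> > > > >>>>         static String diskXml(String byIdPath, String guestDev) {
> > > >> > > > >>>>             return "<disk type='block' device='disk'>\n"
> > > >> > > > >>>>                  + "  <driver name='qemu' type='raw'/>\n"
> > > >> > > > >>>>                  + "  <source dev='" + byIdPath + "'/>\n"
> > > >> > > > >>>>                  + "  <target dev='" + guestDev + "' bus='virtio'/>\n"
> > > >> > > > >>>>                  + "</disk>";
> > > >> > > > >>>>         }
> > > >> > > > >>>>
> > > >> > > > >>>>         public static void main(String[] args) {
> > > >> > > > >>>>             // placeholder by-id path; guest sees the disk as vdb
> > > >> > > > >>>>             System.out.println(diskXml("/dev/disk/by-id/scsi-EXAMPLE", "vdb"));
> > > >> > > > >>>>         }
> > > >> > > > >>>>     }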
> > > >> > > > >>>>
> > > >> > > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
> > > >> > > > >>>> <shadow...@gmail.com>
> > > >> > > > wrote:
> > > >> > > > >>>>>
> > > >> > > > >>>>> No, as that would rely on a virtualized network/iSCSI
> > > >> > > > >>>>> initiator inside the VM, which also sucks. I mean attach
> > > >> > > > >>>>> /dev/sdx (your LUN on the hypervisor) as a disk to the VM,
> > > >> > > > >>>>> rather than attaching some image file that resides on a
> > > >> > > > >>>>> filesystem, mounted on the host, living on a target.
> > > >> > > > >>>>>
> > > >> > > > >>>>> Actually, if you plan on the storage supporting live
> > > >> > > > >>>>> migration, I think this is the only way. You can't put a
> > > >> > > > >>>>> filesystem on it and mount it in two places to facilitate
> > > >> > > > >>>>> migration unless it's a clustered filesystem, in which
> > > >> > > > >>>>> case you're back to shared mount point.
> > > >> > > > >>>>>
> > > >> > > > >>>>> As far as I'm aware, the XenServer SR style is basically
> > > >> > > > >>>>> LVM with a Xen-specific cluster management, a custom CLVM.
> > > >> > > > >>>>> They don't use a filesystem either.
> > > >> > > > >>>>>
> > > >> > > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
> > > >> > > > >>>>> <mike.tutkow...@solidfire.com> wrote:
> > > >> > > > >>>>>>
> > > >> > > > >>>>>> When you say, "wire up the lun directly to the vm," do
> > you
> > > >> > > > >>>>>> mean
> > > >> > > > >>>>>> circumventing the hypervisor? I didn't think we could
> do
> > > that
> > > >> > > > >>>>>> in
> > > >> > > CS.
> > > >> > > > >>>>>> OpenStack, on the other hand, always circumvents the
> > > >> > > > >>>>>> hypervisor,
> > > >> > > as
> > > >> > > > far as I
> > > >> > > > >>>>>> know.
> > > >> > > > >>>>>>
> > > >> > > > >>>>>>
> > > >> > > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
> > > >> > > > shadow...@gmail.com>
> > > >> > > > >>>>>> wrote:
> > > >> > > > >>>>>>>
> > > >> > > > >>>>>>> Better to wire up the lun directly to the vm unless
> > there
> > > is
> > > >> > > > >>>>>>> a
> > > >> > > good
> > > >> > > > >>>>>>> reason not to.
> > > >> > > > >>>>>>>
> > > >> > > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <
> > > >> > shadow...@gmail.com>
> > > >> > > > >>>>>>> wrote:
> > > >> > > > >>>>>>>>
> > > >> > > > >>>>>>>> You could do that, but as mentioned I think it's a
> > > >> > > > >>>>>>>> mistake to go to the trouble of creating a 1:1 mapping
> > > >> > > > >>>>>>>> of CS volumes to LUNs and then putting a filesystem on
> > > >> > > > >>>>>>>> it, mounting it, and then putting a QCOW2 or even RAW
> > > >> > > > >>>>>>>> disk image on that filesystem. You'll lose a lot of
> > > >> > > > >>>>>>>> IOPS along the way, and have more overhead with the
> > > >> > > > >>>>>>>> filesystem and its journaling, etc.
> > > >> > > > >>>>>>>>
> > > >> > > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
> > > >> > > > >>>>>>>> <mike.tutkow...@solidfire.com> wrote:
> > > >> > > > >>>>>>>>>
> > > >> > > > >>>>>>>>> Ah, OK, I didn't know that was such new ground in
> KVM
> > > with
> > > >> > CS.
> > > >> > > > >>>>>>>>>
> > > >> > > > >>>>>>>>> So, the way people use our SAN with KVM and CS today
> > is
> > > by
> > > >> > > > >>>>>>>>> selecting SharedMountPoint and specifying the
> location
> > > of
> > > >> > > > >>>>>>>>> the
> > > >> > > > share.
> > > >> > > > >>>>>>>>>
> > > >> > > > >>>>>>>>> They can set up their share using Open iSCSI by
> > > >> > > > >>>>>>>>> discovering
> > > >> > > their
> > > >> > > > >>>>>>>>> iSCSI target, logging in to it, then mounting it
> > > somewhere
> > > >> > > > >>>>>>>>> on
> > > >> > > > their file
> > > >> > > > >>>>>>>>> system.
> > > >> > > > >>>>>>>>>
> > > >> > > > >>>>>>>>> Would it make sense for me to just do that
> discovery,
> > > >> > > > >>>>>>>>> logging
> > > >> > > in,
> > > >> > > > >>>>>>>>> and mounting behind the scenes for them and letting
> > the
> > > >> > current
> > > >> > > > code manage
> > > >> > > > >>>>>>>>> the rest as it currently does?
> > > >> > > > >>>>>>>>>
> > > >> > > > >>>>>>>>>
> > > >> > > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
> > > >> > > > >>>>>>>>> <shadow...@gmail.com> wrote:
> > > >> > > > >>>>>>>>>>
> > > >> > > > >>>>>>>>>> Oh, hypervisor snapshots are a bit different. I
> need
> > to
> > > >> > catch
> > > >> > > up
> > > >> > > > >>>>>>>>>> on the work done in KVM, but this is basically just
> > > disk
> > > >> > > > snapshots + memory
> > > >> > > > >>>>>>>>>> dump. I still think disk snapshots would preferably
> > be
> > > >> > handled
> > > >> > > > by the SAN,
> > > >> > > > >>>>>>>>>> and then memory dumps can go to secondary storage
> or
> > > >> > something
> > > >> > > > else. This is
> > > >> > > > >>>>>>>>>> relatively new ground with CS and KVM, so we will
> > want
> > > to
> > > >> > see
> > > >> > > > how others are
> > > >> > > > >>>>>>>>>> planning theirs.
> > > >> > > > >>>>>>>>>>
> > > >> > > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
> > > >> > > shadow...@gmail.com
> > > >> > > > >
> > > >> > > > >>>>>>>>>> wrote:
> > > >> > > > >>>>>>>>>>>
> > > >> > > > >>>>>>>>>>> Let me back up and say I don't think you'd use a
> > > >> > > > >>>>>>>>>>> VDI style on an iSCSI LUN. I think you'd want to
> > > >> > > > >>>>>>>>>>> treat it as a RAW format. Otherwise you're putting a
> > > >> > > > >>>>>>>>>>> filesystem on your LUN, mounting it, creating a
> > > >> > > > >>>>>>>>>>> QCOW2 disk image, and that seems unnecessary and a
> > > >> > > > >>>>>>>>>>> performance killer.
> > > >> > > > >>>>>>>>>>>
> > > >> > > > >>>>>>>>>>> So probably attaching the raw iSCSI LUN as a disk
> > > >> > > > >>>>>>>>>>> to the VM, and handling snapshots on the SAN side
> > > >> > > > >>>>>>>>>>> via the storage plugin, is best. My impression from
> > > >> > > > >>>>>>>>>>> the storage plugin refactor was that there was a
> > > >> > > > >>>>>>>>>>> snapshot service that would allow the SAN to handle
> > > >> > > > >>>>>>>>>>> snapshots.
> > > >> > > > >>>>>>>>>>>
> > > >> > > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
> > > >> > > > shadow...@gmail.com>
> > > >> > > > >>>>>>>>>>> wrote:
> > > >> > > > >>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN
> > > >> > > > >>>>>>>>>>>> back end, if the SAN supports it. The CloudStack
> > > >> > > > >>>>>>>>>>>> mgmt server could call your plugin for volume
> > > >> > > > >>>>>>>>>>>> snapshot and it would be hypervisor agnostic. As
> > > >> > > > >>>>>>>>>>>> far as space, that would depend on how your SAN
> > > >> > > > >>>>>>>>>>>> handles it. With ours, we carve out LUNs from a
> > > >> > > > >>>>>>>>>>>> pool, and the snapshot space comes from the pool
> > > >> > > > >>>>>>>>>>>> and is independent of the LUN size the host sees.
> > > >> > > > >>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
> > > >> > > > >>>>>>>>>>>> <mike.tutkow...@solidfire.com> wrote:
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>> Hey Marcus,
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool type for
> > libvirt
> > > >> > > > >>>>>>>>>>>>> won't
> > > >> > > > work
> > > >> > > > >>>>>>>>>>>>> when you take into consideration hypervisor
> > > snapshots?
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>> On XenServer, when you take a hypervisor
> snapshot,
> > > the
> > > >> > VDI
> > > >> > > > for
> > > >> > > > >>>>>>>>>>>>> the snapshot is placed on the same storage
> > > repository
> > > >> > > > >>>>>>>>>>>>> as
> > > >> > > the
> > > >> > > > volume is on.
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>> Same idea for VMware, I believe.
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>> So, what would happen in my case (let's say for
> > > >> > > > >>>>>>>>>>>>> XenServer
> > > >> > > and
> > > >> > > > >>>>>>>>>>>>> VMware for 4.3 because I don't support
> hypervisor
> > > >> > snapshots
> > > >> > > > in 4.2) is I'd
> > > >> > > > >>>>>>>>>>>>> make an iSCSI target that is larger than what
> the
> > > user
> > > >> > > > requested for the
> > > >> > > > >>>>>>>>>>>>> CloudStack volume (which is fine because our SAN
> > > >> > > > >>>>>>>>>>>>> thinly
> > > >> > > > provisions volumes,
> > > >> > > > >>>>>>>>>>>>> so the space is not actually used unless it
> needs
> > to
> > > >> > > > >>>>>>>>>>>>> be).
> > > >> > > > The CloudStack
> > > >> > > > >>>>>>>>>>>>> volume would be the only "object" on the SAN
> > volume
> > > >> > until a
> > > >> > > > hypervisor
> > > >> > > > >>>>>>>>>>>>> snapshot is taken. This snapshot would also
> reside
> > > on
> > > >> > > > >>>>>>>>>>>>> the
> > > >> > > > SAN volume.
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>> If this is also how KVM behaves and there is no
> > > >> > > > >>>>>>>>>>>>> creation
> > > >> > of
> > > >> > > > >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which,
> > > even
> > > >> > > > >>>>>>>>>>>>> if
> > > >> > > > there were support
> > > >> > > > >>>>>>>>>>>>> for this, our SAN currently only allows one LUN
> > per
> > > >> > > > >>>>>>>>>>>>> iSCSI
> > > >> > > > target), then I
> > > >> > > > >>>>>>>>>>>>> don't see how using this model will work.
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>> Perhaps I will have to go enhance the current
> way
> > > this
> > > >> > > works
> > > >> > > > >>>>>>>>>>>>> with DIR?
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>> What do you think?
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>> Thanks
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
> > > >> > > > >>>>>>>>>>>>> <mike.tutkow...@solidfire.com> wrote:
> > > >> > > > >>>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>> That appears to be the way it's used for iSCSI
> > > access
> > > >> > > today.
> > > >> > > > >>>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>> I suppose I could go that route, too, but I
> might
> > > as
> > > >> > well
> > > >> > > > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
> > > >> > > > >>>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus
> Sorensen
> > > >> > > > >>>>>>>>>>>>>> <shadow...@gmail.com> wrote:
> > > >> > > > >>>>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>>> To your question about SharedMountPoint, I
> > believe
> > > >> > > > >>>>>>>>>>>>>>> it
> > > >> > > just
> > > >> > > > >>>>>>>>>>>>>>> acts like a
> > > >> > > > >>>>>>>>>>>>>>> 'DIR' storage type or something similar to
> that.
> > > The
> > > >> > > > end-user
> > > >> > > > >>>>>>>>>>>>>>> is
> > > >> > > > >>>>>>>>>>>>>>> responsible for mounting a file system that
> all
> > > KVM
> > > >> > hosts
> > > >> > > > can
> > > >> > > > >>>>>>>>>>>>>>> access,
> > > >> > > > >>>>>>>>>>>>>>> and CloudStack is oblivious to what is
> providing
> > > the
> > > >> > > > storage.
> > > >> > > > >>>>>>>>>>>>>>> It could
> > > >> > > > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered
> > > >> > > > >>>>>>>>>>>>>>> filesystem,
> > > >> > > > >>>>>>>>>>>>>>> cloudstack just
> > > >> > > > >>>>>>>>>>>>>>> knows that the provided directory path has VM
> > > >> > > > >>>>>>>>>>>>>>> images.
> > > >> > > > >>>>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus
> Sorensen
> > > >> > > > >>>>>>>>>>>>>>> <shadow...@gmail.com> wrote:
> > > >> > > > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all
> at
> > > the
> > > >> > same
> > > >> > > > >>>>>>>>>>>>>>> > time.
> > > >> > > > >>>>>>>>>>>>>>> > Multiples, in fact.
> > > >> > > > >>>>>>>>>>>>>>> >
> > > >> > > > >>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike
> > Tutkowski
> > > >> > > > >>>>>>>>>>>>>>> > <mike.tutkow...@solidfire.com> wrote:
> > > >> > > > >>>>>>>>>>>>>>> >> Looks like you can have multiple storage
> > pools:
> > > >> > > > >>>>>>>>>>>>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
> > > >> > > > >>>>>>>>>>>>>>> >> Name                 State      Autostart
> > > >> > > > >>>>>>>>>>>>>>> >> -----------------------------------------
> > > >> > > > >>>>>>>>>>>>>>> >> default              active     yes
> > > >> > > > >>>>>>>>>>>>>>> >> iSCSI                active     no
> > > >> > > > >>>>>>>>>>>>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike
> > Tutkowski
> > > >> > > > >>>>>>>>>>>>>>> >> <mike.tutkow...@solidfire.com> wrote:
> > > >> > > > >>>>>>>>>>>>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
> > > >> > > > >>>>>>>>>>>>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>> I see what you're saying now.
> > > >> > > > >>>>>>>>>>>>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage
> > pool
> > > >> > based
> > > >> > > on
> > > >> > > > >>>>>>>>>>>>>>> >>> an iSCSI target.
> > > >> > > > >>>>>>>>>>>>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only
> have
> > > one
> > > >> > LUN,
> > > >> > > > so
> > > >> > > > >>>>>>>>>>>>>>> >>> there would only
> > > >> > > > >>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in
> the
> > > >> > > (libvirt)
> > > >> > > > >>>>>>>>>>>>>>> >>> storage pool.
> > > >> > > > >>>>>>>>>>>>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>> As you say, my plug-in creates and
> destroys
> > > >> > > > >>>>>>>>>>>>>>> >>> iSCSI
> > > >> > > > >>>>>>>>>>>>>>> >>> targets/LUNs on the
> > > >> > > > >>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that
> > > >> > > > >>>>>>>>>>>>>>> >>> libvirt
> > > >> > > does
> > > >> > > > >>>>>>>>>>>>>>> >>> not support
> > > >> > > > >>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
> > > >> > > > >>>>>>>>>>>>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to
> > see
> > > >> > > > >>>>>>>>>>>>>>> >>> if
> > > >> > > > libvirt
> > > >> > > > >>>>>>>>>>>>>>> >>> supports
> > > >> > > > >>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you
> > > mentioned,
> > > >> > since
> > > >> > > > >>>>>>>>>>>>>>> >>> each one of its
> > > >> > > > >>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
> > > >> > > > targets/LUNs).
> > > >> > > > >>>>>>>>>>>>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike
> > > Tutkowski
> > > >> > > > >>>>>>>>>>>>>>> >>> <mike.tutkow...@solidfire.com> wrote:
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>     public enum poolType {
> > > >> > > > >>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
> > > >> > > > >>>>>>>>>>>>>>> >>>>         LOGICAL("logical"), DIR("dir"),
> > > >> > > > >>>>>>>>>>>>>>> >>>>         RBD("rbd");
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>         String _poolType;
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
> > > >> > > > >>>>>>>>>>>>>>> >>>>             _poolType = poolType;
> > > >> > > > >>>>>>>>>>>>>>> >>>>         }
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>         @Override
> > > >> > > > >>>>>>>>>>>>>>> >>>>         public String toString() {
> > > >> > > > >>>>>>>>>>>>>>> >>>>             return _poolType;
> > > >> > > > >>>>>>>>>>>>>>> >>>>         }
> > > >> > > > >>>>>>>>>>>>>>> >>>>     }
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is
> > > >> > > > >>>>>>>>>>>>>>> >>>> currently
> > > >> > > being
> > > >> > > > >>>>>>>>>>>>>>> >>>> used, but I'm
> > > >> > > > >>>>>>>>>>>>>>> >>>> understanding more what you were getting
> > at.
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2),
> when
> > > >> > > > >>>>>>>>>>>>>>> >>>> someone
> > > >> > > > >>>>>>>>>>>>>>> >>>> selects the
> > > >> > > > >>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with
> > > iSCSI,
> > > >> > > > >>>>>>>>>>>>>>> >>>> is
> > > >> > > > that
> > > >> > > > >>>>>>>>>>>>>>> >>>> the "netfs" option
> > > >> > > > >>>>>>>>>>>>>>> >>>> above or is that just for NFS?
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>> Thanks!
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus
> > > >> > > > >>>>>>>>>>>>>>> >>>> Sorensen
> > > >> > > > >>>>>>>>>>>>>>> >>>> <shadow...@gmail.com>
> > > >> > > > >>>>>>>>>>>>>>> >>>> wrote:
> > > >> > > > >>>>>>>>>>>>>>> >>>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> Take a look at this:
> > > >> > > > >>>>>>>>>>>>>>> >>>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>>
> > > >> > > http://libvirt.org/storage.html#StorageBackendISCSI
> > > >> > > > >>>>>>>>>>>>>>> >>>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the
> > iSCSI
> > > >> > server,
> > > >> > > > and
> > > >> > > > >>>>>>>>>>>>>>> >>>>> cannot be
> > > >> > > > >>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I
> > > >> > > > >>>>>>>>>>>>>>> >>>>> believe
> > > >> > > your
> > > >> > > > >>>>>>>>>>>>>>> >>>>> plugin will take
> > > >> > > > >>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of
> > > logging
> > > >> > > > >>>>>>>>>>>>>>> >>>>> in
> > > >> > > and
> > > >> > > > >>>>>>>>>>>>>>> >>>>> hooking it up to
> > > >> > > > >>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that
> > work
> > > >> > > > >>>>>>>>>>>>>>> >>>>> in
> > > >> > the
> > > >> > > > Xen
> > > >> > > > >>>>>>>>>>>>>>> >>>>> stuff).
> > > >> > > > >>>>>>>>>>>>>>> >>>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this
> > > >> > > > >>>>>>>>>>>>>>> >>>>> provides
> > > >> > a
> > > >> > > > 1:1
> > > >> > > > >>>>>>>>>>>>>>> >>>>> mapping, or if
> > > >> > > > >>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi
> > > device
> > > >> > > > >>>>>>>>>>>>>>> >>>>> as
> > > >> > a
> > > >> > > > >>>>>>>>>>>>>>> >>>>> pool. You may need
> > > >> > > > >>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit
> > > more
> > > >> > about
> > > >> > > > >>>>>>>>>>>>>>> >>>>> this. Let us know.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to
> > > >> > > > >>>>>>>>>>>>>>> >>>>> write your own storage adaptor rather
> > > >> > > > >>>>>>>>>>>>>>> >>>>> than changing LibvirtStorageAdaptor.java.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> We can cross that bridge when we get
> > > >> > > > >>>>>>>>>>>>>>> >>>>> there.
> > > >> > > > >>>>>>>>>>>>>>> >>>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see
> > > >> > > > >>>>>>>>>>>>>>> >>>>> the java bindings doc:
> > > >> > > > >>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/
> > > >> > > > >>>>>>>>>>>>>>> >>>>> Normally, you'll see a connection object
> > > >> > > > >>>>>>>>>>>>>>> >>>>> be made, then calls made to that 'conn'
> > > >> > > > >>>>>>>>>>>>>>> >>>>> object. You can look at the
> > > >> > > > >>>>>>>>>>>>>>> >>>>> LibvirtStorageAdaptor to see how that is
> > > >> > > > >>>>>>>>>>>>>>> >>>>> done for other pool types, and maybe
> > > >> > > > >>>>>>>>>>>>>>> >>>>> write some test java code to see if you
> > > >> > > > >>>>>>>>>>>>>>> >>>>> can interface with libvirt and register
> > > >> > > > >>>>>>>>>>>>>>> >>>>> iscsi storage pools before you get
> > > >> > > > >>>>>>>>>>>>>>> >>>>> started.
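> > > >> > > > >>>>>>>>>>>>>>> >>>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> A minimal test along those lines might
> > > >> > > > >>>>>>>>>>>>>>> >>>>> look like this (uses the org.libvirt Java
> > > >> > > > >>>>>>>>>>>>>>> >>>>> bindings; the pool name, portal, and IQN
> > > >> > > > >>>>>>>>>>>>>>> >>>>> are placeholders):
> > > >> > > > >>>>>>>>>>>>>>> >>>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>>     import org.libvirt.Connect;
> > > >> > > > >>>>>>>>>>>>>>> >>>>>     import org.libvirt.StoragePool;
> > > >> > > > >>>>>>>>>>>>>>> >>>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>>     // Hedged sketch: create a transient
> > > >> > > > >>>>>>>>>>>>>>> >>>>>     // libvirt iscsi pool via the Java bindings.
> > > >> > > > >>>>>>>>>>>>>>> >>>>>     public class IscsiPoolTest {
> > > >> > > > >>>>>>>>>>>>>>> >>>>>         public static void main(String[] args) throws Exception {
> > > >> > > > >>>>>>>>>>>>>>> >>>>>             Connect conn = new Connect("qemu:///system");
> > > >> > > > >>>>>>>>>>>>>>> >>>>>             String poolXml =
> > > >> > > > >>>>>>>>>>>>>>> >>>>>                 "<pool type='iscsi'>"
> > > >> > > > >>>>>>>>>>>>>>> >>>>>               + "<name>sf-test</name>"
> > > >> > > > >>>>>>>>>>>>>>> >>>>>               + "<source>"
> > > >> > > > >>>>>>>>>>>>>>> >>>>>               + "  <host name='192.168.0.100'/>"
> > > >> > > > >>>>>>>>>>>>>>> >>>>>               + "  <device path='iqn.2010-01.com.solidfire:example'/>"
> > > >> > > > >>>>>>>>>>>>>>> >>>>>               + "</source>"
> > > >> > > > >>>>>>>>>>>>>>> >>>>>               + "<target><path>/dev/disk/by-path</path></target>"
> > > >> > > > >>>>>>>>>>>>>>> >>>>>               + "</pool>";
> > > >> > > > >>>>>>>>>>>>>>> >>>>>             // created and started in one call
> > > >> > > > >>>>>>>>>>>>>>> >>>>>             StoragePool pool = conn.storagePoolCreateXML(poolXml, 0);
> > > >> > > > >>>>>>>>>>>>>>> >>>>>             System.out.println("created pool: " + pool.getName());
> > > >> > > > >>>>>>>>>>>>>>> >>>>>         }
> > > >> > > > >>>>>>>>>>>>>>> >>>>>     }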
> > > >> > > > >>>>>>>>>>>>>>> >>>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike
> > > >> > > > >>>>>>>>>>>>>>> >>>>> Tutkowski
> > > >> > > > >>>>>>>>>>>>>>> >>>>> <mike.tutkow...@solidfire.com> wrote:
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate
> > libvirt
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > more,
> > > >> > > but
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > you figure it
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > supports
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > targets,
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > right?
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > Tutkowski
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > <mike.tutkow...@solidfire.com> wrote:
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> I am currently looking through some
> of
> > > the
> > > >> > > classes
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> you pointed out
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> last
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> week or so.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM,
> Marcus
> > > >> > Sorensen
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> <shadow...@gmail.com>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> wrote:
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need
> > the
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> iscsi
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator utilities
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> packages
> > > >> > > for
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> any distro. Then
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> you'd call
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator
> > > >> > > > login.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> See the info I
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> sent
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> previously about
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> LibvirtStorageAdaptor.java
> > > >> > and
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> storage type
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike
> > > Tutkowski"
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> <mike.tutkow...@solidfire.com>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> wrote:
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Hi,
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> release
> > > >> > I
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the
> > storage
> > > >> > > > framework
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> at the necessary
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> times
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create
> > and
> > > >> > delete
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes on the
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can
> establish
> > a
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> 1:1
> > > >> > > > mapping
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> between a
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> CloudStack
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for
> > QoS.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always
> > expected
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
> > > >> > > > admin
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> to create large
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those
> > volumes
> > > >> > would
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> likely house many
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> root and
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS
> > friendly).
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme
> > work, I
> > > >> > needed
> > > >> > > > to
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> modify logic in
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so
> they
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> could
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen
> > with
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how
> this
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> might
> > > >> > > work
> > > >> > > > on
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> still
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know
> > how
> > > I
> > > >> > will
> > > >> > > > need
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> to interact with
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I
> > have
> > > to
> > > >> > > expect
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use
> it
> > > for
> > > >> > this
> > > >> > > to
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> work?
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> --
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer,
> > SolidFire
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Inc.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkow...@solidfire.com
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses
> the
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> cloud™
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> --
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer,
> SolidFire
> > > Inc.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> e: mike.tutkow...@solidfire.com
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the
> > > cloud™
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > --
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > Mike Tutkowski
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire
> > > Inc.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > e: mike.tutkow...@solidfire.com
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > o: 303.746.7302
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > Advancing the way the world uses the
> > > cloud™
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>> --
> > > >> > > > >>>>>>>>>>>>>>> >>>> Mike Tutkowski
> > > >> > > > >>>>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire
> Inc.
> > > >> > > > >>>>>>>>>>>>>>> >>>> e: mike.tutkow...@solidfire.com
> > > >> > > > >>>>>>>>>>>>>>> >>>> o: 303.746.7302
> > > >> > > > >>>>>>>>>>>>>>> >>>> Advancing the way the world uses the
> cloud™
> > > >> > > > >>>>>>>>>>>>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>> --
> > > >> > > > >>>>>>>>>>>>>>> >>> Mike Tutkowski
> > > >> > > > >>>>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire
> Inc.
> > > >> > > > >>>>>>>>>>>>>>> >>> e: mike.tutkow...@solidfire.com
> > > >> > > > >>>>>>>>>>>>>>> >>> o: 303.746.7302
> > > >> > > > >>>>>>>>>>>>>>> >>> Advancing the way the world uses the
> cloud™
> > > >> > > > >>>>>>>>>>>>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >> --
> > > >> > > > >>>>>>>>>>>>>>> >> Mike Tutkowski
> > > >> > > > >>>>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
> > > >> > > > >>>>>>>>>>>>>>> >> e: mike.tutkow...@solidfire.com
> > > >> > > > >>>>>>>>>>>>>>> >> o: 303.746.7302
> > > >> > > > >>>>>>>>>>>>>>> >> Advancing the way the world uses the cloud™
> > > >> > > > >>>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>> --
> > > >> > > > >>>>>>>>>>>>>> Mike Tutkowski
> > > >> > > > >>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> > > >> > > > >>>>>>>>>>>>>> e: mike.tutkow...@solidfire.com
> > > >> > > > >>>>>>>>>>>>>> o: 303.746.7302
> > > >> > > > >>>>>>>>>>>>>> Advancing the way the world uses the cloud™
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>> --
> > > >> > > > >>>>>>>>>>>>> Mike Tutkowski
> > > >> > > > >>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> > > >> > > > >>>>>>>>>>>>> e: mike.tutkow...@solidfire.com
> > > >> > > > >>>>>>>>>>>>> o: 303.746.7302
> > > >> > > > >>>>>>>>>>>>> Advancing the way the world uses the cloud™
> > > >> > > > >>>>>>>>>
> > > >> > > > >>>>>>>>>
> > > >> > > > >>>>>>>>>
> > > >> > > > >>>>>>>>>
> > > >> > > > >>>>>>>>> --
> > > >> > > > >>>>>>>>> Mike Tutkowski
> > > >> > > > >>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> > > >> > > > >>>>>>>>> e: mike.tutkow...@solidfire.com
> > > >> > > > >>>>>>>>> o: 303.746.7302
> > > >> > > > >>>>>>>>> Advancing the way the world uses the cloud™
> > > >> > > > >>>>>>
> > > >> > > > >>>>>>
> > > >> > > > >>>>>>
> > > >> > > > >>>>>>
> > > >> > > > >>>>>> --
> > > >> > > > >>>>>> Mike Tutkowski
> > > >> > > > >>>>>> Senior CloudStack Developer, SolidFire Inc.
> > > >> > > > >>>>>> e: mike.tutkow...@solidfire.com
> > > >> > > > >>>>>> o: 303.746.7302
> > > >> > > > >>>>>> Advancing the way the world uses the cloud™
> > > >> > > > >>>
> > > >> > > > >>>
> > > >> > > > >>>
> > > >> > > > >>>
> > > >> > > > >>> --
> > > >> > > > >>> Mike Tutkowski
> > > >> > > > >>> Senior CloudStack Developer, SolidFire Inc.
> > > >> > > > >>> e: mike.tutkow...@solidfire.com
> > > >> > > > >>> o: 303.746.7302
> > > >> > > > >>> Advancing the way the world uses the cloud™
> > > >> > > > >
> > > >> > > > >
> > > >> > > > >
> > > >> > > > >
> > > >> > > > > --
> > > >> > > > > Mike Tutkowski
> > > >> > > > > Senior CloudStack Developer, SolidFire Inc.
> > > >> > > > > e: mike.tutkow...@solidfire.com
> > > >> > > > > o: 303.746.7302
> > > >> > > > > Advancing the way the world uses the cloud™
> > > >> > > >
> > > >> > >
> > > >> > >
> > > >> > >
> > > >> > > --
> > > >> > > *Mike Tutkowski*
> > > >> > > *Senior CloudStack Developer, SolidFire Inc.*
> > > >> > > e: mike.tutkow...@solidfire.com
> > > >> > > o: 303.746.7302
> > > >> > > Advancing the way the world uses the
> > > >> > > cloud<http://solidfire.com/solution/overview/?video=play>
> > > >> > > *™*
> > > >> > >
> > > >> >
> > > >>
> > > >>
> > > >>
> > > >> --
> > > >> *Mike Tutkowski*
> > > >> *Senior CloudStack Developer, SolidFire Inc.*
> > > >> e: mike.tutkow...@solidfire.com
> > > >> o: 303.746.7302
> > > >> Advancing the way the world uses the
> > > >> cloud<http://solidfire.com/solution/overview/?video=play>
> > > >> *™*
> > >
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkow...@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud<http://solidfire.com/solution/overview/?video=play>
> > *™*
> >
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*
