> I'd hate to have, say, 10,000 KVM hosts all stuck in the same VAG...each
> host having access (technically) to, say, 20,000 volumes.
Plus this modeling of VAGs does nothing to fence against rogue compute
nodes. If we did use VAGs, I'd like to use them in a way that protects
against rogue nodes.
CH
That works.
Right now the cmd takes in CHAP info in the form of four strings (initiator
username, initiator password, target username, and target password).
I like your idea of a generic list better, though.
We could remove the four strings from the cmd and I could use the map in
not just KVM la
Yeah, maybe we just add a "Map connectDetails =
cmd.getConnectDetails()" into attachVolume, and pass that along to
connectPhysicalDisk. Then just set the details where you're currently
setting the chap info prior to the attach command.
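A minimal sketch of that idea, with illustrative names (the real CloudStack AttachCommand and StorageAdaptor signatures differ): the attach command carries a generic details map instead of four fixed CHAP strings, and the adaptor pulls out whatever keys it understands.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the "generic map" approach discussed above.
class AttachCommand {
    private final Map<String, String> connectDetails = new HashMap<>();

    Map<String, String> getConnectDetails() {
        return connectDetails;
    }
}

class StorageAdaptorSketch {
    // The adaptor reads plugin-specific keys (e.g. CHAP credentials)
    // from the map rather than requiring a custom method signature.
    String connectPhysicalDisk(String volumePath, Map<String, String> details) {
        String user = details.get("chapInitiatorUsername");
        return "connected " + volumePath + " as " + user;
    }
}
```

Anything that doesn't use CHAP simply ignores those keys, so the interface stays storage-agnostic.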
On Sat, Sep 28, 2013 at 12:25 AM, Mike Tutkowski
wrote:
> CHAP credentials are per SolidFire/CloudStack account.
If grantAccess/revokeAccess worked in the storage plug-in, though, I could
create a VAG per KVM host and give it access to whatever volumes it needs
access to.
When we revoke access, I could remove the IQN of the volume from the
appropriate VAG. If the VAG is empty, I could delete it.
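The grant/revoke behavior described above can be modeled in a few lines; this is only an illustration of the bookkeeping (one VAG per KVM host, IQNs removed on revoke, empty VAGs deleted), not the SolidFire API.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of the proposed grantAccess/revokeAccess semantics.
class VagRegistry {
    // host -> set of volume IQNs the host's VAG allows
    private final Map<String, Set<String>> vagsByHost = new HashMap<>();

    void grantAccess(String host, String iqn) {
        vagsByHost.computeIfAbsent(host, h -> new HashSet<>()).add(iqn);
    }

    void revokeAccess(String host, String iqn) {
        Set<String> vag = vagsByHost.get(host);
        if (vag == null) return;
        vag.remove(iqn);
        if (vag.isEmpty()) {
            vagsByHost.remove(host); // delete the now-empty VAG
        }
    }

    boolean hasAccess(String host, String iqn) {
        Set<String> vag = vagsByHost.get(host);
        return vag != null && vag.contains(iqn);
    }

    boolean vagExists(String host) {
        return vagsByHost.containsKey(host);
    }
}
```

Because each host only ever sees the volumes in its own VAG, a rogue node can't reach volumes it was never granted.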
If you had,
CHAP credentials are per SolidFire/CloudStack account. A volume on the
SolidFire SAN belongs to an account. I have a one-to-one mapping between a
CloudStack account and a SolidFire account.
As far as Volume Access Groups (VAGs) go, I'm not sure what kind of limits
they have.
I'd hate to have, say, 10,000 KVM hosts all stuck in the same VAG...each
host having access (technically) to, say, 20,000 volumes.
On a side note, I'm actually a proponent of the storage plugin only talking
to its storage API. Best to be hypervisor agnostic. It's just that in the
case of the default storage plugin, the storage API is the CloudStack agent.
But enough about that, we can spin that off into another thread if we need
to.
I actually think it's irrelevant to this discussion, since it's really only
a contention point for the default KVM plugin. I did post this with the
intention of getting feedback from anyone who cares about the KVM side and
how it handles things.
What you describe with Xen sounds identical to what we
I think we should get John Burwell and Edison Su involved in these
discussions.
John was a proponent for not having storage plug-ins talk to hypervisors
and Edison felt it was fine.
Until we resolve that design issue, we might just be talking in circles
here. :)
On Fri, Sep 27, 2013 at 7:29 PM,
Let's see...to answer your question here:
With Xen, as with any hypervisor, the storage framework invokes createAsync
for you to create your volume just before it is to be attached to a VM for
the first time.
A call is not necessarily made from the storage plug-in to register it as
an SR, however.
Yeah, OK. That's kind of what I'm talking about. With KVM we have storage
adaptors, so they can just pass the VM info to the hypervisor and it can
select the right way to handle the storage on the fly, which is a bit
easier, even though ultimately you still have to write the hypervisor-side
code.
O
Well, the default KVM one does, but only because the hypervisor is playing
the part of storage API. My plugin certainly doesn't have to talk to the
hypervisor, and I don't think I've instructed you to.
On Sep 27, 2013 7:29 PM, "Marcus Sorensen" wrote:
> The plugin itself doesn't talk to the hypervisor.
The plugin itself doesn't talk to the hypervisor.
On Sep 27, 2013 7:28 PM, "Mike Tutkowski"
wrote:
> Yeah, we should bring John Burwell into this conversation. He had a strong
> opposition to storage plug-ins talking to hypervisors. Perhaps he was just
> referring to shared storage, though (not local disks).
Yeah, we should bring John Burwell into this conversation. He had a strong
opposition to storage plug-ins talking to hypervisors. Perhaps he was just
referring to shared storage, though (not local disks).
Anyways, prior to 4.2 for XenServer and VMware, you had to preallocate your
SR/datastore (the
On Fri, Sep 27, 2013 at 6:03 PM, Marcus Sorensen wrote:
> On Fri, Sep 27, 2013 at 5:16 PM, Mike Tutkowski
> wrote:
>> createAsync is just for creating the SAN (or whatever storage) volume.
>> deleteAsync is the reverse.
>
> Exactly. It used to be that the hypervisor created the disk/lun/file
> volume via createPhysicalDisk.
https://reviews.apache.org/r/14381/
On Fri, Sep 27, 2013 at 6:03 PM, Marcus Sorensen wrote:
> On Fri, Sep 27, 2013 at 5:16 PM, Mike Tutkowski
> wrote:
>> createAsync is just for creating the SAN (or whatever storage) volume.
>> deleteAsync is the reverse.
>
> Exactly. It used to be that the hypervisor created the disk/lun/file
> volume via createPhysicalDisk.
On Fri, Sep 27, 2013 at 5:16 PM, Mike Tutkowski
wrote:
> createAsync is just for creating the SAN (or whatever storage) volume.
> deleteAsync is the reverse.
Exactly. It used to be that the hypervisor created the disk/lun/file
volume via createPhysicalDisk. Now it's done on the SAN if the plugin
createAsync is just for creating the SAN (or whatever storage) volume.
deleteAsync is the reverse.
Technically even the default plug-in should not call into the hypervisor
layer.
The storage layer should probably not be aware of the hypervisor layer.
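The division of labor described above can be sketched as follows. This is an illustration only: the driver's createAsync/deleteAsync talk solely to the storage API (here a stub SAN client), never to the hypervisor. The SanClient type and method names are assumptions, not CloudStack's actual interfaces.

```java
import java.util.HashMap;
import java.util.Map;

// Stub stand-in for a SAN's management API.
class SanClient {
    final Map<String, Long> volumes = new HashMap<>();

    String createVolume(String name, long sizeBytes) {
        volumes.put(name, sizeBytes);
        // hand the IQN back so the framework can pass it to the agent later
        return "iqn.2013-09.com.example:" + name;
    }

    void deleteVolume(String name) {
        volumes.remove(name);
    }
}

// Sketch of a storage driver that never touches the hypervisor layer.
class StorageDriverSketch {
    private final SanClient san;

    StorageDriverSketch(SanClient san) { this.san = san; }

    // createAsync only provisions the SAN volume; attaching happens later.
    String createAsync(String volumeName, long sizeBytes) {
        return san.createVolume(volumeName, sizeBytes);
    }

    // deleteAsync is the reverse.
    void deleteAsync(String volumeName) {
        san.deleteVolume(volumeName);
    }
}
```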
On Fri, Sep 27, 2013 at 5:14 PM, Mike Tutkow
Well, from what I saw with XenServer and VMware, that hypervisor logic's
attachVolume command also assumed a VDI/VMDK was created in advance.
I had to put logic in those attachVolume methods to create the SR/VDI or
datastore/VMDK.
However, thinking back on it, it might have made more sense for th
Yeah, I can verify that my createPhysicalDisk method is not being
called at all in 4.2. The create happens on the mgmt server in your
driver (createAsync), then you pass the lun info on to KVM in
attachVolume (or StartCommand for starting VMs), and it assumes the
LUN will be there on the SAN when i
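The 4.2-style flow described here, where the management server creates the LUN and the agent-side attach only connects to it, can be sketched like this. All names are illustrative; the point is that the agent path no longer creates anything, so it should fail loudly if the LUN isn't there.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the two halves of the 4.2 flow discussed above.
class Kvm42FlowSketch {
    private final Set<String> lunsOnSan = new HashSet<>();

    // Management-server side: the driver provisions the LUN (createAsync).
    String createAsync(String volumeName) {
        String iqn = "iqn.2013-09.com.example:" + volumeName;
        lunsOnSan.add(iqn);
        return iqn;
    }

    // Agent side: connect only. createPhysicalDisk is not called in this
    // path, so a missing LUN is an error, not something to create.
    String connectPhysicalDisk(String iqn) {
        if (!lunsOnSan.contains(iqn)) {
            throw new IllegalStateException("LUN not provisioned: " + iqn);
        }
        return "/dev/disk/by-path/" + iqn;
    }
}
```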
On Fri, Sep 27, 2013 at 4:22 PM, Mike Tutkowski
wrote:
> Sure, sounds good - let me know when it's up on Review Board and I can take
> a look.
>
> I made most of the changes you and I talked about:
>
> https://github.com/mike-tutkowski/incubator-cloudstack/commit/eb9b2edfc9062f9ca7961fecd5379b180c
URL has changed to this:
https://github.com/mike-tutkowski/incubator-cloudstack/commit/636cf78bcd9d32ae9f20c0ccd631fcf41b829d43
I've just been squashing my commits and sending them to GitHub as one
commit. It should make it easier to see what's changed when comparing
against what's fairly recent
Sure, sounds good - let me know when it's up on Review Board and I can take
a look.
I made most of the changes you and I talked about:
https://github.com/mike-tutkowski/incubator-cloudstack/commit/eb9b2edfc9062f9ca7961fecd5379b180ca3aed1
I have a new idea, though, that I think will simplify this
Ok, I've got our plugin working against 4.2. Tested start vm, stop vm,
migrate vm, attach volume, detach volume. Other functions that we
already had in our StorageAdaptor implementation, such as copying
templates to primary storage, just worked without any modification
from our 4.1 version.
I'll
Thanks for the clarification on how that works.
Also, yeah, I think CHAP only grants you access to a volume. If multiple
hosts are using the CHAP credentials for a single volume, it's up to those
hosts to make sure they don't step on each other's toes (and this is - to
my understanding - how it works).
On Fri, Sep 27, 2013 at 12:21 AM, Mike Tutkowski
wrote:
> Maybe I should seek a little clarification as to how live migration works
> in CS with KVM.
>
> Before we do a live migration of VM 1 from Host 1 to Host 2, do we detach
> all disks from VM1?
>
> If so, then we're good to go there.
>
> I'm not as clear with HA.
Maybe I should seek a little clarification as to how live migration works
in CS with KVM.
Before we do a live migration of VM 1 from Host 1 to Host 2, do we detach
all disks from VM1?
If so, then we're good to go there.
I'm not as clear with HA.
If VM 1 goes down because Host 1 crashes, is the
Let me clarify this line a bit:
"We get away without this with XenServer and VMware because - as far as I
know - CS delegates HA and live migration to those clusters and they handle
it most likely with some kind of locking protocol on the SR/datastore."
When I set up a XenServer or a VMware clust
On Fri, Sep 27, 2013 at 12:06 AM, Mike Tutkowski
wrote:
> Hey Marcus,
>
> I agree that CHAP does not fulfill the same role as fencing.
>
> I think we're going to have trouble with HA and live migrations on KVM if
> the storage plug-in doesn't have a way of knowing when a host wants to
> access a volume and when we want to revoke access to that volume.
Hey Marcus,
I agree that CHAP does not fulfill the same role as fencing.
I think we're going to have trouble with HA and live migrations on KVM if
the storage plug-in doesn't have a way of knowing when a host wants to
access a volume and when we want to revoke access to that volume.
We get away without this with XenServer and VMware because - as far as I
know - CS delegates HA and live migration to those clusters and they handle
it most likely with some kind of locking protocol on the SR/datastore.
On Thu, Sep 26, 2013 at 10:23 PM, Mike Tutkowski
wrote:
> My comments are inline:
>
>
> On Thu, Sep 26, 2013 at 9:10 PM, Marcus Sorensen wrote:
>
>> Ok, let me digest this a bit. I got the github responses but I'd also
>> like to keep it on-list as well.
>>
>> My initial thoughts are:
>>
>> 1) I don't think disk format and size are necessary parameters for
>> connectPhysicalDisk.
My comments are inline:
On Thu, Sep 26, 2013 at 9:10 PM, Marcus Sorensen wrote:
> Ok, let me digest this a bit. I got the github responses but I'd also
> like to keep it on-list as well.
>
> My initial thoughts are:
>
> 1) I don't think disk format and size are necessary parameters for
> connectPhysicalDisk, as the format can be determined by the adaptor,
> and the size is set during the create.
Ok, let me digest this a bit. I got the github responses but I'd also
like to keep it on-list as well.
My initial thoughts are:
1) I don't think disk format and size are necessary parameters for
connectPhysicalDisk, as the format can be determined by the adaptor,
and the size is set during the create.
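Point 1 can be illustrated with a slimmed-down connect method; this is a hypothetical sketch, not the real StorageAdaptor interface. Since the size was fixed at create time, the only thing the adaptor still has to work out is the format, and it can do that itself from the path.

```java
// Sketch: connectPhysicalDisk without format/size parameters.
class FormatAwareAdaptorSketch {
    // The adaptor infers the format instead of receiving it as a parameter.
    String detectFormat(String volumePath) {
        if (volumePath.endsWith(".qcow2")) return "QCOW2";
        // block devices (e.g. an attached iSCSI LUN) are raw
        if (volumePath.startsWith("/dev/")) return "RAW";
        return "RAW";
    }

    String connectPhysicalDisk(String volumePath) {
        return detectFormat(volumePath) + ":" + volumePath;
    }
}
```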
Also, if we went the non-CHAP route, before attaching a volume to a VM,
we'd have to tell the plug-in to set up a volume access group.
When a volume is detached from a VM, we'd have to tell the plug-in to
delete the volume access group.
On Thu, Sep 26, 2013 at 7:32 PM, Mike Tutkowski <
mike.tutk
I mention this in my comments on GitHub, as well, but CHAP info is
associated with an account - not a storage pool.
Ideally we could do without CHAP info entirely if we had a reliable way to
tell the storage plug-in that a given host wants to access a given volume.
In this case, my storage plug-in
Hey Marcus,
Thanks for the comments.
I have added comments of my own.
Please let me know what you think.
Thanks!
On Thu, Sep 26, 2013 at 4:56 PM, Marcus Sorensen wrote:
> Looking at your code, is the chap info stored with the pool, so we
> could pass the pool to the adaptor? That would be more agnostic.
Looking at your code, is the chap info stored with the pool, so we
could pass the pool to the adaptor? That would be more agnostic,
anyone implementing a plugin could pull the specifics they need for
their stuff out of the pool on the adaptor side, rather than creating
custom signatures.
Also, I t
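The pool-details idea suggested above could look roughly like this; the class names and detail keys are assumptions for illustration (and note Mike's later point that CHAP info is actually per account, not per pool). The interface stays agnostic because plugin-specific keys live only inside the plugin's adaptor.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical pool object carrying generic key/value details.
class KVMStoragePoolSketch {
    private final Map<String, String> details = new HashMap<>();

    Map<String, String> getDetails() { return details; }
}

// The plugin's adaptor pulls out the specifics it needs, so no
// custom signatures leak into the shared StorageAdaptor interface.
class SolidFireAdaptorSketch {
    String chapUsername(KVMStoragePoolSketch pool) {
        return pool.getDetails().get("chapInitiatorUsername");
    }
}
```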
Oh, SnapshotTestWithFakeData is just modified because the code wasn't
building until I corrected this. It has nothing really to do with my real
changes.
On Thu, Sep 26, 2013 at 4:31 PM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:
> Hey Marcus,
>
> I implemented your recommendations regarding adding connect and disconnect
> methods.
Hey Marcus,
I implemented your recommendations regarding adding connect and disconnect
methods. It is not yet checked in (as you know, having trouble with my KVM
environment), but it is on GitHub here:
https://github.com/mike-tutkowski/incubator-cloudstack/commit/f2897c65689012e6157c0a0c2ed7e5355
Mike, everyone,
As I've mentioned on the board, I'm working on getting our own
internal KVM storage plugin working on 4.2. In the interest of making
it forward compatible, I just wanted to confirm what you were doing
with the solidfire plugin as far as attaching your iscsi LUNs. We had
discussed