Just an FYI for those whom it may concern:

The way I got around this issue with XenServer is to use a SolidFire
feature we refer to as Volume Access Groups (VAGs).

A VAG is essentially a way to map a host IQN to the volumes it can access
on the SAN without using CHAP.
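
For anyone who wants to see what that looks like on the wire, here is a
minimal sketch of creating a VAG through the SolidFire Element JSON-RPC API
(the management VIP, credentials, API version, IQN, and volume IDs below are
placeholders, not values from this thread):

    import requests

    # Placeholder cluster management VIP, API version, and admin credentials.
    ENDPOINT = "https://192.0.2.10/json-rpc/7.0"
    AUTH = ("admin", "password")

    def create_vag(name, initiator_iqns, volume_ids):
        """Create a Volume Access Group mapping host IQNs to volume IDs."""
        payload = {
            "method": "CreateVolumeAccessGroup",
            "params": {
                "name": name,
                "initiators": initiator_iqns,  # host IQNs allowed in
                "volumes": volume_ids,         # volumes they may see, no CHAP
            },
            "id": 1,
        }
        # verify=False only because many arrays ship self-signed certificates.
        resp = requests.post(ENDPOINT, json=payload, auth=AUTH, verify=False)
        resp.raise_for_status()
        return resp.json()["result"]["volumeAccessGroupID"]

    vag_id = create_vag("xenserver-pool-1",
                        ["iqn.2013-12.com.example:host-1"], [4, 7])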

For the sake of consistency (and because VAGs are actually more powerful
than CHAP), I went ahead and updated the SolidFire plug-in to use VAGs for
all of the hypervisor types it currently supports (XenServer, VMware, and
KVM).

Technically the SolidFire plug-in has no knowledge of the hypervisor in
question, so my updates also required that I continue with work Edison Su
had started in 4.2 to notify storage plug-ins when a host needs access to a
volume and when a host's access to a volume should be revoked. VAGs are
applied at those notification points.
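
The plug-in itself is Java, but the shape of those two notification points is
roughly the following (a toy Python sketch; the hook names and the in-memory
dict standing in for the SAN are illustrative only, and the real code issues
AddVolumesToVolumeAccessGroup / RemoveVolumesFromVolumeAccessGroup calls
instead):

    # Toy stand-in for the SAN: VAG name -> set of volume IDs it exposes.
    vags = {"xenserver-pool-1": set()}

    def grant_access(vag_name, volume_id):
        # Hosts behind this VAG can now log in to the volume without CHAP.
        vags[vag_name].add(volume_id)

    def revoke_access(vag_name, volume_id):
        # The volume disappears from those hosts' view of the SAN.
        vags[vag_name].discard(volume_id)

    grant_access("xenserver-pool-1", 42)   # e.g. a volume is being attached
    revoke_access("xenserver-pool-1", 42)  # e.g. the volume is detached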


On Wed, Dec 18, 2013 at 7:56 PM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:

> One alternative I have is to abandon using SolidFire's multi-tenancy
> ability (separating volumes by accounts for reporting purposes). If I only
> used one uber account, then I'd only be using one set of credentials.
>
> Another is more work and would have to wait until 4.4: Add logic to the
> storage plug-in framework to notify storage plug-ins when a host needs
> access to a given volume that's supplied by that plug-in's storage. In this
> model, the plug-in could choose to not use CHAP and instead tell the
> storage system that host x is OKed for accessing volume y.
>
> We already have the ability to listen for host-connect-to-storage events,
> but that is not granular enough for my purposes (as my storage is zone wide
> and can be used by tons of hosts at the same time, so I wouldn't want to
> give them all access to all of the volumes of the SAN).
>
> Does anyone know if this is just a XenServer problem?
>
> Thanks
>
>
> On Wed, Dec 18, 2013 at 7:37 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com> wrote:
>
>> Good points, Tim.
>>
>> Do you know if this is an issue the XenServer group plans to address
>> anytime soon?
>>
>> Thanks
>>
>>
>> On Wed, Dec 18, 2013 at 5:13 PM, Tim Mackey <tmac...@gmail.com> wrote:
>>
>>> The problem with doing that shows up during host reboot: only one of the
>>> credential sets will be used to do the discovery, and a discovery will
>>> occur on each reboot. It'll also cause issues with multipath.
>>>
>>> There will also be an issue during pool join, and there could be
>>> replication issues in xenstore.
>>>
>>> Net is that you can make things work by doing that, but error recovery
>>> paths and reboots break.
>>> On Dec 18, 2013 6:07 PM, "Mike Tutkowski" <mike.tutkow...@solidfire.com>
>>> wrote:
>>>
>>> > We have noticed that if I ssh into XenServer and delete the file
>>> > /etc/iscsi/10.10.8.108,3260 (where 10.10.8.108 is our storage's IP
>>> > address and 3260 is the port), I can create SRs using different CHAP
>>> > credentials.
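
For reference, on open-iscsi versions that have the discoverydb mode, the
supported equivalent of deleting that file by hand looks roughly like this (a
sketch; run it on the XenServer host itself):

    import subprocess

    PORTAL = "10.10.8.108:3260"  # the storage portal from this thread

    # Drop the cached sendtargets record (including its CHAP settings), then
    # re-run discovery so a different set of credentials can be supplied.
    subprocess.run(["iscsiadm", "-m", "discoverydb", "-t", "sendtargets",
                    "-p", PORTAL, "-o", "delete"], check=True)
    subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets",
                    "-p", PORTAL], check=True)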
>>> >
>>> > Can anyone think of a "gotcha" here?
>>> >
>>> > Thanks!
>>> >
>>> >
>>> > On Wed, Dec 18, 2013 at 3:18 PM, Tim Mackey <tmac...@gmail.com> wrote:
>>> >
>>> > > Mike,
>>> > >
>>> > > I'm referring to the open-iscsi code used by XAPI.  XAPI has a storage
>>> > > manager API which deals with all the SR management.  It's also where
>>> > > the issue you're running into exists.
>>> > >
>>> > > In terms of clearing the connections and credentials, the easiest way
>>> > > is via a reboot.  Since you're using multiple CHAP credentials, only
>>> > > one set will be used, and any SRs which use a different CHAP secret
>>> > > will fail to have their targets discovered during the pbd-plug phase
>>> > > of storage initialization.  You can then destroy the SRs which failed
>>> > > to come up and move forward.
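
A sketch of that cleanup step using the standard xe CLI (Python 3 here only
for consistency with the other sketches; pbd-list and sr-forget are the real
commands, everything else is illustrative):

    import subprocess

    def xe(*args):
        """Run an xe CLI command and return its stdout."""
        return subprocess.run(["xe", *args], check=True,
                              capture_output=True, text=True).stdout

    # Find the SRs whose PBDs failed to plug after the reboot...
    failed = xe("pbd-list", "currently-attached=false",
                "params=sr-uuid", "--minimal").strip()

    # ...and forget them so they can be recreated with working credentials.
    for sr_uuid in filter(None, failed.split(",")):
        xe("sr-forget", "uuid=" + sr_uuid)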
>>> > >
>>> > > -tim
>>> > >
>>> > >
>>> > > On Wed, Dec 18, 2013 at 4:35 PM, Mike Tutkowski <
>>> > > mike.tutkow...@solidfire.com> wrote:
>>> > >
>>> > > > Hey Tim,
>>> > > >
>>> > > > When you refer to modifying the storage manager, are you referring
>>> > > > to CloudStack?
>>> > > >
>>> > > > Perhaps you are referring to CitrixResourceBase, which is where we
>>> > > > discover and log in to iSCSI targets.
>>> > > >
>>> > > > Do you know of a way to delete those cached CHAP credentials via
>>> > > > XAPI so that discovery works when new credentials are used?
>>> > > >
>>> > > > Thanks!
>>> > > >
>>> > > >
>>> > > > On Wed, Dec 18, 2013 at 2:22 PM, Tim Mackey <tmac...@gmail.com> wrote:
>>> > > >
>>> > > > > Unfortunately, what you're experiencing is how it works.  While
>>> > > > > XenServer does support different CHAP credentials per SR, it only
>>> > > > > supports a single CHAP credential for discovery.  It can be made
>>> > > > > to work, but you'd need to either modify how the storage manager
>>> > > > > works to pull it off, or rewrite some of the init scripts to cache
>>> > > > > the discovery data.
>>> > > > >
>>> > > > >
>>> > > > > On Wed, Dec 18, 2013 at 3:55 PM, Mike Tutkowski <
>>> > > > > mike.tutkow...@solidfire.com> wrote:
>>> > > > >
>>> > > > > > Hi,
>>> > > > > >
>>> > > > > > I just noticed a problem today while creating SRs in XenServer.
>>> > > > > > Perhaps someone with related experience could point me in the
>>> > > > > > right direction.
>>> > > > > >
>>> > > > > > Let's say my SAN's management IP address is X.
>>> > > > > >
>>> > > > > > I can have XenServer create a shared SR using IP address X with
>>> > > > > > CHAP credentials Y.
>>> > > > > >
>>> > > > > > If I try to have XenServer create another shared SR using IP
>>> > > > > > address X that uses different CHAP credentials (e.g., CHAP
>>> > > > > > credentials Z), XenServer returns a discovery failure.
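
To make the repro concrete, the two attempts look roughly like this with the
xe CLI (IQNs and secrets are placeholders; on a real create a
device-config:SCSIid for the LUN is also required):

    import subprocess

    def sr_create(name, target_iqn, chap_user, chap_password):
        """Attempt to create a shared lvmoiscsi SR against SAN IP X."""
        subprocess.run([
            "xe", "sr-create", "name-label=" + name,
            "type=lvmoiscsi", "shared=true",
            "device-config:target=X",  # same SAN IP address for both SRs
            "device-config:targetIQN=" + target_iqn,
            "device-config:chapuser=" + chap_user,
            "device-config:chappassword=" + chap_password,
        ], check=True)

    sr_create("sr-one", "iqn.example:vol-1", "userY", "secretY")  # works
    sr_create("sr-two", "iqn.example:vol-2", "userZ", "secretZ")  # discovery fails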
>>> > > > > >
>>> > > > > > It's like XenServer is expecting all iSCSI targets at the same
>>> > > > > > IP address to have the same CHAP credentials.
>>> > > > > >
>>> > > > > > Does anyone know if I am mistaken here? If this behavior is
>>> > > > > > real, it seems like a critical defect.
>>> > > > > >
>>> > > > > > Thanks!
>>> > > > > >
>>> > > > > > --
>>> > > > > > *Mike Tutkowski*
>>> > > > > > *Senior CloudStack Developer, SolidFire Inc.*
>>> > > > > > e: mike.tutkow...@solidfire.com
>>> > > > > > o: 303.746.7302
>>> > > > > > Advancing the way the world uses the
>>> > > > > > cloud<http://solidfire.com/solution/overview/?video=play>
>>> > > > > > *™*
>>> > > > > >
>>> > > > >
>>> > > >
>>> > >
>>> >
>>>
>>
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the cloud™
<http://solidfire.com/solution/overview/?video=play>
