Nothing special you need on the Linux end.

Other than what Mike said.

So ... 0x500507680130eda4, 0x500507680140ed9c, 0x500507680120eda4, and
0x500507680110ed9c are the NPIV WWPNs on the Linux end, yes?

What are the storage FA WWPNs?

Linux would be pointing at

   storageFAone/0x0000000000000000
   storageFAtwo/0x0000000000000000
   storageFAone/0x0001000000000000
   storageFAtwo/0x0001000000000000

(Assuming both LUNs are presented by the same pair of FAs.)
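If that's the layout, mapping LUN 1 onto the existing adapters is one
sysfs write per path -- no new vdevs needed. A rough sketch (the $FA1/$FA2
variables are placeholders for the real storage FA WWPNs, which `lszfcp -P`
will show per adapter; note the FCP LUN for SCSI LUN 1 is
0x0001000000000000, not 0x0000000000000001):

```shell
# Placeholders: set these to the storage FA WWPNs reported by 'lszfcp -P'
FA1=0x...   # remote port seen by adapter 0.0.2000
FA2=0x...   # remote port seen by adapter 0.0.3000

# Attach SCSI LUN 1 over the existing adapters, one unit_add per path.
# The FCP LUN encoding puts the LUN number in the high-order bytes.
echo 0x0001000000000000 > /sys/bus/ccw/drivers/zfcp/0.0.2000/$FA1/unit_add
echo 0x0001000000000000 > /sys/bus/ccw/drivers/zfcp/0.0.3000/$FA2/unit_add

lszfcp -D      # should now list two triples per LUN
multipath -r   # reload maps; a second mpath device should appear
```

That 0x0001... vs 0x...0001 distinction may be why zFCP "won't take" the
LUN 1 definition on 2000/3000.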




On Wed, Nov 20, 2013 at 11:57 AM, Martha McConaghy <u...@vm.marist.edu> wrote:
> Rick,
>
> We already did the zoning, etc.  (I can do that stuff in my sleep now.)  I'm
> training someone as my backup, so he needed the practice.
>
> However, I agree that using the same devices for both LUNs would be good,
> especially since one of the LUNs is not very busy.  I just could not get
> zFCP to let me do it.  The two original devices are 2000 and 3000.  I tried
> to define LUN 1 on them in addition to LUN 0.  It just won't take.  Is there
> some parm I need to set instead of the defaults to make it work?
>
> Martha
>
>
> On Wed, 20 Nov 2013 11:55:57 -0500 Richard Troth said:
>>I recommend that you use the same FCP adapters for the new LUN. That
>>way, your (NPIV or not) WWPNs for the guest are unique to that guest
>>regardless how many LUNs it gets. If you add another set of FCP
>>adapters for each LUN, then you'll have to zone and mask each new LUN
>>to a different set of WWPNs ... even though they're intended for the
>>same guest "client".
>>
>>In this case, you need to be sure you brought the new FCPs online to
>>Linux. (Maybe you already did and I missed that. Sorry.)
>>
>>
>>
>>
>>
>>
>>On Wed, Nov 20, 2013 at 11:15 AM, Martha McConaghy <u...@vm.marist.edu> wrote:
>>> I've run into an annoying problem and hope someone can point me in the right
>>> direction.  It's probably a wrong config parm somewhere, but I'm just not
>>> seeing it.
>>>
>>> I have a SLES 11 SP1 server running under z/VM.  It already has 1 SAN LUN
>>> attached to it via direct connections and NPIV.  zFCP and multipathd are
>>> already in place and it works fine.  I'm adding a 2nd LUN to the server, from
>>> the same storage host.  At first, I assumed that I should add 2 new paths to
>>> the server for the new LUN, which is what I did.  The original 2 vdevices
>>> are 2000 and 3000.  So, I added 2100 and 3100 and connected them to two new
>>> rdevs on the VM side, as usual.  The LUN was created on the storage host,
>>> mapped to server and SAN zones created.  All good.
>>>
>>> Now, I defined the zFCP configs for 2100 and 3100 and mapped them to LUN 1.
>>> 2000 and 3000 are still mapped to LUN 0.  Things look OK.
>>>
>>> lxfdrwb2:/etc # lszfcp -D
>>> 0.0.2000/0x500507680130eda4/0x0000000000000000 1:0:2:0
>>> 0.0.3000/0x500507680140ed9c/0x0000000000000000 0:0:7:0
>>> 0.0.2100/0x500507680120eda4/0x0000000000000001 2:0:3:0
>>> 0.0.3100/0x500507680110ed9c/0x0000000000000001 3:0:2:0
>>>
>>> However, multipathd continues to ONLY see the original 3.9TB LUN.  It seems
>>> to interpret the changes as 4 paths to LUN 0, instead of 2 paths to LUN 0 and
>>> 2 paths to LUN 1.
>>>
>>> lxfdrwb2:/etc # multipath -ll
>>> 3600507680180876ce000000000000029 dm-0 IBM,2145
>>> size=3.9T features='1 queue_if_no_path' hwhandler='0' wp=rw
>>> |-+- policy='round-robin 0' prio=50 status=active
>>> | |- 0:0:7:0   sda   8:0   active ready running
>>> | `- 3:0:2:0   sdd   8:48  active ready running
>>> `-+- policy='round-robin 0' prio=10 status=enabled
>>>   |- 1:0:2:0   sdb   8:16  active ready running
>>>   `- 2:0:3:0   sdc   8:32  active ready running
>>>
>>> I've tried flushing the multipath map.  I've even deleted the original zFCP
>>> configuration and rebuilt it.  Nothing seems to help.  It also occurred to
>>> me that I might use the original 2 paths (2000 and 3000) to also connect to
>>> LUN 1, but zFCP will have none of that.
>>>
>>> I suspect that there is a multipath or zfcp parameter that I have wrong, but
>>> Googling around hasn't yielded any answers yet.  I'm sure others have done
>>> this; can you steer me in the right direction?  I do this all the time with
>>> Edev disks, but not as much for direct attaches.
>>>
>>> Martha
>>>
>>> ----------------------------------------------------------------------
>>> For LINUX-390 subscribe / signoff / archive access instructions,
>>> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
>>> http://www.marist.edu/htbin/wlvindex?LINUX-390
>>> ----------------------------------------------------------------------
>>> For more information on Linux on System z, visit
>>> http://wiki.linuxvm.org/
>>
>>
>>
>>--
>>-- R;
>>Rick Troth
>>Velocity Software
>>http://www.velocitysoftware.com/
>>
>>----------------------------------------------------------------------
>>For LINUX-390 subscribe / signoff / archive access instructions,
>>send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
>>http://www.marist.edu/htbin/wlvindex?LINUX-390
>>----------------------------------------------------------------------
>>For more information on Linux on System z, visit
>>http://wiki.linuxvm.org/
>
> ----------------------------------------------------------------------
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> ----------------------------------------------------------------------
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



--
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

