Re: SLES and XIV san

2014-09-30 Thread Berthold Gunreben
On Tue, 30 Sep 2014 03:38:58 +
Smith, Ann (CTO Service Delivery) ann.sm...@thehartford.com wrote:

 For EMC vmax1 and vmax2 san we have 2 fcp addresses dedicated to the
 server/guest id (1 on each of 2 fabrics): 1 zfcp_disk_configure
 command for each lun - 1 for each fcp address. So 2 total
 zfcp_disk_configure commands - 1 per zfcp device - 1 per fabric.
 
 For IBM XIV the doc I have found indicates XIV supports 3 paths - not
 sure what you mean by 6 interface modules - but if I still have 2 zfcp
 devices, does that mean I can do 3 zfcp_disk_configure's per zfcp
 device?

It really depends on the setup of the SAN. What you have to do is
run zfcp_disk_configure once for every available path.

If we assume that the storage is attached to two fabrics, and if we
also assume that it has three interfaces attached to each of the
fabrics, you have 6 different paths from your two zFCP devices to the
storage. This also means that you have 6 fibre connections from the
storage to your two fabrics.

I don't have an XIV here, but the principle is the same for all storage systems.

If you really have three IOPORTs per fabric, you can find the
respective WWPN of each IOPORT in the storage system. With DS8000 and
dscli, it would be shown with

lsioport

To activate the disks, you will also need the LUN. Again, with DS8000
and dscli, you can find the first 8 digits with

showvolgrp -lunmap groupid

The second 8 digits are just zero in that case. In order to activate a
path to, say, WWPN 50050763051B473A and LUN 4010400A on the zFCP device
0.0.fc00, you would then run the command

zfcp_disk_configure 0.0.fc00 50050763051b473a 4010400a 1

Your local device FC00 would need to be in the same fabric as the
Storage IOPORT with WWPN 50050763051b473a of course.
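To cover all six paths in the two-fabric example above, you end up with six such commands, one per (zFCP device, storage WWPN) pair. A minimal sketch, assuming a second zFCP device 0.0.fd00 in the other fabric and purely hypothetical storage WWPNs (substitute the real IOPORT WWPNs from your storage system; the script only prints the commands, drop the echo to actually run them):

```shell
#!/bin/sh
# Hypothetical example: six paths = 2 zFCP devices x 3 storage ports.
# All WWPNs below are placeholders; look up the real ones on your storage.
LUN=4010400a

gen_cmds() {
    # Three storage ports reachable via 0.0.fc00 (fabric A)
    for WWPN in 5001738000000140 5001738000000150 5001738000000160; do
        echo "zfcp_disk_configure 0.0.fc00 $WWPN $LUN 1"
    done
    # Three storage ports reachable via 0.0.fd00 (fabric B)
    for WWPN in 5001738000000142 5001738000000152 5001738000000162; do
        echo "zfcp_disk_configure 0.0.fd00 $WWPN $LUN 1"
    done
}

# Print the six commands instead of running them
gen_cmds
```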

With SLES11 SP3, after activating the zFCP device (with
zfcp_host_configure), you can also use the command lsluns to display
the available luns. The above disk would probably look similar to this:

Scanning for LUNs on adapter 0.0.fc00
at port 0x50050763051b473a:
0x4010400a

Hope this helps...

Berthold

-- 
--
 Berthold Gunreben  Build Service Team
 http://www.suse.de/ Maxfeldstr. 5
 SUSE LINUX Products GmbH   D-90409 Nuernberg, Germany
 GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer
 HRB 16746 (AG Nürnberg)

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: SLES and XIV san

2014-09-29 Thread Smith, Ann (CTO Service Delivery)
For EMC vmax1 and vmax2 san we have 2 fcp addresses dedicated to the
server/guest id (1 on each of 2 fabrics): 1 zfcp_disk_configure command for each lun - 1 for each fcp address.
So 2 total zfcp_disk_configure commands - 1 per zfcp device - 1 per fabric.

For IBM XIV the doc I have found indicates XIV supports 3 paths - not sure what
you mean by 6 interface modules - but if I still have 2 zfcp devices, does that
mean I can do 3 zfcp_disk_configure's per zfcp device?

We are not using NPIV (wish we were).

Also - Mark, I've never used YaST for san configuration.
We have had issues with YaST and HiperSockets with SLES11 (have not had time to
debug and call in - have to verify it was not caused by bad statements in the
SLES10 configs before we upgraded to 11).
But really I was sticking with commands partly due to the need to recover the san
at the DR site (DR tests use lun snap copies with different hex lun values).
Of course I wonder if XIV will be the same - I need to talk to the san support folks.
We have no test san luns - which does not help - trying to get one.
Maybe you know something I don't know (I am sure you do). Maybe there is a way
to use YaST (we can't use YaST for this now) to define the san at the primary and
DR sites - I am open to suggestions on that too.

Ann Smith
-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Offer 
Baruch
Sent: Tuesday, September 23, 2014 12:52 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: SLES and XIV san

One more thing...
When using XIV it is important to utilize all 6 interface modules (if you have 
all 6 of them).
Meaning 6 paths to the XIV: 3 from each fabric.
Use at least 2 zfcp devices, 1 from each fabric.

Offer Baruch
On Sep 23, 2014 7:42 PM, Offer Baruch offerbar...@gmail.com wrote:

 Hi...
 If you are using NPIV you can use the following zfcp module parameter:
 zfcp.allow_lun_scan=1
 This will add the zfcp luns automatically.
 All you need is to set the zfcp device online and all luns that are 
 available through the san will be there dynamically.

 Really no need to define them one by one.

 Offer Baruch
 On Sep 23, 2014 5:13 PM, Mark Post mp...@suse.com wrote:

  On 9/23/2014 at 08:45 AM, Smith, Ann (CTO Service Delivery)
 ann.sm...@thehartford.com wrote:
  We need to migrate all san luns from EMC vmax2 to IBM XIV.
  I've been looking for doc on how to configure XIV luns with SLES11.
  For example - are multiple zfcp_disk_configure commands required?

 I would say that if you're going to be adding (and eventually 
 removing) a fair number of LUNs from each guest, then using the YaST 
 dialog would probably be more efficient for you.  If it's only one or 
 two, then manually running a zfcp_disk_configure command for each of them 
 won't be too bad.

  How many fcp addresses are dedicated to VM guest?

 Only you can decide how many there should be, or discover how many 
 there are currently.

  I am a bit confused about the additional paths.

 Which additional paths are you referring to?


 Mark Post





This communication, including attachments, is for the exclusive use of 
addressee and may contain proprietary, confidential and/or privileged 
information.  If you are not the intended recipient, any use, copying, 
disclosure, dissemination or distribution is strictly prohibited.  If you are 
not the intended recipient, please notify the sender immediately by return 
e-mail, delete this communication and destroy all copies.



SLES and XIV san

2014-09-23 Thread Smith, Ann (CTO Service Delivery)
We need to migrate all san luns from EMC vmax2 to IBM XIV.
I've been looking for doc on how to configure XIV luns with SLES11.
For example - are multiple zfcp_disk_configure commands required?
How many fcp addresses are dedicated to VM guest?
I am a bit confused about the additional paths.

If other folks have moved to XIV perhaps they can point me in the correct 
direction.
Thanks

Ann Smith






Re: SLES and XIV san

2014-09-23 Thread Rick Troth
On 09/23/2014 08:45 AM, Smith, Ann (CTO Service Delivery) wrote:
 We need to migrate all san luns from EMC vmax2 to IBM XIV.
 I've been looking for doc on how to configure XIV luns with SLES11.
 For example - are multiple zfcp_disk_configure commands required?
 How many fcp addresses are dedicated to VM guest?
 I am a bit confused about the additional paths.

I recommend that you configure your IBM LUNs very much like you've
configured your EMC LUNs, but it might help to put them on different FCP
adapters (if there is/was any adapter re-use, which often does make
sense).

How are the EMC volumes defined now? How many paths?

Are you using LVM? If so, you can _possibly let LVM do the actual
migration_. That's a big help.

Is the IBM storage on different switches or the same fabric?

As far as filesystems and data, a LUN is a LUN. Seriously. It should not
be difficult once the layout is identified.
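The LVM migration Rick mentions can be done online with pvmove. A hedged sketch, assuming a volume group named vg_app, an old EMC multipath device mpatha and a new XIV device mpathb (all names hypothetical); the script only prints the commands it would run:

```shell
#!/bin/sh
# Sketch of an online LVM migration from old (EMC) to new (XIV) LUNs.
# Device and VG names are hypothetical; run() echoes instead of executing.
OLD=/dev/mapper/mpatha    # existing EMC LUN (current PV)
NEW=/dev/mapper/mpathb    # new XIV LUN
VG=vg_app

run() { echo "$@"; }       # replace the echo with "$@" to really run

run pvcreate "$NEW"        # initialize the new LUN as a physical volume
run vgextend "$VG" "$NEW"  # add it to the volume group
run pvmove "$OLD" "$NEW"   # move all extents off the old LUN, online
run vgreduce "$VG" "$OLD"  # drop the emptied old LUN from the VG
run pvremove "$OLD"        # wipe the PV label so the LUN can be unmapped
```

The filesystems stay mounted throughout; only pvmove takes noticeable I/O time.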

--

Rick Troth
Senior Software Developer

Velocity Software Inc.
Mountain View, CA 94041
Main: (877) 964-8867
Direct: (614) 594-9768
ri...@velocitysoftware.com






Re: SLES and XIV san

2014-09-23 Thread Mark Post
 On 9/23/2014 at 08:45 AM, Smith, Ann (CTO Service Delivery)
ann.sm...@thehartford.com wrote: 
 We need to migrate all san luns from EMC vmax2 to IBM XIV.
 I've been looking for doc on how to configure XIV luns with SLES11.
 For example - are multiple zfcp_disk_configure commands required?

I would say that if you're going to be adding (and eventually removing) a fair 
number of LUNs from each guest, then using the YaST dialog would probably be 
more efficient for you.  If it's only one or two, then manually running a 
zfcp_disk_configure command for each of them won't be too bad.

 How many fcp addresses are dedicated to VM guest?

Only you can decide how many there should be, or discover how many there are 
currently.

 I am a bit confused about the additional paths.

Which additional paths are you referring to?


Mark Post



Re: SLES and XIV san

2014-09-23 Thread Offer Baruch
Hi...
If you are using NPIV you can use the following zfcp module parameter:
zfcp.allow_lun_scan=1
This will add the zfcp luns automatically.
All you need is to set the zfcp device online and all luns that are
available through the san will be there dynamically.

Really no need to define them one by one.
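For completeness, a hedged sketch of what that looks like on SLES; the device number 0.0.fc00 is only an example, and the script just prints the steps rather than executing them:

```shell
#!/bin/sh
# Sketch: enable automatic LUN scanning (NPIV setups only) and set a
# zFCP device online. Device number is an example placeholder.
DEV=0.0.fc00

steps() {
    # 1. Add the module parameter to the kernel command line in
    #    /etc/zipl.conf, then rewrite the boot record:
    echo "add 'zfcp.allow_lun_scan=1' to the parameters line in /etc/zipl.conf"
    echo "zipl"
    # 2. Set the zFCP device online; with NPIV the LUNs zoned and
    #    mapped to you then appear without zfcp_disk_configure calls:
    echo "chccwdev -e $DEV"
}
steps
```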

Offer Baruch
On Sep 23, 2014 5:13 PM, Mark Post mp...@suse.com wrote:

  On 9/23/2014 at 08:45 AM, Smith, Ann (CTO Service Delivery)
 ann.sm...@thehartford.com wrote:
  We need to migrate all san luns from EMC vmax2 to IBM XIV.
  I've been looking for doc on how to configure XIV luns with SLES11.
  For example - are multiple zfcp_disk_configure commands required?

 I would say that if you're going to be adding (and eventually removing) a
 fair number of LUNs from each guest, then using the YaST dialog would
 probably be more efficient for you.  If it's only one or two, then manually
 running a zfcp_disk_configure command for each of them won't be too bad.

  How many fcp addresses are dedicated to VM guest?

 Only you can decide how many there should be, or discover how many there
 are currently.

  I am a bit confused about the additional paths.

 Which additional paths are you referring to?


 Mark Post





Re: SLES and XIV san

2014-09-23 Thread Offer Baruch
One more thing...
When using XIV it is important to utilize all 6 interface modules (if you
have all 6 of them).
Meaning 6 paths to the XIV: 3 from each fabric.
Use at least 2 zfcp devices, 1 from each fabric.
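A minimal sketch of that layout, assuming example device numbers 0.0.fc00 and 0.0.fd00 (one zFCP device per fabric; the script only prints the commands, as in the real setup they need the s390 hardware present):

```shell
#!/bin/sh
# Sketch: one zFCP device per fabric, set persistently online with
# zfcp_host_configure. Device numbers are example placeholders.
hosts() {
    for DEV in 0.0.fc00 0.0.fd00; do
        echo "zfcp_host_configure $DEV 1"
    done
}
hosts
```

After both hosts are online, each LUN gets one zfcp_disk_configure call per path, three per device in the six-path XIV layout.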

Offer Baruch
On Sep 23, 2014 7:42 PM, Offer Baruch offerbar...@gmail.com wrote:

 Hi...
 If you are using NPIV you can use the following zfcp module parameter:
 zfcp.allow_lun_scan=1
 This will add the zfcp luns automatically.
 All you need is to set the zfcp device online and all luns that are
 available through the san will be there dynamically.

 Really no need to define them one by one.

 Offer Baruch
 On Sep 23, 2014 5:13 PM, Mark Post mp...@suse.com wrote:

  On 9/23/2014 at 08:45 AM, Smith, Ann (CTO Service Delivery)
 ann.sm...@thehartford.com wrote:
  We need to migrate all san luns from EMC vmax2 to IBM XIV.
  I've been looking for doc on how to configure XIV luns with SLES11.
  For example - are multiple zfcp_disk_configure commands required?

 I would say that if you're going to be adding (and eventually removing) a
 fair number of LUNs from each guest, then using the YaST dialog would
 probably be more efficient for you.  If it's only one or two, then manually
 running a zfcp_disk_configure command for each of them won't be too bad.

  How many fcp addresses are dedicated to VM guest?

 Only you can decide how many there should be, or discover how many there
 are currently.

  I am a bit confused about the additional paths.

 Which additional paths are you referring to?


 Mark Post



