Re: SAN and Linux on zSeries

2007-12-11 Thread Robert J Brenneman
On Dec 10, 2007 5:29 PM, Alan Altmark [EMAIL PROTECTED] wrote:

 Then, if you have to replace a
 card or switch, you can figure out who gets what.  (Though I assume that's
 not a new problem for SAN managers.)

 Alan Altmark
 z/VM Development
 IBM Endicott


Hmm - actually -

If you're using NPIV, you've defined pseudo WWPNs to be assigned to the FCP
subchannels, and that configuration gets stored on the SE (the ThinkPad on the side).

If you lose that FCP port and Service replaces the adapter, then once you
bring the card back online and get the switch talking to it ( the NPIV
config on the switch side of the link ) you should be back in business. The
SE will write the NPIV config back onto the channel when it comes online.
You shouldn't have to change any Fabric Zoning in the switches or LUN
Masking in the storage subsystem.

I'll have to check that it actually works this way - but this could be a
major reason to use NPIV in and of itself. It could conceivably make
management easier, in addition to providing security and access control.
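One way to check: snapshot the WWPNs the Linux guests actually report before and
after the repair action. A rough Python sketch, using the standard Linux fc_host
sysfs attributes; the exact device-path layout varies by kernel level, so treat
the path handling as an assumption rather than a recipe:

    import glob, os

    # Record the WWPN in use by each FC host (one per zfcp subchannel) so it
    # can be compared after an adapter replacement.  port_name is the standard
    # Linux FC transport attribute exposed for zfcp devices.
    for host in sorted(glob.glob('/sys/class/fc_host/host*')):
        with open(os.path.join(host, 'port_name')) as f:
            wwpn = f.read().strip()
        # On zfcp the resolved sysfs path normally contains the ccw bus-ID
        # (e.g. 0.0.b001) of the FCP subchannel; parse it out if you want a
        # rdev-to-WWPN table.
        print('%s %s %s' % (os.path.basename(host), wwpn,
                            os.path.realpath(host)))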


-- 
Jay Brenneman


Re: SAN and Linux on zSeries

2007-12-10 Thread Romanowski, John (OFT)
Ray,
You mention that the number of usable NPIV subchannels is subject to switch
limitations.

Are switch limitations what the IBM Redbook is referencing when it says not to
use more than 32 subchannels per physical channel in NPIV mode?
With a Brocade switch, could I use 255 subchannels in NPIV mode?

(see Introducing N_Port Identifier Virtualization
for IBM System z9  page 4
http://www.redbooks.ibm.com/redpapers/pdfs/redp4125.pdf )

 Configuration considerations 
Some general recommendations for using NPIV include:
Do not use more than 32 subchannels per physical channel in NPIV mode.
 Also, do not perform more than 128 total target logins (for example, in a 
configuration with 32 subchannels, limit the number of target logins to no more 
than an average of 4). Using more subchannels, target logins, or both can 
create timeouts.








From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On Behalf Of 
Raymond Higgs
Sent: Saturday, December 08, 2007 2:01 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SAN and Linux on zSeries


snip
Not all 480 subchannels can be used with NPIV because of switch limitations.  
The 480 number was chosen arbitrarily from the layout of the SIGA vector long 
before NPIV was implemented.  I think Brocade supports the most virtual nports 
per physical port, 255. 

Ray Higgs
System z FCP Development
Bld. 706, B24
2455 South Road
Poughkeepsie, NY 12601
(845) 435-8666,  T/L 295-8666
[EMAIL PROTECTED]


Re: SAN and Linux on zSeries

2007-12-10 Thread David Boyes
 SMI-S provides a standard management API to make tool building easier. 
 
This is an interesting statement. Would there be interest in a CMS-based SMI-S 
library to enable building tools to automate this sort of stuff, and would IBM 
integrate it if one were available? Seems like it would be a good enablement 
project.
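For reference, SMI-S is CIM-XML over HTTP under the covers, so whatever shape a
CMS library took, a client call would look roughly like this Python sketch using
the open-source pywbem package. The host, credentials, and namespace are
placeholders; CIM_StorageVolume comes from the standard SNIA/DMTF storage model.

    import pywbem

    # Connect to a hypothetical SMI-S provider (CIMOM) fronting a storage
    # controller or switch.  URL and credentials are placeholders.
    conn = pywbem.WBEMConnection('https://smis-provider.example.com:5989',
                                 ('cimuser', 'cimpass'),
                                 default_namespace='root/cimv2')

    # Enumerate the storage volumes (LUNs) the provider exposes.
    for vol in conn.EnumerateInstances('CIM_StorageVolume'):
        print(vol['DeviceID'])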
 
 


Re: SAN and Linux on zSeries

2007-12-10 Thread David Boyes
 The zoning and access control isn't going to go away just because CP is
 involved.

Correct, but you don't have to do it as frequently, which is the point. There 
are too many places in the world where this stuff is done by hand by people 
with mouses.  A lot of people like the EDEV approach because it minimizes the 
amount of time you have to wait for other people to do stuff, and the tools we 
have understand how to manage that right now without modifications. 

 The current thinking is
 that guests need to be managed just like their distributed cousins when it
 comes to SAN connectivity.
 
I think this argument is full of "it depends on your organization."   
 
I'd argue that VM needs to provide a virtual analogue to a SAN switch, just as 
you currently provide the VSWITCH for data networking connectivity. IF that 
were the case, and VM implemented the SMI standard interconnects with external 
switches, then this problem goes away and we all live happily ever after 
because the SAN people connect me once and I look like any other peer switch in 
their fabric. Then we can talk about tooling and automation as peers, not as 
clients. 8-)

 Wouldn't it be better to use tools that can talk to the storage
 controllers, the switches, and VM (via SMI-S  or CIM) in order to properly
 orchestrate storage provisioning rather than trying to get the tail to wag
 the dog?  
 
See above. That'd be the ideal solution, but it involves convincing people 
who report up different food chains, and going against the "management by 
glossy magazine" crowd who believe the GUI is god.  Three to five years out, this 
is the way to go. Right now, it's not a realistic or practical view in most 
organizations. If you're saying that we should port Aperi or one of the general 
storage manager tools that understand SMI-S, that's more interesting, 
especially if we can get it to understand CMS volumes. 
 
 


Re: SAN and Linux on zSeries

2007-12-10 Thread Rob van der Heij
On Dec 9, 2007 11:08 PM, Alan Altmark [EMAIL PROTECTED] wrote:

 Are you really suggesting inserting CP into the middle of SCSI I/O in
 order to create FC minidisks?  If so, that requirement needs to be made
 with bold underscored italics to get attention.  The current thinking is
 that guests need to be managed just like their distributed cousins when it
 comes to SAN connectivity.

You make it sound like virtualization is some novel idea that we can't
be certain will stay around. CP has the ability to isolate the virtual
machine from details (like where the data is, what type of device it is
on, etc.). When the guest does not need to be aware of such details, CP
has the freedom to arrange things as it sees fit on a system-wide level
(duplication of data, multiple paths, amount of cache, etc.).
As long as the Linux configuration needs intimate knowledge of how its
data is managed, things like performance management and disaster
fall-back will remain very fragile, to say the least.

Given the way Linux uses disk space, I would suggest that z/VM should
provide the virtual machine with a bunch of blocks (probably
over-commit that as well, like with SFS). A high-level interface for
the I/O operations would give CP the freedom to perform them
efficiently (with the added bonus that you don't need the virtual
machine to be involved to drive the actual I/O).

Rob
-- 
Rob van der Heij
Velocity Software, Inc
http://velocitysoftware.com/


Re: SAN and Linux on zSeries

2007-12-10 Thread David Boyes
 I'd argue that VM needs to provide a virtual analogue to a SAN switch, just 
 as you currently provide the VSWITCH for data
  networking connectivity. IF that were the case, and VM implemented the SMI 
 standard interconnects with external switches, 
 then this problem goes away and we all live happily ever after because the 
 SAN people connect me once and I look like any 
 other peer switch in their fabric. Then we can talk about tooling and 
 automation as peers, not as clients. 8-)

This prompts another look at iSCSI support, methinks. iSCSI support for SFS or 
BFS would be *very* interesting, especially coupled with VSWITCH.  That would 
mirror the general direction in the small systems world as well (10G Ethernet 
is making significant inroads into the SAN marketplace; wouldn't it be fun to 
*beat* the curve rather than follow it -- just once?). 

It'd also actually justify using SFS/BFS for normal VM use, IMHO, especially 
since mainline Linux code now supports iSCSI root partitions. It'd be very 
interesting to move the boot kernel to NSS, and then use iSCSI to get 
everything else. Would also remove all that zipl mucking about. 

 



Re: SAN and Linux on zSeries

2007-12-10 Thread Raymond Higgs
The IBM z/VM Operating System IBMVM@LISTSERV.UARK.EDU wrote on 
12/10/2007 08:33:38 AM:

 Ray,
 You mention that the number of usable NPIV subchannels is subject to switch
 limitations.

 Are switch limitations what the IBM Redbook is referencing when it says not
 to use more than 32 subchannels per physical channel in NPIV mode?
 With a Brocade switch, could I use 255 subchannels in NPIV mode?
 
 (see Introducing N_Port Identifier Virtualization
 for IBM System z9  page 4
 http://www.redbooks.ibm.com/redpapers/pdfs/redp4125.pdf )
 
  Configuration considerations 
 Some general recommendations for using NPIV include:
 Do not use more than 32 subchannels per physical channel in NPIV mode.
  Also, do not perform more than 128 total target logins (for 
 example, in a configuration with 32 subchannels, limit the number of
 target logins to no more than an average of 4). Using more 
 subchannels, target logins, or both can create timeouts.
 
 

John,

Yes, this is partially a switch limitation.  The process to log an NPIV 
port into the fabric takes milliseconds.  When there are lots of these, 
the time becomes seconds.  When we did NPIV bringup, we discovered 
situations where Linux would time out and escalate its recovery.  The 
easiest way to cause this is with fibre pulls, though any event that 
causes the link to go down and come back up can trigger it as well.

FWIW, using lots of small zones may help alleviate these operating system 
timeouts.  When the link goes down and comes back up, the number of state 
change notifications generated by the switch will be smaller, so the switch 
will have more resources to handle logins.

If you are brave, you can try to use more NPIV subchannels.  There is 
nothing in the FCP firmware to prevent you from doing so.

Keep in mind that when we make configuration recommendations, we have to 
make them for the worst case.

It's nice to hear that people are actually interested :)

Ray Higgs
System z FCP Development
Bld. 706, B24
2455 South Road
Poughkeepsie, NY 12601
(845) 435-8666,  T/L 295-8666
[EMAIL PROTECTED]

Re: SAN and Linux on zSeries

2007-12-10 Thread Raymond Higgs
The IBM z/VM Operating System IBMVM@LISTSERV.UARK.EDU wrote on 
12/10/2007 10:50:40 AM:

  SMI-S provides a standard management API to make tool building easier. 

 
 This is an interesting statement. Would there be interest in a CMS-
 based SMI-S library to enable building tools to automate this sort 
 of stuff, and would IBM integrate it if one was available? Seems to 
 be a good enablement project if it were.
 
 

There would probably need to be some i390 and SE work done too.  They 
manage the WWPNs, and really have the complete picture.

Ray Higgs
System z FCP Development
Bld. 706, B24
2455 South Road
Poughkeepsie, NY 12601
(845) 435-8666,  T/L 295-8666
[EMAIL PROTECTED]

Re: SAN and Linux on zSeries

2007-12-10 Thread Rick Troth
On Sun, 9 Dec 2007, Alan Altmark wrote:
 Why is it important that VM manage the storage?  Why can't "give me a disk
 xxx GB in size" be sent to the SAN fabric directly instead of to VM?  I
 mean, you still have to send the "give me a disk" request to the SAN in
 order to provision the primordial pool managed by VM.

Talking to VM:Secure or DirMaint is orders of magnitude faster
than talking to the SAN people.  Where N-port ID Virtualization
comes into play, the human game gets even slower: we have to have
per-guest virtual WWPN(s) before we make the storage request.

Please understand:
I am not knocking NPIV.
I am only pointing out, to those new to SAN, that the storage is more like
what they are used to, and may also be easier to manage, if it is pooled
into something VM controls.

-- R;


Re: SAN and Linux on zSeries

2007-12-10 Thread Alan Altmark
On Monday, 12/10/2007 at 10:52 EST, David Boyes [EMAIL PROTECTED] 
wrote:
  SMI-S provides a standard management API to make tool building easier.

 This is an interesting statement. Would there be interest in a CMS-based SMI-S
 library to enable building tools to automate this sort of stuff, and would IBM
 integrate it if one was available? Seems to be a good enablement project if it
 were.

Yes, there would be interest by those of us who play in the Systems 
Management arena.  Your question about what IBM would or wouldn't do is 
something I can't answer.  If you'd like to propose a joint study or other 
project and can estimate the cost to IBM, you know who to contact.  :-)

Now, if only all the storage controllers out there would implement SMI-S 
interfaces...

Alan Altmark
z/VM Development
IBM Endicott


Re: SAN and Linux on zSeries

2007-12-10 Thread Raymond Higgs
The IBM z/VM Operating System IBMVM@LISTSERV.UARK.EDU wrote on 
12/10/2007 03:08:00 PM:

 Now, if only all the storage controllers out there would implement SMI-S 

 interfaces...
 
 Alan Altmark
 z/VM Development
 IBM Endicott

We can't hide behind that excuse.  A quick Google search shows that IBM 
storage, EMC, Hitachi, StorageTek, Cisco, and Brocade all have some level of 
support :)

Ray Higgs
System z FCP Development
Bld. 706, B24
2455 South Road
Poughkeepsie, NY 12601
(845) 435-8666,  T/L 295-8666
[EMAIL PROTECTED]

Re: SAN and Linux on zSeries

2007-12-10 Thread Alan Altmark
On Monday, 12/10/2007 at 12:15 EST, Raymond Higgs/Poughkeepsie/[EMAIL PROTECTED] wrote:
 The IBM z/VM Operating System IBMVM@LISTSERV.UARK.EDU wrote on 12/10/2007 10:50:40 AM:
 
   SMI-S provides a standard management API to make tool building easier.

  This is an interesting statement. Would there be interest in a CMS-
  based SMI-S library to enable building tools to automate this sort 
  of stuff, and would IBM integrate it if one was available? Seems to 
  be a good enablement project if it were. 


 
  There would probably need to be some i390 and SE work done too.  They manage
  the WWPNs, and really have the complete picture.

There is an entire stack of touchpoints that need to be included.  To 
properly provision storage you have to follow six easy steps  ;-)
1. Contact one or more storage controllers to get new LUNs created or 
existing ones allocated
2. Contact VM to find out what WWPN (guest or CP EDEV) needs access to the 
LUNs.  This is needed only if you supply userid as input to the 
provisioning request rather than guest WWPN.
3. Configure an FCP adapter to the VM LPAR if necessary
4. Have VM allocate the correct subchannel to the guest or define an EDEV
5. Configure the switch to let the relevant FCP port/WWPNs talk to the 
LUNs (zoning/masking as needed)
6. Configure the guest to use the new subchannel if necessary and access 
the LUN

I'm sure I left something out.  :-)  The above assumes a static allocation 
of WWPN to a guest (which the current NPIV WWPN allocation mechanism 
attempts to do), so once you assign a particular rdev (subchannel) to a 
guest, leave it there.  That means, too, that you should write down that 
relationship somewhere and keep it safe.  Then, if you have to replace a 
card or switch, you can figure out who gets what.  (Though I assume that's 
not a new problem for SAN managers.)
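For what it's worth, a provisioning script driving those six steps ends up shaped
roughly like the sketch below. Every helper is a placeholder standing in for the
real interface (SMI-S call, SMAPI/DIRMAINT request, switch management, guest
configuration), and all names and values are made up.

    def create_or_allocate_lun(size_gb):          # step 1: storage controller
        return 'LUN-%dGB' % size_gb

    def lookup_guest_wwpn(userid):                # step 2: NPIV WWPN for the guest's subchannel
        return 'c05076ffe5000000'                 # made-up WWPN

    def ensure_fcp_chpid(lpar):                   # step 3: FCP adapter configured to the LPAR
        return '50'                               # made-up CHPID

    def attach_subchannel(userid, chpid):         # step 4: dedicate an rdev or define an EDEV
        return 'B001'                             # made-up rdev

    def zone_and_mask(wwpn, lun):                 # step 5: fabric zoning and LUN masking
        print('zone/mask %s -> %s' % (wwpn, lun))

    def configure_guest(userid, rdev, lun):       # step 6: bring the device online in the guest
        print('configure %s: rdev %s, %s' % (userid, rdev, lun))

    def provision(userid, size_gb, lpar='VMLPAR1'):
        lun = create_or_allocate_lun(size_gb)
        wwpn = lookup_guest_wwpn(userid)
        chpid = ensure_fcp_chpid(lpar)
        rdev = attach_subchannel(userid, chpid)
        zone_and_mask(wwpn, lun)
        configure_guest(userid, rdev, lun)
        # Keep the rdev/WWPN pairing somewhere durable, per the note above.
        return {'userid': userid, 'rdev': rdev, 'wwpn': wwpn, 'lun': lun}

    print(provision('LINUX01', 100))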

Alan Altmark
z/VM Development
IBM Endicott


Re: SAN and Linux on zSeries

2007-12-09 Thread David Boyes
  No no ... it's not a question of how many guests can share an
  FCP adapter.  The trouble is that directly connected guests are
  managerially more like discrete systems, so there is no way from VM
  to manage the storage.
 
  Why is it important that VM manage the storage?  Why can't "give me a disk
  xxx GB in size" be sent to the SAN fabric directly instead of to VM?  I
  mean, you still have to send the "give me a disk" request to the SAN in
  order to provision the primordial pool managed by VM.

I think it has more to do with corporate structure and operations than
any technical reason (although there are some good technical reasons,
too). If your SAN is managed by a different group, then you gotta go
hassle them for every single LUN you want. Asking them for a number of
great big blobs that can be suballocated within VM w/o bugging anyone is
a big win, especially given that almost none of the major SAN vendors
really has any good handle on automation for allocating space (and
getting the zoning and access control right is a bloody nightmare). 

This is an area where IBM, EMC, everyone that makes SAN hardware needs
to do some work; the tools for managing storage virtualization in the
SAN world *suck*. Even the arcane implementation in DIRMAINT is
light-centuries ahead of the SAN world. So I can see Rick's point;
putting the allocation management in the hands of something that DOES
understand it and how to automate it is a good thing. 

 Just give the guest's WWPN access to the LUN(s).  There's no need to move
 adapters around.  Of course, if you don't have a good management interface
 on your SAN (storage and switches), then I can understand your point.

See above. Even the few vendors who have usable automation can't get
around the implementation piece and corporate politics. 


Re: SAN and Linux on zSeries

2007-12-09 Thread Alan Altmark
On Sunday, 12/09/2007 at 09:18 EST, David Boyes [EMAIL PROTECTED] 
wrote:
   No no ... it's not a question of how many guests can share an
   FCP adapter.  The trouble is that directly connected guests are
   managerially more like discrete systems, so there is no way from VM
   to manage the storage.
 
   Why is it important that VM manage the storage?  Why can't "give me a disk
   xxx GB in size" be sent to the SAN fabric directly instead of to VM?  I
   mean, you still have to send the "give me a disk" request to the SAN in
   order to provision the primordial pool managed by VM.
 
 I think it has more to do with corporate structure and operations than
 any technical reason (although there are some good technical reasons,
 too). If your SAN is managed by a different group, then you gotta go
 hassle them for every single LUN you want. Asking them for a number of
 great big blobs that can be suballocated within VM w/o bugging anyone is
 a big win, especially given that almost none of the major SAN vendors
 really has any good handle on automation for allocating space (and
 getting the zoning and access control right is a bloody nightmare).

The zoning and access control isn't going to go away just because CP is 
involved.

Are you really suggesting inserting CP into the middle of SCSI I/O in 
order to create FC minidisks?  If so, that requirement needs to be made 
with bold underscored italics to get attention.  The current thinking is 
that guests need to be managed just like their distributed cousins when it 
comes to SAN connectivity.

 This is an area where IBM, EMC, everyone that makes SAN hardware needs
 to do some work; the tools for managing storage virtualization in the
 SAN world *suck*. Even the arcane implementation in DIRMAINT is
 light-centuries ahead of the SAN world. So I can see Rick's point;
 putting the allocation management in the hands of something that DOES
 understand it and how to automate it is a good thing.

Wouldn't it be better to use tools that can talk to the storage 
controllers, the switches, and VM (via SMI-S  or CIM) in order to properly 
orchestrate storage provisioning rather than trying to get the tail to wag 
the dog?  There is a school of thought that VM needs to be treated as a 
2nd-order storage controller that holds all that partially allocated 
storage and that those tools would first allocate physical storage to VM 
and then talk to VM to suballocate that storage (in physical storage 
increments, not minidisks!) to guests.
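As a toy model of that second-order-controller idea, just to pin down the
bookkeeping: nothing here corresponds to an existing CP or SMAPI interface, and
the increment size and volume names are invented.

    class StoragePool(object):
        """Physical LUNs are handed to VM whole; VM hands out fixed-size
        physical increments of them to guests."""

        def __init__(self, increment_gb):
            self.increment_gb = increment_gb
            self.free = []          # (volume, increment index) tuples
            self.allocated = {}     # userid -> list of increments

        def add_physical_volume(self, volume, size_gb):
            count = size_gb // self.increment_gb
            self.free.extend((volume, i) for i in range(count))

        def allocate(self, userid, size_gb):
            needed = -(-size_gb // self.increment_gb)   # round up
            if needed > len(self.free):
                raise RuntimeError('pool exhausted')
            chunks = [self.free.pop() for _ in range(needed)]
            self.allocated.setdefault(userid, []).extend(chunks)
            return chunks

    pool = StoragePool(increment_gb=16)
    pool.add_physical_volume('VOL001', 512)     # whole LUN given to VM by the SAN team
    print(pool.allocate('LINUX01', 100))        # 7 x 16 GB increments for one guest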

Alan Altmark
z/VM Development
IBM Endicott


Re: SAN and Linux on zSeries

2007-12-09 Thread Alan Altmark
On Saturday, 12/08/2007 at 02:02 EST, Raymond Higgs/Poughkeepsie/[EMAIL PROTECTED] wrote:

 Not all 480 subchannels can be used with NPIV because of switch limitations.
 The 480 number was chosen arbitrarily from the layout of the SIGA vector long
 before NPIV was implemented.  I think Brocade supports the most virtual nports
 per physical port, 255.

It would be nice if the IOCP Machine Limits had a footnote on that.  But 
even at 255, the value of virtualization is still present.

On Rick's point that you have to manage a z/VM guest just like any other 
node in the SAN fabric, well, there are no miracles.  SMI-S provides a 
standard management API to make tool building easier.  The good news is 
that NPIV allows a guest's SAN access to be managed exactly as a discrete 
server's would be, with no requirement to develop any System z-specific 
code.

Alan Altmark
z/VM Development
IBM Endicott


Re: SAN and Linux on zSeries

2007-12-08 Thread Raymond Higgs
The IBM z/VM Operating System IBMVM@LISTSERV.UARK.EDU wrote on 
12/07/2007 11:25:06 AM:

 On Friday, 12/07/2007 at 06:16 EST, Rick Troth [EMAIL PROTECTED] wrote:
 
  Handing each guest its own HBA (host bus adapter,
  the open systems term for an FCP adapter) kind of blows
  one of the reasons to go virtual.
 
 Eh?  480 servers can use a single FCP adapter (chpid) concurrently. 
That's 
 the whole point of N_port ID virtualization: 480 separate fabric 
 endpoints.
 
 Alan Altmark
 z/VM Development
 IBM Endicott

Not all 480 subchannels can be used with NPIV because of switch 
limitations.  The 480 number was chosen arbitrarily from the layout of the 
SIGA vector long before NPIV was implemented.  I think Brocade supports 
the most virtual nports per physical port, 255.

Ray Higgs
System z FCP Development
Bld. 706, B24
2455 South Road
Poughkeepsie, NY 12601
(845) 435-8666,  T/L 295-8666
[EMAIL PROTECTED]

Re: SAN and Linux on zSeries

2007-12-08 Thread Rick Troth
  Handing each guest its own HBA (host bus adapter,
  the open systems term for an FCP adapter) kind of blows
  one of the reasons to go virtual.

 Eh?  480 servers can use a single FCP adapter (chpid) concurrently. That's
 the whole point of N_port ID virtualization: 480 separate fabric
 endpoints.

No no ... it's not a question of how many guests can share an
FCP adapter.  The trouble is that directly connected guests are
managerially more like discrete systems, so there is no way from VM
to manage the storage.

N_port ID virtualization adds to this because the storage
is restricted to a specific FCP adapter.  Then if we need to
give that LUN (or LUNs) to a different guest, we have to give
the FCP adapter to it.  What if the recipient already has an FCP
adapter?  It might be better to leverage the adapter already there.

-- R;


Re: SAN and Linux on zSeries

2007-12-08 Thread Raymond Higgs
The IBM z/VM Operating System IBMVM@LISTSERV.UARK.EDU wrote on 
12/08/2007 07:31:05 PM:

   Handing each guest its own HBA (host bus adapter,
   the open systems term for an FCP adapter) kind of blows
   one of the reasons to go virtual.
 
  Eh?  480 servers can use a single FCP adapter (chpid) concurrently. 
That's
  the whole point of N_port ID virtualization: 480 separate fabric
  endpoints.
 
 No no ... it's not a question of how many guests can share an
 FCP adapter.  The trouble is that directly connected guests are
 managerially more like discrete systems, so there is no way from VM
 to manage the storage.
 
 N_port ID virtualization adds to this because the storage
 is restricted to a specific FCP adapter.  Then if we need to
 give that LUN (or LUNs) to a different guest, we have to give
 the FCP adapter to it.  What if the recipient already has an FCP
 adapter?  It might be better to leverage the adapter already there.
 
 -- R;

Well, NPIV was necessary.  Without it, lots of functions were broken.  The 
LUN access control stuff that we implemented only fixed some of these 
problems.

FWIW, the problems you describe aren't unique to the mainframe.  SMI-S is 
supposed to be the industry standard fix for all of the management 
problems, but I do not know much about it.

Ray Higgs
System z FCP Development
Bld. 706, B24
2455 South Road
Poughkeepsie, NY 12601
(845) 435-8666,  T/L 295-8666
[EMAIL PROTECTED]

Re: SAN and Linux on zSeries

2007-12-08 Thread Alan Altmark
On Saturday, 12/08/2007 at 07:31 EST, Rick Troth [EMAIL PROTECTED] wrote:
   Handing each guest its own HBA (host bus adapter,
   the open systems term for an FCP adapter) kind of blows
   one of the reasons to go virtual.
 
  Eh?  480 servers can use a single FCP adapter (chpid) concurrently. 
That's
  the whole point of N_port ID virtualization: 480 separate fabric
  endpoints.
 
 No no ... it's not a question of how many guests can share an
 FCP adapter.  The trouble is that directly connected guests are
 managerially more like discrete systems, so there is no way from VM
 to manage the storage.

Why is it important that VM manage the storage?  Why can't "give me a disk 
xxx GB in size" be sent to the SAN fabric directly instead of to VM?  I 
mean, you still have to send the "give me a disk" request to the SAN in 
order to provision the primordial pool managed by VM.

I'm interested to know how the value of virtualization is diminished by the 
lack of virtual FCP adapters.

 N_port ID virtualization adds to this because the storage
 is restricted to a specific FCP adapter.  Then if we need to
 give that LUN (or LUNs) to a different guest, we have to give
 the FCP adapter to it.  What if the recipient already has an FCP
 adapter?  It might be better to leverage the adapter already there.

Just give the guest's WWPN access to the LUN(s).  There's no need to move 
adapters around.  Of course, if you don't have a good management interface 
on your SAN (storage and switches), then I can understand your point.  But 
I don't think I'd lay that problem at VM's feet.

Alan Altmark
z/VM Development
IBM Endicott


Re: SAN and Linux on zSeries

2007-12-07 Thread Rick Troth
We have migrated a number of guests from ECKD to SAN.
The O/S is still on ECKD and will remain there for the
foreseeable future.

We've been giving each SAN participant its own pair of FCP adapters.
(Two, because our SAN guys have architected for dual path access.
But I know of one site which uses four paths instead.)
This makes each guest look more like a stand-alone server.
To isolate them from each other, we then use N-port ID Virtualization.

Thanks to the folks who helped us get all this working!
It is quite reliable and performs well.  The direct connection
presents a whole new administration game, so be prepared!

It would be better to give the SAN volumes directly to VM
(that is, EDEV) and let VM slice things up into minidisks
(or even full-pack or near-full-pack).  There was a performance
issue with EDEV because it naturally introduces more overhead,
so we did not go there at first.  IBM has addressed that in DIAG 250.
(We have not yet revisited the Linux driver that would use it.
Planning to ... RSN!)

Another question about EDEV is whether or not you can share volumes
across LPARs.  That might require NPIV.  With ECKD and the traditional IBM
I/O fabric, sharing works as we all know and love it.  I don't know
how well a SAN fabric supports that.  We would need it.

Handing each guest its own HBA (host bus adapter,
the open systems term for an FCP adapter) kind of blows
one of the reasons to go virtual.  By comparison, when VMware
plays the SAN game, it does something akin to EDEV and does not
(to my knowledge) even offer the ability for a guest to connect
directly into SAN space.  z/VM can go either way, direct or pooled.

-- R;


Re: SAN and Linux on zSeries

2007-12-07 Thread Alan Altmark
On Friday, 12/07/2007 at 06:16 EST, Rick Troth [EMAIL PROTECTED] wrote:

 Handing each guest its own HBA (host bus adapter,
 the open systems term for an FCP adapter) kind of blows
 one of the reasons to go virtual.

Eh?  480 servers can use a single FCP adapter (chpid) concurrently. That's 
the whole point of N_port ID virtualization: 480 separate fabric 
endpoints.

Alan Altmark
z/VM Development
IBM Endicott


Re: SAN and Linux on zSeries

2007-12-07 Thread Romanowski, John (OFT)
To avoid timeout errors, IBM recommends having no more than 32 servers
per NPIV-mode FCP channel.
The 480-server figure is OK on non-NPIV FCP.

(see Introducing N_Port Identifier Virtualization
for IBM System z9  page 4
http://www.redbooks.ibm.com/redpapers/pdfs/redp4125.pdf )

 Configuration considerations 
Some general recommendations for using NPIV include:
Do not use more than 32 subchannels per physical channel in NPIV mode.
 Also, do not perform more than 128 total target logins (for example, in
a configuration with 32 subchannels, limit the number of target logins
to no more than an average of 4). Using more subchannels, target logins,
or both can create timeouts.
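To make the arithmetic behind that guidance concrete, here is a trivial sketch
(hypothetical numbers, nothing z-specific) that checks a planned NPIV layout
against the 32-subchannel / 128-login figures:

    # Check a planned NPIV layout against the redpaper guidance: at most 32
    # subchannels per physical channel and at most 128 total target logins
    # (i.e. an average of 4 logins per subchannel when all 32 are used).
    MAX_SUBCHANNELS = 32
    MAX_TARGET_LOGINS = 128

    def check_channel(logins_per_subchannel):
        """One list entry per NPIV subchannel, giving the number of target
        ports that subchannel logs into."""
        subchannels = len(logins_per_subchannel)
        logins = sum(logins_per_subchannel)
        ok = subchannels <= MAX_SUBCHANNELS and logins <= MAX_TARGET_LOGINS
        return subchannels, logins, ok

    # 32 guests, each logging into 4 storage target ports: right at the limit.
    print(check_channel([4] * 32))   # (32, 128, True)
    # 40 guests with 4 target ports each: over both limits.
    print(check_channel([4] * 40))   # (40, 160, False)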







Re: SAN and Linux on zSeries

2007-12-06 Thread Hilliard, Chris
We are running z/Linux on our z800 and connecting to a 2105-800 (Shark)
over a SAN.  I would imagine it to be set up the same way on a z9
connecting to a DS4800.  I can share our experiences and configuration
if that will help.


-Original Message-
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
Behalf Of Martha McConaghy
Sent: Thursday, December 06, 2007 10:32 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: SAN and Linux on zSeries

We are currently looking to add some disk storage to our SAN and I'm
anticipating connecting our z9 to it at some point to support Linux
guests
on z/VM.  We are looking at buying at least one, perhaps two DS4800's.
However, there doesn't seem to be any info for supporting Linux on
zSeries
on them.  They have host kits available for Linux on Power and Linux
on Intel, but not zSeries.

Does anyone have any experience with the DS4000 series and Linux on z?
If you have your z on the SAN, what types of devices are you using?

Any info would be helpful.  Thanks and happy holidays to everyone!

Martha


Re: SAN and Linux on zSeries

2007-12-06 Thread Steve Wilkins

The DS4000 Series is only supported on System z if it is behind the IBM SAN
Volume Controller (SVC).

Regards, Steve.

Steve Wilkins
IBM z/VM Development


   