On Dec 10, 2007 5:29 PM, Alan Altmark [EMAIL PROTECTED] wrote:
Then, if you have to replace a
card or switch, you can figure out who gets what. (Though I assume that's
not a new problem for SAN managers.)
Alan Altmark
z/VM Development
IBM Endicott
Hmm - actually -
If you're using NPIV,
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SAN and Linux on zSeries
snip
Not all 480 subchannels can be used with NPIV because of switch limitations.
The 480 number was chosen arbitrarily from the layout of the SIGA vector long
before NPIV was implemented. I think Brocade supports the most virtual N_Ports
SMI-S provides a standard management API to make tool building easier.
This is an interesting statement. Would there be interest in a CMS-based SMI-S
library to enable building tools to automate this sort of stuff, and would IBM
integrate it if one was available? Seems to be a good
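As a rough illustration of the kind of tool such a library would enable, here is a minimal Python sketch that pulls port WWPNs out of SMI-S instance data. The WBEM endpoint, credentials, and sample values below are hypothetical; `CIM_FCPort` and its `PermanentAddress` property are standard SMI-S/CIM schema names.

```python
def port_wwpns(instances):
    """Given CIM_FCPort instances as property mappings, return the
    sorted, de-duplicated WWPNs from the PermanentAddress property."""
    return sorted({inst['PermanentAddress'] for inst in instances
                   if inst.get('PermanentAddress')})

# Against a live SMI-S provider, `instances` would come from a WBEM
# client, e.g. (hypothetical endpoint, pywbem assumed installed):
#   conn = pywbem.WBEMConnection('https://provider:5989', ('user', 'pw'))
#   instances = conn.EnumerateInstances('CIM_FCPort', namespace='interop')

print(port_wwpns([
    {'PermanentAddress': 'C05076FFE5005611'},
    {'PermanentAddress': 'C05076FFE5005610'},
    {'ElementName': 'offline port'},   # reports no address, so it is skipped
]))
# → ['C05076FFE5005610', 'C05076FFE5005611']
```

The same enumerate-and-filter pattern applies to zones, volumes, and initiator records, which is why a standard API makes this sort of tooling cheap to build.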
The zoning and access control isn't going to go away just because CP is
involved.
Correct, but you don't have to do it as frequently, which is the point. There
are too many places in the world where this stuff is done by hand by people
with mice. A lot of people like the EDEV approach
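For reference, the EDEV approach defines a SCSI LUN to CP as an emulated FBA device, which CP can then carve into minidisks like any other DASD. A hedged sketch using the CP SET EDEVICE command, where the device numbers, WWPN, and LUN are all placeholders:

```
SET EDEVICE 0200 TYPE FBA ATTR SCSI FCP_DEV B100 WWPN 5005076300C1AFC5 LUN 0001000000000000
```

CP then presents device 0200 as an FBA volume, so the per-guest work shifts from SAN zoning to ordinary minidisk definitions.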
On Dec 9, 2007 11:08 PM, Alan Altmark [EMAIL PROTECTED] wrote:
Are you really suggesting inserting CP into the middle of SCSI I/O in
order to create FC minidisks? If so, that requirement needs to be made
with bold underscored italics to get attention. The current thinking is
that guests
I'd argue that VM needs to provide a virtual analogue to a SAN switch, just
as you currently provide the VSWITCH for data
networking connectivity. If that were the case, and VM implemented the SMI-S
standard interconnects with external switches,
then this problem goes away and we all live
The IBM z/VM Operating System IBMVM@LISTSERV.UARK.EDU wrote on
12/10/2007 08:33:38 AM:
Ray,
You mention that the number of usable NPIV subchannels is subject to
switch limitations.
Are switch limitations what the IBM Redbook is referencing when it
says don't use more than 32 subchannels per
The IBM z/VM Operating System IBMVM@LISTSERV.UARK.EDU wrote on
12/10/2007 10:50:40 AM:
SMI-S provides a standard management API to make tool building easier.
This is an interesting statement. Would there be interest in a CMS-
based SMI-S library to enable building tools to automate this
On Sun, 9 Dec 2007, Alan Altmark wrote:
Why is it important that VM manage the storage? Why can't "give me a disk
xxx GB in size" be sent to the SAN fabric directly instead of to VM? I
mean, you still have to send the "give me a disk" request to the SAN in
order to provision the primordial pool
On Monday, 12/10/2007 at 10:52 EST, David Boyes [EMAIL PROTECTED]
wrote:
SMI-S provides a standard management API to make tool building
easier.
This is an interesting statement. Would there be interest in a CMS-based
SMI-S
library to enable building tools to automate this sort of
The IBM z/VM Operating System IBMVM@LISTSERV.UARK.EDU wrote on
12/10/2007 03:08:00 PM:
Now, if only all the storage controllers out there would implement SMI-S
interfaces...
Alan Altmark
z/VM Development
IBM Endicott
We can't hide behind that excuse. A quick Google search shows IBM storage,
On Monday, 12/10/2007 at 12:15 EST, Raymond Higgs/Poughkeepsie/[EMAIL
PROTECTED]
wrote:
The IBM z/VM Operating System IBMVM@LISTSERV.UARK.EDU wrote on
12/10/2007
10:50:40 AM:
SMI-S provides a standard management API to make tool building
easier.
This is an interesting
No no ... it's not a question of how many guests can share an
FCP adapter. The trouble is that directly connected guests are
managerially more like discrete systems, so there is no way from VM
to manage the storage.
Why is it important that VM manage the storage? Why can't give me a
On Sunday, 12/09/2007 at 09:18 EST, David Boyes [EMAIL PROTECTED]
wrote:
No no ... it's not a question of how many guests can share an
FCP adapter. The trouble is that directly connected guests are
managerially more like discrete systems, so there is no way from VM
to manage the
On Saturday, 12/08/2007 at 02:02 EST, Raymond Higgs/Poughkeepsie/[EMAIL
PROTECTED]
wrote:
Not all 480 subchannels can be used with NPIV because of switch
limitations.
The 480 number was chosen arbitrarily from the layout of the SIGA
vector long
before NPIV was implemented. I think
The IBM z/VM Operating System IBMVM@LISTSERV.UARK.EDU wrote on
12/07/2007 11:25:06 AM:
On Friday, 12/07/2007 at 06:16 EST, Rick Troth [EMAIL PROTECTED] wrote:
Handing each guest its own HBA (host bus adapter,
the open systems term for an FCP adapter) kind of blows
one of the reasons to
Handing each guest its own HBA (host bus adapter,
the open systems term for an FCP adapter) kind of blows
one of the reasons to go virtual.
Eh? 480 servers can use a single FCP adapter (chpid) concurrently. That's
the whole point of N_port ID virtualization: 480 separate fabric
The IBM z/VM Operating System IBMVM@LISTSERV.UARK.EDU wrote on
12/08/2007 07:31:05 PM:
Handing each guest its own HBA (host bus adapter,
the open systems term for an FCP adapter) kind of blows
one of the reasons to go virtual.
Eh? 480 servers can use a single FCP adapter (chpid)
On Saturday, 12/08/2007 at 07:31 EST, Rick Troth [EMAIL PROTECTED] wrote:
Handing each guest its own HBA (host bus adapter,
the open systems term for an FCP adapter) kind of blows
one of the reasons to go virtual.
Eh? 480 servers can use a single FCP adapter (chpid) concurrently.
We have migrated a number of guests from ECKD to SAN.
The O/S is still on ECKD and will remain there for the
foreseeable future.
We've been giving each SAN participant its own pair of FCP adapters.
(Two, because our SAN guys have architected for dual path access.
But I know of one site which uses
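For a guest that has its FCP subchannels dedicated this way, bringing a SAN volume online under Linux of that era looked roughly like the following. The device number, WWPN, and LUN are placeholders; `chccwdev` and `lszfcp` come from the s390-tools package, and the explicit `port_add` step is only needed on older kernels without automatic port scanning.

```shell
# Bring the dedicated FCP subchannel online (placeholder device 0.0.b100)
chccwdev -e 0.0.b100

# Register the target port (placeholder WWPN); newer kernels discover
# ports automatically and no longer expose port_add
echo 0x5005076300c1afc5 > /sys/bus/ccw/drivers/zfcp/0.0.b100/port_add

# Register the LUN (placeholder) under that port
echo 0x0000000000000000 > \
    /sys/bus/ccw/drivers/zfcp/0.0.b100/0x5005076300c1afc5/unit_add

# The volume then surfaces as an ordinary SCSI disk (e.g. /dev/sda);
# lszfcp -D shows the subchannel/WWPN/LUN-to-device mapping
lszfcp -D
```

With dual-path access as described above, the same sequence is repeated on the second FCP subchannel and the two paths are joined with multipathing in the guest.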
On Friday, 12/07/2007 at 06:16 EST, Rick Troth [EMAIL PROTECTED] wrote:
Handing each guest its own HBA (host bus adapter,
the open systems term for an FCP adapter) kind of blows
one of the reasons to go virtual.
Eh? 480 servers can use a single FCP adapter (chpid) concurrently. That's
the
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED]] On
Behalf Of Alan Altmark
Sent: Friday, December 07, 2007 11:25 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SAN and Linux on zSeries
On Friday, 12/07/2007 at 06:16 EST, Rick Troth [EMAIL PROTECTED] wrote:
Handing each guest its own HBA (host bus adapter
We are running z/Linux on our z800 and connecting to a 2105-800 (Shark)
over a SAN. I would imagine it to be set up the same way on a z9
connecting to a DS4800. I can share our experiences and configuration
if that will help.
-----Original Message-----
From: The IBM z/VM Operating System [EMAIL PROTECTED]
Subject: Re: SAN and Linux on zSeries