The sky is always blue.

I'm just reminding you what your options are.  That's the great thing
about advice: you can take it or leave it.

Tom Duerbusch wrote:
Hi Rich..

Just what color is the sky in your world? <G>

I just don't live in a world where z/VSE comes out and all the VSE
systems immediately convert to it without any problems.

Perhaps in the next mainframe/storage subsystem replacement, FCP-only
will be an option.  For now, the only mainframe shops I would ever
think of making FCP-only are the IFL-only shops.

For now, having FICON and FCP adapters is just doubling up on hardware.
And if they are going to the same Shark (where you have to pay for both
sets of adapters there as well), it comes down to a performance/cost
issue, plus some management issues.

But the management issues might break both ways.  VM wouldn't manage it,
but the server people may have their own methods that they are used to.
Whether those methods are good or not is another question...

Tom Duerbusch
THD Consulting

[EMAIL PROTECTED] 03/28/05 4:40 PM >>>

If you are going to use FCP for Linux and FICON for VM, you would have the extra cost anyway.  z/VM, z/VSE and Linux for zSeries can ALL use the FCP dasd, so you wouldn't really need the FICON at all, except that at this point the SCSI support in VSE is not nearly as efficient as regular DASD (the overhead is pretty high).

And you're right, I don't think that z/VM or z/VSE support FlashCopy
of SCSI devices.  That is a major drawback.

Tom Duerbusch wrote:

We are in the process of speccing out a z890 with a Shark, and I went
through the same types of questions.

For us, it ended up mostly a cost decision, as we still needed some of
the Shark to be FICON attached.  So the additional cost of FCP channels
and LPARing the Shark would have cost us more.

In any case...

z/VM 5.1 can support FCP attached dasd as FBA devices.  At that point,
anything that runs on VM that supports FBA devices can use the dasd as
FBA.  You can use an existing SAN for VM or any FBA application under
VM.
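
(As an aside, for intuition: FBA and SCSI disks are both just flat
arrays of fixed-size blocks, so the emulation is close to a straight
pass-through, unlike the CKD case below.  A tiny conceptual sketch in
Python; this is purely an illustration, not anything out of z/VM, and
all the names in it are invented.)

BLOCK_SIZE = 512  # both FBA and SCSI disks address fixed-size blocks


class ScsiLun:
    """Stand-in for an FCP-attached SCSI LUN: a flat array of blocks."""

    def __init__(self, num_blocks):
        self.data = bytearray(num_blocks * BLOCK_SIZE)

    def read(self, lba, count):
        off = lba * BLOCK_SIZE
        return bytes(self.data[off:off + count * BLOCK_SIZE])


class EmulatedFba:
    """9336-style FBA front end over a SCSI LUN.  FBA block numbers map
    one-to-one onto SCSI logical block addresses, so no geometry
    translation is needed (contrast with the CKD sketch further down)."""

    def __init__(self, lun):
        self.lun = lun

    def read_blocks(self, block_no, count):
        return self.lun.read(block_no, count)  # straight pass-through


if __name__ == "__main__":
    dev = EmulatedFba(ScsiLun(num_blocks=2048))
    assert dev.read_blocks(0, 4) == b"\x00" * (4 * BLOCK_SIZE)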

z/Linux supports FCP attached storage directly.  It can use SAN
attached storage (something about a SAN switch enters the discussion
somewhere here....)  With FCP attached storage, you don't have VM
entering the mix.  No VM packs.  You access the storage directly.  You
can have large volumes without the need for LVM.  It should be less
overhead, because in a z/VM - FICON - Shark mix, the Shark takes
512-byte sectored blocks, chains them together into emulated CKD
devices, which go through the FICON channels to VM, then to Linux,
which has a device driver that converts CKD storage back into "Linux
native" 512-byte blocks so Linux sees what it is used to.
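
To make that round trip concrete, here is a rough Python sketch of the
kind of address translation the CKD path implies.  The 3390 geometry
constants (15 tracks per cylinder, 12 records of 4 KB per track when
formatted with dasdfmt -b 4096) are standard, but the code itself is
only an illustration of the idea, not the actual Linux DASD driver.

TRACKS_PER_CYL = 15     # fixed 3390 geometry
RECORDS_PER_TRACK = 12  # 4 KB records per track with dasdfmt -b 4096
CKD_RECORD_SIZE = 4096
SECTOR = 512            # the block size Linux actually wants to see


def sector_to_ckd(sector_no):
    """Map a linear 512-byte sector number to (cylinder, head, record,
    offset) on an emulated 3390 volume: the bookkeeping that has to
    happen before a request can go down the FICON channel."""
    record_no, offset = divmod(sector_no * SECTOR, CKD_RECORD_SIZE)
    track_no, record = divmod(record_no, RECORDS_PER_TRACK)
    cylinder, head = divmod(track_no, TRACKS_PER_CYL)
    return cylinder, head, record + 1, offset  # CKD records count from 1


if __name__ == "__main__":
    # Sector 1000 lands at cylinder 0, head 10, record 6, offset 0.  On
    # an FCP/SCSI path the same request would simply be LBA 1000.
    print(sector_to_ckd(1000))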

The mainframe overhead is in the device driver in Linux that emulates
512-byte blocks on mainframe dasd.  How much?  1%, 5%?  I don't know.

So if you needed the Shark to be both mainframe and server attached,
the mainframe would need FICON and FCP adapters, and the Shark would
also need FICON and FCP attachments.  (additional cost)

If you use FCP attached dasd, VM doesn't see the dasd and can't back
it up.  However, Linux can see the dasd and can back it up via
mainframe tape or, if you also have FCP attached tape drives, via
server-type tapes.

With FCP attached Shark, you don't seem to have all the goodies that
are in the Shark controller.  I'm not sure about how it caches the dasd
or if it can do FlashCopy or not.  Remote Copy and such may also not be
available.

In our case, I was looking to add FCP cards to the z890 to attach an
existing SAN, looking for cheap (or, in the case of existing space,
free) dasd.  But the FCP cards seemed expensive and, as it turned out,
wouldn't reduce the size of the proposed Shark.  So it was just added
cost.

We may add FCP cards in the future, when we run out of Shark and are
faced with either buying more 8-packs or buying FCP adapters and using
existing SAN space.

Tom Duerbusch
THD Consulting



[EMAIL PROTECTED] 03/28/05 3:55 PM >>>

After reading the following,
http://www.vm.ibm.com/perf/reports/zvm/html/scsi.html, I became very
confused (like I wasn't already)...  Anyway, we're trying to move along
with a file server project, and because of strict timelines I'm trying
to avoid reinventing the wheel.  Below is a quick rundown of our
system.


We've got a z890 running z/VM 5.1 on one IFL.  We're running several
instances of SLES9 in 64-bit mode.  Our storage is on a Shark, and we
have one SAN defined with two fabrics.  We define our devices in three
ways, the latter two in an effort to have some redundancy:

   - As your traditional 3390 device (not a part of this question).
   - As an emulated FBA minidisk (9336) with two defined paths (one
through each fabric).
   - And as an FCP device, using EVMS on Linux to multipath through
each fabric (sketched below).
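
(For anyone following along, here is roughly what the multipath layer
does on the FCP side, sketched in Python.  It is only the failover
logic in miniature, not EVMS or device-mapper code, and every name in
it is invented.)

class Path:
    """One route to the LUN, e.g. via fabric A or fabric B."""

    def __init__(self, name):
        self.name = name
        self.healthy = True

    def submit(self, op, lba):
        if not self.healthy:
            raise IOError("path %s is down" % self.name)
        return "%s lba=%d via %s" % (op, lba, self.name)


class MultipathDevice:
    """Present one logical device; send each I/O down the first healthy
    path, failing over to the next path when one errors out."""

    def __init__(self, paths):
        self.paths = paths

    def submit(self, op, lba):
        for path in self.paths:
            try:
                return path.submit(op, lba)
            except IOError:
                path.healthy = False  # mark the path failed, try the next
        raise IOError("all paths down")


if __name__ == "__main__":
    a, b = Path("fabric-A"), Path("fabric-B")
    dev = MultipathDevice([a, b])
    print(dev.submit("read", 42))  # served via fabric-A
    a.healthy = False
    print(dev.submit("read", 42))  # fails over to fabric-B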

My questions are about the latter two devices.  The above document only
talks about single path connectivity.  How would multipathing affect
these different devices?  How do the multiple layers (e.g. EVMS, LVM,
etc...) affect these devices?  The document above suggests a
substantial increase in CPU for an I/O operation to an FBA device as
opposed to an FCP device; how would multipathing affect this?  How much
overhead is there with EVMS maintaining a multipathed FCP device?
Lastly, LVM1 is only available for an EVMS managed disk; is there a
noticeable increase in overhead between LVM1 and LVM2 (which can be
used with an FBA device)?
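
(To at least frame the CPU question, a back-of-envelope sketch.  The
per-I/O microsecond figures below are placeholders I made up purely for
illustration; the real ones would have to come from measurements like
the report's.)

def ifl_utilization(iops, cpu_usec_per_io):
    """Fraction of one IFL consumed just by I/O handling."""
    return iops * cpu_usec_per_io / 1000000.0


if __name__ == "__main__":
    iops = 2000  # hypothetical steady-state rate for the file server

    for label, usec in [("direct FCP", 50),      # invented placeholder
                        ("emulated FBA", 125)]:  # invented: ~2.5x FCP cost
        print("%-12s %5.1f%% of one IFL at %d IOPS"
              % (label, 100.0 * ifl_utilization(iops, usec), iops))
    # Multipathing adds its own per-I/O path-selection cost on top of
    # either figure, which is exactly the part the report doesn't measure.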

I guess I don't really need specific answers to these questions, just
an idea as to what others are doing.  Like I said before, I'd rather
not reinvent the wheel.  If anyone could shed some light on which one
of these devices (emulated/multipathed/LVM2/FBA or EVMS/LVM1/FCP)
would/should perform better, that would be GREAT!

Mark Wiggins
University of Connecticut
Operating Systems Programmer
860-486-2792

--
Rich Smrcina
VM Assist, Inc.
Main: (262)392-2026
Cell: (414)491-6001
Ans Service:  (866)569-7378
rich.smrcina at vmassist.com

Catch the WAVV!  http://www.wavv.org
WAVV 2005 - Colorado Springs - May 20-24, 2005

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390
or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
