On Mon, Dec 21, 2009 at 17:50, Martha McConaghy <u...@vm.marist.edu> wrote:
 ...
> I have to now move our production Oracle database server over to SAN in the
> next few months (it's using CKD volumes tied together with LVM right now).

You will probably continue to use LVM.  I recommend that.
One reason to keep concatenating physical volumes with LVM,
even in SAN land, is that there is probably a sweet-spot LUN size
(your SAN people would know; if you *are* "the SAN people", then
ask your vendor).  That ideal size will probably be smaller than
the logical volume size you want for your Oracle filesystems.

General rule: use LVM and avoid partitioning *
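
For illustration, a minimal sketch of that concatenation (device names
and sizes here are invented; note there is no fdisk anywhere, per the
rule above):

  # initialize each SAN LUN (via its multipath device) as an LVM PV
  pvcreate /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc

  # concatenate them into one volume group ...
  vgcreate oravg /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc

  # ... and carve a logical volume bigger than any single LUN
  lvcreate -L 200G -n oradata oravg
  mkfs.ext3 /dev/oravg/oradata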

> This database is for the college's ERP system, so performance is going to
> be critical.  I'm weighing the options for moving it to the SAN and could use
> some advice from those of you who are already running SAN in production.

If performance is critical, consider direct access (that is, ditch EDEV).
I highly recommend EDEV for managing content, for letting VM handle
(and slice) the LUNs.  But there is overhead.  (I truly don't know how much.)

You may not really gain much with direct Linux access
to the SAN fabric, but it cuts out one more opportunity for finger
pointing, and that alone can spare you pain during a move like this.
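
If you do go direct, a sketch of what attaching a LUN looks like from
the Linux side (device number, WWPN, and LUN below are invented; the
port_add step is for older kernels, newer ones scan ports themselves):

  # bring the FCP subchannel (attached to the guest by CP) online
  chccwdev -e 0.0.1700

  # tell zfcp which target port and which LUN to use
  echo 0x500507630303c562 > /sys/bus/ccw/drivers/zfcp/0.0.1700/port_add
  echo 0x4010400000000000 > \
    /sys/bus/ccw/drivers/zfcp/0.0.1700/0x500507630303c562/unit_add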

All of this is difficult for me to say,
because direct Linux access to the fabric can be a sysadmin nightmare.
(Imagine hundreds of OSAs and no VSWITCH, but with bigger MAC addresses.)

> We haven't done multipathing yet, but are looking into it now.  How many
 ...

Umm... DO.
(I'm surprised you're not already doing multipathing with EDEV.)

 ...
> physical and virtual paths do you normally use for a high-visibility server?

We use two paths.
A pair of paths is easier to manage than (say) four.
But obviously four paths provide more I/O opportunities.
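
The Linux side of two paths is small.  A sketch of /etc/multipath.conf
(illustrative only; use your array vendor's recommended settings):

  defaults {
    user_friendly_names yes          # /dev/mapper/mpatha, not raw WWIDs
    path_grouping_policy multibus    # spread I/O across both paths
  }

With both FCP subchannels zoned to the same LUN, "multipath -ll" should
show one map with two paths under it.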

> Would 2 be enough, or should we go for 3 or more?  Does multipathing introduce
> a lot of extra workload for z/VM?  Do you normally use dedicated connections
 ...

I don't have numbers comparing multipath to single path.
For us, single path was never an option.

 ...
> or do you share them with other servers?  (I'm not sure how you monitor the
> workload for these devices since they are dedicated devices and VM isn't
> handling their I/O.)

Right.  The only indication you get is how busy the FCP channel is.   :-(

Without EDEV, I cannot imagine that multiple FCP paths
add much overhead at the VM level.  And what overhead
they introduce in Linux seems to be trivial.  (theory, not measured)

> Is it worth moving to NPIV before hooking in our production server?  I know
> that it can provide better security for the connections.  Does it also help
> with performance?

NPIV places requirements on your SAN and on the z hardware.
Linux and VM are both ready to rock-n-roll with NPIV, no problem.
I see no performance problem with it.  But why do you need NPIV?

We use NPIV to isolate LUNs: each subchannel looks (to the fabric)
like a unique host, where normally the whole FCP channel looks like
one "host".  We are a financial shop, so you can imagine this is a big
plus with the security and audit people.  A Linux guest can only hit
those LUNs which are zoned/masked to its own FCP subchannel(s).  There
is no browsing of other LUNs in the fabric.
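
You can see this from the guest: each FCP subchannel appears as its
own fc_host, and with NPIV each reports its own WWPN (host number here
is assumed; map subchannels to hosts with lszfcp):

  cat /sys/class/fc_host/host0/port_name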

Ironically, one of the benefits of NPIV is that you can share LUNs
without having to buy more physical FCP cards.  A LUN can be zoned/masked
to multiple Linux-side WWPNs, which (with NPIV) might be on the same card.
A LUN can thus be simultaneously in use by several "hosts" without
the busy signal you would get with multiple subchannels and no NPIV.

I hope this helps.

* I will clarify "use LVM and avoid partitioning" in a separate note.

-- R;   <><
