You should consider using the XIP2 overlay file system in a DCSS for /usr
executables - avoid the I/O altogether. Every Linux machine shares one
in-storage copy. The segment will either be in memory or on paging DASD - all
goodness.
David 


-----Original Message-----
From: Linux on 390 Port on behalf of James Melin
Sent: Wed 4/6/2005 11:20 AM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: Latest on PAV's?
 
Preventative 'what if' thinking, mostly. If I share /usr amongst guests,
the disk it is on is x times more likely to see I/O. As x becomes a larger
number, that I/O increases. At some point it becomes excessive I/O for a
single device path and you see a performance hit. PAV in that situation
would give guests several routes to the target data, which is what it was
created to do. Using LVM to spread a few minidisks across several devices
would also solve that issue to some degree, but I've not had to implement
LVM so far and I don't know that I want to just for this future issue.

Unlike a lot of folks, we're not using Linux for DB2/Oracle or any heavy
MySQL work, so LVM has not been an issue. We're grabbing our legacy data
via HiperSockets to Shadow Direct into DB2.



Robert J Brenneman <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <[EMAIL PROTECTED]>
04/06/2005 09:55 AM
Please respond to Linux on 390 Port <[EMAIL PROTECTED]>

To: LINUX-390@VM.MARIST.EDU
Subject: Re: Latest on PAV's?


No - PAV would not really help you in a DASD sharing environment. If you
are seeing high device contention on a shared /usr volume, you could
create multiple volumes to spread that load across, if you're using
dedicated volumes.

MDC (minidisk cache) on a fullpack (or 1-END mini) could also help here by
avoiding the I/O altogether.

I have not seen an environment where a shared read-only /usr gets that
much activity, though. Is this preventative planning or a problem in your
current environment?



Jay Brenneman






James Melin <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
06/04/2005 10:26
Please respond to Linux on 390 Port

To: LINUX-390@VM.MARIST.EDU
Subject: Re: Latest on PAV's?

So LVM integration of PAV devices would not be a good way to do shared
DASD amongst many guests, such as sharing /usr read-only. My understanding
is that DEDICATE locks the device to one guest and one guest only. I guess
I should have been clearer that this was my end goal: shared DASD using
PAV. I guess that would be a good reason to submit a PAV minidisk
requirement to IBM as a VM enhancement.




Robert J Brenneman <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <[EMAIL PROTECTED]>
04/06/2005 09:18 AM
Please respond to Linux on 390 Port <[EMAIL PROTECTED]>

To: LINUX-390@VM.MARIST.EDU
Subject: Re: Latest on PAV's?

For example:

Suppose you want PAVs to one 3390-9 volume for the /home directory. There
is one base address (3000) and three aliases (30FF, 30FE, 30FD).


In the user directory, dedicate all 4 devices to the linux guest:
DEDICATE 3000 3000
DEDICATE 30FF 30FF
DEDICATE 30FE 30FE
DEDICATE 30FD 30FD

In the Linux guest, make sure all the devices come up at boot time by
adding them to the dasd= parameter in zipl.conf (or whatever the
preferred method is for your distro).
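
On a 2.4-era distro like SLES-8, the dasd= kernel parameter line in
zipl.conf might look like the sketch below. Everything except the
3000/30FF/30FE/30FD device numbers from the example above is illustrative:
the section name, image paths, and the 0201 root device are assumptions,
not taken from this thread. Remember to rerun the zipl command after
editing so the new parameters are written to the boot record.

```
[linux]
    target = /boot/zipl
    image = /boot/image
    ramdisk = /boot/initrd
    parameters = "root=/dev/dasda1 dasd=0201,3000,30ff,30fe,30fd"
```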

Each device will appear to Linux as a separate entry in /dev, so it may
end up looking something like this:

device 3000 is /dev/dasdb
device 30FF is /dev/dasdc
device 30FE is /dev/dasdd
device 30FD is /dev/dasde

If you're using SLES-8, add /dev/dasdb1 as a PV, create a VG and then an
LV on it, then reboot. The LVM startup will see the other three /dev nodes
as extra paths to the same backing device, and you can then use the pvpath
command to enable the PAVs.
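
The SLES-8 steps might look like this on the guest - an illustrative
sketch only, since the commands need root access to real DASD; the
volume-group and logical-volume names and the size are made up:

```shell
pvcreate /dev/dasdb1                 # initialize the base device as an LVM physical volume
vgcreate homevg /dev/dasdb1          # build a volume group on it
lvcreate -L 6G -n homelv homevg      # carve out a logical volume
mkfs.ext3 /dev/homevg/homelv         # put a filesystem on it
# After a reboot, LVM sees /dev/dasdc1, /dev/dasdd1, and /dev/dasde1 as
# additional paths to the same PV; use the pvpath command (see its man
# page) to enable the PAV alias paths.
```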

If you're using RHEL-3 or RHEL-4, create a multipath software RAID device
on the four /dev nodes. The multipath software RAID tool knows how to use
four /dev nodes to access one device.
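
With mdadm, the multipath MD personality of that era can bind the four
nodes into one device. A sketch under the same assumptions (real DASD,
root access; the md device name and mount point are illustrative):

```shell
mdadm --create /dev/md0 --level=multipath --raid-devices=4 \
      /dev/dasdb1 /dev/dasdc1 /dev/dasdd1 /dev/dasde1
mkfs.ext3 /dev/md0                   # one filesystem reached over all four paths
mount /dev/md0 /home
```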

If you're using SLES-9, I think EVMS will do multipath for you, but I'd
have to get back to you with exactly how that would work.


When using PAV, VM does nothing for you, since you have to dedicate the
devices to the guest. It stays out of the way completely.

Jay Brenneman

Linux Test and Integration Center

T/L:       295 - 7745
Extern: 845 - 435 - 7745
[EMAIL PROTECTED]





James Melin <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
06/04/2005 09:36
Please respond to Linux on 390 Port

To: LINUX-390@VM.MARIST.EDU
Subject: Re: Latest on PAV's?





I presume, then, that if a PAV device is dedicated to a Linux guest, LVM
over all the PAVs is still what you do from Linux? Or does VM parallelize
the I/O for you, given that it is handling the I/O?


---- snip for brevity -----

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


