OK, I am going to recap here what I am hearing, so that anyone can point
out any flaws.

1) VSE is going to queue the I/O itself, so simply converting everything
from full packs to minidisks and adding PAV is not going to gain me
anything.

2) A way to trick VSE into not queuing the I/O would be to take my full
pack and, instead of making it a single minidisk, carve it into (say)
three minidisks. VSE would then see three separate devices and would not
queue I/Os across them, allowing VM to do its PAV magic. The problem I
see with this is that with our predominantly sequential processing, VSE
is still probably going to drive each of the three minidisks on the
physical volume serially, one after the other, with the likely end
effect of not buying me anything.
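For anyone following along, here is a sketch of what that three-way
split might look like in the z/VM user directory. The device numbers,
cylinder ranges, and the volser VSE001 are made up for illustration;
the ranges assume a 3390-3 (3339 cylinders, with cylinder 0 left for
the label):

```
* One full-pack minidisk covering the whole volume:
MDISK 0200 3390 0 END VSE001 MW
*
* ...versus the same volume carved into three minidisks,
* each of which VSE sees as a separate device and queues
* I/O against independently:
MDISK 0200 3390 0001 1112 VSE001 MW
MDISK 0201 3390 1113 1112 VSE001 MW
MDISK 0202 3390 2225 1114 VSE001 MW
```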

3) The most promising performance increase, especially for a sequential
read workload such as ours, would be to convert the full packs to
minidisks and use spare memory (which we do have) to run a decently
large (800MB-1GB?) minidisk cache against them. This should, however,
be reality-checked by measuring read/write ratios and cache hits by
device, and then turning the cache off for any volumes that are not
getting any benefit.
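To make that reality check concrete, here is a minimal sketch of the
per-volume decision. The counters are made-up numbers standing in for
whatever the monitor or QUERY MDCACHE output actually reports, and the
20% hit-ratio threshold is purely an assumption to illustrate the cut:

```python
# Hypothetical per-volume I/O counters; the volsers and numbers are
# invented for illustration, not real measurements.
volumes = {
    "VSE001": {"reads": 90_000, "writes": 10_000, "mdc_hits": 54_000},
    "VSE002": {"reads": 20_000, "writes": 80_000, "mdc_hits": 1_000},
    "VSE003": {"reads": 50_000, "writes": 50_000, "mdc_hits": 2_500},
}

def cache_report(vols, min_hit_ratio=0.20):
    """Flag volumes whose cache hit ratio on reads falls below the
    (assumed) threshold, i.e. candidates for turning the cache off."""
    report = {}
    for volser, c in vols.items():
        total_io = c["reads"] + c["writes"]
        read_ratio = c["reads"] / total_io if total_io else 0.0
        hit_ratio = c["mdc_hits"] / c["reads"] if c["reads"] else 0.0
        report[volser] = {
            "read_ratio": round(read_ratio, 2),
            "hit_ratio": round(hit_ratio, 2),
            "keep_cache": hit_ratio >= min_hit_ratio,
        }
    return report
```

With the sample counters above, the read-heavy volume keeps its cache
and the write-heavy, low-hit volumes get flagged for having it turned
off.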

4) PAV and minidisk cache don't play nicely together, so since MD cache
may benefit me and PAV likely will not, I should forget PAV for now,
though it may be worth revisiting after future system updates.

Thank you for your time so far, Eric, Catherine, Kris, Rob, Bill, and
Dietltiens.
