To answer Tzafrir's comments, because they certainly
have clarified my thoughts on the issue.

Think extreme Linux! Dozens of LPARs, hundreds of VM
EC over several VM machines, thousands of gigabytes of
data and mix in MVS because that's what pays the
bills.

This is a bit long, but, then again, it's Friday
afternoon.

(First, I would have replied earlier today, but I was
using the Yahoo message editor and things kept
getting lost. I'll compose this, then give it to Yahoo
to munch).

"With LVM adding space is a simple matter. Does not
require any planning.
With the sub-partitions, you have to hope you'll have
a partition with
enough free disk space. Remember that you'll do this
moving when a
certain partition will be almost full (and maybe some
others), so some
extra disk space will be needed for the transfer
alone."

Actually, I use LVM extensively for my user
application data. Their backup procedures include
logging their transactions and periodic sequential
backups. Recovering their data, if a disk volume is
blitzed for some reason, is a matter of giving them a
clean "volume"; they restore their data and update it
from their logs. Pretty well known, straightforward
stuff - and the restore is NOT filesystem or device
dependent! If the volume is large enough, I put the
user on any dasd subsystem I need to, because the
underlying "bucket" size is not important.

Obviously, LVM has several advantages - I can give
them large volumes built over a bunch of small
buckets (3390-3), performance is better because it's
striped, and when I switch levels of Linux, they
can use their current LVM volumes. So, I don't want to
give the impression that I discount LVM
completely.
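
For illustration only - the device names are made up and
the exact flags depend on your s390-tools and LVM level -
pooling a handful of 3390-3 buckets into one big volume
group looks something like:

  # low-level format and partition each small 3390-3 bucket
  dasdfmt -b 4096 -y -f /dev/dasdb
  fdasd -a /dev/dasdb
  # ...repeat for /dev/dasdc, /dev/dasdd, and so on

  # turn them into LVM physical volumes and pool them
  pvcreate /dev/dasdb1 /dev/dasdc1 /dev/dasdd1
  vgcreate datavg /dev/dasdb1 /dev/dasdc1 /dev/dasdd1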

However, the best performance for striped data tends
to come from many stripes over many subchannels, so I
want to keep my buckets small and increase the number
of subchannels of a logical volume; hence I keep the
3390-3 size and spread the data over many volumes. All
PAVs really do is add subchannel exposures to a
volume. Without PAVs, you get the same effect with a
lot of small volumes.
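
As a quick sanity check (a sketch, nothing more), each
small volume shows up as its own device - and therefore
its own subchannel:

  cat /proc/dasd/devices    # one line per online dasd volume
  lsdasd                    # same idea, if your s390-tools has it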

The best dasd I/O performance I've seen is striped (re
LVM for Linux) over 16 volumes, 16 channels to a shark,
and each physical volume in a different LSS. Under MVS,
I can pretty much drive the data rate to the shark
maximum for E20 sharks. Under Linux, though that's where
I get the best performance, it seems to be limited by
software filesystem locking and how Linux handles I/O.
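
A sketch of carving such a striped logical volume out of
the pool - the stripe count, stripe size, and names are
only illustrative, and the group has to hold at least 16
physical volumes:

  # 16 stripes, 64 KB stripe size, one stripe per physical volume
  lvcreate -i 16 -I 64 -L 32G -n stripedlv datavg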

With that said, how does that affect performance on my
respacks? Well, except for /tmp, which can
get beat up pretty badly, it probably doesn't.

Rather, since I need good performance to the vast
majority of my volumes, I need 3390-3 or PAVs. As a
result, I need to keep the Linux respacks small,
because mixing large and small volumes gets to be a
headache at some point. And changing to larger volumes
is data destructive, thus time consuming (back up all
the data in the LSS, rebuild the LSS, restore the
data). Practically, that can take months to do, since
we have 3072 Linux volumes and you have to factor in
that the selfish users really like to use the machines
for themselves! Our MVS folks spent 3 months converting
a dasd subsystem from 3390-3 to 3390-9 volumes for this
reason. Whatever happened to the sysprog idea that
"users are overhead", anyway? :-)

So the size issue is really driven by user data
performance and the operational problems with mixed
size volumes.

This leads to the questions of building the respacks
and recovering damaged respacks.

Basically, from what Mark Post and others have said,
and based on my own experiences with damaged respacks,
they do get damaged and they do need to be recovered.
The damage is caused because the filesystem metadata
is kept in memory and may not be written to disk when
there is a crash, such as a kernel panic or other
catastrophic event. We were losing a lot of respacks
under ext2. This has eased since we changed to
reiserfs. I understand some prefer ext3, but the
distro we've been using prefers reiser. (Redhat forces
ext3, SuSE forces reiser - take your choice).
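
For what it's worth, switching a respack volume to
reiserfs is straightforward (the device name here is
hypothetical):

  mkreiserfs /dev/dasdb1    # journaled, so metadata is replayed on mount
  # after a crash, if the journal replay isn't enough:
  reiserfsck --check /dev/dasdb1
  reiserfsck --fix-fixable /dev/dasdb1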

Since we are a mixed OS shop, the operators are used
to punching the reset button for MVS; they try
this on Linux - MVS survives, Linux often does not!
Either you can get ulcers trying to train your
operators or you try to bulletproof your systems. I'd
rather bulletproof my systems. The most interesting
one I've seen is the 64-bit Linux with 5 GB of memory -
it was "idle" over the weekend and Monday morning the
user hit the reset button - the result was the respack
was completely unusable, even after fsck! Why?
Because the default bdflush cache dirty percent is so
high that metadata got lost! (I currently set this
value to zero to force writes as soon as possible.)
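
For the record, on a 2.4 kernel that knob lives in
/proc/sys/vm/bdflush, and the first field (nfract) is
the percentage of dirty buffers that wakes bdflush. A
sketch only - the remaining fields vary by kernel level,
so read your current line first and only change the
first value:

  cat /proc/sys/vm/bdflush
  # suppose it shows "30 500 0 0 500 3000 60 20 0";
  # write it back with nfract set to 0:
  echo "0 500 0 0 500 3000 60 20 0" > /proc/sys/vm/bdflush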

Be that as it may, I would really prefer my respacks
NOT to be LVM or RAID, for recovery reasons. They only
complicate the problem and expose more data to
possible damage. If the respack data needs to be
spread over several volumes, it appears to me the best
solution is to use real volumes.

That's why I'm leaning toward the multiple real volume
solution rather than LVM or RAID.
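
Concretely - the devices and mount points below are only
a made-up sketch - a respack spread over real volumes is
nothing more exotic than an fstab like:

  /dev/dasda1   /      reiserfs   defaults   1 1
  /dev/dasdb1   /usr   reiserfs   defaults   1 2
  /dev/dasdc1   /var   reiserfs   defaults   1 2
  /dev/dasdd1   /tmp   reiserfs   defaults   1 2

Each mount point sits on its own real volume, so a
damaged one can be recovered or replaced without
touching the others.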



=====
Jim Sibley
Implementor of Linux on zSeries in the beautiful Silicon Valley

"Computer are useless.They can only give answers." Pablo Picasso
