I guess my first question would be: where is the bottleneck for the backups? 
Disk I/O, tape I/O, or CPU?  

Presumably disk I/O didn't change a whole lot for reading the data, or you would 
have seen impacts elsewhere too.  My guess would be similar for the CPU.  
However, if either is a factor, it's possible it's simply more noticeable during 
backups than during your normal processing.

But my guess would be that it's writing to the tape.  You didn't mention what 
kind of tape subsystem you're writing to, but everything has its throughput 
limit, and by pushing more (uncompressed) data down the channel, you may have 
hit the throughput limit of the subsystem.  If you're going down ESCON channels 
to the tape, those aren't terribly fast by modern standards, and more data on 
the channel means slower elapsed time.
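As a back-of-envelope sketch of that "more data on the channel = slower elapsed 
time" point: the channel rate and data sizes below are illustrative assumptions 
(the ~17 MB/s figure is a commonly cited practical ESCON rate), not measurements 
from anyone's system.

```python
# Elapsed channel time grows linearly with the bytes actually pushed down
# the channel.  All figures here are illustrative assumptions.

ESCON_MB_PER_SEC = 17.0  # commonly cited practical ESCON rate (assumption)

def channel_seconds(data_mb: float, channel_mb_per_sec: float = ESCON_MB_PER_SEC) -> float:
    """Seconds the channel is busy moving data_mb megabytes of data."""
    return data_mb / channel_mb_per_sec

# Same logical 100 GB backup, sent compressed (assume 2.5:1 on disk)
# versus uncompressed:
logical_mb = 100 * 1024
compressed_s = channel_seconds(logical_mb / 2.5)
uncompressed_s = channel_seconds(logical_mb)

print(f"compressed:   {compressed_s / 60:.1f} min on the channel")
print(f"uncompressed: {uncompressed_s / 60:.1f} min on the channel")
```

Under those made-up numbers the uncompressed backup ties up the channel 2.5x 
as long, which is the kind of elapsed-time swing that shows up once the channel 
is the bottleneck.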

I discovered a similar situation with our DB2 log archives earlier this year.  
The interesting part was that I didn't initially recognize we were hitting the 
throughput limit, because the data in the logs is compressed and the quoted 
throughput limits seem to assume you're sending uncompressed data to the 
subsystem.  Of course, you are now sending uncompressed data to the subsystem, 
but the tape subsystem's compression ratio is likely different from what you 
get on disk from either SMS or BMC.  
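That "quoted limits assume uncompressed data" effect can be sketched 
numerically.  The idea: if the limit applies to physical bytes entering the 
subsystem, then data compressed upstream carries more logical work per physical 
byte.  The rate and ratios below are made-up assumptions for illustration.

```python
# If the quoted throughput limit is on physical bytes arriving at the
# subsystem, the *logical* (user-data) rate you perceive depends on how
# much compression was already applied upstream.  Numbers are assumptions.

def effective_logical_rate(quoted_mb_per_sec: float, upstream_ratio: float) -> float:
    """Logical MB/s of user data moved when the data arrives already
    compressed upstream_ratio:1, while the subsystem still only accepts
    quoted_mb_per_sec physical MB/s."""
    return quoted_mb_per_sec * upstream_ratio

# Pre-compressed log data at an assumed 3:1 vs. raw data at 1:1,
# against an assumed 17 MB/s quoted physical limit:
print(effective_logical_rate(17.0, 3.0))  # logical MB/s, compressed upstream
print(effective_logical_rate(17.0, 1.0))  # logical MB/s, uncompressed
```

That mismatch is why the limit was easy to miss with compressed log data: the 
logical throughput looked three times higher than the quoted figure.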

As I was looking at this, I also discovered that for my test jobs there was no 
significant difference between using 24K and 256K blocks.  YMMV.  

If you're interested, I wrote up that experience for one of my "What I 
Learned This Month" columns for MeasureIT:
http://www.cmg.org/measureit/issues/mit80/m_80_5.pdf

Scott Chapman

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html