I defer to Joel on the physics of traditional tape, but this happens to be 
virtual tape. No moving parts (other than internal disk). As for desirable 
block size, we're just going by what the vendor has been telling us. 

.
.
JO.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
626-302-7535 Office
323-715-0595 Mobile
jo.skip.robin...@sce.com



From:   "Joel C. Ewing" <jcew...@acm.org>
To:     IBM-MAIN@LISTSERV.UA.EDU, 
Date:   08/21/2013 10:31 AM
Subject:        Re: Large BLKSIZE
Sent by:        IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU>



On 08/21/2013 10:53 AM, Skip Robinson wrote:
> We're having ongoing 'discussions' with our tape vendor over throughput 
> performance. Vendor is suggesting that we should be using modern man-size 
> blocks like 256K. I did some simple testing yesterday to satisfy myself 
> that--whatever it might take to super-size our tape file blocks--simply 
> adding 
> 
>    BLKSIZE=some-large-number 
> 
> to a DD card will not cause the creation of very large blocks. After 
> running such a job with an existing RYO program, the resulting BLKSIZE 
> was in fact 32K. No error messages, just no big blocks.
> 
> Am I right in asserting that, whatever benefit we might derive from 
> uber-blocks, we cannot get there by fiddling with JCL? 
> 
...
I thought all tape drives since 3490 always did hardware compression and
also created hardware "super blocks" on physical tape, making the
efficiency of physical tape writes independent of logical block size.
If the application and JCL allow for enough buffer blocks for the tape
data set, I would expect normal QSAM processing would send multiple 32K
logical blocks to the tape controller in a single I/O and come very
close to the channel efficiency that larger logical blocks would achieve
- perhaps with somewhat larger CPU usage for the additional buffer
handling, but that should have minimal impact on tape performance.

One thing at least in the past that would kill tape throughput on
high-density drives was if there was a bottleneck in getting bytes
through the tape controller fast enough to keep the tape physically
moving - from application processing delays, channel/controller
contention with other jobs or drives, whatever.  Once the tape had to
physically stop, the Inter-Block Gaps were so small by comparison with
tape stopping distance that slow-speed tape positioning was required
before writing could start for the next physical super-block.  If that
happened consistently, throughput was drastically degraded.

You might just try playing games with the number of buffers on the DDs.  I
think the default is five, but in the days of "slow" tape drives two was
frequently sufficient for tape I/O, and some JCL might still reflect that
practice.  If you want to allow for the possibility of 256K transfers
with 32K logical blocks, that suggests a minimum somewhere between 9 and
16 buffers.
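
For illustration only (the DD name, data set name, and DCB values here are
made up, and what your RYO program's DCB merge actually permits may differ),
bumping BUFNO on the tape DD might look something like:

   //* 256K transfer / 32K blocks = 8 blocks per I/O; roughly double
   //* that for continuous double buffering, hence BUFNO=16.
   //TAPEOUT  DD DSN=YOUR.TAPE.DSN,DISP=(NEW,CATLG),
   //            UNIT=TAPE,
   //            DCB=(RECFM=FB,LRECL=80,BLKSIZE=32720,BUFNO=16)

The idea is just to give QSAM enough buffers to chain multiple 32K blocks
into one channel program, not to change the logical block size itself.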

-- 
Joel C. Ewing,    Bentonville, AR       jcew...@acm.org 


----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN