On Sun, 29 Nov 1998, Popov wrote:
>
>
> Steve Austin wrote:
>
> > The problem may be to do with your SCSI adaptor or the drive itself; I
> > have been successfully reading and writing 128K blocks on an HP C1599a via
> > an Adaptec 2940 without difficulty (other than a hardware fault in the
> > drive itself) and without modifying the 2.0.32 kernel I've been using.
>
> I would have to agree with the above email. By default, the kernel
> imposes a 32k limit, which you can change by modifying and recompiling
> the st module. dd and tar might accept larger block sizes, but the
> lower-level driver will break the transfer up into 32k chunks.
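For anyone wanting to try that, the 32k figure appears to come from the
buffer-sizing constants compiled into the st driver. I haven't checked
every kernel version, so treat the names and location below as
illustrative only; in the 2.0-era sources I've looked at they are
roughly:

    /* drivers/scsi/st.c (st_options.h in later trees) -- illustrative */
    #define ST_BUFFER_BLOCKS          32   /* driver buffer: 32 x 1K = 32K */
    #define ST_WRITE_THRESHOLD_BLOCKS 30   /* flush once 30K is queued     */

Raising ST_BUFFER_BLOCKS (to 128, say) and rebuilding the st module is
the modification being described above.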
In combination, the Adaptec/HP system I'm using seems to prefer quite
small data transfers: when I've run a detailed trace from the low-level
driver, it splits the written 128K blocks into pieces as small as 1K for
transfer across the SCSI bus. This is probably an effect of timing
events in the main CPU and the SCSI subsystem, leading to small
transfers of whatever data is ready to go when the bus and the drive
become available for a synchronous write operation. However, the block
numbers reported by the drive via 'mt tell' are consistent with the
number of 128K blocks assembled by dd, suggesting the drive uses its
large buffers to reassemble the big (variable size) blocks I write
before committing them to the tape.
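In case it's useful, the check I'm doing is nothing more than the ioctl
behind 'mt tell'. A minimal sketch (the device name is only an example):

    /* Minimal sketch of what 'mt tell' does: ask the drive for its
     * current block number via MTIOCPOS.  /dev/nst0 is an example. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/mtio.h>

    int main(void)
    {
        struct mtpos pos;
        int fd = open("/dev/nst0", O_RDONLY);

        if (fd < 0 || ioctl(fd, MTIOCPOS, &pos) < 0) {
            perror("tell");
            return 1;
        }
        printf("block %ld\n", pos.mt_blkno);
        close(fd);
        return 0;
    }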
> I don't think this really matters. As long as the block size doesn't change
> during the dump (and it's not ridiculously low, like "16 bytes"), the compression
> should not be affected.
You're probably right. The drive claims to use Lempel-Ziv compression,
which shouldn't be affected by block size, only by the data patterns
inside the block. However, this may depend on how thoroughly the
compression algorithm is implemented.
> What I've noticed is that the block size concept in dd and tar is completely
> detached from the SCSI block size. If you tell tar that you want the block
> size to be 128k and it doesn't complain, it really doesn't mean that you are
> transferring 128k per SCSI command. The kernel limit of 32k is really the
> limit of the total transfer per SCSI command, so if tar thinks it's writing
> 128k "block sizes", someone else down the driver chain breaks the transfer
> up into 32k chunks.
The blocking performed by dd and tar is indeed totally detached from
anything else, inasmuch as all they do is assemble a block of the
required size and write it to the OS with a single write call. Anything
that happens after that has nothing to do with tar or dd. The (SCSI-2)
HP drive uses variable block size by default, and seems quite happy to
keep requesting data after a write command until the amount of data
required to fulfil that command has been sent. The upper size limit
used internally by the driver seems somewhat irrelevant, as long as it
correctly sends the block size to the drive with the write command.
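To be concrete about the "single block write call": with the drive in
variable-block mode, the size of each write() is the block size the
drive sees, which is effectively what dd with bs=128k ends up doing. A
rough sketch (device name and block size are examples only):

    /* Rough sketch: put the tape in variable-block mode and write one
     * 128K block with a single write() call.  /dev/nst0 and the 128K
     * figure are examples only. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/mtio.h>

    #define BLK (128 * 1024)

    int main(void)
    {
        struct mtop op = { MTSETBLK, 0 };   /* 0 = variable block size */
        char *buf = malloc(BLK);
        int fd = open("/dev/nst0", O_WRONLY);

        if (fd < 0 || !buf || ioctl(fd, MTIOCTOP, &op) < 0) {
            perror("setup");
            return 1;
        }
        memset(buf, 0, BLK);
        /* One write() of BLK bytes becomes one tape block; how the HBA
         * chops it up on the SCSI bus is up to the driver. */
        if (write(fd, buf, BLK) != (ssize_t) BLK)
            perror("write");
        free(buf);
        close(fd);
        return 0;
    }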
>
> I have a Sony SDT-7000 drive and to keep it streaming I had to modify the
> limit to 128k and only then did the tar "block size" do what I wanted it to do.
>
I gave up on that. There are big areas of my disks that contain little
data except directory entries, and tar is just unbelievably slow when
processing these, writing next to no data to the archive for quite long
periods of time. For optimum throughput everything obviously needs to be
in large regular files, which dump rapidly via tar; for big heaps of
symbolic links and special files, which produce little output, you
probably have to live with the poor tape usage.
--------------------------------------------------------------------
Steve Austin [EMAIL PROTECTED]
http://www.edensfld.demon.co.uk for a really bad time.
-
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to [EMAIL PROTECTED]