Steve Austin wrote:
> On Sun, 29 Nov 1998, Michael Meissner wrote:
>
> > That may be, however I find I can't reliably write 8k blocks on my WangDat 3800
> > DDS-2 drive, and read them back on the same drive during my dump cycle (so I
> > use 512 byte blocks for the nonce). Also, I believe Linux imposes a limit of
> > 32k without modifying the kernel.
> >
> > --
> > Michael Meissner, Cygnus Solutions (Massachusetts office)
Something doesn't sound quite right here, and I doubt it's a drive problem.
> The problem may be to do with your SCSI adaptor or the drive itself; I
> have been successfully reading and writing 128K blocks on an HP C1599a via
> an Adaptec 2940 without difficulty (other than a hardware fault in the
> drive itself) and without modifying the 2.0.32 kernel I've been using.
I would have to agree here with the above email. By default, the kernel
imposes a 32k limit, which you can easily modify by recompiling the st
module. dd and tar might accept larger block sizes, but the lower-level
driver will break the transfer up into 32k chunks.
> I use large blocks to maximise tape capacity by reducing the number of
> interblock gaps and giving the on-drive compression a fair size chunk of
> data to compress, which I believe more likely to yield good compression
> ratios than small blocks.
I don't think this really matters. As long as the block size doesn't change
during the dump (and it's not ridiculously low, like "16 bytes"), the compression
should not be affected.
Alright, skip the rest if you're familiar with SCSI and DDS tape drives ... but
here's my quick lecture:
I think the "block size" option of utilities such as dd and tar further confuses
the SCSI block size concept. The SCSI write command can be "fixed" or
"variable"; if the drive is currently in variable mode, it will return an error
if the software sends a write command with the fixed bit set -- the software
will first have to set the block size, which by default is usually 512 bytes.
If a fixed command is sent to the drive, the count field in the command indicates
how many records of length X (the fixed block size) the drive should write; the
count field in a variable write command indicates the length of the one record
that will be written. Thus, the "block size" in fixed write commands is the
fixed block size, and in variable write commands, it's the length of the
record to be transferred. The total amount of data transferred per command is
really what the drive cares about. It doesn't matter whether the write command
is fixed or variable, as long as the total transfer of data per command is
reasonably big. With newer and faster drives, the total transfer will have to be
pretty big to keep them streaming. If you want to transfer 128k per command
in fixed mode, and the fixed record length is 512 bytes, then each command
will have to send a count of 128k/512. In variable mode, the count will be
128k -- the length of that one record.
What I've noticed is that the block size concept in dd and tar is completely
detached from the SCSI block size. If you tell tar that you want the block
size to be 128k and it doesn't complain, it doesn't really mean that you are
transferring 128k per SCSI command. The kernel limit of 32k is really the
limit of the total transfer per SCSI command, so if tar thinks it's writing 128k
"block sizes", someone else down the driver chain breaks up the transfer into
32k chunks.
I have a Sony SDT-7000 drive, and to keep it streaming I had to raise the
limit to 128k; only then did the tar "block size" do what I wanted it to do.
DDS drives support what's called a read-after-write (RAW) feature. The
head-drum-assembly has 2 write heads and 2 read heads, so the drive
reads back the data some number of degrees after writing it. If "too many"
errors are encountered, the drive will re-write the same data _without_
repositioning. The 4GB capacity rating of a DDS2 drive takes into
account some RAW -- but not excessive RAW rates. If the RAW rate
is high, the capacity can be severely reduced. This is one reason for
reduced capacities. Another reason is repositioning. Each time the drive
repositions and does an "append", it might leave behind amble frames (which
don't contain any user data). Furthermore, depending on the vendor, the drive
might not append inside the last DDS data group when repositioning, so on
average the group will be half empty.
I can't stress enough how important it is to make sure your drive is
streaming as much as possible. If you do heavy backups and the drive
is constantly repositioning, you'll get reduced capacities, you'll wear out the
tapes faster, and ultimately you'll wear out the drive faster.
Last, the drive keeps logs of quite a few things, including the RAW rate. These
logs can be obtained using the Log Sense command, but none of the unix
utilities will dump all the logs. stt at http://www.netcom.com/~popov will do that
and much more, but as I've said before, I've only tested it on my 7000 drive, so
you might have to get the SCSI spec of your particular drive and tweak stt --
most likely you won't have to, though. If the drive reports the RAW rate correctly,
you can do a quick calculation using the DDS group size and the RAW rate and
come up with roughly the same capacity as the amount of data you were able to
dump (assuming no hardware compression).
By the way, the reported DDS capacities are not in computer K bytes -- 2GB
really means 2,000,000,000 bytes.
Pete
-
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to [EMAIL PROTECTED]