I must be working too much... I can't see where I took the DDS thing from... O_o Sorry, now I see it's a disk-only operation. >X(
 Anyway, you could try the dd thing; something like
# dd if=mysql-m.tgz | tar -zxvf -
 and see if it makes any difference.
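 To actually see the difference, you could time both forms; something like this (bs=64k here is just a guess to play with):

# time tar -zxvf mysql-m.tgz
# time dd if=mysql-m.tgz bs=64k | tar -zxvf -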
There was a relatively recent thread in April/May dealing with poor read/write results on the 5.x branch... the primary "target" was RAID, but there's a bunch of possible tuneups that may also apply to IDE and SCSI disks. Have you tried 'em?
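In case it helps, a few knobs that usually show up in those discussions (purely illustrative, the device name is just an example; the thread has the real details):

# sysctl hw.ata.wc          # ATA write caching on/off
# sysctl vfs.read_max       # filesystem read-ahead, in clusters
# camcontrol tags da0 -v    # SCSI tagged command queueing info for da0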
 Sorry again for the blatant misunderstanding.
Tulio G. Silva


Tulio Guimarães da Silva wrote:

Hi,
I've got the same kind of problem, not only with DDS-[234] tapes, but also with "all-powerful-with-bells-and-whistles" AIT-3 units, with controllers ranging from Adaptec stock 2940 to PCI-X Ultra-320... almost the same results. The problem seems to lie in tar itself; I read it has something to do with block sizes, but using -b with larger values got me little more than corrupt or incomplete data. :( The only way I got decent transfer rates AND reliability was to filter *archiving* through dd, including block sizes. For example, to archive:

 # tar -zcpf - /usr/local | dd of=/dev/sa0 bs=64k

and to restore:
 # tar -b 128 -zxvf /dev/sa0
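 If you want the restore to go through dd as well, the mirror of the archive command above (same /dev/sa0 and 64k block size) would be:

 # dd if=/dev/sa0 bs=64k | tar -zxvf -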

The above is particularly true for remote transfers; if you're using tar over rsh/rmt (-f host:/path), you'll surely prefer a simple "rsh/tar" pipe with output redirection. ;) (There's a sketch of that below.) Note that tar counts block sizes in 512-byte units, while dd block sizes can be given in kilobytes or even megabytes. Besides speed, there's a noticeable gain in storage space when using dd-block-sized transfers. The apparent reason is that tar actually uses -b 20 (10 KB) blocks, while 10 GB+ tapes usually expect larger sizes.

For AIT-3, I didn't notice any real improvement past 128 KB block sizes; I didn't experiment enough with DDS-4 because our test tape drive had a heart attack and quit... BTW, it came back 2 weeks ago and I haven't given it any attention; it may be a little depressed by now, so I guess I'll return it to the test beds. :) Anyway, I wouldn't try anything lower than 32 KB blocks on it. I can't remember whether I ran any test pointing to /dev/null, as Mr. Hartland suggested, nor from /dev/zero or /dev/random... it's worth a try.
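For the remote case above, something along these lines should do it (the hostname is only an example; the tape device is the same /dev/sa0 as before):

 # tar -zcpf - /usr/local | rsh tapehost dd of=/dev/sa0 bs=64k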
 I'll post new results as soon as I get 'em (if any). ;)
 Good luck,

Tulio G. da Silva

Steven Hartland wrote:

Might be silly, but do you get similar results if you:
1. expand to a memory-backed disk
2. expand to /dev/null
(a rough sketch of both follows)
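For the memory-backed disk on 5.x, something like this (sizes, device and archive names are just examples):

# mdconfig -a -t swap -s 512m      # creates a swap-backed memory disk, e.g. md0
# newfs /dev/md0
# mount /dev/md0 /mnt
# time tar -zxf archive.tgz -C /mnt
# time tar -zxf archive.tgz -O > /dev/null    # the /dev/null variant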

   Steve
----- Original Message ----- From: "JG" <[EMAIL PROTECTED]>

I had to unpack a lot of tar archives and I occasionally noticed terribly
bad performance on FreeBSD 5.

