On 8/21/05, Todd Walton <[EMAIL PROTECTED]> wrote:
> On 8/20/05, Carl Lowenstein <[EMAIL PROTECTED]> wrote:
> > # dd if=/dev/hdb1 | gzip -c > /mnt/60gbPartition/hdb1.gz
>
> Cool. So I'm using "dd if=/dev/hdb1 | bzip2 -c | wc -c" right now.
> Using gzip I got it down to 78 GB, which is 18 GB too many. So far,
> bzip2 is taking over three times as long as gzip to compress, which
> is expected.
>
> But I found out that I don't *have* to get my 124 GB down to 60 GB.
> I can use dd's seek and skip options to split the 124 GB into smaller
> chunks, putting part of it on the 60 GB disk and part on another
> disk. I've read dd's man page many times, but I never made that
> connection. This is good to know; I may need it.
>
> Another tip I got was that passing bs=1M to dd might make compression
> a little better. I doubt that will matter much in the end though.
> But if bzip2 doesn't get it down far enough, I'll give it a try. Can
> anyone comment on this block size thing?
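
For the splitting with seek and skip that you mention: something along
these lines ought to work (untested; the 60000-block boundary and the
/mnt/otherdisk mount point are just placeholders for whatever fits your
disks). With bs=1M, count and skip are in megabyte units, so the first
command compresses roughly the first 60 GB of the partition and the
second compresses the rest:

# dd if=/dev/hdb1 bs=1M count=60000 | gzip -c > /mnt/60gbPartition/hdb1.part1.gz
# dd if=/dev/hdb1 bs=1M skip=60000 | gzip -c > /mnt/otherdisk/hdb1.part2.gz

To restore, write each piece back at the matching offset; seek does on
the output side what skip does on the input side:

# gzip -dc /mnt/60gbPartition/hdb1.part1.gz | dd of=/dev/hdb1 bs=1M
# gzip -dc /mnt/otherdisk/hdb1.part2.gz | dd of=/dev/hdb1 bs=1M seek=60000
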
I don't think the block size will affect the compression. As I
remember, gzip at least does its own fixed-size input buffering, so
the size of the blocks dd hands it doesn't change what comes out.
Larger block sizes may speed things up, however, if you are limited by
disk I/O. Some operating systems might read only 512 bytes per disk
revolution if the default dd(1) block size of 512 bytes is used. A
larger block size will increase the amount of data read per disk
revolution, up to some upper limit determined by available memory.
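
If you want to see whether it matters on your hardware before
committing to the full run, time the same amount of data both ways
(the count values are just an example; each command reads exactly
1 GB, and wc stands in for the compressor so you are measuring only
the reads -- the skip on the second command makes it read a different
gigabyte, so the first run's data isn't sitting in the buffer cache):

# time dd if=/dev/hdb1 count=2097152 | wc -c
# time dd if=/dev/hdb1 bs=1M count=1024 skip=1024 | wc -c

If the second one is noticeably faster, add bs=1M to the real
pipeline; if not, the default block size is already keeping the disk
busy.
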
carl
--
carl lowenstein marine physical lab u.c. san diego
[EMAIL PROTECTED]