On Thu, 16 Jun 2005 18:23:30 -0700
Zac Medico <[EMAIL PROTECTED]> wrote:

> Bob Sanders wrote:
>
> This method looks interesting.  I found a quote from Linus Torvalds saying 
> dump can misbehave if there are dirty buffers.  Has anyone experienced that?
> 

I haven't used dump in five or more years, but here are some things to keep
in mind (a rough command sketch follows the list) -

        This technique runs everything through memory, so a quiet system is
        needed.  No ripping cds in the background, no compiling, no letting
        batch jobs run, etc.

        For xfsdump, networking needs to be up enough to set the hostname.
        Don't ask, it's always been that way.

        It's best to run a repair on the new disk after completion - unmount
        it, then do a file system check.  For xfs, run xfs_repair.
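
For reference, here's a minimal sketch of the kind of pipeline being
discussed, assuming the old filesystem is mounted at /mnt/old, the new one
at /mnt/new, and the new disk is /dev/hdb1 (all names hypothetical - adjust
for your setup):

        # Dump the source filesystem to stdout and restore it onto the
        # new one.  -J skips the xfsdump/xfsrestore inventory update,
        # which isn't needed for a one-shot copy like this.
        xfsdump -J - /mnt/old | xfsrestore -J - /mnt/new

        # Then unmount the target and check it.  -n makes xfs_repair
        # report problems without changing anything; drop it to repair.
        umount /mnt/new
        xfs_repair -n /dev/hdb1

Run it as root, and remember the quiet-system caveat above.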

>  http://www.geoffholden.com/content/presentations/Backups/
> 
> How about benchmarks?  Has anyone seen benchmarks of dump vs. partimage vs. 
> tar vs. rsync vs. cp?  That would be interesting.
>

Why?  The task is to move the data from one partition to a new
disk/partition.  Getting it done reliably, in a repeatable, sane manner, is
more important than speed.  With something like dump/xfsdump, the limiting
factor is drive i/o or the disk channel, depending upon the setup.  Also, a
journaling filesystem imposes a certain amount of overhead, and disk writes
are going to be the bottleneck.  The only question is - which keeps the
target's disk buffer full?
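
One answer: if either side of the pipe stalls, put a buffer in the middle so
the writer always has data queued.  A sketch using mbuffer, assuming it's
installed (the 128M buffer size is an arbitrary pick, not a tuned value):

        # Same copy as above, but with a 128 MB buffer between dump and
        # restore, so brief stalls on either end don't starve the writes.
        xfsdump -J - /mnt/old | mbuffer -m 128M | xfsrestore -J - /mnt/new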

The max transfer rate can be estimated from the hard drive's sustained
sequential write performance - its max speed - minus the overhead of the
file system.  The assumptions are that dma is used, the drives are on
different controller channels, and memory is sufficient.
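
To put rough numbers on it (illustrative figures, not measurements): a drive
that sustains 50 MB/s of sequential writes, less say 10-15% for filesystem
and journal overhead, nets out around 42-45 MB/s.  At that rate a 20 GB
partition copies in about 8 minutes:

        20 GB * 1024 MB/GB / 42 MB/s =~ 488 s =~ 8 minutes

If the copy runs much slower than that, something else - a shared channel,
no dma, a busy system - is the bottleneck.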

Bob 
-- 
gentoo-user@gentoo.org mailing list
