On Thu, 2011-05-12 at 15:10 +0100, Matt Oates (Home) wrote:
> On 12 May 2011 14:57, guerrier <[email protected]> wrote:
> > On Thu, May 12, 2011 at 9:48 AM, Matt Oates (Home)
> > <[email protected]> wrote:
> >> Is parallel a great idea here?
> >
> > That's my real question. I have a failing HDD, from which I would
> > like to move as much data as possible, as fast as possible.
>
> If it was me I'd chuck in a blank USB disk and use the 'dd' command
> to just verbatim copy the partition/disk over; this is about as quick
> as you are going to get. You can either do that to an image file on
> the USB disk (with or without compression) or, perhaps preferably,
> write the partition straight to the USB device and have the
> filesystem put straight on there. If it's a remote host it might be
> worth just doing scp -C, since it will skip all the per-file indexing
> that rsync does (not sure if rsync does this if the remote
> destination is empty anyway?). Parallel is likely to make a sick disk
> sicker at this point, as the parallel file reads will cause the heads
> to move all over the place! dd reads sequential blocks from the disk
> (ignoring the filesystem), so it's fast and limits the work the hard
> disk has to do.
>
> http://www.backuphowto.info/linux-backup-hard-disk-clone-dd
>
> Best of luck!
> Matt.
>
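For reference, the dd route Matt describes would look something like the
commands below. The device names are placeholders: double-check which
disk is which with fdisk -l before running anything, because dd will
happily overwrite the wrong drive.

  # Clone the whole failing disk to a blank disk, block for block.
  # noerror keeps dd going past read errors; sync pads failed reads
  # with zeros so the offsets in the copy stay aligned.
  dd if=/dev/sdX of=/dev/sdY bs=64K conv=noerror,sync

  # Or image it to a compressed file on the USB disk instead
  # (paths here are just examples):
  dd if=/dev/sdX bs=64K conv=noerror,sync | gzip -c > /mnt/usb/disk.img.gz

  # Remote copy with compression, as Matt suggests (host is made up):
  scp -C -r /mnt/data user@remotehost:/backup/

That said, on a drive that is already failing I would reach for GNU
ddrescue rather than plain dd: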
http://www.gnu.org/software/ddrescue/ddrescue.html

Note that this is not the same tool as "dd_rescue": as far as I can
tell, GNU ddrescue forked or reimplemented that much earlier tool
(which still sticks around in the Debian repos for some reason) and is
superior in every way.
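A minimal sketch of a two-pass rescue (device names are placeholders
again; the last argument is ddrescue's logfile, which records what has
been recovered so you can interrupt and resume):

  # First pass: grab the easily readable areas quickly and skip
  # anything that errors (-n = no-split; -f forces writing to a
  # block device).
  ddrescue -f -n /dev/sdX /dev/sdY rescue.log

  # Second pass: go back to the bad areas with direct disc access
  # (-d) and retry each bad sector up to 3 times (-r3).
  ddrescue -d -f -r3 /dev/sdX /dev/sdY rescue.log

--
Ethan Baldridge <[email protected]>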
