Hello all,

I am in the early stages of recovering deleted files from a 500GB external
drive. The task at hand is imaging the drive, then data carving the lost
files. Unfortunately, only about 20GB of the 500 was ever used, so I've got
an Easter egg hunt on my hands. I'm imaging the 500GB damaged external drive
to a 1.5TB external drive. I've noticed that after writing the first
30-50GB, the transfer rate drops from ~10MB/s to ~500KB/s (sometimes
dipping below 100KB/s). Both drives are only a couple of months old and
perfectly healthy.

I think the slowdown has nothing to do with either drive. Instead, I
suspect that once the image file reaches a certain size, the transfer rate
necessarily drops so the recovery drive can write and manage such a large
file. I have no proof, just a hunch, and I don't know the technical reasons
either. So instead of creating a single gigantic 500GB image file, I'd like
to create ten 50GB files. Since I'm doing data carving anyway, I figured
why not?

How do you do this in ddrescue?

I started with the following command:
sudo ddrescue -v -n /dev/sdb /media/recoverydrive/01.img \
    /media/recoverydrive/01.log

and stopped the process once 01.img reached 50GB.
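
In hindsight, I suppose ddrescue's -s/--size option could have stopped the
copy at 50GB for me instead of my killing the process by hand. If I'm
reading the manual right, that would be something like:

sudo ddrescue -v -n -s 50GB /dev/sdb /media/recoverydrive/01.img \
    /media/recoverydrive/01.log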

Then, I used the following command:
sudo ddrescue -v -n -i 50GB /dev/sdb /media/recoverydrive/02.img \
    /media/recoverydrive/02.log

This certainly seems to address the speed problem, as I'm back up to
~10MB/s; however, the 02.img file begins as a 50GB file and grows from
there. I expected it to start at size 0. What did I do wrong?
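
My only guess after rereading the man page: the output position (-o)
apparently defaults to the input position (-i), so with -i 50GB ddrescue
seeks 50GB into 02.img before it starts writing. If that's right, adding
-o 0 should make the chunk start at offset zero; untested, but something
like:

sudo ddrescue -v -n -i 50GB -s 50GB -o 0 /dev/sdb \
    /media/recoverydrive/02.img /media/recoverydrive/02.log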

Again, I just want to chop up the 500GB into more manageable 50GB chunks.
How would you do this?
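
To spell out what I'm hoping for, here's an untested sketch of the whole
run as a shell loop, assuming -i, -s, and -o behave as described above
(the paths are just my setup):

# Untested sketch: image /dev/sdb in ten 50GB chunks.
# -i = input position, -s = bytes to copy per chunk,
# -o 0 = start writing at offset zero of each chunk file.
for n in $(seq 0 9); do
    part=$(printf '%02d' $((n + 1)))   # 01 .. 10
    sudo ddrescue -v -n -i $((n * 50))GB -s 50GB -o 0 \
        /dev/sdb \
        "/media/recoverydrive/${part}.img" \
        "/media/recoverydrive/${part}.log"
done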

Please advise,

James