On Fri, Aug 18, 2017 at 8:39 PM, Antonio Diaz Diaz <anto...@gnu.org> wrote:
> Ole Tange wrote:
>>
>> It looks like Ketil is right: I get ~50 MB of tar pit and ~150 MB of
>> smooth sailing, then ~50 MB of tar pit again. The drive is an
>> HTS545050B9A300 with 250 GB/platter and 4 heads according to
>> https://www.hgst.com/sites/default/files/resources/TS5K500B_DS_final.pdf
>>
>> Given that I have a finite amount of time, what is the most efficient
>> way of getting all the data that is fast to copy, while skipping the
>> data that is in the tar pit?
>
> As ddrescue is currently scraping, and -a only works during the copying
> phase, maybe a combination of --try-again and -c1 (perhaps with a large
> --skip-size if the drive is not returning errors) could solve your problem:
>
>   ddrescue --try-again -c1 --skip-size=50MB

That still keeps me in the tar pit. I get read rates of 1.7 KB/s.

It seems more and more clear to me that I can rescue almost the whole
drive if I am willing to wait for 125 GB to be read at 1 KB/s. But
that is not an option for me: 125 GB at 1 KB/s is about 125 million
seconds, i.e. roughly four years.

But I would really like to get all of the 75% that I can read at 3000 KB/s.

I can walk forwards through the drive by:

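# Probe every 10 MB offset with a 10-second run; seq counts in MB and
# {}000000 turns each value into a byte offset (10 MB .. 500 GB).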
seq 10 10 500000 | sudo parallel -uj1 \
  'timeout 10 ddrescue -d -r5 -T10 -i {}000000 /dev/sdd disk.img disklog'

When this hits a good section, it reads more than 10 MB during the 10
seconds, thus recovering the good section from that start byte
onwards. But I will still miss up to 10 MB at the beginning of each
good section, and I will still waste some time in the tar pits.

If I can somehow do the same backwards, I will be able to read those
(up to) 10 MB blocks as well.
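
Something like this mirrored sweep might do it (a sketch, assuming -R
makes each run start at the end of the rescue domain given by -i and
-s and read backwards, and that the shared mapfile makes it skip what
is already rescued):

# Sweep the same 10 MB windows, but read each one from its top downwards.
seq 0 10 499990 | sudo parallel -uj1 \
  'timeout 10 ddrescue -d -r5 -T10 -R -i {}000000 -s 10000000 /dev/sdd disk.img disklog'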

Can I say: Start at byte 100G and read backwards?

It seems that when I use -R, it always starts from 500 GB (the end of
the drive).
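
A minimal sketch of what I have in mind, assuming -s 100GB limits the
rescue domain to the first 100 GB so that -R starts its reverse pass
there instead of at the end of the drive:

# Reverse-copy the first 100 GB, starting at ~100 GB and working back.
sudo ddrescue -d -R -s 100GB /dev/sdd disk.img disklog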


/Ole

_______________________________________________
Bug-ddrescue mailing list
Bug-ddrescue@gnu.org
https://lists.gnu.org/mailman/listinfo/bug-ddrescue
