Antonio,

Hi, I love the awesome work on ddrescue. I have a question that I have been
unable to figure out or test sufficiently. I work in PC repair, and we do
quite a bit of data rescue. We have noticed random spikes in NTFS-3g
processor usage when using an NTFS drive as the repository for the images.
We had actually been noticing this for a while, but only recently pegged it
to NTFS-3g. I looked at their support page, and they mention block size and
sparse writes as issues when using dd, so I assume similar issues apply
here. We switched to ext3 as the storage volume for the images and noticed
a marked improvement.
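
To give a concrete idea of what I mean on the sparse-write side, here is a
rough sketch (device and mount paths are placeholders, and I am assuming
the -p/--preallocate option is available in the ddrescue build; skipping
-S/--sparse should avoid sparse output altogether):

  # Preallocate the image up front and do not write it sparsely, so
  # NTFS-3g does not have to extend the file piecemeal
  ddrescue -p /dev/sdb /mnt/ntfs/image.img /mnt/ntfs/rescue.log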

With all that said, I know there would be issues with doing data recovery
at large block sizes (a higher rate of missed data, etc.), but I am
wondering whether the asynchronous switch could help with this, or perhaps
a quick run-through with a large block size, no trim, etc., followed by a
second pass with a smaller one, as sketched below.
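
Something like the following is what I have in mind (a sketch only; the
device and file names are placeholders, and I am assuming -c/--cluster-size,
-N/--no-trim, -n/--no-split and -r/--max-retries behave as the manual
describes):

  # First pass: large clusters, skip trimming/splitting, grab the easy data
  ddrescue -c 1024 -N -n /dev/sdb /mnt/store/image.img /mnt/store/rescue.log

  # Second pass: default (smaller) cluster size, retry what was left behind,
  # guided by the same logfile
  ddrescue -c 128 -r 3 /dev/sdb /mnt/store/image.img /mnt/store/rescue.log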

Just wondering what your thoughts are on this. Thanks for your help, and
keep up the great work!

-Corey Flood