Corey Flood wrote:
> With all that said, I know there would be issues in doing data recovery with large block sizes (higher rate of missing data, etc.), but I am wondering if the asynchronous switch would be able to help with this, or perhaps a quick run through with a large block size, no trim, etc., then a second pass with a smaller one.
I suppose you mean the "--synchronous" switch. This switch issues an fsync call after every write to the output file. I'm not sure whether this will help with the NTFS problems.
You can't avoid the trimming without manually interrupting ddrescue. What you can try is a first pass with --no-split and a large value for --cluster-size, say 1024, followed by a default pass with --retrim, --try-again and perhaps --direct:
ddrescue --no-split --cluster-size=1024 in out log
ddrescue --retrim --try-again --direct in out log

Regards,
Antonio.

_______________________________________________
Bug-ddrescue mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/bug-ddrescue
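As a postscript, the two passes above could be wrapped in a small script. This is only a sketch: "in", "out" and "rescue.log" are placeholder names (substitute your real device, image file and logfile), and the DRY_RUN guard is my addition so the sequence can be previewed before touching the disc.

```shell
#!/bin/sh
# Sketch of the two-pass rescue described above.  "in", "out" and
# "rescue.log" are placeholders.  With DRY_RUN=1 (the default here)
# the commands are only printed, not executed.
DRY_RUN=${DRY_RUN:-1}
IN=in
OUT=out
LOG=rescue.log

run() {
    echo "$@"                      # show the command
    [ "$DRY_RUN" = 1 ] || "$@"    # execute it unless dry-running
}

# Pass 1: grab the easily readable data fast -- no splitting of error
# areas, and a large cluster size so each read covers more sectors.
run ddrescue --no-split --cluster-size=1024 "$IN" "$OUT" "$LOG"

# Pass 2: revisit the failed areas with default-sized reads, retrimming
# and retrying them, with direct disc access to bypass the kernel cache.
run ddrescue --retrim --try-again --direct "$IN" "$OUT" "$LOG"
```

Because the same logfile is passed to both invocations, the second pass only re-reads the areas the first pass marked as failed.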
