Hi Terry,

> The above link also suggests that dd is the longest method.

dd(1) is quick.  It read(2)s a block of bytes and write(2)s that block.
The kernel does the transfer of bytes from the device to dd's memory and
vice versa.  If you don't choose a block size then the default is quite
small, just 512 bytes,

    $ dd if=/dev/zero of=/dev/null count=1
    1+0 records in
    1+0 records out
    512 bytes copied, 0.000276964 s, 1.8 MB/s
    $

and most of your time is spent in overhead of switching between dd and
the kernel.  Using bs=1M would cut down that overhead as you're unlikely
to be using a device that insists on a particular block size.
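A rough illustration of that overhead, using the same zero/null devices
as above; the counts are chosen so both runs copy the same 64 MiB, and
the absolute timings will of course differ per machine:

```shell
# Copy the same 64 MiB twice: once in 512-byte records, once in 1 MiB
# records.  The first makes 131072 round trips into the kernel, the
# second only 64, so far less time goes on syscall overhead.
dd if=/dev/zero of=/dev/null bs=512 count=131072
dd if=/dev/zero of=/dev/null bs=1M count=64
```

Wrapping each in time(1) makes the difference obvious.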

> I must say that I used to get exceeding bored when copying a 4 GB SD
> Card.

The destination media is the bottleneck there.

> However, I tried it and then realised that I would have to wait for it
> to finish before I found out the duration

dd(1)'s manual says to send it a USR1 signal for a progress report.
The arrows mark the lines I typed; the bulleted lines are dd's response
to the signal.

    $ dd bs=2
  → foo
    foo
  • 2+0 records in
  • 2+0 records out
  • 4 bytes copied, 4.40433 s, 0.0 kB/s
  → bar
    bar
    4+0 records in
    4+0 records out
    8 bytes copied, 7.53309 s, 0.0 kB/s
    $
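If the dd is busy in another terminal you can poke it by PID rather than
typing at it; a sketch, assuming GNU dd so USR1 reports rather than
kills:

```shell
# Start a long copy in the background, then ask it for progress with
# USR1; dd prints its records/bytes summary to stderr and carries on.
dd if=/dev/zero of=/dev/null bs=1M count=1000000 2>progress.log &
pid=$!
sleep 1
kill -USR1 "$pid"    # or: pkill -USR1 -x dd
sleep 1
kill "$pid"          # stop the demo copy
```

Recent GNU dd also takes status=progress to print a running total
without any signalling.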

> sudo dcfldd if=/dev/urandom of=/dev/sdc

/dev/urandom can be quite slow for large amounts.

    $ dd if=/dev/urandom bs=1M count=1K of=/dev/null
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 12.6809 s, 84.7 MB/s
    $
    $ openssl rand $((2**20)) >rnd
    $ r=rnd
    $ r="$r $r $r $r"
    $ r="$r $r $r $r"
    $ r="$r $r $r $r"
    $ r="$r $r $r $r"
    $ r="$r $r $r $r"
    $ cat $r | wc -c
    1073741824
    $ echo $((2**30))
    1073741824
    $ while cat $r; do :; done |
    > dd iflag=fullblock bs=1M count=1K of=/dev/null
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.04574 s, 353 MB/s
    $

One can go faster still by cutting out a read(2) for every write(2):
have a little Perl script or C program loop, writing the same data,
read once, into every write(2).

> It seems to be pretty quick having reached 22 GB done in around 40
> minutes.

I'm assuming that's GiB to the drive's TB.

    $ units -1v 22GiB/40minutes hour/TB
            reciprocal conversion
            1 / (22GiB/40minutes) = 28.221896 hour/TB
    $
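The same figure can be checked by hand, taking GiB as 2^30 bytes and
the drive's TB as the decimal 10^12:

```shell
# (40 min in seconds) divided by (22 GiB as a fraction of a decimal
# TB), converted to hours: agrees with units(1) at ~28.22 hour/TB.
awk 'BEGIN { printf "%.4f hour/TB\n", (40 * 60) * 1e12 / (22 * 2^30) / 3600 }'
```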

Cheers, Ralph.

-- 
Next meeting:  Bournemouth, Tuesday, 2018-03-06 20:00
Meets, Mailing list, IRC, LinkedIn, ...  http://dorset.lug.org.uk/
New thread:  mailto:[email protected] / CHECK IF YOU'RE REPLYING
Reporting bugs well:  http://goo.gl/4Xue     / TO THE LIST OR THE AUTHOR