On Mon, Sep 28, 2009 at 04:35:59PM +0200, Tomáš Bodžár wrote:
> On Mon, Sep 28, 2009 at 4:15 PM, Daniel Melameth <dan...@melameth.com> wrote:
> > 2009/9/28 Tomáš Bodžár <tomas.bod...@gmail.com>:
> >> when I try the dd command I get similar numbers:
> >>
> >> $ dd if=/dev/urandom of=test bs=1k count=1024
> >> 1024+0 records in
> >> 1024+0 records out
> >> 1048576 bytes transferred in 6.798 secs (154233 bytes/sec)
> >>
> >> On my old desktop with Ubuntu I have about 1.7 MB/s; my friends with
> >> Linux have from about 3 to 8 MB/s. [SNIP: more disk-performance
> >> related guesses -- Joachim].
> >
> > Are you testing the speed of urandom or your HD? If the latter, you
> > might want to use something like /dev/zero instead.
> 
> Thanks to all for the pointers. Now I'm diving into the man pages :-)
> For the disk there is an option for AHCI mode, but it's not possible on
> my laptop. I have Windows in a dual boot and it doesn't like AHCI, heh.
> For urandom I'm reading the man pages on Linux and OpenBSD to try to
> find the difference.

Huh? There is no need to read man pages, just check

$ dd if=/dev/urandom of=/dev/null bs=1k count=1024

for a reasonable upper bound on the performance of dd reading from
/dev/urandom. You may find that it is very close to the above numbers,
i.e. the disk is not the bottleneck.
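(For the record, the run you posted works out to 1048576 bytes / 6.798 s,
i.e. about 154 kB/s or 0.15 MB/s, an order of magnitude below the 1.7 MB/s
you see on the Ubuntu desktop.)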

You can then repeat with /dev/zero, as suggested. If you are worried
about the predictable pattern, use /dev/arandom, which is a lot faster
than /dev/urandom - you don't need cryptographically secure random
numbers, after all.

On my machine, for instance,

$ dd if=/dev/urandom of=/dev/null bs=1k count=1024
1024+0 records in
1024+0 records out
1048576 bytes transferred in 9.143 secs (114683 bytes/sec)

$ dd if=/dev/zero of=/dev/null bs=1k count=1024
1024+0 records in
1024+0 records out
1048576 bytes transferred in 0.002 secs (379094722 bytes/sec)

$ dd if=/dev/arandom of=/dev/null bs=1k count=1024
1024+0 records in
1024+0 records out
1048576 bytes transferred in 0.040 secs (25746458 bytes/sec)

$ dd if=/dev/arandom of=$HOME/test bs=1k count=1024
1024+0 records in
1024+0 records out
1048576 bytes transferred in 0.547 secs (1915326 bytes/sec)

In other words, urandom is noticeably slower than the disk.
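
If you want to repeat the comparison in one go, a small sh loop like the
following is enough (just a sketch; it reads from the usual /dev/urandom,
/dev/arandom and /dev/zero nodes and writes only to /dev/null, so the disk
stays out of the picture):

for src in /dev/urandom /dev/arandom /dev/zero; do
	echo "== $src =="
	# dd prints its timing/throughput summary on stderr
	dd if=$src of=/dev/null bs=1k count=1024
done

Any difference you see there is purely the cost of generating the data.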

On a higher level, you haven't really said what you are trying to
measure, but if it's disk performance there are a lot of factors you
haven't considered. Repeating your experiment with a larger count and/or
block size may be instructive (for instance, I saw a 25% loss in
performance going to count=16384).
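
To see the block size effect, something along these lines is enough (a
sketch; the counts are chosen so that every run writes the same 16 MB, and
$HOME/test is just a scratch file you can remove afterwards):

$ dd if=/dev/zero of=$HOME/test bs=1k count=16384
$ dd if=/dev/zero of=$HOME/test bs=64k count=256
$ dd if=/dev/zero of=$HOME/test bs=1m count=16

Larger blocks mean fewer write(2) calls for the same amount of data, which
is usually where most of the difference comes from.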

In fact, I'm pretty sure that someone with a strong Linux background
could persuade that OS to cache the complete write in memory...
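
If you want a number the cache can't flatter, time the flush as well,
e.g. (a sketch; any Bourne-style shell will do, and $HOME/test is again a
throwaway file):

$ time sh -c 'dd if=/dev/zero of=$HOME/test bs=1k count=16384 && sync'

That way the flush is at least part of what you time, so a write that
merely landed in the buffer cache no longer looks instant.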

                Joachim
