Interesting. Comparing a large EC2 instance (4 cores, "high" I/O performance; the first number on each line below) vs the host I used in the latency test (the second number on each line):

ebs cached reads        817 vs 11532 MB/sec ~  7% (ec2 7% as performant)
ebs buffered reads       53 vs    88 MB/sec ~ 60%
native cached reads     829 vs 11532 MB/sec ~  7%
native buffered reads    80 vs    88 MB/sec ~ 90%

dd 512 MB: 106 s vs 74 s ~ 43% longer on the ec2 large

md5sum 512 MB: 2.13 s vs 1.5 s ~ 42% longer
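The "% longer" figures are just elapsed-time ratios; a throwaway awk helper (my own, not anything from the wiki) to check the arithmetic:

```shell
# Percent increase of elapsed time a over baseline b, rounded to a whole percent.
ratio_pct() {
  awk -v a="$1" -v b="$2" 'BEGIN { printf "%.0f\n", (a / b - 1) * 100 }'
}

ratio_pct 106 74    # dd:     prints 43
ratio_pct 2.13 1.5  # md5sum: prints 42
```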

Good thing we don't rely on the disk cache. ;-) Raw processing power looks to be about half. Could you test networking, i.e. scp'ing data between hosts? (I was seeing 64.1 MB/s for a 512 MB file of random data, the one created by dd.)
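For anyone trying the network test, roughly the following ("otherhost" is a placeholder; the MB/s helper is just for double-checking the rate scp prints itself):

```shell
# Make a 512 MB file of random data (the same dd as in the disk test):
#   dd if=/dev/urandom of=/tmp/memtest bs=512000 count=1050
# Copy it to a second host; scp prints its own throughput:
#   scp /tmp/memtest otherhost:/tmp/memtest

# To compute MB/s yourself from bytes transferred and wall-clock seconds:
throughput_mb_s() {
  awk -v b="$1" -v s="$2" 'BEGIN { printf "%.1f\n", b / s / 1048576 }'
}

throughput_mb_s 537600000 8  # the 538 MB file over ~8 s: prints 64.1
```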

Anyone have a small instance to try?

Patrick

Ted Dunning wrote:
/dev/sdp is an EBS volume.  /dev/sdb is a native volume.

This is a large instance.

r...@domu-<mumble>:~# hdparm -tT /dev/sdp

/dev/sdp:
 Timing cached reads:   1634 MB in  2.00 seconds = 817.30 MB/sec
 Timing buffered disk reads:  160 MB in  3.00 seconds =  53.27 MB/sec
r...@domu-<mumble>:~# hdparm -tT /dev/sdb

/dev/sdb:
 Timing cached reads:   1658 MB in  2.00 seconds = 829.44 MB/sec
 Timing buffered disk reads:  242 MB in  3.00 seconds =  80.56 MB/sec
r...@domu-<mumble>:~# time dd if=/dev/urandom bs=512000 of=/tmp/memtest count=1050
1050+0 records in
1050+0 records out
537600000 bytes (538 MB) copied, 106.525 s, 5.0 MB/s

real    1m46.517s
user    0m0.000s
sys    1m46.127s
r...@domu-<mumble>:~# time md5sum /tmp/memtest; time md5sum /tmp/memtest; time md5sum /tmp/memtest
f79304f68ce04011ca0aebfbd548134a  /tmp/memtest

real    0m2.234s
user    0m1.613s
sys    0m0.590s
f79304f68ce04011ca0aebfbd548134a  /tmp/memtest

real    0m2.136s
user    0m1.560s
sys    0m0.584s
f79304f68ce04011ca0aebfbd548134a  /tmp/memtest

real    0m2.123s
user    0m1.640s
sys    0m0.481s
r...@domu-<mumble>:~#


On Mon, Nov 9, 2009 at 4:54 PM, Patrick Hunt <ph...@apache.org> wrote:

I'm really interested to know how ec2 compares wrt disk and network
performance to what I've documented here under the "hardware" section:
http://wiki.apache.org/hadoop/ZooKeeper/ServiceLatencyOverview#Hardware

Is it possible for someone to compare the network and disk performance
(scp, dd, md5sum, etc...) that I document in the wiki page on, say, EC2
small/large nodes? I'd do it myself but I've not used ec2. If anyone could
try these and report back, I'd appreciate it.

Patrick


Ted Dunning wrote:

Worked pretty well for me.  We did extend all of our timeouts.  The biggest
worry for us was timeouts on the client side.  The ZK server side was no
problem in that respect.

On Mon, Nov 9, 2009 at 4:20 PM, Jun Rao <jun...@almaden.ibm.com> wrote:

Has anyone deployed ZK on EC2? What's the experience there? Are there
more timeouts, leader re-elections, etc.? Thanks,

Jun
IBM Almaden Research Center
K55/B1, 650 Harry Road, San Jose, CA  95120-6099

jun...@almaden.ibm.com

