[ 
https://issues.apache.org/jira/browse/MAPREDUCE-2841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115900#comment-14115900
 ] 

Todd Lipcon commented on MAPREDUCE-2841:
----------------------------------------

Hey Joy. Nice to hear from you, and glad to hear the benchmark was useful.

A couple interesting points:

bq. Took the average of 3 runs after one warmup run (all in same JVM)

Do you typically enable JVM reuse? How many runs per JVM do you usually see 
in typical Qubole applications?

I found that, if I increase the number of runs within a JVM to 30 or 40, then 
the existing collector becomes nearly as efficient as the native one. But, it 
really takes this many runs for the JIT to fully kick in. So, one of the main 
advantages of the native collector isn't that C++ code is so much faster than 
JITted Java code, but rather that, in the context of a map task, we rarely have 
a process living long enough to get the full available performance of the JIT.

I ran some benchmarks with -XX:+PrintCompilation and found that the JIT was 
indeed kicking in on the first run. But, after many runs, some key functions 
got re-jitted and became much faster.

Given that most people I know do not enable JVM reuse, and even if they do, 
typically do not manage to run 30-40 tasks within a JVM, I think there is a 
significant boost to running precompiled code for this hot part of the code.
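To make the warm-up effect concrete, here is a minimal stand-alone sketch (not the actual benchmark from this issue) that times the same sort repeatedly within one JVM; run it with -XX:+PrintCompilation and you can watch re-JIT events line up with the later speed-ups. The class and method names are illustrative only.

```java
import java.util.Arrays;
import java.util.Random;

public class JitWarmup {
    // Time one sort of a private copy, so every run does identical work.
    static long sortOnceNanos(int[] src) {
        int[] copy = Arrays.copyOf(src, src.length);
        long t0 = System.nanoTime();
        Arrays.sort(copy);
        return System.nanoTime() - t0;
    }

    public static void main(String[] args) {
        // Fixed seed so each run of the program sorts the same data.
        int[] data = new Random(42).ints(1_000_000).toArray();
        for (int run = 1; run <= 40; run++) {
            System.out.printf("run %2d: %.1f ms%n",
                    run, sortOnceNanos(data) / 1e6);
        }
    }
}
```

On most machines the first few runs are noticeably slower than the steady state reached after tens of iterations, which is the gap a short-lived map task never gets to close.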

bq. Old Collector: 20.3s
bq. New Collector: 7.48s

Is this comparing the MR2 collector vs. the FB collector (BMOB?)? Did you also 
try the native collector? It's interesting that your "old collector" runtimes 
are so slow. Did you tweak anything about the benchmark? On my system, the 
current MR2 collector pretty quickly gets down to under 10 seconds.
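For anyone reproducing the comparison: the pluggable map output collector (MAPREDUCE-4807) is selected per job via mapreduce.job.map.output.collector.class. The delegator class name below is the one from this patch series and may differ between versions, so treat it as an assumption:

```xml
<!-- Select the native collector for a job (falls back per canEnable() checks). -->
<property>
  <name>mapreduce.job.map.output.collector.class</name>
  <value>org.apache.hadoop.mapred.nativetask.NativeMapOutputCollectorDelegator</value>
</property>
```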

bq.  I think query latency is absolutely the wrong benchmark for measuring the 
utility of these optimizations. The problem is Hive runtime (for example) is 
dominated by startup and launch overheads for these types of queries. But in a 
CPU/throughput bound cluster - the improvements would matter much more than 
straight line query latency improvements would indicate.

Agreed. That's why the benchmark also reports total CPU time. The native 
collector is single-threaded whereas the existing MR2 collector is 
multi-threaded. So even though the wall time of a single task may not improve 
that much, it's using significantly less CPU to do the same work (meaning in a 
real job you'll get better overall throughput and cluster utilization).
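One way to see the wall-time vs. CPU-time distinction in isolation is via ThreadMXBean, which reports per-thread CPU; for a multi-threaded collector, total CPU is summed across threads and can exceed wall time. A small illustrative sketch (names are mine, not from the benchmark):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CpuVsWall {
    // Deterministic busy loop so the two clocks have something to measure.
    static long busyWork(int n) {
        long acc = 0;
        for (int i = 0; i < n; i++) acc += i * 31L + (acc >>> 7);
        return acc;
    }

    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        // getCurrentThreadCpuTime() returns -1 if the JVM doesn't support it.
        long cpu0 = bean.getCurrentThreadCpuTime();
        long wall0 = System.nanoTime();
        busyWork(50_000_000);
        long wallMs = (System.nanoTime() - wall0) / 1_000_000;
        long cpuMs = (bean.getCurrentThreadCpuTime() - cpu0) / 1_000_000;
        System.out.printf("wall=%d ms, cpu=%d ms%n", wallMs, cpuMs);
    }
}
```

For a single thread, CPU time tracks wall time closely; in a job with background spill/sort threads, the per-task CPU total is what determines cluster-wide throughput.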




> Task level native optimization
> ------------------------------
>
>                 Key: MAPREDUCE-2841
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2841
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: task
>         Environment: x86-64 Linux/Unix
>            Reporter: Binglin Chang
>            Assignee: Sean Zhong
>         Attachments: DESIGN.html, MAPREDUCE-2841.v1.patch, 
> MAPREDUCE-2841.v2.patch, dualpivot-0.patch, dualpivotv20-0.patch, 
> fb-shuffle.patch, hadoop-3.0-mapreduce-2841-2014-7-17.patch
>
>
> I'm recently working on native optimization for MapTask based on JNI. 
> The basic idea is to add a NativeMapOutputCollector to handle k/v pairs 
> emitted by the mapper, so that sort, spill, and IFile serialization can all be 
> done in native code. Preliminary tests (on Xeon E5410, jdk6u24) showed 
> promising results:
> 1. Sort is about 3x-10x as fast as Java (only binary string comparison is 
> supported)
> 2. IFile serialization speed is about 3x that of Java, about 500MB/s; if 
> hardware CRC32C is used, things can get much faster(1G/
> 3. Merge code is not completed yet, so the test uses enough io.sort.mb to 
> prevent mid-spill
> This leads to a total speedup of 2x~3x for the whole MapTask if 
> IdentityMapper (a mapper that does nothing) is used.
> There are limitations, of course: currently only Text and BytesWritable are 
> supported, and I have not thought through many things yet, such as how to 
> support map-side combine. I had some discussions with people familiar with 
> Hive, and it seems these limitations won't be much of a problem for Hive to 
> benefit from these optimizations, at least. Advice or discussion about 
> improving compatibility is most welcome :) 
> Currently NativeMapOutputCollector has a static method called canEnable(), 
> which checks whether the key/value types, comparator type, and combiner are 
> all compatible; MapTask can then choose to enable NativeMapOutputCollector.
> This is only a preliminary test; more work needs to be done. I expect better 
> final results, and I believe similar optimizations can be applied to the 
> reduce task and shuffle too. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)
