[ 
https://issues.apache.org/jira/browse/MAPREDUCE-2841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14045189#comment-14045189
 ] 

Todd Lipcon commented on MAPREDUCE-2841:
----------------------------------------

Hey Arun. I think Sean did a good job answering many of your questions, but 
here are a few more responses to specifics from your earlier comment.

bq. I'm confused. There already exists a large amount of code on GitHub for 
the full task runtime. Is that abandoned? Are you saying there is no intention 
to contribute that to Hadoop, ever? Why would that be? Would that be a separate 
project?

As Sean pointed out, there is a branch on GitHub which contains just the 
native collector.

I won't speak for Sean, but my own opinion is the same as yours with regard to 
the runtime. If we want to make a full native MR framework, it's a larger 
project that should probably be in the incubator and build on APIs exposed by 
YARN and MR. A strict accelerator of the existing MR, though, doesn't seem to 
make as much sense as a separate project.

bq. C++ still is a major problem w.r.t different compiler versions

So long as you avoid C++11, C++ can be very portable. AFAIK there is no 
usage of C++11 features in this contribution, and I would agree with you that 
we should avoid them and stick to a proper subset of C++. Personally, I am 
currently working on another project which uses C++ with the Google style 
guidelines and we have no problem building on a wide variety of operating 
systems (despite having orders of magnitude more code and complexity).
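As an aside, the kind of subset in question is easy to illustrate. The sketch 
below is not from the contribution itself; it is just a hedged example of 
C++98-only style, which builds unchanged on essentially any toolchain of the 
last two decades:

```cpp
#include <string>
#include <vector>

// Hedged example of a C++98-only subset: explicit iterator types instead
// of C++11 'auto', push_back instead of initializer lists, no range-for.
std::string joinKeys() {
  std::vector<std::string> keys;
  keys.push_back("alpha");
  keys.push_back("beta");

  std::string out;
  for (std::vector<std::string>::const_iterator it = keys.begin();
       it != keys.end(); ++it) {
    if (!out.empty()) out += ",";
    out += *it;
  }
  return out;  // "alpha,beta"
}
```

Nothing here is project code; the point is only that staying off post-C++98 
features costs little expressiveness for this kind of systems code.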

bq. Furthermore, there are considerably more security issues which open up in 
C++ land such as buffer overflow etc.

I'm not sure how this is a concern, since the new code only runs in the context 
of tasks, not daemons. C certainly has the same issues (and in my experience, 
buffer overflows and memory leaks are more common in C than in C++, due to the 
lack of safe containers and smart pointers, but that's a separate discussion). It might 
be a stability concern, but that's easy to address with extensive testing, 
which Sean and team have already been doing for the past year or more.

bq. I'm sure we both would take 2x on Pig/Hive anyday... :)

Well, it's not quite 2x, but the performance benchmarks referenced on the wiki 
show "hive aggregation" having a 50% improvement. So, while I agree that 
terasort is not representative of many workloads, Sean and his team have done a 
good job showing that this optimization benefits a large class of diverse 
workloads, with no change required to the upper-level framework.

bq. Furthermore, this jira was opened nearly 3 years ago and only has sporadic 
bursts of activity - not a good sign for long-term maintainability.

I'm not sure that the past is indicative of the future here. Many times we open 
JIRAs and don't have time to fully push them to fruition until much later -- e.g. 
YARN sat around on JIRA with sporadic activity for many years until your team 
at Yahoo really got started on it. Even then, if I recall correctly, a lot of 
the development happened in a separate repository before there was an initial 
code drop to a branch at Apache. The same is true of this project (though of 
course on a much smaller scale) -- the project idea came up a few years back, 
it was then developed in Intel's repository, and it is now being proposed for 
integration.

bq. Finally, what is the concern you see with starting this as an incubator 
project and allowing folks to develop a community around it? We can certainly 
help on our end by making it easy for them to plug in via interfaces etc.

The main concern is that it would be difficult for users to install/plug in. 
Speaking with my Apache hat on, I think this benefits all MR users and it would 
be great to say "Upgrade to 2.5, and your jobs will go 50% faster in many 
cases!" With my vendor hat on, it might actually be beneficial for this to live 
elsewhere -- we could tout it as a unique feature of our distro :) But, I'm 
trying to do the right thing here for the community at large, and also 
encourage a new group of developers to make contributions to our project.

bq. <discussion of line counts, etc>
My metrics came from the 'sloccount' program, which counts non-comment, 
non-empty lines. Sean already gave a good breakdown of the code. But I think 
it's unimportant to squabble over details - my main point there was just that 
the contribution is meaty but not massive. It's also relatively simple code (e.g. 
entirely single-threaded), confined to the task (no concerns about daemon 
stability), and entirely optional (users can switch on a per-job basis whether 
to use this collector). I'd assume that in our first release we would leave the 
feature off by default, and only make it on by default after we observe that 
many users have enabled it with good results. In the very worst case, since it 
is a fully transparent optimization, we could even remove it in a future 
version in a compatible manner if it turns out to be unused or irreparably 
unstable.
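To make that opt-in shape concrete, here is a sketch of the guard pattern the 
issue description mentions (a canEnable() compatibility check gated behind a 
per-job switch). Every name and field below is a hypothetical stand-in, not 
code from the actual patch:

```cpp
#include <string>

// Hypothetical per-job configuration: the native path runs only when the
// job explicitly enabled it AND a static compatibility check passes;
// otherwise the task falls back to the existing Java collector.
struct JobConf {
  bool nativeCollectorEnabled;  // per-job switch, off by default
  std::string keyType;          // e.g. "Text", "BytesWritable"
  bool hasCustomComparator;     // native path does byte comparison only
};

// Plays the role of NativeMapOutputCollector.canEnable(): reject any
// key type or comparator the native code does not understand.
bool canEnable(const JobConf& conf) {
  if (conf.hasCustomComparator) return false;
  return conf.keyType == "Text" || conf.keyType == "BytesWritable";
}

bool useNativeCollector(const JobConf& conf) {
  return conf.nativeCollectorEnabled && canEnable(conf);
}
```

The design consequence is the one argued above: an incompatible or 
unconfigured job silently takes the existing Java path, so the feature can 
ship off by default with no behavior change for anyone who doesn't opt in.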


> Task level native optimization
> ------------------------------
>
>                 Key: MAPREDUCE-2841
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2841
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: task
>         Environment: x86-64 Linux/Unix
>            Reporter: Binglin Chang
>            Assignee: Sean Zhong
>         Attachments: DESIGN.html, MAPREDUCE-2841.v1.patch, 
> MAPREDUCE-2841.v2.patch, dualpivot-0.patch, dualpivotv20-0.patch, 
> fb-shuffle.patch
>
>
> I'm currently working on a native optimization for MapTask based on JNI. 
> The basic idea is to add a NativeMapOutputCollector to handle k/v pairs 
> emitted by the mapper, so that sort, spill, and IFile serialization can all 
> be done in native code. A preliminary test (on Xeon E5410, jdk6u24) showed 
> promising results:
> 1. Sort is about 3x-10x as fast as Java (only binary string comparison is 
> supported)
> 2. IFile serialization speed is about 3x that of Java, about 500MB/s; if 
> hardware CRC32C is used, things can get much faster (1G/
> 3. Merge code is not completed yet, so the test uses enough io.sort.mb to 
> prevent mid-spills
> This leads to a total speedup of 2x~3x for the whole MapTask, if 
> IdentityMapper (a mapper that does nothing) is used
> There are limitations, of course: currently only Text and BytesWritable are 
> supported, and I have not yet thought through many things, such as how to 
> support map-side combine. I had some discussion with somebody familiar with 
> Hive, and it seems that these limitations won't be much of a problem for 
> Hive to benefit from these optimizations, at least. Advice or discussion 
> about improving compatibility is most welcome :) 
> Currently NativeMapOutputCollector has a static method called canEnable(), 
> which checks whether the key/value types, comparator type, and combiner are 
> all compatible; MapTask can then choose to enable NativeMapOutputCollector.
> This is only a preliminary test; more work needs to be done. I expect better 
> final results, and I believe similar optimizations can be applied to the 
> reduce task and shuffle too. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)
