+1
-steve
[bcc'ing hdfs-dev and mapreduce-dev]
Konstantin, can you outline the effects of this incompatibility?
Thx,
Nige
On Mar 28, 2011, at 10:19 PM, Dhruba Borthakur wrote:
This is a very effective optimization, +1 on pulling it to 0.22.
-dhruba
On Mon, Mar 28, 2011 at 9:39 PM, Konstantin Shvachko shv.had...@gmail.com wrote:
+1
2011/3/29 Doug Cutting cutt...@apache.org
+1
I don't think this creates an incompatibility. It changes the RPC wire
format, but we already require that clients and servers run identical
builds. No application that ran with a prior version of Hadoop would be
broken by this change when
+1
n.b. that the vote lost hdfs and common dev at some point. I've added
them back.
On Tue, Mar 29, 2011 at 9:18 AM, Amit Sangroya sangroyaa...@gmail.com wrote:
+1
On Tue, Mar 29, 2011 at 6:04 PM, Stephen Boesch java...@gmail.com wrote:
+1
2011/3/29 Doug Cutting cutt...@apache.org
+1
Nigel,
The nature of the incompatibility is that the RPC version itself is changing, which
means all VersionedProtocol-s become incompatible at once, as opposed to, say,
only DatanodeProtocol or mr.ClientProtocol.
Doug is right: because of our strict requirements for protocol compatibility, this
will not affect existing applications.
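The distinction Konstantin draws between the RPC-layer version and the per-protocol versions can be sketched as follows. This is a hypothetical illustration, not Hadoop's actual RPC code: the class, the version constants, and the version numbers are all invented for the example.

```java
// Hypothetical sketch of a version handshake. Bumping RPC_VERSION rejects
// every client at connection setup, no matter which protocol it speaks
// ("all VersionedProtocol-s become incompatible at once"); bumping a single
// protocol's version rejects only that protocol's clients.
final class RpcServer {
    static final int RPC_VERSION = 4; // assumed RPC wire-format version

    // Assumed per-protocol versions, keyed by protocol name.
    final java.util.Map<String, Long> protocolVersions = java.util.Map.of(
        "DatanodeProtocol", 27L,
        "ClientProtocol", 67L);

    boolean accept(int clientRpcVersion, String protocol, long clientProtocolVersion) {
        // Wire-format mismatch: rejects all protocols at once.
        if (clientRpcVersion != RPC_VERSION) {
            return false;
        }
        // Per-protocol mismatch: rejects only the one protocol that changed.
        Long serverVersion = protocolVersions.get(protocol);
        return serverVersion != null && serverVersion == clientProtocolVersion;
    }
}
```

Under this sketch, changing RPC_VERSION on the server side fails the first check for every client, which is the "all at once" incompatibility, while changing only one entry in protocolVersions affects just that protocol's callers.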
HADOOP-6949 introduced a very important optimization to the RPC layer. Based
on the benchmarks presented in HDFS-1583, it provides an order-of-magnitude
improvement over the current RPC implementation.
RPC is a common component of Hadoop projects, and many of them should benefit
from this change. But since
+1
On Mon, Mar 28, 2011 at 9:39 PM, Konstantin Shvachko
shv.had...@gmail.com wrote: