[ https://issues.apache.org/jira/browse/HADOOP-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12635522#action_12635522 ]
Doug Cutting commented on HADOOP-4049:
--------------------------------------
This looks much cleaner.
We need to detect incompatibility. We currently version the transport
(Client/Server) and application (protocol) layers, but this change affects the
RPC layer between those, for which we do not currently have a version-checking
mechanism. Newer clients attempting to talk to older servers, and vice versa,
will fail in unexpected ways.
The best way I can think of to fix this is to increment the transport version
(Server#CURRENT_VERSION, even though that layer has not changed), and add an
RPC-layer version field in Invocation, so that if we ever change the RPC layer
again we'll be able to detect it. This can just be a single-byte field in
Invocation that write() sends and readFields() checks against a constant value.
Then, should we ever change Invocation and/or RPCResponse again, the mismatch
will be detected cleanly rather than failing obscurely. Does that make sense?
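A minimal sketch of the check described above; beyond write()/readFields(), the field and helper names here are illustrative, not the actual Hadoop source:

```java
import java.io.*;

// Illustrative sketch: an Invocation that leads its serialized form with a
// single-byte RPC-layer version, checked on read. Names other than
// write()/readFields() are hypothetical.
public class Invocation {
    // Bumped whenever the wire format of Invocation/RPCResponse changes.
    public static final byte RPC_VERSION = 1;

    private String methodName;

    public Invocation() {}                       // needed for deserialization
    public Invocation(String methodName) { this.methodName = methodName; }

    public String getMethodName() { return methodName; }

    public void write(DataOutput out) throws IOException {
        out.writeByte(RPC_VERSION);              // version byte goes first
        out.writeUTF(methodName);
        // ... remaining fields (parameter classes, values) would follow
    }

    public void readFields(DataInput in) throws IOException {
        byte version = in.readByte();
        if (version != RPC_VERSION) {
            throw new IOException("RPC version mismatch: expected "
                + RPC_VERSION + " but got " + version);
        }
        methodName = in.readUTF();
    }
}
```

Sending the version byte first lets an incompatible reader fail fast with a clear error, instead of misparsing whatever fields follow.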
Finally, is NullRPCInstrumentation needed? You seem to accept null as a value
for an instrumentation, so I don't see why this class is needed. If we do need
such a class, then it would be shorter to replace RPCInstrumentation's abstract
methods with {}. Less code is almost always better with me!
> Cross-system causal tracing within Hadoop
> -----------------------------------------
>
> Key: HADOOP-4049
> URL: https://issues.apache.org/jira/browse/HADOOP-4049
> Project: Hadoop Core
> Issue Type: New Feature
> Components: dfs, ipc, mapred
> Affects Versions: 0.18.0, 0.18.1
> Reporter: George Porter
> Attachments: HADOOP-4049.2-ipc.patch, HADOOP-4049.3-ipc.patch,
> HADOOP-4049.4-rpc.patch, HADOOP-4049.6-rpc.patch, HADOOP-4049.patch,
> multiblockread.png, multiblockwrite.png
>
>
> Much of Hadoop's behavior is client-driven, with clients responsible for
> contacting individual datanodes to read and write data, as well as dividing
> up work for map and reduce tasks. In a large deployment with many concurrent
> users, identifying the effects of individual clients on the infrastructure is
> a challenge. The use of data pipelining in HDFS and Map/Reduce makes it hard
> to follow the effects of a given client request through the system.
> This proposal is to instrument the HDFS, IPC, and Map/Reduce layers of Hadoop
> with X-Trace. X-Trace is an open-source framework for capturing causality of
> events in a distributed system. It can correlate operations making up a
> single user request, even if those operations span multiple machines. As an
> example, you could use X-Trace to follow an HDFS write operation as it is
> pipelined through intermediate nodes. Additionally, you could trace a single
> Map/Reduce job and see how it is decomposed into lower-layer HDFS operations.
> Matei Zaharia and Andy Konwinski initially integrated X-Trace with a local
> copy of the 0.14 release, and I've brought that code up to release 0.17.
> Performing the integration involves modifying the IPC protocol,
> inter-datanode protocol, and some data structures in the map/reduce layer to
> include 20 bytes of tracing metadata. With release 0.18, the generated
> traces could be collected with Chukwa.
> I've attached some example traces of HDFS and IPC layers from the 0.17 patch
> to this JIRA issue.
> More information about X-Trace is available from http://www.x-trace.net/ and
> in a paper that appeared at NSDI 2007, available online at
> http://www.usenix.org/events/nsdi07/tech/fonseca.html
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.