Rob,

Also, please see the brief HTML document available under:
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/overview.html

This should give you an idea of the steps involved in building:
# the protobuf protocol definitions
# the corresponding Java API (a rough sketch of how the two are wired together follows below)
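
Purely as a hedged illustration of step 2: something like the following is
how a protobuf-defined protocol gets plugged into the RPC layer.
MyProtocolPB is a made-up marker interface here; the real protocolPB
interfaces extend the protobuf-generated BlockingInterface for the service.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.ipc.ProtobufRpcEngine;
    import org.apache.hadoop.ipc.ProtocolInfo;
    import org.apache.hadoop.ipc.RPC;

    // Hypothetical marker interface, for illustration only.
    @ProtocolInfo(protocolName = "my.example.MyProtocol", protocolVersion = 1)
    interface MyProtocolPB { }

    public class ProtocolWiring {
      public static void wire(Configuration conf) {
        // Route calls on MyProtocolPB through the protobuf engine rather
        // than the older Writable-based serialization.
        RPC.setProtocolEngine(conf, MyProtocolPB.class, ProtobufRpcEngine.class);
      }
    }

The overview.html walks through the real versions of these pieces.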

If you find more details that can be added to it, let me know (or better
still, file a JIRA and provide updates).

Regards,
Suresh


On Fri, Jul 19, 2013 at 6:50 AM, Robert Evans <[email protected]> wrote:

> NOTE: I am not an RPC expert.  I have dabbled with it a little, but have
> never dug into it in much detail.  The RPC code is rather complex.  The
> implementation is in hadoop-common
> (hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/),
>  but there is a lot of reflection and inversion of control in there that
> makes it very difficult to follow just by reading it.  There are also
> several different implementations: the older Writable-based one, and on
> trunk/branch-2 the ProtobufRpcEngine.  You might be better off tracing it
> with a debugger like Eclipse than trying to read all of the code.
>
> In general the RPC client hides the actual network connections from the
> end user.  It caches the connections and tries to reuse them whenever
> possible.  It also has retry logic built in, so if a connection is lost
> it will create a new one and retry.
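>
> Very roughly, and with made-up names (DemoProtocol is not a real class),
> the caller's side looks something like the sketch below.  The caching
> itself happens inside org.apache.hadoop.ipc.Client, keyed roughly by
> remote address, protocol, and user.
>
>     import java.net.InetSocketAddress;
>     import org.apache.hadoop.conf.Configuration;
>     import org.apache.hadoop.ipc.RPC;
>
>     // Hypothetical protocol interface, just to make the sketch compile.
>     interface DemoProtocol {
>       long versionID = 1L;
>     }
>
>     public class ConnectionReuse {
>       public static void main(String[] args) throws Exception {
>         Configuration conf = new Configuration();
>         InetSocketAddress addr = new InetSocketAddress("rm-host", 8032);
>         // getProxy hands back a dynamic proxy; the underlying socket is
>         // owned by the shared Client, so repeated calls (and even other
>         // proxies to the same address/protocol/user) reuse one connection.
>         DemoProtocol proxy =
>             RPC.getProxy(DemoProtocol.class, DemoProtocol.versionID, addr, conf);
>         try {
>           // ... invoke protocol methods here; retries and reconnects are
>           // handled underneath the proxy ...
>         } finally {
>           // Drops the ref count; the cached connection closes when idle.
>           RPC.stopProxy(proxy);
>         }
>       }
>     }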
>
> --Bobby
>
> On 7/18/13 5:42 PM, "ur lops" <[email protected]> wrote:
>
> >Hi Folks,
> >
> >I am debugging YARN to write my own YARN application. One of the
> >questions I need answered is:
> >
> >Does the client open a new connection for each RPC request to the YARN
> >resource manager, or does YARN create a connection pool and handle
> >requests from that pool?  Could you please point me to the Java class
> >to look at, or to any documentation?
> >Any help is highly appreciated.
> >Thanks
> >Rob
>
>


-- 
http://hortonworks.com/download/
