NOTE: I am not an RPC expert.  I have dabbled with it a little, but have
never dug into it in much detail.  The RPC code is rather complex.  The
implementation is in hadoop-common
(hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/),
but there is a lot of reflection and inversion of control in there that
makes it very difficult to follow just by reading it.  There are also
several different implementations: the older Writable one, and on
trunk/branch-2 the ProtobufRpcEngine.  You might be better off tracing it
with a debugger like Eclipse than trying to read all of the code.

In general the RPC client hides the actual network connections from the
end user.  It caches connections and tries to reuse them whenever
possible.  It also has retry logic built in, so if a connection is lost
it will create a new one and retry.

--Bobby 

On 7/18/13 5:42 PM, "ur lops" <[email protected]> wrote:

>Hi Folks,
>
>I am debugging YARN to write my own YARN application. One of the
>questions I need answered is:
>
>Does the client open a new connection for each RPC request to the YARN
>ResourceManager, or does YARN create a connection pool and serve
>connection requests from that pool?  Could you please give me some
>pointers in terms of which Java class to look at, or any documentation?
>Any help is highly appreciated.
>Thanks
>Rob
