I don't really want to do anything in particular, I just want to learn the
internals :)

So why would someone not want to use the client? For data-intensive tasks
like MapReduce, where they want direct access to the files?

On Tue, May 29, 2012 at 11:00 AM, N Keywal <nkey...@gmail.com> wrote:

> There are two levels:
> - communication between the hbase client and the hbase cluster: this is
> the code you have in the hbase client package. As an end user you don't
> really care, but you do care if you want to learn hbase internals (a
> minimal usage sketch is below).
> - communication between your own code and hbase as a whole, if you don't
> want to use the hbase client. Then several options are available, thrift
> being one of them (I'm not sure of avro's status).
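>
> As an illustration of the first level, here is a minimal sketch of the
> standard Java client API (the table name, column family, and values are
> made-up examples):
>
>   import org.apache.hadoop.conf.Configuration;
>   import org.apache.hadoop.hbase.HBaseConfiguration;
>   import org.apache.hadoop.hbase.client.HTable;
>   import org.apache.hadoop.hbase.client.Put;
>   import org.apache.hadoop.hbase.util.Bytes;
>
>   // All the RPC to the cluster happens behind these calls; this is the
>   // "first level" of communication.
>   Configuration conf = HBaseConfiguration.create();
>   HTable table = new HTable(conf, "mytable");   // hypothetical table
>   Put put = new Put(Bytes.toBytes("row1"));
>   put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
>   table.put(put);   // the client locates the region and sends the RPC
>   table.close();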
>
> What do you want to do exactly?
>
> On Tue, May 29, 2012 at 4:33 PM, S Ahmed <sahmed1...@gmail.com> wrote:
> > So how do thrift and avro fit into the picture? (I believe I saw
> > references to them somewhere; are those alternate connection libs?)
> >
> > I know protobuf just generates types for various languages...
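> >
> > For example, given a message definition like
> > "message GetRequest { required bytes row = 1; }" (a hypothetical
> > message, not hbase's real .proto), protoc generates a Java class you
> > would use like this:
> >
> >   // ByteString comes from com.google.protobuf; GetRequest is the
> >   // class generated from the made-up message above.
> >   GetRequest request = GetRequest.newBuilder()
> >       .setRow(ByteString.copyFromUtf8("row1"))
> >       .build();
> >   byte[] wireBytes = request.toByteArray(); // what goes on the wire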
> >
> > On Tue, May 29, 2012 at 10:26 AM, N Keywal <nkey...@gmail.com> wrote:
> >
> >> Hi,
> >>
> >> If you're speaking about preparing the query, it's in HTable and
> >> HConnectionManager.
> >> If you're asking about the pure network level then, on trunk, it's
> >> done with a third-party library called protobuf.
> >>
> >> See the code from HConnectionManager#createCallable to see how it's
> >> used.
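> >>
> >> Roughly, the pattern looks like this (a simplified sketch, not the
> >> verbatim hbase code; connection, tableName, row and get are assumed
> >> to be in scope):
> >>
> >>   // The client wraps each operation in a ServerCallable and runs it
> >>   // with the connection's retry logic; on trunk the bytes on the
> >>   // wire are protobuf-encoded.
> >>   ServerCallable<Result> callable =
> >>       new ServerCallable<Result>(connection, tableName, row) {
> >>     public Result call() throws IOException {
> >>       // 'server' is the RPC proxy to the region server hosting 'row'
> >>       return server.get(location.getRegionInfo().getRegionName(), get);
> >>     }
> >>   };
> >>   Result result = connection.getRegionServerWithRetries(callable);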
> >>
> >> Cheers,
> >>
> >> N.
> >>
> >> On Tue, May 29, 2012 at 4:15 PM, S Ahmed <sahmed1...@gmail.com> wrote:
> >> > I'm looking at the client code here:
> >> > https://github.com/apache/hbase/tree/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client
> >> >
> >> > Are these the high-level operations, with the actual sending of the
> >> > data over the network done somewhere else?
> >> >
> >> > For example, during a Put you may want it to write to n nodes; where
> >> > is the code that does that? And the actual network connection, etc.?
> >>
>
