I am running a test of the bulk load option in HBase. The map runs fine for
small data, but when I try to load 3-500 MB of data I get timed out and it
tries to load the data several times.
I found the problem, I just can not find how to fix it: how do you change the
negotiated timeout?
OK, I have a MR job I am trying to import into HBase. It works with small
input, loads within seconds, and the command line returns.
When I add more input to the map reduce job, the completebulkload hangs on
the command line, never returning.
When I run a large completebulkload it keeps trying to copy.
Looks like it was a timeout issue, since I saw the bulk load log messages more
than once.
Does anyone know what the timeout setting name is on completebulkload?
Billy
Billy Pearson sa...@pearsonwholesale.com
wrote in message news:jnslmk$9ic$1...@dough.gmane.org...
ok I got a MR job I am trying
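Regarding the timeout question: I am not certain which property governs
completebulkload specifically, but hbase.rpc.timeout is the general HBase
client RPC timeout and is a reasonable first knob to raise. A minimal sketch,
assuming the load is driven through LoadIncrementalHFiles (the class behind
the completebulkload command) and that the path and table name below are made
up:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class BulkLoadWithLongerTimeout {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // hbase.rpc.timeout is the client RPC timeout in milliseconds;
        // raising it is an assumption about the cause, not a confirmed fix.
        conf.set("hbase.rpc.timeout", "600000");  // 10 minutes
        // LoadIncrementalHFiles is what completebulkload runs on the command line.
        LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
        loader.run(new String[] { "/tmp/hfile-output", "my_table" });  // made-up arguments
    }
}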
When I run an HBase job on a tasktracker that does not have a local ZooKeeper
server on it, I get this:
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at
Thanks Stack,
It was a classpath problem.
It's been a while since I set one of these up from scratch.
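For anyone who hits the same "Connection refused": the usual cause is that the
job cannot see hbase-site.xml, so the client falls back to looking for
ZooKeeper on localhost. A minimal sketch of the kind of job setup that avoids
it; the quorum hostnames are placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class JobSetupSketch {
    public static Job createJob() throws Exception {
        // HBaseConfiguration.create() only picks up the quorum if hbase-site.xml
        // is on the classpath; without it the client looks for ZooKeeper on
        // localhost, which gives the "Connection refused" above.
        Configuration conf = HBaseConfiguration.create();
        // Explicit fallback when the conf dir is not exported; hostnames are placeholders.
        conf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com,zk3.example.com");

        Job job = new Job(conf, "hbase-import");
        // Ship the HBase jars (and their dependencies) with the job so the
        // task JVMs have them on their classpath as well.
        TableMapReduceUtil.addDependencyJars(job);
        return job;
    }
}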
Billy Pearson
Stack st...@duboce.net wrote in message
news:aanlktinm5goryax-npb6gk32zv5unz+no3k9+jh4y...@mail.gmail.com...
Hey Billy:
Sounds like the conf directory is not being exported out.
You can also summarize your data and use a secondary process to execute a
roll-up of ICVs... if the number isn't too massive this might be acceptable.
On Tue, Jan 11, 2011 at 4:07 PM, Billy Pearson
sa...@pearsonwholesale.com wrote:
Is there a way to make a mapreduce job and use incrementColumnValue in place
of Put?
I am trying to move a job over from Thrift and have to be able to use
incrementColumnValue as an output, but I can not seem to work it out without
calling HTable every map.
A small example would be nice if possible.
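A minimal sketch of one way to do it, combining the roll-up idea above with
incrementColumnValue: open the HTable once per task in setup() instead of once
per map call, sum the counts in the reducer, and issue a single increment per
key. Table, family, and qualifier names are made up, and it assumes the old
HTable client API:

import java.io.IOException;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Roll-up reducer: sums the counts for each key, then issues a single
// incrementColumnValue per key instead of one per input record.
public class IncrementRollupReducer
        extends Reducer<Text, LongWritable, NullWritable, NullWritable> {

    private HTable table;

    @Override
    protected void setup(Context context) throws IOException {
        // One HTable per task, opened once in setup() rather than in map()/reduce().
        table = new HTable(context.getConfiguration(), "counters");
    }

    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context)
            throws IOException {
        long sum = 0;
        for (LongWritable v : values) {
            sum += v.get();
        }
        // One RPC per distinct key; nothing is written to the job output itself.
        table.incrementColumnValue(Bytes.toBytes(key.toString()),
                Bytes.toBytes("f"), Bytes.toBytes("count"), sum);
    }

    @Override
    protected void cleanup(Context context) throws IOException {
        table.close();
    }
}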