Hello,
I don't think that's the problem. If I don't set the env
variable $HBASE_CONF_DIR, everything starts correctly, so I don't think I
need to set it. My problem is that the MapReduce job is not executing in
parallel. If I do:
Configuration config = HBaseConfiguration.create();
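(For reference, a complete table-scan job built around that line typically
looks like the sketch below. MyScanJob, MyMapper, and "mytable" are
placeholder names, not details from this thread. Also worth knowing: if the
Hadoop configuration on the job's classpath leaves mapred.job.tracker at its
default value of "local", the whole job runs inside a single JVM via the
local job runner, which looks exactly like sequential execution.)

  import java.io.IOException;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
  import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
  import org.apache.hadoop.hbase.mapreduce.TableMapper;
  import org.apache.hadoop.io.NullWritable;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

  public class MyScanJob {
    // Placeholder mapper: called once per row of the input table.
    static class MyMapper extends TableMapper<NullWritable, NullWritable> {
      @Override
      protected void map(ImmutableBytesWritable row, Result value,
          Context context) throws IOException, InterruptedException {
        // process the row here
      }
    }

    public static void main(String[] args) throws Exception {
      // Reads hbase-default.xml and hbase-site.xml from the classpath.
      Configuration config = HBaseConfiguration.create();
      Job job = new Job(config, "MyScanJob");
      job.setJarByClass(MyScanJob.class);

      Scan scan = new Scan();
      scan.setCaching(500);       // fewer RPC round trips per mapper
      scan.setCacheBlocks(false); // recommended for MapReduce scans

      TableMapReduceUtil.initTableMapperJob(
          "mytable",              // input table (placeholder)
          scan,                   // scan to run over it
          MyMapper.class,         // mapper
          null,                   // mapper output key class (none)
          null,                   // mapper output value class (none)
          job);
      job.setOutputFormatClass(NullOutputFormat.class);
      job.waitForCompletion(true);
    }
  }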
Hello,
I put it in my .bashrc and re-sourced it, and the problem is still there. I
haven't seen too many documents on the web about this HBASE_CONF_DIR, only
something about Pig, but I am not using it.
I have 4 servers, and after setting HBASE_CONF_DIR only the first
node starts; on the others there is
Hello!
I am experiencing some problems because I think I don't have distributed
computation.
I have MapReduce code where I scan a table and extract what I'm
interested in. When I run htop on my 4 servers I see that the processors are
working sequentially, not in parallel; in other words, one
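(A hedged aside on the behaviour described above: with HBase's
TableInputFormat, a job gets one map task per region of the input table. A
table that still fits in a single region therefore yields a single mapper,
which runs on one server however many nodes the cluster has. A quick way to
check the region count from client code; "mytable" is a placeholder:)

  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;

  public class RegionCount {
    public static void main(String[] args) throws Exception {
      HTable table = new HTable(HBaseConfiguration.create(), "mytable");
      // One start/end key pair per region; one region => one map task.
      int regions = table.getStartEndKeys().getFirst().length;
      System.out.println("regions (and thus map tasks): " + regions);
      table.close();
    }
  }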
Roberto,
Is your $HBASE_CONF_DIR pointing to the directory that contains your
hbase-site.xml?
- Dave
On Mon, Mar 26, 2012 at 8:35 AM, Roberto Alonso CIPF ralo...@cipf.es wrote:
Hello!
I am experiencing some problems because I think I don't have distributed
computation.
I have MapReduce
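(Why Dave's question matters: HBaseConfiguration.create() builds its
Configuration from whatever hbase-site.xml it finds on the classpath, which
is what HBASE_CONF_DIR is normally added to. If no hbase-site.xml is found,
it silently falls back to the built-in defaults, including a localhost
ZooKeeper quorum. A minimal sanity check, as a sketch:)

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;

  public class ConfCheck {
    public static void main(String[] args) {
      Configuration config = HBaseConfiguration.create();
      // Prints "localhost" when hbase-site.xml was not on the
      // classpath, i.e. the client is using built-in defaults.
      System.out.println(config.get("hbase.zookeeper.quorum"));
    }
  }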
Thanks, Dave, for your answer. In hbase-env.sh I have $HADOOP_CONF_DIR set,
but not $HBASE_CONF_DIR. I have just added it to that file, but now my
ZooKeeper doesn't start. Should I put the variable in .bashrc or in another
file?
Thanks!
On 26 March 2012 17:39, Dave Wang d...@cloudera.com wrote:
Roberto,
It should be set in whatever shell you are using. If you are using bash,
then .bashrc seems reasonable. Remember to re-source your .bashrc after
making the change. You can verify by running env | grep HBASE_CONF_DIR
from your shell.
If your ZooKeeper is not starting, we'll need to