Hi,
If I remember correctly, I had the same problem a while ago. In my case,
port 6 had to be opened in the firewall before I could create
tables etc. via the Java client.
--
Ashish Nigam wrote on Thu., 6 Sep 2012, 01:57 MESZ:
Hi,
I have a three-node HBase
Hello,
We are developing a web application that uses HBase as its database, with
Tomcat as the application server. Currently, our server-side code can act
as a sort of NoSQL abstraction layer for either HBase or Google
AppEngine. HBase is used in production, AppEngine mainly for testing
and demos
Hi,
Could you use HBase in standalone mode? Cf.
http://hbase.apache.org/book.html#standalone_dist
I guess you already tried it and it didn't work?
Nicolas
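For reference, standalone mode runs all of HBase (Master, RegionServer, and ZooKeeper) in a single JVM against the local filesystem, so no Hadoop cluster is needed. A minimal hbase-site.xml sketch might look like the following; the directory path here is only an example, not a required value:

```xml
<configuration>
  <!-- Store HBase data on the local filesystem instead of HDFS.
       By default standalone mode uses a directory under /tmp,
       which is lost on reboot, so pointing it elsewhere is safer. -->
  <property>
    <name>hbase.rootdir</name>
    <value>file:///home/user/hbase-data</value>
  </property>
</configuration>
```

With this in place, `bin/start-hbase.sh` should bring up the single-process instance that a Java client can connect to on localhost.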
On Fri, Sep 7, 2012 at 9:57 AM, Jeroen Hoek jer...@lable.org wrote:
Hello,
We are developing a web-application that uses HBase as database,
Neither, right now -- I'm just assuming that it would be a problem,
since I would definitely have to support both in a hypothetical
HBase+Hadoop installation that hasn't actually been built yet.
Did you ever try corralling those jobs by simply reducing the number of
available map/reduce tasks, or did you
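For what it's worth, in the Hadoop 1.x line (current at the time of this thread) the per-TaskTracker task slots can be capped in mapred-site.xml. The values below are only illustrative; the right numbers depend on the hardware:

```xml
<configuration>
  <!-- Maximum map tasks run concurrently on each TaskTracker -->
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>2</value>
  </property>
  <!-- Maximum reduce tasks run concurrently on each TaskTracker -->
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>1</value>
  </property>
</configuration>
```

Lowering these limits throttles how much map/reduce load competes with the region servers on shared nodes.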
Thank you Doug.
I still have one point of confusion. My original question is: why would a
batch update resolve (or at least improve) the performance issue caused by
multiple clients contending to update the same row? Do you have any ideas
or comments?
regards,
Lin
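One commonly cited reason (a general argument, not a statement about HBase internals) is that a batch amortizes per-operation overhead: a hot row's lock is taken once per mutation, so a hundred single puts mean a hundred lock round-trips competing with other clients, while one batched put means one. A minimal Python sketch of that effect, using a toy `Row` class rather than the HBase client API:

```python
import threading

class Row:
    """Toy row guarded by a lock, mimicking per-row update contention."""
    def __init__(self):
        self.lock = threading.Lock()
        self.cells = {}
        self.lock_acquisitions = 0

    def put(self, updates):
        # One lock round-trip applies the whole dict of cell updates.
        with self.lock:
            self.lock_acquisitions += 1
            self.cells.update(updates)

row_single = Row()
row_batched = Row()

# 100 single-cell puts: 100 separate acquisitions of the same row lock.
for i in range(100):
    row_single.put({f"col{i}": i})

# One batched put carrying 100 cells: a single acquisition.
row_batched.put({f"col{i}": i for i in range(100)})

print(row_single.lock_acquisitions)   # 100
print(row_batched.lock_acquisitions)  # 1
```

The same reasoning applies to network round-trips: a list of puts sent in one RPC spends far less time holding contended resources per cell written than the same cells sent one RPC at a time.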
On Fri, Sep 7, 2012 at 2:26 AM, Doug
I'm new to HBase and HDFS, and have a question about what happens when a failure is detected and a new region server takes over a region. If the old region server hasn't really failed and "comes back", will it still accept writes?

Here's a specific sequence of events:

1) Region R is currently being
Hi Nick,
When the dead region server comes back, it won't be able to write data
to the WAL any more.
As the first step of log splitting, the WAL folder for the dead
region server is renamed. When
the dead region server then tries to write to the WAL, it will find that
the file is no longer there.
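The rename described above acts as a fence. The sketch below simulates just that pattern on the local filesystem with plain `os` calls; it is an analogy, not HBase code, and the real system additionally relies on HDFS lease semantics that a local filesystem doesn't have:

```python
import os
import tempfile

# Stand-in "WAL directory" for a region server named rs1.
wal_root = tempfile.mkdtemp()
wal_dir = os.path.join(wal_root, "rs1")
os.makedirs(wal_dir)
wal_path = os.path.join(wal_dir, "wal.0001")
with open(wal_path, "w") as f:
    f.write("edit-1\n")

# Master-side fencing: before splitting the logs, rename the dead
# server's WAL directory out from under it.
os.rename(wal_dir, wal_dir + "-splitting")

# The "dead" region server wakes up and tries to reopen its WAL:
# the original path is gone, so the write attempt fails.
try:
    with open(wal_path, "a") as f:
        f.write("edit-2\n")
    fenced = False
except FileNotFoundError:
    fenced = True

print(fenced)  # True
```

The key property is that the failure is forced onto the stale writer at its next WAL operation, so a paused-then-resumed server cannot silently keep appending edits that the recovery process would never see.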
Hi Jimmy,

Thanks for the quick response. If the paused region server currently has the file open and is writing to it (current stream open to a data node -- actually local, I guess), will the stream be marked as unusable so that writes to it fail? I guess this is more of an HDFS question.

-Nick

On