I want to add a few more points.
Below is my cluster configuration:
*Node*: Node-1
*CPU (Distribution / Total)*: 2x6 Core / 12 cores
*Disks (Distribution)*: 6x300 GB; OS (RAID-1): 300 GB; DATA: single 900 GB RAID-10
*Total RAM*: 96 GB
*Components*: HBase Master, YARN Resource Manager / Node Manager
Hi All,
Can anyone help me figure out the root cause? I have a 4-node cluster and
one data node went down; I don't understand why my HBase Master is not able
to come up.
I have the below log:
ERROR: org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is
not running yet
at
Your zookeeper.session.timeout is set to 9 but tickTime=2000.
The maximum timeout is bounded by 20 times the tickTime.
Please increase the tickTime in zoo.cfg.
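For example (numbers purely illustrative, not taken from the posted configs): with tickTime=2000, ZooKeeper will not grant a session longer than 20 x 2000 ms = 40,000 ms (40 s), so any larger requested zookeeper.session.timeout is silently capped. To allow, say, a 90-second session you would raise the tick (or the explicit ceiling) in zoo.cfg, something like:

    # zoo.cfg (illustrative values)
    tickTime=4500            # 20 * 4500 ms = 90,000 ms maximum session timeout
    # or set the ceiling directly:
    # maxSessionTimeout=90000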
I don't see region server log prior to 18:14:14,928
On Wed, Oct 19, 2016 at 7:13 PM, who.cat wrote:
> ok.i have posted
OK, I have posted the more detailed RS and GC logs and the ZK and HBase
configs: https://github.com/eswidy/waterspider/tree/master/rscase
Thanks
-- Original --
From: "Ted Yu"
Date: Oct 20, 2016
To: "user@hbase.apache.org"
Storing the value on HDFS and using a reference to the HDFS location in the
key value is an option.
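A minimal sketch of that pattern, assuming the HBase 1.x client API; the table, column family, qualifier, and HDFS path below are illustrative (on 0.98 you would use HTable and Put.add instead of ConnectionFactory and Put.addColumn):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HdfsBackedCell {
      public static void storeLargeValue(byte[] payload) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // 1. Write the ~100 MB payload to HDFS instead of into the cell.
        FileSystem fs = FileSystem.get(conf);
        Path blobPath = new Path("/hbase-blobs/row1/value.bin");   // illustrative path
        try (FSDataOutputStream out = fs.create(blobPath, true)) {
          out.write(payload);
        }

        // 2. Store only the HDFS location in the HBase cell; readers resolve it themselves.
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("table"))) {
          Put put = new Put(Bytes.toBytes("row1"));
          put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("blob_ref"),
              Bytes.toBytes(blobPath.toString()));
          table.put(put);
        }
      }
    }

Readers then do a normal Get on cf:blob_ref and open the referenced file from HDFS, so the large bytes never travel through the HBase RPC path.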
> On Oct 19, 2016, at 6:49 PM, big data wrote:
>
> actually, there is only one huge value in the hbase cell which large
> than 100M, maybe it's not a good idea to store such
Actually, there is only one huge value in the HBase cell, which is larger
than 100 MB; maybe it is not a good idea to store such a huge value in HBase.
Any suggestions on how to store these huge objects?
On 16/10/19 at 8:16 PM, 吴国泉wgq wrote:
> hi biodata:
>
> you can try “ scan.setbatch()” or other filter to
There was one 25 second pause before the abort.
Can you pastebin your hbase-site.xml (and zookeeper configs) ?
Do you have more of the region server log (prior to 18:14:14,928) ?
Thanks
On Wed, Oct 19, 2016 at 6:01 PM, who.cat wrote:
> i've upload the file to git hub ,and the
I've uploaded the file to GitHub, and the URL is:
https://github.com/eswidy/waterspider/blob/master/regionServer.log
Thanks so much.
-- Original --
From: "Ted Yu"
Date: Oct 19, 2016
To: "user@hbase.apache.org"
Sorted this one out
Need to put phoenix-4.8.0-HBase-0.98-client.jar in $HBASE_HOME/lib directory
though it does not say anything about Phoenix
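If that is the fix on your setup too, it would look something like this (path illustrative):

    cp phoenix-4.8.0-HBase-0.98-client.jar $HBASE_HOME/lib/

on each HBase node, followed by a restart of HBase so the jar is picked up on the server classpath.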
HTH
Dr Mich Talebzadeh
LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
Dear Apache Enthusiast,
ApacheCon Sevilla is now less than a month out, and we need your help
getting the word out. Please tell your colleagues, your friends, and
members of related technical communities, about this event. Rates go up
November 3rd, so register today!
ApacheCon, and Apache Big
This used to work before. I think it started playing up when I started
running the region servers all on the same host. In short, I used to be able
to create the table. Now I am getting:
hive> create external table marketDataHbase (key STRING, ticker STRING,
timecreated STRING, price STRING)
> STORED BY
The log file was not delivered by the mailing list.
Consider using pastebin or a third-party site.
On Tue, Oct 18, 2016 at 10:38 PM, who.cat wrote:
> Thanks. FYI, yes, I did not turn on debug and will try it now. I also suspected that
> heavy CPU load caused it, then checked the highest CPU
IIRC this is fixed in recent versions of HBase. Which version are you
using? If you are facing this issue, I don't think you can fix it with any
setting :( You might have to upgrade your version of HBase...
2016-10-19 8:16 GMT-04:00 吴国泉wgq :
> hi biodata:
>
> you can try “
Hi big data:
You can try Scan.setBatch() or another filter to limit the number of
columns returned.
This is because there is a very large row in your table; when you try to
retrieve it, an OOM will happen.
As far as I can see, there is no other way to solve this problem.
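A minimal sketch of that suggestion, assuming the standard HBase client Scan API and an already-opened Table (names illustrative); setBatch() bounds how many cells of a row come back in each Result:

    import java.io.IOException;

    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;

    public class WideRowScan {
      // Read a very wide or very large row in small slices instead of one huge Result.
      static void scanInSlices(Table table) throws IOException {
        Scan scan = new Scan();
        scan.setBatch(100);   // at most 100 cells of a row per Result
        scan.setCaching(1);   // one Result per RPC, keeps client memory small
        try (ResultScanner scanner = table.getScanner(scan)) {
          for (Result slice : scanner) {
            // each Result carries only part of the row; process it incrementally
          }
        }
      }
    }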
I've adjusted the JVM Xmx in hbase-env.sh, and now count in the hbase shell
runs well.
But the Java client still crashes because:
Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol
message was too large. May be malicious. Use
CodedInputStream.setSizeLimit() to increase the size
Interesting. Can you bump the client heap size? How much do you have for
the client?
JMS
2016-10-19 3:50 GMT-04:00 big data :
> Dear all,
>
> I've a hbase table, one row has a huge keyvalue, about 100M size.
>
> When I execute count table in hbase shell, hbase crash to
Dear all,
I have an HBase table in which one row has a huge KeyValue, about 100 MB in size.
When I execute a count on the table in the hbase shell, HBase crashes back to
bash and displays an error like this:
hbase(main):005:0> count 'table', CACHE=>1
java.lang.OutOfMemoryError: Java heap space
Dumping heap to