My cluster setup: all 6 machines are virtual machines. Each machine:
4 CPUs, Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz, 16GB memory
192.168.10.48 namenode/jobtracker
192.168.10.47 secondary namenode
192.168.10.45 datanode/tasktracker
192.168.10.46 datanode/tasktracker
192.168.10.49 datanode/tasktracker
One big possible issue is a high rate of concurrent requests on HDFS
or HBase: all DataNode handlers become busy, more requests queue up,
and they eventually time out. So you can try increasing
dfs.datanode.handler.count and dfs.namenode.handler.count in
hdfs-site.xml, then restart the DataNodes and NameNode:
<property>
  <name>dfs.datanode.handler.count</name>
  <value>100</value>
  <description>The number of server threads for the datanode.</description>
</property>
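The NameNode handler pool can be raised the same way. A sketch, assuming the same hdfs-site.xml; the value 100 simply mirrors the datanode setting above and should be tuned to your cluster size:

```xml
<property>
  <name>dfs.namenode.handler.count</name>
  <value>100</value>
  <description>The number of server threads for the namenode.</description>
</property>
```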
1. namenode/master 192.168.10.48
http://pastebin.com/7M0zzAAc
$ free -m (these are the values from just after I restarted Hadoop and HBase, not
the values when it
Hi
I'm using HBase 0.96.0, with 1 HMaster and 3 regionservers.
The write request rate is about 1~10w/s (roughly 10,000 to 100,000 per second).
Today I found that the HBase Master hangs, the regionservers are dead, and the oldWALs dir is very
large.
/apps/hbase/data/data is about 800G. /apps/hbase/data/oldWALs is about 4.2T.
This caused HDFS to fill up.
any suggestions?
Do you still have the same issue?
and:
-Xmx8000m -server -XX:NewSize=512m -XX:MaxNewSize=512m
the Eden size is too small.
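For comparison, a larger young generation might look like the line below. This is only a sketch for hbase-env.sh; the 2048m figure is an assumption to illustrate, not a recommendation from this thread, and the right value depends on the write pattern and pause-time goals:

```sh
# hbase-env.sh (sketch; 2048m is an illustrative assumption, not a recommendation)
export HBASE_REGIONSERVER_OPTS="-Xmx8000m -server -XX:NewSize=2048m -XX:MaxNewSize=2048m"
```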
On Tue, Apr 22, 2014 at 2:55 PM, Li Li fancye...@gmail.com wrote:
<property>
  <name>dfs.datanode.handler.count</name>
  <value>100</value>
  <description>The number of server threads for the datanode.</description>
</property>
Hi,
I've noticed that in 0.94.7, when you execute a coprocessor, the result
object is converted into a byte buffer using the write() method, which is on
the result object.
So, if my result object is 500MB in size, another 500MB is consumed from
the heap, since it is converted to a byte buffer before
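The doubling can be reproduced with plain java.io, independent of HBase. A minimal sketch; the FakeResult type below is a stand-in for the real 0.94 result class, not the actual API:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class SerializationCopy {
    // Stand-in for a coprocessor result object (hypothetical class,
    // not the real org.apache.hadoop.hbase.client.Result).
    static class FakeResult {
        final byte[] payload;
        FakeResult(int size) { payload = new byte[size]; }
        // Mirrors a Writable-style write(DataOutput): serializes into the
        // stream, so the bytes now exist twice on the heap.
        void write(DataOutputStream out) throws IOException {
            out.writeInt(payload.length);
            out.write(payload);
        }
    }

    public static void main(String[] args) throws IOException {
        FakeResult result = new FakeResult(10 * 1024 * 1024); // 10 MB payload
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        result.write(new DataOutputStream(buf));
        byte[] copy = buf.toByteArray(); // a second full copy of the payload
        // Both the original and the serialized copy are live here:
        System.out.println(copy.length >= result.payload.length);
    }
}
```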
I have now restarted the server and it is running. Maybe an hour later the load
will become high.
On Tue, Apr 22, 2014 at 3:02 PM, Azuryy Yu azury...@gmail.com wrote:
Do you still have the same issue?
and:
-Xmx8000m -server -XX:NewSize=512m -XX:MaxNewSize=512m
the Eden size is too small.
On Tue,
How much MaxNewSize is needed for my configuration?
On Tue, Apr 22, 2014 at 3:02 PM, Azuryy Yu azury...@gmail.com wrote:
Do you still have the same issue?
and:
-Xmx8000m -server -XX:NewSize=512m -XX:MaxNewSize=512m
the Eden size is too small.
On Tue, Apr 22, 2014 at 2:55 PM, Li Li
HBase current statistics:

Region Servers
ServerName                       Start time                    Load
app-hbase-1,60020,1398141516916  Tue Apr 22 12:38:36 CST 2014  requestsPerSecond=6100, numberOfOnlineRegions=7, usedHeapMB=1201, maxHeapMB=7948
app-hbase-2,60020,1398141516914  Tue Apr 22 12:38:36 CST 2014  requestsPerSecond=1770,

Region Servers
ServerName                       Start time                    Load
app-hbase-1,60020,1398141516916  Tue Apr 22 12:38:36 CST 2014  requestsPerSecond=448799, numberOfOnlineRegions=8, usedHeapMB=1241, maxHeapMB=7948
app-hbase-2,60020,1398141516914  Tue Apr 22 12:38:36 CST 2014  requestsPerSecond=0, numberOfOnlineRegions=3,
jmap -heap output of app-hbase-1. Why is OldSize so small?
Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize = 8388608000 (8000.0MB)
NewSize = 536870912 (512.0MB)
MaxNewSize = 536870912 (512.0MB)
OldSize = 5439488 (5.1875MB)
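The small OldSize is the initial tenured-generation size, not a cap; with a fixed young generation, the old generation can grow to roughly the heap minus NewSize. A sketch of the arithmetic, assuming the HotSpot default SurvivorRatio of 8 (eden:survivor:survivor = 8:1:1):

```java
public class HeapMath {
    public static void main(String[] args) {
        double heapMb = 8000.0;     // MaxHeapSize from the jmap output
        double newMb  = 512.0;      // NewSize == MaxNewSize
        int survivorRatio = 8;      // assumed HotSpot default eden:survivor ratio

        // The young gen is split into eden plus two survivor spaces.
        double survivorMb = newMb / (survivorRatio + 2); // 51.2 MB each
        double edenMb = newMb - 2 * survivorMb;          // 409.6 MB

        // With a fixed young generation, the old generation can grow to
        // roughly the rest of the heap, regardless of the tiny initial OldSize.
        double maxOldMb = heapMb - newMb;                // 7488 MB

        System.out.println((long) edenMb);    // 409
        System.out.println((long) maxOldMb);  // 7488
    }
}
```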
I found an exception in regionserver log:
2014-04-22 16:20:11,490 INFO
org.apache.hadoop.hbase.regionserver.SplitRequest: Region split, META
updated, and report to master.
Parent=vc2.url_db,\xBF\xC3\x04c\xF7\xD2\xE0\xF0\xD4]\xCA\x83/x\xAAZ,1398152426132.788da9fbd09aff035ff4fc53bc7e6e5b.,
new
Hi Ted,
I inserted the output at pastebin
http://pastebin.com/n3mMPxBA
At the moment the hbase master process holds 10716 handles.
We stopped making snapshots last week.
After 4 days the count is still the same.
Regards Hansi
Sent: Thursday, 17 April 2014, 19:09
From: Ted Yu
Hi,
I am running a webapp written on the JAX-RS framework which performs CRUD
operations on HBase.
The app was working fine till last week;
now when I perform a read operation from HBase I don't see any data. I
don't see any errors or exceptions, but I found these lines in the log:
*Unable to get data of
Hi Yeshwanth,
Which HBase version are you using? What do you have in your HBase config
file for the znode values?
JM
2014-04-22 8:18 GMT-04:00 yeshwanth kumar yeshwant...@gmail.com:
Hi,
I am running a webapp written on the JAX-RS framework which performs CRUD
operations on HBase.
The app was
/hbase/table94 is a compatibility znode that replaces /hbase/table;
if you want more details, take a look at HBASE-6710.
What is the problem with looking at /hbase/table94 instead of /hbase/table?
Matteo
On Tue, Apr 22, 2014 at 5:18 AM, yeshwanth kumar yeshwant...@gmail.com wrote:
Hi,
i am
On Tue, Apr 22, 2014 at 9:00 AM, yeshwanth kumar yeshwant...@gmail.com wrote:
@matteo
The present znode is at /hbase/table, where it is empty,
whereas all my tables are present in /hbase/table94.
Now the webapp isn't reading the data from HBase.
CDH 4.5.0 doesn't write to /hbase/table due to a
Hi Matteo,
How do I specify the HBase znode to use /hbase/table94 instead of /hbase/table?
Thanks
On Tue, Apr 22, 2014 at 9:40 PM, Matteo Bertozzi theo.berto...@gmail.com wrote:
On Tue, Apr 22, 2014 at 9:00 AM, yeshwanth kumar yeshwant...@gmail.com wrote:
@matteo
present znode is at
I went over the jstack output.
There were 53 IPC Server handler threads which were mostly in WAITING state.
Please try Stack's suggestion and see if the problem gets resolved.
Cheers
On Tue, Apr 22, 2014 at 3:18 AM, Hansi Klose hansi.kl...@web.de wrote:
Hi Ted,
I inserted the output at
That is already done by the server.
ZooKeeperWatcher.java uses conf.get("zookeeper.znode.masterTableEnableDisable", "table94").
Anyway, why are you looking at the znodes?
A client application should never look at the znodes.
The znodes only carry transient information used for
Hey all, I was wondering if anyone had this issue with HBase 0.90.5.
I have a table 'img611', and I issue deletes of keys like this:
hbase(main):004:0> describe 'img611'
DESCRIPTION
ENABLED
{NAME => 'img611', FAMILIES => [{NAME => 'att',
Looks like the data got cleaned after I manually issued 'split' --
This might be a bug. Has anyone seen it before?
Originally, here is the log entry that was issued after major compact was
called on the table:
2014-04-22 09:31:59,068 DEBUG
org.apache.hadoop.hbase.regionserver.CompactSplitThread:
Hi,
I'm unable to access the http://blogs.apache.org/hbase/ webpage. Can someone
please let me know if this is just me or if the website is down?
Thanks,
Srikanth.
I'm also unable to access it, so I would assume it is down.
-Carlos
-Original Message-
From: Srikanth Srungarapu [mailto:srikanth...@gmail.com]
Sent: Tuesday, April 22, 2014 4:45 PM
To: user@hbase.apache.org; d...@hbase.apache.org
Subject: unable to access apache hbase blogs website
Hi,
Please use the following as backup :
http://webcache.googleusercontent.com/search?q=cache:http://blogs.apache.org/hbase
On Tue, Apr 22, 2014 at 5:05 PM, Rendon, Carlos (KBB) cren...@kbb.com wrote:
I'm also unable to access, so I would assume it is down.
-Carlos
-Original Message-
Hi All,
We have written data into our HBase tables using the PDataType of Phoenix 2.0.2.
We have custom MR loaders that use PDataType so that we can use Phoenix for
ad-hoc querying.
I am trying to migrate to Phoenix 3.0.0, but the Date type column
values are not coming out correctly. These are big
What makes you say this?
HBase has a lot of very short-lived garbage (like KeyValue objects that do not
outlive an RPC request) and a lot of long-lived data in the memstore and the
block cache. We want to avoid accumulating the short-lived garbage and at the
same time leave most of the heap for
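The long-lived portion of the heap can be budgeted roughly from the config. A sketch of the arithmetic, assuming the stock defaults of that era (block cache at 25% of heap via hfile.block.cache.size, memstore upper limit at 40% via hbase.regionserver.global.memstore.upperLimit); verify these against your own hbase-site.xml:

```java
public class HBaseHeapBudget {
    public static void main(String[] args) {
        double heapMb = 8000.0;  // -Xmx8000m from earlier in the thread

        // Assumed defaults; check hbase-site.xml on your cluster:
        double blockCacheFraction = 0.25; // hfile.block.cache.size
        double memstoreFraction   = 0.40; // hbase.regionserver.global.memstore.upperLimit

        double blockCacheMb = heapMb * blockCacheFraction; // long-lived block cache
        double memstoreMb   = heapMb * memstoreFraction;   // long-lived memstores

        // Whatever is left must absorb short-lived per-RPC garbage plus
        // everything else the regionserver allocates.
        double remainderMb = heapMb - blockCacheMb - memstoreMb;

        System.out.println((long) blockCacheMb); // 2000
        System.out.println((long) memstoreMb);   // 3200
        System.out.println((long) remainderMb);  // 2800
    }
}
```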