Re: is my hbase cluster overloaded?

2014-04-22 Thread Li Li
my cluster setup: all 6 machines are virtual machines. Each machine: 4 CPUs (Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz), 16GB memory.
192.168.10.48 namenode/jobtracker
192.168.10.47 secondary namenode
192.168.10.45 datanode/tasktracker
192.168.10.46 datanode/tasktracker
192.168.10.49 datanode/tasktracker

Re: is my hbase cluster overloaded?

2014-04-22 Thread Azuryy Yu
One big possible issue is a high concurrency of requests against HDFS or HBase: all datanode handlers become busy, further requests queue up and eventually time out. You can try increasing dfs.datanode.handler.count and dfs.namenode.handler.count in hdfs-site.xml, then restart the
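
For reference, a minimal hdfs-site.xml sketch of the two handler-count properties mentioned above; the values are placeholders, not recommendations from this thread:

  <!-- hdfs-site.xml: RPC handler threads; tune to your cluster's load -->
  <property>
    <name>dfs.datanode.handler.count</name>
    <value>10</value>
  </property>
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>30</value>
  </property>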

Re: is my hbase cluster overloaded?

2014-04-22 Thread Li Li
<property>
  <name>dfs.datanode.handler.count</name>
  <value>100</value>
  <description>The number of server threads for the datanode.</description>
</property>
1. namenode/master 192.168.10.48 http://pastebin.com/7M0zzAAc
$ free -m (these are the values now, after I restarted hadoop and hbase, not the values when it

oldWALs too large

2014-04-22 Thread sunweiwei
Hi, I'm using hbase 0.96.0 with 1 hmaster and 3 regionservers. The write load is about 10,000-100,000 requests/s (1~10w/s). Today I found the HBase Master hangs, regionservers are dead, and the oldWALs dir is very large: /apps/hbase/data/data is about 800G, /apps/hbase/data/oldWALs is about 4.2T. This causes HDFS to fill up. any
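
To measure where the space is going, the directory sizes quoted above can be checked with a standard HDFS command (paths taken from the message; drop -h on very old Hadoop versions that lack it):

  hadoop fs -du -h /apps/hbase/data
  # look for the data and oldWALs subtrees in the output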

Re: is my hbase cluster overloaded?

2014-04-22 Thread Azuryy Yu
Do you still have the same issue? And: -Xmx8000m -server -XX:NewSize=512m -XX:MaxNewSize=512m; the Eden size is too small. On Tue, Apr 22, 2014 at 2:55 PM, Li Li fancye...@gmail.com wrote: <property> <name>dfs.datanode.handler.count</name> <value>100</value> <description>The number of server
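
As an illustrative sketch only (the sizes are placeholders, not a recommendation from this thread), enlarging the young generation in hbase-env.sh could look like:

  # hbase-env.sh: keep the 8GB heap, grow the young gen beyond 512m
  export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -server -Xmx8000m -XX:NewSize=2g -XX:MaxNewSize=2g"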

Coprocessor coprocessor execution result saved in buffer as whole - why?

2014-04-22 Thread Asaf Mesika
Hi, I've noticed that in 0.94.7, when you execute a coprocessor, the result object is converted into a byte buffer using the write() method on the result object. So if my result object is 500MB in size, another 500MB is consumed from the heap, since it is converted to a byte buffer before
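
A minimal, self-contained Java sketch of the doubling effect described above (generic serialization code, not the actual 0.94 coprocessor path; run with a large -Xmx):

  import java.io.ByteArrayOutputStream;
  import java.io.DataOutputStream;
  import java.io.IOException;

  public class DoubleBufferDemo {
      public static void main(String[] args) throws IOException {
          // Simulate a large in-memory result (first copy on the heap).
          byte[] result = new byte[500 * 1024 * 1024];
          // Serializing into a byte buffer before sending creates a
          // second full copy, which is the problem the thread describes.
          ByteArrayOutputStream buf = new ByteArrayOutputStream(result.length);
          DataOutputStream out = new DataOutputStream(buf);
          out.write(result);                    // buf now holds another ~500MB
          byte[] wireCopy = buf.toByteArray();  // and this makes a third copy
          System.out.println("bytes to send: " + wireCopy.length);
      }
  }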

Re: is my hbase cluster overloaded?

2014-04-22 Thread Li Li
I have now restarted the server and it is running; maybe an hour later the load will become high. On Tue, Apr 22, 2014 at 3:02 PM, Azuryy Yu azury...@gmail.com wrote: Do you still have the same issue? And: -Xmx8000m -server -XX:NewSize=512m -XX:MaxNewSize=512m; the Eden size is too small. On Tue,

Re: is my hbase cluster overloaded?

2014-04-22 Thread Li Li
how much MaxNewSize is needed for my configuration? On Tue, Apr 22, 2014 at 3:02 PM, Azuryy Yu azury...@gmail.com wrote: Do you still have the same issue? And: -Xmx8000m -server -XX:NewSize=512m -XX:MaxNewSize=512m; the Eden size is too small. On Tue, Apr 22, 2014 at 2:55 PM, Li Li

Re: is my hbase cluster overloaded?

2014-04-22 Thread Li Li
HBase current statistics: Region Servers (ServerName / Start time / Load):
app-hbase-1,60020,1398141516916  Tue Apr 22 12:38:36 CST 2014  requestsPerSecond=6100, numberOfOnlineRegions=7, usedHeapMB=1201, maxHeapMB=7948
app-hbase-2,60020,1398141516914  Tue Apr 22 12:38:36 CST 2014  requestsPerSecond=1770,

Re: is my hbase cluster overloaded?

2014-04-22 Thread Li Li
Region Servers (ServerName / Start time / Load):
app-hbase-1,60020,1398141516916  Tue Apr 22 12:38:36 CST 2014  requestsPerSecond=448799, numberOfOnlineRegions=8, usedHeapMB=1241, maxHeapMB=7948
app-hbase-2,60020,1398141516914  Tue Apr 22 12:38:36 CST 2014  requestsPerSecond=0, numberOfOnlineRegions=3,

Re: is my hbase cluster overloaded?

2014-04-22 Thread Li Li
jmap -heap of app-hbase-1: why is OldSize so small?
Heap Configuration:
  MinHeapFreeRatio = 40
  MaxHeapFreeRatio = 70
  MaxHeapSize      = 8388608000 (8000.0MB)
  NewSize          = 536870912 (512.0MB)
  MaxNewSize       = 536870912 (512.0MB)
  OldSize          = 5439488 (5.1875MB)

Re: is my hbase cluster overloaded?

2014-04-22 Thread Li Li
I found an exception in the regionserver log: 2014-04-22 16:20:11,490 INFO org.apache.hadoop.hbase.regionserver.SplitRequest: Region split, META updated, and report to master. Parent=vc2.url_db,\xBF\xC3\x04c\xF7\xD2\xE0\xF0\xD4]\xCA\x83/x\xAAZ,1398152426132.788da9fbd09aff035ff4fc53bc7e6e5b., new

Re: Re: taking snapshots creates too many TCP CLOSE_WAIT handles on the hbase master server

2014-04-22 Thread Hansi Klose
Hi Ted, I inserted the output at pastebin: http://pastebin.com/n3mMPxBA At the moment the hbase master process holds 10716 handles. We stopped making snapshots last week; after 4 days the count is still the same. Regards Hansi Sent: Thursday, 17 April 2014 at 19:09 From: Ted Yu
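
One way to reproduce the handle counts quoted above (a sketch; the pid placeholder is hypothetical and lsof output formats vary by platform):

  lsof -p <hmaster-pid> | wc -l                     # total open handles
  lsof -p <hmaster-pid> | grep CLOSE_WAIT | wc -l   # stuck sockets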

Unable to get data of znode /hbase/table/mytable.

2014-04-22 Thread yeshwanth kumar
Hi, I am running a webapp written on the JAX-RS framework which performs CRUD operations on hbase. The app was working fine till last week; now when I perform a read operation from hbase I don't see any data. I don't see any errors or exceptions, but I found these lines in the log: *Unable to get data of

Re: Unable to get data of znode /hbase/table/mytable.

2014-04-22 Thread Jean-Marc Spaggiari
Hi Yeshwanth, Which HBase version are you using? What do you have in your HBase config file for the znode values? JM 2014-04-22 8:18 GMT-04:00 yeshwanth kumar yeshwant...@gmail.com: Hi, I am running a webapp written on the JAX-RS framework which performs CRUD operations on hbase. The app was
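
For context, the znode settings JM is asking about live in hbase-site.xml; a sketch showing the root znode property with its default value (illustrative only):

  <!-- hbase-site.xml: where HBase roots its state in ZooKeeper -->
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase</value>
  </property>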

Re: Unable to get data of znode /hbase/table/mytable.

2014-04-22 Thread Matteo Bertozzi
/hbase/table94 is a compatibility znode that replaces /hbase/table; if you want more details, take a look at HBASE-6710. What is the problem with looking into /hbase/table94 instead of /hbase/table? Matteo On Tue, Apr 22, 2014 at 5:18 AM, yeshwanth kumar yeshwant...@gmail.com wrote: Hi, I am

Re: Unable to get data of znode /hbase/table/mytable.

2014-04-22 Thread Matteo Bertozzi
On Tue, Apr 22, 2014 at 9:00 AM, yeshwanth kumar yeshwant...@gmail.com wrote: @matteo the present znode is at /hbase/table, where it is empty, whereas all my tables are present in /hbase/table94; now the webapp isn't reading the data from hbase. cdh 4.5.0 doesn't write to /hbase/table due to a

Re: Unable to get data of znode /hbase/table/mytable.

2014-04-22 Thread yeshwanth kumar
hi matteo, how do I specify the hbase znode to use /hbase/table94 instead of /hbase/table? thanks On Tue, Apr 22, 2014 at 9:40 PM, Matteo Bertozzi theo.berto...@gmail.com wrote: On Tue, Apr 22, 2014 at 9:00 AM, yeshwanth kumar yeshwant...@gmail.com wrote: @matteo the present znode is at
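
For inspection only (Matteo's reply below notes that client applications should not read znodes directly), the znodes in question can be browsed with HBase's bundled ZooKeeper shell; a sketch assuming the table name from the subject:

  $ hbase zkcli
  ls /hbase/table94            # tables registered under the compatibility znode
  get /hbase/table94/mytable   # raw state bytes for one table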

Re: Re: taking snapshots creates too many TCP CLOSE_WAIT handles on the hbase master server

2014-04-22 Thread Ted Yu
I went over the jstack output. There were 53 IPC Server handler threads, mostly in WAITING state. Please try Stack's suggestion and see if the problem gets resolved. Cheers On Tue, Apr 22, 2014 at 3:18 AM, Hansi Klose hansi.kl...@web.de wrote: Hi Ted, I inserted the output at
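
A quick way to derive the numbers Ted quotes from a jstack dump (a sketch; the pid placeholder is hypothetical, and the WAITING count is approximate since the thread state sits on the line after the thread name):

  jstack <hmaster-pid> | grep -c "IPC Server handler"
  jstack <hmaster-pid> | grep -A1 "IPC Server handler" | grep -c "WAITING"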

Re: Unable to get data of znode /hbase/table/mytable.

2014-04-22 Thread Matteo Bertozzi
That is already done by the server; ZooKeeperWatcher.java uses conf.get("zookeeper.znode.masterTableEnableDisable", "table94"). Anyway, why are you looking at the znodes? A client application should never look at the znodes. The znodes only carry transient information used for

unable to delete rows in some regions

2014-04-22 Thread Jack Levin
Hey All, I was wondering if anyone has had this issue with HBase 0.90.5. I have a table 'img611' and I issue deletes of keys like this: hbase(main):004:0> describe 'img611' DESCRIPTION ENABLED {NAME => 'img611', FAMILIES => [{NAME => 'att',

Re: unable to delete rows in some regions

2014-04-22 Thread Jack Levin
Looks like the data got cleaned up after I manually issued 'split'. This might be a bug; has anyone seen it before? Originally, here is the log entry that appeared after major compact was called on the table: 2014-04-22 09:31:59,068 DEBUG org.apache.hadoop.hbase.regionserver.CompactSplitThread:
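
The sequence under discussion, as standard HBase shell commands (table name taken from the thread):

  hbase(main):001:0> major_compact 'img611'   # deletes should normally be purged here
  hbase(main):002:0> split 'img611'           # the manual split after which the data was cleaned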

unable to access apache hbase blogs website

2014-04-22 Thread Srikanth Srungarapu
Hi, I'm unable to access the http://blogs.apache.org/hbase/ webpage. Can someone please let me know if this is just me or if the website is down? Thanks, Srikanth.

RE: unable to access apache hbase blogs website

2014-04-22 Thread Rendon, Carlos (KBB)
I'm also unable to access it, so I would assume it is down. -Carlos -----Original Message----- From: Srikanth Srungarapu [mailto:srikanth...@gmail.com] Sent: Tuesday, April 22, 2014 4:45 PM To: user@hbase.apache.org; d...@hbase.apache.org Subject: unable to access apache hbase blogs website Hi,

Re: unable to access apache hbase blogs website

2014-04-22 Thread Ted Yu
Please use the following as a backup: http://webcache.googleusercontent.com/search?q=cache:http://blogs.apache.org/hbase On Tue, Apr 22, 2014 at 5:05 PM, Rendon, Carlos (KBB) cren...@kbb.com wrote: I'm also unable to access it, so I would assume it is down. -Carlos -----Original Message-----

Date DataType broken when Migrating from Phoenix2.0.2 to Phoenix3.0.0

2014-04-22 Thread anil gupta
Hi All, We have written data into our HBase tables using the PDataType of Phoenix 2.0.2. We have custom MR loaders that use PDataType so that we can use Phoenix for ad-hoc querying. I am trying to migrate to Phoenix 3.0.0, but the Date-type column values are not coming out correctly. These are big
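
To isolate whether the old encoder or the new decoder is at fault, a round-trip sketch (assuming the enum-style PDataType API of that era; the package moved from com.salesforce.phoenix to org.apache.phoenix between 2.x and 3.0, so adjust the import for whichever jar is on the classpath):

  import java.sql.Date;
  import org.apache.phoenix.schema.PDataType; // com.salesforce.phoenix.schema.PDataType on 2.0.2

  public class DateRoundTrip {
      public static void main(String[] args) {
          Date original = Date.valueOf("2014-04-22");
          // Encode the way a PDataType-based MR loader would.
          byte[] encoded = PDataType.DATE.toBytes(original);
          // Decode with the runtime's PDataType; if 2.0.2-written bytes decode
          // differently under 3.0.0, the two values diverge here.
          Date decoded = (Date) PDataType.DATE.toObject(encoded);
          System.out.println(original + " -> " + decoded);
      }
  }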

Re: is my hbase cluster overloaded?

2014-04-22 Thread lars hofhansl
What makes you say this? HBase has a lot of very short-lived garbage (like KeyValue objects that do not outlive an RPC request) and a lot of long-lived data in the memstore and the block cache. We want to avoid accumulating the short-lived garbage and at the same time leave most of the heap for
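
An illustrative hbase-env.sh sketch of that heap split (a modest young gen so request-scoped garbage dies cheaply in Eden, CMS for the long-lived memstore and block cache; the numbers are placeholders, not advice from this thread):

  # hbase-env.sh: hypothetical GC settings reflecting the tradeoff above
  export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
    -Xmx8g -Xmn512m \
    -XX:+UseConcMarkSweepGC \
    -XX:CMSInitiatingOccupancyFraction=70"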