Hi, I use CH3u4, no presplit, balancer is enabled. Thanks.
On Thu, Jul 4, 2013 at 1:25 PM, Ted Yu yuzhih...@gmail.com wrote:
Did you presplit your table ?
Was load balancer enabled ?
What HBase version do you use ?
Thanks
On Jul 3, 2013, at 10:21 PM, ch huang justlo...@gmail.com wrote:
Hi,
If I have enabled short-circuit reads, should I ever be seeing clienttrace
logs in the datanode for the RegionServer DFSClient that is co-located with
the datanode?
Besides that, is there any other way to verify that my setting for
short-circuit reads is working fine?
Thanks,
Viral
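For reference, a minimal sketch of the settings usually involved in enabling short-circuit reads on an HBase 0.94 / CDH-era cluster. The `hbase` user and the exact property set are assumptions; check your distribution's documentation before relying on this:

```xml
<!-- hdfs-site.xml on each datanode (restart datanodes after changing) -->
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <!-- older (pre-HDFS-347) short-circuit: the user running the RegionServer -->
  <name>dfs.block.local-path-access.user</name>
  <value>hbase</value>
</property>
```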
I think there is a metric in HBase and HDFS (JMX) reflecting that.
If you find it and find it useful, do tell...
On Thursday, July 4, 2013, Viral Bajaria wrote:
Hi,
If I have enabled shortcircuit reads, should I ever be seeing clienttrace
logs in the datanode for the regionserver DFSClient
The load balancer process, as I see it, is very slow. I have a big table; its
region distribution is node1: 5, node2: 22, node3: 5. After 4 hours it became
node1: 6, node2: 20, node3: 6. The change is very, very slow!
On Thu, Jul 4, 2013 at 1:54 PM, Azuryy Yu azury...@gmail.com wrote:
I supposed load
@stack: Thanks for the explanation. I understand the difference between single
quotes and double quotes. Using single quotes to interpret the string
literally is not the behavior I expect. I want the bytes exactly
represented by the escaped hexadecimal strings.
@Ted: I filed a JIRA issue at
hbase(main):006:0
The prompt has three parts: hbase(main), 006, 0. What does the third part
mean?
I looked up the Ganglia metrics that I have set up for the cluster (both
HBase and HDFS) and don't see it there. Is it not published to Ganglia?
On Wed, Jul 3, 2013 at 11:33 PM, Asaf Mesika asaf.mes...@gmail.com wrote:
I think there is a metric in HBase and HDFS (JMX) reflecting that.
If you
That's just the default irb prompt with the line number (the third part is the command nesting depth)...
http://ruby-doc.org/docs/ProgrammingRuby/html/irb.html
Matteo
On Thu, Jul 4, 2013 at 9:38 AM, ch huang justlo...@gmail.com wrote:
hbase(main):006:0
the prompt has three part , hbase(main) 006 0 ,what's the third part
means?
If SCR takes effect, you can see related logs in the datanode log.
On Jul 4, 2013 2:26 PM, Viral Bajaria viral.baja...@gmail.com wrote:
Hi,
If I have enabled shortcircuit reads, should I ever be seeing clienttrace
logs in the datanode for the regionserver DFSClient that is co-located with
Currently the datanode shows a lot of clienttrace logs for DFSClient. I did a
quick command-line check to see how many clienttrace entries I get per active
RegionServer, and it seems the local RegionServer had very few (< 1%).
Given that datanode logs are too noisy with clienttrace, I was hoping to
find the
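For anyone repeating this check, a rough sketch of such a command-line count. The log path and the line format below are made-up samples, not real datanode output; point the `grep` at your actual datanode log and adjust the `sed` pattern to your log format:

```shell
# Write a few fake clienttrace-style lines to stand in for a real datanode log
cat <<'EOF' > /tmp/datanode-sample.log
2013-07-04 10:00:01 INFO clienttrace: src: /10.0.0.1:50010, dest: /10.0.0.2:41234
2013-07-04 10:00:02 INFO clienttrace: src: /10.0.0.1:50010, dest: /10.0.0.3:41235
2013-07-04 10:00:03 INFO clienttrace: src: /10.0.0.1:50010, dest: /10.0.0.2:41236
EOF
# Count clienttrace entries per destination host, busiest first
grep clienttrace /tmp/datanode-sample.log \
  | sed 's|.*dest: /\([0-9.]*\):.*|\1|' \
  | sort | uniq -c | sort -rn
```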
I think we should have some explicit indicator for this feature.
Mind filing a JIRA for this?
Regards
Ram
On Thu, Jul 4, 2013 at 2:30 PM, Viral Bajaria viral.baja...@gmail.com wrote:
Currently datanode shows a lot of clienttrace logs for DFSClient. I did a
quick command line check to see how
+1 we should have a metric for this.
- Original Message -
From: ramkrishna vasudevan ramkrishna.s.vasude...@gmail.com
To: user@hbase.apache.org user@hbase.apache.org
Cc:
Sent: Thursday, July 4, 2013 2:02 AM
Subject: Re: question about clienttrace logs in hdfs and shortcircuit read
I
Created the JIRA at: https://issues.apache.org/jira/browse/HBASE-8868
Sorry if I got a few fields wrong, will learn from this one to open better
JIRAs going forward.
Thanks,
Viral
On Thu, Jul 4, 2013 at 2:02 AM, ramkrishna vasudevan
ramkrishna.s.vasude...@gmail.com wrote:
I think we should
Can you check load balancer related log lines in master log and put them on
pastebin so that root cause can be diagnosed ?
Which HBase version are you using ?
Thanks
On Jul 3, 2013, at 11:44 PM, ch huang justlo...@gmail.com wrote:
the load balancer process as i see is very slow i have a big
Yes, I saw it. I followed Ted's advice to use
scan.setTimeRange(sometimestamp, Long.MAX_VALUE)
On Wed, Jul 3, 2013 at 11:23 PM, Asaf Mesika asaf.mes...@gmail.com wrote:
Seems right. You can make it more efficient by creating your result array
in advance and then filling it.
Regarding time
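For the record, the same time-range restriction is also available from the HBase shell. An untested sketch (it needs a running cluster, and the table name and timestamp are made up):

```ruby
# Scan only cells written at or after sometimestamp.
# TIMERANGE takes [min, max) in milliseconds; java.lang.Long::MAX_VALUE
# plays the role of Long.MAX_VALUE in the Java call quoted above.
scan 'mytable', { TIMERANGE => [1372900000000, java.lang.Long::MAX_VALUE] }
```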
I checked the latest API of the Delete class. I am afraid you have to do it
yourself.
regards!
Yong
On Wed, Jul 3, 2013 at 6:46 PM, Rahul Bhattacharjee rahul.rec@gmail.com
wrote:
Hi,
Like scan with a range, I would like to delete rows with a range. Is this
supported from the hbase shell?
Lets
It is not supported from the shell, and not directly from the Delete API either.
You can have a look at BulkDeleteEndpoint, which can do what you want to
-Anoop-
On Thu, Jul 4, 2013 at 4:09 PM, yonghu yongyong...@gmail.com wrote:
I check the latest api of Delete class. I am afraid you have to do it by
What about the HDFS data locality metric?
And remote reads and local reads?
On Thursday, July 4, 2013, Viral Bajaria wrote:
Currently datanode shows a lot of clienttrace logs for DFSClient. I did a
quick command line check to see how many clienttrace do I get per active
RegionServer and it seems the
Hi Anoop
One more question: can I use BulkDeleteEndpoint at the client side, or
should I use it as a coprocessor deployed on the server side?
Thanks!
Yong
On Thu, Jul 4, 2013 at 12:50 PM, Anoop John anoop.hb...@gmail.com wrote:
It is not supported from shell. Not directly from delete
It seems that the JRuby-based HBase shell does not handle ASCII-8BIT correctly.
Is there any work-around for this?
My locale settings are all en_US.
LANG=en_US
LC_CTYPE=en_US
LC_NUMERIC=en_US
LC_TIME=en_US
LC_COLLATE=en_US
LC_MONETARY=en_US
LC_MESSAGES=en_US
LC_PAPER=en_US
LC_NAME=en_US
BulkDeleteEndpoint is a coprocessor endpoint implementation. For usage,
please refer to TestBulkDeleteProtocol.
You will be able to call the API at the client side, and the actual execution
will happen at the server side. (This is what happens with Endpoints :) )
-Anoop-
On Thu, Jul 4, 2013 at 4:29 PM, yonghu
So, I can understand it as a predefined coprocessor. :)
regards!
Yong
On Thu, Jul 4, 2013 at 1:12 PM, Anoop John anoop.hb...@gmail.com wrote:
BulkDeleteEndpoint is a coprocessor endpoint impl. For the usage pls
refer TestBulkDeleteProtocol.
You will be able to call the API at client side
Can you please also check on the HBase Web UI?
Do you have regions in transition?
Maybe the balancer already gave the balance plan, and regions are
moving, slowly, to their destinations? How big are those regions? I
mean, size in MB/GB and number of rows?
JM
2013/7/4 Ted Yu
bq. CH3u4
Did you mean CDH3u4 ?
How many tables do you have in total ?
Are regions from all the tables balanced across your cluster ?
Cheers
On Wed, Jul 3, 2013 at 11:00 PM, ch huang justlo...@gmail.com wrote:
hi,i use CH3u4 , no presplit, balancer is enabled,thanks
On Thu, Jul 4, 2013
Hi All,
I am using HBase in distributed mode. A number of master and regionserver
nodes are running on different machines.
Now I want to upgrade HBase to the recent stable release.
My main purpose is to upgrade HBase without shutting down the whole cluster.
I want to know if on one machine I
Hi Hanish,
Which version are you using, and which version are you targeting?
That will change the responses we will provide you...
JM
2013/7/4 Hanish Bansal hanish.bansal.agar...@gmail.com:
Hi All,
I am using HBase in distributed mode. A number of master and regionserver
nodes are running
Hi
I am using HBase *0.94.6*.
I am targeting the recent stable release, *0.94.7 or 0.94.8*.
On Thu, Jul 4, 2013 at 2:13 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org
wrote:
Hi Hanish,
Which version are you using, and which version are you targeting?
That will change the responses we will
Hi Hanish,
Client and server calls between HBase minor releases are supposed to
be compatible.
If you want to upgrade all your nodes from 0.94.6 to 0.94.8 (or 0.94.9
if you wait a few days), simply deploy the new jars and do a
rolling restart.
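The restart step can be sketched roughly as follows (untested; it assumes the new release is already unpacked at $HBASE_HOME on every node, and that the 0.94 bin/rolling-restart.sh script is available):

```shell
# From the master node, after deploying the new jars everywhere:
cd "$HBASE_HOME"
# Restart the master, then each regionserver in turn; --graceful moves
# regions off a server before restarting it (a 0.94 script option)
bin/rolling-restart.sh --graceful
```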
JM
2013/7/4 Hanish Bansal
Coprocessors and endpoints are slightly different, but the
BulkDeleteEndpoint gives you the provision to do the functionality that you
need. Basically the code gets executed in all the regions and the result
is returned back to the client.
Coprocessors allow you to control the execution on the
Sure, you can do it, but not by using the hbase shell directly; instead,
write an HBase shell script (a shell script, Python script, or Ruby script).
Read about HBase shell scripts for basic reference:
http://hbase.apache.org/shell.html
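A rough JRuby sketch of such a script (untested; the table name and row keys are made up, and the 0.94 Java client API is assumed), to be run as `hbase shell delete_range.rb`:

```ruby
# delete_range.rb -- delete all rows in [startrow, stoprow) from 'mytable'
include Java
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.HTable
import org.apache.hadoop.hbase.client.Scan
import org.apache.hadoop.hbase.client.Delete
import org.apache.hadoop.hbase.util.Bytes

table = HTable.new(HBaseConfiguration.create, 'mytable')
scan  = Scan.new(Bytes.toBytes('startrow'), Bytes.toBytes('stoprow'))
table.getScanner(scan).each do |result|
  table.delete(Delete.new(result.getRow))   # one Delete per matching row
end
table.close
```

Note this round-trips each row through the client; the BulkDeleteEndpoint mentioned elsewhere in this thread does the deletion server-side instead.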
*Thanks & Regards*
∞
Shashwat Shriparv
On Thu, Jul 4, 2013 at
Thank you very much for your response.
Would you please guide me about this security flag? Where can I enable it? I
am sorry, I am a novice in HBase and really don't have an idea where to deal
with it.
I am eagerly waiting for your response.
Thanks In Advance,
Hi Jean-Marc,
So I can assume that:
A rolling upgrade of all nodes from 0.94.6 to 0.94.8 will not cause any data
inconsistency.
Also, there will be no Java API changes between HBase minor releases.
Am I right?
On Thu, Jul 4, 2013 at 8:09 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org
wrote:
Hi
Hi Hanish,
Are you using 0.94.6? Or 0.94.6.1?
There were some issues on 0.94.6 with the rolling restart, I think.
If you are using 0.94.6.1 there should not be any issue.
And you are right, there are no API changes. It's compatible.
JM
2013/7/4 Hanish Bansal hanish.bansal.agar...@gmail.com:
Hi
The HBase Team is pleased to announce the immediate release of HBase 0.94.9.
Download it from your favorite Apache mirror [1].
As usual, all previous 0.92.x and 0.94.x releases can be upgraded to 0.94.9 via a
rolling upgrade without downtime; intermediary versions can be skipped.
0.94.9 is the
Here is the JIRA that led to the release of 0.94.6.1:
https://issues.apache.org/jira/browse/HBASE-8259
Cheers
On Thu, Jul 4, 2013 at 10:10 AM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Hi Hanish,
Are you using 0.94.6? Or 0.94.6.1?
There was some issues on 0.94.6 with the rolling
It looks like converting an ASCII-8BIT byte array to a UTF-8 string
will result in such a decoding error.
see: http://stackoverflow.com/a/11162470
I found a workaround by explicitly converting the string to a byte
array in the HBase shell. For example, executing the following command
split
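In plain Ruby (outside the HBase shell) the byte-array conversion looks like this; a minimal sketch of the idea, not the exact shell command from above:

```ruby
# Turn an escaped-hex string into an explicit array of byte values,
# sidestepping any UTF-8 transcoding of the String itself.
s = "\xff\x00\x01"
bytes = s.bytes.to_a
puts bytes.inspect   # prints [255, 0, 1]
```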
I see lots of "Skipping load balancing" info. Is the balancer refusing to work? Why?
2013-07-05 09:56:25,121 DEBUG org.apache.hadoop.hbase.regionserver.HRegion:
DELETING region
hdfs://CH22:9000/hbaseroot/demoK/febcd37c57994604c8e218d8eb9c75c2
2013-07-05 09:56:25,130 INFO org.apache.hadoop.hbase.catalog.MetaEditor:
No regions in transition.
Regions are the default size, 256 MB.
On Thu, Jul 4, 2013 at 9:16 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org
wrote:
Can you please also check on the HBase Web UI?
Do you have regions in transition?
Maybe the balancer already gave the balance plan, and regions are
2013-07-05 10:11:24,894 INFO org.apache.hadoop.hbase.master.LoadBalancer:
Skipping load balancing. servers=2 regions=145 average=72.5 mostloaded=73
leastloaded=72
Balancer saw 2 servers which were balanced - in terms of number of regions.
Where did CH36 go ?
Cheers
On Thu, Jul 4, 2013 at 7:16
Hi,
I need to calculate the data size in HBase. I can do it by using the KV
length, but it is time-consuming with a huge data block.
HFile looks like a better solution, but I still have a question with HFile:
HFile will give the size of the data block. What if I have a limit in the
Scan object, start row and end row? And those
If you are trying to calculate the data size that is already loaded into
HBase, you can use the UI to see the number of store files and the size of
store files to know the size of the data in HBase.
Regards
Ram
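A quick, untested alternative from the command line: ask HDFS directly for the table's on-disk size. The path assumes the default hbase.rootdir layout and a made-up table name, and the flag spelling varies between Hadoop versions (`-dus` on older releases):

```shell
# Total on-disk size of one table's store files in HDFS
hadoop fs -du -s /hbase/mytable
```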
On Fri, Jul 5, 2013 at 4:53 AM, Bikash Agrawal er.bikas...@gmail.com wrote:
Hi ,