Re: RegionServer 60030 Show All RPC Handler Task is empty

2015-07-08 Thread Louis Hust
Hi esteban, For the dump URL I get the following output: Tasks: === Executors: For the JSON format, I got the following output: curl http://:60030/rs-status\?format\=json\&filter\=handler [] All output is empty 2015-07-07 14:05

Re: Scan got exception

2015-07-08 Thread Louis Hust
Any idea? 2015-07-01 9:50 GMT+08:00 Louis Hust louis.h...@gmail.com: So cdh5.2.0 is patched with HBASE-11678? 2015-07-01 6:43 GMT+08:00 Stack st...@duboce.net: I checked, Vladimir, and 5.2.0 is the first release with the necessary HBASE-11678 BucketCache ramCache fills heap after

Automating major compactions

2015-07-08 Thread Dejan Menges
Hi, What's the best way to automate major compactions without enabling them during the off-peak period? What I was testing is a simple script which runs on every node in the cluster, checks whether a major compaction is already running on that node, and if not picks one region for compaction and runs the compaction
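
A minimal sketch of such a check (per table rather than per node, for simplicity), assuming an HBase shell that provides the compaction_state and major_compact commands; the table and region names below are placeholders and the output parsing is simplified:

    #!/bin/bash
    # Hypothetical sketch, not the author's actual script. Table and region
    # names are placeholders; compaction_state reports NONE, MINOR, MAJOR or
    # MAJOR_AND_MINOR for a table.
    TABLE="usertable"
    REGION="usertable,,1436300000000.0123456789abcdef0123456789abcdef."

    STATE=$(echo "compaction_state '${TABLE}'" | hbase shell 2>/dev/null \
              | grep -E '^(NONE|MINOR|MAJOR|MAJOR_AND_MINOR)$' | tail -n 1)

    if [[ "${STATE}" != MAJOR* ]]; then
      # No major compaction reported for the table: compact one region.
      echo "major_compact '${REGION}'" | hbase shell
    else
      echo "Major compaction already running on ${TABLE}, skipping."
    fi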

Re: RegionServer 60030 Show All RPC Handler Task is empty

2015-07-08 Thread Esteban Gutierrez
Yeah, it took about one day to see this in one of my test clusters with heavy load. Will keep you posted. Thanks, esteban. -- Cloudera, Inc. On Wed, Jul 8, 2015 at 2:32 AM, Louis Hust louis.h...@gmail.com wrote: Hi esteban, For the dump url get the following out: Tasks:

protobuf issue when building hbase 0.94.26 with HDFS 2.5.0

2015-07-08 Thread Neutron sharc
Hi folks, I'm building HBase 0.94.26 with HDFS 2.5.0. I have applied patch HBASE-11076 (to regenerate the proto Java source files with protoc 2.5.0), and my pom.xml points to protobuf.version 2.5.0. However, some unit tests still fail, complaining about:

Re: hbase 0.94.26 hangs when a datanode is suspended via SIGSTOP

2015-07-08 Thread Neutron sharc
A quick update: it turns out we were using the wrong HDFS version. The issue went away once we pulled in the right hadoop-hdfs jar. On Mon, Jun 22, 2015 at 10:29 AM, Ted Yu yuzhih...@gmail.com wrote: bq. my hbase client keeps stuck Can you provide a stack trace for the client? Were region servers

[DISCUSS] Bumping Thrift to 0.9.2 in branch-1

2015-07-08 Thread Srikanth Srungarapu
Hi Folks, HBase is currently using Thrift 0.9.0, while the latest version is 0.9.2. The HBase Thrift gateway is vulnerable to crashes due to THRIFT-2660 https://issues.apache.org/jira/browse/THRIFT-2660 when used with the default transport, and the workaround for this problem is

Re: Scan got exception

2015-07-08 Thread Vladimir Rodionov
Is this issue reproducible? If yes, then please submit a bug. -Vlad On Wed, Jul 8, 2015 at 2:32 AM, Louis Hust louis.h...@gmail.com wrote: Any idea? 2015-07-01 9:50 GMT+08:00 Louis Hust louis.h...@gmail.com: So the cdh5.2.0 is patched with HBASE-11678 ? 2015-07-01 6:43 GMT+08:00

Re: Automating major compactions

2015-07-08 Thread Behdad Forghani
To start a major compaction for a table from the CLI, you need to run: echo major_compact tablename | hbase shell I do this after bulk loading into the table. FYI, to avoid surprises, I also turn off the load balancer and rebalance regions manually. The CLI command to turn off the balancer is: echo
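
The snippet is cut off at the balancer command; assuming the standard HBase shell commands, a minimal sketch of both invocations ('mytable' is a placeholder table name):

    # Hedged sketch, not the author's exact script. 'mytable' is a placeholder.
    # Trigger a major compaction of a table from the command line:
    echo "major_compact 'mytable'" | hbase shell

    # Disable the load balancer before moving regions manually
    # (balance_switch prints the previous state):
    echo "balance_switch false" | hbase shell

    # Re-enable it once the manual rebalancing is done:
    echo "balance_switch true" | hbase shell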

Re: Automating major compactions

2015-07-08 Thread Dejan Menges
Hi Behdad, Thanks a lot, but I already do this part. My question was more about what to use (which exposed or unexposed metrics) to figure out most intelligently where major compaction is needed the most. Currently, I'm choosing the region which has the biggest number of store files + the biggest amount of

Re: Automating major compactions

2015-07-08 Thread Dejan Menges
Hi Mikhail, Actually, the reason is quite stupid on my side - to avoid compacting one region over and over again while others are waiting in line (reading the HTML and sorting only on the number of store files means that at some point you get a bunch of regions with exactly the same number of store files).

Re: Automating major compactions

2015-07-08 Thread Mikhail Antonov
I totally understand the reasoning behind compacting regions with the biggest number of store files, but I didn't follow why it's best to compact regions which have the biggest store files; maybe I'm missing something? I'd maybe compact regions which have the smallest average store file size? You may also want

Re: Automating major compactions

2015-07-08 Thread Vladimir Rodionov
You can find this info yourself, Dejan: 1. Locate the table dir on HDFS. 2. List all regions (directories). 3. Iterate over the files in each directory and find the oldest one (creation time). 4. The region with the oldest file is your candidate for major compaction. /HBASE_ROOT/data/namespace/table/region (If my
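
A minimal shell sketch of those four steps, assuming the layout with the column family at the end of the path (as JM notes in the follow-up); the table directory below is a placeholder:

    # Hypothetical sketch: list every store file under a table, oldest first;
    # the region directories of the oldest files are the compaction candidates.
    # The table directory is a placeholder; the assumed layout is
    # <hbase.rootdir>/data/<namespace>/<table>/<region>/<column_family>/<hfile>.
    TABLE_DIR=/hbase/data/default/usertable

    # -ls -R prints: perms repl owner group size date time path.
    # Keep regular files only (skip directories and dot-entries like .tmp),
    # then sort on the date/time columns so the oldest files come first.
    hdfs dfs -ls -R "${TABLE_DIR}" 2>/dev/null \
      | awk '$1 !~ /^d/ {print $6, $7, $8}' \
      | grep -v '/\.' \
      | sort \
      | head -n 5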

Re: Automating major compactions

2015-07-08 Thread Bryan Beaudreault
Our automation uses a combination of the following to determine what to compact: - Which regions have bad locality (% of blocks local vs. remote, using the HDFS getBlockLocations API) - Which regions have the most HFiles (most files per region/cf directory) - Which regions have gone the
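
A rough sketch of the "most HFiles per region/cf directory" heuristic only, using plain HDFS shell counts (the table path is a placeholder; the locality and age checks would need the Java APIs mentioned above):

    # Hypothetical sketch: count files under each <region>/<column_family>
    # directory of a table and list the fullest ones. Path is a placeholder.
    TABLE_DIR=/hbase/data/default/usertable

    # hdfs dfs -count prints: DIR_COUNT FILE_COUNT CONTENT_SIZE PATHNAME.
    # Sort by FILE_COUNT (column 2), descending; the top entries hold the
    # most HFiles and are the likeliest compaction candidates.
    hdfs dfs -count "${TABLE_DIR}/*/*" 2>/dev/null \
      | sort -k2,2nr \
      | head -n 10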

Re: Automating major compactions

2015-07-08 Thread Behdad Forghani
Hi, For my project, HBase would come to a halt after about 8 hours. I managed to reduce the load time to 10 minutes. What gave me the best results was splitting regions to best fit my data, compacting them manually whenever the tables changed, and using Snappy for compression. I have

Re: [DISCUSS] Bumping Thrift to 0.9.2 in branch-1

2015-07-08 Thread Ted Yu
bq. some minor additions (new API) in 0.9.2 [5] I don't seem to find [5]. Mind sharing the link ? Thanks On Wed, Jul 8, 2015 at 11:42 AM, Srikanth Srungarapu srikanth...@gmail.com wrote: Hi Folks, Currently, HBase is using Thrift 0.9.0 version, with the latest version being 0.9.2.

Re: Automating major compactions

2015-07-08 Thread Jean-Marc Spaggiari
Just missing the ColumnFamily at the end of the path. Your memory is pretty good. JM 2015-07-08 16:39 GMT-04:00 Vladimir Rodionov vladrodio...@gmail.com: You can find this info yourself, Dejan 1. Locate table dir on HDFS 2. List all regions (directories) 3. Iterate files in each directory

Re: [DISCUSS] Bumping Thrift to 0.9.2 in branch-1

2015-07-08 Thread Srikanth Srungarapu
@Sean, I'm thinking of getting this in for 1.3 and master. Do you think we should also get this into the 1.2 release line? @Ted, My bad, the number should have been [4]. It is pointing to the release notes of 0.9.2, i.e.

Adding nodes

2015-07-08 Thread Anupam sinha
Hi, I am going to extend my existing HBase cluster by adding HBase nodes. Will this have any effect on my cluster configuration? Thank you, Anu

Re: [DISCUSS] Bumping Thrift to 0.9.2 in branch-1

2015-07-08 Thread Sean Busbey
On Wed, Jul 8, 2015 at 11:12 PM, Srikanth Srungarapu srikanth...@gmail.com wrote: @Sean, I'm thinking of getting this in for 1.3 and master. Do you think we should also get this in for 1.2 release line? We're a bit too close to 1.2 for my comfort in changing the Thrift library version, given

Re: Scan got exception

2015-07-08 Thread ramkrishna vasudevan
+1 to what Vladimir says. If you can reproduce it, that would be great too. On Wed, Jul 8, 2015 at 10:18 PM, Vladimir Rodionov vladrodio...@gmail.com wrote: Is this issue reproducible? If - yes, then please submit a bug. -Vlad On Wed, Jul 8, 2015 at 2:32 AM, Louis Hust louis.h...@gmail.com

Re: protobuf issue when building hbase 0.94.26 with HDFS 2.5.0

2015-07-08 Thread Stack
Did you use the protoc compiler from 2.5.0 to regenerate the 0.94 pb classes? St.Ack On Wed, Jul 8, 2015 at 11:21 AM, Neutron sharc neutronsh...@gmail.com wrote: Hi folks, I'm building hbase 0.94.26 with HDFS 2.5.0. I have applied patch HBASE-11076 (to regenerate proto java source files
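
A hedged sketch of the checks this implies, nothing HBase-specific here; just verifying that the protoc on the PATH and the protobuf-java the build resolves are both 2.5.0:

    # Confirm the protoc binary used to regenerate the 0.94 pb classes:
    protoc --version        # should print: libprotoc 2.5.0

    # Confirm which protobuf-java the Maven build actually resolves:
    mvn dependency:tree -Dincludes=com.google.protobuf:protobuf-java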

Re: [DISCUSS] Bumping Thrift to 0.9.2 in branch-1

2015-07-08 Thread Andrew Purtell
Unfortunately, the recently added impersonation support [1] doesn't work with the framed transport, leaving a Thrift gateway that uses this feature susceptible to crashes. Updating the Thrift version to 0.9.2 will help us mitigate this problem. Can you say more about how the problem is mitigated?

Re: Adding nodes

2015-07-08 Thread Anupam sinha
How do I solve this problem? Does it affect my HBase cluster performance? Should I configure all nodes as HBase servers in the cluster? On Thu, Jul 9, 2015 at 10:51 AM, 伍照坤 tonywu...@gmail.com wrote: It will cause rebalancing, with some effect on online queries; some ranges may hang for a few seconds. Offline usage should

Re: [DISCUSS] Bumping Thrift to 0.9.2 in branch-1

2015-07-08 Thread Sean Busbey
Would this aim for 1.3 or 1.2? -- Sean On Jul 8, 2015 1:42 PM, Srikanth Srungarapu srikanth...@gmail.com wrote: Hi Folks, Currently, HBase is using Thrift 0.9.0 version, with the latest version being 0.9.2. Currently, the HBase Thrift gateway is vulnerable to crashes due to THRIFT-2660