Fwd: Migrating from Apache Cassandra to HBase

2018-09-10 Thread onmstester onmstester
Any idea? Forwarded message From: onmstester onmstester To: "user" Date: Sat, 08 Sep 2018 10:46:25 +0430 Subject: Migrating from Apache Cassandra to HBase Hi, Currently I'm using Apache Cassandra as

Re: HBase 2.0.1 INCONSISTENT issues

2018-09-10 Thread Oleg Galitskiy
Upgrading to "2.0.2" didn't help. "Fixed" it by re-creating the tables, which is not good. Hopefully in the near future we will have a tool to fix that. On Mon, Sep 10, 2018 at 2:05 PM Oleg Galitskiy wrote: > Yes, now the main issue is the "hole". > > Could you give me an example of how I can fetch or scan a section
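For reference, the "re-create" workaround can be scripted against the Java Admin API. A minimal sketch, assuming HBase 2.x and a placeholder table name; note it drops all data and does not preserve region split points, so snapshot or export the table first:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;

    public class RecreateTable {
        public static void main(String[] args) throws Exception {
            TableName table = TableName.valueOf("some_table"); // placeholder name
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                // Capture the existing schema so the table can be rebuilt with it.
                TableDescriptor desc = admin.getDescriptor(table);
                admin.disableTable(table);
                admin.deleteTable(table);  // WARNING: drops all data and split points
                admin.createTable(desc);
            }
        }
    }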

Re: questions regarding hbase major compaction

2018-09-10 Thread Josh Elser
1. Yes.
2. HDFS NameNode pressure, read slowdown, general poor performance.
3. The default configuration is weekly; if you don't explicitly know of reasons why weekly doesn't work, that is what you should follow ;)
4. No. I would be surprised if you need to do anything special with S3, but I don't know
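The weekly default in point 3 is the hbase.hregion.majorcompaction interval (604800000 ms). If you move off the periodic schedule, major compactions can still be requested on demand; a minimal sketch using the Java Admin API with a placeholder table name:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ManualMajorCompaction {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                // Asynchronously request a major compaction of every region of the table.
                admin.majorCompact(TableName.valueOf("some_table")); // placeholder name
            }
        }
    }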

Re: Extremely high CPU usage after upgrading to HBase 1.4.4

2018-09-10 Thread Srinidhi Muppalla
It was during a period when the number of client operations was relatively low. It wasn't zero, but it was definitely off-peak hours. On 9/10/18, 12:16 PM, "Ted Yu" wrote: In the previous stack trace you sent, the shortCompactions and longCompactions threads were not active. Was

Re: HBase 2.0.1 INCONSISTENT issues

2018-09-10 Thread Oleg Galitskiy
Yes, now the main issue is the "hole". Could you give me an example of how I can fetch or scan a section of the table? Thanks. On Mon, Sep 10, 2018 at 1:53 PM Stack wrote: > On Mon, Sep 10, 2018 at 1:32 PM Oleg Galitskiy > wrote: > > > Hello, > > > > Facing inconsistency issues on HBase 2.0.1: > > -- >
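For the fetch/scan part of the question, a minimal sketch with the 2.x Java client; the table name and row keys are placeholders:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ScanSection {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Table table = conn.getTable(TableName.valueOf("some_table"))) {
                // Scan only the rows between two keys (start inclusive, stop exclusive).
                Scan scan = new Scan()
                    .withStartRow(Bytes.toBytes("row-0001"))  // placeholder start key
                    .withStopRow(Bytes.toBytes("row-0100"));  // placeholder stop key
                try (ResultScanner scanner = table.getScanner(scan)) {
                    for (Result r : scanner) {
                        System.out.println(Bytes.toString(r.getRow()));
                    }
                }
            }
        }
    }

Scanning the key range around a reported "hole" is a quick way to see whether reads error out or silently skip the missing region.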

Re: HBase 2.0.1 INCONSISTENT issues

2018-09-10 Thread Stack
On Mon, Sep 10, 2018 at 1:32 PM Oleg Galitskiy wrote: > Hello, > > Facing inconsistency issues on HBase 2.0.1: > -- > > ERROR: Region { meta => null, hdfs => > > hdfs://master:50001/hbase/data/default/some_table/0646d0bee757d0fb0de1529475b5426f, > deployed => >

HBase 2.0.1 INCONSISTENT issues

2018-09-10 Thread Oleg Galitskiy
Hello, Facing inconsistency issues on HBase 2.0.1: -- ERROR: Region { meta => null, hdfs => hdfs://master:50001/hbase/data/default/some_table/0646d0bee757d0fb0de1529475b5426f, deployed => hbase-region,16020,1536493017073;some_table,,1534195327532.0646d0bee757d0fb0de1529475b5426f., replicaId

Re: Extremely high CPU usage after upgrading to HBase 1.4.4

2018-09-10 Thread Ted Yu
In the previous stack trace you sent, the shortCompactions and longCompactions threads were not active. Was the stack trace captured during a period when the number of client operations was low? If not, can you capture a stack trace during off-peak hours? Cheers On Mon, Sep 10, 2018 at 12:08 PM
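Running jstack against the RegionServer pid is the usual way to capture such a dump; as a rough in-process alternative, the standard ThreadMXBean can print similar information (purely illustrative):

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class DumpThreads {
        public static void main(String[] args) {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            // Dump every live thread with lock info; note that ThreadInfo.toString()
            // truncates very deep stacks, so jstack output is more complete.
            for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
                System.out.print(info);
            }
        }
    }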

Re: Extremely high CPU usage after upgrading to HBase 1.4.4

2018-09-10 Thread Srinidhi Muppalla
Hi Ted, The highest number of filters used is 10, but the average is generally close to 1. Is it possible the CPU usage spike has to do with HBase-internal maintenance operations? It looks like, post-upgrade, the spike isn't correlated with the frequency of reads/writes we are making, because
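For context on the numbers above, "number of filters" presumably means filters combined on a single Scan through a FilterList; a hypothetical sketch of a ten-filter scan (the filter type and prefixes are invented for illustration):

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.filter.ColumnPrefixFilter;
    import org.apache.hadoop.hbase.filter.Filter;
    import org.apache.hadoop.hbase.filter.FilterList;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ManyFilters {
        public static void main(String[] args) {
            List<Filter> filters = new ArrayList<>();
            // Each filter in the list adds per-cell evaluation cost on the server.
            for (int i = 0; i < 10; i++) {
                filters.add(new ColumnPrefixFilter(Bytes.toBytes("prefix-" + i)));
            }
            Scan scan = new Scan()
                .setFilter(new FilterList(FilterList.Operator.MUST_PASS_ONE, filters));
            System.out.println(scan);
        }
    }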

Questions regarding HBase major compaction

2018-09-10 Thread Antonio Si
Hello, As I understand it, deleted records in HBase data files do not get removed until a major compaction is performed. I have a few questions regarding major compaction: 1. If I set a TTL and/or a max number of versions, records that are older than the TTL or the expired versions will
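On question 1: TTL and max versions are column-family settings; cells past either limit stop being returned by reads and are physically dropped by compactions. A minimal sketch of setting both with the 2.x client API, using placeholder table and family names:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SetTtlAndVersions {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                ColumnFamilyDescriptor cf = ColumnFamilyDescriptorBuilder
                    .newBuilder(Bytes.toBytes("cf"))  // placeholder family name
                    .setTimeToLive(7 * 24 * 60 * 60)  // TTL in seconds (one week)
                    .setMaxVersions(3)                // keep at most 3 versions per cell
                    .build();
                // Apply the new schema to an existing table.
                admin.modifyColumnFamily(TableName.valueOf("some_table"), cf);
            }
        }
    }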

Re: Improving on MTTR of cluster [HBase - 1.1.13]

2018-09-10 Thread Ted Yu
For the second config you mentioned, hbase.master.distributed.log.replay, see http://hbase.apache.org/book.html#upgrade2.0.distributed.log.replay FYI On Mon, Sep 10, 2018 at 8:52 AM sahil aggarwal wrote: > Hi, > > My cluster has around 50k regions and 130 RS. In case of an unclean shutdown, > the

Improving on MTTR of cluster [HBase - 1.1.13]

2018-09-10 Thread sahil aggarwal
Hi, My cluster has around 50k regions and 130 RS. In case of an unclean shutdown, the cluster takes around 40-50 minutes to come up (mostly slow on region assignment, from observation). Trying to optimize it, I found the following possible configs: *hbase.assignment.usezk:* which will co-host the meta table and
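The settings named in this thread are cluster-side keys for hbase-site.xml; the sketch below only spells out the keys programmatically, with illustrative values (hbase.assignment.usezk belongs to the 1.x assignment path, and hbase.master.distributed.log.replay was removed in 2.0 per the link in the reply above):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class MttrTuningKeys {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // Normally set in hbase-site.xml on the master and regionservers, not in code.
            conf.setBoolean("hbase.assignment.usezk", true);              // ZK-based assignment (1.x)
            conf.setBoolean("hbase.master.distributed.log.replay", true); // removed in 2.0
            System.out.println(conf.get("hbase.assignment.usezk"));
        }
    }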