Re: Cannot create table with DateTiered compaction

2016-08-10 Thread ramkrishna vasudevan
Then the choice you have is to backport the feature and the related changes to your branch, but I am not sure that is feasible for you. Otherwise, your only option is to upgrade your HBase release if you need it. Regards Ram On Thu, Aug 11, 2016 at 8:49 AM, spats

Re: Cannot create table with DateTiered compaction

2016-08-10 Thread spats
Hi Ram, Unfortunately I don't have an HBase version with DateTiered compaction. I tried with version 1.0.0 but, as expected, it doesn't work since the changes are not in that version either, as per https://issues.apache.org/jira/browse/HBASE-15181

Re: submitting spark job with kerberized HBase issue

2016-08-10 Thread Aneela Saleem
Hi Subroto, I checked this. When I set the property in the spark-defaults.conf file and log its value from SparkConf, it says "No Such Element Found". But when I set it through SparkConf explicitly, the previous issue is still not resolved. I'm trying hard to get it done but have found no workaround yet!

Re: HBase regionserver.MultiVersionConcurrencyControl Warning

2016-08-10 Thread Stack
On Wed, Aug 10, 2016 at 1:13 AM, Sterfield wrote: > Hi, > > Thanks for your answer. > > I'm currently testing OpenTSDB + HBase, so I'm generating thousands of HTTP > POSTs against OpenTSDB in order to write data points (currently up to 300k/s). > OpenTSDB is only doing increment /

Re: help, try to use HBase's checkAndPut() to implement distributed lock

2016-08-10 Thread Stack
Why do you want to lock a whole table, Ming Liu? Your throughput will go to zero while the write lock is held. Is this OK for you? As for TableLockManager, it is marked @InterfaceAudience.Private, which, per the refguide[1], means "...for HBase internal use only." St.Ack 1.

Re: help, try to use HBase's checkAndPut() to implement distributed lock

2016-08-10 Thread John Leach
Ming, One challenge with locking mechanisms is that you need to account for node failure after the lock is acquired. If a region were to be moved, split, etc., the lock needs to survive those operations. Most databases put the locks inline with the data they store and use their transaction
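
A minimal sketch of the checkAndPut() approach the thread subject asks about, using the HBase 1.x Java client (table, family, and qualifier names here are illustrative, and the "locks" table is assumed to exist). The row is the lock: acquisition succeeds only if the owner cell is absent, and release deletes the cell only if the caller still owns it.

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Delete;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CheckAndPutLock {
      private static final byte[] F = Bytes.toBytes("l");      // lock family
      private static final byte[] Q = Bytes.toBytes("owner");  // owner qualifier

      // Try to acquire: a null expected value makes checkAndPut succeed
      // only when the owner cell does not exist yet.
      public static boolean tryLock(Table locks, byte[] lockRow, String ownerId)
          throws IOException {
        Put p = new Put(lockRow);
        p.addColumn(F, Q, Bytes.toBytes(ownerId));
        return locks.checkAndPut(lockRow, F, Q, null, p);
      }

      // Release: delete the cell only if we are still the recorded owner.
      public static boolean unlock(Table locks, byte[] lockRow, String ownerId)
          throws IOException {
        Delete d = new Delete(lockRow);
        return locks.checkAndDelete(lockRow, F, Q, Bytes.toBytes(ownerId), d);
      }
    }

Note this sketch does not solve the failure case John raises: if the owner dies, the cell stays forever. A common mitigation is to store an expiry timestamp alongside the owner and let contenders steal locks whose lease has lapsed; that logic is omitted here.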

RE: help, try to use HBase's checkAndPut() to implement distributed lock

2016-08-10 Thread Liu, Ming (Ming)
Thanks Ted! -Original Message- From: Ted Yu [mailto:yuzhih...@gmail.com] Sent: Wednesday, August 10, 2016 10:17 PM To: user@hbase.apache.org Subject: Re: help, try to use HBase's checkAndPut() to implement distributed lock Please take a look at EnableTableHandler where you can find

Re: help, try to use HBase's checkAndPut() to implement distributed lock

2016-08-10 Thread Ted Yu
Please take a look at EnableTableHandler where you can find example - prepare() and process(). On Tue, Aug 9, 2016 at 5:04 PM, Liu, Ming (Ming) wrote: > Thanks Ted for pointing out this. Can this TableLockManager be used from a > client? I am fine to migrate if this API

Re: submitting spark job with kerberized HBase issue

2016-08-10 Thread Subroto Sanyal
Not sure what the problem could be, but I would suggest you double-check whether the said property is part of the SparkConf object being created in the code (just by logging it). Cheers, Subroto Sanyal On Wed, Aug 10, 2016 at 1:39 PM, Aneela Saleem wrote: > The property
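
A quick way to do the check Subroto suggests, sketched in Java (class name illustrative): read the flag back from the driver's SparkConf and log it. SparkConf.get(key) with no default throws NoSuchElementException, which matches the "No Such Element Found" Aneela reports above; the two-argument form avoids that.

    import org.apache.spark.SparkConf;

    public class ConfCheck {
      public static void main(String[] args) {
        SparkConf conf = new SparkConf();
        // Prints "<unset>" if spark-defaults.conf was not picked up.
        System.out.println("spark.yarn.security.tokens.hbase.enabled = "
            + conf.get("spark.yarn.security.tokens.hbase.enabled", "<unset>"));
      }
    }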

Re: submitting spark job with kerberized HBase issue

2016-08-10 Thread Aneela Saleem
The property was already set in the spark-defaults.conf file but I am still facing the same error. On Wed, Aug 10, 2016 at 4:35 PM, Subroto Sanyal wrote: > yes... you can set the property in the conf file or you can set the property > explicitly in the Spark Configuration object used while

Re: submitting spark job with kerberized HBase issue

2016-08-10 Thread Subroto Sanyal
yes... you can set the property in the conf file, or you can set the property explicitly in the Spark Configuration object used while creating the SparkContext/JavaSparkContext. Cheers, Subroto Sanyal On Wed, Aug 10, 2016 at 12:09 PM, Aneela Saleem wrote: > Thanks Subroto, >
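
A minimal sketch of the second option using Spark's Java API (class and app name are illustrative): set the flag on the SparkConf used to build the context, equivalent to the spark-defaults.conf line "spark.yarn.security.tokens.hbase.enabled true".

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class KerberizedHBaseJob {
      public static void main(String[] args) {
        SparkConf conf = new SparkConf()
            .setAppName("kerberized-hbase-job")
            // Ask YARN to obtain/renew HBase delegation tokens for this job.
            .set("spark.yarn.security.tokens.hbase.enabled", "true");
        JavaSparkContext jsc = new JavaSparkContext(conf);
        // ... HBase access goes here ...
        jsc.stop();
      }
    }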

Re: submitting spark job with kerberized HBase issue

2016-08-10 Thread Aneela Saleem
Thanks Subroto, Do I need to set it to 'true' in the spark-defaults.conf file? On Wed, Aug 10, 2016 at 2:59 PM, Subroto Sanyal wrote: > hi Aneela > > By any chance you are missing the property: > spark.yarn.security.tokens.hbase.enabled > This was introduced as part of the fix:

Re: submitting spark job with kerberized HBase issue

2016-08-10 Thread Subroto Sanyal
hi Aneela By any chance you are missing the property: spark.yarn.security.tokens.hbase.enabled This was introduced as part of the fix: https://github.com/apache/spark/pull/8134/files Cheers, Subroto Sanyal On Wed, Aug 10, 2016 at 11:53 AM, Aneela Saleem wrote: > And

Re: submitting spark job with kerberized HBase issue

2016-08-10 Thread Aneela Saleem
And I'm using the Apache distribution of Spark, not Cloudera. On Wed, Aug 10, 2016 at 12:06 PM, Aneela Saleem wrote: > Thanks Nkechi, > > I added this dependency as an external jar; when I compile the code, > unfortunately I get the following error: > > error: object cloudera

Re: Cannot create table with DateTiered compaction

2016-08-10 Thread ramkrishna vasudevan
Hi Spats, Date-tiered compaction is in 0.98.18, I believe, and not in 0.98.6. Can you try the latest release of the 0.98 branch? If it does not work, please report back. Regards Ram On Wed, Aug 10, 2016 at 1:56 PM, spats wrote: > Cannot create table with DateTiered compaction in

Cannot create table with DateTiered compaction

2016-08-10 Thread spats
Cannot create a table with DateTiered compaction in HBase 0.98.6. Also tried altering an existing table to DateTiered storage; this hangs indefinitely. Create table command: create 'XXX', { NAME => 'f1', COMPRESSION => 'SNAPPY', DATA_BLOCK_ENCODING => 'FAST_DIFF', REPLICATION_SCOPE => '1',
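
For comparison, a sketch in the Java client of creating such a table on a release that actually ships HBASE-15181 (0.98.18+ per Ram's reply above, or 1.3+); table and family names mirror the shell command, and the hbase.hstore.engine.class setting is what switches the family to date-tiered compaction.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CreateDateTieredTable {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("XXX"));
          HColumnDescriptor f1 = new HColumnDescriptor("f1");
          // Date-tiered compaction is selected per family via the store engine.
          f1.setConfiguration("hbase.hstore.engine.class",
              "org.apache.hadoop.hbase.regionserver.DateTieredStoreEngine");
          htd.addFamily(f1);
          admin.createTable(htd);
        }
      }
    }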

Re: HBase regionserver.MultiVersionConcurrencyControl Warning

2016-08-10 Thread Sterfield
Hi, Thanks for your answer. I'm currently testing OpenTSDB + HBase, so I'm generating thousands of HTTP POSTs against OpenTSDB in order to write data points (currently up to 300k/s). OpenTSDB is only doing increment / append (AFAIK). If I have understood your answer correctly, some write ops are

Re: submitting spark job with kerberized HBase issue

2016-08-10 Thread Aneela Saleem
Thanks Nkechi, I added this dependency as an external jar; when I compile the code, unfortunately I get the following error: error: object cloudera is not a member of package com [ERROR] import com.cloudera.spark.hbase.HBaseContext On Tue, Aug 9, 2016 at 7:51 PM, Nkechi Achara

Re: HBase regionserver.MultiVersionConcurrencyControl Warning

2016-08-10 Thread Anoop John
Ya, it comes with write workload, not with concurrent reads. Once the write is done (memstore write and WAL write), we mark the MVCC operation corresponding to it as complete and wait for the global read point to advance to at least this point. (Every write op will have a number corresponding
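
A toy model of the hand-off Anoop describes (illustrative only, not HBase's actual MultiVersionConcurrencyControl): each write is issued a sequence number, and on completion the writer advances the shared read point and then blocks until its own number becomes readable. A slow earlier write therefore stalls later ones, which is what the warning is about.

    import java.util.SortedSet;
    import java.util.TreeSet;

    public class ToyMvcc {
      private long writePoint = 0;  // last sequence number handed out
      private long readPoint = 0;   // highest point visible to readers
      private final SortedSet<Long> pending = new TreeSet<>();

      // A write begins: issue it the next sequence number.
      public synchronized long begin() {
        long seq = ++writePoint;
        pending.add(seq);
        return seq;
      }

      // Memstore and WAL writes are done: mark the op complete, advance the
      // read point as far as the oldest still-pending write allows, then wait
      // until our own number is visible to readers.
      public synchronized void completeAndWait(long seq) throws InterruptedException {
        pending.remove(seq);
        readPoint = pending.isEmpty() ? writePoint : pending.first() - 1;
        notifyAll();
        while (readPoint < seq) {
          wait();
        }
      }
    }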