Then the choice you have is to backport the feature and the related
changes to your branch, but I am not sure that is feasible for you.
Otherwise, your only option is to upgrade the HBase release if you need
this feature.
Regards
Ram
On Thu, Aug 11, 2016 at 8:49 AM, spats
Hi Ram,
Unfortunately I don't have an HBase version with DateTiered compaction. I
tried with version 1.0.0 but, as expected, it doesn't work, since the
changes are not in that version either, as per
https://issues.apache.org/jira/browse/HBASE-15181
Hi Subroto,
I checked this. When I set the property in the spark-defaults.conf file and
log its value from SparkConf, it says "No Such Element Found". But when I
set it through SparkConf explicitly, the previous issue is still not
resolved.
I'm trying hard to get it done, but no workaround found yet!
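For reference, SparkConf.get(key) throws a NoSuchElementException when the
key was never loaded, which matches the message above. A minimal sketch of
logging the value safely, assuming the corrected property name discussed
later in this thread:

import org.apache.spark.SparkConf;

public class ConfCheck {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf();
        // get(key) throws NoSuchElementException if the key was not
        // picked up from spark-defaults.conf; the two-argument form
        // returns a default instead, so it is safer for logging.
        String enabled =
            conf.get("spark.yarn.security.tokens.hbase.enabled", "unset");
        System.out.println("hbase token renewal enabled: " + enabled);
    }
}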
On Wed, Aug 10, 2016 at 1:13 AM, Sterfield wrote:
> Hi,
>
> Thanks for your answer.
>
> I'm currently testing OpenTSDB + HBase, so I'm generating thousands of
> HTTP POSTs to OpenTSDB in order to write data points (currently up to
> 300k/s). OpenTSDB is only doing increment /
Why do you want to lock a whole table, Ming Liu? Your throughput will go to
zero while the write lock is held. Is this OK with you?
On TableLockManager: it is marked @InterfaceAudience.Private, which, per
the refguide [1], means "...for HBase internal use only."
St.Ack
1.
Ming,
One challenge with locking mechanisms is that you need to account for node
failure after the lock is acquired. If a region were to be moved, split, etc.,
the lock needs to survive those operations. Most databases put the locks
inline with the data they store and use their transaction
Thanks Ted!
-----Original Message-----
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Wednesday, August 10, 2016 10:17 PM
To: user@hbase.apache.org
Subject: Re: help, try to use HBase's checkAndPut() to implement distribution
lock
Please take a look at EnableTableHandler, where you can find examples
- prepare() and process().
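For illustration only, here is a minimal sketch of a checkAndPut()-based
lock against the 1.x client API. The lock table layout (family "f",
qualifier "owner") is hypothetical, and note this does not solve the
owner-failure problem raised elsewhere in this thread, since a crashed
owner leaves the row behind:

import java.io.IOException;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class CheckAndPutLock {
    // Hypothetical layout: one row per lock name, one cell f:owner
    // holding the id of the current lock holder.
    private static final byte[] FAMILY = Bytes.toBytes("f");
    private static final byte[] QUALIFIER = Bytes.toBytes("owner");

    /** Try to acquire the lock; true means we won the race. */
    public static boolean tryLock(Table table, String lockName, String ownerId)
            throws IOException {
        byte[] row = Bytes.toBytes(lockName);
        Put put = new Put(row);
        put.addColumn(FAMILY, QUALIFIER, Bytes.toBytes(ownerId));
        // A null expected value makes checkAndPut succeed only when the
        // cell does not exist yet, so the acquire is atomic server-side.
        return table.checkAndPut(row, FAMILY, QUALIFIER, null, put);
    }

    /** Release the lock, but only if we still own it. */
    public static boolean unlock(Table table, String lockName, String ownerId)
            throws IOException {
        byte[] row = Bytes.toBytes(lockName);
        Delete delete = new Delete(row);
        return table.checkAndDelete(row, FAMILY, QUALIFIER,
                Bytes.toBytes(ownerId), delete);
    }
}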
On Tue, Aug 9, 2016 at 5:04 PM, Liu, Ming (Ming) wrote:
> Thanks Ted for pointing this out. Can this TableLockManager be used from a
> client? I am fine to migrate if this API
Not sure what the problem could be, but I would suggest you double-check
that the said property is part of the SparkConf object being created in the
code (just by logging it).
Cheers,
Subroto Sanyal
On Wed, Aug 10, 2016 at 1:39 PM, Aneela Saleem
wrote:
> The property
The property was already set in the spark-defaults.conf file, but I am
still facing the same error.
On Wed, Aug 10, 2016 at 4:35 PM, Subroto Sanyal
wrote:
> yes... you can set the property in the conf file or you can set the property
> explicitly in the Spark Configuration object used while
yes... you can set the property in the conf file or you can set the
property explicitly in the Spark Configuration object used while creating
the SparkContext/JavaSparkContext.
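A minimal sketch of the second option (the app name is a placeholder, and
the property name is the corrected spelling discussed below):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class TokenConfExample {
    public static void main(String[] args) {
        // Set the token property explicitly instead of relying on
        // spark-defaults.conf being picked up.
        SparkConf conf = new SparkConf()
                .setAppName("hbase-token-example") // placeholder
                .set("spark.yarn.security.tokens.hbase.enabled", "true");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... job code ...
        sc.stop();
    }
}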
Cheers,
Subroto Sanyal
On Wed, Aug 10, 2016 at 12:09 PM, Aneela Saleem
wrote:
> Thanks Subroto,
>
Thanks Subroto,
Do I need to set it to 'true' in the spark-defaults.conf file?
On Wed, Aug 10, 2016 at 2:59 PM, Subroto Sanyal
wrote:
> hi Aneela
>
> By any chance, are you missing the property:
> spark.yarn.security.tokens.hbase.enabled
> This was introduced as part of the fix:
hi Aneela
By any chance, are you missing the property:
spark.yarn.security.tokens.hbase.enabled
This was introduced as part of the fix:
https://github.com/apache/spark/pull/8134/files
Cheers,
Subroto Sanyal
On Wed, Aug 10, 2016 at 11:53 AM, Aneela Saleem
wrote:
> And
And I'm using the Apache distribution of Spark, not Cloudera's.
On Wed, Aug 10, 2016 at 12:06 PM, Aneela Saleem
wrote:
> Thanks Nkechi,
>
> I added this dependency as an external jar; when I compile the code,
> unfortunately I get the following error:
>
> error: object cloudera
Hi Spats,
This date-tiered support is in 0.98.18, I believe, and not in 0.98.6. Can
you try the latest release of the 0.98 branch? If it does not work, please
report back.
Regards
Ram
On Wed, Aug 10, 2016 at 1:56 PM, spats wrote:
> Cannot create table with DateTiered compaction in
Cannot create a table with DateTiered compaction in HBase 0.98.6. Also
tried altering an existing table to DateTiered storage; this hangs
indefinitely.
Create table command:
create 'XXX', { NAME => 'f1', COMPRESSION => 'SNAPPY', DATA_BLOCK_ENCODING
=> 'FAST_DIFF', REPLICATION_SCOPE => '1',
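For comparison, on a release that actually ships HBASE-15181, date-tiered
compaction is typically enabled by pointing the store engine at
DateTieredStoreEngine. A sketch in Java against the 1.x client API,
reusing the table and family names from the shell command above:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CreateDateTieredTable {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                 ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            HTableDescriptor desc =
                new HTableDescriptor(TableName.valueOf("XXX"));
            // Switch this table's store engine to date-tiered compaction;
            // only effective on releases that include HBASE-15181.
            desc.setConfiguration("hbase.hstore.engine.class",
                "org.apache.hadoop.hbase.regionserver.DateTieredStoreEngine");
            desc.addFamily(new HColumnDescriptor("f1"));
            admin.createTable(desc);
        }
    }
}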
Hi,
Thanks for your answer.
I'm currently testing OpenTSDB + HBase, so I'm generating thousands of HTTP
POSTs to OpenTSDB in order to write data points (currently up to 300k/s).
OpenTSDB is only doing increment / append (AFAIK).
If I have understood your answer correctly, some write ops are
Thanks Nkechi,
I added this dependency as an external jar; when I compile the code,
unfortunately I get the following error:
error: object cloudera is not a member of package com
[ERROR] import com.cloudera.spark.hbase.HBaseContext
On Tue, Aug 9, 2016 at 7:51 PM, Nkechi Achara
Ya, it comes with a write workload, not with concurrent reads.
Once the write is done (memstore write and WAL write), we mark the MVCC
operation corresponding to it as complete and wait for the global read
point to advance to at least this point. (Every write op will have
a number corresponding
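As a rough illustration of the write-number/read-point dance described
above (this is a simplified sketch in Java, not HBase's actual
MultiVersionConcurrencyControl):

import java.util.TreeSet;

public class SimpleMvcc {
    private long nextWriteNumber = 0;
    private final TreeSet<Long> inFlight = new TreeSet<>();
    private long readPoint = 0;

    /** Every write op gets a number before its memstore/WAL writes. */
    public synchronized long begin() {
        long wn = ++nextWriteNumber;
        inFlight.add(wn);
        return wn;
    }

    /** Mark the op complete; the read point advances past the oldest
     *  still-running write, and the writer waits until the read point
     *  covers its own number, so readers never see a partial write. */
    public synchronized void complete(long writeNumber)
            throws InterruptedException {
        inFlight.remove(writeNumber);
        readPoint = inFlight.isEmpty()
                ? nextWriteNumber
                : inFlight.first() - 1;
        notifyAll();
        while (readPoint < writeNumber) {
            wait(); // earlier writes are still in flight
        }
    }

    /** Readers only see edits whose write number is <= the read point. */
    public synchronized long getReadPoint() {
        return readPoint;
    }
}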