An HSM offloads the storage and use of critically sensitive key material. If
instead that key material is stored in a local keystore, or on an NFS-mounted
volume, then a node-level compromise gains access to both the key and the data
for exfiltration.
HSMs are separate hardware systems hardened against
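To make the local-keystore option above concrete: HBase's transparent encryption reads its master key from a key provider configured in hbase-site.xml. A sketch of the keystore-backed setup; the jceks path, password, and key alias below are placeholders, not values from this thread:

```xml
<!-- hbase-site.xml: sketch of key provider settings for transparent
     encryption; keystore path, password, and key name are placeholders -->
<property>
  <name>hbase.crypto.keyprovider</name>
  <value>org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider</value>
</property>
<property>
  <name>hbase.crypto.keyprovider.parameters</name>
  <value>jceks:///etc/hbase/conf/hbase.jks?password=changeit</value>
</property>
<property>
  <name>hbase.crypto.master.key.name</name>
  <value>hbase</value>
</property>
```

With this layout the master key lives in a JCEKS file on the node, which is exactly the exposure an HSM-backed provider avoids.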
Thank you both Andrew and Dima,
It is very good to know the performance penalty is not too much; we will
investigate HDFS and HSM, and of course we will test the perf impact ourselves.
I think I have a misunderstanding of the purpose of encryption, if NFS doesn't
provide more protection. The
We implemented this by upserting changed elements and dropping others. On a
given cluster it takes 4.5 hours to load HBase; the trim and cleanup as
currently implemented takes 4 days. Back to the drawing board.
I’ve read the references but still don’t grok what to do. I have a table with
an
Thanks Ted, I have seen that and I have had it set to true for years
without issue. I was asking in this case because the docs for stripe
compaction explicitly say to disable the table. I will test in our QA
environment first, but would also appreciate input from anyone who has done
this
Have you seen the doc at the top
of ./hbase-shell/src/main/ruby/shell/commands/alter.rb ?
Alter a table. If the "hbase.online.schema.update.enable" property is set to
false, then the table must be disabled (see help 'disable'). If the
"hbase.online.schema.update.enable" property is set to true,
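For illustration, the stripe compaction section of the HBase Reference Guide switches a table to the stripe store engine via alter with a CONFIGURATION map; the table name here is a placeholder, and whether a disable/enable cycle is required depends on hbase.online.schema.update.enable, so trying it in QA first (as suggested in this thread) is prudent:

```ruby
# hbase shell sketch, after the ref guide's stripe compaction recipe.
# 'orders_table' is a placeholder table name.
alter 'orders_table', CONFIGURATION => {
  'hbase.hstore.engine.class' =>
      'org.apache.hadoop.hbase.regionserver.StripeStoreEngine',
  'hbase.hstore.blockingStoreFiles' => '100'
}
```

The raised blockingStoreFiles value is part of the same recipe, since stripe compaction intentionally keeps more store files per region.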
Hello,
We're running HBase 1.2.0-cdh5.7.0. According to the HBase book, in order
to enable stripe compactions on a table we need to first disable the table. We
basically can't disable tables in production. Is it possible to do this
without disabling the table? If not, are there any plans to make
> if we move the encryption to the HDFS level, we can no longer enable
> encryption per table, I think? I assume encryption will impact performance
> to some extent, so we may want to enable it per table
That's correct; at the HDFS level, encryption will cover the entire HBase
data. I can
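Per-table (really per-column-family) encryption remains available at the HBase layer as a table attribute. A minimal sketch, assuming a master key has already been provisioned in the configured key provider; the table and family names are placeholders:

```ruby
# hbase shell sketch: enable AES encryption for one column family only.
# 'mytable' and 'cf1' are placeholder names.
alter 'mytable', {NAME => 'cf1', ENCRYPTION => 'AES'}
```

This is the knob that lets you pay the encryption cost only on the tables that need it, which HDFS-level encryption zones covering the whole HBase root directory do not give you.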
FWIW, some engineers at Cloudera who worked on adding encryption at rest to
HDFS wrote a blog post on this where they describe negligible performance
impact on writes and only slight performance degradation on large reads (
Hi Andrew, it's me again.
I still have a question: if we move the encryption to the HDFS level, we can
no longer enable encryption per table, I think?
I assume encryption will impact performance to some extent, so we may want
to enable it per table. Are there any performance tests that show how
Would do that, thanks!
On Sat, Jun 4, 2016 at 6:55 PM, Ted Yu wrote:
> I think this sounds like a bug.
>
> Search in HBase JIRA first. If there is no JIRA with the same symptom,
> consider filing one.
>
> Cheers
>
> On Fri, Jun 3, 2016 at 1:10 AM, Shuai Lin
Thanks Anoop for replying...
Yes, in our test environment numDataIndexLevels=2 as well.
In the production environment there were successful region splits after
compaction; only a few region splits failed with the same error.
Regards,
Pankaj
-----Original Message-----
From: Anoop John
In your test env do you also have numDataIndexLevels=2? Or is it only 1?
-Anoop-
On Mon, Jun 6, 2016 at 1:12 PM, Pankaj kr wrote:
> Thanks Ted for replying.
> Yeah, we have a plan to upgrade, but currently I want to know the reason
> behind this. I tried to reproduce this in
Thanks Ted for replying.
Yeah, we have a plan to upgrade, but currently I want to know the reason
behind this. I tried to reproduce this in our test environment but it didn't
happen. In the HFilePrettyPrinter output, "numDataIndexLevels=2", so there
was a multilevel data index. In which circumstances
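For anyone trying to reproduce this: the numDataIndexLevels value comes from the HFile metadata, which HFilePrettyPrinter can dump directly. A sketch of the invocation; the HFile path components are placeholders, not values from this thread:

```shell
# Print HFile metadata (-m), which includes numDataIndexLevels.
# The region hash, column family, and file name below are placeholders.
hbase org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter -m \
  -f /hbase/data/default/mytable/REGION_HASH/cf/HFILE_NAME
```

Comparing this output between a file that split cleanly and one that failed may show whether the multilevel index is actually the differentiator.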