Hi,
I've enabled HBase authorization by adding the properties below to
hbase-site.xml, and the log4j security audit appender is configured as below.
*hbase-site.xml*
hbase.security.authorization
true
hbase.coprocessor.master.classes
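For context, these properties are normally written out as full property blocks in hbase-site.xml. The AccessController value below is an assumption based on the standard security setup in the HBase reference guide (the original value is cut off above), not taken from this mail:

```xml
<!-- Sketch of the usual authorization setup; the coprocessor value is an
     assumption (the standard AccessController), since the original is truncated. -->
<property>
  <name>hbase.security.authorization</name>
  <value>true</value>
</property>
<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
```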
From the logs, it seems there was an issue with the file that was used
by the bucket cache. Probably the volume where the file was mounted had
some issues.
If you can confirm that, then this issue should be pretty straightforward.
If not, let us know and we can help.
Regards
Ram
On Sun, Feb 25,
You can refer to HFilePerformanceEvaluation, where creation of the Writer is
demonstrated:

writer = HFile.getWriterFactoryNoCache(conf)
    .withPath(fs, mf)
    .withFileContext(hFileContext)
    .withComparator(CellComparator.getInstance())
    .create();
Cheers
Ted/Anoop
https://issues.apache.org/jira/browse/HBASE-20080
On Sat, Feb 24, 2018 at 12:12 PM, Ted Yu wrote:
> bq. a warning message in the shell should be displayed if simple auth and
> cell visibility are in use together.
>
> Makes sense.
>
> Please log a JIRA.
>
> On
I'm looking into creating HFiles directly from NiFi using the HBase API. It
seems pretty straightforward:
1. Open an HFile.Writer pointing to a file path in HDFS.
2. Write the cells with the HFile API.
3. Call the incremental loader API to have it tell HBase to load the
generated segments.
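The steps above can be sketched roughly as follows. This assumes an HBase 2.x client on the classpath and a reachable cluster; the paths, table name, block size, and cell contents are placeholders, and step 3 is shown only as a comment since the exact bulk-load entry point varies by version:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.io.hfile.HFile;
import org.apache.hadoop.hbase.io.hfile.HFileContext;
import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
import org.apache.hadoop.hbase.util.Bytes;

Configuration conf = HBaseConfiguration.create();
FileSystem fs = FileSystem.get(conf);
// Hypothetical output path; HFiles for bulk load sit under a per-family subdir.
Path hfilePath = new Path("/tmp/bulkload/f/hfile-0");

// 1. Open an HFile.Writer pointing to a file path in HDFS.
HFileContext context = new HFileContextBuilder().withBlockSize(64 * 1024).build();
HFile.Writer writer = HFile.getWriterFactoryNoCache(conf)
    .withPath(fs, hfilePath)
    .withFileContext(context)
    .create();

// 2. Write the cells with the HFile API; cells must be appended in key order.
writer.append(new KeyValue(Bytes.toBytes("row1"), Bytes.toBytes("f"),
    Bytes.toBytes("q"), System.currentTimeMillis(), Bytes.toBytes("value1")));
writer.close();

// 3. Ask HBase to adopt the generated files, e.g. (HBase 2.2+):
//    BulkLoadHFiles.create(conf).bulkLoad(TableName.valueOf("t"),
//        new Path("/tmp/bulkload"));
//    (older releases use the LoadIncrementalHFiles tool instead)
```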
Is
Here is the related code for disabling the bucket cache:

if (this.ioErrorStartTime > 0) {
  if (cacheEnabled && (now - ioErrorStartTime) > this.ioErrorsTolerationDuration) {
    LOG.error("IO errors duration time has exceeded " + ioErrorsTolerationDuration +
        "ms, disabling
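Stripped of HBase internals, the check being quoted is just a sliding error window: the first IO error starts a timer, and the cache is disabled only if errors are still occurring once the toleration duration has elapsed. Here is a self-contained restatement with hypothetical names (this is a simplified sketch, not the actual BucketCache source):

```java
// Simplified restatement of the IO-error toleration check above.
// Names and structure are hypothetical; not the actual BucketCache code.
class IoErrorWindow {
    private long ioErrorStartTime = -1;  // -1 means no outstanding errors
    private final long tolerationMillis; // toleration duration in ms

    IoErrorWindow(long tolerationMillis) {
        this.tolerationMillis = tolerationMillis;
    }

    /** Record an IO error at time `now`; return true if the cache should be disabled. */
    boolean onIoError(long now) {
        if (ioErrorStartTime < 0) {
            ioErrorStartTime = now;  // first error starts the window
            return false;
        }
        return (now - ioErrorStartTime) > tolerationMillis;
    }

    /** A successful IO resets the window. */
    void onIoSuccess() {
        ioErrorStartTime = -1;
    }
}
```

A success inside the window resets the timer, so only a sustained run of errors longer than the toleration duration disables the cache.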
Hi,
I am running an HBase 1.3.1 cluster on AWS EMR. The bucket cache is
configured to use two attached EBS disks of 50 GB each, and I provisioned
the bucket cache at slightly less than the total, 98 GB per instance, to be
on the safe side. My tables have column families set to