Re: Hadoop impersonation not handling permissions

2018-07-30 Thread Harinder Singh
Maybe my understanding is not correct. If "hdfs" has access to a directory and "b" is trying to access that directory, shouldn't the access be allowed, since "hdfs" is impersonating "b"? Thanks, Harinder. On Mon, Jul 30, 2018 at 2:02 PM, Wei-Chiu Chuang wrote: > Pretty sure this is the

Re: Hadoop impersonation not handling permissions

2018-07-30 Thread Wei-Chiu Chuang
Pretty sure this is the expected behavior. From the stack trace, your impersonation is configured correctly (i.e., it successfully performs operations on behalf of user b); the problem is that your file doesn't allow b to access it. On Mon, Jul 30, 2018 at 1:25 PM Harinder Singh <
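The proxy-user setup the reply refers to lives in core-site.xml on the NameNode. A minimal sketch, assuming the superuser is named "a" (substitute your actual superuser; the wildcard values are illustrative and should be restricted in production):

```xml
<!-- core-site.xml: allow superuser "a" to impersonate users
     connecting from any host and belonging to any group. -->
<property>
  <name>hadoop.proxyuser.a.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.a.groups</name>
  <value>*</value>
</property>
```

Note that even with this in place, HDFS authorizes each request as the impersonated user, so "b" still needs read/execute permission on the directory itself — which is exactly the behavior described above.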

Hadoop impersonation not handling permissions

2018-07-30 Thread Harinder Singh
Hi, I am using Hadoop proxy user/impersonation to access a directory that the superuser has access to, but I get permission errors when the proxy user tries to access it. Say user "a" is a superuser and "b" is trying to access a directory on its behalf, but "b" does not have permission
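One way to confirm (and fix) this from the command line is to inspect the directory's permissions and ACLs as HDFS sees them, then grant the impersonated user access. A sketch, using a hypothetical path /data/shared:

```
# Inspect the owner, mode, and any ACL entries on the directory
hdfs dfs -ls -d /data/shared
hdfs dfs -getfacl /data/shared

# Grant user "b" read/execute via a POSIX-style ACL
# (requires dfs.namenode.acls.enabled=true on the NameNode)
hdfs dfs -setfacl -m user:b:r-x /data/shared
```

Alternatively, plain chmod/chown on the directory works if group membership can be arranged instead of per-user ACLs.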

Re: HDP3.0: NameNode is not formatted

2018-07-30 Thread Lian Jiang
This document mentions NameNode formatting and bootstrapping: https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/data-storage/content/format_namenodes.html Can an Ambari blueprint take care of it? How can the HDP installation automation be streamlined? On Mon, Jul 30, 2018 at 9:32 AM, Lian Jiang
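For reference, the manual steps that the linked document describes boil down to the following, sketched for an HA pair. Run these only on a brand-new cluster, since formatting erases existing metadata:

```
# On the first NameNode host: create a new, empty filesystem image
hdfs namenode -format

# Start that NameNode, then on the second (standby) NameNode host,
# copy over the freshly formatted metadata:
hdfs namenode -bootstrapStandby
```

Whether a given Ambari blueprint deployment performs these steps automatically is the open question in this thread.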

HDP3.0: NameNode is not formatted

2018-07-30 Thread Lian Jiang
Hi, I am using an Ambari 2.7 blueprint to install HDP 3.0. After installing, the active NameNode cannot start due to this error: 2018-07-30 04:41:03,839 WARN namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(716)) - Encountered exception loading fsimage java.io.IOException: NameNode is not

Using compression support available in native libraries

2018-07-30 Thread Krishna Kishore Bonagiri
Hi, I would like to use the compression support available in the native libraries. I have searched online but could not find out how to use the LZ4 compression algorithm to compress the data I am writing to HDFS files from my C++ code. Is it possible to get the data written in
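One caveat worth knowing here: Hadoop's Lz4Codec has historically used its own block framing rather than the standard LZ4 frame format, so raw LZ4 output produced by external C++ code is generally not directly readable by that codec. A more interoperable route is a codec whose on-disk format is a plain standard stream, such as gzip: Hadoop's GzipCodec reads ordinary .gz files with no extra framing. A minimal sketch in Python standing in for the external writer (the filename is hypothetical; the same applies to a gzip stream produced from C++ via zlib):

```python
import gzip

# Write a standard gzip stream; a file like this, copied into HDFS as
# part-0000.gz, is readable by Hadoop's GzipCodec as-is.
data = b"records produced outside Hadoop\n" * 3

with gzip.open("part-0000.gz", "wb") as f:
    f.write(data)

# Round-trip check: the stream decompresses back to the original bytes.
with gzip.open("part-0000.gz", "rb") as f:
    assert f.read() == data
```

If LZ4 specifically is required end to end, the safest approach is to match the exact framing of the Hadoop version in use, since it has changed across releases.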