Re-adding the user list, which I accidentally left off.
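For reference, the fs.permissions.umask-mode discussion below comes down to standard umask arithmetic: a newly created file's effective mode is the requested mode with the umask bits cleared, so a umask of 000 leaves the generated HFiles readable (and writable) by everyone. A minimal sketch in plain Java (no Hadoop dependencies; the class and method names are illustrative, not part of any Hadoop API):

```java
public class UmaskDemo {
    // Effective permissions = requested mode & ~umask (octal bit arithmetic),
    // the same rule HDFS applies when creating files.
    static int effective(int requested, int umask) {
        return requested & ~umask;
    }

    public static void main(String[] args) {
        // Typical cluster default: umask 022 strips write for group/other.
        System.out.printf("umask 022 -> %o%n", effective(0666, 0022)); // 644
        // umask 000 strips nothing: files end up world-readable/writable.
        System.out.printf("umask 000 -> %o%n", effective(0666, 0000)); // 666
    }
}
```

On the command line, tools built on Hadoop's GenericOptionsParser can generally take the same key as -Dfs.permissions.umask-mode=000 instead of setting it in code, which matches the alternative mentioned in the thread below.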

On Wed, Nov 18, 2015 at 3:55 PM, Gabriel Reid <gabriel.r...@gmail.com> wrote:
> Yes, I believe that's correct, if you change the umask you make the
> HFiles readable to all during creation.
>
> I believe that the alternate solutions listed on the jira ticket
> (running the tool as the hbase user or using the alternate HBase
> coprocessor for loading HFiles) won't have this drawback.
>
> - Gabriel
>
>
> On Wed, Nov 18, 2015 at 3:49 PM, Sanooj Padmakumar <p.san...@gmail.com> wrote:
>> Thank you, Gabriel.
>>
>> Does it mean that the generated HFile can be read/modified by any user
>> other than the "hbase" user?
>>
>> Regards
>> Sanooj
>>
>> On 18 Nov 2015 02:39, "Gabriel Reid" <gabriel.r...@gmail.com> wrote:
>>>
>>> Hi Sanooj,
>>>
>>> Yes, I think that should do it, or you can pass that config parameter
>>> as a command line parameter.
>>>
>>> - Gabriel
>>>
>>> On Tue, Nov 17, 2015 at 8:16 PM, Sanooj Padmakumar <p.san...@gmail.com>
>>> wrote:
>>> > Hi Gabriel
>>> >
>>> > Thank you so much
>>> >
>>> > I set the property below and it works now. I hope this is the correct
>>> > thing to do?
>>> >
>>> >  conf.set("fs.permissions.umask-mode", "000");
>>> >
>>> >
>>> > Thanks Again
>>> >
>>> > Sanooj
>>> >
>>> > On Wed, Nov 18, 2015 at 12:29 AM, Gabriel Reid <gabriel.r...@gmail.com>
>>> > wrote:
>>> >>
>>> >> Hi Sanooj,
>>> >>
>>> >> I believe that this is related to the issue described in PHOENIX-976
>>> >> [1]. In that case, it's not strictly related to Kerberos, but instead
>>> >> to file permissions (could it be that your dev environment also
>>> >> doesn't have file permissions turned on?)
>>> >>
>>> >> If you look at the comments on that jira ticket, there are a couple of
>>> >> things that you could try doing to resolve this (running the import
>>> >> job as the hbase user, or using custom file permissions, or using an
>>> >> alternate incremental load coprocessor).
>>> >>
>>> >> - Gabriel
>>> >>
>>> >>
>>> >> 1. https://issues.apache.org/jira/browse/PHOENIX-976
>>> >>
>>> >> On Tue, Nov 17, 2015 at 7:14 PM, Sanooj Padmakumar <p.san...@gmail.com>
>>> >> wrote:
>>> >> > Hello -
>>> >> >
>>> >> > I am using Phoenix's bulk load on a cluster secured with Kerberos.
>>> >> > The mapper runs fine, the reducer runs fine, and the counters are
>>> >> > printed fine, but then the final LoadIncrementalHFiles step fails.
>>> >> > A portion of the log is given below.
>>> >> >
>>> >> >
>>> >> > 15/11/17 09:44:48 INFO mapreduce.LoadIncrementalHFiles: Trying to
>>> >> > load
>>> >> > hfile=hdfs://..........<<masked>>>
>>> >> > 15/11/17 09:45:56 INFO client.RpcRetryingCaller: Call exception,
>>> >> > tries=10,
>>> >> > retries=35, started=68220 ms ago, cancelled=false, msg=row '' on
>>> >> > table
>>> >> > 'TABLE1' at region=TABLE1,<<<masked>>>>, seqNum=26
>>> >> > 15/11/17 09:46:16 INFO client.RpcRetryingCaller: Call exception,
>>> >> > tries=11,
>>> >> > retries=35, started=88315 ms ago, cancelled=false, msg=row '' on
>>> >> > table
>>> >> > 'TABLE1' at region=TABLE1,<<<masked>>>>, seqNum=26
>>> >> >
>>> >> > Is there any setting I should make in order to make the program
>>> >> > work in a Kerberos-secured environment?
>>> >> >
>>> >> > Please note, our DEV environment doesn't use Kerberos and things
>>> >> > are working just fine.
>>> >> >
>>> >> > --
>>> >> > Thanks in advance,
>>> >> > Sanooj Padmakumar
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > Thanks,
>>> > Sanooj Padmakumar
