What Anil says. Sounds like your job is launched with default configs --
which are for local mode. You need to point it at your distributed cluster
install.

For MapReduce jobs, HADOOP_CLASSPATH needs to be set appropriately.
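
A minimal driver sketch of the idea (the class and job names here are made up
for illustration; the point is that HBaseConfiguration.create() only picks up
the cluster settings if /etc/hadoop/conf and /etc/hbase/conf are on the
classpath when you submit the job):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.GenericOptionsParser;

public class PhoenixLoadDriver {                        // hypothetical name
  public static void main(String[] args) throws Exception {
    // Loads core-site.xml, mapred-site.xml, yarn-site.xml and hbase-site.xml
    // from the classpath; if they are missing, the built-in defaults apply
    // and the job silently falls back to mapred.LocalJobRunner.
    Configuration conf = HBaseConfiguration.create();
    new GenericOptionsParser(conf, args); // applies -conf / -D overrides

    // Sanity check: on a Hadoop 2.x cluster this should print "yarn",
    // never "local".
    System.out.println("mapreduce.framework.name = "
        + conf.get("mapreduce.framework.name", "local"));

    Job job = Job.getInstance(conf, "phoenix-hbase-load");
    job.setJarByClass(PhoenixLoadDriver.class);
    // ... set mapper, input/output formats, Phoenix/HBase output table ...
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

When submitting with hadoop jar, the HBase jars typically also need to be on
HADOOP_CLASSPATH, e.g. export HADOOP_CLASSPATH=$(hbase classpath), so the
driver can find the HBase and Phoenix classes.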

On Thursday, May 26, 2016, anil gupta <[email protected]> wrote:

> Hi,
>
> It seems like your classpath is not set up correctly. /etc/hadoop/conf and
> /etc/hbase/conf need to be on the MapReduce classpath. Are you able to run
> the HBase RowCounter job on the distributed cluster? What version of Hadoop
> are you using? Did you use Ambari or Cloudera Manager to install the cluster?
>
> Thanks,
> Anil Gupta
>
> On Thu, May 26, 2016 at 7:19 AM, Lucie Michaud <
> [email protected]> wrote:
>
>> Hello everybody,
>>
>>
>>
>> For a few days I have been developing MapReduce code to insert values into
>> HBase with Phoenix. But the code only runs in local mode and overloads the
>> machine.
>>
>> Whatever changes I make, I observe that the mapred.LocalJobRunner class is
>> always used.
>>
>>
>>
>> Do you have an idea of the problem?
>>
>>
>>
>> I have attached the execution logs of my program to this post.
>>
>>
>>
>> Thank you in advance for your help. :)
>>
>> Feel free to ask me for more details if that would help.
>>
>
>
>
> --
> Thanks & Regards,
> Anil Gupta
>
