RE: CombineFileInputFormat with Gzip files

2015-09-25 Thread R P
It's creating temp files on HDFS. See code below. Thanks for your response though; I wrote my own record reader, which passes file splits to LineRecordReader and works for my problem. public CompressedCombineFileRecordReader(CombineFileSplit split, TaskAttemptContext con
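A minimal sketch of the delegation pattern described in that message, assuming the Hadoop 2.x mapreduce API; the class name matches the snippet above, but the body is illustrative rather than the poster's actual code:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.RecordReader;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;
    import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit;
    import org.apache.hadoop.mapreduce.lib.input.FileSplit;
    import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

    public class CompressedCombineFileRecordReader
        extends RecordReader<LongWritable, Text> {

      private final LineRecordReader delegate = new LineRecordReader();
      private final FileSplit fileSplit;

      // CombineFileRecordReader instantiates one reader per file in the
      // combine split; 'index' selects the file this instance handles.
      public CompressedCombineFileRecordReader(CombineFileSplit split,
          TaskAttemptContext context, Integer index) throws IOException {
        fileSplit = new FileSplit(split.getPath(index), split.getOffset(index),
            split.getLength(index), split.getLocations());
      }

      @Override
      public void initialize(InputSplit ignored, TaskAttemptContext context)
          throws IOException {
        // LineRecordReader resolves the compression codec from the file
        // extension, so .gz members are decompressed transparently.
        delegate.initialize(fileSplit, context);
      }

      @Override public boolean nextKeyValue() throws IOException { return delegate.nextKeyValue(); }
      @Override public LongWritable getCurrentKey() { return delegate.getCurrentKey(); }
      @Override public Text getCurrentValue() { return delegate.getCurrentValue(); }
      @Override public float getProgress() throws IOException { return delegate.getProgress(); }
      @Override public void close() throws IOException { delegate.close(); }
    }

A CombineFileInputFormat subclass would typically return it wrapped as new CombineFileRecordReader<>(split, context, CompressedCombineFileRecordReader.class); since gzip is not splittable, each .gz file occupies exactly one index of the combine split.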

Re: Problem running example (wrong IP address)

2015-09-25 Thread Daniel Watrous
hadoop-master http://pastebin.com/yVF8vCYS hadoop-data1 http://pastebin.com/xMEdf01e hadoop-data2 http://pastebin.com/prqd02eZ On Fri, Sep 25, 2015 at 11:53 AM, Brahma Reddy Battula <brahmareddy.batt...@hotmail.com> wrote: > Sorry, I am not able to access the logs. Could you please post in paste bi

Re: Problem running example (wrong IP address)

2015-09-25 Thread Namikaze Minato
http://sprunge.us/EAJJ http://sprunge.us/hDSX http://sprunge.us/CKag On 25 September 2015 at 18:53, Brahma Reddy Battula wrote: > Sorry, I am not able to access the logs. Could you please post in paste bin or > attach the 192.168.51.6 (as your query is why the IP is different) DN logs and > namenode logs her

RE: Problem running example (wrong IP address)

2015-09-25 Thread Brahma Reddy Battula
Sorry, I am not able to access the logs. Could you please post in paste bin or attach the 192.168.51.6 (as your query is why the IP is different) DN logs and namenode logs here? Thanks and regards, Brahma Reddy Battula Date: Fri, 25 Sep 2015 11:16:55 -0500 Subject: Re: Problem running example (wrong IP a

Re: Connecting to Hiveserver2 failing with Java Heap size

2015-09-25 Thread Ted Yu
This question seems better suited for the user@hive mailing list. Cheers On Fri, Sep 25, 2015 at 9:14 AM, Gangavarapu, Venkata <venkata.gangavar...@bcbsa.com> wrote: > Hi, > > > > I am trying to connect to Hive via Hiveserver2 using beeline. > > > > !connect jdbc:hive2://server1:10001/default;princ

Re: Problem running example (wrong IP address)

2015-09-25 Thread Daniel Watrous
Brahma, Thanks for the reply. I'll keep this conversation here in the user list. The /etc/hosts file is identical on all three nodes hadoop@hadoop-data1:~$ cat /etc/hosts 127.0.0.1 localhost 192.168.51.4 hadoop-master 192.168.52.4 hadoop-data1 192.168.52.6 hadoop-data2 hadoop@hadoop-data2:~$ cat

Connecting to Hiveserver2 failing with Java Heap size

2015-09-25 Thread Gangavarapu, Venkata
Hi, I am trying to connect to Hive via Hiveserver2 using beeline. !connect jdbc:hive2://server1:10001/default;principal=hive/serv...@hadoop.com;hive.server2.proxy.user=hadooptest;hive.server2.transport.mode=http;httpPath=cliservice But it is failing with the error below. SLF4J: Class path contains
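A hedged suggestion rather than a confirmed fix: beeline runs inside the Hadoop client JVM, so a Java heap failure while connecting can often be worked around by raising the client heap before launching beeline (the 2g value is illustrative):

    export HADOOP_CLIENT_OPTS="-Xmx2g"
    beeline -u "jdbc:hive2://server1:10001/default;principal=hive/serv...@hadoop.com;hive.server2.proxy.user=hadooptest;hive.server2.transport.mode=http;httpPath=cliservice"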

Re: node remains unused after reboot

2015-09-25 Thread Dmitry Sivachenko
> On 23 Sept 2015, at 22:08, Naganarasimha Garla > wrote: > > Sorry for the late reply; I thought of providing you some search strings for > blacklisting, hence got a little delayed. > As Varun mentioned, it looks more like an app blacklisting case. > mapreduce.job.maxtaskfailures.per.tracker which
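For reference, the property named above caps how many task failures on one node a job tolerates before blacklisting that node for the job; a hedged mapred-site.xml sketch, with the stock default value:

    <property>
      <name>mapreduce.job.maxtaskfailures.per.tracker</name>
      <value>3</value>
    </property>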

RE: Problem running example (wrong IP address)

2015-09-25 Thread Brahma Reddy Battula
Seems the DN started on three machines and failed on hadoop-data1 (192.168.52.4). 192.168.51.6: giving IP as 192.168.51.1... Can you please check the /etc/hosts file of 192.168.51.6 (192.168.51.1 might be configured in /etc/hosts)? 192.168.52.4 :

Re: Problem running example (wrong IP address)

2015-09-25 Thread Daniel Watrous
I'm still stuck on this and posted it to stackoverflow: http://stackoverflow.com/questions/32785256/hadoop-datanode-binds-wrong-ip-address Thanks, Daniel On Fri, Sep 25, 2015 at 8:28 AM, Daniel Watrous wrote: > I could really use some help here. As you can see from the output below, > the two a

Re: unknown host exception

2015-09-25 Thread Anubhav Agarwal
Can you ping that host? Maybe your computer is unable to resolve that hostname. Add the IP to your hosts file. On Fri, Sep 25, 2015 at 9:02 AM, siva kumar wrote: > Hi Folks, > I'm trying to write some data into HBase using Pentaho. > I'm facing an issue connecting to HBase u
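A hedged illustration of the suggested fix; both the address (from the documentation range) and the hostname are placeholders for the real HBase node:

    # /etc/hosts entry mapping the unresolved hostname to its IP
    # so the HBase client can reach it without DNS
    192.0.2.10    hbase-host.example.com hbase-host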

Re: Heterogeneous Hadoop Cluster

2015-09-25 Thread Corey Nolet
I'm basically referring to federating multiple namenodes (connecting two different hdfs instances under a single namespace so data can be distributed across them). Here's the documentation for Hadoop 2.6.0 [1] [1] https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/Federation.htm
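A hedged sketch of the wiring that document describes: two NameNodes registered as separate nameservices in one hdfs-site.xml (service IDs and hosts are illustrative):

    <property>
      <name>dfs.nameservices</name>
      <value>ns1,ns2</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.ns1</name>
      <value>nn1.example.com:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.ns2</name>
      <value>nn2.example.com:8020</value>
    </property>

DataNodes then register with every NameNode listed in dfs.nameservices, which is how blocks end up distributed across both namespaces.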

Re: Problem running example (wrong IP address)

2015-09-25 Thread Daniel Watrous
I could really use some help here. As you can see from the output below, the two attached datanodes are identified with a non-existent IP address. Can someone tell me how that gets selected, or how to explicitly set it? Also, why are both datanodes shown under the same name/IP? hadoop@hadoop-master
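On the "how to explicitly set it" question, a hedged hdfs-site.xml sketch of the knobs that usually govern which address a DataNode reports to the NameNode (interface and hostname values are illustrative for this cluster):

    <property>
      <name>dfs.datanode.dns.interface</name>
      <value>eth1</value>
    </property>
    <property>
      <name>dfs.datanode.hostname</name>
      <value>hadoop-data1</value>
    </property>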

unknown host exception

2015-09-25 Thread siva kumar
Hi folks, I'm trying to write some data into HBase using Pentaho. I'm facing an issue connecting to HBase using the HBase output step. com.google.protobuf.ServiceException: java.net.UnknownHostException: unknown host : x Any suggestions? Thanks in advance. siva

Re: println in MapReduce job

2015-09-25 Thread xeonmailinglist
I have found the error. It was a bug in my code. On 09/24/2015 07:47 PM, xeonmailinglist wrote: No, I am looking in the right place. Here is the output of a map task. I also thought that the print would come up, but it isn't showing. That is why I am asking. [1] Output from one map task.
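For anyone hitting the same question later: System.out.println in a map task lands in that task's stdout container log, not on the client console. With YARN log aggregation enabled it can be retrieved after the job, e.g. (the application ID is a placeholder):

    yarn logs -applicationId <application_id>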

Re: Why would ApplicationManager request more RAM than the default 1GB?

2015-09-25 Thread Ilya Karpov
Hi Manoj & Naga, I'm surprised, but there is no such property in the CDH conf files (grepped all *.xml on the OSes where YARN lives!). I think that this property is set by Cloudera: http://image.slidesharecdn.com/yarnsaboutyarn-kathleenting112114-141125155911-conversion-gate01/95/yarns-about-yarn-28-638.j

Re: Why would ApplicationManager request more RAM than the default 1GB?

2015-09-25 Thread Naganarasimha Garla
Hi Manoj & Ilya, From the logs: > 2015-09-21 22:50:34,018 WARN org.apache.hadoop.yarn.server. > nodemanager.containermanager.monitor.ContainersMonitorImpl: Container > [pid=13982,containerID=container_1442402147223_0165_01_01] is > running beyond physical memory limits. This indicates
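The container ID in that log appears to end in container number 01, which would be the MapReduce ApplicationMaster; its size is requested via mapred-site.xml rather than the 1 GB YARN minimum, and its stock default in Hadoop 2.x is already 1536 MB. A hedged sketch with illustrative values:

    <property>
      <name>yarn.app.mapreduce.am.resource.mb</name>
      <value>2048</value>
    </property>
    <property>
      <name>yarn.app.mapreduce.am.command-opts</name>
      <value>-Xmx1638m</value>
    </property>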