Re: Connection refused Error

2015-04-17 Thread sreebalineni .
If nothing has changed, make sure all the daemons are actually running. Even
though they look like they started, they may not have stayed up for long.
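For example, something like this (just a sketch; the paths assume the
hadoop-2.6.0 install and port 9000 shown in your output):

    # list the running Hadoop JVMs; NameNode should appear here
    jps

    # if NameNode is missing, its log usually says why it stopped
    tail -n 50 ~/hadoop-2.6.0/logs/hadoop-*-namenode-*.log

    # restart HDFS and confirm something is listening on the namenode port
    ~/hadoop-2.6.0/sbin/start-dfs.sh
    netstat -tlnp | grep 9000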
On Apr 18, 2015 7:59 AM, "Anand Murali"  wrote:

> Yes
>
> Sent from my iPhone
>
> On 17-Apr-2015, at 9:01 pm, madhav krish  wrote:
>
> Did you start your name node using start-dfs.sh?
> On Apr 17, 2015 1:52 AM, "Anand Murali"  wrote:
>
>> Dear All:
>>
>> I installed Hadoop 2.6 on an Ubuntu 14.10 desktop yesterday and was able to
>> connect to HDFS and run a MapReduce job on a single-node YARN setup. Today I
>> am unable to connect, and get the following:
>>
>> anand_vihar@Latitude-E5540:~/hadoop-2.6.0/sbin$ start-yarn.sh
>> starting yarn daemons
>> starting resourcemanager, logging to
>> /home/anand_vihar/hadoop-2.6.0/logs/yarn-anand_vihar-resourcemanager-Latitude-E5540.out
>> localhost: starting nodemanager, logging to
>> /home/anand_vihar/hadoop-2.6.0/logs/yarn-anand_vihar-nodemanager-Latitude-E5540.out
>> anand_vihar@Latitude-E5540:~/hadoop-2.6.0/sbin$ refresh-namenodes.sh
>> Refreshing namenode [localhost:9000]
>> refreshNodes: Call From Latitude-E5540/127.0.1.1 to localhost:9000
>> failed on connection exception: java.net.ConnectException: Connection
>> refused; For more details see:
>> http://wiki.apache.org/hadoop/ConnectionRefused
>> Error: refresh of namenodes failed, see error messages above.
>>
>> I have made no changes to the setup since yesterday. Can somebody advise?
>>
>> Thanks,
>>
>> Anand Murali
>> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
>> Chennai - 600 004, India
>> Ph: (044)- 28474593/ 43526162 (voicemail)
>>
>


Re: Unable to run mapr-reduce pi example (hadoop 2.2.0)

2015-08-04 Thread sreebalineni .
Did you check whether the input file exists in HDFS by doing an ls? I think
the FileNotFoundException here is what needs attention.
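Something along these lines (a sketch; the /user/bduser path is taken from the
exception in your mail):

    # does the example's working directory still exist in HDFS?
    hdfs dfs -ls /user/bduser

    # overall HDFS capacity and per-datanode usage, to rule out a full filesystem
    hdfs dfsadmin -report

    # per-directory usage under the root
    hdfs dfs -du -h /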
On 4 Aug 2015 13:09, "Ravikant Dindokar"  wrote:

> Hi Ashwin,
>
> On namenode, I can see Resource Manager process running.
> [on namenode]
> $ jps
> 7383 ResourceManager
> 7785 SecondaryNameNode
> 7098 NameNode
> 3634 Jps
>
>
> On Tue, Aug 4, 2015 at 12:07 PM, James Bond  wrote:
>
>> Looks like it's not able to connect to the Resource Manager. Check whether
>> your Resource Manager is configured properly, in particular the Resource
>> Manager address.
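>> For example, something like this in yarn-site.xml on every node (just a
>> sketch; orion-00 comes from your log, and if your Hadoop version lacks the
>> hostname shortcut, set the individual yarn.resourcemanager.*.address
>> properties instead):
>>
>>   <property>
>>     <!-- assumed value: the host that runs the ResourceManager -->
>>     <name>yarn.resourcemanager.hostname</name>
>>     <value>orion-00</value>
>>   </property>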
>>
>> Thanks,
>> Ashwin
>>
>> On Tue, Aug 4, 2015 at 10:35 AM, Ravikant Dindokar <
>> ravikant.i...@gmail.com> wrote:
>>
>>> Hi
>>>
>>> I am using Hadoop 2.2.0. When I try to run the pi example from the jar,
>>> I get the following output:
>>>
>>>  $ hadoop jar
>>> hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi
>>> 16 10
>>>
>>> Number of Maps  = 16
>>> Samples per Map = 10
>>> Wrote input for Map #0
>>> Wrote input for Map #1
>>> Wrote input for Map #2
>>> Wrote input for Map #3
>>> Wrote input for Map #4
>>> Wrote input for Map #5
>>> Wrote input for Map #6
>>> Wrote input for Map #7
>>> Wrote input for Map #8
>>> Wrote input for Map #9
>>> Wrote input for Map #10
>>> Wrote input for Map #11
>>> Wrote input for Map #12
>>> Wrote input for Map #13
>>> Wrote input for Map #14
>>> Wrote input for Map #15
>>> Starting Job
>>> 15/08/04 10:26:49 INFO client.RMProxy: Connecting to ResourceManager at
>>> orion-00/192.168.0.10:8032
>>> 15/08/04 10:26:50 INFO impl.YarnClientImpl: Submitted application
>>> application_1438664093458_0001 to ResourceManager at orion-00/
>>> 192.168.0.10:8032
>>> 15/08/04 10:26:50 INFO mapreduce.Job: The url to track the job:
>>> http://orion-00:8088/proxy/application_1438664093458_0001/
>>> 15/08/04 10:26:50 INFO mapreduce.Job: Running job: job_1438664093458_0001
>>> 15/08/04 10:27:01 INFO mapreduce.Job: Job job_1438664093458_0001 running
>>> in uber mode : false
>>>
>>>
>>>
>>> 15/08/04 10:27:01 INFO mapreduce.Job:  map 0% reduce 0%
>>> 15/08/04 10:27:09 INFO mapreduce.Job:  map 6% reduce 0%
>>> 15/08/04 10:27:10 INFO mapreduce.Job:  map 25% reduce 0%
>>> 15/08/04 10:27:11 INFO mapreduce.Job:  map 100% reduce 0%
>>> 15/08/04 10:33:11 INFO mapreduce.Job: Job job_1438664093458_0001 failed
>>> with state FAILED due to:
>>> java.io.FileNotFoundException: File does not exist:
>>> hdfs://orion-00:9000/user/bduser/QuasiMonteCarlo_1438664204371_1877107485/out/reduce-out
>>> at
>>> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
>>> at
>>> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
>>> at
>>> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>>>
>>> When I checked the logs , I can see the following error for each slave
>>> node :
>>>
>>> ./container_1438664093458_0001_01_01/syslog:2015-08-04 10:27:09,755
>>> ERROR [RMCommunicator Allocator]
>>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: ERROR IN
>>> CONTACTING RM.
>>> ./container_1438664093458_0001_01_01/syslog:2015-08-04 10:27:10,770
>>> ERROR [RMCommunicator Allocator]
>>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: ERROR IN
>>> CONTACTING RM.
>>>
>>> I searched for this error, but couldn't find a working solution. Some
>>> answers say that this could be because HDFS has run out of disk space.
>>> If this is the case, how can I check the disk space usage?
>>>
>>> Please help.
>>>
>>> Thanks
>>> Ravikant
>>>
>>
>>
>


Re: Interface expected in the map definition?

2015-09-02 Thread sreebalineni .
That is most likely jar files missing from the classpath. Check this link:
http://stackoverflow.com/questions/15127082/running-map-reduce-job-on-cdh4-example
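Also worth checking (just a guess from the snippet below): the posted class
imports both org.apache.hadoop.mapred.* and org.apache.hadoop.mapreduce.Mapper,
so the unqualified name Mapper resolves to the new-API Mapper, which is a
class, not an interface, and "implements" then fails with exactly this message.
A minimal sketch of the old-API form, with the interface fully qualified to
avoid the clash:

    // Old (mapred) API: Mapper is an interface, so "implements" compiles.
    public static class MyMap extends MapReduceBase
            implements org.apache.hadoop.mapred.Mapper<LongWritable, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private final Text word = new Text();

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            StringTokenizer tokenizer = new StringTokenizer(value.toString());
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

Alternatively, write the mapper purely against the new API (extend
org.apache.hadoop.mapreduce.Mapper and use Context); the job configuration is
then reachable inside map/reduce via context.getConfiguration(), which also
answers your second question.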



On Wed, Sep 2, 2015 at 8:57 PM, xeonmailinglist 
wrote:

> I am setting up my wordcount example, which is very similar to the WordCount
> example that we find on the Internet.
>
>1.
>
>The MyMap class extends and implements the same classes as the ones
>defined in the original wordcount example, but in my case I get the error
>of “Interface expected here”. I really don’t understand why I get this
>error. See my example below [1]. Any help here?
>2.
>
>Is it possible to access the JobConf variable inside the map or reduce
>methods?
>
>
> [1] My Wordcount example
>
>
> package org.apache.hadoop.mapred.examples;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.io.IntWritable;
> import org.apache.hadoop.io.LongWritable;
> import org.apache.hadoop.io.Text;
> import org.apache.hadoop.mapred.*;
> import org.apache.hadoop.mapreduce.Job;
> import org.apache.hadoop.mapreduce.Mapper;
> import org.apache.hadoop.mapreduce.ReduceContext;
> import org.apache.hadoop.mapreduce.Reducer;
> import org.apache.hadoop.util.GenericOptionsParser;
>
> import java.io.IOException;
> import java.util.ArrayList;
> import java.util.Iterator;
> import java.util.List;
> import java.util.StringTokenizer;
>
> /**
>  * My example of a common wordcount. Compare with the official 
> WordCount.class to understand the differences between both classes.
>  */
> public class MyWordCount {
>
> public static class MyMap extends MapReduceBase implements 
> Mapper<LongWritable, Text, Text, IntWritable> {  <-- Interface expected 
> here!!!
> private final static IntWritable one = new IntWritable(1);
> private Text word = new Text();
>
> public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
> String line = value.toString();
> StringTokenizer tokenizer = new StringTokenizer(line);
> while (tokenizer.hasMoreTokens()) {
> word.set(tokenizer.nextToken());
> output.collect(word, one);
> }
> }
> }
>
>  public static class MyReducer
> extends Reducer<Text, IntWritable, Text, IntWritable> {
> private IntWritable result = new IntWritable();
> MedusaDigests parser = new MedusaDigests();
>
> public void reduce(Text key, Iterable<IntWritable> values,
>Context context
> ) throws IOException, InterruptedException {
> int sum = 0;
> for (IntWritable val : values) {
> System.out.println(" - key ( " + key.getClass().toString() + 
> "): " + key.toString()
> + " value ( " + val.getClass().toString() + " ): " + 
> val.toString());
> sum += val.get();
> }
> result.set(sum);
> context.write(key, result);
> }
>
> public void run(Context context) throws IOException, 
> InterruptedException {
> setup(context);
> try {
> while (context.nextKey()) {
> System.out.println("Key: " + context.getCurrentKey());
> reduce(context.getCurrentKey(), context.getValues(), 
> context);
> // If a back up store is used, reset it
> Iterator<IntWritable> iter = 
> context.getValues().iterator();
> if(iter instanceof ReduceContext.ValueIterator) {
> 
> ((ReduceContext.ValueIterator)iter).resetBackupStore();
> }
> }
> } finally {
> cleanup(context);
> }
> }
>
> protected void cleanup(Context context)
> throws IOException, InterruptedException {
> parser.cleanup(context);
> }
> }
>
> /** Identity mapper set by the user. */
> public static class MyFullyIndentityMapper
> extends Mapper<Object, Text, Text, IntWritable>{
>
> private Text word = new Text();
> private IntWritable val = new IntWritable();
>
> public void map(Object key, Text value, Context context
> ) throws IOException, InterruptedException {
>
> StringTokenizer itr = new StringTokenizer(value.toString());
> word.set(itr.nextToken());
> val.set(Integer.valueOf(itr.nextToken()));
> context.write(word, val);
> }
>
> public void run(Context context) throws IOException, 
> InterruptedException {
> setup(context);
> try {
> while (context.nextKeyValue()) {
> System.out.println("Key ( " + 
> context.getCurrentKey().getClass().getName() + " ): " + 
> context.getCurrentKey()
> + " Value (" + 
> context.getCurrentValue().getClass().getName() + "): " + 
> context.getCurrent

Re: Hive showing SemanticException [Error 10002]: Line 3:21 Invalid column reference 'mbdate

2015-10-28 Thread sreebalineni .
Check if the query works without the join and without the alias reference. If
it does, the problem is with the alias name. I recently faced the same problem;
I think adding AS just before the alias name worked.
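If that is it, a rough sketch of the usual workaround (assuming an older Hive
that does not allow SELECT aliases in GROUP BY) is to repeat the full
expression instead of the TODAY_LY alias:

    -- sketch only: group by the full expression rather than the alias
    SELECT a.mbcmpy, a.mbwhse, a.mbdept, a.mbitem,
           (CASE WHEN to_date(a.mbdate) = d.today_ly THEN (a.mbdsun) END) AS today_ly
    FROM items a
    JOIN ivsdays d ON a.mbdate = d.cldatei
    JOIN ivsref  r ON r.company = a.mbcmpy
                  AND r.warehouse = a.mbwhse
                  AND r.itemnumber = a.mbitem
    WHERE a.mbcmpy = 1
      AND a.mbdept = 20
    GROUP BY a.mbcmpy, a.mbwhse, a.mbdept, a.mbitem,
             (CASE WHEN to_date(a.mbdate) = d.today_ly THEN (a.mbdsun) END)
    ORDER BY 1, 2, 3, 4, 5;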
On 28 Oct 2015 20:22, "Kumar Jayapal"  wrote:

> Hello,
>
>
> Can someone please help? When I execute a Hive query with a CASE statement I
> get this error: "Error while compiling statement: FAILED:
> SemanticException [Error 10002]: Line 3:21 Invalid column reference 'mbdate'"
>
> Here is the query :
> select  a.mbcmpy, a.mbwhse, a.mbdept, a.mbitem,
>
> (CASE WHEN to_date(a.mbdate) =  d.today_ly  THEN (a.mbdsun) END) as
> TODAY_LY
> FROM items a
> JOIN ivsdays d
> ON a.mbdate = d.cldatei
> Join ivsref r
> ON r.company = a.mbcmpy
> AND r.warehouse = a.mbwhse
> AND r.itemnumber = a.mbitem
>
> WHERE
> a.mbcmpy = 1
> AND a.mbdept = 20
>
> group by
>a.mbcmpy, a.mbwhse, a.mbdept, a.mbitem, Today_ly
>
> ORDER by
> 1,2,3,4,5
>
> The same query works in Impala. I have checked that the mbdate column is
> present in the table.
>
>
>
> Here is the hue log :
>
> [27/Oct/2015 14:53:21 -0700] dbms ERRORBad status for request
> TExecuteStatementReq(confOverlay={},
> sessionHandle=TSessionHandle(sessionId=THandleIdentifier(secret='L1:\x9c3KB\x94\xaf\x8c\xfa\x8d\x98\x97\xe1Q',
> guid='+o\x00\xe8\xc5\x12C\xab\xbb\xb5KV\xe0\xf5\x93\xc9')), runAsync=True,
> statement='select  a.mbcmpy, a.mbwhse, a.mbdept, a.mbitem, \n\n(CASE WHEN
> to_date(a.mbdate) =  d.today_ly  THEN (a.mbdsun) END) as TODAY_LY\n\nFROM
> tlog.item_detail a\nJOIN Adv_analytics.ivsdays d\nON a.mbdate =
> d.cldatei\nJoin adv_analytics.ivsref r\nON r.company = a.mbcmpy\nAND
> r.warehouse = a.mbwhse \nAND r.itemnumber = a.mbitem\n\n\nWHERE\na.mbcmpy =
> 1\nAND a.mbdept = 20\n\n\ngroup by \n   a.mbcmpy, a.mbwhse, a.mbdept,
> a.mbitem, Today_ly\n\nORDER by\n1,2,3,4,5'):
> TExecuteStatementResp(status=TStatus(errorCode=10002, errorMessage="Error
> while compiling statement: FAILED: SemanticException [Error 10002]: Line
> 3:21 Invalid column reference 'mbdate'", sqlState='42000',
> infoMessages=["*org.apache.hive.service.cli.HiveSQLException:Error while
> compiling statement: FAILED: SemanticException [Error 10002]: Line 3:21
> Invalid column reference 'mbdate':17:16",
> 'org.apache.hive.service.cli.operation.Operation:toSQLException:Operation.java:315',
> 'org.apache.hive.service.cli.operation.SQLOperation:prepare:SQLOperation.java:102',
> 'org.apache.hive.service.cli.operation.SQLOperation:runInternal:SQLOperation.java:171',
> 'org.apache.hive.service.cli.operation.Operation:run:Operation.java:257',
> 'org.apache.hive.service.cli.session.HiveSessionImpl:executeStatementInternal:HiveSessionImpl.java:398',
> 'org.apache.hive.service.cli.session.HiveSessionImpl:executeStatementAsync:HiveSessionImpl.java:385',
> 'org.apache.hive.service.cli.CLIService:executeStatementAsync:CLIService.java:258',
> 'org.apache.hive.service.cli.thrift.ThriftCLIService:ExecuteStatement:ThriftCLIService.java:490',
> 'org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement:getResult:TCLIService.java:1313',
> 'org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement:getResult:TCLIService.java:1298',
> 'org.apache.thrift.ProcessFunction:process:ProcessFunction.java:39',
> 'org.apache.thrift.TBaseProcessor:process:TBaseProcessor.java:39',
> 'org.apache.hive.service.auth.TSetIpAddressProcessor:process:TSetIpAddressProcessor.java:56',
> 'org.apache.thrift.server.TThreadPoolServer$WorkerProcess:run:TThreadPoolServer.java:285',
> 'java.util.concurrent.ThreadPoolExecutor:runWorker:ThreadPoolExecutor.java:1145',
> 'java.util.concurrent.ThreadPoolExecutor$Worker:run:ThreadPoolExecutor.java:615',
> 'java.lang.Thread:run:Thread.java:745',
> "*org.apache.hadoop.hive.ql.parse.SemanticException:Line 3:21 Invalid
> column reference 'mbdate':32:16",
> 'org.apache.hadoop.hive.ql.parse.SemanticAnalyzer:genAllExprNodeDesc:SemanticAnalyzer.java:10299',
> 'org.apache.hadoop.hive.ql.parse.SemanticAnalyzer:genExprNodeDesc:SemanticAnalyzer.java:10247',
> 'org.apache.hadoop.hive.ql.parse.SemanticAnalyzer:genSelectPlan:SemanticAnalyzer.java:3720',
> 'org.apache.hadoop.hive.ql.parse.SemanticAnalyzer:genSelectPlan:SemanticAnalyzer.java:3499',
> 'org.apache.hadoop.hive.ql.parse.SemanticAnalyzer:genPostGroupByBodyPlan:SemanticAnalyzer.java:8761',
> 'org.apache.hadoop.hive.ql.parse.SemanticAnalyzer:genBodyPlan:SemanticAnalyzer.java:8716',
> 'org.apache.hadoop.hive.ql.parse.SemanticAnalyzer:genPlan:SemanticAnalyzer.java:9573',
> 'org.apache.hadoop.hive.ql.parse.SemanticAnalyzer:genPlan:SemanticAnalyzer.java:9466',
> 'org.apache.hadoop.hive.ql.parse.SemanticAnalyzer:genOPTree:SemanticAnalyzer.java:9902',
> 'org.apache.hadoop.hive.ql.parse.SemanticAnalyzer:analyzeInternal:SemanticAnalyzer.java:9913',
> 'org.apache.hadoop.hive.ql.parse.SemanticAnalyzer:analyzeInternal:SemanticAnalyzer.java:9830',
> 'org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer:analyze:BaseSemanticAnalyzer.java:222',
> 'org.apache.hado

Re: datanode is unable to connect to namenode

2016-06-28 Thread sreebalineni .
Are you able to telnet or ping the namenode from the datanode? Check the firewalls as well.
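For example, from the datanode host (a sketch; hadoop-master and port 8020 are
taken from your warning):

    # basic reachability of the namenode host
    ping -c 3 hadoop-master

    # is the namenode RPC port reachable from this datanode?
    telnet hadoop-master 8020        # or: nc -zv hadoop-master 8020

    # on the namenode: is it listening on 8020, and not only on 127.0.0.1?
    netstat -tlnp | grep 8020

    # check firewall rules on both hosts
    sudo iptables -L -n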
On Jun 29, 2016 12:39 AM, "Aneela Saleem"  wrote:

> Hi all,
>
> I have setup two nodes cluster with security enabled. I have everything
> running successful like namenode, datanode, resourcemanager, nodemanager,
> jobhistoryserver etc. But datanode is unable to connect to namenode, as i
> can see only one node on the web UI. checking logs of datanode gives
> following warning:
>
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting
> to server: hadoop-master/192.168.23.206:8020
>
> The rest looks fine. Please help me in this regard; what could be the issue?
>


RE: Windows and Linux hadoop cluster

2016-07-20 Thread sreebalineni .
Was it in production with a good amount of workload? That's interesting. Which
distribution was used?

On Jul 20, 2016 8:13 PM, "Ashish Kumar9"  wrote:

> I have tried a heterogeneous Hadoop 2.6 cluster across multiple Linux
> distros and hardware architectures (x86_64, ppc64le, aarch64) and it worked.
> I did not see any technical challenge.
>
>
>
> From: Alexander Alten-Lorenz 
> To: Prachi Sharma , "user@hadoop.apache.org" 
> Date: 07/20/2016 04:42 PM
> Subject: RE: Windows and Linux hadoop cluster
> --
>
>
>
> Hi,
>
> That should be possible, but will have performance impacts / additional
> configurations and potential misbehavior. But in general, it should work
> for Yarn, but not for MRv1.
>
> https://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/SecureContainer.html
> 
>
> cheers,
>  --alex
>
> --
> b: mapredit.blogspot.com
>
> From: Prachi Sharma 
> Sent: Wednesday, July 20, 2016 9:31 AM
> To: user@hadoop.apache.org 
> Subject: Windows and Linux hadoop cluster
>
> Hi All,
>
> Please let me know if it's feasible to have a Hadoop cluster with data nodes
> running on multiple operating systems, for instance a few data nodes running
> on Windows Server and others on a Linux-based OS (RHEL, CentOS).
>
> If the above scenario is feasible, then please provide the configuration
> settings required in the various xml files (hdfs-site.xml, core-site.xml,
> mapred-site.xml, yarn-site.xml) and environment files
> (hadoop-env.sh/hadoop-cmd.sh) for the Windows and Linux data nodes and the
> namenode.
>
> Thanks !
> Prachi
>
>
>


Re: Pause between tasks or jobs?

2016-08-12 Thread sreebalineni .
Just curious to understand whether MapReduce is the right strategy for this
scenario.
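If it is, one pattern that gets close to a "pause" (just a sketch against the
standard MapReduce 2 API, with hypothetical paths and the job setup elided) is
to drop the chaining and have the driver run the jobs one at a time, validating
each job's output before submitting the next:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class TwoPassDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Path input = new Path(args[0]);
            Path firstOut = new Path(args[1]);    // intermediate output
            Path finalOut = new Path(args[2]);    // final output

            Job first = Job.getInstance(conf, "first pass");
            first.setJarByClass(TwoPassDriver.class);
            // ... set mapper/reducer/output key-value classes for the first job ...
            FileInputFormat.addInputPath(first, input);
            FileOutputFormat.setOutputPath(first, firstOut);
            if (!first.waitForCompletion(true)) {
                System.exit(1);                   // stop the chain on failure
            }

            // "Pause" point: validate the intermediate output before job 2 starts.
            FileSystem fs = FileSystem.get(conf);
            if (!fs.exists(new Path(firstOut, "_SUCCESS"))) {
                System.exit(1);                   // first job did not commit its output
            }

            Job second = Job.getInstance(conf, "second pass");
            second.setJarByClass(TwoPassDriver.class);
            // ... set mapper/reducer/output key-value classes for the second job ...
            FileInputFormat.addInputPath(second, firstOut);
            FileOutputFormat.setOutputPath(second, finalOut);
            System.exit(second.waitForCompletion(true) ? 0 : 1);
        }
    }

The check between the two waitForCompletion calls can be anything, from testing
the _SUCCESS marker to reading a sample of the part files. ChainMapper and
ChainReducer run all their stages inside a single task, so there is no hook
there to stop between stages.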

On Aug 12, 2016 1:18 AM, "xeon Mailinglist" 
wrote:

> I am looking for a way to pause a chained job or a chained task. I want to
> do this because I want to validate the output of each map or reduce phase,
> or between each job execution. Is it possible to pause the execution of
> chained jobs or chained mappers or reducers in MapReduce V2? I was looking
> for the ChainedMapper and ChainedReducer, but I haven't found anything that
> could allow me to pause the execution.
>