Re: how to use Yarn API to find task/attempt status

2016-03-09 Thread Jeff Zhang
If it is for M/R, then maybe this is what you want https://hadoop.apache.org/docs/r2.6.0/api/org/apache/hadoop/mapreduce/JobStatus.html On Thu, Mar 10, 2016 at 1:58 PM, Frank Luo wrote: > Let’s say there are 10 standard M/R jobs running. How to find how many > tasks are
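JobStatus reports per-job state; for per-task done/running/pending counts, the MapReduce Application Master (and, after completion, the Job History Server) also expose a REST API whose job resource carries fields such as `mapsPending`, `mapsRunning` and `mapsCompleted`. A minimal sketch of tallying those counts, using a stubbed response shaped like that API's output (the job id and all numbers below are invented for illustration):

```python
import json

# Illustrative payload shaped like the MR Application Master REST response
# (GET http://<am-host>:<port>/ws/v1/mapreduce/jobs/<job-id>).
# The job id and counts are made up for this example.
SAMPLE_RESPONSE = """
{
  "job": {
    "id": "job_1457500000000_0001",
    "state": "RUNNING",
    "mapsTotal": 100, "mapsCompleted": 40,
    "mapsRunning": 10, "mapsPending": 50,
    "reducesTotal": 20, "reducesCompleted": 0,
    "reducesRunning": 0, "reducesPending": 20
  }
}
"""

def task_counts(payload):
    """Return (done, running, pending) task totals for one job."""
    job = json.loads(payload)["job"]
    done = job["mapsCompleted"] + job["reducesCompleted"]
    running = job["mapsRunning"] + job["reducesRunning"]
    pending = job["mapsPending"] + job["reducesPending"]
    return done, running, pending

if __name__ == "__main__":
    print(task_counts(SAMPLE_RESPONSE))  # (40, 10, 70)
```

On a live cluster the same JSON would be fetched over HTTP (e.g. with `urllib`) from the AM while the job runs, or from the Job History Server afterwards.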

Re: how to use Yarn API to find task/attempt status

2016-03-09 Thread Sultan Alamro
You can still see the task status through the web interfaces. Look at the end of this page https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/ClusterSetup.html > On Mar 10, 2016, at 12:58 AM, Frank Luo wrote: > > Let’s say there are 10 standard M/R

RE: how to use Yarn API to find task/attempt status

2016-03-09 Thread Frank Luo
Let’s say there are 10 standard M/R jobs running. How to find how many tasks are done/running/pending? From: Jeff Zhang [mailto:zjf...@gmail.com] Sent: Wednesday, March 09, 2016 9:33 PM To: Frank Luo Cc: user@hadoop.apache.org Subject: Re: how to use Yarn API to find task/attempt status I don't

how to use Yarn API to find task/attempt status

2016-03-09 Thread Frank Luo
I have a need to programmatically find out how many tasks are pending in Yarn. Is there a way to do it through a Java API? I looked at YarnClient, but was not able to find what I need. Thanks in advance. Frank Luo This email and any attachments transmitted with it are intended for use by the
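`YarnClient` is application-oriented rather than task-oriented, but the ResourceManager's REST endpoint `/ws/v1/cluster/metrics` reports cluster-wide fields such as `appsPending` and `containersPending` (pending containers being a rough proxy for tasks still waiting on resources). A sketch of reading those, against a stubbed response (the numbers are invented):

```python
import json

# Illustrative payload shaped like the ResourceManager metrics response
# (GET http://<rm-host>:8088/ws/v1/cluster/metrics); numbers are invented.
SAMPLE_METRICS = """
{
  "clusterMetrics": {
    "appsRunning": 10,
    "appsPending": 3,
    "containersAllocated": 120,
    "containersPending": 45
  }
}
"""

def pending_work(payload):
    """Extract the pending-work counters from a cluster metrics payload."""
    m = json.loads(payload)["clusterMetrics"]
    return {"apps_pending": m["appsPending"],
            "containers_pending": m["containersPending"]}

if __name__ == "__main__":
    print(pending_work(SAMPLE_METRICS))
```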

Re: Impala

2016-03-09 Thread Juri Yanase Triantaphyllou
Thanks. I will do it! Juri -Original Message- From: Sean Busbey To: Nagalingam, Karthikeyan Cc: Kumar Jayapal ; user ; cdh-user Sent: Wed, Mar 9, 2016 12:53

A Mapreduce job failed. Need Help!

2016-03-09 Thread Juri Yanase Triantaphyllou
Dear Hadoop users: I followed the instructions of “Build and Install Hadoop 2.x or newer on Windows” and was able to build and install Hadoop 2.7.2 on a PC running Windows 10. Then, I tried to run a wordcount job on a single node environment, but this mapreduce job failed. Could someone please

Re: Impala

2016-03-09 Thread Sean Busbey
You should join the mailing list for Apache Impala (incubating) and ask your question over there: http://mail-archives.apache.org/mod_mbox/incubator-impala-dev/ On Wed, Mar 9, 2016 at 8:12 AM, Nagalingam, Karthikeyan < karthikeyan.nagalin...@netapp.com> wrote: > Hello, > > > > I am new to

Impala

2016-03-09 Thread Nagalingam, Karthikeyan
Hello, I am new to Impala. My goal is to test joins and aggregations against 2 million and 10 million records. Can you please provide some documentation or a website for getting started? Regards, Karthikeyan Nagalingam, Technical Marketing Engineer (Big Data Analytics) Mobile: 919-376-6422

Re: Showing negative numbers for Hadoop resource manager web interface

2016-03-09 Thread Chathuri Wimalasena
Thank you for the quick response. Regards, Chathuri On Wed, Mar 9, 2016 at 10:40 AM, Dmytro Kabakchei < dmitry.kabakc...@gmail.com> wrote: > Hi, > Check out https://issues.apache.org/jira/browse/YARN-3933 > It isn't resolved yet, but it gives an idea of what is going on. A patch is > also available. >

Re: Showing negative numbers for Hadoop resource manager web interface

2016-03-09 Thread Dmytro Kabakchei
Hi, Check out https://issues.apache.org/jira/browse/YARN-3933 It isn't resolved yet, but it gives an idea of what is going on. A patch is also available. Kind regards, Dmytro Kabakchei On 09.03.2016 17:27, Chathuri Wimalasena wrote: Hi All, We have a Hadoop cluster running Hadoop 2.5.1.

oozie java action issue

2016-03-09 Thread Immanuel Fredrick
2016-03-09 03:54:32,070 ERROR [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Error writing History Event: org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinishedEvent@7594e5ec java.nio.channels.ClosedChannelException at

[Error]Run Spark job as hdfs user from oozie workflow

2016-03-09 Thread Divya Gehlot
Hi, I have a non-secure Hadoop 2.7.2 cluster on EC2 with Spark 1.5.2. I am submitting my Spark Scala script through a shell script using an Oozie workflow. I am submitting the job as the hdfs user, but it is running as user "yarn", so all the output gets stored under the user/yarn directory only. When
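On a non-secure (simple-auth) cluster, one common workaround is to set `HADOOP_USER_NAME` at the top of the shell action's script before invoking `spark-submit`; the Hadoop client libraries then act as that user, so output lands under /user/hdfs rather than /user/yarn. A sketch (the `spark-submit` command line is a placeholder, not the poster's actual invocation):

```shell
#!/bin/sh
# With simple authentication (no Kerberos), the Hadoop client honors
# HADOOP_USER_NAME, overriding the OS user the Oozie launcher runs as.
export HADOOP_USER_NAME=hdfs
echo "Submitting as: ${HADOOP_USER_NAME}"
# spark-submit --master yarn-cluster my_job.jar   # placeholder command
```

Note this only works because the cluster is non-secure; on a Kerberized cluster the job runs as the authenticated principal and this variable is ignored.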