Re: [VOTE] Moving Submarine to a separate Apache project proposal

2019-09-02 Thread Devaraj K
+1

Thanks Wangda for the proposal.
I would like to participate in this project. Please also add me to the
project.

Regards
Devaraj K

On Mon, Sep 2, 2019 at 8:50 PM zac yuan  wrote:

> +1
>
> Submarine will be a complete solution for AI service development. It can
> take advantage of two of the best cluster systems, YARN and Kubernetes, which
> will help more and more people gain AI capabilities. Becoming a separate
> Apache project will clearly accelerate development.
>
> Looking forward to a big success for the Submarine project~
>
> 朱林浩 wrote on Tue, Sep 3, 2019 at 10:38 AM:
>
> > +1,
> > Hopefully it will become a top-level project.
> >
> > I also hope to make more contributions to this project.
> >
> > At 2019-09-03 09:26:53, "Naganarasimha Garla" <
> naganarasimha...@apache.org>
> > wrote:
> > >+1,
> > >I would also like to start participating in this project and hope to get
> > >myself added to the project.
> > >
> > >Thanks and Regards,
> > >+ Naga
> > >
> > >On Tue, Sep 3, 2019 at 8:35 AM Wangda Tan  wrote:
> > >
> > >> Hi Sree,
> > >>
> > >> I added it to the proposal; please let me know what you think:
> > >>
> > >> The traditional path at Apache would have been to create an incubator
> > >> > project, but the code is already being released by Apache and most
> of
> > the
> > >> > developers are familiar with Apache rules and guidelines. In
> > particular,
> > >> > the proposed PMC has 2 Apache TLP PMCs and proposed initial
> committers
> > >> > have 4 Apache TLP PMCs (from 3 different companies). They will
> provide
> > >> > oversight and guidance for the developers that are less experienced
> in
> > >> the
> > >> > Apache Way. Therefore, the Submarine project would like to propose
> > >> becoming
> > >> > a Top Level Project at Apache.
> > >> >
> > >>
> > >> To me, going straight to a TLP has mostly pros: it is an easier process
> > >> (the same as the ORC spin-off from Hive), with much less overhead for both
> > >> the dev community and the Apache side.
> > >>
> > >> Thanks,
> > >> Wangda
> > >>
> > >> On Sun, Sep 1, 2019 at 2:04 PM Sree Vaddi 
> > wrote:
> > >>
> > >> > +1 to move Submarine to a separate Apache project.
> > >> >
> > >> > It is not clear in the proposal whether, if the Submarine majority votes
> > >> > to move to a separate Apache project, it will go through incubation and
> > >> > then become a TLP (top-level project) later:
> > >> > 1. Incubation
> > >> >    pros and cons
> > >> >    efforts towards making it a TLP
> > >> >
> > >> > 2. Direct TLP
> > >> >
> > >> >
> > >> > Thank you.
> > >> > /Sree
> > >> >
> > >> >
> > >> >
> > >> >
> > >> > On Saturday, August 31, 2019, 10:19:06 PM PDT, Wangda Tan <
> > >> > wheele...@gmail.com> wrote:
> > >> >
> > >> >
> > >> > Hi all,
> > >> >
> > >> > As we discussed in the previous thread [1],
> > >> >
> > >> > I just moved the spin-off proposal to CWIKI and completed all TODO
> > parts.
> > >> >
> > >> >
> > >> >
> > >>
> >
> https://cwiki.apache.org/confluence/display/HADOOP/Submarine+Project+Spin-Off+to+TLP+Proposal
> > >> >
> > >> > If you are interested in learning more about this, please review the
> > >> > proposal and let me know if you have any questions or suggestions. It
> > >> > will be sent to the board once the vote passes. (And please note that
> > >> > the previous voting thread [2] to move Submarine to a separate GitHub
> > >> > repo is a necessary step for moving Submarine to a separate Apache
> > >> > project, but not a sufficient one, so I sent two separate voting threads.)
> > >> >
> > >> > Please let me know if I missed anyone in the proposal, and reply if
> > you'd
> > >> > like to be included in the project.
> > >> >
> > >> > This vote runs for 7 days and will conclude on Sep 7th at 11 PM PDT.
> > >> >
> > >> > Thanks,
> > >> > Wangda Tan
> > >> >
> > >> > [1]
> > >> >
> > >> >
> > >>
> >
> https://lists.apache.org/thread.html/4a2210d567cbc05af92c12aa6283fd09b857ce209d537986ed800029@%3Cyarn-dev.hadoop.apache.org%3E
> > >> > [2]
> > >> >
> > >> >
> > >>
> >
> https://lists.apache.org/thread.html/6e94469ca105d5a15dc63903a541bd21c7ef70b8bcff475a16b5ed73@%3Cyarn-dev.hadoop.apache.org%3E
> > >> >
> > >>
> >
>


Re: Apply for joining development, review and test

2017-02-14 Thread Devaraj K
Hi Gary,

   Welcome to the community. You can start with the
https://wiki.apache.org/hadoop/HowToContribute page if you haven't looked at it
already.


On Tue, Feb 14, 2017 at 1:10 PM, Gary  wrote:

> Dear Committee,
>
> I'm very interested in this project and I'd like to join your community to
> contribute code and effort. Thanks.
>
> Very truly yours
> +Gary
>


-- 


Thanks
Devaraj K


Re: [VOTE] Release Apache Hadoop 2.7.1 RC0

2015-06-29 Thread Devaraj K
+1 (non-binding)

Deployed it on a 3-node cluster and ran some YARN apps and MR examples; works
fine.


On Tue, Jun 30, 2015 at 1:46 AM, Xuan Gong  wrote:

> +1 (non-binding)
>
> Compiled and deployed a single node cluster, ran all the tests.
>
>
> Xuan Gong
>
> On 6/29/15, 1:03 PM, "Arpit Gupta"  wrote:
>
> >+1 (non binding)
> >
> >We have been testing rolling upgrades and downgrades from 2.6 to this
> >release and have had successful runs.
> >
> >--
> >Arpit Gupta
> >Hortonworks Inc.
> >http://hortonworks.com/
> >
> >> On Jun 29, 2015, at 12:45 PM, Lei Xu  wrote:
> >>
> >> +1 binding
> >>
> >> Downloaded src and bin distribution, verified md5, sha1 and sha256
> >> checksums of both tar files.
> >> Built src using mvn package.
> >> Ran a pseudo HDFS cluster
> >> Ran dfs -put some files, and checked files on NN's web interface.
> >>
> >>
> >>
> >> On Mon, Jun 29, 2015 at 11:54 AM, Wangda Tan 
> >>wrote:
> >>> +1 (non-binding)
> >>>
> >>> Compiled and deployed a single node cluster, tried to change node
> >>>labels
> >>> and run distributed_shell with node label specified.
> >>>
> >>> On Mon, Jun 29, 2015 at 10:30 AM, Ted Yu  wrote:
> >>>
> >>>> +1 (non-binding)
> >>>>
> >>>> Compiled hbase branch-1 with Java 1.8.0_45
> >>>> Ran unit test suite which passed.
> >>>>
> >>>> On Mon, Jun 29, 2015 at 7:22 AM, Steve Loughran
> >>>>
> >>>> wrote:
> >>>>
> >>>>>
> >>>>> +1 binding from me.
> >>>>>
> >>>>> Tests:
> >>>>>
> >>>>> Rebuild slider with Hadoop.version=2.7.1; ran all the tests including
> >>>>> against a secure cluster.
> >>>>> Repeated for windows running Java 8.
> >>>>>
> >>>>> All tests passed
> >>>>>
> >>>>>
> >>>>>> On 29 Jun 2015, at 09:45, Vinod Kumar Vavilapalli
> >>>>>>
> >>>>> wrote:
> >>>>>>
> >>>>>> Hi all,
> >>>>>>
> >>>>>> I've created a release candidate RC0 for Apache Hadoop 2.7.1.
> >>>>>>
> >>>>>> As discussed before, this is the next stable release to follow up
> >>>> 2.6.0,
> >>>>>> and the first stable one in the 2.7.x line.
> >>>>>>
> >>>>>> The RC is available for validation at:
> >>>>>> *http://people.apache.org/~vinodkv/hadoop-2.7.1-RC0/
> >>>>>> <http://people.apache.org/~vinodkv/hadoop-2.7.1-RC0/>*
> >>>>>>
> >>>>>> The RC tag in git is: release-2.7.1-RC0
> >>>>>>
> >>>>>> The maven artifacts are available via repository.apache.org at
> >>>>>> *https://repository.apache.org/content/repositories/orgapachehadoop-1019/
> >>>>>> <https://repository.apache.org/content/repositories/orgapachehadoop-1019/>*
> >>>>>>
> >>>>>> Please try the release and vote; the vote will run for the usual 5
> >>>> days.
> >>>>>>
> >>>>>> Thanks,
> >>>>>> Vinod
> >>>>>>
> >>>>>> PS: It took 2 months instead of the planned [1] 2 weeks in getting
> >>>>>>this
> >>>>>> release out: post-mortem in a separate thread.
> >>>>>>
> >>>>>> [1]: A 2.7.1 release to follow up 2.7.0
> >>>>>> http://markmail.org/thread/zwzze6cqqgwq4rmw
> >>>>>
> >>>>>
> >>>>
> >>
> >>
> >>
> >> --
> >> Lei (Eddy) Xu
> >> Software Engineer, Cloudera
> >>
> >
> >
>
>


-- 


Thanks
Devaraj K


Re: IMPORTANT: automatic changelog creation

2015-07-03 Thread Devaraj K
+1

Thanks Allen and Andrew for your efforts on this.

Thanks
Devaraj

On Fri, Jul 3, 2015 at 11:29 AM, Varun Vasudev  wrote:

> +1
>
> Many thanks to Allen and Andrew for driving this.
>
> -Varun
>
>
>
> On 7/3/15, 10:25 AM, "Vinayakumar B"  wrote:
>
> >+1 for the auto generation.
> >
> >bq. Besides, after a release R1 is out, someone may (accidentally or
> >intentionally) modify the JIRA summary.
> >Is there any possibility that we can restrict someone from editing an
> >issue in JIRA once it's marked as "closed" after a release?
> >
> >Regards,
> >Vinay
> >
> >On Fri, Jul 3, 2015 at 8:32 AM, Karthik Kambatla 
> wrote:
> >
> >> Huge +1
> >>
> >> On Thursday, July 2, 2015, Chris Nauroth 
> wrote:
> >>
> >> > +1
> >> >
> >> > Thank you to Allen for the script, and thank you to Andrew for
> >> > volunteering to drive the conversion.
> >> >
> >> > --Chris Nauroth
> >> >
> >> >
> >> >
> >> >
> >> > On 7/2/15, 2:01 PM, "Andrew Wang"  >> >
> >> > wrote:
> >> >
> >> > >Hi all,
> >> > >
> >> > >I want to revive the discussion on this thread, since the overhead of
> >> > >CHANGES.txt came up again in the context of backporting fixes for
> >> > >maintenance releases.
> >> > >
> >> > >Allen's automatic generation script (HADOOP-11731) went into trunk
> but
> >> not
> >> > >branch-2, so we're still maintaining CHANGES.txt everywhere. What do
> >> > >people
> >> > >think about backporting this to branch-2 and then removing
> CHANGES.txt
> >> > >from
> >> > >trunk/branch-2 (HADOOP-11792)? Based on discussion on this thread
> and in
> >> > >HADOOP-11731, we seem to agree that CHANGES.txt is an unreliable
> source
> >> of
> >> > >information, and JIRA is at least as reliable and probably much more
> so.
> >> > >Thus I don't see any downsides to backporting it.
> >> > >
> >> > >Would like to hear everyone's thoughts on this, I'm willing to drive
> the
> >> > >effort.
> >> > >
> >> > >Thanks,
> >> > >Andrew
> >> > >
> >> > >On Thu, Apr 2, 2015 at 2:00 PM, Tsz Wo Sze
> 
> >> > >wrote:
> >> > >
> >> > >> Generating the change log from JIRA is a good idea.  It is based on the
> >> > >> assumption that each JIRA has an accurate summary (a.k.a. JIRA title)
> >> > >> reflecting the committed change. Unfortunately, the assumption is invalid
> >> > >> in many cases since we never enforce that the JIRA summary must be the
> >> > >> same as the change log.  We may compare the current CHANGES.txt with the
> >> > >> generated change log.  I bet the diff is long.
> >> > >> Besides, after a release R1 is out, someone may (accidentally or
> >> > >> intentionally) modify the JIRA summary.  Then, the entry for the same
> >> > >> item in a later release R2 could be different from the one in R1.
> >> > >> I agree that manually editing CHANGES.txt is not a perfect solution.
> >> > >> However, it has worked well in the past for many releases.  I suggest we
> >> > >> keep the current dev workflow.  Try using the new script provided by
> >> > >> HADOOP-11731 to generate the next release.  If everything works well, we
> >> > >> shall remove CHANGES.txt and revise the dev workflow.  What do you think?
> >> > >> Regards, Tsz-Wo
> >> > >>
> >> > >>
> >> > >>  On Thursday, April 2, 2015 12:57 PM, Allen Wittenauer <
> >> > >> a...@altiscale.com > wrote:
> >> > >>
> >> > >>
> >> > >>
> >> > >>
> >> > >>
> >> > >> On Apr 2, 2015, at 12:40 PM, Vinod Kumar Vavilapalli <
> >> > >> vino...@hortonworks.com > wrote:
> >> > >>
> >> > >> >
> >> > >> > We'd then be doing two commits for every patch. Let's simply not remove
> >> > >> > CHANGES.txt from trunk and keep the existing dev workflow, but document
> >> > >> > the release process to remove CHANGES.txt in trunk at the time a release
> >> > >> > goes out of trunk.
> >> > >>
> >> > >>
> >> > >>
> >> > >> Might as well copy branch-2's CHANGES.txt into trunk then. (Or 2.7's.
> >> > >> Last I looked, people updated branch-2 and not 2.7's, or vice versa, for
> >> > >> some patches that went into both branches.)  So that folks who are
> >> > >> committing to both branches and want to cherry-pick all changes can.
> >> > >>
> >> > >> I mean, trunk's is very very very wrong. Right now. Today. Borderline
> >> > >> useless. See HADOOP-11718 (which I will now close out as won't fix)… and
> >> > >> that jira is only what is miscategorized, not what is missing.
> >> > >>
> >> > >>
> >> > >>
> >> > >>
> >> >
> >> >
> >>
> >> --
> >> Mobile
> >>
>
>


-- 


Thanks
Devaraj K


RE: ResourceCalculatorPlugin.getCpuUsage

2012-04-17 Thread Devaraj k
It should return 100.0 for a fully utilized machine.
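
In other words, callers can treat the return value as a percentage in the
range 0-100, with -1 meaning the reading is unavailable. A minimal,
hypothetical helper (the class name and messages are illustrative only,
assuming the MRv1 org.apache.hadoop.util.ResourceCalculatorPlugin):

import org.apache.hadoop.util.ResourceCalculatorPlugin;

public class CpuUsageExample {
  /** Interprets getCpuUsage() as a percentage in [0, 100]; -1 means unavailable. */
  static String describeCpuUsage(ResourceCalculatorPlugin plugin) {
    float cpuUsage = plugin.getCpuUsage();   // ~100.0f on a fully utilized machine
    if (cpuUsage < 0) {
      return "CPU usage unavailable";
    }
    return String.format("CPU usage: %.1f%%", cpuUsage);
  }
}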

Thanks
Devaraj


From: Radim Kolar [h...@filez.com]
Sent: Tuesday, April 17, 2012 7:53 PM
To: mapreduce-dev@hadoop.apache.org
Subject: ResourceCalculatorPlugin.getCpuUsage

Is float ResourceCalculatorPlugin.getCpuUsage(), with the javadoc comment

  /**
   * Obtain the CPU usage % of the machine. Return -1 if it is unavailable
   *
   * @return CPU usage in %
   */

supposed to return 1.0 or 100.0 for a fully CPU-utilised machine?


RE: Unable to build hadoop-mapreduce-project

2012-04-18 Thread Devaraj k
It fails on Windows with the mentioned error because Windows cannot execute the
.sh file directly. You can work around this problem to make it build.

Merge the patch below and check; it should work.

https://issues.apache.org/jira/browse/MAPREDUCE-3881 

Thanks
Devaraj


From: Apurv Verma [dapu...@gmail.com]
Sent: Thursday, April 19, 2012 7:18 AM
To: mapreduce-dev@hadoop.apache.org
Subject: Unable to build hadoop-mapreduce-project

Hello,
 I want to build hadoop-trunk with Maven and make it an Eclipse
project. I was following the instructions given on the page [0]. Here is
what I have done.


   1. Installed the protobuf library from source. protoc --version gives
   libprotoc 2.4.1
   2. I checked that libprotobuf library is in library path using ldconfig
   -p.
   3. When I cd into hadoop-mapreduce-project and do  *mvn clean install
   -P-cbuild -DskipTests* I get the following errors.

[INFO] Reactor Summary:
[INFO]
[INFO] hadoop-yarn ... SUCCESS [3.687s]
[INFO] hadoop-yarn-api ... SUCCESS [20.117s]
[INFO] hadoop-yarn-common  FAILURE [0.431s]
[INFO] hadoop-yarn-server  SKIPPED
[INFO] hadoop-yarn-server-common . SKIPPED
[INFO] hadoop-yarn-server-nodemanager  SKIPPED
[INFO] hadoop-yarn-server-web-proxy .. SKIPPED
[INFO] hadoop-yarn-server-resourcemanager  SKIPPED
[INFO] hadoop-yarn-server-tests .. SKIPPED
[INFO] hadoop-mapreduce-client ... SKIPPED
[INFO] hadoop-mapreduce-client-core .. SKIPPED
[INFO] hadoop-yarn-applications .. SKIPPED
[INFO] hadoop-yarn-applications-distributedshell . SKIPPED
[INFO] hadoop-yarn-site .. SKIPPED
[INFO] hadoop-mapreduce-client-common  SKIPPED
[INFO] hadoop-mapreduce-client-shuffle ... SKIPPED
[INFO] hadoop-mapreduce-client-app ... SKIPPED
[INFO] hadoop-mapreduce-client-hs  SKIPPED
[INFO] hadoop-mapreduce-client-jobclient . SKIPPED
[INFO] Apache Hadoop MapReduce Examples .. SKIPPED
[INFO] hadoop-mapreduce .. SKIPPED
[INFO]

[INFO] BUILD FAILURE
[INFO]

[INFO] Total time: 26.005s
[INFO] Finished at: Thu Apr 19 06:40:03 IST 2012
[INFO] Final Memory: 25M/246M
[INFO]

[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.2:exec
(generate-version) on project hadoop-yarn-common: Command execution failed.
Cannot run program "scripts/saveVersion.sh" (in directory
"/media/MyVolume/MyPrograms/Eclipse/hadoop-trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common"):
java.io.IOException: error=13, Permission denied -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions,
please read the following articles:
[ERROR] [Help 1]
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the
command
[ERROR]   mvn  -rf :hadoop-yarn-common


[0]
http://wiki.apache.org/hadoop/QwertyManiac/BuildingHadoopTrunk#Building_trunk


--
thanks and regards,

Apurv Verma
India


RE: Unable to build hadoop-mapreduce-project

2012-04-18 Thread Devaraj k
Hi Apurv,

I am sorry for my misinterpretation and wrong reply.

Thanks for sharing this.

Thanks
Devaraj


From: Apurv Verma [dapu...@gmail.com]
Sent: Thursday, April 19, 2012 10:52 AM
To: mapreduce-dev@hadoop.apache.org
Subject: Re: Unable to build hadoop-mapreduce-project

Hi,
 I have discovered the problem: the trunk I downloaded is on a separate
volume, so the script file saveVersion.sh didn't have execute permission
for me.

To anyone in future who is facing problems in building the source, please
refer to this link
http://mackiemathew.com/2011/08/28/building-apache-hadoop-from-source/

--
thanks and regards,

Apurv Verma
India





On Thu, Apr 19, 2012 at 7:18 AM, Apurv Verma  wrote:

> Hello,
>  I want to build the hadoop-trunk with maven and make it an eclipse
> project. I was following the instructions given on the page [0] Here is
> what I have done.
>
>
>1. Installed the protobuf library from source. protoc --version gives
>libprotoc 2.4.1
>2. I checked that libprotobuf library is in library path using
>ldconfig -p.
>3. When I cd into hadoop-mapreduce-project and do  *mvn clean install
>-P-cbuild -DskipTests* I get the following errors.
>
> [INFO] Reactor Summary:
> [INFO]
> [INFO] hadoop-yarn ... SUCCESS [3.687s]
> [INFO] hadoop-yarn-api ... SUCCESS
> [20.117s]
> [INFO] hadoop-yarn-common  FAILURE [0.431s]
> [INFO] hadoop-yarn-server  SKIPPED
> [INFO] hadoop-yarn-server-common . SKIPPED
> [INFO] hadoop-yarn-server-nodemanager  SKIPPED
> [INFO] hadoop-yarn-server-web-proxy .. SKIPPED
> [INFO] hadoop-yarn-server-resourcemanager  SKIPPED
> [INFO] hadoop-yarn-server-tests .. SKIPPED
> [INFO] hadoop-mapreduce-client ... SKIPPED
> [INFO] hadoop-mapreduce-client-core .. SKIPPED
> [INFO] hadoop-yarn-applications .. SKIPPED
> [INFO] hadoop-yarn-applications-distributedshell . SKIPPED
> [INFO] hadoop-yarn-site .. SKIPPED
> [INFO] hadoop-mapreduce-client-common  SKIPPED
> [INFO] hadoop-mapreduce-client-shuffle ... SKIPPED
> [INFO] hadoop-mapreduce-client-app ... SKIPPED
> [INFO] hadoop-mapreduce-client-hs  SKIPPED
> [INFO] hadoop-mapreduce-client-jobclient . SKIPPED
> [INFO] Apache Hadoop MapReduce Examples .. SKIPPED
> [INFO] hadoop-mapreduce .. SKIPPED
> [INFO]
> 
> [INFO] BUILD FAILURE
> [INFO]
> 
> [INFO] Total time: 26.005s
> [INFO] Finished at: Thu Apr 19 06:40:03 IST 2012
> [INFO] Final Memory: 25M/246M
> [INFO]
> 
> [ERROR] Failed to execute goal
> org.codehaus.mojo:exec-maven-plugin:1.2:exec (generate-version) on project
> hadoop-yarn-common: Command execution failed. Cannot run program
> "scripts/saveVersion.sh" (in directory
> "/media/MyVolume/MyPrograms/Eclipse/hadoop-trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common"):
> java.io.IOException: error=13, Permission denied -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the
> -e switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions,
> please read the following articles:
> [ERROR] [Help 1]
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the
> command
> [ERROR]   mvn  -rf :hadoop-yarn-common
>
>
> [0]
> http://wiki.apache.org/hadoop/QwertyManiac/BuildingHadoopTrunk#Building_trunk
>
>
> --
> thanks and regards,
>
> Apurv Verma
> India
>
>
>
>


RE: Heads up: branch-2.1-beta

2013-06-19 Thread Devaraj k
Hi Arun,

Is there any possibility of including YARN-41 in this release?
 
Thanks
Devaraj K

-Original Message-
From: Arun C Murthy [mailto:a...@hortonworks.com] 
Sent: 19 June 2013 12:29
To: mapreduce-dev@hadoop.apache.org; common-...@hadoop.apache.org; 
hdfs-...@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: Re: Heads up: branch-2.1-beta

Ping. Any luck?

On Jun 17, 2013, at 4:06 PM, Roman Shaposhnik  wrote:

> On Sun, Jun 16, 2013 at 5:14 PM, Arun C Murthy  wrote:
>> Roman,
>> 
>> Is there a chance you can run the tests with the full stack built against 
>> branch-2.1-beta and help us know where we are?
> 
> I will try to kick off the full build today. And deploy/test tomorrow.
> It is all pretty automated, but takes a long time. Hope the results 
> will still be useful for you guys wrt. 2.1 release.
> 
> Thanks,
> Roman.

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/




RE: [VOTE] Release Apache Hadoop 0.23.9

2013-07-08 Thread Devaraj k
+1. Downloaded the release, verified the signatures, ran examples, and they succeeded.

Thanks
Devaraj k


-Original Message-
From: Thomas Graves [mailto:tgra...@yahoo-inc.com] 
Sent: 01 July 2013 22:50
To: common-...@hadoop.apache.org
Cc: hdfs-...@hadoop.apache.org; mapreduce-dev@hadoop.apache.org; 
yarn-...@hadoop.apache.org
Subject: [VOTE] Release Apache Hadoop 0.23.9

I've created a release candidate (RC0) for hadoop-0.23.9 that I would like to 
release.

The RC is available at:
http://people.apache.org/~tgraves/hadoop-0.23.9-candidate-0/
The RC tag in svn is here:
http://svn.apache.org/viewvc/hadoop/common/tags/release-0.23.9-rc0/

The maven artifacts are available via repository.apache.org.

Please try the release and vote; the vote will run for the usual 7 days til 
July 8th.

I am +1 (binding).

thanks,
Tom Graves


RE: My Job always in 'SUBMIT' state. Stuck at Map: 0% - Reduce 0%

2013-07-11 Thread Devaraj k
It seems task initialization is not happening for the job.  Do you see any
other message or error in the JT log related to that job/job ID?

Thanks
Devaraj k

From: Sreejith Ramakrishnan [mailto:sreejith.c...@gmail.com]
Sent: 11 July 2013 12:20
To: mapreduce-dev@hadoop.apache.org
Subject: My Job always in 'SUBMIT' state. Stuck at Map: 0% - Reduce 0%

I'm making my own scheduler. Since I had errors, I rewrote the scheduler to do 
just a simple thing. When I give it a job, it allocates 1 map() and 1 reduce() 
to it. But that isn't happening. I used a barebones WordCount pgm. I'm stuck at 
map: 0% reduce 0%. And in the jobtracker logs, I see:

INFO org.apache.hadoop.mapred.JobInProgress: Cannot create task split for 

DETAILED INFO:

Here are some snippets of code. The scheduler logic is inside assignTasks():
We get a JobInProgressListener in the constructor-
this.jobQueueJobInProgressListener = new JobQueueJobInProgressListener();

In start() -
this.taskTrackerManager.addJobInProgressListener(jobQueueJobInProgressListener);

Then I just take the last element in that queue (I assume its the latest 
submitted job) and assign to incomingJob (of type JobInProgress).

Then I obtain a new map task as follows and append this task to a list which is 
the return value of the assignTasks():
Task createdMap = incomingJob.obtainNewMapTask(taskTrackerStatus, 
numTaskTrackers, taskTrackerManager.getNumberOfUniqueHosts());

I've attached the scheduler. Can any expert point out if there are any
obvious mistakes on my side?
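
Condensed, the logic described above amounts to roughly the sketch below,
written against the 0.20-era MRv1 scheduler API (method signatures changed in
later releases, where assignTasks takes a TaskTracker instead of a
TaskTrackerStatus, so treat this as an illustrative sketch rather than exact
code):

package org.apache.hadoop.mapred;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Condensed sketch of the scheduler described above (0.20-era MRv1 API).
public class EDFJobScheduler extends TaskScheduler {

  private final JobQueueJobInProgressListener jobQueueJobInProgressListener =
      new JobQueueJobInProgressListener();

  @Override
  public void start() throws IOException {
    super.start();
    // Register the listener so submitted jobs show up in the queue.
    // NOTE: the stock JobQueueTaskScheduler also registers an
    // EagerTaskInitializationListener here so that initTasks() runs for each
    // job; without job initialization, obtainNewMapTask() cannot create task
    // splits -- which matches the "Cannot create task split" symptom above.
    taskTrackerManager.addJobInProgressListener(jobQueueJobInProgressListener);
  }

  @Override
  public synchronized List<Task> assignTasks(TaskTrackerStatus taskTrackerStatus)
      throws IOException {
    List<Task> assigned = new ArrayList<Task>();
    Collection<JobInProgress> queue =
        jobQueueJobInProgressListener.getJobQueue();
    if (queue.isEmpty()) {
      return assigned;
    }

    // Take the last job in the queue (assumed here to be the latest submitted).
    JobInProgress incomingJob = null;
    for (JobInProgress job : queue) {
      incomingJob = job;
    }

    int numTaskTrackers =
        taskTrackerManager.getClusterStatus().getTaskTrackers();
    Task createdMap = incomingJob.obtainNewMapTask(taskTrackerStatus,
        numTaskTrackers, taskTrackerManager.getNumberOfUniqueHosts());
    if (createdMap != null) {
      assigned.add(createdMap);
    }
    return assigned;
  }

  @Override
  public Collection<JobInProgress> getJobs(String queueName) {
    return jobQueueJobInProgressListener.getJobQueue();
  }
}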


RE: My Job always in 'SUBMIT' state. Stuck at Map: 0% - Reduce 0%

2013-07-11 Thread Devaraj k
You don't need to explicitly call initTasks(); you can find this invocation
in the JobTracker.java class.

>*Yet the **Cannot create task split for ** is still there*.
>About errors relating to the JobID, is this significant? --- INFO
>org.apache.hadoop.mapred.JobInProgress: job_201307111248_0002: nMaps=1
>nReduces=1 max=-1

Here it shows max=-1, which means the configured (i.e. default) maximum number
of tasks for the job is -1.

I think you can see this exception in the JT log.

LOG.info(jobId + ": nMaps=" + numMapTasks + " nReduces=" + numReduceTasks +
    " max=" + maxTasks);
if (maxTasks > 0 && (numMapTasks + numReduceTasks) > maxTasks) {
  throw new IOException(
      "The number of tasks for this job " + (numMapTasks + numReduceTasks) +
      " exceeds the configured limit " + maxTasks);
}

Can you try configuring some number for
'mapred.jobtracker.maxtasks.per.job' and running the same job?
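
For example, in the JobTracker's mapred-site.xml (the value 100 below is only
an illustrative limit):

<property>
  <name>mapred.jobtracker.maxtasks.per.job</name>
  <value>100</value>
</property>

Since this is read from the JobTracker's own configuration, the JT typically
needs a restart to pick it up.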


Thanks
Devaraj k


-Original Message-
From: Sreejith Ramakrishnan [mailto:sreejith.c...@gmail.com] 
Sent: 11 July 2013 13:26
To: mapreduce-dev@hadoop.apache.org
Subject: Re: My Job always in 'SUBMIT' state. Stuck at Map: 0% - Reduce 0%

@Devaraj K

*I tried explicitly calling incomingJob.initTasks()* and it gave me a pleasant
INFO org.apache.hadoop.mapred.JobInProgress: Job
job_201307111248_0002 initialized successfully with 1 map tasks and 1 reduce 
tasks

*Yet the **Cannot create task split for ** is still there*.
About errors relating to the JobID, is this significant? --- INFO
org.apache.hadoop.mapred.JobInProgress: job_201307111248_0002: nMaps=1
nReduces=1 max=-1

*P.S.: In the log file I shared, anything with "sreejith == " was logged by me,
not system-generated.*


On Thu, Jul 11, 2013 at 1:18 PM, Sreejith Ramakrishnan < 
sreejith.c...@gmail.com> wrote:

> I've attached the relevant portion of the log.
>
> For easy testing, *I've attached the maven project* for the 
> EDFJobScheduler. Just make a JAR from it and put it in 
> $HADOOP_HOME/lib and add this property to *mapred-site.xml*:
>
> <property>
>   <name>mapred.jobtracker.taskScheduler</name>
>   <value>org.apache.hadoop.mapred.EDFJobScheduler</value>
> </property>
>
>
> On Thu, Jul 11, 2013 at 12:19 PM, Sreejith Ramakrishnan < 
> sreejith.c...@gmail.com> wrote:
>
>> I'm making my own scheduler. Since I had errors, I rewrote the 
>> scheduler to do just a simple thing. When I give it a job, it 
>> allocates 1 map() and 1
>> reduce() to it. But that isn't happening. I used a barebones WordCount pgm.
>> I'm stuck at map: 0% reduce 0%. And in the jobtracker logs, I see:
>>
>> INFO org.apache.hadoop.mapred.JobInProgress: Cannot create task split 
>> for 
>>
>> DETAILED INFO:
>>
>> Here's some snippets of code. The scheduler logic is inside the
>> assignTasks():
>>
>> We get a JobInProgressListener in the constructor- 
>> this.jobQueueJobInProgressListener = new 
>> JobQueueJobInProgressListener();
>>
>> In start() -
>>
>> this.taskTrackerManager.addJobInProgressListener(jobQueueJobInProgres
>> sListener);
>>
>> Then I just take the last element in that queue (I assume its the 
>> latest submitted job) and assign to incomingJob (of type JobInProgress).
>>
>> Then I obtain a new map task as follows and append this task to a 
>> list which is the return value of the assignTasks():
>> Task createdMap = incomingJob.obtainNewMapTask(taskTrackerStatus,
>> numTaskTrackers, taskTrackerManager.getNumberOfUniqueHosts());
>>
>>
>> I've attached the scheduler. Can any expert point out if there's any 
>> ignorant mistakes from my side?
>>
>
>


Re: My Job always in 'SUBMIT' state. Stuck at Map: 0% - Reduce 0%

2013-07-11 Thread Devaraj k
Could you debug and check the JobTracker/your scheduler code? Then you can
pinpoint the exact issue and fix it.

Thanks
Devaraj k


On Thu, Jul 11, 2013 at 7:58 PM, Sreejith Ramakrishnan <
sreejith.c...@gmail.com> wrote:

> @Devaraj K
>
> I tried setting mapred.jobtracker.maxtasks.per.job to 4 in mapred-site.xml.
> In the subsequent job, I got this in the log which clearly shows the
> configuration has taken effect:
>
> INFO org.apache.hadoop.mapred.JobInProgress: job_201307111721_0001: nMaps=1
> nReduces=1 max=4
>
> Yet, the original problem remains. It doesn't go anywhere from 0%. Btw,
> when we get the jobQueue, should we specify a name for the queue we want?
> (eg. default)
>
> INFO org.apache.hadoop.mapred.JobTracker: Job job_201307111721_0001 added
> successfully for user 'sreejith' to queue 'default'
>
> Also, is there a limit to the no.of tasks which can be obtained in a
> heartbeat?
>
>
> On Thu, Jul 11, 2013 at 2:15 PM, Devaraj k  wrote:
>
> > You don't need to explicitly call the initTasks(), you can find this
> > invocation in JobTracker.java class.
> >
> > >*Yet the **Cannot create task split for ** is still there*.
> > >About errors relating to the JobID, is this significant? --- INFO
> > >org.apache.hadoop.mapred.JobInProgress: job_201307111248_0002: nMaps=1
> > >nReduces=1 max=-1
> >
> > Here it shows max=-1, which means max configured(i.e default) tasks for
> > the Job is -1.
> >
> > I think you can see this exception from JT log.
> >
> > LOG.info(jobId + ": nMaps=" + numMapTasks + " nReduces=" + numReduceTasks
> > + " max=" + maxTasks);
> > if (maxTasks > 0 && (numMapTasks + numReduceTasks) > maxTasks) {
> >   throw new IOException(
> > "The number of tasks for this job " +
> >     (numMapTasks + numReduceTasks) +
> > " exceeds the configured limit " + maxTasks);
> > }
> >
> > Can you try configuring some number for this configuration
> > 'mapred.jobtracker.maxtasks.per.job' and run the same Job?
> >
> >
> > Thanks
> > Devaraj k
> >
> >
> > -Original Message-
> > From: Sreejith Ramakrishnan [mailto:sreejith.c...@gmail.com]
> > Sent: 11 July 2013 13:26
> > To: mapreduce-dev@hadoop.apache.org
> > Subject: Re: My Job always in 'SUBMIT' state. Stuck at Map: 0% - Reduce
> 0%
> >
> > @Devaraj K
> >
> > *I tried explicitly calling incomingJob.initTasks()*and it gave me a
> > pleasant INFO org.apache.hadoop.mapred.JobInProgress: Job
> > job_201307111248_0002 initialized successfully with 1 map tasks and 1
> > reduce tasks
> >
> > *Yet the **Cannot create task split for ** is still there*.
> > About errors relating to the JobID, is this significant? --- INFO
> > org.apache.hadoop.mapred.JobInProgress: job_201307111248_0002: nMaps=1
> > nReduces=1 max=-1
> >
> > *P.S: In the log file I shared, anything with "sreejith == " were logged
> > by me. Not system-generated*
> >
> >
> > On Thu, Jul 11, 2013 at 1:18 PM, Sreejith Ramakrishnan <
> > sreejith.c...@gmail.com> wrote:
> >
> > > I've attached the relevant portion of the log.
> > >
> > > For easy testing, *I've attached the maven project* for the
> > > EDFJobScheduler. Just make a JAR from it and put it in
> > > $HADOOP_HOME/lib and add this property to *mapred-site.xml*:
> > >
> > > <property>
> > >   <name>mapred.jobtracker.taskScheduler</name>
> > >   <value>org.apache.hadoop.mapred.EDFJobScheduler</value>
> > > </property>
> > >
> > >
> > > On Thu, Jul 11, 2013 at 12:19 PM, Sreejith Ramakrishnan <
> > > sreejith.c...@gmail.com> wrote:
> > >
> > >> I'm making my own scheduler. Since I had errors, I rewrote the
> > >> scheduler to do just a simple thing. When I give it a job, it
> > >> allocates 1 map() and 1
> > >> reduce() to it. But that isn't happening. I used a barebones WordCount
> > pgm.
> > >> I'm stuck at map: 0% reduce 0%. And in the jobtracker logs, I see:
> > >>
> > >> INFO org.apache.hadoop.mapred.JobInProgress: Cannot create task split
> > >> for 
> > >>
> > >> DETAILED INFO:
> > >>
> > >> Here's some snippets of code. The scheduler logic is inside the
> > >> assignTasks():
> > >>
> > >> We get a JobInProgressListener in the constructor-
> > >> this.jobQueueJobInProgressListener = new
> > >> JobQueueJobInProgressListener();
> > >>
> > >> In start() -
> > >>
> > >> this.taskTrackerManager.addJobInProgressListener(jobQueueJobInProgres
> > >> sListener);
> > >>
> > >> Then I just take the last element in that queue (I assume its the
> > >> latest submitted job) and assign to incomingJob (of type
> JobInProgress).
> > >>
> > >> Then I obtain a new map task as follows and append this task to a
> > >> list which is the return value of the assignTasks():
> > >> Task createdMap = incomingJob.obtainNewMapTask(taskTrackerStatus,
> > >> numTaskTrackers, taskTrackerManager.getNumberOfUniqueHosts());
> > >>
> > >>
> > >> I've attached the scheduler. Can any expert point out if there's any
> > >> ignorant mistakes from my side?
> > >>
> > >
> > >
> >
>


RE: [VOTE] Release Apache Hadoop 2.0.6-alpha (RC1)

2013-08-21 Thread Devaraj k
+1

I downloaded it and ran some examples; it works fine.


Thanks
Devaraj k


-Original Message-
From: Konstantin Boudnik [mailto:c...@apache.org] 
Sent: 16 August 2013 11:00
To: common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
mapreduce-dev@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: [VOTE] Release Apache Hadoop 2.0.6-alpha (RC1)

All,

I have created a release candidate (rc1) for hadoop-2.0.6-alpha that I would 
like to release.

This is a stabilization release that includes fixes for a couple of issues as
outlined on the security list.

The RC is available at: http://people.apache.org/~cos/hadoop-2.0.6-alpha-rc1/
The RC tag in svn is here: 
http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.6-alpha-rc1

The maven artifacts are available via repository.apache.org.

The only difference between rc0 and rc1 is ASL added to releasenotes.html and 
updated release dates in CHANGES.txt files.

Please try the release bits and vote; the vote will run for the usual 7 days.

Thanks for your voting
  Cos



RE: [VOTE] Release Apache Hadoop 2.2.0

2013-10-09 Thread Devaraj k
+1 (non-binding)

I verified it by running some MapReduce examples; it works fine.

Thanks
Devaraj k

-Original Message-
From: Arun C Murthy [mailto:a...@hortonworks.com] 
Sent: 07 October 2013 12:31
To: common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
yarn-...@hadoop.apache.org; mapreduce-dev@hadoop.apache.org
Subject: [VOTE] Release Apache Hadoop 2.2.0

Folks,

I've created a release candidate (rc0) for hadoop-2.2.0 that I would like to 
get released - this release fixes a small number of bugs and some protocol/api 
issues which should ensure they are now stable and will not change in 
hadoop-2.x.

The RC is available at: http://people.apache.org/~acmurthy/hadoop-2.2.0-rc0
The RC tag in svn is here: 
http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.2.0-rc0

The maven artifacts are available via repository.apache.org.

Please try the release and vote; the vote will run for the usual 7 days.

thanks,
Arun

P.S.: Thanks to Colin, Andrew, Daryn, Chris and others for helping nail down 
the symlinks-related issues. I'll release note the fact that we have disabled 
it in 2.2. Also, thanks to Vinod for some heavy-lifting on the YARN side in the 
last couple of weeks.





--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/





Re: [VOTE] Change by-laws on release votes: 5 days instead of 7

2014-06-24 Thread Devaraj K
+1

Thanks
Devaraj K


On Tue, Jun 24, 2014 at 2:23 PM, Arun C Murthy  wrote:

> Folks,
>
>  As discussed, I'd like to call a vote on changing our by-laws to change
> release votes from 7 days to 5.
>
>  I've attached the change to by-laws I'm proposing.
>
>  Please vote, the vote will the usual period of 7 days.
>
> thanks,
> Arun
>
> 
>
> [main]$ svn diff
> Index: author/src/documentation/content/xdocs/bylaws.xml
> ===
> --- author/src/documentation/content/xdocs/bylaws.xml   (revision 1605015)
> +++ author/src/documentation/content/xdocs/bylaws.xml   (working copy)
> @@ -344,7 +344,16 @@
>  Votes are open for a period of 7 days to allow all active
>  voters time to consider the vote. Votes relating to code
>  changes are not subject to a strict timetable but should be
> -made as timely as possible.
> +made as timely as possible.
> +
> + 
> +  Product Release - Vote Timeframe
> +   Release votes, alone, run for a period of 5 days. All other
> + votes are subject to the above timeframe of 7 days.
> + 
> +   
> +   
> +
> 
> 
>  
>



-- 


Thanks
Devaraj K


Re: [VOTE] Release Apache Hadoop 0.23.11

2014-06-24 Thread Devaraj K
+1 (non-binding)



Deployed it on a two-node cluster and ran a few M/R jobs; everything works fine.



On Thu, Jun 19, 2014 at 8:44 PM, Thomas Graves <
tgra...@yahoo-inc.com.invalid> wrote:

> Hey Everyone,
>
> There have been various bug fixes that have went into
> branch-0.23 since the 0.23.10 release.  We think its time to do a 0.23.11.
>
> This is also the last planned release off of branch-0.23 we plan on doing.
>
> The RC is available at:
> http://people.apache.org/~tgraves/hadoop-0.23.11-candidate-0/
>
>
> The RC Tag in svn is here:
> http://svn.apache.org/viewvc/hadoop/common/tags/release-0.23.11-rc0/
>
> The maven artifacts are available via repository.apache.org.
>
> Please try the release and vote; the vote will run for the usual 7 days
> til June 26th.
>
> I am +1 (binding).
>
> thanks,
> Tom Graves
>
>
>
>
>


-- 


Thanks
Devaraj K


Re: Please Remove me

2014-07-23 Thread Devaraj K
You can send a mail to mapreduce-dev-unsubscr...@hadoop.apache.org to
unsubscribe. Please refer to http://hadoop.apache.org/mailing_lists.html.


On Tue, Jul 22, 2014 at 1:17 AM, Siva Reddy  wrote:

> Thanks & Regards,
> Siva Mondeddula
> 630-267-7953
>
>
> On Mon, Jul 21, 2014 at 2:46 PM, Allen Wittenauer (JIRA) 
> wrote:
>
> >
> >  [
> >
> https://issues.apache.org/jira/browse/MAPREDUCE-555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
> > ]
> >
> > Allen Wittenauer resolved MAPREDUCE-555.
> > 
> >
> > Resolution: Fixed
> >
> > This was done elsewhere by a flag in the schedulers themselves. Closing
> as
> > fixed.
> >
> > > Provide an option to turn off priorities in jobs
> > > 
> > >
> > > Key: MAPREDUCE-555
> > > URL:
> https://issues.apache.org/jira/browse/MAPREDUCE-555
> > > Project: Hadoop Map/Reduce
> > >  Issue Type: Improvement
> > >Reporter: Hemanth Yamijala
> > >Priority: Minor
> > >
> > > The fairshare scheduler can define pools mapping to queues (as defined
> > in the capacity scheduler - HADOOP-3445). When used in this manner, one
> can
> > imagine queues set up to be used by users who come from disparate teams
> or
> > organizations (say a default queue). For such a queue, it makes sense to
> > ignore job priorities and consider the queue as strict FIFO, as it is
> > difficult to compare priorities of jobs from different users.
> >
> >
> >
> > --
> > This message was sent by Atlassian JIRA
> > (v6.2#6252)
> >
>



-- 


Thanks
Devaraj K


Re: [VOTE] Migration from subversion to git for version control

2014-08-10 Thread Devaraj K
+1 (non-binding)


On Sat, Aug 9, 2014 at 8:27 AM, Karthik Kambatla  wrote:

> I have put together this proposal based on recent discussion on this topic.
>
> Please vote on the proposal. The vote runs for 7 days.
>
>1. Migrate from subversion to git for version control.
>2. Force-push to be disabled on trunk and branch-* branches. Applying
>changes from any of trunk/branch-* to any of branch-* should be through
>"git cherry-pick -x".
>3. Force-push on feature-branches is allowed. Before pulling in a
>feature, the feature-branch should be rebased on latest trunk and the
>changes applied to trunk through "git rebase --onto" or "git cherry-pick
>".
>4. Every time a feature branch is rebased on trunk, a tag that
>identifies the state before the rebase needs to be created (e.g.
>tag_feature_JIRA-2454_2014-08-07_rebase). These tags can be deleted once
>the feature is pulled into trunk and the tags are no longer useful.
>5. The relevance/use of tags stay the same after the migration.
>
> Thanks
> Karthik
>
> PS: Per Andrew Wang, this should be a "Adoption of New Codebase" kind of
> vote and will be Lazy 2/3 majority of PMC members.
>



-- 


Thanks
Devaraj K


Re: [VOTE] Merge branch MAPREDUCE-2841 to trunk

2014-09-11 Thread Devaraj K
+1

Good performance improvement. Nice work…



On Sat, Sep 6, 2014 at 6:05 AM, Chris Douglas  wrote:

> +1
>
> The change to the existing code is very limited and the perf is
> impressive. -C
>
> On Fri, Sep 5, 2014 at 4:58 PM, Todd Lipcon  wrote:
> > Hi all,
> >
> > As I've reported recently [1], work on the MAPREDUCE-2841 branch has
> > progressed well and the development team working on it feels that it is
> > ready to be merged into trunk.
> >
> > For those not familiar with the JIRA (it's a bit lengthy to read from
> start
> > to finish!) the goal of this work is to build a native implementation of
> > the map-side sort code. The native implementation's primary advantage is
> > its speed: for example, terasort is 30% faster on a wall-clock basis and
> > 60% faster on a resource consumption basis. For clusters which make heavy
> > use of MapReduce, this is a substantial improvement to their efficiency.
> > Users may enable the feature by switching a single configuration flag,
> and
> > it will fall back to the original implementation in cases where the
> native
> > code doesn't support the configured features/types.
> >
> > The new work is entirely pluggable and off-by-default to mitigate risk.
> The
> > merge patch itself does not modify even a single line of existing code:
> all
> > necessary plug-points have already been committed to trunk for some time.
> >
> > Though we do not yet have a full +1 precommit Jenkins run on the JIRA,
> > there are only a few small nits to fix before merge, so I figured that we
> > could start the vote in parallel. Of course we will not merge until it
> has
> > a positive precommit run.
> >
> > Though this branch is a new contribution to the Apache repository, it
> > represents work done over several years by a large community of
> developers
> > including the following:
> >
> > Binglin Chang
> > Yang Dong
> > Sean Zhong
> > Manu Zhang
> > Zhongliang Zhu
> > Vincent Wang
> > Yan Dong
> > Cheng Lian
> > Xusen Yin
> > Fangqin Dai
> > Jiang Weihua
> > Gansha Wu
> > Avik Dey
> >
> > The vote will run for 7 days, ending Friday 9/12 EOD PST.
> >
> > I'll start the voting with my own +1.
> >
> > -Todd
> >
> > [1]
> >
> http://search-hadoop.com/m/09oay13EwlV/native+task+progress&subj=Native+task+branch+progress
>



-- 


Thanks
Devaraj K


Problem while running eclipse-files for Next Gen Mapreduce branch

2011-07-08 Thread Devaraj K
o
op/yarn-common/1.0-SNAPSHOT/yarn-common-1.0-SNAPSHOT.jar 
[ivy:resolve]  maven2: tried 
[ivy:resolve]
<http://repo1.maven.org/maven2/org/apache/hadoop/yarn-common/1.0-SNAPSHOT/yarn-common-1.0-SNAPSHOT.pom>
http://repo1.maven.org/maven2/org/apache/hadoop/yarn-common/1.0-SNAPSHOT/yarn-common-1.0-SNAPSHOT.pom
[ivy:resolve]   -- artifact
org.apache.hadoop#yarn-common;1.0-SNAPSHOT!yarn-common.jar:
[ivy:resolve]
<http://repo1.maven.org/maven2/org/apache/hadoop/yarn-common/1.0-SNAPSHOT/yarn-common-1.0-SNAPSHOT.jar>
http://repo1.maven.org/maven2/org/apache/hadoop/yarn-common/1.0-SNAPSHOT/yarn-common-1.0-SNAPSHOT.jar
[ivy:resolve] ::

[ivy:resolve] ::  UNRESOLVED DEPENDENCIES ::

[ivy:resolve] ::

[ivy:resolve] ::
org.apache.hadoop#yarn-server-common;1.0-SNAPSHOT: not found 
[ivy:resolve] ::
org.apache.hadoop#hadoop-mapreduce-client-core;1.0-SNAPSHOT: not found 
[ivy:resolve] :: org.apache.hadoop#yarn-common;1.0-SNAPSHOT:
not found 
[ivy:resolve] ::

[ivy:resolve] 
[ivy:resolve] 
[ivy:resolve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS 




Devaraj K 





RE: Subscription for contribution in map-reduce.

2011-08-10 Thread Devaraj K
Hi Balakrishnan,

 Please go through the link below to find the process for contributing to
Apache Hadoop projects.

http://wiki.apache.org/hadoop/HowToContribute


Devaraj K 


-Original Message-
From: Balakrishnan Prasanna [mailto:balkiprasa...@gmail.com] 
Sent: Wednesday, August 10, 2011 3:19 PM
To: mapreduce-dev@hadoop.apache.org
Subject: Subscription for contribution in map-reduce.

Hi,

I would like to contribute to MapReduce. Can I please know what the
procedure is?

Regards,
Prasanna.



[jira] [Created] (MAPREDUCE-6749) MR AM should reuse containers for Map/Reduce Tasks

2016-08-08 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-6749:


 Summary: MR AM should reuse containers for Map/Reduce Tasks
 Key: MAPREDUCE-6749
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6749
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: applicationmaster, mrv2
Reporter: Devaraj K
Assignee: Devaraj K


In continuation of MAPREDUCE-3902, the MR AM should reuse containers for
map/reduce tasks, similar to the JVM reuse feature we had in MRv1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org



[jira] [Created] (MAPREDUCE-6772) Add MR Job Configurations for Containers reuse

2016-08-31 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-6772:


 Summary: Add MR Job Configurations for Containers reuse
 Key: MAPREDUCE-6772
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6772
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
Reporter: Devaraj K
Assignee: Devaraj K


This task adds the configurations required for the MR AM container reuse feature.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org



[jira] [Created] (MAPREDUCE-6773) Implement RM Container Reuse Requestor to handle the reuse containers for resource requests

2016-08-31 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-6773:


 Summary: Implement RM Container Reuse Requestor to handle the 
reuse containers for resource requests
 Key: MAPREDUCE-6773
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6773
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
Reporter: Devaraj K


Add an RM Container Reuse Requestor which handles reusing containers against
the job's resource requests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org



[jira] [Created] (MAPREDUCE-6781) YarnChild should wait for another task when reuse is enabled

2016-09-21 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-6781:


 Summary: YarnChild should wait for another task when reuse is 
enabled
 Key: MAPREDUCE-6781
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6781
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
Reporter: Devaraj K






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org



[jira] [Created] (MAPREDUCE-6784) JobImpl state changes for containers reuse

2016-09-28 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-6784:


 Summary: JobImpl state changes for containers reuse
 Key: MAPREDUCE-6784
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6784
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
Reporter: Devaraj K
Assignee: Devaraj K


Add JobImpl state changes to support reusing containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org



[jira] [Created] (MAPREDUCE-6785) ContainerLauncherImpl support for reusing the containers

2016-09-28 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-6785:


 Summary: ContainerLauncherImpl support for reusing the containers
 Key: MAPREDUCE-6785
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6785
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
Reporter: Devaraj K
Assignee: Devaraj K






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org



[jira] [Created] (MAPREDUCE-6786) TaskAttemptImpl state changes for containers reuse

2016-09-28 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-6786:


 Summary: TaskAttemptImpl state changes for containers reuse
 Key: MAPREDUCE-6786
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6786
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
Reporter: Devaraj K
Assignee: Devaraj K


Update TaskAttemptImpl to support the reuse of containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org



[jira] [Created] (MAPREDUCE-6809) Create ContainerRequestor interface and refactor RMContainerRequestor to use it

2016-11-07 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-6809:


 Summary: Create ContainerRequestor interface and refactor 
RMContainerRequestor to use it
 Key: MAPREDUCE-6809
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6809
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
Reporter: Devaraj K
Assignee: Devaraj K


As per the discussion in MAPREDUCE-6773, create a ContainerRequestor interface 
and refactor RMContainerRequestor to use this interface.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org



[jira] [Created] (MAPREDUCE-6833) Display only corresponding Task logs for each task attempt in AM and JHS Web UI

2017-01-19 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-6833:


 Summary: Display only corresponding Task logs for each task 
attempt in AM and JHS Web UI
 Key: MAPREDUCE-6833
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6833
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
Reporter: Devaraj K
Assignee: Devaraj K


When a container gets reused for multiple tasks, logs are generated in the
same container log file. At present the task attempt log is linked to the
container log, so the UI shows the whole container log file for each attempt.
This task is to handle displaying only the corresponding task logs for each
task attempt in the AM and JHS web UIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org



[jira] [Resolved] (MAPREDUCE-2647) Memory sharing across all the Tasks in the Task Tracker to improve the job performance

2015-02-12 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-2647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-2647.
--
Resolution: Won't Fix

Closing it as Won't fix as there is no active feature development happening in 
mrv1.

> Memory sharing across all the Tasks in the Task Tracker to improve the job 
> performance
> --
>
> Key: MAPREDUCE-2647
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-2647
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>  Components: tasktracker
>Reporter: Devaraj K
>Assignee: Devaraj K
>
>   If all the tasks (maps/reduces) work with the same additional data to 
> execute the map/reduce task, each task has to load the data into memory 
> individually and read it, which is duplicated effort across all the tasks. 
> Instead of each task loading the data, it can be loaded into main memory 
> once and used to execute all the tasks.
> h5.Proposed Solution:
> 1. Provide a mechanism to load the data into shared memory and to read that 
> data from main memory.
> 2. We can provide a Java API, which internally uses the native implementation 
> to read the data from memory. All the maps/reducers can use this API for 
> reading the data from main memory. 
> h5.Example: 
>   Suppose in a map task, an IP address is the key and the task needs to get 
> the location of the IP address from a local file. In this case each map task 
> has to load the file into memory, read from it, and close it, which takes 
> time to open, read, and process every time. Instead of this, we can load the 
> file into the task tracker's memory and each task can read from that memory 
> directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (MAPREDUCE-4294) Submitting job by enabling task profiling gives IOException

2015-02-12 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-4294.
--
Resolution: Fixed

It was fixed some time ago.

> Submitting job by enabling task profiling gives IOException
> ---
>
> Key: MAPREDUCE-4294
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4294
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Nishan Shetty
>Assignee: Devaraj K
>
> {noformat}
> java.io.IOException: Server returned HTTP response code: 400 for URL: 
> http://HOST-10-18-52-224:8080/tasklog?plaintext=true&attemptid=attempt_1338370885386_0006_m_00_0&filter=profile
> at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1290)
> at org.apache.hadoop.mapreduce.Job.downloadProfile(Job.java:1421)
> at org.apache.hadoop.mapreduce.Job.printTaskEvents(Job.java:1376)
> at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1310)
> at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1247)
> at org.apache.hadoop.examples.WordCount.main(WordCount.java:84)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
> at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
> at 
> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:68)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:200)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (MAPREDUCE-6256) Removed unused private methods in o.a.h.mapreduce.Job.java

2015-02-12 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-6256:


 Summary: Removed unused private methods in o.a.h.mapreduce.Job.java
 Key: MAPREDUCE-6256
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6256
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Reporter: Devaraj K
Priority: Minor


The methods below are not used anywhere in the code and can be removed.
{code:xml}
  private void setStatus(JobStatus status)
  private boolean shouldDownloadProfile()
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (MAPREDUCE-6348) JobHistoryEventHandler could not flush every 30 secondes

2015-05-04 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K reopened MAPREDUCE-6348:
--

> JobHistoryEventHandler could not flush every 30 secondes
> 
>
> Key: MAPREDUCE-6348
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6348
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver
>Reporter: qus-jiawei
>Priority: Minor
>
> JobHistoryEventHandler could not flush the events every 30 seconds
> because the variable isTimerActive is never set to true.
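As an aside, here is a minimal sketch of the pattern described in the report (illustrative only; the flag name and 30-second interval are taken from the report, everything else is an assumption and is not the actual JobHistoryEventHandler code):

{code:java}
import java.util.Timer;
import java.util.TimerTask;

public class FlushTimerSketch {
  // Gate checked by the periodic task; the report says this flag is never set.
  private volatile boolean isTimerActive = false;
  private final Timer timer = new Timer("history-flush", true);

  void start() {
    timer.scheduleAtFixedRate(new TimerTask() {
      @Override public void run() {
        if (isTimerActive) {   // if nothing ever sets the flag, this branch
          flush();             // never runs and no periodic flush happens
        }
      }
    }, 30_000L, 30_000L);
  }

  void onEventBuffered() {
    isTimerActive = true;      // the assignment the report says is missing
  }

  void flush() {
    isTimerActive = false;
    System.out.println("flushing buffered history events");
  }

  public static void main(String[] args) throws InterruptedException {
    FlushTimerSketch sketch = new FlushTimerSketch();
    sketch.start();
    sketch.onEventBuffered();  // without this call the flush above never fires
    Thread.sleep(31_000L);
  }
}
{code}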



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (MAPREDUCE-6348) JobHistoryEventHandler could not flush every 30 seconds

2015-05-04 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-6348.
--
Resolution: Duplicate

> JobHistoryEventHandler could not flush every 30 seconds
> 
>
> Key: MAPREDUCE-6348
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6348
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver
>Reporter: qus-jiawei
>Priority: Minor
>
> JobHistoryEventHandler could not flush the events every 30 seconds
> because the variable isTimerActive is never set to true.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (MAPREDUCE-4127) Resource manager UI does not show the Job Priority

2015-05-08 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-4127.
--
Resolution: Duplicate

It will be handled as part of YARN-1963.

> Resource manager UI does not show the Job Priority
> --
>
> Key: MAPREDUCE-4127
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4127
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Nishan Shetty
>
> In the RM UI the priority of the job is not displayed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (MAPREDUCE-3847) Job in running state without any progress and no tasks running

2015-05-08 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-3847.
--
Resolution: Cannot Reproduce

Not a problem anymore, closing it. Feel free to reopen if you see this issue 
again. 

> Job in running state without any progress and no tasks running
> --
>
> Key: MAPREDUCE-3847
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3847
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobtracker, tasktracker
>Affects Versions: 0.20.1, 0.20.2
>Reporter: Abhijit Suresh Shingate
> Attachments: JTLogs.rar, TTLogs.rar
>
>
> Hi All,
> A TestDFSIO program was run with the write option, the number of files set to 250, and a file 
> size of 256 MB (the block size is also 256 MB).
> The NN went into safemode, so the JT kept trying to connect to the NN;
> later the NN switched over and recovered.
> Then the JT tried to kill some task attempts but was not able to, as the 
> tasks were not in the TaskInProgress state on the TT side.
> Also, the TaskTracker did not respond within 10 minutes, so it was declared lost, 
> marking all the tasks on it as lost too.
> Has anyone faced a similar issue?
> I could not reproduce this problem again.
> Can anyone give any directions?
> Thanks in advance.
> Regards,
> Abhijit
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (MAPREDUCE-3385) Add warning message for the overflow in reduce() of org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer

2015-06-08 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-3385.
--
Resolution: Duplicate

I am closing this as a duplicate of MAPREDUCE-3384, which will handle it. Please 
reopen if you disagree.

> Add warning message for the overflow in reduce() of 
> org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer
> 
>
> Key: MAPREDUCE-3385
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3385
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: JiangKai
>Priority: Minor
> Attachments: MAPREDUCE-3385.patch
>
>
> When we call the reduce() function of IntSumReducer, the result may overflow.
> We should log a warning message for users if an overflow occurs.
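For reference, a small self-contained demonstration of the overflow in question and one way to detect it (a sketch only, not the IntSumReducer code; the values are made up):

{code:java}
public class IntSumOverflowDemo {
  public static void main(String[] args) {
    // Two large counts are enough to wrap a 32-bit int silently.
    int[] values = {2_000_000_000, 2_000_000_000};

    int sum = 0;
    for (int v : values) {
      sum += v;                              // wraps around to a negative number
    }
    System.out.println("plain int sum:    " + sum);

    long checked = 0;
    for (int v : values) {
      checked = Math.addExact(checked, v);   // throws ArithmeticException on long overflow
    }
    System.out.println("checked long sum: " + checked);
  }
}
{code}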



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (MAPREDUCE-6371) HTML tag shown in Diagnostics field of JobHistoryPage

2015-06-08 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K reopened MAPREDUCE-6371:
--

Reopening it to mark it as a duplicate of MAPREDUCE-6382.

> HTML tag shown in Diagnostics field of JobHistoryPage
> -
>
> Key: MAPREDUCE-6371
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6371
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver
>Affects Versions: 2.7.0
>Reporter: Bibin A Chundatt
>Assignee: J.Andreina
>Priority: Minor
>  Labels: newbie
> Attachments: Failed Task.png
>
>
> Diagnostics are shown wrongly on the JobHistory page
> in case of a failed task
> {code}
> Diagnostics  : Task failed <a href="/jobhistory/task/task_1432651802642_0086_r_00">task_1432651802642_0086_r_00
> Job failed as tasks failed. failedMaps:0 failedReduces:1
> {code}
> Please check the image for details



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (MAPREDUCE-6371) HTML tag shown in Diagnostics field of JobHistoryPage

2015-06-08 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-6371.
--
Resolution: Duplicate

> HTML tag shown in Diagnostics field of JobHistoryPage
> -
>
> Key: MAPREDUCE-6371
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6371
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver
>Affects Versions: 2.7.0
>Reporter: Bibin A Chundatt
>Assignee: J.Andreina
>Priority: Minor
>  Labels: newbie
> Attachments: Failed Task.png
>
>
> Diagnostics are shown wrongly on the JobHistory page
> in case of a failed task
> {code}
> Diagnostics  : Task failed <a href="/jobhistory/task/task_1432651802642_0086_r_00">task_1432651802642_0086_r_00
> Job failed as tasks failed. failedMaps:0 failedReduces:1
> {code}
> Please check the image for details



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (MAPREDUCE-6373) The logger reports total input paths but it is referring to input files

2015-06-17 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-6373.
--
   Resolution: Fixed
Fix Version/s: 2.8.0

Thanks [~bibinchundatt].

Committed to trunk and branch-2.

> The logger reports total input paths but it is referring to input files
> ---
>
> Key: MAPREDUCE-6373
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6373
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Andi Chirita Amdocs
>Assignee: Bibin A Chundatt
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.8.0
>
> Attachments: 0002-MAPREDUCE-6373.patch, MAPREDUCE-6373.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> The log message in the FileInputFormat is misleading: 
> {code}
> 2015-04-24 13:12:30,205 [main] INFO 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to 
> process : 6
> {code} 
> There is only 1 input path and 6 input files, so the log message should be:
> {code}
> 2015-04-24 13:12:30,205 [main] INFO 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input files to 
> process : 6
> {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (MAPREDUCE-6419) JobHistoryServer doesn't sort properly based on Job ID when Job id's exceed 9999

2015-06-29 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-6419:


 Summary: JobHistoryServer doesn't sort properly based on Job ID 
when job IDs exceed 9999
 Key: MAPREDUCE-6419
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6419
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: webapps
Affects Versions: 2.7.0
Reporter: Devaraj K


When job IDs exceed 9999, the JobHistoryServer is not sorting properly based on 
the Job ID. It mixes the jobs having IDs > 9999 with other jobs, considering 
only the first four digits of the job ID. The same problem could exist for the Job 
Map tasks and Reduce tasks tables as well.


It is similar to YARN-3840, which exists for YARN.
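For illustration, the difference between the lexicographic ordering described above and a numeric ordering on the sequence part of the job ID (a sketch with made-up job IDs, not the JobHistoryServer code):

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class JobIdSortDemo {
  public static void main(String[] args) {
    List<String> ids = new ArrayList<>(Arrays.asList(
        "job_1435000000000_10001",
        "job_1435000000000_9999",
        "job_1435000000000_0007"));

    // Plain string comparison: "10001" sorts before "9999", the symptom above.
    ids.sort(Comparator.naturalOrder());
    System.out.println("lexicographic: " + ids);

    // Comparing on the numeric sequence number gives the expected order.
    ids.sort(Comparator.comparingLong(
        id -> Long.parseLong(id.substring(id.lastIndexOf('_') + 1))));
    System.out.println("numeric:       " + ids);
  }
}
{code}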



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (MAPREDUCE-6426) TestShuffleHandler fails in trunk

2015-07-07 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-6426:


 Summary: TestShuffleHandler fails in trunk
 Key: MAPREDUCE-6426
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6426
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: test
Affects Versions: 2.8.0
Reporter: Devaraj K
Assignee: zhihai xu


{code:xml}
expected:<1> but was:<0>
Stacktrace

java.lang.AssertionError: expected:<1> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.mapred.TestShuffleHandler.testGetMapOutputInfo(TestShuffleHandler.java:927)
{code}

https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2195/testReport/junit/org.apache.hadoop.mapred/TestShuffleHandler/testGetMapOutputInfo/

https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/247/testReport/org.apache.hadoop.mapred/TestShuffleHandler/testGetMapOutputInfo/




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (MAPREDUCE-4754) Job is marked as FAILED and also throws the TransitionException instead of KILLED when a KILL command is issued

2016-02-11 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-4754.
--
Resolution: Cannot Reproduce

I don't think this is still an issue, since MR has undergone many changes after 
the issue was created. I am closing it; please reopen it if you see the issue 
again.

> Job is marked as FAILED and also throws the TransitionException instead of 
> KILLED when a KILL command is issued
> -
>
> Key: MAPREDUCE-4754
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4754
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: mrv2
>Affects Versions: 2.0.1-alpha, 2.0.2-alpha
>Reporter: Nishan Shetty
>
> {code}
> org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
> JOB_TASK_COMPLETED at KILLED
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
>   at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:695)
>   at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:119)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:893)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:889)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)
>   at java.lang.Thread.run(Thread.java:662)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (MAPREDUCE-5422) [Umbrella] Fix invalid state transitions in MRAppMaster

2016-02-11 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-5422.
--
Resolution: Fixed

I am closing this umbrella JIRA as all the sub-tasks are resolved.

> [Umbrella] Fix invalid state transitions in MRAppMaster
> ---
>
> Key: MAPREDUCE-5422
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5422
> Project: Hadoop Map/Reduce
>  Issue Type: Task
>  Components: mr-am
>Affects Versions: 2.0.5-alpha
>    Reporter: Devaraj K
>Assignee: Devaraj K
>
There are multiple invalid state transitions for the state machines present in 
MRAppMaster. All of these can be handled as part of this umbrella JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (MAPREDUCE-4189) TestContainerManagerSecurity is failing

2012-04-22 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-4189:


 Summary: TestContainerManagerSecurity is failing
 Key: MAPREDUCE-4189
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4189
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: trunk
Reporter: Devaraj K
Priority: Critical


{code:xml}
---
 T E S T S
---
Running org.apache.hadoop.yarn.server.TestDiskFailures
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.519 sec
Running org.apache.hadoop.yarn.server.TestContainerManagerSecurity
Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 10.673 sec <<< 
FAILURE!

Results :

Tests in error:
  
testAuthenticatedUser(org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
  testMaliceUser(org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
  
testUnauthorizedUser(org.apache.hadoop.yarn.server.TestContainerManagerSecurity)

Tests run: 5, Failures: 0, Errors: 3, Skipped: 0

{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-4055) Job history files are not getting copied from "intermediate done" directory to "done" directory

2012-04-26 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-4055.
--

Resolution: Fixed

It is fixed and working fine now.

> Job history files are not getting copied from "intermediate done" directory 
> to "done" directory 
> 
>
> Key: MAPREDUCE-4055
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4055
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Reporter: Nishan Shetty
>
> 1. Submit a job.
> 2. After successful execution of the job, before the job history files are copied 
> from the intermediate done directory to the done directory, the NameNode got killed.
> 3. Restart the NameNode after the mapreduce.jobhistory.move.interval-ms time has 
> elapsed (the default is 3 min).
> Observe that the job history files are not copied from the intermediate done 
> directory to the done directory and that the logs are not updated with any message.
> Now submit another job and observe that the job history files are not copied from 
> the intermediate done directory to the done directory, and nothing is logged in the 
> historyserver logs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-4224) TestFifoScheduler throws org.apache.hadoop.metrics2.MetricsException

2012-05-04 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-4224:


 Summary: TestFifoScheduler throws 
org.apache.hadoop.metrics2.MetricsException 
 Key: MAPREDUCE-4224
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4224
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.2, 2.0.0, 3.0.0
Reporter: Devaraj K
Assignee: Devaraj K


{code:xml}
2012-05-04 15:18:47,180 WARN  [main] util.MBeans (MBeans.java:getMBeanName(95)) 
- Error creating MBean object name: Hadoop:service=ResourceManager,name=RMNMInfo
org.apache.hadoop.metrics2.MetricsException: 
org.apache.hadoop.metrics2.MetricsException: 
Hadoop:service=ResourceManager,name=RMNMInfo already exists!
at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newObjectName(DefaultMetricsSystem.java:117)
at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newMBeanName(DefaultMetricsSystem.java:102)
at org.apache.hadoop.metrics2.util.MBeans.getMBeanName(MBeans.java:93)
at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:55)
at 
org.apache.hadoop.yarn.server.resourcemanager.RMNMInfo.(RMNMInfo.java:59)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.init(ResourceManager.java:225)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler.setUp(TestFifoScheduler.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at 
org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
at 
org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:46)
at 
org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
Caused by: org.apache.hadoop.metrics2.MetricsException: 
Hadoop:service=ResourceManager,name=RMNMInfo already exists!
at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newObjectName(DefaultMetricsSystem.java:113)
... 30 more
{code}
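
The underlying JMX behaviour can be reproduced outside of Hadoop; below is a minimal sketch of registering the same MBean name twice, which is effectively what repeated test setUp() calls do without cleanup (the MBean class here is hypothetical, only the object name is taken from the log above):

{code:java}
import java.lang.management.ManagementFactory;
import javax.management.InstanceAlreadyExistsException;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class DuplicateMBeanDemo {
  public interface DemoMBean { int getValue(); }
  public static class Demo implements DemoMBean {
    public int getValue() { return 42; }
  }

  public static void main(String[] args) throws Exception {
    MBeanServer server = ManagementFactory.getPlatformMBeanServer();
    ObjectName name = new ObjectName("Hadoop:service=ResourceManager,name=RMNMInfo");

    server.registerMBean(new Demo(), name);        // first setUp(): succeeds
    try {
      server.registerMBean(new Demo(), name);      // second setUp() without cleanup
    } catch (InstanceAlreadyExistsException e) {
      // This is the "already exists!" condition wrapped by the MetricsException above.
      System.out.println("duplicate registration rejected: " + e);
    } finally {
      server.unregisterMBean(name);                // the kind of cleanup needed between tests
    }
  }
}
{code}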

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-4254) NM throws NPE on startup if it doesn't have permissions on NM local dirs

2012-05-14 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-4254:


 Summary: NM throws NPE on startup if it doesn't have permissions 
on NM local dirs
 Key: MAPREDUCE-4254
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4254
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2, nodemanager
Affects Versions: 2.0.0, 3.0.0
    Reporter: Devaraj K
    Assignee: Devaraj K


NM throws NPE on startup if it doesn't have permissions on NM local dirs.


{code:xml}
2012-05-14 16:32:13,468 FATAL 
org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting 
NodeManager
org.apache.hadoop.yarn.YarnException: Failed to initialize LocalizationService
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.init(ResourceLocalizationService.java:202)
at 
org.apache.hadoop.yarn.service.CompositeService.init(CompositeService.java:58)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.init(ContainerManagerImpl.java:183)
at 
org.apache.hadoop.yarn.service.CompositeService.init(CompositeService.java:58)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.init(NodeManager.java:166)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:268)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:284)
Caused by: java.io.IOException: mkdir of /mrv2/tmp/nm-local-dir/usercache failed
at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:907)
at 
org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:143)
at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:189)
at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:706)
at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:703)
at 
org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2325)
at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:703)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.init(ResourceLocalizationService.java:188)
... 6 more
2012-05-14 16:32:13,472 INFO org.apache.hadoop.yarn.service.CompositeService: 
Error stopping 
org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler.stop(NonAggregatingLogHandler.java:82)
at 
org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:99)
at 
org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:89)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.stop(ContainerManagerImpl.java:266)
at 
org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:99)
at 
org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:89)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.stop(NodeManager.java:182)
at 
org.apache.hadoop.yarn.service.CompositeService$CompositeServiceShutdownHook.run(CompositeService.java:122)
at 
org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
{code}
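
The second NPE in the log is the familiar pattern of stop() touching a field that init() never got to create; below is a minimal sketch of the defensive variant (not the actual NonAggregatingLogHandler code, just an illustration of the guard):

{code:java}
import java.util.concurrent.ScheduledThreadPoolExecutor;

public class LogHandlerSketch {
  private ScheduledThreadPoolExecutor sched;   // only created in init()

  void init() {
    sched = new ScheduledThreadPoolExecutor(1);
  }

  void stop() {
    // Without this null check, stopping a service whose init() failed
    // (e.g. because the local dirs could not be created) throws the NPE above.
    if (sched != null) {
      sched.shutdown();
    }
  }

  public static void main(String[] args) {
    LogHandlerSketch handler = new LogHandlerSketch();
    handler.stop();   // safe even though init() never ran
  }
}
{code}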


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-4255) Job History Server throws NPE if it fails to get keytab

2012-05-14 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-4255:


 Summary: Job History Server throws NPE if it fails to get keytab
 Key: MAPREDUCE-4255
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4255
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 2.0.0, 3.0.0
Reporter: Devaraj K
Assignee: Devaraj K


{code:xml}
2012-05-14 17:59:41,906 FATAL 
org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer: Error starting 
JobHistoryServer
org.apache.hadoop.yarn.YarnException: History Server Failed to login
at 
org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.init(JobHistoryServer.java:69)
at 
org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.main(JobHistoryServer.java:132)
Caused by: java.io.IOException: Running in secure mode, but config doesn't have 
a keytab
at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:258)
at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:229)
at 
org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.doSecureLogin(JobHistoryServer.java:98)
at 
org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.init(JobHistoryServer.java:67)
... 1 more
2012-05-14 17:59:41,918 INFO org.apache.hadoop.yarn.service.CompositeService: 
Error stopping org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer
java.lang.NullPointerException
at 
org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.stop(JobHistoryServer.java:115)
at 
org.apache.hadoop.yarn.service.CompositeService$CompositeServiceShutdownHook.run(CompositeService.java:122)
at 
org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
2012-05-14 17:59:41,918 INFO 
org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer: SHUTDOWN_MSG: 
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-4261) MRAppMaster throws NPE while stopping RMContainerAllocator service

2012-05-16 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-4261:


 Summary: MRAppMaster throws NPE while stopping 
RMContainerAllocator service
 Key: MAPREDUCE-4261
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4261
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mr-am
Affects Versions: 2.0.0, 3.0.0
Reporter: Devaraj K
Assignee: Devaraj K


{code:xml}
2012-05-16 18:55:54,222 INFO [Thread-1] 
org.apache.hadoop.yarn.service.CompositeService: Error stopping 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
java.lang.NullPointerException
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter.stop(MRAppMaster.java:716)
at 
org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:99)
at 
org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:89)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$MRAppMasterShutdownHook.run(MRAppMaster.java:1036)
at 
org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
2012-05-16 18:55:54,222 INFO [Thread-1] 
org.apache.hadoop.yarn.service.CompositeService: Error stopping 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
java.lang.NullPointerException
at 
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.getStat(RMContainerAllocator.java:521)
at 
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.stop(RMContainerAllocator.java:227)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter.stop(MRAppMaster.java:668)
at 
org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:99)
at 
org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:89)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$MRAppMasterShutdownHook.run(MRAppMaster.java:1036)
at 
org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-4262) NM gives wrong log message saying "Connected to ResourceManager" before trying to connect

2012-05-16 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-4262:


 Summary: NM gives wrong log message saying "Connected to 
ResourceManager" before trying to connect
 Key: MAPREDUCE-4262
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4262
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 2.0.0, 3.0.0
Reporter: Devaraj K
Assignee: Devaraj K
Priority: Minor


{code:xml}
2012-05-16 18:04:25,844 INFO 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Connected to 
ResourceManager at /xx.xx.xx.xx:8025
2012-05-16 18:04:26,870 INFO org.apache.hadoop.ipc.Client: Retrying connect to 
server: host-xx-xx-xx-xx/xx.xx.xx.xx:8025. Already tried 0 time(s).
2012-05-16 18:04:27,870 INFO org.apache.hadoop.ipc.Client: Retrying connect to 
server: host-xx-xx-xx-xx/xx.xx.xx.xx:8025. Already tried 1 time(s).
2012-05-16 18:04:28,871 INFO org.apache.hadoop.ipc.Client: Retrying connect to 
server: host-xx-xx-xx-xx/xx.xx.xx.xx:8025. Already tried 2 time(s).
2012-05-16 18:04:29,872 INFO org.apache.hadoop.ipc.Client: Retrying connect to 
server: host-xx-xx-xx-xx/xx.xx.xx.xx:8025. Already tried 3 time(s).
2012-05-16 18:04:30,873 INFO org.apache.hadoop.ipc.Client: Retrying connect to 
server: host-xx-xx-xx-xx/xx.xx.xx.xx:8025. Already tried 4 time(s).
{code}
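
As a small illustration of the ordering problem (a generic sketch with a made-up host name, not the NodeStatusUpdaterImpl code), the success message should only be logged once the connection has actually been established:

{code:java}
import java.net.InetSocketAddress;
import java.net.Socket;

public class ConnectLogOrderDemo {
  public static void main(String[] args) {
    InetSocketAddress rmAddress = new InetSocketAddress("rm-host.example.com", 8025);

    // Misleading: announcing success before any connection attempt is made.
    // System.out.println("Connected to ResourceManager at " + rmAddress);

    try (Socket socket = new Socket()) {
      socket.connect(rmAddress, 5_000);
      // Correct place for the message: after connect() returned without error.
      System.out.println("Connected to ResourceManager at " + rmAddress);
    } catch (Exception e) {
      System.out.println("Could not connect to " + rmAddress + ": " + e);
    }
  }
}
{code}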

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-4286) TestClientProtocolProviderImpls passes on failure conditions also

2012-05-27 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-4286:


 Summary: TestClientProtocolProviderImpls passes on failure 
conditions also
 Key: MAPREDUCE-4286
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4286
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Devaraj K
Assignee: Devaraj K




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-4292) Job is hanging forever when some maps are failing always

2012-06-12 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-4292.
--

Resolution: Duplicate

Dup of MAPREDUCE-3927.

> Job is hanging forever when some maps are failing always
> 
>
> Key: MAPREDUCE-4292
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4292
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Affects Versions: 2.0.0-alpha
>Reporter: Nishan Shetty
>Priority: Critical
> Attachments: syslog
>
>
> Set the property "mapred.reduce.tasks" to some value greater than zero.
> I have a job in which some maps are always failing. 
> Observations:
> 1. The map phase completes at 100% (with succeeded and failed maps). 
> 2. The reduce phase does not progress further after 32%.
> 3. After the map phase is completed, the job hangs forever.
> Expected: the job should fail after waiting for some time.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-4352) Jobs fail during resource localization when directories in the file cache reach the Unix directory limit

2012-06-19 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-4352:


 Summary: Jobs fail during resource localization when directories 
in the file cache reach the Unix directory limit
 Key: MAPREDUCE-4352
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4352
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.1-alpha, 3.0.0
Reporter: Devaraj K
Assignee: Devaraj K


If we have multiple jobs that use the distributed cache with small files, 
the directory limit is reached before the cache size limit and creating any 
new directories in the file cache fails. The jobs start failing with the exception below.


{code:xml}
java.io.IOException: mkdir of 
/tmp/nm-local-dir/usercache/root/filecache/1701886847734194975 failed
at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:909)
at 
org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:143)
at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:189)
at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:706)
at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:703)
at 
org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2325)
at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:703)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:147)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:49)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
{code}

We should have a mechanism to clean the cache files when the number of directories 
crosses a specified limit, similar to the existing cache size limit.
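
A rough sketch of the kind of cleanup suggested above, trimming the oldest cache directories once a count limit is exceeded (the limit, paths, and eviction policy are assumptions for illustration, not the NodeManager implementation):

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FileCacheDirLimitSketch {
  // Hypothetical cap on the number of entries under the file cache root.
  static final int MAX_CACHE_DIRS = 8192;

  static void enforceLimit(Path cacheRoot) throws IOException {
    List<Path> dirs;
    try (Stream<Path> stream = Files.list(cacheRoot)) {
      // Oldest entries first, by last-modified time.
      dirs = stream.filter(Files::isDirectory)
          .sorted(Comparator.comparingLong(p -> p.toFile().lastModified()))
          .collect(Collectors.toList());
    }
    // Delete the oldest directories until we are back under the limit,
    // analogous to how the cache is already trimmed by total size.
    for (int i = 0; i < dirs.size() - MAX_CACHE_DIRS; i++) {
      deleteRecursively(dirs.get(i));
    }
  }

  static void deleteRecursively(Path dir) throws IOException {
    try (Stream<Path> walk = Files.walk(dir)) {
      walk.sorted(Comparator.reverseOrder()).forEach(p -> p.toFile().delete());
    }
  }

  public static void main(String[] args) throws IOException {
    // Usage: java FileCacheDirLimitSketch /path/to/filecache
    enforceLimit(Paths.get(args[0]));
  }
}
{code}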

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-4372) Deadlock in Resource Manager between SchedulerEventDispatcher.EventProcessor and Shutdown hook manager

2012-06-26 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-4372:


 Summary: Deadlock in Resource Manager between 
SchedulerEventDispatcher.EventProcessor and Shutdown hook manager
 Key: MAPREDUCE-4372
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4372
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2, resourcemanager
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Devaraj K
Assignee: Devaraj K


Please find the attached resource manager thread dump for the issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-4379) Node Manager throws java.lang.OutOfMemoryError: Java heap space due to org.apache.hadoop.fs.LocalDirAllocator.contexts

2012-06-27 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-4379:


 Summary: Node Manager throws java.lang.OutOfMemoryError: Java heap 
space due to org.apache.hadoop.fs.LocalDirAllocator.contexts
 Key: MAPREDUCE-4379
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4379
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2, nodemanager
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Devaraj K
Assignee: Devaraj K
Priority: Critical


{code:xml}
Exception in thread "Container Monitor" java.lang.OutOfMemoryError: Java heap 
space
at java.io.BufferedReader.(BufferedReader.java:80)
at java.io.BufferedReader.(BufferedReader.java:91)
at 
org.apache.hadoop.yarn.util.ProcfsBasedProcessTree.constructProcessInfo(ProcfsBasedProcessTree.java:410)
at 
org.apache.hadoop.yarn.util.ProcfsBasedProcessTree.getProcessTree(ProcfsBasedProcessTree.java:171)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl$MonitoringThread.run(ContainersMonitorImpl.java:389)
Exception in thread "LocalizerRunner for 
container_1340690914008_10890_01_03" java.lang.OutOfMemoryError: Java heap 
space
at java.util.Arrays.copyOfRange(Arrays.java:3209)
at java.lang.String.(String.java:215)
at 
com.sun.org.apache.xerces.internal.xni.XMLString.toString(XMLString.java:185)
at 
com.sun.org.apache.xerces.internal.parsers.AbstractDOMParser.characters(AbstractDOMParser.java:1188)
at 
com.sun.org.apache.xerces.internal.xinclude.XIncludeHandler.characters(XIncludeHandler.java:1084)
at 
com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:464)
at 
com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:808)
at 
com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737)
at 
com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:119)
at 
com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:235)
at 
com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:284)
at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:180)
at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1738)
at 
org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1689)
at 
org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1635)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:722)
at 
org.apache.hadoop.conf.Configuration.setStrings(Configuration.java:1300)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.initDirs(ContainerLocalizer.java:375)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:127)
at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.startLocalizer(DefaultContainerExecutor.java:103)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:862)
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-4380) Empty Userlogs directory is getting created under logs directory

2012-06-27 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-4380:


 Summary: Empty Userlogs directory is getting created under logs 
directory
 Key: MAPREDUCE-4380
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4380
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Devaraj K


Empty Userlogs directory is getting created under logs directory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-4420) ./mapred queue -info <queue-name> -showJobs displays containers and memory as zero always

2012-07-23 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-4420.
--

Resolution: Fixed

It is fixed now and works fine.

> ./mapred queue -info <queue-name> -showJobs displays containers and memory as 
> zero always
> 
>
> Key: MAPREDUCE-4420
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4420
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Affects Versions: 2.0.0-alpha
>Reporter: Nishan Shetty
>Assignee: Devaraj K
> Attachments: screenshot-1.jpg
>
>
> ./mapred queue -info <queue-name> -showJobs displays containers and memory as 
> zero always.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-4743) Job is marked as FAILED and also throws the Transition exception instead of KILLED when a KILL command is issued

2012-10-23 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-4743:


 Summary: Job is marked as FAILED and also throws the 
Transition exception instead of KILLED when a KILL command is issued
 Key: MAPREDUCE-4743
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4743
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 2.0.2-alpha
Reporter: Devaraj K
Assignee: Devaraj K


{code:xml}
org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
T_KILL at SUCCEEDED
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl.handle(TaskImpl.java:605)
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl.handle(TaskImpl.java:89)
   at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher.handle(MRAppMaster.java:903)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher.handle(MRAppMaster.java:897)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)
at java.lang.Thread.run(Thread.java:662)
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-4744) Application Master is running forever when the TaskAttempt gets TA_KILL event at the state SUCCESS_CONTAINER_CLEANUP

2012-10-23 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-4744:


 Summary: Application Master is running forever when the 
TaskAttempt gets TA_KILL event at the state SUCCESS_CONTAINER_CLEANUP
 Key: MAPREDUCE-4744
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4744
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 2.0.2-alpha
Reporter: Devaraj K
Assignee: Devaraj K


When the Task issues a KILL event to the TaskAttempt, it expects to get an event 
back from the TaskAttempt. If the TaskAttempt is in the SUCCESS_CONTAINER_CLEANUP 
state, it ignores the event and the Task keeps waiting.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-4745) Application Master hangs when the TaskImpl gets a T_KILL event and the attempts have already completed by that time

2012-10-23 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-4745:


 Summary: Application Master hangs when the TaskImpl gets 
a T_KILL event and the attempts have already completed by that time
 Key: MAPREDUCE-4745
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4745
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Devaraj K




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-4788) Jobs are marked as FAILED even if there are no failed tasks in them

2012-11-12 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-4788:


 Summary: Jobs are marked as FAILED even if there are no failed 
tasks in them
 Key: MAPREDUCE-4788
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4788
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 2.0.2-alpha, 2.0.1-alpha
Reporter: Devaraj K
Assignee: Devaraj K


Sometimes jobs are marked as FAILED and some of the tasks in them are marked as 
KILLED. 


In MRAppMaster, when the JobFinishEvent is triggered, it waits for 5000 millis. 
If any task's final state is still unknown by that time, those tasks are marked as 
KILLED and the job state is marked as FAILED.
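
A toy model of the timing described above (this is not MRAppMaster code; the 5000 ms value comes from the description, the task IDs and the latch are made up for illustration):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class JobFinishWaitSketch {
  enum TaskState { SUCCEEDED, KILLED, UNKNOWN }

  public static void main(String[] args) throws InterruptedException {
    Map<String, TaskState> tasks = new ConcurrentHashMap<>();
    tasks.put("task_0001_m_000000", TaskState.SUCCEEDED);
    tasks.put("task_0001_m_000001", TaskState.UNKNOWN);   // final report not in yet

    // Latch that would be counted down once every task has reported its final state.
    CountDownLatch allReported = new CountDownLatch(1);

    // Bounded wait on the job-finish event, as described above.
    boolean complete = allReported.await(5000, TimeUnit.MILLISECONDS);

    if (!complete) {
      // Anything still UNKNOWN after the timeout is marked KILLED, which in turn
      // pushes the job to FAILED even though no task actually failed.
      tasks.replaceAll((id, st) -> st == TaskState.UNKNOWN ? TaskState.KILLED : st);
    }
    System.out.println(tasks);
  }
}
{code}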

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (MAPREDUCE-4106) Fix skipping tests in mapreduce

2012-12-28 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-4106.
--

Resolution: Fixed
  Assignee: Devaraj K

> Fix skipping tests in mapreduce
> ---
>
> Key: MAPREDUCE-4106
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4106
> Project: Hadoop Map/Reduce
>  Issue Type: Test
>  Components: mrv2
>Affects Versions: 2.0.0-alpha, 3.0.0
>    Reporter: Devaraj K
>Assignee: Devaraj K
>
> There are 22 tests being skipped in the hadoop-mapreduce-client-jobclient module; all 
> of these can be corrected as part of this umbrella JIRA.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (MAPREDUCE-4078) Hadoop-Mapreduce-0.23-Build - Build # 239 - Still Failing

2012-12-28 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-4078.
--

Resolution: Not A Problem

> Hadoop-Mapreduce-0.23-Build - Build # 239 - Still Failing 
> --
>
> Key: MAPREDUCE-4078
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4078
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>    Reporter: Devaraj K
>
> See https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/239/
> {code:xml}
> See https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/239/
> ###
> ## LAST 60 LINES OF THE CONSOLE 
> ###
> Started by timer
> Building remotely on hadoop2 in workspace 
> /home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-0.23-Build
> Location 'http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.23' 
> does not exist
> One or more repository locations do not exist anymore for 
> Hadoop-Mapreduce-0.23-Build, project will be disabled.
> Retrying after 10 seconds
> Location 'http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.23' 
> does not exist
> One or more repository locations do not exist anymore for 
> Hadoop-Mapreduce-0.23-Build, project will be disabled.
> Retrying after 10 seconds
> Location 'http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.23' 
> does not exist
> One or more repository locations do not exist anymore for 
> Hadoop-Mapreduce-0.23-Build, project will be disabled.
> Archiving artifacts
> Email was triggered for: Failure
> Sending email for trigger: Failure
> ###
> ## FAILED TESTS (if any) 
> ##
> No tests ran.
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (MAPREDUCE-4094) Mapreduce-trunk test cases are failing

2012-12-31 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-4094.
--

Resolution: Not A Problem

> Mapreduce-trunk test cases are failing
> --
>
> Key: MAPREDUCE-4094
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4094
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 3.0.0
>    Reporter: Devaraj K
>
> https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1037/
> {code:xml}
> Failed tests:   
> testDefaultCleanupAndAbort(org.apache.hadoop.mapred.TestJobCleanup): Done 
> file 
> "/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/test-dir/test-job-cleanup/output-0/_SUCCESS"
>  missing for job job_1333287145240_0001
>   testCustomAbort(org.apache.hadoop.mapred.TestJobCleanup): Done file 
> "/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/test-dir/test-job-cleanup/output-1/_SUCCESS"
>  missing for job job_1333287145240_0002
>   testCustomCleanup(org.apache.hadoop.mapred.TestJobCleanup): Done file 
> "/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/test-dir/test-job-cleanup/output-2/_custom_cleanup"
>  missing for job job_1333287145240_0003
>   testHeapUsageCounter(org.apache.hadoop.mapred.TestJobCounters): Job 
> job_1333287188908_0001 failed!
>   testTaskTempDir(org.apache.hadoop.mapred.TestMiniMRChildTask)
>   testTaskEnv(org.apache.hadoop.mapred.TestMiniMRChildTask): The environment 
> checker job failed.
>   testTaskOldEnv(org.apache.hadoop.mapred.TestMiniMRChildTask): The 
> environment checker job failed.
>   testJob(org.apache.hadoop.mapred.TestMiniMRClientCluster)
>   testLazyOutput(org.apache.hadoop.mapreduce.TestMapReduceLazyOutput)
>   
> testSpeculativeExecution(org.apache.hadoop.mapreduce.v2.TestSpeculativeExecution)
>   testSleepJob(org.apache.hadoop.mapreduce.v2.TestMRJobs)
>   testRandomWriter(org.apache.hadoop.mapreduce.v2.TestMRJobs)
>   testDistributedCache(org.apache.hadoop.mapreduce.v2.TestMRJobs)
>   testValidProxyUser(org.apache.hadoop.mapreduce.v2.TestMiniMRProxyUser)
> Tests in error: 
>   
> testReduceFromPartialMem(org.apache.hadoop.mapred.TestReduceFetchFromPartialMem):
>  Job failed!
>   testWithDFS(org.apache.hadoop.mapred.TestJobSysDirWithDFS): Job failed!
>   
> testReduceFromPartialMem(org.apache.hadoop.mapred.TestReduceFetchFromPartialMem):
>  Job failed!
>   testLazyOutput(org.apache.hadoop.mapred.TestLazyOutput): Job failed!
>   testFailingMapper(org.apache.hadoop.mapreduce.v2.TestMRJobs): 0
>   org.apache.hadoop.mapreduce.v2.TestMROldApiJobs: Failed to Start 
> org.apache.hadoop.mapreduce.v2.TestMROldApiJobs
>   org.apache.hadoop.mapreduce.v2.TestUberAM: Failed to Start 
> org.apache.hadoop.mapreduce.v2.TestMRJobs
>   
> testDefaultCleanupAndAbort(org.apache.hadoop.mapreduce.lib.output.TestJobOutputCommitter):
>  Failed to Start org.apache.hadoop.mapred.MiniMRCluster
>   
> testCustomAbort(org.apache.hadoop.mapreduce.lib.output.TestJobOutputCommitter):
>  Failed to Start org.apache.hadoop.mapred.MiniMRCluster
>   
> testCustomCleanup(org.apache.hadoop.mapreduce.lib.output.TestJobOutputCommitter):
>  Failed to Start org.apache.hadoop.mapred.MiniMRCluster
>   testChild(org.apache.hadoop.mapreduce.TestChild): Failed to Start 
> org.apache.hadoop.mapred.MiniMRCluster
> Tests run: 404, Failures: 14, Errors: 11, Skipped: 22
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (MAPREDUCE-4077) Issues while using Hadoop Streaming job

2013-01-17 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-4077.
--

Resolution: Not A Problem

> Issues while using Hadoop Streaming job
> ---
>
> Key: MAPREDUCE-4077
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4077
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Affects Versions: 0.23.1
>    Reporter: Devaraj K
>Assignee: Devaraj K
>
> When we use the -file option, it says it is deprecated and to use -files.
> {code:xml}
> linux-f330:/home/devaraj/hadoop/trunk/hadoop-0.24.0-SNAPSHOT/bin # ./hadoop 
> jar 
> ../share/hadoop/tools/lib/hadoop-streaming-0.24.0-SNAPSHOT.jar -input /hadoop 
> -output /test/output/3 -mapper cat -reducer wc -file hadoop
> 02/02/19 10:55:51 WARN streaming.StreamJob: -file option is deprecated, 
> please use generic option -files instead.
> {code}
> But when we use -files option, it says unrecognized option.
> {code:xml}
> linux-f330:/home/devaraj/hadoop/trunk/hadoop-0.24.0-SNAPSHOT/bin # ./hadoop 
> jar 
> ../share/hadoop/tools/lib/hadoop-streaming-0.24.0-SNAPSHOT.jar -input /hadoop 
> -output 
> /test/output/3 -mapper cat -reducer wc -files hadoop
> 02/02/19 10:56:42 ERROR streaming.StreamJob: Unrecognized option: -files
> Usage: $HADOOP_PREFIX/bin/hadoop jar hadoop-streaming.jar [options]
> {code}
> When we use -archives option,  it says unrecognized option.
> {code:xml}
> linux-f330:/home/devaraj/hadoop/trunk/hadoop-0.24.0-SNAPSHOT/bin # ./hadoop 
> jar 
> ../share/hadoop/tools/lib/hadoop-streaming-0.24.0-SNAPSHOT.jar -input /hadoop 
> -output 
> /test/output/3 -mapper cat -reducer wc -archives testarchive.rar
> 02/02/19 11:05:43 ERROR streaming.StreamJob: Unrecognized option: -archives
> Usage: $HADOOP_PREFIX/bin/hadoop jar hadoop-streaming.jar [options]
> {code}
> But the options help displays the usage of -archives.
> {code:xml}
> linux-f330:/home/devaraj/hadoop/trunk/hadoop-0.24.0-SNAPSHOT/bin # ./hadoop 
> jar 
> ../share/hadoop/tools/lib/hadoop-streaming-0.24.0-SNAPSHOT.jar -input /hadoop 
> -output 
> /test/output/3 -mapper cat -reducer wc -archives testarchive.rar
> 02/02/19 11:05:43 ERROR streaming.StreamJob: Unrecognized option: -archives
> Usage: $HADOOP_PREFIX/bin/hadoop jar hadoop-streaming.jar [options]
> ..
> ..
> -libjars specify comma separated jar files 
> to include in the classpath.
> -archives specify comma separated 
> archives to be unarchived on the compute machines.
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (MAPREDUCE-3232) AM should handle reboot from Resource Manager

2013-01-17 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-3232.
--

Resolution: Not A Problem

> AM should  handle reboot from Resource Manager
> --
>
> Key: MAPREDUCE-3232
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3232
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 0.24.0
>    Reporter: Devaraj K
>    Assignee: Devaraj K
>
> When the RM doesn't have last response id for app attempt(or the request 
> response id is less than the last response id), RM sends reboot response but 
> AM doesn't handle this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-4950) MR App Master failing to write the history due to AvroTypeException

2013-01-20 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-4950:


 Summary: MR App Master failing to write the history due to 
AvroTypeException
 Key: MAPREDUCE-4950
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4950
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobhistoryserver, mr-am
Reporter: Devaraj K
Priority: Critical


{code:xml}
2013-01-19 19:31:27,269 INFO [AsyncDispatcher event handler] 
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In stop, writing 
event MAP_ATTEMPT_STARTED
2013-01-19 19:31:27,269 INFO [AsyncDispatcher event handler] 
org.apache.hadoop.yarn.service.CompositeService: Error stopping 
JobHistoryEventHandler
org.apache.avro.AvroTypeException: Attempt to process a enum when a array-start 
was expected.
at org.apache.avro.io.parsing.Parser.advance(Parser.java:93)
at org.apache.avro.io.JsonEncoder.writeEnum(JsonEncoder.java:210)
at 
org.apache.avro.specific.SpecificDatumWriter.writeEnum(SpecificDatumWriter.java:54)
at 
org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:66)
at 
org.apache.avro.generic.GenericDatumWriter.writeRecord(GenericDatumWriter.java:104)
at 
org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:65)
at 
org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:57)
at 
org.apache.hadoop.mapreduce.jobhistory.EventWriter.write(EventWriter.java:66)
at 
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$MetaInfo.writeEvent(JobHistoryEventHandler.java:825)
at 
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:517)
at 
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.stop(JobHistoryEventHandler.java:346)
at 
org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:99)
at 
org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:89)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler.handle(MRAppMaster.java:445)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler.handle(MRAppMaster.java:406)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)
at java.lang.Thread.run(Thread.java:662)
2013-01-19 19:31:27,271 INFO [AsyncDispatcher event handler] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory 
hdfs://hacluster /root/staging-dir/root/.staging/job_1358603069474_0135
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-5174) History server gives wrong no of total maps and reducers

2013-04-23 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-5174:


 Summary: History server gives wrong no of total maps and reducers
 Key: MAPREDUCE-5174
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5174
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobhistoryserver
Affects Versions: 2.0.3-alpha
Reporter: Devaraj K


History server displays the wrong number of total maps and total reducers in the JHS UI job 
listing for non-succeeded jobs, and the REST API (i.e. http:///ws/v1/history/mapreduce/jobs/) also gives wrong data for total maps and 
reducers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-5195) Job doesn't utilize all the cluster resources with CombineFileInputFormat

2013-04-30 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-5195:


 Summary: Job doesn't utilize all the cluster resources with 
CombineFileInputFormat
 Key: MAPREDUCE-5195
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5195
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: job submission
Affects Versions: 2.0.4-alpha, 0.23.7, 2.0.5-beta
Reporter: Devaraj K


If we enable delay scheduling in the resource manager and the submitted job 
uses CombineFileInputFormat, then the job is not able to use all the 
available resources in the cluster: most of the maps run on only some nodes 
while the resources of the other nodes stay idle.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (MAPREDUCE-4743) Job is marked as FAILED and also throws the Transition exception instead of KILLED when a KILL command is issued

2013-06-07 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-4743.
--

Resolution: Fixed

It is fixed in the latest code.

> Job is marked as FAILED and also throws the Transition exception instead 
> of KILLED when a KILL command is issued
> 
>
> Key: MAPREDUCE-4743
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4743
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Affects Versions: 2.0.2-alpha
>Reporter: Devaraj K
>Assignee: Devaraj K
>
> {code:xml}
> org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
> T_KILL at SUCCEEDED
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
>   at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl.handle(TaskImpl.java:605)
>   at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl.handle(TaskImpl.java:89)
>at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher.handle(MRAppMaster.java:903)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher.handle(MRAppMaster.java:897)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)
>   at java.lang.Thread.run(Thread.java:662)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-5335) Rename Job Tracker terminology in ShuffleSchedulerImpl

2013-06-20 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-5335:


 Summary: Rename Job Tracker terminology in ShuffleSchedulerImpl
 Key: MAPREDUCE-5335
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5335
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: applicationmaster
Affects Versions: 2.0.4-alpha, 3.0.0
Reporter: Devaraj K
Assignee: Devaraj K


{code:xml}
2013-06-17 17:27:30,134 INFO [fetcher#2] 
org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler: Reporting fetch 
failure for attempt_1371467533091_0005_m_10_0 to jobtracker.
{code}



{code:title=ShuffleSchedulerImpl.java|borderStyle=solid}
  // Notify the JobTracker
  // after every read error, if 'reportReadErrorImmediately' is true or
  // after every 'maxFetchFailuresBeforeReporting' failures
  private void checkAndInformJobTracker(
  int failures, TaskAttemptID mapId, boolean readError,
  boolean connectExcpt) {
if (connectExcpt || (reportReadErrorImmediately && readError)
|| ((failures % maxFetchFailuresBeforeReporting) == 0)) {
  LOG.info("Reporting fetch failure for " + mapId + " to jobtracker.");
  status.addFetchFailedMap((org.apache.hadoop.mapred.TaskAttemptID) mapId);
}
  }

 {code}
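
A minimal sketch of the kind of rename this issue asks for; the method name, comment and log message below are illustrative only, not the actual patch:

{code:title=ShuffleSchedulerImpl.java (illustrative sketch)|borderStyle=solid}
  // Notify the MR ApplicationMaster (there is no JobTracker in MRv2)
  // after every read error, if 'reportReadErrorImmediately' is true or
  // after every 'maxFetchFailuresBeforeReporting' failures
  private void checkAndInformMRAppMaster(
      int failures, TaskAttemptID mapId, boolean readError,
      boolean connectExcpt) {
    if (connectExcpt || (reportReadErrorImmediately && readError)
        || ((failures % maxFetchFailuresBeforeReporting) == 0)) {
      LOG.info("Reporting fetch failure for " + mapId + " to the MR ApplicationMaster.");
      status.addFetchFailedMap((org.apache.hadoop.mapred.TaskAttemptID) mapId);
    }
  }
{code}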

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-5358) MRAppMaster throws invalid transitions for JobImpl

2013-06-27 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-5358:


 Summary: MRAppMaster throws invalid transitions for JobImpl
 Key: MAPREDUCE-5358
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5358
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mr-am
Affects Versions: 2.0.1-alpha
Reporter: Devaraj K
Assignee: Devaraj K


{code:xml}
2013-06-26 11:39:50,128 ERROR [AsyncDispatcher event handler] 
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Can't handle this event at 
current state
org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
JOB_TASK_ATTEMPT_COMPLETED at SUCCEEDED
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:720)
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:119)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:962)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:958)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:128)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:77)
at java.lang.Thread.run(Thread.java:662)
{code}

{code:xml}
2013-06-26 11:39:50,129 ERROR [AsyncDispatcher event handler] 
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Can't handle this event at 
current state
org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
JOB_MAP_TASK_RESCHEDULED at SUCCEEDED
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:720)
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:119)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:962)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:958)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:128)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:77)
at java.lang.Thread.run(Thread.java:662)
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-5409) MRAppMaster throws InvalidStateTransitonException: Invalid event: TA_TOO_MANY_FETCH_FAILURE at KILLED for TaskAttemptImpl

2013-07-23 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-5409:


 Summary: MRAppMaster throws InvalidStateTransitonException: 
Invalid event: TA_TOO_MANY_FETCH_FAILURE at KILLED for TaskAttemptImpl
 Key: MAPREDUCE-5409
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5409
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 2.0.5-alpha
Reporter: Devaraj K
Assignee: Devaraj K


{code:xml}
2013-07-23 12:28:05,217 INFO [IPC Server handler 29 on 50796] 
org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt 
attempt_1374560536158_0003_m_40_0 is : 0.0
2013-07-23 12:28:05,221 INFO [AsyncDispatcher event handler] 
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Too many fetch-failures 
for output of task attempt: attempt_1374560536158_0003_m_07_0 ... raising 
fetch failure to map
2013-07-23 12:28:05,222 ERROR [AsyncDispatcher event handler] 
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Can't handle this 
event at current state for attempt_1374560536158_0003_m_07_0
org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
TA_TOO_MANY_FETCH_FAILURE at KILLED
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:445)
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:1032)
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:143)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher.handle(MRAppMaster.java:1123)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher.handle(MRAppMaster.java:1115)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:130)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:77)
at java.lang.Thread.run(Thread.java:662)
2013-07-23 12:28:05,249 INFO [AsyncDispatcher event handler] 
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1374560536158_0003Job 
Transitioned from RUNNING to ERROR
2013-07-23 12:28:05,338 INFO [IPC Server handler 16 on 50796] 
org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from 
attempt_1374560536158_0003_m_40_0
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-5422) [Umbrella] Fix invalid state transitions in MRAppMaster

2013-07-26 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-5422:


 Summary: [Umbrella] Fix invalid state transitions in MRAppMaster
 Key: MAPREDUCE-5422
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5422
 Project: Hadoop Map/Reduce
  Issue Type: Task
  Components: mr-am
Affects Versions: 2.0.5-alpha
Reporter: Devaraj K
Assignee: Devaraj K


There are multiple invalid state transitions for the state machines present in 
MRAppMaster. All of these can be handled as part of this umbrella JIRA.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (MAPREDUCE-5435) Nodemanager stops working automatically

2013-07-30 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-5435.
--

Resolution: Invalid

> Nodemanager stops working automatically
> ---
>
> Key: MAPREDUCE-5435
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5435
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Vishket
>
> Hi Everyone, 
> I have been trying to set up a 10-node Hadoop cluster (Hadoop 2.0.5-alpha). 
> I've completed editing all the configuration files and am now trying to run 
> the daemons. All the processes work fine apart from the nodemanager. The 
> nodemanager runs fine on the slave; however, on the master, it only runs for 
> 10-15 sec and then stops. The same thing happens if I run the start command 
> again.
> Any suggestions?
> Thanks in advance!

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (MAPREDUCE-5447) When a job state is ERROR, total map and reduce tasks are displayed as 0 in the JHS home page, while navigating inside the respective job page displays the correct total

2013-08-05 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-5447.
--

Resolution: Duplicate

> When a job state is ERROR, total map and reduce tasks are displayed as 0 in 
> the JHS home page, while navigating inside the respective job page displays the 
> correct total. 
> 
>
> Key: MAPREDUCE-5447
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5447
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver
>Affects Versions: 2.0.5-alpha
>Reporter: J.Andreina
>Priority: Minor
> Attachments: JHSHomePage.png, JobPage.png
>
>
> When a job state is in error, total map and reduce tasks are displayed as 0 
> in the JHS home page, while navigating inside the respective job page displays 
> the correct total.
> JHS Homepage:
> 
> Total Map and Reduce Task are 0
> Job Page:
> =
> Total Map task-2
> Total Reduce task -1
> successful Map Attempts -2

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-5622) MRAppMaster doesn't assign all allocated NODE_LOCAL containers to node-local maps

2013-11-12 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-5622:


 Summary: MRAppMaster doesn't assign all allocated NODE_LOCAL 
containers to node-local maps
 Key: MAPREDUCE-5622
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5622
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: applicationmaster
Affects Versions: 2.2.0
Reporter: Devaraj K


MRAppMaster requests containers for all the splits in order to launch map tasks, and 
the RM gives NODE_LOCAL containers for all of them if available. Even when the RM gives 
all containers as NODE_LOCAL, the MR AM may assign these NODE_LOCAL containers to 
non-local maps.
\\
\\
|node1|split1|split2| |split4|
|node2| |split2|split3| |
|node3|split1|split2|split3|split4|
|node4|split1| |split3|split4|
\\
Consider this instance: assume the RM has given one NODE_LOCAL container on each 
node to process all the splits as local maps. While assigning, if the AM gives the 
node1 container to split1, the node2 container to split3 and the node3 container to 
split4, then the node4 container can be given only to split2, which is not local to node4. 
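
A minimal, self-contained sketch (illustrative only, not the MRAppMaster code) of a "local-first" assignment pass over the example table above; class and variable names are hypothetical:

{code:title=LocalFirstAssignmentSketch.java|borderStyle=solid}
import java.util.*;

public class LocalFirstAssignmentSketch {
  public static void main(String[] args) {
    // Hypothetical data mirroring the table above: node -> splits with a local replica.
    Map<String, List<String>> localSplits = new LinkedHashMap<>();
    localSplits.put("node1", Arrays.asList("split1", "split2", "split4"));
    localSplits.put("node2", Arrays.asList("split2", "split3"));
    localSplits.put("node3", Arrays.asList("split1", "split2", "split3", "split4"));
    localSplits.put("node4", Arrays.asList("split1", "split3", "split4"));

    Set<String> unassigned =
        new LinkedHashSet<>(Arrays.asList("split1", "split2", "split3", "split4"));
    Map<String, String> assignment = new LinkedHashMap<>();

    // Pass 1: give each node's NODE_LOCAL container to a split that is local to it.
    for (Map.Entry<String, List<String>> e : localSplits.entrySet()) {
      for (String split : e.getValue()) {
        if (unassigned.remove(split)) {
          assignment.put(e.getKey(), split);
          break;
        }
      }
    }
    // Pass 2: only a node that got no local work takes a remaining (non-local) split.
    for (String node : localSplits.keySet()) {
      if (!assignment.containsKey(node) && !unassigned.isEmpty()) {
        Iterator<String> it = unassigned.iterator();
        assignment.put(node, it.next());
        it.remove();
      }
    }
    System.out.println(assignment); // every map stays node-local for this example
  }
}
{code}

A complete fix would need a proper matching step, but even this local-first pass avoids handing the node4 container to the non-local split2 in the example above.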



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (MAPREDUCE-4841) Application Master Retries fail due to FileNotFoundException

2014-07-22 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-4841.
--

Resolution: Fixed

It has been fixed by MAPREDUCE-5476; closing this as a duplicate of MAPREDUCE-5476.

> Application Master Retries fail due to FileNotFoundException
> 
>
> Key: MAPREDUCE-4841
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4841
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: applicationmaster
>Affects Versions: 2.0.1-alpha
>    Reporter: Devaraj K
>Assignee: Jason Lowe
>Priority: Critical
>
> Application attempt 1 deletes the job-related files, and these are not 
> present in HDFS for the following retries.
> {code:xml}
> Application application_1353724754961_0001 failed 4 times due to AM Container 
> for appattempt_1353724754961_0001_04 exited with exitCode: -1000 due to: 
> RemoteTrace: java.io.FileNotFoundException: File does not exist: 
> hdfs://hacluster:8020/tmp/hadoop-yarn/staging/mapred/.staging/job_1353724754961_0001/appTokens
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:752)
>  at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:88) at 
> org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:49) at 
> org.apache.hadoop.yarn.util.FSDownload$1.run(FSDownload.java:157) at 
> org.apache.hadoop.yarn.util.FSDownload$1.run(FSDownload.java:155) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:396) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>  at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:153) at 
> org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:49) at 
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at 
> java.util.concurrent.FutureTask.run(FutureTask.java:138) at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at 
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at 
> java.util.concurrent.FutureTask.run(FutureTask.java:138) at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>  at java.lang.Thread.run(Thread.java:662) at LocalTrace: 
> org.apache.hadoop.yarn.exceptions.impl.pb.YarnRemoteExceptionPBImpl: File 
> does not exist: 
> hdfs://hacluster:8020/tmp/hadoop-yarn/staging/mapred/.staging/job_1353724754961_0001/appTokens
>  at 
> org.apache.hadoop.yarn.server.nodemanager.api.protocolrecords.impl.pb.LocalResourceStatusPBImpl.convertFromProtoFormat(LocalResourceStatusPBImpl.java:217)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.api.protocolrecords.impl.pb.LocalResourceStatusPBImpl.getException(LocalResourceStatusPBImpl.java:147)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.update(ResourceLocalizationService.java:822)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker.processHeartbeat(ResourceLocalizationService.java:492)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.heartbeat(ResourceLocalizationService.java:221)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.service.LocalizationProtocolPBServiceImpl.heartbeat(LocalizationProtocolPBServiceImpl.java:46)
>  at 
> org.apache.hadoop.yarn.proto.LocalizationProtocol$LocalizationProtocolService$2.callBlockingMethod(LocalizationProtocol.java:57)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:924) at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692) at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:396) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686) .Failing this 
> attempt.. Failing the application. 
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (MAPREDUCE-4841) Application Master Retries fail due to FileNotFoundException

2014-07-22 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-4841.
--

Resolution: Duplicate

> Application Master Retries fail due to FileNotFoundException
> 
>
> Key: MAPREDUCE-4841
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4841
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: applicationmaster
>Affects Versions: 2.0.1-alpha
>    Reporter: Devaraj K
>Assignee: Jason Lowe
>Priority: Critical
>
> Application attempt 1 deletes the job-related files, and these are not 
> present in HDFS for the following retries.
> {code:xml}
> Application application_1353724754961_0001 failed 4 times due to AM Container 
> for appattempt_1353724754961_0001_04 exited with exitCode: -1000 due to: 
> RemoteTrace: java.io.FileNotFoundException: File does not exist: 
> hdfs://hacluster:8020/tmp/hadoop-yarn/staging/mapred/.staging/job_1353724754961_0001/appTokens
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:752)
>  at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:88) at 
> org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:49) at 
> org.apache.hadoop.yarn.util.FSDownload$1.run(FSDownload.java:157) at 
> org.apache.hadoop.yarn.util.FSDownload$1.run(FSDownload.java:155) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:396) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>  at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:153) at 
> org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:49) at 
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at 
> java.util.concurrent.FutureTask.run(FutureTask.java:138) at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at 
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at 
> java.util.concurrent.FutureTask.run(FutureTask.java:138) at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>  at java.lang.Thread.run(Thread.java:662) at LocalTrace: 
> org.apache.hadoop.yarn.exceptions.impl.pb.YarnRemoteExceptionPBImpl: File 
> does not exist: 
> hdfs://hacluster:8020/tmp/hadoop-yarn/staging/mapred/.staging/job_1353724754961_0001/appTokens
>  at 
> org.apache.hadoop.yarn.server.nodemanager.api.protocolrecords.impl.pb.LocalResourceStatusPBImpl.convertFromProtoFormat(LocalResourceStatusPBImpl.java:217)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.api.protocolrecords.impl.pb.LocalResourceStatusPBImpl.getException(LocalResourceStatusPBImpl.java:147)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.update(ResourceLocalizationService.java:822)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker.processHeartbeat(ResourceLocalizationService.java:492)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.heartbeat(ResourceLocalizationService.java:221)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.service.LocalizationProtocolPBServiceImpl.heartbeat(LocalizationProtocolPBServiceImpl.java:46)
>  at 
> org.apache.hadoop.yarn.proto.LocalizationProtocol$LocalizationProtocolService$2.callBlockingMethod(LocalizationProtocol.java:57)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:924) at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692) at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:396) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686) .Failing this 
> attempt.. Failing the application. 
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Reopened] (MAPREDUCE-4841) Application Master Retries fail due to FileNotFoundException

2014-07-22 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K reopened MAPREDUCE-4841:
--


> Application Master Retries fail due to FileNotFoundException
> 
>
> Key: MAPREDUCE-4841
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4841
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: applicationmaster
>Affects Versions: 2.0.1-alpha
>    Reporter: Devaraj K
>Assignee: Jason Lowe
>Priority: Critical
>
> Application attempt 1 deletes the job-related files, and these are not 
> present in HDFS for the following retries.
> {code:xml}
> Application application_1353724754961_0001 failed 4 times due to AM Container 
> for appattempt_1353724754961_0001_04 exited with exitCode: -1000 due to: 
> RemoteTrace: java.io.FileNotFoundException: File does not exist: 
> hdfs://hacluster:8020/tmp/hadoop-yarn/staging/mapred/.staging/job_1353724754961_0001/appTokens
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:752)
>  at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:88) at 
> org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:49) at 
> org.apache.hadoop.yarn.util.FSDownload$1.run(FSDownload.java:157) at 
> org.apache.hadoop.yarn.util.FSDownload$1.run(FSDownload.java:155) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:396) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>  at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:153) at 
> org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:49) at 
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at 
> java.util.concurrent.FutureTask.run(FutureTask.java:138) at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at 
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at 
> java.util.concurrent.FutureTask.run(FutureTask.java:138) at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>  at java.lang.Thread.run(Thread.java:662) at LocalTrace: 
> org.apache.hadoop.yarn.exceptions.impl.pb.YarnRemoteExceptionPBImpl: File 
> does not exist: 
> hdfs://hacluster:8020/tmp/hadoop-yarn/staging/mapred/.staging/job_1353724754961_0001/appTokens
>  at 
> org.apache.hadoop.yarn.server.nodemanager.api.protocolrecords.impl.pb.LocalResourceStatusPBImpl.convertFromProtoFormat(LocalResourceStatusPBImpl.java:217)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.api.protocolrecords.impl.pb.LocalResourceStatusPBImpl.getException(LocalResourceStatusPBImpl.java:147)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.update(ResourceLocalizationService.java:822)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker.processHeartbeat(ResourceLocalizationService.java:492)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.heartbeat(ResourceLocalizationService.java:221)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.service.LocalizationProtocolPBServiceImpl.heartbeat(LocalizationProtocolPBServiceImpl.java:46)
>  at 
> org.apache.hadoop.yarn.proto.LocalizationProtocol$LocalizationProtocolService$2.callBlockingMethod(LocalizationProtocol.java:57)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:924) at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692) at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:396) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686) .Failing this 
> attempt.. Failing the application. 
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (MAPREDUCE-6046) Change the class name for logs in RMCommunicator.java

2014-08-22 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-6046:


 Summary: Change the class name for logs in RMCommunicator.java
 Key: MAPREDUCE-6046
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6046
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mr-am
Affects Versions: 3.0.0
Reporter: Devaraj K
Priority: Minor


It is a little confusing when the logs are generated with the class name 
RMContainerAllocator for code that is not present in RMContainerAllocator.java.
{code:title=RMCommunicator.java|borderStyle=solid}

  private static final Log LOG = LogFactory.getLog(RMContainerAllocator.class);

{code}

In the above, RMContainerAllocator.class needs to be changed to 
RMCommunicator.class.
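
The change being suggested is just the class literal passed to LogFactory.getLog; the corrected declaration would look like this:

{code:title=RMCommunicator.java|borderStyle=solid}
  private static final Log LOG = LogFactory.getLog(RMCommunicator.class);
{code}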




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] Created: (MAPREDUCE-2261) Fair Multiple Task Assignment Scheduler (Assigning multiple tasks per heart beat)

2011-01-13 Thread Devaraj K (JIRA)
Fair Multiple Task Assignment Scheduler (Assigning multiple tasks per heart 
beat)
-

 Key: MAPREDUCE-2261
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2261
 Project: Hadoop Map/Reduce
  Issue Type: New Feature
Affects Versions: 0.21.0
Reporter: Devaraj K


  Functionality-wise, the Fair Multiple Task Assignment Scheduler behaves 
the same way except for the assignment of tasks. Instead of assigning a single task 
per heartbeat, it checks all the jobs for any local or non-local task that can be 
launched (see the sketch after the list below).

The Fair Multiple Task Assignment Scheduler has the advantage of assigning multiple 
tasks per heartbeat interval, depending upon the slots available on the Task 
Tracker and the configured number of parallel tasks to be executed on a Task 
Tracker at any point of time. The advantages are as follows:

a) Parallel execution allows tasks to be submitted and processed in parallel, 
independent of the status of other tasks.
b) More tasks are assigned in a heartbeat interval, and consequently the 
multitasking capability increases.
c) With multi-task assignment, Task Tracker efficiency is increased.
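
A minimal, self-contained sketch of the multi-assignment idea (hypothetical names, not Hadoop scheduler APIs): keep obtaining tasks from the jobs until the tracker's free slots are exhausted or no job has a launchable task left.

{code:title=MultiTaskPerHeartbeatSketch.java|borderStyle=solid}
import java.util.*;

public class MultiTaskPerHeartbeatSketch {

  /** A job hands out a task id to launch on the given tracker, or null if it has none. */
  interface Job { String obtainTask(String tracker); }

  static List<String> assignTasks(String tracker, int freeSlots, List<Job> jobs) {
    List<String> assigned = new ArrayList<>();
    boolean progress = true;
    // Instead of returning after one task, keep going while slots remain and jobs make progress.
    while (assigned.size() < freeSlots && progress) {
      progress = false;
      for (Job job : jobs) {
        if (assigned.size() >= freeSlots) break;
        String task = job.obtainTask(tracker);   // may be local or non-local
        if (task != null) { assigned.add(task); progress = true; }
      }
    }
    return assigned;
  }

  public static void main(String[] args) {
    // Toy jobs that each hand out two tasks and then run dry.
    List<Job> jobs = new ArrayList<>();
    for (int j = 0; j < 2; j++) {
      final int jobId = j;
      final int[] remaining = {2};
      jobs.add(tracker -> remaining[0]-- > 0 ? "job" + jobId + "_task" + remaining[0] : null);
    }
    System.out.println(assignTasks("tt1", 3, jobs)); // fills 3 of the 4 available tasks
  }
}
{code}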


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (MAPREDUCE-2297) All map reduce tasks are failing if we give an invalid jar file path for the Job

2011-02-03 Thread Devaraj K (JIRA)
All map reduce tasks are failing if we give an invalid jar file path for the Job
-

 Key: MAPREDUCE-2297
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2297
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: tasktracker
Affects Versions: 0.20.2
Reporter: Devaraj K
Priority: Minor


This can be reproduced by giving an invalid jar file for the Job, or it can be 
reproduced from Hive.


In hive-default.xml:
{code:xml}
<property>
  <name>hive.aux.jars.path</name>
  <description>Provided for adding auxillaryjarsPath</description>
</property>
{code}


If we configure an invalid path for the jar file, it makes all map reduce tasks 
fail, even for jobs that do not depend on this jar file, and it gives the 
below exception.
{code:xml} 
hive> select * from a join b on(a.b=b.c);
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=
In order to set a constant number of reducers:
set mapred.reduce.tasks=
java.io.FileNotFoundException: File does not exist: /user/root/grade.jar
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:495)
at 
org.apache.hadoop.filecache.DistributedCache.getTimestamp(DistributedCache.java:509)
at 
org.apache.hadoop.mapred.JobClient.configureCommandLineOptions(JobClient.java:651)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:783)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:752)
at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:698)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:107)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:64)

{code} 

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Created: (MAPREDUCE-2299) In the Job Tracker UI -> task details page, machine and logs links are navigating to page not found error.

2011-02-03 Thread Devaraj K (JIRA)
In the Job Tracker UI -> task details page, machine and logs links are 
navigating to page not found error.
--

 Key: MAPREDUCE-2299
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2299
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobtracker
Affects Versions: 0.20.2
Reporter: Devaraj K


1. In the page showing all task attempts, clicking the machine link 
navigates to a page-not-found error.
2. In the page showing all task attempts, clicking the Task logs link 
navigates to a page-not-found error.


-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Created: (MAPREDUCE-2305) The status in the UI can be updated periodically without refreshing the page.

2011-02-07 Thread Devaraj K (JIRA)
The status in the UI can be updated periodically without refreshing the page.
-

 Key: MAPREDUCE-2305
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2305
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobtracker, tasktracker
Affects Versions: 0.23.0
Reporter: Devaraj K
Priority: Minor


If we want to know the latest information in the Job Tracker and Task Trackers, we 
need to refresh the corresponding pages every time. Instead, the pages can be 
reloaded (refreshed) periodically.

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Created: (MAPREDUCE-2306) It is better to give Job start time instead of JobTracker start time in the JobTracker UI->Home.

2011-02-07 Thread Devaraj K (JIRA)
It is better to give Job start time instead of JobTracker start time in the 
JobTracker UI->Home.


 Key: MAPREDUCE-2306
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2306
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobtracker
Affects Versions: 0.20.3
Reporter: Devaraj K
Priority: Minor


In the log details, for each job it shows the JobTracker start time, and the 
JobTracker UI->Home also shows the start time of the JobTracker. It is better to 
show the Job start time instead of the JobTracker start time on the Job Tracker UI home page.


-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Created: (MAPREDUCE-2307) Exception thrown in Jobtracker logs, when the Scheduler configured is FairScheduler.

2011-02-08 Thread Devaraj K (JIRA)
Exception thrown in Jobtracker logs, when the Scheduler configured is 
FairScheduler.


 Key: MAPREDUCE-2307
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2307
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: contrib/fair-share
Affects Versions: 0.23.0
Reporter: Devaraj K
Priority: Minor


If we try to start the job tracker with the fair scheduler using the default 
configuration, it gives the below exception.


{code:xml} 
2010-07-03 10:18:27,142 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 
on 9001: starting
2010-07-03 10:18:27,143 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 
on 9001: starting
2010-07-03 10:18:27,143 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 
on 9001: starting
2010-07-03 10:18:27,143 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 
on 9001: starting
2010-07-03 10:18:27,143 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 
on 9001: starting
2010-07-03 10:18:27,143 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 
on 9001: starting
2010-07-03 10:18:27,143 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 
on 9001: starting
2010-07-03 10:18:27,143 INFO org.apache.hadoop.mapred.JobTracker: Starting 
RUNNING
2010-07-03 10:18:27,143 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 
on 9001: starting
2010-07-03 10:18:28,037 INFO org.apache.hadoop.net.NetworkTopology: Adding a 
new node: /default-rack/linux172.site
2010-07-03 10:18:28,090 INFO org.apache.hadoop.net.NetworkTopology: Adding a 
new node: /default-rack/linux177.site
2010-07-03 10:18:40,074 ERROR org.apache.hadoop.mapred.PoolManager: Failed to 
reload allocations file - will use existing allocations.
java.lang.NullPointerException
at java.io.File.<init>(File.java:222)
at 
org.apache.hadoop.mapred.PoolManager.reloadAllocsIfNecessary(PoolManager.java:127)
at org.apache.hadoop.mapred.FairScheduler.assignTasks(FairScheduler.java:234)
at org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:2785)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:513)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:984)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:980)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:978)
{code} 

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Created: (MAPREDUCE-2309) While querying the Job Status from the command line, if we give a wrong status name then there is no warning or response.

2011-02-08 Thread Devaraj K (JIRA)
While querying the Job Status from the command line, if we give a wrong status 
name then there is no warning or response.


 Key: MAPREDUCE-2309
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2309
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobtracker
Affects Versions: 0.23.0
Reporter: Devaraj K
Priority: Minor


If we try to get the jobs information by giving a wrong status name from the 
command-line interface, it does not give any warning or response.


-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Created: (MAPREDUCE-2310) If we stop Job Tracker, Task Tracker is also getting stopped.

2011-02-08 Thread Devaraj K (JIRA)
If we stop Job Tracker, Task Tracker is also getting stopped.
-

 Key: MAPREDUCE-2310
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2310
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: tasktracker
Affects Versions: 0.20.2
Reporter: Devaraj K
Priority: Minor


If we execute stop-jobtracker.sh to stop the Job Tracker, the Task Tracker also 
stops.

This is not applicable to the latest (trunk) code because the stop-jobtracker.sh 
file is no longer present.


-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Resolved: (MAPREDUCE-2306) It is better to give Job start time instead of JobTracker start time in the JobTracker UI->Home.

2011-02-09 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-2306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-2306.
--

Resolution: Duplicate

Yes, it is a duplicate of MAPREDUCE-1541.

> It is better to give Job start time instead of JobTracker start time in the 
> JobTracker UI->Home.
> 
>
> Key: MAPREDUCE-2306
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-2306
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobtracker
>Affects Versions: 0.20.3
>Reporter: Devaraj K
>Priority: Minor
>
> In the log details, for each job it shows the JobTracker start time, and the 
> JobTracker UI->Home also shows the start time of the JobTracker. It is 
> better to show the Job start time instead of the JobTracker start time on the 
> Job Tracker UI home page.

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Created: (MAPREDUCE-2317) HadoopArchives throwing NullPointerException while creating hadoop archives (.har files)

2011-02-10 Thread Devaraj K (JIRA)
HadoopArchives throwing NullPointerException while creating hadoop archives 
(.har files)


 Key: MAPREDUCE-2317
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2317
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: harchive
Affects Versions: 0.20.1, 0.23.0
 Environment: windows
Reporter: Devaraj K
Priority: Minor


While we are trying to run the hadoop archive tool on Windows in this way, it 
gives the below exception.

java org.apache.hadoop.tools.HadoopArchives -archiveName temp.har D:/test/in 
E:/temp

{code:xml} 

java.lang.NullPointerException
at 
org.apache.hadoop.tools.HadoopArchives.writeTopLevelDirs(HadoopArchives.java:320)
at 
org.apache.hadoop.tools.HadoopArchives.archive(HadoopArchives.java:386)
at org.apache.hadoop.tools.HadoopArchives.run(HadoopArchives.java:725)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at org.apache.hadoop.tools.HadoopArchives.main(HadoopArchives.java:739)

{code} 

I see the code flow to handle this feature on Windows also: 

{code:title=Path.java|borderStyle=solid}

/** Returns the parent of a path or null if at root. */
  public Path getParent() {
String path = uri.getPath();
int lastSlash = path.lastIndexOf('/');
int start = hasWindowsDrive(path, true) ? 3 : 0;
if ((path.length() == start) ||   // empty path
(lastSlash == start && path.length() == start+1)) { // at root
  return null;
}
String parent;
if (lastSlash==-1) {
  parent = CUR_DIR;
} else {
  int end = hasWindowsDrive(path, true) ? 3 : 0;
  parent = path.substring(0, lastSlash==end?end+1:lastSlash);
}
return new Path(uri.getScheme(), uri.getAuthority(), parent);
  }

{code} 

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Created: (MAPREDUCE-2387) Potential Resource leak in IOUtils.java

2011-03-15 Thread Devaraj K (JIRA)
Potential Resource leak in IOUtils.java
---

 Key: MAPREDUCE-2387
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2387
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Devaraj K




{code:title=IOUtils.java|borderStyle=solid}


try {
  copyBytes(in, out, buffSize);
} finally {
  if(close) {
out.close();
in.close();
  }
}
 
{code} 

In the above code, if any exception is thrown from the out.close() statement, the 
in.close() statement will not execute and the input stream will not be closed.
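
A minimal sketch (not the actual Hadoop change) of one way to guarantee that both streams are closed even when the first close() throws, by nesting the closes in their own try/finally:

{code:title=CloseBothSketch.java|borderStyle=solid}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public final class CloseBothSketch {
  /** Closes out first, but still closes in even if out.close() throws. */
  static void closeBoth(InputStream in, OutputStream out) throws IOException {
    try {
      out.close();
    } finally {
      in.close();   // runs even when out.close() throws
    }
  }
}
{code}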


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] Resolved: (MAPREDUCE-2387) Potential Resource leak in IOUtils.java

2011-03-16 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-2387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-2387.
--

Resolution: Duplicate

It is related to Hadoop Common. The issue has been raised as HADOOP-7194.

> Potential Resource leak in IOUtils.java
> ---
>
> Key: MAPREDUCE-2387
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-2387
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 0.23.0
>    Reporter: Devaraj K
>
> {code:title=IOUtils.java|borderStyle=solid}
> try {
>   copyBytes(in, out, buffSize);
> } finally {
>   if(close) {
> out.close();
> in.close();
>   }
> }
>  
> {code} 
> In the above code, if any exception is thrown from the out.close() statement, the 
> in.close() statement will not execute and the input stream will not be closed.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-2463) Job History files are not moving to done folder when job history location is hdfs location

2011-05-01 Thread Devaraj K (JIRA)
Job History files are not moving to done folder when job history location is 
hdfs location
--

 Key: MAPREDUCE-2463
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2463
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobtracker
Affects Versions: 0.23.0
Reporter: Devaraj K
Assignee: Devaraj K


If "mapreduce.jobtracker.jobhistory.location" is configured as HDFS location 
then either during initialization of Job Tracker (while moving old job history 
files) or after completion of the job, history files are not moving to done and 
giving following exception.

{code:xml} 
2011-04-29 15:27:27,813 ERROR 
org.apache.hadoop.mapreduce.jobhistory.JobHistory: Unable to move history file 
to DONE folder.
java.lang.IllegalArgumentException: Wrong FS: 
hdfs://10.18.52.146:9000/history/job_201104291518_0001_root, expected: file:///
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:402)
at 
org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:58)
at 
org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:419)
at 
org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:294)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:215)
at 
org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1516)
at 
org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1492)
at 
org.apache.hadoop.fs.FileSystem.moveFromLocalFile(FileSystem.java:1482)
at 
org.apache.hadoop.mapreduce.jobhistory.JobHistory.moveToDoneNow(JobHistory.java:348)
at 
org.apache.hadoop.mapreduce.jobhistory.JobHistory.access$200(JobHistory.java:61)
at 
org.apache.hadoop.mapreduce.jobhistory.JobHistory$1.run(JobHistory.java:439)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)

{code} 
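
The "Wrong FS ... expected: file:///" message shows the copy being attempted through a FileSystem bound to the local file system. Purely as an illustrative sketch (not the actual patch), the usual pattern is to resolve the FileSystem from the configured history path itself; the path value below is hypothetical:

{code:title=HistoryFsSketch.java|borderStyle=solid}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HistoryFsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical history location mirroring the description above.
    Path done = new Path("hdfs://10.18.52.146:9000/history/done");
    // Resolve the FileSystem that owns this path (HDFS here), rather than the default/local FS.
    FileSystem doneFs = done.getFileSystem(conf);
    System.out.println("done folder lives on: " + doneFs.getUri());
  }
}
{code}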


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-2464) NullPointerException is coming in the job tracker when job tracker resends the previous heartbeat response.

2011-05-02 Thread Devaraj K (JIRA)
NullPointerException is coming in the job tracker when job tracker resends the 
previous heartbeat response.
---

 Key: MAPREDUCE-2464
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2464
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobtracker
Affects Versions: 0.23.0
Reporter: Devaraj K
Assignee: Devaraj K


Over the network, the heartbeat response sent by the Job Tracker to the Task Tracker 
might get lost. When the Task Tracker sends the old heartbeat again to the Job Tracker, 
the Job Tracker detects it as a duplicate, ignores it, and resends the old 
heartbeat response which it maintains in a map. If the response contains a 
LaunchTaskAction for a MapTask, then a NullPointerException is thrown.

{code:xml} 
2011-05-02 16:01:53,148 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 
on 9001 caught: java.lang.NullPointerException
at org.apache.hadoop.mapred.MapTask.write(MapTask.java:140)
at 
org.apache.hadoop.mapred.LaunchTaskAction.write(LaunchTaskAction.java:48)
at 
org.apache.hadoop.mapred.HeartbeatResponse.write(HeartbeatResponse.java:91)
at 
org.apache.hadoop.io.ObjectWritable.writeObject(ObjectWritable.java:163)
at org.apache.hadoop.io.ObjectWritable.write(ObjectWritable.java:74)
at org.apache.hadoop.ipc.Server.setupResponse(Server.java:1561)
at org.apache.hadoop.ipc.Server.access$2800(Server.java:96)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1433)
{code} 

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-2481) SocketTimeoutException is coming in the reduce task when the data size is very high

2011-05-10 Thread Devaraj K (JIRA)
SocketTimeoutException is coming in the reduce task when the data size is very 
high
---

 Key: MAPREDUCE-2481
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2481
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: task
Affects Versions: 0.20.2
Reporter: Devaraj K


A SocketTimeoutException occurs when the reduce task tries to read the 
MapTaskCompletionEventsUpdate object from the task tracker. It is able to read the 
reset, TaskCompletionEvent.taskId and TaskCompletionEvent.idWithinJob properties, 
but it fails while reading the isMap property of TaskCompletionEvent, which 
is of type boolean. This exception occurs multiple times.

{code}
2011-04-20 15:58:03,037 FATAL mapred.TaskTracker 
(TaskTracker.java:fatalError(2812)) - Task: 
attempt_201104201115_0010_r_02_0 - Killed : java.io.IOException:  Tried for 
the max ping retries On TimeOut :1
at org.apache.hadoop.ipc.Client.checkPingRetries(Client.java:1342)
at 
org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:402)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
at java.io.DataInputStream.readBoolean(DataInputStream.java:225)
at 
org.apache.hadoop.mapred.TaskCompletionEvent.readFields(TaskCompletionEvent.java:230)
at 
org.apache.hadoop.mapred.MapTaskCompletionEventsUpdate.readFields(MapTaskCompletionEventsUpdate.java:64)
at 
org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:245)
at 
org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:69)
at 
org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:698)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:593)
Caused by: java.net.SocketTimeoutException: 6 millis timeout while waiting 
for channel to be ready for read. ch : 
java.nio.channels.SocketChannel[connected local=/127.0.0.1:45798 
remote=/127.0.0.1:35419]
at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:165)
at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
at java.io.FilterInputStream.read(FilterInputStream.java:116)
at 
org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:397)
... 9 more
{code}

 org.mortbay.jetty.EofException also appears many times in the logs, as 
described in MAPREDUCE-5.
{code}
2011-04-20 15:57:20,748 WARN  mapred.TaskTracker (TaskTracker.java:doGet(3164)) 
- getMapOutput(attempt_201104201115_0010_m_38_0,4) failed :
org.mortbay.jetty.EofException
at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:787)
{code}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-2512) wait(5000) and notify() mechanism can be implemented instead of sleep(5000) in reduce task when there are no copies in progress and no new copies to schedule

2011-05-18 Thread Devaraj K (JIRA)
wait(5000) and notify() mechanism can be implemented instead of sleep(5000) in 
reduce task when there are no copies in progress and no new copies to schedule
-

 Key: MAPREDUCE-2512
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2512
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: task
Affects Versions: 0.20.2
Reporter: Devaraj K
Assignee: Devaraj K


{code:title=ReduceTask.java|borderStyle=solid} 
   try { 
if (numInFlight == 0 && numScheduled == 0) { 
  // we should indicate progress as we don't want TT to think 
  // we're stuck and kill us 
  reporter.progress(); 
  Thread.sleep(5000); 
} 
  } catch (InterruptedException e) { } // IGNORE 
{code} 

Here, if we have no copies in flight and we can't schedule anything new, the thread 
sleeps for 5000 millis. Instead of sleeping for 5000 millis, this thread can wait() 
with a timeout, and GetMapEventsThread can notify() it if new map completion events 
arrive earlier than the 5000 millis timeout. 
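
A minimal sketch of the wait()/notify() idea described above; class, field and method names are hypothetical, not the actual ReduceTask code:

{code:title=FetchWaitSketch.java|borderStyle=solid}
public class FetchWaitSketch {
  private final Object copyLock = new Object();

  /** Copier side: instead of Thread.sleep(5000), wait up to 5000 ms but allow early wake-up. */
  void waitForNewMapEvents() throws InterruptedException {
    synchronized (copyLock) {
      copyLock.wait(5000);          // returns early if notified
    }
  }

  /** GetMapEventsThread side: wake the waiting copier as soon as new completion events arrive. */
  void onNewMapCompletionEvents() {
    synchronized (copyLock) {
      copyLock.notifyAll();
    }
  }
}
{code}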
 


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-2513) Improvements in Job Tracker UI for monitoring and managing the map reduce jobs

2011-05-18 Thread Devaraj K (JIRA)
Improvements in Job Tracker UI for monitoring and managing the map reduce jobs
--

 Key: MAPREDUCE-2513
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2513
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Reporter: Devaraj K
Assignee: Devaraj K


It will be helpful to the user/administrator if we provide the following features 
in the Job Tracker UI:

1. User wants to get the list of jobs submitted with a given state
2. User wants to kill a scheduled/running job through the UI
3. User wants to change the priority of a job
4. User wants to get the scheduling information of jobs
5. User wants to delete the logs of jobs and tasks
6. Only authorized users should be able to perform the above operations through the 
task management UI
7. Pagination support for the jobs listing




--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

