[jira] [Created] (MAPREDUCE-2910) Allow empty MapOutputFile segments

2011-08-29 Thread Binglin Chang (JIRA)
Allow empty MapOutputFile segments
--

 Key: MAPREDUCE-2910
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2910
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: task, tasktracker
Affects Versions: 0.20.2, 0.23.0
Reporter: Binglin Chang
Priority: Minor
 Fix For: 0.23.0


As the scale of clusters and jobs gets larger, we see a lot of empty partitions 
in the MapOutputFile, due to large reduce counts or partition skew. When map 
output compression is enabled, empty map output partitions get larger and incur 
additional compressor/decompressor initialization overhead. 
This can be optimized by allowing empty MapOutputFile segments, where the 
rawLength and partLength of the IndexRecord both equal 0. Corresponding support 
needs to be added to the IFile reader, the IFile writer, and the reduce shuffle 
copier.


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-2909) Docs for remaining records in yarn-api

2011-08-29 Thread Arun C Murthy (JIRA)
Docs for remaining records in yarn-api
--

 Key: MAPREDUCE-2909
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2909
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
  Components: documentation, mrv2
Affects Versions: 0.23.0
Reporter: Arun C Murthy
 Fix For: 0.23.0


MAPREDUCE-2891, MAPREDUCE-2897 & MAPREDUCE-2898 added javadocs for core 
protocols (i.e. AMRMProtocol, ClientRMProtocol & ContainerManager). Most 
'records' also have javadocs - this jira is to track the remaining ones.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-2908) Fix findbugs warnings in Map Reduce.

2011-08-29 Thread Mahadev konar (JIRA)
Fix findbugs warnings in Map Reduce.


 Key: MAPREDUCE-2908
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2908
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Mahadev konar
Assignee: Mahadev konar
Priority: Critical
 Fix For: 0.23.0


In the current trunk/0.23 codebase there are 5 findbugs warnings which cause 
the precommit CI builds to -1 the patches.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




RE: Hadoop-Mapreduce-22-branch - Build # 65 - Still Failing

2011-08-29 Thread Rottinghuis, Joep
Looks like a problem with the Hudson slave setup and/or the job.
Could somebody from infra confirm that ant is installed in the expected 
location (/homes/hudson/tools/ant/latest/bin/ant)? Otherwise, an explicit ant 
version should be selected in the job setup:

+ /homes/hudson/tools/ant/latest/bin/ant -Dversion=2011-08-29_22-31-26 
-Declipse.home=/homes/hudson/tools/eclipse/latest 
-Dfindbugs.home=/homes/hudson/tools/findbugs/latest 
-Dforrest.home=/homes/hudson/tools/forrest/latest -Dcompile.c++=true 
-Dcompile.native=true clean create-c++-configure tar findbugs
nightly/hudsonBuildHadoopNightly.sh: 1: /homes/hudson/tools/ant/latest/bin/ant: 
not found
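A small pre-flight check along these lines would surface the problem explicitly instead of the bare `not found` from the shell. This is only a sketch: `check_ant` and its messages are hypothetical, and only the ant path comes from the console output above.

```shell
# Hypothetical pre-flight check for the nightly job: verify the ant
# launcher exists and is executable before the build step invokes it.
check_ant() {
  # $1: expected path to the ant launcher script
  if [ -x "$1" ]; then
    echo "ok: $1"
  else
    echo "missing: $1" >&2
    return 127   # same exit status the failing build reported
  fi
}

# The path the job expects, per the console output above:
check_ant /homes/hudson/tools/ant/latest/bin/ant \
  || echo "select an explicit ant installation in the job configuration" >&2
```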

Thanks,

Joep

-Original Message-
From: Apache Jenkins Server [mailto:jenk...@builds.apache.org] 
Sent: Monday, August 29, 2011 3:32 PM
To: mapreduce-dev@hadoop.apache.org
Subject: Hadoop-Mapreduce-22-branch - Build # 65 - Still Failing

See https://builds.apache.org/job/Hadoop-Mapreduce-22-branch/65/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 2276 lines...]
A src/examples/org/apache/hadoop/examples/pi/package.html
A src/examples/org/apache/hadoop/examples/AggregateWordCount.java
A src/examples/org/apache/hadoop/examples/Grep.java
A bin
A bin/mapred-config.sh
AUbin/stop-mapred.sh
AUbin/mapred
AUbin/start-mapred.sh
A build-utils.xml
A build.xml
 U.
Fetching 
'https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.22/common/src/test/bin'
 at -1 into 
'/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/src/test/bin'
Checking out http://svn.apache.org/repos/asf/hadoop/nightly
AUsrc/test/bin/test-patch.sh
A src/test/bin/test-patch.properties
At revision 1163045
At revision 1163045
AUtar-munge
A commitBuild.sh
A hudsonEnv.sh
A jenkinsSetup
A jenkinsSetup/installTools.sh
AUhudsonBuildHadoopNightly.sh
A buildMR-279Branch.sh
AUhudsonBuildHadoopPatch.sh
AUhudsonBuildHadoopRelease.sh
AUprocessHadoopPatchEmailRemote.sh
AUhudsonPatchQueueAdmin.sh
AUprocessHadoopPatchEmail.sh
A README.txt
A test-patch
A test-patch/test-patch.sh
At revision 1163045
no change for 
http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.22/mapreduce 
since the previous build
no change for 
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.22/common/src/test/bin
 since the previous build
No emails were triggered.
[Hadoop-Mapreduce-22-branch] $ /bin/bash /tmp/hudson6415918233542791813.sh
+ ulimit -n 1024
+ export ANT_OPTS=-Xmx2048m
+ pwd
+ TRUNK=/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk
+ cd /home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk
+ /homes/hudson/tools/ant/latest/bin/ant -Dversion=2011-08-29_22-31-26 
-Declipse.home=/homes/hudson/tools/eclipse/latest 
-Dfindbugs.home=/homes/hudson/tools/findbugs/latest 
-Dforrest.home=/homes/hudson/tools/forrest/latest -Dcompile.c++=true 
-Dcompile.native=true clean create-c++-configure tar findbugs
nightly/hudsonBuildHadoopNightly.sh: 1: /homes/hudson/tools/ant/latest/bin/ant: 
not found
+ RESULT=127
+ [ 127 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 127
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.


Hadoop-Mapreduce-22-branch - Build # 65 - Still Failing

2011-08-29 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Mapreduce-22-branch/65/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 2276 lines...]
A src/examples/org/apache/hadoop/examples/pi/package.html
A src/examples/org/apache/hadoop/examples/AggregateWordCount.java
A src/examples/org/apache/hadoop/examples/Grep.java
A bin
A bin/mapred-config.sh
AUbin/stop-mapred.sh
AUbin/mapred
AUbin/start-mapred.sh
A build-utils.xml
A build.xml
 U.
Fetching 
'https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.22/common/src/test/bin'
 at -1 into 
'/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/src/test/bin'
Checking out http://svn.apache.org/repos/asf/hadoop/nightly
AUsrc/test/bin/test-patch.sh
A src/test/bin/test-patch.properties
At revision 1163045
At revision 1163045
AUtar-munge
A commitBuild.sh
A hudsonEnv.sh
A jenkinsSetup
A jenkinsSetup/installTools.sh
AUhudsonBuildHadoopNightly.sh
A buildMR-279Branch.sh
AUhudsonBuildHadoopPatch.sh
AUhudsonBuildHadoopRelease.sh
AUprocessHadoopPatchEmailRemote.sh
AUhudsonPatchQueueAdmin.sh
AUprocessHadoopPatchEmail.sh
A README.txt
A test-patch
A test-patch/test-patch.sh
At revision 1163045
no change for 
http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.22/mapreduce 
since the previous build
no change for 
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.22/common/src/test/bin
 since the previous build
No emails were triggered.
[Hadoop-Mapreduce-22-branch] $ /bin/bash /tmp/hudson6415918233542791813.sh
+ ulimit -n 1024
+ export ANT_OPTS=-Xmx2048m
+ pwd
+ TRUNK=/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk
+ cd /home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk
+ /homes/hudson/tools/ant/latest/bin/ant -Dversion=2011-08-29_22-31-26 
-Declipse.home=/homes/hudson/tools/eclipse/latest 
-Dfindbugs.home=/homes/hudson/tools/findbugs/latest 
-Dforrest.home=/homes/hudson/tools/forrest/latest -Dcompile.c++=true 
-Dcompile.native=true clean create-c++-configure tar findbugs
nightly/hudsonBuildHadoopNightly.sh: 1: /homes/hudson/tools/ant/latest/bin/ant: 
not found
+ RESULT=127
+ [ 127 != 0 ]
+ echo Build Failed: remaining tests not run
Build Failed: remaining tests not run
+ exit 127
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.


[jira] [Created] (MAPREDUCE-2907) ResourceManager logs filled with [INFO] debug messages from org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue

2011-08-29 Thread Ravi Prakash (JIRA)
ResourceManager logs filled with [INFO] debug messages from 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue


 Key: MAPREDUCE-2907
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2907
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 0.23.0
Reporter: Ravi Prakash
 Fix For: 0.23.0


I see a lot of INFO messages (probably used for debugging during development).

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-2906) FindBugs OutOfMemoryError

2011-08-29 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-2906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis resolved MAPREDUCE-2906.
-

Resolution: Not A Problem

Even though the same entry does not exist in the hadoop-common build on 
0.20-security, or in hadoop-hdfs on 0.22, the mapreduce build actually already 
has a parameter for this: findbugs.heap.size
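For reference, the override can be passed straight on the ant command line. The wrapper below is only a sketch: `FINDBUGS_HEAP` is a hypothetical variable name, and only the `findbugs.heap.size` property name comes from the mapreduce build (ant itself is not invoked here).

```shell
# Sketch: choose a FindBugs heap, defaulting to the 512M the build uses
# by default, and pass it through as an ant -D property.
FINDBUGS_HEAP="${FINDBUGS_HEAP:-512m}"
FINDBUGS_ARGS="-Dfindbugs.heap.size=${FINDBUGS_HEAP}"
# In a real run this would be: ant findbugs ${FINDBUGS_ARGS}
echo "would run: ant findbugs ${FINDBUGS_ARGS}"
```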

> FindBugs OutOfMemoryError
> -
>
> Key: MAPREDUCE-2906
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-2906
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 0.22.0
> Environment: FindBugs 1.3.9, ant 1.8.2, RHEL6, Jenkins 1.414 in 
> Tomcat 7.0.14, Sun Java HotSpot(TM) 64-Bit Server VM
>Reporter: Joep Rottinghuis
>Assignee: Joep Rottinghuis
>
> When running the findbugs target from Jenkins, I get an OutOfMemoryError.
> The "effort" in FindBugs is set to Max, which ends up using a lot of memory 
> to go through all the classes. The jvmargs value passed to FindBugs is 
> hardcoded to a 512 MB max.
> We can leave the default at 512M, as long as we pass this as an ant parameter 
> which can be overridden in individual cases through -D, or in the 
> build.properties file (in either the basedir or the user's home directory).

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-2906) FindBugs OutOfMemoryError

2011-08-29 Thread Joep Rottinghuis (JIRA)
FindBugs OutOfMemoryError
-

 Key: MAPREDUCE-2906
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2906
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 0.22.0
 Environment: FindBugs 1.3.9, ant 1.8.2, RHEL6, Jenkins 1.414 in Tomcat 
7.0.14, Sun Java HotSpot(TM) 64-Bit Server VM
Reporter: Joep Rottinghuis
Assignee: Joep Rottinghuis


When running the findbugs target from Jenkins, I get an OutOfMemoryError.
The "effort" in FindBugs is set to Max, which ends up using a lot of memory to 
go through all the classes. The jvmargs value passed to FindBugs is hardcoded 
to a 512 MB max.

We can leave the default at 512M, as long as we pass this as an ant parameter 
which can be overridden in individual cases through -D, or in the 
build.properties file (in either the basedir or the user's home directory).


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: Hadoop Tools Layout (was Re: DistCpV2 in 0.23)

2011-08-29 Thread Allen Wittenauer

I have a feeling this discussion should get moved to common-dev or even to 
general.

My #1 question is whether tools is basically contrib reborn.  If not, what makes it 
different?

On Aug 29, 2011, at 1:43 AM, Amareshwari Sri Ramadasu wrote:

> Some questions on making hadoop-tools top level under trunk,
> 
> 1.  Should the patches for tools be created against Hadoop Common?
> 2.  What will happen to the tools test automation? Will it run as part of 
> Hadoop Common tests?
> 3.  Will it introduce a dependency from MapReduce to Common? Or is this taken 
> care of in the Mavenization?
> 
> 
> Thanks
> Amareshwari
> 
> On 8/26/11 10:17 PM, "Alejandro Abdelnur"  wrote:
> 
> Please, don't add more Mavenization work on us (eventually I want to go back
> to coding)
> 
> Given that Hadoop is already Mavenized, the patch should be Mavenized.
> 
> What will have to be done extra (besides Mavenizing distcp) is to create a
> hadoop-tools module at root level and within it a hadoop-distcp module.
> 
> The hadoop-tools POM will look pretty much like the hadoop-common-project
> POM.
> 
> The hadoop-distcp POM should follow the hadoop-common POM patterns.
> 
> Thanks.
> 
> Alejandro
> 
> On Fri, Aug 26, 2011 at 9:37 AM, Amareshwari Sri Ramadasu <
> amar...@yahoo-inc.com> wrote:
> 
>> Agree with Mithun and Robert. DistCp and Tools restructuring are separate
>> tasks. Since DistCp code is ready to be committed, it need not wait for the
>> Tools separation from MR/HDFS.
>> I would say it can go into contrib as the patch is now, and when the tools
>> restructuring happens it would be just an svn mv.  If there are no issues
>> with this proposal I can commit the code tomorrow.
>> 
>> Thanks
>> Amareshwari
>> 
>> On 8/26/11 7:45 PM, "Robert Evans"  wrote:
>> 
>> I agree with Mithun.  They are related but this goes beyond distcpv2 and
>> should not block distcpv2 from going in.  It would be very nice, however, to
>> get the layout settled soon so that we all know where to find something when
>> we want to work on it.
>> 
>> Also +1 for Alejandro's I also prefer to keep tools at the trunk level.
>> 
>> Even though HDFS, Common, and Mapreduce and perhaps soon tools are separate
>> modules right now, there is still tight coupling between the different
>> pieces, especially with tests.  IMO until we can reduce that coupling we
>> should treat building and testing Hadoop as a single project instead of
>> trying to keep them separate.
>> 
>> --Bobby
>> 
>> On 8/26/11 7:45 AM, "Mithun Radhakrishnan" 
>> wrote:
>> 
>> Would it be acceptable if retooling of tools/ were taken up separately? It
>> sounds to me like this might be a distinct (albeit related) task.
>> 
>> Mithun
>> 
>> 
>> 
>> From: Giridharan Kesavan 
>> To: mapreduce-dev@hadoop.apache.org
>> Sent: Friday, August 26, 2011 12:04 PM
>> Subject: Re: DistCpV2 in 0.23
>> 
>> +1 to Alejandro's
>> 
>> I prefer to keep the hadoop-tools at trunk level.
>> 
>> -Giri
>> 
>> On Thu, Aug 25, 2011 at 9:15 PM, Alejandro Abdelnur 
>> wrote:
>>> I'd suggest putting hadoop-tools either at trunk/ level or having a tools
>>> aggregator module for hdfs and another for common.
>>> 
>>> I personally would prefer it at trunk/.
>>> 
>>> Thanks.
>>> 
>>> Alejandro
>>> 
>>> On Thu, Aug 25, 2011 at 9:06 PM, Amareshwari Sri Ramadasu <
>>> amar...@yahoo-inc.com> wrote:
>>> 
 Agree. It should be a separate maven module (and the patch puts it as a 
 separate maven module now). A top level for hadoop tools is nice to have, 
 but it becomes hard to maintain until the patch automation runs the tests 
 under tools. Currently we often see changes in HDFS affecting the RAID 
 tests in MapReduce. So, I'm fine putting the tools under hadoop-mapreduce.
 
 I propose we can have something like the following:
 
 trunk/
 - hadoop-mapreduce
 - hadoop-mr-client
 - hadoop-yarn
 - hadoop-tools
 - hadoop-streaming
 - hadoop-archives
 - hadoop-distcp
 
 Thoughts?
 
 @Eli and @JD, we did not replace the old legacy distcp because this is 
 really a complete rewrite, and we did not want to remove it until users are 
 familiar with the new one.
 
 On 8/26/11 12:51 AM, "Todd Lipcon"  wrote:
 
 Maybe a separate toplevel for hadoop-tools? Stuff like RAID could go
 in there as well - ie tools that are downstream of MR and/or HDFS.
 
 On Thu, Aug 25, 2011 at 12:09 PM, Mahadev Konar <
>> maha...@hortonworks.com>
 wrote:
> +1 for a seperate module in hadoop-mapreduce-project. I think
> hadoop-mapreduce-client might not be the right place for it. We might have
> to pick a new maven module under hadoop-mapreduce-project that could
> host streaming/distcp/hadoop archives.
> 
> thanks
> mahadev
> 
> On Thu, Aug 25, 2011 at 11:04 AM, Alejandro Abdelnur <
>> t...@cloudera.com>
 wrote:
>> Agree, it should be a separate maven mod

Re: Trunk and 0.23 build failing with clean .m2 directory

2011-08-29 Thread Robert Evans
Done. I filed HADOOP-7589 and uploaded my patch to it.  Alejandro, could you 
take a quick look at the patch because you appear to be the maven expert.

Thanks,

Bobby Evans

On 8/29/11 12:39 PM, "Mahadev Konar"  wrote:

Bobby,
 You are right. The test-patch uses mvn compile. Please file a jira.
It should be a minor change:

thanks
mahadev

On Mon, Aug 29, 2011 at 10:34 AM, Robert Evans  wrote:
> Thanks Alejandro,
>
> That really clears things up. Is there a JIRA you know of to change test-patch 
> to do mvn test -DskipTests instead of mvn compile?  If not I can file one and 
> do the work.  Test-patch failed for me because of this.
>
> --Bobby
>
> On 8/29/11 12:21 PM, "Alejandro Abdelnur"  wrote:
>
> The reason for this failure is because of how Maven reactor/dependency
> resolution works (IMO a bug).
>
> Maven reactor/dependency resolution is smart enough to create the classpath
> using the classes from all modules being built.
>
> However, this smartness falls short just a bit. The dependencies are
> resolved using the deepest maven phase used by current mvn invocation. If
> you are doing 'mvn compile' you don't get to the test compile phase.  This
> means that the TEST classes are not resolved from the build but from the
> cache/repo.
>
> The solution is to run 'mvn test -DskipTests' instead of 'mvn compile'. This
> will include the TEST classes from the build.
>
> The same applies when creating the eclipse profile: run 'mvn test -DskipTests
> eclipse:eclipse'
>
> Thanks.
>
> Alejandro
>
> On Mon, Aug 29, 2011 at 9:59 AM, Ravi Prakash  wrote:
>
>> Yeah I've seen this before. Sometimes I had to descend into child
>> directories to mvn install them, before I could maven install parents. I'm
>> hoping/guessing that issue is fixed now
>>
>> On Mon, Aug 29, 2011 at 11:39 AM, Robert Evans 
>> wrote:
>>
>> > Wow, this is odd: install works just fine, but compile fails unless I do an
>> > install first (I found this trying to run test-patch).
>> >
>> > $mvn --version
>> > Apache Maven 3.0.3 (r1075438; 2011-02-28 11:31:09-0600)
>> > Maven home: /home/evans/bin/maven
>> > Java version: 1.6.0_22, vendor: Sun Microsystems Inc.
>> > Java home: /home/evans/bin/jdk1.6.0/jre
>> > Default locale: en_US, platform encoding: UTF-8
>> > OS name: "linux", version: "2.6.18-238.12.1.el5", arch: "i386", family:
>> > "unix"
>> >
>> > Has anyone else seen this, or is there something messed up with my
>> machine?
>> >
>> > Thanks,
>> >
>> > Bobby
>> >
>> > On 8/29/11 11:18 AM, "Robert Evans"  wrote:
>> >
>> > I am getting the following errors when I try to build either trunk or
>> 0.23
>> > with a clean maven cache.  I don't get any errors if I use my old cache.
>> >
>> > [INFO] --- maven-compiler-plugin:2.3.2:compile (default-compile) @
>> > hadoop-yarn-common ---
>> > [INFO] Compiling 2 source files to
>> >
>> >
>> /home/evans/src/hadoop-git/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-
>> > common/target/classes
>> > [INFO]
>> > [INFO]
>> > 
>> > [INFO] Building hadoop-yarn-server-common 0.24.0-SNAPSHOT
>> > [INFO]
>> > 
>> > [INFO]
>> > 
>> > [INFO] Reactor Summary:
>> > [INFO]
>> > [INFO] Apache Hadoop Project POM . SUCCESS
>> [0.714s]
>> > [INFO] Apache Hadoop Annotations . SUCCESS
>> [0.323s]
>> > [INFO] Apache Hadoop Project Dist POM  SUCCESS
>> [0.001s]
>> > [INFO] Apache Hadoop Assemblies .. SUCCESS
>> [0.025s]
>> > [INFO] Apache Hadoop Alfredo . SUCCESS
>> [0.067s]
>> > [INFO] Apache Hadoop Common .. SUCCESS
>> [2.117s]
>> > [INFO] Apache Hadoop Common Project .. SUCCESS
>> [0.001s]
>> > [INFO] Apache Hadoop HDFS  SUCCESS
>> [1.419s]
>> > [INFO] Apache Hadoop HDFS Project  SUCCESS
>> [0.001s]
>> > [INFO] hadoop-yarn-api ... SUCCESS
>> [7.019s]
>> > [INFO] hadoop-yarn-common  SUCCESS
>> [2.181s]
>> > [INFO] hadoop-yarn-server-common . FAILURE
>> [0.058s]
>> > [INFO] hadoop-yarn-server-nodemanager  SKIPPED
>> > [INFO] hadoop-yarn-server-resourcemanager  SKIPPED
>> > [INFO] hadoop-yarn-server-tests .. SKIPPED
>> > [INFO] hadoop-yarn-server  SKIPPED
>> > [INFO] hadoop-yarn ... SKIPPED
>> > [INFO] hadoop-mapreduce-client-core .. SKIPPED
>> > [INFO] hadoop-mapreduce-client-common  SKIPPED
>> > [INFO] hadoop-mapreduce-client-shuffle ... SKIPPED
>> > [INFO] hadoop-mapreduce-client-app ... SKIPPED
>> > [INFO] hadoop-mapre

RE: which Eclipse plugin to use for Maven?

2011-08-29 Thread Jim Falgout
Ahh, thanks. I was thinking back to pre-merge when you had to run "ant 
eclipsefiles" since the source directories were spread everywhere.

-Original Message-
From: Robert Evans [mailto:ev...@yahoo-inc.com] 
Sent: Monday, August 29, 2011 12:38 PM
To: mapreduce-dev@hadoop.apache.org
Subject: Re: which Eclipse plugin to use for Maven?

Jim,

The m2 plugin replaces the normal eclipse build system with maven.  If you want 
to use M2 then you don't need to run mvn eclipse:eclipse at all.  What mvn 
eclipse:eclipse does is generate source code and produce a .project and 
.classpath so that eclipse can use its normal build system.  The two 
approaches are not really compatible with each other.

--Bobby

On 8/29/11 11:52 AM, "Jim Falgout"  wrote:

Using the latest trunk code, I used the "mvn eclipse:eclipse" target to build the 
Eclipse project files. I've got the M2E plugin for Maven installed. After some 
trouble with lifecycle errors ("Plugin execution not covered by lifecycle 
configuration" error messages) I noticed this comment in the .project file: 
"NO_M2ECLIPSE_SUPPORT: Project files created with the maven-eclipse-plugin are 
not supported in M2Eclipse".

Is there another recommendation for Maven integration using an Eclipse plugin 
that will work out of the box?

Thanks!






Re: Trunk and 0.23 build failing with clean .m2 directory

2011-08-29 Thread Mahadev Konar
Bobby,
 You are right. The test-patch uses mvn compile. Please file a jira.
It should be a minor change:

thanks
mahadev

On Mon, Aug 29, 2011 at 10:34 AM, Robert Evans  wrote:
> Thanks Alejandro,
>
> That really clears things up. Is there a JIRA you know of to change test-patch 
> to do mvn test -DskipTests instead of mvn compile?  If not I can file one and 
> do the work.  Test-patch failed for me because of this.
>
> --Bobby
>
> On 8/29/11 12:21 PM, "Alejandro Abdelnur"  wrote:
>
> The reason for this failure is because of how Maven reactor/dependency
> resolution works (IMO a bug).
>
> Maven reactor/dependency resolution is smart enough to create the classpath
> using the classes from all modules being built.
>
> However, this smartness falls short just a bit. The dependencies are
> resolved using the deepest maven phase used by current mvn invocation. If
> you are doing 'mvn compile' you don't get to the test compile phase.  This
> means that the TEST classes are not resolved from the build but from the
> cache/repo.
>
> The solution is to run 'mvn test -DskipTests' instead of 'mvn compile'. This
> will include the TEST classes from the build.
>
> The same applies when creating the eclipse profile: run 'mvn test -DskipTests
> eclipse:eclipse'
>
> Thanks.
>
> Alejandro
>
> On Mon, Aug 29, 2011 at 9:59 AM, Ravi Prakash  wrote:
>
>> Yeah I've seen this before. Sometimes I had to descend into child
>> directories to mvn install them, before I could maven install parents. I'm
>> hoping/guessing that issue is fixed now
>>
>> On Mon, Aug 29, 2011 at 11:39 AM, Robert Evans 
>> wrote:
>>
>> > Wow, this is odd: install works just fine, but compile fails unless I do an
>> > install first (I found this trying to run test-patch).
>> >
>> > $mvn --version
>> > Apache Maven 3.0.3 (r1075438; 2011-02-28 11:31:09-0600)
>> > Maven home: /home/evans/bin/maven
>> > Java version: 1.6.0_22, vendor: Sun Microsystems Inc.
>> > Java home: /home/evans/bin/jdk1.6.0/jre
>> > Default locale: en_US, platform encoding: UTF-8
>> > OS name: "linux", version: "2.6.18-238.12.1.el5", arch: "i386", family:
>> > "unix"
>> >
>> > Has anyone else seen this, or is there something messed up with my
>> machine?
>> >
>> > Thanks,
>> >
>> > Bobby
>> >
>> > On 8/29/11 11:18 AM, "Robert Evans"  wrote:
>> >
>> > I am getting the following errors when I try to build either trunk or
>> 0.23
>> > with a clean maven cache.  I don't get any errors if I use my old cache.
>> >
>> > [INFO] --- maven-compiler-plugin:2.3.2:compile (default-compile) @
>> > hadoop-yarn-common ---
>> > [INFO] Compiling 2 source files to
>> >
>> >
>> /home/evans/src/hadoop-git/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-
>> > common/target/classes
>> > [INFO]
>> > [INFO]
>> > 
>> > [INFO] Building hadoop-yarn-server-common 0.24.0-SNAPSHOT
>> > [INFO]
>> > 
>> > [INFO]
>> > 
>> > [INFO] Reactor Summary:
>> > [INFO]
>> > [INFO] Apache Hadoop Project POM . SUCCESS
>> [0.714s]
>> > [INFO] Apache Hadoop Annotations . SUCCESS
>> [0.323s]
>> > [INFO] Apache Hadoop Project Dist POM  SUCCESS
>> [0.001s]
>> > [INFO] Apache Hadoop Assemblies .. SUCCESS
>> [0.025s]
>> > [INFO] Apache Hadoop Alfredo . SUCCESS
>> [0.067s]
>> > [INFO] Apache Hadoop Common .. SUCCESS
>> [2.117s]
>> > [INFO] Apache Hadoop Common Project .. SUCCESS
>> [0.001s]
>> > [INFO] Apache Hadoop HDFS  SUCCESS
>> [1.419s]
>> > [INFO] Apache Hadoop HDFS Project  SUCCESS
>> [0.001s]
>> > [INFO] hadoop-yarn-api ... SUCCESS
>> [7.019s]
>> > [INFO] hadoop-yarn-common  SUCCESS
>> [2.181s]
>> > [INFO] hadoop-yarn-server-common . FAILURE
>> [0.058s]
>> > [INFO] hadoop-yarn-server-nodemanager  SKIPPED
>> > [INFO] hadoop-yarn-server-resourcemanager  SKIPPED
>> > [INFO] hadoop-yarn-server-tests .. SKIPPED
>> > [INFO] hadoop-yarn-server  SKIPPED
>> > [INFO] hadoop-yarn ... SKIPPED
>> > [INFO] hadoop-mapreduce-client-core .. SKIPPED
>> > [INFO] hadoop-mapreduce-client-common  SKIPPED
>> > [INFO] hadoop-mapreduce-client-shuffle ... SKIPPED
>> > [INFO] hadoop-mapreduce-client-app ... SKIPPED
>> > [INFO] hadoop-mapreduce-client-hs  SKIPPED
>> > [INFO] hadoop-mapreduce-client-jobclient . SKIPPED
>> > [INFO] hadoop-mapreduce-client ... SKIPPED
>> > [INFO] hadoop-mapreduce

Re: which Eclipse plugin to use for Maven?

2011-08-29 Thread Robert Evans
Jim,

The m2 plugin replaces the normal eclipse build system with maven.  If you want 
to use M2 then you don't need to run mvn eclipse:eclipse at all.  What mvn 
eclipse:eclipse does is generate source code and produce a .project and 
.classpath so that eclipse can use its normal build system.  The two 
approaches are not really compatible with each other.

--Bobby

On 8/29/11 11:52 AM, "Jim Falgout"  wrote:

Using the latest trunk code, I used the "mvn eclipse:eclipse" target to build the 
Eclipse project files. I've got the M2E plugin for Maven installed. After some 
trouble with lifecycle errors ("Plugin execution not covered by lifecycle 
configuration" error messages) I noticed this comment in the .project file: 
"NO_M2ECLIPSE_SUPPORT: Project files created with the maven-eclipse-plugin are 
not supported in M2Eclipse".

Is there another recommendation for Maven integration using an Eclipse plugin 
that will work out of the box?

Thanks!





Re: Trunk and 0.23 build failing with clean .m2 directory

2011-08-29 Thread Robert Evans
Thanks Alejandro,

That really clears things up. Is there a JIRA you know of to change test-patch to 
do mvn test -DskipTests instead of mvn compile?  If not I can file one and do 
the work.  Test-patch failed for me because of this.

--Bobby

On 8/29/11 12:21 PM, "Alejandro Abdelnur"  wrote:

The reason for this failure is because of how Maven reactor/dependency
resolution works (IMO a bug).

Maven reactor/dependency resolution is smart enough to create the classpath
using the classes from all modules being built.

However, this smartness falls short just a bit. The dependencies are
resolved using the deepest maven phase used by current mvn invocation. If
you are doing 'mvn compile' you don't get to the test compile phase.  This
means that the TEST classes are not resolved from the build but from the
cache/repo.

The solution is to run 'mvn test -DskipTests' instead of 'mvn compile'. This
will include the TEST classes from the build.

The same applies when creating the eclipse profile: run 'mvn test -DskipTests
eclipse:eclipse'

Thanks.

Alejandro

On Mon, Aug 29, 2011 at 9:59 AM, Ravi Prakash  wrote:

> Yeah I've seen this before. Sometimes I had to descend into child
> directories to mvn install them, before I could maven install parents. I'm
> hoping/guessing that issue is fixed now
>
> On Mon, Aug 29, 2011 at 11:39 AM, Robert Evans 
> wrote:
>
> > Wow, this is odd: install works just fine, but compile fails unless I do an
> > install first (I found this trying to run test-patch).
> >
> > $mvn --version
> > Apache Maven 3.0.3 (r1075438; 2011-02-28 11:31:09-0600)
> > Maven home: /home/evans/bin/maven
> > Java version: 1.6.0_22, vendor: Sun Microsystems Inc.
> > Java home: /home/evans/bin/jdk1.6.0/jre
> > Default locale: en_US, platform encoding: UTF-8
> > OS name: "linux", version: "2.6.18-238.12.1.el5", arch: "i386", family:
> > "unix"
Re: Trunk and 0.23 build failing with clean .m2 directory

2011-08-29 Thread Alejandro Abdelnur
This failure comes down to how Maven reactor/dependency
resolution works (IMO a bug).

Maven reactor/dependency resolution is smart enough to create the classpath
using the classes from all modules being built.

However, this smartness falls short just a bit: dependencies are resolved
using the deepest Maven phase reached by the current mvn invocation. If you
run 'mvn compile' you never reach the test-compile phase, which means the
TEST classes are resolved not from the build but from the cache/repo.

The solution is to run 'mvn test -DskipTests' instead of 'mvn compile'; this
will include the TEST classes from the build.

The same applies when generating the Eclipse project files: run 'mvn test
-DskipTests eclipse:eclipse'.
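The recommended invocations, as a concrete sketch (run from the root of a trunk checkout; repository layout assumed):

```shell
# Build through the test-compile phase so TEST jars are resolved from the
# reactor build rather than the local cache/repo; skip running the tests.
mvn test -DskipTests

# The same depth is needed when generating Eclipse project files:
mvn test -DskipTests eclipse:eclipse
```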

Thanks.

Alejandro

On Mon, Aug 29, 2011 at 9:59 AM, Ravi Prakash  wrote:

> Yeah I've seen this before. Sometimes I had to descend into child
> directories to mvn install them, before I could maven install parents. I'm
> hoping/guessing that issue is fixed now
>
> On Mon, Aug 29, 2011 at 11:39 AM, Robert Evans 
> wrote:
>
> > Wow this is odd install works just fine, but compile fails unless I do an
> > install first (I found this trying to run test-patch).
> >
> > $mvn --version
> > Apache Maven 3.0.3 (r1075438; 2011-02-28 11:31:09-0600)
> > Maven home: /home/evans/bin/maven
> > Java version: 1.6.0_22, vendor: Sun Microsystems Inc.
> > Java home: /home/evans/bin/jdk1.6.0/jre
> > Default locale: en_US, platform encoding: UTF-8
> > OS name: "linux", version: "2.6.18-238.12.1.el5", arch: "i386", family:
> > "unix"
> >
> > Has anyone else seen this, or is there something messed up with my
> machine?
> >
> > Thanks,
> >
> > Bobby
> >
> > On 8/29/11 11:18 AM, "Robert Evans"  wrote:
> >
> > I am getting the following errors when I try to build either trunk or
> 0.23
> > with a clean maven cache.  I don't get any errors if I use my old cache.
> >
> > [INFO] --- maven-compiler-plugin:2.3.2:compile (default-compile) @
> > hadoop-yarn-common ---
> > [INFO] Compiling 2 source files to
> >
> >
> /home/evans/src/hadoop-git/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-
> > common/target/classes
> > [INFO]
> > [INFO]
> > 
> > [INFO] Building hadoop-yarn-server-common 0.24.0-SNAPSHOT
> > [INFO]
> > 
> > [INFO]
> > 
> > [INFO] Reactor Summary:
> > [INFO]
> > [INFO] Apache Hadoop Project POM . SUCCESS
> [0.714s]
> > [INFO] Apache Hadoop Annotations . SUCCESS
> [0.323s]
> > [INFO] Apache Hadoop Project Dist POM  SUCCESS
> [0.001s]
> > [INFO] Apache Hadoop Assemblies .. SUCCESS
> [0.025s]
> > [INFO] Apache Hadoop Alfredo . SUCCESS
> [0.067s]
> > [INFO] Apache Hadoop Common .. SUCCESS
> [2.117s]
> > [INFO] Apache Hadoop Common Project .. SUCCESS
> [0.001s]
> > [INFO] Apache Hadoop HDFS  SUCCESS
> [1.419s]
> > [INFO] Apache Hadoop HDFS Project  SUCCESS
> [0.001s]
> > [INFO] hadoop-yarn-api ... SUCCESS
> [7.019s]
> > [INFO] hadoop-yarn-common  SUCCESS
> [2.181s]
> > [INFO] hadoop-yarn-server-common . FAILURE
> [0.058s]
> > [INFO] hadoop-yarn-server-nodemanager  SKIPPED
> > [INFO] hadoop-yarn-server-resourcemanager  SKIPPED
> > [INFO] hadoop-yarn-server-tests .. SKIPPED
> > [INFO] hadoop-yarn-server  SKIPPED
> > [INFO] hadoop-yarn ... SKIPPED
> > [INFO] hadoop-mapreduce-client-core .. SKIPPED
> > [INFO] hadoop-mapreduce-client-common  SKIPPED
> > [INFO] hadoop-mapreduce-client-shuffle ... SKIPPED
> > [INFO] hadoop-mapreduce-client-app ... SKIPPED
> > [INFO] hadoop-mapreduce-client-hs  SKIPPED
> > [INFO] hadoop-mapreduce-client-jobclient . SKIPPED
> > [INFO] hadoop-mapreduce-client ... SKIPPED
> > [INFO] hadoop-mapreduce .. SKIPPED
> > [INFO] Apache Hadoop Main  SKIPPED
> > [INFO]
> > 
> > [INFO] BUILD FAILURE
> > [INFO]
> > 
> > [INFO] Total time: 14.938s
> > [INFO] Finished at: Mon Aug 29 11:18:06 CDT 2011
> > [INFO] Final Memory: 29M/207M
> > [INFO]
> > 
> > [ERROR] Failed to execute goal on project hadoop-yarn-server-common:
> Could
> > not r

[jira] [Created] (MAPREDUCE-2905) Allow mapred.fairscheduler.assignmultiple to be set per job

2011-08-29 Thread Jeff Bean (JIRA)
Allow mapred.fairscheduler.assignmultiple to be set per job
--

 Key: MAPREDUCE-2905
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2905
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: contrib/fair-share
Reporter: Jeff Bean


We encountered a situation where, in the same cluster, large jobs benefit from 
mapred.fairscheduler.assignmultiple but small jobs with few mappers do not: the 
mappers all clump together and fully occupy just a few nodes, which saturates 
and bottlenecks those nodes. The desired behavior is to spread the job 
round-robin across more nodes.

It'd be nice if developers could set a parameter similar to 
mapred.fairscheduler.assignmultiple on a per-job basis to better control the 
task allocation of a particular job.
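If such a per-job knob existed, it would presumably be passed like any other job property via GenericOptionsParser. A purely hypothetical sketch; the property value shown and the jar/class names are illustrative, and the per-job override does not exist today:

```shell
# Hypothetical sketch only: illustrates how a per-job override of the
# fair-scheduler knob would be passed if MAPREDUCE-2905 were implemented.
# 'my-job.jar' and 'MyJob' are placeholder names.
hadoop jar my-job.jar MyJob \
  -Dmapred.fairscheduler.assignmultiple=false \
  input/ output/
```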

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: Trunk and 0.23 build failing with clean .m2 directory

2011-08-29 Thread Ravi Prakash
Yeah, I've seen this before. Sometimes I had to descend into child
directories and mvn install them before I could mvn install the parents. I'm
hoping/guessing that issue is fixed now.

On Mon, Aug 29, 2011 at 11:39 AM, Robert Evans  wrote:

> Wow this is odd install works just fine, but compile fails unless I do an
> install first (I found this trying to run test-patch).
>
> $mvn --version
> Apache Maven 3.0.3 (r1075438; 2011-02-28 11:31:09-0600)
> Maven home: /home/evans/bin/maven
> Java version: 1.6.0_22, vendor: Sun Microsystems Inc.
> Java home: /home/evans/bin/jdk1.6.0/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "linux", version: "2.6.18-238.12.1.el5", arch: "i386", family:
> "unix"
>
> Has anyone else seen this, or is there something messed up with my machine?
>
> Thanks,
>
> Bobby
>
> On 8/29/11 11:18 AM, "Robert Evans"  wrote:
>
> I am getting the following errors when I try to build either trunk or 0.23
> with a clean maven cache.  I don't get any errors if I use my old cache.
>
> [INFO] --- maven-compiler-plugin:2.3.2:compile (default-compile) @
> hadoop-yarn-common ---
> [INFO] Compiling 2 source files to
>
> /home/evans/src/hadoop-git/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-
> common/target/classes
> [INFO]
> [INFO]
> 
> [INFO] Building hadoop-yarn-server-common 0.24.0-SNAPSHOT
> [INFO]
> 
> [INFO]
> 
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Hadoop Project POM . SUCCESS [0.714s]
> [INFO] Apache Hadoop Annotations . SUCCESS [0.323s]
> [INFO] Apache Hadoop Project Dist POM  SUCCESS [0.001s]
> [INFO] Apache Hadoop Assemblies .. SUCCESS [0.025s]
> [INFO] Apache Hadoop Alfredo . SUCCESS [0.067s]
> [INFO] Apache Hadoop Common .. SUCCESS [2.117s]
> [INFO] Apache Hadoop Common Project .. SUCCESS [0.001s]
> [INFO] Apache Hadoop HDFS  SUCCESS [1.419s]
> [INFO] Apache Hadoop HDFS Project  SUCCESS [0.001s]
> [INFO] hadoop-yarn-api ... SUCCESS [7.019s]
> [INFO] hadoop-yarn-common  SUCCESS [2.181s]
> [INFO] hadoop-yarn-server-common . FAILURE [0.058s]
> [INFO] hadoop-yarn-server-nodemanager  SKIPPED
> [INFO] hadoop-yarn-server-resourcemanager  SKIPPED
> [INFO] hadoop-yarn-server-tests .. SKIPPED
> [INFO] hadoop-yarn-server  SKIPPED
> [INFO] hadoop-yarn ... SKIPPED
> [INFO] hadoop-mapreduce-client-core .. SKIPPED
> [INFO] hadoop-mapreduce-client-common  SKIPPED
> [INFO] hadoop-mapreduce-client-shuffle ... SKIPPED
> [INFO] hadoop-mapreduce-client-app ... SKIPPED
> [INFO] hadoop-mapreduce-client-hs  SKIPPED
> [INFO] hadoop-mapreduce-client-jobclient . SKIPPED
> [INFO] hadoop-mapreduce-client ... SKIPPED
> [INFO] hadoop-mapreduce .. SKIPPED
> [INFO] Apache Hadoop Main  SKIPPED
> [INFO]
> 
> [INFO] BUILD FAILURE
> [INFO]
> 
> [INFO] Total time: 14.938s
> [INFO] Finished at: Mon Aug 29 11:18:06 CDT 2011
> [INFO] Final Memory: 29M/207M
> [INFO]
> 
> [ERROR] Failed to execute goal on project hadoop-yarn-server-common: Could
> not resolve dependencies for project
> org.apache.hadoop:hadoop-yarn-server-common:jar:0.24.0-SNAPSHOT: Failure to
> find org.apache.hadoop:hadoop-yarn-common:jar:tests:0.24.0-SNAPSHOT in
> http://ymaven.corp.yahoo.com:/proximity/repository/apache.snapshot was
> cached in the local repository, resolution will not be reattempted until
> the
> update interval of local apache.snapshot mirror has elapsed or updates are
> forced -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions,
> please
> read the following articles:
> [ERROR] [Help 1]
>
> http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionExcepti
> on
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the
> command
> [ERROR]   mvn  -rf :hadoop-yarn-server-common
>
>
> Is anyone looking into this

which Eclipse plugin to use for Maven?

2011-08-29 Thread Jim Falgout
Using the latest trunk code, I ran the "mvn eclipse:eclipse" goal to generate 
the Eclipse project files. I've got the M2E plugin for Maven installed. After 
some trouble with lifecycle errors ("Plugin execution not covered by lifecycle 
configuration" error messages) I noticed this comment in the .project file: 
"NO_M2ECLIPSE_SUPPORT: Project files created with the maven-eclipse-plugin are 
not supported in M2Eclipse".

Is there another recommendation for Maven integration using an Eclipse plugin 
that will work out of the box?

Thanks!




Re: Trunk and 0.23 build failing with clean .m2 directory

2011-08-29 Thread Robert Evans
Wow, this is odd: install works just fine, but compile fails unless I do an 
install first (I found this while trying to run test-patch).

$mvn --version
Apache Maven 3.0.3 (r1075438; 2011-02-28 11:31:09-0600)
Maven home: /home/evans/bin/maven
Java version: 1.6.0_22, vendor: Sun Microsystems Inc.
Java home: /home/evans/bin/jdk1.6.0/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "2.6.18-238.12.1.el5", arch: "i386", family: "unix"

Has anyone else seen this, or is there something messed up with my machine?

Thanks,

Bobby

On 8/29/11 11:18 AM, "Robert Evans"  wrote:

I am getting the following errors when I try to build either trunk or 0.23
with a clean maven cache.  I don't get any errors if I use my old cache.

[INFO] --- maven-compiler-plugin:2.3.2:compile (default-compile) @
hadoop-yarn-common ---
[INFO] Compiling 2 source files to
/home/evans/src/hadoop-git/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-
common/target/classes
[INFO]
[INFO]

[INFO] Building hadoop-yarn-server-common 0.24.0-SNAPSHOT
[INFO]

[INFO]

[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Project POM . SUCCESS [0.714s]
[INFO] Apache Hadoop Annotations . SUCCESS [0.323s]
[INFO] Apache Hadoop Project Dist POM  SUCCESS [0.001s]
[INFO] Apache Hadoop Assemblies .. SUCCESS [0.025s]
[INFO] Apache Hadoop Alfredo . SUCCESS [0.067s]
[INFO] Apache Hadoop Common .. SUCCESS [2.117s]
[INFO] Apache Hadoop Common Project .. SUCCESS [0.001s]
[INFO] Apache Hadoop HDFS  SUCCESS [1.419s]
[INFO] Apache Hadoop HDFS Project  SUCCESS [0.001s]
[INFO] hadoop-yarn-api ... SUCCESS [7.019s]
[INFO] hadoop-yarn-common  SUCCESS [2.181s]
[INFO] hadoop-yarn-server-common . FAILURE [0.058s]
[INFO] hadoop-yarn-server-nodemanager  SKIPPED
[INFO] hadoop-yarn-server-resourcemanager  SKIPPED
[INFO] hadoop-yarn-server-tests .. SKIPPED
[INFO] hadoop-yarn-server  SKIPPED
[INFO] hadoop-yarn ... SKIPPED
[INFO] hadoop-mapreduce-client-core .. SKIPPED
[INFO] hadoop-mapreduce-client-common  SKIPPED
[INFO] hadoop-mapreduce-client-shuffle ... SKIPPED
[INFO] hadoop-mapreduce-client-app ... SKIPPED
[INFO] hadoop-mapreduce-client-hs  SKIPPED
[INFO] hadoop-mapreduce-client-jobclient . SKIPPED
[INFO] hadoop-mapreduce-client ... SKIPPED
[INFO] hadoop-mapreduce .. SKIPPED
[INFO] Apache Hadoop Main  SKIPPED
[INFO]

[INFO] BUILD FAILURE
[INFO]

[INFO] Total time: 14.938s
[INFO] Finished at: Mon Aug 29 11:18:06 CDT 2011
[INFO] Final Memory: 29M/207M
[INFO]

[ERROR] Failed to execute goal on project hadoop-yarn-server-common: Could
not resolve dependencies for project
org.apache.hadoop:hadoop-yarn-server-common:jar:0.24.0-SNAPSHOT: Failure to
find org.apache.hadoop:hadoop-yarn-common:jar:tests:0.24.0-SNAPSHOT in
http://ymaven.corp.yahoo.com:/proximity/repository/apache.snapshot was
cached in the local repository, resolution will not be reattempted until the
update interval of local apache.snapshot mirror has elapsed or updates are
forced -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please
read the following articles:
[ERROR] [Help 1]
http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionExcepti
on
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the
command
[ERROR]   mvn  -rf :hadoop-yarn-server-common


Is anyone looking into this yet?

--Bobby




Trunk and 0.23 build failing with clean .m2 directory

2011-08-29 Thread Robert Evans
I am getting the following errors when I try to build either trunk or 0.23
with a clean maven cache.  I don't get any errors if I use my old cache.

[INFO] --- maven-compiler-plugin:2.3.2:compile (default-compile) @
hadoop-yarn-common ---
[INFO] Compiling 2 source files to
/home/evans/src/hadoop-git/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-
common/target/classes
[INFO] 
[INFO] 

[INFO] Building hadoop-yarn-server-common 0.24.0-SNAPSHOT
[INFO] 

[INFO] 

[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop Project POM . SUCCESS [0.714s]
[INFO] Apache Hadoop Annotations . SUCCESS [0.323s]
[INFO] Apache Hadoop Project Dist POM  SUCCESS [0.001s]
[INFO] Apache Hadoop Assemblies .. SUCCESS [0.025s]
[INFO] Apache Hadoop Alfredo . SUCCESS [0.067s]
[INFO] Apache Hadoop Common .. SUCCESS [2.117s]
[INFO] Apache Hadoop Common Project .. SUCCESS [0.001s]
[INFO] Apache Hadoop HDFS  SUCCESS [1.419s]
[INFO] Apache Hadoop HDFS Project  SUCCESS [0.001s]
[INFO] hadoop-yarn-api ... SUCCESS [7.019s]
[INFO] hadoop-yarn-common  SUCCESS [2.181s]
[INFO] hadoop-yarn-server-common . FAILURE [0.058s]
[INFO] hadoop-yarn-server-nodemanager  SKIPPED
[INFO] hadoop-yarn-server-resourcemanager  SKIPPED
[INFO] hadoop-yarn-server-tests .. SKIPPED
[INFO] hadoop-yarn-server  SKIPPED
[INFO] hadoop-yarn ... SKIPPED
[INFO] hadoop-mapreduce-client-core .. SKIPPED
[INFO] hadoop-mapreduce-client-common  SKIPPED
[INFO] hadoop-mapreduce-client-shuffle ... SKIPPED
[INFO] hadoop-mapreduce-client-app ... SKIPPED
[INFO] hadoop-mapreduce-client-hs  SKIPPED
[INFO] hadoop-mapreduce-client-jobclient . SKIPPED
[INFO] hadoop-mapreduce-client ... SKIPPED
[INFO] hadoop-mapreduce .. SKIPPED
[INFO] Apache Hadoop Main  SKIPPED
[INFO] 

[INFO] BUILD FAILURE
[INFO] 

[INFO] Total time: 14.938s
[INFO] Finished at: Mon Aug 29 11:18:06 CDT 2011
[INFO] Final Memory: 29M/207M
[INFO] 

[ERROR] Failed to execute goal on project hadoop-yarn-server-common: Could
not resolve dependencies for project
org.apache.hadoop:hadoop-yarn-server-common:jar:0.24.0-SNAPSHOT: Failure to
find org.apache.hadoop:hadoop-yarn-common:jar:tests:0.24.0-SNAPSHOT in
http://ymaven.corp.yahoo.com:/proximity/repository/apache.snapshot was
cached in the local repository, resolution will not be reattempted until the
update interval of local apache.snapshot mirror has elapsed or updates are
forced -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionExcepti
on
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the
command
[ERROR]   mvn  -rf :hadoop-yarn-server-common


Is anyone looking into this yet?

--Bobby



Hadoop-Mapreduce-trunk - Build # 800 - Still Failing

2011-08-29 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/800/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 18277 lines...]
[junit] Running 
org.apache.hadoop.mapreduce.security.token.delegation.TestDelegationToken
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.779 sec
[junit] Running org.apache.hadoop.mapreduce.util.TestMRAsyncDiskService
[junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.67 sec
[junit] Running org.apache.hadoop.mapreduce.util.TestProcfsBasedProcessTree
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 7.272 sec
[junit] Running org.apache.hadoop.record.TestRecordMR
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 4.008 sec
[junit] Running org.apache.hadoop.record.TestRecordWritable
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.067 sec
[junit] Running 
org.apache.hadoop.security.TestMapredGroupMappingServiceRefresh
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 8.231 sec
[junit] Running 
org.apache.hadoop.security.authorize.TestServiceLevelAuthorization
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 69.783 sec
[junit] Running org.apache.hadoop.tools.TestCopyFiles
[junit] Tests run: 17, Failures: 0, Errors: 0, Time elapsed: 89.466 sec
[junit] Running org.apache.hadoop.tools.TestDistCh
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 16.929 sec
[junit] Running org.apache.hadoop.tools.TestHadoopArchives
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 18.917 sec
[junit] Running org.apache.hadoop.tools.TestHarFileSystem
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 97.481 sec
[junit] Running org.apache.hadoop.tools.rumen.TestConcurrentRead
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 1.365 sec
[junit] Running org.apache.hadoop.tools.rumen.TestHistograms
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.72 sec
[junit] Running org.apache.hadoop.tools.rumen.TestParsedLine
[junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.077 sec
[junit] Running 
org.apache.hadoop.tools.rumen.TestPiecewiseLinearInterpolation
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 1.158 sec
[junit] Running org.apache.hadoop.tools.rumen.TestRandomSeedGenerator
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.16 sec
[junit] Running org.apache.hadoop.tools.rumen.TestRumenFolder
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 2.855 sec
[junit] Running org.apache.hadoop.tools.rumen.TestRumenJobTraces
[junit] Tests run: 13, Failures: 0, Errors: 0, Time elapsed: 14.846 sec
[junit] Running org.apache.hadoop.tools.rumen.TestZombieJob
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 1.842 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.267 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.222 sec

checkfailure:
[touch] Creating 
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk/hadoop-mapreduce-project/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/trunk/hadoop-mapreduce-project/build.xml:848:
 Tests failed!

Total time: 115 minutes 23 seconds
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Updating MAPREDUCE-2898
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.


[jira] [Created] (MAPREDUCE-2904) HDFS jars added incorrectly to yarn classpath

2011-08-29 Thread Sharad Agarwal (JIRA)
HDFS jars added incorrectly to yarn classpath
-

 Key: MAPREDUCE-2904
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2904
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Sharad Agarwal




--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-2903) Map Tasks graph is throwing XML Parse error when Job is executed with 0 maps

2011-08-29 Thread Devaraj K (JIRA)
Map Tasks graph is throwing XML Parse error when Job is executed with 0 maps


 Key: MAPREDUCE-2903
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2903
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobtracker
Affects Versions: 0.20.205.0
Reporter: Devaraj K
 Fix For: 0.20.205.0


{code:xml}
XML Parsing Error: no element found
Location: 
http://10.18.52.170:50030/taskgraph?type=map&jobid=job_201108291536_0001
Line Number 1, Column 1:
^
{code}


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




Hadoop Tools Layout (was Re: DistCpV2 in 0.23)

2011-08-29 Thread Amareshwari Sri Ramadasu
Some questions on making hadoop-tools a top-level module under trunk:

 1.  Should the patches for tools be created against Hadoop Common?
 2.  What will happen to the tools test automation? Will it run as part of 
Hadoop Common tests?
 3.  Will it introduce a dependency from MapReduce to Common? Or is this taken 
care of in the Mavenization?


Thanks
Amareshwari

On 8/26/11 10:17 PM, "Alejandro Abdelnur"  wrote:

Please, don't add more Mavenization work on us (eventually I want to go back
to coding)

Given that Hadoop is already Mavenized, the patch should be Mavenized.

What will have to be done extra (besides Mavenizing distcp) is to create a
hadoop-tools module at root level and within it a hadoop-distcp module.

The hadoop-tools POM will look pretty much like the hadoop-common-project
POM.

The hadoop-distcp POM should follow the hadoop-common POM patterns.

Thanks.

Alejandro

On Fri, Aug 26, 2011 at 9:37 AM, Amareshwari Sri Ramadasu <
amar...@yahoo-inc.com> wrote:

> Agree with Mithun and Robert. DistCp and Tools restructuring are separate
> tasks. Since DistCp code is ready to be committed, it need not wait for the
> Tools separation from MR/HDFS.
> I would say it can go into contrib as the patch is now, and when the tools
> restructuring happens it would be just an svn mv.  If there are no issues
> with this proposal I can commit the code tomorrow.
>
> Thanks
> Amareshwari
>
> On 8/26/11 7:45 PM, "Robert Evans"  wrote:
>
> I agree with Mithun.  They are related but this goes beyond distcpv2 and
> should not block distcpv2 from going in.  It would be very nice, however, to
> get the layout settled soon so that we all know where to find something when
> we want to work on it.
>
> Also +1 for Alejandro's I also prefer to keep tools at the trunk level.
>
> Even though HDFS, Common, and Mapreduce and perhaps soon tools are separate
> modules right now, there is still tight coupling between the different
> pieces, especially with tests.  IMO until we can reduce that coupling we
> should treat building and testing Hadoop as a single project instead of
> trying to keep them separate.
>
> --Bobby
>
> On 8/26/11 7:45 AM, "Mithun Radhakrishnan" 
> wrote:
>
> Would it be acceptable if retooling of tools/ were taken up separately? It
> sounds to me like this might be a distinct (albeit related) task.
>
> Mithun
>
>
> 
> From: Giridharan Kesavan 
> To: mapreduce-dev@hadoop.apache.org
> Sent: Friday, August 26, 2011 12:04 PM
> Subject: Re: DistCpV2 in 0.23
>
> +1 to Alejandro's
>
> I prefer to keep the hadoop-tools at trunk level.
>
> -Giri
>
> On Thu, Aug 25, 2011 at 9:15 PM, Alejandro Abdelnur 
> wrote:
> > I'd suggest putting hadoop-tools either at trunk/ level or having a a
> tools
> > aggregator module for hdfs and other for common.
> >
> > I personal would prefer at trunk/.
> >
> > Thanks.
> >
> > Alejandro
> >
> > On Thu, Aug 25, 2011 at 9:06 PM, Amareshwari Sri Ramadasu <
> > amar...@yahoo-inc.com> wrote:
> >
> >> Agree. It should be a separate maven module (and the patch puts it as a
> >> separate maven module now). And a top level for hadoop tools is nice to
> >> have, but it becomes hard to maintain until patch automation runs the
> >> tests under tools. Currently we often see changes in HDFS affecting RAID
> >> tests in MapReduce. So, I'm fine putting the tools under hadoop-mapreduce.
> >>
> >> I propose we can have something like the following:
> >>
> >> trunk/
> >>  - hadoop-mapreduce
> >>  - hadoop-mr-client
> >>  - hadoop-yarn
> >>  - hadoop-tools
> >>  - hadoop-streaming
> >>  - hadoop-archives
> >>  - hadoop-distcp
> >>
> >> Thoughts?
> >>
> >> @Eli and @JD, we did not replace the old legacy distcp because this is
> >> really a complete rewrite, and we did not want to remove it until users
> >> are familiar with the new one.
> >>
> >> On 8/26/11 12:51 AM, "Todd Lipcon"  wrote:
> >>
> >> Maybe a separate toplevel for hadoop-tools? Stuff like RAID could go
> >> in there as well - ie tools that are downstream of MR and/or HDFS.
> >>
> >> On Thu, Aug 25, 2011 at 12:09 PM, Mahadev Konar <
> maha...@hortonworks.com>
> >> wrote:
> >> > +1 for a seperate module in hadoop-mapreduce-project. I think
> >> > hadoop-mapreduce-client might not be right place for it. We might have
> >> > to pick a new maven module under hadoop-mapreduce-project that could
> >> > host streaming/distcp/hadoop archives.
> >> >
> >> > thanks
> >> > mahadev
> >> >
> >> > On Thu, Aug 25, 2011 at 11:04 AM, Alejandro Abdelnur <
> t...@cloudera.com>
> >> wrote:
> >> >> Agree, it should be a separate maven module.
> >> >>
> >> >> And it should be under hadoop-mapreduce-client, right?
> >> >>
> >> >> And now that we are in the topic, the same should go for streaming,
> no?
> >> >>
> >> >> Thanks.
> >> >>
> >> >> Alejandro
> >> >>
> >> >> On Thu, Aug 25, 2011 at 10:58 AM, Todd Lipcon 
> >> wrote:
> >> >>
> >> >>> On Thu, Aug 25, 2011 at 10:36 AM, Eli Collins 
> >> wrote:
> >> >>> 

Re: Building MR2

2011-08-29 Thread Arun C Murthy
Thanks Vinod!

On Aug 29, 2011, at 12:30 AM, Vinod Kumar Vavilapalli wrote:

> Updated http://wiki.apache.org/hadoop/DevelopingOnTrunkAfter279Merge
> and filed MAPREDUCE-2901.
> 
> +Vinod
> 
> On Fri, Aug 19, 2011 at 12:22 AM, Alejandro Abdelnur wrote:
> 
>> protoc is invoked from an antrun plugin configuration.
>> 
>> we could check using ant tasks that protoc is avail, and fail with a
>> message
>> if not.
>> 
>> Thxs.
>> 
>> Alejandro
>> 
>> On Thu, Aug 18, 2011 at 11:44 AM, Eli Collins  wrote:
>> 
>>> On Thu, Aug 18, 2011 at 11:39 AM, Alejandro Abdelnur 
>>> wrote:
 IMO, committing generated code is not a good idea. I'd rather put the
>>> burden
 on developers of installing protoc  (they have to do it for Java,
>> Maven,
 Forrest, and all the autoconf stuff if compiling native).
 
 I would document protoc as required tool.
 
>>> 
>>> Alejandro - is there an easy way we can make maven spit out a nicer
>>> error message (eg check for protoc on the path and print a message if
>>> it's not present)?
>>> 
>>> (Thank you Mahadev btw for pointing out the fix!)
>>> 
>>> Thanks,
>>> Eli
>>> 
>> 



[jira] [Created] (MAPREDUCE-2902) Merge DevelopingOnTrunkAfter279Merge wiki page into HowToContribute

2011-08-29 Thread Vinod Kumar Vavilapalli (JIRA)
Merge DevelopingOnTrunkAfter279Merge wiki page into HowToContribute
---

 Key: MAPREDUCE-2902
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2902
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: documentation
Reporter: Vinod Kumar Vavilapalli


We'll need to make http://wiki.apache.org/hadoop/HowToContribute the single 
source of truth.

Also, we'll need separate sections for building pre-0.23 and post-0.23 Hadoop.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: Building MR2

2011-08-29 Thread Vinod Kumar Vavilapalli
Updated http://wiki.apache.org/hadoop/DevelopingOnTrunkAfter279Merge
and filed MAPREDUCE-2901.

+Vinod

On Fri, Aug 19, 2011 at 12:22 AM, Alejandro Abdelnur wrote:

> protoc is invoked from an antrun plugin configuration.
>
> we could check using ant tasks that protoc is avail, and fail with a
> message
> if not.
>
> Thxs.
>
> Alejandro
>
> On Thu, Aug 18, 2011 at 11:44 AM, Eli Collins  wrote:
>
> > On Thu, Aug 18, 2011 at 11:39 AM, Alejandro Abdelnur 
> > wrote:
> > > IMO, committing generated code is not a good idea. I'd rather put the
> > burden
> > > on developers of installing protoc  (they have to do it for Java,
> Maven,
> > > Forrest, and all the autoconf stuff if compiling native).
> > >
> > > I would document protoc as required tool.
> > >
> >
> > Alejandro - is there an easy way we can make maven spit out a nicer
> > error message (eg check for protoc on the path and print a message if
> > it's not present)?
> >
> > (Thank you Mahadev btw for pointing out the fix!)
> >
> > Thanks,
> > Eli
> >
>


[jira] [Created] (MAPREDUCE-2901) Build should fail sanely if protoc isn't on PATH

2011-08-29 Thread Vinod Kumar Vavilapalli (JIRA)
Build should fail sanely if protoc isn't on PATH


 Key: MAPREDUCE-2901
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2901
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.0
Reporter: Vinod Kumar Vavilapalli


It now fails with "[ERROR] Failed to execute goal 
org.codehaus.mojo:exec-maven-plugin:1.2:exec (generate-sources) on project 
hadoop-yarn-api: Command execution failed. Process exited with an error: 1 (Exit 
value: 1) -> [Help 1]", which doesn't help much.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira