Re: test-patch and native code compile

2012-09-06 Thread Hemanth Yamijala
Thanks. I've opened https://issues.apache.org/jira/browse/HADOOP-8776.
Will work on options and a patch there.

Hemanth

On Fri, Sep 7, 2012 at 12:24 AM, Colin McCabe  wrote:
> We could also call "uname" from test-patch.sh and skip running native
> tests on Mac OS X.
>
> I also think that HADOOP-7147 should be open rather than "won't fix,"
> as Alejandro commented.  Allen Wittenauer closed it as "won't fix"
> because he personally did not intend to fix it, but that doesn't mean
> it's not a bug.
>
> cheers.
> Colin
>
>
> On Thu, Sep 6, 2012 at 8:29 AM, Eli Collins  wrote:
>> Yeah, we want Jenkins to run with native. How about making native
>> optional in test-patch via a flag and updating the Jenkins jobs to use
>> it?
>>
>> On Thu, Sep 6, 2012 at 7:25 AM, Alejandro Abdelnur  wrote:
>>> Makes sense, though the Jenkins runs should continue to run w/ native, 
>>> right?
>>>
>>> On Thu, Sep 6, 2012 at 12:49 AM, Hemanth Yamijala  
>>> wrote:
>>>> Hi,
>>>>
>>>> The test-patch script in the Hadoop source runs a native compile with the
>>>> patch. On platforms like Mac OS X, there are issues with the native
>>>> compile. For example, we run into HADOOP-7147, which has been resolved
>>>> as Won't Fix.
>>>>
>>>> Hence, should we have a switch in test-patch to not run the native
>>>> compile? I could open a JIRA and fix it, if that's OK?
>>>>
>>>> Thanks
>>>> hemanth
>>>
>>>
>>>
>>> --
>>> Alejandro


[jira] [Created] (HADOOP-8776) Provide an option in test-patch that can enable / disable compiling native code

2012-09-06 Thread Hemanth Yamijala (JIRA)
Hemanth Yamijala created HADOOP-8776:


 Summary: Provide an option in test-patch that can enable / disable 
compiling native code
 Key: HADOOP-8776
 URL: https://issues.apache.org/jira/browse/HADOOP-8776
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Hemanth Yamijala
Assignee: Hemanth Yamijala
Priority: Minor


The test-patch script in the Hadoop source runs a native compile with the patch. 
On platforms like Mac OS X, there are issues with the native compile that make 
it difficult to use test-patch. This JIRA is to try to provide an option to make 
the native compilation optional.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8775) MR2 distcp permits non-positive value to -bandwidth option which causes job never to complete

2012-09-06 Thread Sandy Ryza (JIRA)
Sandy Ryza created HADOOP-8775:
--

 Summary: MR2 distcp permits non-positive value to -bandwidth 
option which causes job never to complete
 Key: HADOOP-8775
 URL: https://issues.apache.org/jira/browse/HADOOP-8775
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Sandy Ryza


The likelihood that someone would want to enter a non-positive value for 
-bandwidth seems really low. However, if a non-positive value is specified, the 
job never completes; it just gets stuck at map 100%. Luckily, a positive value 
always leads to the job completing.
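
Until distcp validates this itself, a wrapper can reject the bad value up 
front. A minimal sketch (the check_bandwidth helper is hypothetical, not part 
of distcp):

```shell
# Hypothetical guard: distcp hangs at map 100% when -bandwidth <= 0,
# so refuse to launch the job unless the value is a positive integer.
check_bandwidth() {
  if [ "$1" -gt 0 ] 2>/dev/null; then
    return 0
  fi
  echo "error: -bandwidth must be a positive integer, got '$1'" >&2
  return 1
}

check_bandwidth 0 || echo "rejected 0"
check_bandwidth 1 && echo "accepted 1"
```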

bash-4.1$ hadoop distcp -bandwidth 0 
hdfs://c1204.hal.cloudera.com:17020/user/hdfs/in-dir 
hdfs://c1204.hal.cloudera.com:17020/user/hdfs/in-dir58
hadoop distcp -bandwidth 0 hdfs://c1204.hal.cloudera.com:17020/user/hdfs/in-dir 
hdfs://c1204.hal.cloudera.com:17020/user/hdfs/in-dir58
12/05/23 15:53:01 INFO tools.DistCp: Input Options: 
DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
copyStrategy='uniformsize', sourceFileListing=null, 
sourcePaths=[hdfs://c1204.hal.cloudera.com:17020/user/hdfs/in-dir], 
targetPath=hdfs://c1204.hal.cloudera.com:17020/user/hdfs/in-dir58}
12/05/23 15:53:02 WARN conf.Configuration: io.sort.mb is deprecated. Instead, 
use mapreduce.task.io.sort.mb
12/05/23 15:53:02 WARN conf.Configuration: io.sort.factor is deprecated. 
Instead, use mapreduce.task.io.sort.factor
12/05/23 15:53:02 INFO util.NativeCodeLoader: Loaded the native-hadoop library
12/05/23 15:53:03 INFO mapreduce.JobSubmitter: number of splits:3
12/05/23 15:53:04 WARN conf.Configuration: mapred.jar is deprecated. Instead, 
use mapreduce.job.jar
12/05/23 15:53:04 WARN conf.Configuration: 
mapred.map.tasks.speculative.execution is deprecated. Instead, use 
mapreduce.map.speculative
12/05/23 15:53:04 WARN conf.Configuration: mapred.reduce.tasks is deprecated. 
Instead, use mapreduce.job.reduces
12/05/23 15:53:04 WARN conf.Configuration: mapred.mapoutput.value.class is 
deprecated. Instead, use mapreduce.map.output.value.class
12/05/23 15:53:04 WARN conf.Configuration: mapreduce.map.class is deprecated. 
Instead, use mapreduce.job.map.class
12/05/23 15:53:04 WARN conf.Configuration: mapred.job.name is deprecated. 
Instead, use mapreduce.job.name
12/05/23 15:53:04 WARN conf.Configuration: mapreduce.inputformat.class is 
deprecated. Instead, use mapreduce.job.inputformat.class
12/05/23 15:53:04 WARN conf.Configuration: mapred.output.dir is deprecated. 
Instead, use mapreduce.output.fileoutputformat.outputdir
12/05/23 15:53:04 WARN conf.Configuration: mapreduce.outputformat.class is 
deprecated. Instead, use mapreduce.job.outputformat.class
12/05/23 15:53:04 WARN conf.Configuration: mapred.map.tasks is deprecated. 
Instead, use mapreduce.job.maps
12/05/23 15:53:04 WARN conf.Configuration: mapred.mapoutput.key.class is 
deprecated. Instead, use mapreduce.map.output.key.class
12/05/23 15:53:04 WARN conf.Configuration: mapred.working.dir is deprecated. 
Instead, use mapreduce.job.working.dir
12/05/23 15:53:04 INFO mapred.ResourceMgrDelegate: Submitted application 
application_1337808305464_0014 to ResourceManager at 
c1204.hal.cloudera.com/172.29.98.195:8040
12/05/23 15:53:04 INFO mapreduce.Job: The url to track the job: 
http://auto0:8088/proxy/application_1337808305464_0014/
12/05/23 15:53:04 INFO tools.DistCp: DistCp job-id: job_1337808305464_0014
12/05/23 15:53:04 INFO mapreduce.Job: Running job: job_1337808305464_0014
12/05/23 15:53:09 INFO mapreduce.Job: Job job_1337808305464_0014 running in 
uber mode : false
12/05/23 15:53:09 INFO mapreduce.Job:  map 0% reduce 0%
12/05/23 15:53:14 INFO mapreduce.Job:  map 33% reduce 0%
12/05/23 15:53:19 INFO mapreduce.Job:  map 100% reduce 0%



[jira] [Reopened] (HADOOP-7147) setnetgrent in native code is not portable

2012-09-06 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe reopened HADOOP-7147:
--

  Assignee: (was: Allen Wittenauer)

"Won't fix" makes it sound like this is not a valid bug, which it is.

> setnetgrent in native code is not portable
> --
>
> Key: HADOOP-7147
> URL: https://issues.apache.org/jira/browse/HADOOP-7147
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 0.22.0, 0.23.0
>Reporter: Todd Lipcon
> Attachments: hadoop-7147.patch, HADOOP-7147.patch
>
>
> HADOOP-6864 uses the setnetgrent function in a way which is not compatible 
> with BSD APIs, where the call returns void rather than int. This prevents the 
> native libs from building on OSX, for example.



Re: test-patch and native code compile

2012-09-06 Thread Colin McCabe
We could also call "uname" from test-patch.sh and skip running native
tests on Mac OS X.
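
A minimal sketch of that check, assuming test-patch.sh collects its Maven 
flags in a variable (the NATIVE_PROFILE name is hypothetical):

```shell
# Disable the native profile on platforms where the native build is known
# to break (e.g. Mac OS X, where HADOOP-7147 bites).
# NATIVE_PROFILE is a hypothetical variable consumed by the mvn invocation.
if [ "$(uname -s)" = "Darwin" ]; then
  NATIVE_PROFILE=""
else
  NATIVE_PROFILE="-Pnative"
fi
echo "native profile flags: ${NATIVE_PROFILE}"
```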

I also think that HADOOP-7147 should be open rather than "won't fix,"
as Alejandro commented.  Allen Wittenauer closed it as "won't fix"
because he personally did not intend to fix it, but that doesn't mean
it's not a bug.

cheers.
Colin


On Thu, Sep 6, 2012 at 8:29 AM, Eli Collins  wrote:
> Yeah, we want Jenkins to run with native. How about making native
> optional in test-patch via a flag and updating the Jenkins jobs to use
> it?
>
> On Thu, Sep 6, 2012 at 7:25 AM, Alejandro Abdelnur  wrote:
>> Makes sense, though the Jenkins runs should continue to run w/ native, right?
>>
>> On Thu, Sep 6, 2012 at 12:49 AM, Hemanth Yamijala  wrote:
>>> Hi,
>>>
>>> The test-patch script in the Hadoop source runs a native compile with the
>>> patch. On platforms like Mac OS X, there are issues with the native
>>> compile. For example, we run into HADOOP-7147, which has been resolved
>>> as Won't Fix.
>>>
>>> Hence, should we have a switch in test-patch to not run the native
>>> compile? I could open a JIRA and fix it, if that's OK?
>>>
>>> Thanks
>>> hemanth
>>
>>
>>
>> --
>> Alejandro


[jira] [Resolved] (HADOOP-8536) Problem with -Dproperty=value option on windows hadoop

2012-09-06 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha resolved HADOOP-8536.


Resolution: Duplicate

Dup of HADOOP-8739 

> Problem with -Dproperty=value option on windows hadoop
> --
>
> Key: HADOOP-8536
> URL: https://issues.apache.org/jira/browse/HADOOP-8536
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Trupti Dhavle
>
> While running the Java examples, the -Dproperty=value option to the hadoop 
> command is not read correctly.
> TERASORT COMMAND: 
> C:\hdp\branch-1-win\bin\hadoop   jar 
> C:\hdp\branch-1-win\build\hadoop-examples-1.1.0-SNAPSHOT.jar terasort  
> -Dmapreduce.reduce.input.limit=-1 teraInputDir teraOutputDir  
> Error:
> 12/06/27 10:28:26 INFO terasort.TeraSort: starting
> org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: 
> hdfs://localhost:8020/user/Administrator/-1
> It tries to look in a directory named -1 instead of teraInputDir.
> On setting "echo on" in the cmd scripts, I noticed that the "=" sign 
> disappears in the command passed to the JVM:
> terasort -Dmapreduce.reduce.input.limit -1 teraInputDir teraOutputDir 
> To make the option read properly, quotes around "-Dproperty=value" are 
> required.
> This JIRA is to track fixing this issue.



Re: Branch 2 release names

2012-09-06 Thread Arun C Murthy
To be clear, I think we all seem to agree that we continue to make 
hadoop-2.0.2, hadoop-2.0.3, etc. with alpha/beta tags as appropriate until we 
hit 'GA', at which point we release hadoop-2.1.0. Makes sense?

thanks,
Arun

On Sep 6, 2012, at 11:38 AM, Arun C Murthy wrote:

> Uh, I meant 'create hadoop-2.0.2-alpha' release off branch-2.
> 
> On Sep 6, 2012, at 11:18 AM, Arun C Murthy wrote:
> 
>> Sounds fine.
>> 
>> For now, I think we can delete branch-2.1.0-alpha, create branch-2.0.2-alpha 
>> release off branch-2 and eventually make branch-2.1.0 as the stable release 
>> in the future.
>> 
>> Arun
>> 
>> On Sep 4, 2012, at 11:55 AM, Owen O'Malley wrote:
>> 
>>> While cleaning up the subversion branches, I thought more about the
>>> branch 2 release names. I'm concerned if we backtrack and reuse
>>> release numbers it will be extremely confusing to users. It also
>>> creates problems for tools like Maven that parse version numbers and
>>> expect a left to right release numbering scheme (eg. 2.1.1-alpha >
>>> 2.1.0). It also seems better to keep on the 2.0.x minor release until
>>> after we get a GA release off of the 2.0 branch.
>>> 
>>> Therefore, I'd like to propose:
>>> 1. rename branch-2.0.1-alpha -> branch-2.0
>>> 2. delete branch-2.1.0-alpha
>>> 3. stabilizing goes into branch-2.0 until it gets to GA
>>> 4. features go into branch-2 and will be branched into branch-2.1 later
>>> 5. The release tags can have the alpha/beta tags on them.
>>> 
>>> Thoughts?
>>> 
>>> -- Owen
>> 
>> --
>> Arun C. Murthy
>> Hortonworks Inc.
>> http://hortonworks.com/
>> 
>> 
> 
> --
> Arun C. Murthy
> Hortonworks Inc.
> http://hortonworks.com/
> 
> 

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/




Re: Branch 2 release names

2012-09-06 Thread Arun C Murthy
Uh, I meant 'create hadoop-2.0.2-alpha' release off branch-2.

On Sep 6, 2012, at 11:18 AM, Arun C Murthy wrote:

> Sounds fine.
> 
> For now, I think we can delete branch-2.1.0-alpha, create branch-2.0.2-alpha 
> release off branch-2 and eventually make branch-2.1.0 as the stable release 
> in the future.
> 
> Arun
> 
> On Sep 4, 2012, at 11:55 AM, Owen O'Malley wrote:
> 
>> While cleaning up the subversion branches, I thought more about the
>> branch 2 release names. I'm concerned if we backtrack and reuse
>> release numbers it will be extremely confusing to users. It also
>> creates problems for tools like Maven that parse version numbers and
>> expect a left to right release numbering scheme (eg. 2.1.1-alpha >
>> 2.1.0). It also seems better to keep on the 2.0.x minor release until
>> after we get a GA release off of the 2.0 branch.
>> 
>> Therefore, I'd like to propose:
>> 1. rename branch-2.0.1-alpha -> branch-2.0
>> 2. delete branch-2.1.0-alpha
>> 3. stabilizing goes into branch-2.0 until it gets to GA
>> 4. features go into branch-2 and will be branched into branch-2.1 later
>> 5. The release tags can have the alpha/beta tags on them.
>> 
>> Thoughts?
>> 
>> -- Owen
> 
> --
> Arun C. Murthy
> Hortonworks Inc.
> http://hortonworks.com/
> 
> 

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/




[jira] [Created] (HADOOP-8774) Incorrect maven configuration causes mrapp-generated-classpath file to be generated in multiple places

2012-09-06 Thread Hitesh Shah (JIRA)
Hitesh Shah created HADOOP-8774:
---

 Summary: Incorrect maven configuration causes 
mrapp-generated-classpath file to be generated in multiple places
 Key: HADOOP-8774
 URL: https://issues.apache.org/jira/browse/HADOOP-8774
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.1.0-alpha
Reporter: Hitesh Shah
Priority: Minor


Running a simple mvn clean install at the top of the tree shows logs such as 
the ones below in the HDFS tree:

[INFO] --- maven-dependency-plugin:2.1:build-classpath (build-classpath) @ 
hadoop-hdfs-httpfs ---
[INFO] Wrote classpath file 
'/Users/Hitesh/dev/hadoop-common/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/classes/mrapp-generated-classpath'.




Re: Branch 2 release names

2012-09-06 Thread Arun C Murthy
Sounds fine.

For now, I think we can delete branch-2.1.0-alpha, create branch-2.0.2-alpha 
release off branch-2 and eventually make branch-2.1.0 as the stable release in 
the future.

Arun

On Sep 4, 2012, at 11:55 AM, Owen O'Malley wrote:

> While cleaning up the subversion branches, I thought more about the
> branch 2 release names. I'm concerned if we backtrack and reuse
> release numbers it will be extremely confusing to users. It also
> creates problems for tools like Maven that parse version numbers and
> expect a left to right release numbering scheme (eg. 2.1.1-alpha >
> 2.1.0). It also seems better to keep on the 2.0.x minor release until
> after we get a GA release off of the 2.0 branch.
> 
> Therefore, I'd like to propose:
> 1. rename branch-2.0.1-alpha -> branch-2.0
> 2. delete branch-2.1.0-alpha
> 3. stabilizing goes into branch-2.0 until it gets to GA
> 4. features go into branch-2 and will be branched into branch-2.1 later
> 5. The release tags can have the alpha/beta tags on them.
> 
> Thoughts?
> 
> -- Owen

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/




Re: Branch 2 release names

2012-09-06 Thread Andrew Purtell
No, thanks Owen.

On Thu, Sep 6, 2012 at 9:27 AM, Owen O'Malley  wrote:

> On Wed, Sep 5, 2012 at 11:04 AM, Andrew Purtell 
> wrote:
> > If it's all the same to you, I'd prefer you leave the branch, or at
> least a
> > tag, and just ignore it. We're pretty far away from branch-2.1.0
> following
> > branch-2 but started from that point.
>
> In Subversion you don't actually ever delete anything. For example, I've
> deleted the branch-0.20-append, but you can still get it from:
>
> %  svn ls
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-append@1380308
>
> Given that would you like a dev branch (hbase-branch-2.0?)?
>
> -- Owen
>


Re: Branch 2 release names

2012-09-06 Thread Owen O'Malley
On Wed, Sep 5, 2012 at 11:04 AM, Andrew Purtell  wrote:
> If it's all the same to you, I'd prefer you leave the branch, or at least a
> tag, and just ignore it. We're pretty far away from branch-2.1.0 following
> branch-2 but started from that point.

In Subversion you don't actually ever delete anything. For example, I've
deleted the branch-0.20-append, but you can still get it from:

%  svn ls 
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-append@1380308

Given that would you like a dev branch (hbase-branch-2.0?)?

-- Owen


[jira] [Resolved] (HADOOP-8583) Globbing is not correctly handled in a few cases on Windows

2012-09-06 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li resolved HADOOP-8583.


Resolution: Duplicate

dup of HADOOP-8739

> Globbing is not correctly handled in a few cases on Windows
> ---
>
> Key: HADOOP-8583
> URL: https://issues.apache.org/jira/browse/HADOOP-8583
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: Windows
>Reporter: Ramya Sunil
>
> Glob handling fails in a few cases in a Windows environment.
> For example:
> {noformat}
> c:\> hadoop dfs -ls /
> Found 2 items
> drwxrwxrwx   - Administrator supergroup  0 2012-07-06 15:00 /tmp
> drwxr-xr-x   - Administrator supergroup  0 2012-07-06 18:52 /user
> c:\> hadoop dfs -ls /tmpInvalid*
> Found 2 items
> drwxr-xr-x   - Administrator supergroup  0 2012-07-10 18:50 
> /user/Administrator/sortInputDir
> drwxr-xr-x   - Administrator supergroup  0 2012-07-10 18:50 
> /user/Administrator/sortOutputDir
> c:\> hadoop dfs -rmr /tmp/*
> Usage: java FsShell [-rmr [-skipTrash]  ]
> {noformat}



Re: test-patch and native code compile

2012-09-06 Thread Eli Collins
Yeah, we want Jenkins to run with native. How about making native
optional in test-patch via a flag and updating the Jenkins jobs to use
it?

On Thu, Sep 6, 2012 at 7:25 AM, Alejandro Abdelnur  wrote:
> Makes sense, though the Jenkins runs should continue to run w/ native, right?
>
> On Thu, Sep 6, 2012 at 12:49 AM, Hemanth Yamijala  wrote:
>> Hi,
>>
>> The test-patch script in the Hadoop source runs a native compile with the
>> patch. On platforms like Mac OS X, there are issues with the native
>> compile. For example, we run into HADOOP-7147, which has been resolved
>> as Won't Fix.
>>
>> Hence, should we have a switch in test-patch to not run the native
>> compile? I could open a JIRA and fix it, if that's OK?
>>
>> Thanks
>> hemanth
>
>
>
> --
> Alejandro


Re: test-patch and native code compile

2012-09-06 Thread Alejandro Abdelnur
Makes sense, though the Jenkins runs should continue to run w/ native, right?

On Thu, Sep 6, 2012 at 12:49 AM, Hemanth Yamijala  wrote:
> Hi,
>
> The test-patch script in the Hadoop source runs a native compile with the
> patch. On platforms like Mac OS X, there are issues with the native
> compile. For example, we run into HADOOP-7147, which has been resolved
> as Won't Fix.
>
> Hence, should we have a switch in test-patch to not run the native
> compile? I could open a JIRA and fix it, if that's OK?
>
> Thanks
> hemanth



-- 
Alejandro


[jira] [Resolved] (HADOOP-8487) Many HDFS tests use a test path intended for local file system tests

2012-09-06 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic resolved HADOOP-8487.


Resolution: Fixed

Resolving as this is committed to branch-1-win. This way, the state of active 
Jiras is up to date under HADOOP-8645. Will reference this Jira once we get to 
fixing this in trunk.

> Many HDFS tests use a test path intended for local file system tests
> 
>
> Key: HADOOP-8487
> URL: https://issues.apache.org/jira/browse/HADOOP-8487
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Attachments: HADOOP-8487-branch-1-win(2).patch, 
> HADOOP-8487-branch-1-win(3).patch, HADOOP-8487-branch-1-win(3).update.patch, 
> HADOOP-8487-branch-1-win.alternate.patch, HADOOP-8487-branch-1-win.patch
>
>
> Many tests use a test path intended for local tests, set up by the build 
> environment. In some cases the tests fail on platforms such as Windows 
> because the path contains a "c:".



Build failed in Jenkins: Hadoop-Common-0.23-Build #364

2012-09-06 Thread Apache Jenkins Server
See 

Changes:

[tgraves] merge -r 1381459:1381460 from branch-2. FIXES: YARN-87

[bobby] svn merge -c 1381317 FIXES: YARN-68. NodeManager will refuse to 
shutdown indefinitely due to container log aggregation (daryn via bobby)

--
[...truncated 11747 lines...]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
[WARNING] Failed to load coverage recording 
RecordingTranscripts.FileRef[datafile=
 testRecording=false, typedTestId=-1, runId=0, hash=20573914, 
timestamp=1346922490177]
java.io.EOFException: Unexpected end of ZLIB input stream
at java.util.zip.InflaterInputStream.fill(InflaterInputStream.java:223)
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
at java.io.DataInputStream.readFully(DataInputStream.java:178)
at java.io.DataInputStream.readLong(DataInputStream.java:399)
at com.cenqua.clover.BaseRecording$Header.read(BaseRecording.java:65)
at com.cenqua.clover.BaseRecording$Header.&lt;init&gt;(BaseRecording.java:55)
at 
com.cenqua.clover.RecordingTranscripts.readCoverageFromDisk(RecordingTranscripts.java:42)
at 
com.cenqua.clover.RecordingTranscripts$FileRef.read(RecordingTranscripts.java:356)
at 
com.cenqua.clover.CoverageDataCollator.collateRecordingFiles(CoverageDataCollator.java:116)
at 
com.cenqua.clover.CoverageDataCollator.loadCoverageData(CoverageDataCollator.java:66)
at 
com.cenqua.clover.CloverDatabase.loadCoverageData(CloverDatabase.java:164)
at 
com.cenqua.clover.CloverDatabase.loadCoverageData(CloverDatabase.java:159)
at 
com.cenqua.clover.reporters.CloverReportConfig.getCoverageDatabase(CloverReportConfig.java:342)
at 
com.cenqua.clover.reporters.Current.getCoverageDatabase(Current.java:126)
at 
com.cenqua.clover.reporters.CloverReporter.&lt;init&gt;(CloverReporter.java:33)
at 
com.cenqua.clover.reporters.html.HtmlReporter.&lt;init&gt;(HtmlReporter.java:125)
at 
com.cenqua.clover.reporters.CloverReporter.buildReporter(CloverReporter.java:72)
at 
com.cenqua.clover.tasks.CloverReportTask.generateReports(CloverReportTask.java:428)
at 
com.cenqua.clover.tasks.CloverReportTask.cloverExecute(CloverReportTask.java:385)
at 
com.cenqua.clover.tasks.AbstractCloverTask.execute(AbstractCloverTask.java:55)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288)
at sun.reflect.GeneratedMethodAccessor192.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:357)
at org.apache.tools.ant.Target.performTasks(Target.java:385)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1337)
at org.apache.tools.ant.Project.executeTarget(Project.java:1306)
at 
com.atlassian.maven.plugin.clover.CloverReportMojo.createReport(CloverReportMojo.java:424)
at 
com.atlassian.maven.plugin.clover.CloverReportMojo.createAllReportTypes(CloverReportMojo.java:372)
at 
com.atlassian.maven.plugin.clover.CloverReportMojo.executeReport(CloverReportMojo.java:356)
at 
org.apache.maven.reporting.AbstractMavenReport.generate(AbstractMavenReport.java:101)
at 
org.apache.maven.reporting.AbstractMavenReport.execute(AbstractMavenReport.java:66)
at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execut

Build failed in Jenkins: Hadoop-Common-trunk #525

2012-09-06 Thread Apache Jenkins Server
See 

Changes:

[eli] HDFS-3828. Block Scanner rescans blocks too frequently. Contributed by 
Andy Isaacson

[tgraves] YARN-87. NM ResourceLocalizationService does not set permissions of 
local cache directories (Jason Lowe via tgraves)

[atm] HADOOP-8766. FileContextMainOperationsBaseTest should randomize the root 
dir. Contributed by Colin Patrick McCabe.

[eli] HADOOP-8648. libhadoop: native CRC32 validation crashes when 
io.bytes.per.checksum=1. Contributed by Colin Patrick McCabe

[eli] HADOOP-8770. NN should not RPC to self to find trash defaults. 
Contributed by Eli Collins

[bobby] YARN-68. NodeManager will refuse to shutdown indefinitely due to 
container log aggregation (daryn via bobby)

[eli] Fix MAPREDUCE-4580 build breakage.

[todd] HDFS-3054. distcp -skipcrccheck has no effect. Contributed by Colin 
Patrick McCabe.

[vinodkv] YARN-83. Change package of YarnClient to org.apache.hadoop. 
Contributed by Bikas Saha.

--
[...truncated 26411 lines...]
[DEBUG]   (s) debug = false
[DEBUG]   (s) effort = Default
[DEBUG]   (s) failOnError = true
[DEBUG]   (s) findbugsXmlOutput = false
[DEBUG]   (s) findbugsXmlOutputDirectory = 

[DEBUG]   (s) fork = true
[DEBUG]   (s) includeTests = false
[DEBUG]   (s) localRepository =id: local
  url: file:///home/jenkins/.m2/repository/
   layout: none

[DEBUG]   (s) maxHeap = 512
[DEBUG]   (s) nested = false
[DEBUG]   (s) outputDirectory = 

[DEBUG]   (s) outputEncoding = UTF-8
[DEBUG]   (s) pluginArtifacts = 
[org.codehaus.mojo:findbugs-maven-plugin:maven-plugin:2.3.2:, 
com.google.code.findbugs:bcel:jar:1.3.9:compile, 
org.codehaus.gmaven:gmaven-mojo:jar:1.3:compile, 
org.codehaus.gmaven.runtime:gmaven-runtime-api:jar:1.3:compile, 
org.codehaus.gmaven.feature:gmaven-feature-api:jar:1.3:compile, 
org.codehaus.gmaven.runtime:gmaven-runtime-1.5:jar:1.3:compile, 
org.codehaus.gmaven.feature:gmaven-feature-support:jar:1.3:compile, 
org.codehaus.groovy:groovy-all-minimal:jar:1.5.8:compile, 
org.apache.ant:ant:jar:1.7.1:compile, 
org.apache.ant:ant-launcher:jar:1.7.1:compile, jline:jline:jar:0.9.94:compile, 
org.codehaus.plexus:plexus-interpolation:jar:1.1:compile, 
org.codehaus.gmaven:gmaven-plugin:jar:1.3:compile, 
org.codehaus.gmaven.runtime:gmaven-runtime-loader:jar:1.3:compile, 
org.codehaus.gmaven.runtime:gmaven-runtime-support:jar:1.3:compile, 
org.sonatype.gshell:gshell-io:jar:2.0:compile, 
com.thoughtworks.qdox:qdox:jar:1.10:compile, 
org.apache.maven.shared:file-management:jar:1.2.1:compile, 
org.apache.maven.shared:maven-shared-io:jar:1.1:compile, 
commons-lang:commons-lang:jar:2.4:compile, 
org.slf4j:slf4j-api:jar:1.5.10:compile, 
org.sonatype.gossip:gossip:jar:1.2:compile, 
org.apache.maven.reporting:maven-reporting-impl:jar:2.1:compile, 
commons-validator:commons-validator:jar:1.2.0:compile, 
commons-beanutils:commons-beanutils:jar:1.7.0:compile, 
commons-digester:commons-digester:jar:1.6:compile, 
commons-logging:commons-logging:jar:1.0.4:compile, oro:oro:jar:2.0.8:compile, 
xml-apis:xml-apis:jar:1.0.b2:compile, 
org.codehaus.groovy:groovy-all:jar:1.7.4:compile, 
org.apache.maven.reporting:maven-reporting-api:jar:3.0:compile, 
org.apache.maven.doxia:doxia-core:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-logging-api:jar:1.1.3:compile, 
xerces:xercesImpl:jar:2.9.1:compile, 
commons-httpclient:commons-httpclient:jar:3.1:compile, 
commons-codec:commons-codec:jar:1.2:compile, 
org.apache.maven.doxia:doxia-sink-api:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-decoration-model:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-site-renderer:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-module-xhtml:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-module-fml:jar:1.1.3:compile, 
org.codehaus.plexus:plexus-i18n:jar:1.0-beta-7:compile, 
org.codehaus.plexus:plexus-velocity:jar:1.1.7:compile, 
org.apache.velocity:velocity:jar:1.5:compile, 
commons-collections:commons-collections:jar:3.2:compile, 
org.apache.maven.shared:maven-doxia-tools:jar:1.2.1:compile, 
commons-io:commons-io:jar:1.4:compile, 
com.google.code.findbugs:findbugs-ant:jar:1.3.9:compile, 
com.google.code.findbugs:findbugs:jar:1.3.9:compile, 
com.google.code.findbugs:jsr305:jar:1.3.9:compile, 
com.google.code.findbugs:jFormatString:jar:1.3.9:compile, 
com.google.code.findbugs:annotations:jar:1.3.9:compile, 
dom4j:dom4j:jar:1.6.1:compile, jaxen:jaxen:jar:1.1.1:compile, 
jdom:jdom:jar:1.0:compile, xom:xom:jar:1.0:compile, 
xerces:xmlParserAPIs:jar:2.6.2:compile, xalan:xalan:jar:2.6.0:compile, 
com.ibm.icu:icu4j:jar:2.6.1:compile, asm:asm:jar:3.1:compile, 
asm:asm-analysis:jar:3.1:compile, asm:asm-commons:jar:3.1:compile, 
asm:asm-util:jar:3.1:compile, asm:asm-tree:jar:3.1:compile, 
asm:asm-xml:jar:3.1:co

[jira] [Created] (HADOOP-8773) Improve Server#getRemoteAddress by utilizing Server.Connection.hostAddress

2012-09-06 Thread binlijin (JIRA)
binlijin created HADOOP-8773:


 Summary: Improve Server#getRemoteAddress by utilizing 
Server.Connection.hostAddress 
 Key: HADOOP-8773
 URL: https://issues.apache.org/jira/browse/HADOOP-8773
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: binlijin
Priority: Minor






test-patch and native code compile

2012-09-06 Thread Hemanth Yamijala
Hi,

The test-patch script in the Hadoop source runs a native compile with the
patch. On platforms like Mac OS X, there are issues with the native
compile. For example, we run into HADOOP-7147, which has been resolved
as Won't Fix.

Hence, should we have a switch in test-patch to not run the native
compile? I could open a JIRA and fix it, if that's OK?

Thanks
hemanth