Build failed in Jenkins: Hadoop-Common-0.23-Build #427

2012-11-09 Thread Apache Jenkins Server
See 

Changes:

[daryn] HDFS-3990. NN's health report has severe performance problems (daryn)

[bobby] svn merge -c 1407171 FIXES: YARN-186. Coverage fixing 
LinuxContainerExecutor (Aleksey Gorshkov via bobby)

[bobby] svn merge -c 1407118 FIXES: MAPREDUCE-4772. Fetch failures can take way 
too long for a map to be restarted (bobby)

--
[...truncated 11013 lines...]

[INFO] 
[INFO] --- maven-clover2-plugin:3.0.5:clover (clover) @ hadoop-auth ---
[INFO] Using /default-clover-report descriptor.
[INFO] Using Clover report descriptor: /tmp/mvn288244137686378946resource
[INFO] Clover Version 3.0.2, built on April 13 2010 (build-790)
[INFO] Loaded from: 
/home/jenkins/.m2/repository/com/cenqua/clover/clover/3.0.2/clover-3.0.2.jar
[INFO] Clover: Open Source License registered to Apache.
[INFO] Clover is enabled with initstring 
'
[WARNING] Clover historical directory 
[
 does not exist, skipping Clover historical report generation 
([
[INFO] Clover Version 3.0.2, built on April 13 2010 (build-790)
[INFO] Loaded from: 
/home/jenkins/.m2/repository/com/cenqua/clover/clover/3.0.2/clover-3.0.2.jar
[INFO] Clover: Open Source License registered to Apache.
[INFO] Loading coverage database from: 
'
[INFO] Writing HTML report to 
'
[INFO] Done. Processed 4 packages in 854ms (213ms per package).
[INFO] Clover Version 3.0.2, built on April 13 2010 (build-790)
[INFO] Loaded from: 
/home/jenkins/.m2/repository/com/cenqua/clover/clover/3.0.2/clover-3.0.2.jar
[INFO] Clover: Open Source License registered to Apache.
[INFO] Clover is enabled with initstring 
'
[WARNING] Clover historical directory 
[
 does not exist, skipping Clover historical report generation 
([
[INFO] Clover Version 3.0.2, built on April 13 2010 (build-790)
[INFO] Loaded from: 
/home/jenkins/.m2/repository/com/cenqua/clover/clover/3.0.2/clover-3.0.2.jar
[INFO] Clover: Open Source License registered to Apache.
[INFO] Loading coverage database from: 
'
[INFO] Writing report to 
'
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop Auth Examples 0.23.5-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-auth-examples 
---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-dependency-plugin:2.1:build-classpath (build-classpath) @ 
hadoop-auth-examples ---
[INFO] Wrote classpath file 
'
[INFO] 
[INFO] --- maven-clover2-plugin:3.0.5:setup (setup) @ hadoop-auth-examples ---
[INFO] Clover Version 3.0.2, built on April 13 2010 (build-790)
[INFO] Loaded from: 
/home/jenkins/.m2/repository/com/cenqua/clover/clover/3.0.2/clover-3.0.2.jar
[INFO] Clover: Open Source License registered to Apache.
[INFO] Creating new database at 
'
[INFO] Processing files at 1.6 source level.
[INFO] Clover all over. Instrumented 3 files (1 package).
[INFO] Elapsed time = 0.014 secs. (214.286 files/sec, 20,214.285 srclines/sec)
[INFO] No Clover instrumentation done on source files in: 
[
 as no matching sources files found
[INFO] 
[INFO] --- maven-resources-plugin:2.2:resource

[jira] [Resolved] (HADOOP-9006) Winutils should keep Administrators privileges intact

2012-11-09 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-9006.
-

Resolution: Fixed

+1 for the change. Committed the patch to branch-1-win. Thank you Chuan.

> Winutils should keep Administrators privileges intact
> -
>
> Key: HADOOP-9006
> URL: https://issues.apache.org/jira/browse/HADOOP-9006
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1-win
>Reporter: Chuan Liu
>Assignee: Chuan Liu
>Priority: Minor
> Fix For: 1-win
>
> Attachments: HADOOP-9006-branch-1-win.patch
>
>
> This issue was originally discovered by [~ivanmi]. Cite his words as follows.
> {quote}
> The current by-design behavior is for winutils to ACL the folders only for the 
> user passed in through chmod/chown. This causes some unnatural side effects in 
> cases where Hadoop services run in the context of a non-admin user. For 
> example, Administrators on the box will no longer be able to:
>  - delete files created in the context of Hadoop services (other users)
>  - check the size of the folder where HDFS blocks are stored
> {quote}
> In my opinion, it is natural for some special accounts on Windows to be able 
> to access all the folders, including Hadoop folders. This is similar to Linux, 
> where the root user can always access any directory regardless of the 
> permissions set on those directories.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[PROPOSAL] 1.1.1 and 1.2.0 scheduling

2012-11-09 Thread Matt Foley
Hi all,
Hadoop 1.1.0 came out on Oct 12.  I think there's enough interest to do a
maintenance release with some important patches.  I propose to code freeze
branch-1.1 a week from today, Fri 16 Nov, and have a 1.1.1 release
candidate ready for eval & vote starting Mon 19 Nov.

There's also a lot of good new stuff in branch-1.  I suggest that on Dec.1,
I create a branch-1.2 from branch-1, with a code freeze on Dec.7, and I'll
create a 1.2.0 release candidate on Mon 10 Dec.

Please provide your +1 if this is acceptable to you.

For 1.1.1, I propose to include the below, and I am of course open to
additional high-priority patches if they are reliable and can be committed
to branch-1.1 by the code freeze date.  Let's try to stick to serious bugs
and not new features.  Thanks!

--Matt Foley
Release Manager

HADOOP-8823. ant package target should not depend on cn-docs. (szetszwo)

HADOOP-8878. Uppercase namenode hostname causes hadoop dfs calls with
webhdfs filesystem and fsck to fail when security is on.
(Arpit Gupta via suresh)

HADOOP-8882. Uppercase namenode host name causes fsck to fail when
useKsslAuth is on. (Arpit Gupta via suresh)

HADOOP-8995. Remove unnecessary bogus exception from Configuration.java.
(Jing Zhao via suresh)

HDFS-2815. Namenode is not coming out of safemode when we perform
(NN crash + restart). Also FSCK report shows blocks missed. (umamahesh)

HDFS-3791. HDFS-173 Backport - Namenode will not block until a large
directory deletion completes. It allows other operations when the
deletion is in progress. (umamahesh via suresh)

HDFS-4134. hadoop namenode and datanode entry points should return
negative exit code on bad arguments. (Steve Loughran via suresh)

MAPREDUCE-4782. NLineInputFormat skips first line of last InputSplit
(Mark Fuhs via bobby)


[jira] [Created] (HADOOP-9023) HttpFs is too restrictive on usernames

2012-11-09 Thread Harsh J (JIRA)
Harsh J created HADOOP-9023:
---

 Summary: HttpFs is too restrictive on usernames
 Key: HADOOP-9023
 URL: https://issues.apache.org/jira/browse/HADOOP-9023
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Harsh J


HttpFs uses UserProfile.USER_PATTERN to validate all usernames before 
performing doAs impersonation. This regex is too strict for many usernames, as 
it disallows special characters entirely. We should relax it, or drop the 
validation there altogether.

WebHDFS currently has no such limitations.
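The kind of over-restriction described above can be illustrated with two regexes. Both patterns here are illustrative stand-ins, not the actual Hadoop `USER_PATTERN` constant:

```python
import re

# Illustrative only -- not the exact Hadoop patterns.
# A strict pattern of the kind HttpFs is said to enforce: lowercase
# start, no dots, no '@', no other special characters.
STRICT = re.compile(r"^[a-z_][a-z0-9_-]*$")
# A relaxed alternative tolerating common real-world usernames.
RELAXED = re.compile(r"^[A-Za-z0-9_][A-Za-z0-9._@-]*$")

def accepted(pattern, username):
    """Return True if the username would pass the given check."""
    return pattern.match(username) is not None
```

For example, `first.last` or a Kerberos-style `user@REALM.COM` fails the strict check but passes the relaxed one.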



Re: [PROPOSAL] 1.1.1 and 1.2.0 scheduling

2012-11-09 Thread Steve Loughran
On 9 November 2012 17:52, Matt Foley  wrote:


+1



Re: [PROPOSAL] 1.1.1 and 1.2.0 scheduling

2012-11-09 Thread Robert Evans
+1

On 11/9/12 12:27 PM, "Steve Loughran"  wrote:




Re: [PROPOSAL] 1.1.1 and 1.2.0 scheduling

2012-11-09 Thread Jitendra Pandey
+1

On Fri, Nov 9, 2012 at 9:52 AM, Matt Foley  wrote:







[jira] [Resolved] (HADOOP-8963) CopyFromLocal doesn't always create user directory

2012-11-09 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-8963.
-

   Resolution: Fixed
Fix Version/s: 1.2.0
 Hadoop Flags: Reviewed

Committed the patch to branch-1.

Thank you Arpit.

> CopyFromLocal doesn't always create user directory
> --
>
> Key: HADOOP-8963
> URL: https://issues.apache.org/jira/browse/HADOOP-8963
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.0.3
>Reporter: Billie Rinaldi
>Assignee: Arpit Gupta
>Priority: Trivial
> Fix For: 1.2.0
>
> Attachments: HADOOP-8963.branch-1.patch, HADOOP-8963.branch-1.patch, 
> HADOOP-8963.branch-1.patch, HADOOP-8963.branch-1.patch, 
> HADOOP-8963.branch-1.patch
>
>
> When you use the command "hadoop fs -copyFromLocal filename ." before the 
> /user/username directory has been created, the file is created with name 
> /user/username instead of a directory being created with file 
> /user/username/filename.  The command "hadoop fs -copyFromLocal filename 
> filename" works as expected, creating /user/username and 
> /user/username/filename, and "hadoop fs -copyFromLocal filename ." works as 
> expected if the /user/username directory already exists.
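The destination-resolution behavior described above can be modeled in a few lines. This is a hypothetical simulation of the reported symptom, not the actual FsShell code; `dest_path` and its arguments are invented for the illustration:

```python
# Hypothetical model of the reported copyFromLocal behavior.
# If the destination "." resolves to /user/<name> and that directory
# does not yet exist, the copy writes a *file* at /user/<name>
# instead of creating the directory and placing the file inside it.
def dest_path(existing_dirs, username, filename, dest):
    """Return the HDFS path the local file would end up at."""
    home = "/user/" + username
    if dest == ".":
        if home in existing_dirs:
            return home + "/" + filename  # expected behavior
        return home                       # the bug: file named /user/<name>
    # An explicit destination creates the home directory as needed.
    return home + "/" + dest
```

So with no home directory, `-copyFromLocal filename .` lands on `/user/username` itself, while `-copyFromLocal filename filename` lands on `/user/username/filename` as expected.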



[jira] [Resolved] (HADOOP-8977) multiple FsShell test failures on Windows

2012-11-09 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-8977.
-

   Resolution: Fixed
Fix Version/s: trunk-win
 Hadoop Flags: Reviewed

I committed the patch. Thank you Chris. Thank you Arpit for the review.

> multiple FsShell test failures on Windows
> -
>
> Key: HADOOP-8977
> URL: https://issues.apache.org/jira/browse/HADOOP-8977
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: trunk-win
>
> Attachments: HADOOP-8977-branch-trunk-win.patch, 
> HADOOP-8977-branch-trunk-win.patch, HADOOP-8977-branch-trunk-win.patch, 
> HADOOP-8977.patch
>
>
> Multiple FsShell-related tests fail on Windows.  Commands are returning 
> non-zero exit status.
