[jira] [Commented] (HADOOP-1381) The distance between sync blocks in SequenceFiles should be configurable rather than hard coded to 2000 bytes

2012-05-28 Thread Radim Kolar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284364#comment-13284364
 ] 

Radim Kolar commented on HADOOP-1381:
----------------------------------------

A default 1 MB interval should be fine.

 The distance between sync blocks in SequenceFiles should be configurable 
 rather than hard coded to 2000 bytes
 ----------------------------------------

 Key: HADOOP-1381
 URL: https://issues.apache.org/jira/browse/HADOOP-1381
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 0.22.0
Reporter: Owen O'Malley
Assignee: Harsh J
 Attachments: HADOOP-1381.r1.diff, HADOOP-1381.r2.diff, 
 HADOOP-1381.r3.diff, HADOOP-1381.r4.diff, HADOOP-1381.r5.diff, 
 HADOOP-1381.r5.diff


 Currently, SequenceFiles insert a sync block every 2000 bytes. It would be 
 much better if this were configurable, with a much higher default (1 MB or so?).
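The proposal is easy to sketch. Below is a minimal, illustrative stand-in (in Python, not the real Java SequenceFile.Writer; the class name and constructor parameter are invented for illustration) showing a writer that takes the sync distance as a parameter instead of hard-coding 2000 bytes:

```python
# Simplified sketch of sync-marker insertion: a stand-in for the
# SequenceFile writer logic, with the interval as a constructor
# parameter instead of a hard-coded 2000 bytes.
SYNC_MARKER = b"\x00" * 16  # real SequenceFiles use a random 16-byte marker


class SyncingWriter:
    def __init__(self, sync_interval=1024 * 1024):  # 1 MB default, as proposed
        self.sync_interval = sync_interval
        self.buffer = bytearray()
        self.bytes_since_sync = 0

    def append(self, record: bytes):
        # Emit a sync marker once enough bytes have been written since the
        # last one; readers can then seek anywhere and scan forward to the
        # next marker to resynchronize on a record boundary.
        if self.bytes_since_sync >= self.sync_interval:
            self.buffer += SYNC_MARKER
            self.bytes_since_sync = 0
        self.buffer += record
        self.bytes_since_sync += len(record)


writer = SyncingWriter(sync_interval=2000)  # the old hard-coded distance
for _ in range(10):
    writer.append(b"x" * 500)
# With 2000-byte spacing, 5000 bytes of records get two sync markers.
print(writer.buffer.count(SYNC_MARKER))  # prints 2
```

A larger interval trades a little seek granularity (a reader may scan further before finding a marker) for less per-byte sync overhead in the file.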

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8433) Logs are getting misplaced after introducing hadoop-env.sh

2012-05-28 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-8433:
----------------------------------------

Attachment: HADOOP-8433.patch

Hi Eli,

I have updated both places. Thanks a lot.

 Logs are getting misplaced after introducing hadoop-env.sh
 ----------------------------------------

 Key: HADOOP-8433
 URL: https://issues.apache.org/jira/browse/HADOOP-8433
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.0.1-alpha, 3.0.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HADOOP-8433.patch, Hadoop-8433.patch


 It's better to comment out the following in hadoop-env.sh:
 # Where log files are stored.  $HADOOP_HOME/logs by default.
 export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER
 Because of this, logs are placed under root ($USER), and since the line is 
 evaluated twice while starting a process, the logs end up under /root/root/.
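The failure mode can be simulated outside of shell. The sketch below is a hypothetical Python stand-in for sourcing hadoop-env.sh (the guarded variant is just one possible fix, not the committed patch): evaluating the non-idempotent export line twice nests the user directory.

```python
# Simulation of the hadoop-env.sh problem: the exported line is not
# idempotent, so when the script is evaluated twice during process start
# the user directory is appended twice. (Python stand-in, not real shell.)
def source_env_line(env):
    """Models: export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER"""
    env = dict(env)
    env["HADOOP_LOG_DIR"] = env["HADOOP_LOG_DIR"] + "/" + env["USER"]
    return env


def source_guarded(env):
    """A guarded variant (one possible fix): append /$USER only once."""
    env = dict(env)
    suffix = "/" + env["USER"]
    if not env["HADOOP_LOG_DIR"].endswith(suffix):
        env["HADOOP_LOG_DIR"] += suffix
    return env


start = {"HADOOP_LOG_DIR": "", "USER": "root"}
print(source_env_line(source_env_line(start))["HADOOP_LOG_DIR"])  # /root/root
print(source_guarded(source_guarded(start))["HADOOP_LOG_DIR"])    # /root
```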





[jira] [Commented] (HADOOP-8433) Logs are getting misplaced after introducing hadoop-env.sh

2012-05-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284388#comment-13284388
 ] 

Hadoop QA commented on HADOOP-8433:
----------------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12529950/HADOOP-8433.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1041//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1041//console

This message is automatically generated.

 Logs are getting misplaced after introducing hadoop-env.sh
 ----------------------------------------

 Key: HADOOP-8433
 URL: https://issues.apache.org/jira/browse/HADOOP-8433
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.0.1-alpha, 3.0.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HADOOP-8433.patch, Hadoop-8433.patch


 It's better to comment out the following in hadoop-env.sh:
 # Where log files are stored.  $HADOOP_HOME/logs by default.
 export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER
 Because of this, logs are placed under root ($USER), and since the line is 
 evaluated twice while starting a process, the logs end up under /root/root/.





[jira] [Updated] (HADOOP-8358) Config-related WARN for dfs.web.ugi can be avoided.

2012-05-28 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8358:


Attachment: HADOOP-8358.patch

Resubmitting patch pre-commit just to be sure about the Findbugs initialization 
issues.

 Config-related WARN for dfs.web.ugi can be avoided.
 ----------------------------------------

 Key: HADOOP-8358
 URL: https://issues.apache.org/jira/browse/HADOOP-8358
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Attachments: HADOOP-8358.patch, HADOOP-8358.patch


 {code}
 2012-05-04 11:55:13,367 WARN org.apache.hadoop.http.lib.StaticUserWebFilter: 
 dfs.web.ugi should not be used. Instead, use hadoop.http.staticuser.user.
 {code}
 Looks easy to fix, and we should avoid using old config params that we 
 ourselves deprecated.
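For illustration, deprecated-key redirection of this kind can be sketched as follows. This is a hypothetical Python stand-in: Hadoop's actual Java Configuration class maintains its own deprecation table, and the `get_config` helper here is invented. The key names come from the WARN message above.

```python
import warnings

# Hypothetical sketch of how reads of a deprecated config key can be
# warned about and redirected to the replacement key.
DEPRECATED_KEYS = {"dfs.web.ugi": "hadoop.http.staticuser.user"}


def get_config(conf, key):
    # Warn on deprecated keys and look up the new name instead.
    if key in DEPRECATED_KEYS:
        new_key = DEPRECATED_KEYS[key]
        warnings.warn(f"{key} should not be used. Instead, use {new_key}.")
        key = new_key
    return conf.get(key)


conf = {"hadoop.http.staticuser.user": "dr.who"}
print(get_config(conf, "dfs.web.ugi"))  # dr.who, plus a deprecation warning
```

The point of the fix here is the complementary half: Hadoop's own code should read the new key directly so the warning never fires.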





[jira] [Updated] (HADOOP-8358) Config-related WARN for dfs.web.ugi can be avoided.

2012-05-28 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8358:


Target Version/s: 2.0.1-alpha, 3.0.0  (was: 2.0.0-alpha, 3.0.0)

 Config-related WARN for dfs.web.ugi can be avoided.
 ----------------------------------------

 Key: HADOOP-8358
 URL: https://issues.apache.org/jira/browse/HADOOP-8358
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Attachments: HADOOP-8358.patch, HADOOP-8358.patch


 {code}
 2012-05-04 11:55:13,367 WARN org.apache.hadoop.http.lib.StaticUserWebFilter: 
 dfs.web.ugi should not be used. Instead, use hadoop.http.staticuser.user.
 {code}
 Looks easy to fix, and we should avoid using old config params that we 
 ourselves deprecated.





[jira] [Commented] (HADOOP-8358) Config-related WARN for dfs.web.ugi can be avoided.

2012-05-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284400#comment-13284400
 ] 

Hadoop QA commented on HADOOP-8358:
----------------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12529956/HADOOP-8358.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1042//console

This message is automatically generated.

 Config-related WARN for dfs.web.ugi can be avoided.
 ----------------------------------------

 Key: HADOOP-8358
 URL: https://issues.apache.org/jira/browse/HADOOP-8358
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Attachments: HADOOP-8358.patch, HADOOP-8358.patch


 {code}
 2012-05-04 11:55:13,367 WARN org.apache.hadoop.http.lib.StaticUserWebFilter: 
 dfs.web.ugi should not be used. Instead, use hadoop.http.staticuser.user.
 {code}
 Looks easy to fix, and we should avoid using old config params that we 
 ourselves deprecated.





[jira] [Updated] (HADOOP-8358) Config-related WARN for dfs.web.ugi can be avoided.

2012-05-28 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8358:


Attachment: HADOOP-8358.patch

Rebased the core-site.xml changes that caused the patch application to fail. 
The Auto-HA merge to trunk changed core-site.xml, leading to this.

Re-submitting for another QA round before committing (to re-check the Findbugs 
initialization failure, which doesn't occur locally).

 Config-related WARN for dfs.web.ugi can be avoided.
 ----------------------------------------

 Key: HADOOP-8358
 URL: https://issues.apache.org/jira/browse/HADOOP-8358
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Attachments: HADOOP-8358.patch, HADOOP-8358.patch, HADOOP-8358.patch


 {code}
 2012-05-04 11:55:13,367 WARN org.apache.hadoop.http.lib.StaticUserWebFilter: 
 dfs.web.ugi should not be used. Instead, use hadoop.http.staticuser.user.
 {code}
 Looks easy to fix, and we should avoid using old config params that we 
 ourselves deprecated.





[jira] [Updated] (HADOOP-1381) The distance between sync blocks in SequenceFiles should be configurable rather than hard coded to 2000 bytes

2012-05-28 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-1381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-1381:


 Target Version/s: 2.0.1-alpha, 3.0.0  (was: 2.0.0-alpha, 3.0.0)
Affects Version/s: (was: 0.22.0)
   2.0.0-alpha

 The distance between sync blocks in SequenceFiles should be configurable 
 rather than hard coded to 2000 bytes
 ----------------------------------------

 Key: HADOOP-1381
 URL: https://issues.apache.org/jira/browse/HADOOP-1381
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 2.0.0-alpha
Reporter: Owen O'Malley
Assignee: Harsh J
 Attachments: HADOOP-1381.r1.diff, HADOOP-1381.r2.diff, 
 HADOOP-1381.r3.diff, HADOOP-1381.r4.diff, HADOOP-1381.r5.diff, 
 HADOOP-1381.r5.diff


 Currently, SequenceFiles insert a sync block every 2000 bytes. It would be 
 much better if this were configurable, with a much higher default (1 MB or so?).





[jira] [Updated] (HADOOP-8268) A few pom.xml across Hadoop project may fail XML validation

2012-05-28 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8268:


  Labels: maven  (was: maven patch)
Hadoop Flags: Reviewed

Thanks Radim!

+1 based on a local {{mvn clean install -Dtest=FooBar}} as well. Committing to 
branch-2 and trunk shortly.

 A few pom.xml across Hadoop project may fail XML validation
 ----------------------------------------

 Key: HADOOP-8268
 URL: https://issues.apache.org/jira/browse/HADOOP-8268
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0-alpha
 Environment: FreeBSD 8.2 / AMD64
Reporter: Radim Kolar
Assignee: Radim Kolar
  Labels: maven
 Attachments: HADOOP-8268.patch, HADOOP-8268.patch, haDOOP-8268.patch, 
 hadoop-pom.txt, hadoop-pom.txt, hadoop-pom.txt, poms-patch.txt, poms-patch.txt


 In a few pom files there are embedded Ant commands which contain '>' 
 redirection. This makes the XML file invalid, and such a POM file cannot be 
 deployed into validating Maven repository managers such as Artifactory.
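As a quick illustration of the failure mode (a hypothetical snippet, not taken from an actual Hadoop pom): an embedded shell redirection such as `2>&1` carries a raw `&`, which a conforming XML parser must reject, while the escaped form survives a parse and unescapes back to the original command.

```python
# Sketch of why embedded redirection characters break pom validation:
# a raw '&' (as in '2>&1') is not well-formed XML, so strict/validating
# tools reject the whole file; escaping it as entities fixes the parse.
from xml.dom import minidom
from xml.parsers.expat import ExpatError

bad = "<exec><arg>sh -c 'make 2>&1'</arg></exec>"
good = "<exec><arg>sh -c 'make 2&gt;&amp;1'</arg></exec>"

try:
    minidom.parseString(bad)
    parsed_bad = True
except ExpatError:
    parsed_bad = False  # raw '&' is rejected as not well-formed

doc = minidom.parseString(good)
print(parsed_bad)                                          # False
print(doc.getElementsByTagName("arg")[0].firstChild.data)  # sh -c 'make 2>&1'
```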





[jira] [Updated] (HADOOP-8268) A few pom.xml across Hadoop project may fail XML validation

2012-05-28 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8268:


  Resolution: Fixed
   Fix Version/s: 2.0.1-alpha
Target Version/s:   (was: 2.0.1-alpha, 3.0.0)
  Status: Resolved  (was: Patch Available)

- Committed revision 1343272 to trunk.
- svn merge -c 1343272 to branch-2 committed as revision 1343275.

Thanks for this and also your continuing contributions Radim! :)

 A few pom.xml across Hadoop project may fail XML validation
 ----------------------------------------

 Key: HADOOP-8268
 URL: https://issues.apache.org/jira/browse/HADOOP-8268
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0-alpha
 Environment: FreeBSD 8.2 / AMD64
Reporter: Radim Kolar
Assignee: Radim Kolar
  Labels: maven
 Fix For: 2.0.1-alpha

 Attachments: HADOOP-8268.patch, HADOOP-8268.patch, haDOOP-8268.patch, 
 hadoop-pom.txt, hadoop-pom.txt, hadoop-pom.txt, poms-patch.txt, poms-patch.txt


 In a few pom files there are embedded Ant commands which contain '>' 
 redirection. This makes the XML file invalid, and such a POM file cannot be 
 deployed into validating Maven repository managers such as Artifactory.





[jira] [Commented] (HADOOP-8268) A few pom.xml across Hadoop project may fail XML validation

2012-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284426#comment-13284426
 ] 

Hudson commented on HADOOP-8268:


Integrated in Hadoop-Hdfs-trunk-Commit #2365 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2365/])
HADOOP-8268. A few pom.xml across Hadoop project may fail XML validation. 
Contributed by Radim Kolar. (harsh) (Revision 1343272)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1343272
Files : 
* /hadoop/common/trunk/hadoop-assemblies/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-annotations/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-auth-examples/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-auth/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/pom.xml
* /hadoop/common/trunk/hadoop-dist/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-api/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-applications/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/pom.xml
* /hadoop/common/trunk/hadoop-project-dist/pom.xml
* /hadoop/common/trunk/hadoop-project/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-archives/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-distcp/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-extras/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-rumen/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-streaming/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-tools-dist/pom.xml
* /hadoop/common/trunk/hadoop-tools/pom.xml
* /hadoop/common/trunk/pom.xml


 A few pom.xml across Hadoop project may fail XML validation
 ----------------------------------------

 Key: HADOOP-8268
 URL: https://issues.apache.org/jira/browse/HADOOP-8268
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0-alpha
 Environment: FreeBSD 8.2 / AMD64
Reporter: Radim Kolar
Assignee: Radim Kolar
  Labels: maven
 Fix For: 2.0.1-alpha

 Attachments: HADOOP-8268.patch, HADOOP-8268.patch, haDOOP-8268.patch, 
 hadoop-pom.txt, hadoop-pom.txt, hadoop-pom.txt, poms-patch.txt, poms-patch.txt


 In a few pom files there are embedded Ant commands which contain '>' 
 redirection. This makes the XML file invalid, and such a POM file cannot be 
 deployed into validating Maven repository managers such as Artifactory.


[jira] [Commented] (HADOOP-8358) Config-related WARN for dfs.web.ugi can be avoided.

2012-05-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284429#comment-13284429
 ] 

Hadoop QA commented on HADOOP-8358:
----------------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12529958/HADOOP-8358.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.fs.viewfs.TestViewFsTrash

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1043//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1043//console

This message is automatically generated.

 Config-related WARN for dfs.web.ugi can be avoided.
 ----------------------------------------

 Key: HADOOP-8358
 URL: https://issues.apache.org/jira/browse/HADOOP-8358
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Attachments: HADOOP-8358.patch, HADOOP-8358.patch, HADOOP-8358.patch


 {code}
 2012-05-04 11:55:13,367 WARN org.apache.hadoop.http.lib.StaticUserWebFilter: 
 dfs.web.ugi should not be used. Instead, use hadoop.http.staticuser.user.
 {code}
 Looks easy to fix, and we should avoid using old config params that we 
 ourselves deprecated.





[jira] [Commented] (HADOOP-8268) A few pom.xml across Hadoop project may fail XML validation

2012-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284430#comment-13284430
 ] 

Hudson commented on HADOOP-8268:


Integrated in Hadoop-Common-trunk-Commit #2292 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2292/])
HADOOP-8268. A few pom.xml across Hadoop project may fail XML validation. 
Contributed by Radim Kolar. (harsh) (Revision 1343272)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1343272
Files : 
* /hadoop/common/trunk/hadoop-assemblies/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-annotations/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-auth-examples/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-auth/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/pom.xml
* /hadoop/common/trunk/hadoop-dist/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-api/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-applications/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/pom.xml
* /hadoop/common/trunk/hadoop-project-dist/pom.xml
* /hadoop/common/trunk/hadoop-project/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-archives/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-distcp/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-extras/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-rumen/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-streaming/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-tools-dist/pom.xml
* /hadoop/common/trunk/hadoop-tools/pom.xml
* /hadoop/common/trunk/pom.xml


 A few pom.xml across Hadoop project may fail XML validation
 ----------------------------------------

 Key: HADOOP-8268
 URL: https://issues.apache.org/jira/browse/HADOOP-8268
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0-alpha
 Environment: FreeBSD 8.2 / AMD64
Reporter: Radim Kolar
Assignee: Radim Kolar
  Labels: maven
 Fix For: 2.0.1-alpha

 Attachments: HADOOP-8268.patch, HADOOP-8268.patch, haDOOP-8268.patch, 
 hadoop-pom.txt, hadoop-pom.txt, hadoop-pom.txt, poms-patch.txt, poms-patch.txt


 In a few pom files there are embedded Ant commands which contain '>' 
 redirection. This makes the XML file invalid, and such a POM file cannot be 
 deployed into validating Maven repository managers such as Artifactory.


[jira] [Commented] (HADOOP-8358) Config-related WARN for dfs.web.ugi can be avoided.

2012-05-28 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284434#comment-13284434
 ] 

Harsh J commented on HADOOP-8358:
----------------------------------------

Failing test {{org.apache.hadoop.fs.viewfs.TestViewFsTrash}} is unrelated to 
this change. Findbugs succeeded this time, so the previous issue was either 
something else on trunk at the time or a flaky result.

 Config-related WARN for dfs.web.ugi can be avoided.
 ----------------------------------------

 Key: HADOOP-8358
 URL: https://issues.apache.org/jira/browse/HADOOP-8358
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Attachments: HADOOP-8358.patch, HADOOP-8358.patch, HADOOP-8358.patch


 {code}
 2012-05-04 11:55:13,367 WARN org.apache.hadoop.http.lib.StaticUserWebFilter: 
 dfs.web.ugi should not be used. Instead, use hadoop.http.staticuser.user.
 {code}
 Looks easy to fix, and we should avoid using old config params that we 
 ourselves deprecated.





[jira] [Updated] (HADOOP-8358) Config-related WARN for dfs.web.ugi can be avoided.

2012-05-28 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8358:


  Resolution: Fixed
   Fix Version/s: 2.0.1-alpha
Target Version/s:   (was: 2.0.1-alpha, 3.0.0)
  Status: Resolved  (was: Patch Available)

Committed revision 1343294 to branch-2 (i.e. merge -c of 1343290 also committed 
to trunk).

 Config-related WARN for dfs.web.ugi can be avoided.
 ----------------------------------------

 Key: HADOOP-8358
 URL: https://issues.apache.org/jira/browse/HADOOP-8358
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 2.0.1-alpha

 Attachments: HADOOP-8358.patch, HADOOP-8358.patch, HADOOP-8358.patch


 {code}
 2012-05-04 11:55:13,367 WARN org.apache.hadoop.http.lib.StaticUserWebFilter: 
 dfs.web.ugi should not be used. Instead, use hadoop.http.staticuser.user.
 {code}
 Looks easy to fix, and we should avoid using old config params that we 
 ourselves deprecated.





[jira] [Commented] (HADOOP-8358) Config-related WARN for dfs.web.ugi can be avoided.

2012-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284442#comment-13284442
 ] 

Hudson commented on HADOOP-8358:


Integrated in Hadoop-Hdfs-trunk-Commit #2366 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2366/])
HADOOP-8358. Config-related WARN for dfs.web.ugi can be avoided. (harsh) 
(Revision 1343290)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1343290
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/lib/StaticUserWebFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/lib/TestStaticUserWebFilter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 Config-related WARN for dfs.web.ugi can be avoided.
 ----------------------------------------

 Key: HADOOP-8358
 URL: https://issues.apache.org/jira/browse/HADOOP-8358
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 2.0.1-alpha

 Attachments: HADOOP-8358.patch, HADOOP-8358.patch, HADOOP-8358.patch


 {code}
 2012-05-04 11:55:13,367 WARN org.apache.hadoop.http.lib.StaticUserWebFilter: 
 dfs.web.ugi should not be used. Instead, use hadoop.http.staticuser.user.
 {code}
 Looks easy to fix, and we should avoid using old config params that we 
 ourselves deprecated.

--




[jira] [Commented] (HADOOP-8358) Config-related WARN for dfs.web.ugi can be avoided.

2012-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284443#comment-13284443
 ] 

Hudson commented on HADOOP-8358:


Integrated in Hadoop-Common-trunk-Commit #2293 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2293/])
HADOOP-8358. Config-related WARN for dfs.web.ugi can be avoided. (harsh) 
(Revision 1343290)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1343290
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/lib/StaticUserWebFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/lib/TestStaticUserWebFilter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 Config-related WARN for dfs.web.ugi can be avoided.
 ---

 Key: HADOOP-8358
 URL: https://issues.apache.org/jira/browse/HADOOP-8358
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 2.0.1-alpha

 Attachments: HADOOP-8358.patch, HADOOP-8358.patch, HADOOP-8358.patch


 {code}
 2012-05-04 11:55:13,367 WARN org.apache.hadoop.http.lib.StaticUserWebFilter: 
 dfs.web.ugi should not be used. Instead, use hadoop.http.staticuser.user.
 {code}
 Looks easy to fix, and we should avoid using old config params that we 
 ourselves deprecated.

--




[jira] [Commented] (HADOOP-8268) A few pom.xml across Hadoop project may fail XML validation

2012-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284450#comment-13284450
 ] 

Hudson commented on HADOOP-8268:


Integrated in Hadoop-Mapreduce-trunk-Commit #2311 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2311/])
HADOOP-8268. A few pom.xml across Hadoop project may fail XML validation. 
Contributed by Radim Kolar. (harsh) (Revision 1343272)

 Result = FAILURE
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1343272
Files : 
* /hadoop/common/trunk/hadoop-assemblies/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-annotations/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-auth-examples/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-auth/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/pom.xml
* /hadoop/common/trunk/hadoop-dist/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-api/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-applications/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/pom.xml
* /hadoop/common/trunk/hadoop-project-dist/pom.xml
* /hadoop/common/trunk/hadoop-project/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-archives/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-distcp/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-extras/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-rumen/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-streaming/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-tools-dist/pom.xml
* /hadoop/common/trunk/hadoop-tools/pom.xml
* /hadoop/common/trunk/pom.xml


 A few pom.xml across Hadoop project may fail XML validation
 ---

 Key: HADOOP-8268
 URL: https://issues.apache.org/jira/browse/HADOOP-8268
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0-alpha
 Environment: FreeBSD 8.2 / AMD64
Reporter: Radim Kolar
Assignee: Radim Kolar
  Labels: maven
 Fix For: 2.0.1-alpha

 Attachments: HADOOP-8268.patch, HADOOP-8268.patch, haDOOP-8268.patch, 
 hadoop-pom.txt, hadoop-pom.txt, hadoop-pom.txt, poms-patch.txt, poms-patch.txt


 In a few pom files there are embedded ant commands which contain '>' 
 redirection. This makes the XML file invalid, and such a POM file cannot be 
 deployed into validating Maven repository managers such as Artifactory.

--

[jira] [Updated] (HADOOP-8436) NPE In getLocalPathForWrite ( path, conf ) when dfs.client.buffer.dir not configured

2012-05-28 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-8436:
-

Attachment: HADOOP-8436.patch

Attaching Patch...

 NPE In getLocalPathForWrite ( path, conf ) when dfs.client.buffer.dir not 
 configured
 

 Key: HADOOP-8436
 URL: https://issues.apache.org/jira/browse/HADOOP-8436
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HADOOP-8436.patch


 Call dirAllocator.getLocalPathForWrite(path, conf)
 without configuring dfs.client.buffer.dir:
 {noformat}
 java.lang.NullPointerException
   at 
 org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:261)
   at 
 org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:365)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:134)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:113)
 {noformat}
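The failure mode above can be reproduced with a small self-contained sketch; the class, method, and field names below are hypothetical stand-ins, not Hadoop's actual LocalDirAllocator code. The pattern is that confChanged() effectively splits the value returned for the context key: if dfs.client.buffer.dir was never set, that lookup returns null and the split throws. Failing fast with the key name, as sketched here, is one plausible shape for a fix.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the NPE described above (hypothetical names, not
// Hadoop code). When the context key is unset, conf.get() returns null
// and null.split(",") would throw NullPointerException; the guard below
// turns that into a descriptive exception instead.
public class DirAllocatorSketch {
    private final Map<String, String> conf = new HashMap<String, String>();

    public void set(String key, String value) {
        conf.put(key, value);
    }

    // Return the configured local directories, or fail fast with the
    // missing key's name rather than surfacing an NPE deep in the split.
    public String[] localDirs(String contextKey) {
        String dirs = conf.get(contextKey);
        if (dirs == null) {
            throw new IllegalArgumentException(
                contextKey + " is not configured");
        }
        return dirs.split(",");
    }
}
```

Compiled alone, calling localDirs("dfs.client.buffer.dir") without a prior set() raises the descriptive exception instead of the NPE in the stack trace.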

--




[jira] [Commented] (HADOOP-8437) getLocalPathForWrite is not throwing any exception for invalid paths

2012-05-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284480#comment-13284480
 ] 

Hadoop QA commented on HADOOP-8437:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12529963/HADOOP-8437.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController
  org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1044//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1044//console

This message is automatically generated.

 getLocalPathForWrite is not throwing any exception for invalid paths
 

 Key: HADOOP-8437
 URL: https://issues.apache.org/jira/browse/HADOOP-8437
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.1-alpha
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Fix For: 2.0.1-alpha, 3.0.0

 Attachments: HADOOP-8437.patch


 Call dirAllocator.getLocalPathForWrite("/InvalidPath", conf);
 here it does not throw any exception, although earlier versions used to.

--




[jira] [Commented] (HADOOP-8358) Config-related WARN for dfs.web.ugi can be avoided.

2012-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284481#comment-13284481
 ] 

Hudson commented on HADOOP-8358:


Integrated in Hadoop-Mapreduce-trunk-Commit #2312 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2312/])
HADOOP-8358. Config-related WARN for dfs.web.ugi can be avoided. (harsh) 
(Revision 1343290)

 Result = FAILURE
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1343290
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/lib/StaticUserWebFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/lib/TestStaticUserWebFilter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 Config-related WARN for dfs.web.ugi can be avoided.
 ---

 Key: HADOOP-8358
 URL: https://issues.apache.org/jira/browse/HADOOP-8358
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 2.0.1-alpha

 Attachments: HADOOP-8358.patch, HADOOP-8358.patch, HADOOP-8358.patch


 {code}
 2012-05-04 11:55:13,367 WARN org.apache.hadoop.http.lib.StaticUserWebFilter: 
 dfs.web.ugi should not be used. Instead, use hadoop.http.staticuser.user.
 {code}
 Looks easy to fix, and we should avoid using old config params that we 
 ourselves deprecated.

--




[jira] [Created] (HADOOP-8443) MiniDFSCluster Hangs

2012-05-28 Thread Ivan Provalov (JIRA)
Ivan Provalov created HADOOP-8443:
-

 Summary: MiniDFSCluster Hangs
 Key: HADOOP-8443
 URL: https://issues.apache.org/jira/browse/HADOOP-8443
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 1.0.2
 Environment: Mac OSX 10.7.3
Reporter: Ivan Provalov
 Attachments: hadoop-hanging.tar

When using MiniDFSCluster in a unit test, it works and terminates as expected.
However, creating and shutting it down from a main method causes it to hang.

--




[jira] [Updated] (HADOOP-8443) MiniDFSCluster Hangs

2012-05-28 Thread Ivan Provalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Provalov updated HADOOP-8443:
--

Attachment: hadoop-hanging.tar

 MiniDFSCluster Hangs
 

 Key: HADOOP-8443
 URL: https://issues.apache.org/jira/browse/HADOOP-8443
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 1.0.2
 Environment: Mac OSX 10.7.3
Reporter: Ivan Provalov
 Attachments: hadoop-hanging.tar


 When using MiniDFSCluster in a unit test, it works and terminates as expected.
  However, creating and shutting it down from a main method causes it to hang.

--




[jira] [Commented] (HADOOP-7823) port HADOOP-4012 to branch-1

2012-05-28 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284569#comment-13284569
 ] 

Chris Douglas commented on HADOOP-7823:
---

The patch should also include HADOOP-6925.

The rest of the code looks familiar (where did the {{NLineInputFormat}} change 
come from?). IIRC the unit test coverage is pretty good, but how else has this 
been verified?

 port HADOOP-4012 to branch-1
 

 Key: HADOOP-7823
 URL: https://issues.apache.org/jira/browse/HADOOP-7823
 Project: Hadoop Common
  Issue Type: New Feature
Affects Versions: 0.20.205.0
Reporter: Tim Broberg
Assignee: Andrew Purtell
 Attachments: HADOOP-7823-branch-1-v2.patch, 
 HADOOP-7823-branch-1-v3.patch, HADOOP-7823-branch-1-v3.patch, 
 HADOOP-7823-branch-1.patch


 Please see HADOOP-4012 - Providing splitting support for bzip2 compressed 
 files.

--




[jira] [Commented] (HADOOP-8424) Web UI broken on Windows because classpath not setup correctly

2012-05-28 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284588#comment-13284588
 ] 

Ivan Mitic commented on HADOOP-8424:


Looks good overall. Three minor comments:
1. It seems that you misplaced the comment "for developers, add Hadoop classes 
to CLASSPATH"; it should be left in its original location.
2. Can you please remove the "if exist %HADOOP_CORE_HOME%\..." before the for 
loops, as it is not needed?
3. Can you also please break the two if's apart and place: 
{code}
for %%i in (%HADOOP_CORE_HOME%\build\*.jar) do (
  set CLASSPATH=!CLASSPATH!;%%i
)
{code}
under "for releases, add core hadoop jar & webapps to CLASSPATH", and place:
{code}
for %%i in (%HADOOP_CORE_HOME%\build\ivy\lib\Hadoop\common\*.jar) do (
  set CLASSPATH=!CLASSPATH!;%%i
)
{code}
under "add libs to CLASSPATH", right after
{code}for %%i in (%HADOOP_CORE_HOME%\lib\*.jar) do (
  set CLASSPATH=!CLASSPATH!;%%i
){code}
This way, we will be more consistent with the script layout from bin\hadoop 
used for non-Windows platforms.

Separately, what do you think about having a tracking Jira on making .sh and 
.cmd scripts fully consistent, as they aren't at the moment?


 Web UI broken on Windows because classpath not setup correctly
 --

 Key: HADOOP-8424
 URL: https://issues.apache.org/jira/browse/HADOOP-8424
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Bikas Saha
Assignee: Bikas Saha
 Fix For: 1.1.0

 Attachments: HADOOP-8424.branch-1-win.patch


 The classpath is setup to include the hadoop jars before the build webapps 
 directory and that upsets jetty when it is trying to resolve the webapp 
 classes.

--




[jira] [Commented] (HADOOP-8443) MiniDFSCluster Hangs

2012-05-28 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284609#comment-13284609
 ] 

Harsh J commented on HADOOP-8443:
-

Appears to be a hanging metrics thread:

{code}
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.namenode.BlocksMap.size(BlocksMap.java:457)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlocksTotal(FSNamesystem.java:5046)
at 
org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics.doUpdates(FSNamesystemMetrics.java:101)
at 
org.apache.hadoop.metrics.spi.AbstractMetricsContext.timerEvent(AbstractMetricsContext.java:293)
at 
org.apache.hadoop.metrics.spi.AbstractMetricsContext.access$000(AbstractMetricsContext.java:53)
at 
org.apache.hadoop.metrics.spi.AbstractMetricsContext$1.run(AbstractMetricsContext.java:258)
at java.util.TimerThread.mainLoop(Timer.java:512)
at java.util.TimerThread.run(Timer.java:462)
{code}
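The bottom frames (java.util.TimerThread) suggest why the process hangs: a non-daemon timer thread outlives main(). A minimal self-contained sketch of that behavior, with illustrative names rather than Hadoop's actual metrics classes:

```java
import java.util.Timer;
import java.util.TimerTask;

// Illustrative sketch, not Hadoop code: a java.util.Timer whose thread is
// non-daemon keeps the JVM alive after main() returns, matching the
// TimerThread in the stack trace above. Constructing the Timer with
// isDaemon=true (or cancelling it explicitly on shutdown) lets the
// process exit.
public class TimerDaemonSketch {

    // Start a periodic no-op task, standing in for the metrics updater.
    public static Timer startMetricsTimer(boolean daemon) {
        Timer timer = new Timer("metrics-sketch", daemon);
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                // a periodic metrics push would happen here
            }
        }, 0L, 1000L);
        return timer;
    }

    public static void main(String[] args) {
        // With daemon=false and no cancel(), this program would hang after
        // main() returns; with daemon=true it exits cleanly.
        Timer timer = startMetricsTimer(true);
        timer.cancel();
    }
}
```

With a non-daemon timer and no cancel(), the JVM stays alive after main() returns, which is consistent with the hang appearing only when the cluster is driven from a main method rather than a test runner (JUnit runners typically force the JVM to exit).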

 MiniDFSCluster Hangs
 

 Key: HADOOP-8443
 URL: https://issues.apache.org/jira/browse/HADOOP-8443
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 1.0.2
 Environment: Mac OSX 10.7.3
Reporter: Ivan Provalov
 Attachments: hadoop-hanging.tar


 When using MiniDFSCluster in a unit test, it works and terminates as expected.
  However, creating and shutting it down from a main method causes it to hang.

--




[jira] [Commented] (HADOOP-8357) Restore security in Hadoop 0.22 branch

2012-05-28 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284615#comment-13284615
 ] 

Konstantin Boudnik commented on HADOOP-8357:


This indeed looks good and complete enough. I see here quite a few of the test 
scenarios we did for the first Y! security release.
+1 on the changes. Let's commit it.

 Restore security in Hadoop 0.22 branch
 --

 Key: HADOOP-8357
 URL: https://issues.apache.org/jira/browse/HADOOP-8357
 Project: Hadoop Common
  Issue Type: Task
  Components: security
Affects Versions: 0.22.0
Reporter: Konstantin Shvachko
Assignee: Benoy Antony
 Attachments: SecurityTestPlan_results.pdf, 
 performance_22_vs_22sec.pdf, performance_22_vs_22sec_vs_22secon.pdf, 
 test_patch_results


 This is to track changes for restoring security in 0.22 branch.

--




[jira] [Updated] (HADOOP-6453) Hadoop wrapper script shouldn't ignore an existing JAVA_LIBRARY_PATH

2012-05-28 Thread Matt Foley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-6453:
---

Assignee: (was: Matt Foley)
Target Version/s: 0.22.0  (was: 1.1.0, 0.22.0)
  Status: Open  (was: Patch Available)

Withdrawing patch.

 Hadoop wrapper script shouldn't ignore an existing JAVA_LIBRARY_PATH
 

 Key: HADOOP-6453
 URL: https://issues.apache.org/jira/browse/HADOOP-6453
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.22.0, 0.21.0, 0.20.2
Reporter: Chad Metcalf
Priority: Minor
 Fix For: 0.22.1

 Attachments: HADOOP-6453-0.20.patch, HADOOP-6453-0.20v2.patch, 
 HADOOP-6453-0.20v3.patch, HADOOP-6453-trunkv2.patch, 
 HADOOP-6453-trunkv3.patch, HADOOP-6453.trunk.patch


 Currently the hadoop wrapper script assumes it's the only place that uses 
 JAVA_LIBRARY_PATH and initializes it to an empty string:
 JAVA_LIBRARY_PATH=''
 This prevents anyone from setting this outside of the hadoop wrapper (say, in 
 hadoop-config.sh) for their own native libraries.
 The fix is pretty simple: don't initialize it to '' and append the native 
 libs as normal. 

--




[jira] [Updated] (HADOOP-6453) Hadoop wrapper script shouldn't ignore an existing JAVA_LIBRARY_PATH

2012-05-28 Thread Matt Foley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-6453:
---

Target Version/s: 0.22.0, 1.1.1  (was: 0.22.0)

 Hadoop wrapper script shouldn't ignore an existing JAVA_LIBRARY_PATH
 

 Key: HADOOP-6453
 URL: https://issues.apache.org/jira/browse/HADOOP-6453
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.20.2, 0.21.0, 0.22.0
Reporter: Chad Metcalf
Priority: Minor
 Fix For: 0.22.1

 Attachments: HADOOP-6453-0.20.patch, HADOOP-6453-0.20v2.patch, 
 HADOOP-6453-0.20v3.patch, HADOOP-6453-trunkv2.patch, 
 HADOOP-6453-trunkv3.patch, HADOOP-6453.trunk.patch


 Currently the hadoop wrapper script assumes it's the only place that uses 
 JAVA_LIBRARY_PATH and initializes it to an empty string:
 JAVA_LIBRARY_PATH=''
 This prevents anyone from setting this outside of the hadoop wrapper (say, in 
 hadoop-config.sh) for their own native libraries.
 The fix is pretty simple: don't initialize it to '' and append the native 
 libs as normal. 

--




[jira] [Commented] (HADOOP-7823) port HADOOP-4012 to branch-1

2012-05-28 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284604#comment-13284604
 ] 

Andrew Purtell commented on HADOOP-7823:


bq. The rest of the code looks familiar (where did the NLineInputFormat change 
come from?). 

I also did manual code inspection of 0.23, as well as followed JIRA tickets 
referenced by commenters on this issue.

Will put up a v4 shortly that includes HADOOP-6925.

 port HADOOP-4012 to branch-1
 

 Key: HADOOP-7823
 URL: https://issues.apache.org/jira/browse/HADOOP-7823
 Project: Hadoop Common
  Issue Type: New Feature
Affects Versions: 0.20.205.0
Reporter: Tim Broberg
Assignee: Andrew Purtell
 Attachments: HADOOP-7823-branch-1-v2.patch, 
 HADOOP-7823-branch-1-v3.patch, HADOOP-7823-branch-1-v3.patch, 
 HADOOP-7823-branch-1.patch


 Please see HADOOP-4012 - Providing splitting support for bzip2 compressed 
 files.

--