[jira] [Commented] (HADOOP-7894) bin and sbin commands don't use JAVA_HOME when run from the tarball

2013-01-10 Thread Eli Reisman (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550864#comment-13550864
 ] 

Eli Reisman commented on HADOOP-7894:
-

You know, I did, and it didn't pick up. I also tried the env_keep sudoers fix 
mentioned in another thread, etc., and no dice. In the end (running Ubuntu) I 
had to hardcode JAVA_HOME and a few other env vars into the env scripts used by 
the run scripts in order to get my 2.0.2-alpha YARN, HDFS, and MR all running 
successfully. I'm on the wrong machine right now, but if you're curious let me 
know and I can post a more detailed rundown of what ended up working, if it 
helps diagnose the problem. It's certainly not ideal, but it is getting me by 
for now. Thanks for the advice!
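
For reference, the workaround described above typically amounts to a line like 
the following in the env script (the path shown is purely illustrative; point it 
at whatever JDK the machine actually has):

{noformat}
# etc/hadoop/hadoop-env.sh -- illustrative workaround; adjust the path to your JDK
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk-amd64
{noformat}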

> bin and sbin commands don't use  JAVA_HOME when run from the tarball 
> -
>
> Key: HADOOP-7894
> URL: https://issues.apache.org/jira/browse/HADOOP-7894
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.0
>Reporter: Eli Collins
>
> When running eg ./sbin/start-dfs.sh from a tarball the scripts complain 
> JAVA_HOME is not set and could not be found even if the env var is set.
> {noformat}
> hadoop-0.24.0-SNAPSHOT $ echo $JAVA_HOME
> /home/eli/toolchain/jdk1.6.0_24-x64
> hadoop-0.24.0-SNAPSHOT $ ./sbin/start-dfs.sh 
> log4j:ERROR Could not find value for key log4j.appender.NullAppender
> log4j:ERROR Could not instantiate appender named "NullAppender".
> Starting namenodes on [localhost]
> localhost: Error: JAVA_HOME is not set and could not be found.
> {noformat}
> I have to explicitly set this via hadoop-env.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550842#comment-13550842
 ] 

Hadoop QA commented on HADOOP-9097:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564358/HADOOP-9097.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 11 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-dist hadoop-tools/hadoop-distcp 
hadoop-tools/hadoop-rumen hadoop-tools/hadoop-tools-dist:

  org.apache.hadoop.ha.TestZKFailoverController

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2024//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2024//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2024//console

This message is automatically generated.

> Maven RAT plugin is not checking all source files
> -
>
> Key: HADOOP-9097
> URL: https://issues.apache.org/jira/browse/HADOOP-9097
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha, 0.23.5
>Reporter: Tom White
>Assignee: Thomas Graves
>Priority: Critical
> Fix For: 2.0.3-alpha, 0.23.6
>
> Attachments: HADOOP-9097-branch-0.23-entire.patch, 
> HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, HADOOP-9097.patch, 
> HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
> HADOOP-9097-remove-entire.sh
>
>
> Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
> downloading the JAR) produces some warnings for Java files, amongst others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9197) Some little confusion in official documentation

2013-01-10 Thread Jason Lee (JIRA)
Jason Lee created HADOOP-9197:
-

 Summary: Some little confusion in official documentation
 Key: HADOOP-9197
 URL: https://issues.apache.org/jira/browse/HADOOP-9197
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Jason Lee
Priority: Trivial


I am just a newbie to Hadoop and have recently been teaching myself. While 
reading the official documentation, I found it a little confusing for beginners 
like me. For example, look at the documents for the HDFS shell guide:

In 0.17, the prefix of HDFS shell is hadoop dfs:
http://hadoop.apache.org/docs/r0.17.2/hdfs_shell.html

In 0.19, the prefix of HDFS shell is hadoop fs:
http://hadoop.apache.org/docs/r0.19.1/hdfs_shell.html#lsr

In 1.0.4,the prefix of HDFS shell is hdfs dfs:
http://hadoop.apache.org/docs/r1.0.4/file_system_shell.html#ls

As a beginner, I find reading through these inconsistencies quite painful.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9192) Move token related request/response messages to common

2013-01-10 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550830#comment-13550830
 ] 

Siddharth Seth commented on HADOOP-9192:


+1 for the common changes. A minor issue with the MR change - which I'll 
mention in the MR jira.

The YARN changes to make some fields "required" instead of "optional", and the 
int64-to-uint64 change, do make this an incompatible change w.r.t. the current 
branch-2 or any of the previous alpha releases from this branch. I don't really 
see this as a big issue though, since I've always had the impression that the 
APIs are currently unstable - and that they need to be looked at before 
compatibility becomes a critical concern.

> Move token related request/response messages to common
> --
>
> Key: HADOOP-9192
> URL: https://issues.apache.org/jira/browse/HADOOP-9192
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.2-alpha
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: HADOOP-9192.patch, HADOOP-9192.patch, HADOOP-9192.patch
>
>
> Get, Renew and Cancel delegation token requests and responses are repeated in 
> HDFS, Yarn and MR. This jira proposes to move these messages into 
> Security.proto in common.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-10 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9097:
--

Status: Patch Available  (was: Open)

> Maven RAT plugin is not checking all source files
> -
>
> Key: HADOOP-9097
> URL: https://issues.apache.org/jira/browse/HADOOP-9097
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.5, 2.0.3-alpha
>Reporter: Tom White
>Assignee: Thomas Graves
>Priority: Critical
> Fix For: 2.0.3-alpha, 0.23.6
>
> Attachments: HADOOP-9097-branch-0.23-entire.patch, 
> HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, HADOOP-9097.patch, 
> HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
> HADOOP-9097-remove-entire.sh
>
>
> Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
> downloading the JAR) produces some warnings for Java files, amongst others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-10 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9097:
--

Attachment: HADOOP-9097.patch

I've attached 2 "entire" patches which are the combination of all 4 jiras.

The committer should run the remove script first then apply the appropriate 
patch. The trunk patch works on branch-2 also, but there is a separate remove 
script.

> Maven RAT plugin is not checking all source files
> -
>
> Key: HADOOP-9097
> URL: https://issues.apache.org/jira/browse/HADOOP-9097
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha, 0.23.5
>Reporter: Tom White
>Assignee: Thomas Graves
>Priority: Critical
> Fix For: 2.0.3-alpha, 0.23.6
>
> Attachments: HADOOP-9097-branch-0.23-entire.patch, 
> HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, HADOOP-9097.patch, 
> HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
> HADOOP-9097-remove-entire.sh
>
>
> Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
> downloading the JAR) produces some warnings for Java files, amongst others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-10 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-9097:
--

Attachment: HADOOP-9097-remove-entire.sh
HADOOP-9097-remove-branch23.sh
HADOOP-9097-remove-branch2.sh
HADOOP-9097-entire.patch
HADOOP-9097-branch-0.23.patch
HADOOP-9097-branch-0.23-entire.patch

> Maven RAT plugin is not checking all source files
> -
>
> Key: HADOOP-9097
> URL: https://issues.apache.org/jira/browse/HADOOP-9097
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha, 0.23.5
>Reporter: Tom White
>Assignee: Thomas Graves
>Priority: Critical
> Fix For: 2.0.3-alpha, 0.23.6
>
> Attachments: HADOOP-9097-branch-0.23-entire.patch, 
> HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, HADOOP-9097.patch, 
> HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
> HADOOP-9097-remove-entire.sh
>
>
> Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
> downloading the JAR) produces some warnings for Java files, amongst others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-10 Thread Matt Foley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550687#comment-13550687
 ] 

Matt Foley commented on HADOOP-8924:


+1, conditional on those tests, and test-patch results.

> Hadoop Common creating package-info.java must not depend on sh, at least for 
> Windows
> 
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Alejandro Abdelnur
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, 
> HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.patch, 
> HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-10 Thread Matt Foley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550686#comment-13550686
 ] 

Matt Foley commented on HADOOP-8924:


Yes, this looks okay, and thanks for doing this, Chris.  The "real work" is 
being done in Java [ack Alejandro for the first version of that], which is 
goodness and also now a transparent part of the Hadoop distribution.  And 
invoking it from a Maven plugin is also [I'll admit :-)] goodness, and 
hopefully makes this approach acceptable to you, Alejandro?

Chris, please test against RHEL5, RHEL6, and Windows. Hopefully the MD5s will 
be the same for all three platforms, and the same as the saveVersion.sh script 
produces. (Achieving those constraints was the tricky part of the python 
version.) Thanks.
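
To illustrate the cross-platform concern, here is a sketch under assumed 
requirements (not the actual plugin code): the checksum only stays stable across 
RHEL and Windows if the file list is sorted on normalized paths and line endings 
are normalized before hashing.

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class StableSourceMd5 {
  // Hash source files in a platform-independent order with normalized
  // separators and line endings, so the digest matches on RHEL and Windows.
  public static String md5Of(Path srcRoot) throws IOException, NoSuchAlgorithmException {
    MessageDigest md5 = MessageDigest.getInstance("MD5");
    try (Stream<Path> files = Files.walk(srcRoot)) {
      List<Path> sorted = files.filter(Files::isRegularFile)
          .sorted(Comparator.comparing(p -> p.toString().replace('\\', '/')))
          .collect(Collectors.toList());
      for (Path p : sorted) {
        String relative = srcRoot.relativize(p).toString().replace('\\', '/');
        String content = new String(Files.readAllBytes(p), StandardCharsets.UTF_8)
            .replace("\r\n", "\n");                       // normalize CRLF
        md5.update(relative.getBytes(StandardCharsets.UTF_8));
        md5.update(content.getBytes(StandardCharsets.UTF_8));
      }
    }
    StringBuilder hex = new StringBuilder();
    for (byte b : md5.digest()) {
      hex.append(String.format("%02x", b));
    }
    return hex.toString();
  }
}
{code}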

> Hadoop Common creating package-info.java must not depend on sh, at least for 
> Windows
> 
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Alejandro Abdelnur
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, 
> HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.patch, 
> HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8419) GzipCodec NPE upon reset with IBM JDK

2013-01-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550606#comment-13550606
 ] 

Hudson commented on HADOOP-8419:


Integrated in Hadoop-trunk-Commit #3215 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3215/])
HADOOP-8419. Fixed GzipCode NPE reset for IBM JDK. (Yu Li via eyang) 
(Revision 1431740)
HADOOP-8419. Fixed GzipCode NPE reset for IBM JDK. (Yu Li via eyang) (Revision 
1431739)

 Result = SUCCESS
eyang : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431740
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCompressionStreamReuse.java

eyang : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431739
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/GzipCodec.java


> GzipCodec NPE upon reset with IBM JDK
> -
>
> Key: HADOOP-8419
> URL: https://issues.apache.org/jira/browse/HADOOP-8419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 1.0.3
>Reporter: Luke Lu
>Assignee: Yu Li
>  Labels: gzip, ibm-jdk
> Attachments: HADOOP-8419-branch-1.patch, 
> HADOOP-8419-branch1-v2.patch, HADOOP-8419-trunk.patch, 
> HADOOP-8419-trunk-v2.patch
>
>
> The GzipCodec will NPE upon reset after finish when the native zlib codec is 
> not loaded. When native zlib is loaded the codec creates a 
> CompressorOutputStream that doesn't have the problem; otherwise the GzipCodec 
> uses GZIPOutputStream, which is extended to provide the resetState method. 
> Since IBM JDK 6 SR9 FP2, up to and including the current JDK 6 SR10, 
> GZIPOutputStream#finish releases the underlying deflater, which causes an NPE 
> upon reset. This seems to be an IBM JDK quirk, as Sun JDK and OpenJDK don't 
> have this issue.
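
For context, here is a minimal reuse sketch of the pattern that triggers the NPE 
(hypothetical usage only; the committed TestCompressionStreamReuse may exercise 
this differently):

{code}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.GzipCodec;

public class GzipReuseSketch {
  public static void main(String[] args) throws IOException {
    GzipCodec codec = new GzipCodec();
    codec.setConf(new Configuration());   // pure-Java path when native zlib is absent
    ByteArrayOutputStream sink = new ByteArrayOutputStream();
    CompressionOutputStream out = codec.createOutputStream(sink);

    out.write("first record".getBytes("UTF-8"));
    out.finish();        // on affected IBM JDKs this released the underlying deflater...
    out.resetState();    // ...so this reset used to throw an NPE
    out.write("second record".getBytes("UTF-8"));
    out.finish();
    out.close();
  }
}
{code}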

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8419) GzipCodec NPE upon reset with IBM JDK

2013-01-10 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550600#comment-13550600
 ] 

Eric Yang commented on HADOOP-8419:
---

+1, I just committed this, thank you Yu.

> GzipCodec NPE upon reset with IBM JDK
> -
>
> Key: HADOOP-8419
> URL: https://issues.apache.org/jira/browse/HADOOP-8419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 1.0.3
>Reporter: Luke Lu
>Assignee: Yu Li
>  Labels: gzip, ibm-jdk
> Attachments: HADOOP-8419-branch-1.patch, 
> HADOOP-8419-branch1-v2.patch, HADOOP-8419-trunk.patch, 
> HADOOP-8419-trunk-v2.patch
>
>
> The GzipCodec will NPE upon reset after finish when the native zlib codec is 
> not loaded. When native zlib is loaded the codec creates a 
> CompressorOutputStream that doesn't have the problem; otherwise the GzipCodec 
> uses GZIPOutputStream, which is extended to provide the resetState method. 
> Since IBM JDK 6 SR9 FP2, up to and including the current JDK 6 SR10, 
> GZIPOutputStream#finish releases the underlying deflater, which causes an NPE 
> upon reset. This seems to be an IBM JDK quirk, as Sun JDK and OpenJDK don't 
> have this issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8419) GzipCodec NPE upon reset with IBM JDK

2013-01-10 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-8419:
--

  Resolution: Fixed
Target Version/s: 3.0.0, 1.1.2
  Status: Resolved  (was: Patch Available)

> GzipCodec NPE upon reset with IBM JDK
> -
>
> Key: HADOOP-8419
> URL: https://issues.apache.org/jira/browse/HADOOP-8419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 1.0.3
>Reporter: Luke Lu
>Assignee: Yu Li
>  Labels: gzip, ibm-jdk
> Attachments: HADOOP-8419-branch-1.patch, 
> HADOOP-8419-branch1-v2.patch, HADOOP-8419-trunk.patch, 
> HADOOP-8419-trunk-v2.patch
>
>
> The GzipCodec will NPE upon reset after finish when the native zlib codec is 
> not loaded. When native zlib is loaded the codec creates a 
> CompressorOutputStream that doesn't have the problem; otherwise the GzipCodec 
> uses GZIPOutputStream, which is extended to provide the resetState method. 
> Since IBM JDK 6 SR9 FP2, up to and including the current JDK 6 SR10, 
> GZIPOutputStream#finish releases the underlying deflater, which causes an NPE 
> upon reset. This seems to be an IBM JDK quirk, as Sun JDK and OpenJDK don't 
> have this issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-10 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550526#comment-13550526
 ] 

Thomas Graves commented on HADOOP-9097:
---

There are a couple of files that I'm not sure about here. They have existing 
copyrights/licenses. Does anyone with Apache license experience know how they 
should be handled?

hadoop-hdfs-project/hadoop-hdfs/src/main/native/util/tree.h
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/DataXceiverAspects.aj
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4.c




> Maven RAT plugin is not checking all source files
> -
>
> Key: HADOOP-9097
> URL: https://issues.apache.org/jira/browse/HADOOP-9097
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha, 0.23.5
>Reporter: Tom White
>Assignee: Thomas Graves
>Priority: Critical
> Fix For: 2.0.3-alpha, 0.23.6
>
>
> Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
> downloading the JAR) produces some warnings for Java files, amongst others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9196) Modify BloomFilter.write() to address memory concerns

2013-01-10 Thread James (JIRA)
James created HADOOP-9196:
-

 Summary: Modify BloomFilter.write() to address memory concerns
 Key: HADOOP-9196
 URL: https://issues.apache.org/jira/browse/HADOOP-9196
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: James
Priority: Minor


It appears that org.apache.hadoop.util.bloom.BloomFilter's write() method 
creates a byte array large enough to fit the entire bit vector into memory 
during serialization.  This is unnecessary and may cause out of memory issues 
if the bit vector is sufficiently large and memory is tight.   
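
A minimal sketch of one way to avoid the extra copy, assuming the filter's bit 
vector is a java.util.BitSet (illustrative only, not the Hadoop implementation 
or an attached patch): stream the vector to the DataOutput one 64-bit word at a 
time instead of materializing a full-size byte array.

{code}
import java.io.DataOutput;
import java.io.IOException;
import java.util.BitSet;

public class BitSetStreamingWrite {
  // Serialize a large bit vector with constant extra memory per 64-bit word.
  static void writeBitSet(DataOutput out, BitSet bits, int vectorSize) throws IOException {
    out.writeInt(vectorSize);
    int words = (vectorSize + 63) / 64;
    for (int w = 0; w < words; w++) {
      long word = 0L;
      for (int b = 0; b < 64; b++) {
        int idx = w * 64 + b;
        if (idx < vectorSize && bits.get(idx)) {
          word |= 1L << b;
        }
      }
      out.writeLong(word);   // no full-size intermediate byte[] is ever allocated
    }
  }
}
{code}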

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9194) RPC Support for QoS

2013-01-10 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550389#comment-13550389
 ] 

Eli Collins commented on HADOOP-9194:
-

This is why HDFS has a separate port for "service" IPC; it allows you to do 
port-based QoS (see HDFS-599).

> RPC Support for QoS
> ---
>
> Key: HADOOP-9194
> URL: https://issues.apache.org/jira/browse/HADOOP-9194
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Affects Versions: 2.0.2-alpha
>Reporter: Luke Lu
>
> One of the next frontiers of Hadoop performance is QoS (Quality of Service). 
> We need QoS support to fight the inevitable "buffer bloat" (including various 
> queues, which are probably necessary for throughput) in our software stack. 
> This is important for mixed workload with different latency and throughput 
> requirements (e.g. OLTP vs OLAP, batch and even compaction I/O) against the 
> same DFS.
> Any potential bottleneck will need to be managed by QoS mechanisms, starting 
> with RPC. 
> How about adding a one byte DS (differentiated services) field (a la the 
> 6-bit DS field in IP header) in the RPC header to facilitate the QoS 
> mechanisms (in separate JIRAs)? The byte at a fixed offset (how about 0?) of 
> the header is helpful for implementing high performance QoS mechanisms in 
> switches (software or hardware) and servers with minimum decoding effort.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9195) Generic Use Date Range PathFilter

2013-01-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550387#comment-13550387
 ] 

Hadoop QA commented on HADOOP-9195:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564241/HADOOP-9195.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2023//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2023//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2023//console

This message is automatically generated.

> Generic Use Date Range PathFilter
> -
>
> Key: HADOOP-9195
> URL: https://issues.apache.org/jira/browse/HADOOP-9195
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Caleb Jones
>Priority: Minor
> Attachments: HADOOP-9195.patch, HADOOP-9195.patch
>
>
> It would be useful for Hadoop to provide a general purpose date range 
> PathFilter that operates on file mtime. I have implemented one, with tests, 
> and would like to know where best to put it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9195) Generic Use Date Range PathFilter

2013-01-10 Thread Caleb Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caleb Jones updated HADOOP-9195:


Status: Patch Available  (was: Open)

> Generic Use Date Range PathFilter
> -
>
> Key: HADOOP-9195
> URL: https://issues.apache.org/jira/browse/HADOOP-9195
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Caleb Jones
>Priority: Minor
> Attachments: HADOOP-9195.patch, HADOOP-9195.patch
>
>
> It would be useful for Hadoop to provide a general purpose date range 
> PathFilter that operates on file mtime. I have implemented one, with tests, 
> and would like to know where best to put it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9195) Generic Use Date Range PathFilter

2013-01-10 Thread Caleb Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caleb Jones updated HADOOP-9195:


Status: Open  (was: Patch Available)

> Generic Use Date Range PathFilter
> -
>
> Key: HADOOP-9195
> URL: https://issues.apache.org/jira/browse/HADOOP-9195
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Caleb Jones
>Priority: Minor
> Attachments: HADOOP-9195.patch, HADOOP-9195.patch
>
>
> It would be useful for Hadoop to provide a general purpose date range 
> PathFilter that operates on file mtime. I have implemented one, with tests, 
> and would like to know where best to put it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9195) Generic Use Date Range PathFilter

2013-01-10 Thread Caleb Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caleb Jones updated HADOOP-9195:


Attachment: HADOOP-9195.patch

Tweaked it a bit to allow for open-ended ranges.
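
For readers following along, here is a hypothetical sketch of the kind of 
mtime-based filter being discussed (class and parameter names are assumptions; 
the attached HADOOP-9195.patch may well differ). Open-ended ranges are expressed 
with Long.MIN_VALUE / Long.MAX_VALUE.

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

public class MtimeRangePathFilter implements PathFilter {
  private final FileSystem fs;
  private final long startMs;   // inclusive; Long.MIN_VALUE means no lower bound
  private final long endMs;     // exclusive; Long.MAX_VALUE means no upper bound

  public MtimeRangePathFilter(FileSystem fs, long startMs, long endMs) {
    this.fs = fs;
    this.startMs = startMs;
    this.endMs = endMs;
  }

  @Override
  public boolean accept(Path path) {
    try {
      long mtime = fs.getFileStatus(path).getModificationTime();
      return mtime >= startMs && mtime < endMs;
    } catch (IOException e) {
      return false;   // unreadable or vanished paths are simply filtered out
    }
  }
}
{code}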

> Generic Use Date Range PathFilter
> -
>
> Key: HADOOP-9195
> URL: https://issues.apache.org/jira/browse/HADOOP-9195
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Caleb Jones
>Priority: Minor
> Attachments: HADOOP-9195.patch, HADOOP-9195.patch
>
>
> It would be useful for Hadoop to provide a general purpose date range 
> PathFilter that operates on file mtime. I have implemented one, with tests, 
> and would like to know where best to put it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9194) RPC Support for QoS

2013-01-10 Thread Philip Zeyliger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549938#comment-13549938
 ] 

Philip Zeyliger commented on HADOOP-9194:
-

In a previous life, I used systems which had multiple ports open for the same 
protocols, and relied on both hardware and OS queueing to make one port a 
higher priority than the other. Sure was easy to reason about.

> RPC Support for QoS
> ---
>
> Key: HADOOP-9194
> URL: https://issues.apache.org/jira/browse/HADOOP-9194
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Affects Versions: 2.0.2-alpha
>Reporter: Luke Lu
>
> One of the next frontiers of Hadoop performance is QoS (Quality of Service). 
> We need QoS support to fight the inevitable "buffer bloat" (including various 
> queues, which are probably necessary for throughput) in our software stack. 
> This is important for mixed workload with different latency and throughput 
> requirements (e.g. OLTP vs OLAP, batch and even compaction I/O) against the 
> same DFS.
> Any potential bottleneck will need to be managed by QoS mechanisms, starting 
> with RPC. 
> How about adding a one byte DS (differentiated services) field (a la the 
> 6-bit DS field in IP header) in the RPC header to facilitate the QoS 
> mechanisms (in separate JIRAs)? The byte at a fixed offset (how about 0?) of 
> the header is helpful for implementing high performance QoS mechanisms in 
> switches (software or hardware) and servers with minimum decoding effort.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9194) RPC Support for QoS

2013-01-10 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549923#comment-13549923
 ] 

Luke Lu commented on HADOOP-9194:
-

bq.  Is there a reason we need our own fields, if the same information is 
present in DiffServ?

The field is needed in all layers. IP DS is for IP switches (layer 2/3), but 
our RPC can run over non-IP transports like IB/RDMA, loopback, shared memory, 
and Unix domain sockets (cf. your own work on HDFS-347). One specific example 
would be an HBase region server talking to a DN on the same node via a Unix 
domain socket. You want to be able to differentiate OLTP traffic from compaction 
traffic and set io priority on the fds accordingly (assuming the underlying io 
scheduler supports it, e.g. cfq). 

The RPC field is also useful for layer-7 switches (application proxies, load 
balancers) to implement QoS.
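
A tiny illustration of the fixed-offset idea (the framing here is assumed, not 
the actual Hadoop RPC wire format): an intermediary can classify a frame by 
peeking at a single byte, with no protobuf decoding at all.

{code}
import java.nio.ByteBuffer;

final class DsByteFraming {
  // Prepend a one-byte DS field at offset 0 of an opaque header+payload blob.
  static ByteBuffer frame(byte dsField, byte[] headerAndPayload) {
    ByteBuffer buf = ByteBuffer.allocate(1 + headerAndPayload.length);
    buf.put(dsField);            // fixed offset 0
    buf.put(headerAndPayload);
    buf.flip();
    return buf;
  }

  // A switch, proxy, or local io-priority shim only needs this one read.
  static byte peekDs(ByteBuffer frame) {
    return frame.get(0);
  }
}
{code}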

> RPC Support for QoS
> ---
>
> Key: HADOOP-9194
> URL: https://issues.apache.org/jira/browse/HADOOP-9194
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Affects Versions: 2.0.2-alpha
>Reporter: Luke Lu
>
> One of the next frontiers of Hadoop performance is QoS (Quality of Service). 
> We need QoS support to fight the inevitable "buffer bloat" (including various 
> queues, which are probably necessary for throughput) in our software stack. 
> This is important for mixed workload with different latency and throughput 
> requirements (e.g. OLTP vs OLAP, batch and even compaction I/O) against the 
> same DFS.
> Any potential bottleneck will need to be managed by QoS mechanisms, starting 
> with RPC. 
> How about adding a one byte DS (differentiated services) field (a la the 
> 6-bit DS field in IP header) in the RPC header to facilitate the QoS 
> mechanisms (in separate JIRAs)? The byte at a fixed offset (how about 0?) of 
> the header is helpful for implementing high performance QoS mechanisms in 
> switches (software or hardware) and servers with minimum decoding effort.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8981) TestMetricsSystemImpl fails on Windows

2013-01-10 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549818#comment-13549818
 ] 

Eli Collins commented on HADOOP-8981:
-

This isn't Windows-specific, right? See HDFS-3636. How about merging to trunk 
as well?

> TestMetricsSystemImpl fails on Windows
> --
>
> Key: HADOOP-8981
> URL: https://issues.apache.org/jira/browse/HADOOP-8981
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Xuan Gong
> Fix For: trunk-win
>
> Attachments: HADOOP-8981-branch-trunk-win.1.patch, 
> HADOOP-8981-branch-trunk-win.2.patch, HADOOP-8981-branch-trunk-win.3.patch, 
> HADOOP-8981-branch-trunk-win.4.patch, HADOOP-8981-branch-trunk-win.5.patch
>
>
> The test is failing on an expected mock interaction.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9184) Some reducers failing to write final output file to s3.

2013-01-10 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549805#comment-13549805
 ] 

Kihwal Lee commented on HADOOP-9184:


The PreCommit builds run only against trunk. Target your patch for branch-1 and 
run it through the test-patch process (ant test-patch or run the script 
manually). If everything passes, you can post the result to this jira.


> Some reducers failing to write final output file to s3.
> ---
>
> Key: HADOOP-9184
> URL: https://issues.apache.org/jira/browse/HADOOP-9184
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.2
>Reporter: Jeremy Karn
> Attachments: example.pig, HADOOP-9184-branch-0.20.patch, 
> hadoop-9184.patch, task_log.txt
>
>
> We had a Hadoop job that was running 100 reducers with most of the reducers 
> expected to write out an empty file. When the final output was to an S3 
> bucket we were finding that sometimes we were missing a final part file.  
> This was happening approximately 1 job in 3 (so approximately 1 reducer out 
> of 300 was failing to output the data properly). I've attached the pig script 
> we were using to reproduce the bug.
> After an in-depth look and instrumenting the code, we traced the problem to 
> moveTaskOutputs in FileOutputCommitter.  
> The code there looked like:
> {code}
> if (fs.isFile(taskOutput)) {
>   … do stuff …   
> } else if(fs.getFileStatus(taskOutput).isDir()) {
>   … do stuff … 
> }
> {code}
> And what we saw happening is that for the problem jobs neither path was being 
> exercised.  I've attached the task log of our instrumented code.  In this 
> version we added an else statement and printed out the line "THIS SEEMS LIKE 
> WE SHOULD NEVER GET HERE …".
> The root cause of this seems to be an eventual consistency issue with S3.  
> You can see in the log that the first time moveTaskOutputs is called it finds 
> that the taskOutput is a directory.  It goes into the isDir() branch and 
> successfully retrieves the list of files in that directory from S3 (in this 
> case just one file).  This triggers a recursive call to moveTaskOutputs for 
> the file found in the directory.  But in this pass through moveTaskOutput the 
> temporary output file can't be found resulting in both branches of the above 
> if statement being skipped and the temporary file never being moved to the 
> final output location.
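
A hypothetical defensive variant of that branch (names and retry policy are 
assumptions, not the committed fix): if an eventually-consistent store reports 
the path as neither a file nor a directory, back off and re-check before failing 
loudly instead of silently dropping the output.

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class TaskOutputMover {
  static void moveWithRetry(FileSystem fs, Path taskOutput, Path finalOutput)
      throws IOException, InterruptedException {
    for (int attempt = 0; attempt < 5; attempt++) {
      if (fs.isFile(taskOutput)) {
        if (!fs.rename(taskOutput, finalOutput)) {
          throw new IOException("Failed to rename " + taskOutput + " to " + finalOutput);
        }
        return;
      } else if (fs.exists(taskOutput) && fs.getFileStatus(taskOutput).isDir()) {
        // recurse over the directory's children, as the original code does
        return;
      }
      Thread.sleep(1000L * (attempt + 1));   // back off and re-check the store's view
    }
    throw new IOException("Task output " + taskOutput + " is neither file nor directory");
  }
}
{code}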

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9155) FsPermission should have different default value, 777 for directory and 666 for file

2013-01-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549618#comment-13549618
 ] 

Hudson commented on HADOOP-9155:


Integrated in Hadoop-Hdfs-trunk #1281 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1281/])
HADOOP-9155. FsPermission should have different default value, 777 for 
directory and 666 for file. Contributed by Binglin Chang. (Revision 1431148)

 Result = FAILURE
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431148
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/local/RawLocalFs.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/FsPermission.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextPermissionBase.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileStatus.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFSFileContextMainOperations.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystemPermission.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java


> FsPermission should have different default value, 777 for directory and 666 
> for file
> 
>
> Key: HADOOP-9155
> URL: https://issues.apache.org/jira/browse/HADOOP-9155
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Binglin Chang
>Assignee: Binglin Chang
>Priority: Minor
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-9155.patch, HADOOP-9155.v2.patch, 
> HADOOP-9155.v3.patch, HADOOP-9155.v3.patch, HADOOP-9155.v3.patch
>
>
> The default permission for {{FileSystem#create}} is the same default as for 
> {{FileSystem#mkdirs}}, namely {{0777}}. It would make more sense for the 
> default to be {{0666}} for files and {{0777}} for directories.  The current 
> default leads to a lot of files being created with the executable bit that 
> really should not be.  One example is anything created with FsShell's 
> copyToLocal.
> For reference, {{fopen}} creates files with a mode of {{0666}} (minus 
> whatever bits are set in the umask; usually {{0022}}).  This seems to be the 
> standard behavior and we should follow it.  This is also a regression since 
> branch-1.
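
A quick arithmetic check of how the proposed defaults interact with a typical 
umask of 022 (plain octal math, no Hadoop APIs involved):

{code}
public class UmaskCheck {
  public static void main(String[] args) {
    int umask = 0022;
    // files: 0666 & ~022 = 0644 (rw-r--r--), dirs: 0777 & ~022 = 0755 (rwxr-xr-x)
    System.out.printf("file: %04o -> %04o%n", 0666, 0666 & ~umask);
    System.out.printf("dir:  %04o -> %04o%n", 0777, 0777 & ~umask);
  }
}
{code}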

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9183) Potential deadlock in ActiveStandbyElector

2013-01-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549613#comment-13549613
 ] 

Hudson commented on HADOOP-9183:


Integrated in Hadoop-Hdfs-trunk #1281 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1281/])
HADOOP-9183. Potential deadlock in ActiveStandbyElector. (Revision 1431251)

 Result = FAILURE
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431251
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java


> Potential deadlock in ActiveStandbyElector
> --
>
> Key: HADOOP-9183
> URL: https://issues.apache.org/jira/browse/HADOOP-9183
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.0.2-alpha
>Reporter: Tom White
>Assignee: Tom White
> Fix For: 2.0.3-alpha
>
> Attachments: 2_jcarder_result_1.png, 3_jcarder_result_0.png, 
> HADOOP-9183.patch, HADOOP-9183.patch, HADOOP-9183.patch
>
>
> A jcarder run found a potential deadlock in the locking of 
> ActiveStandbyElector and ActiveStandbyElector.WatcherWithClientRef. No 
> deadlock has been seen in practice, this is just a theoretical possibility at 
> the moment.
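
For readers unfamiliar with jcarder reports, this is the generic shape of the 
cycle it flags (an illustrative example only, not the actual ActiveStandbyElector 
code): two monitors acquired in opposite orders by two threads.

{code}
public class LockOrderDemo {
  private final Object elector = new Object();
  private final Object watcher = new Object();

  void threadA() {
    synchronized (elector) {
      synchronized (watcher) { /* callback into the watcher while holding the elector lock */ }
    }
  }

  void threadB() {
    synchronized (watcher) {
      synchronized (elector) { /* callback into the elector while holding the watcher lock */ }
    }
  }
}
{code}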

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9155) FsPermission should have different default value, 777 for directory and 666 for file

2013-01-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549605#comment-13549605
 ] 

Hudson commented on HADOOP-9155:


Integrated in Hadoop-Mapreduce-trunk #1309 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1309/])
HADOOP-9155. FsPermission should have different default value, 777 for 
directory and 666 for file. Contributed by Binglin Chang. (Revision 1431148)

 Result = FAILURE
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431148
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/local/RawLocalFs.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/FsPermission.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextPermissionBase.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileStatus.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFSFileContextMainOperations.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystemPermission.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java


> FsPermission should have different default value, 777 for directory and 666 
> for file
> 
>
> Key: HADOOP-9155
> URL: https://issues.apache.org/jira/browse/HADOOP-9155
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Binglin Chang
>Assignee: Binglin Chang
>Priority: Minor
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-9155.patch, HADOOP-9155.v2.patch, 
> HADOOP-9155.v3.patch, HADOOP-9155.v3.patch, HADOOP-9155.v3.patch
>
>
> The default permission for {{FileSystem#create}} is the same default as for 
> {{FileSystem#mkdirs}}, namely {{0777}}. It would make more sense for the 
> default to be {{0666}} for files and {{0777}} for directories.  The current 
> default leads to a lot of files being created with the executable bit that 
> really should not be.  One example is anything created with FsShell's 
> copyToLocal.
> For reference, {{fopen}} creates files with a mode of {{0666}} (minus 
> whatever bits are set in the umask; usually {{0022}}).  This seems to be the 
> standard behavior and we should follow it.  This is also a regression since 
> branch-1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9183) Potential deadlock in ActiveStandbyElector

2013-01-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549600#comment-13549600
 ] 

Hudson commented on HADOOP-9183:


Integrated in Hadoop-Mapreduce-trunk #1309 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1309/])
HADOOP-9183. Potential deadlock in ActiveStandbyElector. (Revision 1431251)

 Result = FAILURE
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431251
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java


> Potential deadlock in ActiveStandbyElector
> --
>
> Key: HADOOP-9183
> URL: https://issues.apache.org/jira/browse/HADOOP-9183
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.0.2-alpha
>Reporter: Tom White
>Assignee: Tom White
> Fix For: 2.0.3-alpha
>
> Attachments: 2_jcarder_result_1.png, 3_jcarder_result_0.png, 
> HADOOP-9183.patch, HADOOP-9183.patch, HADOOP-9183.patch
>
>
> A jcarder run found a potential deadlock in the locking of 
> ActiveStandbyElector and ActiveStandbyElector.WatcherWithClientRef. No 
> deadlock has been seen in practice, this is just a theoretical possibility at 
> the moment.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9155) FsPermission should have different default value, 777 for directory and 666 for file

2013-01-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549523#comment-13549523
 ] 

Hudson commented on HADOOP-9155:


Integrated in Hadoop-Yarn-trunk #92 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/92/])
HADOOP-9155. FsPermission should have different default value, 777 for 
directory and 666 for file. Contributed by Binglin Chang. (Revision 1431148)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431148
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/local/RawLocalFs.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/FsPermission.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextPermissionBase.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileStatus.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFSFileContextMainOperations.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystemPermission.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java


> FsPermission should have different default value, 777 for directory and 666 
> for file
> 
>
> Key: HADOOP-9155
> URL: https://issues.apache.org/jira/browse/HADOOP-9155
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Binglin Chang
>Assignee: Binglin Chang
>Priority: Minor
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-9155.patch, HADOOP-9155.v2.patch, 
> HADOOP-9155.v3.patch, HADOOP-9155.v3.patch, HADOOP-9155.v3.patch
>
>
> The default permission for {{FileSystem#create}} is the same default as for 
> {{FileSystem#mkdirs}}, namely {{0777}}. It would make more sense for the 
> default to be {{0666}} for files and {{0777}} for directories.  The current 
> default leads to a lot of files being created with the executable bit that 
> really should not be.  One example is anything created with FsShell's 
> copyToLocal.
> For reference, {{fopen}} creates files with a mode of {{0666}} (minus 
> whatever bits are set in the umask; usually {{0022}}).  This seems to be the 
> standard behavior and we should follow it.  This is also a regression since 
> branch-1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9183) Potential deadlock in ActiveStandbyElector

2013-01-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549518#comment-13549518
 ] 

Hudson commented on HADOOP-9183:


Integrated in Hadoop-Yarn-trunk #92 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/92/])
HADOOP-9183. Potential deadlock in ActiveStandbyElector. (Revision 1431251)

 Result = SUCCESS
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431251
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java


> Potential deadlock in ActiveStandbyElector
> --
>
> Key: HADOOP-9183
> URL: https://issues.apache.org/jira/browse/HADOOP-9183
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.0.2-alpha
>Reporter: Tom White
>Assignee: Tom White
> Fix For: 2.0.3-alpha
>
> Attachments: 2_jcarder_result_1.png, 3_jcarder_result_0.png, 
> HADOOP-9183.patch, HADOOP-9183.patch, HADOOP-9183.patch
>
>
> A jcarder run found a potential deadlock in the locking of 
> ActiveStandbyElector and ActiveStandbyElector.WatcherWithClientRef. No 
> deadlock has been seen in practice, this is just a theoretical possibility at 
> the moment.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9183) Potential deadlock in ActiveStandbyElector

2013-01-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549506#comment-13549506
 ] 

Hudson commented on HADOOP-9183:


Integrated in Hadoop-trunk-Commit #3212 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3212/])
HADOOP-9183. Potential deadlock in ActiveStandbyElector. (Revision 1431251)

 Result = SUCCESS
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431251
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java


> Potential deadlock in ActiveStandbyElector
> --
>
> Key: HADOOP-9183
> URL: https://issues.apache.org/jira/browse/HADOOP-9183
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.0.2-alpha
>Reporter: Tom White
>Assignee: Tom White
> Fix For: 2.0.3-alpha
>
> Attachments: 2_jcarder_result_1.png, 3_jcarder_result_0.png, 
> HADOOP-9183.patch, HADOOP-9183.patch, HADOOP-9183.patch
>
>
> A jcarder run found a potential deadlock in the locking of 
> ActiveStandbyElector and ActiveStandbyElector.WatcherWithClientRef. No 
> deadlock has been seen in practice, this is just a theoretical possibility at 
> the moment.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9195) Generic Use Date Range PathFilter

2013-01-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549504#comment-13549504
 ] 

Hadoop QA commented on HADOOP-9195:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564145/HADOOP-9195.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2022//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2022//console

This message is automatically generated.

> Generic Use Date Range PathFilter
> -
>
> Key: HADOOP-9195
> URL: https://issues.apache.org/jira/browse/HADOOP-9195
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Caleb Jones
>Priority: Minor
> Attachments: HADOOP-9195.patch
>
>
> It would be useful for Hadoop to provide a general purpose date range 
> PathFilter that operates on file mtime. I have implemented one, with tests, 
> and would like to know where best to put it.
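As a rough sketch of what such a filter might look like: the class name, constructor, and directory handling below are illustrative assumptions, not the API in the attached patch. Because PathFilter#accept(Path) receives only a Path, this version looks the FileStatus up through a FileSystem handle.

{noformat}
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

// Hypothetical sketch of a date-range PathFilter keyed on file mtime.
public class DateRangePathFilterSketch implements PathFilter {

  private final FileSystem fs;
  private final long startMillis;  // inclusive lower bound on mtime
  private final long endMillis;    // exclusive upper bound on mtime

  public DateRangePathFilterSketch(FileSystem fs, long startMillis, long endMillis) {
    this.fs = fs;
    this.startMillis = startMillis;
    this.endMillis = endMillis;
  }

  @Override
  public boolean accept(Path path) {
    try {
      FileStatus status = fs.getFileStatus(path);
      if (status.isDirectory()) {
        return true;  // let directories through so a listing can recurse
      }
      long mtime = status.getModificationTime();
      return mtime >= startMillis && mtime < endMillis;
    } catch (IOException e) {
      return false;  // unreadable paths are simply filtered out
    }
  }
}
{noformat}

A caller could then pass it to FileSystem#listStatus(Path, PathFilter), e.g. fs.listStatus(new Path("/data"), new DateRangePathFilterSketch(fs, start, end)), to restrict a listing to files whose mtime falls in [start, end).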

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9183) Potential deadlock in ActiveStandbyElector

2013-01-10 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-9183:
--

   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
 Hadoop Flags: Reviewed
       Status: Resolved  (was: Patch Available)

I just committed this. Thanks for the reviews, Todd.

> Potential deadlock in ActiveStandbyElector
> --
>
> Key: HADOOP-9183
> URL: https://issues.apache.org/jira/browse/HADOOP-9183
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.0.2-alpha
>Reporter: Tom White
>Assignee: Tom White
> Fix For: 2.0.3-alpha
>
> Attachments: 2_jcarder_result_1.png, 3_jcarder_result_0.png, 
> HADOOP-9183.patch, HADOOP-9183.patch, HADOOP-9183.patch
>
>
> A jcarder run found a potential deadlock in the locking of 
> ActiveStandbyElector and ActiveStandbyElector.WatcherWithClientRef. No 
> deadlock has been seen in practice, this is just a theoretical possibility at 
> the moment.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9195) Generic Use Date Range PathFilter

2013-01-10 Thread Caleb Jones (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549471#comment-13549471
 ] 

Caleb Jones commented on HADOOP-9195:
-

I don't see how the TestZKFailoverController tests are in any way related to 
the mere addition of a new PathFilter in org.apache.hadoop.fs. I have, however, 
fixed the compiler warnings (the patch was referencing the deprecated 
FileStatus.isDir()).

Unless someone can point out how these code changes could cause 
TestZKFailoverController to fail, I'm going to assume its tests are having 
stability issues:

{noformat}
Error Message

test timed out after 15000 milliseconds

Stacktrace

java.lang.Exception: test timed out after 15000 milliseconds
at java.lang.Object.wait(Native Method)
at 
org.apache.hadoop.ha.ZKFailoverController.waitForActiveAttempt(ZKFailoverController.java:458)
at 
org.apache.hadoop.ha.ZKFailoverController.doGracefulFailover(ZKFailoverController.java:646)
...
{noformat}

> Generic Use Date Range PathFilter
> -
>
> Key: HADOOP-9195
> URL: https://issues.apache.org/jira/browse/HADOOP-9195
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Caleb Jones
>Priority: Minor
> Attachments: HADOOP-9195.patch
>
>
> It would be useful for Hadoop to provide a general purpose date range 
> PathFilter that operates on file mtime. I have implemented one, with tests, 
> and would like to know where best to put it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9195) Generic Use Date Range PathFilter

2013-01-10 Thread Caleb Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caleb Jones updated HADOOP-9195:


Status: Patch Available  (was: Open)

Submitting updated patch with compiler warnings fixed.

> Generic Use Date Range PathFilter
> -
>
> Key: HADOOP-9195
> URL: https://issues.apache.org/jira/browse/HADOOP-9195
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Caleb Jones
>Priority: Minor
> Attachments: HADOOP-9195.patch
>
>
> It would be useful for Hadoop to provide a general purpose date range 
> PathFilter that operates on file mtime. I have implemented one, with tests, 
> and would like to know where best to put it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9195) Generic Use Date Range PathFilter

2013-01-10 Thread Caleb Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caleb Jones updated HADOOP-9195:


Status: Open  (was: Patch Available)

> Generic Use Date Range PathFilter
> -
>
> Key: HADOOP-9195
> URL: https://issues.apache.org/jira/browse/HADOOP-9195
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Caleb Jones
>Priority: Minor
> Attachments: HADOOP-9195.patch
>
>
> It would be useful for Hadoop to provide a general purpose date range 
> PathFilter that operates on file mtime. I have implemented one, with tests, 
> and would like to know where best to put it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9195) Generic Use Date Range PathFilter

2013-01-10 Thread Caleb Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caleb Jones updated HADOOP-9195:


Attachment: HADOOP-9195.patch

Generic-use date range PathFilter implementation (replaced references to the 
deprecated FileStatus.isDir() with the new FileStatus.isDirectory()).
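For context, the compiler-warning fix mentioned above boils down to swapping the deprecated accessor for its replacement, roughly as below; the helper name is invented for illustration and is not part of the patch.

{noformat}
import org.apache.hadoop.fs.FileStatus;

// Illustrative only: shows the accessor swap described in the comment above.
public class IsDirectoryExample {
  static boolean isDir(FileStatus status) {
    // before the update: status.isDir()   (deprecated)
    // after the update:
    return status.isDirectory();
  }
}
{noformat}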

> Generic Use Date Range PathFilter
> -
>
> Key: HADOOP-9195
> URL: https://issues.apache.org/jira/browse/HADOOP-9195
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Caleb Jones
>Priority: Minor
> Attachments: HADOOP-9195.patch
>
>
> It would be useful for Hadoop to provide a general purpose date range 
> PathFilter that operates on file mtime. I have implemented one, with tests, 
> and would like to know where best to put it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9195) Generic Use Date Range PathFilter

2013-01-10 Thread Caleb Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caleb Jones updated HADOOP-9195:


Attachment: (was: HADOOP-9195.patch)

> Generic Use Date Range PathFilter
> -
>
> Key: HADOOP-9195
> URL: https://issues.apache.org/jira/browse/HADOOP-9195
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Caleb Jones
>Priority: Minor
>
> It would be useful for Hadoop to provide a general purpose date range 
> PathFilter that operates on file mtime. I have implemented one, with tests, 
> and would like to know where best to put it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9195) Generic Use Date Range PathFilter

2013-01-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549440#comment-13549440
 ] 

Hadoop QA commented on HADOOP-9195:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564132/HADOOP-9195.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 2034 javac 
compiler warnings (more than the trunk's current 2014 warnings).

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2021//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2021//artifact/trunk/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2021//console

This message is automatically generated.

> Generic Use Date Range PathFilter
> -
>
> Key: HADOOP-9195
> URL: https://issues.apache.org/jira/browse/HADOOP-9195
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Caleb Jones
>Priority: Minor
> Attachments: HADOOP-9195.patch
>
>
> It would be useful for Hadoop to provide a general purpose date range 
> PathFilter that operates on file mtime. I have implemented one, with tests, 
> and would like to know where best to put it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira