[jira] [Updated] (HADOOP-11510) Expose truncate API via FileContext

2015-01-27 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-11510:

Attachment: HADOOP-11510.002.patch

Fix the build failure.

> Expose truncate API via FileContext
> ---
>
> Key: HADOOP-11510
> URL: https://issues.apache.org/jira/browse/HADOOP-11510
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HADOOP-11510.001.patch, HADOOP-11510.002.patch
>
>
> We also need to expose the truncate API via {{org.apache.hadoop.fs.FileContext}}.
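For context, a minimal sketch of how the exposed API might be used, assuming a 
{{FileContext#truncate(Path, long)}} signature that mirrors 
{{FileSystem#truncate}} (the exact signature is whatever the patch defines):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

public class TruncateExample {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getFileContext(new Configuration());
    Path file = new Path("/tmp/data");
    // Truncate the file to 128 bytes; following FileSystem#truncate, a false
    // return value would mean truncation continues asynchronously while the
    // last block is recovered.
    boolean done = fc.truncate(file, 128L);
    System.out.println("truncate completed synchronously: " + done);
  }
}
{code}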



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11335) KMS ACL in meta data or database

2015-01-27 Thread Dian Fu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated HADOOP-11335:
-
Attachment: HADOOP-11335.005.patch

Update the patch to fix the unit test failure.

> KMS ACL in meta data or database
> 
>
> Key: HADOOP-11335
> URL: https://issues.apache.org/jira/browse/HADOOP-11335
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Jerry Chen
>Assignee: Dian Fu
>  Labels: Security
> Attachments: HADOOP-11335.001.patch, HADOOP-11335.002.patch, 
> HADOOP-11335.003.patch, HADOOP-11335.004.patch, HADOOP-11335.005.patch, KMS 
> ACL in metadata or database.pdf
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> Currently Hadoop KMS implements ACLs for keys, and the per-key ACLs are 
> stored in the configuration file kms-acls.xml.
> Managing ACLs in a configuration file is not easy in enterprise usage, and it 
> creates difficulties for backup and recovery.
> Ideally, the ACLs for keys would be stored in the key metadata, similar to 
> file system ACLs.  That way, backup and recovery that works on keys would 
> work for the key ACLs too.
> Moreover, with the ACLs in metadata, the ACL of each key could be easily 
> manipulated with an API or command line tool and take effect instantly.  This 
> is very important for enterprise-level access control management, and can be 
> addressed in a separate JIRA.  With the configuration file, these 
> capabilities would be hard to provide.
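As a rough illustration of the idea (purely a sketch; the attribute names and 
the use of key attributes here are assumptions, not the attached design), the 
per-key ACL could ride along in the key's metadata attributes so that key 
backup and restore carry it automatically:
{code}
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;

public class KeyAclInMetadataSketch {
  // Store the ACL entries as metadata attributes when creating the key, so
  // anything that backs up key metadata also backs up the ACL.
  static void createKeyWithAcl(KeyProvider provider, Configuration conf)
      throws Exception {
    Map<String, String> attrs = new HashMap<>();
    attrs.put("acl.MANAGEMENT", "alice,keyadmins"); // hypothetical attribute
    attrs.put("acl.GENERATE_EEK", "hdfs");          // names and values
    KeyProvider.Options options = new KeyProvider.Options(conf)
        .setAttributes(attrs);
    provider.createKey("key1", options);
  }
}
{code}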



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11316) "mvn package -Pdist,docs -DskipTests -Dtar" fails because of non-ascii characters

2015-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294729#comment-14294729
 ] 

Hadoop QA commented on HADOOP-11316:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12694918/HADOOP-11316.1.patch
  against trunk revision ee1e06a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5512//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5512//console

This message is automatically generated.

> "mvn package -Pdist,docs -DskipTests -Dtar" fails because of non-ascii 
> characters
> -
>
> Key: HADOOP-11316
> URL: https://issues.apache.org/jira/browse/HADOOP-11316
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
>Priority: Blocker
> Attachments: HADOOP-11316.1.patch
>
>
> The command fails because the following files include non-ASCII characters.
> * ComparableVersion.java
> * CommonConfigurationKeysPublic.java
> * ComparableVersion.java
> {code}
>   [javadoc] 
> /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
>  error: unmappable character for encoding ASCII
>   [javadoc] //author mailto:hbout...@apache.org";>Herv?? 
> Boutemy
>   [javadoc]   ^
>   [javadoc] 
> /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
>  error: unmappable character for encoding ASCII
>   [javadoc] //author mailto:hbout...@apache.org";>Herv?? 
> Boutemy
> {code}
> {code}
>   [javadoc] 
> /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java:318:
>  error: unmappable character for encoding ASCII
>   [javadoc]   //  

[jira] [Commented] (HADOOP-11377) jdiff failing on java 8, "Null.java" not found

2015-01-27 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294692#comment-14294692
 ] 

Tsuyoshi OZAWA commented on HADOOP-11377:
-

This problem seems to be reproducible with JDK 7 too.

> jdiff failing on java 8, "Null.java" not found
> --
>
> Key: HADOOP-11377
> URL: https://issues.apache.org/jira/browse/HADOOP-11377
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.7.0
> Environment: Java8 jenkins
>Reporter: Steve Loughran
>
> Jdiff is having problems on Java 8, as it cannot find a javadoc for the new 
> {{Null}} datatype
> {code}
> '
> The ' characters around the executable and arguments are
> not part of the command.
>   [javadoc] javadoc: error - Illegal package name: ""
>   [javadoc] javadoc: error - File not found: 
> "
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11316) "mvn package -Pdist,docs -DskipTests -Dtar" fails because of non-ascii characters

2015-01-27 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HADOOP-11316:

Attachment: HADOOP-11316.1.patch

Attaching first patch.

After fixing this problem, I ran into HADOOP-11377. Should I fix HADOOP-11377 
here? I haven't found any workaround for that problem yet.
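One common remedy for this class of failure (an assumption here, not 
necessarily what the attached patch does) is to spell the offending characters 
as Unicode escapes so the source files stay pure ASCII; javac and javadoc 
process \uXXXX escapes even inside comments:
{code}
/**
 * The accented name below is written with a Unicode escape, so this file
 * contains only ASCII bytes and javadoc succeeds under -encoding ASCII.
 *
 * @author <a href="mailto:...">Herv\u00e9 Boutemy</a>
 */
public class AsciiSafeJavadoc {}
{code}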

> "mvn package -Pdist,docs -DskipTests -Dtar" fails because of non-ascii 
> characters
> -
>
> Key: HADOOP-11316
> URL: https://issues.apache.org/jira/browse/HADOOP-11316
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
>Priority: Blocker
> Attachments: HADOOP-11316.1.patch
>
>
> The command fails because the following files include non-ASCII characters.
> * ComparableVersion.java
> * CommonConfigurationKeysPublic.java
> * ComparableVersion.java
> {code}
>   [javadoc] 
> /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
>  error: unmappable character for encoding ASCII
>   [javadoc] //author mailto:hbout...@apache.org";>Herv?? 
> Boutemy
>   [javadoc]   ^
>   [javadoc] 
> /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
>  error: unmappable character for encoding ASCII
>   [javadoc] //author mailto:hbout...@apache.org";>Herv?? 
> Boutemy
> {code}
> {code}
>   [javadoc] 
> /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java:318:
>  error: unmappable character for encoding ASCII
>   [javadoc]   //  

[jira] [Updated] (HADOOP-11316) "mvn package -Pdist,docs -DskipTests -Dtar" fails because of non-ascii characters

2015-01-27 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HADOOP-11316:

Status: Patch Available  (was: Open)

> "mvn package -Pdist,docs -DskipTests -Dtar" fails because of non-ascii 
> characters
> -
>
> Key: HADOOP-11316
> URL: https://issues.apache.org/jira/browse/HADOOP-11316
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
>Priority: Blocker
> Attachments: HADOOP-11316.1.patch
>
>
> The command fails because the following files include non-ASCII characters.
> * ComparableVersion.java
> * CommonConfigurationKeysPublic.java
> * ComparableVersion.java
> {code}
>   [javadoc] 
> /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
>  error: unmappable character for encoding ASCII
>   [javadoc] //author mailto:hbout...@apache.org";>Herv?? 
> Boutemy
>   [javadoc]   ^
>   [javadoc] 
> /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
>  error: unmappable character for encoding ASCII
>   [javadoc] //author mailto:hbout...@apache.org";>Herv?? 
> Boutemy
> {code}
> {code}
>   [javadoc] 
> /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java:318:
>  error: unmappable character for encoding ASCII
>   [javadoc]   //  

[jira] [Commented] (HADOOP-11510) Expose truncate API via FileContext

2015-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294596#comment-14294596
 ] 

Hadoop QA commented on HADOOP-11510:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12694744/HADOOP-11510.001.patch
  against trunk revision ee1e06a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5511//console

This message is automatically generated.

> Expose truncate API via FileContext
> ---
>
> Key: HADOOP-11510
> URL: https://issues.apache.org/jira/browse/HADOOP-11510
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HADOOP-11510.001.patch
>
>
> We also need to expose the truncate API via {{org.apache.hadoop.fs.FileContext}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11514) Raw Erasure Coder API for concrete encoding and decoding

2015-01-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294594#comment-14294594
 ] 

Anu Engineer commented on HADOOP-11514:
---

I meant ECChunk instead of ECBlock.

Thx
Anu

> Raw Erasure Coder API for concrete encoding and decoding
> 
>
> Key: HADOOP-11514
> URL: https://issues.apache.org/jira/browse/HADOOP-11514
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-EC
>
> Attachments: HDFS-7353-v1.patch, HDFS-7353-v2.patch, 
> HDFS-7353-v3.patch, HDFS-7353-v4.patch, HDFS-7353-v5.patch, 
> HDFS-7353-v6.patch, HDFS-7353-v7.patch
>
>
> This is to abstract and define a raw erasure coder API across different 
> coding algorithms such as RS and XOR. Such an API can be implemented by 
> utilizing various libraries, such as the Intel ISA library and the Jerasure 
> library.
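As a rough sketch of the shape such an API could take (the names below are 
hypothetical, not the committed interfaces; the actual definitions are in the 
attached patches):
{code}
import java.nio.ByteBuffer;

// A raw coder is configured with the data/parity unit counts and the chunk
// size, and works directly on raw chunks of bytes.
interface RawErasureEncoderSketch {
  void initialize(int numDataUnits, int numParityUnits, int chunkSize);

  // Compute the parity chunks from the data chunks.
  void encode(ByteBuffer[] dataChunks, ByteBuffer[] parityChunks);
}

interface RawErasureDecoderSketch {
  void initialize(int numDataUnits, int numParityUnits, int chunkSize);

  // Reconstruct the chunks at erasedIndexes from the surviving inputs;
  // nulls in inputs mark the missing chunks.
  void decode(ByteBuffer[] inputs, int[] erasedIndexes,
              ByteBuffer[] outputs);
}
{code}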



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11510) Expose truncate API via FileContext

2015-01-27 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294589#comment-14294589
 ] 

Yi Liu commented on HADOOP-11510:
-

Re-triggered Jenkins on the back end.

Thanks, Charles, for the review.
{quote}
To be consistent in testTruncateThroughFileContext you could add a few more 
finals to the decls.
{quote}
It's really not an issue; I don't think we need to worry about it.

{quote}
Just out of curiosity, why 3 in newLength = fileLength/3?
{quote}
We can choose any length for the test; I just picked that one.

> Expose truncate API via FileContext
> ---
>
> Key: HADOOP-11510
> URL: https://issues.apache.org/jira/browse/HADOOP-11510
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HADOOP-11510.001.patch
>
>
> We also need to expose the truncate API via {{org.apache.hadoop.fs.FileContext}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HADOOP-11510) Expose truncate API via FileContext

2015-01-27 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-11510:

Comment: was deleted

(was: {color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12694743/HADOOP-11510.001.patch
  against trunk revision 6f9fe76.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5492//console

This message is automatically generated.)

> Expose truncate API via FileContext
> ---
>
> Key: HADOOP-11510
> URL: https://issues.apache.org/jira/browse/HADOOP-11510
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HADOOP-11510.001.patch
>
>
> We also need to expose the truncate API via {{org.apache.hadoop.fs.FileContext}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HADOOP-11510) Expose truncate API via FileContext

2015-01-27 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-11510:

Comment: was deleted

(was: {color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12694744/HADOOP-11510.001.patch
  against trunk revision 6f9fe76.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5493//console

This message is automatically generated.)

> Expose truncate API via FileContext
> ---
>
> Key: HADOOP-11510
> URL: https://issues.apache.org/jira/browse/HADOOP-11510
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HADOOP-11510.001.patch
>
>
> We also need to expose the truncate API via {{org.apache.hadoop.fs.FileContext}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11469) KMS should skip default.key.acl and whitelist.key.acl when loading key acl

2015-01-27 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-11469:

   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2. Thanks to Dian for the contribution and to 
Charles for the review.

> KMS should skip default.key.acl and whitelist.key.acl when loading key acl
> --
>
> Key: HADOOP-11469
> URL: https://issues.apache.org/jira/browse/HADOOP-11469
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11469.001.patch, HADOOP-11469.002.patch, 
> HADOOP-11469.003.patch, HADOOP-11469.004.patch, HADOOP-11469.005.patch, 
> HADOOP-11469.006.patch
>
>
> KMSACLs#setKeyACLs loads key ACLs from the configuration by checking whether 
> the key name contains "key.acl". However, this also matches "default.key.acl" 
> and "whitelist.key.acl", which is incorrect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11469) KMS should skip default.key.acl and whitelist.key.acl when loading key acl

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294541#comment-14294541
 ] 

Hudson commented on HADOOP-11469:
-

FAILURE: Integrated in Hadoop-trunk-Commit #6948 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6948/])
HADOOP-11469. KMS should skip default.key.acl and whitelist.key.acl when 
loading key acl. (Dian Fu via yliu) (yliu: rev 
ee1e06a3ab9136a3cd32b44c5535dfd2443bfad6)
* 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMSACLs.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSConfiguration.java
* 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java


> KMS should skip default.key.acl and whitelist.key.acl when loading key acl
> --
>
> Key: HADOOP-11469
> URL: https://issues.apache.org/jira/browse/HADOOP-11469
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Minor
> Attachments: HADOOP-11469.001.patch, HADOOP-11469.002.patch, 
> HADOOP-11469.003.patch, HADOOP-11469.004.patch, HADOOP-11469.005.patch, 
> HADOOP-11469.006.patch
>
>
> KMSACLs#setKeyACLs loads key ACLs from the configuration by checking whether 
> the key name contains "key.acl". However, this also matches "default.key.acl" 
> and "whitelist.key.acl", which is incorrect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11514) Raw Erasure Coder API for concrete encoding and decoding

2015-01-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294489#comment-14294489
 ] 

Anu Engineer commented on HADOOP-11514:
---


[~drankye] This is more of a question than a review comment. When we invoke 
the pure Java decoder, how does it know whether the ECBlock is valid? In other 
words, don't you need the hash of the block to be part of ECBlock so that it 
can be validated before decoding? I am very new to this code base; if this 
validation is done somewhere else, please point me to it.

> Raw Erasure Coder API for concrete encoding and decoding
> 
>
> Key: HADOOP-11514
> URL: https://issues.apache.org/jira/browse/HADOOP-11514
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-EC
>
> Attachments: HDFS-7353-v1.patch, HDFS-7353-v2.patch, 
> HDFS-7353-v3.patch, HDFS-7353-v4.patch, HDFS-7353-v5.patch, 
> HDFS-7353-v6.patch, HDFS-7353-v7.patch
>
>
> This is to abstract and define a raw erasure coder API across different 
> coding algorithms such as RS and XOR. Such an API can be implemented by 
> utilizing various libraries, such as the Intel ISA library and the Jerasure 
> library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11223) Offer a read-only conf alternative to new Configuration()

2015-01-27 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294482#comment-14294482
 ] 

Colin Patrick McCabe commented on HADOOP-11223:
---

bq. Gopal wrote: The primary blockage I had trying to implement this is the 
expected behaviour of addDefaultResource(). The overlays set programmatically 
is somewhat easier to disable cleanly.

Yeah.  {{addDefaultResource}} is certainly problematic.  We could probably just 
have a static copy of the configuration for each component (hadoop common, 
hdfs, yarn, etc.), to get around the fact that different XML files will be 
parsed for each one.

bq. Varun wrote: From the preceding discussion it seemed immutability was not 
to be considered. And programmers using the class will take care of it.

I think if we've learned anything from Configuration.java, it's that the 
programmers will *not* take care of it :)

bq. Can't we keep a simple flag in Configuration class to make it read only. 
DefaultConfiguration can probably extend Configuration. We can make this flag 
to be set only from DefaultConfiguration(in constructor) and disallow any 
operations which involve setting keys or loading resources, if this flag is set.

I would rather not add a lot of complexity without benchmarks showing that it's 
needed.  Copying the configuration object is simple and we know it will work.  
Plus, the real cost is reading the config files from the disk, not creating 
more Java objects on the heap.
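A minimal sketch of the copying approach described above (the class and method 
names are hypothetical): parse the default resources once, then hand out cheap 
copies so static initializers never re-read the XML:
{code}
import org.apache.hadoop.conf.Configuration;

public final class DefaultConfigurationSketch {
  // Parsed once, on class load; this is the expensive XML-reading step.
  private static final Configuration DEFAULT = new Configuration();

  private DefaultConfigurationSketch() {}

  // Callers get a copy built from the already-parsed properties. The copy
  // stays mutable, so a misbehaving caller cannot corrupt the shared
  // default instance.
  public static Configuration getDefault() {
    return new Configuration(DEFAULT);
  }
}
{code}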

> Offer a read-only conf alternative to new Configuration()
> -
>
> Key: HADOOP-11223
> URL: https://issues.apache.org/jira/browse/HADOOP-11223
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Gopal V
>Assignee: Varun Saxena
>  Labels: Performance
> Attachments: HADOOP-11223.001.patch
>
>
> new Configuration() is called from several static blocks across Hadoop.
> This is incredibly inefficient, since each one of those involves primarily 
> XML parsing at a point where the JIT won't be triggered & interpreter mode is 
> essentially forced on the JVM.
> The alternative solution would be to offer a {{Configuration::getDefault()}} 
> method which disallows any modifications.
> At the very least, such a method would need to be called from:
> # org.apache.hadoop.io.nativeio.NativeIO::<clinit>()
> # org.apache.hadoop.security.SecurityUtil::<clinit>()
> # org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider::<clinit>()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11514) Raw Erasure Coder API for concrete encoding and decoding

2015-01-27 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294401#comment-14294401
 ] 

Zhe Zhang commented on HADOOP-11514:


[~drankye] When you make the next rev, could you also fix the lines exceeding 
80 characters? I found at least one:
line 25 of {{RawErasureCoder}}

> Raw Erasure Coder API for concrete encoding and decoding
> 
>
> Key: HADOOP-11514
> URL: https://issues.apache.org/jira/browse/HADOOP-11514
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-EC
>
> Attachments: HDFS-7353-v1.patch, HDFS-7353-v2.patch, 
> HDFS-7353-v3.patch, HDFS-7353-v4.patch, HDFS-7353-v5.patch, 
> HDFS-7353-v6.patch, HDFS-7353-v7.patch
>
>
> This is to abstract and define a raw erasure coder API across different 
> coding algorithms such as RS and XOR. Such an API can be implemented by 
> utilizing various libraries, such as the Intel ISA library and the Jerasure 
> library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11514) Raw Erasure Coder API for concrete encoding and decoding

2015-01-27 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294350#comment-14294350
 ] 

Zhe Zhang commented on HADOOP-11514:


Just saw one issue not addressed:

dataSize, paritySize, and chunkSize apply to all descendants of this interface 
(RawErasureCoder). Shouldn't they become member variables?

> Raw Erasure Coder API for concrete encoding and decoding
> 
>
> Key: HADOOP-11514
> URL: https://issues.apache.org/jira/browse/HADOOP-11514
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-EC
>
> Attachments: HDFS-7353-v1.patch, HDFS-7353-v2.patch, 
> HDFS-7353-v3.patch, HDFS-7353-v4.patch, HDFS-7353-v5.patch, 
> HDFS-7353-v6.patch, HDFS-7353-v7.patch
>
>
> This is to abstract and define a raw erasure coder API across different 
> coding algorithms such as RS and XOR. Such an API can be implemented by 
> utilizing various libraries, such as the Intel ISA library and the Jerasure 
> library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11514) Raw Erasure Coder API for concrete encoding and decoding

2015-01-27 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294330#comment-14294330
 ] 

Zhe Zhang commented on HADOOP-11514:


No worries, I'll make the package name change.

> Raw Erasure Coder API for concrete encoding and decoding
> 
>
> Key: HADOOP-11514
> URL: https://issues.apache.org/jira/browse/HADOOP-11514
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-EC
>
> Attachments: HDFS-7353-v1.patch, HDFS-7353-v2.patch, 
> HDFS-7353-v3.patch, HDFS-7353-v4.patch, HDFS-7353-v5.patch, 
> HDFS-7353-v6.patch, HDFS-7353-v7.patch
>
>
> This is to abstract and define a raw erasure coder API across different 
> coding algorithms such as RS and XOR. Such an API can be implemented by 
> utilizing various libraries, such as the Intel ISA library and the Jerasure 
> library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11514) Raw Erasure Coder API for concrete encoding and decoding

2015-01-27 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294328#comment-14294328
 ] 

Kai Zheng commented on HADOOP-11514:


Zhe and Tsz, as I'm traveling today and it's not convenient to access my dev 
environment, I'm not able to update the patch to change the package name. Maybe 
I can get it done in a follow-up JIRA?

> Raw Erasure Coder API for concrete encoding and decoding
> 
>
> Key: HADOOP-11514
> URL: https://issues.apache.org/jira/browse/HADOOP-11514
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-EC
>
> Attachments: HDFS-7353-v1.patch, HDFS-7353-v2.patch, 
> HDFS-7353-v3.patch, HDFS-7353-v4.patch, HDFS-7353-v5.patch, 
> HDFS-7353-v6.patch, HDFS-7353-v7.patch
>
>
> This is to abstract and define a raw erasure coder API across different 
> coding algorithms such as RS and XOR. Such an API can be implemented by 
> utilizing various libraries, such as the Intel ISA library and the Jerasure 
> library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11514) Raw Erasure Coder API for concrete encoding and decoding

2015-01-27 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294321#comment-14294321
 ] 

Zhe Zhang commented on HADOOP-11514:


I agree; let's commit after changing the package name to {{erasurecode}}.

> Raw Erasure Coder API for concrete encoding and decoding
> 
>
> Key: HADOOP-11514
> URL: https://issues.apache.org/jira/browse/HADOOP-11514
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-EC
>
> Attachments: HDFS-7353-v1.patch, HDFS-7353-v2.patch, 
> HDFS-7353-v3.patch, HDFS-7353-v4.patch, HDFS-7353-v5.patch, 
> HDFS-7353-v6.patch, HDFS-7353-v7.patch
>
>
> This is to abstract and define a raw erasure coder API across different 
> coding algorithms such as RS and XOR. Such an API can be implemented by 
> utilizing various libraries, such as the Intel ISA library and the Jerasure 
> library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10626) Limit Returning Attributes for LDAP search

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294271#comment-14294271
 ] 

Hudson commented on HADOOP-10626:
-

FAILURE: Integrated in Hadoop-trunk-Commit #6942 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6942/])
HADOOP-10626. Limit Returning Attributes for LDAP search. Contributed by Jason 
Hubbard. (atm: rev 8bf6f0b70396e8f2d3b37e6da194b19f357e846a)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Limit Returning Attributes for LDAP search
> --
>
> Key: HADOOP-10626
> URL: https://issues.apache.org/jira/browse/HADOOP-10626
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.3.0
>Reporter: Jason Hubbard
>Assignee: Jason Hubbard
>  Labels: easyfix, newbie, performance
> Fix For: 2.7.0
>
> Attachments: HADOOP-10626.patch, HADOOP-10626.patch
>
>
> When using Hadoop LDAP group mappings in an enterprise environment, searching 
> groups and returning all of their members can take a long time and cause a 
> timeout, in which case not all groups are returned for a user.  Because the 
> first search only looks up the user dn and the second search retrieves the 
> group member attribute, we only need to return the group member attribute on 
> the search, which speeds it up.
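A sketch of the change in JNDI terms (the attribute name is illustrative; the 
real one comes from the LDAP group mapping configuration): restrict the 
returned attributes of the group search to just the member attribute:
{code}
import javax.naming.directory.SearchControls;

public class LdapGroupSearchSketch {
  // Return only the group member attribute instead of every attribute of
  // potentially very large group entries ("member" is illustrative).
  static SearchControls groupSearchControls() {
    SearchControls controls = new SearchControls();
    controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
    controls.setReturningAttributes(new String[] { "member" });
    return controls;
  }
}
{code}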



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10626) Limit Returning Attributes for LDAP search

2015-01-27 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-10626:

   Resolution: Fixed
Fix Version/s: 2.7.0
   Status: Resolved  (was: Patch Available)

I've just committed this to trunk and branch-2.

Thanks a lot for the contribution, Jason.

> Limit Returning Attributes for LDAP search
> --
>
> Key: HADOOP-10626
> URL: https://issues.apache.org/jira/browse/HADOOP-10626
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.3.0
>Reporter: Jason Hubbard
>Assignee: Jason Hubbard
>  Labels: easyfix, newbie, performance
> Fix For: 2.7.0
>
> Attachments: HADOOP-10626.patch, HADOOP-10626.patch
>
>
> When using Hadoop LDAP group mappings in an enterprise environment, searching 
> groups and returning all of their members can take a long time and cause a 
> timeout, in which case not all groups are returned for a user.  Because the 
> first search only looks up the user dn and the second search retrieves the 
> group member attribute, we only need to return the group member attribute on 
> the search, which speeds it up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11514) Raw Erasure Coder API for concrete encoding and decoding

2015-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294230#comment-14294230
 ] 

Hadoop QA commented on HADOOP-11514:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12694748/HDFS-7353-v7.patch
  against trunk revision 1e2d98a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5510//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5510//console

This message is automatically generated.

> Raw Erasure Coder API for concrete encoding and decoding
> 
>
> Key: HADOOP-11514
> URL: https://issues.apache.org/jira/browse/HADOOP-11514
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-EC
>
> Attachments: HDFS-7353-v1.patch, HDFS-7353-v2.patch, 
> HDFS-7353-v3.patch, HDFS-7353-v4.patch, HDFS-7353-v5.patch, 
> HDFS-7353-v6.patch, HDFS-7353-v7.patch
>
>
> This is to abstract and define a raw erasure coder API across different 
> coding algorithms such as RS and XOR. Such an API can be implemented by 
> utilizing various libraries, such as the Intel ISA library and the Jerasure 
> library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11514) Raw Erasure Coder API for concrete encoding and decoding

2015-01-27 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294152#comment-14294152
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-11514:
--

Oops, we still need to change the package name before committing the patch.

> Raw Erasure Coder API for concrete encoding and decoding
> 
>
> Key: HADOOP-11514
> URL: https://issues.apache.org/jira/browse/HADOOP-11514
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-EC
>
> Attachments: HDFS-7353-v1.patch, HDFS-7353-v2.patch, 
> HDFS-7353-v3.patch, HDFS-7353-v4.patch, HDFS-7353-v5.patch, 
> HDFS-7353-v6.patch, HDFS-7353-v7.patch
>
>
> This is to abstract and define a raw erasure coder API across different 
> coding algorithms such as RS and XOR. Such an API can be implemented by 
> utilizing various libraries, such as the Intel ISA library and the Jerasure 
> library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11264) Common side changes for HDFS Erasure coding support

2015-01-27 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-11264:
-
Component/s: io

> Common side changes for HDFS Erasure coding support
> ---
>
> Key: HADOOP-11264
> URL: https://issues.apache.org/jira/browse/HADOOP-11264
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: io
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>
> This is umbrella JIRA for tracking the common side changes for HDFS Erasure 
> Coding support.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11514) Raw Erasure Coder API for concrete encoding and decoding

2015-01-27 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294136#comment-14294136
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-11514:
--

I believe the eclipse error has nothing to do with the patch.  Will commit it 
soon.

> Raw Erasure Coder API for concrete encoding and decoding
> 
>
> Key: HADOOP-11514
> URL: https://issues.apache.org/jira/browse/HADOOP-11514
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-EC
>
> Attachments: HDFS-7353-v1.patch, HDFS-7353-v2.patch, 
> HDFS-7353-v3.patch, HDFS-7353-v4.patch, HDFS-7353-v5.patch, 
> HDFS-7353-v6.patch, HDFS-7353-v7.patch
>
>
> This is to abstract and define a raw erasure coder API across different 
> coding algorithms such as RS and XOR. Such an API can be implemented by 
> utilizing various libraries, such as the Intel ISA library and the Jerasure 
> library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11514) Raw Erasure Coder API for concrete encoding and decoding

2015-01-27 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-11514:
-
Issue Type: Sub-task  (was: New Feature)
Parent: HADOOP-11264

> Raw Erasure Coder API for concrete encoding and decoding
> 
>
> Key: HADOOP-11514
> URL: https://issues.apache.org/jira/browse/HADOOP-11514
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-EC
>
> Attachments: HDFS-7353-v1.patch, HDFS-7353-v2.patch, 
> HDFS-7353-v3.patch, HDFS-7353-v4.patch, HDFS-7353-v5.patch, 
> HDFS-7353-v6.patch, HDFS-7353-v7.patch
>
>
> This is to abstract and define a raw erasure coder API across different 
> coding algorithms such as RS and XOR. Such an API can be implemented by 
> utilizing various libraries, such as the Intel ISA library and the Jerasure 
> library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HADOOP-11514) Raw Erasure Coder API for concrete encoding and decoding

2015-01-27 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze moved HDFS-7353 to HADOOP-11514:


Fix Version/s: (was: HDFS-EC)
   HDFS-EC
  Key: HADOOP-11514  (was: HDFS-7353)
  Project: Hadoop Common  (was: Hadoop HDFS)

> Raw Erasure Coder API for concrete encoding and decoding
> 
>
> Key: HADOOP-11514
> URL: https://issues.apache.org/jira/browse/HADOOP-11514
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-EC
>
> Attachments: HDFS-7353-v1.patch, HDFS-7353-v2.patch, 
> HDFS-7353-v3.patch, HDFS-7353-v4.patch, HDFS-7353-v5.patch, 
> HDFS-7353-v6.patch, HDFS-7353-v7.patch
>
>
> This is to abstract and define a raw erasure coder API across different 
> coding algorithms such as RS and XOR. Such an API can be implemented by 
> utilizing various libraries, such as the Intel ISA library and the Jerasure 
> library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8934) Shell command ls should include sort options

2015-01-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294073#comment-14294073
 ] 

Allen Wittenauer commented on HADOOP-8934:
--

[~jonallen], if you want to rebase this for trunk, let's get it committed.

Thanks!

> Shell command ls should include sort options
> 
>
> Key: HADOOP-8934
> URL: https://issues.apache.org/jira/browse/HADOOP-8934
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Jonathan Allen
>Assignee: Jonathan Allen
>Priority: Minor
> Attachments: HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch, 
> HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch
>
>
> The shell command ls should include options to sort the output similar to the 
> Unix ls command.  The following options seem appropriate:
> -t : sort by modification time
> -S : sort by file size
> -r : reverse the sort order
> -u : use access time rather than modification time for sort and display
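A sketch of the comparators such options could map to (hypothetical wiring, 
using the real {{FileStatus}} accessors; the attached patches define the actual 
behavior):
{code}
import java.util.Comparator;
import org.apache.hadoop.fs.FileStatus;

public class LsSortSketch {
  static final Comparator<FileStatus> BY_MTIME =     // -t
      Comparator.comparingLong(FileStatus::getModificationTime).reversed();
  static final Comparator<FileStatus> BY_SIZE =      // -S
      Comparator.comparingLong(FileStatus::getLen).reversed();
  static final Comparator<FileStatus> BY_ATIME =     // -u
      Comparator.comparingLong(FileStatus::getAccessTime).reversed();

  // -r reverses whichever comparator was selected.
  static Comparator<FileStatus> apply(Comparator<FileStatus> c,
                                      boolean reverse) {
    return reverse ? c.reversed() : c;
  }
}
{code}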



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8934) Shell command ls should include sort options

2015-01-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8934:
-
Status: Open  (was: Patch Available)

> Shell command ls should include sort options
> 
>
> Key: HADOOP-8934
> URL: https://issues.apache.org/jira/browse/HADOOP-8934
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Jonathan Allen
>Assignee: Jonathan Allen
>Priority: Minor
> Attachments: HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch, 
> HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch
>
>
> The shell command ls should include options to sort the output similar to the 
> Unix ls command.  The following options seem appropriate:
> -t : sort by modification time
> -S : sort by file size
> -r : reverse the sort order
> -u : use access time rather than modification time for sort and display



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11442) hadoop-azure: Create test jar

2015-01-27 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294051#comment-14294051
 ] 

Chris Nauroth commented on HADOOP-11442:


Hi, [~shkhande].  I have just 2 minor comments:
# Jenkins was not able to apply the patch because of line ending differences.  
If you convert the patch to Linux line endings, then Jenkins will be able to 
apply it.
# It appears there is an incorrect indentation on .

> hadoop-azure: Create test jar
> -
>
> Key: HADOOP-11442
> URL: https://issues.apache.org/jira/browse/HADOOP-11442
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
> Environment: windows, azure
>Reporter: shashank
>Assignee: Chris Nauroth
> Attachments: HADOOP-11442.patch
>
>
> The pom of the hadoop-azure project needs to be modified to create a test jar 
> as well. This test jar is required to run the test cases of WindowsAzureTableSink



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-9636) UNIX like sort options for ls shell command

2015-01-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-9636.
--
Resolution: Duplicate

OK, I see what happened. There is *yet another* version of this JIRA but it 
wasn't linked appropriately. *sigh*

Closing this one as a dupe too.

> UNIX like sort options for ls shell command
> ---
>
> Key: HADOOP-9636
> URL: https://issues.apache.org/jira/browse/HADOOP-9636
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Varun Dhussa
>Priority: Minor
> Attachments: HADOOP-9636-001.patch, HADOOP-9636-02.patch
>
>
> Add support for Unix ls-like sort options in fs -ls:
> -t : sort by modification time
> -S : sort by file size
> -r : reverse the sort order
> -u : sort by access time



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9636) UNIX like sort options for ls shell command

2015-01-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9636:
-
Attachment: HADOOP-9636-02.patch

-02:
* This is just a rebase of the previous patch.

> UNIX like sort options for ls shell command
> ---
>
> Key: HADOOP-9636
> URL: https://issues.apache.org/jira/browse/HADOOP-9636
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Varun Dhussa
>Priority: Minor
> Attachments: HADOOP-9636-001.patch, HADOOP-9636-02.patch
>
>
> Add support for Unix ls-like sort options in fs -ls:
> -t : sort by modification time
> -S : sort by file size
> -r : reverse the sort order
> -u : sort by access time



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-9335) Including UNIX like sort options for ls shell command

2015-01-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-9335.
--
Resolution: Duplicate

Closing this as a dupe of HADOOP-9636.  While this JIRA is older, the other one 
has a newer patch that likely just needs a rebase to get committed.

> Including UNIX like sort options for ls shell command
> -
>
> Key: HADOOP-9335
> URL: https://issues.apache.org/jira/browse/HADOOP-9335
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.20.2
>Reporter: Arjun K R
>Priority: Minor
>  Labels: fs, shell
> Attachments: HADOOP_9335.patch
>
>
> Currently the ls shell command does not support sort options. The ls shell 
> command should include the following Unix-like sort options:
> -t : sort by modification time
> -S : sort by file size
> -r : reverse the sort order
> -u : sort by access time



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HADOOP-9636) UNIX like sort options for ls shell command

2015-01-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reopened HADOOP-9636:
--

I'm re-opening this one, since the patch is more recent.  It just needs a 
rebase.

> UNIX like sort options for ls shell command
> ---
>
> Key: HADOOP-9636
> URL: https://issues.apache.org/jira/browse/HADOOP-9636
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Varun Dhussa
>Priority: Minor
> Attachments: HADOOP-9636-001.patch
>
>
> Add support for Unix ls-like sort options in fs -ls:
> -t : sort by modification time
> -S : sort by file size
> -r : reverse the sort order
> -u : sort by access time



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9335) Including UNIX like sort options for ls shell command

2015-01-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9335:
-
Fix Version/s: (was: 0.20.2)

> Including UNIX like sort options for ls shell command
> -
>
> Key: HADOOP-9335
> URL: https://issues.apache.org/jira/browse/HADOOP-9335
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.20.2
>Reporter: Arjun K R
>Priority: Minor
>  Labels: fs, shell
> Attachments: HADOOP_9335.patch
>
>
> Currently the ls shell command does not support sort options. The ls shell 
> command should include the following Unix-like sort options:
> -t : sort by modification time
> -S : sort by file size
> -r : reverse the sort order
> -u : sort by access time



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-7943) DFS shell get/copy gives weird errors when permissions are wrong with directories

2015-01-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-7943.
--
Resolution: Won't Fix

Daryn reports this is fixed in trunk and it's unlikely we'll ever do another 
1.x release, given the last one was over a year ago at this point...

Closing as won't fix.

> DFS shell get/copy gives weird errors when permissions are wrong with 
> directories
> -
>
> Key: HADOOP-7943
> URL: https://issues.apache.org/jira/browse/HADOOP-7943
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.205.0
>Reporter: Ben West
>Priority: Minor
>  Labels: hdfs, shell
> Attachments: hadoop-7943-1.0.0.patch, hadoop-7943-1.0.0v2.patch, 
> hadoop-7943-1.0v3.patch, hadoop-7943-1.0v4.patch, hadoop-7943.patch, 
> hadoop-7943.patch
>
>
> Let /foo be a *directory* in HDFS (issue does not occur with files) and /bar 
> be a local dir. Do something like:
> {code}
> $ chmod u-w /bar
> $ hadoop -get /foo/myfile /bar
> copyToLocal: Permission denied  # correctly tells me permission is denied
> $ hadoop -get /foo /bar
> copyToLocal: null   
> $ hadoop -get /foo/ /bar
> copyToLocal: No such file or directory
> {code}
> I've been banging my head for a bit trying to figure out why hadoop thinks my 
> directory doesn't exist, but it turns out the problem was just with my local 
> permissions. The "Permission denied" error would've been a lot nicer to get.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11441) Hadoop-azure: Change few methods scope to public

2015-01-27 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294030#comment-14294030
 ] 

Chris Nauroth commented on HADOOP-11441:


Hello, [~shkhande].  Would you also please add the {{VisibleForTesting}} 
annotation to {{AzureNativeFileSystemStore#getAccountKeyFromConfiguration}}?  
We use this annotation to indicate we are relaxing the visibility of a method 
only to facilitate testing.  You can see examples of this in the same file.
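For reference, a minimal illustration of the convention (the method body here 
is a placeholder, not the hadoop-azure code):
{code}
import com.google.common.annotations.VisibleForTesting;

class VisibilityExample {
  // Widened from private to package-private solely so tests can call it;
  // the annotation documents that intent for readers and tools.
  @VisibleForTesting
  static String getAccountKeyFromConfiguration(String accountName) {
    return "placeholder-key-for-" + accountName;
  }
}
{code}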

> Hadoop-azure: Change few methods scope to public
> 
>
> Key: HADOOP-11441
> URL: https://issues.apache.org/jira/browse/HADOOP-11441
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: shashank
>Assignee: Chris Nauroth
> Attachments: HADOOP-11441.patch, HADOOP-11441.patch
>
>
> The TestWindowsAzureTableSinkSetup test cases depend on hadoop-azure classes; 
> however, a few methods in the hadoop-azure classes have default access and 
> are not visible outside the package:
> AzureBlobStorageTestAccount.createTestAccount()
> AzureNativeFileSystemStore.getAccountKeyFromConfiguration()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-7943) DFS shell get/copy gives weird errors when permissions are wrong with directories

2015-01-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-7943:
-
Status: Open  (was: Patch Available)

> DFS shell get/copy gives weird errors when permissions are wrong with 
> directories
> -
>
> Key: HADOOP-7943
> URL: https://issues.apache.org/jira/browse/HADOOP-7943
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.205.0
>Reporter: Ben West
>Priority: Minor
>  Labels: hdfs, shell
> Attachments: hadoop-7943-1.0.0.patch, hadoop-7943-1.0.0v2.patch, 
> hadoop-7943-1.0v3.patch, hadoop-7943-1.0v4.patch, hadoop-7943.patch, 
> hadoop-7943.patch
>
>
> Let /foo be a *directory* in HDFS (issue does not occur with files) and /bar 
> be a local dir. Do something like:
> {code}
> $ chmod u-w /bar
> $ hadoop -get /foo/myfile /bar
> copyToLocal: Permission denied  # correctly tells me permission is denied
> $ hadoop -get /foo /bar
> copyToLocal: null   
> $ hadoop -get /foo/ /bar
> copyToLocal: No such file or directory
> {code}
> I've been banging my head for a bit trying to figure out why hadoop thinks my 
> directory doesn't exist, but it turns out the problem was just with my local 
> permissions. The "Permission denied" error would've been a lot nicer to get.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11513) Artifact errors with Maven build

2015-01-27 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-11513:
---
Summary: Artifact errors with Maven build  (was: Artifact errors with Maven 
build on Linux)

> Artifact errors with Maven build
> 
>
> Key: HADOOP-11513
> URL: https://issues.apache.org/jira/browse/HADOOP-11513
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.0
>Reporter: Arpit Agarwal
>
> I recently started getting the following errors with _mvn -q clean compile 
> install_
> {code}
> [ERROR] Artifact: org.xerial.snappy:snappy-java:jar:1.0.4.1 has no file.
> [ERROR] Artifact: xerces:xercesImpl:jar:2.9.1 has no file.
> [ERROR] Artifact: xml-apis:xml-apis:jar:1.3.04 has no file.
> [ERROR] Artifact: xmlenc:xmlenc:jar:0.52 has no file.
> [ERROR] Artifact: org.xerial.snappy:snappy-java:jar:1.0.4.1 has no file.
> [ERROR] Artifact: xerces:xercesImpl:jar:2.9.1 has no file.
> [ERROR] Artifact: xml-apis:xml-apis:jar:1.3.04 has no file.
> [ERROR] Artifact: xmlenc:xmlenc:jar:0.52 has no file.
> {code}
> mvn --version reports:
> {code}
> Apache Maven 3.2.5 (12a6b3acb947671f09b81f49094c53f426d8cea1; 
> 2014-12-14T09:29:23-08:00)
> Maven home: /home/vagrant/usr/share/maven
> Java version: 1.7.0_65, vendor: Oracle Corporation
> Java home: /usr/lib/jvm/java-7-openjdk-amd64/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "linux", version: "3.13.0-24-generic", arch: "amd64", family: "unix"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11513) Artifact errors with Maven build

2015-01-27 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-11513:
---
Description: 
I recently started getting the following errors with _mvn -q clean compile 
install_ on Linux and OS X.

{code}
[ERROR] Artifact: org.xerial.snappy:snappy-java:jar:1.0.4.1 has no file.
[ERROR] Artifact: xerces:xercesImpl:jar:2.9.1 has no file.
[ERROR] Artifact: xml-apis:xml-apis:jar:1.3.04 has no file.
[ERROR] Artifact: xmlenc:xmlenc:jar:0.52 has no file.
[ERROR] Artifact: org.xerial.snappy:snappy-java:jar:1.0.4.1 has no file.
[ERROR] Artifact: xerces:xercesImpl:jar:2.9.1 has no file.
[ERROR] Artifact: xml-apis:xml-apis:jar:1.3.04 has no file.
[ERROR] Artifact: xmlenc:xmlenc:jar:0.52 has no file.
{code}

_mvn --version_ on Linux reports:
{code}
Apache Maven 3.2.5 (12a6b3acb947671f09b81f49094c53f426d8cea1; 
2014-12-14T09:29:23-08:00)
Maven home: /home/vagrant/usr/share/maven
Java version: 1.7.0_65, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-7-openjdk-amd64/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.13.0-24-generic", arch: "amd64", family: "unix"
{code}

  was:
I recently started getting the following errors with _mvn -q clean compile 
install_

{code}
[ERROR] Artifact: org.xerial.snappy:snappy-java:jar:1.0.4.1 has no file.
[ERROR] Artifact: xerces:xercesImpl:jar:2.9.1 has no file.
[ERROR] Artifact: xml-apis:xml-apis:jar:1.3.04 has no file.
[ERROR] Artifact: xmlenc:xmlenc:jar:0.52 has no file.
[ERROR] Artifact: org.xerial.snappy:snappy-java:jar:1.0.4.1 has no file.
[ERROR] Artifact: xerces:xercesImpl:jar:2.9.1 has no file.
[ERROR] Artifact: xml-apis:xml-apis:jar:1.3.04 has no file.
[ERROR] Artifact: xmlenc:xmlenc:jar:0.52 has no file.
{code}

mvn --version reports:
{code}
Apache Maven 3.2.5 (12a6b3acb947671f09b81f49094c53f426d8cea1; 
2014-12-14T09:29:23-08:00)
Maven home: /home/vagrant/usr/share/maven
Java version: 1.7.0_65, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-7-openjdk-amd64/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.13.0-24-generic", arch: "amd64", family: "unix"
{code}


> Artifact errors with Maven build
> 
>
> Key: HADOOP-11513
> URL: https://issues.apache.org/jira/browse/HADOOP-11513
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.0
>Reporter: Arpit Agarwal
>
> I recently started getting the following errors with _mvn -q clean compile 
> install_ on Linux and OS X.
> {code}
> [ERROR] Artifact: org.xerial.snappy:snappy-java:jar:1.0.4.1 has no file.
> [ERROR] Artifact: xerces:xercesImpl:jar:2.9.1 has no file.
> [ERROR] Artifact: xml-apis:xml-apis:jar:1.3.04 has no file.
> [ERROR] Artifact: xmlenc:xmlenc:jar:0.52 has no file.
> [ERROR] Artifact: org.xerial.snappy:snappy-java:jar:1.0.4.1 has no file.
> [ERROR] Artifact: xerces:xercesImpl:jar:2.9.1 has no file.
> [ERROR] Artifact: xml-apis:xml-apis:jar:1.3.04 has no file.
> [ERROR] Artifact: xmlenc:xmlenc:jar:0.52 has no file.
> {code}
> _mvn --version_ on Linux reports:
> {code}
> Apache Maven 3.2.5 (12a6b3acb947671f09b81f49094c53f426d8cea1; 
> 2014-12-14T09:29:23-08:00)
> Maven home: /home/vagrant/usr/share/maven
> Java version: 1.7.0_65, vendor: Oracle Corporation
> Java home: /usr/lib/jvm/java-7-openjdk-amd64/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "linux", version: "3.13.0-24-generic", arch: "amd64", family: "unix"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-4297) Enable Java assertions when running tests

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294002#comment-14294002
 ] 

Hudson commented on HADOOP-4297:


FAILURE: Integrated in Hadoop-trunk-Commit #6939 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6939/])
HADOOP-4297. Enable Java assertions when running tests. Contributed by Tsz Wo 
Nicholas Sze. (wheat9: rev 543064e89d2ff2d7eb7727664c3f3aa8e9b5bdef)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestJUnitSetup.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Enable Java assertions when running tests
> -
>
> Key: HADOOP-4297
> URL: https://issues.apache.org/jira/browse/HADOOP-4297
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.19.0, 0.20.0
>Reporter: Yoram Kulbak
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.7.0
>
> Attachments: HADOOP-4297.patch, HADOOP-4297.patch, 
> c4297_20140719.patch
>
>
> A suggestion to enable Java assertions in the project's build xml when 
> running tests. I think this would improve the build quality.
> To enable assertions add the following snippets to the JUnit tasks in 
> build.xml:
> {code}
> <assertions>
>   <enable/>
> </assertions>
> {code}
> For example:
> {code}
> <junit ...>
>   ...
>   <assertions>
>     <enable/>
>   </assertions>
> </junit>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11513) Artifact errors with Maven build on Linux

2015-01-27 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-11513:
--

 Summary: Artifact errors with Maven build on Linux
 Key: HADOOP-11513
 URL: https://issues.apache.org/jira/browse/HADOOP-11513
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.0
Reporter: Arpit Agarwal


I recently started getting the following errors with _mvn -q clean compile 
install_

{code}
[ERROR] Artifact: org.xerial.snappy:snappy-java:jar:1.0.4.1 has no file.
[ERROR] Artifact: xerces:xercesImpl:jar:2.9.1 has no file.
[ERROR] Artifact: xml-apis:xml-apis:jar:1.3.04 has no file.
[ERROR] Artifact: xmlenc:xmlenc:jar:0.52 has no file.
[ERROR] Artifact: org.xerial.snappy:snappy-java:jar:1.0.4.1 has no file.
[ERROR] Artifact: xerces:xercesImpl:jar:2.9.1 has no file.
[ERROR] Artifact: xml-apis:xml-apis:jar:1.3.04 has no file.
[ERROR] Artifact: xmlenc:xmlenc:jar:0.52 has no file.
{code}

mvn --version reports:
{code}
Apache Maven 3.2.5 (12a6b3acb947671f09b81f49094c53f426d8cea1; 
2014-12-14T09:29:23-08:00)
Maven home: /home/vagrant/usr/share/maven
Java version: 1.7.0_65, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-7-openjdk-amd64/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.13.0-24-generic", arch: "amd64", family: "unix"
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-7427) syntax error in smart-apply-patch.sh

2015-01-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-7427.
--
Resolution: Cannot Reproduce

Closing as cannot reproduce.

> syntax error in smart-apply-patch.sh 
> -
>
> Key: HADOOP-7427
> URL: https://issues.apache.org/jira/browse/HADOOP-7427
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Tsz Wo Nicholas Sze
>
> {noformat}
>  [exec] Finished build.
>  [exec] hdfs/src/test/bin/smart-apply-patch.sh: line 60: syntax error in 
> conditional expression: unexpected token `('
> BUILD FAILED
> hdfs/build.xml:1595: exec returned: 1
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-6858) Enable rotateable JVM garbage collection logs for Hadoop daemons

2015-01-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6858.
--
Resolution: Later

Closing this as 'Later'.

The patch definitely won't apply in modern versions. Plus, there is now enough 
shell infrastructure in place to do this sort of thing without changing the 
base shell code.

> Enable rotateable JVM garbage collection logs for Hadoop daemons
> 
>
> Key: HADOOP-6858
> URL: https://issues.apache.org/jira/browse/HADOOP-6858
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Reporter: Andrew Ryan
> Attachments: HADOOP-6858.patch
>
>
> The purpose of this enhancement is to make it easier to collect garbage 
> collection logs and ensure that they persist across restarts in the same way 
> that the standard output files of Hadoop daemon JVMs currently do.
> Garbage collection logs are a vital debugging tool for administrators and 
> developers. In our production environments, at some point or another, every 
> single type of Hadoop daemon has OOM'ed or experienced other significant 
> issues related to GC and/or lack of heap memory. For the longest time, we 
> have put in garbage collection logs in our HADOOP_NAMENODE_OPTS, 
> HADOOP_JOBTRACKER_OPTS, etc. by using options like "-XX:+PrintGCDateStamps 
> -XX:+PrintGCDetails -Xloggc:$HADOOP_LOG_DIR/jobtracker.gc.log".
> Unfortunately, these logs don't survive a restart of the node, so if a node 
> OOM's and then is restarted automatically, or manually by someone who is 
> unaware, we lose the GC logs forever. We also have to manually add GC log 
> options to each daemon. This patch:
> 1) Creates a single, optional, off by default, parameter for specifying GC 
> logging.
> 2) If that parameter is set, automatically enables GC logging for all daemons 
> in the cluster. The parameter is flexible enough to allow for the different 
> ways various vendors' JVMs require garbage collection logging to be 
> specified. 
> 3) If GC logging is on, ensures that the GC log files for each daemon are 
> rotated with up to 5 copies kept, same as the .out files currently.
> We are currently running a variation of this patch in our 0.20 install. This 
> patch actually includes changes to common, mapred, and hdfs, so it obviously 
> cannot be applied as-is, but is included here for review and comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-4297) Enable Java assertions when running tests

2015-01-27 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-4297:
---
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2.

> Enable Java assertions when running tests
> -
>
> Key: HADOOP-4297
> URL: https://issues.apache.org/jira/browse/HADOOP-4297
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.19.0, 0.20.0
>Reporter: Yoram Kulbak
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.7.0
>
> Attachments: HADOOP-4297.patch, HADOOP-4297.patch, 
> c4297_20140719.patch
>
>
> A suggestion to enable Java assertions in the project's build xml when 
> running tests. I think this would improve the build quality.
> To enable assertions add the following snippets to the JUnit tasks in 
> build.xml:
> {code}
> <assertions>
>   <enable/>
> </assertions>
> {code}
> For example:
> {code}
> <junit ...>
>   ...
>   <assertions>
>     <enable/>
>   </assertions>
> </junit>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11316) "mvn package -Pdist,docs -DskipTests -Dtar" fails because of non-ascii characters

2015-01-27 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA reassigned HADOOP-11316:
---

Assignee: Tsuyoshi OZAWA

> "mvn package -Pdist,docs -DskipTests -Dtar" fails because of non-ascii 
> characters
> -
>
> Key: HADOOP-11316
> URL: https://issues.apache.org/jira/browse/HADOOP-11316
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
>Priority: Blocker
>
> The command fails because the following files include non-ascii characters.
> * ComparableVersion.java
> * CommonConfigurationKeysPublic.java
> * ComparableVersion.java
> {code}
>   [javadoc] 
> /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
>  error: unmappable character for encoding ASCII
>   [javadoc] //author mailto:hbout...@apache.org";>Herv?? 
> Boutemy
>   [javadoc]   ^
>   [javadoc] 
> /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
>  error: unmappable character for encoding ASCII
>   [javadoc] //author mailto:hbout...@apache.org";>Herv?? 
> Boutemy
> {code}
> {code}
>   [javadoc] 
> /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java:318:
>  error: unmappable character for encoding ASCII
>   [javadoc]   //  

[jira] [Commented] (HADOOP-11316) "mvn package -Pdist,docs -DskipTests -Dtar" fails because of non-ascii characters

2015-01-27 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293983#comment-14293983
 ] 

Tsuyoshi OZAWA commented on HADOOP-11316:
-

Yes, let me fix this problem.

> "mvn package -Pdist,docs -DskipTests -Dtar" fails because of non-ascii 
> characters
> -
>
> Key: HADOOP-11316
> URL: https://issues.apache.org/jira/browse/HADOOP-11316
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi OZAWA
>Priority: Blocker
>
> The command fails because the following files include non-ascii characters.
> * ComparableVersion.java
> * CommonConfigurationKeysPublic.java
> * ComparableVersion.java
> {code}
>   [javadoc] 
> /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
>  error: unmappable character for encoding ASCII
>   [javadoc] //author mailto:hbout...@apache.org";>Herv?? 
> Boutemy
>   [javadoc]   ^
>   [javadoc] 
> /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
>  error: unmappable character for encoding ASCII
>   [javadoc] //author mailto:hbout...@apache.org";>Herv?? 
> Boutemy
> {code}
> {code}
>   [javadoc] 
> /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java:318:
>  error: unmappable character for encoding ASCII
>   [javadoc]   //  

[jira] [Updated] (HADOOP-6858) Enable rotateable JVM garbage collection logs for Hadoop daemons

2015-01-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-6858:
-
Affects Version/s: (was: 0.22.0)

> Enable rotateable JVM garbage collection logs for Hadoop daemons
> 
>
> Key: HADOOP-6858
> URL: https://issues.apache.org/jira/browse/HADOOP-6858
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Reporter: Andrew Ryan
> Attachments: HADOOP-6858.patch
>
>
> The purpose of this enhancement is to make it easier to collect garbage 
> collection logs and ensure that they persist across restarts in the same way 
> that the standard output files of Hadoop daemon JVMs currently do.
> Garbage collection logs are a vital debugging tool for administrators and 
> developers. In our production environments, at some point or another, every 
> single type of Hadoop daemon has OOM'ed or experienced other significant 
> issues related to GC and/or lack of heap memory. For the longest time, we 
> have put in garbage collection logs in our HADOOP_NAMENODE_OPTS, 
> HADOOP_JOBTRACKER_OPTS, etc. by using options like "-XX:+PrintGCDateStamps 
> -XX:+PrintGCDetails -Xloggc:$HADOOP_LOG_DIR/jobtracker.gc.log".
> Unfortunately, these logs don't survive a restart of the node, so if a node 
> OOM's and then is restarted automatically, or manually by someone who is 
> unaware, we lose the GC logs forever. We also have to manually add GC log 
> options to each daemon. This patch:
> 1) Creates a single, optional, off by default, parameter for specifying GC 
> logging.
> 2) If that parameter is set, automatically enables GC logging for all daemons 
> in the cluster. The parameter is flexible enough to allow for the different 
> ways various vendors' JVMs require garbage collection logging to be 
> specified. 
> 3) If GC logging is on, ensures that the GC log files for each daemon are 
> rotated with up to 5 copies kept, same as the .out files currently.
> We are currently running a variation of this patch in our 0.20 install. This 
> patch actually includes changes to common, mapred, and hdfs, so it obviously 
> cannot be applied as-is, but is included here for review and comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-7569) Remove common start-all.sh

2015-01-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-7569:
-
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

Closing as won't fix.  start-all is still relevant for non-secure installs.

> Remove common start-all.sh
> --
>
> Key: HADOOP-7569
> URL: https://issues.apache.org/jira/browse/HADOOP-7569
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Eli Collins
>Assignee: Harsh J
>Priority: Minor
> Attachments: HADOOP-7569.patch
>
>
> MAPREDUCE-2736 removes start-mapred.sh. We should either update the call to 
> start-mapred to point at hadoop-yarn/bin/start-all.sh, or just remove the 
> script since it's deprecated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-4297) Enable Java assertions when running tests

2015-01-27 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293975#comment-14293975
 ] 

Haohui Mai commented on HADOOP-4297:


+1. I'll commit it shortly.

> Enable Java assertions when running tests
> -
>
> Key: HADOOP-4297
> URL: https://issues.apache.org/jira/browse/HADOOP-4297
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.19.0, 0.20.0
>Reporter: Yoram Kulbak
>Assignee: Tsz Wo Nicholas Sze
> Attachments: HADOOP-4297.patch, HADOOP-4297.patch, 
> c4297_20140719.patch
>
>
> A suggestion to enable Java assertions in the project's build xml when 
> running tests. I think this would improve the build quality.
> To enable assertions add the following snippets to the JUnit tasks in 
> build.xml:
> {code}
> <assertions>
>   <enable/>
> </assertions>
> {code}
> For example:
> {code}
> <junit ...>
>   ...
>   <assertions>
>     <enable/>
>   </assertions>
> </junit>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10290) Surefire steals focus on MacOS

2015-01-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10290:
--
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

If JDK7 fixes this, then I'm going to close this as Won't Fix.

> Surefire steals focus on MacOS
> --
>
> Key: HADOOP-10290
> URL: https://issues.apache.org/jira/browse/HADOOP-10290
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Laurent Goujon
> Attachments: hadoop-10290.patch, hadoop-10290.patch
>
>
> When running tests on MacOS X, surefire plugin keeps stealing focus from 
> current application.
> This can be avoided by adding {noformat}-Djava.awt.headless=true{noformat} to 
> the surefire commandline



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11346) Rewrite sls/rumen to use new shell framework

2015-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293958#comment-14293958
 ] 

Hadoop QA commented on HADOOP-11346:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12694814/HADOOP-11346-02.patch
  against trunk revision f56da3c.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 13 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-sls.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5509//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5509//artifact/patchprocess/newPatchFindbugsWarningshadoop-sls.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5509//console

This message is automatically generated.

> Rewrite sls/rumen to use new shell framework
> 
>
> Key: HADOOP-11346
> URL: https://issues.apache.org/jira/browse/HADOOP-11346
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts, tools
>Reporter: Allen Wittenauer
>Assignee: John Smith
> Attachments: HADOOP-11346-01.patch, HADOOP-11346-02.patch, 
> HADOOP-11346.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11346) Rewrite sls/rumen to use new shell framework

2015-01-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293931#comment-14293931
 ] 

Allen Wittenauer commented on HADOOP-11346:
---

It'd be great if you also cygwin'd the paths so that this has a better chance 
of working on Windows.  See the code in hadoop-functions.sh finalize area.

> Rewrite sls/rumen to use new shell framework
> 
>
> Key: HADOOP-11346
> URL: https://issues.apache.org/jira/browse/HADOOP-11346
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts, tools
>Reporter: Allen Wittenauer
>Assignee: John Smith
> Attachments: HADOOP-11346-01.patch, HADOOP-11346-02.patch, 
> HADOOP-11346.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11460) Deprecate shell vars

2015-01-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293924#comment-14293924
 ] 

Allen Wittenauer commented on HADOOP-11460:
---

Yes.  As usual, feel free to update the patch. :D

> Deprecate shell vars
> 
>
> Key: HADOOP-11460
> URL: https://issues.apache.org/jira/browse/HADOOP-11460
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>  Labels: scripts, shell
> Attachments: HADOOP-11460-00.patch, HADOOP-11460-01.patch, 
> HADOOP-11460-02.patch
>
>
> It is a very common shell pattern in 3.x to effectively replace sub-project 
> specific vars with generics.  We should have a function that does this 
> replacement and provides a warning to the end user that the old shell var is 
> deprecated.  Additionally, we should use this shell function to deprecate the 
> shell vars that are holdovers already.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11316) "mvn package -Pdist,docs -DskipTests -Dtar" fails because of non-ascii characters

2015-01-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293918#comment-14293918
 ] 

Steve Loughran commented on HADOOP-11316:
-

Looks like some invalid chars have got in: one in a username, others elsewhere.

The hboutemy credit lines should be cut; current ASF policy is "no authors".

Line 318 is a — instead of -- at the end of an XML comment inside a Java comment. 
Again, trivial to clean up.

Do you want to submit a patch?
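
For illustration, a quick way to locate the offending bytes before javadoc trips 
on them; this scanner is a hypothetical helper sketch, not part of any attached 
patch:

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

// Hypothetical helper: prints file, line, column and code point of every
// non-ASCII character in the files named on the command line.
class NonAsciiFinder {
  public static void main(String[] args) throws IOException {
    for (String name : args) {
      List<String> lines = Files.readAllLines(Paths.get(name)); // reads as UTF-8
      for (int i = 0; i < lines.size(); i++) {
        String line = lines.get(i);
        for (int j = 0; j < line.length(); j++) {
          if (line.charAt(j) > 127) {
            System.out.printf("%s:%d:%d: U+%04X%n",
                name, i + 1, j + 1, (int) line.charAt(j));
          }
        }
      }
    }
  }
}
{code}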

> "mvn package -Pdist,docs -DskipTests -Dtar" fails because of non-ascii 
> characters
> -
>
> Key: HADOOP-11316
> URL: https://issues.apache.org/jira/browse/HADOOP-11316
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi OZAWA
>Priority: Blocker
>
> The command fails because the following files include non-ascii characters.
> * ComparableVersion.java
> * CommonConfigurationKeysPublic.java
> * ComparableVersion.java
> {code}
>   [javadoc] 
> /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
>  error: unmappable character for encoding ASCII
>   [javadoc] //author mailto:hbout...@apache.org";>Herv?? 
> Boutemy
>   [javadoc]   ^
>   [javadoc] 
> /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java:13:
>  error: unmappable character for encoding ASCII
>   [javadoc] //author mailto:hbout...@apache.org";>Herv?? 
> Boutemy
> {code}
> {code}
>   [javadoc] 
> /mnt/build/hadoop-2.6.0-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java:318:
>  error: unmappable character for encoding ASCII
>   [javadoc]   //  

[jira] [Updated] (HADOOP-9954) Hadoop 2.0.5 doc build failure - OutOfMemoryError exception

2015-01-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9954:
---
   Resolution: Duplicate
Fix Version/s: (was: 2.0.5-alpha)
   2.5.0
   Status: Resolved  (was: Patch Available)

Resolving as fixed in 2.5.0. Moral: get these build patches in, as the problems 
won't go away.


> Hadoop 2.0.5 doc build failure - OutOfMemoryError exception
> ---
>
> Key: HADOOP-9954
> URL: https://issues.apache.org/jira/browse/HADOOP-9954
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.5-alpha
> Environment: CentOS 5, Sun JDK 1.6 (but not on CenOS6 + OpenJDK 7).
>Reporter: Paul Han
> Fix For: 2.5.0
>
> Attachments: HADOOP-9954.patch
>
>
> When run hadoop build with command line options:
> {code}
> mvn package -Pdist,native,docs -DskipTests -Dtar 
> {code}
> Build failed and an OutOfMemoryError exception is thrown:
> {code}
> [INFO] --- maven-source-plugin:2.1.2:test-jar (default) @ hadoop-hdfs ---
> [INFO] 
> [INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default) @ hadoop-hdfs ---
> [INFO] ** FindBugsMojo execute ***
> [INFO] canGenerate is true
> [INFO] ** FindBugsMojo executeFindbugs ***
> [INFO] Temp File is 
> /var/lib/jenkins/workspace/Hadoop-Client-2.0.5-T-RPM/rpms/hadoop-devel.x86_64/BUILD/hadoop-common/hadoop-hdfs-project/hadoop-hdfs/target/findbugsTemp.xml
> [INFO] Fork Value is true
>  [java] Out of memory
>  [java] Total memory: 477M
>  [java]  free memory: 68M
>  [java] Analyzed: 
> /var/lib/jenkins/workspace/Hadoop-Client-2.0.5-T-RPM/rpms/hadoop-devel.x86_64/BUILD/hadoop-common/hadoop-hdfs-project/hadoop-hdfs/target/classes
>  [java]  Aux: 
> /home/henkins-service/.m2/repository/org/codehaus/mojo/findbugs-maven-plugin/2.3.2/findbugs-maven-plugin-2.3.2.jar
>  [java]  Aux: 
> /home/henkins-service/.m2/repository/com/google/code/findbugs/bcel/1.3.9/bcel-1.3.9.jar
>  ...
>  [java]  Aux: 
> /home/henkins-service/.m2/repository/xmlenc/xmlenc/0.52/xmlenc-0.52.jar
>  [java] Exception in thread "main" java.lang.OutOfMemoryError: GC 
> overhead limit exceeded
>  [java]   at java.util.HashMap.<init>(HashMap.java:226)
>  [java]   at 
> edu.umd.cs.findbugs.ba.deref.UnconditionalValueDerefSet.<init>(UnconditionalValueDerefSet.java:68)
>  [java]   at 
> edu.umd.cs.findbugs.ba.deref.UnconditionalValueDerefAnalysis.createFact(UnconditionalValueDerefAnalysis.java:650)
>  [java]   at 
> edu.umd.cs.findbugs.ba.deref.UnconditionalValueDerefAnalysis.createFact(UnconditionalValueDerefAnalysis.java:82)
>  [java]   at 
> edu.umd.cs.findbugs.ba.BasicAbstractDataflowAnalysis.getFactOnEdge(BasicAbstractDataflowAnalysis.java:119)
>  [java]   at 
> edu.umd.cs.findbugs.ba.AbstractDataflow.getFactOnEdge(AbstractDataflow.java:54)
>  [java]   at 
> edu.umd.cs.findbugs.ba.npe.NullDerefAndRedundantComparisonFinder.examineNullValues(NullDerefAndRedundantComparisonFinder.java:297)
>  [java]   at 
> edu.umd.cs.findbugs.ba.npe.NullDerefAndRedundantComparisonFinder.execute(NullDerefAndRedundantComparisonFinder.java:150)
>  [java]   at 
> edu.umd.cs.findbugs.detect.FindNullDeref.analyzeMethod(FindNullDeref.java:278)
>  [java]   at 
> edu.umd.cs.findbugs.detect.FindNullDeref.visitClassContext(FindNullDeref.java:205)
>  [java]   at 
> edu.umd.cs.findbugs.DetectorToDetector2Adapter.visitClass(DetectorToDetector2Adapter.java:68)
>  [java]   at 
> edu.umd.cs.findbugs.FindBugs2.analyzeApplication(FindBugs2.java:979)
>  [java]   at edu.umd.cs.findbugs.FindBugs2.execute(FindBugs2.java:230)
>  [java]   at edu.umd.cs.findbugs.FindBugs.runMain(FindBugs.java:348)
>  [java]   at edu.umd.cs.findbugs.FindBugs2.main(FindBugs2.java:1057)
>  [java] Java Result: 1
> [INFO] No bugs found
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10420) Add support to Swift-FS to support tempAuth

2015-01-27 Thread Jim VanOosten (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293912#comment-14293912
 ] 

Jim VanOosten commented on HADOOP-10420:


Steve, 

If SoftLayer provides a public endpoint to use for testing, how much space 
would be needed, and for how long would the endpoint be needed?


> Add support to Swift-FS to support tempAuth
> ---
>
> Key: HADOOP-10420
> URL: https://issues.apache.org/jira/browse/HADOOP-10420
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/swift, tools
>Affects Versions: 2.3.0
>Reporter: Jinghui Wang
> Attachments: HADOOP-10420-002.patch, HADOOP-10420-003.patch, 
> HADOOP-10420-004.patch, HADOOP-10420-005.patch, HADOOP-10420.patch
>
>
> Currently, hadoop-openstack Swift FS supports keystone authentication. The 
> attached patch adds support for tempAuth. Users will be able to configure 
> which authentication to use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11346) Rewrite sls/rumen to use new shell framework

2015-01-27 Thread John Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Smith updated HADOOP-11346:

Attachment: HADOOP-11346-02.patch

Updated the patch to fix the remaining JAVA_HEAP_MAX usage.

> Rewrite sls/rumen to use new shell framework
> 
>
> Key: HADOOP-11346
> URL: https://issues.apache.org/jira/browse/HADOOP-11346
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts, tools
>Reporter: Allen Wittenauer
>Assignee: John Smith
> Attachments: HADOOP-11346-01.patch, HADOOP-11346-02.patch, 
> HADOOP-11346.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11346) Rewrite sls/rumen to use new shell framework

2015-01-27 Thread John Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Smith reassigned HADOOP-11346:
---

Assignee: John Smith  (was: Allen Wittenauer)

> Rewrite sls/rumen to use new shell framework
> 
>
> Key: HADOOP-11346
> URL: https://issues.apache.org/jira/browse/HADOOP-11346
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts, tools
>Reporter: Allen Wittenauer
>Assignee: John Smith
> Attachments: HADOOP-11346-01.patch, HADOOP-11346-02.patch, 
> HADOOP-11346.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11478) HttpFSServer does not properly impersonate a real user when executing "open" operation in a kerberised environment

2015-01-27 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293894#comment-14293894
 ] 

Charles Lamb commented on HADOOP-11478:
---

[~ranadip],

I assume that when you configured your kms acls per the instructions in 
HADOOP-11479, this problem went away. Feel free to reopen if that's not the 
case.

Charles


> HttpFSServer does not properly impersonate a real user when executing "open" 
> operation in a kerberised environment
> --
>
> Key: HADOOP-11478
> URL: https://issues.apache.org/jira/browse/HADOOP-11478
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
> Environment: CentOS
>Reporter: Ranadip
>Priority: Blocker
>
> Setup:
> - Kerberos enabled in the cluster, including Hue SSO
> - Encryption enabled using KMS. Encryption key and encryption zone created. 
> KMS key level ACL created to allow only real user to have all access to the 
> key and no one else.
> Manifestation:
> Using Hue, real user logged in using Kerberos credentials. For direct access, 
> user does kinit and then uses curl calls.
> New file creation inside encryption zone goes ahead fine as expected. 
> But attempts to view the contents of the file fail with the exception:
> "User [httpfs] is not authorized to perform [DECRYPT_EEK] on key with ACL 
> name [mykeyname]!!"
> Perhaps, this is linked to bug #HDFS-6849. In the file HttpFSServer.java, the 
> OPEN handler calls command.execute(fs) directly (and this fails). In CREATE, 
> that call is wrapped within fsExecute(user, command). This seems 
> to cause the problem.
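
For illustration, the shape of the wrapping the reporter describes, sketched 
with Hadoop's standard proxy-user API; the class and method here are 
hypothetical, not HttpFSServer's actual code:

{code}
import java.io.InputStream;
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

class OpenAsRealUser {
  // Open a file as the authenticated end user rather than the httpfs service
  // principal: the FileSystem is acquired *inside* doAs, so downstream calls
  // (including DECRYPT_EEK against the KMS) carry the real user's identity.
  static InputStream open(String realUser, Configuration conf, Path path)
      throws Exception {
    UserGroupInformation ugi = UserGroupInformation.createProxyUser(
        realUser, UserGroupInformation.getLoginUser());
    return ugi.doAs((PrivilegedExceptionAction<InputStream>) () -> {
      FileSystem fs = FileSystem.get(conf);
      return fs.open(path);
    });
  }
}
{code}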



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11478) HttpFSServer does not properly impersonate a real user when executing "open" operation in a kerberised environment

2015-01-27 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb resolved HADOOP-11478.
---
Resolution: Not a Problem

> HttpFSServer does not properly impersonate a real user when executing "open" 
> operation in a kerberised environment
> --
>
> Key: HADOOP-11478
> URL: https://issues.apache.org/jira/browse/HADOOP-11478
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
> Environment: CentOS
>Reporter: Ranadip
>Priority: Blocker
>
> Setup:
> - Kerberos enabled in the cluster, including Hue SSO
> - Encryption enabled using KMS. Encryption key and encryption zone created. 
> KMS key level ACL created to allow only real user to have all access to the 
> key and no one else.
> Manifestation:
> Using Hue, real user logged in using Kerberos credentials. For direct access, 
> user does kinit and then uses curl calls.
> New file creation inside encryption zone goes ahead fine as expected. 
> But attempts to view the contents of the file fail with the exception:
> "User [httpfs] is not authorized to perform [DECRYPT_EEK] on key with ACL 
> name [mykeyname]!!"
> Perhaps, this is linked to bug #HDFS-6849. In the file HttpFSServer.java, the 
> OPEN handler calls command.execute(fs) directly (and this fails). In CREATE, 
> that call is wrapped within fsExecute(user, command). This seems 
> to cause the problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10181) GangliaContext does not work with multicast ganglia setup

2015-01-27 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-10181:
---
  Component/s: metrics
 Target Version/s: 2.7.0
Affects Version/s: 2.6.0
 Hadoop Flags: Reviewed

+1 for patch v003.  Andrew, thank you for addressing the feedback.

I plan to wait until Monday, 2/2, to commit this, in case any committer who has 
prior experience with this code also wants to review.

> GangliaContext does not work with multicast ganglia setup
> -
>
> Key: HADOOP-10181
> URL: https://issues.apache.org/jira/browse/HADOOP-10181
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.6.0
>Reporter: Andrew Otto
>Assignee: Andrew Johnson
>Priority: Minor
>  Labels: ganglia, hadoop, metrics, multicast
> Attachments: HADOOP-10181.001.patch, HADOOP-10181.002.patch, 
> HADOOP-10181.003.patch
>
>
> The GangliaContext class which is used to send Hadoop metrics to Ganglia uses 
> a DatagramSocket to send these metrics.  This works fine for Ganglia 
> multicast setups that are all on the same VLAN.  However, when working with 
> multiple VLANs, a packet sent via DatagramSocket to a multicast address will 
> end up with a TTL of 1.  Multicast TTL indicates the number of network hops 
> for which a particular multicast packet is valid.  The packets sent by 
> GangliaContext do not make it to ganglia aggregrators on the same multicast 
> group, but in different VLANs.
> To fix, we'd need a configuration property that specifies that multicast is 
> to be used, and another that allows setting of the multicast packet TTL.  
> With these set, we could then use MulticastSocket setTimeToLive() instead of 
> just plain ol' DatagramSocket.
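
For illustration, a minimal sketch of that suggestion with hypothetical 
configuration plumbing; since MulticastSocket extends DatagramSocket, the rest 
of the send path can stay unchanged:

{code}
import java.io.IOException;
import java.net.DatagramSocket;
import java.net.MulticastSocket;

class MetricsSocketFactory {
  // useMulticast and ttl would come from the two new configuration
  // properties; their names and plumbing here are illustrative only.
  static DatagramSocket open(boolean useMulticast, int ttl) throws IOException {
    if (useMulticast) {
      MulticastSocket s = new MulticastSocket();
      s.setTimeToLive(ttl);  // TTL > 1 lets packets survive VLAN hops
      return s;              // MulticastSocket extends DatagramSocket
    }
    return new DatagramSocket();
  }
}
{code}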



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11485) Pluggable shell integration

2015-01-27 Thread John Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293884#comment-14293884
 ] 

John Smith commented on HADOOP-11485:
-

+1 (non-binding)

> Pluggable shell integration
> ---
>
> Key: HADOOP-11485
> URL: https://issues.apache.org/jira/browse/HADOOP-11485
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: scripts, shell
> Attachments: HADOOP-11485-00.patch, HADOOP-11485-01.patch, 
> HADOOP-11485-02.patch
>
>
> It would be useful to provide a way for core and non-core Hadoop components 
> to plug into the shell infrastructure.  This would allow us to pull the HDFS, 
> MapReduce, and YARN shell functions out of hadoop-functions.sh.  
> Additionally, it should let 3rd parties such as HBase influence things like 
> classpaths at runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11460) Deprecate shell vars

2015-01-27 Thread John Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293879#comment-14293879
 ] 

John Smith commented on HADOOP-11460:
-

Should KMS get deprecated as well?

> Deprecate shell vars
> 
>
> Key: HADOOP-11460
> URL: https://issues.apache.org/jira/browse/HADOOP-11460
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>  Labels: scripts, shell
> Attachments: HADOOP-11460-00.patch, HADOOP-11460-01.patch, 
> HADOOP-11460-02.patch
>
>
> It is a very common shell pattern in 3.x to effectively replace sub-project 
> specific vars with generics.  We should have a function that does this 
> replacement and provides a warning to the end user that the old shell var is 
> deprecated.  Additionally, we should use this shell function to deprecate the 
> shell vars that are holdovers already.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11045) Introducing a tool to detect flaky tests of hadoop jenkins test job

2015-01-27 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293875#comment-14293875
 ] 

Yongjun Zhang commented on HADOOP-11045:


Hi [~ozawa],

Thanks a lot for your feedback! I did do some study before deciding to use 
hexversion. Below is what I found:

* hexversion exists as far back as Python 1.5.2, whereas version_info 
exists only from 2.0 on. 
* hexversion is described as "The version number encoded as a single integer. 
This is guaranteed to increase with each version, including proper support for 
non-production releases", however, per 
http://stackoverflow.com/questions/1093322/how-do-i-check-what-version-of-python-is-running-my-script,
 version_info may not, see " As long you do not endup comparing 
(3,3,0,'rc1','0') and (3,3,0,'beta','0') –  sorin Jun 5 '13 at 9:51 "

Based on this information, I chose to use hexversion. It's a bit harder to 
read, but not too bad. There is a detailed description of the format here: 
https://docs.python.org/2/library/sys.html#sys.hexversion. Note also that when 
I print out error messages, I print more readable version info.

What do you think?

Thanks.

--Yongjun




> Introducing a tool to detect flaky tests of hadoop jenkins test job
> ---
>
> Key: HADOOP-11045
> URL: https://issues.apache.org/jira/browse/HADOOP-11045
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, tools
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HADOOP-11045.001.patch, HADOOP-11045.002.patch, 
> HADOOP-11045.003.patch, HADOOP-11045.004.patch, HADOOP-11045.005.patch, 
> HADOOP-11045.006.patch, HADOOP-11045.007.patch
>
>
> File this jira to introduce a tool to detect flaky tests of hadoop jenkins 
> test jobs. Certainly it can be adapted to projects other than hadoop.
> I developed the tool on top of some initial work [~tlipcon] did. We find it 
> quite useful. With Todd's agreement, I'd like to push it to upstream so all 
> of us can share (thanks Todd for the initial work and support). I hope you 
> find the tool useful too.
> The idea is that when one needs to see whether a test failure hit in a 
> pre-build jenkins run is flaky or not, one can run this tool to get a 
> good idea. Also, if one wants to look at the failure trend of a testcase in a 
> given jenkins job, the tool can be used too. I hope people find it useful.
> This tool is for hadoop contributors rather than hadoop users. Thanks 
> [~tedyu] for the advice to put to dev-support dir.
> Description of the tool:
> {code}
> #
> # Given a jenkins test job, this script examines all runs of the job done
> # within specified period of time (number of days prior to the execution
> # time of this script), and reports all failed tests.
> #
> # The output of this script includes a section for each run that has failed
> # tests, with each failed test name listed.
> #
> # More importantly, at the end, it outputs a summary section to list all 
> failed
> # tests within all examined runs, and indicate how many runs a same test
> # failed, and sorted all failed tests by how many runs each test failed in.
> #
> # This way, when we see failed tests in PreCommit build, we can quickly tell 
> # whether a failed test is a new failure or it failed before, and it may just 
> # be a flaky test.
> #
> # Of course, to be 100% sure about the reason of a failed test, closer look 
> # at the failed test for the specific run is necessary.
> #
> {code}
> How to use the tool:
> {code}
> Usage: determine-flaky-tests-hadoop.py [options]
> Options:
>   -h, --helpshow this help message and exit
>   -J JENKINS_URL, --jenkins-url=JENKINS_URL
> Jenkins URL
>   -j JOB_NAME, --job-name=JOB_NAME
> Job name to look at
>   -n NUM_PREV_DAYS, --num-days=NUM_PREV_DAYS
> Number of days to examine
> {code}
> Example command line:
> {code}
> ./determine-flaky-tests-hadoop.py -J https://builds.apache.org -j 
> PreCommit-HDFS-Build -n 2 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11509) change parsing sequence in GenericOptionsParser to parse -D parameters first

2015-01-27 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293817#comment-14293817
 ] 

Chris Nauroth commented on HADOOP-11509:


I took a look at how the generic command line options are documented, currently 
and going back to 1.0.4:

http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/CommandsManual.html#Generic_Options

http://hadoop.apache.org/docs/r1.0.4/commands_manual.html#Generic+Options

Also relevant is the streaming documentation, which basically repeats the 
information:

http://hadoop.apache.org/docs/r2.6.0/hadoop-mapreduce-client/hadoop-mapreduce-client-core/HadoopStreaming.html#Generic_Command_Options

http://hadoop.apache.org/docs/r1.0.4/streaming.html#Generic+Command+Options

There is no way for a user to interpret this documentation to get a complete 
and correct understanding of these precedence rules (regardless of this patch). 
 It sounds like you're suggesting we file a follow-on jira to improve the 
documentation.  Do I understand correctly?

I still can't see a way that this patch would cause calling code to break.  The 
argument handling at this layer is not strictly positional, due to the way 
commons-cli works.  I don't expect anyone will need to swap the order of 
arguments in their script or anything like that.  Do you have an example of 
something that would break after this patch?

> change parsing sequence in GenericOptionsParser to parse -D parameters first
> 
>
> Key: HADOOP-11509
> URL: https://issues.apache.org/jira/browse/HADOOP-11509
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.7.0
>
> Attachments: HADOOP-11509.1.patch, HADOOP-11509.2.patch
>
>
> In GenericOptionsParser, we need to parse the -D parameters first. That way, 
> the user input parameters (passed through -D) can be set into the 
> configuration object earlier and used while processing the other parameters.
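
For illustration, a two-pass sketch of that ordering, using a plain scan rather 
than the commons-cli machinery the real parser uses; the class and method names 
are hypothetical:

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;

class DFirstParser {
  // First pass: apply every "-D key=value" pair so that later options can
  // resolve against an already-populated Configuration; everything else is
  // returned for the normal second-pass parse.
  static String[] applyDOptions(Configuration conf, String[] args) {
    List<String> rest = new ArrayList<>();
    for (int i = 0; i < args.length; i++) {
      if ("-D".equals(args[i]) && i + 1 < args.length
          && args[i + 1].contains("=")) {
        String[] kv = args[++i].split("=", 2);
        conf.set(kv[0], kv[1]);
      } else {
        rest.add(args[i]);
      }
    }
    return rest.toArray(new String[0]);
  }
}
{code}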



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11045) Introducing a tool to detect flaky tests of hadoop jenkins test job

2015-01-27 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293812#comment-14293812
 ] 

Tsuyoshi OZAWA commented on HADOOP-11045:
-

Please let me know if you have a reason to use sys.hexversion.

> Introducing a tool to detect flaky tests of hadoop jenkins test job
> ---
>
> Key: HADOOP-11045
> URL: https://issues.apache.org/jira/browse/HADOOP-11045
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, tools
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HADOOP-11045.001.patch, HADOOP-11045.002.patch, 
> HADOOP-11045.003.patch, HADOOP-11045.004.patch, HADOOP-11045.005.patch, 
> HADOOP-11045.006.patch, HADOOP-11045.007.patch
>
>
> File this jira to introduce a tool to detect flaky tests of hadoop jenkins 
> test jobs. Certainly it can be adapted to projects other than hadoop.
> I developed the tool on top of some initial work [~tlipcon] did. We find it 
> quite useful. With Todd's agreement, I'd like to push it to upstream so all 
> of us can share (thanks Todd for the initial work and support). I hope you 
> find the tool useful too.
> The idea is that when one needs to see whether a test failure hit in a 
> pre-build jenkins run is flaky or not, one can run this tool to get a 
> good idea. Also, if one wants to look at the failure trend of a testcase in a 
> given jenkins job, the tool can be used too. I hope people find it useful.
> This tool is for hadoop contributors rather than hadoop users. Thanks 
> [~tedyu] for the advice to put to dev-support dir.
> Description of the tool:
> {code}
> #
> # Given a jenkins test job, this script examines all runs of the job done
> # within specified period of time (number of days prior to the execution
> # time of this script), and reports all failed tests.
> #
> # The output of this script includes a section for each run that has failed
> # tests, with each failed test name listed.
> #
> # More importantly, at the end, it outputs a summary section to list all 
> failed
> # tests within all examined runs, and indicate how many runs a same test
> # failed, and sorted all failed tests by how many runs each test failed in.
> #
> # This way, when we see failed tests in PreCommit build, we can quickly tell 
> # whether a failed test is a new failure or it failed before, and it may just 
> # be a flaky test.
> #
> # Of course, to be 100% sure about the reason of a failed test, closer look 
> # at the failed test for the specific run is necessary.
> #
> {code}
> How to use the tool:
> {code}
> Usage: determine-flaky-tests-hadoop.py [options]
> Options:
>   -h, --helpshow this help message and exit
>   -J JENKINS_URL, --jenkins-url=JENKINS_URL
> Jenkins URL
>   -j JOB_NAME, --job-name=JOB_NAME
> Job name to look at
>   -n NUM_PREV_DAYS, --num-days=NUM_PREV_DAYS
> Number of days to examine
> {code}
> Example command line:
> {code}
> ./determine-flaky-tests-hadoop.py -J https://builds.apache.org -j 
> PreCommit-HDFS-Build -n 2 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11045) Introducing a tool to detect flaky tests of hadoop jenkins test job

2015-01-27 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293811#comment-14293811
 ] 

Tsuyoshi OZAWA commented on HADOOP-11045:
-

[~yzhangal] Great work! One point: I prefer to use sys.version_info instead of 
sys.hexversion. We can use its tuple comparison feature like this: 

{code}
>>> sys.version_info
sys.version_info(major=2, minor=7, micro=6, releaselevel='final', serial=0)
>>> sys.version_info > (2, 6, 0)
True
>>> sys.version_info > (3, 0, 0)
False
>>> sys.version_info < (3, 0, 0)
True
{code}

> Introducing a tool to detect flaky tests of hadoop jenkins test job
> ---
>
> Key: HADOOP-11045
> URL: https://issues.apache.org/jira/browse/HADOOP-11045
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, tools
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HADOOP-11045.001.patch, HADOOP-11045.002.patch, 
> HADOOP-11045.003.patch, HADOOP-11045.004.patch, HADOOP-11045.005.patch, 
> HADOOP-11045.006.patch, HADOOP-11045.007.patch
>
>
> File this jira to introduce a tool to detect flaky tests of hadoop jenkins 
> test jobs. Certainly it can be adapted to projects other than hadoop.
> I developed the tool on top of some initial work [~tlipcon] did. We find it 
> quite useful. With Todd's agreement, I'd like to push it to upstream so all 
> of us can share (thanks Todd for the initial work and support). I hope you 
> find the tool useful too.
> The idea is that when one needs to see whether a test failure hit in a 
> pre-build jenkins run is flaky or not, one can run this tool to get a 
> good idea. Also, if one wants to look at the failure trend of a testcase in a 
> given jenkins job, the tool can be used too. I hope people find it useful.
> This tool is for hadoop contributors rather than hadoop users. Thanks 
> [~tedyu] for the advice to put to dev-support dir.
> Description of the tool:
> {code}
> #
> # Given a jenkins test job, this script examines all runs of the job done
> # within specified period of time (number of days prior to the execution
> # time of this script), and reports all failed tests.
> #
> # The output of this script includes a section for each run that has failed
> # tests, with each failed test name listed.
> #
> # More importantly, at the end, it outputs a summary section to list all 
> failed
> # tests within all examined runs, and indicate how many runs a same test
> # failed, and sorted all failed tests by how many runs each test failed in.
> #
> # This way, when we see failed tests in PreCommit build, we can quickly tell 
> # whether a failed test is a new failure or it failed before, and it may just 
> # be a flaky test.
> #
> # Of course, to be 100% sure about the reason of a failed test, closer look 
> # at the failed test for the specific run is necessary.
> #
> {code}
> How to use the tool:
> {code}
> Usage: determine-flaky-tests-hadoop.py [options]
> Options:
>   -h, --helpshow this help message and exit
>   -J JENKINS_URL, --jenkins-url=JENKINS_URL
> Jenkins URL
>   -j JOB_NAME, --job-name=JOB_NAME
> Job name to look at
>   -n NUM_PREV_DAYS, --num-days=NUM_PREV_DAYS
> Number of days to examine
> {code}
> Example command line:
> {code}
> ./determine-flaky-tests-hadoop.py -J https://builds.apache.org -j 
> PreCommit-HDFS-Build -n 2 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9954) Hadoop 2.0.5 doc build failure - OutOfMemoryError exception

2015-01-27 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293777#comment-14293777
 ] 

Tsuyoshi OZAWA commented on HADOOP-9954:


[~ste...@apache.org] I think this problem was fixed by HADOOP-10910. Can we 
close this as resolved? BTW, I hit the problem reported as HADOOP-11316 when I 
checked the command.

> Hadoop 2.0.5 doc build failure - OutOfMemoryError exception
> ---
>
> Key: HADOOP-9954
> URL: https://issues.apache.org/jira/browse/HADOOP-9954
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.5-alpha
> Environment: CentOS 5, Sun JDK 1.6 (but not on CenOS6 + OpenJDK 7).
>Reporter: Paul Han
> Fix For: 2.0.5-alpha
>
> Attachments: HADOOP-9954.patch
>
>
> When run hadoop build with command line options:
> {code}
> mvn package -Pdist,native,docs -DskipTests -Dtar 
> {code}
> Build failed and an OutOfMemoryError exception is thrown:
> {code}
> [INFO] --- maven-source-plugin:2.1.2:test-jar (default) @ hadoop-hdfs ---
> [INFO] 
> [INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default) @ hadoop-hdfs ---
> [INFO] ** FindBugsMojo execute ***
> [INFO] canGenerate is true
> [INFO] ** FindBugsMojo executeFindbugs ***
> [INFO] Temp File is 
> /var/lib/jenkins/workspace/Hadoop-Client-2.0.5-T-RPM/rpms/hadoop-devel.x86_64/BUILD/hadoop-common/hadoop-hdfs-project/hadoop-hdfs/target/findbugsTemp.xml
> [INFO] Fork Value is true
>  [java] Out of memory
>  [java] Total memory: 477M
>  [java]  free memory: 68M
>  [java] Analyzed: 
> /var/lib/jenkins/workspace/Hadoop-Client-2.0.5-T-RPM/rpms/hadoop-devel.x86_64/BUILD/hadoop-common/hadoop-hdfs-project/hadoop-hdfs/target/classes
>  [java]  Aux: 
> /home/henkins-service/.m2/repository/org/codehaus/mojo/findbugs-maven-plugin/2.3.2/findbugs-maven-plugin-2.3.2.jar
>  [java]  Aux: 
> /home/henkins-service/.m2/repository/com/google/code/findbugs/bcel/1.3.9/bcel-1.3.9.jar
>  ...
>  [java]  Aux: 
> /home/henkins-service/.m2/repository/xmlenc/xmlenc/0.52/xmlenc-0.52.jar
>  [java] Exception in thread "main" java.lang.OutOfMemoryError: GC 
> overhead limit exceeded
>  [java]   at java.util.HashMap.<init>(HashMap.java:226)
>  [java]   at 
> edu.umd.cs.findbugs.ba.deref.UnconditionalValueDerefSet.<init>(UnconditionalValueDerefSet.java:68)
>  [java]   at 
> edu.umd.cs.findbugs.ba.deref.UnconditionalValueDerefAnalysis.createFact(UnconditionalValueDerefAnalysis.java:650)
>  [java]   at 
> edu.umd.cs.findbugs.ba.deref.UnconditionalValueDerefAnalysis.createFact(UnconditionalValueDerefAnalysis.java:82)
>  [java]   at 
> edu.umd.cs.findbugs.ba.BasicAbstractDataflowAnalysis.getFactOnEdge(BasicAbstractDataflowAnalysis.java:119)
>  [java]   at 
> edu.umd.cs.findbugs.ba.AbstractDataflow.getFactOnEdge(AbstractDataflow.java:54)
>  [java]   at 
> edu.umd.cs.findbugs.ba.npe.NullDerefAndRedundantComparisonFinder.examineNullValues(NullDerefAndRedundantComparisonFinder.java:297)
>  [java]   at 
> edu.umd.cs.findbugs.ba.npe.NullDerefAndRedundantComparisonFinder.execute(NullDerefAndRedundantComparisonFinder.java:150)
>  [java]   at 
> edu.umd.cs.findbugs.detect.FindNullDeref.analyzeMethod(FindNullDeref.java:278)
>  [java]   at 
> edu.umd.cs.findbugs.detect.FindNullDeref.visitClassContext(FindNullDeref.java:205)
>  [java]   at 
> edu.umd.cs.findbugs.DetectorToDetector2Adapter.visitClass(DetectorToDetector2Adapter.java:68)
>  [java]   at 
> edu.umd.cs.findbugs.FindBugs2.analyzeApplication(FindBugs2.java:979)
>  [java]   at edu.umd.cs.findbugs.FindBugs2.execute(FindBugs2.java:230)
>  [java]   at edu.umd.cs.findbugs.FindBugs.runMain(FindBugs.java:348)
>  [java]   at edu.umd.cs.findbugs.FindBugs2.main(FindBugs2.java:1057)
>  [java] Java Result: 1
> [INFO] No bugs found
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10846) DataChecksum#calculateChunkedSums not working for PPC when buffers not backed by array

2015-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293674#comment-14293674
 ] 

Hadoop QA commented on HADOOP-10846:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12694783/HADOOP-10846-v4.patch
  against trunk revision 0da53a3.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The following test timeouts occurred in 
hadoop-common-project/hadoop-common:

org.apache.hadoop.http.TestHttpServerLifecycle

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5508//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5508//console

This message is automatically generated.

> DataChecksum#calculateChunkedSums not working for PPC when buffers not backed 
> by array
> --
>
> Key: HADOOP-10846
> URL: https://issues.apache.org/jira/browse/HADOOP-10846
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.4.1, 2.5.2
> Environment: PowerPC platform
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-10846-v1.patch, HADOOP-10846-v2.patch, 
> HADOOP-10846-v3.patch, HADOOP-10846-v4.patch, HADOOP-10846.patch
>
>
> Got the following exception when running Hadoop on Power PC. The 
> implementation for computing checksums does not work when the data buffer 
> and checksum buffer are not backed by arrays.
> 13/09/16 04:06:57 ERROR security.UserGroupInformation: 
> PriviledgedActionException as:biadmin (auth:SIMPLE) 
> cause:org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> org.apache.hadoop.fs.ChecksumException: Checksum error



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11499) Check of executorThreadsStarted in ValueQueue#submitRefillTask() evades lock acquisition

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293647#comment-14293647
 ] 

Hudson commented on HADOOP-11499:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #87 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/87/])
HADOOP-11499. Check of executorThreadsStarted in ValueQueue#submitRefillTask() 
evades lock acquisition. Contributed by Ted Yu (jlowe: rev 
7574df1bba33919348d3009f2578d6a81b5818e6)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/ValueQueue.java


> Check of executorThreadsStarted in ValueQueue#submitRefillTask() evades lock 
> acquisition
> 
>
> Key: HADOOP-11499
> URL: https://issues.apache.org/jira/browse/HADOOP-11499
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: hadoop-11499-001.patch
>
>
> {code}
> if (!executorThreadsStarted) {
>   synchronized (this) {
> // To ensure all requests are first queued, make coreThreads =
> // maxThreads
> // and pre-start all the Core Threads.
> executor.prestartAllCoreThreads();
> executorThreadsStarted = true;
>   }
> }
> {code}
> It is possible that two threads executing the above code both see 
> executorThreadsStarted as being false, leading to 
> executor.prestartAllCoreThreads() called twice.
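
For context, the race above is the classic double-checked locking bug. A 
minimal sketch of the standard fix (illustrative only, not the committed 
patch): declare the flag volatile and re-check it while holding the lock.

{code}
import java.util.concurrent.ThreadPoolExecutor;

// Sketch only: the volatile flag plus a re-check under the lock guarantees
// prestartAllCoreThreads() runs exactly once.
class RefillStarter {
  private final ThreadPoolExecutor executor;
  private volatile boolean executorThreadsStarted = false;

  RefillStarter(ThreadPoolExecutor executor) {
    this.executor = executor;
  }

  void startThreadsOnce() {
    if (!executorThreadsStarted) {        // cheap unsynchronized fast path
      synchronized (this) {
        if (!executorThreadsStarted) {    // re-check while holding the lock
          executor.prestartAllCoreThreads();
          executorThreadsStarted = true;
        }
      }
    }
  }
}
{code}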



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11466) FastByteComparisons: do not use UNSAFE_COMPARER on the SPARC architecture because it is slower there

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293644#comment-14293644
 ] 

Hudson commented on HADOOP-11466:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #87 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/87/])
HADOOP-11466: move to 2.6.1 (cmccabe: rev 
21d5599067adf14d589732a586c3b10aeb0936e9)
* hadoop-common-project/hadoop-common/CHANGES.txt


> FastByteComparisons: do not use UNSAFE_COMPARER on the SPARC architecture 
> because it is slower there
> 
>
> Key: HADOOP-11466
> URL: https://issues.apache.org/jira/browse/HADOOP-11466
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io, performance, util
> Environment: Linux X86 and Solaris SPARC
>Reporter: Suman Somasundar
>Assignee: Suman Somasundar
>Priority: Minor
>  Labels: patch
> Fix For: 2.6.1
>
> Attachments: HADOOP-11466.003.patch
>
>
> One difference between Hadoop 2.x and Hadoop 1.x is a utility to compare two 
> byte arrays at coarser 8-byte granularity instead of at the byte-level. The 
> discussion at HADOOP-7761 says this fast byte comparison is somewhat faster 
> for longer arrays and somewhat slower for smaller arrays (AVRO-939). In 
> order to do 8-byte reads on addresses not aligned to 8-byte boundaries, the 
> patch uses Unsafe.getLong. The problem is that this call is incredibly 
> expensive on SPARC. The reason is that the Studio compiler detects an 
> unaligned pointer read and handles this read in software. x86 supports 
> unaligned reads, so there is no penalty for this call on x86. 
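
For illustration, a minimal sketch of the safe per-byte fallback that a SPARC 
check could select instead of the Unsafe-based comparer (hypothetical code, 
not the actual Hadoop implementation):

{code}
// Sketch only: lexicographic comparison one byte at a time. No unaligned
// reads, so it avoids the software-emulated Unsafe.getLong penalty on SPARC.
static int compareTo(byte[] a, int aOff, int aLen,
                     byte[] b, int bOff, int bLen) {
  int n = Math.min(aLen, bLen);
  for (int i = 0; i < n; i++) {
    int diff = (a[aOff + i] & 0xff) - (b[bOff + i] & 0xff);
    if (diff != 0) {
      return diff;
    }
  }
  return aLen - bLen;  // equal common prefix: shorter array sorts first
}
{code}

The platform test could be as simple as checking whether 
{{System.getProperty("os.arch")}} starts with "sparc", though the exact 
property value should be verified per platform.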



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-6221) RPC Client operations cannot be interrupted

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293650#comment-14293650
 ] 

Hudson commented on HADOOP-6221:


FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #87 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/87/])
HADOOP-6221 RPC Client operations cannot be interrupted (stevel) (stevel: rev 
1f2b6956c2012a7d6ea7e7ba5116d3ad71c23d7e)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketIOWithTimeout.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCWaitForProxy.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java


> RPC Client operations cannot be interrupted
> ---
>
> Key: HADOOP-6221
> URL: https://issues.apache.org/jira/browse/HADOOP-6221
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-6221-007.patch, HADOOP-6221-008.patch, 
> HADOOP-6221.patch, HADOOP-6221.patch, HADOOP-6221.patch, HADOOP-6221.patch, 
> HADOOP-6221.patch, HADOOP-6221.patch
>
>
> RPC.waitForProxy swallows any attempts to interrupt it while waiting for a 
> proxy; this makes it hard to shut down a service that you are starting; you 
> have to wait for the timeouts. 
> There are only 4-5 places in the code that use either of the two overloaded 
> methods, removing the catch and changing the signature should not be too 
> painful, unless anyone is using the method outside the hadoop codebase. 
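
A minimal sketch of the fix pattern (hypothetical names, not the committed 
code): let the interrupt escape the retry loop as an InterruptedIOException 
instead of swallowing it.

{code}
import java.io.IOException;
import java.io.InterruptedIOException;

// Sketch only: a waitForProxy-style retry loop that reacts to interrupts
// promptly instead of retrying until the timeout expires.
final class WaitForProxySketch {
  interface ProxyFactory<T> { T tryConnect() throws IOException; }

  static <T> T waitForProxy(ProxyFactory<T> factory, long timeoutMs)
      throws IOException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (true) {
      try {
        return factory.tryConnect();
      } catch (IOException e) {
        if (System.currentTimeMillis() >= deadline) {
          throw e;                              // out of time: surface failure
        }
      }
      try {
        Thread.sleep(1000);                     // back off before retrying
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();     // preserve interrupt status
        throw new InterruptedIOException("interrupted waiting for proxy");
      }
    }
  }
}
{code}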



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11509) change parsing sequence in GenericOptionsParser to parse -D parameters first

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293651#comment-14293651
 ] 

Hudson commented on HADOOP-11509:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #87 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/87/])
HADOOP-11509. Change parsing sequence in GenericOptionsParser to parse (xgong: 
rev 0bf333911c950f22ec0f784bf465306e20b0d507)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java


> change parsing sequence in GenericOptionsParser to parse -D parameters first
> 
>
> Key: HADOOP-11509
> URL: https://issues.apache.org/jira/browse/HADOOP-11509
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.7.0
>
> Attachments: HADOOP-11509.1.patch, HADOOP-11509.2.patch
>
>
> In GenericOptionsParser, we need to parse -D parameter first. In that case, 
> the user input parameter (through -D) can be set into configuration object 
> earlier and used to process other parameters.
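
A minimal sketch of the intended ordering (the commons-cli calls are real; 
the surrounding method is hypothetical): apply every -D key=value pair to the 
Configuration before the remaining generic options are interpreted.

{code}
import org.apache.commons.cli.CommandLine;
import org.apache.hadoop.conf.Configuration;

// Sketch only: -D properties go into the Configuration first, so handling
// of the other generic options can read the user's overrides.
static void applyDefinitionsFirst(CommandLine line, Configuration conf) {
  if (line.hasOption('D')) {
    for (String prop : line.getOptionValues('D')) {
      String[] kv = prop.split("=", 2);   // split on the first '=' only
      if (kv.length == 2) {
        conf.set(kv[0], kv[1]);
      }
    }
  }
  // ... only now process -fs, -jt, -files, -libjars, -archives ...
}
{code}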



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11509) change parsing sequence in GenericOptionsParser to parse -D parameters first

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293606#comment-14293606
 ] 

Hudson commented on HADOOP-11509:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2018 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2018/])
HADOOP-11509. Change parsing sequence in GenericOptionsParser to parse (xgong: 
rev 0bf333911c950f22ec0f784bf465306e20b0d507)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java


> change parsing sequence in GenericOptionsParser to parse -D parameters first
> 
>
> Key: HADOOP-11509
> URL: https://issues.apache.org/jira/browse/HADOOP-11509
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.7.0
>
> Attachments: HADOOP-11509.1.patch, HADOOP-11509.2.patch
>
>
> In GenericOptionsParser, we need to parse -D parameter first. In that case, 
> the user input parameter (through -D) can be set into configuration object 
> earlier and used to process other parameters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-6221) RPC Client operations cannot be interrupted

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293605#comment-14293605
 ] 

Hudson commented on HADOOP-6221:


SUCCESS: Integrated in Hadoop-Hdfs-trunk #2018 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2018/])
HADOOP-6221 RPC Client operations cannot be interrupted (stevel) (stevel: rev 
1f2b6956c2012a7d6ea7e7ba5116d3ad71c23d7e)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCWaitForProxy.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketIOWithTimeout.java


> RPC Client operations cannot be interrupted
> ---
>
> Key: HADOOP-6221
> URL: https://issues.apache.org/jira/browse/HADOOP-6221
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-6221-007.patch, HADOOP-6221-008.patch, 
> HADOOP-6221.patch, HADOOP-6221.patch, HADOOP-6221.patch, HADOOP-6221.patch, 
> HADOOP-6221.patch, HADOOP-6221.patch
>
>
> RPC.waitForProxy swallows any attempts to interrupt it while waiting for a 
> proxy; this makes it hard to shut down a service that you are starting; you 
> have to wait for the timeouts. 
> There are only 4-5 places in the code that use either of the two overloaded 
> methods, removing the catch and changing the signature should not be too 
> painful, unless anyone is using the method outside the hadoop codebase. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11466) FastByteComparisons: do not use UNSAFE_COMPARER on the SPARC architecture because it is slower there

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293600#comment-14293600
 ] 

Hudson commented on HADOOP-11466:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2018 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2018/])
HADOOP-11466: move to 2.6.1 (cmccabe: rev 
21d5599067adf14d589732a586c3b10aeb0936e9)
* hadoop-common-project/hadoop-common/CHANGES.txt


> FastByteComparisons: do not use UNSAFE_COMPARER on the SPARC architecture 
> because it is slower there
> 
>
> Key: HADOOP-11466
> URL: https://issues.apache.org/jira/browse/HADOOP-11466
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io, performance, util
> Environment: Linux X86 and Solaris SPARC
>Reporter: Suman Somasundar
>Assignee: Suman Somasundar
>Priority: Minor
>  Labels: patch
> Fix For: 2.6.1
>
> Attachments: HADOOP-11466.003.patch
>
>
> One difference between Hadoop 2.x and Hadoop 1.x is a utility to compare two 
> byte arrays at coarser 8-byte granularity instead of at the byte-level. The 
> discussion at HADOOP-7761 says this fast byte comparison is somewhat faster 
> for longer arrays and somewhat slower for smaller arrays (AVRO-939). In 
> order to do 8-byte reads on addresses not aligned to 8-byte boundaries, the 
> patch uses Unsafe.getLong. The problem is that this call is incredibly 
> expensive on SPARC. The reason is that the Studio compiler detects an 
> unaligned pointer read and handles this read in software. x86 supports 
> unaligned reads, so there is no penalty for this call on x86. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11499) Check of executorThreadsStarted in ValueQueue#submitRefillTask() evades lock acquisition

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293602#comment-14293602
 ] 

Hudson commented on HADOOP-11499:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2018 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2018/])
HADOOP-11499. Check of executorThreadsStarted in ValueQueue#submitRefillTask() 
evades lock acquisition. Contributed by Ted Yu (jlowe: rev 
7574df1bba33919348d3009f2578d6a81b5818e6)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/ValueQueue.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Check of executorThreadsStarted in ValueQueue#submitRefillTask() evades lock 
> acquisition
> 
>
> Key: HADOOP-11499
> URL: https://issues.apache.org/jira/browse/HADOOP-11499
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: hadoop-11499-001.patch
>
>
> {code}
> if (!executorThreadsStarted) {
>   synchronized (this) {
> // To ensure all requests are first queued, make coreThreads =
> // maxThreads
> // and pre-start all the Core Threads.
> executor.prestartAllCoreThreads();
> executorThreadsStarted = true;
>   }
> }
> {code}
> It is possible that two threads executing the above code both see 
> executorThreadsStarted as being false, leading to 
> executor.prestartAllCoreThreads() called twice.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11499) Check of executorThreadsStarted in ValueQueue#submitRefillTask() evades lock acquisition

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293570#comment-14293570
 ] 

Hudson commented on HADOOP-11499:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #83 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/83/])
HADOOP-11499. Check of executorThreadsStarted in ValueQueue#submitRefillTask() 
evades lock acquisition. Contributed by Ted Yu (jlowe: rev 
7574df1bba33919348d3009f2578d6a81b5818e6)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/ValueQueue.java


> Check of executorThreadsStarted in ValueQueue#submitRefillTask() evades lock 
> acquisition
> 
>
> Key: HADOOP-11499
> URL: https://issues.apache.org/jira/browse/HADOOP-11499
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: hadoop-11499-001.patch
>
>
> {code}
> if (!executorThreadsStarted) {
>   synchronized (this) {
> // To ensure all requests are first queued, make coreThreads =
> // maxThreads
> // and pre-start all the Core Threads.
> executor.prestartAllCoreThreads();
> executorThreadsStarted = true;
>   }
> }
> {code}
> It is possible that two threads executing the above code both see 
> executorThreadsStarted as being false, leading to 
> executor.prestartAllCoreThreads() called twice.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11509) change parsing sequence in GenericOptionsParser to parse -D parameters first

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293574#comment-14293574
 ] 

Hudson commented on HADOOP-11509:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #83 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/83/])
HADOOP-11509. Change parsing sequence in GenericOptionsParser to parse (xgong: 
rev 0bf333911c950f22ec0f784bf465306e20b0d507)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> change parsing sequence in GenericOptionsParser to parse -D parameters first
> 
>
> Key: HADOOP-11509
> URL: https://issues.apache.org/jira/browse/HADOOP-11509
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.7.0
>
> Attachments: HADOOP-11509.1.patch, HADOOP-11509.2.patch
>
>
> In GenericOptionsParser, we need to parse -D parameter first. In that case, 
> the user input parameter (through -D) can be set into configuration object 
> earlier and used to process other parameters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11499) Check of executorThreadsStarted in ValueQueue#submitRefillTask() evades lock acquisition

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293579#comment-14293579
 ] 

Hudson commented on HADOOP-11499:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2037 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2037/])
HADOOP-11499. Check of executorThreadsStarted in ValueQueue#submitRefillTask() 
evades lock acquisition. Contributed by Ted Yu (jlowe: rev 
7574df1bba33919348d3009f2578d6a81b5818e6)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/ValueQueue.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Check of executorThreadsStarted in ValueQueue#submitRefillTask() evades lock 
> acquisition
> 
>
> Key: HADOOP-11499
> URL: https://issues.apache.org/jira/browse/HADOOP-11499
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: hadoop-11499-001.patch
>
>
> {code}
> if (!executorThreadsStarted) {
>   synchronized (this) {
> // To ensure all requests are first queued, make coreThreads =
> // maxThreads
> // and pre-start all the Core Threads.
> executor.prestartAllCoreThreads();
> executorThreadsStarted = true;
>   }
> }
> {code}
> It is possible that two threads executing the above code both see 
> executorThreadsStarted as being false, leading to 
> executor.prestartAllCoreThreads() called twice.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-6221) RPC Client operations cannot be interrupted

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293582#comment-14293582
 ] 

Hudson commented on HADOOP-6221:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #2037 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2037/])
HADOOP-6221 RPC Client operations cannot be interrupted (stevel) (stevel: rev 
1f2b6956c2012a7d6ea7e7ba5116d3ad71c23d7e)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketIOWithTimeout.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCWaitForProxy.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> RPC Client operations cannot be interrupted
> ---
>
> Key: HADOOP-6221
> URL: https://issues.apache.org/jira/browse/HADOOP-6221
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-6221-007.patch, HADOOP-6221-008.patch, 
> HADOOP-6221.patch, HADOOP-6221.patch, HADOOP-6221.patch, HADOOP-6221.patch, 
> HADOOP-6221.patch, HADOOP-6221.patch
>
>
> RPC.waitForProxy swallows any attempts to interrupt it while waiting for a 
> proxy; this makes it hard to shut down a service that you are starting; you 
> have to wait for the timeouts. 
> There are only 4-5 places in the code that use either of the two overloaded 
> methods, removing the catch and changing the signature should not be too 
> painful, unless anyone is using the method outside the hadoop codebase. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11466) FastByteComparisons: do not use UNSAFE_COMPARER on the SPARC architecture because it is slower there

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293577#comment-14293577
 ] 

Hudson commented on HADOOP-11466:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2037 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2037/])
HADOOP-11466: move to 2.6.1 (cmccabe: rev 
21d5599067adf14d589732a586c3b10aeb0936e9)
* hadoop-common-project/hadoop-common/CHANGES.txt


> FastByteComparisons: do not use UNSAFE_COMPARER on the SPARC architecture 
> because it is slower there
> 
>
> Key: HADOOP-11466
> URL: https://issues.apache.org/jira/browse/HADOOP-11466
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io, performance, util
> Environment: Linux X86 and Solaris SPARC
>Reporter: Suman Somasundar
>Assignee: Suman Somasundar
>Priority: Minor
>  Labels: patch
> Fix For: 2.6.1
>
> Attachments: HADOOP-11466.003.patch
>
>
> One difference between Hadoop 2.x and Hadoop 1.x is a utility to compare two 
> byte arrays at coarser 8-byte granularity instead of at the byte-level. The 
> discussion at HADOOP-7761 says this fast byte comparison is somewhat faster 
> for longer arrays and somewhat slower for smaller arrays (AVRO-939). In 
> order to do 8-byte reads on addresses not aligned to 8-byte boundaries, the 
> patch uses Unsafe.getLong. The problem is that this call is incredibly 
> expensive on SPARC. The reason is that the Studio compiler detects an 
> unaligned pointer read and handles this read in software. x86 supports 
> unaligned reads, so there is no penalty for this call on x86. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-6221) RPC Client operations cannot be interrupted

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293573#comment-14293573
 ] 

Hudson commented on HADOOP-6221:


FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #83 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/83/])
HADOOP-6221 RPC Client operations cannot be interrupted (stevel) (stevel: rev 
1f2b6956c2012a7d6ea7e7ba5116d3ad71c23d7e)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCWaitForProxy.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketIOWithTimeout.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java


> RPC Client operations cannot be interrupted
> ---
>
> Key: HADOOP-6221
> URL: https://issues.apache.org/jira/browse/HADOOP-6221
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-6221-007.patch, HADOOP-6221-008.patch, 
> HADOOP-6221.patch, HADOOP-6221.patch, HADOOP-6221.patch, HADOOP-6221.patch, 
> HADOOP-6221.patch, HADOOP-6221.patch
>
>
> RPC.waitForProxy swallows any attempts to interrupt it while waiting for a 
> proxy; this makes it hard to shut down a service that you are starting; you 
> have to wait for the timeouts. 
> There are only 4-5 places in the code that use either of the two overloaded 
> methods, removing the catch and changing the signature should not be too 
> painful, unless anyone is using the method outside the hadoop codebase. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11509) change parsing sequence in GenericOptionsParser to parse -D parameters first

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293583#comment-14293583
 ] 

Hudson commented on HADOOP-11509:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2037 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2037/])
HADOOP-11509. Change parsing sequence in GenericOptionsParser to parse (xgong: 
rev 0bf333911c950f22ec0f784bf465306e20b0d507)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java


> change parsing sequence in GenericOptionsParser to parse -D parameters first
> 
>
> Key: HADOOP-11509
> URL: https://issues.apache.org/jira/browse/HADOOP-11509
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.7.0
>
> Attachments: HADOOP-11509.1.patch, HADOOP-11509.2.patch
>
>
> In GenericOptionsParser, we need to parse -D parameter first. In that case, 
> the user input parameter (through -D) can be set into configuration object 
> earlier and used to process other parameters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11466) FastByteComparisons: do not use UNSAFE_COMPARER on the SPARC architecture because it is slower there

2015-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293568#comment-14293568
 ] 

Hudson commented on HADOOP-11466:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #83 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/83/])
HADOOP-11466: move to 2.6.1 (cmccabe: rev 
21d5599067adf14d589732a586c3b10aeb0936e9)
* hadoop-common-project/hadoop-common/CHANGES.txt


> FastByteComparisons: do not use UNSAFE_COMPARER on the SPARC architecture 
> because it is slower there
> 
>
> Key: HADOOP-11466
> URL: https://issues.apache.org/jira/browse/HADOOP-11466
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io, performance, util
> Environment: Linux X86 and Solaris SPARC
>Reporter: Suman Somasundar
>Assignee: Suman Somasundar
>Priority: Minor
>  Labels: patch
> Fix For: 2.6.1
>
> Attachments: HADOOP-11466.003.patch
>
>
> One difference between Hadoop 2.x and Hadoop 1.x is a utility to compare two 
> byte arrays at coarser 8-byte granularity instead of at the byte-level. The 
> discussion at HADOOP-7761 says this fast byte comparison is somewhat faster 
> for longer arrays and somewhat slower for smaller arrays (AVRO-939). In 
> order to do 8-byte reads on addresses not aligned to 8-byte boundaries, the 
> patch uses Unsafe.getLong. The problem is that this call is incredibly 
> expensive on SPARC. The reason is that the Studio compiler detects an 
> unaligned pointer read and handles this read in software. x86 supports 
> unaligned reads, so there is no penalty for this call on x86. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11512) Use getTrimmedStrings when reading serialization keys

2015-01-27 Thread Ryan Pridgeon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293563#comment-14293563
 ] 

Ryan Pridgeon commented on HADOOP-11512:


Working on it.

> Use getTrimmedStrings when reading serialization keys
> -
>
> Key: HADOOP-11512
> URL: https://issues.apache.org/jira/browse/HADOOP-11512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Priority: Minor
>
> In the file 
> {{hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/serializer/SerializationFactory.java}},
>  we grab the IO_SERIALIZATIONS_KEY config as Configuration#getStrings(…) 
> which does not trim the input. This could cause confusing user issues if 
> someone manually overrides the key in the XML files/Configuration object 
> without using the dynamic approach.
> The call should instead use Configuration#getTrimmedStrings(…), so the 
> whitespace is trimmed before the class names are searched on the classpath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10846) DataChecksum#calculateChunkedSums not working for PPC when buffers not backed by array

2015-01-27 Thread Ayappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293554#comment-14293554
 ] 

Ayappan commented on HADOOP-10846:
--

Hi Steve,
   I attached a new patch which also handles the Windows environment (which is 
always little-endian, at least for now).
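
To illustrate the endianness concern (sketch only, not the patch itself): 
checksum bytes have a fixed on-the-wire order, so reads through a ByteBuffer 
should pin the byte order explicitly rather than assume the platform's native 
order.

{code}
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch only: with an explicit order, the same bytes decode to the same
// int on x86/Windows (little-endian) and on big-endian PowerPC alike.
ByteBuffer checksums = ByteBuffer.allocateDirect(4);
checksums.order(ByteOrder.BIG_ENDIAN);   // pin the order; don't rely on nativeOrder()
checksums.putInt(0, 0xCAFEBABE);
int stored = checksums.getInt(0);        // 0xCAFEBABE on every platform
{code}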

> DataChecksum#calculateChunkedSums not working for PPC when buffers not backed 
> by array
> --
>
> Key: HADOOP-10846
> URL: https://issues.apache.org/jira/browse/HADOOP-10846
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.4.1, 2.5.2
> Environment: PowerPC platform
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-10846-v1.patch, HADOOP-10846-v2.patch, 
> HADOOP-10846-v3.patch, HADOOP-10846-v4.patch, HADOOP-10846.patch
>
>
> Got the following exception when running Hadoop on Power PC. The 
> implementation for computing checksums does not work when the data buffer 
> and checksum buffer are not backed by arrays.
> 13/09/16 04:06:57 ERROR security.UserGroupInformation: 
> PriviledgedActionException as:biadmin (auth:SIMPLE) 
> cause:org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> org.apache.hadoop.fs.ChecksumException: Checksum error



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11512) Use getTrimmedStrings when reading serialization keys

2015-01-27 Thread Harsh J (JIRA)
Harsh J created HADOOP-11512:


 Summary: Use getTrimmedStrings when reading serialization keys
 Key: HADOOP-11512
 URL: https://issues.apache.org/jira/browse/HADOOP-11512
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.6.0
Reporter: Harsh J
Priority: Minor


In the file 
{{hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/serializer/SerializationFactory.java}},
 we grab the IO_SERIALIZATIONS_KEY config as Configuration#getStrings(…) which 
does not trim the input. This could cause confusing user issues if someone 
manually overrides the key in the XML files/Configuration object without using 
the dynamic approach.

The call should instead use Configuration#getTrimmedStrings(…), so the 
whitespace is trimmed before the class names are searched on the classpath.
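
A minimal sketch of the failure mode (values hypothetical):

{code}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// A user leaves whitespace around the comma in the XML value:
conf.set("io.serializations",
    " org.apache.hadoop.io.serializer.WritableSerialization , "
    + "org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization");

for (String name : conf.getStrings("io.serializations")) {
  // name can be " org...WritableSerialization " -- loading it as a class
  // fails with ClassNotFoundException
}
for (String name : conf.getTrimmedStrings("io.serializations")) {
  // name is "org...WritableSerialization" -- safe to look up
}
{code}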



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10181) GangliaContext does not work with multicast ganglia setup

2015-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293550#comment-14293550
 ] 

Hadoop QA commented on HADOOP-10181:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12694776/HADOOP-10181.003.patch
  against trunk revision 0da53a3.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5507//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5507//console

This message is automatically generated.

> GangliaContext does not work with multicast ganglia setup
> -
>
> Key: HADOOP-10181
> URL: https://issues.apache.org/jira/browse/HADOOP-10181
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andrew Otto
>Assignee: Andrew Johnson
>Priority: Minor
>  Labels: ganglia, hadoop, metrics, multicast
> Attachments: HADOOP-10181.001.patch, HADOOP-10181.002.patch, 
> HADOOP-10181.003.patch
>
>
> The GangliaContext class which is used to send Hadoop metrics to Ganglia uses 
> a DatagramSocket to send these metrics.  This works fine for Ganglia 
> multicast setups that are all on the same VLAN.  However, when working with 
> multiple VLANs, a packet sent via DatagramSocket to a multicast address will 
> end up with a TTL of 1.  Multicast TTL indicates the number of network hops 
> for which a particular multicast packet is valid.  The packets sent by 
> GangliaContext do not make it to ganglia aggregators that are on the same 
> multicast group but in different VLANs.
> To fix, we'd need a configuration property that specifies that multicast is 
> to be used, and another that allows setting of the multicast packet TTL.  
> With these set, we could then use MulticastSocket setTimeToLive() instead of 
> just plain ol' DatagramSocket.
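
A minimal sketch of the proposed switch (the two configuration flags are 
hypothetical; only MulticastSocket/DatagramSocket are real APIs):

{code}
import java.io.IOException;
import java.net.DatagramSocket;
import java.net.MulticastSocket;

// Sketch only: pick a MulticastSocket when multicast is enabled, so the TTL
// can be raised above the default of 1 and the packets can cross VLANs.
static DatagramSocket createSocket(boolean multicastEnabled, int ttl)
    throws IOException {
  if (multicastEnabled) {
    MulticastSocket socket = new MulticastSocket();
    socket.setTimeToLive(ttl);   // number of hops the packet remains valid
    return socket;               // MulticastSocket extends DatagramSocket
  }
  return new DatagramSocket();
}
{code}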



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10846) DataChecksum#calculateChunkedSums not working for PPC when buffers not backed by array

2015-01-27 Thread Ayappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayappan updated HADOOP-10846:
-
Attachment: HADOOP-10846-v4.patch

> DataChecksum#calculateChunkedSums not working for PPC when buffers not backed 
> by array
> --
>
> Key: HADOOP-10846
> URL: https://issues.apache.org/jira/browse/HADOOP-10846
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.4.1, 2.5.2
> Environment: PowerPC platform
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-10846-v1.patch, HADOOP-10846-v2.patch, 
> HADOOP-10846-v3.patch, HADOOP-10846-v4.patch, HADOOP-10846.patch
>
>
> Got the following exception when running Hadoop on Power PC. The 
> implementation for computing checksums does not work when the data buffer 
> and checksum buffer are not backed by arrays.
> 13/09/16 04:06:57 ERROR security.UserGroupInformation: 
> PriviledgedActionException as:biadmin (auth:SIMPLE) 
> cause:org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> org.apache.hadoop.fs.ChecksumException: Checksum error



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11510) Expose truncate API via FileContext

2015-01-27 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293534#comment-14293534
 ] 

Charles Lamb commented on HADOOP-11510:
---

[~hitliuyi],

Looks good. One small nit. To be consistent in testTruncateThroughFileContext 
you could add a few more finals to the decls.

Just out of curiosity, why 3 in newLength = fileLength/3?


> Expose truncate API via FileContext
> ---
>
> Key: HADOOP-11510
> URL: https://issues.apache.org/jira/browse/HADOOP-11510
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HADOOP-11510.001.patch
>
>
> We also need to expose truncate API via {{org.apache.hadoop.fs.FileContext}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11417) review filesystem seek logic, clarify/confirm spec, test & fix compliance

2015-01-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293495#comment-14293495
 ] 

Steve Loughran commented on HADOOP-11417:
-

Looks like s3n, s3a and swift all fail here: either they check and reject the 
seek, or they hand off to HTTP to open at the offset, which then fails.

It would be nice to have a common solution. To complicate things, they will all 
need to close their HTTP input streams so that future seeks don't get confused 
about where they are. Other parts of the code will then have to check for the 
new special state "stream closed at end of file", differentiating it from 
"stream closed after close()". A sketch of that state split follows.
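
A minimal sketch of that state split (all names hypothetical):

{code}
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

// Sketch only: "HTTP stream released at EOF" is recoverable by a seek();
// "close() called by the user" is terminal.
abstract class ObjectStoreInputStream extends InputStream {
  private boolean userClosed;    // close() was called by the user
  private boolean releasedAtEof; // wrapped HTTP stream dropped at end of file
  protected long pos;

  /** Issue a new ranged HTTP request starting at the given offset. */
  protected abstract void reopenAt(long newPos) throws IOException;

  public synchronized void seek(long newPos) throws IOException {
    if (userClosed) {
      throw new IOException("Stream is closed!");
    }
    if (newPos < 0) {
      throw new EOFException("Cannot seek to negative offset");
    }
    if (releasedAtEof) {
      reopenAt(newPos);          // recoverable: only the HTTP stream is gone
      releasedAtEof = false;
    }
    pos = newPos;
  }

  @Override
  public void close() {
    userClosed = true;           // terminal: no further seek or read allowed
  }
}
{code}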

> review filesystem seek logic, clarify/confirm spec, test & fix compliance
> -
>
> Key: HADOOP-11417
> URL: https://issues.apache.org/jira/browse/HADOOP-11417
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs, fs/s3, fs/swift
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> HADOOP-11270 implies there's a diff in the way HDFS seeks and the object 
> stores on the action {{seek(len(file))}}
> # review what HDFS does, add contract test to exactly demonstrate HDFS 
> behaviour.
> # ensure FS spec is consistent with this
> # test/audit all supported filesystems to verify consistent behaviour
> # fix where appropriate



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11417) review filesystem seek logic, clarify/confirm spec, test & fix compliance

2015-01-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293487#comment-14293487
 ] 

Steve Loughran commented on HADOOP-11417:
-

Looking at the HDFS code, the logic is
{code}
if (targetPos > getFileLength()) {
  throw new EOFException("Cannot seek after EOF");
}
if (targetPos < 0) {
  throw new EOFException("Cannot seek to negative offset");
}
if (closed) {
  throw new IOException("Stream is closed!");
}
{code}

That is: it is not an error to {{seek(len(file))}}.

Instead, on the {{read()}} operation, it goes
{code}
if (pos < getFileLength()) {
  ... the read logic, which appears to either return success or throw something
}
return -1;
{code}

That is: you can seek to the length of a file; the read() operation then 
returns -1.

h3. Conclusions

The FS spec is wrong, as it says filesystems MAY throw an exception for any seek 
>= len(file). The current text reads:

{code}
s > 0 and ((s==0) or ((s < len(data)))) else raise [EOFException, IOException]

Some FileSystems do not raise an exception if this condition is not met. They
instead return -1 on any `read()` operation where, at the time of the read,
`len(data(FSDIS)) < pos(FSDIS)`.
{code}

It should instead have the condition
{code}
s >= 0 and s < len(data) else raise [EOFException, IOException]
{code}

This matches HDFS and handles what was considered the special case: seek(0) is 
always valid.

As HADOOP-11270 notes, at least one of the object stores does not follow HDFS 
behaviour. Apart from a special test for seek(0), {{AbstractContractSeekTest}} 
does not test the case {{seek(len(file))}}. It does test {{seek(len(file)+2)}}, 
going far enough past the end to resolve any ambiguity.

Proposed:

# correct the spec to match HDFS
# add a new test in {{AbstractContractSeekTest}} which declares that all 
filesystem clients must support {{seek(len(file))}} (a sketch follows below). 
# see what fails. 
# fix them.
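
A minimal sketch of the proposed contract test, written as a method of the 
contract test class and assuming the usual {{ContractTestUtils}} helpers 
({{path}}, {{dataset}}, {{createFile}}); treat the helper signatures as 
assumptions:

{code}
// Sketch only: seek(len(file)) must succeed, and the read() that follows
// must return -1, matching the HDFS behaviour shown above.
@Test
public void testSeekToEndOfFileThenRead() throws Throwable {
  Path path = path("seek-to-eof.txt");
  byte[] data = dataset(256, 'a', 'z');
  createFile(getFileSystem(), path, false, data);
  FSDataInputStream in = getFileSystem().open(path);
  try {
    in.seek(data.length);        // seeking to exactly len(file): no error
    assertEquals("read() at end of file", -1, in.read());
  } finally {
    in.close();
  }
}
{code}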





> review filesystem seek logic, clarify/confirm spec, test & fix compliance
> -
>
> Key: HADOOP-11417
> URL: https://issues.apache.org/jira/browse/HADOOP-11417
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs, fs/s3, fs/swift
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> HADOOP-11270 implies there's a diff in the way HDFS seeks and the object 
> stores on the action {{seek(len(file))}}
> # review what HDFS does, add contract test to exactly demonstrate HDFS 
> behaviour.
> # ensure FS spec is consistent with this
> # test/audit all supported filesystems to verify consistent behaviour
> # fix where appropriate



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11082) Resolve findbugs warnings in hadoop-aws module

2015-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293479#comment-14293479
 ] 

Hadoop QA commented on HADOOP-11082:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12669951/HADOOP-11082.patch
  against trunk revision 0da53a3.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5506//console

This message is automatically generated.

> Resolve findbugs warnings in hadoop-aws module
> --
>
> Key: HADOOP-11082
> URL: https://issues.apache.org/jira/browse/HADOOP-11082
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: David S. Wang
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: HADOOP-11082.patch, findbugs.xml
>
>
> Currently hadoop-aws module has the findbugs exclude file from hadoop-common. 
>  It would be nice to address the findbugs bugs eventually.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11082) Resolve findbugs warnings in hadoop-aws module

2015-01-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11082:

 Target Version/s: 2.7.0  (was: 2.6.0)
Affects Version/s: (was: 3.0.0)
   2.6.0
   Status: Patch Available  (was: Open)

resubmitting. 

Colin, that FS contract base test is probably forever stuck in java3-land, as 
we don't know which external filesystems have subclassed it for testing. Pity. 

> Resolve findbugs warnings in hadoop-aws module
> --
>
> Key: HADOOP-11082
> URL: https://issues.apache.org/jira/browse/HADOOP-11082
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: David S. Wang
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: HADOOP-11082.patch, findbugs.xml
>
>
> Currently hadoop-aws module has the findbugs exclude file from hadoop-common. 
>  It would be nice to address the findbugs bugs eventually.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10181) GangliaContext does not work with multicast ganglia setup

2015-01-27 Thread Andrew Johnson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293473#comment-14293473
 ] 

Andrew Johnson commented on HADOOP-10181:
-

Thanks for the feedback, [~cnauroth]!  I've submitted a new patch to address 
your comments.

> GangliaContext does not work with multicast ganglia setup
> -
>
> Key: HADOOP-10181
> URL: https://issues.apache.org/jira/browse/HADOOP-10181
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andrew Otto
>Assignee: Andrew Johnson
>Priority: Minor
>  Labels: ganglia, hadoop, metrics, multicast
> Attachments: HADOOP-10181.001.patch, HADOOP-10181.002.patch, 
> HADOOP-10181.003.patch
>
>
> The GangliaContext class which is used to send Hadoop metrics to Ganglia uses 
> a DatagramSocket to send these metrics.  This works fine for Ganglia 
> multicast setups that are all on the same VLAN.  However, when working with 
> multiple VLANs, a packet sent via DatagramSocket to a multicast address will 
> end up with a TTL of 1.  Multicast TTL indicates the number of network hops 
> for which a particular multicast packet is valid.  The packets sent by 
> GangliaContext do not make it to ganglia aggregators that are on the same 
> multicast group but in different VLANs.
> To fix, we'd need a configuration property that specifies that multicast is 
> to be used, and another that allows setting of the multicast packet TTL.  
> With these set, we could then use MulticastSocket setTimeToLive() instead of 
> just plain ol' DatagramSocket.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11082) Resolve findbugs warnings in hadoop-aws module

2015-01-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11082:

Status: Open  (was: Patch Available)

> Resolve findbugs warnings in hadoop-aws module
> --
>
> Key: HADOOP-11082
> URL: https://issues.apache.org/jira/browse/HADOOP-11082
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: David S. Wang
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: HADOOP-11082.patch, findbugs.xml
>
>
> Currently hadoop-aws module has the findbugs exclude file from hadoop-common. 
>  It would be nice to address the findbugs bugs eventually.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10309) S3 block filesystem should more aggressively delete temporary files

2015-01-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293471#comment-14293471
 ] 

Steve Loughran commented on HADOOP-10309:
-

patch isn't applying because the code is now in the {{hadoop-aws}} module. 
Otherwise: looks straightforward to apply

> S3 block filesystem should more aggressively delete temporary files
> ---
>
> Key: HADOOP-10309
> URL: https://issues.apache.org/jira/browse/HADOOP-10309
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Joe Kelley
>Priority: Minor
> Attachments: HADOOP-10309.patch
>
>
> The S3 FileSystem reading implementation downloads block files into a 
> configurable temporary directory. deleteOnExit() is called on these files, so 
> they are deleted when the JVM exits.
> However, JVM reuse can lead to JVMs that stick around for a very long time. 
> This can cause these temporary files to build up indefinitely and, in the 
> worst case, fill up the local directory.
> After a block file has been read, there is no reason to keep it around. It 
> should be deleted.
> Writing to the S3 FileSystem already has this behavior; after a temporary 
> block file is written and uploaded to S3, it is deleted immediately; there is 
> no need to wait for the JVM to exit.
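
A minimal sketch of the proposed behaviour (class name hypothetical): delete 
the local block file as soon as the stream over it is closed, instead of 
waiting for the JVM to exit.

{code}
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

// Sketch only: remove the temporary block file on close(); keep
// deleteOnExit() merely as a safety net if the delete fails.
class TempBlockInputStream extends FileInputStream {
  private final File blockFile;

  TempBlockInputStream(File blockFile) throws IOException {
    super(blockFile);
    this.blockFile = blockFile;
  }

  @Override
  public void close() throws IOException {
    super.close();
    if (!blockFile.delete()) {
      blockFile.deleteOnExit();   // fall back to the old behaviour
    }
  }
}
{code}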



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

