[jira] [Commented] (HADOOP-11664) Loading predefined EC schemas from configuration

2015-03-23 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377361#comment-14377361
 ] 

Kai Zheng commented on HADOOP-11664:


Let's have it. I will update the patch to include the mentioned XML file.

> Loading predefined EC schemas from configuration
> 
>
> Key: HADOOP-11664
> URL: https://issues.apache.org/jira/browse/HADOOP-11664
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11664-v2.patch, HDFS-7371_v1.patch
>
>
> A system administrator can configure multiple EC codecs in the hdfs-site.xml 
> file, and codec instances or schemas in a new configuration file named 
> ec-schema.xml in the conf folder. A codec can be referenced by its instance 
> or schema using the codec name, and a schema can be selected by its schema 
> name for a folder or EC zone to enforce EC. Once a schema is used to define 
> an EC zone, its associated parameter values will be stored as xattributes 
> and respected thereafter.
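
For illustration only, a minimal sketch (not the patch's code) of how a chosen schema name could be persisted as an extended attribute on an EC zone directory; the xattr key "user.ec.schema.name" and the schema name "RS-6-3" are hypothetical:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class EcZoneXAttrSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path zone = new Path("/ec-zone");
    // Record which schema governs this zone (hypothetical user-namespace key).
    fs.setXAttr(zone, "user.ec.schema.name", "RS-6-3".getBytes("UTF-8"));
    // Later readers resolve the schema name back from the xattr.
    byte[] raw = fs.getXAttr(zone, "user.ec.schema.name");
    System.out.println("EC schema for " + zone + ": " + new String(raw, "UTF-8"));
  }
}
{code}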



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11664) Loading predefined EC schemas from configuration

2015-03-23 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377340#comment-14377340
 ] 

Zhe Zhang commented on HADOOP-11664:


Great idea, Vinay!






> Loading predefined EC schemas from configuration
> 
>
> Key: HADOOP-11664
> URL: https://issues.apache.org/jira/browse/HADOOP-11664
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11664-v2.patch, HDFS-7371_v1.patch
>
>
> A system administrator can configure multiple EC codecs in the hdfs-site.xml 
> file, and codec instances or schemas in a new configuration file named 
> ec-schema.xml in the conf folder. A codec can be referenced by its instance 
> or schema using the codec name, and a schema can be selected by its schema 
> name for a folder or EC zone to enforce EC. Once a schema is used to define 
> an EC zone, its associated parameter values will be stored as xattributes 
> and respected thereafter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11664) Loading predefined EC schemas from configuration

2015-03-23 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377334#comment-14377334
 ] 

Vinayakumar B commented on HADOOP-11664:


How about including a predefined schema XML in this patch as well?

> Loading predefined EC schemas from configuration
> 
>
> Key: HADOOP-11664
> URL: https://issues.apache.org/jira/browse/HADOOP-11664
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11664-v2.patch, HDFS-7371_v1.patch
>
>
> A system administrator can configure multiple EC codecs in the hdfs-site.xml 
> file, and codec instances or schemas in a new configuration file named 
> ec-schema.xml in the conf folder. A codec can be referenced by its instance 
> or schema using the codec name, and a schema can be selected by its schema 
> name for a folder or EC zone to enforce EC. Once a schema is used to define 
> an EC zone, its associated parameter values will be stored as xattributes 
> and respected thereafter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11701) RPC authentication fallback option should support enabling fallback only for specific connections.

2015-03-23 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377295#comment-14377295
 ] 

Yongjun Zhang commented on HADOOP-11701:


Hi [~cnauroth],

Thanks for creating this jira. I have a question: with the HDFS-6776 fix, an 
insecure cluster would return a null delegation token and a secure cluster will 
return a non-null one. The fallback may happen only for a null delegation 
token, which means only for an insecure cluster. So whether a token is null 
already serves the purpose of distinguishing the clusters we want to fall back 
for from the ones we don't. Is that not sufficient? Thanks.




  

> RPC authentication fallback option should support enabling fallback only for 
> specific connections.
> --
>
> Key: HADOOP-11701
> URL: https://issues.apache.org/jira/browse/HADOOP-11701
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, security
>Reporter: Chris Nauroth
>
> We currently support the {{ipc.client.fallback-to-simple-auth-allowed}} 
> configuration property so that a client configured with security can fall 
> back to simple authentication when communicating with an unsecured server. 
> This is a global property that enables the fallback behavior for all RPC 
> connections, even though fallback is only desirable for clusters that are 
> known to be unsecured. This issue proposes to support configuring fallback 
> on specific connections, rather than all connections globally.
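
For reference, a minimal sketch of how the existing global flag is toggled today through the Configuration API; the per-connection variant proposed in this issue does not exist yet, so no key is shown for it:

{code}
import org.apache.hadoop.conf.Configuration;

public class FallbackFlagSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Existing global switch: lets a secured client fall back to simple auth
    // for ALL RPC connections, which is exactly the limitation described above.
    conf.setBoolean("ipc.client.fallback-to-simple-auth-allowed", true);
    System.out.println(conf.getBoolean(
        "ipc.client.fallback-to-simple-auth-allowed", false));
  }
}
{code}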



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11377) jdiff failing on java 7 and java 8, "Null.java" not found

2015-03-23 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-11377:

Priority: Major  (was: Blocker)

> jdiff failing on java 7 and java 8, "Null.java" not found
> -
>
> Key: HADOOP-11377
> URL: https://issues.apache.org/jira/browse/HADOOP-11377
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.6.0, 2.7.0
> Environment: Java8 jenkins
>Reporter: Steve Loughran
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-11377.001.patch
>
>
> Jdiff is having problems on Java 8, as it cannot find a javadoc for the new 
> {{Null}} datatype
> {code}
> '
> The ' characters around the executable and arguments are
> not part of the command.
>   [javadoc] javadoc: error - Illegal package name: ""
>   [javadoc] javadoc: error - File not found: 
> "
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11377) jdiff failing on java 7 and java 8, "Null.java" not found

2015-03-23 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377290#comment-14377290
 ] 

Tsuyoshi Ozawa commented on HADOOP-11377:
-

[~vinodkv] To reproduce the problem, please run the following command:
{code}
mvn package -Pdocs -DskipTests 
{code}

If FINDBUGS_HOME is set, the command succeeds but the error still occurs 
because Null.java is not found - please grep the output. The patch fixes the 
error.

Akira told me that the reason for the build failure is that env.FINDBUGS_HOME 
is not set; the ant script cannot find default.xls and fails as a result. We 
should document on HADOOP-11457 that FINDBUGS_HOME needs to be set. However, I 
don't think this problem is a critical issue, so I'm marking it as a 
non-blocker.

> jdiff failing on java 7 and java 8, "Null.java" not found
> -
>
> Key: HADOOP-11377
> URL: https://issues.apache.org/jira/browse/HADOOP-11377
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.6.0, 2.7.0
> Environment: Java8 jenkins
>Reporter: Steve Loughran
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-11377.001.patch
>
>
> Jdiff is having problems on Java 8, as it cannot find a javadoc for the new 
> {{Null}} datatype
> {code}
> '
> The ' characters around the executable and arguments are
> not part of the command.
>   [javadoc] javadoc: error - Illegal package name: ""
>   [javadoc] javadoc: error - File not found: 
> "
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11741) Add LOG.isDebugEnabled() guard for some LOG.debug(..)

2015-03-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377289#comment-14377289
 ] 

Hadoop QA commented on HADOOP-11741:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12706821/HADOOP-11741.001.patch
  against trunk revision 9fae455.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5987//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5987//console

This message is automatically generated.

> Add LOG.isDebugEnabled() guard for some LOG.debug(..)
> -
>
> Key: HADOOP-11741
> URL: https://issues.apache.org/jira/browse/HADOOP-11741
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HADOOP-11741.001.patch
>
>
> {{isDebugEnabled()}} is optional. But when the arguments contain:
> 1. lots of String concatenation
> 2. complicated function calls
> then {{LOG.debug(..)}} should be guarded with {{LOG.isDebugEnabled()}} to 
> avoid unnecessary argument evaluation and improve performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11090) [Umbrella] Support Java 8 in Hadoop

2015-03-23 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-11090:

Assignee: Mohammad Kamrul Islam  (was: Tsuyoshi Ozawa)

> [Umbrella] Support Java 8 in Hadoop
> ---
>
> Key: HADOOP-11090
> URL: https://issues.apache.org/jira/browse/HADOOP-11090
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
>
> Java 8 is coming quickly to various clusters. Making sure Hadoop works 
> seamlessly with Java 8 is important for the Apache community.
>
> This JIRA is to track the issues/experiences encountered during Java 8 
> migration. If you find a potential bug, please create a separate JIRA, either 
> as a sub-task or linked to this JIRA.
> If you find a Hadoop or JVM configuration tuning opportunity, you can create 
> a JIRA as well, or you can add a comment here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11090) [Umbrella] Support Java 8 in Hadoop

2015-03-23 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa reassigned HADOOP-11090:
---

Assignee: Tsuyoshi Ozawa  (was: Mohammad Kamrul Islam)

> [Umbrella] Support Java 8 in Hadoop
> ---
>
> Key: HADOOP-11090
> URL: https://issues.apache.org/jira/browse/HADOOP-11090
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Mohammad Kamrul Islam
>Assignee: Tsuyoshi Ozawa
>
> Java 8 is coming quickly to various clusters. Making sure Hadoop works 
> seamlessly with Java 8 is important for the Apache community.
>
> This JIRA is to track the issues/experiences encountered during Java 8 
> migration. If you find a potential bug, please create a separate JIRA, either 
> as a sub-task or linked to this JIRA.
> If you find a Hadoop or JVM configuration tuning opportunity, you can create 
> a JIRA as well, or you can add a comment here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11742) mkdir by file system shell fails on an empty bucket

2015-03-23 Thread Takenori Sato (JIRA)
Takenori Sato created HADOOP-11742:
--

 Summary: mkdir by file system shell fails on an empty bucket
 Key: HADOOP-11742
 URL: https://issues.apache.org/jira/browse/HADOOP-11742
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
 Environment: CentOS 7
Reporter: Takenori Sato


I have built the latest 2.7 and tried S3AFileSystem.

I then found that _mkdir_ fails on an empty bucket, named *s3a* here, as follows:

{code}
# hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3a://s3a/foo
15/03/24 03:49:35 DEBUG s3a.S3AFileSystem: Getting path status for 
s3a://s3a/foo (foo)
15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/foo
15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ ()
15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/
mkdir: `s3a://s3a/foo': No such file or directory
{code}

So does _ls_.

{code}
# hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3a://s3a/
15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ ()
15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/
ls: `s3a://s3a/': No such file or directory
{code}

This is how it works via s3n.

{code}
# hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/
# hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3n://s3n/foo
# hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/
Found 1 items
drwxrwxrwx   -  0 1970-01-01 00:00 s3n://s3n/foo
{code}

The snapshot is the following:

{quote}
\# git branch
\* branch-2.7
  trunk
\# git log
commit 929b04ce3a4fe419dece49ed68d4f6228be214c1
Author: Harsh J 
Date:   Sun Mar 22 10:18:32 2015 +0530
{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11741) Add LOG.isDebugEnabled() guard for some LOG.debug(..)

2015-03-23 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HADOOP-11741:
---
Status: Patch Available  (was: Open)

> Add LOG.isDebugEnabled() guard for some LOG.debug(..)
> -
>
> Key: HADOOP-11741
> URL: https://issues.apache.org/jira/browse/HADOOP-11741
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HADOOP-11741.001.patch
>
>
> {{isDebugEnabled()}} is optional. But when the arguments contain:
> 1. lots of String concatenation
> 2. complicated function calls
> then {{LOG.debug(..)}} should be guarded with {{LOG.isDebugEnabled()}} to 
> avoid unnecessary argument evaluation and improve performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11741) Add LOG.isDebugEnabled() guard for some LOG.debug(..)

2015-03-23 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HADOOP-11741:
---
Attachment: HADOOP-11741.001.patch

> Add LOG.isDebugEnabled() guard for some LOG.debug(..)
> -
>
> Key: HADOOP-11741
> URL: https://issues.apache.org/jira/browse/HADOOP-11741
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HADOOP-11741.001.patch
>
>
> {{isDebugEnabled()}} is optional. But when the arguments contain:
> 1. lots of String concatenation
> 2. complicated function calls
> then {{LOG.debug(..)}} should be guarded with {{LOG.isDebugEnabled()}} to 
> avoid unnecessary argument evaluation and improve performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11741) Add LOG.isDebugEnabled() guard for some LOG.debug(..)

2015-03-23 Thread Walter Su (JIRA)
Walter Su created HADOOP-11741:
--

 Summary: Add LOG.isDebugEnabled() guard for some LOG.debug(..)
 Key: HADOOP-11741
 URL: https://issues.apache.org/jira/browse/HADOOP-11741
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Walter Su
Assignee: Walter Su


{{isDebugEnabled()}} is optional. But when the arguments contain:
1. lots of String concatenation
2. complicated function calls
then {{LOG.debug(..)}} should be guarded with {{LOG.isDebugEnabled()}} to avoid 
unnecessary argument evaluation and improve performance.
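
A minimal sketch of the guard pattern in question (class and logger names are illustrative only):

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class DebugGuardSketch {
  private static final Log LOG = LogFactory.getLog(DebugGuardSketch.class);

  void process(String blockId, long offset) {
    // Unguarded: the String concatenation in the argument is evaluated even
    // when debug logging is disabled.
    LOG.debug("Processing block " + blockId + " at offset " + offset);

    // Guarded: the argument expression is only evaluated when debug is on.
    if (LOG.isDebugEnabled()) {
      LOG.debug("Processing block " + blockId + " at offset " + offset);
    }
  }
}
{code}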



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11611) fix TestHTracedRESTReceiver unit test failures

2015-03-23 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11611:
--
Affects Version/s: (was: 3.2)

> fix TestHTracedRESTReceiver unit test failures
> --
>
> Key: HADOOP-11611
> URL: https://issues.apache.org/jira/browse/HADOOP-11611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Critical
>
> Fix some issues with HTracedRESTReceiver that are resulting in unit test 
> failures.
> There were two main issues:
> * a better way to launch htraced
> * fixes to the HTracedRESTReceiver logic



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9642) Configuration to resolve environment variables via ${env.VARIABLE} references

2015-03-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377001#comment-14377001
 ] 

Hadoop QA commented on HADOOP-9642:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706745/HADOOP-9642.002.patch
  against trunk revision 972f1f1.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5986//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5986//console

This message is automatically generated.

> Configuration to resolve environment variables via ${env.VARIABLE} references
> -
>
> Key: HADOOP-9642
> URL: https://issues.apache.org/jira/browse/HADOOP-9642
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf, scripts
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Steve Loughran
>Assignee: Kengo Seki
>Priority: Minor
> Attachments: HADOOP-9642.001.patch, HADOOP-9642.002.patch
>
>
> We should be able to get env variables from Configuration files, rather than 
> just system properties. I propose using the traditional {{env}} prefix, as in 
> {{${env.PATH}}}, to make it immediately clear to people reading a conf file 
> that it's an env variable, and to avoid any confusion with system properties 
> and existing configuration properties.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-10512) Document usage of node-group layer topology

2015-03-23 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du resolved HADOOP-10512.
-
Resolution: Duplicate
  Assignee: (was: Junping Du)

> Document usage of node-group layer topology
> ---
>
> Key: HADOOP-10512
> URL: https://issues.apache.org/jira/browse/HADOOP-10512
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Junping Du
>
> For work under the umbrella of HADOOP-8468, users can enable a node-group 
> layer between node and rack in some situations. We should document it after 
> YARN-18 and YARN-19 are figured out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10512) Document usage of node-group layer topology

2015-03-23 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376963#comment-14376963
 ] 

Junping Du commented on HADOOP-10512:
-

Agree with [~aw]. We can have a document before the YARN code gets checked in. 
Given HDFS-6261 is almost there, let's mark this JIRA as a duplicate. We can 
have a separate one for the YARN document when the patch is there.

> Document usage of node-group layer topology
> ---
>
> Key: HADOOP-10512
> URL: https://issues.apache.org/jira/browse/HADOOP-10512
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Junping Du
>Assignee: Junping Du
>
> For work under the umbrella of HADOOP-8468, users can enable a node-group 
> layer between node and rack in some situations. We should document it after 
> YARN-18 and YARN-19 are figured out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9642) Configuration to resolve environment variables via ${env.VARIABLE} references

2015-03-23 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376857#comment-14376857
 ] 

Kengo Seki commented on HADOOP-9642:


":-" and "\-" behave in the same way as shell's variable expansion.

> Configuration to resolve environment variables via ${env.VARIABLE} references
> -
>
> Key: HADOOP-9642
> URL: https://issues.apache.org/jira/browse/HADOOP-9642
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf, scripts
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Steve Loughran
>Assignee: Kengo Seki
>Priority: Minor
> Attachments: HADOOP-9642.001.patch, HADOOP-9642.002.patch
>
>
> We should be able to get env variables from Configuration files, rather than 
> just system properties. I propose using the traditional {{env}} prefix, as in 
> {{${env.PATH}}}, to make it immediately clear to people reading a conf file 
> that it's an env variable, and to avoid any confusion with system properties 
> and existing configuration properties.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9642) Configuration to resolve environment variables via ${env.VARIABLE} references

2015-03-23 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-9642:
---
Attachment: HADOOP-9642.002.patch

Attaching a revised patch.

- Default value feature added. In addition to ":\-", "-" is also supported.
- Javadoc modified.

Could anyone review it? Thank you.
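
For illustration, a minimal sketch of how the two forms would behave (hypothetical property names; it assumes a build with this patch applied, where expansion mirrors shell variable expansion as described):

{code}
import org.apache.hadoop.conf.Configuration;

public class EnvVarExpansionSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // ${env.VAR-default}: the default applies only when VAR is undefined.
    conf.set("sketch.tmp.dir", "${env.TMPDIR-/tmp}");
    // ${env.VAR:-default}: the default applies when VAR is undefined or empty.
    conf.set("sketch.user", "${env.USER:-nobody}");
    System.out.println(conf.get("sketch.tmp.dir"));
    System.out.println(conf.get("sketch.user"));
  }
}
{code}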

> Configuration to resolve environment variables via ${env.VARIABLE} references
> -
>
> Key: HADOOP-9642
> URL: https://issues.apache.org/jira/browse/HADOOP-9642
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf, scripts
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Steve Loughran
>Assignee: Kengo Seki
>Priority: Minor
> Attachments: HADOOP-9642.001.patch, HADOOP-9642.002.patch
>
>
> We should be able to get env variables from Configuration files, rather than 
> just system properties. I propose using the traditional {{env}} prefix, as in 
> {{${env.PATH}}}, to make it immediately clear to people reading a conf file 
> that it's an env variable, and to avoid any confusion with system properties 
> and existing configuration properties.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11726) Allow applications to access both secure and insecure clusters at the same time

2015-03-23 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376697#comment-14376697
 ] 

Haohui Mai commented on HADOOP-11726:
-

bq. will it solve the "implementing a secure distcp application that can only 
write to secure clusters" issue?

Yes. Obviously the application needs to verify whether the token is authentic, 
but it is feasible as long as you don't swallow the error at the file system 
layer.

bq. For the "fix all application" approach, I found that for distcp, there is a 
wildcard issue that the change has to go beyond distcp. See my latest comment 
in HDFS-7036. 

Just to echo my comments in HDFS-6776 (at least for the distcp use case):

bq. What can be done is to put the hack there, and to inject a corresponding 
token into token cache so that the filesystem no longer need to get the DT from 
the server. 

https://issues.apache.org/jira/browse/HDFS-6776?focusedCommentId=14121719&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14121719
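
To make the token-cache idea concrete, a rough sketch only; whether a placeholder entry of this shape satisfies downstream checks is an assumption, and the service alias is hypothetical:

{code}
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

public class TokenCacheSketch {
  public static void main(String[] args) throws Exception {
    Credentials creds = new Credentials();
    // Register an empty placeholder token under the insecure cluster's service
    // alias so the client does not try to fetch a real DT from that cluster.
    creds.addToken(new Text("hdfs://insecure-cluster:8020"),
        new Token<TokenIdentifier>());
    // Attach the credentials to the current user (and thus the token cache).
    UserGroupInformation.getCurrentUser().addCredentials(creds);
  }
}
{code}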



> Allow applications to access both secure and insecure clusters at the same 
> time
> ---
>
> Key: HADOOP-11726
> URL: https://issues.apache.org/jira/browse/HADOOP-11726
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>
> Today there are multiple integration issues when an application 
> (particularly, distcp) accesses both secure and insecure clusters (e.g., 
> HDFS-6776 / HDFS-7036).
> There are four use cases in this scenario:
> * Secure <-> Secure. Works.
> * Insecure <-> Insecure. Works.
> * Accessing secure clusters from an insecure client. Will not work as 
> expected. The insecure client won't be able to authenticate with the secure 
> cluster; otherwise it would be a security vulnerability.
> * Accessing insecure clusters from a secure client. Currently this will not 
> work, as the secure client won't be able to obtain a delegation token from 
> the insecure cluster.
> This jira proposes to fix the last use case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HADOOP-11676) Add API to NetworkTopology for getting all racks

2015-03-23 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang reopened HADOOP-11676:


> Add API to NetworkTopology for getting all racks
> 
>
> Key: HADOOP-11676
> URL: https://issues.apache.org/jira/browse/HADOOP-11676
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11676.002.patch, HADOOP-11676.patch
>
>
> The two existing NetworkTopology.chooseRandom(..) APIs support choosing a 
> node from a scope and choosing a node outside a scope. The 
> BlockPlacementPolicyDefault class uses these two APIs to choose a node from 
> one rack or outside one rack.
> We want to implement a new placement policy called 
> BlockPlacementPolicyFaultTolerant which tries its best to spread replicas 
> across as many racks as possible. To achieve this, we need to know how many 
> replicas each rack has, and first we need to get all racks.
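
As a rough sketch of how a placement policy might consume such an API: getRacks() below is the hypothetical method this JIRA asks for (left as a comment since it does not exist yet), and the per-rack counting is illustrative:

{code}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.net.NetworkTopology;
import org.apache.hadoop.net.Node;

public class RackSpreadSketch {
  // Count chosen replicas per rack so a policy can prefer under-used racks.
  static Map<String, Integer> countPerRack(NetworkTopology topology,
                                           List<Node> chosenNodes) {
    Map<String, Integer> perRack = new HashMap<>();
    // Hypothetical API proposed here: seed the map with every known rack.
    // for (String rack : topology.getRacks()) { perRack.put(rack, 0); }
    for (Node n : chosenNodes) {
      perRack.merge(n.getNetworkLocation(), 1, Integer::sum);
    }
    return perRack;
  }
}
{code}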



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11676) Add API to NetworkTopology for getting all racks

2015-03-23 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang resolved HADOOP-11676.

Resolution: Won't Fix

Reverted per discussion under HDFS-7891

> Add API to NetworkTopology for getting all racks
> 
>
> Key: HADOOP-11676
> URL: https://issues.apache.org/jira/browse/HADOOP-11676
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11676.002.patch, HADOOP-11676.patch
>
>
> The two existing NetworkTopology.chooseRandom(..) APIs support choosing a 
> node from a scope and choosing a node outside a scope. The 
> BlockPlacementPolicyDefault class uses these two APIs to choose a node from 
> one rack or outside one rack.
> We want to implement a new placement policy called 
> BlockPlacementPolicyFaultTolerant which tries its best to spread replicas 
> across as many racks as possible. To achieve this, we need to know how many 
> replicas each rack has, and first we need to get all racks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11740) Combine erasure encoder and decoder interfaces

2015-03-23 Thread Zhe Zhang (JIRA)
Zhe Zhang created HADOOP-11740:
--

 Summary: Combine erasure encoder and decoder interfaces
 Key: HADOOP-11740
 URL: https://issues.apache.org/jira/browse/HADOOP-11740
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Zhe Zhang


Rationale [discussed | 
https://issues.apache.org/jira/browse/HDFS-7337?focusedCommentId=14376540&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14376540]
 under HDFS-7337.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11377) jdiff failing on java 7 and java 8, "Null.java" not found

2015-03-23 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376334#comment-14376334
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-11377:
--

[~ozawa] / [~ste...@apache.org], I couldn't reproduce this with either JDK. 
Steps to reproduce?

> jdiff failing on java 7 and java 8, "Null.java" not found
> -
>
> Key: HADOOP-11377
> URL: https://issues.apache.org/jira/browse/HADOOP-11377
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.6.0, 2.7.0
> Environment: Java8 jenkins
>Reporter: Steve Loughran
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-11377.001.patch
>
>
> Jdiff is having problems on Java 8, as it cannot find a javadoc for the new 
> {{Null}} datatype
> {code}
> '
> The ' characters around the executable and arguments are
> not part of the command.
>   [javadoc] javadoc: error - Illegal package name: ""
>   [javadoc] javadoc: error - File not found: 
> "
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11664) Loading predefined EC schemas from configuration

2015-03-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376179#comment-14376179
 ] 

Hadoop QA commented on HADOOP-11664:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706464/HADOOP-11664-v2.patch
  against trunk revision 82eda77.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5985//console

This message is automatically generated.

> Loading predefined EC schemas from configuration
> 
>
> Key: HADOOP-11664
> URL: https://issues.apache.org/jira/browse/HADOOP-11664
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11664-v2.patch, HDFS-7371_v1.patch
>
>
> A system administrator can configure multiple EC codecs in the hdfs-site.xml 
> file, and codec instances or schemas in a new configuration file named 
> ec-schema.xml in the conf folder. A codec can be referenced by its instance 
> or schema using the codec name, and a schema can be selected by its schema 
> name for a folder or EC zone to enforce EC. Once a schema is used to define 
> an EC zone, its associated parameter values will be stored as xattributes 
> and respected thereafter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11664) Loading predefined EC schemas from configuration

2015-03-23 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-11664:

Status: Patch Available  (was: Open)

> Loading predefined EC schemas from configuration
> 
>
> Key: HADOOP-11664
> URL: https://issues.apache.org/jira/browse/HADOOP-11664
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11664-v2.patch, HDFS-7371_v1.patch
>
>
> A system administrator can configure multiple EC codecs in the hdfs-site.xml 
> file, and codec instances or schemas in a new configuration file named 
> ec-schema.xml in the conf folder. A codec can be referenced by its instance 
> or schema using the codec name, and a schema can be selected by its schema 
> name for a folder or EC zone to enforce EC. Once a schema is used to define 
> an EC zone, its associated parameter values will be stored as xattributes 
> and respected thereafter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11737) mockito's version in hadoop-nfs’ pom.xml shouldn't be specified

2015-03-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376080#comment-14376080
 ] 

Hudson commented on HADOOP-11737:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2091 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2091/])
HADOOP-11737. mockito's version in hadoop-nfs’ pom.xml shouldn't be 
specified. Contributed by Kengo Seki. (ozawa: rev 
0b9f12c847e26103bc2304cf7114e6d103264669)
* hadoop-common-project/hadoop-nfs/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt


> mockito's version in hadoop-nfs’ pom.xml shouldn't be specified
> ---
>
> Key: HADOOP-11737
> URL: https://issues.apache.org/jira/browse/HADOOP-11737
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-11735.001.patch
>
>
> It should be removed because otherwise only hadoop-nfs will be left behind 
> when the parent upgrades mockito.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11698) remove distcpv1 from hadoop-extras

2015-03-23 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376055#comment-14376055
 ] 

Allen Wittenauer commented on HADOOP-11698:
---

Todd has openly admitted to having stopped working on Hadoop, so asking for his 
opinion is sort of pointless.

Switch it from distcpv1 to distcpv2 and call it a day.

> remove distcpv1 from hadoop-extras
> --
>
> Key: HADOOP-11698
> URL: https://issues.apache.org/jira/browse/HADOOP-11698
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-11698-branch2.patch, HADOOP-11698.patch
>
>
> distcpv1 is pretty much unsupported. We should just remove it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11737) mockito's version in hadoop-nfs’ pom.xml shouldn't be specified

2015-03-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376028#comment-14376028
 ] 

Hudson commented on HADOOP-11737:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #141 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/141/])
HADOOP-11737. mockito's version in hadoop-nfs’ pom.xml shouldn't be 
specified. Contributed by Kengo Seki. (ozawa: rev 
0b9f12c847e26103bc2304cf7114e6d103264669)
* hadoop-common-project/hadoop-nfs/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt


> mockito's version in hadoop-nfs’ pom.xml shouldn't be specified
> ---
>
> Key: HADOOP-11737
> URL: https://issues.apache.org/jira/browse/HADOOP-11737
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-11735.001.patch
>
>
> It should be removed because otherwise only hadoop-nfs will be left behind 
> when the parent upgrades mockito.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11737) mockito's version in hadoop-nfs’ pom.xml shouldn't be specified

2015-03-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375970#comment-14375970
 ] 

Hudson commented on HADOOP-11737:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #132 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/132/])
HADOOP-11737. mockito's version in hadoop-nfs’ pom.xml shouldn't be 
specified. Contributed by Kengo Seki. (ozawa: rev 
0b9f12c847e26103bc2304cf7114e6d103264669)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-nfs/pom.xml


> mockito's version in hadoop-nfs’ pom.xml shouldn't be specified
> ---
>
> Key: HADOOP-11737
> URL: https://issues.apache.org/jira/browse/HADOOP-11737
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-11735.001.patch
>
>
> It should be removed because otherwise only hadoop-nfs will be left behind 
> when the parent upgrades mockito.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11737) mockito's version in hadoop-nfs’ pom.xml shouldn't be specified

2015-03-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375956#comment-14375956
 ] 

Hudson commented on HADOOP-11737:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2073 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2073/])
HADOOP-11737. mockito's version in hadoop-nfs’ pom.xml shouldn't be 
specified. Contributed by Kengo Seki. (ozawa: rev 
0b9f12c847e26103bc2304cf7114e6d103264669)
* hadoop-common-project/hadoop-nfs/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt


> mockito's version in hadoop-nfs’ pom.xml shouldn't be specified
> ---
>
> Key: HADOOP-11737
> URL: https://issues.apache.org/jira/browse/HADOOP-11737
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-11735.001.patch
>
>
> It should be removed because otherwise only hadoop-nfs will be left behind 
> when the parent upgrades mockito.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11691) X86 build of libwinutils is broken

2015-03-23 Thread Remus Rusanu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375902#comment-14375902
 ] 

Remus Rusanu commented on HADOOP-11691:
---

Can you please tell me the midl command line you get? For me it is this one:
{code}
E:\HW\tools\OACR\bin\midl.exe /W2 /WX /nologo /char signed /env win32 /Oicf 
/app_config /out"E:\HW\project\hadoop-common\hadoop-common-project\hadoo
p-common\target/winutils/" /h "hadoopwinutilsvc_h.h" /tlb 
"E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\target/winutils/libwinutils
.tlb" /robust hadoopwinutilsvc.idl
{code}
which passes in `/env win32` as per the MIDL MSBuild Task spec, but apparently 
midl.exe wants `/win32` instead. I'm curious what the midl command line looks 
like in your case.

> X86 build of libwinutils is broken
> --
>
> Key: HADOOP-11691
> URL: https://issues.apache.org/jira/browse/HADOOP-11691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 3.0.0
>Reporter: Remus Rusanu
>Assignee: Kiran Kumar M R
> Attachments: HADOOP-11691-001.patch, HADOOP-11691-002.patch, 
> HADOOP-11691-003.patch
>
>
> HADOOP-9922 recently fixed the x86 build. After YARN-2190, compiling for x86 
> results in an error:
> {code}
> (Link target) ->
>   
> E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\target/winutils/hadoopwinutilsvc_s.obj
>  : fatal error LNK1112: module machine type 'x64' conflicts with target 
> machine type 'X86' 
> [E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11691) X86 build of libwinutils is broken

2015-03-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375893#comment-14375893
 ] 

Hadoop QA commented on HADOOP-11691:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12706518/HADOOP-11691-003.patch
  against trunk revision 0b9f12c.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5984//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5984//console

This message is automatically generated.

> X86 build of libwinutils is broken
> --
>
> Key: HADOOP-11691
> URL: https://issues.apache.org/jira/browse/HADOOP-11691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 3.0.0
>Reporter: Remus Rusanu
>Assignee: Kiran Kumar M R
> Attachments: HADOOP-11691-001.patch, HADOOP-11691-002.patch, 
> HADOOP-11691-003.patch
>
>
> HADOOP-9922 recently fixed the x86 build. After YARN-2190, compiling for x86 
> results in an error:
> {code}
> (Link target) ->
>   
> E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\target/winutils/hadoopwinutilsvc_s.obj
>  : fatal error LNK1112: module machine type 'x64' conflicts with target 
> machine type 'X86' 
> [E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11691) X86 build of libwinutils is broken

2015-03-23 Thread Kiran Kumar M R (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375888#comment-14375888
 ] 

Kiran Kumar M R commented on HADOOP-11691:
--

The build is successful for me with both patch-002 and patch-003.
Yes, $(Platform) is Win32 or x64; 'Release|Win32' is '$(Configuration)|$(Platform)'.

> X86 build of libwinutils is broken
> --
>
> Key: HADOOP-11691
> URL: https://issues.apache.org/jira/browse/HADOOP-11691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 3.0.0
>Reporter: Remus Rusanu
>Assignee: Kiran Kumar M R
> Attachments: HADOOP-11691-001.patch, HADOOP-11691-002.patch, 
> HADOOP-11691-003.patch
>
>
> HADOOP-9922 recently fixed the x86 build. After YARN-2190, compiling for x86 
> results in an error:
> {code}
> (Link target) ->
>   
> E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\target/winutils/hadoopwinutilsvc_s.obj
>  : fatal error LNK1112: module machine type 'x64' conflicts with target 
> machine type 'X86' 
> [E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11691) X86 build of libwinutils is broken

2015-03-23 Thread Remus Rusanu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375877#comment-14375877
 ] 

Remus Rusanu commented on HADOOP-11691:
---

Actually, $(Platform) is Win32; I'm not sure what makes my midl unhappy. I'll 
investigate.

> X86 build of libwinutils is broken
> --
>
> Key: HADOOP-11691
> URL: https://issues.apache.org/jira/browse/HADOOP-11691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 3.0.0
>Reporter: Remus Rusanu
>Assignee: Kiran Kumar M R
> Attachments: HADOOP-11691-001.patch, HADOOP-11691-002.patch, 
> HADOOP-11691-003.patch
>
>
> HADOOP-9922 recently fixed the x86 build. After YARN-2190, compiling for x86 
> results in an error:
> {code}
> (Link target) ->
>   
> E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\target/winutils/hadoopwinutilsvc_s.obj
>  : fatal error LNK1112: module machine type 'x64' conflicts with target 
> machine type 'X86' 
> [E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11691) X86 build of libwinutils is broken

2015-03-23 Thread Remus Rusanu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375872#comment-14375872
 ] 

Remus Rusanu commented on HADOOP-11691:
---

Hi [~kiranmr], trunk still fails to build x86 for me:
{code}
"E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj"
 (default target) (4) ->
(Link target) ->
  
E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\target/winutils/hadoopwinutilsvc_s.obj
 : fatal error LNK1112: module machine type 'x
64' conflicts with target machine type 'X86' 
[E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
{code}

I think the 
{code}
-  X64
+  $(Platform)
{code}
is incorrect because $(Platform) expands to `Release|Win32` or `Debug|Win32` 
but the MIDL Task https://msdn.microsoft.com/en-us/library/ee862478.aspx 
expects `win32` or `x64`.

> X86 build of libwinutils is broken
> --
>
> Key: HADOOP-11691
> URL: https://issues.apache.org/jira/browse/HADOOP-11691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 3.0.0
>Reporter: Remus Rusanu
>Assignee: Kiran Kumar M R
> Attachments: HADOOP-11691-001.patch, HADOOP-11691-002.patch, 
> HADOOP-11691-003.patch
>
>
> HADOOP-9922 recently fixed the x86 build. After YARN-2190, compiling for x86 
> results in an error:
> {code}
> (Link target) ->
>   
> E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\target/winutils/hadoopwinutilsvc_s.obj
>  : fatal error LNK1112: module machine type 'x64' conflicts with target 
> machine type 'X86' 
> [E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11691) X86 build of libwinutils is broken

2015-03-23 Thread Kiran Kumar M R (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375859#comment-14375859
 ] 

Kiran Kumar M R commented on HADOOP-11691:
--

[~chuanliu] There is no need to specify {{$(VCInstallDir)}} explicitly. I am 
inheriting the existing settings in the project and only appending the Win8 SDK path:

{code}
$(IncludePath);$(Windows8SDK_IncludePath);
{code}

The order of include files matters; I have changed it in patch-004. Now the 
Win8 SDK include path comes first:
{code}
$(Windows8SDK_IncludePath);$(IncludePath);
{code}

Try this file. As I mentioned in an earlier comment, libwinutils and winutils 
will use different IDL files, but they should be compatible, as Remus pointed out.

> X86 build of libwinutils is broken
> --
>
> Key: HADOOP-11691
> URL: https://issues.apache.org/jira/browse/HADOOP-11691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 3.0.0
>Reporter: Remus Rusanu
>Assignee: Kiran Kumar M R
> Attachments: HADOOP-11691-001.patch, HADOOP-11691-002.patch, 
> HADOOP-11691-003.patch
>
>
> HADOOP-9922 recently fixed the x86 build. After YARN-2190, compiling for x86 
> results in an error:
> {code}
> (Link target) ->
>   
> E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\target/winutils/hadoopwinutilsvc_s.obj
>  : fatal error LNK1112: module machine type 'x64' conflicts with target 
> machine type 'X86' 
> [E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11691) X86 build of libwinutils is broken

2015-03-23 Thread Kiran Kumar M R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar M R updated HADOOP-11691:
-
Attachment: HADOOP-11691-003.patch

> X86 build of libwinutils is broken
> --
>
> Key: HADOOP-11691
> URL: https://issues.apache.org/jira/browse/HADOOP-11691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 3.0.0
>Reporter: Remus Rusanu
>Assignee: Kiran Kumar M R
> Attachments: HADOOP-11691-001.patch, HADOOP-11691-002.patch, 
> HADOOP-11691-003.patch
>
>
> HADOOP-9922 recently fixed the x86 build. After YARN-2190, compiling for x86 
> results in an error:
> {code}
> (Link target) ->
>   
> E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\target/winutils/hadoopwinutilsvc_s.obj
>  : fatal error LNK1112: module machine type 'x64' conflicts with target 
> machine type 'X86' 
> [E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11737) mockito's version in hadoop-nfs’ pom.xml shouldn't be specified

2015-03-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375701#comment-14375701
 ] 

Hudson commented on HADOOP-11737:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #875 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/875/])
HADOOP-11737. mockito's version in hadoop-nfs’ pom.xml shouldn't be 
specified. Contributed by Kengo Seki. (ozawa: rev 
0b9f12c847e26103bc2304cf7114e6d103264669)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-nfs/pom.xml


> mockito's version in hadoop-nfs’ pom.xml shouldn't be specified
> ---
>
> Key: HADOOP-11737
> URL: https://issues.apache.org/jira/browse/HADOOP-11737
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-11735.001.patch
>
>
> It should be removed because otherwise only hadoop-nfs will be left behind 
> when the parent upgrades mockito.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11737) mockito's version in hadoop-nfs’ pom.xml shouldn't be specified

2015-03-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375694#comment-14375694
 ] 

Hudson commented on HADOOP-11737:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #141 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/141/])
HADOOP-11737. mockito's version in hadoop-nfs’ pom.xml shouldn't be 
specified. Contributed by Kengo Seki. (ozawa: rev 
0b9f12c847e26103bc2304cf7114e6d103264669)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-nfs/pom.xml


> mockito's version in hadoop-nfs’ pom.xml shouldn't be specified
> ---
>
> Key: HADOOP-11737
> URL: https://issues.apache.org/jira/browse/HADOOP-11737
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: nfs
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-11735.001.patch
>
>
> It should be removed because otherwise only hadoop-nfs will be left behind 
> when the parent upgrades mockito.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11645) Erasure Codec API covering the essential aspects for an erasure code

2015-03-23 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11645:
---
Attachment: HADOOP-11645-v2.patch

Refined the patch based on the latest branch. Ready for review.

> Erasure Codec API covering the essential aspects for an erasure code
> 
>
> Key: HADOOP-11645
> URL: https://issues.apache.org/jira/browse/HADOOP-11645
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11645-v1.patch, HADOOP-11645-v2.patch
>
>
> This is to define the even higher-level API *ErasureCodec* to cover all the 
> essential aspects of an erasure code, as discussed in detail in HDFS-7337. 
> Generally, it will cover the necessary configuration about which 
> *RawErasureCoder* to use for the code scheme, how to form and lay out the 
> BlockGroup, etc. It will also discuss how an *ErasureCodec* will be used in 
> both the client and the DataNode, in all the supported modes related to EC.
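
To make the scope easier to picture, a purely hypothetical sketch of the kind of surface such an API might expose; none of these names are taken from the patch:

{code}
// Hypothetical sketch only; the real API is defined by the attached patch and
// the HDFS-7337 discussion.
public interface ErasureCodecSketch {
  /** Schema parameters, e.g. the number of data and parity units. */
  int getNumDataUnits();
  int getNumParityUnits();

  /** Which low-level RawErasureCoder implementation backs this codec. */
  String getRawCoderFactoryName();

  /** How a BlockGroup is formed and laid out, e.g. the cell size. */
  int getCellSizeInBytes();
}
{code}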



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11649) Allow to configure multiple erasure codecs

2015-03-23 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375490#comment-14375490
 ] 

Kai Zheng commented on HADOOP-11649:


Opened HADOOP-11739 for another method, via the Java service locator.

> Allow to configure multiple erasure codecs
> --
>
> Key: HADOOP-11649
> URL: https://issues.apache.org/jira/browse/HADOOP-11649
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11649-v1.patch
>
>
> This is to allow configuring the erasure codec and coder in the core-site 
> configuration file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11739) Allow to register multiple erasure codecs via service locator

2015-03-23 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-11739:
--

 Summary: Allow to register multiple erasure codecs via service 
locator
 Key: HADOOP-11739
 URL: https://issues.apache.org/jira/browse/HADOOP-11739
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng


This is to allow registering multiple erasure codecs through the Java service 
locator, to complement the configuration-based method.
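
For context, the standard java.util.ServiceLoader pattern this would build on looks roughly as follows; the ErasureCodecProvider interface is a hypothetical stand-in, not the class the patch will register:

{code}
import java.util.ServiceLoader;

public class CodecLocatorSketch {
  // Hypothetical SPI; real registration would target the actual erasure codec
  // interface introduced on the EC branch.
  public interface ErasureCodecProvider {
    String name();
  }

  public static void main(String[] args) {
    // Discovers every implementation listed under META-INF/services/ on the
    // classpath, keyed by the provider interface's binary name.
    ServiceLoader<ErasureCodecProvider> loader =
        ServiceLoader.load(ErasureCodecProvider.class);
    for (ErasureCodecProvider codec : loader) {
      System.out.println("Found codec: " + codec.name());
    }
  }
}
{code}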



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11664) Loading predefined EC schemas from configuration

2015-03-23 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11664:
---
Attachment: HADOOP-11664-v2.patch

Refined the patch based on the latest branch. Ready for review.

> Loading predefined EC schemas from configuration
> 
>
> Key: HADOOP-11664
> URL: https://issues.apache.org/jira/browse/HADOOP-11664
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11664-v2.patch, HDFS-7371_v1.patch
>
>
> A system administrator can configure multiple EC codecs in the hdfs-site.xml 
> file, and codec instances or schemas in a new configuration file named 
> ec-schema.xml in the conf folder. A codec can be referenced by its instance 
> or schema using the codec name, and a schema can be selected by its schema 
> name for a folder or EC zone to enforce EC. Once a schema is used to define 
> an EC zone, its associated parameter values will be stored as xattributes 
> and respected thereafter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)