[jira] [Commented] (HADOOP-10432) Refactor SSLFactory to expose static method to determine HostnameVerifier

2014-03-25 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947606#comment-13947606
 ] 

Alejandro Abdelnur commented on HADOOP-10432:
-

The patch is a straightforward refactoring of code that is already tested.

> Refactor SSLFactory to expose static method to determine HostnameVerifier
> -
>
> Key: HADOOP-10432
> URL: https://issues.apache.org/jira/browse/HADOOP-10432
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-10432.patch
>
>
> The {{SSLFactory.getHostnameVerifier()}} method is private and takes a 
> configuration to fetch a hardcoded property. A public method that resolves a 
> verifier from a provided value would make it possible to obtain a verifier 
> from its constant name (DEFAULT, DEFAULT_AND_LOCALHOST, STRICT, 
> STRICT_IE6, ALLOW_ALL).
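A minimal sketch of what such a public static resolver could look like. The class and constant handling here are illustrative assumptions, not Hadoop's actual SSLFactory code; only ALLOW_ALL is implemented, standing in for the full set of verifiers:

```java
import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.SSLSession;

// Hypothetical sketch of the proposed refactoring: a public static method
// that resolves a HostnameVerifier from the constant value itself, so callers
// no longer need a Configuration object to obtain one.
public class VerifierResolver {

    // A permissive verifier, standing in for an ALLOW_ALL implementation.
    private static final HostnameVerifier ALLOW_ALL = new HostnameVerifier() {
        @Override
        public boolean verify(String hostname, SSLSession session) {
            return true;
        }
    };

    public static HostnameVerifier getHostnameVerifier(String verifier) {
        String name = verifier.trim().toUpperCase();
        if (name.equals("ALLOW_ALL")) {
            return ALLOW_ALL;
        }
        // DEFAULT, DEFAULT_AND_LOCALHOST, STRICT, STRICT_IE6 would map to
        // their respective implementations here.
        throw new IllegalArgumentException("Invalid hostname verifier: "
            + verifier);
    }
}
```

Callers could then pass the constant name directly, e.g. `VerifierResolver.getHostnameVerifier("ALLOW_ALL")`, without any configuration lookup.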



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10432) Refactor SSLFactory to expose static method to determine HostnameVerifier

2014-03-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947592#comment-13947592
 ] 

Hadoop QA commented on HADOOP-10432:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12636537/HADOOP-10432.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3715//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3715//console

This message is automatically generated.

> Refactor SSLFactory to expose static method to determine HostnameVerifier
> -
>
> Key: HADOOP-10432
> URL: https://issues.apache.org/jira/browse/HADOOP-10432
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-10432.patch
>
>
> The {{SSLFactory.getHostnameVerifier()}} method is private and takes a 
> configuration to fetch a hardcoded property. A public method that resolves a 
> verifier from a provided value would make it possible to obtain a verifier 
> from its constant name (DEFAULT, DEFAULT_AND_LOCALHOST, STRICT, 
> STRICT_IE6, ALLOW_ALL).





[jira] [Updated] (HADOOP-10436) ToolRunner is not thread-safe

2014-03-25 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HADOOP-10436:


Status: Patch Available  (was: Open)

> ToolRunner is not thread-safe
> -
>
> Key: HADOOP-10436
> URL: https://issues.apache.org/jira/browse/HADOOP-10436
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Ajay Chitre
>Assignee: Tsuyoshi OZAWA
> Attachments: HADOOP-10436.1.patch
>
>
> ToolRunner class is not thread-safe because it uses GenericOptionsParser.  
> The constructor of GenericOptionsParser uses 'OptionBuilder' which is a 
> singleton class that uses instance variables.  In other words, OptionBuilder 
> is NOT thread safe.  As a result, when multiple Hadoop jobs are triggered 
> simultaneously using ToolRunner they end up stepping on each other.
> The easiest way to fix it is by making 'buildGeneralOptions' synchronized in 
> GenericOptionsParser.
> private static synchronized Options buildGeneralOptions(Options opts) {
> If this seems like the correct way of fixing this, either we can provide a 
> patch or someone can quickly fix it.  Thanks.
> Ajay Chitre
> achi...@cisco.com
> Virendra Singh
> virsi...@cisco.com
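The failure mode and the suggested fix can be sketched with a minimal, self-contained example. `StaticBuilder` below mimics the shape of commons-cli's OptionBuilder (state accumulated in static fields); the class and method names are hypothetical, not the actual commons-cli or Hadoop code:

```java
// Minimal illustration of the reported problem and the suggested fix.
// StaticBuilder accumulates state in static fields, so concurrent callers
// can step on each other's half-built options.
public class SyncDemo {

    static class StaticBuilder {
        private static String description;  // shared mutable state

        static void withDescription(String d) { description = d; }

        static String create(String name) {
            String result = name + ": " + description;
            description = null;  // reset after create, as OptionBuilder does
            return result;
        }
    }

    // The suggested fix: serialize all access to the shared builder state
    // behind a single synchronized static method, mirroring making
    // GenericOptionsParser#buildGeneralOptions synchronized.
    static synchronized String buildOption(String name, String desc) {
        StaticBuilder.withDescription(desc);
        return StaticBuilder.create(name);
    }
}
```

Without the `synchronized` keyword, two threads interleaving `withDescription` and `create` could attach one caller's description to the other caller's option.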





[jira] [Updated] (HADOOP-10436) ToolRunner is not thread-safe

2014-03-25 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HADOOP-10436:


Attachment: HADOOP-10436.1.patch

Made buildGeneralOptions synchronized.
OptionBuilder is part of o.a.commons.cli, so we should work around this problem 
in GenericOptionsParser#buildGeneralOptions. 

> ToolRunner is not thread-safe
> -
>
> Key: HADOOP-10436
> URL: https://issues.apache.org/jira/browse/HADOOP-10436
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Ajay Chitre
>Assignee: Tsuyoshi OZAWA
> Attachments: HADOOP-10436.1.patch
>
>
> ToolRunner class is not thread-safe because it uses GenericOptionsParser.  
> The constructor of GenericOptionsParser uses 'OptionBuilder' which is a 
> singleton class that uses instance variables.  In other words, OptionBuilder 
> is NOT thread safe.  As a result, when multiple Hadoop jobs are triggered 
> simultaneously using ToolRunner they end up stepping on each other.
> The easiest way to fix it is by making 'buildGeneralOptions' synchronized in 
> GenericOptionsParser.
> private static synchronized Options buildGeneralOptions(Options opts) {
> If this seems like the correct way of fixing this, either we can provide a 
> patch or someone can quickly fix it.  Thanks.
> Ajay Chitre
> achi...@cisco.com
> Virendra Singh
> virsi...@cisco.com





[jira] [Commented] (HADOOP-10429) KeyStores should have methods to generate the materials themselves, KeyShell should use them

2014-03-25 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947572#comment-13947572
 ] 

Alejandro Abdelnur commented on HADOOP-10429:
-

[~lmccay], agree 100%. The patch adds new methods, but it does not remove the 
old ones; both work, and the default impl of the new signature uses the old 
one. This means that if you already have a custom provider, it will keep 
working and will gain the new functionality.
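The compatibility pattern described here can be sketched as follows. This is a hedged illustration of the general technique (a new method whose default implementation delegates to the old signature), with hypothetical names, not the actual KeyProvider API:

```java
// Hypothetical sketch of the backward-compatible API extension described
// above: existing custom providers only implement the old signature, yet
// automatically gain the new entry point via the default implementation.
public abstract class Provider {

    // Old signature: the caller supplies the key material.
    public abstract String createKey(String name, byte[] material);

    // New signature: the provider generates the material itself. The default
    // implementation delegates to the old method, so old providers keep
    // working unchanged.
    public String createKey(String name) {
        byte[] material = new byte[16];  // stand-in for real key generation
        return createKey(name, material);
    }
}
```

A provider that overrides only the two-argument method still supports the one-argument call; a provider that wants to hide the material from the caller can override the one-argument method directly.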

> KeyStores should have methods to generate the materials themselves, KeyShell 
> should use them
> 
>
> Key: HADOOP-10429
> URL: https://issues.apache.org/jira/browse/HADOOP-10429
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-10429.patch
>
>
> Currently, the {{KeyProvider}} API expects the caller to provide the key 
> materials. And, the {{KeyShell}} generates key materials.
> For security reasons, {{KeyProvider}} implementations may want to generate 
> and hide (from the user generating the key) the key materials.





[jira] [Assigned] (HADOOP-10436) ToolRunner is not thread-safe

2014-03-25 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA reassigned HADOOP-10436:
---

Assignee: Tsuyoshi OZAWA

> ToolRunner is not thread-safe
> -
>
> Key: HADOOP-10436
> URL: https://issues.apache.org/jira/browse/HADOOP-10436
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Ajay Chitre
>Assignee: Tsuyoshi OZAWA
>
> ToolRunner class is not thread-safe because it uses GenericOptionsParser.  
> The constructor of GenericOptionsParser uses 'OptionBuilder' which is a 
> singleton class that uses instance variables.  In other words, OptionBuilder 
> is NOT thread safe.  As a result, when multiple Hadoop jobs are triggered 
> simultaneously using ToolRunner they end up stepping on each other.
> The easiest way to fix it is by making 'buildGeneralOptions' synchronized in 
> GenericOptionsParser.
> private static synchronized Options buildGeneralOptions(Options opts) {
> If this seems like the correct way of fixing this, either we can provide a 
> patch or someone can quickly fix it.  Thanks.
> Ajay Chitre
> achi...@cisco.com
> Virendra Singh
> virsi...@cisco.com





[jira] [Updated] (HADOOP-10424) TestStreamingTaskLog#testStreamingTaskLogWithHadoopCmd is failing

2014-03-25 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10424:
---

Affects Version/s: 2.4.0

> TestStreamingTaskLog#testStreamingTaskLogWithHadoopCmd is failing
> -
>
> Key: HADOOP-10424
> URL: https://issues.apache.org/jira/browse/HADOOP-10424
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Mit Desai
>Assignee: Akira AJISAKA
> Attachments: log.txt
>
>
> testStreamingTaskLogWithHadoopCmd(org.apache.hadoop.streaming.TestStreamingTaskLog)
>   Time elapsed: 44.069 sec  <<< FAILURE!
> java.lang.AssertionError: environment set for child is wrong
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.assertTrue(Assert.java:43)
>   at 
> org.apache.hadoop.streaming.TestStreamingTaskLog.runStreamJobAndValidateEnv(TestStreamingTaskLog.java:157)
>   at 
> org.apache.hadoop.streaming.TestStreamingTaskLog.testStreamingTaskLogWithHadoopCmd(TestStreamingTaskLog.java:107)
> Results :
> Failed tests: 
>   
> TestStreamingTaskLog.testStreamingTaskLogWithHadoopCmd:107->runStreamJobAndValidateEnv:157
>  environment set for child is wrong





[jira] [Commented] (HADOOP-10424) TestStreamingTaskLog#testStreamingTaskLogWithHadoopCmd is failing

2014-03-25 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947557#comment-13947557
 ] 

Akira AJISAKA commented on HADOOP-10424:


bq. {{MRApp.cmdEnvironment}}
My above comment was wrong; {{MyMRApp.cmdEnvironment}} in 
{{TestMapReduceChildJVM}} is the right one.
Now I'm thinking the test can be removed because the environment variables are 
already tested in {{TestMapReduceChildJVM}}.

> TestStreamingTaskLog#testStreamingTaskLogWithHadoopCmd is failing
> -
>
> Key: HADOOP-10424
> URL: https://issues.apache.org/jira/browse/HADOOP-10424
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Mit Desai
>Assignee: Akira AJISAKA
> Attachments: log.txt
>
>
> testStreamingTaskLogWithHadoopCmd(org.apache.hadoop.streaming.TestStreamingTaskLog)
>   Time elapsed: 44.069 sec  <<< FAILURE!
> java.lang.AssertionError: environment set for child is wrong
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.assertTrue(Assert.java:43)
>   at 
> org.apache.hadoop.streaming.TestStreamingTaskLog.runStreamJobAndValidateEnv(TestStreamingTaskLog.java:157)
>   at 
> org.apache.hadoop.streaming.TestStreamingTaskLog.testStreamingTaskLogWithHadoopCmd(TestStreamingTaskLog.java:107)
> Results :
> Failed tests: 
>   
> TestStreamingTaskLog.testStreamingTaskLogWithHadoopCmd:107->runStreamJobAndValidateEnv:157
>  environment set for child is wrong





[jira] [Commented] (HADOOP-10424) TestStreamingTaskLog#testStreamingTaskLogWithHadoopCmd is failing

2014-03-25 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947536#comment-13947536
 ] 

Akira AJISAKA commented on HADOOP-10424:


The test fails because {{echo $HADOOP_ROOT_LOGGER $HADOOP_CLIENT_OPTS}} returns 
nothing in Hadoop Streaming after MAPREDUCE-5806.
I suppose it is sufficient to confirm the environment variables via 
{{MRApp.cmdEnvironment}} instead of the echo command.

> TestStreamingTaskLog#testStreamingTaskLogWithHadoopCmd is failing
> -
>
> Key: HADOOP-10424
> URL: https://issues.apache.org/jira/browse/HADOOP-10424
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Mit Desai
>Assignee: Akira AJISAKA
> Attachments: log.txt
>
>
> testStreamingTaskLogWithHadoopCmd(org.apache.hadoop.streaming.TestStreamingTaskLog)
>   Time elapsed: 44.069 sec  <<< FAILURE!
> java.lang.AssertionError: environment set for child is wrong
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.assertTrue(Assert.java:43)
>   at 
> org.apache.hadoop.streaming.TestStreamingTaskLog.runStreamJobAndValidateEnv(TestStreamingTaskLog.java:157)
>   at 
> org.apache.hadoop.streaming.TestStreamingTaskLog.testStreamingTaskLogWithHadoopCmd(TestStreamingTaskLog.java:107)
> Results :
> Failed tests: 
>   
> TestStreamingTaskLog.testStreamingTaskLogWithHadoopCmd:107->runStreamJobAndValidateEnv:157
>  environment set for child is wrong





[jira] [Commented] (HADOOP-10392) Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)

2014-03-25 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947521#comment-13947521
 ] 

Akira AJISAKA commented on HADOOP-10392:


The failure is not related to the patch. The test fails in trunk also 
(HADOOP-10424).

> Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)
> 
>
> Key: HADOOP-10392
> URL: https://issues.apache.org/jira/browse/HADOOP-10392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.3.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-10392.2.patch, HADOOP-10392.3.patch, 
> HADOOP-10392.patch
>
>
> There're some methods calling Path.makeQualified(FileSystem), which causes 
> javac warning.





[jira] [Commented] (HADOOP-10424) TestStreamingTaskLog#testStreamingTaskLogWithHadoopCmd is failing

2014-03-25 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947451#comment-13947451
 ] 

Akira AJISAKA commented on HADOOP-10424:


First, I divided the assertion to find the cause.
{code}
-assertTrue("environment set for child is wrong", env.contains("INFO,CLA")
-   && env.contains("-Dyarn.app.container.log.dir=")
-   && env.contains("-Dyarn.app.container.log.filesize=" + logSize)
-   && env.contains("-Dlog4j.configuration="));
+assertTrue("environment set for child should contain INFO,CLA",
+env.contains("INFO,CLA"));
+assertTrue(env.contains("-Dyarn.app.container.log.dir="));
+assertTrue(env.contains("-Dyarn.app.container.log.filesize=" + logSize));
+assertTrue(env.contains("-Dlog4j.configuration="));
{code}
The test fails at the following.
{code}
+assertTrue("environment set for child should contain INFO,CLA",
+env.contains("INFO,CLA"));
{code}

> TestStreamingTaskLog#testStreamingTaskLogWithHadoopCmd is failing
> -
>
> Key: HADOOP-10424
> URL: https://issues.apache.org/jira/browse/HADOOP-10424
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Mit Desai
>Assignee: Akira AJISAKA
> Attachments: log.txt
>
>
> testStreamingTaskLogWithHadoopCmd(org.apache.hadoop.streaming.TestStreamingTaskLog)
>   Time elapsed: 44.069 sec  <<< FAILURE!
> java.lang.AssertionError: environment set for child is wrong
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.assertTrue(Assert.java:43)
>   at 
> org.apache.hadoop.streaming.TestStreamingTaskLog.runStreamJobAndValidateEnv(TestStreamingTaskLog.java:157)
>   at 
> org.apache.hadoop.streaming.TestStreamingTaskLog.testStreamingTaskLogWithHadoopCmd(TestStreamingTaskLog.java:107)
> Results :
> Failed tests: 
>   
> TestStreamingTaskLog.testStreamingTaskLogWithHadoopCmd:107->runStreamJobAndValidateEnv:157
>  environment set for child is wrong





[jira] [Assigned] (HADOOP-10424) TestStreamingTaskLog#testStreamingTaskLogWithHadoopCmd is failing

2014-03-25 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reassigned HADOOP-10424:
--

Assignee: Akira AJISAKA

> TestStreamingTaskLog#testStreamingTaskLogWithHadoopCmd is failing
> -
>
> Key: HADOOP-10424
> URL: https://issues.apache.org/jira/browse/HADOOP-10424
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Mit Desai
>Assignee: Akira AJISAKA
> Attachments: log.txt
>
>
> testStreamingTaskLogWithHadoopCmd(org.apache.hadoop.streaming.TestStreamingTaskLog)
>   Time elapsed: 44.069 sec  <<< FAILURE!
> java.lang.AssertionError: environment set for child is wrong
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.assertTrue(Assert.java:43)
>   at 
> org.apache.hadoop.streaming.TestStreamingTaskLog.runStreamJobAndValidateEnv(TestStreamingTaskLog.java:157)
>   at 
> org.apache.hadoop.streaming.TestStreamingTaskLog.testStreamingTaskLogWithHadoopCmd(TestStreamingTaskLog.java:107)
> Results :
> Failed tests: 
>   
> TestStreamingTaskLog.testStreamingTaskLogWithHadoopCmd:107->runStreamJobAndValidateEnv:157
>  environment set for child is wrong





[jira] [Resolved] (HADOOP-10438) TestStreamingTaskLog fails

2014-03-25 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA resolved HADOOP-10438.


Resolution: Duplicate

Closing this issue as a duplicate.

> TestStreamingTaskLog fails
> --
>
> Key: HADOOP-10438
> URL: https://issues.apache.org/jira/browse/HADOOP-10438
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Akira AJISAKA
>
> TestStreamingTaskLog#testStreamingTaskLogWithHadoopCmd fails intermittently.
> {code}
> Running org.apache.hadoop.streaming.TestStreamingTaskLog
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 49.003 sec 
> <<< FAILURE! - in org.apache.hadoop.streaming.TestStreamingTaskLog
> testStreamingTaskLogWithHadoopCmd(org.apache.hadoop.streaming.TestStreamingTaskLog)
>   Time elapsed: 48.918 sec  <<< FAILURE!
> java.lang.AssertionError: environment set for child is wrong
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at 
> org.apache.hadoop.streaming.TestStreamingTaskLog.runStreamJobAndValidateEnv(TestStreamingTaskLog.java:157)
> at 
> org.apache.hadoop.streaming.TestStreamingTaskLog.testStreamingTaskLogWithHadoopCmd(TestStreamingTaskLog.java:107)
> {code}





[jira] [Created] (HADOOP-10438) TestStreamingTaskLog fails

2014-03-25 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-10438:
--

 Summary: TestStreamingTaskLog fails
 Key: HADOOP-10438
 URL: https://issues.apache.org/jira/browse/HADOOP-10438
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Akira AJISAKA


TestStreamingTaskLog#testStreamingTaskLogWithHadoopCmd fails intermittently.
{code}
Running org.apache.hadoop.streaming.TestStreamingTaskLog
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 49.003 sec <<< 
FAILURE! - in org.apache.hadoop.streaming.TestStreamingTaskLog
testStreamingTaskLogWithHadoopCmd(org.apache.hadoop.streaming.TestStreamingTaskLog)
  Time elapsed: 48.918 sec  <<< FAILURE!
java.lang.AssertionError: environment set for child is wrong
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.hadoop.streaming.TestStreamingTaskLog.runStreamJobAndValidateEnv(TestStreamingTaskLog.java:157)
at 
org.apache.hadoop.streaming.TestStreamingTaskLog.testStreamingTaskLogWithHadoopCmd(TestStreamingTaskLog.java:107)
{code}





[jira] [Commented] (HADOOP-10429) KeyStores should have methods to generate the materials themselves, KeyShell should use them

2014-03-25 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947418#comment-13947418
 ] 

Larry McCay commented on HADOOP-10429:
--

[~tucu00] - I had given this some thought in the past as well. I think that it 
is fine to add this, but I don't know that we should remove the ability for the 
consumer to use an arbitrary source of keying material. I would imagine perhaps 
adding a separate switch to indicate whether or not you want to delegate 
generation to the provider.

I can imagine a usecase where a specialized hardware key generator is used but 
you want to store it in a java keystore. You shouldn't necessarily have to 
write a new provider for that combination.

What do you think?


> KeyStores should have methods to generate the materials themselves, KeyShell 
> should use them
> 
>
> Key: HADOOP-10429
> URL: https://issues.apache.org/jira/browse/HADOOP-10429
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-10429.patch
>
>
> Currently, the {{KeyProvider}} API expects the caller to provide the key 
> materials. And, the {{KeyShell}} generates key materials.
> For security reasons, {{KeyProvider}} implementations may want to generate 
> and hide (from the user generating the key) the key materials.





[jira] [Commented] (HADOOP-10359) Native bzip2 compression support is broken on non-Linux systems

2014-03-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947417#comment-13947417
 ] 

Hadoop QA commented on HADOOP-10359:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12630446/HADOOP-10359-native-bzip2-for-os-x.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3714//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3714//console

This message is automatically generated.

> Native bzip2 compression support is broken on non-Linux systems
> ---
>
> Key: HADOOP-10359
> URL: https://issues.apache.org/jira/browse/HADOOP-10359
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 2.2.0
> Environment: Mac OS X 10.8.5
> Oracle JDK 1.7.0_51
> Hadoop 2.2.0-CDH5.0.0-beta-2
>Reporter: Ilya Maykov
>Priority: Minor
> Attachments: HADOOP-10359-native-bzip2-for-os-x.patch
>
>
> While testing the patch for HADOOP-9648, I noticed that the bzip2 native 
> compressor/decompressor support wasn't working properly. I dug around a bit 
> and got native bzip2 support to work on my macbook. Will attach a patch in a 
> bit. (This probably needs to be tested on FreeBSD / Windows / Linux, but I 
> don't have the time to set up the necessary VMs to do it. I assume the build 
> bot will test Linux).





[jira] [Commented] (HADOOP-10427) KeyProvider implementations should be thread safe

2014-03-25 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947378#comment-13947378
 ] 

Alejandro Abdelnur commented on HADOOP-10427:
-

The patch adds documentation, synchronized keywords, and synchronization via a 
read-write lock. The change is obvious, and it is not easy to write a testcase 
for it.
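The read-write lock approach can be sketched as follows. This is an illustration of the general technique with hypothetical names, not the actual KeyProvider patch: many readers may look up keys concurrently, while a writer gets exclusive access.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hedged sketch of thread-safety via a read-write lock: concurrent reads
// are allowed, writes are exclusive. Class and method names are illustrative.
public class LockedStore {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final Map<String, byte[]> keys = new HashMap<>();

    public byte[] getKey(String name) {
        lock.readLock().lock();          // shared: many readers at once
        try {
            return keys.get(name);
        } finally {
            lock.readLock().unlock();
        }
    }

    public void putKey(String name, byte[] material) {
        lock.writeLock().lock();         // exclusive: blocks readers too
        try {
            keys.put(name, material);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

Compared with marking every method `synchronized`, the read-write lock avoids serializing read-mostly workloads, which matters for server apps that look up keys far more often than they create them.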

> KeyProvider implementations should be thread safe
> -
>
> Key: HADOOP-10427
> URL: https://issues.apache.org/jira/browse/HADOOP-10427
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-10427.patch
>
>
> The {{KeyProvider}} API should be thread-safe so it can be used safely in 
> server apps.





[jira] [Updated] (HADOOP-10359) Native bzip2 compression support is broken on non-Linux systems

2014-03-25 Thread Ilya Maykov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Maykov updated HADOOP-10359:
-

Status: Patch Available  (was: Open)

Attaching the patch with the "submit patch" option; hopefully that changes the 
status to "Patch Available".

> Native bzip2 compression support is broken on non-Linux systems
> ---
>
> Key: HADOOP-10359
> URL: https://issues.apache.org/jira/browse/HADOOP-10359
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 2.2.0
> Environment: Mac OS X 10.8.5
> Oracle JDK 1.7.0_51
> Hadoop 2.2.0-CDH5.0.0-beta-2
>Reporter: Ilya Maykov
>Priority: Minor
> Attachments: HADOOP-10359-native-bzip2-for-os-x.patch
>
>
> While testing the patch for HADOOP-9648, I noticed that the bzip2 native 
> compressor/decompressor support wasn't working properly. I dug around a bit 
> and got native bzip2 support to work on my macbook. Will attach a patch in a 
> bit. (This probably needs to be tested on FreeBSD / Windows / Linux, but I 
> don't have the time to set up the necessary VMs to do it. I assume the build 
> bot will test Linux).





[jira] [Commented] (HADOOP-10427) KeyProvider implementations should be thread safe

2014-03-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947375#comment-13947375
 ] 

Hadoop QA commented on HADOOP-10427:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12636527/HADOOP-10427.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3713//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3713//console

This message is automatically generated.

> KeyProvider implementations should be thread safe
> -
>
> Key: HADOOP-10427
> URL: https://issues.apache.org/jira/browse/HADOOP-10427
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-10427.patch
>
>
> The {{KeyProvider}} API should be thread-safe so it can be used safely in 
> server apps.





[jira] [Updated] (HADOOP-10281) Create a scheduler, which assigns schedulables a priority level

2014-03-25 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-10281:
--

Attachment: (was: subtask4_scheduler.patch)

> Create a scheduler, which assigns schedulables a priority level
> ---
>
> Key: HADOOP-10281
> URL: https://issues.apache.org/jira/browse/HADOOP-10281
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Li
>






[jira] [Commented] (HADOOP-10280) Make Schedulables return a configurable identity of user or group

2014-03-25 Thread Chris Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947301#comment-13947301
 ] 

Chris Li commented on HADOOP-10280:
---

Thanks Arpit! This completes the core changes required for RPC-level QoS.

> Make Schedulables return a configurable identity of user or group
> -
>
> Key: HADOOP-10280
> URL: https://issues.apache.org/jira/browse/HADOOP-10280
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Li
>Assignee: Chris Li
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HADOOP-10280.patch, HADOOP-10280.patch, 
> HADOOP-10280.patch, HADOOP-10280.patch
>
>
> In order to intelligently schedule incoming calls, we need to know what 
> identity it falls under.
> We do this by defining the Schedulable interface, which has one method, 
> getIdentity(IdentityType idType)
> The scheduler can then query a Schedulable object for its identity, depending 
> on what idType is. 
> For example:
> Call 1: Made by user=Alice, group=admins
> Call 2: Made by user=Bob, group=admins
> Call 3: Made by user=Carlos, group=users
> Call 4: Made by user=Alice, group=admins
> Depending on what the identity is, we would treat these requests differently. 
> If we query on Username, we can bucket these 4 requests into 3 sets for 
> Alice, Bob, and Carlos. If we query on Groupname, we can bucket these 4 
> requests into 2 sets for admins and users.
> In this initial version, idType can be username or primary group. In future 
> versions, it could be jobID, request class (read or write), or some explicit 
> QoS field. These are user-defined, and will be reloaded on callqueue refresh.
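The bucketing behavior described above can be sketched as a small stand-alone model. This is illustrative only, not the actual Hadoop code: the `Schedulable`/`getIdentity`/`IdentityType` names come from the description, while `Call`, `BucketDemo`, and `bucket` are assumed helper names.

```java
import java.util.*;

// Illustrative identity types from the description (username or primary group).
enum IdentityType { USERNAME, GROUPNAME }

// The one-method interface described above.
interface Schedulable {
    String getIdentity(IdentityType idType);
}

// Hypothetical incoming call carrying a user and a group.
class Call implements Schedulable {
    final String user, group;
    Call(String user, String group) { this.user = user; this.group = group; }
    public String getIdentity(IdentityType idType) {
        return idType == IdentityType.USERNAME ? user : group;
    }
}

public class BucketDemo {
    // Group calls into buckets keyed by the chosen identity.
    static Map<String, List<Schedulable>> bucket(List<? extends Schedulable> calls,
                                                 IdentityType t) {
        Map<String, List<Schedulable>> buckets = new HashMap<>();
        for (Schedulable c : calls) {
            buckets.computeIfAbsent(c.getIdentity(t), k -> new ArrayList<>()).add(c);
        }
        return buckets;
    }

    public static void main(String[] args) {
        // The four example calls from the description.
        List<Call> calls = Arrays.asList(
            new Call("Alice", "admins"), new Call("Bob", "admins"),
            new Call("Carlos", "users"), new Call("Alice", "admins"));
        System.out.println(bucket(calls, IdentityType.USERNAME).size());  // prints 3
        System.out.println(bucket(calls, IdentityType.GROUPNAME).size()); // prints 2
    }
}
```

Querying on username yields the 3 buckets (Alice, Bob, Carlos); querying on group name yields the 2 buckets (admins, users), matching the example.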





[jira] [Commented] (HADOOP-10280) Make Schedulables return a configurable identity of user or group

2014-03-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947262#comment-13947262
 ] 

Hudson commented on HADOOP-10280:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #5401 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5401/])
HADOOP-10280. Add files missed in previous checkin. (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1581533)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/IdentityProvider.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Schedulable.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/UserIdentityProvider.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIdentityProviders.java
HADOOP-10280. Make Schedulables return a configurable identity of user or 
group. (Contributed by Chris Li) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1581532)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java


> Make Schedulables return a configurable identity of user or group
> -
>
> Key: HADOOP-10280
> URL: https://issues.apache.org/jira/browse/HADOOP-10280
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Li
>Assignee: Chris Li
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HADOOP-10280.patch, HADOOP-10280.patch, 
> HADOOP-10280.patch, HADOOP-10280.patch
>
>
> In order to intelligently schedule incoming calls, we need to know what 
> identity it falls under.
> We do this by defining the Schedulable interface, which has one method, 
> getIdentity(IdentityType idType)
> The scheduler can then query a Schedulable object for its identity, depending 
> on what idType is. 
> For example:
> Call 1: Made by user=Alice, group=admins
> Call 2: Made by user=Bob, group=admins
> Call 3: Made by user=Carlos, group=users
> Call 4: Made by user=Alice, group=admins
> Depending on what the identity is, we would treat these requests differently. 
> If we query on Username, we can bucket these 4 requests into 3 sets for 
> Alice, Bob, and Carlos. If we query on Groupname, we can bucket these 4 
> requests into 2 sets for admins and users.
> In this initial version, idType can be username or primary group. In future 
> versions, it could be jobID, request class (read or write), or some explicit 
> QoS field. These are user-defined, and will be reloaded on callqueue refresh.





[jira] [Updated] (HADOOP-9985) HDFS Compatible ViewFileSystem

2014-03-25 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-9985:


Fix Version/s: (was: 2.0.6-alpha)

> HDFS Compatible ViewFileSystem
> --
>
> Key: HADOOP-9985
> URL: https://issues.apache.org/jira/browse/HADOOP-9985
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lohit Vijayarenu
>
> There are multiple scripts and projects like pig, hive, and elephantbird that 
> refer to the HDFS URI as hdfs://namenodehostport/ or hdfs:/// . In a federated 
> namespace this causes problems because the supported scheme for federation is 
> viewfs:// . We will have to force all users to change their scripts/programs 
> to be able to access a federated cluster. 
> It would be great if there was a way to map the viewfs scheme to the hdfs 
> scheme without exposing it to users. Opening this JIRA to get input from 
> people who have thought about this in their clusters.
> In our clusters we ended up creating another class, 
> HDFSCompatibleViewFileSystem, which hijacks both hdfs.fs.impl and 
> viewfs.fs.impl and passes filesystem calls down to ViewFileSystem. Is there 
> any suggested approach other than this?
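For illustration, the hijack described would presumably be wired up in core-site.xml along these lines. The class and package names are hypothetical, and the keys shown follow the standard fs.SCHEME.impl convention used by FileSystem implementations:

```xml
<!-- Hypothetical sketch of the approach described above.
     org.example.fs.HDFSCompatibleViewFileSystem is an illustrative name. -->
<property>
  <name>fs.hdfs.impl</name>
  <value>org.example.fs.HDFSCompatibleViewFileSystem</value>
</property>
<property>
  <name>fs.viewfs.impl</name>
  <value>org.example.fs.HDFSCompatibleViewFileSystem</value>
</property>
```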





[jira] [Updated] (HADOOP-10280) Make Schedulables return a configurable identity of user or group

2014-03-25 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-10280:
---

  Resolution: Fixed
   Fix Version/s: 2.4.0
  3.0.0
Target Version/s: 2.4.0
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I committed this to trunk, branch-2 and branch-2.4.

Thanks for the contribution Chris Li!

> Make Schedulables return a configurable identity of user or group
> -
>
> Key: HADOOP-10280
> URL: https://issues.apache.org/jira/browse/HADOOP-10280
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Li
>Assignee: Chris Li
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HADOOP-10280.patch, HADOOP-10280.patch, 
> HADOOP-10280.patch, HADOOP-10280.patch
>
>
> In order to intelligently schedule incoming calls, we need to know what 
> identity it falls under.
> We do this by defining the Schedulable interface, which has one method, 
> getIdentity(IdentityType idType)
> The scheduler can then query a Schedulable object for its identity, depending 
> on what idType is. 
> For example:
> Call 1: Made by user=Alice, group=admins
> Call 2: Made by user=Bob, group=admins
> Call 3: Made by user=Carlos, group=users
> Call 4: Made by user=Alice, group=admins
> Depending on what the identity is, we would treat these requests differently. 
> If we query on Username, we can bucket these 4 requests into 3 sets for 
> Alice, Bob, and Carlos. If we query on Groupname, we can bucket these 4 
> requests into 2 sets for admins and users.
> In this initial version, idType can be username or primary group. In future 
> versions, it could be jobID, request class (read or write), or some explicit 
> QoS field. These are user-defined, and will be reloaded on callqueue refresh.





[jira] [Commented] (HADOOP-10250) VersionUtil returns wrong value when comparing two versions

2014-03-25 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947209#comment-13947209
 ] 

Yongjun Zhang commented on HADOOP-10250:


Hi [~szetszwo], that's a very good question. I actually did try importing it 
in an earlier attempt, but it turned out not to be clean: we would need to 
import the whole package org.apache.maven.artifact.versioning even though we 
only need this single file. It would also create a Hadoop dependency on Maven 
(they have different versions too). At that point, it was decided that making 
a copy of the file was a cleaner and easier solution. I'm open to suggestions. 
Thanks.






> VersionUtil returns wrong value when comparing two versions
> ---
>
> Key: HADOOP-10250
> URL: https://issues.apache.org/jira/browse/HADOOP-10250
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Fix For: 2.3.0
>
> Attachments: HADOOP-10250.001.patch, HADOOP-10250.002.patch, 
> HADOOP-10250.003.patch, HADOOP-10250.004.patch, HADOOP-10250.004.patch
>
>
> VersionUtil.compareVersions("1.0.0-beta-1", "1.0.0") returns 7 instead of a 
> negative number, which is wrong because 1.0.0-beta-1 is older than 1.0.0.
>  
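The expected sign convention can be sketched with a minimal comparator. This is not the actual VersionUtil or ComparableVersion code: it assumes the usual rule that a pre-release suffix such as "-beta-1" sorts before the plain release at the same numeric base, and its numeric comparison is deliberately simplified (lexicographic, so it is only valid for equal-width version components).

```java
public class VersionCompareDemo {
    // Illustrative sketch: negative means 'a' is older than 'b'.
    static int compare(String a, String b) {
        String[] pa = a.split("-", 2), pb = b.split("-", 2);
        int base = pa[0].compareTo(pb[0]);  // simplified numeric-base compare
        if (base != 0) return base;
        boolean preA = pa.length > 1, preB = pb.length > 1;
        if (preA == preB) return 0;
        return preA ? -1 : 1;               // pre-release sorts before release
    }

    public static void main(String[] args) {
        // 1.0.0-beta-1 is older than 1.0.0, so the result must be negative.
        System.out.println(compare("1.0.0-beta-1", "1.0.0") < 0); // prints true
    }
}
```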





[jira] [Commented] (HADOOP-10420) Add support to Swift-FS to support tempAuth

2014-03-25 Thread Jinghui Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947173#comment-13947173
 ] 

Jinghui Wang commented on HADOOP-10420:
---

 I understand your concern that adding multiple authentication mechanisms to 
the client can be problematic; however, the authentication scheme that the 
patch adds is not necessarily specific to SoftLayer. The TempAuth mechanism 
from OpenStack uses a GET request to authenticate and uses the token from that 
authentication for subsequent requests. I do agree that a pluggable 
authentication mechanism is eventually needed to support multiple 
authentication schemes. In the meantime, would you be more open to accepting 
the patch given that it can be used for TempAuth as well?
 
I have run the unit tests included in the OpenStack module and all tests 
passed except one, for which I opened another JIRA, HADOOP-10351. 

> Add support to Swift-FS to support tempAuth
> ---
>
> Key: HADOOP-10420
> URL: https://issues.apache.org/jira/browse/HADOOP-10420
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, tools
>Affects Versions: 2.3.0
>Reporter: Jinghui Wang
> Attachments: HADOOP-10420.patch
>
>
> Currently, hadoop-openstack Swift FS supports keystone authentication. The 
> attached patch adds support for tempAuth. Users will be able to configure 
> which authentication to use.





[jira] [Commented] (HADOOP-10437) Fix the javac warnings in the conf and the util package

2014-03-25 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947141#comment-13947141
 ] 

Haohui Mai commented on HADOOP-10437:
-

+1 pending Jenkins.

> Fix the javac warnings in the conf and the util package
> ---
>
> Key: HADOOP-10437
> URL: https://issues.apache.org/jira/browse/HADOOP-10437
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf, util
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c10437_20140325.patch
>
>
> There are a few minor javac warnings in org.apache.hadoop.conf and 
> org.apache.hadoop.util.  We should fix them.





[jira] [Updated] (HADOOP-10437) Fix the javac warnings in the conf and the util package

2014-03-25 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-10437:
-

Status: Patch Available  (was: Open)

> Fix the javac warnings in the conf and the util package
> ---
>
> Key: HADOOP-10437
> URL: https://issues.apache.org/jira/browse/HADOOP-10437
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf, util
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c10437_20140325.patch
>
>
> There are a few minor javac warnings in org.apache.hadoop.conf and 
> org.apache.hadoop.util.  We should fix them.





[jira] [Updated] (HADOOP-10437) Fix the javac warnings in the conf and the util package

2014-03-25 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-10437:
-

Component/s: conf
Description: There are a few minor javac warnings in org.apache.hadoop.conf 
and org.apache.hadoop.util.  We should fix them.  (was: There are a few minor 
javac warnings in org.apache.hadoop.util.  We should fix them.)
Summary: Fix the javac warnings in the conf and the util package  (was: 
Fix the javac warnings in the hadoop.util package)

> Fix the javac warnings in the conf and the util package
> ---
>
> Key: HADOOP-10437
> URL: https://issues.apache.org/jira/browse/HADOOP-10437
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf, util
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c10437_20140325.patch
>
>
> There are a few minor javac warnings in org.apache.hadoop.conf and 
> org.apache.hadoop.util.  We should fix them.





[jira] [Updated] (HADOOP-10437) Fix the javac warnings in the hadoop.util package

2014-03-25 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-10437:
-

Attachment: c10437_20140325.patch

c10437_20140325.patch: first patch; it also fixes the conf package.

> Fix the javac warnings in the hadoop.util package
> -
>
> Key: HADOOP-10437
> URL: https://issues.apache.org/jira/browse/HADOOP-10437
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: util
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c10437_20140325.patch
>
>
> There are a few minor javac warnings in org.apache.hadoop.util.  We should 
> fix them.





[jira] [Created] (HADOOP-10437) Fix the javac warnings in the hadoop.util package

2014-03-25 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-10437:


 Summary: Fix the javac warnings in the hadoop.util package
 Key: HADOOP-10437
 URL: https://issues.apache.org/jira/browse/HADOOP-10437
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: util
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor


There are a few minor javac warnings in org.apache.hadoop.util.  We should fix 
them.





[jira] [Commented] (HADOOP-10250) VersionUtil returns wrong value when comparing two versions

2014-03-25 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947108#comment-13947108
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-10250:
--

Hi [~yzhangal], why copy ComparableVersion instead of importing it?

> VersionUtil returns wrong value when comparing two versions
> ---
>
> Key: HADOOP-10250
> URL: https://issues.apache.org/jira/browse/HADOOP-10250
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Fix For: 2.3.0
>
> Attachments: HADOOP-10250.001.patch, HADOOP-10250.002.patch, 
> HADOOP-10250.003.patch, HADOOP-10250.004.patch, HADOOP-10250.004.patch
>
>
> VersionUtil.compareVersions("1.0.0-beta-1", "1.0.0") returns 7 instead of a 
> negative number, which is wrong because 1.0.0-beta-1 is older than 1.0.0.
>  





[jira] [Commented] (HADOOP-10416) For pseudo authentication, what to do if there is an expired token?

2014-03-25 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947037#comment-13947037
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-10416:
--

Sorry, it is the Oozie web ui, according to [~bowenzhangusa].

> For pseudo authentication, what to do if there is an expired token?
> ---
>
> Key: HADOOP-10416
> URL: https://issues.apache.org/jira/browse/HADOOP-10416
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c10416_20140321.patch, c10416_20140322.patch
>
>
> PseudoAuthenticationHandler currently only gets username from the "user.name" 
> parameter.  If there is an expired auth token in the request, the token is 
> ignored.  Further, if anonymous is enabled, the client will be authenticated 
> as anonymous.
> The above behavior seems undesirable since the client does not want to be 
> authenticated as anonymous.





[jira] [Commented] (HADOOP-10416) For pseudo authentication, what to do if there is an expired token?

2014-03-25 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947024#comment-13947024
 ] 

Alejandro Abdelnur commented on HADOOP-10416:
-

AFAIK the NN web ui in non-secure mode was always getting DrWho as the user. If 
you want to 'personalize' it, then a field asking for the username and using 
that value for the 'user.name=' query string would do. But, on its own, the 
browser won't know the user.name.
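The user.name convention under discussion can be illustrated with a tiny helper. This is a hedged sketch: `withUserName` is a hypothetical function, and the WebHDFS-style URL in the example is a placeholder host, not a real endpoint.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class PseudoAuthUrlDemo {
    // Append the pseudo-auth 'user.name' query parameter to a request URL.
    static String withUserName(String baseUrl, String user) {
        String sep = baseUrl.contains("?") ? "&" : "?";
        return baseUrl + sep + "user.name="
                + URLEncoder.encode(user, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Placeholder WebHDFS-style URL for illustration only.
        System.out.println(
            withUserName("http://nn:50070/webhdfs/v1/tmp?op=LISTSTATUS", "alice"));
        // prints http://nn:50070/webhdfs/v1/tmp?op=LISTSTATUS&user.name=alice
    }
}
```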

> For pseudo authentication, what to do if there is an expired token?
> ---
>
> Key: HADOOP-10416
> URL: https://issues.apache.org/jira/browse/HADOOP-10416
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c10416_20140321.patch, c10416_20140322.patch
>
>
> PseudoAuthenticationHandler currently only gets username from the "user.name" 
> parameter.  If there is an expired auth token in the request, the token is 
> ignored.  Further, if anonymous is enabled, the client will be authenticated 
> as anonymous.
> The above behavior seems undesirable since the client does not want to be 
> authenticated as anonymous.





[jira] [Commented] (HADOOP-10416) For pseudo authentication, what to do if there is an expired token?

2014-03-25 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947002#comment-13947002
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-10416:
--

 > disable anonymous if you want the user name.

How to view NN web ui?

> For pseudo authentication, what to do if there is an expired token?
> ---
>
> Key: HADOOP-10416
> URL: https://issues.apache.org/jira/browse/HADOOP-10416
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c10416_20140321.patch, c10416_20140322.patch
>
>
> PseudoAuthenticationHandler currently only gets username from the "user.name" 
> parameter.  If there is an expired auth token in the request, the token is 
> ignored.  Further, if anonymous is enabled, the client will be authenticated 
> as anonymous.
> The above behavior seems undesirable since the client does not want to be 
> authenticated as anonymous.





[jira] [Commented] (HADOOP-10416) For pseudo authentication, what to do if there is an expired token?

2014-03-25 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947000#comment-13947000
 ] 

Alejandro Abdelnur commented on HADOOP-10416:
-

disable anonymous if you want the user name.

> For pseudo authentication, what to do if there is an expired token?
> ---
>
> Key: HADOOP-10416
> URL: https://issues.apache.org/jira/browse/HADOOP-10416
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c10416_20140321.patch, c10416_20140322.patch
>
>
> PseudoAuthenticationHandler currently only gets username from the "user.name" 
> parameter.  If there is an expired auth token in the request, the token is 
> ignored.  Further, if anonymous is enabled, the client will be authenticated 
> as anonymous.
> The above behavior seems undesirable since the client does not want to be 
> authenticated as anonymous.





[jira] [Commented] (HADOOP-10416) For pseudo authentication, what to do if there is an expired token?

2014-03-25 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13946991#comment-13946991
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-10416:
--

> Then user.name should never be expected, processed. 

Then, how would we check file permissions?  Simple authentication is useful for 
preventing accidental deletion of other users' files.

> For pseudo authentication, what to do if there is an expired token?
> ---
>
> Key: HADOOP-10416
> URL: https://issues.apache.org/jira/browse/HADOOP-10416
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c10416_20140321.patch, c10416_20140322.patch
>
>
> PseudoAuthenticationHandler currently only gets username from the "user.name" 
> parameter.  If there is an expired auth token in the request, the token is 
> ignored.  Further, if anonymous is enabled, the client will be authenticated 
> as anonymous.
> The above behavior seems undesirable since the client does not want to be 
> authenticated as anonymous.





[jira] [Commented] (HADOOP-10416) For pseudo authentication, what to do if there is an expired token?

2014-03-25 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13946964#comment-13946964
 ] 

Alejandro Abdelnur commented on HADOOP-10416:
-

bq. if anonymous is enabled 

Then user.name should never be expected or processed. 

Maybe that is the problem we are seeing, no?

> For pseudo authentication, what to do if there is an expired token?
> ---
>
> Key: HADOOP-10416
> URL: https://issues.apache.org/jira/browse/HADOOP-10416
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c10416_20140321.patch, c10416_20140322.patch
>
>
> PseudoAuthenticationHandler currently only gets username from the "user.name" 
> parameter.  If there is an expired auth token in the request, the token is 
> ignored.  Further, if anonymous is enabled, the client will be authenticated 
> as anonymous.
> The above behavior seems undesirable since the client does not want to be 
> authenticated as anonymous.





[jira] [Created] (HADOOP-10436) ToolRunner is not thread-safe

2014-03-25 Thread Ajay Chitre (JIRA)
Ajay Chitre created HADOOP-10436:


 Summary: ToolRunner is not thread-safe
 Key: HADOOP-10436
 URL: https://issues.apache.org/jira/browse/HADOOP-10436
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Reporter: Ajay Chitre


ToolRunner class is not thread-safe because it uses GenericOptionsParser.  The 
constructor of GenericOptionsParser uses 'OptionBuilder' which is a singleton 
class that uses instance variables.  In other words, OptionBuilder is NOT 
thread safe.  As a result, when multiple Hadoop jobs are triggered 
simultaneously using ToolRunner they end up stepping on each other.

The easiest way to fix it is by making 'buildGeneralOptions' synchronized in 
GenericOptionsParser.

private static synchronized Options buildGeneralOptions(Options opts) {

If this seems like the correct way of fixing this, either we can provide a 
patch or someone can quickly fix it.  Thanks.

Ajay Chitre
achi...@cisco.com

Virendra Singh
virsi...@cisco.com
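The proposed fix pattern can be sketched in isolation. This is a hedged, self-contained model, not the actual GenericOptionsParser code: the shared StringBuilder stands in for the non-thread-safe OptionBuilder singleton, and all names are illustrative. Making the method `static synchronized` serializes all callers on the class lock, so concurrent jobs can no longer interleave the shared state.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SynchronizedBuilderDemo {
    // Stand-in for a non-thread-safe singleton such as OptionBuilder:
    // mutable static state that is mutated in several steps per call.
    static final StringBuilder SHARED = new StringBuilder();

    // The fix from the report: 'static synchronized' takes the class lock,
    // so the multi-step use of SHARED cannot interleave across threads.
    static synchronized String buildOptions(String name) {
        SHARED.setLength(0);
        SHARED.append("opt:").append(name);
        return SHARED.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicBoolean corrupted = new AtomicBoolean(false);
        Thread[] ts = new Thread[8];
        for (int i = 0; i < ts.length; i++) {
            final String name = "job" + i;
            ts[i] = new Thread(() -> {
                for (int k = 0; k < 1000; k++) {
                    // Without 'synchronized', another thread could reset or
                    // append to SHARED mid-build and this check could fail.
                    if (!buildOptions(name).equals("opt:" + name)) {
                        corrupted.set(true);
                    }
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(corrupted.get() ? "corrupted" : "ok"); // prints ok
    }
}
```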






[jira] [Updated] (HADOOP-10416) For pseudo authentication, what to do if there is an expired token?

2014-03-25 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-10416:
-

Description: 
PseudoAuthenticationHandler currently only gets username from the "user.name" 
parameter.  If there is an expired auth token in the request, the token is 
ignored.  Further, if anonymous is enabled, the client will be authenticated as 
anonymous.

The above behavior seems undesirable since the client does not want to be 
authenticated as anonymous.


  was:
PseudoAuthenticationHandler currently only gets username from the "user.name" 
parameter.  It should also renew expired auth token if it is available in the 
cookies.


Summary: For pseudo authentication, what to do if there is an expired 
token?  (was: If there is an expired token, PseudoAuthenticationHandler should 
renew it)

Revised summary and description.

> For pseudo authentication, what to do if there is an expired token?
> ---
>
> Key: HADOOP-10416
> URL: https://issues.apache.org/jira/browse/HADOOP-10416
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c10416_20140321.patch, c10416_20140322.patch
>
>
> PseudoAuthenticationHandler currently only gets username from the "user.name" 
> parameter.  If there is an expired auth token in the request, the token is 
> ignored.  Further, if anonymous is enabled, the client will be authenticated 
> as anonymous.
> The above behavior seems undesirable since the client does not want to be 
> authenticated as anonymous.





[jira] [Commented] (HADOOP-10416) If there is an expired token, PseudoAuthenticationHandler should renew it

2014-03-25 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13946929#comment-13946929
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-10416:
--

> ... Once the cookie expires, the user must present again his/her/its 
> credentials (in the case of pseudo via user.name query string parameter). 
> Using the cookie itself as the credentials is wrong.

[~tucu00], if anonymous is enabled, the expired cookie will be ignored and the 
client will be authenticated as anonymous.  The client won't be able to 
authenticate using user.name.  This is the problem.

If using the cookie itself as the credentials is wrong, we probably should 
return an error for an expired cookie.  However, this will change the behavior 
for both secure and non-secure settings.

> If there is an expired token, PseudoAuthenticationHandler should renew it
> -
>
> Key: HADOOP-10416
> URL: https://issues.apache.org/jira/browse/HADOOP-10416
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c10416_20140321.patch, c10416_20140322.patch
>
>
> PseudoAuthenticationHandler currently only gets username from the "user.name" 
> parameter.  It should also renew expired auth token if it is available in the 
> cookies.





[jira] [Commented] (HADOOP-10426) CreateOpts.getOpt(..) should declare with generic type argument

2014-03-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13946916#comment-13946916
 ] 

Hudson commented on HADOOP-10426:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #5399 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5399/])
HADOOP-10426. Declare CreateOpts.getOpt(..) with generic type argument, removes 
unused FileContext.getFileStatus(..) and fixes various javac warnings. 
(szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1581437)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataInputStream.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Options.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandFactory.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Display.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/PathData.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpConfig.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/AbstractMapWritable.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapWritable.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextTestHelper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextTestWrapper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemTestWrapper.java


> CreateOpts.getOpt(..) should declare with generic type argument
> ---
>
> Key: HADOOP-10426
> URL: https://issues.apache.org/jira/browse/HADOOP-10426
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Fix For: 2.5.0
>
> Attachments: c10426_20140324.patch
>
>
> Similar to CreateOpts.setOpt(..), the CreateOpts.getOpt(..) should also 
> declare with a generic type parameter .  Then, all the 
> casting from CreateOpts to its subclasses can be avoided.





[jira] [Updated] (HADOOP-10426) CreateOpts.getOpt(..) should declare with generic type argument

2014-03-25 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-10426:
-

   Resolution: Fixed
Fix Version/s: 2.5.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Junping for reviewing the patch.

I have committed this.

> CreateOpts.getOpt(..) should declare with generic type argument
> ---
>
> Key: HADOOP-10426
> URL: https://issues.apache.org/jira/browse/HADOOP-10426
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Fix For: 2.5.0
>
> Attachments: c10426_20140324.patch
>
>
> Similar to CreateOpts.setOpt(..), CreateOpts.getOpt(..) should also be 
> declared with a generic type parameter. Then, all the casting from 
> CreateOpts to its subclasses can be avoided.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10435) Enhance distcp to support preserving HDFS ACLs.

2014-03-25 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-10435:
--

 Summary: Enhance distcp to support preserving HDFS ACLs.
 Key: HADOOP-10435
 URL: https://issues.apache.org/jira/browse/HADOOP-10435
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools/distcp
Affects Versions: 2.4.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth


This issue tracks enhancing distcp to add a new command-line argument for 
preserving HDFS ACLs from the source at the copy destination.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10150) Hadoop cryptographic file system

2014-03-25 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13946828#comment-13946828
 ] 

Yi Liu commented on HADOOP-10150:
-

Thanks [~tucu00] for your comment.
We are less concerned with the internal use of the HDFS client; we care more 
about making encrypted data easy for clients to work with. We also found that 
webhdfs should use DistributedFileSystem as well, to remove the symlink issue 
that HDFS-4933 describes (the issue we hit is “Throwing UnresolvedPathException 
when getting an HDFS symlink file through the HDFS REST API”, and there are no 
“statistics” for HDFS REST, which is inconsistent with the behavior of 
DistributedFileSystem; presumably that JIRA will resolve it).

“Transparent” or “at rest” encryption usually means that the server handles 
encrypting data for persistence, but does not manage keys for particular 
clients or applications, nor require applications to even be aware that 
encryption is in use; hence it can be described as transparent. This type of 
solution distributes secret keys within the secure enclave (not to clients), 
or might employ a two-tier key architecture (data keys wrapped by the cluster 
secret key), with keys typically managed per application (e.g., per table in a 
database system). The goal here is to avoid data leakage from the server by 
universally encrypting data “at rest”.

Other cryptographic application architectures handle use cases where clients or 
applications want to protect data with encryption from other clients or 
applications. For those use cases encryption and decryption is done on the 
client, and the scope of key sharing should be minimized to where the 
cryptographic operations take place. In this type of solution the server 
becomes an unnecessary central point of compromise for user or application 
keys, so sharing there should be avoided. This isn’t really an “at rest” 
solution because the client may or may not choose to encrypt, and because key 
sharing is minimized, the server cannot and should not be able to distinguish 
encrypted data from random bytes, so cannot guarantee all persisted data is 
encrypted.

Therefore we have two different types of solutions useful for different 
reasons, with different threat models. Combinations of the two must be 
carefully done (or avoided) so as not to end up with something combining the 
worst of both threat models.

HDFS-6134 and HADOOP-10150 are orthogonal and complementary solutions when 
viewed in this light. HDFS-6134, as described at least by the JIRA title, wants 
to introduce transparent encryption within HDFS. In my opinion, it shouldn’t 
attempt “client side encryption on the server” for reasons mentioned above. 
HADOOP-10150 wants to make management of partially encrypted data easy for 
clients, for the client side encryption use cases, by presenting a filtered 
view over base Hadoop filesystems like HDFS.

{quote}in the "Storage of IV and data key" section it is stated: "So we 
implement extended information based on the INode feature, and use it to store 
the data key and IV."{quote}
We assume HDFS-2006 could help; that's why we submitted separate patches. The 
CFS patch is decoupled from the underlying filesystem when xattrs are present. 
And it could be the end user's choice whether to store the key alias or the 
data encryption key.

{quote}(Mentioned before), how will flush() operations be handled, as the 
encryption block will be cut short? How is this handled on writes? How is this 
handled on reads?{quote}
For hflush and hsync it is actually very simple. In the cryptographic output 
stream of CFS, we buffer the plaintext and encrypt only once the data size 
reaches the buffer length, to improve performance. So for hflush/hsync we just 
flush the buffer, encrypt immediately, and then call 
FSDataOutputStream.hflush/hsync, which handles the rest.
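As a minimal sketch of that buffering behavior (illustrative names and AES/CTR via the plain JCE API; this is not the actual CFS patch code): the stream encrypts lazily when its buffer fills, and flush() encrypts whatever is currently buffered before delegating downstream, which is exactly what an hflush/hsync wrapper needs.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical sketch of the buffer-then-encrypt scheme described above.
class CryptoOutputStream extends FilterOutputStream {
  private final Cipher cipher;   // AES/CTR/NoPadding: a stream cipher
  private final byte[] buffer;
  private int count;

  CryptoOutputStream(OutputStream out, byte[] key, byte[] iv, int bufSize)
      throws Exception {
    super(out);
    cipher = Cipher.getInstance("AES/CTR/NoPadding");
    cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
        new IvParameterSpec(iv));
    buffer = new byte[bufSize];
  }

  @Override
  public void write(int b) throws IOException {
    buffer[count++] = (byte) b;
    if (count == buffer.length) {
      encryptBuffer();           // encrypt only when a full buffer accumulates
    }
  }

  // An hflush/hsync wrapper would call this first, then delegate to
  // FSDataOutputStream.hflush()/hsync() on the underlying stream.
  @Override
  public void flush() throws IOException {
    encryptBuffer();             // encrypt whatever is buffered right now
    out.flush();
  }

  private void encryptBuffer() throws IOException {
    if (count > 0) {
      // CTR keeps its keystream position across update() calls, so
      // encrypting a short (flushed) buffer is safe.
      out.write(cipher.update(buffer, 0, count));
      count = 0;
    }
  }
}
```

CTR mode is what makes the early flush cheap: there is no padding, so a partial buffer can be emitted immediately without disturbing subsequent blocks.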

{quote}Still, it is not clear how transparency will be achieved for existing 
applications: HDFS URI changes, clients must connect to the Key store to 
retrieve the encryption key (clients will need key store principals). The 
encryption key must be propagated to jobs tasks (i.e. Mapper/Reducer 
processes){quote}
There is no URI change; please see the latest design doc and test cases.
We have considered HADOOP-9534 and HADOOP-10141; encryption of key material 
can be handled by the key provider implementation according to the customer's 
environment.

{quote}Use of AES-CTR (instead of an authenticated encryption mode such as 
AES-GCM){quote}
AES-GCM introduces additional CPU cycles for GHASH: 2.5x additional cycles on 
Sandy Bridge and Ivy Bridge, and 0.6x additional cycles on Haswell. Data 
integrity is already ensured by the underlying filesystem (HDFS) in this 
scenario, so we decided to use AES-CTR for best performance.
Furthermore, AES-GCM mode is not available as a JCE cipher in Java 6. Java 6 
may be EOL, but plenty of Hadoopers are still running it. It's not even listed 
in the Java 7 Sun provider document.

[jira] [Resolved] (HADOOP-9986) HDFS Compatible ViewFileSystem

2014-03-25 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HADOOP-9986.


Resolution: Invalid

> HDFS Compatible ViewFileSystem
> --
>
> Key: HADOOP-9986
> URL: https://issues.apache.org/jira/browse/HADOOP-9986
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lohit Vijayarenu
> Fix For: 2.0.6-alpha
>
>
> There are multiple scripts and projects, like Pig, Hive, and Elephant Bird, 
> that refer to the HDFS URI as hdfs://namenodehostport/ or hdfs:/// . In a 
> federated namespace this causes problems, because the supported scheme for 
> federation is viewfs:// . We would have to force all users to change their 
> scripts/programs to be able to access a federated cluster. 
> It would be great if there were a way to map the viewfs scheme to the hdfs 
> scheme without exposing it to users. Opening this JIRA to get input from 
> people who have thought about this in their clusters.
> In our clusters we ended up creating another class, 
> HDFSCompatibleViewFileSystem, which hijacks both hdfs.fs.impl and 
> viewfs.fs.impl and passes filesystem calls down to ViewFileSystem. Is there 
> any suggested approach other than this?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10414) Incorrect property name for RefreshUserMappingProtocol in hadoop-policy.xml

2014-03-25 Thread Joey Echeverria (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13946688#comment-13946688
 ] 

Joey Echeverria commented on HADOOP-10414:
--

I manually verified that the updated parameter in hadoop-policy.xml matches 
what's in the source code. I don't think a test is needed.

> Incorrect property name for RefreshUserMappingProtocol in hadoop-policy.xml
> ---
>
> Key: HADOOP-10414
> URL: https://issues.apache.org/jira/browse/HADOOP-10414
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.3.0
>Reporter: Joey Echeverria
>Assignee: Joey Echeverria
> Attachments: HADOOP-10414.patch
>
>
> In HDFS-1096 and MAPREDUCE-1836, the name of the ACL property for the 
> RefreshUserMappingsProtocol service changed form 
> security.refresh.usertogroups.mappings.protocol.acl to 
> security.refresh.user.mappings.protocol.acl, but the example in 
> hadoop-policy.xml was not updated. The example should be fixed to avoid 
> confusion.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10434) Is it possible to use "df" to calculate the dfs usage instead of "du"

2014-03-25 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-10434:
---

Summary: Is it possible to use "df" to calculate the dfs usage instead of 
"du"  (was: Is it possible to use "df" to calculating the dfs usage instead of 
"du")

> Is it possible to use "df" to calculate the dfs usage instead of "du"
> -
>
> Key: HADOOP-10434
> URL: https://issues.apache.org/jira/browse/HADOOP-10434
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.3.0
>Reporter: MaoYuan Xian
>Priority: Minor
>
> When we run a datanode on a machine with a large disk volume, we found that 
> the du operations from org.apache.hadoop.fs.DU's DURefreshThread cost a lot 
> of disk I/O.
> As we use the whole disk for HDFS storage, it is possible to calculate volume 
> usage via the "df" command. Is it necessary to add a "df" option for usage 
> calculation in HDFS 
> (org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice)?
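As a sketch of the idea, Java's NIO `FileStore` exposes the same volume-level numbers that `df` reports, so a df-style used-bytes figure can be computed without any recursive directory walk. The class and method names here are illustrative, not the proposed Hadoop change.

```java
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class DfStyleUsage {
  // Used bytes on the volume containing 'dir', computed the way df does:
  // total space minus unallocated space, with no per-file traversal.
  static long usedBytes(Path dir) throws Exception {
    FileStore store = Files.getFileStore(dir);
    return store.getTotalSpace() - store.getUnallocatedSpace();
  }

  public static void main(String[] args) throws Exception {
    Path p = Paths.get(args.length > 0 ? args[0] : ".");
    System.out.println("used bytes: " + usedBytes(p));
  }
}
```

Note the trade-off the reporter assumes: this is only accurate when the whole volume is dedicated to HDFS block storage; otherwise df counts non-HDFS files too.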



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10280) Make Schedulables return a configurable identity of user or group

2014-03-25 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13946668#comment-13946668
 ] 

Arpit Agarwal commented on HADOOP-10280:


Thanks Chris.

I intend to commit this later today if there are no objections.

> Make Schedulables return a configurable identity of user or group
> -
>
> Key: HADOOP-10280
> URL: https://issues.apache.org/jira/browse/HADOOP-10280
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Li
>Assignee: Chris Li
> Attachments: HADOOP-10280.patch, HADOOP-10280.patch, 
> HADOOP-10280.patch, HADOOP-10280.patch
>
>
> In order to intelligently schedule incoming calls, we need to know which 
> identity each call falls under.
> We do this by defining the Schedulable interface, which has one method, 
> getIdentity(IdentityType idType).
> The scheduler can then query a Schedulable object for its identity, depending 
> on what idType is. 
> For example:
> Call 1: Made by user=Alice, group=admins
> Call 2: Made by user=Bob, group=admins
> Call 3: Made by user=Carlos, group=users
> Call 4: Made by user=Alice, group=admins
> Depending on what the identity is, we would treat these requests differently. 
> If we query on Username, we can bucket these 4 requests into 3 sets for 
> Alice, Bob, and Carlos. If we query on Groupname, we can bucket these 4 
> requests into 2 sets for admins and users.
> In this initial version, idType can be username or primary group. In future 
> versions, it could be jobID, request class (read or write), or some explicit 
> QoS field. These are user-defined, and will be reloaded on callqueue refresh.
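The bucketing described above can be sketched as follows; the interface and enum are modeled on the JIRA description, and the surrounding class names are illustrative assumptions, not Hadoop's final code.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

enum IdentityType { USERNAME, GROUPNAME }

// One-method interface from the description: the scheduler asks a call
// for whichever identity type it is configured to schedule on.
interface Schedulable {
  String getIdentity(IdentityType idType);
}

class Call implements Schedulable {
  final String user, group;
  Call(String user, String group) { this.user = user; this.group = group; }
  @Override
  public String getIdentity(IdentityType idType) {
    return idType == IdentityType.USERNAME ? user : group;
  }
}

public class SchedulableDemo {
  // Bucket calls by the configured identity type.
  static Map<String, List<Schedulable>> bucket(List<Schedulable> calls,
                                               IdentityType t) {
    return calls.stream().collect(Collectors.groupingBy(c -> c.getIdentity(t)));
  }

  public static void main(String[] args) {
    List<Schedulable> calls = Arrays.asList(
        new Call("Alice", "admins"), new Call("Bob", "admins"),
        new Call("Carlos", "users"), new Call("Alice", "admins"));
    // 3 buckets by username (Alice, Bob, Carlos); 2 by group (admins, users)
    System.out.println(bucket(calls, IdentityType.USERNAME).size());
    System.out.println(bucket(calls, IdentityType.GROUPNAME).size());
  }
}
```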



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10015) UserGroupInformation prints out excessive ERROR warnings

2014-03-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13946559#comment-13946559
 ] 

Hudson commented on HADOOP-10015:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1712 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1712/])
HADOOP-10015. UserGroupInformation prints out excessive warnings.  Contributed 
by Nicolas Liochon (szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1580977)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java


> UserGroupInformation prints out excessive ERROR warnings
> 
>
> Key: HADOOP-10015
> URL: https://issues.apache.org/jira/browse/HADOOP-10015
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Nicolas Liochon
>Priority: Minor
> Fix For: 2.4.0
>
> Attachments: 10015.v3.patch, 10015.v4.patch, 10015.v5.patch, 
> 10015.v6.patch, HADOOP-10015.000.patch, HADOOP-10015.001.patch, 
> HADOOP-10015.002.patch
>
>
> In UserGroupInformation::doAs(), it prints out a log at ERROR level whenever 
> it catches an exception.
> However, it prints benign warnings in the following paradigm:
> {noformat}
>  try {
> ugi.doAs(new PrivilegedExceptionAction<FileStatus>() {
>   @Override
>   public FileStatus run() throws Exception {
> return fs.getFileStatus(nonExist);
>   }
> });
>   } catch (FileNotFoundException e) {
>   }
> {noformat}
> For example, FileSystem#exists() follows this paradigm. Distcp uses this 
> paradigm too. The exception is expected therefore there should be no ERROR 
> logs printed in the namenode logs.
> Currently, the user quickly finds that the namenode log fills up with 
> _benign_ ERROR logs when running distcp in a secure setup. This 
> behavior confuses operators.
> This jira proposes to move the log to DEBUG level.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10425) Incompatible behavior of LocalFileSystem:getContentSummary

2014-03-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13946568#comment-13946568
 ] 

Hudson commented on HADOOP-10425:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1712 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1712/])
HADOOP-10425. LocalFileSystem.getContentSummary should not count crc files. 
(szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1581183)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystem.java


> Incompatible behavior of LocalFileSystem:getContentSummary
> --
>
> Key: HADOOP-10425
> URL: https://issues.apache.org/jira/browse/HADOOP-10425
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.3.0
>Reporter: Brandon Li
>Assignee: Tsz Wo Nicholas Sze
>Priority: Critical
> Fix For: 2.4.0
>
> Attachments: c10425_20140324.patch, c10425_20140324b.patch
>
>
> Unlike in Hadoop1, FilterFileSystem overrides getContentSummary, which causes 
> content summary to be called on rawLocalFileSystem in Local mode.
> This impacts the computations of Stats in Hive with getting back FileSizes 
> that include the size of the crc files.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10423) Clarify compatibility policy document for combination of new client and old server.

2014-03-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13946561#comment-13946561
 ] 

Hudson commented on HADOOP-10423:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1712 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1712/])
HADOOP-10423. Clarify compatibility policy document for combination of new 
client and old server. (Chris Nauroth via kasha) (kasha: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1581116)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/Compatibility.apt.vm


> Clarify compatibility policy document for combination of new client and old 
> server.
> ---
>
> Key: HADOOP-10423
> URL: https://issues.apache.org/jira/browse/HADOOP-10423
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.3.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HADOOP-10423.1.patch
>
>
> As discussed on the dev mailing lists and MAPREDUCE-4052, we need to update 
> the text of the compatibility policy to discuss a new client combined with an 
> old server.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10422) Remove redundant logging of RPC retry attempts.

2014-03-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13946563#comment-13946563
 ] 

Hudson commented on HADOOP-10422:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1712 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1712/])
HADOOP-10422. Remove redundant logging of RPC retry attempts. Contributed by 
Chris Nauroth. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1581112)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryUtils.java


> Remove redundant logging of RPC retry attempts.
> ---
>
> Key: HADOOP-10422
> URL: https://issues.apache.org/jira/browse/HADOOP-10422
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.3.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HADOOP-10422.1.patch
>
>
> {{RetryUtils}} logs each retry attempt at both info level and debug level.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10422) Remove redundant logging of RPC retry attempts.

2014-03-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13946535#comment-13946535
 ] 

Hudson commented on HADOOP-10422:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1737 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1737/])
HADOOP-10422. Remove redundant logging of RPC retry attempts. Contributed by 
Chris Nauroth. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1581112)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryUtils.java


> Remove redundant logging of RPC retry attempts.
> ---
>
> Key: HADOOP-10422
> URL: https://issues.apache.org/jira/browse/HADOOP-10422
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.3.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HADOOP-10422.1.patch
>
>
> {{RetryUtils}} logs each retry attempt at both info level and debug level.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10425) Incompatible behavior of LocalFileSystem:getContentSummary

2014-03-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13946540#comment-13946540
 ] 

Hudson commented on HADOOP-10425:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1737 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1737/])
HADOOP-10425. LocalFileSystem.getContentSummary should not count crc files. 
(szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1581183)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystem.java


> Incompatible behavior of LocalFileSystem:getContentSummary
> --
>
> Key: HADOOP-10425
> URL: https://issues.apache.org/jira/browse/HADOOP-10425
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.3.0
>Reporter: Brandon Li
>Assignee: Tsz Wo Nicholas Sze
>Priority: Critical
> Fix For: 2.4.0
>
> Attachments: c10425_20140324.patch, c10425_20140324b.patch
>
>
> Unlike in Hadoop1, FilterFileSystem overrides getContentSummary, which causes 
> content summary to be called on rawLocalFileSystem in Local mode.
> This impacts the computations of Stats in Hive with getting back FileSizes 
> that include the size of the crc files.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10015) UserGroupInformation prints out excessive ERROR warnings

2014-03-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13946531#comment-13946531
 ] 

Hudson commented on HADOOP-10015:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1737 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1737/])
HADOOP-10015. UserGroupInformation prints out excessive warnings.  Contributed 
by Nicolas Liochon (szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1580977)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java


> UserGroupInformation prints out excessive ERROR warnings
> 
>
> Key: HADOOP-10015
> URL: https://issues.apache.org/jira/browse/HADOOP-10015
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Nicolas Liochon
>Priority: Minor
> Fix For: 2.4.0
>
> Attachments: 10015.v3.patch, 10015.v4.patch, 10015.v5.patch, 
> 10015.v6.patch, HADOOP-10015.000.patch, HADOOP-10015.001.patch, 
> HADOOP-10015.002.patch
>
>
> In UserGroupInformation::doAs(), it prints out a log at ERROR level whenever 
> it catches an exception.
> However, it prints benign warnings in the following paradigm:
> {noformat}
>  try {
> ugi.doAs(new PrivilegedExceptionAction<FileStatus>() {
>   @Override
>   public FileStatus run() throws Exception {
> return fs.getFileStatus(nonExist);
>   }
> });
>   } catch (FileNotFoundException e) {
>   }
> {noformat}
> For example, FileSystem#exists() follows this paradigm. Distcp uses this 
> paradigm too. The exception is expected therefore there should be no ERROR 
> logs printed in the namenode logs.
> Currently, the user quickly finds that the namenode log fills up with 
> _benign_ ERROR logs when running distcp in a secure setup. This 
> behavior confuses operators.
> This jira proposes to move the log to DEBUG level.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10423) Clarify compatibility policy document for combination of new client and old server.

2014-03-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13946533#comment-13946533
 ] 

Hudson commented on HADOOP-10423:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1737 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1737/])
HADOOP-10423. Clarify compatibility policy document for combination of new 
client and old server. (Chris Nauroth via kasha) (kasha: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1581116)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/Compatibility.apt.vm


> Clarify compatibility policy document for combination of new client and old 
> server.
> ---
>
> Key: HADOOP-10423
> URL: https://issues.apache.org/jira/browse/HADOOP-10423
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.3.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HADOOP-10423.1.patch
>
>
> As discussed on the dev mailing lists and MAPREDUCE-4052, we need to update 
> the text of the compatibility policy to discuss a new client combined with an 
> old server.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-8602) Passive mode support for FTPFileSystem

2014-03-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13946472#comment-13946472
 ] 

Hadoop QA commented on HADOOP-8602:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12580259/HADOOP-8602.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3711//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3711//console

This message is automatically generated.

> Passive mode support for FTPFileSystem
> --
>
> Key: HADOOP-8602
> URL: https://issues.apache.org/jira/browse/HADOOP-8602
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 1.0.3, 2.0.0-alpha
>Reporter: Nemon Lou
>Priority: Minor
> Attachments: HADOOP-8602.patch, HADOOP-8602.patch
>
>
>  FTPFileSystem uses active mode as the default data connection mode. We should 
> be able to choose passive mode when active mode doesn't work (behind a 
> firewall, for example).
>  My thought is to add an option "fs.ftp.data.connection.mode" in 
> core-site.xml. Since FTPClient (in the org.apache.commons.net.ftp package) already 
> supports passive mode, we only need to add a little code to the 
> FTPFileSystem.connect() method.
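A minimal sketch of what such a core-site.xml entry might look like. The property name comes from the description above; the value shown is an assumption, modeled on commons-net's data connection mode constants, since the accepted values are not specified here:

```xml
<!-- Hypothetical core-site.xml entry; the value name is an assumption
     modeled on commons-net's ACTIVE_LOCAL/PASSIVE_LOCAL mode constants. -->
<property>
  <name>fs.ftp.data.connection.mode</name>
  <value>PASSIVE_LOCAL_DATA_CONNECTION_MODE</value>
</property>
```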





[jira] [Commented] (HADOOP-8602) Passive mode support for FTPFileSystem

2014-03-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13946432#comment-13946432
 ] 

Steve Loughran commented on HADOOP-8602:


The case test needs to use {{toLowerCase()}} with an explicit locale; the 
{{equalsIgnoreCase()}} approach fails in locales where {{"I".toLowerCase() != "i"}} 
(the Turkish dotless-i problem).
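The hazard described above can be reproduced with plain JDK calls. This standalone sketch (class and method names are mine, not from the patch) shows why an explicit locale matters when lowercasing configuration values:

```java
import java.util.Locale;

public class TurkishLocaleDemo {
    // Hypothetical helper: normalize a config value with an explicit locale
    // so the comparison behaves the same under every default locale.
    static boolean isPassive(String mode) {
        return "passive".equals(mode.toLowerCase(Locale.ROOT));
    }

    public static void main(String[] args) {
        // In the Turkish locale, 'I' lowercases to dotless U+0131, not 'i'.
        String turkishLower = "I".toLowerCase(new Locale("tr", "TR"));
        System.out.println(turkishLower.equals("i")); // false
        System.out.println(isPassive("PASSIVE"));     // true
    }
}
```

Using {{Locale.ROOT}} pins the case mapping to locale-neutral rules, so the check cannot break when the JVM's default locale changes.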

> Passive mode support for FTPFileSystem
> --
>
> Key: HADOOP-8602
> URL: https://issues.apache.org/jira/browse/HADOOP-8602
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 1.0.3, 2.0.0-alpha
>Reporter: Nemon Lou
>Priority: Minor
> Attachments: HADOOP-8602.patch, HADOOP-8602.patch
>
>
>  FTPFileSystem uses active mode as the default data connection mode. We should 
> be able to choose passive mode when active mode doesn't work (behind a 
> firewall, for example).
>  My thought is to add an option "fs.ftp.data.connection.mode" in 
> core-site.xml. Since FTPClient (in the org.apache.commons.net.ftp package) already 
> supports passive mode, we only need to add a little code to the 
> FTPFileSystem.connect() method.





[jira] [Updated] (HADOOP-10420) Add support to Swift-FS to support tempAuth

2014-03-25 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10420:


Component/s: fs

> Add support to Swift-FS to support tempAuth
> ---
>
> Key: HADOOP-10420
> URL: https://issues.apache.org/jira/browse/HADOOP-10420
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, tools
>Affects Versions: 2.3.0
>Reporter: Jinghui Wang
> Attachments: HADOOP-10420.patch
>
>
> Currently, hadoop-openstack Swift FS supports keystone authentication. The 
> attached patch adds support for tempAuth. Users will be able to configure 
> which authentication to use.





[jira] [Commented] (HADOOP-10425) Incompatible behavior of LocalFileSystem:getContentSummary

2014-03-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13946408#comment-13946408
 ] 

Hudson commented on HADOOP-10425:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #520 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/520/])
HADOOP-10425. LocalFileSystem.getContentSummary should not count crc files. 
(szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1581183)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystem.java


> Incompatible behavior of LocalFileSystem:getContentSummary
> --
>
> Key: HADOOP-10425
> URL: https://issues.apache.org/jira/browse/HADOOP-10425
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.3.0
>Reporter: Brandon Li
>Assignee: Tsz Wo Nicholas Sze
>Priority: Critical
> Fix For: 2.4.0
>
> Attachments: c10425_20140324.patch, c10425_20140324b.patch
>
>
> Unlike in Hadoop 1, FilterFileSystem overrides getContentSummary, which causes 
> the content summary to be computed on RawLocalFileSystem in local mode.
> This impacts the computation of stats in Hive, which gets back file sizes 
> that include the size of the crc files.





[jira] [Commented] (HADOOP-10015) UserGroupInformation prints out excessive ERROR warnings

2014-03-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13946399#comment-13946399
 ] 

Hudson commented on HADOOP-10015:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #520 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/520/])
HADOOP-10015. UserGroupInformation prints out excessive warnings.  Contributed 
by Nicolas Liochon (szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1580977)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java


> UserGroupInformation prints out excessive ERROR warnings
> 
>
> Key: HADOOP-10015
> URL: https://issues.apache.org/jira/browse/HADOOP-10015
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Haohui Mai
>Assignee: Nicolas Liochon
>Priority: Minor
> Fix For: 2.4.0
>
> Attachments: 10015.v3.patch, 10015.v4.patch, 10015.v5.patch, 
> 10015.v6.patch, HADOOP-10015.000.patch, HADOOP-10015.001.patch, 
> HADOOP-10015.002.patch
>
>
> In UserGroupInformation::doAs(), a log entry is printed at ERROR level whenever 
> an exception is caught.
> However, this logs benign, expected exceptions when callers use the following 
> paradigm:
> {noformat}
> try {
>   ugi.doAs(new PrivilegedExceptionAction<FileStatus>() {
>     @Override
>     public FileStatus run() throws Exception {
>       return fs.getFileStatus(nonExist);
>     }
>   });
> } catch (FileNotFoundException e) {
>   // expected for a nonexistent path; safe to ignore
> }
> {noformat}
> For example, FileSystem#exists() follows this paradigm, and distcp uses it 
> too. The exception is expected, so no ERROR entries should be printed to the 
> namenode logs.
> Currently, the namenode log quickly fills with _benign_ ERROR entries when a 
> user runs distcp in a secure setup, which confuses operators.
> This jira proposes to move the log to DEBUG level.





[jira] [Commented] (HADOOP-10423) Clarify compatibility policy document for combination of new client and old server.

2014-03-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13946401#comment-13946401
 ] 

Hudson commented on HADOOP-10423:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #520 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/520/])
HADOOP-10423. Clarify compatibility policy document for combination of new 
client and old server. (Chris Nauroth via kasha) (kasha: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1581116)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/Compatibility.apt.vm


> Clarify compatibility policy document for combination of new client and old 
> server.
> ---
>
> Key: HADOOP-10423
> URL: https://issues.apache.org/jira/browse/HADOOP-10423
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.3.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HADOOP-10423.1.patch
>
>
> As discussed on the dev mailing lists and MAPREDUCE-4052, we need to update 
> the text of the compatibility policy to discuss a new client combined with an 
> old server.





[jira] [Commented] (HADOOP-10422) Remove redundant logging of RPC retry attempts.

2014-03-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13946403#comment-13946403
 ] 

Hudson commented on HADOOP-10422:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #520 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/520/])
HADOOP-10422. Remove redundant logging of RPC retry attempts. Contributed by 
Chris Nauroth. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1581112)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryUtils.java


> Remove redundant logging of RPC retry attempts.
> ---
>
> Key: HADOOP-10422
> URL: https://issues.apache.org/jira/browse/HADOOP-10422
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.3.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HADOOP-10422.1.patch
>
>
> {{RetryUtils}} logs each retry attempt at both info level and debug level.





[jira] [Commented] (HADOOP-10392) Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)

2014-03-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13946329#comment-13946329
 ] 

Hadoop QA commented on HADOOP-10392:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12633823/HADOOP-10392.3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 24 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-mapreduce-project/hadoop-mapreduce-examples 
hadoop-tools/hadoop-archives hadoop-tools/hadoop-extras 
hadoop-tools/hadoop-gridmix hadoop-tools/hadoop-openstack 
hadoop-tools/hadoop-rumen hadoop-tools/hadoop-streaming:

  org.apache.hadoop.streaming.TestStreamingTaskLog

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3706//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3706//console

This message is automatically generated.

> Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)
> 
>
> Key: HADOOP-10392
> URL: https://issues.apache.org/jira/browse/HADOOP-10392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.3.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-10392.2.patch, HADOOP-10392.3.patch, 
> HADOOP-10392.patch
>
>
> There are some methods calling Path.makeQualified(FileSystem), which causes a 
> javac deprecation warning.





[jira] [Commented] (HADOOP-10428) JavaKeyStoreProvider should accept keystore password via configuration falling back to ENV VAR

2014-03-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13946321#comment-13946321
 ] 

Hadoop QA commented on HADOOP-10428:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12636528/HADOOP-10428.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3708//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3708//console

This message is automatically generated.

>   JavaKeyStoreProvider should accept keystore password via configuration 
> falling back to ENV VAR
> ---
>
> Key: HADOOP-10428
> URL: https://issues.apache.org/jira/browse/HADOOP-10428
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-10428.patch
>
>
> Currently the password for the {{JavaKeyStoreProvider}} must be set in an ENV 
> VAR.
> Allowing the password to be set via configuration enables applications to 
> interactively ask for the password before initializing the 
> {{JavaKeyStoreProvider}}.
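The lookup order the description implies can be sketched with the JDK alone, using java.util.Properties as a stand-in for Hadoop's Configuration. The property and environment variable names here are illustrative assumptions, not taken from the patch:

```java
import java.util.Properties;

public class KeystorePasswordResolver {
    // Illustrative names; the real patch may use different keys.
    static final String CONF_KEY = "hadoop.security.keystore.password";
    static final String ENV_VAR = "HADOOP_KEYSTORE_PASSWORD";

    // Prefer the configuration value; fall back to the environment variable.
    static char[] resolvePassword(Properties conf) {
        String pw = conf.getProperty(CONF_KEY);
        if (pw == null) {
            pw = System.getenv(ENV_VAR);
        }
        return pw == null ? null : pw.toCharArray();
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty(CONF_KEY, "secret");
        System.out.println(new String(resolvePassword(conf))); // secret
    }
}
```

Checking the configuration first is what lets an application prompt for the password interactively and inject it before the provider is initialized.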





[jira] [Commented] (HADOOP-10429) KeyStores should have methods to generate the materials themselves, KeyShell should use them

2014-03-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13946309#comment-13946309
 ] 

Hadoop QA commented on HADOOP-10429:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12636530/HADOOP-10429.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3707//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3707//console

This message is automatically generated.

> KeyStores should have methods to generate the materials themselves, KeyShell 
> should use them
> 
>
> Key: HADOOP-10429
> URL: https://issues.apache.org/jira/browse/HADOOP-10429
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-10429.patch
>
>
> Currently, the {{KeyProvider}} API expects the caller to provide the key 
> material, and the {{KeyShell}} generates the key material itself.
> For security reasons, {{KeyProvider}} implementations may want to generate 
> the key material themselves and hide it from the user creating the key.
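The kind of provider-side generation described above can be sketched with the JDK's javax.crypto.KeyGenerator. This is a generic illustration of generating key material inside the provider rather than accepting it from the caller, not the KeyProvider API itself:

```java
import javax.crypto.KeyGenerator;
import java.security.NoSuchAlgorithmException;

public class KeyMaterialDemo {
    // Generate key material inside the provider instead of accepting it
    // from the caller; the raw bytes never need to be shown to the user
    // unless the provider chooses to expose them.
    static byte[] generateMaterial(String cipher, int bits)
            throws NoSuchAlgorithmException {
        KeyGenerator kg = KeyGenerator.getInstance(cipher);
        kg.init(bits);
        return kg.generateKey().getEncoded();
    }

    public static void main(String[] args) throws Exception {
        byte[] material = generateMaterial("AES", 128);
        System.out.println(material.length); // 16 bytes for AES-128
    }
}
```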





[jira] [Updated] (HADOOP-10434) Is it possible to use "df" to calculating the dfs usage instead of "du"

2014-03-25 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10434:
-

Summary: Is it possible to use "df" to calculating the dfs usage instead of 
"du"  (was: Is it possible to use "df" to calculating the dfs usage indtead of 
"du")

> Is it possible to use "df" to calculating the dfs usage instead of "du"
> ---
>
> Key: HADOOP-10434
> URL: https://issues.apache.org/jira/browse/HADOOP-10434
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.3.0
>Reporter: MaoYuan Xian
>Priority: Minor
>
> When we run a datanode on a machine with a large disk volume, we find that the 
> du operations run by org.apache.hadoop.fs.DU's DURefreshThread cost a lot of 
> disk performance.
> As we use the whole disk for HDFS storage, it is possible to calculate volume 
> usage via the "df" command. Would it be worth adding a "df" option for usage 
> calculation in HDFS 
> (org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice)?





[jira] [Created] (HADOOP-10434) Is it possible to use "df" to calculating the dfs usage indtead of "du"

2014-03-25 Thread MaoYuan Xian (JIRA)
MaoYuan Xian created HADOOP-10434:
-

 Summary: Is it possible to use "df" to calculating the dfs usage 
indtead of "du"
 Key: HADOOP-10434
 URL: https://issues.apache.org/jira/browse/HADOOP-10434
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.3.0
Reporter: MaoYuan Xian
Priority: Minor


When we run a datanode on a machine with a large disk volume, we find that the 
du operations run by org.apache.hadoop.fs.DU's DURefreshThread cost a lot of 
disk performance.

As we use the whole disk for HDFS storage, it is possible to calculate volume 
usage via the "df" command. Would it be worth adding a "df" option for usage 
calculation in HDFS 
(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice)?
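As a rough illustration of the trade-off, the JDK can report df-style whole-volume numbers in a single call, without walking the directory tree the way du does. This is only a sketch of the idea, not BlockPoolSlice code:

```java
import java.io.File;

public class VolumeUsageDemo {
    // df-style accounting: one statfs-like query per volume, O(1),
    // versus du-style accounting, which stats every file under the path.
    // The two agree only when the volume is dedicated to HDFS storage.
    static long usedBytes(File volume) {
        return volume.getTotalSpace() - volume.getUsableSpace();
    }

    public static void main(String[] args) {
        File root = new File("/");
        // Used space on a live volume is always non-negative.
        System.out.println(usedBytes(root) >= 0);
    }
}
```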


