[jira] [Commented] (HADOOP-11632) Cleanup Find.java to remove SupressWarnings annotations

2015-02-24 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336167#comment-14336167
 ] 

Akira AJISAKA commented on HADOOP-11632:


Thanks Tsuyoshi!

> Cleanup Find.java to remove SupressWarnings annotations
> ---
>
> Key: HADOOP-11632
> URL: https://issues.apache.org/jira/browse/HADOOP-11632
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11632-001.patch
>
>
> There are some SuppressWarnings annotations in Find.java. We should fix them.
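The kind of cleanup involved can be sketched as follows. This is a hypothetical example, not the actual Find.java change: an unchecked conversion hidden behind {{@SuppressWarnings}} is replaced by properly parameterized generics so the annotation can simply be deleted.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SuppressWarningsCleanup {

    // Before: a raw-typed parameter forces an unchecked conversion, which is
    // typically silenced with @SuppressWarnings("unchecked").
    @SuppressWarnings("unchecked")
    static List<String> rawCopy(List raw) {
        return new ArrayList<String>(raw);
    }

    // After: carrying the type parameter through lets the compiler verify the
    // assignment, so the annotation can be removed.
    static List<String> typedCopy(List<String> source) {
        return new ArrayList<>(source);
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("a", "b", "c");
        System.out.println(typedCopy(names)); // prints [a, b, c]
    }
}
```

The fix is usually mechanical: push the generic type outward until the compiler can check the whole chain.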



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10105) remove httpclient dependency

2015-02-24 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336164#comment-14336164
 ] 

Akira AJISAKA commented on HADOOP-10105:


bq. it's not a good idea to use mixed libraries of http clients.
We've already been using mixed libraries, so I think we can remove some of the 
httpclient dependencies in the 2.7.0 release and remove the remaining 
dependencies in the 2.8.0 release.

> remove httpclient dependency
> 
>
> Key: HADOOP-10105
> URL: https://issues.apache.org/jira/browse/HADOOP-10105
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Colin Patrick McCabe
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: HADOOP-10105.2.patch, HADOOP-10105.part.patch, 
> HADOOP-10105.part2.patch, HADOOP-10105.patch
>
>
> httpclient is now end-of-life and is no longer being developed.  Now that we 
> have a dependency on {{httpcore}}, we should phase out our use of the old 
> discontinued {{httpclient}} library in Hadoop.  This will allow us to reduce 
> {{CLASSPATH}} bloat and get updated code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10105) remove httpclient dependency

2015-02-24 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336154#comment-14336154
 ] 

Tsuyoshi Ozawa commented on HADOOP-10105:
-

[~ajisakaa] [~brahmareddy] Thank you for taking these issues. I have one 
question: which versions are you targeting? If we do this, the upgrades should 
happen at the same time, since it's not a good idea to use mixed libraries of 
http clients. I looked at some related tickets, and some resolved issues 
target 2.7.0. Is it possible to remove all the dependencies in the 2.7.0 
release? If not, I think it's better to target the 2.8.0 release.

> remove httpclient dependency
> 
>
> Key: HADOOP-10105
> URL: https://issues.apache.org/jira/browse/HADOOP-10105
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Colin Patrick McCabe
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: HADOOP-10105.2.patch, HADOOP-10105.part.patch, 
> HADOOP-10105.part2.patch, HADOOP-10105.patch
>
>
> httpclient is now end-of-life and is no longer being developed.  Now that we 
> have a dependency on {{httpcore}}, we should phase out our use of the old 
> discontinued {{httpclient}} library in Hadoop.  This will allow us to reduce 
> {{CLASSPATH}} bloat and get updated code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11632) Cleanup Find.java to remove SupressWarnings annotations

2015-02-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336153#comment-14336153
 ] 

Hudson commented on HADOOP-11632:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7197 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7197/])
HADOOP-11632. Cleanup Find.java to remove SupressWarnings annotations. 
Contributed by Akira AJISAKA. (ozawa: rev 
ad8ed3e802782a7a3fb3d21c5862673a8f695372)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/find/Find.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Cleanup Find.java to remove SupressWarnings annotations
> ---
>
> Key: HADOOP-11632
> URL: https://issues.apache.org/jira/browse/HADOOP-11632
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11632-001.patch
>
>
> There are some SuppressWarnings annotations in Find.java. We should fix them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11632) Cleanup Find.java to remove SupressWarnings annotations

2015-02-24 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-11632:

   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks Akira for your report and 
contribution.

> Cleanup Find.java to remove SupressWarnings annotations
> ---
>
> Key: HADOOP-11632
> URL: https://issues.apache.org/jira/browse/HADOOP-11632
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11632-001.patch
>
>
> There are some SuppressWarnings annotations in Find.java. We should fix them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11632) Cleanup Find.java to remove SupressWarnings annotations

2015-02-24 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HADOOP-11632:

Summary: Cleanup Find.java to remove SupressWarnings annotations  (was: 
Cleanup Find.java remove SupressWarnings annotations)

> Cleanup Find.java to remove SupressWarnings annotations
> ---
>
> Key: HADOOP-11632
> URL: https://issues.apache.org/jira/browse/HADOOP-11632
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: HADOOP-11632-001.patch
>
>
> There are some SuppressWarnings annotations in Find.java. We should fix them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11632) Cleanup Find.java remove SupressWarnings annotations

2015-02-24 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HADOOP-11632:

Summary: Cleanup Find.java remove SupressWarnings annotations  (was: Clean 
up SupressWarnings annotations from Find.java)

> Cleanup Find.java remove SupressWarnings annotations
> 
>
> Key: HADOOP-11632
> URL: https://issues.apache.org/jira/browse/HADOOP-11632
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: HADOOP-11632-001.patch
>
>
> There are some SuppressWarnings annotations in Find.java. We should fix them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11632) Cleanup Find.java to remove SupressWarnings annotations

2015-02-24 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336143#comment-14336143
 ] 

Tsuyoshi OZAWA commented on HADOOP-11632:
-

+1, committing this shortly.

> Cleanup Find.java to remove SupressWarnings annotations
> ---
>
> Key: HADOOP-11632
> URL: https://issues.apache.org/jira/browse/HADOOP-11632
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: HADOOP-11632-001.patch
>
>
> There are some SuppressWarnings annotations in Find.java. We should fix them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11620) Add Support for Load Balancing across a group of KMS servers for HA

2015-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336140#comment-14336140
 ] 

Hadoop QA commented on HADOOP-11620:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12700685/HADOOP-11620.5.patch
  against trunk revision 6cbd9f1.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5775//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5775//console

This message is automatically generated.

> Add Support for Load Balancing across a group of KMS servers for HA
> ---
>
> Key: HADOOP-11620
> URL: https://issues.apache.org/jira/browse/HADOOP-11620
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: HADOOP-11620.1.patch, HADOOP-11620.2.patch, 
> HADOOP-11620.3.patch, HADOOP-11620.4.patch, HADOOP-11620.5.patch
>
>
> This patch needs to add support for :
> * specification of multiple hostnames in the kms key provider uri
> * KMS client to load balance requests across the hosts specified in the kms 
> keyprovider uri.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9922) hadoop windows native build will fail in 32 bit machine

2015-02-24 Thread Kiran Kumar M R (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336127#comment-14336127
 ] 

Kiran Kumar M R commented on HADOOP-9922:
-

Thanks for the review, [~cnauroth]. I have attached a new patch addressing 
these warnings and a few more.

For some variables, I declared {{unsigned int}} instead of {{size_t}}, because 
the 64-bit build complained about assigning {{size_t}} to {{ULONG}}.

Following warnings in 32-bit build are resolved:
{code}
libwinutils.c(2887): warning C4018: '<' : signed/unsigned mismatch 
[winutils\libwinutils.vcxproj]
libwinutils.c(2899): warning C4018: '<' : signed/unsigned mismatch 
[winutils\libwinutils.vcxproj]
service.c(187): warning C4018: '<' : signed/unsigned mismatch 
[winutils\winutils.vcxproj]
service.c(282): warning C4018: '<' : signed/unsigned mismatch 
[winutils\winutils.vcxproj]
service.c(380): warning C4018: '<' : signed/unsigned mismatch 
[winutils\winutils.vcxproj]
service.c(430): warning C4018: '<' : signed/unsigned mismatch 
[winutils\winutils.vcxproj]
service.c(1117): warning C4020: 'AddNodeManagerAndUserACEsToObject' : too many 
actual parameters [winutils\winutils.vcxproj]
task.c(160): warning C4018: '<' : signed/unsigned mismatch 
[winutils\winutils.vcxproj]
task.c(195): warning C4018: '<' : signed/unsigned mismatch 
[winutils\winutils.vcxproj]
task.c(240): warning C4029: declared formal parameter list different from 
definition [winutils\winutils.vcxproj]
task.c(339): warning C4018: '<' : signed/unsigned mismatch 
[winutils\winutils.vcxproj]
{code}

Following warnings in 64-bit build are resolved:
{code}
service.c(282): warning C4018: '<' : signed/unsigned mismatch 
[winutils\winutils.vcxproj]
service.c(1117): warning C4020: 'AddNodeManagerAndUserACEsToObject' : too many 
actual parameters [winutils\winutils.vcxproj]
task.c(240): warning C4029: declared formal parameter list different from 
definition [winutils\winutils.vcxproj]
task.c(339): warning C4018: '<' : signed/unsigned mismatch 
[winutils\winutils.vcxproj]
{code}

One warning is changed in 64-bit build:
{code}
- task.c(312): warning C4133: 'function' : incompatible types - from 'int *' to 
'size_t *' [winutils\winutils.vcxproj]
+ task.c(312): warning C4133: 'function' : incompatible types - from 'unsigned 
int *' to 'size_t *' [winutils\winutils.vcxproj]
{code}

> hadoop windows native build will fail in 32 bit machine
> ---
>
> Key: HADOOP-9922
> URL: https://issues.apache.org/jira/browse/HADOOP-9922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Vinayakumar B
>Assignee: Kiran Kumar M R
> Attachments: HADOOP-9922-002.patch, HADOOP-9922-003.patch, 
> HADOOP-9922-004.patch, HADOOP-9922.patch
>
>
> Building Hadoop in windows 32 bit machine fails as native project is not 
> having Win32 configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9922) hadoop windows native build will fail in 32 bit machine

2015-02-24 Thread Kiran Kumar M R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar M R updated HADOOP-9922:

Attachment: HADOOP-9922-004.patch

> hadoop windows native build will fail in 32 bit machine
> ---
>
> Key: HADOOP-9922
> URL: https://issues.apache.org/jira/browse/HADOOP-9922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Vinayakumar B
>Assignee: Kiran Kumar M R
> Attachments: HADOOP-9922-002.patch, HADOOP-9922-003.patch, 
> HADOOP-9922-004.patch, HADOOP-9922.patch
>
>
> Building Hadoop in windows 32 bit machine fails as native project is not 
> having Win32 configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11635) scheduled jobs are failing

2015-02-24 Thread ankush (JIRA)
ankush created HADOOP-11635:
---

 Summary: scheduled jobs are failing
 Key: HADOOP-11635
 URL: https://issues.apache.org/jira/browse/HADOOP-11635
 Project: Hadoop Common
  Issue Type: Bug
Reporter: ankush


Sqoop is unable to load the db driver for AS400.

Scheduled jobs are failing with the following error:

15/02/24 23:58:01 ERROR sqoop.Sqoop: Got exception running Sqoop: 
java.lang.RuntimeException: Could not load db driver class: 
com.ibm.as400.access.AS400JDBCDriver

We performed an upgrade of our Hadoop infrastructure from v5.1.2 to v5.2.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11620) Add Support for Load Balancing across a group of KMS servers for HA

2015-02-24 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HADOOP-11620:
-
Attachment: HADOOP-11620.5.patch

Thanks for the review, [~andrew.wang]

Uploading an updated patch with your suggestions.

bq. ..enough to safely increment currentIdx. I guess it's not a big deal, but 
it'd be safer to use an AtomicInt here.
Actually, an atomic increment alone won't work here, since I have to do a 
modulo increment; I've changed the increment to happen in a synchronized scope 
instead.

bq. ..the createProvider refactor related to this patch? Doesn't seem 
necessary.
Agreed. I was originally planning to use it for something else, but I decided 
to keep it, since copy-pasting the LBKMSClientProvider constructor everywhere 
looked ugly.
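For context, the two alternatives being discussed can be sketched as follows. The class and field names are illustrative, not the actual patch code; it also shows that a modulo step can still be combined with an {{AtomicInteger}} by applying the modulo after the atomic increment.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical round-robin selection over a fixed provider list.
class RoundRobin<T> {
    private final List<T> providers;
    private final AtomicInteger currentIdx = new AtomicInteger();
    private int syncIdx = 0;

    RoundRobin(List<T> providers) {
        this.providers = providers;
    }

    // Variant 1: atomic counter, modulo applied after the atomic step.
    // floorMod keeps the index non-negative even if the counter overflows.
    T nextAtomic() {
        return providers.get(Math.floorMod(currentIdx.getAndIncrement(),
                                           providers.size()));
    }

    // Variant 2: the synchronized-scope approach the comment describes.
    synchronized T nextSynchronized() {
        T p = providers.get(syncIdx);
        syncIdx = (syncIdx + 1) % providers.size();
        return p;
    }
}

public class RoundRobinDemo {
    public static void main(String[] args) {
        RoundRobin<String> rr = new RoundRobin<>(Arrays.asList("kms1", "kms2"));
        System.out.println(rr.nextAtomic()); // prints kms1
        System.out.println(rr.nextAtomic()); // prints kms2
        System.out.println(rr.nextAtomic()); // prints kms1
    }
}
```

Both variants are thread-safe; the synchronized form is simpler to reason about when the index update and the list lookup must stay together.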

> Add Support for Load Balancing across a group of KMS servers for HA
> ---
>
> Key: HADOOP-11620
> URL: https://issues.apache.org/jira/browse/HADOOP-11620
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: HADOOP-11620.1.patch, HADOOP-11620.2.patch, 
> HADOOP-11620.3.patch, HADOOP-11620.4.patch, HADOOP-11620.5.patch
>
>
> This patch needs to add support for :
> * specification of multiple hostnames in the kms key provider uri
> * KMS client to load balance requests across the hosts specified in the kms 
> keyprovider uri.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10027) *Compressor_deflateBytesDirect passes instance instead of jclass to GetStaticObjectField

2015-02-24 Thread Hui Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336041#comment-14336041
 ] 

Hui Zheng commented on HADOOP-10027:


I take it that the problem happened in the GetStaticObjectField call; however, 
it would be better to use an instance-wide lock instead of a class-wide lock.
What I don't understand is why the GetDirectBufferAddress call needs to be 
synchronized at all, as in:

{code}
LOCK_CLASS(env, clazz, "ZlibCompressor");
uncompressed_bytes = (*env)->GetDirectBufferAddress(env,
                                                    uncompressed_direct_buf);
UNLOCK_CLASS(env, clazz, "ZlibCompressor");
{code}

I think that if we need thread safety, we should synchronize every method of 
ZlibCompressor that uses compressedDirectBuf or uncompressedDirectBuf.
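The distinction between the two locking granularities can be sketched in plain Java: LOCK_CLASS in the native code corresponds to synchronizing on the class object. The names below are illustrative, not the real ZlibCompressor.

```java
// A class-wide lock serializes calls across ALL instances, while an
// instance-wide lock only serializes calls that share the same object.
class BufferedCompressor {
    private long bytesIn;

    // Class-wide lock: what LOCK_CLASS(env, clazz, ...) amounts to on the
    // JNI side. Two threads using two DIFFERENT compressors still contend.
    void writeClassLocked(int n) {
        synchronized (BufferedCompressor.class) {
            bytesIn += n;
        }
    }

    // Instance-wide lock: protects only this object's buffers, which is all
    // that is needed when the buffers are instance fields.
    synchronized void writeInstanceLocked(int n) {
        bytesIn += n;
    }

    long bytesIn() {
        return bytesIn;
    }
}

public class LockScopeDemo {
    public static void main(String[] args) {
        BufferedCompressor c = new BufferedCompressor();
        c.writeClassLocked(10);
        c.writeInstanceLocked(5);
        System.out.println(c.bytesIn()); // prints 15
    }
}
```

Since uncompressed_direct_buf is an instance field, the instance lock is the narrower and more appropriate scope.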
 

> *Compressor_deflateBytesDirect passes instance instead of jclass to 
> GetStaticObjectField
> 
>
> Key: HADOOP-10027
> URL: https://issues.apache.org/jira/browse/HADOOP-10027
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Eric Abbott
>Assignee: Hui Zheng
>Priority: Minor
>
> http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c?view=markup
> This pattern appears in all the native compressors.
> // Get members of ZlibCompressor
> jobject clazz = (*env)->GetStaticObjectField(env, this,
>  ZlibCompressor_clazz);
> The 2nd argument to GetStaticObjectField is supposed to be a jclass, not a 
> jobject. Adding the JVM param -Xcheck:jni will cause "FATAL ERROR in native 
> method: JNI received a class argument that is not a class" and a core dump 
> such as the following.
> (gdb) 
> #0 0x7f02e4aef8a5 in raise () from /lib64/libc.so.6
> #1 0x7f02e4af1085 in abort () from /lib64/libc.so.6
> #2 0x7f02e45bd727 in os::abort(bool) () from 
> /opt/jdk1.6.0_31/jre/lib/amd64/server/libjvm.so
> #3 0x7f02e43cec63 in jniCheck::validate_class(JavaThread*, _jclass*, 
> bool) () from /opt/jdk1.6.0_31/jre/lib/amd64/server/libjvm.so
> #4 0x7f02e43ea669 in checked_jni_GetStaticObjectField () from 
> /opt/jdk1.6.0_31/jre/lib/amd64/server/libjvm.so
> #5 0x7f02d38eaf79 in 
> Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_deflateBytesDirect () 
> from /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
> In addition, that clazz object is only used for synchronization. In the case 
> of the native method _deflateBytesDirect, the result is a class wide lock 
> used to access the instance field uncompressed_direct_buf. Perhaps using the 
> instance as the sync point is more appropriate?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11634) Webhdfs kerboes principal and keytab descriptions are Interchanged in SecureMode doc

2015-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335919#comment-14335919
 ] 

Hadoop QA commented on HADOOP-11634:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12700656/HADOOP-11634.patch
  against trunk revision 1a625b8.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5774//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5774//console

This message is automatically generated.

> Webhdfs kerboes principal and keytab descriptions are Interchanged  in 
> SecureMode doc
> -
>
> Key: HADOOP-11634
> URL: https://issues.apache.org/jira/browse/HADOOP-11634
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-11634.patch
>
>
> *Need to interchange the following notes for principal and keytab* 
> {noformat}
> Parameter                                  Value                                      Notes
> dfs.web.authentication.kerberos.principal  http/_h...@realm.tld                       Kerberos keytab file for the WebHDFS.
> dfs.web.authentication.kerberos.keytab     /etc/security/keytab/http.service.keytab   Kerberos principal name for WebHDFS.
> {noformat}
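For reference, the intended pairing once the descriptions are swapped can be sketched as hdfs-site.xml properties; the example values, including the elided principal, are the ones from the report.

```xml
<!-- Sketch of the corrected descriptions; values are the report's examples. -->
<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>http/_h...@realm.tld</value>
  <description>Kerberos principal name for WebHDFS.</description>
</property>
<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/etc/security/keytab/http.service.keytab</value>
  <description>Kerberos keytab file for WebHDFS.</description>
</property>
```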



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11615) Remove MRv1-specific terms from ServiceLevelAuth.md

2015-02-24 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335898#comment-14335898
 ] 

Brahma Reddy Battula commented on HADOOP-11615:
---

Thanks a lot for the review. I corrected this as well, but it doesn't show up 
in the generated patch; I think that's because the line is so long.

> Remove MRv1-specific terms from ServiceLevelAuth.md
> ---
>
> Key: HADOOP-11615
> URL: https://issues.apache.org/jira/browse/HADOOP-11615
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: HADOOP-11615.patch
>
>
> JobTracker should be ResourceManager, and {{hadoop mradmin}} should be {{yarn 
> rmadmin}} in ServiceLevelAuth.md.
> {code}
> The service-level authorization configuration for the NameNode and JobTracker 
> can be changed without restarting either of the Hadoop master daemons. The 
> cluster administrator can change `$HADOOP_CONF_DIR/hadoop-policy.xml` on the 
> master nodes and instruct the NameNode and JobTracker to reload their 
> respective configurations via the `-refreshServiceAcl` switch to `dfsadmin` 
> and `mradmin` commands respectively.
> Refresh the service-level authorization configuration for the NameNode:
>$ bin/hadoop dfsadmin -refreshServiceAcl
> Refresh the service-level authorization configuration for the JobTracker:
>$ bin/hadoop mradmin -refreshServiceAcl
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11615) Remove MRv1-specific terms from ServiceLevelAuth.md

2015-02-24 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335899#comment-14335899
 ] 

Brahma Reddy Battula commented on HADOOP-11615:
---

Thanks a lot for the review. I corrected this as well, but it doesn't show up 
in the generated patch; I think that's because the line is so long.

> Remove MRv1-specific terms from ServiceLevelAuth.md
> ---
>
> Key: HADOOP-11615
> URL: https://issues.apache.org/jira/browse/HADOOP-11615
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: HADOOP-11615.patch
>
>
> JobTracker should be ResourceManager, and {{hadoop mradmin}} should be {{yarn 
> rmadmin}} in ServiceLevelAuth.md.
> {code}
> The service-level authorization configuration for the NameNode and JobTracker 
> can be changed without restarting either of the Hadoop master daemons. The 
> cluster administrator can change `$HADOOP_CONF_DIR/hadoop-policy.xml` on the 
> master nodes and instruct the NameNode and JobTracker to reload their 
> respective configurations via the `-refreshServiceAcl` switch to `dfsadmin` 
> and `mradmin` commands respectively.
> Refresh the service-level authorization configuration for the NameNode:
>$ bin/hadoop dfsadmin -refreshServiceAcl
> Refresh the service-level authorization configuration for the JobTracker:
>$ bin/hadoop mradmin -refreshServiceAcl
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11618) DelegateToFileSystem always uses default FS's default port

2015-02-24 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335883#comment-14335883
 ] 

Gera Shegalov commented on HADOOP-11618:


[~brahmareddy], thanks for working on this patch. 
Please fix the code style and add a test.

> DelegateToFileSystem always uses default FS's default port 
> ---
>
> Key: HADOOP-11618
> URL: https://issues.apache.org/jira/browse/HADOOP-11618
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Gera Shegalov
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-11618.patch
>
>
> DelegateToFileSystem constructor has the following code:
> {code}
> super(theUri, supportedScheme, authorityRequired,
> FileSystem.getDefaultUri(conf).getPort());
> {code}
> The default port should be taken from theFsImpl instead.
> {code}
> super(theUri, supportedScheme, authorityRequired,
> theFsImpl.getDefaultPort());
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11634) Webhdfs kerboes principal and keytab descriptions are Interchanged in SecureMode doc

2015-02-24 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-11634:
--
Summary: Webhdfs kerboes principal and keytab descriptions are Interchanged 
 in SecureMode doc  (was: Webhdfs kerboes principal and keytab descriptions are 
wrongly given( Interchanged)  in SecureMode doc)

> Webhdfs kerboes principal and keytab descriptions are Interchanged  in 
> SecureMode doc
> -
>
> Key: HADOOP-11634
> URL: https://issues.apache.org/jira/browse/HADOOP-11634
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-11634.patch
>
>
> *Need to interchange the following notes for principal and keytab* 
> {noformat}
> Parameter                                  Value                                      Notes
> dfs.web.authentication.kerberos.principal  http/_h...@realm.tld                       Kerberos keytab file for the WebHDFS.
> dfs.web.authentication.kerberos.keytab     /etc/security/keytab/http.service.keytab   Kerberos principal name for WebHDFS.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11634) Webhdfs kerboes principal and keytab descriptions are wrongly given( Interchanged) in SecureMode doc

2015-02-24 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-11634:
--
Status: Patch Available  (was: Open)

> Webhdfs kerboes principal and keytab descriptions are wrongly given( 
> Interchanged)  in SecureMode doc
> -
>
> Key: HADOOP-11634
> URL: https://issues.apache.org/jira/browse/HADOOP-11634
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-11634.patch
>
>
> *Need to interchange the following notes for principal and keytab* 
> {noformat}
> Parameter                                  Value                                      Notes
> dfs.web.authentication.kerberos.principal  http/_h...@realm.tld                       Kerberos keytab file for the WebHDFS.
> dfs.web.authentication.kerberos.keytab     /etc/security/keytab/http.service.keytab   Kerberos principal name for WebHDFS.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11634) Webhdfs kerboes principal and keytab descriptions are wrongly given( Interchanged) in SecureMode doc

2015-02-24 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-11634:
--
Attachment: HADOOP-11634.patch

> Webhdfs kerboes principal and keytab descriptions are wrongly given( 
> Interchanged)  in SecureMode doc
> -
>
> Key: HADOOP-11634
> URL: https://issues.apache.org/jira/browse/HADOOP-11634
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-11634.patch
>
>
> *Need to interchange the following notes for principal and keytab* 
> {noformat}
> Parameter                                  Value                                      Notes
> dfs.web.authentication.kerberos.principal  http/_h...@realm.tld                       Kerberos keytab file for the WebHDFS.
> dfs.web.authentication.kerberos.keytab     /etc/security/keytab/http.service.keytab   Kerberos principal name for WebHDFS.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11626) DFSInputStream should only update ReadStatistics when the read is success.

2015-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335849#comment-14335849
 ] 

Hadoop QA commented on HADOOP-11626:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12700598/HADOOP-11626.000.patch
  against trunk revision 9a37247.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
  org.apache.hadoop.hdfs.server.balancer.TestBalancer

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5770//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5770//console

This message is automatically generated.

> DFSInputStream should only update ReadStatistics when the read is success.
> --
>
> Key: HADOOP-11626
> URL: https://issues.apache.org/jira/browse/HADOOP-11626
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Trivial
> Attachments: HADOOP-11626.000.patch
>
>
> In {{DFSInputStream#actualGetFromOneDataNode()}}, it updates the 
> {{ReadStatistics}} even when the read has failed:
> {code}
> int nread = reader.readAll(buf, offset, len);
> updateReadStatistics(readStatistics, nread, reader);
> if (nread != len) {
>   throw new IOException("truncated return from reader.read(): " +
> "excpected " + len + ", got " + nread);
> }
> {code}
> It should record only successful reads, i.e., update the statistics after the 
> length check that can throw {{IOE}}.
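The proposed ordering can be sketched in a minimal, self-contained form (the class and counter below are illustrative stand-ins, not the actual DFSInputStream/ReadStatistics code): throw on a short read before touching the statistics, so only successful reads are counted.

```java
import java.io.IOException;

public class ReadStatsSketch {
    static long totalBytesRead = 0;  // stands in for ReadStatistics

    // Simulates the fixed read path: the length check runs first,
    // so a truncated read throws before statistics are updated.
    static void readAndRecord(int nread, int len) throws IOException {
        if (nread != len) {
            throw new IOException("truncated return from reader.read(): "
                + "expected " + len + ", got " + nread);
        }
        totalBytesRead += nread;  // only successful reads are counted
    }

    public static void main(String[] args) {
        try {
            readAndRecord(512, 512);   // success: counted
            readAndRecord(100, 512);   // truncated: throws before counting
        } catch (IOException e) {
            // expected for the truncated read
        }
        System.out.println(totalBytesRead);  // prints 512
    }
}
```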





[jira] [Created] (HADOOP-11634) WebHDFS kerberos principal and keytab descriptions are wrongly given (interchanged) in SecureMode doc

2015-02-24 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-11634:
-

 Summary: WebHDFS kerberos principal and keytab descriptions are 
wrongly given (interchanged) in SecureMode doc
 Key: HADOOP-11634
 URL: https://issues.apache.org/jira/browse/HADOOP-11634
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.6.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


 *Need to interchange the following notes for principal and keytab* 

{noformat}
Parameter                                  Value                                     Notes
dfs.web.authentication.kerberos.principal  http/_h...@realm.tld                      Kerberos keytab file for the WebHDFS.
dfs.web.authentication.kerberos.keytab     /etc/security/keytab/http.service.keytab  Kerberos principal name for WebHDFS.
{noformat}
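For reference, the corrected pairing would look roughly like the hdfs-site.xml fragment below. The principal/realm value is an illustrative placeholder, not taken from this report; only the keytab path appears in the description above.

```xml
<!-- Sketch only: HTTP/_HOST@EXAMPLE.COM is a placeholder principal. -->
<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/etc/security/keytab/http.service.keytab</value>
</property>
```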






[jira] [Commented] (HADOOP-11620) Add Support for Load Balancing across a group of KMS servers for HA

2015-02-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335806#comment-14335806
 ] 

Andrew Wang commented on HADOOP-11620:
--

Hi Arun, thanks for working on this. Broad strokes look good, just mostly nitty 
stuff:

KMSClientProvider:
* Unused import
* Typo: presesnt
* Could we split the new parsing into functions?
* What is the purpose of the new constructor that takes a Path, and marking the 
current one as @VisibleForTesting?

TestKeyProviderFactory:
* Unused new imports

TestKMS:
* Unused import, although unrelated to this patch
* Is the createProvider refactor related to this patch? Doesn't seem necessary.

LoadBalancingKMSClientProvider:
* getCurrentIdx is unused
* Let's use {{Time.monotonicNow}} rather than {{System.currentTimeMillis}}; it's 
monotonic, so it isn't affected by system clock adjustments.
* {{seed}} is not quite the right term in the test constructor, maybe just 
{{currentIdx}}?
* in doOp, let's use slf4j substitution instead of string concatenation for the 
logs.
* In doOp, I'd recommend always printing the LOG warn on an exception (the else 
case), then additionally log "Failed to contact any of the KMS in the load 
balancer group, aborting." if we're at the end. It'd also be good to include 
the IOE's message in the first log.
* doOp would also be more clear as a for loop, rather than nesting all these 
doOp calls. Seems like recursion will lead to funky stacktraces too.
* getStartIdx, maybe rename to {{nextIdx}} as it's not strictly a getter? Since 
this pre-increments, it makes using the test constructor a little more 
difficult, would be easier if it did post-increment.
* Just using {{volatile}} isn't enough to safely increment {{currentIdx}}. I 
guess it's not a big deal, but it'd be safer to use an AtomicInt here.
* Some lines are longer than 80 chars.

TestLoadBalancingKMSClientProvider:
* Needs an auto-formatter pass, the multi-line chained mocking should be double 
indented
* Let's use some more descriptive messages than "Should fail" :)
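One of the points above (replacing the {{volatile}} counter with an AtomicInt for the round-robin index) can be sketched as follows. Class and method names are illustrative, not from the actual patch; the post-increment behavior suggested in the review is assumed.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinIndex {
    // getAndIncrement is atomic, unlike "volatile int" ++, which is a
    // non-atomic read-modify-write.
    private final AtomicInteger currentIdx = new AtomicInteger(0);
    private final int size;

    public RoundRobinIndex(int size) {
        this.size = size;
    }

    // Post-increment, so the first caller gets index 0; Math.floorMod
    // keeps the result in [0, size) even after int overflow goes negative.
    public int nextIdx() {
        return Math.floorMod(currentIdx.getAndIncrement(), size);
    }

    public static void main(String[] args) {
        RoundRobinIndex r = new RoundRobinIndex(3);
        System.out.println(r.nextIdx()); // 0
        System.out.println(r.nextIdx()); // 1
        System.out.println(r.nextIdx()); // 2
        System.out.println(r.nextIdx()); // 0 (wraps around)
    }
}
```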

> Add Support for Load Balancing across a group of KMS servers for HA
> ---
>
> Key: HADOOP-11620
> URL: https://issues.apache.org/jira/browse/HADOOP-11620
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: HADOOP-11620.1.patch, HADOOP-11620.2.patch, 
> HADOOP-11620.3.patch, HADOOP-11620.4.patch
>
>
> This patch needs to add support for :
> * specification of multiple hostnames in the kms key provider uri
> * KMS client to load balance requests across the hosts specified in the kms 
> keyprovider uri.





[jira] [Commented] (HADOOP-11480) Typo in hadoop-aws/index.md uses wrong scheme for test.fs.s3.name

2015-02-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335805#comment-14335805
 ] 

Hudson commented on HADOOP-11480:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #7195 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7195/])
HADOOP-11480. Typo in hadoop-aws/index.md uses wrong scheme for 
test.fs.s3.name. Contributed by Ted Yu. (aajisaka: rev 
1a625b8158ab1cf765fbda962ba725503409d9fe)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md


> Typo in hadoop-aws/index.md uses wrong scheme for test.fs.s3.name
> -
>
> Key: HADOOP-11480
> URL: https://issues.apache.org/jira/browse/HADOOP-11480
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: s3
> Fix For: 2.7.0
>
> Attachments: hadoop-11480-001.patch
>
>
> Around line 270:
> {code}
> <property>
>   <name>test.fs.s3.name</name>
>   <value>s3a://test-aws-s3/</value>
> </property>
> {code}
> The scheme should be s3.
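A hedged illustration of the fix: with the property name test.fs.s3.name, the value's scheme should match, so the corrected entry would read approximately:

```xml
<!-- Corrected scheme (illustrative): s3 rather than s3a. -->
<property>
  <name>test.fs.s3.name</name>
  <value>s3://test-aws-s3/</value>
</property>
```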





[jira] [Updated] (HADOOP-11480) Typo in hadoop-aws/index.md uses wrong scheme for test.fs.s3.name

2015-02-24 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11480:
---
Component/s: (was: umentation)

> Typo in hadoop-aws/index.md uses wrong scheme for test.fs.s3.name
> -
>
> Key: HADOOP-11480
> URL: https://issues.apache.org/jira/browse/HADOOP-11480
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: s3
> Fix For: 2.7.0
>
> Attachments: hadoop-11480-001.patch
>
>
> Around line 270:
> {code}
> <property>
>   <name>test.fs.s3.name</name>
>   <value>s3a://test-aws-s3/</value>
> </property>
> {code}
> The scheme should be s3.





[jira] [Commented] (HADOOP-11632) Clean up SupressWarnings annotations from Find.java

2015-02-24 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335791#comment-14335791
 ] 

Akira AJISAKA commented on HADOOP-11632:


The patch is just to refactor the code, so new tests are not needed.

> Clean up SupressWarnings annotations from Find.java
> ---
>
> Key: HADOOP-11632
> URL: https://issues.apache.org/jira/browse/HADOOP-11632
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: HADOOP-11632-001.patch
>
>
> There are some SuppressWarnings annotations in Find.java. We should fix them.





[jira] [Updated] (HADOOP-11480) Typo in hadoop-aws/index.md uses wrong scheme for test.fs.s3.name

2015-02-24 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11480:
---
  Component/s: documentation
   umentation
Affects Version/s: 2.7.0

> Typo in hadoop-aws/index.md uses wrong scheme for test.fs.s3.name
> -
>
> Key: HADOOP-11480
> URL: https://issues.apache.org/jira/browse/HADOOP-11480
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, umentation
>Affects Versions: 2.7.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: s3
> Fix For: 2.7.0
>
> Attachments: hadoop-11480-001.patch
>
>
> Around line 270:
> {code}
> <property>
>   <name>test.fs.s3.name</name>
>   <value>s3a://test-aws-s3/</value>
> </property>
> {code}
> The scheme should be s3.





[jira] [Commented] (HADOOP-11632) Clean up SupressWarnings annotations from Find.java

2015-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335786#comment-14335786
 ] 

Hadoop QA commented on HADOOP-11632:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12700616/HADOOP-11632-001.patch
  against trunk revision ac3468a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5773//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5773//console


> Clean up SupressWarnings annotations from Find.java
> ---
>
> Key: HADOOP-11632
> URL: https://issues.apache.org/jira/browse/HADOOP-11632
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: HADOOP-11632-001.patch
>
>
> There are some SuppressWarnings annotations in Find.java. We should fix them.





[jira] [Updated] (HADOOP-11480) Typo in hadoop-aws/index.md uses wrong scheme for test.fs.s3.name

2015-02-24 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11480:
---
  Resolution: Fixed
   Fix Version/s: 2.7.0
Target Version/s: 2.7.0
  Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~tedyu] for the contribution.

> Typo in hadoop-aws/index.md uses wrong scheme for test.fs.s3.name
> -
>
> Key: HADOOP-11480
> URL: https://issues.apache.org/jira/browse/HADOOP-11480
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: s3
> Fix For: 2.7.0
>
> Attachments: hadoop-11480-001.patch
>
>
> Around line 270:
> {code}
> <property>
>   <name>test.fs.s3.name</name>
>   <value>s3a://test-aws-s3/</value>
> </property>
> {code}
> The scheme should be s3.





[jira] [Updated] (HADOOP-11480) Typo in hadoop-aws/index.md uses wrong scheme for test.fs.s3.name

2015-02-24 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11480:
---
Hadoop Flags: Reviewed

LGTM, +1.

> Typo in hadoop-aws/index.md uses wrong scheme for test.fs.s3.name
> -
>
> Key: HADOOP-11480
> URL: https://issues.apache.org/jira/browse/HADOOP-11480
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: s3
> Attachments: hadoop-11480-001.patch
>
>
> Around line 270:
> {code}
> <property>
>   <name>test.fs.s3.name</name>
>   <value>s3a://test-aws-s3/</value>
> </property>
> {code}
> The scheme should be s3.





[jira] [Assigned] (HADOOP-11633) Convert remaining branch-2 .apt.vm files to markdown

2015-02-24 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki reassigned HADOOP-11633:
-

Assignee: Masatake Iwasaki

> Convert remaining branch-2 .apt.vm files to markdown
> 
>
> Key: HADOOP-11633
> URL: https://issues.apache.org/jira/browse/HADOOP-11633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Colin Patrick McCabe
>Assignee: Masatake Iwasaki
>
> We should convert the remaining branch-2 .apt.vm files to markdown.
> Excluding the yarn files, which are covered by YARN-3168, we have remaining:
> {code}
> cmccabe@keter:~/hadoop> find -name '*.apt.vm'
> ./hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/apt/index.apt.vm
> ./hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/apt/UsingHttpTools.apt.vm
> ./hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/apt/ServerSetup.apt.vm
> ./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/EncryptedShuffle.apt.vm
> ./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/HadoopStreaming.apt.vm
> ./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/MapredAppMasterRest.apt.vm
> ./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/MapReduceTutorial.apt.vm
> ./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/MapredCommands.apt.vm
> ./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/MapReduce_Compatibility_Hadoop1_Hadoop2.apt.vm
> ./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/DistributedCacheDeploy.apt.vm
> ./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/PluggableShuffleAndPluggableSort.apt.vm
> ./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/site/apt/HistoryServerRest.apt.vm
> ./hadoop-common-project/hadoop-common/src/site/apt/CommandsManual.apt.vm
> ./hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm
> ./hadoop-common-project/hadoop-auth/src/site/apt/Configuration.apt.vm
> ./hadoop-common-project/hadoop-auth/src/site/apt/index.apt.vm
> ./hadoop-common-project/hadoop-auth/src/site/apt/BuildingIt.apt.vm
> ./hadoop-common-project/hadoop-auth/src/site/apt/Examples.apt.vm
> ./hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
> ./hadoop-tools/hadoop-openstack/src/site/apt/index.apt.vm
> ./hadoop-tools/hadoop-sls/src/site/apt/SchedulerLoadSimulator.apt.vm
> ./hadoop-project/src/site/apt/index.apt.vm
> {code}





[jira] [Commented] (HADOOP-11602) Fix toUpperCase/toLowerCase to use Locale.ENGLISH

2015-02-24 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335743#comment-14335743
 ] 

Tsuyoshi OZAWA commented on HADOOP-11602:
-

Thank you, Steve. Sounds good to me. Let me try.

> Fix toUpperCase/toLowerCase to use Locale.ENGLISH
> -
>
> Key: HADOOP-11602
> URL: https://issues.apache.org/jira/browse/HADOOP-11602
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
> Attachments: HADOOP-11602-001.patch, HADOOP-11602-002.patch, 
> HADOOP-11602-branch-2.001.patch, HADOOP-11602-branch-2.002.patch
>
>
> String#toLowerCase()/toUpperCase() without a locale argument can occur 
> unexpected behavior based on the locale. It's written in 
> [Javadoc|http://docs.oracle.com/javase/7/docs/api/java/lang/String.html#toLowerCase()]:
> {quote}
> For instance, "TITLE".toLowerCase() in a Turkish locale returns "t\u0131tle", 
> where '\u0131' is the LATIN SMALL LETTER DOTLESS I character
> {quote}
> This issue is derived from HADOOP-10101.
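The Javadoc caveat quoted above is easy to demonstrate. The sketch below is illustrative, not part of the attached patch; it shows why passing an explicit {{Locale}} (as the patch does with {{Locale.ENGLISH}}) avoids the Turkish dotless-i surprise.

```java
import java.util.Locale;

public class LocaleLowerCase {
    public static void main(String[] args) {
        // In the Turkish locale, 'I' lowercases to dotless i (U+0131).
        String turkish = "TITLE".toLowerCase(new Locale("tr", "TR"));
        // An explicit Locale.ENGLISH gives the expected ASCII result.
        String english = "TITLE".toLowerCase(Locale.ENGLISH);

        System.out.println(turkish.equals(english)); // false
        System.out.println(english);                 // title
    }
}
```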





[jira] [Commented] (HADOOP-9922) hadoop windows native build will fail in 32 bit machine

2015-02-24 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335735#comment-14335735
 ] 

Chris Nauroth commented on HADOOP-9922:
---

[~kiranmr], thank you for sharing a patch.  This looks good.

When I built for 32-bit, there were 5 additional compilation warnings:

{code}
service.c(187): warning C4018: '<' : signed/unsigned mismatch 
[C:\hdc\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
service.c(380): warning C4018: '<' : signed/unsigned mismatch 
[C:\hdc\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
service.c(430): warning C4018: '<' : signed/unsigned mismatch 
[C:\hdc\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
task.c(160): warning C4018: '<' : signed/unsigned mismatch 
[C:\hdc\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
task.c(195): warning C4018: '<' : signed/unsigned mismatch 
[C:\hdc\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
{code}

It looks like we have some code that was trying to compare an {{int}} to a 
{{size_t}}, and the difference in data type size on 32-bit triggers these 
warnings.  I suspect you can make this work on both 32-bit and 64-bit by 
switching the declaration of the relevant variables from {{int}} to {{size_t}}.

I think this patch will be ready to go once that is addressed.

> hadoop windows native build will fail in 32 bit machine
> ---
>
> Key: HADOOP-9922
> URL: https://issues.apache.org/jira/browse/HADOOP-9922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Vinayakumar B
>Assignee: Kiran Kumar M R
> Attachments: HADOOP-9922-002.patch, HADOOP-9922-003.patch, 
> HADOOP-9922.patch
>
>
> Building Hadoop in windows 32 bit machine fails as native project is not 
> having Win32 configuration





[jira] [Commented] (HADOOP-11594) Improve the readability of site index of documentation

2015-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335733#comment-14335733
 ] 

Hadoop QA commented on HADOOP-11594:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12700614/HADOOP-11594.002.patch
  against trunk revision ac3468a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5772//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5772//console


> Improve the readability of site index of documentation
> --
>
> Key: HADOOP-11594
> URL: https://issues.apache.org/jira/browse/HADOOP-11594
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-11594.001.patch, HADOOP-11594.002.patch
>
>
> * change the order of items
> * make redundant title shorter and fit it in single line as far as possible





[jira] [Created] (HADOOP-11633) Convert remaining branch-2 .apt.vm files to markdown

2015-02-24 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-11633:
-

 Summary: Convert remaining branch-2 .apt.vm files to markdown
 Key: HADOOP-11633
 URL: https://issues.apache.org/jira/browse/HADOOP-11633
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe


We should convert the remaining branch-2 .apt.vm files to markdown.

Excluding the yarn files, which are covered by YARN-3168, we have remaining:
{code}
cmccabe@keter:~/hadoop> find -name '*.apt.vm'
./hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/apt/index.apt.vm
./hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/apt/UsingHttpTools.apt.vm
./hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/apt/ServerSetup.apt.vm
./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/EncryptedShuffle.apt.vm
./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/HadoopStreaming.apt.vm
./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/MapredAppMasterRest.apt.vm
./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/MapReduceTutorial.apt.vm
./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/MapredCommands.apt.vm
./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/MapReduce_Compatibility_Hadoop1_Hadoop2.apt.vm
./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/DistributedCacheDeploy.apt.vm
./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/PluggableShuffleAndPluggableSort.apt.vm
./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/site/apt/HistoryServerRest.apt.vm
./hadoop-common-project/hadoop-common/src/site/apt/CommandsManual.apt.vm
./hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm
./hadoop-common-project/hadoop-auth/src/site/apt/Configuration.apt.vm
./hadoop-common-project/hadoop-auth/src/site/apt/index.apt.vm
./hadoop-common-project/hadoop-auth/src/site/apt/BuildingIt.apt.vm
./hadoop-common-project/hadoop-auth/src/site/apt/Examples.apt.vm
./hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
./hadoop-tools/hadoop-openstack/src/site/apt/index.apt.vm
./hadoop-tools/hadoop-sls/src/site/apt/SchedulerLoadSimulator.apt.vm
./hadoop-project/src/site/apt/index.apt.vm
{code}





[jira] [Updated] (HADOOP-11632) Clean up SupressWarnings annotations from Find.java

2015-02-24 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11632:
---
Target Version/s: 2.7.0
  Status: Patch Available  (was: Open)

> Clean up SupressWarnings annotations from Find.java
> ---
>
> Key: HADOOP-11632
> URL: https://issues.apache.org/jira/browse/HADOOP-11632
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: HADOOP-11632-001.patch
>
>
> There are some SuppressWarnings annotations in Find.java. We should fix them.





[jira] [Updated] (HADOOP-11632) Clean up SupressWarnings annotations from Find.java

2015-02-24 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11632:
---
Attachment: HADOOP-11632-001.patch

Attaching a patch to remove the annotations.

> Clean up SupressWarnings annotations from Find.java
> ---
>
> Key: HADOOP-11632
> URL: https://issues.apache.org/jira/browse/HADOOP-11632
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: HADOOP-11632-001.patch
>
>
> There are some SuppressWarnings annotations in Find.java. We should fix them.





[jira] [Commented] (HADOOP-11495) Convert site documentation from apt to markdown

2015-02-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335705#comment-14335705
 ] 

Hudson commented on HADOOP-11495:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7194 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7194/])
move HADOOP-11495 to 2.7 (cmccabe: rev ac3468add4ec6fa3581536a9c55d422801a948bd)
* hadoop-common-project/hadoop-common/CHANGES.txt


> Convert site documentation from apt to markdown
> ---
>
> Key: HADOOP-11495
> URL: https://issues.apache.org/jira/browse/HADOOP-11495
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Allen Wittenauer
>Assignee: Masatake Iwasaki
> Fix For: 2.7.0
>
> Attachments: HADOOP-11495-02.patch, HADOOP-11495-03.patch, 
> HADOOP-11495-04.patch, HADOOP-11495-05.patch, HADOOP-11495-b2.001.patch, 
> HADOOP-11495-b2.002.patch, HADOOP-11496-00.patch, HADOOP-11496-01.patch
>
>
> Almost Plain Text (aka APT) lost.  Markdown won.
> As a result, there are a ton of tools and online resources for Markdown that 
> would make editing and using our documentation much easier.  It would be 
> extremely beneficial for the community as a whole to move from apt to 
> markdown.
> This JIRA proposes to do this migration for the common project.





[jira] [Moved] (HADOOP-11632) Clean up SupressWarnings annotations from Find.java

2015-02-24 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA moved YARN-3253 to HADOOP-11632:
--

Affects Version/s: (was: 2.7.0)
   2.7.0
  Key: HADOOP-11632  (was: YARN-3253)
  Project: Hadoop Common  (was: Hadoop YARN)

> Clean up SupressWarnings annotations from Find.java
> ---
>
> Key: HADOOP-11632
> URL: https://issues.apache.org/jira/browse/HADOOP-11632
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
>
> There are some SuppressWarnings annotations in Find.java. We should fix them.





[jira] [Updated] (HADOOP-11594) Improve the readability of site index of documentation

2015-02-24 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-11594:
--
Attachment: HADOOP-11594.002.patch

Thanks, [~aw]. I agree with your comments. I updated the patch. In addition to 
the above, I replaced "Heterogeneous Storage" with "Storage Policy".

> Improve the readability of site index of documentation
> --
>
> Key: HADOOP-11594
> URL: https://issues.apache.org/jira/browse/HADOOP-11594
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-11594.001.patch, HADOOP-11594.002.patch
>
>
> * change the order of items
> * make redundant title shorter and fit it in single line as far as possible





[jira] [Updated] (HADOOP-11495) Convert site documentation from apt to markdown

2015-02-24 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11495:
--
 Target Version/s: 2.7.0  (was: 3.0.0)
Affects Version/s: (was: 3.0.0)
   2.7.0
Fix Version/s: (was: 3.0.0)
   2.7.0

I backported this to 2.7 as discussed.  I'll open up a JIRA to convert the 
remaining apt.vm files in 2.7 to Markdown.

> Convert site documentation from apt to markdown
> ---
>
> Key: HADOOP-11495
> URL: https://issues.apache.org/jira/browse/HADOOP-11495
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Allen Wittenauer
>Assignee: Masatake Iwasaki
> Fix For: 2.7.0
>
> Attachments: HADOOP-11495-02.patch, HADOOP-11495-03.patch, 
> HADOOP-11495-04.patch, HADOOP-11495-05.patch, HADOOP-11495-b2.001.patch, 
> HADOOP-11495-b2.002.patch, HADOOP-11496-00.patch, HADOOP-11496-01.patch
>
>
> Almost Plain Text (aka APT) lost.  Markdown won.
> As a result, there are a ton of tools and online resources for Markdown that 
> would make editing and using our documentation much easier.  It would be 
> extremely beneficial for the community as a whole to move from apt to 
> markdown.
> This JIRA proposes to do this migration for the common project.





[jira] [Commented] (HADOOP-11614) Remove httpclient dependency from hadoop-openstack

2015-02-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335682#comment-14335682
 ] 

Steve Loughran commented on HADOOP-11614:
-

I think that openstack maintenance is something we should discuss with them 
before these changes.

I never knew about their work. It hasn't picked up the Hadoop 2.5+ changes 
(contracts and fixes), but it may have other changes of its own; it looks like 
later auth changes are in there.

So a merge is the right thing to do, as long as their versions are compatible 
with the Hadoop releases (a requirement on us as well as them).

> Remove httpclient dependency from hadoop-openstack
> --
>
> Key: HADOOP-11614
> URL: https://issues.apache.org/jira/browse/HADOOP-11614
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: HADOOP-11614.patch
>
>
> Remove httpclient dependency from hadoop-openstack.





[jira] [Commented] (HADOOP-11626) DFSInputStream should only update ReadStatistics when the read is success.

2015-02-24 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335654#comment-14335654
 ] 

Tsuyoshi OZAWA commented on HADOOP-11626:
-

[~eddyxu] Thank you for reporting this issue. In this case, the current code is 
correct, since reader.readAll actually reads "nread" bytes into "buf". Please 
correct me if I'm missing a point.

> DFSInputStream should only update ReadStatistics when the read is success.
> --
>
> Key: HADOOP-11626
> URL: https://issues.apache.org/jira/browse/HADOOP-11626
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Trivial
> Attachments: HADOOP-11626.000.patch
>
>
> In {{DFSInputStream#actualGetFromOneDataNode()}}, it updates the 
> {{ReadStatistics}} even when the read has failed:
> {code}
> int nread = reader.readAll(buf, offset, len);
> updateReadStatistics(readStatistics, nread, reader);
> if (nread != len) {
>   throw new IOException("truncated return from reader.read(): " +
> "excpected " + len + ", got " + nread);
> }
> {code}
> It should only record successful reads, i.e., the update should happen after 
> the length check that throws {{IOE}}.
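The reordering described above can be sketched in a minimal self-contained way. This is not the actual HDFS code: `readAll` is a stub, and a plain counter stands in for {{ReadStatistics}}; all names here are illustrative.

```java
import java.io.IOException;

public class ReadStatsSketch {
    // stand-in for ReadStatistics; only successful reads accumulate here
    static long totalBytesRead = 0;

    // stub for reader.readAll: pretends only 'available' bytes exist
    static int readAll(byte[] buf, int offset, int len, int available) {
        return Math.min(len, available);
    }

    static int readWithStats(byte[] buf, int offset, int len, int available)
            throws IOException {
        int nread = readAll(buf, offset, len, available);
        if (nread != len) {
            // throw BEFORE updating statistics, so failed reads are not counted
            throw new IOException("truncated return from reader.read(): "
                + "expected " + len + ", got " + nread);
        }
        totalBytesRead += nread;   // update moved after the length check
        return nread;
    }

    public static void main(String[] args) {
        byte[] buf = new byte[16];
        try {
            readWithStats(buf, 0, 16, 8);    // short read: throws
        } catch (IOException e) {
            System.out.println("after failed read: " + totalBytesRead);
        }
        try {
            readWithStats(buf, 0, 16, 16);   // full read: counted
        } catch (IOException e) {
            // not reached
        }
        System.out.println("after successful read: " + totalBytesRead);
    }
}
```

With the original ordering, the failed read would have bumped the counter by 8 before the exception was thrown; with the check first, only the successful 16-byte read is recorded.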



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11629) WASB filesystem should not start BandwidthGaugeUpdater if fs.azure.skip.metrics set to true

2015-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335651#comment-14335651
 ] 

Hadoop QA commented on HADOOP-11629:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12700601/HADOOP-11629.1.patch
  against trunk revision 9a37247.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-azure.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5771//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5771//console

This message is automatically generated.

> WASB filesystem should not start BandwidthGaugeUpdater if 
> fs.azure.skip.metrics set to true
> ---
>
> Key: HADOOP-11629
> URL: https://issues.apache.org/jira/browse/HADOOP-11629
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: shanyu zhao
>Assignee: shanyu zhao
> Attachments: HADOOP-11629.1.patch, HADOOP-11629.patch
>
>
> In Hadoop-11248 we added configuration "fs.azure.skip.metrics". If set to 
> true, we do not register Azure FileSystem metrics with the metrics system. 
> However, BandwidthGaugeUpdater object is still created in 
> AzureNativeFileSystemStore, resulting in unnecessary threads being spawned.
> Under heavy load the system could be busy dealing with these threads and GC 
> has to work on removing the thread objects. E.g., when multiple WebHCat 
> clients submit jobs to the WebHCat server, we observed that the WebHCat 
> server spawns ~400 daemon threads, which slows down the server and sometimes 
> causes timeouts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11628) SPNEGO auth does not work with CNAMEs in JDK8

2015-02-24 Thread Rajiv Chittajallu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335630#comment-14335630
 ] 

Rajiv Chittajallu commented on HADOOP-11628:


bq. If you follow the thread mentioned, gives more details on why this is a bad 
deployment strategy. So I'm thinking this should probably be a runtime option 
with a default of off.

The only argument against canonicalization is trusting DNS. One could argue 
that a site that wouldn't trust its DNS for reverse lookups should have 
similar reservations about forward lookups as well.

Canonicalization (or a way to append a default domain) is also required to 
support short names in service URIs. GSSAPI (RFC 2743) and Kerberos 5 (RFC 
4120) are not specific to SPNEGO, which is specific to HTTP, where there is a 
provision to provide a Host header. GSSAPI auth with ssh against a multi-A 
rotation has the same challenges. NN<->DN negotiate the SPN and validate it 
against the allowed list in the configuration 
(dfs.namenode.kerberos.principal.pattern).

I agree this wouldn't work across all deployment strategies (e.g. using 
Akamai for failover/load balancing), and it should be configurable, with 
documentation on how clients and servers are expected to construct the 
service principal.
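For illustration only (this is not the attached patch): canonicalization before building the HTTP service principal can be done explicitly via DNS, mirroring what pre-JDK8 GSSName did implicitly. Whether this is acceptable depends on trusting DNS, per the discussion above; the class and method names are hypothetical.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class SpnSketch {
    // Resolve a possibly CNAME'd hostname to its canonical name, then build
    // the HTTP SPN from it. Illustrative sketch, not the Hadoop patch.
    static String httpPrincipal(String host) throws UnknownHostException {
        // getCanonicalHostName() consults DNS for the canonical form
        String canonical = InetAddress.getByName(host).getCanonicalHostName();
        return "HTTP/" + canonical.toLowerCase();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(httpPrincipal("localhost"));
    }
}
```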



> SPNEGO auth does not work with CNAMEs in JDK8
> -
>
> Key: HADOOP-11628
> URL: https://issues.apache.org/jira/browse/HADOOP-11628
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>  Labels: jdk8
> Attachments: HADOOP-11628.patch
>
>
> Pre-JDK8, GSSName auto-canonicalized the hostname when constructing the 
> principal for SPNEGO.  JDK8 no longer does this which breaks the use of 
> user-friendly CNAMEs for services.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11614) Remove httpclient dependency from hadoop-openstack

2015-02-24 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335631#comment-14335631
 ] 

Akira AJISAKA commented on HADOOP-11614:


Hi [~brahmareddy], thank you for the patch. I have two concerns with this big 
change:
1. The patch changes the signature of some public methods. That is an 
incompatible change.
2. The OpenStack community maintains another SwiftNativeFileSystem in 
https://github.com/openstack/sahara-extra. IMHO, I don't want code duplication 
and would like to merge them first if possible.

I asked [~kazuki], who maintains the project, to tell me what we should do. 
He thinks it would be better to drop the hadoop-openstack codebase and use the 
code in the OpenStack community, since the two developments have not been in 
sync. I mostly agree with him if the conditions below are satisfied:
* sahara-extra releases its jar periodically and we can use it from a Maven 
repository.
* We create a minimal wrapper and call sahara-extra APIs from it, like s3n or 
azure.
* We keep the contract tests for hadoop-openstack.

Hi [~ste...@apache.org], what do you think?

> Remove httpclient dependency from hadoop-openstack
> --
>
> Key: HADOOP-11614
> URL: https://issues.apache.org/jira/browse/HADOOP-11614
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: HADOOP-11614.patch
>
>
> Remove httpclient dependency from hadoop-openstack.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11629) WASB filesystem should not start BandwidthGaugeUpdater if fs.azure.skip.metrics set to true

2015-02-24 Thread shanyu zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shanyu zhao updated HADOOP-11629:
-
Attachment: HADOOP-11629.1.patch

> WASB filesystem should not start BandwidthGaugeUpdater if 
> fs.azure.skip.metrics set to true
> ---
>
> Key: HADOOP-11629
> URL: https://issues.apache.org/jira/browse/HADOOP-11629
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: shanyu zhao
>Assignee: shanyu zhao
> Attachments: HADOOP-11629.1.patch, HADOOP-11629.patch
>
>
> In Hadoop-11248 we added configuration "fs.azure.skip.metrics". If set to 
> true, we do not register Azure FileSystem metrics with the metrics system. 
> However, BandwidthGaugeUpdater object is still created in 
> AzureNativeFileSystemStore, resulting in unnecessary threads being spawned.
> Under heavy load the system could be busy dealing with these threads and GC 
> has to work on removing the thread objects. E.g., when multiple WebHCat 
> clients submit jobs to the WebHCat server, we observed that the WebHCat 
> server spawns ~400 daemon threads, which slows down the server and sometimes 
> causes timeouts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11629) WASB filesystem should not start BandwidthGaugeUpdater if fs.azure.skip.metrics set to true

2015-02-24 Thread shanyu zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shanyu zhao updated HADOOP-11629:
-
Attachment: (was: HADOOP-11629.1.patch)

> WASB filesystem should not start BandwidthGaugeUpdater if 
> fs.azure.skip.metrics set to true
> ---
>
> Key: HADOOP-11629
> URL: https://issues.apache.org/jira/browse/HADOOP-11629
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: shanyu zhao
>Assignee: shanyu zhao
> Attachments: HADOOP-11629.patch
>
>
> In Hadoop-11248 we added configuration "fs.azure.skip.metrics". If set to 
> true, we do not register Azure FileSystem metrics with the metrics system. 
> However, BandwidthGaugeUpdater object is still created in 
> AzureNativeFileSystemStore, resulting in unnecessary threads being spawned.
> Under heavy load the system could be busy dealing with these threads and GC 
> has to work on removing the thread objects. E.g., when multiple WebHCat 
> clients submit jobs to the WebHCat server, we observed that the WebHCat 
> server spawns ~400 daemon threads, which slows down the server and sometimes 
> causes timeouts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11629) WASB filesystem should not start BandwidthGaugeUpdater if fs.azure.skip.metrics set to true

2015-02-24 Thread shanyu zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shanyu zhao updated HADOOP-11629:
-
Attachment: HADOOP-11629.1.patch

Thanks [~cnauroth]! New patch attached.

> WASB filesystem should not start BandwidthGaugeUpdater if 
> fs.azure.skip.metrics set to true
> ---
>
> Key: HADOOP-11629
> URL: https://issues.apache.org/jira/browse/HADOOP-11629
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: shanyu zhao
>Assignee: shanyu zhao
> Attachments: HADOOP-11629.1.patch, HADOOP-11629.patch
>
>
> In Hadoop-11248 we added configuration "fs.azure.skip.metrics". If set to 
> true, we do not register Azure FileSystem metrics with the metrics system. 
> However, BandwidthGaugeUpdater object is still created in 
> AzureNativeFileSystemStore, resulting in unnecessary threads being spawned.
> Under heavy load the system could be busy dealing with these threads and GC 
> has to work on removing the thread objects. E.g., when multiple WebHCat 
> clients submit jobs to the WebHCat server, we observed that the WebHCat 
> server spawns ~400 daemon threads, which slows down the server and sometimes 
> causes timeouts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11626) DFSInputStream should only update ReadStatistics when the read is success.

2015-02-24 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-11626:
---
Attachment: HADOOP-11626.000.patch

Changed the order of updating {{ReadStatistics}}.

No test is added, since this one-line change is trivial.

> DFSInputStream should only update ReadStatistics when the read is success.
> --
>
> Key: HADOOP-11626
> URL: https://issues.apache.org/jira/browse/HADOOP-11626
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Trivial
> Attachments: HADOOP-11626.000.patch
>
>
> In {{DFSInputStream#actualGetFromOneDataNode()}}, it updates the 
> {{ReadStatistics}} even when the read has failed:
> {code}
> int nread = reader.readAll(buf, offset, len);
> updateReadStatistics(readStatistics, nread, reader);
> if (nread != len) {
>   throw new IOException("truncated return from reader.read(): " +
> "excpected " + len + ", got " + nread);
> }
> {code}
> It should only record successful reads, i.e., the update should happen after 
> the length check that throws {{IOE}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11626) DFSInputStream should only update ReadStatistics when the read is success.

2015-02-24 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-11626:
---
Status: Patch Available  (was: Open)

> DFSInputStream should only update ReadStatistics when the read is success.
> --
>
> Key: HADOOP-11626
> URL: https://issues.apache.org/jira/browse/HADOOP-11626
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Trivial
> Attachments: HADOOP-11626.000.patch
>
>
> In {{DFSInputStream#actualGetFromOneDataNode()}}, it updates the 
> {{ReadStatistics}} even when the read has failed:
> {code}
> int nread = reader.readAll(buf, offset, len);
> updateReadStatistics(readStatistics, nread, reader);
> if (nread != len) {
>   throw new IOException("truncated return from reader.read(): " +
> "excpected " + len + ", got " + nread);
> }
> {code}
> It should only record successful reads, i.e., the update should happen after 
> the length check that throws {{IOE}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11629) WASB filesystem should not start BandwidthGaugeUpdater if fs.azure.skip.metrics set to true

2015-02-24 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335527#comment-14335527
 ] 

Chris Nauroth commented on HADOOP-11629:


[~shanyu], would you please update the patch to address the Findbugs warning?  
There is a {{null}} check for {{conf}} a few lines later in the code.  
You'll need to move the block for initializing {{bandwidthGaugeUpdater}} after 
that.
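The ordering being asked for can be sketched as follows. This is a hypothetical skeleton, not the actual hadoop-azure code: `java.util.Properties` stands in for Hadoop's `Configuration`, and a plain `Object` stands in for `BandwidthGaugeUpdater`.

```java
import java.util.Properties;

public class AzureStoreSketch {
    static final String KEY_SKIP_METRICS = "fs.azure.skip.metrics";

    Object bandwidthGaugeUpdater;  // stand-in for BandwidthGaugeUpdater

    void initialize(Properties conf) {
        // Null-check conf first; Findbugs is satisfied because every use of
        // conf below is dominated by this check.
        if (conf == null) {
            throw new IllegalArgumentException("conf must not be null");
        }
        boolean skipMetrics = Boolean.parseBoolean(
            conf.getProperty(KEY_SKIP_METRICS, "false"));
        if (!skipMetrics) {
            // Only now create the updater (and its background threads).
            bandwidthGaugeUpdater = new Object();
        }
    }
}
```

The point of the fix: when `fs.azure.skip.metrics` is true, the updater (and the daemon threads it spawns) is never created.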

> WASB filesystem should not start BandwidthGaugeUpdater if 
> fs.azure.skip.metrics set to true
> ---
>
> Key: HADOOP-11629
> URL: https://issues.apache.org/jira/browse/HADOOP-11629
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: shanyu zhao
>Assignee: shanyu zhao
> Attachments: HADOOP-11629.patch
>
>
> In Hadoop-11248 we added configuration "fs.azure.skip.metrics". If set to 
> true, we do not register Azure FileSystem metrics with the metrics system. 
> However, BandwidthGaugeUpdater object is still created in 
> AzureNativeFileSystemStore, resulting in unnecessary threads being spawned.
> Under heavy load the system could be busy dealing with these threads and GC 
> has to work on removing the thread objects. E.g., when multiple WebHCat 
> clients submit jobs to the WebHCat server, we observed that the WebHCat 
> server spawns ~400 daemon threads, which slows down the server and sometimes 
> causes timeouts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11631) securemode documentation should refer to the http auth doc

2015-02-24 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-11631:
-

 Summary: securemode documentation should refer to the http auth doc
 Key: HADOOP-11631
 URL: https://issues.apache.org/jira/browse/HADOOP-11631
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer


SecureMode.md should point folks to the HTTP Auth doc for securing the 
user-facing web interfaces.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11630) Allow hadoop to bind to ipv6 conditionally

2015-02-24 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335479#comment-14335479
 ] 

Elliott Clark commented on HADOOP-11630:


For one, this cleans up the config variable so that it actually does what it 
says. For another, it allows that work to continue while real production 
systems move forward. It in no way slows progress on any other solution.

> Allow hadoop to bind to ipv6 conditionally
> --
>
> Key: HADOOP-11630
> URL: https://issues.apache.org/jira/browse/HADOOP-11630
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.6.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>  Labels: ipv6
> Attachments: HDFS-7834-branch-2-0.patch, HDFS-7834-trunk-0.patch
>
>
> Currently the bash scripts unconditionally add -Djava.net.preferIPv4Stack=true
> While this was needed a while ago, IPv6 on Java works much better now, and 
> there should be a way to allow it to bind dual-stack if needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11630) Allow hadoop to bind to ipv6 conditionally

2015-02-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335474#comment-14335474
 ] 

Allen Wittenauer commented on HADOOP-11630:
---

I'd rather see that work done than make it too easy too early for users to 
screw this up on real systems. Given that in trunk it is very easy to enable 
IPv6 in the shell code, I'm not sure what this patch buys us until that other 
code shows up. I'd much rather get rid of this code hack altogether, which 
would make trunk the place to do that.

> Allow hadoop to bind to ipv6 conditionally
> --
>
> Key: HADOOP-11630
> URL: https://issues.apache.org/jira/browse/HADOOP-11630
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.6.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>  Labels: ipv6
> Attachments: HDFS-7834-branch-2-0.patch, HDFS-7834-trunk-0.patch
>
>
> Currently the bash scripts unconditionally add -Djava.net.preferIPv4Stack=true
> While this was needed a while ago, IPv6 on Java works much better now, and 
> there should be a way to allow it to bind dual-stack if needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11630) Allow hadoop to bind to ipv6 conditionally

2015-02-24 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335462#comment-14335462
 ] 

Elliott Clark commented on HADOOP-11630:


Oh yeah all of that is messed up and it needs work. However that's work that 
can be done in a different issue.

> Allow hadoop to bind to ipv6 conditionally
> --
>
> Key: HADOOP-11630
> URL: https://issues.apache.org/jira/browse/HADOOP-11630
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.6.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>  Labels: ipv6
> Attachments: HDFS-7834-branch-2-0.patch, HDFS-7834-trunk-0.patch
>
>
> Currently the bash scripts unconditionally add -Djava.net.preferIPv4Stack=true
> While this was needed a while ago, IPv6 on Java works much better now, and 
> there should be a way to allow it to bind dual-stack if needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11630) Allow hadoop to bind to ipv6 conditionally

2015-02-24 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HADOOP-11630:
---
Attachment: HDFS-7834-trunk-0.patch

> Allow hadoop to bind to ipv6 conditionally
> --
>
> Key: HADOOP-11630
> URL: https://issues.apache.org/jira/browse/HADOOP-11630
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.6.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>  Labels: ipv6
> Attachments: HDFS-7834-branch-2-0.patch, HDFS-7834-trunk-0.patch
>
>
> Currently the bash scripts unconditionally add -Djava.net.preferIPv4Stack=true
> While this was needed a while ago, IPv6 on Java works much better now, and 
> there should be a way to allow it to bind dual-stack if needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11630) Allow hadoop to bind to ipv6 conditionally

2015-02-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335458#comment-14335458
 ] 

Allen Wittenauer commented on HADOOP-11630:
---

Check your data locality

> Allow hadoop to bind to ipv6 conditionally
> --
>
> Key: HADOOP-11630
> URL: https://issues.apache.org/jira/browse/HADOOP-11630
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.6.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>  Labels: ipv6
> Attachments: HDFS-7834-branch-2-0.patch, HDFS-7834-trunk-0.patch
>
>
> Currently the bash scripts unconditionally add -Djava.net.preferIPv4Stack=true
> While this was needed a while ago, IPv6 on Java works much better now, and 
> there should be a way to allow it to bind dual-stack if needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-6473) Add hadoop health check/diagnostics to run from command line, JSP pages, other tools

2015-02-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-6473:
-
Labels: ipv6  (was: )

> Add hadoop health check/diagnostics to run from command line, JSP pages, 
> other tools
> 
>
> Key: HADOOP-6473
> URL: https://issues.apache.org/jira/browse/HADOOP-6473
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Steve Loughran
>Priority: Minor
>  Labels: ipv6
>
> If the lifecycle ping() is for short-duration "are we still alive" checks, 
> Hadoop still needs something bigger to check the overall system health. This 
> would be for end users, but also for automated cluster deployment: a complete 
> validation of the cluster.
> It could be a command-line tool, and something that runs on different nodes, 
> checked via IPC or JSP. The idea would be to do thorough checks with good 
> diagnostics. Oh, and they should be executable through JUnit too.
> For example
>  -if running on Windows, check that Cygwin is on the path; fail with a 
> pointer to a wiki issue if not
>  -datanodes should check that they can create locks on the filesystem, 
> create files, and that timestamps are (roughly) aligned with local time.
>  -namenodes should try and create files/locks in the filesystem
>  -task tracker should try and exec() something
>  -run through the classpath and look for problems: duplicate JARs, 
> unsupported Java or Xerces versions, etc.
> * The number of tests should be extensible -rather than one single class with 
> all the tests, there'd be something separate for name, task, data, job 
> tracker nodes
> * They can't be in the nodes themselves, as they should be executable even if 
> the nodes don't come up. 
> * output could be in human readable text or html, and a form that could be 
> processed through hadoop itself in future
> * these tests could have side effects, such as actually trying to submit work 
> to a cluster
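The extensible-checks idea above can be sketched with a tiny interface: each check is its own class, and results are collected into human-readable text. This is a hypothetical illustration; the interface and class names are not an actual Hadoop API.

```java
import java.util.ArrayList;
import java.util.List;

public class HealthCheckSketch {
    // One implementation per diagnostic, so the set of checks is extensible
    // without a single monolithic class.
    interface DiagnosticCheck {
        String name();
        boolean run();   // true = healthy
    }

    // Runs every check and renders a plain-text report.
    static String report(List<DiagnosticCheck> checks) {
        StringBuilder sb = new StringBuilder();
        for (DiagnosticCheck c : checks) {
            sb.append(c.name()).append(": ")
              .append(c.run() ? "OK" : "FAIL").append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<DiagnosticCheck> checks = new ArrayList<>();
        checks.add(new DiagnosticCheck() {
            public String name() { return "clock-sanity"; }
            public boolean run() { return System.currentTimeMillis() > 0; }
        });
        System.out.print(report(checks));  // prints "clock-sanity: OK"
    }
}
```

Because the checks live outside the daemons, they remain runnable even when the nodes don't come up, and the same list can be driven from a CLI, a JSP page, or JUnit.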



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11574) Uber-JIRA: improve Hadoop network resilience & diagnostics

2015-02-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11574:
--
Labels: ipv6 supportability  (was: supportability)

> Uber-JIRA: improve Hadoop network resilience & diagnostics
> --
>
> Key: HADOOP-11574
> URL: https://issues.apache.org/jira/browse/HADOOP-11574
> Project: Hadoop Common
>  Issue Type: Task
>  Components: net
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>  Labels: ipv6, supportability
>
> Improve Hadoop's resilience to bad network conditions/problems, including
> * improving recognition of problem states
> * improving diagnostics
> * better handling of IPv6 addresses, even if the protocol is unsupported.
> * better behaviour client-side when there are connectivity problems. (i.e 
> while some errors you can spin on, DNS failures are not on the list)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11630) Allow hadoop to bind to ipv6 conditionally

2015-02-24 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335457#comment-14335457
 ] 

Elliott Clark commented on HADOOP-11630:


Just removing the PreferIPV6 is working on a test HDFS cluster that I have 
right now.

> Allow hadoop to bind to ipv6 conditionally
> --
>
> Key: HADOOP-11630
> URL: https://issues.apache.org/jira/browse/HADOOP-11630
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.6.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>  Labels: ipv6
> Attachments: HDFS-7834-branch-2-0.patch
>
>
> Currently the bash scripts unconditionally add -Djava.net.preferIPv4Stack=true
> While this was needed a while ago, IPv6 on Java works much better now, and 
> there should be a way to allow it to bind dual-stack if needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-7550) Need for Integrity Validation of RPC

2015-02-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-7550:
-
Labels: ipv6  (was: )

> Need for Integrity Validation of RPC
> 
>
> Key: HADOOP-7550
> URL: https://issues.apache.org/jira/browse/HADOOP-7550
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Dave Thompson
>Assignee: Dave Thompson
>  Labels: ipv6
>
> Some recent investigation of network packet corruption has shown a need for 
> hadoop RPC integrity validation beyond assurances already provided by 802.3 
> link layer and TCP 16-bit CRC.
> During an unusual occurrence on a 4k-node cluster, we've seen as many as 4 
> TCP anomalies per second on a single node, sustained over an hour (14k per 
> hour). A TCP anomaly would be an escaped link-layer packet that resulted in 
> a TCP CRC failure, a TCP packet out of sequence, or a TCP packet size error.
> According to this paper[*]:  http://tinyurl.com/3aue72r
> TCP's 16-bit CRC has an effective detection rate of 2^10: 1 in 1024 errors 
> may escape detection, and in fact what originally alerted us to this issue 
> was seeing failures due to bit errors in hadoop traffic. Extrapolating from 
> that paper, one might expect 14 escaped packet errors per hour for that 
> single node of a 4k cluster. While the above error rate was unusually high 
> due to a broadband aggregate switch issue, Hadoop not having an integrity 
> check on RPC makes it problematic to discover, and to limit, any potential 
> data damage due to acting on a corrupt RPC message.
> --
> [*] In case this jira outlives that tinyurl, the IEEE paper cited is:  
> "Performance of Checksums and CRCs over Real Data" by Jonathan Stone, Michael 
> Greenwald, Craig Partridge, Jim Hughes.
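A minimal sketch of the kind of application-level integrity check the description calls for: the sender appends a CRC32 over the payload, and the receiver recomputes and compares before acting on the message. This is illustrative only, not the actual Hadoop RPC wire format; CRC32's 32-bit check catches errors, such as single-bit flips, that can escape TCP's 16-bit checksum.

```java
import java.util.zip.CRC32;

public class RpcCrcSketch {
    // Compute a CRC32 over the full payload (stdlib implementation).
    static long checksum(byte[] payload) {
        CRC32 crc = new CRC32();
        crc.update(payload, 0, payload.length);
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] msg = "hello rpc".getBytes();
        long sent = checksum(msg);     // sender attaches this to the message

        msg[0] ^= 0x01;                // simulate a single bit flip in transit

        long received = checksum(msg); // receiver recomputes and compares
        System.out.println(sent != received);  // prints true: corruption caught
    }
}
```

CRC32 detects all single-bit errors by construction, so the corrupted message above is always rejected rather than silently acted upon.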



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9586) unit test failure: org.apache.hadoop.hdfs.TestFileCreation.testFileCreationSetLocalInterface

2015-02-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9586:
-
Labels: ipv6  (was: )

> unit test failure: 
> org.apache.hadoop.hdfs.TestFileCreation.testFileCreationSetLocalInterface
> 
>
> Key: HADOOP-9586
> URL: https://issues.apache.org/jira/browse/HADOOP-9586
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.3.0
>Reporter: Giridharan Kesavan
>  Labels: ipv6
>
> https://builds.apache.org/job/Hadoop-branch1/lastCompletedBuild/testReport/org.apache.hadoop.hdfs/TestFileCreation/testFileCreationSetLocalInterface/
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File 
> /user/jenkins/filestatus.dat could only be replicated to 0 nodes, instead of 1
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>   at $Proxy5.addBlock(Unknown Source)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
>   at $Proxy5.addBlock(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
>   at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11630) Allow hadoop to bind to ipv6 conditionally

2015-02-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335453#comment-14335453
 ] 

Allen Wittenauer commented on HADOOP-11630:
---

The problem is that Hadoop today makes assumptions about the IP address.  
There's a lot more required to get IPv6 to work than just setting this.

> Allow hadoop to bind to ipv6 conditionally
> --
>
> Key: HADOOP-11630
> URL: https://issues.apache.org/jira/browse/HADOOP-11630
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.6.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>  Labels: ipv6
> Attachments: HDFS-7834-branch-2-0.patch
>
>
> Currently the bash scripts unconditionally add -Djava.net.preferIPv4Stack=true
> While this was needed a while ago, IPv6 on Java works much better now, and 
> there should be a way to allow it to bind dual-stack if needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11582) org.apache.hadoop.net.TestDNS failing with NumberFormatException -IPv6 related?

2015-02-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11582:
--
Labels: ipv6  (was: )

> org.apache.hadoop.net.TestDNS failing with NumberFormatException -IPv6 
> related?
> ---
>
> Key: HADOOP-11582
> URL: https://issues.apache.org/jira/browse/HADOOP-11582
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 3.0.0
> Environment: OSX yosemite
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>  Labels: ipv6
>
> {{org.apache.hadoop.net.TestDNS}} failing {{java.lang.NumberFormatException: 
> For input string: ":3246:9aff:fe80:438f"}}
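The failure string above looks like an address being split on ':' with the remainder fed to a number parser; a bare IPv6 literal contains many colons, so such a parse blows up. A minimal sketch of the failure mode and a bracket-aware alternative (class and method names here are hypothetical, not TestDNS internals):

```java
public class HostPortSplit {
    /** Naive split: assumes the first ':' separates host from port. */
    static int naivePort(String hostPort) {
        // For "fe80::3246:9aff:fe80:438f" this hands ":3246:9aff:fe80:438f"
        // to parseInt, reproducing the NumberFormatException seen above.
        return Integer.parseInt(hostPort.substring(hostPort.indexOf(':') + 1));
    }

    /** Bracket-aware split: expects IPv6 literals as "[addr]:port". */
    static int bracketAwarePort(String hostPort) {
        int sep = hostPort.startsWith("[")
            ? hostPort.indexOf("]:") + 1  // the ':' right after the closing bracket
            : hostPort.indexOf(':');
        return Integer.parseInt(hostPort.substring(sep + 1));
    }
}
```

The bracketed form is the usual convention for writing IPv6 host:port pairs, since it makes the port separator unambiguous.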



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11630) Allow hadoop to bind to ipv6 conditionally

2015-02-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11630:
--
Labels: ipv6  (was: )

> Allow hadoop to bind to ipv6 conditionally
> --
>
> Key: HADOOP-11630
> URL: https://issues.apache.org/jira/browse/HADOOP-11630
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.6.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>  Labels: ipv6
> Attachments: HDFS-7834-branch-2-0.patch
>
>
> Currently the bash scripts unconditionally add -Djava.net.preferIPv4Stack=true.
> While this was needed a while ago, IPv6 on Java works much better now, and
> there should be a way to allow it to bind dual-stack if needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HADOOP-11630) Allow HDFS to bind to ipv6 conditionally

2015-02-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer moved HDFS-7834 to HADOOP-11630:
-

  Component/s: (was: scripts)
   scripts
Affects Version/s: (was: 2.6.0)
   2.6.0
  Key: HADOOP-11630  (was: HDFS-7834)
  Project: Hadoop Common  (was: Hadoop HDFS)

> Allow HDFS to bind to ipv6 conditionally
> 
>
> Key: HADOOP-11630
> URL: https://issues.apache.org/jira/browse/HADOOP-11630
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.6.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HDFS-7834-branch-2-0.patch
>
>
> Currently the bash scripts unconditionally add -Djava.net.preferIPv4Stack=true.
> While this was needed a while ago, IPv6 on Java works much better now, and
> there should be a way to allow it to bind dual-stack if needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11630) Allow hadoop to bind to ipv6 conditionally

2015-02-24 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335450#comment-14335450
 ] 

Elliott Clark commented on HADOOP-11630:


Yeah, I'd like to get that functionality into branch-2, and I'd also like to 
clean up trunk so that setting HADOOP_ALLOW_IPV6 is all that's needed.

> Allow hadoop to bind to ipv6 conditionally
> --
>
> Key: HADOOP-11630
> URL: https://issues.apache.org/jira/browse/HADOOP-11630
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.6.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>  Labels: ipv6
> Attachments: HDFS-7834-branch-2-0.patch
>
>
> Currently the bash scripts unconditionally add -Djava.net.preferIPv4Stack=true.
> While this was needed a while ago, IPv6 on Java works much better now, and
> there should be a way to allow it to bind dual-stack if needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11630) Allow hadoop to bind to ipv6 conditionally

2015-02-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11630:
--
Summary: Allow hadoop to bind to ipv6 conditionally  (was: Allow HDFS to 
bind to ipv6 conditionally)

> Allow hadoop to bind to ipv6 conditionally
> --
>
> Key: HADOOP-11630
> URL: https://issues.apache.org/jira/browse/HADOOP-11630
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.6.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>  Labels: ipv6
> Attachments: HDFS-7834-branch-2-0.patch
>
>
> Currently the bash scripts unconditionally add -Djava.net.preferIPv4Stack=true.
> While this was needed a while ago, IPv6 on Java works much better now, and
> there should be a way to allow it to bind dual-stack if needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11629) WASB filesystem should not start BandwidthGaugeUpdater if fs.azure.skip.metrics set to true

2015-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335451#comment-14335451
 ] 

Hadoop QA commented on HADOOP-11629:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12700569/HADOOP-11629.patch
  against trunk revision 9a37247.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-azure.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5769//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5769//artifact/patchprocess/newPatchFindbugsWarningshadoop-azure.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5769//console

This message is automatically generated.

> WASB filesystem should not start BandwidthGaugeUpdater if 
> fs.azure.skip.metrics set to true
> ---
>
> Key: HADOOP-11629
> URL: https://issues.apache.org/jira/browse/HADOOP-11629
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: shanyu zhao
>Assignee: shanyu zhao
> Attachments: HADOOP-11629.patch
>
>
> In Hadoop-11248 we added configuration "fs.azure.skip.metrics". If set to 
> true, we do not register Azure FileSystem metrics with the metrics system. 
> However, BandwidthGaugeUpdater object is still created in 
> AzureNativeFileSystemStore, resulting in unnecessary threads being spawned.
> Under heavy load the system could be busy dealing with these threads, and GC 
> has to work on removing the thread objects. E.g., when multiple WebHCat 
> clients are submitting jobs to the WebHCat server, we observed that the 
> server spawns ~400 daemon threads, which slows it down and sometimes causes 
> timeouts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11628) SPNEGO auth does not work with CNAMEs in JDK8

2015-02-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335412#comment-14335412
 ] 

Allen Wittenauer commented on HADOOP-11628:
---

Win 2k does, Win 2k3 does not, based upon 
https://technet.microsoft.com/en-us/library/cc772815%28v=ws.10%29.aspx .

Ugh:  
http://stackoverflow.com/questions/12229658/java-spnego-unwanted-spn-canonicalization

The thread mentioned there gives more details on why this is a bad deployment 
strategy. So I'm thinking this should probably be a runtime option that 
defaults to off.

> SPNEGO auth does not work with CNAMEs in JDK8
> -
>
> Key: HADOOP-11628
> URL: https://issues.apache.org/jira/browse/HADOOP-11628
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>  Labels: jdk8
> Attachments: HADOOP-11628.patch
>
>
> Pre-JDK8, GSSName auto-canonicalized the hostname when constructing the 
> principal for SPNEGO.  JDK8 no longer does this which breaks the use of 
> user-friendly CNAMEs for services.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11574) Uber-JIRA: improve Hadoop network resilience & diagnostics

2015-02-24 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335407#comment-14335407
 ] 

Ray Chiang commented on HADOOP-11574:
-

I like the user-centric definitions above.  Then for each type of error, such 
as:

- DNS/UnknownHostException
- RPC/RemoteException
- SecurityException

we can see where it's deficient in the context of each user.

As with most of our log messages, I worry a bit about finding the right 
balance between giving notification and filling the logs too much.

> Uber-JIRA: improve Hadoop network resilience & diagnostics
> --
>
> Key: HADOOP-11574
> URL: https://issues.apache.org/jira/browse/HADOOP-11574
> Project: Hadoop Common
>  Issue Type: Task
>  Components: net
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>  Labels: supportability
>
> Improve Hadoop's resilience to bad network conditions/problems, including
> * improving recognition of problem states
> * improving diagnostics
> * better handling of IPv6 addresses, even if the protocol is unsupported.
> * better behaviour client-side when there are connectivity problems. (i.e 
> while some errors you can spin on, DNS failures are not on the list)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11574) Uber-JIRA: improve Hadoop network resilience & diagnostics

2015-02-24 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-11574:

Labels: supportability  (was: )

> Uber-JIRA: improve Hadoop network resilience & diagnostics
> --
>
> Key: HADOOP-11574
> URL: https://issues.apache.org/jira/browse/HADOOP-11574
> Project: Hadoop Common
>  Issue Type: Task
>  Components: net
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>  Labels: supportability
>
> Improve Hadoop's resilience to bad network conditions/problems, including
> * improving recognition of problem states
> * improving diagnostics
> * better handling of IPv6 addresses, even if the protocol is unsupported.
> * better behaviour client-side when there are connectivity problems. (i.e 
> while some errors you can spin on, DNS failures are not on the list)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11628) SPNEGO auth does not work with CNAMEs in JDK8

2015-02-24 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335392#comment-14335392
 ] 

Daryn Sharp commented on HADOOP-11628:
--

Pretty much all browsers and cmdline tools like curl default to 
canonicalization. I don't have access to Windows hosts, but I'm pretty sure 
they do the same.

> SPNEGO auth does not work with CNAMEs in JDK8
> -
>
> Key: HADOOP-11628
> URL: https://issues.apache.org/jira/browse/HADOOP-11628
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>  Labels: jdk8
> Attachments: HADOOP-11628.patch
>
>
> Pre-JDK8, GSSName auto-canonicalized the hostname when constructing the 
> principal for SPNEGO.  JDK8 no longer does this which breaks the use of 
> user-friendly CNAMEs for services.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11629) WASB filesystem should not start BandwidthGaugeUpdater if fs.azure.skip.metrics set to true

2015-02-24 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11629:
---
Status: Patch Available  (was: Open)

> WASB filesystem should not start BandwidthGaugeUpdater if 
> fs.azure.skip.metrics set to true
> ---
>
> Key: HADOOP-11629
> URL: https://issues.apache.org/jira/browse/HADOOP-11629
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: shanyu zhao
>Assignee: shanyu zhao
> Attachments: HADOOP-11629.patch
>
>
> In Hadoop-11248 we added configuration "fs.azure.skip.metrics". If set to 
> true, we do not register Azure FileSystem metrics with the metrics system. 
> However, BandwidthGaugeUpdater object is still created in 
> AzureNativeFileSystemStore, resulting in unnecessary threads being spawned.
> Under heavy load the system could be busy dealing with these threads, and GC 
> has to work on removing the thread objects. E.g., when multiple WebHCat 
> clients are submitting jobs to the WebHCat server, we observed that the 
> server spawns ~400 daemon threads, which slows it down and sometimes causes 
> timeouts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11629) WASB filesystem should not start BandwidthGaugeUpdater if fs.azure.skip.metrics set to true

2015-02-24 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11629:
---
Hadoop Flags: Reviewed

+1 for the patch pending Jenkins run.  Thanks, Shanyu.

> WASB filesystem should not start BandwidthGaugeUpdater if 
> fs.azure.skip.metrics set to true
> ---
>
> Key: HADOOP-11629
> URL: https://issues.apache.org/jira/browse/HADOOP-11629
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: shanyu zhao
>Assignee: shanyu zhao
> Attachments: HADOOP-11629.patch
>
>
> In Hadoop-11248 we added configuration "fs.azure.skip.metrics". If set to 
> true, we do not register Azure FileSystem metrics with the metrics system. 
> However, BandwidthGaugeUpdater object is still created in 
> AzureNativeFileSystemStore, resulting in unnecessary threads being spawned.
> Under heavy load the system could be busy dealing with these threads, and GC 
> has to work on removing the thread objects. E.g., when multiple WebHCat 
> clients are submitting jobs to the WebHCat server, we observed that the 
> server spawns ~400 daemon threads, which slows it down and sometimes causes 
> timeouts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11629) WASB filesystem should not start BandwidthGaugeUpdater if fs.azure.skip.metrics set to true

2015-02-24 Thread shanyu zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shanyu zhao updated HADOOP-11629:
-
Attachment: HADOOP-11629.patch

patch attached.

> WASB filesystem should not start BandwidthGaugeUpdater if 
> fs.azure.skip.metrics set to true
> ---
>
> Key: HADOOP-11629
> URL: https://issues.apache.org/jira/browse/HADOOP-11629
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: shanyu zhao
>Assignee: shanyu zhao
> Attachments: HADOOP-11629.patch
>
>
> In Hadoop-11248 we added configuration "fs.azure.skip.metrics". If set to 
> true, we do not register Azure FileSystem metrics with the metrics system. 
> However, BandwidthGaugeUpdater object is still created in 
> AzureNativeFileSystemStore, resulting in unnecessary threads being spawned.
> Under heavy load the system could be busy dealing with these threads, and GC 
> has to work on removing the thread objects. E.g., when multiple WebHCat 
> clients are submitting jobs to the WebHCat server, we observed that the 
> server spawns ~400 daemon threads, which slows it down and sometimes causes 
> timeouts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11629) WASB filesystem should not start BandwidthGaugeUpdater if fs.azure.skip.metrics set to true

2015-02-24 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11629:
---
 Target Version/s: 2.7.0
Affects Version/s: (was: 2.6.1)

> WASB filesystem should not start BandwidthGaugeUpdater if 
> fs.azure.skip.metrics set to true
> ---
>
> Key: HADOOP-11629
> URL: https://issues.apache.org/jira/browse/HADOOP-11629
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: shanyu zhao
>Assignee: shanyu zhao
>
> In Hadoop-11248 we added configuration "fs.azure.skip.metrics". If set to 
> true, we do not register Azure FileSystem metrics with the metrics system. 
> However, BandwidthGaugeUpdater object is still created in 
> AzureNativeFileSystemStore, resulting in unnecessary threads being spawned.
> Under heavy load the system could be busy dealing with these threads, and GC 
> has to work on removing the thread objects. E.g., when multiple WebHCat 
> clients are submitting jobs to the WebHCat server, we observed that the 
> server spawns ~400 daemon threads, which slows it down and sometimes causes 
> timeouts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11629) WASB filesystem should not start BandwidthGaugeUpdater if fs.azure.skip.metrics set to true

2015-02-24 Thread shanyu zhao (JIRA)
shanyu zhao created HADOOP-11629:


 Summary: WASB filesystem should not start BandwidthGaugeUpdater if 
fs.azure.skip.metrics set to true
 Key: HADOOP-11629
 URL: https://issues.apache.org/jira/browse/HADOOP-11629
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.6.1
Reporter: shanyu zhao
Assignee: shanyu zhao


In Hadoop-11248 we added configuration "fs.azure.skip.metrics". If set to true, 
we do not register Azure FileSystem metrics with the metrics system. However, 
BandwidthGaugeUpdater object is still created in AzureNativeFileSystemStore, 
resulting in unnecessary threads being spawned.

Under heavy load the system could be busy dealing with these threads, and GC 
has to work on removing the thread objects. E.g., when multiple WebHCat 
clients are submitting jobs to the WebHCat server, we observed that the server 
spawns ~400 daemon threads, which slows it down and sometimes causes timeouts.
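The fix described above amounts to guarding creation of the thread-spawning object itself, not just its metrics registration. A minimal sketch with hypothetical names (a stand-in interface, not the actual AzureNativeFileSystemStore code):

```java
public class StoreInit {
    /** Stand-in for BandwidthGaugeUpdater: anything that owns background threads. */
    interface Updater {}

    static Updater maybeCreateUpdater(boolean skipMetrics) {
        // When metrics are skipped, do not construct the updater at all,
        // so no daemon threads are ever spawned on its behalf.
        return skipMetrics ? null : new Updater() {};
    }
}
```

Callers then need a null check before use, which is the usual trade-off for lazily or conditionally created helpers.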




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11620) Add Support for Load Balancing across a group of KMS servers for HA

2015-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335209#comment-14335209
 ] 

Hadoop QA commented on HADOOP-11620:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12700527/HADOOP-11620.4.patch
  against trunk revision 1aea440.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5768//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5768//console

This message is automatically generated.

> Add Support for Load Balancing across a group of KMS servers for HA
> ---
>
> Key: HADOOP-11620
> URL: https://issues.apache.org/jira/browse/HADOOP-11620
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: HADOOP-11620.1.patch, HADOOP-11620.2.patch, 
> HADOOP-11620.3.patch, HADOOP-11620.4.patch
>
>
> This patch needs to add support for:
> * specification of multiple hostnames in the kms key provider uri
> * KMS client to load balance requests across the hosts specified in the kms 
> keyprovider uri.
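The two bullets above imply spreading client requests over the hosts parsed from a multi-host provider URI. One simple policy is round-robin; the sketch below uses hypothetical names and is not the API of the attached patches:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

/** Round-robin selection over the KMS hosts named in a multi-host URI. */
public class RoundRobinHosts {
    private final List<String> hosts;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinHosts(List<String> hosts) {
        this.hosts = hosts;
    }

    /** Next host to try; floorMod keeps the index valid if the counter overflows. */
    public String pick() {
        return hosts.get(Math.floorMod(next.getAndIncrement(), hosts.size()));
    }
}
```

A real client would additionally skip hosts that recently failed and retry the request on the next host, which is where most of the patch's complexity would live.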



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11620) Add Support for Load Balancing across a group of KMS servers for HA

2015-02-24 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335175#comment-14335175
 ] 

Larry McCay commented on HADOOP-11620:
--

Thanks, [~asuresh]!
LGTM!

> Add Support for Load Balancing across a group of KMS servers for HA
> ---
>
> Key: HADOOP-11620
> URL: https://issues.apache.org/jira/browse/HADOOP-11620
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: HADOOP-11620.1.patch, HADOOP-11620.2.patch, 
> HADOOP-11620.3.patch, HADOOP-11620.4.patch
>
>
> This patch needs to add support for:
> * specification of multiple hostnames in the kms key provider uri
> * KMS client to load balance requests across the hosts specified in the kms 
> keyprovider uri.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11620) Add Support for Load Balancing across a group of KMS servers for HA

2015-02-24 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HADOOP-11620:
-
Attachment: HADOOP-11620.4.patch

[~owen.omalley], [~lmccay], as per your suggestions, 
uploading a patch to revert to the old scheme.

> Add Support for Load Balancing across a group of KMS servers for HA
> ---
>
> Key: HADOOP-11620
> URL: https://issues.apache.org/jira/browse/HADOOP-11620
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: HADOOP-11620.1.patch, HADOOP-11620.2.patch, 
> HADOOP-11620.3.patch, HADOOP-11620.4.patch
>
>
> This patch needs to add support for:
> * specification of multiple hostnames in the kms key provider uri
> * KMS client to load balance requests across the hosts specified in the kms 
> keyprovider uri.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11628) SPNEGO auth does not work with CNAMEs in JDK8

2015-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335120#comment-14335120
 ] 

Hadoop QA commented on HADOOP-11628:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12700519/HADOOP-11628.patch
  against trunk revision 1aea440.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-auth.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5767//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5767//console

This message is automatically generated.

> SPNEGO auth does not work with CNAMEs in JDK8
> -
>
> Key: HADOOP-11628
> URL: https://issues.apache.org/jira/browse/HADOOP-11628
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>  Labels: jdk8
> Attachments: HADOOP-11628.patch
>
>
> Pre-JDK8, GSSName auto-canonicalized the hostname when constructing the 
> principal for SPNEGO.  JDK8 no longer does this which breaks the use of 
> user-friendly CNAMEs for services.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11602) Fix toUpperCase/toLowerCase to use Locale.ENGLISH

2015-02-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335092#comment-14335092
 ] 

Steve Loughran commented on HADOOP-11602:
-


# if it's in-project, we don't have to worry so much about audience: it is 
private & we can use it across all the code
# non-hadoop-common, though, that's trickier. 

Maybe we could say:
# everything downstream of hadoop-common uses the helper method
# everything that isn't (hadoop-auth?) uses the Java {{toUpperCase()}} logic as is
# I was also thinking we could have an {{equalsIgnoringCase(String s1, String 
s2)}} helper method for the case logic; include null-checks in there too.
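The helpers proposed above could look like this minimal sketch (a hypothetical StringUtils-style class, not the actual Hadoop API):

```java
import java.util.Locale;

public class CaseUtils {
    /** Locale-independent lower-casing, immune to e.g. the Turkish dotless-i. */
    public static String toLowerCase(String s) {
        return s == null ? null : s.toLowerCase(Locale.ENGLISH);
    }

    /** Null-safe, locale-independent case-insensitive comparison. */
    public static boolean equalsIgnoringCase(String s1, String s2) {
        if (s1 == null || s2 == null) {
            return s1 == s2;  // equal only when both are null
        }
        // String#equalsIgnoreCase compares character by character and does not
        // consult the default locale, unlike no-arg toLowerCase()/toUpperCase().
        return s1.equalsIgnoreCase(s2);
    }
}
```

Delegating to {{String#equalsIgnoreCase}} avoids allocating lowered copies of both strings on every comparison.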


> Fix toUpperCase/toLowerCase to use Locale.ENGLISH
> -
>
> Key: HADOOP-11602
> URL: https://issues.apache.org/jira/browse/HADOOP-11602
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
> Attachments: HADOOP-11602-001.patch, HADOOP-11602-002.patch, 
> HADOOP-11602-branch-2.001.patch, HADOOP-11602-branch-2.002.patch
>
>
> String#toLowerCase()/toUpperCase() without a locale argument can cause 
> unexpected behavior depending on the locale. It's written in 
> [Javadoc|http://docs.oracle.com/javase/7/docs/api/java/lang/String.html#toLowerCase()]:
> {quote}
> For instance, "TITLE".toLowerCase() in a Turkish locale returns "t\u0131tle", 
> where '\u0131' is the LATIN SMALL LETTER DOTLESS I character
> {quote}
> This issue is derived from HADOOP-10101.
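The Turkish-locale behavior quoted above is easy to reproduce directly (a minimal demonstration, not Hadoop code):

```java
import java.util.Locale;

/** Under a Turkish locale, 'I' lower-cases to the dotless '\u0131',
 *  which breaks naive case-folded string comparisons. */
public class LocaleDemo {
    public static String lowerIn(String s, Locale locale) {
        return s.toLowerCase(locale);
    }
}
```

Passing {{Locale.ENGLISH}} explicitly pins the mapping regardless of the JVM's default locale, which is the fix this issue applies throughout the codebase.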



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11628) SPNEGO auth does not work with CNAMEs in JDK8

2015-02-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335090#comment-14335090
 ] 

Allen Wittenauer commented on HADOOP-11628:
---

OK, it looks like it is implementation-dependent: MIT does canonicalize, MS 
does not.  Wheee.

> SPNEGO auth does not work with CNAMEs in JDK8
> -
>
> Key: HADOOP-11628
> URL: https://issues.apache.org/jira/browse/HADOOP-11628
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>  Labels: jdk8
> Attachments: HADOOP-11628.patch
>
>
> Pre-JDK8, GSSName auto-canonicalized the hostname when constructing the 
> principal for SPNEGO.  JDK8 no longer does this which breaks the use of 
> user-friendly CNAMEs for services.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11628) SPNEGO auth does not work with CNAMEs in JDK8

2015-02-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335087#comment-14335087
 ] 

Allen Wittenauer commented on HADOOP-11628:
---

I don't think this is the correct fix.  I'm fairly certain that SPNs that are 
CNAMEs are supposed to stay CNAMEs.  In other words, JDK8 fixed JDK7's broken 
behavior.

> SPNEGO auth does not work with CNAMEs in JDK8
> -
>
> Key: HADOOP-11628
> URL: https://issues.apache.org/jira/browse/HADOOP-11628
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>  Labels: jdk8
> Attachments: HADOOP-11628.patch
>
>
> Pre-JDK8, GSSName auto-canonicalized the hostname when constructing the 
> principal for SPNEGO.  JDK8 no longer does this which breaks the use of 
> user-friendly CNAMEs for services.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11628) SPNEGO auth does not work with CNAMEs in JDK8

2015-02-24 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-11628:
-
Status: Patch Available  (was: Open)

> SPNEGO auth does not work with CNAMEs in JDK8
> -
>
> Key: HADOOP-11628
> URL: https://issues.apache.org/jira/browse/HADOOP-11628
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>  Labels: jdk8
> Attachments: HADOOP-11628.patch
>
>
> Pre-JDK8, GSSName auto-canonicalized the hostname when constructing the 
> principal for SPNEGO.  JDK8 no longer does this which breaks the use of 
> user-friendly CNAMEs for services.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11628) SPNEGO auth does not work with CNAMEs in JDK8

2015-02-24 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-11628:
-
Attachment: HADOOP-11628.patch

Explicitly canonicalize. I cannot add a unit test due to the inability to fake 
CNAMEs, but it has been tested internally.
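The fix described ("explicitly canonicalize") can be sketched as resolving the CNAME before building the service principal; the names below are illustrative, not the attached patch's actual code:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Locale;

public class SpnegoPrincipal {
    /** Resolve a (possibly CNAME) host to its canonical name,
     *  mirroring what GSSName did implicitly before JDK8. */
    static String canonicalize(String host) throws UnknownHostException {
        return InetAddress.getByName(host).getCanonicalHostName();
    }

    /** Build the HTTP service principal from an already-canonical hostname. */
    static String principalFor(String canonicalHost) {
        return "HTTP/" + canonicalHost.toLowerCase(Locale.ENGLISH);
    }
}
```

Note that {{getCanonicalHostName()}} does a forward lookup plus a reverse lookup, so its result depends on the resolver configuration of the host running the client.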

> SPNEGO auth does not work with CNAMEs in JDK8
> -
>
> Key: HADOOP-11628
> URL: https://issues.apache.org/jira/browse/HADOOP-11628
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>  Labels: jdk8
> Attachments: HADOOP-11628.patch
>
>
> Pre-JDK8, GSSName auto-canonicalized the hostname when constructing the 
> principal for SPNEGO.  JDK8 no longer does this which breaks the use of 
> user-friendly CNAMEs for services.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11628) SPNEGO auth does not work with CNAMEs in JDK8

2015-02-24 Thread Daryn Sharp (JIRA)
Daryn Sharp created HADOOP-11628:


 Summary: SPNEGO auth does not work with CNAMEs in JDK8
 Key: HADOOP-11628
 URL: https://issues.apache.org/jira/browse/HADOOP-11628
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical


Pre-JDK8, GSSName auto-canonicalized the hostname when constructing the 
principal for SPNEGO.  JDK8 no longer does this, which breaks the use of 
user-friendly CNAMEs for services.
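
The fix described here, canonicalizing the hostname explicitly before building the SPNEGO principal, can be sketched as follows. The class and method names are illustrative only, not the actual patch:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Hypothetical sketch of the JDK8 workaround: resolve a (possibly CNAME)
// hostname to its canonical form ourselves, since JDK8's GSSName no longer
// auto-canonicalizes when constructing the "HTTP/<host>" principal.
public class SpnegoPrincipal {
    static String principalFor(String host) {
        try {
            // Canonicalize the CNAME to its A-record hostname explicitly.
            String canonical = InetAddress.getByName(host).getCanonicalHostName();
            return "HTTP/" + canonical;
        } catch (UnknownHostException e) {
            throw new IllegalArgumentException("cannot resolve " + host, e);
        }
    }

    public static void main(String[] args) {
        System.out.println(principalFor("localhost"));
    }
}
```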





[jira] [Commented] (HADOOP-11602) Fix toUpperCase/toLowerCase to use Locale.ENGLISH

2015-02-24 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335035#comment-14335035
 ] 

Tsuyoshi OZAWA commented on HADOOP-11602:
-

[~ste...@apache.org] [~shv] I found some difficulty with the StringUtil approach: 
org.apache.hadoop.util.StringUtil is marked as @InterfaceAudience.Private. 
Also, we have some packages which don't depend on hadoop-common. How should we 
deal with this problem? Should we create a StringConverter or StringUtil for 
each package?


> Fix toUpperCase/toLowerCase to use Locale.ENGLISH
> -
>
> Key: HADOOP-11602
> URL: https://issues.apache.org/jira/browse/HADOOP-11602
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
> Attachments: HADOOP-11602-001.patch, HADOOP-11602-002.patch, 
> HADOOP-11602-branch-2.001.patch, HADOOP-11602-branch-2.002.patch
>
>
> String#toLowerCase()/toUpperCase() without a locale argument can cause 
> unexpected behavior depending on the locale. This is documented in the 
> [Javadoc|http://docs.oracle.com/javase/7/docs/api/java/lang/String.html#toLowerCase()]:
> {quote}
> For instance, "TITLE".toLowerCase() in a Turkish locale returns "t\u0131tle", 
> where '\u0131' is the LATIN SMALL LETTER DOTLESS I character
> {quote}
> This issue is derived from HADOOP-10101.
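
The Javadoc behavior quoted above is easy to reproduce; the snippet below shows how the same call yields different results under the English and Turkish locales:

```java
import java.util.Locale;

// Demonstrates the locale-sensitive lowercasing that this issue fixes by
// always passing an explicit Locale to toLowerCase()/toUpperCase().
public class LocaleLowerCase {
    public static void main(String[] args) {
        String s = "TITLE";
        // Locale-independent lowercasing: always "title".
        System.out.println(s.toLowerCase(Locale.ENGLISH));
        // Under tr-TR, capital 'I' maps to U+0131 (LATIN SMALL LETTER DOTLESS I),
        // so the result is "t\u0131tle", not "title".
        System.out.println(s.toLowerCase(new Locale("tr", "TR")));
    }
}
```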





[jira] [Commented] (HADOOP-11620) Add Support for Load Balancing across a group of KMS servers for HA

2015-02-24 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335034#comment-14335034
 ] 

Owen O'Malley commented on HADOOP-11620:


I agree with Larry that the other pattern was better. It is a little strange 
using a compound like "host1;host2" for the host part of the URI, but moving an 
override of the port number into the host part is too confusing for little 
gain. Please go back to the previous version.

> Add Support for Load Balancing across a group of KMS servers for HA
> ---
>
> Key: HADOOP-11620
> URL: https://issues.apache.org/jira/browse/HADOOP-11620
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: HADOOP-11620.1.patch, HADOOP-11620.2.patch, 
> HADOOP-11620.3.patch
>
>
> This patch needs to add support for:
> * specification of multiple hostnames in the kms key provider uri
> * KMS client to load balance requests across the hosts specified in the kms 
> keyprovider uri.
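
One way to handle the multi-host provider URI described above is to split the host part of the authority on the separator; the helper below is a hypothetical illustration of the "host1;host2" convention under discussion, not the KMS client's actual parsing code:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative parser for a compound host part such as the one proposed here,
// e.g. the "host1;host2;host3" portion of "kms://http@host1;host2;host3:9600/kms".
public class KmsHostsParser {
    /** Splits a semicolon-separated host part into individual hosts. */
    static List<String> hosts(String hostPart) {
        return Arrays.asList(hostPart.split(";"));
    }

    public static void main(String[] args) {
        // A client could round-robin requests across this list for load balancing.
        System.out.println(hosts("host1;host2;host3")); // prints [host1, host2, host3]
    }
}
```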





[jira] [Commented] (HADOOP-11602) Fix toUpperCase/toLowerCase to use Locale.ENGLISH

2015-02-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335018#comment-14335018
 ] 

Hudson commented on HADOOP-11602:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7187 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7187/])
Revert "HADOOP-11602. Fix toUpperCase/toLowerCase to use Locale.ENGLISH. 
(ozawa)" (ozawa: rev 9cedad11d8d2197a54732667a15344983de5c437)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBInputFormat.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/slive/OperationData.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/SchedulingPolicy.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/Server.java
* hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/DistCpV1.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/ParametersProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/QuotaByStorageTypeEntry.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
* 
hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/versioninfo/VersionInfoMojo.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMapRed.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/TypeConverter.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CipherSuite.java
* 
hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/LoggedTask.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestFileSystem.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSecurityUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLHostnameVerifier.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/CheckUploadContentTypeFilter.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebServices.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
* 
hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/JobBuilder.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebServices.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationsRequestPBImpl.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CompressionCodecFactory.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/Environment.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TimedOutTestsListener.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsVisitorFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/EnumSetParam.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/webapp/TimelineWebServices.java
* 
hadoop-common-project/hadoo

[jira] [Commented] (HADOOP-11602) Fix toUpperCase/toLowerCase to use Locale.ENGLISH

2015-02-24 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335013#comment-14335013
 ] 

Tsuyoshi OZAWA commented on HADOOP-11602:
-

Reverted the change (946456c6d88780abe0251b098dd771e9e1e93ab3) on trunk. 
I'll create a new patch shortly.

> Fix toUpperCase/toLowerCase to use Locale.ENGLISH
> -
>
> Key: HADOOP-11602
> URL: https://issues.apache.org/jira/browse/HADOOP-11602
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
> Attachments: HADOOP-11602-001.patch, HADOOP-11602-002.patch, 
> HADOOP-11602-branch-2.001.patch, HADOOP-11602-branch-2.002.patch
>
>
> String#toLowerCase()/toUpperCase() without a locale argument can cause 
> unexpected behavior depending on the locale. This is documented in the 
> [Javadoc|http://docs.oracle.com/javase/7/docs/api/java/lang/String.html#toLowerCase()]:
> {quote}
> For instance, "TITLE".toLowerCase() in a Turkish locale returns "t\u0131tle", 
> where '\u0131' is the LATIN SMALL LETTER DOTLESS I character
> {quote}
> This issue is derived from HADOOP-10101.





[jira] [Updated] (HADOOP-11602) Fix toUpperCase/toLowerCase to use Locale.ENGLISH

2015-02-24 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HADOOP-11602:

Status: Open  (was: Patch Available)

> Fix toUpperCase/toLowerCase to use Locale.ENGLISH
> -
>
> Key: HADOOP-11602
> URL: https://issues.apache.org/jira/browse/HADOOP-11602
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
> Attachments: HADOOP-11602-001.patch, HADOOP-11602-002.patch, 
> HADOOP-11602-branch-2.001.patch, HADOOP-11602-branch-2.002.patch
>
>
> String#toLowerCase()/toUpperCase() without a locale argument can cause 
> unexpected behavior depending on the locale. This is documented in the 
> [Javadoc|http://docs.oracle.com/javase/7/docs/api/java/lang/String.html#toLowerCase()]:
> {quote}
> For instance, "TITLE".toLowerCase() in a Turkish locale returns "t\u0131tle", 
> where '\u0131' is the LATIN SMALL LETTER DOTLESS I character
> {quote}
> This issue is derived from HADOOP-10101.





[jira] [Commented] (HADOOP-8642) Document that io.native.lib.available only controls native bz2 and zlib compression codecs

2015-02-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14334998#comment-14334998
 ] 

Hudson commented on HADOOP-8642:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #2064 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2064/])
HADOOP-8642. Document that io.native.lib.available only controls native bz2 and 
zlib compression codecs. (aajisaka) (aajisaka: rev 
ab5976161f3afaaf2ace60bab400e0d8dbc61923)
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* hadoop-common-project/hadoop-common/CHANGES.txt


> Document that io.native.lib.available only controls native bz2 and zlib 
> compression codecs
> --
>
> Key: HADOOP-8642
> URL: https://issues.apache.org/jira/browse/HADOOP-8642
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, native
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Akira AJISAKA
> Fix For: 2.7.0
>
> Attachments: HADOOP-8642.2.patch, HADOOP-8642.3.patch, 
> HADOOP-8642.4.patch, HADOOP-8642.5.patch, HADOOP-8642.patch
>
>
> Per core-default.xml, {{io.native.lib.available}} indicates "Should native 
> hadoop libraries, if present, be used"; however, it only affects the bzip2 
> and zlib codecs. Even if {{io.native.lib.available}} is set to false, the 
> native libraries are still loaded, and codecs other than bzip2 and zlib are 
> still used. We should document that.





[jira] [Commented] (HADOOP-10478) Fix new findbugs warnings in hadoop-maven-plugins

2015-02-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14334994#comment-14334994
 ] 

Hudson commented on HADOOP-10478:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2064 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2064/])
HADOOP-10478. Fix new findbugs warnings in hadoop-maven-plugins. Contributed by 
Li Lu. (wheat9: rev 16bd79ee8e95dbe69a8c903128572363231e2b01)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java


> Fix new findbugs warnings in hadoop-maven-plugins
> -
>
> Key: HADOOP-10478
> URL: https://issues.apache.org/jira/browse/HADOOP-10478
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Li Lu
>  Labels: newbie
> Fix For: 2.7.0
>
> Attachments: HADOOP-10478-022315.patch
>
>
> The following findbug warning needs to be fixed:
> {noformat}
> [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ 
> hadoop-maven-plugins ---
> [INFO] BugInstance size is 1
> [INFO] Error size is 0
> [INFO] Total bugs: 1
> [INFO] Found reliance on default encoding in new 
> org.apache.hadoop.maven.plugin.util.Exec$OutputBufferThread(InputStream): new 
> java.io.InputStreamReader(InputStream) 
> ["org.apache.hadoop.maven.plugin.util.Exec$OutputBufferThread"] At 
> Exec.java:[lines 89-114]
> {noformat}
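
A minimal sketch of the kind of fix this findbugs warning calls for: pass an explicit charset to InputStreamReader rather than relying on the platform default encoding. This is illustrative only, not the actual Exec.java patch:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

// Reads a stream line by line with an explicit charset, avoiding the
// "reliance on default encoding" findbugs warning shown above.
public class ExplicitCharsetReader {
    static String readAll(InputStream in) {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(in, StandardCharsets.UTF_8))) {
            String line;
            while ((line = r.readLine()) != null) {
                sb.append(line);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        byte[] bytes = "ok".getBytes(StandardCharsets.UTF_8);
        System.out.println(readAll(new ByteArrayInputStream(bytes))); // prints "ok"
    }
}
```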





[jira] [Commented] (HADOOP-11619) FTPFileSystem should override getDefaultPort

2015-02-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14334995#comment-14334995
 ] 

Hudson commented on HADOOP-11619:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2064 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2064/])
HADOOP-11619. FTPFileSystem should override getDefaultPort. (Brahma Reddy 
Battula via gera) (gera: rev 1dba57271fa56a7383139deb0b89a61c58eedf25)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/ftp/TestFTPFileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java


> FTPFileSystem should override getDefaultPort
> 
>
> Key: HADOOP-11619
> URL: https://issues.apache.org/jira/browse/HADOOP-11619
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Gera Shegalov
>Assignee: Brahma Reddy Battula
> Fix For: 2.7.0
>
> Attachments: HADOOP-11619-002.patch, HADOOP-11619-003.patch, 
> HADOOP-11619-004.patch, HADOOP-11619-005.patch, HADOOP-11619.patch
>
>
> FTPFileSystem should override FileSystem#getDefaultPort to return 
> FTP.DEFAULT_PORT 
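
The shape of the fix can be sketched as below. FTP.DEFAULT_PORT comes from Apache Commons Net; the value 21 is inlined here, and the classes are stand-ins, to keep the sketch self-contained:

```java
// Hypothetical illustration of overriding getDefaultPort() so that URIs
// without an explicit port fall back to the protocol's standard port.
public class DefaultPortExample {
    // Stand-in for org.apache.hadoop.fs.FileSystem's overridable hook.
    static class BaseFileSystem {
        protected int getDefaultPort() {
            return 0; // base default: no port
        }
    }

    static class FtpLikeFileSystem extends BaseFileSystem {
        @Override
        protected int getDefaultPort() {
            return 21; // stand-in for FTP.DEFAULT_PORT
        }
    }

    public static void main(String[] args) {
        System.out.println(new FtpLikeFileSystem().getDefaultPort()); // prints 21
    }
}
```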





[jira] [Commented] (HADOOP-11625) Minor fixes to command manual & SLA doc

2015-02-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14334993#comment-14334993
 ] 

Hudson commented on HADOOP-11625:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2064 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2064/])
HADOOP-11625. Minor fixes to command manual & SLA doc (aw) (aw: rev 
208430a15d68aa44346150884a13712f2381d593)
* hadoop-common-project/hadoop-common/src/site/markdown/ServiceLevelAuth.md
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md


> Minor fixes to command manual & SLA doc
> ---
>
> Key: HADOOP-11625
> URL: https://issues.apache.org/jira/browse/HADOOP-11625
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 3.0.0
>
> Attachments: HADOOP-11625-00.patch
>
>






[jira] [Commented] (HADOOP-8642) Document that io.native.lib.available only controls native bz2 and zlib compression codecs

2015-02-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14334977#comment-14334977
 ] 

Hudson commented on HADOOP-8642:


FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #114 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/114/])
HADOOP-8642. Document that io.native.lib.available only controls native bz2 and 
zlib compression codecs. (aajisaka) (aajisaka: rev 
ab5976161f3afaaf2ace60bab400e0d8dbc61923)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


> Document that io.native.lib.available only controls native bz2 and zlib 
> compression codecs
> --
>
> Key: HADOOP-8642
> URL: https://issues.apache.org/jira/browse/HADOOP-8642
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, native
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Akira AJISAKA
> Fix For: 2.7.0
>
> Attachments: HADOOP-8642.2.patch, HADOOP-8642.3.patch, 
> HADOOP-8642.4.patch, HADOOP-8642.5.patch, HADOOP-8642.patch
>
>
> Per core-default.xml, {{io.native.lib.available}} indicates "Should native 
> hadoop libraries, if present, be used"; however, it only affects the bzip2 
> and zlib codecs. Even if {{io.native.lib.available}} is set to false, the 
> native libraries are still loaded, and codecs other than bzip2 and zlib are 
> still used. We should document that.




