[jira] [Commented] (HADOOP-11569) Provide Merge API for MapFile to merge multiple similar MapFiles to one MapFile

2015-02-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14339867#comment-14339867
 ] 

Hadoop QA commented on HADOOP-11569:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12701292/HADOOP-11569-006.patch
  against trunk revision 8ca0d95.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5790//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5790//console

This message is automatically generated.

> Provide Merge API for MapFile to merge multiple similar MapFiles to one 
> MapFile
> ---
>
> Key: HADOOP-11569
> URL: https://issues.apache.org/jira/browse/HADOOP-11569
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HADOOP-11569-001.patch, HADOOP-11569-002.patch, 
> HADOOP-11569-003.patch, HADOOP-11569-004.patch, HADOOP-11569-005.patch, 
> HADOOP-11569-006.patch
>
>
> If there are multiple similar MapFiles with the same keyClass and value 
> classes, they can be merged into one MapFile to make searching easier.
> Provide an API similar to {{SequenceFile#merge()}}.
> Merging will be straightforward given that MapFiles are already sorted.
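Since every input MapFile is sorted, the operation described above is essentially a k-way merge over sorted streams. A minimal self-contained sketch of that idea in plain Java follows; the class and method names are illustrative only, not the Hadoop MapFile API that the patch adds.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Hypothetical sketch: shows the k-way merge that sorted inputs make possible.
public class SortedMerge {

    /** Merge individually sorted lists into one sorted list. */
    public static <T extends Comparable<T>> List<T> merge(List<List<T>> inputs) {
        // Heap entry: {sourceIndex, positionInSource}; ordered by the element
        // the entry currently points at, so the smallest head wins each round.
        PriorityQueue<int[]> heap = new PriorityQueue<>(
            Comparator.comparing((int[] e) -> inputs.get(e[0]).get(e[1])));
        for (int i = 0; i < inputs.size(); i++) {
            if (!inputs.get(i).isEmpty()) {
                heap.add(new int[] {i, 0});
            }
        }
        List<T> out = new ArrayList<>();
        while (!heap.isEmpty()) {
            int[] top = heap.poll();
            List<T> src = inputs.get(top[0]);
            out.add(src.get(top[1]));
            if (top[1] + 1 < src.size()) {
                // Advance only the source we just consumed from.
                heap.add(new int[] {top[0], top[1] + 1});
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> a = Arrays.asList("apple", "mango");
        List<String> b = Arrays.asList("banana", "peach");
        // prints [apple, banana, mango, peach]
        System.out.println(merge(Arrays.asList(a, b)));
    }
}
```

Because each input is already sorted, the merge is a single streaming pass and never needs to buffer or re-sort whole files, which is exactly why sortedness makes this API cheap to provide.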



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9489) Eclipse instructions in BUILDING.txt don't work

2015-02-27 Thread Chengbing Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14339873#comment-14339873
 ] 

Chengbing Liu commented on HADOOP-9489:
---

This is quite helpful to Eclipse users; I think we should include it in the next 
release. Thanks [~cnauroth]!

> Eclipse instructions in BUILDING.txt don't work
> ---
>
> Key: HADOOP-9489
> URL: https://issues.apache.org/jira/browse/HADOOP-9489
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.0
>Reporter: Carl Steinbach
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-9489.1.patch, HADOOP-9489.2.patch, 
> eclipse_hadoop_errors.txt
>
>
> I have tried several times to import Hadoop trunk into Eclipse following the 
> instructions in the BUILDING.txt file, but so far have not been able to get 
> it to work.
> If I use a fresh install of Eclipse 4.2.2, Eclipse will complain about an 
> undefined M2_REPO environment variable. I discovered that this is defined 
> automatically by the M2Eclipse plugin, and think that the BUILDING.txt doc 
> should be updated to explain this.
> After installing M2Eclipse I tried importing the code again, and now get over 
> 2500 errors related to missing class dependencies. Many of these errors 
> correspond to missing classes in the oah*.proto namespace, which makes me 
> think that 'mvn eclipse:eclipse' is not triggering protoc. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11569) Provide Merge API for MapFile to merge multiple similar MapFiles to one MapFile

2015-02-27 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14339885#comment-14339885
 ] 

Tsuyoshi Ozawa commented on HADOOP-11569:
-

+1, committing this shortly.

> Provide Merge API for MapFile to merge multiple similar MapFiles to one 
> MapFile
> ---
>
> Key: HADOOP-11569
> URL: https://issues.apache.org/jira/browse/HADOOP-11569
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HADOOP-11569-001.patch, HADOOP-11569-002.patch, 
> HADOOP-11569-003.patch, HADOOP-11569-004.patch, HADOOP-11569-005.patch, 
> HADOOP-11569-006.patch
>
>
> If there are multiple similar MapFiles with the same keyClass and value 
> classes, they can be merged into one MapFile to make searching easier.
> Provide an API similar to {{SequenceFile#merge()}}.
> Merging will be straightforward given that MapFiles are already sorted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11569) Provide Merge API for MapFile to merge multiple similar MapFiles to one MapFile

2015-02-27 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-11569:

   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks to Vinay and Uma for your reviews.

> Provide Merge API for MapFile to merge multiple similar MapFiles to one 
> MapFile
> ---
>
> Key: HADOOP-11569
> URL: https://issues.apache.org/jira/browse/HADOOP-11569
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.7.0
>
> Attachments: HADOOP-11569-001.patch, HADOOP-11569-002.patch, 
> HADOOP-11569-003.patch, HADOOP-11569-004.patch, HADOOP-11569-005.patch, 
> HADOOP-11569-006.patch
>
>
> If there are multiple similar MapFiles with the same keyClass and value 
> classes, they can be merged into one MapFile to make searching easier.
> Provide an API similar to {{SequenceFile#merge()}}.
> Merging will be straightforward given that MapFiles are already sorted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11569) Provide Merge API for MapFile to merge multiple similar MapFiles to one MapFile

2015-02-27 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14339895#comment-14339895
 ] 

Vinayakumar B commented on HADOOP-11569:


Thanks [~ozawa] for the review and commit.
Thanks [~umamaheswararao] for the reviews.

> Provide Merge API for MapFile to merge multiple similar MapFiles to one 
> MapFile
> ---
>
> Key: HADOOP-11569
> URL: https://issues.apache.org/jira/browse/HADOOP-11569
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.7.0
>
> Attachments: HADOOP-11569-001.patch, HADOOP-11569-002.patch, 
> HADOOP-11569-003.patch, HADOOP-11569-004.patch, HADOOP-11569-005.patch, 
> HADOOP-11569-006.patch
>
>
> If there are multiple similar MapFiles with the same keyClass and value 
> classes, they can be merged into one MapFile to make searching easier.
> Provide an API similar to {{SequenceFile#merge()}}.
> Merging will be straightforward given that MapFiles are already sorted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11569) Provide Merge API for MapFile to merge multiple similar MapFiles to one MapFile

2015-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14339897#comment-14339897
 ] 

Hudson commented on HADOOP-11569:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7216 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7216/])
HADOOP-11569. Provide Merge API for MapFile to merge multiple similar MapFiles 
to one MapFile. Contributed by Vinayakumar B. (ozawa: rev 
48c7ee7553af94a57952bca03b49c04b9bbfab45)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestMapFile.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Provide Merge API for MapFile to merge multiple similar MapFiles to one 
> MapFile
> ---
>
> Key: HADOOP-11569
> URL: https://issues.apache.org/jira/browse/HADOOP-11569
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.7.0
>
> Attachments: HADOOP-11569-001.patch, HADOOP-11569-002.patch, 
> HADOOP-11569-003.patch, HADOOP-11569-004.patch, HADOOP-11569-005.patch, 
> HADOOP-11569-006.patch
>
>
> If there are multiple similar MapFiles with the same keyClass and value 
> classes, they can be merged into one MapFile to make searching easier.
> Provide an API similar to {{SequenceFile#merge()}}.
> Merging will be straightforward given that MapFiles are already sorted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11613) Remove httpclient dependency from hadoop-azure

2015-02-27 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-11613:
--
Attachment: HADOOP-11613-001.patch

> Remove httpclient dependency from hadoop-azure
> --
>
> Key: HADOOP-11613
> URL: https://issues.apache.org/jira/browse/HADOOP-11613
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-11613-001.patch, HADOOP-11613.patch
>
>
> Remove httpclient dependency from MockStorageInterface.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9489) Eclipse instructions in BUILDING.txt don't work

2015-02-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14339930#comment-14339930
 ] 

Hadoop QA commented on HADOOP-9489:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12681630/HADOOP-9489.2.patch
  against trunk revision 8ca0d95.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5791//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5791//console

This message is automatically generated.

> Eclipse instructions in BUILDING.txt don't work
> ---
>
> Key: HADOOP-9489
> URL: https://issues.apache.org/jira/browse/HADOOP-9489
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.0
>Reporter: Carl Steinbach
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-9489.1.patch, HADOOP-9489.2.patch, 
> eclipse_hadoop_errors.txt
>
>
> I have tried several times to import Hadoop trunk into Eclipse following the 
> instructions in the BUILDING.txt file, but so far have not been able to get 
> it to work.
> If I use a fresh install of Eclipse 4.2.2, Eclipse will complain about an 
> undefined M2_REPO environment variable. I discovered that this is defined 
> automatically by the M2Eclipse plugin, and think that the BUILDING.txt doc 
> should be updated to explain this.
> After installing M2Eclipse I tried importing the code again, and now get over 
> 2500 errors related to missing class dependencies. Many of these errors 
> correspond to missing classes in the oah*.proto namespace, which makes me 
> think that 'mvn eclipse:eclipse' is not triggering protoc. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11613) Remove httpclient dependency from hadoop-azure

2015-02-27 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-11613:
--
Attachment: (was: HADOOP-11613-001.patch)

> Remove httpclient dependency from hadoop-azure
> --
>
> Key: HADOOP-11613
> URL: https://issues.apache.org/jira/browse/HADOOP-11613
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-11613.patch
>
>
> Remove httpclient dependency from MockStorageInterface.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11566) Add tests for erasure coders to cover erasure of parity units

2015-02-27 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11566:
---
Summary: Add tests for erasure coders to cover erasure of parity units   
(was: Add tests for raw erasure coders to cover erasure of parity units )

> Add tests for erasure coders to cover erasure of parity units 
> --
>
> Key: HADOOP-11566
> URL: https://issues.apache.org/jira/browse/HADOOP-11566
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>
> As discussed with [~zhz] in HADOOP-11542, a follow-up JIRA is planned to 
> enhance the tests for parity chunks as well. Like erasedDataIndexes, an 
> erasedParityIndexes parameter will be added to specify which parity units are 
> to be erased and then recovered.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11613) Remove httpclient dependency from hadoop-azure

2015-02-27 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-11613:
--
Attachment: HADOOP-11613-001.patch

> Remove httpclient dependency from hadoop-azure
> --
>
> Key: HADOOP-11613
> URL: https://issues.apache.org/jira/browse/HADOOP-11613
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-11613-001.patch, HADOOP-11613.patch
>
>
> Remove httpclient dependency from MockStorageInterface.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11613) Remove httpclient dependency from hadoop-azure

2015-02-27 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14339935#comment-14339935
 ] 

Brahma Reddy Battula commented on HADOOP-11613:
---

Thanks a lot for the review. I updated the patch based on your comment; there is 
no impact on the test cases (all are passing).

I didn't want to change the interface in {{StorageInterface}}, hence the 
unsupported-encoding exception is handled here only. Let me know your opinion on 
the same.

{code}
if (current.isPageBlob()) {
  try {
    ret.add(new MockCloudPageBlobWrapper(
        convertKeyToEncodedUri(current.getKey()),
        current.getMetadata(), current.getContentLength()));
  } catch (UnsupportedEncodingException e) {
    throw new RuntimeException(
        "problem while convertKeyToEncodedUri", e);
  }
}
{code}

> Remove httpclient dependency from hadoop-azure
> --
>
> Key: HADOOP-11613
> URL: https://issues.apache.org/jira/browse/HADOOP-11613
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-11613-001.patch, HADOOP-11613.patch
>
>
> Remove httpclient dependency from MockStorageInterface.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10027) *Compressor_deflateBytesDirect passes instance instead of jclass to GetStaticObjectField

2015-02-27 Thread Hui Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hui Zheng updated HADOOP-10027:
---
Attachment: HADOOP-10027.2.patch

> *Compressor_deflateBytesDirect passes instance instead of jclass to 
> GetStaticObjectField
> 
>
> Key: HADOOP-10027
> URL: https://issues.apache.org/jira/browse/HADOOP-10027
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Eric Abbott
>Assignee: Hui Zheng
>Priority: Minor
> Attachments: HADOOP-10027.1.patch, HADOOP-10027.2.patch
>
>
> http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c?view=markup
> This pattern appears in all the native compressors.
> // Get members of ZlibCompressor
> jobject clazz = (*env)->GetStaticObjectField(env, this,
>  ZlibCompressor_clazz);
> The 2nd argument to GetStaticObjectField is supposed to be a jclass, not a 
> jobject. Adding the JVM param -Xcheck:jni will cause "FATAL ERROR in native 
> method: JNI received a class argument that is not a class" and a core dump 
> such as the following.
> (gdb) 
> #0 0x7f02e4aef8a5 in raise () from /lib64/libc.so.6
> #1 0x7f02e4af1085 in abort () from /lib64/libc.so.6
> #2 0x7f02e45bd727 in os::abort(bool) () from 
> /opt/jdk1.6.0_31/jre/lib/amd64/server/libjvm.so
> #3 0x7f02e43cec63 in jniCheck::validate_class(JavaThread*, _jclass*, 
> bool) () from /opt/jdk1.6.0_31/jre/lib/amd64/server/libjvm.so
> #4 0x7f02e43ea669 in checked_jni_GetStaticObjectField () from 
> /opt/jdk1.6.0_31/jre/lib/amd64/server/libjvm.so
> #5 0x7f02d38eaf79 in 
> Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_deflateBytesDirect () 
> from /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
> In addition, that clazz object is only used for synchronization. In the case 
> of the native method _deflateBytesDirect, the result is a class wide lock 
> used to access the instance field uncompressed_direct_buf. Perhaps using the 
> instance as the sync point is more appropriate?
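The closing question in the description is about lock scope: synchronizing on the clazz object produces one class-wide lock shared by every compressor instance, while synchronizing on the instance only serializes callers of the same object. A minimal sketch of the contrast in plain Java (hypothetical class, not the Hadoop code):

```java
// Hypothetical example contrasting a class-wide lock (the effect of syncing
// on the clazz field) with a per-instance lock.
public class LockScope {
    private final StringBuilder buf = new StringBuilder(); // per-instance state

    // One lock shared by every instance: unrelated objects serialize here.
    public void appendClassLocked(String s) {
        synchronized (LockScope.class) {
            buf.append(s);
        }
    }

    // Per-instance lock: only callers touching the same object contend,
    // which suffices when the guarded state (buf) is per-instance.
    public void appendInstanceLocked(String s) {
        synchronized (this) {
            buf.append(s);
        }
    }

    public String contents() {
        synchronized (this) {
            return buf.toString();
        }
    }
}
```

Since uncompressed_direct_buf is an instance field, the instance lock guards exactly the state being touched, which is why the reporter suggests it as the more appropriate sync point.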



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10027) *Compressor_deflateBytesDirect passes instance instead of jclass to GetStaticObjectField

2015-02-27 Thread Hui Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340016#comment-14340016
 ] 

Hui Zheng commented on HADOOP-10027:


I updated the patch; it deletes this code and adds a multithreaded test case.

> *Compressor_deflateBytesDirect passes instance instead of jclass to 
> GetStaticObjectField
> 
>
> Key: HADOOP-10027
> URL: https://issues.apache.org/jira/browse/HADOOP-10027
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Eric Abbott
>Assignee: Hui Zheng
>Priority: Minor
> Attachments: HADOOP-10027.1.patch, HADOOP-10027.2.patch
>
>
> http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c?view=markup
> This pattern appears in all the native compressors.
> // Get members of ZlibCompressor
> jobject clazz = (*env)->GetStaticObjectField(env, this,
>  ZlibCompressor_clazz);
> The 2nd argument to GetStaticObjectField is supposed to be a jclass, not a 
> jobject. Adding the JVM param -Xcheck:jni will cause "FATAL ERROR in native 
> method: JNI received a class argument that is not a class" and a core dump 
> such as the following.
> (gdb) 
> #0 0x7f02e4aef8a5 in raise () from /lib64/libc.so.6
> #1 0x7f02e4af1085 in abort () from /lib64/libc.so.6
> #2 0x7f02e45bd727 in os::abort(bool) () from 
> /opt/jdk1.6.0_31/jre/lib/amd64/server/libjvm.so
> #3 0x7f02e43cec63 in jniCheck::validate_class(JavaThread*, _jclass*, 
> bool) () from /opt/jdk1.6.0_31/jre/lib/amd64/server/libjvm.so
> #4 0x7f02e43ea669 in checked_jni_GetStaticObjectField () from 
> /opt/jdk1.6.0_31/jre/lib/amd64/server/libjvm.so
> #5 0x7f02d38eaf79 in 
> Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_deflateBytesDirect () 
> from /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
> In addition, that clazz object is only used for synchronization. In the case 
> of the native method _deflateBytesDirect, the result is a class wide lock 
> used to access the instance field uncompressed_direct_buf. Perhaps using the 
> instance as the sync point is more appropriate?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9922) hadoop windows native build will fail in 32 bit machine

2015-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340030#comment-14340030
 ] 

Hudson commented on HADOOP-9922:


FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #117 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/117/])
HADOOP-9922. hadoop windows native build will fail in 32 bit machine. 
Contributed by Kiran Kumar M R. (cnauroth: rev 
2214dab60ff11b8de74c9d661585452a078fe0c1)
* hadoop-common-project/hadoop-common/src/main/winutils/winutils.sln
* hadoop-common-project/hadoop-common/src/main/winutils/service.c
* hadoop-common-project/hadoop-common/src/main/winutils/winutils.vcxproj
* hadoop-common-project/hadoop-common/src/main/native/native.vcxproj
* hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c
* hadoop-common-project/hadoop-common/src/main/winutils/include/winutils.h
* hadoop-common-project/hadoop-common/src/main/native/native.sln
* hadoop-common-project/hadoop-common/src/main/winutils/task.c
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.vcxproj


> hadoop windows native build will fail in 32 bit machine
> ---
>
> Key: HADOOP-9922
> URL: https://issues.apache.org/jira/browse/HADOOP-9922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Vinayakumar B
>Assignee: Kiran Kumar M R
> Fix For: 2.7.0
>
> Attachments: HADOOP-9922-002.patch, HADOOP-9922-003.patch, 
> HADOOP-9922-004.patch, HADOOP-9922-005.patch, HADOOP-9922.patch
>
>
> Building Hadoop on a 32-bit Windows machine fails because the native project 
> does not have a Win32 configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11569) Provide Merge API for MapFile to merge multiple similar MapFiles to one MapFile

2015-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340028#comment-14340028
 ] 

Hudson commented on HADOOP-11569:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #117 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/117/])
HADOOP-11569. Provide Merge API for MapFile to merge multiple similar MapFiles 
to one MapFile. Contributed by Vinayakumar B. (ozawa: rev 
48c7ee7553af94a57952bca03b49c04b9bbfab45)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestMapFile.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Provide Merge API for MapFile to merge multiple similar MapFiles to one 
> MapFile
> ---
>
> Key: HADOOP-11569
> URL: https://issues.apache.org/jira/browse/HADOOP-11569
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.7.0
>
> Attachments: HADOOP-11569-001.patch, HADOOP-11569-002.patch, 
> HADOOP-11569-003.patch, HADOOP-11569-004.patch, HADOOP-11569-005.patch, 
> HADOOP-11569-006.patch
>
>
> If there are multiple similar MapFiles with the same keyClass and value 
> classes, they can be merged into one MapFile to make searching easier.
> Provide an API similar to {{SequenceFile#merge()}}.
> Merging will be straightforward given that MapFiles are already sorted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11637) bash location hard-coded in shell scripts

2015-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340027#comment-14340027
 ] 

Hudson commented on HADOOP-11637:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #117 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/117/])
HADOOP-11637. bash location hard-coded in shell scripts (aw) (aw: rev 
dce8b9c4d0b2da1780f743d81e840ca0fdfc62cf)
* hadoop-common-project/hadoop-kms/src/main/libexec/kms-config.sh
* hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-env.sh
* hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/libexec/httpfs-config.sh
* hadoop-common-project/hadoop-common/src/site/markdown/RackAwareness.md
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/sbin/httpfs.sh
* hadoop-tools/hadoop-sls/src/main/bin/slsrun.sh
* hadoop-tools/hadoop-sls/src/main/bin/rumen2sls.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


> bash location hard-coded in shell scripts
> -
>
> Key: HADOOP-11637
> URL: https://issues.apache.org/jira/browse/HADOOP-11637
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 3.0.0
>
> Attachments: HADOOP-11637.patch
>
>
> Let's fix all of the /bin/bash and /usr/bin/bash hardcodes globally in the 
> shell code in one big patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11569) Provide Merge API for MapFile to merge multiple similar MapFiles to one MapFile

2015-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340049#comment-14340049
 ] 

Hudson commented on HADOOP-11569:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #851 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/851/])
HADOOP-11569. Provide Merge API for MapFile to merge multiple similar MapFiles 
to one MapFile. Contributed by Vinayakumar B. (ozawa: rev 
48c7ee7553af94a57952bca03b49c04b9bbfab45)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestMapFile.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java


> Provide Merge API for MapFile to merge multiple similar MapFiles to one 
> MapFile
> ---
>
> Key: HADOOP-11569
> URL: https://issues.apache.org/jira/browse/HADOOP-11569
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.7.0
>
> Attachments: HADOOP-11569-001.patch, HADOOP-11569-002.patch, 
> HADOOP-11569-003.patch, HADOOP-11569-004.patch, HADOOP-11569-005.patch, 
> HADOOP-11569-006.patch
>
>
> If there are multiple similar MapFiles with the same keyClass and value 
> classes, they can be merged into one MapFile to make searching easier.
> Provide an API similar to {{SequenceFile#merge()}}.
> Merging will be straightforward given that MapFiles are already sorted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11637) bash location hard-coded in shell scripts

2015-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340048#comment-14340048
 ] 

Hudson commented on HADOOP-11637:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #851 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/851/])
HADOOP-11637. bash location hard-coded in shell scripts (aw) (aw: rev 
dce8b9c4d0b2da1780f743d81e840ca0fdfc62cf)
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/libexec/httpfs-config.sh
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-env.sh
* hadoop-tools/hadoop-sls/src/main/bin/slsrun.sh
* hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh
* hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh
* hadoop-tools/hadoop-sls/src/main/bin/rumen2sls.sh
* hadoop-common-project/hadoop-common/src/site/markdown/RackAwareness.md
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/sbin/httpfs.sh
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-kms/src/main/libexec/kms-config.sh


> bash location hard-coded in shell scripts
> -
>
> Key: HADOOP-11637
> URL: https://issues.apache.org/jira/browse/HADOOP-11637
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 3.0.0
>
> Attachments: HADOOP-11637.patch
>
>
> Let's fix all of the /bin/bash and /usr/bin/bash hardcodes globally in the 
> shell code in one big patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9922) hadoop windows native build will fail in 32 bit machine

2015-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340051#comment-14340051
 ] 

Hudson commented on HADOOP-9922:


SUCCESS: Integrated in Hadoop-Yarn-trunk #851 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/851/])
HADOOP-9922. hadoop windows native build will fail in 32 bit machine. 
Contributed by Kiran Kumar M R. (cnauroth: rev 
2214dab60ff11b8de74c9d661585452a078fe0c1)
* hadoop-common-project/hadoop-common/src/main/winutils/winutils.sln
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/native/native.sln
* hadoop-common-project/hadoop-common/src/main/winutils/include/winutils.h
* hadoop-common-project/hadoop-common/src/main/winutils/service.c
* hadoop-common-project/hadoop-common/src/main/winutils/task.c
* hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.vcxproj
* hadoop-common-project/hadoop-common/src/main/winutils/winutils.vcxproj
* hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c
* hadoop-common-project/hadoop-common/src/main/native/native.vcxproj


> hadoop windows native build will fail in 32 bit machine
> ---
>
> Key: HADOOP-9922
> URL: https://issues.apache.org/jira/browse/HADOOP-9922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Vinayakumar B
>Assignee: Kiran Kumar M R
> Fix For: 2.7.0
>
> Attachments: HADOOP-9922-002.patch, HADOOP-9922-003.patch, 
> HADOOP-9922-004.patch, HADOOP-9922-005.patch, HADOOP-9922.patch
>
>
> Building Hadoop on a Windows 32-bit machine fails because the native 
> project does not have a Win32 configuration.





[jira] [Commented] (HADOOP-10027) *Compressor_deflateBytesDirect passes instance instead of jclass to GetStaticObjectField

2015-02-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340068#comment-14340068
 ] 

Hadoop QA commented on HADOOP-10027:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12701338/HADOOP-10027.2.patch
  against trunk revision 4f75b15.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5792//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5792//console

This message is automatically generated.

> *Compressor_deflateBytesDirect passes instance instead of jclass to 
> GetStaticObjectField
> 
>
> Key: HADOOP-10027
> URL: https://issues.apache.org/jira/browse/HADOOP-10027
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Eric Abbott
>Assignee: Hui Zheng
>Priority: Minor
> Attachments: HADOOP-10027.1.patch, HADOOP-10027.2.patch
>
>
> http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c?view=markup
> This pattern appears in all the native compressors.
> // Get members of ZlibCompressor
> jobject clazz = (*env)->GetStaticObjectField(env, this,
>  ZlibCompressor_clazz);
> The 2nd argument to GetStaticObjectField is supposed to be a jclass, not a 
> jobject. Adding the JVM param -Xcheck:jni will cause "FATAL ERROR in native 
> method: JNI received a class argument that is not a class" and a core dump 
> such as the following.
> (gdb) 
> #0 0x7f02e4aef8a5 in raise () from /lib64/libc.so.6
> #1 0x7f02e4af1085 in abort () from /lib64/libc.so.6
> #2 0x7f02e45bd727 in os::abort(bool) () from 
> /opt/jdk1.6.0_31/jre/lib/amd64/server/libjvm.so
> #3 0x7f02e43cec63 in jniCheck::validate_class(JavaThread*, _jclass*, 
> bool) () from /opt/jdk1.6.0_31/jre/lib/amd64/server/libjvm.so
> #4 0x7f02e43ea669 in checked_jni_GetStaticObjectField () from 
> /opt/jdk1.6.0_31/jre/lib/amd64/server/libjvm.so
> #5 0x7f02d38eaf79 in 
> Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_deflateBytesDirect () 
> from /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
> In addition, that clazz object is only used for synchronization. In the case 
> of the native method _deflateBytesDirect, the result is a class wide lock 
> used to access the instance field uncompressed_direct_buf. Perhaps using the 
> instance as the sync point is more appropriate?
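For illustration, a minimal sketch of the two directions discussed above (a fragment only, not the actual patch; it assumes the cached `ZlibCompressor_clazz` field ID and the `this` parameter from the snippet in the description):

```c
/* Fragment, not a drop-in fix. Option 1: pass a real jclass, obtained
 * from the instance, instead of the jobject `this`. */
jclass cls = (*env)->GetObjectClass(env, this);
jobject clazz = (*env)->GetStaticObjectField(env, cls, ZlibCompressor_clazz);

/* Option 2 (the reporter's suggestion): drop the class-wide lock entirely
 * and synchronize on `this`, since only the instance field
 * uncompressed_direct_buf is accessed. */
```

Option 1 is the mechanical fix for the `-Xcheck:jni` failure; option 2 additionally narrows the lock scope, which the reporter argues is more appropriate for per-instance state.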





[jira] [Updated] (HADOOP-11638) Linux-specific gettid() used in OpensslSecureRandom.c

2015-02-27 Thread Dmitry Sivachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Sivachenko updated HADOOP-11638:
---
Attachment: (was: thr.patch)

> Linux-specific gettid() used in OpensslSecureRandom.c
> -
>
> Key: HADOOP-11638
> URL: https://issues.apache.org/jira/browse/HADOOP-11638
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.6.0
>Reporter: Dmitry Sivachenko
>
> In OpensslSecureRandom.c you use the Linux-specific syscall gettid():
> static unsigned long pthreads_thread_id(void)
> {
> return (unsigned long)syscall(SYS_gettid);
> }
> Man page says:
> gettid()  is Linux-specific and should not be used in programs that are
> intended to be portable.
> This breaks hadoop-2.6.0 compilation on FreeBSD (maybe on other OSes too).





[jira] [Commented] (HADOOP-11638) Linux-specific gettid() used in OpensslSecureRandom.c

2015-02-27 Thread Dmitry Sivachenko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340144#comment-14340144
 ] 

Dmitry Sivachenko commented on HADOOP-11638:


I don't quite understand the code here: do you really need gettid(), or can 
it be replaced with pthread_self()?

> Linux-specific gettid() used in OpensslSecureRandom.c
> -
>
> Key: HADOOP-11638
> URL: https://issues.apache.org/jira/browse/HADOOP-11638
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.6.0
>Reporter: Dmitry Sivachenko
>
> In OpensslSecureRandom.c you use the Linux-specific syscall gettid():
> static unsigned long pthreads_thread_id(void)
> {
> return (unsigned long)syscall(SYS_gettid);
> }
> Man page says:
> gettid()  is Linux-specific and should not be used in programs that are
> intended to be portable.
> This breaks hadoop-2.6.0 compilation on FreeBSD (maybe on other OSes too).





[jira] [Commented] (HADOOP-9922) hadoop windows native build will fail in 32 bit machine

2015-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340168#comment-14340168
 ] 

Hudson commented on HADOOP-9922:


FAILURE: Integrated in Hadoop-Hdfs-trunk #2049 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2049/])
HADOOP-9922. hadoop windows native build will fail in 32 bit machine. 
Contributed by Kiran Kumar M R. (cnauroth: rev 
2214dab60ff11b8de74c9d661585452a078fe0c1)
* hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.vcxproj
* hadoop-common-project/hadoop-common/src/main/winutils/service.c
* hadoop-common-project/hadoop-common/src/main/winutils/winutils.vcxproj
* hadoop-common-project/hadoop-common/src/main/native/native.vcxproj
* hadoop-common-project/hadoop-common/src/main/winutils/winutils.sln
* hadoop-common-project/hadoop-common/src/main/native/native.sln
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/winutils/include/winutils.h
* hadoop-common-project/hadoop-common/src/main/winutils/task.c
* hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c


> hadoop windows native build will fail in 32 bit machine
> ---
>
> Key: HADOOP-9922
> URL: https://issues.apache.org/jira/browse/HADOOP-9922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Vinayakumar B
>Assignee: Kiran Kumar M R
> Fix For: 2.7.0
>
> Attachments: HADOOP-9922-002.patch, HADOOP-9922-003.patch, 
> HADOOP-9922-004.patch, HADOOP-9922-005.patch, HADOOP-9922.patch
>
>
> Building Hadoop on a Windows 32-bit machine fails because the native 
> project does not have a Win32 configuration.





[jira] [Commented] (HADOOP-11637) bash location hard-coded in shell scripts

2015-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340164#comment-14340164
 ] 

Hudson commented on HADOOP-11637:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2049 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2049/])
HADOOP-11637. bash location hard-coded in shell scripts (aw) (aw: rev 
dce8b9c4d0b2da1780f743d81e840ca0fdfc62cf)
* hadoop-tools/hadoop-sls/src/main/bin/rumen2sls.sh
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/sbin/httpfs.sh
* hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-env.sh
* hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-tools/hadoop-sls/src/main/bin/slsrun.sh
* hadoop-common-project/hadoop-common/src/site/markdown/RackAwareness.md
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* hadoop-common-project/hadoop-kms/src/main/libexec/kms-config.sh
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/libexec/httpfs-config.sh


> bash location hard-coded in shell scripts
> -
>
> Key: HADOOP-11637
> URL: https://issues.apache.org/jira/browse/HADOOP-11637
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 3.0.0
>
> Attachments: HADOOP-11637.patch
>
>
> Let's fix all of the /bin/bash and /usr/bin/bash hardcodes globally in the 
> shell code in one big patch.





[jira] [Commented] (HADOOP-11569) Provide Merge API for MapFile to merge multiple similar MapFiles to one MapFile

2015-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340165#comment-14340165
 ] 

Hudson commented on HADOOP-11569:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2049 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2049/])
HADOOP-11569. Provide Merge API for MapFile to merge multiple similar MapFiles 
to one MapFile. Contributed by Vinayakumar B. (ozawa: rev 
48c7ee7553af94a57952bca03b49c04b9bbfab45)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestMapFile.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Provide Merge API for MapFile to merge multiple similar MapFiles to one 
> MapFile
> ---
>
> Key: HADOOP-11569
> URL: https://issues.apache.org/jira/browse/HADOOP-11569
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.7.0
>
> Attachments: HADOOP-11569-001.patch, HADOOP-11569-002.patch, 
> HADOOP-11569-003.patch, HADOOP-11569-004.patch, HADOOP-11569-005.patch, 
> HADOOP-11569-006.patch
>
>
> If there are multiple similar MapFiles with the same key and value 
> classes, they can be merged into one MapFile to make searching easier.
> Provide an API similar to {{SequenceFile#merge()}}.
> Merging is straightforward since MapFiles are already sorted.
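The "already sorted" property is what makes such a merge cheap. As an illustration of the underlying k-way merge (hypothetical class and method names, not the Hadoop MapFile API):

```java
import java.util.*;

/** Illustration of a k-way merge over already-sorted inputs; the names
 *  here are ours, not Hadoop's. */
public class SortedMerge {
    public static List<String> merge(List<List<String>> inputs) {
        // Heap entry: {listIndex, elementIndex}, ordered by the current key,
        // so each poll yields the globally smallest remaining key.
        PriorityQueue<int[]> heap = new PriorityQueue<>(
                Comparator.comparing((int[] e) -> inputs.get(e[0]).get(e[1])));
        for (int i = 0; i < inputs.size(); i++) {
            if (!inputs.get(i).isEmpty()) {
                heap.add(new int[] {i, 0});
            }
        }
        List<String> out = new ArrayList<>();
        while (!heap.isEmpty()) {
            int[] e = heap.poll();
            out.add(inputs.get(e[0]).get(e[1]));
            if (e[1] + 1 < inputs.get(e[0]).size()) {
                heap.add(new int[] {e[0], e[1] + 1}); // advance that input
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(merge(Arrays.asList(
                Arrays.asList("a", "c", "e"),
                Arrays.asList("b", "d")))); // [a, b, c, d, e]
    }
}
```

Because every input is consumed in a single forward pass, this runs in O(n log k) for n total entries across k inputs, with no re-sort of the combined data.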





[jira] [Commented] (HADOOP-11569) Provide Merge API for MapFile to merge multiple similar MapFiles to one MapFile

2015-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340176#comment-14340176
 ] 

Hudson commented on HADOOP-11569:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #108 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/108/])
HADOOP-11569. Provide Merge API for MapFile to merge multiple similar MapFiles 
to one MapFile. Contributed by Vinayakumar B. (ozawa: rev 
48c7ee7553af94a57952bca03b49c04b9bbfab45)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestMapFile.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java


> Provide Merge API for MapFile to merge multiple similar MapFiles to one 
> MapFile
> ---
>
> Key: HADOOP-11569
> URL: https://issues.apache.org/jira/browse/HADOOP-11569
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.7.0
>
> Attachments: HADOOP-11569-001.patch, HADOOP-11569-002.patch, 
> HADOOP-11569-003.patch, HADOOP-11569-004.patch, HADOOP-11569-005.patch, 
> HADOOP-11569-006.patch
>
>
> If there are multiple similar MapFiles with the same key and value 
> classes, they can be merged into one MapFile to make searching easier.
> Provide an API similar to {{SequenceFile#merge()}}.
> Merging is straightforward since MapFiles are already sorted.





[jira] [Commented] (HADOOP-11637) bash location hard-coded in shell scripts

2015-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340175#comment-14340175
 ] 

Hudson commented on HADOOP-11637:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #108 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/108/])
HADOOP-11637. bash location hard-coded in shell scripts (aw) (aw: rev 
dce8b9c4d0b2da1780f743d81e840ca0fdfc62cf)
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/sbin/httpfs.sh
* hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh
* hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh
* hadoop-common-project/hadoop-common/src/site/markdown/RackAwareness.md
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/libexec/httpfs-config.sh
* hadoop-tools/hadoop-sls/src/main/bin/slsrun.sh
* hadoop-tools/hadoop-sls/src/main/bin/rumen2sls.sh
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-env.sh
* hadoop-common-project/hadoop-kms/src/main/libexec/kms-config.sh
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh


> bash location hard-coded in shell scripts
> -
>
> Key: HADOOP-11637
> URL: https://issues.apache.org/jira/browse/HADOOP-11637
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 3.0.0
>
> Attachments: HADOOP-11637.patch
>
>
> Let's fix all of the /bin/bash and /usr/bin/bash hardcodes globally in the 
> shell code in one big patch.





[jira] [Commented] (HADOOP-9922) hadoop windows native build will fail in 32 bit machine

2015-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340179#comment-14340179
 ] 

Hudson commented on HADOOP-9922:


FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #108 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/108/])
HADOOP-9922. hadoop windows native build will fail in 32 bit machine. 
Contributed by Kiran Kumar M R. (cnauroth: rev 
2214dab60ff11b8de74c9d661585452a078fe0c1)
* hadoop-common-project/hadoop-common/src/main/winutils/winutils.sln
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/winutils/winutils.vcxproj
* hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.vcxproj
* hadoop-common-project/hadoop-common/src/main/winutils/include/winutils.h
* hadoop-common-project/hadoop-common/src/main/native/native.vcxproj
* hadoop-common-project/hadoop-common/src/main/winutils/service.c
* hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c
* hadoop-common-project/hadoop-common/src/main/winutils/task.c
* hadoop-common-project/hadoop-common/src/main/native/native.sln


> hadoop windows native build will fail in 32 bit machine
> ---
>
> Key: HADOOP-9922
> URL: https://issues.apache.org/jira/browse/HADOOP-9922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Vinayakumar B
>Assignee: Kiran Kumar M R
> Fix For: 2.7.0
>
> Attachments: HADOOP-9922-002.patch, HADOOP-9922-003.patch, 
> HADOOP-9922-004.patch, HADOOP-9922-005.patch, HADOOP-9922.patch
>
>
> Building Hadoop on a Windows 32-bit machine fails because the native 
> project does not have a Win32 configuration.





[jira] [Assigned] (HADOOP-11639) Clean up Windows native code compilation warnings related to Windows Secure Container Executor.

2015-02-27 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remus Rusanu reassigned HADOOP-11639:
-

Assignee: Remus Rusanu

> Clean up Windows native code compilation warnings related to Windows Secure 
> Container Executor.
> ---
>
> Key: HADOOP-11639
> URL: https://issues.apache.org/jira/browse/HADOOP-11639
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Chris Nauroth
>Assignee: Remus Rusanu
>
> YARN-2198 introduced additional code in Hadoop Common to support the 
> NodeManager {{WindowsSecureContainerExecutor}}.  The patch introduced new 
> compilation warnings that we need to investigate and resolve.





[jira] [Commented] (HADOOP-9922) hadoop windows native build will fail in 32 bit machine

2015-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340248#comment-14340248
 ] 

Hudson commented on HADOOP-9922:


FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #117 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/117/])
HADOOP-9922. hadoop windows native build will fail in 32 bit machine. 
Contributed by Kiran Kumar M R. (cnauroth: rev 
2214dab60ff11b8de74c9d661585452a078fe0c1)
* hadoop-common-project/hadoop-common/src/main/winutils/winutils.sln
* hadoop-common-project/hadoop-common/src/main/winutils/service.c
* hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.vcxproj
* hadoop-common-project/hadoop-common/src/main/native/native.vcxproj
* hadoop-common-project/hadoop-common/src/main/native/native.sln
* hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c
* hadoop-common-project/hadoop-common/src/main/winutils/task.c
* hadoop-common-project/hadoop-common/src/main/winutils/winutils.vcxproj
* hadoop-common-project/hadoop-common/src/main/winutils/include/winutils.h
* hadoop-common-project/hadoop-common/CHANGES.txt


> hadoop windows native build will fail in 32 bit machine
> ---
>
> Key: HADOOP-9922
> URL: https://issues.apache.org/jira/browse/HADOOP-9922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Vinayakumar B
>Assignee: Kiran Kumar M R
> Fix For: 2.7.0
>
> Attachments: HADOOP-9922-002.patch, HADOOP-9922-003.patch, 
> HADOOP-9922-004.patch, HADOOP-9922-005.patch, HADOOP-9922.patch
>
>
> Building Hadoop on a Windows 32-bit machine fails because the native 
> project does not have a Win32 configuration.





[jira] [Commented] (HADOOP-11637) bash location hard-coded in shell scripts

2015-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340243#comment-14340243
 ] 

Hudson commented on HADOOP-11637:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #117 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/117/])
HADOOP-11637. bash location hard-coded in shell scripts (aw) (aw: rev 
dce8b9c4d0b2da1780f743d81e840ca0fdfc62cf)
* hadoop-common-project/hadoop-common/src/site/markdown/RackAwareness.md
* hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh
* hadoop-tools/hadoop-sls/src/main/bin/rumen2sls.sh
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/sbin/httpfs.sh
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-env.sh
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/libexec/httpfs-config.sh
* hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-kms/src/main/libexec/kms-config.sh
* hadoop-tools/hadoop-sls/src/main/bin/slsrun.sh


> bash location hard-coded in shell scripts
> -
>
> Key: HADOOP-11637
> URL: https://issues.apache.org/jira/browse/HADOOP-11637
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 3.0.0
>
> Attachments: HADOOP-11637.patch
>
>
> Let's fix all of the /bin/bash and /usr/bin/bash hardcodes globally in the 
> shell code in one big patch.





[jira] [Commented] (HADOOP-11569) Provide Merge API for MapFile to merge multiple similar MapFiles to one MapFile

2015-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340244#comment-14340244
 ] 

Hudson commented on HADOOP-11569:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #117 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/117/])
HADOOP-11569. Provide Merge API for MapFile to merge multiple similar MapFiles 
to one MapFile. Contributed by Vinayakumar B. (ozawa: rev 
48c7ee7553af94a57952bca03b49c04b9bbfab45)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestMapFile.java


> Provide Merge API for MapFile to merge multiple similar MapFiles to one 
> MapFile
> ---
>
> Key: HADOOP-11569
> URL: https://issues.apache.org/jira/browse/HADOOP-11569
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.7.0
>
> Attachments: HADOOP-11569-001.patch, HADOOP-11569-002.patch, 
> HADOOP-11569-003.patch, HADOOP-11569-004.patch, HADOOP-11569-005.patch, 
> HADOOP-11569-006.patch
>
>
> If there are multiple similar MapFiles with the same key and value 
> classes, they can be merged into one MapFile to make searching easier.
> Provide an API similar to {{SequenceFile#merge()}}.
> Merging is straightforward since MapFiles are already sorted.





[jira] [Commented] (HADOOP-11569) Provide Merge API for MapFile to merge multiple similar MapFiles to one MapFile

2015-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340281#comment-14340281
 ] 

Hudson commented on HADOOP-11569:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2067 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2067/])
HADOOP-11569. Provide Merge API for MapFile to merge multiple similar MapFiles 
to one MapFile. Contributed by Vinayakumar B. (ozawa: rev 
48c7ee7553af94a57952bca03b49c04b9bbfab45)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestMapFile.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Provide Merge API for MapFile to merge multiple similar MapFiles to one 
> MapFile
> ---
>
> Key: HADOOP-11569
> URL: https://issues.apache.org/jira/browse/HADOOP-11569
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.7.0
>
> Attachments: HADOOP-11569-001.patch, HADOOP-11569-002.patch, 
> HADOOP-11569-003.patch, HADOOP-11569-004.patch, HADOOP-11569-005.patch, 
> HADOOP-11569-006.patch
>
>
> If there are multiple similar MapFiles with the same key and value 
> classes, they can be merged into one MapFile to make searching easier.
> Provide an API similar to {{SequenceFile#merge()}}.
> Merging is straightforward since MapFiles are already sorted.





[jira] [Commented] (HADOOP-11637) bash location hard-coded in shell scripts

2015-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340280#comment-14340280
 ] 

Hudson commented on HADOOP-11637:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2067 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2067/])
HADOOP-11637. bash location hard-coded in shell scripts (aw) (aw: rev 
dce8b9c4d0b2da1780f743d81e840ca0fdfc62cf)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/site/markdown/RackAwareness.md
* hadoop-common-project/hadoop-kms/src/main/libexec/kms-config.sh
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/sbin/httpfs.sh
* hadoop-tools/hadoop-sls/src/main/bin/rumen2sls.sh
* hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-env.sh
* hadoop-tools/hadoop-sls/src/main/bin/slsrun.sh
* hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/libexec/httpfs-config.sh


> bash location hard-coded in shell scripts
> -
>
> Key: HADOOP-11637
> URL: https://issues.apache.org/jira/browse/HADOOP-11637
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 3.0.0
>
> Attachments: HADOOP-11637.patch
>
>
> Let's fix all of the /bin/bash and /usr/bin/bash hardcodes globally in the 
> shell code in one big patch.





[jira] [Commented] (HADOOP-9922) hadoop windows native build will fail in 32 bit machine

2015-02-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340284#comment-14340284
 ] 

Hudson commented on HADOOP-9922:


SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2067 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2067/])
HADOOP-9922. hadoop windows native build will fail in 32 bit machine. 
Contributed by Kiran Kumar M R. (cnauroth: rev 
2214dab60ff11b8de74c9d661585452a078fe0c1)
* hadoop-common-project/hadoop-common/src/main/winutils/include/winutils.h
* hadoop-common-project/hadoop-common/src/main/native/native.vcxproj
* hadoop-common-project/hadoop-common/src/main/winutils/winutils.sln
* hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.vcxproj
* hadoop-common-project/hadoop-common/src/main/winutils/winutils.vcxproj
* hadoop-common-project/hadoop-common/src/main/native/native.sln
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/winutils/service.c
* hadoop-common-project/hadoop-common/src/main/winutils/task.c
* hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c


> hadoop windows native build will fail in 32 bit machine
> ---
>
> Key: HADOOP-9922
> URL: https://issues.apache.org/jira/browse/HADOOP-9922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Vinayakumar B
>Assignee: Kiran Kumar M R
> Fix For: 2.7.0
>
> Attachments: HADOOP-9922-002.patch, HADOOP-9922-003.patch, 
> HADOOP-9922-004.patch, HADOOP-9922-005.patch, HADOOP-9922.patch
>
>
> Building Hadoop on a Windows 32-bit machine fails because the native 
> project does not have a Win32 configuration.





[jira] [Updated] (HADOOP-11618) DelegateToFileSystem always uses default FS's default port

2015-02-27 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-11618:
--
Attachment: HADOOP-11618-001.patch

> DelegateToFileSystem always uses default FS's default port 
> ---
>
> Key: HADOOP-11618
> URL: https://issues.apache.org/jira/browse/HADOOP-11618
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Gera Shegalov
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-11618-001.patch, HADOOP-11618.patch
>
>
> DelegateToFileSystem constructor has the following code:
> {code}
> super(theUri, supportedScheme, authorityRequired,
> FileSystem.getDefaultUri(conf).getPort());
> {code}
> The default port should be taken from theFsImpl instead.
> {code}
> super(theUri, supportedScheme, authorityRequired,
> theFsImpl.getDefaultPort());
> {code}





[jira] [Commented] (HADOOP-11618) DelegateToFileSystem always uses default FS's default port

2015-02-27 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340332#comment-14340332
 ] 

Brahma Reddy Battula commented on HADOOP-11618:
---

[~jira.shegalov] thanks a lot for the review. Updated the patch (added the 
test case; initializing ftps is enough to reproduce this bug).

> DelegateToFileSystem always uses default FS's default port 
> ---
>
> Key: HADOOP-11618
> URL: https://issues.apache.org/jira/browse/HADOOP-11618
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Gera Shegalov
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-11618-001.patch, HADOOP-11618.patch
>
>
> DelegateToFileSystem constructor has the following code:
> {code}
> super(theUri, supportedScheme, authorityRequired,
> FileSystem.getDefaultUri(conf).getPort());
> {code}
> The default port should be taken from theFsImpl instead.
> {code}
> super(theUri, supportedScheme, authorityRequired,
> theFsImpl.getDefaultPort());
> {code}





[jira] [Commented] (HADOOP-11618) DelegateToFileSystem always uses default FS's default port

2015-02-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340378#comment-14340378
 ] 

Hadoop QA commented on HADOOP-11618:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12701380/HADOOP-11618-001.patch
  against trunk revision 01a1621.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5793//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5793//console

This message is automatically generated.

> DelegateToFileSystem always uses default FS's default port 
> ---
>
> Key: HADOOP-11618
> URL: https://issues.apache.org/jira/browse/HADOOP-11618
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Gera Shegalov
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-11618-001.patch, HADOOP-11618.patch
>
>
> DelegateToFileSystem constructor has the following code:
> {code}
> super(theUri, supportedScheme, authorityRequired,
> FileSystem.getDefaultUri(conf).getPort());
> {code}
> The default port should be taken from theFsImpl instead.
> {code}
> super(theUri, supportedScheme, authorityRequired,
> theFsImpl.getDefaultPort());
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11602) Fix toUpperCase/toLowerCase to use Locale.ENGLISH

2015-02-27 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340400#comment-14340400
 ] 

Tsuyoshi Ozawa commented on HADOOP-11602:
-

{code}
+  contentType = (contentType != null) ? 
StringUtils.toLowerCase(contentType)
{code}

This line looks to be just 80 characters. Please correct me if I'm wrong.

> Fix toUpperCase/toLowerCase to use Locale.ENGLISH
> -
>
> Key: HADOOP-11602
> URL: https://issues.apache.org/jira/browse/HADOOP-11602
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-11602-001.patch, HADOOP-11602-002.patch, 
> HADOOP-11602-003.patch, HADOOP-11602-branch-2.001.patch, 
> HADOOP-11602-branch-2.002.patch
>
>
> String#toLowerCase()/toUpperCase() without a locale argument can cause 
> unexpected behavior depending on the locale. It's written in 
> [Javadoc|http://docs.oracle.com/javase/7/docs/api/java/lang/String.html#toLowerCase()]:
> {quote}
> For instance, "TITLE".toLowerCase() in a Turkish locale returns "t\u0131tle", 
> where '\u0131' is the LATIN SMALL LETTER DOTLESS I character
> {quote}
> This issue is derived from HADOOP-10101.
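
The locale sensitivity described above can be reproduced directly. Below is a minimal sketch; the class and method names are illustrative and not from the patch:

```java
import java.util.Locale;

public class CaseFolding {
    // Locale-sensitive lowering: in a Turkish locale, 'I' maps to the
    // LATIN SMALL LETTER DOTLESS I ('\u0131'), so "TITLE" does not
    // become "title".
    public static String turkishLower(String s) {
        return s.toLowerCase(new Locale("tr", "TR"));
    }

    // The fix proposed in this issue: pin the locale so the result is
    // stable regardless of the JVM's default locale.
    public static String stableLower(String s) {
        return s.toLowerCase(Locale.ENGLISH);
    }
}
```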



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11602) Fix toUpperCase/toLowerCase to use Locale.ENGLISH

2015-02-27 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-11602:

Attachment: HADOOP-11602-004.patch

Addressed Akira's comments.

> Fix toUpperCase/toLowerCase to use Locale.ENGLISH
> -
>
> Key: HADOOP-11602
> URL: https://issues.apache.org/jira/browse/HADOOP-11602
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-11602-001.patch, HADOOP-11602-002.patch, 
> HADOOP-11602-003.patch, HADOOP-11602-004.patch, 
> HADOOP-11602-branch-2.001.patch, HADOOP-11602-branch-2.002.patch
>
>
> String#toLowerCase()/toUpperCase() without a locale argument can cause 
> unexpected behavior depending on the locale. It's written in 
> [Javadoc|http://docs.oracle.com/javase/7/docs/api/java/lang/String.html#toLowerCase()]:
> {quote}
> For instance, "TITLE".toLowerCase() in a Turkish locale returns "t\u0131tle", 
> where '\u0131' is the LATIN SMALL LETTER DOTLESS I character
> {quote}
> This issue is derived from HADOOP-10101.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10075) Update jetty dependency to version 9

2015-02-27 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-10075:

Affects Version/s: 2.6.0
   Status: Open  (was: Patch Available)

Cancelling the patch since it looks stale. I think this is a good time to 
upgrade the jetty version, because http/2 is supported as of jetty 9.3.0: 
https://projects.eclipse.org/projects/rt.jetty/reviews/9.3.0-release-review

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0, 2.2.0
>Reporter: Robert Rati
>Assignee: Robert Rati
> Attachments: HADOOP-10075.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11183) Memory-based S3AOutputstream

2015-02-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340551#comment-14340551
 ] 

Steve Loughran commented on HADOOP-11183:
-

For a test, yes, both output streams. You're adding new production code here.

> Memory-based S3AOutputstream
> 
>
> Key: HADOOP-11183
> URL: https://issues.apache.org/jira/browse/HADOOP-11183
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
> Attachments: HADOOP-11183-004.patch, HADOOP-11183-005.patch, 
> HADOOP-11183-006.patch, HADOOP-11183.001.patch, HADOOP-11183.002.patch, 
> HADOOP-11183.003.patch, design-comments.pdf
>
>
> Currently s3a buffers files on disk(s) before uploading. This JIRA 
> investigates adding a memory-based upload implementation.
> The motivation is evidently performance: this would be beneficial for users 
> with high network bandwidth to S3 (EC2?) or users that run Hadoop directly on 
> an S3-compatible object store (FYI: my contributions are made in name of 
> Amplidata). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11642) Upgrade azure sdk version from 0.6.0 to 2.0.0

2015-02-27 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11642:
---
Assignee: shashank  (was: Chris Nauroth)

> Upgrade azure sdk version from 0.6.0 to 2.0.0
> -
>
> Key: HADOOP-11642
> URL: https://issues.apache.org/jira/browse/HADOOP-11642
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
> Environment: windows, azure
>Reporter: shashank
>Assignee: shashank
> Attachments: AzureSdkUpgrade.patch
>
>
> hadoop-azure uses an unsupported version of the Azure SDK (0.6.0). Upgrade it 
> to 2.0.0.
> Breaking changes: 
> https://github.com/Azure/azure-storage-java/blob/master/BreakingChanges.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11642) Upgrade azure sdk version from 0.6.0 to 2.0.0

2015-02-27 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340674#comment-14340674
 ] 

Chris Nauroth commented on HADOOP-11642:


[~shkhande], thank you for the patch.  This looks good.

The 2 new javac warnings are deprecation warnings.  The Azure SDK JavaDocs 
describe what to use instead:

http://dl.windowsazure.com/storage/javadoc/com/microsoft/azure/storage/ServiceClient.html#setRetryPolicyFactory(com.microsoft.azure.storage.RetryPolicyFactory)

http://dl.windowsazure.com/storage/javadoc/com/microsoft/azure/storage/ServiceClient.html#setTimeoutInMs(int)

Could you please make those changes?  After that, it should be ready to commit.

BTW, I'm assigning the issue to you.  (We always assign the issue to the person 
making the patch, so that jira keeps track of your contributions over time.)

> Upgrade azure sdk version from 0.6.0 to 2.0.0
> -
>
> Key: HADOOP-11642
> URL: https://issues.apache.org/jira/browse/HADOOP-11642
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
> Environment: windows, azure
>Reporter: shashank
>Assignee: Chris Nauroth
> Attachments: AzureSdkUpgrade.patch
>
>
> hadoop-azure uses an unsupported version of the Azure SDK (0.6.0). Upgrade it 
> to 2.0.0.
> Breaking changes: 
> https://github.com/Azure/azure-storage-java/blob/master/BreakingChanges.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11613) Remove httpclient dependency from hadoop-azure

2015-02-27 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340775#comment-14340775
 ] 

Akira AJISAKA commented on HADOOP-11613:


I suppose we can catch {{UnsupportedEncodingException}} and throw an unchecked 
{{RuntimeException}} in the {{convertKeyToEncodeUri}} method. I recommend 
adding a comment in the code noting that the exception should never happen, 
since the "UTF-8" encoding is always supported.
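
A sketch of the suggested handling (the helper name here is illustrative, not necessarily what the patch uses):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class KeyEncoding {
    // Catch the checked exception and rethrow unchecked, as suggested
    // above. The catch branch should never be reached: "UTF-8" is a
    // charset every JVM is required to support.
    public static String encodeKey(String key) {
        try {
            return URLEncoder.encode(key, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new RuntimeException("UTF-8 must be supported", e);
        }
    }
}
```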

> Remove httpclient dependency from hadoop-azure
> --
>
> Key: HADOOP-11613
> URL: https://issues.apache.org/jira/browse/HADOOP-11613
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-11613-001.patch, HADOOP-11613.patch
>
>
> Remove httpclient dependency from MockStorageInterface.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10027) *Compressor_deflateBytesDirect passes instance instead of jclass to GetStaticObjectField

2015-02-27 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-10027:
---
Target Version/s: 2.7.0

> *Compressor_deflateBytesDirect passes instance instead of jclass to 
> GetStaticObjectField
> 
>
> Key: HADOOP-10027
> URL: https://issues.apache.org/jira/browse/HADOOP-10027
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Eric Abbott
>Assignee: Hui Zheng
>Priority: Minor
> Attachments: HADOOP-10027.1.patch, HADOOP-10027.2.patch
>
>
> http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c?view=markup
> This pattern appears in all the native compressors.
> // Get members of ZlibCompressor
> jobject clazz = (*env)->GetStaticObjectField(env, this,
>  ZlibCompressor_clazz);
> The 2nd argument to GetStaticObjectField is supposed to be a jclass, not a 
> jobject. Adding the JVM param -Xcheck:jni will cause "FATAL ERROR in native 
> method: JNI received a class argument that is not a class" and a core dump 
> such as the following.
> (gdb) 
> #0 0x7f02e4aef8a5 in raise () from /lib64/libc.so.6
> #1 0x7f02e4af1085 in abort () from /lib64/libc.so.6
> #2 0x7f02e45bd727 in os::abort(bool) () from 
> /opt/jdk1.6.0_31/jre/lib/amd64/server/libjvm.so
> #3 0x7f02e43cec63 in jniCheck::validate_class(JavaThread*, _jclass*, 
> bool) () from /opt/jdk1.6.0_31/jre/lib/amd64/server/libjvm.so
> #4 0x7f02e43ea669 in checked_jni_GetStaticObjectField () from 
> /opt/jdk1.6.0_31/jre/lib/amd64/server/libjvm.so
> #5 0x7f02d38eaf79 in 
> Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_deflateBytesDirect () 
> from /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
> In addition, that clazz object is only used for synchronization. In the case 
> of the native method _deflateBytesDirect, the result is a class-wide lock 
> used to access the instance field uncompressed_direct_buf. Perhaps using the 
> instance as the sync point is more appropriate?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10027) *Compressor_deflateBytesDirect passes instance instead of jclass to GetStaticObjectField

2015-02-27 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340801#comment-14340801
 ] 

Chris Nauroth commented on HADOOP-10027:


[~huizane], thank you for the patch.  This looks like the right approach.  I 
did a closer review of HADOOP-3604, and the linked Java bugs are quite old.  I 
don't believe any realistic deployment would still be running on such old Java 
versions.

This same code pattern is in all of the native compression codecs, likely due 
to copy-paste.  To make this patch comprehensive, let's update all of them: 
bzip2, lz4, snappy and zlib.

The new test is a good idea, but I think it needs some changes.  As written, it 
starts 10 threads, but JUnit will finish executing 
{{testZlibCompressDecompressInMultiThreads}} before those threads actually 
complete.  If an exception is thrown from within a background thread, there is 
no reporting back to the main JUnit thread.  Because of those 2 things, 
unexpected failures on the background threads wouldn't actually show up as 
JUnit failures.  To fix this, I think you'll need to capture the {{Thread}} 
instances in an array, {{join}} all of them at the end of the test, and also 
work out a way to propagate possible exceptions out of those threads.  There is 
a helper class at {{org.apache.hadoop.test.MultithreadedTestUtil}} that might 
help you implement this.
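
The join-and-propagate pattern described above can be sketched without JUnit. The names below are illustrative; {{MultithreadedTestUtil}} offers a richer version of the same idea:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class ConcurrentRunner {
    // Start n threads running the same task, join them all, and rethrow
    // the first failure on the calling thread so the test harness
    // actually sees it.
    public static void runConcurrently(Runnable task, int n)
            throws Exception {
        List<Thread> threads = new ArrayList<>();
        AtomicReference<Throwable> firstFailure = new AtomicReference<>();
        for (int i = 0; i < n; i++) {
            Thread t = new Thread(() -> {
                try {
                    task.run();
                } catch (Throwable e) {
                    firstFailure.compareAndSet(null, e);  // keep first error
                }
            });
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) {
            t.join();  // wait for every background thread to finish
        }
        if (firstFailure.get() != null) {
            throw new Exception("background thread failed",
                firstFailure.get());
        }
    }
}
```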

> *Compressor_deflateBytesDirect passes instance instead of jclass to 
> GetStaticObjectField
> 
>
> Key: HADOOP-10027
> URL: https://issues.apache.org/jira/browse/HADOOP-10027
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Eric Abbott
>Assignee: Hui Zheng
>Priority: Minor
> Attachments: HADOOP-10027.1.patch, HADOOP-10027.2.patch
>
>
> http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c?view=markup
> This pattern appears in all the native compressors.
> // Get members of ZlibCompressor
> jobject clazz = (*env)->GetStaticObjectField(env, this,
>  ZlibCompressor_clazz);
> The 2nd argument to GetStaticObjectField is supposed to be a jclass, not a 
> jobject. Adding the JVM param -Xcheck:jni will cause "FATAL ERROR in native 
> method: JNI received a class argument that is not a class" and a core dump 
> such as the following.
> (gdb) 
> #0 0x7f02e4aef8a5 in raise () from /lib64/libc.so.6
> #1 0x7f02e4af1085 in abort () from /lib64/libc.so.6
> #2 0x7f02e45bd727 in os::abort(bool) () from 
> /opt/jdk1.6.0_31/jre/lib/amd64/server/libjvm.so
> #3 0x7f02e43cec63 in jniCheck::validate_class(JavaThread*, _jclass*, 
> bool) () from /opt/jdk1.6.0_31/jre/lib/amd64/server/libjvm.so
> #4 0x7f02e43ea669 in checked_jni_GetStaticObjectField () from 
> /opt/jdk1.6.0_31/jre/lib/amd64/server/libjvm.so
> #5 0x7f02d38eaf79 in 
> Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_deflateBytesDirect () 
> from /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
> In addition, that clazz object is only used for synchronization. In the case 
> of the native method _deflateBytesDirect, the result is a class-wide lock 
> used to access the instance field uncompressed_direct_buf. Perhaps using the 
> instance as the sync point is more appropriate?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11643) Define EC schema API for ErasureCodec

2015-02-27 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-11643:
--

 Summary: Define EC schema API for ErasureCodec
 Key: HADOOP-11643
 URL: https://issues.apache.org/jira/browse/HADOOP-11643
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng


As part of {{ErasureCodec}}, {{ECSchema}} API will be defined here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11602) Fix toUpperCase/toLowerCase to use Locale.ENGLISH

2015-02-27 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340960#comment-14340960
 ] 

Akira AJISAKA commented on HADOOP-11602:


bq. This line looks just 80 characters. Please correct me if I'm wrong.
You are right. Sorry for my mistake.

Thanks [~ozawa] for the update. LGTM, +1 pending Jenkins. [~ste...@apache.org] 
and [~shv], could you review the patch?

> Fix toUpperCase/toLowerCase to use Locale.ENGLISH
> -
>
> Key: HADOOP-11602
> URL: https://issues.apache.org/jira/browse/HADOOP-11602
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-11602-001.patch, HADOOP-11602-002.patch, 
> HADOOP-11602-003.patch, HADOOP-11602-004.patch, 
> HADOOP-11602-branch-2.001.patch, HADOOP-11602-branch-2.002.patch
>
>
> String#toLowerCase()/toUpperCase() without a locale argument can cause 
> unexpected behavior depending on the locale. It's written in 
> [Javadoc|http://docs.oracle.com/javase/7/docs/api/java/lang/String.html#toLowerCase()]:
> {quote}
> For instance, "TITLE".toLowerCase() in a Turkish locale returns "t\u0131tle", 
> where '\u0131' is the LATIN SMALL LETTER DOTLESS I character
> {quote}
> This issue is derived from HADOOP-10101.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11643) Define EC schema API for ErasureCodec

2015-02-27 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11643:
---
Description: As part of {{ErasureCodec}} API to be defined in HDFS-7699, 
{{ECSchema}} API will be first defined here for better sync among related 
issues.  (was: As part of {{ErasureCodec}}, {{ECSchema}} API will be defined 
here.)

> Define EC schema API for ErasureCodec
> -
>
> Key: HADOOP-11643
> URL: https://issues.apache.org/jira/browse/HADOOP-11643
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>
> As part of {{ErasureCodec}} API to be defined in HDFS-7699, {{ECSchema}} API 
> will be first defined here for better sync among related issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11226) ipc.Client has to use setTrafficClass() with IPTOS_LOWDELAY|IPTOS_RELIABILITY

2015-02-27 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-11226:
-
Status: Open  (was: Patch Available)

> ipc.Client has to use setTrafficClass() with IPTOS_LOWDELAY|IPTOS_RELIABILITY
> -
>
> Key: HADOOP-11226
> URL: https://issues.apache.org/jira/browse/HADOOP-11226
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Gopal V
>Assignee: Gopal V
>  Labels: Infiniband
> Attachments: HADOOP-11226.1.patch, HADOOP-11226.2.patch
>
>
> During heavy shuffle, packet loss for IPC packets was observed from a machine.
> Avoid packet-loss and speed up transfer by using 0x14 QOS bits for the 
> packets.
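
The 0x14 value is the OR of two standard TOS bits. A minimal sketch of applying it, using a plain {{java.net.Socket}} rather than the actual {{ipc.Client}} change:

```java
import java.net.Socket;
import java.net.SocketException;

public class IpcQos {
    static final int IPTOS_RELIABILITY = 0x04;
    static final int IPTOS_LOWDELAY = 0x10;

    // 0x10 | 0x04 == 0x14, the value mentioned in the issue description.
    public static int qosBits() {
        return IPTOS_LOWDELAY | IPTOS_RELIABILITY;
    }

    // Tag a socket with the QOS bits. This is a hint only: the OS and
    // network may ignore or rewrite the TOS field.
    public static void tagSocket(Socket socket) throws SocketException {
        socket.setTrafficClass(qosBits());
    }
}
```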



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11226) ipc.Client has to use setTrafficClass() with IPTOS_LOWDELAY|IPTOS_RELIABILITY

2015-02-27 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-11226:
-
Attachment: HADOOP-11226.3.patch

Added {{ipc.client.low-latency}} with default=false

> ipc.Client has to use setTrafficClass() with IPTOS_LOWDELAY|IPTOS_RELIABILITY
> -
>
> Key: HADOOP-11226
> URL: https://issues.apache.org/jira/browse/HADOOP-11226
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Gopal V
>Assignee: Gopal V
>  Labels: Infiniband
> Attachments: HADOOP-11226.1.patch, HADOOP-11226.2.patch, 
> HADOOP-11226.3.patch
>
>
> During heavy shuffle, packet loss for IPC packets was observed from a machine.
> Avoid packet-loss and speed up transfer by using 0x14 QOS bits for the 
> packets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11226) ipc.Client has to use setTrafficClass() with IPTOS_LOWDELAY|IPTOS_RELIABILITY

2015-02-27 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-11226:
-
Status: Patch Available  (was: Open)

No added tests, since there is no easy way to test this without a correctly 
configured system and a tcpdump.

> ipc.Client has to use setTrafficClass() with IPTOS_LOWDELAY|IPTOS_RELIABILITY
> -
>
> Key: HADOOP-11226
> URL: https://issues.apache.org/jira/browse/HADOOP-11226
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Gopal V
>Assignee: Gopal V
>  Labels: Infiniband
> Attachments: HADOOP-11226.1.patch, HADOOP-11226.2.patch, 
> HADOOP-11226.3.patch
>
>
> During heavy shuffle, packet loss for IPC packets was observed from a machine.
> Avoid packet-loss and speed up transfer by using 0x14 QOS bits for the 
> packets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11226) ipc.Client has to use setTrafficClass() with IPTOS_LOWDELAY|IPTOS_RELIABILITY

2015-02-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14341061#comment-14341061
 ] 

Hadoop QA commented on HADOOP-11226:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12701487/HADOOP-11226.3.patch
  against trunk revision cf51ff2.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5795//console

This message is automatically generated.

> ipc.Client has to use setTrafficClass() with IPTOS_LOWDELAY|IPTOS_RELIABILITY
> -
>
> Key: HADOOP-11226
> URL: https://issues.apache.org/jira/browse/HADOOP-11226
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Gopal V
>Assignee: Gopal V
>  Labels: Infiniband
> Attachments: HADOOP-11226.1.patch, HADOOP-11226.2.patch, 
> HADOOP-11226.3.patch
>
>
> During heavy shuffle, packet loss for IPC packets was observed from a machine.
> Avoid packet-loss and speed up transfer by using 0x14 QOS bits for the 
> packets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11644) Contribute CMX compression

2015-02-27 Thread Xabriel J Collazo Mojica (JIRA)
Xabriel J Collazo Mojica created HADOOP-11644:
-

 Summary: Contribute CMX compression
 Key: HADOOP-11644
 URL: https://issues.apache.org/jira/browse/HADOOP-11644
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Reporter: Xabriel J Collazo Mojica
Assignee: Xabriel J Collazo Mojica


Hadoop natively supports four main compression algorithms: BZIP2, LZ4, Snappy 
and ZLIB.

Each one of these algorithms fills a gap:

bzip2 : Very high compression ratio, splittable
LZ4 : Very fast, non-splittable
Snappy : Very fast, non-splittable
zlib : Good balance of compression ratio and speed

We think there is a gap for a compression algorithm that can compress and 
decompress quickly while also being splittable. This can help significantly on 
jobs where the input file sizes are >= 1GB.
For this, IBM has developed CMX. CMX is a dictionary-based, block-oriented, 
splittable, concatenable compression algorithm developed specifically for 
Hadoop workloads. Many of our customers use CMX, and we would love to be able 
to contribute it to hadoop-common.

CMX is block-oriented: We typically use 64k blocks. Blocks are independently 
decompressible.

CMX is splittable: We implement the SplittableCompressionCodec interface. All 
CMX files are a multiple of 64k, so splittability is achieved in a simple way 
with no need for external indexes.

CMX is concatenable: Two independent CMX files can be concatenated together. 
We have seen that some projects, like Apache Flume, require this feature.
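
The no-index splittability claim can be illustrated with simple block arithmetic. This is a sketch under the assumption of fixed 64k blocks, not actual CMX code:

```java
public class BlockAlignedSplit {
    static final long BLOCK_SIZE = 64 * 1024;  // assumed 64k CMX block

    // Round a requested split offset down to the enclosing block
    // boundary. Because each block is independently decompressible, a
    // reader can begin at any such boundary without an external index.
    public static long alignedStart(long requestedOffset) {
        return (requestedOffset / BLOCK_SIZE) * BLOCK_SIZE;
    }
}
```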



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11226) ipc.Client has to use setTrafficClass() with IPTOS_LOWDELAY|IPTOS_RELIABILITY

2015-02-27 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-11226:
-
Status: Open  (was: Patch Available)

> ipc.Client has to use setTrafficClass() with IPTOS_LOWDELAY|IPTOS_RELIABILITY
> -
>
> Key: HADOOP-11226
> URL: https://issues.apache.org/jira/browse/HADOOP-11226
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Gopal V
>Assignee: Gopal V
>  Labels: Infiniband
> Attachments: HADOOP-11226.1.patch, HADOOP-11226.2.patch, 
> HADOOP-11226.3.patch
>
>
> During heavy shuffle, packet loss for IPC packets was observed from a machine.
> Avoid packet-loss and speed up transfer by using 0x14 QOS bits for the 
> packets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11226) ipc.Client has to use setTrafficClass() with IPTOS_LOWDELAY|IPTOS_RELIABILITY

2015-02-27 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-11226:
-
Status: Patch Available  (was: Open)

> ipc.Client has to use setTrafficClass() with IPTOS_LOWDELAY|IPTOS_RELIABILITY
> -
>
> Key: HADOOP-11226
> URL: https://issues.apache.org/jira/browse/HADOOP-11226
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Gopal V
>Assignee: Gopal V
>  Labels: Infiniband
> Attachments: HADOOP-11226.1.patch, HADOOP-11226.2.patch, 
> HADOOP-11226.3.patch, HADOOP-11226.4.patch
>
>
> During heavy shuffle, packet loss for IPC packets was observed from a machine.
> Avoid packet-loss and speed up transfer by using 0x14 QOS bits for the 
> packets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11226) ipc.Client has to use setTrafficClass() with IPTOS_LOWDELAY|IPTOS_RELIABILITY

2015-02-27 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-11226:
-
Attachment: HADOOP-11226.4.patch

> ipc.Client has to use setTrafficClass() with IPTOS_LOWDELAY|IPTOS_RELIABILITY
> -
>
> Key: HADOOP-11226
> URL: https://issues.apache.org/jira/browse/HADOOP-11226
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Gopal V
>Assignee: Gopal V
>  Labels: Infiniband
> Attachments: HADOOP-11226.1.patch, HADOOP-11226.2.patch, 
> HADOOP-11226.3.patch, HADOOP-11226.4.patch
>
>
> During heavy shuffle, packet loss for IPC packets was observed from a machine.
> Avoid packet-loss and speed up transfer by using 0x14 QOS bits for the 
> packets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11643) Define EC schema API for ErasureCodec

2015-02-27 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11643:
---
Attachment: HADOOP-11643_v1.patch

Provided a patch. No unit test is added as it doesn't have much logic.

> Define EC schema API for ErasureCodec
> -
>
> Key: HADOOP-11643
> URL: https://issues.apache.org/jira/browse/HADOOP-11643
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11643_v1.patch
>
>
> As part of {{ErasureCodec}} API to be defined in HDFS-7699, {{ECSchema}} API 
> will be first defined here for better sync among related issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11602) Fix toUpperCase/toLowerCase to use Locale.ENGLISH

2015-02-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14341125#comment-14341125
 ] 

Hadoop QA commented on HADOOP-11602:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12701393/HADOOP-11602-004.patch
  against trunk revision 01a1621.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 20 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 5 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-annotations hadoop-common-project/hadoop-auth 
hadoop-common-project/hadoop-common hadoop-common-project/hadoop-nfs 
hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-httpfs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-mapreduce-project/hadoop-mapreduce-examples hadoop-maven-plugins 
hadoop-tools/hadoop-azure hadoop-tools/hadoop-distcp hadoop-tools/hadoop-extras 
hadoop-tools/hadoop-gridmix hadoop-tools/hadoop-openstack 
hadoop-tools/hadoop-rumen hadoop-tools/hadoop-streaming 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  
org.apache.hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher
  org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate

  The test build failed in 
hadoop-hdfs-project/hadoop-hdfs-httpfs 

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5794//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5794//artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5794//console

This message is automatically generated.

> Fix toUpperCase/toLowerCase to use Locale.ENGLISH
> -
>
> Key: HADOOP-11602
> URL: https://issues.apache.org/jira/browse/HADOOP-11602
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-11602-001.patch, HADOOP-11602-002.patch, 
> HADOOP-11602-003.patch, HADOOP-11602-004.patch, 
> HADOOP-11602-branch-2.001.patch, HADOOP-11602-branch-2.002.patch
>
>
> String#toLowerCase()/toUpperCase() without a locale argument can cause 
> unexpected behavior depending on the default locale, as noted in the 
> [Javadoc|http://docs.oracle.com/javase/7/docs/api/java/lang/String.html#toLowerCase()]:
> {quote}
> For instance, "TITLE".toLowerCase() in a Turkish locale returns "t\u0131tle", 
> where '\u0131' is the LATIN SMALL LETTER DOTLESS I character
> {quote}
> This issue is derived from HADOOP-10101.
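The Turkish-locale pitfall quoted above can be reproduced in a few lines; this is an illustrative sketch, not code from the patch:

```java
import java.util.Locale;

public class LocaleCase {
    public static void main(String[] args) {
        // Under Turkish casing rules, 'I' lowercases to DOTLESS I (U+0131),
        // so the result no longer equals the ASCII "title".
        String turkish = "TITLE".toLowerCase(new Locale("tr", "TR"));
        // Passing an explicit locale makes the result deterministic,
        // which is what the patch does with Locale.ENGLISH.
        String english = "TITLE".toLowerCase(Locale.ENGLISH);
        System.out.println(turkish.equals("title")); // false
        System.out.println(english.equals("title")); // true
    }
}
```

This is why case conversions used for config keys or protocol strings should always pass an explicit locale rather than rely on the JVM default.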



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11226) ipc.Client has to use setTrafficClass() with IPTOS_LOWDELAY|IPTOS_RELIABILITY

2015-02-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14341146#comment-14341146
 ] 

Hadoop QA commented on HADOOP-11226:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12701499/HADOOP-11226.4.patch
  against trunk revision edceced.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5796//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5796//console


> ipc.Client has to use setTrafficClass() with IPTOS_LOWDELAY|IPTOS_RELIABILITY
> -
>
> Key: HADOOP-11226
> URL: https://issues.apache.org/jira/browse/HADOOP-11226
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Gopal V
>Assignee: Gopal V
>  Labels: Infiniband
> Attachments: HADOOP-11226.1.patch, HADOOP-11226.2.patch, 
> HADOOP-11226.3.patch, HADOOP-11226.4.patch
>
>
> During heavy shuffle, packet loss for IPC packets was observed from a machine.
> Avoid packet-loss and speed up transfer by using 0x14 QOS bits for the 
> packets.
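The 0x14 value in the summary is the OR of two classic RFC 1349 TOS bits. A minimal sketch of setting it on a socket (the IPTOS_* names are taken from BSD headers for clarity; java.net defines no such constants, and the exact spot ipc.Client sets this is not shown here):

```java
import java.net.Socket;
import java.net.SocketException;

public class QosSocket {
    // TOS bits per RFC 791 / RFC 1349; names mirror the BSD <netinet/ip.h>
    // macros, they are not JDK constants.
    static final int IPTOS_LOWDELAY = 0x10;
    static final int IPTOS_RELIABILITY = 0x04;

    public static void main(String[] args) throws SocketException {
        Socket s = new Socket();
        // Request low-delay, high-reliability handling for IPC traffic.
        // The JDK treats this as a hint; the OS may ignore or remap it.
        s.setTrafficClass(IPTOS_LOWDELAY | IPTOS_RELIABILITY); // 0x14
        System.out.println(Integer.toHexString(IPTOS_LOWDELAY | IPTOS_RELIABILITY));
    }
}
```

Because setTrafficClass is only a hint, operators on networks that police DSCP/TOS markings (as in the Infiniband setup reported here) should verify the bits actually survive on the wire.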




[jira] [Moved] (HADOOP-11645) Erasure Codec API covering the essential aspects for an erasure code

2015-02-27 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu moved HDFS-7699 to HADOOP-11645:
---

Key: HADOOP-11645  (was: HDFS-7699)
Project: Hadoop Common  (was: Hadoop HDFS)

> Erasure Codec API covering the essential aspects for an erasure code
> 
>
> Key: HADOOP-11645
> URL: https://issues.apache.org/jira/browse/HADOOP-11645
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>
> This is to define the even higher-level API *ErasureCodec* to possibly 
> cover all the essential aspects of an erasure code, as discussed in detail 
> in HDFS-7337. Generally, it will cover the necessary configuration 
> about which *RawErasureCoder* to use for the code scheme, how to form and 
> lay out the BlockGroup, etc. It will also discuss how an *ErasureCodec* 
> will be used in both the client and the DataNode, in all the supported 
> modes related to EC.





[jira] [Updated] (HADOOP-11645) Erasure Codec API covering the essential aspects for an erasure code

2015-02-27 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-11645:

Issue Type: Sub-task  (was: Task)
Parent: HADOOP-11264

> Erasure Codec API covering the essential aspects for an erasure code
> 
>
> Key: HADOOP-11645
> URL: https://issues.apache.org/jira/browse/HADOOP-11645
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>
> This is to define the even higher-level API *ErasureCodec* to possibly 
> cover all the essential aspects of an erasure code, as discussed in detail 
> in HDFS-7337. Generally, it will cover the necessary configuration 
> about which *RawErasureCoder* to use for the code scheme, how to form and 
> lay out the BlockGroup, etc. It will also discuss how an *ErasureCodec* 
> will be used in both the client and the DataNode, in all the supported 
> modes related to EC.





[jira] [Moved] (HADOOP-11646) Erasure Coder API for encoding and decoding of block group

2015-02-27 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu moved HDFS-7662 to HADOOP-11646:
---

Fix Version/s: (was: HDFS-7285)
   HDFS-7285
  Key: HADOOP-11646  (was: HDFS-7662)
  Project: Hadoop Common  (was: Hadoop HDFS)

> Erasure Coder API for encoding and decoding of block group
> --
>
> Key: HADOOP-11646
> URL: https://issues.apache.org/jira/browse/HADOOP-11646
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HDFS-7662-v1.patch, HDFS-7662-v2.patch, 
> HDFS-7662-v3.patch
>
>
> This is to define the ErasureCoder API for encoding and decoding of a 
> BlockGroup. Given a BlockGroup, the ErasureCoder extracts data chunks from 
> the blocks and leverages the RawErasureCoder defined in HDFS-7353 to 
> perform the concrete encoding or decoding.
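To make the chunk-level encode/decode idea concrete, here is a toy single-parity "raw coder" written only for illustration; the real RawErasureCoder/ErasureCoder interfaces being defined in these JIRAs have different names and signatures:

```java
import java.util.Arrays;

// Illustrative only: XOR parity over equal-length data chunks, in the
// spirit of a raw erasure coder. Real HDFS coders use Reed-Solomon.
public class XorRawCoder {
    // Compute one parity chunk as the XOR of all data chunks.
    public static byte[] encode(byte[][] dataChunks) {
        byte[] parity = new byte[dataChunks[0].length];
        for (byte[] chunk : dataChunks)
            for (int i = 0; i < chunk.length; i++)
                parity[i] ^= chunk[i];
        return parity;
    }

    // Recover one missing data chunk by XORing parity with the survivors.
    public static byte[] decode(byte[][] survivingChunks, byte[] parity) {
        byte[] lost = parity.clone();
        for (byte[] chunk : survivingChunks)
            for (int i = 0; i < chunk.length; i++)
                lost[i] ^= chunk[i];
        return lost;
    }

    public static void main(String[] args) {
        byte[][] data = { {1, 2, 3}, {4, 5, 6} };
        byte[] parity = encode(data);
        // Pretend data[0] was lost; rebuild it from data[1] and the parity.
        byte[] recovered = decode(new byte[][] { data[1] }, parity);
        System.out.println(Arrays.equals(recovered, data[0])); // true
    }
}
```

XOR tolerates only one lost chunk per group; the Reed-Solomon coders discussed in HADOOP-11647 generalize this to multiple parity chunks.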





[jira] [Updated] (HADOOP-11646) Erasure Coder API for encoding and decoding of block group

2015-02-27 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-11646:

Issue Type: Sub-task  (was: Task)
Parent: HADOOP-11264

> Erasure Coder API for encoding and decoding of block group
> --
>
> Key: HADOOP-11646
> URL: https://issues.apache.org/jira/browse/HADOOP-11646
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HDFS-7662-v1.patch, HDFS-7662-v2.patch, 
> HDFS-7662-v3.patch
>
>
> This is to define the ErasureCoder API for encoding and decoding of a 
> BlockGroup. Given a BlockGroup, the ErasureCoder extracts data chunks from 
> the blocks and leverages the RawErasureCoder defined in HDFS-7353 to 
> perform the concrete encoding or decoding.





[jira] [Moved] (HADOOP-11647) Reed-Solomon ErasureCoder

2015-02-27 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu moved HDFS-7664 to HADOOP-11647:
---

Key: HADOOP-11647  (was: HDFS-7664)
Project: Hadoop Common  (was: Hadoop HDFS)

> Reed-Solomon ErasureCoder
> -
>
> Key: HADOOP-11647
> URL: https://issues.apache.org/jira/browse/HADOOP-11647
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HDFS-7664-v1.patch
>
>
> This is to implement a Reed-Solomon ErasureCoder using the API defined in 
> HDFS-7662. It supports plugging in a concrete RawErasureCoder via 
> configuration, using either the JRSErasureCoder added in HDFS-7418 or the 
> IsaRSErasureCoder added in HDFS-7338.





[jira] [Updated] (HADOOP-11647) Reed-Solomon ErasureCoder

2015-02-27 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-11647:

Issue Type: Sub-task  (was: Task)
Parent: HADOOP-11264

> Reed-Solomon ErasureCoder
> -
>
> Key: HADOOP-11647
> URL: https://issues.apache.org/jira/browse/HADOOP-11647
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HDFS-7664-v1.patch
>
>
> This is to implement a Reed-Solomon ErasureCoder using the API defined in 
> HDFS-7662. It supports plugging in a concrete RawErasureCoder via 
> configuration, using either the JRSErasureCoder added in HDFS-7418 or the 
> IsaRSErasureCoder added in HDFS-7338.





[jira] [Updated] (HADOOP-11643) Define EC schema API for ErasureCodec

2015-02-27 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11643:
---
Attachment: HADOOP-11643_v2.patch

Fixed the patch format.

> Define EC schema API for ErasureCodec
> -
>
> Key: HADOOP-11643
> URL: https://issues.apache.org/jira/browse/HADOOP-11643
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11643_v1.patch, HADOOP-11643_v2.patch
>
>
> As part of {{ErasureCodec}} API to be defined in HDFS-7699, {{ECSchema}} API 
> will be first defined here for better sync among related issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11630) Allow hadoop to bind to ipv6 conditionally

2015-02-27 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14341348#comment-14341348
 ] 

Elliott Clark commented on HADOOP-11630:


It shouldn't be all or nothing (and that hasn't been the attitude for other 
things like Windows support or tiered storage). That mentality is how projects 
end up with ~600 languishing patches.
Does the patch help: yes. It simplifies the process of allowing IPv6 
(something [~ste...@apache.org] had to go back and edit, having gotten it wrong 
initially).
Does the patch make fixing any of the other IPv6 issues harder: nope.


> Allow hadoop to bind to ipv6 conditionally
> --
>
> Key: HADOOP-11630
> URL: https://issues.apache.org/jira/browse/HADOOP-11630
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.6.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>  Labels: ipv6
> Attachments: HDFS-7834-branch-2-0.patch, HDFS-7834-trunk-0.patch
>
>
> Currently the bash scripts unconditionally add -Djava.net.preferIPv4Stack=true.
> While this was needed a while ago, IPv6 on Java works much better now, and 
> there should be a way to allow binding dual-stack if needed.
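The conditional the description asks for could look roughly like the sketch below. This is not the attached patch; the HADOOP_ALLOW_IPV6 variable name is assumed for illustration:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: only force the IPv4-only stack when the operator
# has not opted in to IPv6, instead of adding the flag unconditionally.
HADOOP_OPTS="${HADOOP_OPTS:-}"
if [[ "${HADOOP_ALLOW_IPV6:-no}" != "yes" ]]; then
  HADOOP_OPTS="${HADOOP_OPTS} -Djava.net.preferIPv4Stack=true"
fi
echo "HADOOP_OPTS=${HADOOP_OPTS}"
```

With the variable unset the behavior is unchanged (IPv4-only, as today); setting it to "yes" lets the JVM bind dual-stack.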


