[jira] [Commented] (HADOOP-9844) NPE when trying to create an error message response of RPC

2016-07-12 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15374405#comment-15374405
 ] 

Yongjun Zhang commented on HADOOP-9844:
---

Hi [~steve_l],

Thanks for your work here. 

The only patch I see attached to this issue now is rev 001, which was uploaded 
on 06/Aug/2013. Based on your comment
https://issues.apache.org/jira/browse/HADOOP-9844?focusedCommentId=15089552&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15089552

you seem to have prepared a new rev; perhaps you forgot to upload it after 
making the comment?

Thanks.


> NPE when trying to create an error message response of RPC
> --
>
> Key: HADOOP-9844
> URL: https://issues.apache.org/jira/browse/HADOOP-9844
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1, 3.0.0-alpha1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-9844-001.patch
>
>
> I'm seeing an NPE which is raised when the server is trying to create an 
> error response to send back to the caller and there is no error text.
> The root cause is probably somewhere in SASL, but sending something back to 
> the caller would seem preferable to NPE-ing server-side.
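
The fix can be as simple as a null-safe fallback when composing the error text. A minimal sketch (a hypothetical helper, not the actual o.a.h.ipc.Server code):

```java
// Hypothetical helper: build an error string that never NPEs when the
// exception carries no message (common for SASL exceptions).
public class ErrorText {
    static String errorText(Throwable t) {
        if (t == null) {
            return "unknown error";
        }
        String msg = t.getMessage();
        // getMessage() may legitimately be null; toString() never is.
        return msg != null ? msg : t.toString();
    }

    public static void main(String[] args) {
        System.out.println(errorText(new RuntimeException("boom")));
        System.out.println(errorText(new RuntimeException()));
    }
}
```

Whatever the SASL root cause turns out to be, a fallback like this keeps the server from NPE-ing while still sending something useful back to the caller.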



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2016-07-12 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-11361:
---
Attachment: HADOOP-11361-007.patch

Updated the patch. Removed redundant check.

> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.5.1, 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-111361-003.patch, HADOOP-11361-002.patch, 
> HADOOP-11361-004.patch, HADOOP-11361-005.patch, HADOOP-11361-005.patch, 
> HADOOP-11361-006.patch, HADOOP-11361-007.patch, HADOOP-11361.patch, 
> HDFS-7487.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> {noformat}






[jira] [Commented] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2016-07-12 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15374384#comment-15374384
 ] 

Yongjun Zhang commented on HADOOP-11361:


Thanks guys, that's another improvement!

One little thing:
{code}
synchronized (this) {
  if (lastRecs != null) {
updateAttrCache(lastRecs);
if (getAllMetrics) {  <== this check is redundant
  updateInfoCache(lastRecs);
}
  }
  jmxCacheTS = Time.now();
  lastRecsCleared = true;
}
{code}
The {{if (getAllMetrics)}} in the above code is redundant, because it has to be 
true when {{lastRecs}} is not null.
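
With the redundant check dropped, the block reduces to (a sketch of the simplification only, not the full patch):
{code}
synchronized (this) {
  if (lastRecs != null) {
    updateAttrCache(lastRecs);
    // lastRecs is only populated when getAllMetrics was true,
    // so the info cache can be updated unconditionally here.
    updateInfoCache(lastRecs);
  }
  jmxCacheTS = Time.now();
  lastRecsCleared = true;
}
{code}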

Thanks.


> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.5.1, 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-111361-003.patch, HADOOP-11361-002.patch, 
> HADOOP-11361-004.patch, HADOOP-11361-005.patch, HADOOP-11361-005.patch, 
> HADOOP-11361-006.patch, HADOOP-11361.patch, HDFS-7487.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> {noformat}






[jira] [Commented] (HADOOP-13366) Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc

2016-07-12 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15374382#comment-15374382
 ] 

Rakesh R commented on HADOOP-13366:
---

[~ajisakaa], I uploaded a new patch with the suggested changes. Please take a 
look again. Thanks!

> Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc
> ---
>
> Key: HADOOP-13366
> URL: https://issues.apache.org/jira/browse/HADOOP-13366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Minor
> Attachments: HADOOP-13366-00.patch, HADOOP-13366-01.patch, 
> HADOOP-13366-02.patch
>
>
> This jira is to fix the dead link to {{core-default.xml}} in 
> [CommonConfigurationKeysPublic|https://hadoop.apache.org/docs/r2.7.2/api/org/apache/hadoop/fs/CommonConfigurationKeysPublic.html]
>  javadoc.






[jira] [Updated] (HADOOP-13366) Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc

2016-07-12 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HADOOP-13366:
--
Attachment: HADOOP-13366-02.patch

> Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc
> ---
>
> Key: HADOOP-13366
> URL: https://issues.apache.org/jira/browse/HADOOP-13366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Minor
> Attachments: HADOOP-13366-00.patch, HADOOP-13366-01.patch, 
> HADOOP-13366-02.patch
>
>
> This jira is to fix the dead link to {{core-default.xml}} in 
> [CommonConfigurationKeysPublic|https://hadoop.apache.org/docs/r2.7.2/api/org/apache/hadoop/fs/CommonConfigurationKeysPublic.html]
>  javadoc.






[jira] [Updated] (HADOOP-13366) Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc

2016-07-12 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HADOOP-13366:
--
Attachment: HADOOP-13366-02.patch

> Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc
> ---
>
> Key: HADOOP-13366
> URL: https://issues.apache.org/jira/browse/HADOOP-13366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Minor
> Attachments: HADOOP-13366-00.patch, HADOOP-13366-01.patch
>
>
> This jira is to fix the dead link to {{core-default.xml}} in 
> [CommonConfigurationKeysPublic|https://hadoop.apache.org/docs/r2.7.2/api/org/apache/hadoop/fs/CommonConfigurationKeysPublic.html]
>  javadoc.






[jira] [Updated] (HADOOP-13366) Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc

2016-07-12 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HADOOP-13366:
--
Attachment: (was: HADOOP-13366-02.patch)

> Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc
> ---
>
> Key: HADOOP-13366
> URL: https://issues.apache.org/jira/browse/HADOOP-13366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Minor
> Attachments: HADOOP-13366-00.patch, HADOOP-13366-01.patch
>
>
> This jira is to fix the dead link to {{core-default.xml}} in 
> [CommonConfigurationKeysPublic|https://hadoop.apache.org/docs/r2.7.2/api/org/apache/hadoop/fs/CommonConfigurationKeysPublic.html]
>  javadoc.






[jira] [Updated] (HADOOP-13351) TestDFSClientSocketSize buffer size tests are flaky

2016-07-12 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13351:
---
Status: Patch Available  (was: Open)

> TestDFSClientSocketSize buffer size tests are flaky
> ---
>
> Key: HADOOP-13351
> URL: https://issues.apache.org/jira/browse/HADOOP-13351
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13551.001.patch, HADOOP-13551.002.patch, 
> HADOOP-13551.003.patch
>
>
> {{TestDFSClientSocketSize}} has two tests that assert that a value that was 
> set via {{java.net.Socket#setSendBufferSize}} is equal to the value 
> subsequently returned by {{java.net.Socket#getSendBufferSize}}.
> These tests are flaky when we run them; they occasionally fail.
> This is expected behavior, actually, because 
> {{Socket#setSendBufferSize()}}[is only a 
> hint|https://docs.oracle.com/javase/7/docs/api/java/net/Socket.html#setSendBufferSize(int)].
>   (Similar to how the underlying libc {{setsockopt(SO_SNDBUF)}} works).
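
A tiny standalone demo (separate from the test class) shows why exact-equality assertions on the buffer size are fragile:

```java
import java.net.Socket;

// The kernel treats the requested SO_SNDBUF value as a hint: it may round,
// clamp, or (on Linux) double it, so get() need not equal set().
public class SendBufferHintDemo {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket()) {
            int requested = 128 * 1024;
            s.setSendBufferSize(requested);
            int actual = s.getSendBufferSize();
            // A robust test asserts a relationship (e.g. actual > 0),
            // not requested == actual.
            System.out.println("requested=" + requested + " actual=" + actual);
        }
    }
}
```
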






[jira] [Commented] (HADOOP-13366) Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc

2016-07-12 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15374361#comment-15374361
 ] 

Akira Ajisaka commented on HADOOP-13366:


As [~templedf] commented, it's better to use the {{@see}} tag instead of a 
plain "See". That way the generated document becomes as follows:
{code}
  /**
   * @see core-default.xml .
   */
{code}
Hi [~rakeshr], would you update the patch?
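
For the record, the {{@see}} snippet above seems to have lost its HTML in the mail archive; a link of the kind used in {{Configuration.java}} would look roughly like this (the relative path is illustrative, not necessarily what the patch uses):
{code}
  /**
   * @see
   * <a href="{@docRoot}/../hadoop-project-dist/hadoop-common/core-default.xml">
   * core-default.xml</a>
   */
{code}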

> Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc
> ---
>
> Key: HADOOP-13366
> URL: https://issues.apache.org/jira/browse/HADOOP-13366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Minor
> Attachments: HADOOP-13366-00.patch, HADOOP-13366-01.patch
>
>
> This jira is to fix the dead link to {{core-default.xml}} in 
> [CommonConfigurationKeysPublic|https://hadoop.apache.org/docs/r2.7.2/api/org/apache/hadoop/fs/CommonConfigurationKeysPublic.html]
>  javadoc.






[jira] [Commented] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2016-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15374300#comment-15374300
 ] 

Hadoop QA commented on HADOOP-11361:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m  0s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817579/HADOOP-11361-006.patch
 |
| JIRA Issue | HADOOP-11361 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e7eadb688025 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 06c56ff |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9974/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9974/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9974/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> 

[jira] [Commented] (HADOOP-13366) Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc

2016-07-12 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15374277#comment-15374277
 ] 

Rakesh R commented on HADOOP-13366:
---

The test case failure is unrelated; please ignore it. HADOOP-12588 is 
addressing the failing test.

> Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc
> ---
>
> Key: HADOOP-13366
> URL: https://issues.apache.org/jira/browse/HADOOP-13366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Minor
> Attachments: HADOOP-13366-00.patch, HADOOP-13366-01.patch
>
>
> This jira is to fix the dead link to {{core-default.xml}} in 
> [CommonConfigurationKeysPublic|https://hadoop.apache.org/docs/r2.7.2/api/org/apache/hadoop/fs/CommonConfigurationKeysPublic.html]
>  javadoc.






[jira] [Commented] (HADOOP-13212) Provide an option to set the socket buffers in S3AFileSystem

2016-07-12 Thread Rajesh Balamohan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15374250#comment-15374250
 ] 

Rajesh Balamohan commented on HADOOP-13212:
---

Without the patch, here is the report; the errors are mainly due to timeouts. 
Are you concerned about the timeout issues?

{noformat}
Tests in error:
  
TestS3AContractDistCp>AbstractContractDistCpTest.largeFilesToRemote:96->AbstractContractDistCpTest.largeFiles:172->AbstractContractDistCpTest.runDistCp:188
 »
  
TestS3AContractDistCp>AbstractContractDistCpTest.largeFilesFromRemote:108->AbstractContractDistCpTest.largeFiles:174
 » FileNotFound
  TestS3ADeleteFilesOneByOne>TestS3ADeleteManyFiles.testBulkRenameAndDelete:99 »
  TestS3ADeleteManyFiles.testBulkRenameAndDelete:99 »  test timed out after 
1800...
  
TestS3ADirectoryPerformance.testTimeToStatNonEmptyDirectory:153->timeToStatPath:179
 »

Tests run: 261, Failures: 0, Errors: 4, Skipped: 7
{noformat}

> Provide an option to set the socket buffers in S3AFileSystem
> 
>
> Key: HADOOP-13212
> URL: https://issues.apache.org/jira/browse/HADOOP-13212
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13212-branch-2-001.patch
>
>
> It should be possible to provide hints about send/receive buffers to 
> AmazonS3Client via ClientConfiguration. It would be good to expose these 
> parameters in S3AFileSystem for perf.
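
For reference, the AWS SDK v1 already exposes such hints on {{ClientConfiguration}}; a sketch of wiring them through (the {{fs.s3a.*}} key names are illustrative, not necessarily what the eventual patch uses):
{code}
ClientConfiguration awsConf = new ClientConfiguration();
// Values read from the Hadoop Configuration; a hint of 0 leaves it to the OS.
int sendBuf = conf.getInt("fs.s3a.socket.send.buffer", 8192);
int recvBuf = conf.getInt("fs.s3a.socket.recv.buffer", 8192);
awsConf.setSocketBufferSizeHints(sendBuf, recvBuf);
AmazonS3Client s3 = new AmazonS3Client(credentials, awsConf);
{code}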






[jira] [Commented] (HADOOP-13366) Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc

2016-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15374249#comment-15374249
 ] 

Hadoop QA commented on HADOOP-13366:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 188 unchanged - 69 fixed = 188 total (was 257) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 39s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817577/HADOOP-13366-01.patch 
|
| JIRA Issue | HADOOP-13366 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 38d34cf72e7a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 06c56ff |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9973/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9973/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9973/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc
> ---
>
> Key: 

[jira] [Commented] (HADOOP-13367) Support more types of Store in S3Native File System

2016-07-12 Thread liu chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15374237#comment-15374237
 ] 

liu chang commented on HADOOP-13367:


Thanks for your response. I'm working on `COS`, an object storage service 
maintained by Tencent in China. Should I create a new module like 
`hadoop_tools/hadoop_tencent`, or just implement a subclass of the S3 store?


> Support more types of Store in S3Native File System
> ---
>
> Key: HADOOP-13367
> URL: https://issues.apache.org/jira/browse/HADOOP-13367
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: liu chang
>Priority: Minor
>
> There are a lot of object storage services whose protocol is similar to S3. 
> We could add more types of NativeFileSystemStore to support those services.






[jira] [Updated] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2016-07-12 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-11361:
---
Attachment: HADOOP-11361-006.patch

Updated the patch according to the comments.
Retained the original parameters for {{getMetrics()}} to avoid breaking 
callers, if any, implemented outside the Hadoop code base.



> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.5.1, 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-111361-003.patch, HADOOP-11361-002.patch, 
> HADOOP-11361-004.patch, HADOOP-11361-005.patch, HADOOP-11361-005.patch, 
> HADOOP-11361-006.patch, HADOOP-11361.patch, HDFS-7487.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> {noformat}






[jira] [Created] (HADOOP-13370) After creating HA, getting java.net.UnknownHostException

2016-07-12 Thread Bhavan Jindal (JIRA)
Bhavan Jindal created HADOOP-13370:
--

 Summary: After creating HA, getting java.net.UnknownHostException
 Key: HADOOP-13370
 URL: https://issues.apache.org/jira/browse/HADOOP-13370
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.6.4
Reporter: Bhavan Jindal
Priority: Critical


I have made my standalone cluster highly available using the quorum journal 
method. After that I am not able to access HDFS and get the error below:

[hadoop@namenode hadoop]$ hdfs dfs -ls /
16/07/12 22:46:05 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
-ls: java.net.UnknownHostException: myhahdpcluster
Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [ ...]
[hadoop@namenode hadoop]$


All the other HA components are working fine. Can you please let me know if I 
am missing anything?
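
For what it's worth, this error usually means the client configuration does not declare the logical nameservice. A sketch of the settings to check (nameservice id taken from the error message; hosts {{nn1host}}/{{nn2host}} are placeholders):
{noformat}
<!-- core-site.xml -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://myhahdpcluster</value>
</property>

<!-- hdfs-site.xml -->
<property>
  <name>dfs.nameservices</name>
  <value>myhahdpcluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.myhahdpcluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.myhahdpcluster.nn1</name>
  <value>nn1host:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.myhahdpcluster.nn2</name>
  <value>nn2host:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.myhahdpcluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
{noformat}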






[jira] [Comment Edited] (HADOOP-13366) Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc

2016-07-12 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15374204#comment-15374204
 ] 

Rakesh R edited comment on HADOOP-13366 at 7/13/16 2:42 AM:


Thank you [~ajisakaa] for the reviews. Attached new patch addressing the 
comments. Also, in the latest patch I've corrected {{core-default.xml}} path in 
{{CommonConfigurationKeys.java}} file. Kindly review it again.


was (Author: rakeshr):
Thank you [~ajisakaa] for the reviews. Attached new patch addressing the 
comments.

> Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc
> ---
>
> Key: HADOOP-13366
> URL: https://issues.apache.org/jira/browse/HADOOP-13366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Minor
> Attachments: HADOOP-13366-00.patch, HADOOP-13366-01.patch
>
>
> This jira is to fix the dead link to {{core-default.xml}} in 
> [CommonConfigurationKeysPublic|https://hadoop.apache.org/docs/r2.7.2/api/org/apache/hadoop/fs/CommonConfigurationKeysPublic.html]
>  javadoc.






[jira] [Commented] (HADOOP-13366) Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc

2016-07-12 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15374219#comment-15374219
 ] 

Rakesh R commented on HADOOP-13366:
---

Thanks [~templedf] for the interest in this. The {{@docRoot}} tag is used in 
our existing Hadoop code, e.g. 
[Configuration.java|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L121],
 and works fine. Please see the [Configuration API 
Doc|https://hadoop.apache.org/docs/r2.7.2/api/org/apache/hadoop/conf/Configuration.html].

> Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc
> ---
>
> Key: HADOOP-13366
> URL: https://issues.apache.org/jira/browse/HADOOP-13366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Minor
> Attachments: HADOOP-13366-00.patch, HADOOP-13366-01.patch
>
>
> This jira is to fix the dead link to {{core-default.xml}} in 
> [CommonConfigurationKeysPublic|https://hadoop.apache.org/docs/r2.7.2/api/org/apache/hadoop/fs/CommonConfigurationKeysPublic.html]
>  javadoc.






[jira] [Commented] (HADOOP-13369) [umbrella] Fix javadoc warnings by JDK8 on trunk

2016-07-12 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15374213#comment-15374213
 ] 

Tsuyoshi Ozawa commented on HADOOP-13369:
-

This will be a lot of work, so let's create sub-tasks here.

> [umbrella] Fix javadoc warnings by JDK8 on trunk
> 
>
> Key: HADOOP-13369
> URL: https://issues.apache.org/jira/browse/HADOOP-13369
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>  Labels: newbie
>
> After migrating to JDK 8, lots of warnings show up. We should fix them all.
> {quote}
> [WARNING] ^[WARNING] 
> /home/ubuntu/hadoopdev/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/lib/ZKClient.java:53:
>  warning: no description for @throws
> ...
> [WARNING] 
> /home/ubuntu/hadoopdev/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/LeveldbIterator.java:53:
>  warning: no @param for options
> {quote}






[jira] [Created] (HADOOP-13369) [umbrella] Fix javadoc warnings by JDK8 on trunk

2016-07-12 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created HADOOP-13369:
---

 Summary: [umbrella] Fix javadoc warnings by JDK8 on trunk
 Key: HADOOP-13369
 URL: https://issues.apache.org/jira/browse/HADOOP-13369
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Tsuyoshi Ozawa


After migrating to JDK 8, lots of warnings show up. We should fix them all.
{quote}
[WARNING] ^[WARNING] 
/home/ubuntu/hadoopdev/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/lib/ZKClient.java:53:
 warning: no description for @throws
...
[WARNING] 
/home/ubuntu/hadoopdev/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/LeveldbIterator.java:53:
 warning: no @param for options
{quote}






[jira] [Commented] (HADOOP-13366) Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc

2016-07-12 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15374204#comment-15374204
 ] 

Rakesh R commented on HADOOP-13366:
---

Thank you [~ajisakaa] for the reviews. Attached new patch addressing the 
comments.

> Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc
> ---
>
> Key: HADOOP-13366
> URL: https://issues.apache.org/jira/browse/HADOOP-13366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Minor
> Attachments: HADOOP-13366-00.patch, HADOOP-13366-01.patch
>
>
> This jira is to fix the dead link to {{core-default.xml}} in 
> [CommonConfigurationKeysPublic|https://hadoop.apache.org/docs/r2.7.2/api/org/apache/hadoop/fs/CommonConfigurationKeysPublic.html]
>  javadoc.






[jira] [Updated] (HADOOP-13366) Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc

2016-07-12 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HADOOP-13366:
--
Attachment: HADOOP-13366-01.patch

> Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc
> ---
>
> Key: HADOOP-13366
> URL: https://issues.apache.org/jira/browse/HADOOP-13366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Minor
> Attachments: HADOOP-13366-00.patch, HADOOP-13366-01.patch
>
>
> This jira is to fix the dead link to {{core-default.xml}} in 
> [CommonConfigurationKeysPublic|https://hadoop.apache.org/docs/r2.7.2/api/org/apache/hadoop/fs/CommonConfigurationKeysPublic.html]
>  javadoc.






[jira] [Comment Edited] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2016-07-12 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15374164#comment-15374164
 ] 

Tsuyoshi Ozawa edited comment on HADOOP-11361 at 7/13/16 2:07 AM:
--

Thanks [~vinayrpet] and [~yzhangal] for your reviews. My comments are as 
follows:

1. This is related to Yongjun's comment about the null check; I prefer to 
remove the variable {{MetricsCollectorImpl builder}} and to use a local variable 
{{List lastRecs}} for simplicity: 

{code}
List lastRecs = null;
if (getAllMetrics) {
  lastRecs = getMetrics(new MetricsCollectorImpl());
}

synchronized(this) {
  if (lastRecs != null) {
updateAttrCache(lastRecs);
if (getAllMetrics) {
  updateInfoCache(lastRecs);
}
  }
  jmxCacheTS = Time.now();
  lastRecsCleared = true;
}
{code}

In this case, we also need to update {{getMetrics(MetricsCollectorImpl 
builder)}} to return {{List}} instead of 
{{Iterable}}.

2. This is a minor nit, but I think we should add null-check assertions 
against lastRecs in the following methods: {{private void 
updateInfoCache(List lastRecs)}}, {{private int 
updateAttrCache(List lastRecs)}}. It increases readability 
and simplicity. On trunk we could use the {{Nonnull}} 
annotation (https://blogs.oracle.com/java-platform-group/entry/java_8_s_new_type), 
but it was only introduced in JDK 8. {{Preconditions.checkNotNull}} provided by 
Guava is also sufficient.
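As a sketch of the null-check suggestion above, using the JDK's {{Objects.requireNonNull}} (equivalent for this purpose to Guava's {{Preconditions.checkNotNull}}). The method and element types here are simplified stand-ins, not the real MetricsSourceAdapter signatures:

```java
import java.util.List;
import java.util.Objects;

public class NullCheckSketch {
    // Stand-in for updateAttrCache(...): fail fast with a clear message
    // instead of a later, harder-to-diagnose NPE deep in the method body.
    static int updateAttrCache(List<String> lastRecs) {
        Objects.requireNonNull(lastRecs, "lastRecs must not be null");
        return lastRecs.size();
    }

    public static void main(String[] args) {
        System.out.println(updateAttrCache(List.of("rec1", "rec2"))); // prints 2
        try {
            updateAttrCache(null);
        } catch (NullPointerException e) {
            System.out.println(e.getMessage()); // prints the message above
        }
    }
}
```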


was (Author: ozawa):
Thanks [~vinayrpet] and [~yzhangal] for your reviews. My comments are as 
follows:

1. . This is related to Youngjun's comment about null check, I perfer to remove 
the variable {{MetricsCollectorImpl builder}} and to use local variable 
{{List lastRecs}} for simplicity: 

{code}
List lastRecs = null;
if (getAllMetrics) {
  lastRecs = getMetrics(new MetricsCollectorImpl());
}

synchronized(this) {
  if (lastRecs != null) {
updateAttrCache(lastRecs);
if (getAllMetrics) {
  updateInfoCache(lastRecs);
}
  }
  jmxCacheTS = Time.now();
  lastRecsCleared = true;
}
{code}

In this case, we also need to update {{getMetrics(MetricsCollectorImpl 
builder)}} to return {{List}} instead of 
{{Iterable}}.

2. This is minor nits comment, but I think we should add null-check assertions 
against lastRecs in following methods: {{private void 
updateInfoCache(List lastRecs)}}, {{private int 
updateAttrCache(List lastRecs)}}. It increases readability 
and simplicity. On trunk, we can use {{Nonnull}} 
annotation(https://blogs.oracle.com/java-platform-group/entry/java_8_s_new_type),
 but it's been introduced since jdk8. Assert.checkNotNull provided in Guava is 
also enough.

> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.5.1, 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-111361-003.patch, HADOOP-11361-002.patch, 
> HADOOP-11361-004.patch, HADOOP-11361-005.patch, HADOOP-11361-005.patch, 
> HADOOP-11361.patch, HDFS-7487.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> {noformat}






[jira] [Comment Edited] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2016-07-12 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15374170#comment-15374170
 ] 

Tsuyoshi Ozawa edited comment on HADOOP-11361 at 7/13/16 2:06 AM:
--

3. How about adding a comment explaining why we release the lock before calling 
the private method {{getMetrics}} in {{updateJmxCache}}?

{code}
// HADOOP-11361: Release the lock here to avoid deadlock between
// MetricsSystemImpl's lock and MetricsSourceAdapter's lock. 
List lastRecs = null;
if (getAllMetrics) {
  lastRecs = getMetrics(new MetricsCollectorImpl());
}
{code}
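A self-contained sketch of the pattern this comment documents, with field and method names as simplified stand-ins for the MetricsSourceAdapter internals: collect metrics without holding the adapter's lock, then take the lock only to publish the results.

```java
import java.util.ArrayList;
import java.util.List;

public class JmxCacheSketch {
    private long jmxCacheTS;
    private List<String> attrCache = new ArrayList<>();

    // Stand-in for getMetrics(new MetricsCollectorImpl()); in the real code
    // this may call back into MetricsSystemImpl, which holds its own lock.
    private List<String> getMetrics() {
        List<String> recs = new ArrayList<>();
        recs.add("metric-record");
        return recs;
    }

    void updateJmxCache(boolean getAllMetrics) {
        // Collect OUTSIDE the instance lock, so a thread already holding
        // MetricsSystemImpl's lock can never deadlock against this adapter.
        List<String> lastRecs = getAllMetrics ? getMetrics() : null;

        synchronized (this) {
            if (lastRecs != null) {
                attrCache = lastRecs;   // publish results under the lock
            }
            jmxCacheTS = System.currentTimeMillis();
        }
    }

    synchronized int cachedSize() {
        return attrCache.size();
    }

    public static void main(String[] args) {
        JmxCacheSketch sketch = new JmxCacheSketch();
        sketch.updateJmxCache(true);
        System.out.println(sketch.cachedSize()); // prints 1
    }
}
```

The design point is lock ordering: by never nesting "adapter lock inside collector call", the two locks are only ever taken in one order.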




was (Author: ozawa):
3. How about adding a comment why we release lock before calling {{getMetrics}} 
private method in {{updateJmxCache}}?

{code}
// HADOOP-11361: Release lock here for avoid deadlock between
// MetricsSystemImpl's lock and MetricsSourceAdapter's lock. 
List lastRecs = null;
if (getAllMetrics) {
  lastRecs = getMetrics(new MetricsCollectorImpl());
}
{code}



> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.5.1, 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-111361-003.patch, HADOOP-11361-002.patch, 
> HADOOP-11361-004.patch, HADOOP-11361-005.patch, HADOOP-11361-005.patch, 
> HADOOP-11361.patch, HDFS-7487.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> {noformat}






[jira] [Comment Edited] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2016-07-12 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15374170#comment-15374170
 ] 

Tsuyoshi Ozawa edited comment on HADOOP-11361 at 7/13/16 2:01 AM:
--

3. How about adding a comment explaining why we release the lock before calling 
the {{getMetrics}} private method in {{updateJmxCache}}?

{code}
// HADOOP-11361: Release the lock here to avoid deadlock between
// MetricsSystemImpl's lock and MetricsSourceAdapter's lock. 
List lastRecs = null;
if (getAllMetrics) {
  lastRecs = getMetrics(new MetricsCollectorImpl());
}
{code}




was (Author: ozawa):
3. How about adding a comment why we release lock in {{updateJmxCache}}?

{code}
// HADOOP-11361: Release lock here for avoid deadlock between
// MetricsSystemImpl's lock and MetricsSourceAdapter's lock. 
List lastRecs = null;
if (getAllMetrics) {
  lastRecs = getMetrics(new MetricsCollectorImpl());
}
{code}



> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.5.1, 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-111361-003.patch, HADOOP-11361-002.patch, 
> HADOOP-11361-004.patch, HADOOP-11361-005.patch, HADOOP-11361-005.patch, 
> HADOOP-11361.patch, HDFS-7487.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> {noformat}






[jira] [Commented] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2016-07-12 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15374170#comment-15374170
 ] 

Tsuyoshi Ozawa commented on HADOOP-11361:
-

3. How about adding a comment explaining why we release the lock in {{updateJmxCache}}?

{code}
> // HADOOP-11361: Release the lock here to avoid deadlock between
// MetricsSystemImpl's lock and MetricsSourceAdapter's lock. 
List lastRecs = null;
if (getAllMetrics) {
  lastRecs = getMetrics(new MetricsCollectorImpl());
}
{code}



> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.5.1, 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-111361-003.patch, HADOOP-11361-002.patch, 
> HADOOP-11361-004.patch, HADOOP-11361-005.patch, HADOOP-11361-005.patch, 
> HADOOP-11361.patch, HDFS-7487.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> {noformat}






[jira] [Comment Edited] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2016-07-12 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15374164#comment-15374164
 ] 

Tsuyoshi Ozawa edited comment on HADOOP-11361 at 7/13/16 1:58 AM:
--

Thanks [~vinayrpet] and [~yzhangal] for your reviews. My comments are as 
follows:

1. This is related to Yongjun's comment about the null check; I prefer to remove 
the variable {{MetricsCollectorImpl builder}} and to use a local variable 
{{List lastRecs}} for simplicity: 

{code}
List lastRecs = null;
if (getAllMetrics) {
  lastRecs = getMetrics(new MetricsCollectorImpl());
}

synchronized(this) {
  if (lastRecs != null) {
updateAttrCache(lastRecs);
if (getAllMetrics) {
  updateInfoCache(lastRecs);
}
  }
  jmxCacheTS = Time.now();
  lastRecsCleared = true;
}
{code}

In this case, we also need to update {{getMetrics(MetricsCollectorImpl 
builder)}} to return {{List}} instead of 
{{Iterable}}.

2. This is a minor nit, but I think we should add null-check assertions 
against lastRecs in the following methods: {{private void 
updateInfoCache(List lastRecs)}}, {{private int 
updateAttrCache(List lastRecs)}}. It increases readability 
and simplicity. On trunk we could use the {{Nonnull}} 
annotation (https://blogs.oracle.com/java-platform-group/entry/java_8_s_new_type), 
but it was only introduced in JDK 8. {{Preconditions.checkNotNull}} provided by 
Guava is also sufficient.


was (Author: ozawa):
Thanks [~vinayrpet] and [~yzhangal] for your reviews. My comments are as 
follows:

1. . This is related to Youngjun's comment about null check, I perfer to remove 
the variable {{MetricsCollectorImpl builder}} and to use local variable 
{{List lastRecs}} for simplicity: 

{code}
List lastRecs = null;
if (getAllMetrics) {
  lastRecs = getMetrics(new MetricsCollectorImpl());
}

synchronized(this) {
  if (lastRecs != null) {
updateAttrCache(lastRecs);
if (getAllMetrics) {
  updateInfoCache(lastRecs);
}
  }
  jmxCacheTS = Time.now();
  lastRecsCleared = true;
}
{code}

In this case, we also need to update {{getMetrics(MetricsCollectorImpl 
builder)}} to return {{List}} instead of 
{{Iterable}}.

2. This is minor nits comment, but I think we should add null-check assertions 
against lastRecs here: {{private void updateInfoCache(List 
lastRecs)}}, {{private int updateAttrCache(List lastRecs)}}. 
It increases readability and simplicity. On trunk, we can use {{Nonnull}} 
annotation(https://blogs.oracle.com/java-platform-group/entry/java_8_s_new_type),
 but it's been introduced since jdk8. Assert.checkNotNull provided in Guava is 
also enough.

> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.5.1, 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-111361-003.patch, HADOOP-11361-002.patch, 
> HADOOP-11361-004.patch, HADOOP-11361-005.patch, HADOOP-11361-005.patch, 
> HADOOP-11361.patch, HDFS-7487.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> {noformat}






[jira] [Commented] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2016-07-12 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15374164#comment-15374164
 ] 

Tsuyoshi Ozawa commented on HADOOP-11361:
-

Thanks [~vinayrpet] and [~yzhangal] for your reviews. My comments are as 
follows:

1. This is related to Yongjun's comment about the null check; I prefer to remove 
the variable {{MetricsCollectorImpl builder}} and to use a local variable 
{{List lastRecs}} for simplicity: 

{code}
List lastRecs = null;
if (getAllMetrics) {
  lastRecs = getMetrics(new MetricsCollectorImpl());
}

synchronized(this) {
  if (lastRecs != null) {
updateAttrCache(lastRecs);
if (getAllMetrics) {
  updateInfoCache(lastRecs);
}
  }
  jmxCacheTS = Time.now();
  lastRecsCleared = true;
}
{code}

In this case, we also need to update {{getMetrics(MetricsCollectorImpl 
builder)}} to return {{List}} instead of 
{{Iterable}}.

2. This is a minor nit, but I think we should add null-check assertions 
against lastRecs here: {{private void updateInfoCache(List 
lastRecs)}}, {{private int updateAttrCache(List lastRecs)}}. 
It increases readability and simplicity. On trunk we could use the {{Nonnull}} 
annotation (https://blogs.oracle.com/java-platform-group/entry/java_8_s_new_type), 
but it was only introduced in JDK 8. {{Preconditions.checkNotNull}} provided by 
Guava is also sufficient.

> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.5.1, 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-111361-003.patch, HADOOP-11361-002.patch, 
> HADOOP-11361-004.patch, HADOOP-11361-005.patch, HADOOP-11361-005.patch, 
> HADOOP-11361.patch, HDFS-7487.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> {noformat}






[jira] [Commented] (HADOOP-13366) Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc

2016-07-12 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15374078#comment-15374078
 ] 

Daniel Templeton commented on HADOOP-13366:
---

Yeah, I was wondering that myself.  And shouldn't these all use @see tags?

> Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc
> ---
>
> Key: HADOOP-13366
> URL: https://issues.apache.org/jira/browse/HADOOP-13366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Minor
> Attachments: HADOOP-13366-00.patch
>
>
> This jira is to fix the dead link to {{core-default.xml}} in 
> [CommonConfigurationKeysPublic|https://hadoop.apache.org/docs/r2.7.2/api/org/apache/hadoop/fs/CommonConfigurationKeysPublic.html]
>  javadoc.






[jira] [Commented] (HADOOP-13315) FileContext#umask is not initialized properly

2016-07-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15374077#comment-15374077
 ] 

Hudson commented on HADOOP-13315:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #10084 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10084/])
HADOOP-13315. FileContext#umask is not initialized properly. (John Zhuge) 
(lei: rev a290a98b6ab2424ae9b7faab0ce9496d09ca46f3)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java


> FileContext#umask is not initialized properly
> -
>
> Key: HADOOP-13315
> URL: https://issues.apache.org/jira/browse/HADOOP-13315
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13315.001.patch
>
>
> Notice field {{umask}} is not set to parameter {{theUmask}} and {{theUmask}} 
> is unused.
> {code:title=FileContext.java}
>   private FileContext(final AbstractFileSystem defFs,
> final FsPermission theUmask, final Configuration aConf) {
> defaultFS = defFs;
> umask = FsPermission.getUMask(aConf);
> conf = aConf;
> ...
>   public static FileContext getFileContext(final AbstractFileSystem defFS,
> final Configuration aConf) {
> return new FileContext(defFS, FsPermission.getUMask(aConf), aConf);
>   }
> {code}
> Proposal:
> * Set {{umask}} to {{theUmask}}. Since the only caller {{getFileContext}} 
> already passes the same value in {{theUmask}}, there is no change in behavior.
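The proposal above can be sketched as follows. These are simplified stand-in types, not the actual FileContext code: the fix assigns the constructor parameter instead of recomputing the umask inside the constructor.

```java
public class UmaskSketch {
    // Stand-in for FsPermission.getUMask(conf).
    static int getUMask() {
        return 0022;
    }

    final int umask;

    // Proposed fix: use the parameter the caller already computed,
    // instead of calling getUMask() again and leaving theUmask unused.
    private UmaskSketch(int theUmask) {
        this.umask = theUmask;
    }

    static UmaskSketch getFileContext() {
        // Only caller; it passes the same value, so behavior is unchanged.
        return new UmaskSketch(getUMask());
    }

    public static void main(String[] args) {
        System.out.println(UmaskSketch.getFileContext().umask == getUMask()); // prints true
    }
}
```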






[jira] [Updated] (HADOOP-13343) globStatus returns null for file path that exists but is filtered

2016-07-12 Thread Colin P. McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe updated HADOOP-13343:
-
Affects Version/s: (was: 2.7.2)
   2.4.0

> globStatus returns null for file path that exists but is filtered
> -
>
> Key: HADOOP-13343
> URL: https://issues.apache.org/jira/browse/HADOOP-13343
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.4.0
>Reporter: Jason Lowe
>Priority: Minor
> Attachments: HADOOP-13343.001.patch
>
>
> If a file path without globs is passed to globStatus and the file exists but 
> the specified input filter suppresses it, then globStatus will return null 
> instead of an empty array.  This makes it impossible for the caller to 
> discern the difference between the file not existing at all vs. being 
> suppressed by the filter, and is inconsistent with the way it handles globs 
> that match an existing dir but fail to match anything within the dir.
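To illustrate the ambiguity described above with a hypothetical stand-in (not the Hadoop FileSystem API): a caller cannot tell "path missing" from "everything filtered out" when both come back as null, which is why the empty array matters.

```java
public class GlobSketch {
    // Hypothetical stand-in for globStatus(path, filter):
    // null means "path does not exist"; an empty array means
    // "path exists but the filter suppressed every match".
    static String[] globStatus(boolean pathExists, boolean filterSuppresses) {
        if (!pathExists) {
            return null;
        }
        return filterSuppresses ? new String[0] : new String[] {"/data/part-0"};
    }

    public static void main(String[] args) {
        System.out.println(globStatus(false, false) == null);  // true: missing path
        System.out.println(globStatus(true, true).length);     // 0: filtered out
        System.out.println(globStatus(true, false).length);    // 1: matched
    }
}
```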






[jira] [Commented] (HADOOP-13290) Appropriate use of generics in FairCallQueue

2016-07-12 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15374006#comment-15374006
 ] 

Konstantin Shvachko commented on HADOOP-13290:
--

+1 from me too.

> Appropriate use of generics in FairCallQueue
> 
>
> Key: HADOOP-13290
> URL: https://issues.apache.org/jira/browse/HADOOP-13290
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Jonathan Hung
>  Labels: newbie++
> Attachments: HADOOP-13290.001.patch, HADOOP-13290.002.patch
>
>
> # {{BlockingQueue}} is intermittently used with and without generic 
> parameters in {{FairCallQueue}} class. Should be parameterized.
> # Same for {{FairCallQueue}}. Should be parameterized. Could be a bit more 
> tricky for that one.






[jira] [Commented] (HADOOP-13315) FileContext#umask is not initialized properly

2016-07-12 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373991#comment-15373991
 ] 

John Zhuge commented on HADOOP-13315:
-

Thanks [~eddyxu]!

> FileContext#umask is not initialized properly
> -
>
> Key: HADOOP-13315
> URL: https://issues.apache.org/jira/browse/HADOOP-13315
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13315.001.patch
>
>
> Notice field {{umask}} is not set to parameter {{theUmask}} and {{theUmask}} 
> is unused.
> {code:title=FileContext.java}
>   private FileContext(final AbstractFileSystem defFs,
> final FsPermission theUmask, final Configuration aConf) {
> defaultFS = defFs;
> umask = FsPermission.getUMask(aConf);
> conf = aConf;
> ...
>   public static FileContext getFileContext(final AbstractFileSystem defFS,
> final Configuration aConf) {
> return new FileContext(defFS, FsPermission.getUMask(aConf), aConf);
>   }
> {code}
> Proposal:
> * Set {{umask}} to {{theUmask}}. Since the only caller {{getFileContext}} 
> already passes the same value in {{theUmask}}, there is no change in behavior.






[jira] [Updated] (HADOOP-13315) FileContext#umask is not initialized properly

2016-07-12 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-13315:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha1
   2.9.0
   Status: Resolved  (was: Patch Available)

+1. This private constructor is only called in {{FileContext#getFileContext}}, 
which already performs a duplicate {{FsPermission.getUMask(...)}} call. 

Thanks for finding the bug, John.

> FileContext#umask is not initialized properly
> -
>
> Key: HADOOP-13315
> URL: https://issues.apache.org/jira/browse/HADOOP-13315
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13315.001.patch
>
>
> Notice field {{umask}} is not set to parameter {{theUmask}} and {{theUmask}} 
> is unused.
> {code:title=FileContext.java}
>   private FileContext(final AbstractFileSystem defFs,
> final FsPermission theUmask, final Configuration aConf) {
> defaultFS = defFs;
> umask = FsPermission.getUMask(aConf);
> conf = aConf;
> ...
>   public static FileContext getFileContext(final AbstractFileSystem defFS,
> final Configuration aConf) {
> return new FileContext(defFS, FsPermission.getUMask(aConf), aConf);
>   }
> {code}
> Proposal:
> * Set {{umask}} to {{theUmask}}. Since the only caller {{getFileContext}} 
> already passes the same value in {{theUmask}}, there is no change in behavior.






[jira] [Commented] (HADOOP-13368) DFSOpsCountStatistics$OpType#fromSymbol and s3a.Statistic#fromSymbol should be O(1) operation

2016-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373968#comment-15373968
 ] 

Hadoop QA commented on HADOOP-13368:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
1s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817538/HADOOP-13368.000.patch
 |
| JIRA Issue | HADOOP-13368 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fffa02ee191b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bf6f4a3 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9972/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-tools/hadoop-aws 
U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9972/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DFSOpsCountStatistics$OpType#fromSymbol and s3a.Statistic#fromSymbol should 
> be O(1) operation
> 

[jira] [Commented] (HADOOP-13301) Millisecond timestamp for FsShell console log

2016-07-12 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373966#comment-15373966
 ] 

John Zhuge commented on HADOOP-13301:
-

Makes sense. Thanks [~eddyxu].

> Millisecond timestamp for FsShell console log
> -
>
> Key: HADOOP-13301
> URL: https://issues.apache.org/jira/browse/HADOOP-13301
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13301.001.patch
>
>
> The log message timestamp on the FsShell console shows only seconds. 
> {noformat}
> $ export HADOOP_ROOT_LOGGER=TRACE,console
> $ hdfs dfs -rm -skipTrash /tmp/2G*
> 16/06/20 16:00:03 DEBUG util.Shell: setsid exited with exit code 0
> {noformat}
> Would like to see milliseconds for quick performance tuning.
> {noformat}
> 2016-06-20 16:01:42,588 DEBUG util.Shell: setsid exited with exit code 0
> {noformat}
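For reference, the millisecond format shown in the second snippet matches log4j's ISO8601 date pattern. A minimal console-appender configuration along these lines (assuming log4j 1.x, which Hadoop 2.x uses; property names follow the standard log4j.properties conventions) would be:

```properties
# Hypothetical log4j 1.x console appender; the ISO8601 date format includes
# milliseconds, e.g. "2016-06-20 16:01:42,588".
log4j.rootLogger=TRACE, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n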






[jira] [Commented] (HADOOP-11540) Raw Reed-Solomon coder using Intel ISA-L library

2016-07-12 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373921#comment-15373921
 ] 

Kai Zheng commented on HADOOP-11540:


Hi [~atm],

Could you help take another look at the update? The test failure isn't 
relevant. Thanks!

> Raw Reed-Solomon coder using Intel ISA-L library
> 
>
> Key: HADOOP-11540
> URL: https://issues.apache.org/jira/browse/HADOOP-11540
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Zhe Zhang
>Assignee: Kai Zheng
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-11540-initial.patch, HADOOP-11540-v1.patch, 
> HADOOP-11540-v10.patch, HADOOP-11540-v11.patch, HADOOP-11540-v12.patch, 
> HADOOP-11540-v2.patch, HADOOP-11540-v4.patch, HADOOP-11540-v5.patch, 
> HADOOP-11540-v6.patch, HADOOP-11540-v7.patch, HADOOP-11540-v8.patch, 
> HADOOP-11540-v9.patch, HADOOP-11540-with-11996-codes.patch, Native Erasure 
> Coder Performance - Intel ISAL-v1.pdf
>
>
> This is to provide RS codec implementation using Intel ISA-L library for 
> encoding and decoding.






[jira] [Commented] (HADOOP-13301) Millisecond timestamp for FsShell console log

2016-07-12 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373906#comment-15373906
 ] 

Lei (Eddy) Xu commented on HADOOP-13301:


+1. It is a trivial change, but incompatible due to the log format change.

I will commit it to trunk, but not branch-2, by EOD tomorrow if there are no 
objections.

> Millisecond timestamp for FsShell console log
> -
>
> Key: HADOOP-13301
> URL: https://issues.apache.org/jira/browse/HADOOP-13301
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13301.001.patch
>
>
> The log message timestamp on the FsShell console shows only seconds. 
> {noformat}
> $ export HADOOP_ROOT_LOGGER=TRACE,console
> $ hdfs dfs -rm -skipTrash /tmp/2G*
> 16/06/20 16:00:03 DEBUG util.Shell: setsid exited with exit code 0
> {noformat}
> Would like to see milliseconds for quick performance tuning.
> {noformat}
> 2016-06-20 16:01:42,588 DEBUG util.Shell: setsid exited with exit code 0
> {noformat}






[jira] [Commented] (HADOOP-13312) Update CHANGES.txt to reflect all the changes in branch-2.7

2016-07-12 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373856#comment-15373856
 ] 

Akira Ajisaka commented on HADOOP-13312:


Thanks [~vinodkv] for updating the patch and committing.

> Update CHANGES.txt to reflect all the changes in branch-2.7
> ---
>
> Key: HADOOP-13312
> URL: https://issues.apache.org/jira/browse/HADOOP-13312
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 2.7.3
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Fix For: 2.7.3
>
> Attachments: HADOOP-13312-branch-2.7.00.patch, 
> HADOOP-13312-branch-2.7.01.patch
>
>
> When committing to branch-2.7, we need to edit CHANGES.txt. However, there 
> are some commits to branch-2.7 without editing CHANGES.txt. We need to update 
> the change log.






[jira] [Updated] (HADOOP-13312) Update CHANGES.txt to reflect all the changes in branch-2.7

2016-07-12 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-13312:
-
Fix Version/s: 2.7.3

> Update CHANGES.txt to reflect all the changes in branch-2.7
> ---
>
> Key: HADOOP-13312
> URL: https://issues.apache.org/jira/browse/HADOOP-13312
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 2.7.3
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Fix For: 2.7.3
>
> Attachments: HADOOP-13312-branch-2.7.00.patch, 
> HADOOP-13312-branch-2.7.01.patch
>
>
> When committing to branch-2.7, we need to edit CHANGES.txt. However, there 
> are some commits to branch-2.7 without editing CHANGES.txt. We need to update 
> the change log.






[jira] [Resolved] (HADOOP-13312) Update CHANGES.txt to reflect all the changes in branch-2.7

2016-07-12 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli resolved HADOOP-13312.
--
  Resolution: Fixed
Hadoop Flags: Reviewed

Committed this to branch-2.7 and branch-2.7.3.

> Update CHANGES.txt to reflect all the changes in branch-2.7
> ---
>
> Key: HADOOP-13312
> URL: https://issues.apache.org/jira/browse/HADOOP-13312
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 2.7.3
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-13312-branch-2.7.00.patch, 
> HADOOP-13312-branch-2.7.01.patch
>
>
> When committing to branch-2.7, we need to edit CHANGES.txt. However, there 
> are some commits to branch-2.7 without editing CHANGES.txt. We need to update 
> the change log.






[jira] [Commented] (HADOOP-13312) Update CHANGES.txt to reflect all the changes in branch-2.7

2016-07-12 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373840#comment-15373840
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-13312:
--

More tickets missing: HADOOP-13350, HADOOP-12682, HADOOP-12636 and HDFS-10488.

Committing the patch with those entries added.

> Update CHANGES.txt to reflect all the changes in branch-2.7
> ---
>
> Key: HADOOP-13312
> URL: https://issues.apache.org/jira/browse/HADOOP-13312
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 2.7.3
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-13312-branch-2.7.00.patch, 
> HADOOP-13312-branch-2.7.01.patch
>
>
> When committing to branch-2.7, we need to edit CHANGES.txt. However, there 
> are some commits to branch-2.7 without editing CHANGES.txt. We need to update 
> the change log.






[jira] [Updated] (HADOOP-13368) DFSOpsCountStatistics$OpType#fromSymbol and s3a.Statistic#fromSymbol should be O(1) operation

2016-07-12 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13368:
---
Status: Patch Available  (was: Open)

> DFSOpsCountStatistics$OpType#fromSymbol and s3a.Statistic#fromSymbol should 
> be O(1) operation
> -
>
> Key: HADOOP-13368
> URL: https://issues.apache.org/jira/browse/HADOOP-13368
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13368.000.patch
>
>
> To perform a lookup, {{DFSOpsCountStatistics$OpType#fromSymbol}} and 
> {{s3a.Statistic#fromSymbol}} iterate over all the enum values to find the entry 
> by its symbol. Usages of {{fromSymbol()}} include {{isTracked()}} and 
> {{getLong()}}. As there are dozens of enum entries, it is worthwhile to make 
> these two similar operations O(1). This is especially true if a downstream app 
> probes a dozen stats in an outer loop (see [TEZ-3331]).






[jira] [Updated] (HADOOP-13368) DFSOpsCountStatistics$OpType#fromSymbol and s3a.Statistic#fromSymbol should be O(1) operation

2016-07-12 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13368:
---
Attachment: HADOOP-13368.000.patch

The v0 patch uses a pre-built private static final hashmap to speed up the 
{{fromSymbol()}} lookup.
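As a hedged sketch of that approach (the enum name and symbol values below are illustrative, not the actual patch contents), the map is populated once in a static initializer, so every subsequent lookup is a single hash probe instead of a scan over {{values()}}:

```java
// Illustrative enum showing the pre-built lookup map described above; the
// constants and symbols are hypothetical, not the real Statistic entries.
import java.util.HashMap;
import java.util.Map;

public enum Statistic {
    STREAM_OPENED("streamOpened"),
    STREAM_CLOSED("streamClosed");

    // Built once when the enum class is initialized (enum constants are
    // created before this static block runs), so fromSymbol() no longer
    // scans values() on every call.
    private static final Map<String, Statistic> SYMBOL_MAP = new HashMap<>();
    static {
        for (Statistic s : values()) {
            SYMBOL_MAP.put(s.symbol, s);
        }
    }

    private final String symbol;

    Statistic(String symbol) {
        this.symbol = symbol;
    }

    public String getSymbol() {
        return symbol;
    }

    // O(1) map lookup; returns null for an unknown symbol.
    public static Statistic fromSymbol(String symbol) {
        return SYMBOL_MAP.get(symbol);
    }
}
```

Building the map eagerly in a static initializer is safe here because enum constants are fully constructed before the static block executes, and the map is never mutated afterwards, so no synchronization is needed.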

> DFSOpsCountStatistics$OpType#fromSymbol and s3a.Statistic#fromSymbol should 
> be O(1) operation
> -
>
> Key: HADOOP-13368
> URL: https://issues.apache.org/jira/browse/HADOOP-13368
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13368.000.patch
>
>
> To perform a lookup, {{DFSOpsCountStatistics$OpType#fromSymbol}} and 
> {{s3a.Statistic#fromSymbol}} iterate over all the enum values to find the entry 
> by its symbol. Usages of {{fromSymbol()}} include {{isTracked()}} and 
> {{getLong()}}. As there are dozens of enum entries, it is worthwhile to make 
> these two similar operations O(1). This is especially true if a downstream app 
> probes a dozen stats in an outer loop (see [TEZ-3331]).






[jira] [Created] (HADOOP-13368) DFSOpsCountStatistics$OpType#fromSymbol and s3a.Statistic#fromSymbol should be O(1) operation

2016-07-12 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-13368:
--

 Summary: DFSOpsCountStatistics$OpType#fromSymbol and 
s3a.Statistic#fromSymbol should be O(1) operation
 Key: HADOOP-13368
 URL: https://issues.apache.org/jira/browse/HADOOP-13368
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.8.0
Reporter: Mingliang Liu
Assignee: Mingliang Liu


To perform a lookup, {{DFSOpsCountStatistics$OpType#fromSymbol}} and 
{{s3a.Statistic#fromSymbol}} iterate over all the enum values to find the entry 
by its symbol. Usages of {{fromSymbol()}} include {{isTracked()}} and 
{{getLong()}}. 

As there are dozens of enum entries, it is worthwhile to make these two similar 
operations O(1). This is especially true if a downstream app probes a dozen 
stats in an outer loop (see [TEZ-3331]).






[jira] [Commented] (HADOOP-13352) Make X-FRAME-OPTIONS configurable in HttpServer2

2016-07-12 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373758#comment-15373758
 ] 

Jitendra Nath Pandey commented on HADOOP-13352:
---

Committed this to branch-2.8 as well.

> Make X-FRAME-OPTIONS configurable in HttpServer2
> 
>
> Key: HADOOP-13352
> URL: https://issues.apache.org/jira/browse/HADOOP-13352
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net, security
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: 2.8.0
>
> Attachments: HADOOP-13352.001.patch, HADOOP-13352.002.patch
>
>
> In HADOOP-12964 we introduced support for X-FRAME-OPTIONS in HttpServer2. 
> This JIRA makes it configurable.






[jira] [Commented] (HADOOP-12964) Http server vulnerable to clickjacking

2016-07-12 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373757#comment-15373757
 ] 

Jitendra Nath Pandey commented on HADOOP-12964:
---

Committed this to branch-2.8.

> Http server vulnerable to clickjacking 
> ---
>
> Key: HADOOP-12964
> URL: https://issues.apache.org/jira/browse/HADOOP-12964
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Fix For: 2.8.0
>
> Attachments: hadoop-12964.001.patch, hadoop-12964.002.patch, 
> hadoop-12964.003.patch
>
>
> Nessus report shows a medium level issue that "Web Application Potentially 
> Vulnerable to Clickjacking" with the description as follows:
> "The remote web server does not set an X-Frame-Options response header in all 
> content responses. This could potentially expose the site to a clickjacking 
> or UI Redress attack wherein an attacker can trick a user into clicking an 
> area of the vulnerable page that is different than what the user perceives 
> the page to be. This can result in a user performing fraudulent or malicious 
> transactions"
> We could add X-Frame-Options, supported in all major browsers, in the Http 
> response header to mitigate the issue.






[jira] [Updated] (HADOOP-13352) Make X-FRAME-OPTIONS configurable in HttpServer2

2016-07-12 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HADOOP-13352:
--
Fix Version/s: (was: 2.9.0)
   2.8.0

> Make X-FRAME-OPTIONS configurable in HttpServer2
> 
>
> Key: HADOOP-13352
> URL: https://issues.apache.org/jira/browse/HADOOP-13352
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net, security
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: 2.8.0
>
> Attachments: HADOOP-13352.001.patch, HADOOP-13352.002.patch
>
>
> In HADOOP-12964 we introduced support for X-FRAME-OPTIONS in HttpServer2. 
> This JIRA makes it configurable.






[jira] [Updated] (HADOOP-12964) Http server vulnerable to clickjacking

2016-07-12 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HADOOP-12964:
--
Fix Version/s: (was: 2.9.0)
   2.8.0

> Http server vulnerable to clickjacking 
> ---
>
> Key: HADOOP-12964
> URL: https://issues.apache.org/jira/browse/HADOOP-12964
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Fix For: 2.8.0
>
> Attachments: hadoop-12964.001.patch, hadoop-12964.002.patch, 
> hadoop-12964.003.patch
>
>
> Nessus report shows a medium level issue that "Web Application Potentially 
> Vulnerable to Clickjacking" with the description as follows:
> "The remote web server does not set an X-Frame-Options response header in all 
> content responses. This could potentially expose the site to a clickjacking 
> or UI Redress attack wherein an attacker can trick a user into clicking an 
> area of the vulnerable page that is different than what the user perceives 
> the page to be. This can result in a user performing fraudulent or malicious 
> transactions"
> We could add X-Frame-Options, supported in all major browsers, in the Http 
> response header to mitigate the issue.






[jira] [Commented] (HADOOP-13366) Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc

2016-07-12 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373733#comment-15373733
 ] 

Akira Ajisaka commented on HADOOP-13366:


Thanks Rakesh for reporting this and providing the patch! I'm thinking 
core-default.html should be core-default.xml in the following code.
{code}
  /**
   * See core-default.xml .
   */
{code}


> Fix dead link in o.a.h.fs.CommonConfigurationKeysPublic javadoc
> ---
>
> Key: HADOOP-13366
> URL: https://issues.apache.org/jira/browse/HADOOP-13366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Minor
> Attachments: HADOOP-13366-00.patch
>
>
> This jira is to fix the dead link to {{core-default.xml}} in 
> [CommonConfigurationKeysPublic|https://hadoop.apache.org/docs/r2.7.2/api/org/apache/hadoop/fs/CommonConfigurationKeysPublic.html]
>  javadoc.






[jira] [Commented] (HADOOP-13344) Add option to exclude Hadoop's SLF4J binding

2016-07-12 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373711#comment-15373711
 ] 

Sean Busbey commented on HADOOP-13344:
--

Downstream folks ought to be using hadoop-client, so we'd have to change that. 
Given recent developments, I guess we'd need to change hdfs-client as well.

> Add option to exclude Hadoop's SLF4J binding
> 
>
> Key: HADOOP-13344
> URL: https://issues.apache.org/jira/browse/HADOOP-13344
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: bin, scripts
>Affects Versions: 2.8.0, 2.7.2
>Reporter: Thomas Poepping
>Assignee: Thomas Poepping
>  Labels: patch
> Attachments: HADOOP-13344.patch
>
>
> If another application that uses the Hadoop classpath brings in its own SLF4J 
> binding for logging, and that jar is not the exact same as the one brought in 
> by Hadoop, then there will be a conflict between logging jars between the two 
> classpaths. This patch introduces an optional setting to remove Hadoop's 
> SLF4J binding from the classpath, to get rid of this problem.
> This patch should be applied to 2.8.0, as bin/ and hadoop-config.sh structure 
> has been changed in 3.0.0.






[jira] [Commented] (HADOOP-13298) Fix the leftover L files in hadoop-build-tools/src/main/resources/META-INF/

2016-07-12 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373692#comment-15373692
 ] 

Xiao Chen commented on HADOOP-13298:


Hi [~ozawa],
Thanks for working on this!
Unfortunately, this is what I tried after hearing from Sean in the email 
discussion, and it didn't work.

bq. 2. tar.gz includes LICENSE.txt and NOTICE.txt. 
What we need is to include the LICENSE and NOTICE files in the jars (the 
META-INF sections in particular). This can be verified by running the following 
script in the hadoop-dist dir after {{mvn package}}.
{code}
#!/bin/sh
# Check every built Hadoop jar for LICENSE and NOTICE entries.
for f in $(find ./target -name "hadoop*SNAPSHOT.jar"); do
    jar -tf "$f" | grep "LICENSE" > /dev/null
    RET1=$?
    jar -tf "$f" | grep "NOTICE" > /dev/null
    RET2=$?

    if [ $RET1 -ne 0 ] && [ $RET2 -ne 0 ]; then
        echo "$f missing LICENSE and NOTICE!"
    elif [ $RET1 -ne 0 ]; then
        echo "$f missing LICENSE!"
    elif [ $RET2 -ne 0 ]; then
        echo "$f missing NOTICE!"
    else
        echo "$f is ok"
    fi
done
{code}
The LICENSE and NOTICE files are no longer copied into the jar with this change. :(

> Fix the leftover L files in hadoop-build-tools/src/main/resources/META-INF/
> -
>
> Key: HADOOP-13298
> URL: https://issues.apache.org/jira/browse/HADOOP-13298
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Xiao Chen
>Assignee: Sean Busbey
>Priority: Minor
> Attachments: HADOOP-13298.001.patch, HADOOP-13298.002.patch
>
>
> After HADOOP-12893, an extra copy of LICENSE.txt and NOTICE.txt exists in 
> {{hadoop-build-tools/src/main/resources/META-INF/}} after build. We should 
> remove it and do it the maven way.
> Details in 
> https://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201606.mbox/%3CCAFS=wjwx8nmqj6fzxuzzbwraeoggfr+_ywl_mkfp4lnuxpg...@mail.gmail.com%3E
> Thanks [~ste...@apache.org] for raising the issue and [~busbey] for offering 
> the help!






[jira] [Updated] (HADOOP-13298) Fix the leftover L files in hadoop-build-tools/src/main/resources/META-INF/

2016-07-12 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13298:
---
Description: 
After HADOOP-12893, an extra copy of LICENSE.txt and NOTICE.txt exists in 
{{hadoop-build-tools/src/main/resources/META-INF/}} after build. We should 
remove it and do it the maven way.

Details in 
https://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201606.mbox/%3CCAFS=wjwx8nmqj6fzxuzzbwraeoggfr+_ywl_mkfp4lnuxpg...@mail.gmail.com%3E

Thanks [~ste...@apache.org] for raising the issue and [~busbey] for offering 
the help!

  was:
After HADOOP-12893, an extra copy of LICENSE.txt and NOTICE.txt exists in 
{{hadoop-build-tools/src/main/resources/META-INF/}} after build. We should 
remove it and do it the maven way.

Details in 
https://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201606.mbox/browser

Thanks [~ste...@apache.org] for raising the issue and [~busbey] for offering 
the help!


> Fix the leftover L files in hadoop-build-tools/src/main/resources/META-INF/
> -
>
> Key: HADOOP-13298
> URL: https://issues.apache.org/jira/browse/HADOOP-13298
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Xiao Chen
>Assignee: Sean Busbey
>Priority: Minor
> Attachments: HADOOP-13298.001.patch, HADOOP-13298.002.patch
>
>
> After HADOOP-12893, an extra copy of LICENSE.txt and NOTICE.txt exists in 
> {{hadoop-build-tools/src/main/resources/META-INF/}} after build. We should 
> remove it and do it the maven way.
> Details in 
> https://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201606.mbox/%3CCAFS=wjwx8nmqj6fzxuzzbwraeoggfr+_ywl_mkfp4lnuxpg...@mail.gmail.com%3E
> Thanks [~ste...@apache.org] for raising the issue and [~busbey] for offering 
> the help!






[jira] [Comment Edited] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2016-07-12 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373623#comment-15373623
 ] 

Yongjun Zhang edited comment on HADOOP-11361 at 7/12/16 8:20 PM:
-

Thanks [~ozawa] and [~vinayrpet] for the comments and Vinay for the updated 
patch!

The latest patch looks good to me, except for two little things:
1. The {{builder}} could be NULL here
{code}
   synchronized(this) {
  List lastRecs = builder.getRecords();
  updateAttrCache(lastRecs);
  if (getAllMetrics) {
updateInfoCache(lastRecs);
  }
{code}
The calls need to be conditioned by {{getAllMetrics}}.

2. The revision name should be 006 to avoid confusion.

Thanks a lot.



was (Author: yzhangal):
Thanks [~ozawa] and [~vinayrpet] for the comments and Vinay for the updated 
patch!

The latest patch looks good to me, except for two little things:
1. The builder could be NULL here
{code}
   synchronized(this) {
  List lastRecs = builder.getRecords();
{code}
The calls need to be conditioned by {{getAllMetrics}}.

2. The revision name should be 006 to avoid confusion.

Thanks a lot.


> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.5.1, 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-111361-003.patch, HADOOP-11361-002.patch, 
> HADOOP-11361-004.patch, HADOOP-11361-005.patch, HADOOP-11361-005.patch, 
> HADOOP-11361.patch, HDFS-7487.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> {noformat}






[jira] [Commented] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2016-07-12 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373623#comment-15373623
 ] 

Yongjun Zhang commented on HADOOP-11361:


Thanks [~ozawa] and [~vinayrpet] for the comments and Vinay for the updated 
patch!

The latest patch looks good to me, except for two little things:
1. The builder could be NULL here
{code}
   synchronized(this) {
  List lastRecs = builder.getRecords();
{code}
The call needs to be conditioned on {{getAllMetrics}}.

2. The revision name should be 006 to avoid confusion.

Thanks a lot.
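A minimal sketch of the guard being asked for, with simplified stand-in names rather than the real {{MetricsSourceAdapter}} internals: the builder is only assigned when the metrics were actually re-fetched, so the read of {{builder.getRecords()}} has to be gated on the same {{getAllMetrics}} flag or it can dereference null.

```java
import java.util.Collections;
import java.util.List;

// Simplified stand-in for updateJmxCache(): builder stays null unless
// getAllMetrics is true, so reading it must be gated on the same flag.
class JmxCacheSketch {
    private List<String> builder;            // null until metrics are fetched
    private long jmxCacheTS = 0;
    private static final long JMX_CACHE_TTL = 10_000;

    List<String> updateJmxCache(long now) {
        boolean getAllMetrics = false;
        if (now - jmxCacheTS >= JMX_CACHE_TTL) {
            // real code would call source.getMetrics(...) via the builder
            builder = Collections.singletonList("record");
            getAllMetrics = true;
        }
        synchronized (this) {
            if (getAllMetrics) {             // guard: builder may be null otherwise
                jmxCacheTS = now;
                return builder;              // stands in for builder.getRecords()
            }
            return Collections.emptyList();  // cache still fresh, no rebuild
        }
    }
}
```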


> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.5.1, 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-111361-003.patch, HADOOP-11361-002.patch, 
> HADOOP-11361-004.patch, HADOOP-11361-005.patch, HADOOP-11361-005.patch, 
> HADOOP-11361.patch, HDFS-7487.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2016-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373462#comment-15373462
 ] 

Hadoop QA commented on HADOOP-11361:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
48s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
22s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Possible null pointer dereference of builder in 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache()  
Dereferenced at MetricsSourceAdapter.java:builder in 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache()  
Dereferenced at MetricsSourceAdapter.java:[line 186] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817476/HADOOP-11361-005.patch
 |
| JIRA Issue | HADOOP-11361 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3625f50cbd4b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7705812 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9971/artifact/patchprocess/new-findbugs-hadoop-common-project_hadoop-common.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9971/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9971/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> 

[jira] [Commented] (HADOOP-13344) Add option to exclude Hadoop's SLF4J binding

2016-07-12 Thread Thomas Poepping (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373399#comment-15373399
 ] 

Thomas Poepping commented on HADOOP-13344:
--

I don't think we would want (or need to) change dependency information for 
anything but common, because that's the classpath most often used by other 
applications. I'm taking a deeper look into it. It may be that the best 
solution is to remove the slf4j binding from every classpath, and I agree, that 
would be a big change.



> Add option to exclude Hadoop's SLF4J binding
> 
>
> Key: HADOOP-13344
> URL: https://issues.apache.org/jira/browse/HADOOP-13344
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: bin, scripts
>Affects Versions: 2.8.0, 2.7.2
>Reporter: Thomas Poepping
>Assignee: Thomas Poepping
>  Labels: patch
> Attachments: HADOOP-13344.patch
>
>
> If another application that uses the Hadoop classpath brings in its own SLF4J 
> binding for logging, and that jar is not the exact same as the one brought in 
> by Hadoop, then there will be a conflict between logging jars between the two 
> classpaths. This patch introduces an optional setting to remove Hadoop's 
> SLF4J binding from the classpath, to get rid of this problem.
> This patch should be applied to 2.8.0, as bin/ and hadoop-config.sh structure 
> has been changed in 3.0.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2016-07-12 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-11361:
---
Attachment: HADOOP-11361-005.patch

Fixed compilation errors :)

> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.5.1, 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-111361-003.patch, HADOOP-11361-002.patch, 
> HADOOP-11361-004.patch, HADOOP-11361-005.patch, HADOOP-11361-005.patch, 
> HADOOP-11361.patch, HDFS-7487.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail

2016-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373300#comment-15373300
 ] 

Hadoop QA commented on HADOOP-13240:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 34 unchanged - 11 fixed = 34 total (was 45) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m  
4s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817465/HADOOP-13240.002.patch
 |
| JIRA Issue | HADOOP-13240 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ebe9c979f312 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7705812 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9969/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9969/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestAclCommands.testSetfaclValidations fail
> ---
>
> Key: HADOOP-13240
> URL: https://issues.apache.org/jira/browse/HADOOP-13240
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.7.1
> Environment: hadoop 2.4.1,as6.5
>Reporter: linbao111
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-13240.001.patch, HADOOP-13240.002.patch
>
>
> mvn 

[jira] [Commented] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2016-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373294#comment-15373294
 ] 

Hadoop QA commented on HADOOP-11361:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
39s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
59s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 59s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 45s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817468/HADOOP-11361-005.patch
 |
| JIRA Issue | HADOOP-11361 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c3fa1317534a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7705812 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9970/artifact/patchprocess/patch-mvninstall-hadoop-common-project_hadoop-common.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9970/artifact/patchprocess/patch-compile-root.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9970/artifact/patchprocess/patch-compile-root.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9970/artifact/patchprocess/patch-mvnsite-hadoop-common-project_hadoop-common.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9970/artifact/patchprocess/patch-findbugs-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9970/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 

[jira] [Moved] (HADOOP-13367) Support more types of Store in S3Native File System

2016-07-12 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran moved HDFS-10615 to HADOOP-13367:


Component/s: (was: fs)
 fs/s3
Key: HADOOP-13367  (was: HDFS-10615)
Project: Hadoop Common  (was: Hadoop HDFS)

> Support more types of Store in S3Native File System
> ---
>
> Key: HADOOP-13367
> URL: https://issues.apache.org/jira/browse/HADOOP-13367
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: liu chang
>Priority: Minor
>
> There are a lot of object storage services whose protocol is similar to S3. 
> We could add more types of NativeFileSystemStore to support those services.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13367) Support more types of Store in S3Native File System

2016-07-12 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13367:

Affects Version/s: 2.7.2

> Support more types of Store in S3Native File System
> ---
>
> Key: HADOOP-13367
> URL: https://issues.apache.org/jira/browse/HADOOP-13367
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: liu chang
>Priority: Minor
>
> There are a lot of object storage services whose protocol is similar to S3. 
> We could add more types of NativeFileSystemStore to support those services.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2016-07-12 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-11361:
---
Attachment: HADOOP-11361-005.patch

bq. It looks to me that the acquirement of lastRecs and the updateAttrCache 
should be protected in a same synchronized(this) block, to avoid this race 
condition
In that case, there is no need to have a field {{lastRecs}}. 
{{lastRecs}} was used only in {{updateAttrCache()}} and {{updateInfoCache()}}.
We can have a method-local lastRecs and pass the same to 
{{updateAttrCache()}} and {{updateInfoCache()}}.
{{getMetrics()}} can return just {{builder.getRecords()}}. 
So the total number of {{synchronized}} calls will be reduced.

Attaching the proposed changes.
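The restructuring described above can be sketched roughly as follows; class and method names are simplified stand-ins for the {{MetricsSourceAdapter}} internals, not the actual patch. The shared {{lastRecs}} field goes away: the records are fetched once and the same method-local snapshot is handed to both cache updaters inside a single synchronized block.

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the proposed shape: no lastRecs field; one synchronized
// block fetches the records and passes the same local snapshot to
// both updaters, so they can never observe different record lists.
class AdapterSketch {
    int attrCacheSize;   // stands in for the JMX attribute cache
    int infoCacheSize;   // stands in for the MBean info cache

    private List<String> getMetrics() {
        return Arrays.asList("rec1", "rec2");  // real code: builder.getRecords()
    }

    void updateJmxCache() {
        synchronized (this) {
            List<String> lastRecs = getMetrics();  // method-local, not a field
            updateAttrCache(lastRecs);
            updateInfoCache(lastRecs);
        }
    }

    private void updateAttrCache(List<String> recs) { attrCacheSize = recs.size(); }
    private void updateInfoCache(List<String> recs) { infoCacheSize = recs.size(); }
}
```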

> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.5.1, 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-111361-003.patch, HADOOP-11361-002.patch, 
> HADOOP-11361-004.patch, HADOOP-11361-005.patch, HADOOP-11361.patch, 
> HDFS-7487.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13208) S3A listFiles(recursive=true) to do a bulk listObjects instead of walking the pseudo-tree of directories

2016-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373242#comment-15373242
 ] 

Hadoop QA commented on HADOOP-13208:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  8m 
56s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
59s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
34s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
29s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  3m 
12s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
53s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
27s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 25s{color} | {color:orange} root: The patch generated 7 new + 40 unchanged - 
52 fixed = 47 total (was 92) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
52s{color} | {color:red} hadoop-tools/hadoop-aws generated 2 new + 0 unchanged 
- 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-tools_hadoop-aws-jdk1.7.0_101 with JDK 
v1.7.0_101 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
12s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-openstack in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | 

[jira] [Commented] (HADOOP-13207) Specify FileSystem listStatus, listFiles and RemoteIterator

2016-07-12 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373237#comment-15373237
 ] 

Steve Loughran commented on HADOOP-13207:
-

checkstyle is just line width on the test descriptions that get printed as 
tests start
{code}
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractGetFileStatusTest.java:197:
describe("Expect non-recursive listFiles(false) to list all entries in top 
dir only");: Line is longer than 80 characters (found 90).
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractGetFileStatusTest.java:227:
describe("Expect recursive listFiles(true) to list all files down the 
tree");: Line is longer than 80 characters (found 81).
{code}

> Specify FileSystem listStatus, listFiles and RemoteIterator
> ---
>
> Key: HADOOP-13207
> URL: https://issues.apache.org/jira/browse/HADOOP-13207
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13207-branch-2-001.patch, 
> HADOOP-13207-branch-2-002.patch, HADOOP-13207-branch-2-003.patch, 
> HADOOP-13207-branch-2-004.patch, HADOOP-13207-branch-2-005.patch, 
> HADOOP-13207-branch-2-006.patch, HADOOP-13207-branch-2-007.patch, 
> HADOOP-13207-branch-2-008.patch, HADOOP-13207-branch-2-009.patch
>
>
> The many `listStatus`, `listLocatedStatus` and `listFiles` operations have 
> not been completely covered in the FS specification. There's lots of implicit 
> use of {{listStatus()}} path, but no coverage or tests of the others.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail

2016-07-12 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13240:

Attachment: HADOOP-13240.002.patch

Patch 002:
* Fix checkstyle errors

> TestAclCommands.testSetfaclValidations fail
> ---
>
> Key: HADOOP-13240
> URL: https://issues.apache.org/jira/browse/HADOOP-13240
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.7.1
> Environment: hadoop 2.4.1,as6.5
>Reporter: linbao111
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-13240.001.patch, HADOOP-13240.002.patch
>
>
> mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console 
> -Dtest=TestAclCommands#testSetfaclValidations failed with following message:
> ---
> Test set: org.apache.hadoop.fs.shell.TestAclCommands
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< 
> FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands
> testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands)  Time 
> elapsed: 0.534 sec  <<< FAILURE!
> java.lang.AssertionError: setfacl should fail ACL spec missing
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at org.junit.Assert.assertFalse(Assert.java:68)
> at 
> org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81)
> I notice from HADOOP-10277 that the code in 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
>  changed; should 
> hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
>  be changed to:
> diff --git 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
>  
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
> index b14cd37..463bfcd
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
> @@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception {
>  "/path" }));
>  assertFalse("setfacl should fail ACL spec missing",
>  0 == runCommand(new String[] { "-setfacl", "-m",
> -"", "/path" }));
> +":", "/path" }));
>}
>  
>@Test






[jira] [Commented] (HADOOP-13207) Specify FileSystem listStatus, listFiles and RemoteIterator

2016-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373178#comment-15373178
 ] 

Hadoop QA commented on HADOOP-13207:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
32s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
56s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
45s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
35s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
50s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
34s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 29 unchanged - 51 fixed = 31 total (was 80) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
28s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817450/HADOOP-13207-branch-2-009.patch
 |
| JIRA Issue | HADOOP-13207 |
| Optional Tests |  asflicense  mvnsite  compile  javac  javadoc  mvninstall  
unit  findbugs  checkstyle  |
| uname | Linux 8062f53c5465 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / e94e6be |
| Default Java | 1.7.0_101 |
| Multi-JDK versions |  

[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2016-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373080#comment-15373080
 ] 

ASF GitHub Bot commented on HADOOP-13075:
-

Github user steveloughran commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/113#discussion_r70463002
  
--- Diff: 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md ---
@@ -940,6 +950,10 @@ the DNS TTL of a JVM is "infinity".
 To work with AWS better, set the DNS time-to-live of an application which
 works with S3 to something lower. See [AWS 
documentation](http://docs.aws.amazon.com/AWSSdkDocsJava/latest/DeveloperGuide/java-dg-jvm-ttl.html).
 
+*internal.S3V4AuthErrorRetryStrategy 
(S3V4AuthErrorRetryStrategy.java:buildRetryParams(117)) - Attempting to re-send 
the request to...*
--- End diff --

sorry, wrong JIRA. https://issues.apache.org/jira/browse/HADOOP-13324


> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Andrew Olson
>Assignee: Federico Czerwinski
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html






[jira] [Commented] (HADOOP-13344) Add option to exclude Hadoop's SLF4J binding

2016-07-12 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373028#comment-15373028
 ] 

Sean Busbey commented on HADOOP-13344:
--

I think Allen means that you might have to alter poms to make any slf4j binding 
dependencies test scope (presuming the tests currently use binding-specific 
configuration files). I think Steve is pointing to the same issue wrt applications 
that create assemblies, if our existing poms include an slf4j binding as a 
runtime or compile dependency. If we remove the bindings from all those poms, 
we might then have to add it back in somewhere to ensure one is available in 
those places where we want a binding (namely when launching our daemons).

If we have to change the root pom then the patch will probably be too big for 
jenkins, since it will trigger a full build across all modules.
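For illustration only — a hypothetical sketch of what test-scoping the binding could look like in a module pom, assuming the binding in question is {{slf4j-log4j12}} (the artifact name here is an assumption, not taken from the patch):

```xml
<!-- Hypothetical sketch: make the SLF4J binding available to this
     module's tests without exporting it to downstream assemblies. -->
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-log4j12</artifactId>
  <scope>test</scope>
</dependency>
```

With {{test}} scope the jar stays on the test classpath but is omitted from the transitive compile/runtime classpath, so applications assembling their own binding no longer collide with Hadoop's.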

> Add option to exclude Hadoop's SLF4J binding
> 
>
> Key: HADOOP-13344
> URL: https://issues.apache.org/jira/browse/HADOOP-13344
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: bin, scripts
>Affects Versions: 2.8.0, 2.7.2
>Reporter: Thomas Poepping
>Assignee: Thomas Poepping
>  Labels: patch
> Attachments: HADOOP-13344.patch
>
>
> If another application that uses the Hadoop classpath brings in its own SLF4J 
> binding for logging, and that jar is not exactly the same as the one brought in 
> by Hadoop, then the two logging jars will conflict on the combined classpath. 
> This patch introduces an optional setting to remove Hadoop's SLF4J binding 
> from the classpath, to get rid of this problem.
> This patch should be applied to 2.8.0, as bin/ and hadoop-config.sh structure 
> has been changed in 3.0.0.






[jira] [Updated] (HADOOP-13208) S3A listFiles(recursive=true) to do a bulk listObjects instead of walking the pseudo-tree of directories

2016-07-12 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13208:

Status: Patch Available  (was: Open)

> S3A listFiles(recursive=true) to do a bulk listObjects instead of walking the 
> pseudo-tree of directories
> 
>
> Key: HADOOP-13208
> URL: https://issues.apache.org/jira/browse/HADOOP-13208
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13208-branch-2-001.patch, 
> HADOOP-13208-branch-2-007.patch, HADOOP-13208-branch-2-008.patch, 
> HADOOP-13208-branch-2-009.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> A major cost in split calculation against object stores turns out to be listing 
> the directory tree itself. That's because against S3, it takes S3A two HEADs 
> and two lists to list the content of any directory path (2 HEADs + 1 list for 
> getFileStatus(); the next list to query the contents).
> Listing a directory could be improved slightly by combining the final two 
> listings. However, a listing of a directory tree will still be 
> O(directories). In contrast, a recursive {{listFiles()}} operation should be 
> implementable by a bulk listing of all descendant paths; one List operation 
> per thousand descendants. 
> As the result of this call is an iterator, the ongoing listing can be 
> implemented within the iterator itself
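The description above — one List call per page of results, with the ongoing listing living inside the returned iterator — can be sketched in plain Java. This is a hypothetical simplification, not the S3A implementation; {{RemoteIter}} and {{listFiles}} here are stand-ins for Hadoop's {{RemoteIterator}} and the paged S3 list API:

```java
import java.util.*;

/** Hypothetical sketch: an iterator that fetches results one "page"
 *  at a time, the way a flat recursive listing could issue one LIST
 *  call per thousand descendants. */
public class PagedListing {
  interface RemoteIter<T> { boolean hasNext(); T next(); }

  static RemoteIter<String> listFiles(List<List<String>> pages) {
    return new RemoteIter<String>() {
      private int page = 0;                                  // next page to fetch
      private Iterator<String> batch = Collections.emptyIterator();

      public boolean hasNext() {
        // Fetch further pages lazily, only when the current batch is drained.
        while (!batch.hasNext() && page < pages.size()) {
          batch = pages.get(page++).iterator();              // one "LIST" call
        }
        return batch.hasNext();
      }

      public String next() {
        if (!hasNext()) throw new NoSuchElementException();
        return batch.next();
      }
    };
  }

  public static void main(String[] args) {
    List<List<String>> pages = Arrays.asList(
        Arrays.asList("a/1", "a/2"), Arrays.asList("b/3"));
    RemoteIter<String> it = listFiles(pages);
    List<String> out = new ArrayList<>();
    while (it.hasNext()) out.add(it.next());
    System.out.println(out);  // [a/1, a/2, b/3]
  }
}
```

Because the next page is only requested when the previous one is exhausted, the caller pays for listing incrementally rather than up front.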






[jira] [Updated] (HADOOP-13208) S3A listFiles(recursive=true) to do a bulk listObjects instead of walking the pseudo-tree of directories

2016-07-12 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13208:

Attachment: HADOOP-13208-branch-2-009.patch

Patch 009.

# specify RemoteIterator in {{filesystem.md}}, specifically that 
implementations MUST return a finite sequence, that a {{while(true) next();}} is 
a valid iteration, and cover concurrency issues.
# listing tests also grab the remote iterators returned and do the 
{{while(true)}} evaluation, verifying the results match.
# Fix S3AFileSystem iterators to correctly pass these new tests (the other 
filesystems, which don't override the default list* operations, all pass)
# javadoc all RemoteIterators in S3AFileSystem, explaining the chained operations
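The {{while(true) next();}} style mentioned in item 1 can be shown with a plain {{java.util.Iterator}} — a hypothetical sketch of the iteration contract the spec change describes, where callers may skip {{hasNext()}} and rely on {{NoSuchElementException}} to terminate:

```java
import java.util.*;

/** Hypothetical sketch of the "while(true) next()" iteration style:
 *  a finite sequence must eventually throw NoSuchElementException,
 *  which ends the loop. */
public class WhileTrueIteration {
  public static void main(String[] args) {
    Iterator<String> it = Arrays.asList("a", "b", "c").iterator();
    List<String> seen = new ArrayList<>();
    try {
      while (true) {
        seen.add(it.next());  // sequence is finite, so this must throw
      }
    } catch (NoSuchElementException expected) {
      // iteration complete
    }
    System.out.println(seen);  // [a, b, c]
  }
}
```

This is why the spec must require a finite sequence: an iterator that never throws would turn this legal loop into a hang.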


> S3A listFiles(recursive=true) to do a bulk listObjects instead of walking the 
> pseudo-tree of directories
> 
>
> Key: HADOOP-13208
> URL: https://issues.apache.org/jira/browse/HADOOP-13208
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13208-branch-2-001.patch, 
> HADOOP-13208-branch-2-007.patch, HADOOP-13208-branch-2-008.patch, 
> HADOOP-13208-branch-2-009.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> A major cost in split calculation against object stores turns out to be listing 
> the directory tree itself. That's because against S3, it takes S3A two HEADs 
> and two lists to list the content of any directory path (2 HEADs + 1 list for 
> getFileStatus(); the next list to query the contents).
> Listing a directory could be improved slightly by combining the final two 
> listings. However, a listing of a directory tree will still be 
> O(directories). In contrast, a recursive {{listFiles()}} operation should be 
> implementable by a bulk listing of all descendant paths; one List operation 
> per thousand descendants. 
> As the result of this call is an iterator, the ongoing listing can be 
> implemented within the iterator itself






[jira] [Updated] (HADOOP-13207) Specify FileSystem listStatus, listFiles and RemoteIterator

2016-07-12 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13207:

Status: Patch Available  (was: Open)

> Specify FileSystem listStatus, listFiles and RemoteIterator
> ---
>
> Key: HADOOP-13207
> URL: https://issues.apache.org/jira/browse/HADOOP-13207
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13207-branch-2-001.patch, 
> HADOOP-13207-branch-2-002.patch, HADOOP-13207-branch-2-003.patch, 
> HADOOP-13207-branch-2-004.patch, HADOOP-13207-branch-2-005.patch, 
> HADOOP-13207-branch-2-006.patch, HADOOP-13207-branch-2-007.patch, 
> HADOOP-13207-branch-2-008.patch, HADOOP-13207-branch-2-009.patch
>
>
> The many `listStatus`, `listLocatedStatus` and `listFiles` operations have 
> not been completely covered in the FS specification. There is a lot of implicit 
> use of the {{listStatus()}} path, but no coverage or tests of the others.






[jira] [Updated] (HADOOP-13207) Specify FileSystem listStatus, listFiles and RemoteIterator

2016-07-12 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13207:

Attachment: HADOOP-13207-branch-2-009.patch

Patch 009.

# specify RemoteIterator in {{filesystem.md}}, specifically that 
implementations MUST return a finite sequence, that a {{while(true) next();}} is 
a valid iteration, and cover concurrency issues.
# listing tests also grab the remote iterators returned and do the 
{{while(true)}} evaluation, verifying the results match.


> Specify FileSystem listStatus, listFiles and RemoteIterator
> ---
>
> Key: HADOOP-13207
> URL: https://issues.apache.org/jira/browse/HADOOP-13207
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13207-branch-2-001.patch, 
> HADOOP-13207-branch-2-002.patch, HADOOP-13207-branch-2-003.patch, 
> HADOOP-13207-branch-2-004.patch, HADOOP-13207-branch-2-005.patch, 
> HADOOP-13207-branch-2-006.patch, HADOOP-13207-branch-2-007.patch, 
> HADOOP-13207-branch-2-008.patch, HADOOP-13207-branch-2-009.patch
>
>
> The many `listStatus`, `listLocatedStatus` and `listFiles` operations have 
> not been completely covered in the FS specification. There is a lot of implicit 
> use of the {{listStatus()}} path, but no coverage or tests of the others.






[jira] [Updated] (HADOOP-13207) Specify FileSystem listStatus, listFiles and RemoteIterator

2016-07-12 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13207:

Summary: Specify FileSystem listStatus, listFiles and RemoteIterator  (was: 
Specify FileSystem listStatus and listFiles)

> Specify FileSystem listStatus, listFiles and RemoteIterator
> ---
>
> Key: HADOOP-13207
> URL: https://issues.apache.org/jira/browse/HADOOP-13207
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13207-branch-2-001.patch, 
> HADOOP-13207-branch-2-002.patch, HADOOP-13207-branch-2-003.patch, 
> HADOOP-13207-branch-2-004.patch, HADOOP-13207-branch-2-005.patch, 
> HADOOP-13207-branch-2-006.patch, HADOOP-13207-branch-2-007.patch, 
> HADOOP-13207-branch-2-008.patch
>
>
> The many `listStatus`, `listLocatedStatus` and `listFiles` operations have 
> not been completely covered in the FS specification. There is a lot of implicit 
> use of the {{listStatus()}} path, but no coverage or tests of the others.






[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2016-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15372891#comment-15372891
 ] 

ASF GitHub Bot commented on HADOOP-13075:
-

Github user fedecz commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/113#discussion_r70439085
  
--- Diff: 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md ---
@@ -940,6 +950,10 @@ the DNS TTL of a JVM is "infinity".
 To work with AWS better, set the DNS time-to-live of an application which
 works with S3 to something lower. See [AWS 
documentation](http://docs.aws.amazon.com/AWSSdkDocsJava/latest/DeveloperGuide/java-dg-jvm-ttl.html).
 
+*internal.S3V4AuthErrorRetryStrategy 
(S3V4AuthErrorRetryStrategy.java:buildRetryParams(117)) - Attempting to re-send 
the request to...*
--- End diff --

I don't see anything related to this patch in that ticket's patch; are you 
sure that's the one? I'm looking at the attached patch in HADOOP-13224.


> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Andrew Olson
>Assignee: Federico Czerwinski
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html






[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2016-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15372850#comment-15372850
 ] 

ASF GitHub Bot commented on HADOOP-13075:
-

Github user steveloughran commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/113#discussion_r70433629
  
--- Diff: 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md ---
@@ -940,6 +950,10 @@ the DNS TTL of a JVM is "infinity".
 To work with AWS better, set the DNS time-to-live of an application which
 works with S3 to something lower. See [AWS 
documentation](http://docs.aws.amazon.com/AWSSdkDocsJava/latest/DeveloperGuide/java-dg-jvm-ttl.html).
 
+*internal.S3V4AuthErrorRetryStrategy 
(S3V4AuthErrorRetryStrategy.java:buildRetryParams(117)) - Attempting to re-send 
the request to...*
--- End diff --

Well, it's being covered in HADOOP-13224, so it's best to pull it from here and 
review that patch instead.


> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Andrew Olson
>Assignee: Federico Czerwinski
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html






[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2016-07-12 Thread Federico Czerwinski (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15372849#comment-15372849
 ] 

Federico Czerwinski commented on HADOOP-13075:
--

Thanks, Steve, for taking the time to review it. I've replied to your comments 
in the PR; hopefully that will clarify some things. I'll work on the comments 
and update the PR.

I've tested against ap-southeast-2, Sydney.

I haven't been using this patch in particular yet. I've used one based on 
hadoop 2.7 in a Spark cluster, but that patch doesn't have support for SSE-C. 
I don't have any performance statistics, I'm afraid. What is that 
_GET-with-range_ request that you mention? I don't remember seeing that in the 
code.

> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Andrew Olson
>Assignee: Federico Czerwinski
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html






[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2016-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15372840#comment-15372840
 ] 

ASF GitHub Bot commented on HADOOP-13075:
-

Github user fedecz commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/113#discussion_r70432364
  
--- Diff: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
 ---
@@ -98,24 +101,33 @@
*/
   private long contentRangeStart;
 
-  public S3AInputStream(String bucket,
-  String key,
+  public S3AInputStream(S3ObjectAttributes s3Attributes,
   long contentLength,
   AmazonS3Client client,
   FileSystem.Statistics stats,
   S3AInstrumentation instrumentation,
   long readahead,
   S3AInputPolicy inputPolicy) {
-Preconditions.checkArgument(StringUtils.isNotEmpty(bucket), "No 
Bucket");
-Preconditions.checkArgument(StringUtils.isNotEmpty(key), "No Key");
-Preconditions.checkArgument(contentLength >= 0 , "Negative content 
length");
-this.bucket = bucket;
-this.key = key;
+Preconditions.checkNotNull(s3Attributes);
+Preconditions.checkArgument(
--- End diff --

will do


> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Andrew Olson
>Assignee: Federico Czerwinski
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html






[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2016-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15372837#comment-15372837
 ] 

ASF GitHub Bot commented on HADOOP-13075:
-

Github user fedecz commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/113#discussion_r70432238
  
--- Diff: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java ---
@@ -1674,6 +1693,111 @@ public void progressChanged(ProgressEvent progressEvent) {
 }
   }
 
+  protected void setSSEKMSOrCIfRequired(InitiateMultipartUploadRequest req) {
+    if (StringUtils.isNotBlank(serverSideEncryptionAlgorithm)) {
+      if (S3AEncryptionMethods.SSE_KMS.getMethod()
+          .equals(serverSideEncryptionAlgorithm)) {
+        if (StringUtils.isNotBlank(serverSideEncryptionKey)) {
+          //Use specified key
+          req.setSSEAwsKeyManagementParams(
+              new SSEAwsKeyManagementParams(serverSideEncryptionKey));
+        } else {
+          //Use default key
+          req.setSSEAwsKeyManagementParams(new SSEAwsKeyManagementParams());
+        }
+      } else if (S3AEncryptionMethods.SSE_C.getMethod()
+          .equals(serverSideEncryptionAlgorithm)) {
+        if (StringUtils.isNotBlank(serverSideEncryptionKey)) {
+          //at the moment, only supports copy using the same key
+          req.setSSECustomerKey(new SSECustomerKey(serverSideEncryptionKey));
+        }
+      }
+    }
+  }
+
+  protected void setSSEKMSOrCIfRequired(CopyObjectRequest copyObjectRequest) {
+    if (StringUtils.isNotBlank(serverSideEncryptionAlgorithm)) {
+      if (S3AEncryptionMethods.SSE_KMS.getMethod()
+          .equals(serverSideEncryptionAlgorithm)) {
+        if (StringUtils.isNotBlank(serverSideEncryptionKey)) {
+          //Use specified key
+          copyObjectRequest.setSSEAwsKeyManagementParams(
+              new SSEAwsKeyManagementParams(serverSideEncryptionKey));
+        } else {
+          //Use default key
+          copyObjectRequest.setSSEAwsKeyManagementParams(
+              new SSEAwsKeyManagementParams());
+        }
+      } else if (S3AEncryptionMethods.SSE_C.getMethod()
+          .equals(serverSideEncryptionAlgorithm)) {
+        if (StringUtils.isNotBlank(serverSideEncryptionKey)) {
+          //at the moment, only supports copy using the same key
+          copyObjectRequest.setSourceSSECustomerKey(
+              new SSECustomerKey(serverSideEncryptionKey));
+          copyObjectRequest.setDestinationSSECustomerKey(
+              new SSECustomerKey(serverSideEncryptionKey));
+        }
+      }
+    }
+  }
+
+  protected void setSSECIfRequired(GetObjectMetadataRequest request) {
+    if (StringUtils.isNotBlank(serverSideEncryptionAlgorithm)) {
+      if (S3AEncryptionMethods.SSE_C.getMethod()
+          .equals(serverSideEncryptionAlgorithm)) {
+        if (StringUtils.isNotBlank(serverSideEncryptionKey)) {
+          //at the moment, only supports copy using the same key
+          request.setSSECustomerKey(
+              new SSECustomerKey(serverSideEncryptionKey)
--- End diff --

true, but not all of them can be merged. I'm relying on the else clauses as 
well, depending on some of the conditions being false. I'll try to rewrite it, 
though, and see how it looks.
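For illustration, the near-identical branching in the three setters could be shared by injecting the per-request setter calls as callbacks. This is a minimal, self-contained sketch with stand-in names, not the actual S3AFileSystem or AWS SDK code:

```java
import java.util.function.Consumer;

public class SseHelperSketch {

  /**
   * Shared branching logic; the per-request-type differences are injected
   * as setter callbacks, so the algorithm/key decisions live in one place.
   */
  static void applySse(String algorithm, String key,
                       Consumer<String> kmsSetter,
                       Consumer<String> sseCSetter) {
    if (algorithm == null || algorithm.isEmpty()) {
      return;  // no server-side encryption requested
    }
    if ("SSE-KMS".equals(algorithm)) {
      // blank key: fall back to the account's default KMS key
      kmsSetter.accept(key == null || key.isEmpty() ? "<default>" : key);
    } else if ("SSE-C".equals(algorithm) && key != null && !key.isEmpty()) {
      // copy currently only supports reusing the same customer key
      sseCSetter.accept(key);
    }
  }

  /** Records which setter would have been invoked, for demonstration. */
  static String describe(String algorithm, String key) {
    StringBuilder log = new StringBuilder();
    applySse(algorithm, key,
        k -> log.append("kms=").append(k),
        c -> log.append("sse-c=").append(c));
    return log.toString();
  }

  public static void main(String[] args) {
    System.out.println(describe("SSE-KMS", ""));      // kms=<default>
    System.out.println(describe("SSE-C", "secret"));  // sse-c=secret
  }
}
```

In the real code, the callbacks would be method references such as `req::setSSEAwsKeyManagementParams`, one pair per request type.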


> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Andrew Olson
>Assignee: Federico Czerwinski
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] 

[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2016-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15372833#comment-15372833
 ] 

ASF GitHub Bot commented on HADOOP-13075:
-

Github user fedecz commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/113#discussion_r70431908
  
--- Diff: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md ---
@@ -940,6 +950,10 @@ the DNS TTL of a JVM is "infinity".
 To work with AWS better, set the DNS time-to-live of an application which
 works with S3 to something lower. See [AWS documentation](http://docs.aws.amazon.com/AWSSdkDocsJava/latest/DeveloperGuide/java-dg-jvm-ttl.html).
 
+*internal.S3V4AuthErrorRetryStrategy (S3V4AuthErrorRetryStrategy.java:buildRetryParams(117)) - Attempting to re-send the request to...*
--- End diff --

Nope, this is the _Other Issues_ section of the documentation, so I wanted 
to document that if the user sees that warning, they should specify the 
endpoint in the config. I guess I could add a title that describes it better 
instead of just pasting the warning.


> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Andrew Olson
>Assignee: Federico Czerwinski
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html






[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2016-07-12 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15372827#comment-15372827
 ] 

Tsuyoshi Ozawa commented on HADOOP-13363:
-

Thank you for the clarification, Steve. I got the point. This kind of 
dependency-update work is related to the classpath isolation work 
(HADOOP-13070), so I'd like to start a discussion on the mailing list.

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
> Attachments: HADOOP-13363.001.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2016-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15372826#comment-15372826
 ] 

ASF GitHub Bot commented on HADOOP-13075:
-

Github user fedecz commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/113#discussion_r70430407
  
--- Diff: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java ---
@@ -142,12 +142,8 @@ private Constants() {
   public static final String SERVER_SIDE_ENCRYPTION_ALGORITHM =
   "fs.s3a.server-side-encryption-algorithm";
 
-  /**
-   * The standard encryption algorithm AWS supports.
-   * Different implementations may support others (or none).
-   */
-  public static final String SERVER_SIDE_ENCRYPTION_AES256 =
--- End diff --

As you said, that constant is only used in that test, which I did change. I 
changed the test class to abstract and created three different implementations: 
one each for SSE-S3, SSE-KMS and SSE-C. Basically I'm running all the tests in 
TestS3AEncryption, but with a different encryption algorithm depending on the 
concrete class.
Yes, it builds and all tests pass.
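The test layout described above can be sketched as an abstract base plus one concrete suite per algorithm. This is a self-contained illustration with stand-in names, not the actual TestS3AEncryption code:

```java
public class EncryptionSuiteSketch {

  static abstract class AbstractEncryptionTest {
    /** Each concrete suite fixes the algorithm under test. */
    abstract String algorithm();

    /** A shared "test": here it just reports what it would configure. */
    String run() {
      return "fs.s3a.server-side-encryption-algorithm=" + algorithm();
    }
  }

  // one subclass per SSE mode; the shared checks are inherited unchanged
  static class SseS3Test extends AbstractEncryptionTest {
    String algorithm() { return "AES256"; }
  }
  static class SseKmsTest extends AbstractEncryptionTest {
    String algorithm() { return "SSE-KMS"; }
  }
  static class SseCTest extends AbstractEncryptionTest {
    String algorithm() { return "SSE-C"; }
  }

  public static void main(String[] args) {
    AbstractEncryptionTest[] suites = {
        new SseS3Test(), new SseKmsTest(), new SseCTest()};
    for (AbstractEncryptionTest t : suites) {
      System.out.println(t.run());
    }
  }
}
```

In the real JUnit version, `run()` would be the inherited `@Test` methods and `algorithm()` would feed the Hadoop `Configuration` used to create the filesystem.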


> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Andrew Olson
>Assignee: Federico Czerwinski
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html






[jira] [Commented] (HADOOP-13343) globStatus returns null for file path that exists but is filtered

2016-07-12 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15372774#comment-15372774
 ] 

Steve Loughran commented on HADOOP-13343:
-

well, it is a regression. You could just be the first to notice. Which also 
shows the limitations of test coverage.

we should be updating filesystem.md BTW; that's where the definition of what 
filesystems are meant to do lives. The whole glob code is an undocumented bit of 
functionality there, and I know that because I have plans to subclass the 
globber and do profound things once I've got HADOOP-13208 checked off. I'd like 
that spec, and the contract tests to go with it.

> globStatus returns null for file path that exists but is filtered
> -
>
> Key: HADOOP-13343
> URL: https://issues.apache.org/jira/browse/HADOOP-13343
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Priority: Minor
> Attachments: HADOOP-13343.001.patch
>
>
> If a file path without globs is passed to globStatus and the file exists but 
> the specified input filter suppresses it then globStatus will return null 
> instead of an empty array.  This makes it impossible for the caller to 
> discern the difference between the file not existing at all vs. being 
> suppressed by the filter and is inconsistent with the way it handles globs 
> for an existing dir but fail to match anything within the dir.
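The distinction in the description above can be shown with a toy model: for a non-glob path that exists but is rejected by the filter, returning null is indistinguishable from "path does not exist", while an empty array keeps the two cases apart. This is an illustrative sketch, not the actual Globber code:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.function.Predicate;

public class GlobSketch {
  /**
   * Toy globStatus for a non-glob path: null means "does not exist",
   * an empty array means "exists but the filter suppressed it"
   * (the behaviour the JIRA asks for).
   */
  static String[] globStatus(Set<String> fs, String path,
                             Predicate<String> filter) {
    if (!fs.contains(path)) {
      return null;                              // genuinely absent
    }
    return filter.test(path) ? new String[] {path}
                             : new String[0];   // filtered, NOT null
  }

  public static void main(String[] args) {
    Set<String> fs = new HashSet<>(Arrays.asList("/tmp/part-0", "/tmp/_SUCCESS"));
    Predicate<String> noHidden = p -> !p.contains("_");
    System.out.println(Arrays.toString(globStatus(fs, "/tmp/part-0", noHidden)));
    System.out.println(Arrays.toString(globStatus(fs, "/tmp/_SUCCESS", noHidden)));
    System.out.println(globStatus(fs, "/tmp/missing", noHidden));
  }
}
```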






[jira] [Commented] (HADOOP-13212) Provide an option to set the socket buffers in S3AFileSystem

2016-07-12 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15372730#comment-15372730
 ] 

Steve Loughran commented on HADOOP-13212:
-

that's not good. what test results do you get without the patch being applied?

> Provide an option to set the socket buffers in S3AFileSystem
> 
>
> Key: HADOOP-13212
> URL: https://issues.apache.org/jira/browse/HADOOP-13212
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13212-branch-2-001.patch
>
>
> It should be possible to provide hints about send/receive buffers to 
> AmazonS3Client via ClientConfiguration. It would be good to expose these 
> parameters in S3AFileSystem for perf.






[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2016-07-12 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15372724#comment-15372724
 ] 

Steve Loughran commented on HADOOP-13363:
-

wire-level backward compatibility is a core feature of protobuf, so I'm not 
worried there. What I am worried about is compile-time compatibility, as that is 
what broke hadoop, hbase, and everything else using protobuf 2.4. Google's 
internal build process is a clean, unified build of everything, so they don't 
have to worry about source-level compatibility.

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
> Attachments: HADOOP-13363.001.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[jira] [Commented] (HADOOP-12420) While trying to access Amazon S3 through hadoop-aws(Spark basically) I was getting Exception in thread "main" java.lang.NoSuchMethodError: com.amazonaws.services.s3.t

2016-07-12 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15372720#comment-15372720
 ] 

Steve Loughran commented on HADOOP-12420:
-

the rule for s3a work, now and in the future: "use a version of the amazon 
libraries consistent with the one hadoop was built with". You should not be 
seeing this error with 2.7.2 + SDK 1.7.4. Try to use a later version of the AWS 
SDK and yes, things will break. Sorry.

timeout/connection problems are unrelated to this JIRA. You may want to look at 
HADOOP-12346, change those config options locally and see if that helps. 
Otherwise, do grab hadoop branch-2.8, build spark against it, and see if that 
fixes things. If it doesn't, now is the time to identify and fix the problems, 
before we get that 2.8.0 release out the door.

> While trying to access Amazon S3 through hadoop-aws(Spark basically) I was 
> getting Exception in thread "main" java.lang.NoSuchMethodError: 
> com.amazonaws.services.s3.transfer.TransferManagerConfiguration.setMultipartUploadThreshold(I)V
> --
>
> Key: HADOOP-12420
> URL: https://issues.apache.org/jira/browse/HADOOP-12420
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Tariq Mohammad
>Assignee: Tariq Mohammad
>Priority: Minor
> Fix For: 2.8.0
>
>
> While trying to access data stored in Amazon S3 through Apache Spark, which  
> internally uses hadoop-aws jar I was getting the following exception :
> Exception in thread "main" java.lang.NoSuchMethodError: 
> com.amazonaws.services.s3.transfer.TransferManagerConfiguration.setMultipartUploadThreshold(I)V
> Probable reason could be the fact that aws java sdk expects a long parameter 
> for the setMultipartUploadThreshold(long multiPartThreshold) method, but 
> hadoop-aws was using a parameter of type int(multiPartThreshold). 
> I tried using the downloaded hadoop-aws jar and the build through its maven 
> dependency, but in both the cases I encountered the same exception. Although 
> I can see private long multiPartThreshold; in hadoop-aws GitHub repo, it's 
> not getting reflected in the downloaded jar or in the jar created from maven 
> dependency.
> Following lines in the S3AFileSystem class create this difference :
> Build from trunk : 
> private long multiPartThreshold;
> this.multiPartThreshold = conf.getLong("fs.s3a.multipart.threshold", 
> 2147483647L); => Line 267
> Build through maven dependency : 
> private int multiPartThreshold;
> multiPartThreshold = conf.getInt(MIN_MULTIPART_THRESHOLD, 
> DEFAULT_MIN_MULTIPART_THRESHOLD); => Line 249
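The failure mode described above is a linkage mismatch: the caller was compiled against `setMultipartUploadThreshold(int)`, so the JVM looks for the exact `(I)V` descriptor at call time, and only the `(long)` overload exists in the newer SDK. A self-contained sketch (using a local stand-in class, not the real TransferManagerConfiguration) shows the same lookup failing via reflection:

```java
public class OverloadMismatch {
  /** Stand-in for the newer SDK class: only the long overload exists. */
  static class TransferManagerConfigurationLike {
    public void setMultipartUploadThreshold(long threshold) { }
  }

  /** Returns true iff an int-parameter overload can be resolved. */
  static boolean hasIntOverload() {
    try {
      TransferManagerConfigurationLike.class
          .getMethod("setMultipartUploadThreshold", int.class);
      return true;
    } catch (NoSuchMethodException e) {
      // the same resolution failure the JVM reports as
      // NoSuchMethodError ...setMultipartUploadThreshold(I)V at call time
      return false;
    }
  }

  public static void main(String[] args) {
    System.out.println("int overload present: " + hasIntOverload());
  }
}
```

Overload resolution happens at compile time, so a caller built against the old int signature keeps the `(I)V` reference baked into its bytecode; rebuilding against the SDK actually on the classpath is the fix.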






[jira] [Commented] (HADOOP-13212) Provide an option to set the socket buffers in S3AFileSystem

2016-07-12 Thread Rajesh Balamohan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15372718#comment-15372718
 ] 

Rajesh Balamohan commented on HADOOP-13212:
---

This is based on the us-west-2 region. I ran the tests with "-DtestsThreadCount=2".
{noformat}
Tests in error:
  
  TestS3AContractDistCp>AbstractContractDistCpTest.largeFilesToRemote:96->AbstractContractDistCpTest.largeFiles:176 »
  TestS3ADeleteFilesOneByOne>TestS3ADeleteManyFiles.testBulkRenameAndDelete:99 »
  TestS3ADeleteManyFiles.testBulkRenameAndDelete:99 »  test timed out after 1800...
  TestS3ADirectoryPerformance.testTimeToStatNonEmptyDirectory:153->timeToStatPath:179 »

Tests run: 261, Failures: 0, Errors: 3, Skipped: 7
{noformat}

> Provide an option to set the socket buffers in S3AFileSystem
> 
>
> Key: HADOOP-13212
> URL: https://issues.apache.org/jira/browse/HADOOP-13212
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13212-branch-2-001.patch
>
>
> It should be possible to provide hints about send/receive buffers to 
> AmazonS3Client via ClientConfiguration. It would be good to expose these 
> parameters in S3AFileSystem for perf.






[jira] [Commented] (HADOOP-13344) Add option to exclude Hadoop's SLF4J binding

2016-07-12 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15372708#comment-15372708
 ] 

Steve Loughran commented on HADOOP-13344:
-

one more piece of fun here: any app which creates an assembly JAR (example: 
spark, maybe hive) and which does not then shade SLF4J ends up including the 
slf4j classes, making it near-impossible to guarantee that all duplicate 
copies of SLF4J have been removed from the CP. Painful

> Add option to exclude Hadoop's SLF4J binding
> 
>
> Key: HADOOP-13344
> URL: https://issues.apache.org/jira/browse/HADOOP-13344
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: bin, scripts
>Affects Versions: 2.8.0, 2.7.2
>Reporter: Thomas Poepping
>Assignee: Thomas Poepping
>  Labels: patch
> Attachments: HADOOP-13344.patch
>
>
> If another application that uses the Hadoop classpath brings in its own SLF4J 
> binding for logging, and that jar is not the exact same as the one brought in 
> by Hadoop, then there will be a conflict between logging jars between the two 
> classpaths. This patch introduces an optional setting to remove Hadoop's 
> SLF4J binding from the classpath, to get rid of this problem.
> This patch should be applied to 2.8.0, as bin/ and hadoop-config.sh structure 
> has been changed in 3.0.0.






[jira] [Commented] (HADOOP-13298) Fix the leftover L files in hadoop-build-tools/src/main/resources/META-INF/

2016-07-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15372643#comment-15372643
 ] 

Hadoop QA commented on HADOOP-13298:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  7m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m  
1s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m  7s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}168m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestRenameWhileOpen |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817361/HADOOP-13298.002.patch
 |
| JIRA Issue | HADOOP-13298 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 4f4e4c0d4ecb 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 819224d |
| Default Java | 1.8.0_91 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9966/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9966/testReport/ |
| modules | C: hadoop-build-tools . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9966/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix the leftover L files in hadoop-build-tools/src/main/resources/META-INF/
> -
>
> Key: HADOOP-13298
> URL: https://issues.apache.org/jira/browse/HADOOP-13298
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Xiao Chen
>Assignee: Sean Busbey
>

[jira] [Commented] (HADOOP-13354) Update WASB driver to use the latest version (4.2.0) of SDK for Microsoft Azure Storage Clients

2016-07-12 Thread Sivaguru Sankaridurg (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15372472#comment-15372472
 ] 

Sivaguru Sankaridurg commented on HADOOP-13354:
---

Hi [~steve_l],

Thank you for your comments.

Please find details of the tests that I ran, below:
1. I ensured that both the contract tests and the live tests passed. I have 
attached the output to the JIRA.
2. I ran Windows Azure Storage Blob (WASB) unit tests against a live Azure 
Storage account (West US region).



I am seeing the following changes in the package dependencies after 
incorporating my changes. I have included a comment at the end of each line 
below describing the change; this comes from comparing the output of "mvn 
dependency:tree -Dverbose" before and after my changes.


[INFO] |  +- org.apache.commons:commons-lang3:jar:3.4:compile          /* Upgraded from org.apache.commons:commons-lang3:jar:3.3.2:compile */
[INFO] |  \- com.microsoft.azure:azure-keyvault-core:jar:0.8.0:compile /* added, new */



> Update WASB driver to use the latest version (4.2.0) of SDK for Microsoft 
> Azure Storage Clients
> ---
>
> Key: HADOOP-13354
> URL: https://issues.apache.org/jira/browse/HADOOP-13354
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Sivaguru Sankaridurg
> Attachments: HADOOP-13354.001.patch, HADOOP-13354.002.patch, 
> HADOOP-13354.003.patch, HADOOP-13354.004.patch, Test-Results-With-4.2.0-fixes
>
>
> Update WASB driver to use the latest version (4.2.0) of SDK for Microsoft 
> Azure Storage Clients.
> We are currently using version 2.2.0 of the SDK.
> Version 4.2.0 brings some breaking changes. 
> Need to fix code to resolve all these breaking changes and certify that 
> everything works properly.






[jira] [Commented] (HADOOP-13298) Fix the leftover L files in hadoop-build-tools/src/main/resources/META-INF/

2016-07-12 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15372395#comment-15372395
 ] 

Tsuyoshi Ozawa commented on HADOOP-13298:
-

[~busbey] [~ajisakaa] [~xiaochen] I tested the following points with the latest 
patch: 1. hadoop-build-tools/src/main/resources/META-INF is not created; 2. the 
tar.gz includes LICENSE.txt and NOTICE.txt. Could you check?

> Fix the leftover L files in hadoop-build-tools/src/main/resources/META-INF/
> -
>
> Key: HADOOP-13298
> URL: https://issues.apache.org/jira/browse/HADOOP-13298
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Xiao Chen
>Assignee: Sean Busbey
>Priority: Minor
> Attachments: HADOOP-13298.001.patch, HADOOP-13298.002.patch
>
>
> After HADOOP-12893, an extra copy of LICENSE.txt and NOTICE.txt exists in 
> {{hadoop-build-tools/src/main/resources/META-INF/}} after build. We should 
> remove it and do it the maven way.
> Details in 
> https://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201606.mbox/browser
> Thanks [~ste...@apache.org] for raising the issue and [~busbey] for offering 
> the help!






[jira] [Updated] (HADOOP-13298) Fix the leftover L files in hadoop-build-tools/src/main/resources/META-INF/

2016-07-12 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13298:

Status: Patch Available  (was: Open)

> Fix the leftover L files in hadoop-build-tools/src/main/resources/META-INF/
> -
>
> Key: HADOOP-13298
> URL: https://issues.apache.org/jira/browse/HADOOP-13298
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Xiao Chen
>Assignee: Sean Busbey
>Priority: Minor
> Attachments: HADOOP-13298.001.patch, HADOOP-13298.002.patch
>
>
> After HADOOP-12893, an extra copy of LICENSE.txt and NOTICE.txt exists in 
> {{hadoop-build-tools/src/main/resources/META-INF/}} after build. We should 
> remove it and do it the maven way.
> Details in 
> https://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201606.mbox/browser
> Thanks [~ste...@apache.org] for raising the issue and [~busbey] for offering 
> the help!






[jira] [Updated] (HADOOP-13298) Fix the leftover L files in hadoop-build-tools/src/main/resources/META-INF/

2016-07-12 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13298:

Attachment: HADOOP-13298.002.patch

Attaching a patch based on the discussion Sean and Xiao had on the mailing list.

* Moving the copy destination to hadoop-build-tools/target/generated-sources/META-INF/
* Changing the directory included by maven-remote-resources-plugin to hadoop-build-tools/target/generated-sources/META-INF/
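In pom.xml terms, the two changes above might look roughly like the following sketch. This is a hypothetical reconstruction, not the contents of HADOOP-13298.002.patch: the actual patch may use different phase bindings, source directories, or plugin versions.

```xml
<!-- Hypothetical sketch of the change described above; the actual patch
     may differ. Copies L&N files into target/ instead of src/, so the
     build no longer leaves generated copies in the source tree. -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-resources-plugin</artifactId>
      <executions>
        <execution>
          <id>copy-license-notice</id>
          <phase>generate-resources</phase>
          <goals>
            <goal>copy-resources</goal>
          </goals>
          <configuration>
            <outputDirectory>${project.build.directory}/generated-sources/META-INF</outputDirectory>
            <resources>
              <resource>
                <!-- Assumed location of the top-level LICENSE/NOTICE files. -->
                <directory>${project.basedir}/..</directory>
                <includes>
                  <include>LICENSE.txt</include>
                  <include>NOTICE.txt</include>
                </includes>
              </resource>
            </resources>
          </configuration>
        </execution>
      </executions>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-remote-resources-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>bundle</goal>
          </goals>
          <configuration>
            <!-- Point the bundle goal at the generated directory rather
                 than the default src/main/resources. -->
            <resourcesDirectory>${project.build.directory}/generated-sources/META-INF</resourcesDirectory>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

Since target/ is cleaned by `mvn clean`, the generated copies would no longer show up as untracked files in the source tree.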






> Fix the leftover L&N files in hadoop-build-tools/src/main/resources/META-INF/
> -
>
> Key: HADOOP-13298
> URL: https://issues.apache.org/jira/browse/HADOOP-13298
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Xiao Chen
>Assignee: Sean Busbey
>Priority: Minor
> Attachments: HADOOP-13298.001.patch, HADOOP-13298.002.patch
>
>
> After HADOOP-12893, an extra copy of LICENSE.txt and NOTICE.txt exists in 
> {{hadoop-build-tools/src/main/resources/META-INF/}} after build. We should 
> remove it and do it the maven way.
> Details in 
> https://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201606.mbox/browser
> Thanks [~ste...@apache.org] for raising the issue and [~busbey] for offering 
> the help!






[jira] [Updated] (HADOOP-13354) Update WASB driver to use the latest version (4.2.0) of SDK for Microsoft Azure Storage Clients

2016-07-12 Thread Sivaguru Sankaridurg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sivaguru Sankaridurg updated HADOOP-13354:
--
Attachment: Test-Results-With-4.2.0-fixes

Test results indicating that both the Contract Tests and the Live Tests passed 
after including the code changes.

> Update WASB driver to use the latest version (4.2.0) of SDK for Microsoft 
> Azure Storage Clients
> ---
>
> Key: HADOOP-13354
> URL: https://issues.apache.org/jira/browse/HADOOP-13354
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Sivaguru Sankaridurg
> Attachments: HADOOP-13354.001.patch, HADOOP-13354.002.patch, 
> HADOOP-13354.003.patch, HADOOP-13354.004.patch, Test-Results-With-4.2.0-fixes
>
>
> Update WASB driver to use the latest version (4.2.0) of SDK for Microsoft 
> Azure Storage Clients.
> We are currently using version 2.2.0 of the SDK.
> Version 4.2.0 brings some breaking changes. 
> Need to fix code to resolve all these breaking changes and certify that 
> everything works properly.
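The version bump itself is a one-line dependency change; the breaking API changes it triggers then have to be fixed throughout the fs/azure code. A sketch of the dependency update, assuming the standard Maven coordinates of the Azure Storage client SDK (com.microsoft.azure:azure-storage):

```xml
<!-- Bump the Azure Storage client SDK from 2.2.0 to 4.2.0.
     The surrounding code changes to absorb 4.2.0's breaking API
     changes are in the attached patches. -->
<dependency>
  <groupId>com.microsoft.azure</groupId>
  <artifactId>azure-storage</artifactId>
  <version>4.2.0</version>
</dependency>
```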


