[jira] [Commented] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15316219#comment-15316219
 ] 

Hadoop QA commented on HADOOP-13203:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 24s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 42s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 27s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 23s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 48s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 22s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 21s{color} | {color:red} root: The patch generated 3 new + 21 unchanged - 0 fixed = 24 total (was 21) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 27s{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 35s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:babe025 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808236/HADOOP-13203-branch-2-003.patch |
| JIRA Issue | HADOOP-13203 |
| Optional Tests |  asflicense  

[jira] [Updated] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-06-05 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13203:
--
Attachment: HADOOP-13203-branch-2-003.patch

Thanks [~cnauroth]. Updated the patch.

1. Removed changes related to setReadAhead.
2. Minor update to the comments section in reopen.
3. I agree with your comments on forward/backward seeks. Without the patch, it 
would be difficult to reduce the number of open calls on backward seeks. An 
additional option for reducing the number of open calls with forward seeks 
would be to set a higher value for readAhead.

> S3a: Consider reducing the number of connection aborts by setting correct 
> length in s3 request
> --
>
> Key: HADOOP-13203
> URL: https://issues.apache.org/jira/browse/HADOOP-13203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13203-branch-2-001.patch, 
> HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch
>
>
> Currently the file's "contentLength" is set as the "requestedStreamLen" when 
> invoking S3AInputStream::reopen(). As part of lazySeek(), the stream 
> sometimes has to be closed and reopened, and often it was closed with 
> abort(), leaving the internal HTTP connection unusable. This incurs 
> significant connection-establishment cost in some jobs. It would be good to 
> set the correct value for the stream length to avoid connection aborts. 
> I will post the patch once the AWS tests pass on my machine.
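The idea behind the patch can be sketched as follows (a hedged illustration with hypothetical names, not the actual S3AInputStream code): instead of requesting everything up to contentLength on every reopen, bound the ranged GET by what the reader is likely to consume, so close() can drain the few remaining bytes and the HTTP connection stays reusable instead of being aborted.

```java
// Hypothetical sketch of the request-length calculation; the class and
// method names are illustrative, not Hadoop's actual API.
public class RequestLengthSketch {
    /**
     * Length to request on reopen: at least the caller's immediate need,
     * extended by the readahead hint, but never past end of file.
     */
    static long requestedStreamLen(long targetPos, long bytesNeeded,
                                   long readahead, long contentLength) {
        long end = targetPos + Math.max(bytesNeeded, readahead);
        return Math.min(end, contentLength);
    }
}
```

With a bound like this, a small read near the start of a large object requests only a readahead-sized range rather than the whole remaining file, so closing the stream early no longer forces a connection abort.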



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13031) Refactor rack-aware counters from FileSystemStorageStatistics to HDFS specific StorageStatistics

2016-06-05 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13031:
---
Component/s: fs

> Refactor rack-aware counters from FileSystemStorageStatistics to HDFS 
> specific StorageStatistics
> 
>
> Key: HADOOP-13031
> URL: https://issues.apache.org/jira/browse/HADOOP-13031
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0
>Reporter: Mingliang Liu
>
> [HADOOP-13065] added a new interface for retrieving FS and FC Statistics. 
> This jira is to refactor the code that maintains rack-aware read metrics to 
> use the newly added StorageStatistics. Specifically,
> # The rack-aware read-bytes metric is mostly specific to HDFS; for example, 
> the local file system doesn't need it. We consider moving it from the base 
> FileSystemStorageStatistics to a dedicated HDFS-specific StorageStatistics 
> sub-class.
> # We would have to develop an optimized thread-local mechanism to do this, to 
> avoid causing a regression in HDFS stream performance.
> Optionally, it would be better to simply move this to HDFS's existing 
> per-stream {{ReadStatistics}} for now. As [HDFS-9579] states, ReadStatistics 
> metrics are only accessible via {{DFSClient}} or {{DFSInputStream}}, not 
> something that application frameworks such as MR and Tez can get to.
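The thread-local mechanism mentioned in point 2 above can be sketched like this (hypothetical class names, not Hadoop's actual implementation): each thread increments its own cell so the hot read path never contends on a shared counter, and a snapshot sums over all registered cells.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a thread-local byte counter; a production
// version would also need to retain counts from terminated threads.
public class ThreadLocalBytesRead {
    // Registry of every thread's cell, used for aggregation.
    private final Set<long[]> cells = ConcurrentHashMap.newKeySet();
    private final ThreadLocal<long[]> local = ThreadLocal.withInitial(() -> {
        long[] cell = new long[1];
        cells.add(cell);
        return cell;
    });

    public void addBytesRead(long n) {
        local.get()[0] += n;  // uncontended fast path
    }

    public long snapshot() {
        long sum = 0;
        for (long[] cell : cells) {
            sum += cell[0];
        }
        return sum;
    }
}
```

The design trade-off is the one the description hints at: aggregation becomes an O(threads) walk, but the per-read increment stays free of atomic contention.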






[jira] [Updated] (HADOOP-13031) Refactor rack-aware counters from FileSystemStorageStatistics to HDFS specific StorageStatistics

2016-06-05 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13031:
---
Affects Version/s: 2.9.0

> Refactor rack-aware counters from FileSystemStorageStatistics to HDFS 
> specific StorageStatistics
> 
>
> Key: HADOOP-13031
> URL: https://issues.apache.org/jira/browse/HADOOP-13031
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0
>Reporter: Mingliang Liu
>
> [HADOOP-13065] added a new interface for retrieving FS and FC Statistics. 
> This jira is to refactor the code that maintains rack-aware read metrics to 
> use the newly added StorageStatistics. Specifically,
> # The rack-aware read-bytes metric is mostly specific to HDFS; for example, 
> the local file system doesn't need it. We consider moving it from the base 
> FileSystemStorageStatistics to a dedicated HDFS-specific StorageStatistics 
> sub-class.
> # We would have to develop an optimized thread-local mechanism to do this, to 
> avoid causing a regression in HDFS stream performance.
> Optionally, it would be better to simply move this to HDFS's existing 
> per-stream {{ReadStatistics}} for now. As [HDFS-9579] states, ReadStatistics 
> metrics are only accessible via {{DFSClient}} or {{DFSInputStream}}, not 
> something that application frameworks such as MR and Tez can get to.






[jira] [Updated] (HADOOP-13031) Refactor rack-aware counters from FileSystemStorageStatistics to HDFS specific StorageStatistics

2016-06-05 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13031:
---
Summary: Refactor rack-aware counters from FileSystemStorageStatistics to 
HDFS specific StorageStatistics  (was: Refactor the code that maintains 
rack-aware counters in FileSystem$Statistics)

> Refactor rack-aware counters from FileSystemStorageStatistics to HDFS 
> specific StorageStatistics
> 
>
> Key: HADOOP-13031
> URL: https://issues.apache.org/jira/browse/HADOOP-13031
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>
> [HADOOP-13065] added a new interface for retrieving FS and FC Statistics. 
> This jira is to refactor the code that maintains rack-aware read metrics to 
> use the newly added StorageStatistics. Specifically,
> # The rack-aware read-bytes metric is mostly specific to HDFS; for example, 
> the local file system doesn't need it. We consider moving it from the base 
> FileSystemStorageStatistics to a dedicated HDFS-specific StorageStatistics 
> sub-class.
> # We would have to develop an optimized thread-local mechanism to do this, to 
> avoid causing a regression in HDFS stream performance.
> Optionally, it would be better to simply move this to HDFS's existing 
> per-stream {{ReadStatistics}} for now. As [HDFS-9579] states, ReadStatistics 
> metrics are only accessible via {{DFSClient}} or {{DFSInputStream}}, not 
> something that application frameworks such as MR and Tez can get to.






[jira] [Updated] (HADOOP-13031) Refactor the code that maintains rack-aware counters in FileSystem$Statistics

2016-06-05 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13031:
---
Description: 
[HADOOP-13065] added a new interface for retrieving FS and FC Statistics. This 
jira is to refactor the code that maintains rack-aware read metrics to use the 
newly added StorageStatistics. Specifically,

# The rack-aware read-bytes metric is mostly specific to HDFS; for example, 
the local file system doesn't need it. We consider moving it from the base 
FileSystemStorageStatistics to a dedicated HDFS-specific StorageStatistics 
sub-class.
# We would have to develop an optimized thread-local mechanism to do this, to 
avoid causing a regression in HDFS stream performance.

Optionally, it would be better to simply move this to HDFS's existing 
per-stream {{ReadStatistics}} for now. As [HDFS-9579] states, ReadStatistics 
metrics are only accessible via {{DFSClient}} or {{DFSInputStream}}, not 
something that application frameworks such as MR and Tez can get to.


  was:
According to the discussion in [HDFS-10175], using a composite data structure 
(e.g. an enum map or array) to manage the distance->bytesRead mapping will 
probably make the code simpler.

# {{StatisticsData}} will be a bit shorter by delegating the operations to the 
composite data structure.
# {{incrementBytesReadByDistance(int distance, long newBytes)}} and 
{{getBytesReadByDistance(int distance)}}, which switch-case over hard-coded 
variables, may be simplified since we can set/get {{bytesRead}} by distance 
directly from the map/array.

This jira is to track the discussion and effort of refactoring the code that 
maintains rack-aware counters.
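The switch-case simplification described above can be sketched as follows (hypothetical class, illustrative only): the hard-coded per-distance fields collapse into an array indexed by network distance.

```java
// Hypothetical sketch: bytes-read counters kept in an array indexed by
// network distance, instead of one hard-coded field per distance.
public class DistanceBytesRead {
    // Distances above this bucket are clamped into it.
    private static final int MAX_DISTANCE = 4;
    private final long[] bytesReadByDistance = new long[MAX_DISTANCE + 1];

    public void incrementBytesReadByDistance(int distance, long newBytes) {
        bytesReadByDistance[Math.min(distance, MAX_DISTANCE)] += newBytes;
    }

    public long getBytesReadByDistance(int distance) {
        return bytesReadByDistance[Math.min(distance, MAX_DISTANCE)];
    }
}
```

Both accessors become one-liners, and adding a new distance bucket no longer means adding a case to two switch statements.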


> Refactor the code that maintains rack-aware counters in FileSystem$Statistics
> -
>
> Key: HADOOP-13031
> URL: https://issues.apache.org/jira/browse/HADOOP-13031
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>
> [HADOOP-13065] added a new interface for retrieving FS and FC Statistics. 
> This jira is to refactor the code that maintains rack-aware read metrics to 
> use the newly added StorageStatistics. Specifically,
> # The rack-aware read-bytes metric is mostly specific to HDFS; for example, 
> the local file system doesn't need it. We consider moving it from the base 
> FileSystemStorageStatistics to a dedicated HDFS-specific StorageStatistics 
> sub-class.
> # We would have to develop an optimized thread-local mechanism to do this, to 
> avoid causing a regression in HDFS stream performance.
> Optionally, it would be better to simply move this to HDFS's existing 
> per-stream {{ReadStatistics}} for now. As [HDFS-9579] states, ReadStatistics 
> metrics are only accessible via {{DFSClient}} or {{DFSInputStream}}, not 
> something that application frameworks such as MR and Tez can get to.






[jira] [Updated] (HADOOP-13031) Refactor the code that maintains rack-aware counters in FileSystem$Statistics

2016-06-05 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13031:
---
Priority: Major  (was: Minor)

> Refactor the code that maintains rack-aware counters in FileSystem$Statistics
> -
>
> Key: HADOOP-13031
> URL: https://issues.apache.org/jira/browse/HADOOP-13031
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Mingliang Liu
>
> According to the discussion in [HDFS-10175], using a composite data structure 
> (e.g. an enum map or array) to manage the distance->bytesRead mapping will 
> probably make the code simpler.
> # {{StatisticsData}} will be a bit shorter by delegating the operations to 
> the composite data structure.
> # {{incrementBytesReadByDistance(int distance, long newBytes)}} and 
> {{getBytesReadByDistance(int distance)}}, which switch-case over hard-coded 
> variables, may be simplified since we can set/get {{bytesRead}} by distance 
> directly from the map/array.
> This jira is to track the discussion and effort of refactoring the code that 
> maintains rack-aware counters.






[jira] [Updated] (HADOOP-13031) Refactor the code that maintains rack-aware counters in FileSystem$Statistics

2016-06-05 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13031:
---
Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-13065

> Refactor the code that maintains rack-aware counters in FileSystem$Statistics
> -
>
> Key: HADOOP-13031
> URL: https://issues.apache.org/jira/browse/HADOOP-13031
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>
> According to the discussion in [HDFS-10175], using a composite data structure 
> (e.g. an enum map or array) to manage the distance->bytesRead mapping will 
> probably make the code simpler.
> # {{StatisticsData}} will be a bit shorter by delegating the operations to 
> the composite data structure.
> # {{incrementBytesReadByDistance(int distance, long newBytes)}} and 
> {{getBytesReadByDistance(int distance)}}, which switch-case over hard-coded 
> variables, may be simplified since we can set/get {{bytesRead}} by distance 
> directly from the map/array.
> This jira is to track the discussion and effort of refactoring the code that 
> maintains rack-aware counters.






[jira] [Updated] (HADOOP-13140) FileSystem#initialize must not attempt to create StorageStatistics objects with null or empty schemes

2016-06-05 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13140:
---
Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-13065

> FileSystem#initialize must not attempt to create StorageStatistics objects 
> with null or empty schemes
> -
>
> Key: HADOOP-13140
> URL: https://issues.apache.org/jira/browse/HADOOP-13140
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Brahma Reddy Battula
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13140.000.patch, HADOOP-13140.001.patch, 
> HADOOP-13140.002.patch
>
>
> {{org.apache.hadoop.fs.GlobalStorageStatistics#put}} does not check for a 
> null scheme, so the internal map throws an NPE. This was reported by the 
> flaky test {{TestFileSystemApplicationHistoryStore}}. Thanks [~brahmareddy] 
> for reporting.
> To address this,
> # Fix the test by providing a valid URI, e.g. {{file:///}}
> # Guard against the null scheme in {{GlobalStorageStatistics#put}}
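The guard in step 2 could look like this sketch (hypothetical class and map choice, not the actual GlobalStorageStatistics code): validate the scheme before it reaches the backing map, so callers get a descriptive error instead of an NPE from the map internals.

```java
import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

// Hypothetical sketch of a scheme-keyed registry with a null/empty guard.
public class SchemeStatsRegistry {
    // Sorted concurrent maps reject null keys with an NPE, which is the
    // failure mode described above; the guard turns it into a clear error.
    private final ConcurrentNavigableMap<String, Long> map =
        new ConcurrentSkipListMap<>();

    public void put(String scheme, long value) {
        if (scheme == null || scheme.isEmpty()) {
            throw new IllegalArgumentException(
                "Scheme must be non-null and non-empty");
        }
        map.put(scheme, value);
    }
}
```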






[jira] [Updated] (HADOOP-13032) Refactor FileSystem$Statistics to use StorageStatistics

2016-06-05 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13032:
---
Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-13065

> Refactor FileSystem$Statistics to use StorageStatistics
> ---
>
> Key: HADOOP-13032
> URL: https://issues.apache.org/jira/browse/HADOOP-13032
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>
> [HADOOP-13065] added a new interface for retrieving FS and FC Statistics. 
> This jira is to track the effort of moving the {{Statistics}} class out of 
> {{FileSystem}} and making it use that new interface.
> We should keep the thread-local implementation. Benefits are:
> # they could be used in both {{FileContext}} and {{FileSystem}}
> # unified stats data structure
> # shorter source code
> Please note this will be a backwards-incompatible change.






[jira] [Commented] (HADOOP-12709) Cut s3:// from trunk

2016-06-05 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15316121#comment-15316121
 ] 

Mingliang Liu commented on HADOOP-12709:


Hi [~cnauroth], I filed JIRA ticket [HADOOP-13239] for deprecating s3:// from 
branch-2, assigned it to myself, and updated this JIRA's description. Let's 
track the effort of cutting it from {{trunk}} here, as we have been doing.

> Cut s3:// from trunk
> 
>
> Key: HADOOP-12709
> URL: https://issues.apache.org/jira/browse/HADOOP-12709
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha1
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HADOOP-12709.000.patch, HADOOP-12709.001.patch, 
> HADOOP-12709.002.patch, HADOOP-12709.003.patch, HADOOP-12709.004.patch, 
> HADOOP-12709.005.patch
>
>
> UPDATE: This JIRA ticket is to track the effort of cutting the s3:// from 
> {{trunk}} branch. Please see the cloned JIRA ticket [HADOOP-13239] for 
> deprecating s3:// from {{branch-2}}.
> The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* 
> shows that it's not being used. While invaluable at the time, s3n and 
> especially s3a render it obsolete except for reading existing data.
> I propose
> # Mark Java source as {{@deprecated}}
> # Warn the first time in a JVM that an S3 instance is created, "deprecated 
> -will be removed in future releases"
> # In Hadoop trunk we really cut it. Maybe have an attic project (external?) 
> which holds it for anyone who still wants it. Or: retain the code but remove 
> the {{fs.s3.impl}} config option, so you have to explicitly add it for use.
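Steps 1 and 2 of the proposal can be sketched as below (an illustrative warn-once pattern with a hypothetical class name, not the actual Hadoop code): annotate the class as deprecated and log the warning at most once per JVM, however many instances are created.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the deprecate-and-warn-once proposal.
@Deprecated
public class LegacyS3FileSystem {
    private static final AtomicBoolean WARNED = new AtomicBoolean(false);
    // Counter exposed only so the warn-once behavior is observable here.
    static final AtomicInteger WARN_COUNT = new AtomicInteger();

    public LegacyS3FileSystem() {
        // compareAndSet wins for exactly one caller per JVM, even when
        // many threads construct instances concurrently.
        if (WARNED.compareAndSet(false, true)) {
            WARN_COUNT.incrementAndGet();
            System.err.println("s3:// is deprecated and will be removed"
                + " in future releases");
        }
    }
}
```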






[jira] [Updated] (HADOOP-12709) Cut s3:// from trunk

2016-06-05 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-12709:
---
Description: 
UPDATE: This JIRA ticket is to track the effort of cutting the s3:// from 
{{trunk}} branch. Please see the cloned JIRA ticket [HADOOP-13239] for 
deprecating s3:// from {{branch-2}}.

The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* 
shows that it's not being used. While invaluable at the time, s3n and 
especially s3a render it obsolete except for reading existing data.

I propose

# Mark Java source as {{@deprecated}}
# Warn the first time in a JVM that an S3 instance is created, "deprecated 
-will be removed in future releases"
# In Hadoop trunk we really cut it. Maybe have an attic project (external?) 
which holds it for anyone who still wants it. Or: retain the code but remove 
the {{fs.s3.impl}} config option, so you have to explicitly add it for use.

  was:
The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* 
shows that it's not being used. While invaluable at the time, s3n and 
especially s3a render it obsolete except for reading existing data.

I propose

# Mark Java source as {{@deprecated}}
# Warn the first time in a JVM that an S3 instance is created, "deprecated 
-will be removed in future releases"
# In Hadoop trunk we really cut it. Maybe have an attic project (external?) 
which holds it for anyone who still wants it. Or: retain the code but remove 
the {{fs.s3.impl}} config option, so you have to explicitly add it for use.


> Cut s3:// from trunk
> 
>
> Key: HADOOP-12709
> URL: https://issues.apache.org/jira/browse/HADOOP-12709
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha1
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HADOOP-12709.000.patch, HADOOP-12709.001.patch, 
> HADOOP-12709.002.patch, HADOOP-12709.003.patch, HADOOP-12709.004.patch, 
> HADOOP-12709.005.patch
>
>
> UPDATE: This JIRA ticket is to track the effort of cutting the s3:// from 
> {{trunk}} branch. Please see the cloned JIRA ticket [HADOOP-13239] for 
> deprecating s3:// from {{branch-2}}.
> The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* 
> shows that it's not being used. While invaluable at the time, s3n and 
> especially s3a render it obsolete except for reading existing data.
> I propose
> # Mark Java source as {{@deprecated}}
> # Warn the first time in a JVM that an S3 instance is created, "deprecated 
> -will be removed in future releases"
> # In Hadoop trunk we really cut it. Maybe have an attic project (external?) 
> which holds it for anyone who still wants it. Or: retain the code but remove 
> the {{fs.s3.impl}} config option, so you have to explicitly add it for use.






[jira] [Updated] (HADOOP-13239) Deprecate s3:// in branch-2

2016-06-05 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13239:
---
Hadoop Flags:   (was: Incompatible change)

> Deprecate s3:// in branch-2
> ---
>
> Key: HADOOP-13239
> URL: https://issues.apache.org/jira/browse/HADOOP-13239
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>
> The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* 
> shows that it's not being used. While invaluable at the time, s3n and 
> especially s3a render it obsolete except for reading existing data.
> [HADOOP-12709] cuts the s3:// from {{trunk}} branch, and this JIRA ticket is 
> to deprecate it from {{branch-2}}.
> # Mark Java source as {{@deprecated}}
> # Warn the first time in a JVM that an S3 instance is created, "deprecated 
> -will be removed in future releases"
> Thanks [~ste...@apache.org] for the proposal.






[jira] [Updated] (HADOOP-12709) Cut s3:// from trunk

2016-06-05 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-12709:
---
Affects Version/s: (was: 2.8.0)
   3.0.0-alpha1

> Cut s3:// from trunk
> 
>
> Key: HADOOP-12709
> URL: https://issues.apache.org/jira/browse/HADOOP-12709
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha1
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HADOOP-12709.000.patch, HADOOP-12709.001.patch, 
> HADOOP-12709.002.patch, HADOOP-12709.003.patch, HADOOP-12709.004.patch, 
> HADOOP-12709.005.patch
>
>
> The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* 
> shows that it's not being used. While invaluable at the time, s3n and 
> especially s3a render it obsolete except for reading existing data.
> I propose
> # Mark Java source as {{@deprecated}}
> # Warn the first time in a JVM that an S3 instance is created, "deprecated 
> -will be removed in future releases"
> # In Hadoop trunk we really cut it. Maybe have an attic project (external?) 
> which holds it for anyone who still wants it. Or: retain the code but remove 
> the {{fs.s3.impl}} config option, so you have to explicitly add it for use.






[jira] [Updated] (HADOOP-13239) Deprecate s3:// in branch-2

2016-06-05 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13239:
---
Description: 
The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* 
shows that it's not being used. While invaluable at the time, s3n and 
especially s3a render it obsolete except for reading existing data.

[HADOOP-12709] cuts the s3:// from {{trunk}} branch, and this JIRA ticket is to 
deprecate it from {{branch-2}}.
# Mark Java source as {{@deprecated}}
# Warn the first time in a JVM that an S3 instance is created, "deprecated 
-will be removed in future releases"

Thanks [~ste...@apache.org] for the proposal.

  was:
The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* 
shows that it's not being used. While invaluable at the time, s3n and 
especially s3a render it obsolete except for reading existing data.

I propose

# Mark Java source as {{@deprecated}}
# Warn the first time in a JVM that an S3 instance is created, "deprecated 
-will be removed in future releases"
# In Hadoop trunk we really cut it. Maybe have an attic project (external?) 
which holds it for anyone who still wants it. Or: retain the code but remove 
the {{fs.s3.impl}} config option, so you have to explicitly add it for use.


> Deprecate s3:// in branch-2
> ---
>
> Key: HADOOP-13239
> URL: https://issues.apache.org/jira/browse/HADOOP-13239
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>
> The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* 
> shows that it's not being used. While invaluable at the time, s3n and 
> especially s3a render it obsolete except for reading existing data.
> [HADOOP-12709] cuts the s3:// from {{trunk}} branch, and this JIRA ticket is 
> to deprecate it from {{branch-2}}.
> # Mark Java source as {{@deprecated}}
> # Warn the first time in a JVM that an S3 instance is created, "deprecated 
> -will be removed in future releases"
> Thanks [~ste...@apache.org] for the proposal.






[jira] [Updated] (HADOOP-12709) Cut s3:// from trunk

2016-06-05 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-12709:
---
Summary: Cut s3:// from trunk  (was: Deprecate s3:// in branch-2; cut from
trunk)

> Cut s3:// from trunk
> 
>
> Key: HADOOP-12709
> URL: https://issues.apache.org/jira/browse/HADOOP-12709
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HADOOP-12709.000.patch, HADOOP-12709.001.patch, 
> HADOOP-12709.002.patch, HADOOP-12709.003.patch, HADOOP-12709.004.patch, 
> HADOOP-12709.005.patch
>
>
> The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now*
> shows that it's not being used. While invaluable at the time, s3n and
> especially s3a render it obsolete except for reading existing data.
> I propose
> # Mark Java source as {{@deprecated}}
> # Warn the first time in a JVM that an S3 instance is created: "deprecated;
> will be removed in future releases"
> # In Hadoop trunk, cut it entirely. Maybe have an attic project (external?)
> which holds it for anyone who still wants it. Or: retain the code but remove
> the {{fs.s3.impl}} config option, so you have to explicitly add it for use.






[jira] [Created] (HADOOP-13239) Deprecate s3:// in branch-2

2016-06-05 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-13239:
--

 Summary: Deprecate s3:// in branch-2
 Key: HADOOP-13239
 URL: https://issues.apache.org/jira/browse/HADOOP-13239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Mingliang Liu
Assignee: Mingliang Liu


The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now*
shows that it's not being used. While invaluable at the time, s3n and
especially s3a render it obsolete except for reading existing data.

I propose

# Mark Java source as {{@deprecated}}
# Warn the first time in a JVM that an S3 instance is created: "deprecated;
will be removed in future releases"
# In Hadoop trunk, cut it entirely. Maybe have an attic project (external?)
which holds it for anyone who still wants it. Or: retain the code but remove
the {{fs.s3.impl}} config option, so you have to explicitly add it for use.






[jira] [Commented] (HADOOP-13155) Implement TokenRenewer to renew and cancel delegation tokens in KMS

2016-06-05 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15316065#comment-15316065
 ] 

Xiao Chen commented on HADOOP-13155:


FYI created HDFS-10489 for the config deprecation.

> Implement TokenRenewer to renew and cancel delegation tokens in KMS
> ---
>
> Key: HADOOP-13155
> URL: https://issues.apache.org/jira/browse/HADOOP-13155
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, security
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.8.0
>
> Attachments: HADOOP-13155.01.patch, HADOOP-13155.02.patch, 
> HADOOP-13155.03.patch, HADOOP-13155.04.patch, HADOOP-13155.05.patch, 
> HADOOP-13155.06.patch, HADOOP-13155.07.patch, HADOOP-13155.pre.patch
>
>
> Service DelegationToken (DT) renewal is done in Yarn by 
> {{org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer}},
>  where it calls {{Token#renew}} and uses ServiceLoader to find the renewer 
> class 
> ([code|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java#L382]),
>  and invokes the renew method from it.
> We seem to be missing the token renewer class in KMS / HttpFSFileSystem, and
> hence Yarn defaults to {{TrivialRenewer}} for DTs of these kinds, resulting
> in the token not being renewed.
> As a side note, {{HttpFSFileSystem}} does have a {{renewDelegationToken}} 
> API, but I don't see it invoked in hadoop code base. KMS does not have any 
> renew hook.
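The dispatch described above can be modeled without any Hadoop dependencies (all names below are illustrative stand-ins, not Hadoop's actual classes): {{Token#renew}} walks the renewers that ServiceLoader discovered, asks each whether it handles the token's kind, and falls back to a trivial renewer — which is exactly why a missing KMS renewer leaves the token unrenewed.

```java
import java.util.ArrayList;
import java.util.List;

/** Simplified model of Hadoop's Token#renew dispatch (illustrative only). */
public class RenewerDispatch {
  interface Renewer {
    boolean handleKind(String kind);
    long renew(String kind);
  }

  /** Stands in for the renewers ServiceLoader would discover. */
  static final List<Renewer> RENEWERS = new ArrayList<>();

  /** Mirrors TrivialRenewer: claims every kind but never extends a token. */
  static final Renewer TRIVIAL = new Renewer() {
    public boolean handleKind(String kind) { return true; }
    public long renew(String kind) { return 0L; }  // no actual renewal
  };

  static long renew(String kind) {
    for (Renewer r : RENEWERS) {
      if (r.handleKind(kind)) {
        return r.renew(kind);
      }
    }
    return TRIVIAL.renew(kind);  // fallback when no renewer claims the kind
  }
}
```

In real Hadoop the fix is a {{TokenRenewer}} subclass registered in {{META-INF/services}} so ServiceLoader can find it; the model above only shows why its absence means the fallback wins.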






[jira] [Commented] (HADOOP-13220) Ignore findbugs checking in MiniKdc#stop and add the kerby version to hadoop-project/pom.xml

2016-06-05 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15315939#comment-15315939
 ] 

Akira AJISAKA commented on HADOOP-13220:


LGTM, +1.

> Ignore findbugs checking in MiniKdc#stop and add the kerby version 
> hadoop-project/pom.xml
> -
>
> Key: HADOOP-13220
> URL: https://issues.apache.org/jira/browse/HADOOP-13220
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Attachments: HADOOP-13220-V1.patch
>
>
> This is a follow-up jira from HADOOP-12911.
> 1. Currently there is this findbugs warning:
> {noformat}
> org.apache.hadoop.minikdc.MiniKdc.stop() calls Thread.sleep() with a lock 
> held At MiniKdc.java:lock held At MiniKdc.java:[line 345] 
> {noformat}
> As discussed in HADOOP-12911:
> bq. Why was this committed with a findbugs errors rather than adding the 
> necessary plumbing in pom.xml to make it go away?
> we will add the findbugsExcludeFile.xml entry and get rid of this once the
> kerby-1.0.0-rc3 release is available.
> 2. Add the kerby version to hadoop-project/pom.xml
> bq. hadoop-project/pom.xml contains the dependencies of all libraries used in 
> all modules of hadoop, under dependencyManagement. Only here version will be 
> mentioned. All other Hadoop Modules will inherit hadoop-project, so all 
> submodules will use the same version. In submodule, version need not be 
> mentioned in pom.xml. This will make version management easier.
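The convention quoted above looks roughly like this (artifact and version are illustrative, not necessarily what the patch uses): the parent pom pins the version once under dependencyManagement, and submodules declare the dependency with no version.

```xml
<!-- hadoop-project/pom.xml: version pinned once, under dependencyManagement -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.kerby</groupId>
      <artifactId>kerb-simplekdc</artifactId>
      <version>1.0.0-RC2</version>  <!-- illustrative -->
    </dependency>
  </dependencies>
</dependencyManagement>

<!-- submodule pom.xml: inherits the version from hadoop-project -->
<dependency>
  <groupId>org.apache.kerby</groupId>
  <artifactId>kerb-simplekdc</artifactId>
</dependency>
```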






[jira] [Updated] (HADOOP-13220) Ignore findbugs checking in MiniKdc#stop and add the kerby version to hadoop-project/pom.xml

2016-06-05 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-13220:
---
Assignee: Jiajia Li

> Ignore findbugs checking in MiniKdc#stop and add the kerby version 
> hadoop-project/pom.xml
> -
>
> Key: HADOOP-13220
> URL: https://issues.apache.org/jira/browse/HADOOP-13220
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Attachments: HADOOP-13220-V1.patch
>
>
> This is a follow-up jira from HADOOP-12911.
> 1. Currently there is this findbugs warning:
> {noformat}
> org.apache.hadoop.minikdc.MiniKdc.stop() calls Thread.sleep() with a lock 
> held At MiniKdc.java:lock held At MiniKdc.java:[line 345] 
> {noformat}
> As discussed in HADOOP-12911:
> bq. Why was this committed with a findbugs errors rather than adding the 
> necessary plumbing in pom.xml to make it go away?
> we will add the findbugsExcludeFile.xml entry and get rid of this once the
> kerby-1.0.0-rc3 release is available.
> 2. Add the kerby version to hadoop-project/pom.xml
> bq. hadoop-project/pom.xml contains the dependencies of all libraries used in 
> all modules of hadoop, under dependencyManagement. Only here version will be 
> mentioned. All other Hadoop Modules will inherit hadoop-project, so all 
> submodules will use the same version. In submodule, version need not be 
> mentioned in pom.xml. This will make version management easier.






[jira] [Updated] (HADOOP-10672) Add support for pushing metrics to OpenTSDB

2016-06-05 Thread zhangyubiao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangyubiao updated HADOOP-10672:
-
Status: Patch Available  (was: In Progress)

> Add support for pushing metrics to OpenTSDB
> ---
>
> Key: HADOOP-10672
> URL: https://issues.apache.org/jira/browse/HADOOP-10672
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 0.21.0
>Reporter: Kamaldeep Singh
>Assignee: zhangyubiao
>Priority: Minor
> Attachments: HADOOP-10672-v1.patch, HADOOP-10672-v2.patch, 
> HADOOP-10672-v3.patch, HADOOP-10672-v4.patch, HADOOP-10672-v5.patch, 
> HADOOP-10672.patch
>
>
> We wish to add support for pushing metrics to OpenTSDB from Hadoop.
> Code and instructions at https://github.com/eBay/hadoop-tsdb-connector
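For context, pushing a metric to OpenTSDB ultimately means emitting lines in its telnet-style {{put}} protocol. A minimal formatter for one data point (illustrative, not the connector's actual code) might look like:

```java
import java.util.Map;
import java.util.TreeMap;

/** Sketch: format one data point as an OpenTSDB telnet-protocol "put" line. */
public class OpenTsdbLine {
  static String put(String metric, long epochSeconds, double value,
                    Map<String, String> tags) {
    StringBuilder sb = new StringBuilder("put ")
        .append(metric).append(' ')
        .append(epochSeconds).append(' ')
        .append(value);
    // OpenTSDB requires at least one tag; TreeMap keeps output deterministic.
    for (Map.Entry<String, String> tag : tags.entrySet()) {
      sb.append(' ').append(tag.getKey()).append('=').append(tag.getValue());
    }
    return sb.toString();
  }
}
```

A metrics2 sink implementation would build such lines from each MetricsRecord and write them to the TSD socket.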






[jira] [Updated] (HADOOP-10672) Add support for pushing metrics to OpenTSDB

2016-06-05 Thread zhangyubiao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangyubiao updated HADOOP-10672:
-
Status: In Progress  (was: Patch Available)

> Add support for pushing metrics to OpenTSDB
> ---
>
> Key: HADOOP-10672
> URL: https://issues.apache.org/jira/browse/HADOOP-10672
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 0.21.0
>Reporter: Kamaldeep Singh
>Assignee: zhangyubiao
>Priority: Minor
> Attachments: HADOOP-10672-v1.patch, HADOOP-10672-v2.patch, 
> HADOOP-10672-v3.patch, HADOOP-10672-v4.patch, HADOOP-10672-v5.patch, 
> HADOOP-10672.patch
>
>
> We wish to add support for pushing metrics to OpenTSDB from Hadoop.
> Code and instructions at https://github.com/eBay/hadoop-tsdb-connector






[jira] [Resolved] (HADOOP-13237) s3a initialization against public bucket fails if caller lacks any credentials

2016-06-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13237.
-
   Resolution: Won't Fix
Fix Version/s: 2.8.0

> s3a initialization against public bucket fails if caller lacks any credentials
> --
>
> Key: HADOOP-13237
> URL: https://issues.apache.org/jira/browse/HADOOP-13237
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0
>
>
> If an S3 bucket is public, anyone should be able to read from it.
> However, you cannot create an s3a client bonded to a public bucket unless you 
> have some credentials; the {{doesBucketExist()}} check rejects the call.






[jira] [Commented] (HADOOP-13237) s3a initialization against public bucket fails if caller lacks any credentials

2016-06-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15315828#comment-15315828
 ] 

Steve Loughran commented on HADOOP-13237:
-

I don't see us being able to fix this; I've tried to bypass auth or insert fake
credentials, and the s3 reads of the public landsat dataset fail at the
{{verifyBucketExists()}} call. Comment that out and it fails on the first read.
That's even though the datasets are visible over HTTP.

Assumption: you really need credentials to use the AWS library, even if you are
accessing other people's public data. The client is presumably setting up auth
without negotiating requirements with the far end, and bailing out early if
there are no credentials at all. And if you make up credentials, they get
rejected on the S3 side for being invalid.
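For what it's worth, later s3a work added an anonymous-credentials option for exactly this case; if your release includes it (verify the class exists in your version before relying on it), configuration like the following lets s3a read public buckets without any keys:

```xml
<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider</value>
</property>
```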

> s3a initialization against public bucket fails if caller lacks any credentials
> --
>
> Key: HADOOP-13237
> URL: https://issues.apache.org/jira/browse/HADOOP-13237
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> If an S3 bucket is public, anyone should be able to read from it.
> However, you cannot create an s3a client bonded to a public bucket unless you 
> have some credentials; the {{doesBucketExist()}} check rejects the call.


