[jira] [Commented] (HDFS-11339) Support File IO sampling for Datanode IO profiling hooks

2017-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15825592#comment-15825592
 ] 

Hudson commented on HDFS-11339:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11124 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11124/])
HDFS-11339. Fix import. (arp: rev 89bb05d92ba79f05d23a0252d838088e830f20a3)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProfilingFileIoEvents.java


> Support File IO sampling for Datanode IO profiling hooks
> 
>
> Key: HDFS-11339
> URL: https://issues.apache.org/jira/browse/HDFS-11339
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-11339.000.patch, HDFS-11339.001.patch, 
> HDFS-11339.002.patch, HDFS-11339.003.patch
>
>
> HDFS-10958 introduces instrumentation hooks around DataNode disk IO and 
> HDFS-10959 adds support for profiling hooks to expose latency statistics. 
> Instead of profiling all the file IO events, we can sample a fraction of the 
> events and profile them. The fraction to be sampled should be configurable.
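The sampling idea above can be sketched as follows. This is illustrative only, not the actual ProfilingFileIoEvents code; the class name and the clamping behavior are assumptions. An event is profiled when a uniform random draw falls below the configured fraction:

```java
import java.util.concurrent.ThreadLocalRandom;

// Illustrative sketch of fractional sampling for file IO profiling.
// Not the actual Hadoop implementation; names here are hypothetical.
public class IoSamplingSketch {
    private final double sampleFraction;

    public IoSamplingSketch(double sampleFraction) {
        // Clamp to [0.0, 1.0] so a misconfigured fraction cannot misbehave.
        this.sampleFraction = Math.min(1.0, Math.max(0.0, sampleFraction));
    }

    // Decide per event whether to profile it. ThreadLocalRandom avoids lock
    // contention when many IO threads make this decision concurrently.
    public boolean shouldProfile() {
        return ThreadLocalRandom.current().nextDouble() < sampleFraction;
    }
}
```

With a fraction of 0.0 nothing is profiled and with 1.0 every event is profiled, so the configuration degrades gracefully at both extremes.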



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11124) Report blockIds of internal blocks for EC files in Fsck

2017-01-16 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-11124:

Description: 
At the moment, when we do fsck for an EC file which has corrupt blocks and 
missing blocks, the result of fsck is like this:

{quote}
/data/striped 393216 bytes, erasure-coded: policy=RS-DEFAULT-6-3-64k, 1 
block(s): 
/data/striped: CORRUPT blockpool BP-1204772930-172.16.165.209-1478761131832 
block blk_-9223372036854775792
 CORRUPT 1 blocks of total size 393216 B
0. BP-1204772930-172.16.165.209-1478761131832:blk_-9223372036854775792_1001 
len=393216 Live_repl=4  
[DatanodeInfoWithStorage[127.0.0.1:61617,DS-bcfebe1f-ff54-4d57-9258-ff5bdfde01b5,DISK](CORRUPT),
 
DatanodeInfoWithStorage[127.0.0.1:61601,DS-9abf64d0-bb6b-434c-8c5e-de8e3b278f91,DISK](CORRUPT),
 
DatanodeInfoWithStorage[127.0.0.1:61596,DS-62698e61-c13f-44f2-9da5-614945960221,DISK](CORRUPT),
 
DatanodeInfoWithStorage[127.0.0.1:61605,DS-bbce6708-16fe-44ca-9f1c-506cf00f7e0d,DISK](LIVE),
 
DatanodeInfoWithStorage[127.0.0.1:61592,DS-9cdd4afd-2dc8-40da-8805-09712e2afcc4,DISK](LIVE),
 
DatanodeInfoWithStorage[127.0.0.1:61621,DS-f2a72d28-c880-4ffe-a70f-0f403e374504,DISK](LIVE),
 
DatanodeInfoWithStorage[127.0.0.1:61629,DS-fa6ac558-2c38-41fe-9ef8-222b3f6b2b3c,DISK](LIVE)]
{quote}

It would be useful for admins if it reports the blockIds of the internal blocks.
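For illustration, the internal block IDs in an HDFS striped block group can be derived from the group ID. This sketch assumes the standard HDFS convention that the group ID reserves its low bits for the block index, so internal block i has ID groupId + i:

```java
// Illustrative sketch: derive EC internal block IDs from a block group ID,
// assuming the HDFS convention that internal block i has ID groupId + i.
public class EcBlockIdSketch {
    static long internalBlockId(long blockGroupId, int index) {
        return blockGroupId + index;
    }

    public static void main(String[] args) {
        long group = -9223372036854775792L; // group ID from the fsck output above
        // RS-6-3: 6 data blocks + 3 parity blocks = 9 internal blocks
        for (int i = 0; i < 9; i++) {
            System.out.println("blk_" + internalBlockId(group, i));
        }
    }
}
```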

  was:
At moment, when we do fsck for an EC file which has corrupt blocks and missing 
blocks, the result of fsck is like this:

{quote}
/data/striped 393216 bytes, erasure-coded: policy=RS-DEFAULT-6-3-64k, 1 
block(s): 
/data/striped: CORRUPT blockpool BP-1204772930-172.16.165.209-1478761131832 
block blk_-9223372036854775792
 CORRUPT 1 blocks of total size 393216 B
0. BP-1204772930-172.16.165.209-1478761131832:blk_-9223372036854775792_1001 
len=393216 Live_repl=4  
[DatanodeInfoWithStorage[127.0.0.1:61617,DS-bcfebe1f-ff54-4d57-9258-ff5bdfde01b5,DISK](CORRUPT),
 
DatanodeInfoWithStorage[127.0.0.1:61601,DS-9abf64d0-bb6b-434c-8c5e-de8e3b278f91,DISK](CORRUPT),
 
DatanodeInfoWithStorage[127.0.0.1:61596,DS-62698e61-c13f-44f2-9da5-614945960221,DISK](CORRUPT),
 
DatanodeInfoWithStorage[127.0.0.1:61605,DS-bbce6708-16fe-44ca-9f1c-506cf00f7e0d,DISK](LIVE),
 
DatanodeInfoWithStorage[127.0.0.1:61592,DS-9cdd4afd-2dc8-40da-8805-09712e2afcc4,DISK](LIVE),
 
DatanodeInfoWithStorage[127.0.0.1:61621,DS-f2a72d28-c880-4ffe-a70f-0f403e374504,DISK](LIVE),
 
DatanodeInfoWithStorage[127.0.0.1:61629,DS-fa6ac558-2c38-41fe-9ef8-222b3f6b2b3c,DISK](LIVE)]
{quote}

It would be useful for admins if it reports the blockIds of the internal blocks.


> Report blockIds of internal blocks for EC files in Fsck
> ---
>
> Key: HDFS-11124
> URL: https://issues.apache.org/jira/browse/HDFS-11124
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11124.1.patch
>
>
> At the moment, when we do fsck for an EC file which has corrupt blocks and 
> missing blocks, the result of fsck is like this:
> {quote}
> /data/striped 393216 bytes, erasure-coded: policy=RS-DEFAULT-6-3-64k, 1 
> block(s): 
> /data/striped: CORRUPT blockpool BP-1204772930-172.16.165.209-1478761131832 
> block blk_-9223372036854775792
>  CORRUPT 1 blocks of total size 393216 B
> 0. BP-1204772930-172.16.165.209-1478761131832:blk_-9223372036854775792_1001 
> len=393216 Live_repl=4  
> [DatanodeInfoWithStorage[127.0.0.1:61617,DS-bcfebe1f-ff54-4d57-9258-ff5bdfde01b5,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:61601,DS-9abf64d0-bb6b-434c-8c5e-de8e3b278f91,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:61596,DS-62698e61-c13f-44f2-9da5-614945960221,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:61605,DS-bbce6708-16fe-44ca-9f1c-506cf00f7e0d,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:61592,DS-9cdd4afd-2dc8-40da-8805-09712e2afcc4,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:61621,DS-f2a72d28-c880-4ffe-a70f-0f403e374504,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:61629,DS-fa6ac558-2c38-41fe-9ef8-222b3f6b2b3c,DISK](LIVE)]
> {quote}
> It would be useful for admins if it reports the blockIds of the internal 
> blocks.






[jira] [Commented] (HDFS-11124) Report blockIds of internal blocks for EC files in Fsck

2017-01-16 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15825572#comment-15825572
 ] 

Takanobu Asanuma commented on HDFS-11124:
-

The failed tests pass on my laptop.

Hi [~jingzhao]. Could you take a look at this jira?

> Report blockIds of internal blocks for EC files in Fsck
> ---
>
> Key: HDFS-11124
> URL: https://issues.apache.org/jira/browse/HDFS-11124
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11124.1.patch
>
>
> At the moment, when we do fsck for an EC file which has corrupt blocks and 
> missing blocks, the result of fsck is like this:
> {quote}
> /data/striped 393216 bytes, erasure-coded: policy=RS-DEFAULT-6-3-64k, 1 
> block(s): 
> /data/striped: CORRUPT blockpool BP-1204772930-172.16.165.209-1478761131832 
> block blk_-9223372036854775792
>  CORRUPT 1 blocks of total size 393216 B
> 0. BP-1204772930-172.16.165.209-1478761131832:blk_-9223372036854775792_1001 
> len=393216 Live_repl=4  
> [DatanodeInfoWithStorage[127.0.0.1:61617,DS-bcfebe1f-ff54-4d57-9258-ff5bdfde01b5,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:61601,DS-9abf64d0-bb6b-434c-8c5e-de8e3b278f91,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:61596,DS-62698e61-c13f-44f2-9da5-614945960221,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:61605,DS-bbce6708-16fe-44ca-9f1c-506cf00f7e0d,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:61592,DS-9cdd4afd-2dc8-40da-8805-09712e2afcc4,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:61621,DS-f2a72d28-c880-4ffe-a70f-0f403e374504,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:61629,DS-fa6ac558-2c38-41fe-9ef8-222b3f6b2b3c,DISK](LIVE)]
> {quote}
> It would be useful for admins if it reports the blockIds of the internal 
> blocks.






[jira] [Commented] (HDFS-11339) Support File IO sampling for Datanode IO profiling hooks

2017-01-16 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15825567#comment-15825567
 ] 

Arpit Agarwal commented on HDFS-11339:
--

Pushed an addendum commit to trunk to fix an import; I noticed it while 
cherry-picking to branch-2.

{code}
-import org.jboss.netty.util.internal.ThreadLocalRandom;

 import javax.annotation.Nullable;
+import java.util.concurrent.ThreadLocalRandom;
{code}

> Support File IO sampling for Datanode IO profiling hooks
> 
>
> Key: HDFS-11339
> URL: https://issues.apache.org/jira/browse/HDFS-11339
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-11339.000.patch, HDFS-11339.001.patch, 
> HDFS-11339.002.patch, HDFS-11339.003.patch
>
>
> HDFS-10958 introduces instrumentation hooks around DataNode disk IO and 
> HDFS-10959 adds support for profiling hooks to expose latency statistics. 
> Instead of profiling all the file IO events, we can sample a fraction of the 
> events and profile them. The fraction to be sampled should be configurable.






[jira] [Updated] (HDFS-11339) Support File IO sampling for Datanode IO profiling hooks

2017-01-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11339:
-
Fix Version/s: 2.9.0

Cherry-picked to branch-2.

> Support File IO sampling for Datanode IO profiling hooks
> 
>
> Key: HDFS-11339
> URL: https://issues.apache.org/jira/browse/HDFS-11339
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-11339.000.patch, HDFS-11339.001.patch, 
> HDFS-11339.002.patch, HDFS-11339.003.patch
>
>
> HDFS-10958 introduces instrumentation hooks around DataNode disk IO and 
> HDFS-10959 adds support for profiling hooks to expose latency statistics. 
> Instead of profiling all the file IO events, we can sample a fraction of the 
> events and profile them. The fraction to be sampled should be configurable.






[jira] [Updated] (HDFS-11299) Support multiple Datanode File IO hooks

2017-01-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11299:
-
Fix Version/s: 2.9.0

Cherry-picked to branch-2.

> Support multiple Datanode File IO hooks
> ---
>
> Key: HDFS-11299
> URL: https://issues.apache.org/jira/browse/HDFS-11299
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-11299.000.patch, HDFS-11299.001.patch, 
> HDFS-11299.002.patch
>
>
> HDFS-10958 introduces instrumentation hooks around DataNode disk IO and 
> HDFS-10959 adds support for profiling hooks to expose latency statistics. 
> Instead of choosing only one hook using Config parameters, we want to add two 
> separate hooks - one for profiling and one for fault injection. The fault 
> injection hook will be useful for testing purposes. 
> This jira only introduces support for the fault injection hook; the 
> implementation will come later.
> Also, the Default and Counting FileIOEvents would no longer be needed, as we 
> can control enabling the profiling and fault injection hooks using config 
> parameters.
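The two-hook design can be sketched with a composite dispatcher. The interface and class names below are hypothetical, not Hadoop's actual hook API:

```java
// Hypothetical sketch: fan out each file IO event to both a profiling hook
// and a fault-injection hook instead of selecting a single hook via config.
interface FileIoHook {
    void onIoEvent(String op, long durationMs);
}

class CompositeFileIoHook implements FileIoHook {
    private final FileIoHook[] hooks;

    CompositeFileIoHook(FileIoHook... hooks) {
        this.hooks = hooks;
    }

    @Override
    public void onIoEvent(String op, long durationMs) {
        for (FileIoHook hook : hooks) {
            hook.onIoEvent(op, durationMs); // every hook sees every event
        }
    }
}
```

Each hook can then be enabled or disabled through its own config parameter, which is why dedicated default and counting event classes become unnecessary.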






[jira] [Commented] (HDFS-11282) Document the missing metrics of DataNode Volume IO operations

2017-01-16 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15825534#comment-15825534
 ] 

Arpit Agarwal commented on HDFS-11282:
--

Cherry-picked to branch-2.

> Document the missing metrics of DataNode Volume IO operations
> -
>
> Key: HDFS-11282
> URL: https://issues.apache.org/jira/browse/HDFS-11282
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-11282.001.patch, HDFS-11282.002.patch, 
> HDFS-11282.003.patch, HDFS-11282.004.patch, metrics-rendered.png
>
>
> HDFS-10959 added many metrics for DataNode volume IO operations, but they 
> have not yet been documented. This JIRA addresses that.






[jira] [Updated] (HDFS-11282) Document the missing metrics of DataNode Volume IO operations

2017-01-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11282:
-
Fix Version/s: 2.9.0

> Document the missing metrics of DataNode Volume IO operations
> -
>
> Key: HDFS-11282
> URL: https://issues.apache.org/jira/browse/HDFS-11282
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-11282.001.patch, HDFS-11282.002.patch, 
> HDFS-11282.003.patch, HDFS-11282.004.patch, metrics-rendered.png
>
>
> HDFS-10959 added many metrics for DataNode volume IO operations, but they 
> have not yet been documented. This JIRA addresses that.






[jira] [Updated] (HDFS-11279) Cleanup unused DataNode#checkDiskErrorAsync()

2017-01-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11279:
-
Fix Version/s: 2.9.0

Pushed to branch-2.

> Cleanup unused DataNode#checkDiskErrorAsync()
> -
>
> Key: HDFS-11279
> URL: https://issues.apache.org/jira/browse/HDFS-11279
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Hanisha Koneru
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-11279.000.patch, HDFS-11279.001.patch, 
> HDFS-11279.002.patch
>
>
> After HDFS-11274, we no longer trigger a check of all DataNode volumes upon 
> an IO failure on a single volume. This makes the original implementations, 
> DataNode#checkDiskErrorAsync() and DatasetVolumeChecker#checkAllVolumesAsync(), 
> unused in any of the production code. 
> This ticket is opened to remove this unused code and any related tests. 






[jira] [Updated] (HDFS-11274) Datanode should only check the failed volume upon IO errors

2017-01-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11274:
-
Fix Version/s: 2.9.0

> Datanode should only check the failed volume upon IO errors 
> 
>
> Key: HDFS-11274
> URL: https://issues.apache.org/jira/browse/HDFS-11274
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-11274.01.patch, HDFS-11274.02.patch, 
> HDFS-11274.03.patch, HDFS-11274.04.patch, HDFS-11274.05.patch, 
> HDFS-11274.06.patch, HDFS-11274-branch-2.01.patch
>
>
> This is a performance improvement that is possible after HDFS-11182. The goal 
> is to trigger async volume check with throttling only on suspected volume 
> upon datanode file IO errors. 
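A minimal sketch of a throttled, per-volume check trigger. The API here is hypothetical; the real DatasetVolumeChecker differs:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative sketch: on an IO error, schedule a check of only the suspect
// volume, throttled to at most one check per volume per minGapMs.
public class ThrottledVolumeCheck {
    private final long minGapMs;
    private final ConcurrentMap<String, Long> lastCheckMs = new ConcurrentHashMap<>();

    public ThrottledVolumeCheck(long minGapMs) {
        this.minGapMs = minGapMs;
    }

    // Returns true if an async check should be scheduled for this volume now.
    public boolean maybeSchedule(String volume, long nowMs) {
        Long prev = lastCheckMs.get(volume);
        if (prev != null && nowMs - prev < minGapMs) {
            return false; // checked too recently; skip to avoid check storms
        }
        lastCheckMs.put(volume, nowMs);
        return true;
    }
}
```

Throttling matters because a failing disk can emit a burst of IO errors, and each error should not launch a fresh check of the same volume.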






[jira] [Updated] (HDFS-11182) Update DataNode to use DatasetVolumeChecker

2017-01-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11182:
-
Fix Version/s: 2.9.0

Pushed to branch-2. Thank you for reviewing the branch-2 patch [~xyao].

> Update DataNode to use DatasetVolumeChecker
> ---
>
> Key: HDFS-11182
> URL: https://issues.apache.org/jira/browse/HDFS-11182
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-11182-branch-2.01.patch
>
>
> Update DataNode to use the DatasetVolumeChecker class introduced in 
> HDFS-11149 to parallelize disk checks.






[jira] [Updated] (HDFS-11337) [branch-2] Add instrumentation hooks around Datanode disk IO

2017-01-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11337:
-
  Resolution: Implemented
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to trunk and updated Fix Version for HDFS-11337.

> [branch-2] Add instrumentation hooks around Datanode disk IO
> 
>
> Key: HDFS-11337
> URL: https://issues.apache.org/jira/browse/HDFS-11337
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Arpit Agarwal
> Attachments: HDFS-11337-branch-2.01.patch, 
> HDFS-11337-branch-2.02.patch, HDFS-11337-branch-2.03.patch, 
> HDFS-11337-branch-2.04.patch, HDFS-11337-branch-2.05.patch, 
> HDFS-11337-branch-2.06.patch
>
>
> Cloned from HDFS-10958 to verify the branch-2 backport.






[jira] [Updated] (HDFS-10959) Adding per disk IO statistics and metrics in DataNode.

2017-01-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10959:
-
Fix Version/s: 2.9.0

Pushed to branch-2.

> Adding per disk IO statistics and metrics in DataNode.
> --
>
> Key: HDFS-10959
> URL: https://issues.apache.org/jira/browse/HDFS-10959
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-10959.00.patch, HDFS-10959.01.patch, 
> HDFS-10959.02.patch, HDFS-10959.03.patch, HDFS-10959.04.patch, 
> HDFS-10959-branch-2.01.patch
>
>
> This ticket is opened to support per disk IO statistics in DataNode based on 
> HDFS-10930. The statistics added will make it easier to implement HDFS-4169 
> "Add per-disk latency metrics to DataNode".






[jira] [Commented] (HDFS-11274) Datanode should only check the failed volume upon IO errors

2017-01-16 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15825516#comment-15825516
 ] 

Arpit Agarwal commented on HDFS-11274:
--

Results from running test-patch on the branch-2 patch.

{code}
| Vote |  Subsystem |  Runtime   | Comment

|  +1  |   @author  |  0m 0s | The patch does not contain any @author
|  ||| tags.
|  +1  |test4tests  |  0m 0s | The patch appears to include 17 new or
|  ||| modified test files.
|  +1  |mvninstall  |  3m 30s| branch-2 passed
|  +1  |   compile  |  0m 27s| branch-2 passed
|  +1  |checkstyle  |  0m 25s| branch-2 passed
|  +1  |   mvnsite  |  0m 33s| branch-2 passed
|  +1  |mvneclipse  |  0m 11s| branch-2 passed
|  +1  |  findbugs  |  1m 0s | branch-2 passed
|  +1  |   javadoc  |  0m 40s| branch-2 passed
|  +1  |mvninstall  |  0m 27s| the patch passed
|  +1  |   compile  |  0m 25s| the patch passed
|  -1  | javac  |  0m 25s| hadoop-hdfs-project_hadoop-hdfs
|  ||| generated 1 new + 43 unchanged - 1 fixed
|  ||| = 44 total (was 44)
|  -1  |checkstyle  |  0m 23s| hadoop-hdfs-project/hadoop-hdfs: The
|  ||| patch generated 18 new + 670 unchanged -
|  ||| 36 fixed = 688 total (was 706)
|  +1  |   mvnsite  |  0m 29s| the patch passed
|  +1  |mvneclipse  |  0m 9s | the patch passed
|  +1  |whitespace  |  0m 0s | The patch has no whitespace issues.
|  +1  |  findbugs  |  1m 3s | the patch passed
|  +1  |   javadoc  |  0m 38s| the patch passed
|  -1  |  unit  |  99m 51s   | hadoop-hdfs in the patch failed.
|  +1  |asflicense  |  0m 13s| The patch does not generate ASF License
|  ||| warnings.
|  ||  111m 23s  |

  Reason | Tests
 Failed junit tests  |  hadoop.hdfs.server.datanode.TestBatchIbr
 |  hadoop.hdfs.server.datanode.TestDataNodeReconfiguration
 |  
hadoop.hdfs.server.namenode.snapshot.TestSnapshotReplication
 |  hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
 |  
hadoop.hdfs.server.namenode.TestDefaultBlockPlacementPolicy
 |  
hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant
 |  
hadoop.hdfs.server.namenode.TestUpgradeDomainBlockPlacementPolicy
 |  
hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality
 |  hadoop.hdfs.server.namenode.TestFavoredNodesEndToEnd
 |  
hadoop.hdfs.server.blockmanagement.TestPendingReplication
 |  
hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks
 |  hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean
 |  hadoop.hdfs.server.balancer.TestBalancer
 |  hadoop.hdfs.server.mover.TestStorageMover
 |  hadoop.hdfs.TestInjectionForSimulatedStorage
 |  hadoop.hdfs.TestCrcCorruption
 |  hadoop.hdfs.TestEncryptedTransfer
 |  hadoop.hdfs.TestMaintenanceState
 |  hadoop.hdfs.TestReplication
 |  hadoop.hdfs.TestSetrepDecreasing
 |  hadoop.hdfs.TestSetrepIncreasing
 |  hadoop.hdfs.TestDatanodeDeath
 |  hadoop.hdfs.TestFileAppend2
 |  hadoop.hdfs.TestDecommission
 |  hadoop.hdfs.TestLeaseRecovery2
{code}


None of the test failures reproduced on rerun (they were caused by _failed to 
create a child event loop_ errors). The javac warning is a pre-existing one in 
a test case. The checkstyle issues were also in existing code, except for some 
indentation warnings. I will commit this to branch-2 shortly.

> Datanode should only check the failed volume upon IO errors 
> 
>
> Key: HDFS-11274
> URL: https://issues.apache.org/jira/browse/HDFS-11274
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11274.01.patch, HDFS-11274.02.patch, 
> HDFS-11274.03.patch, HDFS-11274.04.patch, HDFS-11274.05.patch, 
> HDFS-11274.06.patch, HDFS-11274-branch-2.01.patch
>
>
> This is a performance improvement that is possible after HDFS-11182. The goal 
> is to trigger async volume check with throttling only on the suspected volume 
> upon datanode file IO errors. 
[jira] [Commented] (HDFS-4025) QJM: Sychronize past log segments to JNs that missed them

2017-01-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15825321#comment-15825321
 ] 

Hadoop QA commented on HDFS-4025:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 5 new + 546 unchanged - 0 fixed = 551 total (was 546) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestFsDatasetCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-4025 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12847725/HDFS-4025.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 532db7744759 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e407449 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18188/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18188/artifact/patchprocess/whitespace-tabs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18188/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18188/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18188/console |
| Powered by | Apache Yetus 

[jira] [Commented] (HDFS-11121) Add assertions to BlockInfo#addStorage to protect from breaking reportedBlock-blockGroup mapping

2017-01-16 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824967#comment-15824967
 ] 

Wei-Chiu Chuang commented on HDFS-11121:


LGTM +1

> Add assertions to BlockInfo#addStorage to protect from breaking 
> reportedBlock-blockGroup mapping
> 
>
> Key: HDFS-11121
> URL: https://issues.apache.org/jira/browse/HDFS-11121
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Critical
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11121.1.patch, HDFS-11121.2.patch, 
> HDFS-11121.3.patch
>
>
> There are no assertions in {{BlockInfo.addStorage}}. This may cause 
> {{BlockInfo}} instances to accept strange block reports and result in serious 
> bugs, like HDFS-10858.






[jira] [Updated] (HDFS-4025) QJM: Sychronize past log segments to JNs that missed them

2017-01-16 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-4025:
-
Attachment: HDFS-4025.007.patch

Thank you [~jingzhao] for reviewing the patch and for the comments. I have 
tried addressing all your comments in patch v07.

> QJM: Sychronize past log segments to JNs that missed them
> -
>
> Key: HDFS-4025
> URL: https://issues.apache.org/jira/browse/HDFS-4025
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha
>Affects Versions: QuorumJournalManager (HDFS-3077)
>Reporter: Todd Lipcon
>Assignee: Hanisha Koneru
> Fix For: QuorumJournalManager (HDFS-3077)
>
> Attachments: HDFS-4025.000.patch, HDFS-4025.001.patch, 
> HDFS-4025.002.patch, HDFS-4025.003.patch, HDFS-4025.004.patch, 
> HDFS-4025.005.patch, HDFS-4025.006.patch, HDFS-4025.007.patch
>
>
> Currently, if a JournalManager crashes and misses some segment of logs, and 
> then comes back, it will be re-added as a valid part of the quorum on the 
> next log roll. However, it will not have a complete history of log segments 
> (i.e., any individual JN may have gaps in its transaction history). This 
> mirrors the behavior of the NameNode when there are multiple local 
> directories specified.
> However, it would be better if a background thread noticed these gaps and 
> "filled them in" by grabbing the segments from other JournalNodes. This 
> increases the resilience of the system when JournalNodes get reformatted or 
> otherwise lose their local disk.
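The first step for such a background thread is spotting the missing transaction ranges in a JournalNode's local segment history. A minimal sketch of that gap detection, assuming segments are represented as sorted, non-overlapping `{firstTxId, lastTxId}` pairs (the real JournalNode code differs):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative gap detection in the spirit of HDFS-4025: find txid ranges
// that are missing between consecutive local log segments. A background
// thread would then fetch each missing range from another JournalNode.
class SegmentGapFinder {
    /** Each segment is {firstTxId, lastTxId}, sorted and non-overlapping. */
    static List<long[]> findGaps(List<long[]> segments) {
        List<long[]> gaps = new ArrayList<>();
        for (int i = 1; i < segments.size(); i++) {
            long prevEnd = segments.get(i - 1)[1];
            long nextStart = segments.get(i)[0];
            if (nextStart > prevEnd + 1) {
                // Txids prevEnd+1 .. nextStart-1 are missing locally.
                gaps.add(new long[]{prevEnd + 1, nextStart - 1});
            }
        }
        return gaps;
    }
}
```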






[jira] [Commented] (HDFS-11124) Report blockIds of internal blocks for EC files in Fsck

2017-01-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824877#comment-15824877
 ] 

Hadoop QA commented on HDFS-11124:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 20s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11124 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12847709/HDFS-11124.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9dd818f32104 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cf69557 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18187/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18187/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18187/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Report blockIds of internal blocks for EC files in Fsck
> ---
>
> Key: HDFS-11124
> URL: https://issues.apache.org/jira/browse/HDFS-11124
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: 

[jira] [Commented] (HDFS-11337) [branch-2] Add instrumentation hooks around Datanode disk IO

2017-01-16 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824863#comment-15824863
 ] 

Arpit Agarwal commented on HDFS-11337:
--

All test failures are unrelated. The tests passed locally.

> [branch-2] Add instrumentation hooks around Datanode disk IO
> 
>
> Key: HDFS-11337
> URL: https://issues.apache.org/jira/browse/HDFS-11337
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Arpit Agarwal
> Attachments: HDFS-11337-branch-2.01.patch, 
> HDFS-11337-branch-2.02.patch, HDFS-11337-branch-2.03.patch, 
> HDFS-11337-branch-2.04.patch, HDFS-11337-branch-2.05.patch, 
> HDFS-11337-branch-2.06.patch
>
>
> Cloned from HDFS-10958 to verify the branch-2 backport.






[jira] [Commented] (HDFS-11337) [branch-2] Add instrumentation hooks around Datanode disk IO

2017-01-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824838#comment-15824838
 ] 

Hadoop QA commented on HDFS-11337:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
22s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
7s{color} | {color:green} branch-2 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
17s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
49s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
50s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
35s{color} | {color:green} branch-2 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
28s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
30s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
53s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 12s{color} | {color:orange} root: The patch generated 5 new + 1294 unchanged 
- 15 fixed = 1299 total (was 1309) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
55s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 40s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}247m 

[jira] [Updated] (HDFS-10860) Switch HttpFS from Tomcat to Jetty

2017-01-16 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10860:
--
Status: Open  (was: Patch Available)

Wait for HADOOP-13992

> Switch HttpFS from Tomcat to Jetty
> --
>
> Key: HDFS-10860
> URL: https://issues.apache.org/jira/browse/HDFS-10860
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: httpfs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HDFS-10860.001.patch, HDFS-10860.002.patch, 
> HDFS-10860.003.patch, HDFS-10860.004.patch, HDFS-10860.005.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have to change client code that much. It would 
> require more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.






[jira] [Updated] (HDFS-11124) Report blockIds of internal blocks for EC files in Fsck

2017-01-16 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-11124:

Attachment: (was: HDFS-11124.1.patch)

> Report blockIds of internal blocks for EC files in Fsck
> ---
>
> Key: HDFS-11124
> URL: https://issues.apache.org/jira/browse/HDFS-11124
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11124.1.patch
>
>
> At the moment, when we run fsck on an EC file which has corrupt blocks and 
> missing blocks, the result of fsck is like this:
> {quote}
> /data/striped 393216 bytes, erasure-coded: policy=RS-DEFAULT-6-3-64k, 1 
> block(s): 
> /data/striped: CORRUPT blockpool BP-1204772930-172.16.165.209-1478761131832 
> block blk_-9223372036854775792
>  CORRUPT 1 blocks of total size 393216 B
> 0. BP-1204772930-172.16.165.209-1478761131832:blk_-9223372036854775792_1001 
> len=393216 Live_repl=4  
> [DatanodeInfoWithStorage[127.0.0.1:61617,DS-bcfebe1f-ff54-4d57-9258-ff5bdfde01b5,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:61601,DS-9abf64d0-bb6b-434c-8c5e-de8e3b278f91,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:61596,DS-62698e61-c13f-44f2-9da5-614945960221,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:61605,DS-bbce6708-16fe-44ca-9f1c-506cf00f7e0d,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:61592,DS-9cdd4afd-2dc8-40da-8805-09712e2afcc4,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:61621,DS-f2a72d28-c880-4ffe-a70f-0f403e374504,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:61629,DS-fa6ac558-2c38-41fe-9ef8-222b3f6b2b3c,DISK](LIVE)]
> {quote}
> It would be useful for admins if fsck also reported the blockIds of the 
> internal blocks.
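The internal block ids fsck would report are derivable from the block group id shown above. In HDFS's striped layout the group id reserves its low-order bits for the index within the group, so internal block i is simply `groupId + i`; treat this encoding as an assumption of the sketch below rather than the exact HDFS definition.

```java
// Sketch of how internal block ids relate to a striped block group id,
// assuming the group id's low bits are zero and encode the block index.
class EcBlockIds {
    static long internalBlockId(long blockGroupId, int indexInGroup) {
        return blockGroupId + indexInGroup;
    }
}
```

For the group id in the fsck output above (`blk_-9223372036854775792`), internal block 3 would have id `-9223372036854775789` under this encoding.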






[jira] [Updated] (HDFS-11124) Report blockIds of internal blocks for EC files in Fsck

2017-01-16 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-11124:

Attachment: HDFS-11124.1.patch

> Report blockIds of internal blocks for EC files in Fsck
> ---
>
> Key: HDFS-11124
> URL: https://issues.apache.org/jira/browse/HDFS-11124
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11124.1.patch
>
>
> At the moment, when we run fsck on an EC file which has corrupt blocks and 
> missing blocks, the result of fsck is like this:
> {quote}
> /data/striped 393216 bytes, erasure-coded: policy=RS-DEFAULT-6-3-64k, 1 
> block(s): 
> /data/striped: CORRUPT blockpool BP-1204772930-172.16.165.209-1478761131832 
> block blk_-9223372036854775792
>  CORRUPT 1 blocks of total size 393216 B
> 0. BP-1204772930-172.16.165.209-1478761131832:blk_-9223372036854775792_1001 
> len=393216 Live_repl=4  
> [DatanodeInfoWithStorage[127.0.0.1:61617,DS-bcfebe1f-ff54-4d57-9258-ff5bdfde01b5,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:61601,DS-9abf64d0-bb6b-434c-8c5e-de8e3b278f91,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:61596,DS-62698e61-c13f-44f2-9da5-614945960221,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:61605,DS-bbce6708-16fe-44ca-9f1c-506cf00f7e0d,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:61592,DS-9cdd4afd-2dc8-40da-8805-09712e2afcc4,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:61621,DS-f2a72d28-c880-4ffe-a70f-0f403e374504,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:61629,DS-fa6ac558-2c38-41fe-9ef8-222b3f6b2b3c,DISK](LIVE)]
> {quote}
> It would be useful for admins if fsck also reported the blockIds of the 
> internal blocks.






[jira] [Updated] (HDFS-11274) Datanode should only check the failed volume upon IO errors

2017-01-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11274:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

The branch-2 patch has a dependency on a couple of other issues. Will run 
test-patch locally.

> Datanode should only check the failed volume upon IO errors 
> 
>
> Key: HDFS-11274
> URL: https://issues.apache.org/jira/browse/HDFS-11274
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11274.01.patch, HDFS-11274.02.patch, 
> HDFS-11274.03.patch, HDFS-11274.04.patch, HDFS-11274.05.patch, 
> HDFS-11274.06.patch, HDFS-11274-branch-2.01.patch
>
>
> This is a performance improvement that is possible after HDFS-11182. The goal 
> is to trigger async volume check with throttling only on suspected volume 
> upon datanode file IO errors. 
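The "throttling" part can be illustrated with a per-volume timestamp guard: schedule a check for the suspected volume only if it has not been checked recently. A hedged sketch, not the actual DatasetVolumeChecker API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of "check only the suspected volume, with throttling": skip a
// volume check if one ran within the last minGapMs, to avoid check storms
// when many IO errors hit the same failing disk at once.
class VolumeCheckThrottle {
    private final long minGapMs;
    private final Map<String, Long> lastCheckMs = new HashMap<>();

    VolumeCheckThrottle(long minGapMs) {
        this.minGapMs = minGapMs;
    }

    /** Returns true if a check of this volume should be scheduled now. */
    synchronized boolean shouldCheck(String volume, long nowMs) {
        Long last = lastCheckMs.get(volume);
        if (last != null && nowMs - last < minGapMs) {
            return false;  // checked recently; throttle
        }
        lastCheckMs.put(volume, nowMs);
        return true;
    }
}
```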






[jira] [Commented] (HDFS-11274) Datanode should only check the failed volume upon IO errors

2017-01-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824739#comment-15824739
 ] 

Hadoop QA commented on HDFS-11274:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HDFS-11274 does not apply to branch-2. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11274 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12847702/HDFS-11274-branch-2.01.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18186/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Datanode should only check the failed volume upon IO errors 
> 
>
> Key: HDFS-11274
> URL: https://issues.apache.org/jira/browse/HDFS-11274
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11274.01.patch, HDFS-11274.02.patch, 
> HDFS-11274.03.patch, HDFS-11274.04.patch, HDFS-11274.05.patch, 
> HDFS-11274.06.patch, HDFS-11274-branch-2.01.patch
>
>
> This is a performance improvement that is possible after HDFS-11182. The goal 
> is to trigger async volume check with throttling only on suspected volume 
> upon datanode file IO errors. 






[jira] [Updated] (HDFS-11274) Datanode should only check the failed volume upon IO errors

2017-01-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11274:
-
Status: Patch Available  (was: Reopened)

> Datanode should only check the failed volume upon IO errors 
> 
>
> Key: HDFS-11274
> URL: https://issues.apache.org/jira/browse/HDFS-11274
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11274.01.patch, HDFS-11274.02.patch, 
> HDFS-11274.03.patch, HDFS-11274.04.patch, HDFS-11274.05.patch, 
> HDFS-11274.06.patch, HDFS-11274-branch-2.01.patch
>
>
> This is a performance improvement that is possible after HDFS-11182. The goal 
> is to trigger async volume check with throttling only on suspected volume 
> upon datanode file IO errors. 






[jira] [Updated] (HDFS-11274) Datanode should only check the failed volume upon IO errors

2017-01-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11274:
-
Attachment: HDFS-11274-branch-2.01.patch

Attaching branch-2 backport patch. The conflicts were straightforward, mainly 
requiring some fixes for Java 7.

> Datanode should only check the failed volume upon IO errors 
> 
>
> Key: HDFS-11274
> URL: https://issues.apache.org/jira/browse/HDFS-11274
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11274.01.patch, HDFS-11274.02.patch, 
> HDFS-11274.03.patch, HDFS-11274.04.patch, HDFS-11274.05.patch, 
> HDFS-11274.06.patch, HDFS-11274-branch-2.01.patch
>
>
> This is a performance improvement that is possible after HDFS-11182. The goal 
> is to trigger async volume check with throttling only on suspected volume 
> upon datanode file IO errors. 






[jira] [Reopened] (HDFS-11274) Datanode should only check the failed volume upon IO errors

2017-01-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HDFS-11274:
--

Reopening to run Jenkins against the branch-2 patch.

> Datanode should only check the failed volume upon IO errors 
> 
>
> Key: HDFS-11274
> URL: https://issues.apache.org/jira/browse/HDFS-11274
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11274.01.patch, HDFS-11274.02.patch, 
> HDFS-11274.03.patch, HDFS-11274.04.patch, HDFS-11274.05.patch, 
> HDFS-11274.06.patch
>
>
> This is a performance improvement that is possible after HDFS-11182. The goal 
> is to trigger async volume check with throttling only on suspected volume 
> upon datanode file IO errors. 






[jira] [Commented] (HDFS-11307) The rpc to portmap service for NFS has hardcoded timeout.

2017-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824719#comment-15824719
 ] 

Hudson commented on HDFS-11307:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11121 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11121/])
HDFS-11307. The rpc to portmap service for NFS has hardcoded timeout. 
(jitendra: rev d1d0b3e1fd593d590aaf2e3db8f730a296b20aa1)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount/RpcProgramMountd.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/TestMountd.java
* (edit) 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java
* (edit) 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleUdpClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/conf/NfsConfigKeys.java


> The rpc to portmap service for NFS has hardcoded timeout. 
> --
>
> Key: HDFS-11307
> URL: https://issues.apache.org/jira/browse/HDFS-11307
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jitendra Nath Pandey
>Assignee: Mukul Kumar Singh
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-11307.001.patch, HDFS-11307.002.patch, 
> HDFS-11307.003.patch
>
>
> The NFS service makes an RPC call to the portmap service, but the timeout is 
> hardcoded. Tests on slow virtual machines sometimes fail due to this timeout. 
> We should make the timeout configurable, keeping the current value as the default.
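The change amounts to reading the timeout from configuration with the old hardcoded value as the default. In the sketch below, `java.util.Properties` stands in for Hadoop's `Configuration`, and both the key name and the default value are illustrative assumptions, not the actual names HDFS-11307 introduces:

```java
import java.util.Properties;

// Sketch of making a hardcoded timeout configurable while keeping the old
// value as the default. The key and default below are hypothetical.
class PortmapTimeout {
    static final String KEY = "nfs.udp.client.portmap.timeout.millis"; // hypothetical key
    static final int DEFAULT_MS = 500;  // assumed old hardcoded value

    static int timeoutMs(Properties conf) {
        String v = conf.getProperty(KEY);
        return v == null ? DEFAULT_MS : Integer.parseInt(v);
    }
}
```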






[jira] [Updated] (HDFS-11307) The rpc to portmap service for NFS has hardcoded timeout.

2017-01-16 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDFS-11307:

Fix Version/s: 3.0.0-alpha2

> The rpc to portmap service for NFS has hardcoded timeout. 
> --
>
> Key: HDFS-11307
> URL: https://issues.apache.org/jira/browse/HDFS-11307
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jitendra Nath Pandey
>Assignee: Mukul Kumar Singh
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-11307.001.patch, HDFS-11307.002.patch, 
> HDFS-11307.003.patch
>
>
> The NFS service makes an RPC call to the portmap service, but the timeout is 
> hardcoded. Tests on slow virtual machines sometimes fail due to this timeout. 
> We should make the timeout configurable, keeping the current value as the default.






[jira] [Updated] (HDFS-11307) The rpc to portmap service for NFS has hardcoded timeout.

2017-01-16 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDFS-11307:

   Resolution: Fixed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

> The rpc to portmap service for NFS has hardcoded timeout. 
> --
>
> Key: HDFS-11307
> URL: https://issues.apache.org/jira/browse/HDFS-11307
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jitendra Nath Pandey
>Assignee: Mukul Kumar Singh
> Fix For: 2.9.0
>
> Attachments: HDFS-11307.001.patch, HDFS-11307.002.patch, 
> HDFS-11307.003.patch
>
>
> The NFS service makes an RPC call to the portmap service, but the timeout is 
> hardcoded. Tests on slow virtual machines sometimes fail due to this timeout. 
> We should make the timeout configurable, keeping the current value as the default.






[jira] [Commented] (HDFS-11307) The rpc to portmap service for NFS has hardcoded timeout.

2017-01-16 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824704#comment-15824704
 ] 

Jitendra Nath Pandey commented on HDFS-11307:
-

I have committed this to trunk and branch-2. Thanks [~msingh]!

> The rpc to portmap service for NFS has hardcoded timeout. 
> --
>
> Key: HDFS-11307
> URL: https://issues.apache.org/jira/browse/HDFS-11307
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jitendra Nath Pandey
>Assignee: Mukul Kumar Singh
> Fix For: 2.9.0
>
> Attachments: HDFS-11307.001.patch, HDFS-11307.002.patch, 
> HDFS-11307.003.patch
>
>
> The NFS service makes an RPC call to the portmap service, but the timeout is 
> hardcoded. Tests on slow virtual machines sometimes fail due to this timeout. 
> We should make the timeout configurable, keeping the current value as the default.






[jira] [Commented] (HDFS-11339) Support File IO sampling for Datanode IO profiling hooks

2017-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824703#comment-15824703
 ] 

Hudson commented on HDFS-11339:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11120 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11120/])
HDFS-11339. Support File IO sampling for Datanode IO profiling hooks. (arp: rev 
79e939d0b848a50200612c8c471db6bce1c822be)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestHdfsConfigFields.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProfilingFileIoEvents.java


> Support File IO sampling for Datanode IO profiling hooks
> 
>
> Key: HDFS-11339
> URL: https://issues.apache.org/jira/browse/HDFS-11339
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11339.000.patch, HDFS-11339.001.patch, 
> HDFS-11339.002.patch, HDFS-11339.003.patch
>
>
> HDFS-10958 introduces instrumentation hooks around DataNode disk IO and 
> HDFS-10959 adds support for profiling hooks to expose latency statistics. 
> Instead of profiling all the file IO events, we can sample a fraction of the 
> events and profile them. The fraction to be sampled should be configurable.
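The sampling approach described above can be sketched in a few lines. This is an illustrative sketch only; `FileIoSampler` and its members are hypothetical stand-ins, not the actual ProfilingFileIoEvents API:

```java
import java.util.concurrent.ThreadLocalRandom;

// Illustrative sketch: FileIoSampler is a hypothetical stand-in for the
// sampling logic, not the actual ProfilingFileIoEvents implementation.
class FileIoSampler {
    private final double fraction;  // fraction of IO events to profile, in [0, 1]

    FileIoSampler(double configuredFraction) {
        // Clamp a misconfigured fraction into the valid range.
        this.fraction = Math.min(1.0, Math.max(0.0, configuredFraction));
    }

    // Returns true for roughly `fraction` of calls; only those IO events
    // pay the cost of latency profiling.
    boolean shouldProfile() {
        return ThreadLocalRandom.current().nextDouble() < fraction;
    }
}
```

With a fraction of 0.01, for example, only about 1% of file IO events would pass through the profiling hooks, keeping the steady-state overhead low.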






[jira] [Updated] (HDFS-11339) Support File IO sampling for Datanode IO profiling hooks

2017-01-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11339:
-
Component/s: (was: hdfs)
 datanode

> Support File IO sampling for Datanode IO profiling hooks
> 
>
> Key: HDFS-11339
> URL: https://issues.apache.org/jira/browse/HDFS-11339
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11339.000.patch, HDFS-11339.001.patch, 
> HDFS-11339.002.patch, HDFS-11339.003.patch
>
>
> HDFS-10958 introduces instrumentation hooks around DataNode disk IO and 
> HDFS-10959 adds support for profiling hooks to expose latency statistics. 
> Instead of profiling all the file IO events, we can sample a fraction of the 
> events and profile them. The fraction to be sampled should be configurable.






[jira] [Updated] (HDFS-11339) Support File IO sampling for Datanode IO profiling hooks

2017-01-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11339:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks for the contribution [~hanishakoneru].

> Support File IO sampling for Datanode IO profiling hooks
> 
>
> Key: HDFS-11339
> URL: https://issues.apache.org/jira/browse/HDFS-11339
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11339.000.patch, HDFS-11339.001.patch, 
> HDFS-11339.002.patch, HDFS-11339.003.patch
>
>
> HDFS-10958 introduces instrumentation hooks around DataNode disk IO and 
> HDFS-10959 adds support for profiling hooks to expose latency statistics. 
> Instead of profiling all the file IO events, we can sample a fraction of the 
> events and profile them. The fraction to be sampled should be configurable.






[jira] [Commented] (HDFS-11307) The rpc to portmap service for NFS has hardcoded timeout.

2017-01-16 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824673#comment-15824673
 ] 

Jitendra Nath Pandey commented on HDFS-11307:
-

+1 for the latest patch. I will commit it shortly.

> The rpc to portmap service for NFS has hardcoded timeout. 
> --
>
> Key: HDFS-11307
> URL: https://issues.apache.org/jira/browse/HDFS-11307
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jitendra Nath Pandey
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11307.001.patch, HDFS-11307.002.patch, 
> HDFS-11307.003.patch
>
>
> The NFS service makes an RPC call to the portmap service, but the timeout is 
> hardcoded. Tests on slow virtual machines sometimes fail due to this timeout. 
> We should make the timeout configurable, with the current value as the default.






[jira] [Commented] (HDFS-11342) Fix FileInputStream leak in loadLastPartialChunkChecksum

2017-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824657#comment-15824657
 ] 

Hudson commented on HDFS-11342:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #9 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9/])
HDFS-11342. Fix FileInputStream leak in loadLastPartialChunkChecksum. (arp: rev 
a853b4e1b5742faadf7b667b0cebbc0dac001395)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java


> Fix FileInputStream leak in loadLastPartialChunkChecksum
> 
>
> Key: HDFS-11342
> URL: https://issues.apache.org/jira/browse/HDFS-11342
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: Arpit Agarwal
>Assignee: Chen Liang
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11342.001.patch
>
>
> FsVolumeImpl#loadLastPartialChunkChecksum leaks a FileInputStream here:
> {code}
>   @Override
>   public byte[] loadLastPartialChunkChecksum(
>       File blockFile, File metaFile) throws IOException {
>     // readHeader closes the temporary FileInputStream.
>     DataChecksum dcs = BlockMetadataHeader
>         .readHeader(fileIoProvider.getFileInputStream(this, metaFile))
>         .getChecksum();
> {code}






[jira] [Commented] (HDFS-11339) Support File IO sampling for Datanode IO profiling hooks

2017-01-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824643#comment-15824643
 ] 

Hadoop QA commented on HDFS-11339:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11339 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12847669/HDFS-11339.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8c0463c41f3f 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2f8e9b7 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18184/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18184/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18184/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Support File IO sampling for Datanode IO profiling hooks
> 
>
> Key: HDFS-11339
> URL: https://issues.apache.org/jira/browse/HDFS-11339
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: 

[jira] [Updated] (HDFS-11337) [branch-2] Add instrumentation hooks around Datanode disk IO

2017-01-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11337:
-
Attachment: HDFS-11337-branch-2.06.patch

The v6 patch has one change with respect to the v5 patch:

{code}
   private void adjustCrcFilePosition() throws IOException {
     streams.flushDataOut();
-    checksumOut.flush();
+    if (checksumOut != null) {
+      checksumOut.flush();
+    }
{code}

The Jenkins test failure is unrelated. I plan to commit the v6 patch shortly.

> [branch-2] Add instrumentation hooks around Datanode disk IO
> 
>
> Key: HDFS-11337
> URL: https://issues.apache.org/jira/browse/HDFS-11337
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Arpit Agarwal
> Attachments: HDFS-11337-branch-2.01.patch, 
> HDFS-11337-branch-2.02.patch, HDFS-11337-branch-2.03.patch, 
> HDFS-11337-branch-2.04.patch, HDFS-11337-branch-2.05.patch, 
> HDFS-11337-branch-2.06.patch
>
>
> Cloned from HDFS-10958 to verify the branch-2 backport.






[jira] [Updated] (HDFS-11342) Fix FileInputStream leak in loadLastPartialChunkChecksum

2017-01-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11342:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 3.0.0-alpha2
Target Version/s:   (was: 3.0.0-alpha2)
  Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks for the contribution [~vagarychen]!

> Fix FileInputStream leak in loadLastPartialChunkChecksum
> 
>
> Key: HDFS-11342
> URL: https://issues.apache.org/jira/browse/HDFS-11342
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: Arpit Agarwal
>Assignee: Chen Liang
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11342.001.patch
>
>
> FsVolumeImpl#loadLastPartialChunkChecksum leaks a FileInputStream here:
> {code}
>   @Override
>   public byte[] loadLastPartialChunkChecksum(
>       File blockFile, File metaFile) throws IOException {
>     // readHeader closes the temporary FileInputStream.
>     DataChecksum dcs = BlockMetadataHeader
>         .readHeader(fileIoProvider.getFileInputStream(this, metaFile))
>         .getChecksum();
> {code}






[jira] [Commented] (HDFS-11342) Fix FileInputStream leak in loadLastPartialChunkChecksum

2017-01-16 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824622#comment-15824622
 ] 

Chen Liang commented on HDFS-11342:
---

The failed unit tests do not seem to be related, and they passed when run locally.

> Fix FileInputStream leak in loadLastPartialChunkChecksum
> 
>
> Key: HDFS-11342
> URL: https://issues.apache.org/jira/browse/HDFS-11342
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: Arpit Agarwal
>Assignee: Chen Liang
> Attachments: HDFS-11342.001.patch
>
>
> FsVolumeImpl#loadLastPartialChunkChecksum leaks a FileInputStream here:
> {code}
>   @Override
>   public byte[] loadLastPartialChunkChecksum(
>       File blockFile, File metaFile) throws IOException {
>     // readHeader closes the temporary FileInputStream.
>     DataChecksum dcs = BlockMetadataHeader
>         .readHeader(fileIoProvider.getFileInputStream(this, metaFile))
>         .getChecksum();
> {code}






[jira] [Commented] (HDFS-11337) [branch-2] Add instrumentation hooks around Datanode disk IO

2017-01-16 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824617#comment-15824617
 ] 

Jitendra Nath Pandey commented on HDFS-11337:
-

+1, The latest patch looks good to me.

> [branch-2] Add instrumentation hooks around Datanode disk IO
> 
>
> Key: HDFS-11337
> URL: https://issues.apache.org/jira/browse/HDFS-11337
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Arpit Agarwal
> Attachments: HDFS-11337-branch-2.01.patch, 
> HDFS-11337-branch-2.02.patch, HDFS-11337-branch-2.03.patch, 
> HDFS-11337-branch-2.04.patch, HDFS-11337-branch-2.05.patch
>
>
> Cloned from HDFS-10958 to verify the branch-2 backport.






[jira] [Commented] (HDFS-11342) Fix FileInputStream leak in loadLastPartialChunkChecksum

2017-01-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824586#comment-15824586
 ] 

Hadoop QA commented on HDFS-11342:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m  3s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestBlocksScheduledCounter |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11342 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12847660/HDFS-11342.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8218ef6d87d8 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2f8e9b7 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18183/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18183/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18183/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix FileInputStream leak in loadLastPartialChunkChecksum
> 
>
> Key: HDFS-11342
> URL: https://issues.apache.org/jira/browse/HDFS-11342
> 

[jira] [Commented] (HDFS-11339) Support File IO sampling for Datanode IO profiling hooks

2017-01-16 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824535#comment-15824535
 ] 

Arpit Agarwal commented on HDFS-11339:
--

+1 pending Jenkins.

> Support File IO sampling for Datanode IO profiling hooks
> 
>
> Key: HDFS-11339
> URL: https://issues.apache.org/jira/browse/HDFS-11339
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11339.000.patch, HDFS-11339.001.patch, 
> HDFS-11339.002.patch, HDFS-11339.003.patch
>
>
> HDFS-10958 introduces instrumentation hooks around DataNode disk IO and 
> HDFS-10959 adds support for profiling hooks to expose latency statistics. 
> Instead of profiling all the file IO events, we can sample a fraction of the 
> events and profile them. The fraction to be sampled should be configurable.






[jira] [Updated] (HDFS-11339) Support File IO sampling for Datanode IO profiling hooks

2017-01-16 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-11339:
--
Attachment: HDFS-11339.003.patch

Thanks [~arpitagarwal]. Patch v03 fixes the TestHdfsConfigFields test failure.

> Support File IO sampling for Datanode IO profiling hooks
> 
>
> Key: HDFS-11339
> URL: https://issues.apache.org/jira/browse/HDFS-11339
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11339.000.patch, HDFS-11339.001.patch, 
> HDFS-11339.002.patch, HDFS-11339.003.patch
>
>
> HDFS-10958 introduces instrumentation hooks around DataNode disk IO and 
> HDFS-10959 adds support for profiling hooks to expose latency statistics. 
> Instead of profiling all the file IO events, we can sample a fraction of the 
> events and profile them. The fraction to be sampled should be configurable.






[jira] [Commented] (HDFS-11342) Fix FileInputStream leak in loadLastPartialChunkChecksum

2017-01-16 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824446#comment-15824446
 ] 

Arpit Agarwal commented on HDFS-11342:
--

+1 pending Jenkins. Thanks for fixing this [~vagarychen].

> Fix FileInputStream leak in loadLastPartialChunkChecksum
> 
>
> Key: HDFS-11342
> URL: https://issues.apache.org/jira/browse/HDFS-11342
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: Arpit Agarwal
>Assignee: Chen Liang
> Attachments: HDFS-11342.001.patch
>
>
> FsVolumeImpl#loadLastPartialChunkChecksum leaks a FileInputStream here:
> {code}
>   @Override
>   public byte[] loadLastPartialChunkChecksum(
>       File blockFile, File metaFile) throws IOException {
>     // readHeader closes the temporary FileInputStream.
>     DataChecksum dcs = BlockMetadataHeader
>         .readHeader(fileIoProvider.getFileInputStream(this, metaFile))
>         .getChecksum();
> {code}






[jira] [Updated] (HDFS-11342) Fix FileInputStream leak in loadLastPartialChunkChecksum

2017-01-16 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11342:
--
Status: Patch Available  (was: Open)

> Fix FileInputStream leak in loadLastPartialChunkChecksum
> 
>
> Key: HDFS-11342
> URL: https://issues.apache.org/jira/browse/HDFS-11342
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: Arpit Agarwal
>Assignee: Chen Liang
> Attachments: HDFS-11342.001.patch
>
>
> FsVolumeImpl#loadLastPartialChunkChecksum leaks a FileInputStream here:
> {code}
>   @Override
>   public byte[] loadLastPartialChunkChecksum(
>       File blockFile, File metaFile) throws IOException {
>     // readHeader closes the temporary FileInputStream.
>     DataChecksum dcs = BlockMetadataHeader
>         .readHeader(fileIoProvider.getFileInputStream(this, metaFile))
>         .getChecksum();
> {code}






[jira] [Updated] (HDFS-11342) Fix FileInputStream leak in loadLastPartialChunkChecksum

2017-01-16 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11342:
--
Attachment: HDFS-11342.001.patch

Uploaded the v001 patch, which uses try-with-resources to close the stream after 
reading the checksum.
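The fix can be illustrated with a self-contained sketch of the try-with-resources pattern; `TrackedStream` and `readFirstByte` are invented for this example (the actual patch applies the same pattern to the FileInputStream in FsVolumeImpl):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// TrackedStream is a stand-in for the leaked FileInputStream; it records
// whether close() was called so the close guarantee is observable.
class TrackedStream extends ByteArrayInputStream {
    boolean closed = false;

    TrackedStream(byte[] buf) {
        super(buf);
    }

    @Override
    public void close() throws IOException {
        closed = true;
        super.close();
    }
}

class ChecksumReader {
    // The try-with-resources block closes the stream on both normal and
    // exceptional exit, which is the shape of the leak fix.
    static int readFirstByte(TrackedStream in) throws IOException {
        try (InputStream s = in) {
            return s.read();
        }
    }
}
```

Before the fix, an exception thrown while reading the header would leave the stream open; with try-with-resources the runtime closes it unconditionally.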

> Fix FileInputStream leak in loadLastPartialChunkChecksum
> 
>
> Key: HDFS-11342
> URL: https://issues.apache.org/jira/browse/HDFS-11342
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: Arpit Agarwal
>Assignee: Chen Liang
> Attachments: HDFS-11342.001.patch
>
>
> FsVolumeImpl#loadLastPartialChunkChecksum leaks a FileInputStream here:
> {code}
>   @Override
>   public byte[] loadLastPartialChunkChecksum(
>       File blockFile, File metaFile) throws IOException {
>     // readHeader closes the temporary FileInputStream.
>     DataChecksum dcs = BlockMetadataHeader
>         .readHeader(fileIoProvider.getFileInputStream(this, metaFile))
>         .getChecksum();
> {code}






[jira] [Commented] (HDFS-11339) Support File IO sampling for Datanode IO profiling hooks

2017-01-16 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824391#comment-15824391
 ] 

Arpit Agarwal commented on HDFS-11339:
--

The TestHdfsConfigFields failure looks related. The rest are unrelated to this 
change.

> Support File IO sampling for Datanode IO profiling hooks
> 
>
> Key: HDFS-11339
> URL: https://issues.apache.org/jira/browse/HDFS-11339
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11339.000.patch, HDFS-11339.001.patch, 
> HDFS-11339.002.patch
>
>
> HDFS-10958 introduces instrumentation hooks around DataNode disk IO and 
> HDFS-10959 adds support for profiling hooks to expose latency statistics. 
> Instead of profiling all the file IO events, we can sample a fraction of the 
> events and profile them. The fraction to be sampled should be configurable.






[jira] [Commented] (HDFS-10391) Always enable NameNode service RPC port

2017-01-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824345#comment-15824345
 ] 

Hadoop QA commented on HDFS-10391:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 24 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-hdfs-project: The patch generated 8 new + 
1542 unchanged - 17 fixed = 1550 total (was 1559) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
51s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Nullcheck of NameNodeRpcServer.serviceRpcServer at line 502 of value 
previously dereferenced in new 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer(Configuration, 
NameNode)  At NameNodeRpcServer.java:502 of value previously dereferenced in 
new org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer(Configuration, 
NameNode)  At NameNodeRpcServer.java:[line 344] |
| Failed junit tests | hadoop.hdfs.TestPersistBlocks |
|   | hadoop.hdfs.TestDFSUtil |
| Timed out junit tests | org.apache.hadoop.tools.TestJMXGet |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10391 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12847637/HDFS-10391.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ea62d51dd57d 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | 

[jira] [Commented] (HDFS-11303) Hedged read might hang infinitely if read data from all DN failed

2017-01-16 Thread Chen Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824141#comment-15824141
 ] 

Chen Zhang commented on HDFS-11303:
---

Hi Andrew,

I'm a newbie to the Apache community; HDFS-11303 is the first issue I've submitted.
Last week I received a mail saying you had updated this issue. Thanks a lot for 
your attention!
But the issue now seems frozen to me and I can't perform any operation on it. 
Could you point out the current status of this issue and what I should do next?

Thanks a lot

Best
Chen




> Hedged read might hang infinitely if read data from all DN failed 
> --
>
> Key: HDFS-11303
> URL: https://issues.apache.org/jira/browse/HDFS-11303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha1
>Reporter: Chen Zhang
>Assignee: Chen Zhang
> Attachments: HDFS-11303-001.patch
>
>
> Hedged read reads from one DN first and, on timeout, reads from the other DNs 
> simultaneously.
> If the reads from all DNs fail, this bug leaves the future list non-empty (the 
> first timed-out request remains in the list), so the loop hangs infinitely.
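The hang described above boils down to a future-polling loop that never drains its pending list once every read has completed exceptionally. Below is a minimal sketch of the corrected pattern; the class and method names are invented for illustration and this is not the actual HDFS client code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class HedgedReadSketch {
    /** A fake datanode read that either returns its data or fails. */
    public static Callable<String> read(String data, boolean ok) {
        return () -> {
            if (!ok) {
                throw new java.io.IOException("read from " + data + " failed");
            }
            return data;
        };
    }

    /**
     * Waits for the first successful read. The crucial step is removing a
     * future from the pending list even when it completed exceptionally;
     * skipping that removal is what lets the loop spin forever once every
     * datanode read has failed.
     */
    public static String fetchFirstSuccessful(ExecutorService pool,
            List<Callable<String>> reads) {
        CompletionService<String> cs = new ExecutorCompletionService<>(pool);
        List<Future<String>> pending = new ArrayList<>();
        for (Callable<String> r : reads) {
            pending.add(cs.submit(r));
        }
        try {
            while (!pending.isEmpty()) {
                Future<String> done = cs.take(); // blocks until a read finishes
                pending.remove(done);            // remove it even when it failed
                try {
                    return done.get();           // first successful read wins
                } catch (ExecutionException e) {
                    // this datanode read failed; wait for the remaining ones
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return null; // every read failed: terminate instead of hanging
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        System.out.println(fetchFirstSuccessful(pool,
                Arrays.asList(read("dn1", false), read("dn2", false))));
        pool.shutdown();
    }
}
```

Because the loop's termination condition is the pending list becoming empty, removing every completed future, successful or not, is what guarantees progress.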



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10391) Always enable NameNode service RPC port

2017-01-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated HDFS-10391:
-
Attachment: HDFS-10391.002.patch

The 2nd patch fixes the findbugs error, most of the checkstyle errors (in some 
cases the 80-character line limit is hard to keep), and most of the unit test 
errors. However, I intentionally didn't fix TestDFSUtil.testGetNNUris(), as I'm 
not 100% sure about the correct way of handling the nameservices in that test 
(the exact same issue as in NameNode.get/setServiceAddress()).

> Always enable NameNode service RPC port
> ---
>
> Key: HDFS-10391
> URL: https://issues.apache.org/jira/browse/HDFS-10391
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha2
>Reporter: Arpit Agarwal
>Assignee: Gergely Novák
>  Labels: Incompatible
> Attachments: HDFS-10391.001.patch, HDFS-10391.002.patch
>
>
> The NameNode should always be set up with a service RPC port so that it does 
> not have to be explicitly enabled by an administrator.
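For context, today the service RPC endpoint only exists when an administrator configures it explicitly in hdfs-site.xml, along these lines (the host and port below are placeholders):

```xml
<property>
  <name>dfs.namenode.servicerpc-address</name>
  <!-- Dedicated endpoint for DataNode and HA service traffic, kept
       separate from the client RPC port. Placeholder host:port. -->
  <value>namenode.example.com:8040</value>
</property>
```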






[jira] [Commented] (HDFS-11268) Correctly reconstruct erasure coding file from FSImage

2017-01-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15823969#comment-15823969
 ] 

Hadoop QA commented on HDFS-11268:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 25 unchanged - 2 fixed = 28 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 92m 
40s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}116m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11268 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12847610/HDFS-11268-001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux eeb52ac147c9 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2f8e9b7 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18181/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18181/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18181/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Correctly reconstruct erasure coding file from FSImage
> --
>
> Key: HDFS-11268
> URL: https://issues.apache.org/jira/browse/HDFS-11268
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: SammiChen
>

[jira] [Commented] (HDFS-11268) Correctly reconstruct erasure coding file from FSImage

2017-01-16 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15823810#comment-15823810
 ] 

SammiChen commented on HDFS-11268:
--

Hi [~jojochuang], you're right: the EC policy ID is stored in the replication 
factor field. After more investigation, it turns out that the problem is in the 
process of loading the EC policy ID from the FsImage file. I updated the JIRA 
description accordingly.

> Correctly reconstruct erasure coding file from FSImage
> --
>
> Key: HDFS-11268
> URL: https://issues.apache.org/jira/browse/HDFS-11268
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: SammiChen
>Assignee: SammiChen
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11268-001.patch
>
>
> Currently, FSImageFormatProtoBuf records whether the file is striped and 
> saves the file's erasure coding policy ID in the replication field. But later, 
> when the FSImage is loaded to create the namespace, the default system erasure 
> coding policy is used to reconstruct the file's block structure. If the file's 
> erasure coding policy is not the default one, the content of the file cannot 
> be accessed correctly.






[jira] [Updated] (HDFS-11268) Correctly reconstruct erasure coding file from FSImage

2017-01-16 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-11268:
-
Status: Patch Available  (was: Open)

> Correctly reconstruct erasure coding file from FSImage
> --
>
> Key: HDFS-11268
> URL: https://issues.apache.org/jira/browse/HDFS-11268
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: SammiChen
>Assignee: SammiChen
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11268-001.patch
>
>
> Currently, FSImageFormatProtoBuf records whether the file is striped and 
> saves the file's erasure coding policy ID in the replication field. But later, 
> when the FSImage is loaded to create the namespace, the default system erasure 
> coding policy is used to reconstruct the file's block structure. If the file's 
> erasure coding policy is not the default one, the content of the file cannot 
> be accessed correctly.






[jira] [Updated] (HDFS-11268) Correctly reconstruct erasure coding file from FSImage

2017-01-16 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-11268:
-
Attachment: HDFS-11268-001.patch

Initial patch

> Correctly reconstruct erasure coding file from FSImage
> --
>
> Key: HDFS-11268
> URL: https://issues.apache.org/jira/browse/HDFS-11268
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: SammiChen
>Assignee: SammiChen
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11268-001.patch
>
>
> Currently, FSImageFormatProtoBuf records whether the file is striped and 
> saves the file's erasure coding policy ID in the replication field. But later, 
> when the FSImage is loaded to create the namespace, the default system erasure 
> coding policy is used to reconstruct the file's block structure. If the file's 
> erasure coding policy is not the default one, the content of the file cannot 
> be accessed correctly.






[jira] [Updated] (HDFS-11268) Correctly reconstruct erasure coding file from FSImage

2017-01-16 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-11268:
-
Description: Currently, FSImageFormatProtoBuf records whether the file is 
striped and saves the file's erasure coding policy ID in the replication field. 
But later, when the FSImage is loaded to create the namespace, the default 
system erasure coding policy is used to reconstruct the file's block structure. 
If the file's erasure coding policy is not the default one, the content of the 
file cannot be accessed correctly.  
(was: Currently, FSImage only has the information about whether the file is 
striped or not. It doesn't save the erasure coding policy ID. Later, when the 
FSImage is loaded to create the name space, the default system ec policy is 
used as the file's ec policy. If the ec policy on the file is not the default 
ec policy, then the content of the file cannot be accessed correctly.)
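The write/load asymmetry described above can be sketched as a pack/load pair: the serializer reuses the replication slot to carry the EC policy ID for striped files, so the loader must read the ID back from that slot instead of substituting the system default. The field layout below is purely illustrative, not the real FSImage protobuf schema:

```java
public class EcPolicyFieldSketch {
    /**
     * Serializer side: for a striped file the 16-bit "replication" slot
     * carries the erasure coding policy id; for a contiguous file it
     * carries the replica count. (Illustrative, not the actual schema.)
     */
    public static short encodeReplicationField(boolean striped, int ecPolicyId,
            int replication) {
        return (short) (striped ? ecPolicyId : replication);
    }

    /**
     * Loader side: the bug was to ignore the stored value for striped files
     * and fall back to the default system policy; the fix is to trust the
     * id the image actually recorded. Contiguous files have no EC policy,
     * signalled here with -1.
     */
    public static int loadEcPolicyId(boolean striped, short replicationField) {
        return striped ? replicationField : -1;
    }

    public static void main(String[] args) {
        short field = encodeReplicationField(true, 5, 0); // striped, policy id 5
        System.out.println(loadEcPolicyId(true, field));  // reads back 5
    }
}
```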

> Correctly reconstruct erasure coding file from FSImage
> --
>
> Key: HDFS-11268
> URL: https://issues.apache.org/jira/browse/HDFS-11268
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: SammiChen
>Assignee: SammiChen
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
>
> Currently, FSImageFormatProtoBuf has the information about whether the file 
> is striped or not and saved file's erasure coding policy ID in replication 
> field. But later, when FSImage is loaded to create the name space, the 
> default system erasure coding policy is used to reconstruct file's block 
> structure.  In case if the erasure coding policy of file is not the default 
> erasure coding policy, the content of the file cannot be accessed correctly. 






[jira] [Updated] (HDFS-11268) Correctly reconstruct erasure coding file from FSImage

2017-01-16 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-11268:
-
Summary: Correctly reconstruct erasure coding file from FSImage  (was: 
Persist erasure coding policy ID in FSImage)

> Correctly reconstruct erasure coding file from FSImage
> --
>
> Key: HDFS-11268
> URL: https://issues.apache.org/jira/browse/HDFS-11268
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: SammiChen
>Assignee: SammiChen
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
>
> Currently, FSImage only has the information about whether the file is striped 
> or not. It doesn't save the erasure coding policy ID. Later, when the FSImage 
> is loaded to create the name space, the default system ec policy is used as 
> the file's ec policy. If the ec policy on the file is not the default ec 
> policy, then the content of the file cannot be accessed correctly.






[jira] [Commented] (HDFS-10506) OIV's ReverseXML processor cannot reconstruct some snapshot details

2017-01-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15823778#comment-15823778
 ] 

Hadoop QA commented on HDFS-10506:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 24s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestMaintenanceState |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10506 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12847590/HDFS-10506.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux dfbf97f811ee 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2f8e9b7 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18180/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18180/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18180/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> OIV's ReverseXML processor cannot reconstruct some snapshot details
> ---
>
> Key: HDFS-10506
> URL: https://issues.apache.org/jira/browse/HDFS-10506
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.0
>Reporter: Colin P. McCabe
>Assignee: 

[jira] [Updated] (HDFS-11124) Report blockIds of internal blocks for EC files in Fsck

2017-01-16 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-11124:

Status: Patch Available  (was: Open)

> Report blockIds of internal blocks for EC files in Fsck
> ---
>
> Key: HDFS-11124
> URL: https://issues.apache.org/jira/browse/HDFS-11124
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11124.1.patch
>
>
> At the moment, when we do fsck for an EC file which has corrupt blocks and 
> missing blocks, the result of fsck is like this:
> {quote}
> /data/striped 393216 bytes, erasure-coded: policy=RS-DEFAULT-6-3-64k, 1 
> block(s): 
> /data/striped: CORRUPT blockpool BP-1204772930-172.16.165.209-1478761131832 
> block blk_-9223372036854775792
>  CORRUPT 1 blocks of total size 393216 B
> 0. BP-1204772930-172.16.165.209-1478761131832:blk_-9223372036854775792_1001 
> len=393216 Live_repl=4  
> [DatanodeInfoWithStorage[127.0.0.1:61617,DS-bcfebe1f-ff54-4d57-9258-ff5bdfde01b5,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:61601,DS-9abf64d0-bb6b-434c-8c5e-de8e3b278f91,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:61596,DS-62698e61-c13f-44f2-9da5-614945960221,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:61605,DS-bbce6708-16fe-44ca-9f1c-506cf00f7e0d,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:61592,DS-9cdd4afd-2dc8-40da-8805-09712e2afcc4,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:61621,DS-f2a72d28-c880-4ffe-a70f-0f403e374504,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:61629,DS-fa6ac558-2c38-41fe-9ef8-222b3f6b2b3c,DISK](LIVE)]
> {quote}
> It would be useful for admins if fsck reported the blockIds of the internal 
> blocks.
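Those internal-block ids are cheap to derive: a striped block group id reserves its low four bits for the internal block index, so internal block i (data blocks first, then parity) is simply groupId + i. Below is a hedged sketch of what such a report could compute for the RS-6-3 group in the example output; the class and helper names are invented:

```java
public class EcInternalBlockIds {
    /**
     * A striped block group id has its low 4 bits zeroed, leaving room for
     * the internal block index, so internal block i is just groupId + i.
     */
    public static long internalBlockId(long groupId, int index) {
        return groupId + index;
    }

    public static void main(String[] args) {
        long groupId = -9223372036854775792L; // block group from the fsck output
        int dataBlocks = 6, parityBlocks = 3; // RS-DEFAULT-6-3
        for (int i = 0; i < dataBlocks + parityBlocks; i++) {
            String kind = i < dataBlocks ? "data" : "parity";
            System.out.println("blk_" + internalBlockId(groupId, i)
                    + " (" + kind + ")");
        }
    }
}
```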






[jira] [Updated] (HDFS-11124) Report blockIds of internal blocks for EC files in Fsck

2017-01-16 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-11124:

Status: Open  (was: Patch Available)

> Report blockIds of internal blocks for EC files in Fsck
> ---
>
> Key: HDFS-11124
> URL: https://issues.apache.org/jira/browse/HDFS-11124
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11124.1.patch
>
>
> At the moment, when we do fsck for an EC file which has corrupt blocks and 
> missing blocks, the result of fsck is like this:
> {quote}
> /data/striped 393216 bytes, erasure-coded: policy=RS-DEFAULT-6-3-64k, 1 
> block(s): 
> /data/striped: CORRUPT blockpool BP-1204772930-172.16.165.209-1478761131832 
> block blk_-9223372036854775792
>  CORRUPT 1 blocks of total size 393216 B
> 0. BP-1204772930-172.16.165.209-1478761131832:blk_-9223372036854775792_1001 
> len=393216 Live_repl=4  
> [DatanodeInfoWithStorage[127.0.0.1:61617,DS-bcfebe1f-ff54-4d57-9258-ff5bdfde01b5,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:61601,DS-9abf64d0-bb6b-434c-8c5e-de8e3b278f91,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:61596,DS-62698e61-c13f-44f2-9da5-614945960221,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:61605,DS-bbce6708-16fe-44ca-9f1c-506cf00f7e0d,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:61592,DS-9cdd4afd-2dc8-40da-8805-09712e2afcc4,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:61621,DS-f2a72d28-c880-4ffe-a70f-0f403e374504,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:61629,DS-fa6ac558-2c38-41fe-9ef8-222b3f6b2b3c,DISK](LIVE)]
> {quote}
> It would be useful for admins if fsck reported the blockIds of the internal 
> blocks.






[jira] [Commented] (HDFS-11121) Add assertions to BlockInfo#addStorage to protect from breaking reportedBlock-blockGroup mapping

2017-01-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15823678#comment-15823678
 ] 

Hadoop QA commented on HDFS-11121:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 10 unchanged - 3 fixed = 10 total (was 13) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 91m 
36s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11121 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12847583/HDFS-11121.3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 21f0443dc436 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2f8e9b7 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18179/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18179/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add assertions to BlockInfo#addStorage to protect from breaking 
> reportedBlock-blockGroup mapping
> 
>
> Key: HDFS-11121
> URL: https://issues.apache.org/jira/browse/HDFS-11121
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Critical

[jira] [Commented] (HDFS-10506) OIV's ReverseXML processor cannot reconstruct some snapshot details

2017-01-16 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15823628#comment-15823628
 ] 

Akira Ajisaka commented on HDFS-10506:
--

01 patch
* The output of OIV XML processor now contains the missing fields.
* ReverseXML processor can reconstruct the fields.
* Added the fields to the fsimage in TestOfflineImageViewer.

> OIV's ReverseXML processor cannot reconstruct some snapshot details
> ---
>
> Key: HDFS-10506
> URL: https://issues.apache.org/jira/browse/HDFS-10506
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.0
>Reporter: Colin P. McCabe
>Assignee: Akira Ajisaka
> Attachments: HDFS-10506.01.patch
>
>
> OIV's ReverseXML processor cannot reconstruct some snapshot details.  
> Specifically,  should contain a  and  field, 
> but does not.   should contain a  field.  OIV also 
> needs to be changed to emit these fields into the XML (they are currently 
> missing).
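The missing-field bug above generalizes to any image-to-XML-to-image pipeline: the ReverseXML pass can only rebuild what the XML writer emitted, so an omitted field is lost silently rather than failing loudly. A minimal round-trip sketch (plain maps and hypothetical field names, not actual OIV code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Illustrative sketch only: models the invariant that a lossless
 * image -> XML -> image round trip requires the XML writer to emit
 * every field the reverse processor reads back.
 */
public class RoundTripSketch {

    // Forward pass: serialize a record to "XML" fields. When emitAll is
    // false, it models the bug: one field is never written out.
    static Map<String, String> toXmlFields(Map<String, String> record, boolean emitAll) {
        Map<String, String> xml = new LinkedHashMap<>();
        xml.put("id", record.get("id"));
        if (emitAll) {
            xml.put("snapshotRoot", record.get("snapshotRoot")); // the "missing" field
        }
        return xml;
    }

    // Reverse pass: can only reconstruct what the XML actually contains.
    static Map<String, String> reconstruct(Map<String, String> xml) {
        return new LinkedHashMap<>(xml);
    }

    public static void main(String[] args) {
        Map<String, String> record = new LinkedHashMap<>();
        record.put("id", "s1");
        record.put("snapshotRoot", "/data");

        Map<String, String> lossy = reconstruct(toXmlFields(record, false));
        Map<String, String> fixed = reconstruct(toXmlFields(record, true));

        System.out.println(record.equals(lossy)); // false: field silently lost
        System.out.println(record.equals(fixed)); // true: round trip is lossless
    }
}
```

The patch's two-sided fix matches this shape: teach the XML processor to emit the fields and the ReverseXML processor to read them back.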



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10506) OIV's ReverseXML processor cannot reconstruct some snapshot details

2017-01-16 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-10506:
-
Target Version/s: 2.9.0
  Status: Patch Available  (was: Open)







[jira] [Updated] (HDFS-10506) OIV's ReverseXML processor cannot reconstruct some snapshot details

2017-01-16 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-10506:
-
Attachment: HDFS-10506.01.patch







[jira] [Commented] (HDFS-11335) Remove HdfsClientConfigKeys.DFS_CLIENT_SLOW_IO_WARNING_THRESHOLD_KEY usage from DNConf

2017-01-16 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15823595#comment-15823595
 ] 

Lei (Eddy) Xu commented on HDFS-11335:
--

[~manojg]

Should we also remove {{DFS_CLIENT_SLOW_IO_WARNING_THRESHOLD_KEY}} from 
{{DFSConfigKeys}}?

> Remove HdfsClientConfigKeys.DFS_CLIENT_SLOW_IO_WARNING_THRESHOLD_KEY usage 
> from DNConf
> --
>
> Key: HDFS-11335
> URL: https://issues.apache.org/jira/browse/HDFS-11335
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11335.01.patch
>
>
> DataNode configuration {{DNConf}} reads in 
> {{HdfsClientConfigKeys.DFS_CLIENT_SLOW_IO_WARNING_THRESHOLD_KEY}} 
> (_dfs.client.slow.io.warning.threshold.ms_), which is redundant because this 
> threshold is needed only by {{DfsClientConf}}. Better to remove the unwanted 
> usage from DNConf. 
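As a rough illustration of the cleanup (plain maps and a hypothetical holder class, not the real Hadoop {{Configuration}}/{{DNConf}} API): the client-only threshold stays readable where it is actually consumed, while the DataNode-side conf stops touching the key entirely:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative sketch only: a client-only threshold belongs to the
 * client-side conf holder; reading it again on the DataNode side is
 * dead configuration, which this kind of patch removes.
 */
public class ConfScopeSketch {
    static final String CLIENT_SLOW_IO_KEY = "dfs.client.slow.io.warning.threshold.ms";

    // Client-side holder: the only consumer of the threshold.
    // 30000 ms stands in for a default value and is an assumption here.
    static long clientSlowIoThresholdMs(Map<String, String> conf) {
        return Long.parseLong(conf.getOrDefault(CLIENT_SLOW_IO_KEY, "30000"));
    }

    // DataNode-side holder after the cleanup: the key is never read.
    static Map<String, String> dnConf(Map<String, String> conf) {
        Map<String, String> dn = new HashMap<>(conf);
        dn.remove(CLIENT_SLOW_IO_KEY); // DN code path no longer touches it
        return dn;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put(CLIENT_SLOW_IO_KEY, "45000");
        System.out.println(clientSlowIoThresholdMs(conf));                  // 45000
        System.out.println(dnConf(conf).containsKey(CLIENT_SLOW_IO_KEY));   // false
    }
}
```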


