[jira] [Updated] (HDFS-7621) Erasure Coding: update the Balancer/Mover data migration logic

2015-05-30 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-7621:

Attachment: HDFS-7621.007.patch

The 007 patch addresses the issues Zhe Zhang commented on and fixes an indices 
bug in BlockManager.addBlock. Thanks [~zhz] for the careful reviews.
It would be better if a trunk committer reviewed this, since most of the 
changes are to trunk code.

> Erasure Coding: update the Balancer/Mover data migration logic
> --
>
> Key: HDFS-7621
> URL: https://issues.apache.org/jira/browse/HDFS-7621
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: Walter Su
>  Labels: HDFS-7285
> Attachments: HDFS-7621.001.patch, HDFS-7621.002.patch, 
> HDFS-7621.003.patch, HDFS-7621.004.patch, HDFS-7621.005.patch, 
> HDFS-7621.006.patch, HDFS-7621.007.patch
>
>
> Currently the Balancer/Mover only considers the distribution of replicas of 
> the same block during data migration: the migration cannot decrease the 
> number of racks. With EC the Balancer and Mover should also take into account 
> the distribution of blocks belonging to the same block group.
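The constraint described above can be illustrated with a small sketch. This is a hypothetical model, not the actual Balancer/Mover code: the class name, the `groupBlocksOnNode` map, and the method signature are all illustrative. The idea is that an EC-aware Balancer must reject a target datanode that already holds a block of the same block group, since colocating two blocks of one group shrinks the set of nodes (and potentially racks) the group spans.

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch (not the real Balancer API): before scheduling a move,
// an EC-aware Balancer would reject targets that already store another block
// of the same block group.
public class EcMoveCheck {

    /**
     * @param targetNode        candidate datanode for the moved block
     * @param blockGroupId      id of the block group the moved block belongs to
     * @param groupBlocksOnNode map from datanode name to the block-group ids it stores
     * @return true if the move keeps at most one block of the group on the target
     */
    public static boolean isGoodEcTarget(String targetNode,
                                         long blockGroupId,
                                         Map<String, Set<Long>> groupBlocksOnNode) {
        Set<Long> groups = groupBlocksOnNode.get(targetNode);
        // Placing a second block of the same group on one node would reduce
        // the number of nodes (and possibly racks) the group spans.
        return groups == null || !groups.contains(blockGroupId);
    }
}
```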



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8489) Subclass BlockInfo to represent contiguous blocks

2015-05-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565877#comment-14565877
 ] 

Hadoop QA commented on HDFS-8489:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 40s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 10 new or modified test files. |
| {color:green}+1{color} | javac |   7m 32s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 36s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 12s | The applied patch generated  2 
new checkstyle issues (total was 692, now 689). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 15s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 163m 22s | Tests failed in hadoop-hdfs. |
| | | 209m 25s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestEncryptionZonesWithKMS |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12736312/HDFS-8489.04.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / eb6bf91 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11171/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11171/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11171/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11171/console |


This message was automatically generated.

> Subclass BlockInfo to represent contiguous blocks
> -
>
> Key: HDFS-8489
> URL: https://issues.apache.org/jira/browse/HDFS-8489
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8489.00.patch, HDFS-8489.01.patch, 
> HDFS-8489.02.patch, HDFS-8489.03.patch, HDFS-8489.04.patch
>
>
> As the second step of the cleanup, we should make {{BlockInfo}} an abstract class 
> and merge the subclass {{BlockInfoContiguous}} from HDFS-7285 into trunk. The 
> patch should clearly separate where to use the abstract class versus the 
> subclass.





[jira] [Updated] (HDFS-8460) Erasure Coding: stateful read result doesn't match data occasionally because of flawed test

2015-05-30 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8460:

Attachment: HDFS-8460-HDFS-7285.002.patch

> Erasure Coding: stateful read result doesn't match data occasionally because 
> of flawed test
> ---
>
> Key: HDFS-8460
> URL: https://issues.apache.org/jira/browse/HDFS-8460
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Yi Liu
>Assignee: Walter Su
> Attachments: HDFS-8460-HDFS-7285.001.patch, 
> HDFS-8460-HDFS-7285.002.patch
>
>
> I found this issue in TestDFSStripedInputStream: {{testStatefulRead}} 
> occasionally fails, showing that the read result doesn't match the data written.





[jira] [Created] (HDFS-8500) Improve refreshNode in DFSAdmin to support hdfs federation

2015-05-30 Thread Zhang Wei (JIRA)
Zhang Wei created HDFS-8500:
---

 Summary: Improve refreshNode in DFSAdmin to support hdfs federation
 Key: HDFS-8500
 URL: https://issues.apache.org/jira/browse/HDFS-8500
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: federation
Affects Versions: 2.7.0
Reporter: Zhang Wei
Priority: Minor


In an HDFS federated cluster, we find that the dfsadmin command "hdfs dfsadmin 
-refreshNodes" can only refresh the nameservice (or namenode) that is configured 
in fs.defaultFS. The other nameservices configured in hdfs-site.xml can't be 
refreshed unless we change the fs.defaultFS value and run the command again. I 
think we need additional parameters like the following to provide a convenient way: 
[-a] refresh all namenodes configured in hdfs-site.xml.
[-ns ] specify a nameservice to refresh.
[host:ipc_port] specify a namenode to refresh.

Please give your opinions. Thanks!





[jira] [Updated] (HDFS-8500) Improve refreshNodes in DFSAdmin to support hdfs federation

2015-05-30 Thread Zhang Wei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhang Wei updated HDFS-8500:

Summary: Improve refreshNodes in DFSAdmin to support hdfs federation  (was: 
Improve refreshNode in DFSAdmin to support hdfs federation)

> Improve refreshNodes in DFSAdmin to support hdfs federation
> ---
>
> Key: HDFS-8500
> URL: https://issues.apache.org/jira/browse/HDFS-8500
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Affects Versions: 2.7.0
>Reporter: Zhang Wei
>Priority: Minor
>  Labels: decommission, federation, refreshnodes
>
> In an HDFS federated cluster, we find that the dfsadmin command "hdfs dfsadmin 
> -refreshNodes" can only refresh the nameservice (or namenode) that is 
> configured in fs.defaultFS. The other nameservices configured in 
> hdfs-site.xml can't be refreshed unless we change the fs.defaultFS value and 
> run the command again. I think we need additional parameters like the following 
> to provide a convenient way: 
> [-a] refresh all namenodes configured in hdfs-site.xml.
> [-ns ] specify a nameservice to refresh.
> [host:ipc_port] specify a namenode to refresh.
> Please give your opinions. Thanks!





[jira] [Assigned] (HDFS-8500) Improve refreshNodes in DFSAdmin to support hdfs federation

2015-05-30 Thread Zhang Wei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhang Wei reassigned HDFS-8500:
---

Assignee: Zhang Wei

> Improve refreshNodes in DFSAdmin to support hdfs federation
> ---
>
> Key: HDFS-8500
> URL: https://issues.apache.org/jira/browse/HDFS-8500
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Affects Versions: 2.7.0
>Reporter: Zhang Wei
>Assignee: Zhang Wei
>Priority: Minor
>  Labels: decommission, federation, refreshnodes
>
> In an HDFS federated cluster, we find that the dfsadmin command "hdfs dfsadmin 
> -refreshNodes" can only refresh the nameservice (or namenode) that is 
> configured in fs.defaultFS. The other nameservices configured in 
> hdfs-site.xml can't be refreshed unless we change the fs.defaultFS value and 
> run the command again. I think we need additional parameters like the following 
> to provide a convenient way: 
> [-a] refresh all namenodes configured in hdfs-site.xml.
> [-ns ] specify a nameservice to refresh.
> [host:ipc_port] specify a namenode to refresh.
> Please give your opinions. Thanks!





[jira] [Created] (HDFS-8501) Erasure Coding: Improve memory efficiency of BlockInfoStriped

2015-05-30 Thread Walter Su (JIRA)
Walter Su created HDFS-8501:
---

 Summary: Erasure Coding: Improve memory efficiency of 
BlockInfoStriped
 Key: HDFS-8501
 URL: https://issues.apache.org/jira/browse/HDFS-8501
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su


Erasure Coding: Improve memory efficiency of BlockInfoStriped

Assume we have a BlockInfoStriped:
{noformat}
triplets[] = {s0, s1, s2, s3}
indices[] = {0, 1, 2, 3}
{noformat}

When we run balancer/mover to re-locate replica on s2, firstly it becomes:
{noformat}
triplets[] = {s0, s1, s2, s3, s2}
indices[] = {0, 1, 2, 3, 2}
{noformat}
Then the replica on s1 is removed, finally it becomes:
{noformat}
triplets[] = {s0, s1, null, s3, s2}
indices[] = {0, 1, -1, 3, 2}
{noformat}

The worst case is:
{noformat}
triplets[] = {null, null, null, null, s0, s1, s2, s3}
indices[] = {-1, -1, -1, -1, 0, 1, 2, 3}
{noformat}


We should learn from {{BlockInfoContiguous.removeStorage(..)}}. When a storage 
is removed, we bring the last item front.
With the improvement, the worst case become:
{noformat}
triplets[] = {s0, s1, s2, s3, null}
indices[] = {0, 1, 2, 3, -1}
{noformat}
We have an empty slot.

Notes:
Assume we copy 4 storage first, then delete 4. Even with the improvement, the 
worst case could be:
{noformat}
triplets[] = {s0, s1, s2, s3, null, null, null, null}
indices[] = {0, 1, 2, 3, -1, -1, -1, -1}
{noformat}
But the Balancer strategy won't move same block/blockGroup twice in a row. So 
this case is very rare.
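The "bring the last item front" compaction described above can be sketched as follows. This is a simplified model with plain arrays, not the actual BlockInfoStriped internals; the class and method names are illustrative, mirroring the behavior of {{BlockInfoContiguous.removeStorage(..)}}.

```java
// Simplified model of the proposed compaction: instead of leaving a null hole
// when a storage is removed, move the last occupied slot into the freed
// position, so empty slots accumulate only at the tail.
public class StripedCompaction {

    public static void removeStorage(Object[] triplets, byte[] indices, int removed) {
        int last = triplets.length - 1;
        while (last > removed && triplets[last] == null) {
            last--;                       // find the last occupied slot
        }
        // Bring the last item front, then clear the vacated tail slot.
        triplets[removed] = triplets[last];
        indices[removed] = indices[last];
        if (last != removed) {
            triplets[last] = null;
            indices[last] = -1;
        } else {
            // Removing the last occupied slot itself: just clear it.
            triplets[removed] = null;
            indices[removed] = -1;
        }
    }
}
```

Running this on the example above (removing the old replica in slot 2 after relocation) turns `{s0, s1, s2, s3, s2'}` into `{s0, s1, s2', s3, null}`, which is exactly the improved worst case with a single empty slot at the tail.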





[jira] [Updated] (HDFS-8501) Erasure Coding: Improve memory efficiency of BlockInfoStriped

2015-05-30 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8501:

Description: 
Erasure Coding: Improve memory efficiency of BlockInfoStriped

Assume we have a BlockInfoStriped:
{noformat}
triplets[] = {s0, s1, s2, s3}
indices[] = {0, 1, 2, 3}
{noformat}

When we run balancer/mover to re-locate replica on s2, firstly it becomes:
{noformat}
triplets[] = {s0, s1, s2, s3, s2}
indices[] = {0, 1, 2, 3, 2}
{noformat}
Then the replica on s2 is removed, finally it becomes:
{noformat}
triplets[] = {s0, s1, null, s3, s2}
indices[] = {0, 1, -1, 3, 2}
{noformat}

The worst case is:
{noformat}
triplets[] = {null, null, null, null, s0, s1, s2, s3}
indices[] = {-1, -1, -1, -1, 0, 1, 2, 3}
{noformat}


We should learn from {{BlockInfoContiguous.removeStorage(..)}}. When a storage 
is removed, we bring the last item front.
With the improvement, the worst case become:
{noformat}
triplets[] = {s0, s1, s2, s3, null}
indices[] = {0, 1, 2, 3, -1}
{noformat}
We have an empty slot.

Notes:
Assume we copy 4 storage first, then delete 4. Even with the improvement, the 
worst case could be:
{noformat}
triplets[] = {s0, s1, s2, s3, null, null, null, null}
indices[] = {0, 1, 2, 3, -1, -1, -1, -1}
{noformat}
But the Balancer strategy won't move same block/blockGroup twice in a row. So 
this case is very rare.

  was:
Erasure Coding: Improve memory efficiency of BlockInfoStriped

Assume we have a BlockInfoStriped:
{noformat}
triplets[] = {s0, s1, s2, s3}
indices[] = {0, 1, 2, 3}
{noformat}

When we run balancer/mover to re-locate replica on s2, firstly it becomes:
{noformat}
triplets[] = {s0, s1, s2, s3, s2}
indices[] = {0, 1, 2, 3, 2}
{noformat}
Then the replica on s1 is removed, finally it becomes:
{noformat}
triplets[] = {s0, s1, null, s3, s2}
indices[] = {0, 1, -1, 3, 2}
{noformat}

The worst case is:
{noformat}
triplets[] = {null, null, null, null, s0, s1, s2, s3}
indices[] = {-1, -1, -1, -1, 0, 1, 2, 3}
{noformat}


We should learn from {{BlockInfoContiguous.removeStorage(..)}}. When a storage 
is removed, we bring the last item front.
With the improvement, the worst case become:
{noformat}
triplets[] = {s0, s1, s2, s3, null}
indices[] = {0, 1, 2, 3, -1}
{noformat}
We have an empty slot.

Notes:
Assume we copy 4 storage first, then delete 4. Even with the improvement, the 
worst case could be:
{noformat}
triplets[] = {s0, s1, s2, s3, null, null, null, null}
indices[] = {0, 1, 2, 3, -1, -1, -1, -1}
{noformat}
But the Balancer strategy won't move same block/blockGroup twice in a row. So 
this case is very rare.


> Erasure Coding: Improve memory efficiency of BlockInfoStriped
> -
>
> Key: HDFS-8501
> URL: https://issues.apache.org/jira/browse/HDFS-8501
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
>
> Erasure Coding: Improve memory efficiency of BlockInfoStriped
> Assume we have a BlockInfoStriped:
> {noformat}
> triplets[] = {s0, s1, s2, s3}
> indices[] = {0, 1, 2, 3}
> {noformat}
> When we run balancer/mover to re-locate replica on s2, firstly it becomes:
> {noformat}
> triplets[] = {s0, s1, s2, s3, s2}
> indices[] = {0, 1, 2, 3, 2}
> {noformat}
> Then the replica on s2 is removed, finally it becomes:
> {noformat}
> triplets[] = {s0, s1, null, s3, s2}
> indices[] = {0, 1, -1, 3, 2}
> {noformat}
> The worst case is:
> {noformat}
> triplets[] = {null, null, null, null, s0, s1, s2, s3}
> indices[] = {-1, -1, -1, -1, 0, 1, 2, 3}
> {noformat}
> We should learn from {{BlockInfoContiguous.removeStorage(..)}}. When a 
> storage is removed, we bring the last item front.
> With the improvement, the worst case become:
> {noformat}
> triplets[] = {s0, s1, s2, s3, null}
> indices[] = {0, 1, 2, 3, -1}
> {noformat}
> We have an empty slot.
> Notes:
> Assume we copy 4 storage first, then delete 4. Even with the improvement, the 
> worst case could be:
> {noformat}
> triplets[] = {s0, s1, s2, s3, null, null, null, null}
> indices[] = {0, 1, 2, 3, -1, -1, -1, -1}
> {noformat}
> But the Balancer strategy won't move same block/blockGroup twice in a row. So 
> this case is very rare.





[jira] [Updated] (HDFS-8471) Implement read block over HTTP/2

2015-05-30 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HDFS-8471:

Attachment: HDFS-8471.2.patch

Hide the stream id when calling ReadBlockHandler. 

https://github.com/netty/netty/issues/3667 has not been finished yet, so we 
need to work around it ourselves.

> Implement read block over HTTP/2
> 
>
> Key: HDFS-8471
> URL: https://issues.apache.org/jira/browse/HDFS-8471
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HDFS-8471.1.patch, HDFS-8471.2.patch, HDFS-8471.patch
>
>






[jira] [Updated] (HDFS-8501) Erasure Coding: Improve memory efficiency of BlockInfoStriped

2015-05-30 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8501:

Description: 
Erasure Coding: Improve memory efficiency of BlockInfoStriped

Assume we have a BlockInfoStriped:
{noformat}
triplets[] = {s0, s1, s2, s3}
indices[] = {0, 1, 2, 3}
{noformat}

When we run balancer/mover to re-locate replica on s2, firstly it becomes:
{noformat}
triplets[] = {s0, s1, s2, s3, s4}
indices[] = {0, 1, 2, 3, 2}
{noformat}
Then the replica on s2 is removed, finally it becomes:
{noformat}
triplets[] = {s0, s1, null, s3, s4}
indices[] = {0, 1, -1, 3, 2}
{noformat}

The worst case is:
{noformat}
triplets[] = {null, null, null, null, s4, s5, s6, s7}
indices[] = {-1, -1, -1, -1, 0, 1, 2, 3}
{noformat}


We should learn from {{BlockInfoContiguous.removeStorage(..)}}. When a storage 
is removed, we bring the last item front.
With the improvement, the worst case become:
{noformat}
triplets[] = {s4, s5, s6, s7, null}
indices[] = {0, 1, 2, 3, -1}
{noformat}
We have an empty slot.

Notes:
Assume we copy 4 storage first, then delete 4. Even with the improvement, the 
worst case could be:
{noformat}
triplets[] = {s4, s5, s6, s7, null, null, null, null}
indices[] = {0, 1, 2, 3, -1, -1, -1, -1}
{noformat}
But the Balancer strategy won't move same block/blockGroup twice in a row. So 
this case is very rare.

  was:
Erasure Coding: Improve memory efficiency of BlockInfoStriped

Assume we have a BlockInfoStriped:
{noformat}
triplets[] = {s0, s1, s2, s3}
indices[] = {0, 1, 2, 3}
{noformat}

When we run balancer/mover to re-locate replica on s2, firstly it becomes:
{noformat}
triplets[] = {s0, s1, s2, s3, s2}
indices[] = {0, 1, 2, 3, 2}
{noformat}
Then the replica on s2 is removed, finally it becomes:
{noformat}
triplets[] = {s0, s1, null, s3, s2}
indices[] = {0, 1, -1, 3, 2}
{noformat}

The worst case is:
{noformat}
triplets[] = {null, null, null, null, s0, s1, s2, s3}
indices[] = {-1, -1, -1, -1, 0, 1, 2, 3}
{noformat}


We should learn from {{BlockInfoContiguous.removeStorage(..)}}. When a storage 
is removed, we bring the last item front.
With the improvement, the worst case become:
{noformat}
triplets[] = {s0, s1, s2, s3, null}
indices[] = {0, 1, 2, 3, -1}
{noformat}
We have an empty slot.

Notes:
Assume we copy 4 storage first, then delete 4. Even with the improvement, the 
worst case could be:
{noformat}
triplets[] = {s0, s1, s2, s3, null, null, null, null}
indices[] = {0, 1, 2, 3, -1, -1, -1, -1}
{noformat}
But the Balancer strategy won't move same block/blockGroup twice in a row. So 
this case is very rare.


> Erasure Coding: Improve memory efficiency of BlockInfoStriped
> -
>
> Key: HDFS-8501
> URL: https://issues.apache.org/jira/browse/HDFS-8501
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
>
> Erasure Coding: Improve memory efficiency of BlockInfoStriped
> Assume we have a BlockInfoStriped:
> {noformat}
> triplets[] = {s0, s1, s2, s3}
> indices[] = {0, 1, 2, 3}
> {noformat}
> When we run balancer/mover to re-locate replica on s2, firstly it becomes:
> {noformat}
> triplets[] = {s0, s1, s2, s3, s4}
> indices[] = {0, 1, 2, 3, 2}
> {noformat}
> Then the replica on s2 is removed, finally it becomes:
> {noformat}
> triplets[] = {s0, s1, null, s3, s4}
> indices[] = {0, 1, -1, 3, 2}
> {noformat}
> The worst case is:
> {noformat}
> triplets[] = {null, null, null, null, s4, s5, s6, s7}
> indices[] = {-1, -1, -1, -1, 0, 1, 2, 3}
> {noformat}
> We should learn from {{BlockInfoContiguous.removeStorage(..)}}. When a 
> storage is removed, we bring the last item front.
> With the improvement, the worst case become:
> {noformat}
> triplets[] = {s4, s5, s6, s7, null}
> indices[] = {0, 1, 2, 3, -1}
> {noformat}
> We have an empty slot.
> Notes:
> Assume we copy 4 storage first, then delete 4. Even with the improvement, the 
> worst case could be:
> {noformat}
> triplets[] = {s4, s5, s6, s7, null, null, null, null}
> indices[] = {0, 1, 2, 3, -1, -1, -1, -1}
> {noformat}
> But the Balancer strategy won't move same block/blockGroup twice in a row. So 
> this case is very rare.





[jira] [Commented] (HDFS-7401) Add block info to DFSInputStream' WARN message when it adds node to deadNodes

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565939#comment-14565939
 ] 

Hudson commented on HDFS-7401:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #943 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/943/])
HDFS-7401. Add block info to DFSInputStream' WARN message when it adds node to 
deadNodes (Contributed by Arshad Mohammad) (vinayakumarb: rev 
b75df697e0f101f86788ad23a338ab3545b8d702)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java


> Add block info to DFSInputStream' WARN message when it adds node to deadNodes
> -
>
> Key: HDFS-7401
> URL: https://issues.apache.org/jira/browse/HDFS-7401
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ming Ma
>Assignee: Arshad Mohammad
>Priority: Minor
>  Labels: BB2015-05-RFC
> Fix For: 2.8.0
>
> Attachments: HDFS-7401-2.patch, HDFS-7401.patch
>
>
> Block info is missing in the below message
> {noformat}
> 2014-11-14 03:59:00,386 WARN org.apache.hadoop.hdfs.DFSClient: Failed to 
> connect to /xx.xx.xx.xxx:50010 for block, add to deadNodes and continue. 
> java.io.IOException: Got error for OP_READ_BLOCK
> {noformat}
> The code
> {noformat}
> DFSInputStream.java
>   DFSClient.LOG.warn("Failed to connect to " + targetAddr + " for 
> block"
> + ", add to deadNodes and continue. " + ex, ex);
> {noformat}
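A sketch of the kind of fix this calls for: include the block identity in the warning. The exact wording and helper names in the committed patch may differ; this only shows the missing piece being added to the message.

```java
// Illustrative sketch (not the committed HDFS-7401 code): the warning now
// names the block, so operators can tell which block triggered the dead-node
// marking.
public class DeadNodeWarning {
    public static String format(String targetAddr, String block, Exception ex) {
        return "Failed to connect to " + targetAddr + " for block " + block
            + ", add to deadNodes and continue. " + ex;
    }
}
```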





[jira] [Commented] (HDFS-7609) Avoid retry cache collision when Standby NameNode loading edits

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565935#comment-14565935
 ] 

Hudson commented on HDFS-7609:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #943 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/943/])
HDFS-7609. Avoid retry cache collision when Standby NameNode loading edits. 
Contributed by Ming Ma. (jing9: rev 7817674a3a4d097b647dd77f1345787dd376d5ea)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java


> Avoid retry cache collision when Standby NameNode loading edits
> ---
>
> Key: HDFS-7609
> URL: https://issues.apache.org/jira/browse/HDFS-7609
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Carrey Zhan
>Assignee: Ming Ma
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HDFS-7609-2.patch, HDFS-7609-3.patch, 
> HDFS-7609-CreateEditsLogWithRPCIDs.patch, HDFS-7609.patch, 
> recovery_do_not_use_retrycache.patch
>
>
> One day my namenode crashed because two journal nodes timed out at the same 
> time under very high load, leaving behind about 100 million transactions in 
> the edits log. (I still have no idea why they were not rolled into the fsimage.)
> I tried to restart the namenode, but it showed that almost 20 hours would be 
> needed to finish, and it was loading fsedits most of the time. I also tried to 
> restart the namenode in recover mode, but the loading speed was no different.
> I looked into the stack trace and judged that this is caused by the retry 
> cache. So I set dfs.namenode.enable.retrycache to false, and the restart 
> process finished in half an hour.
> I think the retry cache is useless during startup, at least during the 
> recovery process.
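The workaround described above corresponds to a setting like the following in hdfs-site.xml. Note that disabling the retry cache gives up at-most-once semantics for retried client RPCs, so this is an emergency startup/recovery expedient, not a recommended permanent configuration.

```xml
<!-- hdfs-site.xml: emergency workaround from the report above. Disables the
     NameNode retry cache so edit-log replay skips retry-cache bookkeeping.
     Not recommended as a permanent setting. -->
<property>
  <name>dfs.namenode.enable.retrycache</name>
  <value>false</value>
</property>
```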





[jira] [Commented] (HDFS-8460) Erasure Coding: stateful read result doesn't match data occasionally because of flawed test

2015-05-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565946#comment-14565946
 ] 

Hadoop QA commented on HDFS-8460:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |   5m 35s | Findbugs (version ) appears to 
be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   7m 26s | There were no new javac warning 
messages. |
| {color:red}-1{color} | release audit |   0m 13s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 37s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 24s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m 20s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 172m 49s | Tests failed in hadoop-hdfs. |
| | | 193m 35s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.TestDFSStripedInputStream |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.hdfs.server.namenode.TestAuditLogs |
|   | hadoop.hdfs.server.blockmanagement.TestBlockInfo |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12736319/HDFS-8460-HDFS-7285.002.patch
 |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | HDFS-7285 / 1299357 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11172/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11172/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11172/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11172/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11172/console |


This message was automatically generated.

> Erasure Coding: stateful read result doesn't match data occasionally because 
> of flawed test
> ---
>
> Key: HDFS-8460
> URL: https://issues.apache.org/jira/browse/HDFS-8460
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Yi Liu
>Assignee: Walter Su
> Attachments: HDFS-8460-HDFS-7285.001.patch, 
> HDFS-8460-HDFS-7285.002.patch
>
>
> I found this issue in TestDFSStripedInputStream: {{testStatefulRead}} 
> occasionally fails, showing that the read result doesn't match the data written.





[jira] [Commented] (HDFS-7401) Add block info to DFSInputStream' WARN message when it adds node to deadNodes

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565964#comment-14565964
 ] 

Hudson commented on HDFS-7401:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #213 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/213/])
HDFS-7401. Add block info to DFSInputStream' WARN message when it adds node to 
deadNodes (Contributed by Arshad Mohammad) (vinayakumarb: rev 
b75df697e0f101f86788ad23a338ab3545b8d702)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java


> Add block info to DFSInputStream' WARN message when it adds node to deadNodes
> -
>
> Key: HDFS-7401
> URL: https://issues.apache.org/jira/browse/HDFS-7401
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ming Ma
>Assignee: Arshad Mohammad
>Priority: Minor
>  Labels: BB2015-05-RFC
> Fix For: 2.8.0
>
> Attachments: HDFS-7401-2.patch, HDFS-7401.patch
>
>
> Block info is missing in the below message
> {noformat}
> 2014-11-14 03:59:00,386 WARN org.apache.hadoop.hdfs.DFSClient: Failed to 
> connect to /xx.xx.xx.xxx:50010 for block, add to deadNodes and continue. 
> java.io.IOException: Got error for OP_READ_BLOCK
> {noformat}
> The code
> {noformat}
> DFSInputStream.java
>   DFSClient.LOG.warn("Failed to connect to " + targetAddr + " for 
> block"
> + ", add to deadNodes and continue. " + ex, ex);
> {noformat}





[jira] [Commented] (HDFS-7609) Avoid retry cache collision when Standby NameNode loading edits

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565960#comment-14565960
 ] 

Hudson commented on HDFS-7609:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #213 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/213/])
HDFS-7609. Avoid retry cache collision when Standby NameNode loading edits. 
Contributed by Ming Ma. (jing9: rev 7817674a3a4d097b647dd77f1345787dd376d5ea)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> Avoid retry cache collision when Standby NameNode loading edits
> ---
>
> Key: HDFS-7609
> URL: https://issues.apache.org/jira/browse/HDFS-7609
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Carrey Zhan
>Assignee: Ming Ma
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HDFS-7609-2.patch, HDFS-7609-3.patch, 
> HDFS-7609-CreateEditsLogWithRPCIDs.patch, HDFS-7609.patch, 
> recovery_do_not_use_retrycache.patch
>
>
> One day my namenode crashed because two journal nodes timed out at the same 
> time under very high load, leaving behind about 100 million transactions in 
> the edits log. (I still have no idea why they were not rolled into the fsimage.)
> I tried to restart the namenode, but it showed that almost 20 hours would be 
> needed to finish, and it was loading fsedits most of the time. I also tried 
> to restart the namenode in recovery mode, but the loading speed was no different.
> I looked into the stack trace and judged that the slowness was caused by the 
> retry cache. So I set dfs.namenode.enable.retrycache to false, and the restart 
> process finished in half an hour.
> I think the retry cache is useless during startup, at least during the 
> recovery process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8471) Implement read block over HTTP/2

2015-05-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565968#comment-14565968
 ] 

Hadoop QA commented on HDFS-8471:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 52s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:red}-1{color} | javac |   7m 46s | The applied patch generated  2  
additional warning messages. |
| {color:green}+1{color} | javadoc |   9m 41s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 19s | The applied patch generated  
34 new checkstyle issues (total was 163, now 189). |
| {color:green}+1{color} | whitespace |   0m  5s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 19s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 18s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 147m 16s | Tests failed in hadoop-hdfs. |
| | | 194m  6s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestFileAppend4 |
|   | hadoop.hdfs.TestRead |
|   | hadoop.hdfs.TestHdfsAdmin |
|   | hadoop.hdfs.TestClientReportBadBlock |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.TestAppendSnapshotTruncate |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
|   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement |
|   | hadoop.hdfs.TestFileAppendRestart |
|   | hadoop.hdfs.server.datanode.TestDataStorage |
|   | hadoop.TestRefreshCallQueue |
|   | hadoop.hdfs.TestListFilesInDFS |
|   | hadoop.hdfs.server.datanode.TestDnRespectsBlockReportSplitThreshold |
|   | hadoop.security.TestPermissionSymlinks |
|   | hadoop.hdfs.TestDFSRollback |
|   | hadoop.hdfs.TestFileConcurrentReader |
|   | hadoop.hdfs.TestFileAppend2 |
|   | hadoop.hdfs.TestGetFileChecksum |
|   | hadoop.hdfs.crypto.TestHdfsCryptoStreams |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart |
|   | hadoop.hdfs.TestBlockStoragePolicy |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.TestDFSUpgrade |
|   | hadoop.hdfs.server.datanode.TestIncrementalBrVariations |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.cli.TestXAttrCLI |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer |
|   | hadoop.security.TestPermission |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory |
|   | hadoop.hdfs.TestFileCreationDelete |
|   | hadoop.hdfs.TestDFSStorageStateRecovery |
|   | hadoop.hdfs.TestAppendDifferentChecksum |
|   | hadoop.hdfs.TestRemoteBlockReader |
|   | hadoop.hdfs.TestRestartDFS |
|   | hadoop.cli.TestHDFSCLI |
|   | hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestScrLazyPersistFiles |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.TestHDFSFileSystemContract |
|   | hadoop.cli.TestAclCLI |
|   | hadoop.hdfs.TestDFSPermission |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.cli.TestCryptoAdminCLI |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestBlockReaderLocalLegacy |
|   | hadoop.hdfs.security.token.block.TestBlockToken |
|   | hadoop.hdfs.server.datanode.TestBlockRecovery |
|   | hadoop.hdfs.TestDFSStartupVersions |
|   | hadoop.hdfs.TestWriteBlockGetsBlockLengthHint |
|   | hadoop.hdfs.TestAbandonBlock |
|   | hadoop.hdfs.TestFetchImage |
|   | hadoop.hdfs.TestDatanodeDeath |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestInterDatanodeProtocol |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestRbwSpaceReservation |
|   | hadoop.hdfs.security.TestDelegationToken |
|   | hadoop.hdfs.TestDFSClientExcludedNodes |
|   | hadoop.hdfs.TestSnapshotCommands |
|   | hadoop.hdfs.TestDFSInputStream |
|   | hadoop.hdfs.TestDFSOutputStream |
|   | hadoop.hdfs.TestDFSInotifyEventInputStream |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter |
|   | hadoop.hdfs.TestReserved

[jira] [Updated] (HDFS-8471) Implement read block over HTTP/2

2015-05-30 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HDFS-8471:

Attachment: HDFS-8471.3.patch

Add SuppressWarnings to remove the javac warnings. Clean up some unused imports.

> Implement read block over HTTP/2
> 
>
> Key: HDFS-8471
> URL: https://issues.apache.org/jira/browse/HDFS-8471
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HDFS-8471.1.patch, HDFS-8471.2.patch, HDFS-8471.3.patch, 
> HDFS-8471.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8452) In WebHDFS, duplicate directory creation is not throwing exception.

2015-05-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565999#comment-14565999
 ] 

Steve Loughran commented on HDFS-8452:
--

FWIW {{java.io.File.mkdirs()}} doesn't throw an error if the destination exists 
and is a file; it returns false if there was anything at the path, be it file, 
directory or some OS-specific thing.

{{RawLocalFS}} adds a check for the destination existing as something other 
than a dir, though there's a small race condition with the check and the dir 
creation being separate.
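The {{java.io.File.mkdirs()}} behaviour described above can be seen in a minimal standalone sketch (the class name here is illustrative, not Hadoop code):

```java
import java.io.File;
import java.io.IOException;

public class MkdirsDemo {
    public static void main(String[] args) throws IOException {
        // Create a plain file, then ask for a directory at the same path.
        File f = File.createTempFile("mkdirs", ".demo");
        // mkdirs() does not throw when something already exists at the
        // path; it simply returns false -- the caller must check.
        System.out.println(f.mkdirs());
        f.delete();
    }
}
```

This prints {{false}}: the call fails silently rather than raising an exception, which is why WebHDFS ends up returning 200 OK for the duplicate MKDIRS unless an explicit existence check is added.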

> In WebHDFS, duplicate directory creation is not throwing exception.
> ---
>
> Key: HDFS-8452
> URL: https://issues.apache.org/jira/browse/HDFS-8452
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Jagadesh Kiran N
>Priority: Minor
>
> *Case 1 (CLI):*
> a. In HDFS, create a new directory:
>   {code}./hdfs dfs -mkdir /new{code}
>   A new directory will be created.
> b. Now execute the same command again:
>   {code}mkdir: `/new': File exists{code}
>   An error message will be shown.
> *Case 2 (REST API):*
> a. In HDFS, create a new directory:
>  {code}curl -i -X PUT -L 
> "http://host1:50070/webhdfs/v1/new1?op=MKDIRS&overwrite=false"{code}
>   A new directory will be created.
> b. Now execute the same WebHDFS command again:
>   No exception will be thrown back to the client.
>{code}
> HTTP/1.1 200 OK
> Cache-Control: no-cache
> Expires: Thu, 21 May 2015 15:11:57 GMT
> Date: Thu, 21 May 2015 15:11:57 GMT
> Pragma: no-cache
> Content-Type: application/json
> Transfer-Encoding: chunked
>{code}
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7609) Avoid retry cache collision when Standby NameNode loading edits

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566008#comment-14566008
 ] 

Hudson commented on HDFS-7609:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2141 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2141/])
HDFS-7609. Avoid retry cache collision when Standby NameNode loading edits. 
Contributed by Ming Ma. (jing9: rev 7817674a3a4d097b647dd77f1345787dd376d5ea)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> Avoid retry cache collision when Standby NameNode loading edits
> ---
>
> Key: HDFS-7609
> URL: https://issues.apache.org/jira/browse/HDFS-7609
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Carrey Zhan
>Assignee: Ming Ma
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HDFS-7609-2.patch, HDFS-7609-3.patch, 
> HDFS-7609-CreateEditsLogWithRPCIDs.patch, HDFS-7609.patch, 
> recovery_do_not_use_retrycache.patch
>
>
> One day my namenode crashed because two journal nodes timed out at the same 
> time under very high load, leaving behind about 100 million transactions in 
> the edits log. (I still have no idea why they were not rolled into the fsimage.)
> I tried to restart the namenode, but it showed that almost 20 hours would be 
> needed to finish, and it was loading fsedits most of the time. I also tried 
> to restart the namenode in recovery mode, but the loading speed was no different.
> I looked into the stack trace and judged that the slowness was caused by the 
> retry cache. So I set dfs.namenode.enable.retrycache to false, and the restart 
> process finished in half an hour.
> I think the retry cache is useless during startup, at least during the 
> recovery process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7609) Avoid retry cache collision when Standby NameNode loading edits

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566017#comment-14566017
 ] 

Hudson commented on HDFS-7609:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #202 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/202/])
HDFS-7609. Avoid retry cache collision when Standby NameNode loading edits. 
Contributed by Ming Ma. (jing9: rev 7817674a3a4d097b647dd77f1345787dd376d5ea)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> Avoid retry cache collision when Standby NameNode loading edits
> ---
>
> Key: HDFS-7609
> URL: https://issues.apache.org/jira/browse/HDFS-7609
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Carrey Zhan
>Assignee: Ming Ma
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HDFS-7609-2.patch, HDFS-7609-3.patch, 
> HDFS-7609-CreateEditsLogWithRPCIDs.patch, HDFS-7609.patch, 
> recovery_do_not_use_retrycache.patch
>
>
> One day my namenode crashed because two journal nodes timed out at the same 
> time under very high load, leaving behind about 100 million transactions in 
> the edits log. (I still have no idea why they were not rolled into the fsimage.)
> I tried to restart the namenode, but it showed that almost 20 hours would be 
> needed to finish, and it was loading fsedits most of the time. I also tried 
> to restart the namenode in recovery mode, but the loading speed was no different.
> I looked into the stack trace and judged that the slowness was caused by the 
> retry cache. So I set dfs.namenode.enable.retrycache to false, and the restart 
> process finished in half an hour.
> I think the retry cache is useless during startup, at least during the 
> recovery process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8078) HDFS client gets errors trying to to connect to IPv6 DataNode

2015-05-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566025#comment-14566025
 ] 

Steve Loughran commented on HDFS-8078:
--

bq. We could design a more coherent test plan for this functionality (how will 
we know if a new change breaks ipv6 functionality? Right now we have no idea 
because Jenkins explicitly disables ipv6.)

Well, that's easily fixed if it's a Maven/Jenkins setup.

> HDFS client gets errors trying to to connect to IPv6 DataNode
> -
>
> Key: HDFS-8078
> URL: https://issues.apache.org/jira/browse/HDFS-8078
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Nate Edel
>Assignee: Nate Edel
>  Labels: BB2015-05-TBR, ipv6
> Attachments: HDFS-8078.9.patch
>
>
> 1st exception, on put:
> 15/03/23 18:43:18 WARN hdfs.DFSClient: DataStreamer Exception
> java.lang.IllegalArgumentException: Does not contain a valid host:port 
> authority: 2401:db00:1010:70ba:face:0:8:0:50010
>   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212)
>   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
>   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1607)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1408)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
> Appears to actually stem from code in DataNodeID which assumes it's safe to 
> append together (ipaddr + ":" + port) -- which is OK for IPv4 and not OK for 
> IPv6.  NetUtils.createSocketAddr( ) assembles a Java URI object, which 
> requires the format proto://[2401:db00:1010:70ba:face:0:8:0]:50010
> Currently using InetAddress.getByName() to validate IPv6 (guava 
> InetAddresses.forString has been flaky) but could also use our own parsing. 
> (From logging this, it seems like a low-enough frequency call that the extra 
> object creation shouldn't be problematic, and for me the slight risk of 
> passing in bad input that is not actually an IPv4 or IPv6 address and thus 
> calling an external DNS lookup is outweighed by getting the address 
> normalized and avoiding rewriting parsing.)
> Alternatively, sun.net.util.IPAddressUtil.isIPv6LiteralAddress()
> ---
> 2nd exception (on datanode)
> 15/04/13 13:18:07 ERROR datanode.DataNode: 
> dev1903.prn1.facebook.com:50010:DataXceiver error processing unknown 
> operation  src: /2401:db00:20:7013:face:0:7:0:54152 dst: 
> /2401:db00:11:d010:face:0:2f:0:50010
> java.io.EOFException
> at java.io.DataInputStream.readShort(DataInputStream.java:315)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:226)
> at java.lang.Thread.run(Thread.java:745)
> Which also comes as client error "-get: 2401 is not an IP string literal."
> This one has existing parsing logic which needs to shift to the last colon 
> rather than the first.  Should also be a tiny bit faster by using lastIndexOf 
> rather than split.  Could alternatively use the techniques above.
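The last-colon parsing and URI-style bracketing described above can be sketched as follows; {{HostPortUtil}} and its method names are hypothetical illustrations of the technique, not the actual {{DataNodeID}}/{{NetUtils}} code:

```java
public class HostPortUtil {
    // Split "host:port" on the LAST colon, so unbracketed IPv6 literals
    // such as 2401:db00:1010:70ba:face:0:8:0:50010 keep their full host
    // part. Using lastIndexOf also avoids the cost of split().
    public static String[] splitHostPort(String addr) {
        int idx = addr.lastIndexOf(':');
        if (idx < 0) {
            throw new IllegalArgumentException("No port in: " + addr);
        }
        String host = addr.substring(0, idx);
        // Accept URI-style bracketed form too: [2401:db00::1]:50010
        if (host.startsWith("[") && host.endsWith("]")) {
            host = host.substring(1, host.length() - 1);
        }
        return new String[] { host, addr.substring(idx + 1) };
    }

    // Build an authority string for use in a URI: per RFC 3986, IPv6
    // literals must be wrapped in brackets before appending ":port".
    public static String toAuthority(String host, int port) {
        if (host.indexOf(':') >= 0) {
            return "[" + host + "]:" + port;
        }
        return host + ":" + port;
    }

    public static void main(String[] args) {
        // Prints [2401:db00:1010:70ba:face:0:8:0]:50010
        System.out.println(
            toAuthority("2401:db00:1010:70ba:face:0:8:0", 50010));
    }
}
```

This is the shape of fix the description argues for: format with brackets on the way into {{NetUtils.createSocketAddr()}}, and parse from the last colon on the way out.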



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7609) Avoid retry cache collision when Standby NameNode loading edits

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566044#comment-14566044
 ] 

Hudson commented on HDFS-7609:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #211 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/211/])
HDFS-7609. Avoid retry cache collision when Standby NameNode loading edits. 
Contributed by Ming Ma. (jing9: rev 7817674a3a4d097b647dd77f1345787dd376d5ea)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java


> Avoid retry cache collision when Standby NameNode loading edits
> ---
>
> Key: HDFS-7609
> URL: https://issues.apache.org/jira/browse/HDFS-7609
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Carrey Zhan
>Assignee: Ming Ma
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HDFS-7609-2.patch, HDFS-7609-3.patch, 
> HDFS-7609-CreateEditsLogWithRPCIDs.patch, HDFS-7609.patch, 
> recovery_do_not_use_retrycache.patch
>
>
> One day my namenode crashed because two journal nodes timed out at the same 
> time under very high load, leaving behind about 100 million transactions in 
> the edits log. (I still have no idea why they were not rolled into the fsimage.)
> I tried to restart the namenode, but it showed that almost 20 hours would be 
> needed to finish, and it was loading fsedits most of the time. I also tried 
> to restart the namenode in recovery mode, but the loading speed was no different.
> I looked into the stack trace and judged that the slowness was caused by the 
> retry cache. So I set dfs.namenode.enable.retrycache to false, and the restart 
> process finished in half an hour.
> I think the retry cache is useless during startup, at least during the 
> recovery process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7609) Avoid retry cache collision when Standby NameNode loading edits

2015-05-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566055#comment-14566055
 ] 

Hudson commented on HDFS-7609:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2159 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2159/])
HDFS-7609. Avoid retry cache collision when Standby NameNode loading edits. 
Contributed by Ming Ma. (jing9: rev 7817674a3a4d097b647dd77f1345787dd376d5ea)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java


> Avoid retry cache collision when Standby NameNode loading edits
> ---
>
> Key: HDFS-7609
> URL: https://issues.apache.org/jira/browse/HDFS-7609
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Carrey Zhan
>Assignee: Ming Ma
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HDFS-7609-2.patch, HDFS-7609-3.patch, 
> HDFS-7609-CreateEditsLogWithRPCIDs.patch, HDFS-7609.patch, 
> recovery_do_not_use_retrycache.patch
>
>
> One day my namenode crashed because two journal nodes timed out at the same 
> time under very high load, leaving behind about 100 million transactions in 
> the edits log. (I still have no idea why they were not rolled into the fsimage.)
> I tried to restart the namenode, but it showed that almost 20 hours would be 
> needed to finish, and it was loading fsedits most of the time. I also tried 
> to restart the namenode in recovery mode, but the loading speed was no different.
> I looked into the stack trace and judged that the slowness was caused by the 
> retry cache. So I set dfs.namenode.enable.retrycache to false, and the restart 
> process finished in half an hour.
> I think the retry cache is useless during startup, at least during the 
> recovery process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8471) Implement read block over HTTP/2

2015-05-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566102#comment-14566102
 ] 

Hadoop QA commented on HDFS-8471:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m  8s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   7m 32s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 41s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 12s | The applied patch generated  
18 new checkstyle issues (total was 163, now 171). |
| {color:green}+1{color} | whitespace |   0m  4s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 16s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 12s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 163m 15s | Tests failed in hadoop-hdfs. |
| | | 209m 58s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.TestFileTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12736339/HDFS-8471.3.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / eb6bf91 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11174/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11174/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11174/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11174/console |


This message was automatically generated.

> Implement read block over HTTP/2
> 
>
> Key: HDFS-8471
> URL: https://issues.apache.org/jira/browse/HDFS-8471
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HDFS-8471.1.patch, HDFS-8471.2.patch, HDFS-8471.3.patch, 
> HDFS-8471.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-6249) Output AclEntry in PBImageXmlWriter

2015-05-30 Thread surendra singh lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

surendra singh lilhore reassigned HDFS-6249:


Assignee: surendra singh lilhore

> Output AclEntry in PBImageXmlWriter
> ---
>
> Key: HDFS-6249
> URL: https://issues.apache.org/jira/browse/HDFS-6249
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Affects Versions: 2.4.0
>Reporter: Akira AJISAKA
>Assignee: surendra singh lilhore
>Priority: Minor
>  Labels: newbie
>
> It would be useful if {{PBImageXmlWriter}} outputs {{AclEntry}} also.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8471) Implement read block over HTTP/2

2015-05-30 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HDFS-8471:

Attachment: HDFS-8471.4.patch

Do not create new promise in DtpHttp2Handler.

> Implement read block over HTTP/2
> 
>
> Key: HDFS-8471
> URL: https://issues.apache.org/jira/browse/HDFS-8471
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HDFS-8471.1.patch, HDFS-8471.2.patch, HDFS-8471.3.patch, 
> HDFS-8471.4.patch, HDFS-8471.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8471) Implement read block over HTTP/2

2015-05-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566381#comment-14566381
 ] 

Hadoop QA commented on HDFS-8471:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 59s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   7m 28s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 36s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 14s | The applied patch generated  
16 new checkstyle issues (total was 163, now 169). |
| {color:green}+1{color} | whitespace |   0m  4s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 18s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 12s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 164m  1s | Tests failed in hadoop-hdfs. |
| | | 210m 27s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestEncryptionZonesWithKMS |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12736386/HDFS-8471.4.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / a8acdd6 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11175/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11175/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11175/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11175/console |


This message was automatically generated.

> Implement read block over HTTP/2
> 
>
> Key: HDFS-8471
> URL: https://issues.apache.org/jira/browse/HDFS-8471
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HDFS-8471.1.patch, HDFS-8471.2.patch, HDFS-8471.3.patch, 
> HDFS-8471.4.patch, HDFS-8471.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7240) Object store in HDFS

2015-05-30 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566384#comment-14566384
 ] 

Jitendra Nath Pandey commented on HDFS-7240:


Webex:
https://hortonworks.webex.com/meet/jitendra

1-650-479-3208 Call-in toll number (US/Canada) 
1-877-668-4493 Call-in toll-free number (US/Canada)
Access code: 623 433 021

Time: 6/3/2015, 1pm to 3pm

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: Ozone-architecture-v1.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8502) Ozone: Storage container data pipeline

2015-05-30 Thread Jitendra Nath Pandey (JIRA)
Jitendra Nath Pandey created HDFS-8502:
--

 Summary: Ozone: Storage container data pipeline
 Key: HDFS-8502
 URL: https://issues.apache.org/jira/browse/HDFS-8502
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey


This jira lays out the basic framework of the data pipeline used to replicate 
storage containers while writing. An important design goal is to keep the 
pipeline semantics independent of the storage container implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)