[jira] [Commented] (HDFS-8859) Improve DataNode (ReplicaMap) memory footprint to save about 45%

2015-08-09 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14679157#comment-14679157
 ] 

Yi Liu commented on HDFS-8859:
--

The two test failures are not related.

> Improve DataNode (ReplicaMap) memory footprint to save about 45%
> 
>
> Key: HDFS-8859
> URL: https://issues.apache.org/jira/browse/HDFS-8859
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Critical
> Attachments: HDFS-8859.001.patch, HDFS-8859.002.patch
>
>
> By using the following approach we can save about *45%* of the memory 
> footprint for each block replica in DataNode memory (this JIRA only talks 
> about *ReplicaMap* in the DataNode). The details are:
> In ReplicaMap, 
> {code}
> private final Map<String, Map<Long, ReplicaInfo>> map =
> new HashMap<String, Map<Long, ReplicaInfo>>();
> {code}
> Currently we use a HashMap {{Map<Long, ReplicaInfo>}} to store the replicas 
> in memory.  The key is the block id of the block replica, which is already 
> included in {{ReplicaInfo}}, so this memory can be saved.  Also, each HashMap 
> Entry has an object overhead.  We can implement a lightweight Set which is 
> similar to {{LightWeightGSet}}, but not of fixed size ({{LightWeightGSet}} 
> uses a fixed size for the entries array, usually a big value; an example is 
> {{BlocksMap}}. This avoids full GC since there is no need to resize).  We 
> should also be able to get an element through its key.
> Following is a comparison of the memory footprint if we implement a 
> lightweight set as described:
> We can save:
> {noformat}
> SIZE (bytes)   ITEM
> 20The Key: Long (12 bytes object overhead + 8 
> bytes long)
> 12HashMap Entry object overhead
> 4  reference to the key in Entry
> 4  reference to the value in Entry
> 4  hash in Entry
> {noformat}
> Total:  -44 bytes
> We need to add:
> {noformat}
> SIZE (bytes)   ITEM
> 4 a reference to next element in ReplicaInfo
> {noformat}
> Total:  +4 bytes
> So in total we can save 40 bytes for each block replica.
> And currently one finalized replica needs around 46 bytes (notice: we ignore 
> memory alignment here).
> We can save 1 - (4 + 46) / (44 + 46) = *45%* of the memory for each block 
> replica in the DataNode.
> 
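To make the idea concrete, here is a minimal sketch of the kind of intrusive, resizable hash set the description proposes. All names here (LightweightSet, ReplicaStub, LinkedElement) are illustrative, not the actual HDFS implementation: the point is that each element carries its own key and next pointer, so there are no boxed Long keys and no HashMap.Entry objects, and unlike {{LightWeightGSet}} the bucket array grows on demand.

```java
// Sketch only: an intrusive hash set where elements link themselves,
// eliminating per-entry objects and boxed keys. Not the HDFS code.
public class LightweightSetSketch {
  /** Elements store the next pointer themselves (intrusive linking). */
  interface LinkedElement {
    long getKey();
    LinkedElement getNext();
    void setNext(LinkedElement next);
  }

  static class ReplicaStub implements LinkedElement {
    final long blockId;    // the key lives inside the element itself
    LinkedElement next;    // one reference replaces a whole HashMap.Entry
    ReplicaStub(long blockId) { this.blockId = blockId; }
    public long getKey() { return blockId; }
    public LinkedElement getNext() { return next; }
    public void setNext(LinkedElement n) { next = n; }
  }

  static class LightweightSet {
    private LinkedElement[] buckets = new LinkedElement[8];
    private int size;

    private int index(long key, int cap) { return (int) (key & (cap - 1)); }

    /** Adds an element (no duplicate-key check, to keep the sketch short). */
    void put(LinkedElement e) {
      if (size >= buckets.length * 3 / 4) resize();  // grow on demand
      int i = index(e.getKey(), buckets.length);
      e.setNext(buckets[i]);
      buckets[i] = e;
      size++;
    }

    LinkedElement get(long key) {
      for (LinkedElement e = buckets[index(key, buckets.length)];
           e != null; e = e.getNext()) {
        if (e.getKey() == key) return e;
      }
      return null;
    }

    private void resize() {
      LinkedElement[] old = buckets;
      buckets = new LinkedElement[old.length * 2];
      for (LinkedElement head : old) {
        while (head != null) {         // relink every element into new buckets
          LinkedElement next = head.getNext();
          int i = index(head.getKey(), buckets.length);
          head.setNext(buckets[i]);
          buckets[i] = head;
          head = next;
        }
      }
    }

    int size() { return size; }
  }
}
```

Resizing incrementally like this trades the one-time large allocation of {{LightWeightGSet}} for occasional rehash passes, which is the design trade-off the comment describes.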



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8630) WebHDFS : Support get/setStoragePolicy

2015-08-09 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-8630:
-
Attachment: HDFS-8630.001.patch

Rebased the patch based on the changes done in HDFS-8815.
Please review.

> WebHDFS : Support get/setStoragePolicy 
> ---
>
> Key: HDFS-8630
> URL: https://issues.apache.org/jira/browse/HDFS-8630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8630.001.patch, HDFS-8630.patch
>
>
> User can set and get the storage policy from the filesystem object. The same 
> operation can be allowed through the REST API.
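As a sketch of what such REST calls might look like: the op names (SETSTORAGEPOLICY, GETSTORAGEPOLICY) and the "storagepolicy" parameter below are assumptions based on general WebHDFS conventions, not the final API of this patch, which was still under review.

```java
// Sketch only: hypothetical WebHDFS URLs for storage-policy operations.
// Op and parameter names are assumptions, not the patch's final API.
public class WebHdfsStoragePolicySketch {
  static String setPolicyUrl(String host, int port, String path, String policy) {
    return String.format(
        "http://%s:%d/webhdfs/v1%s?op=SETSTORAGEPOLICY&storagepolicy=%s",
        host, port, path, policy);
  }

  static String getPolicyUrl(String host, int port, String path) {
    return String.format("http://%s:%d/webhdfs/v1%s?op=GETSTORAGEPOLICY",
        host, port, path);
  }

  public static void main(String[] args) {
    // A PUT against the first URL would set the policy; a GET against the
    // second would read it back, mirroring FileSystem#setStoragePolicy.
    System.out.println(setPolicyUrl("nn.example.com", 50070, "/data/file", "HOT"));
    System.out.println(getPolicyUrl("nn.example.com", 50070, "/data/file"));
  }
}
```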



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8630) WebHDFS : Support get/setStoragePolicy

2015-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14679233#comment-14679233
 ] 

Hadoop QA commented on HDFS-8630:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  20m  5s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |  10m  1s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 39s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 26s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   3m 16s | The applied patch generated  2 
new checkstyle issues (total was 140, now 142). |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 2  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 52s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 42s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   5m 57s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   4m 16s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  95m 15s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 27s | Tests passed in 
hadoop-hdfs-client. |
| | | 154m  1s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-client |
| Failed unit tests | hadoop.hdfs.TestReplaceDatanodeOnFailure |
| Timed out tests | org.apache.hadoop.hdfs.qjournal.TestSecureNNWithQJM |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12749466/HDFS-8630.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 8f73bdd |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11948/artifact/patchprocess/diffcheckstylehadoop-hdfs-client.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11948/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11948/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-client.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11948/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11948/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11948/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11948/console |


This message was automatically generated.

> WebHDFS : Support get/setStoragePolicy 
> ---
>
> Key: HDFS-8630
> URL: https://issues.apache.org/jira/browse/HDFS-8630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8630.001.patch, HDFS-8630.patch
>
>
> User can set and get the storage policy from the filesystem object. The same 
> operation can be allowed through the REST API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8287) DFSStripedOutputStream.writeChunk should not wait for writing parity

2015-08-09 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14679401#comment-14679401
 ] 

Kai Sasaki commented on HDFS-8287:
--

When running these tests locally, they do not fail. Could Findbugs be causing 
this error? Could you please check it?

> DFSStripedOutputStream.writeChunk should not wait for writing parity 
> -
>
> Key: HDFS-8287
> URL: https://issues.apache.org/jira/browse/HDFS-8287
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Kai Sasaki
> Attachments: HDFS-8287-HDFS-7285.00.patch
>
>
> When a striping cell is full, writeChunk computes and generates parity 
> packets.  It sequentially calls waitAndQueuePacket so that the user client 
> cannot continue to write data until it finishes.
> We should allow the user client to continue writing instead of blocking it 
> while writing parity.
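The shape of the proposed change can be sketched as handing the parity work to a background executor so the writer thread returns immediately. Everything below is illustrative (class and method names are invented); it shows the idea, not the actual DFSStripedOutputStream patch.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch only: offload parity encoding/queueing to a single background
// worker so the writer is not blocked. Names are hypothetical.
public class AsyncParitySketch {
  // A single worker keeps parity packets in submission order.
  final ExecutorService parityWorker = Executors.newSingleThreadExecutor();
  final BlockingQueue<byte[]> parityQueue = new LinkedBlockingQueue<>();

  /** Called when a striping cell is full; returns without waiting. */
  Future<?> onCellFull(byte[] cellData) {
    return parityWorker.submit(() -> {
      byte[] parity = computeParity(cellData);  // CPU-bound encode off the writer thread
      parityQueue.add(parity);                  // hand off to the parity streamer
    });
  }

  /** Toy XOR stand-in for the real erasure-coding encoder. */
  byte[] computeParity(byte[] cell) {
    byte[] p = new byte[cell.length];
    for (int i = 0; i < cell.length; i++) {
      p[i] = (byte) (p[i] ^ cell[i]);
    }
    return p;
  }

  /** Flush pending parity before closing the stream. */
  void close() throws InterruptedException {
    parityWorker.shutdown();
    parityWorker.awaitTermination(1, TimeUnit.MINUTES);
  }
}
```

The single-threaded executor preserves parity-packet ordering while decoupling the user write path, which is the thread-synchronization concern discussed later in this thread.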



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8838) Tolerate datanode failures in DFSStripedOutputStream when the data length is small

2015-08-09 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-8838:
--
Attachment: h8838_20150809.patch

h8838_20150809.patch: fixes getBlockGroup.

> Tolerate datanode failures in DFSStripedOutputStream when the data length is 
> small
> --
>
> Key: HDFS-8838
> URL: https://issues.apache.org/jira/browse/HDFS-8838
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: HDFS-8838-HDFS-7285-000.patch, h8838_20150729.patch, 
> h8838_20150731-HDFS-7285.patch, h8838_20150731.log, h8838_20150731.patch, 
> h8838_20150804-HDFS-7285.patch, h8838_20150809.patch
>
>
> Currently, DFSStripedOutputStream cannot tolerate datanode failures when the 
> data length is small.  We fix the bugs here and add more tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8838) Tolerate datanode failures in DFSStripedOutputStream when the data length is small

2015-08-09 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-8838:
--
Attachment: HDFS-8838-HDFS-7285-20150809.patch

> Tolerate datanode failures in DFSStripedOutputStream when the data length is 
> small
> --
>
> Key: HDFS-8838
> URL: https://issues.apache.org/jira/browse/HDFS-8838
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: HDFS-8838-HDFS-7285-000.patch, 
> HDFS-8838-HDFS-7285-20150809.patch, h8838_20150729.patch, 
> h8838_20150731-HDFS-7285.patch, h8838_20150731.log, h8838_20150731.patch, 
> h8838_20150804-HDFS-7285.patch, h8838_20150809.patch
>
>
> Currently, DFSStripedOutputStream cannot tolerate datanode failures when the 
> data length is small.  We fix the bugs here and add more tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8287) DFSStripedOutputStream.writeChunk should not wait for writing parity

2015-08-09 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14679443#comment-14679443
 ] 

Walter Su commented on HDFS-8287:
-

{code}
+currentCellBuffersIndex = currentCellBuffersIndex++ % 2;
{code}

> DFSStripedOutputStream.writeChunk should not wait for writing parity 
> -
>
> Key: HDFS-8287
> URL: https://issues.apache.org/jira/browse/HDFS-8287
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Kai Sasaki
> Attachments: HDFS-8287-HDFS-7285.00.patch
>
>
> When a striping cell is full, writeChunk computes and generates parity 
> packets.  It sequentially calls waitAndQueuePacket so that the user client 
> cannot continue to write data until it finishes.
> We should allow the user client to continue writing instead of blocking it 
> while writing parity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8838) Tolerate datanode failures in DFSStripedOutputStream when the data length is small

2015-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14679554#comment-14679554
 ] 

Hadoop QA commented on HDFS-8838:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  15m 26s | Findbugs (version ) appears to 
be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 4 new or modified test files. |
| {color:green}+1{color} | javac |   7m 31s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 41s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 13s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 39s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  2s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 22s | The patch appears to introduce 5 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 15s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 212m 30s | Tests failed in hadoop-hdfs. |
| | | 254m 54s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 |
|   | hadoop.hdfs.TestWriteStripedFileWithFailure |
| Timed out tests | org.apache.hadoop.cli.TestHDFSCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12749501/HDFS-8838-HDFS-7285-20150809.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / fbf7e81 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11949/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11949/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11949/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11949/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11949/console |


This message was automatically generated.

> Tolerate datanode failures in DFSStripedOutputStream when the data length is 
> small
> --
>
> Key: HDFS-8838
> URL: https://issues.apache.org/jira/browse/HDFS-8838
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: HDFS-8838-HDFS-7285-000.patch, 
> HDFS-8838-HDFS-7285-20150809.patch, h8838_20150729.patch, 
> h8838_20150731-HDFS-7285.patch, h8838_20150731.log, h8838_20150731.patch, 
> h8838_20150804-HDFS-7285.patch, h8838_20150809.patch
>
>
> Currently, DFSStripedOutputStream cannot tolerate datanode failures when the 
> data length is small.  We fix the bugs here and add more tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8287) DFSStripedOutputStream.writeChunk should not wait for writing parity

2015-08-09 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14679593#comment-14679593
 ] 

Kai Sasaki commented on HDFS-8287:
--

[~walter.k.su] Sorry, what did you mean by this?

{quote}
{code}
+currentCellBuffersIndex = currentCellBuffersIndex++ % 2;
{code}
{quote}

> DFSStripedOutputStream.writeChunk should not wait for writing parity 
> -
>
> Key: HDFS-8287
> URL: https://issues.apache.org/jira/browse/HDFS-8287
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Kai Sasaki
> Attachments: HDFS-8287-HDFS-7285.00.patch
>
>
> When a striping cell is full, writeChunk computes and generates parity 
> packets.  It sequentially calls waitAndQueuePacket so that the user client 
> cannot continue to write data until it finishes.
> We should allow the user client to continue writing instead of blocking it 
> while writing parity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8882) Use datablocks, parityblocks and cell size from ec zone

2015-08-09 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-8882:
---

 Summary: Use datablocks, parityblocks and cell size from ec zone
 Key: HDFS-8882
 URL: https://issues.apache.org/jira/browse/HDFS-8882
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Vinayakumar B
Assignee: Vinayakumar B


As part of earlier development, constants were used for datablocks, parity 
blocks and cellsize.

Now all of these are available in the EC zone. Use them from there and stop 
using constant values.
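The shape of the proposed change can be sketched as below. The ErasureCodingZone fields and accessor shown here are hypothetical, not the actual HDFS-7285 branch API; the point is replacing compile-time constants with the zone's per-file layout parameters.

```java
// Sketch only: read EC layout parameters from the zone instead of constants.
// The ErasureCodingZone shape shown here is hypothetical.
public class EcZoneSketch {
  /** Hypothetical view of an EC zone carrying its own layout parameters. */
  static class ErasureCodingZone {
    final int numDataBlocks, numParityBlocks, cellSize;
    ErasureCodingZone(int d, int p, int c) {
      numDataBlocks = d; numParityBlocks = p; cellSize = c;
    }
  }

  // Before: hard-coded constants baked in during early development.
  static final int NUM_DATA_BLOCKS = 6;
  static final int NUM_PARITY_BLOCKS = 3;

  static int blockGroupWidthFromConstants() {
    return NUM_DATA_BLOCKS + NUM_PARITY_BLOCKS;        // ignores the zone
  }

  // After: read the layout from the file's EC zone instead.
  static int blockGroupWidth(ErasureCodingZone zone) {
    return zone.numDataBlocks + zone.numParityBlocks;  // honors per-zone settings
  }

  public static void main(String[] args) {
    ErasureCodingZone zone = new ErasureCodingZone(10, 4, 256 * 1024);
    System.out.println(blockGroupWidthFromConstants()); // 9: wrong for this zone
    System.out.println(blockGroupWidth(zone));          // 14: correct
  }
}
```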



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8882) Use datablocks, parityblocks and cell size from ec zone

2015-08-09 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8882:

Attachment: HDFS-8882-HDFS-7285-01.patch

Attaching the patch. Please review.

> Use datablocks, parityblocks and cell size from ec zone
> ---
>
> Key: HDFS-8882
> URL: https://issues.apache.org/jira/browse/HDFS-8882
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8882-HDFS-7285-01.patch
>
>
> As part of earlier development, constants were used for datablocks, parity 
> blocks and cellsize.
> Now all of these are available in the EC zone. Use them from there and stop 
> using constant values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8882) Use datablocks, parityblocks and cell size from ec zone

2015-08-09 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8882:

Affects Version/s: HDFS-7285
   Status: Patch Available  (was: Open)

> Use datablocks, parityblocks and cell size from ec zone
> ---
>
> Key: HDFS-8882
> URL: https://issues.apache.org/jira/browse/HDFS-8882
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8882-HDFS-7285-01.patch
>
>
> As part of earlier development, constants were used for datablocks, parity 
> blocks and cellsize.
> Now all of these are available in the EC zone. Use them from there and stop 
> using constant values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8882) Use datablocks, parityblocks and cell size from ec zone

2015-08-09 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14679633#comment-14679633
 ] 

Kai Zheng commented on HDFS-8882:
-

Thanks [~vinayrpet] for the big effort! Just a quick question: does this need 
to sync with HDFS-8854, or has it already?

> Use datablocks, parityblocks and cell size from ec zone
> ---
>
> Key: HDFS-8882
> URL: https://issues.apache.org/jira/browse/HDFS-8882
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8882-HDFS-7285-01.patch
>
>
> As part of earlier development, constants were used for datablocks, parity 
> blocks and cellsize.
> Now all of these are available in the EC zone. Use them from there and stop 
> using constant values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8287) DFSStripedOutputStream.writeChunk should not wait for writing parity

2015-08-09 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14679636#comment-14679636
 ] 

Walter Su commented on HDFS-8287:
-

Your idea in the patch is good. You just need to rethink the result of this 
line of code, and the thread synchronization.
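For reference, the quoted line never toggles the index: in Java, `i = i++ % 2` evaluates `i++` (which yields the old value, then increments), but the final assignment then overwrites the increment with `oldValue % 2`, so starting from 0 the index stays 0 and the two cell buffers never alternate. A small demonstration, with a correct toggle for comparison:

```java
// Demonstrates why "i = i++ % 2" cannot alternate a double-buffer index:
// the postfix increment's effect is overwritten by the final assignment.
public class IndexToggleDemo {
  public static void main(String[] args) {
    int buggy = 0;
    for (int step = 0; step < 4; step++) {
      buggy = buggy++ % 2;      // RHS is oldValue % 2 == 0; increment is lost
    }
    System.out.println("buggy index after 4 steps: " + buggy);  // stays 0

    int fixed = 0;
    for (int step = 0; step < 4; step++) {
      fixed = (fixed + 1) % 2;  // correct toggle: 1, 0, 1, 0
    }
    System.out.println("fixed index after 4 steps: " + fixed);  // 0 after an even count
  }
}
```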

> DFSStripedOutputStream.writeChunk should not wait for writing parity 
> -
>
> Key: HDFS-8287
> URL: https://issues.apache.org/jira/browse/HDFS-8287
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Kai Sasaki
> Attachments: HDFS-8287-HDFS-7285.00.patch
>
>
> When a striping cell is full, writeChunk computes and generates parity 
> packets.  It sequentially calls waitAndQueuePacket so that the user client 
> cannot continue to write data until it finishes.
> We should allow the user client to continue writing instead of blocking it 
> while writing parity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8882) Use datablocks, parityblocks and cell size from ec zone

2015-08-09 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14679642#comment-14679642
 ] 

Vinayakumar B commented on HDFS-8882:
-

Thanks [~drankye] for pointing that out. I didn't sync the patch with that.
Will take a detailed look at it.

> Use datablocks, parityblocks and cell size from ec zone
> ---
>
> Key: HDFS-8882
> URL: https://issues.apache.org/jira/browse/HDFS-8882
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8882-HDFS-7285-01.patch
>
>
> As part of earlier development, constants were used for datablocks, parity 
> blocks and cellsize.
> Now all of these are available in the EC zone. Use them from there and stop 
> using constant values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8882) Use datablocks, parityblocks and cell size from ec zone

2015-08-09 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14679651#comment-14679651
 ] 

Walter Su commented on HDFS-8882:
-

You have finished HDFS-8558. After HDFS-8854, I'm thinking of passing ECPolicy 
instead of dataBlkNum+cellSize to the Dispatcher.
For the DataStreamer part, you can just replace ECSchema with ECPolicy cleanly 
after HDFS-8854.
I suggest we hold all patches related to ECSchema until HDFS-8854 is done. The 
patch for HDFS-8854 is big (it's a clean refactor) and I don't want to make it 
more complicated. Thanks.

> Use datablocks, parityblocks and cell size from ec zone
> ---
>
> Key: HDFS-8882
> URL: https://issues.apache.org/jira/browse/HDFS-8882
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8882-HDFS-7285-01.patch
>
>
> As part of earlier development, constants were used for datablocks, parity 
> blocks and cellsize.
> Now all of these are available in the EC zone. Use them from there and stop 
> using constant values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8882) Use datablocks, parityblocks and cell size from ec zone

2015-08-09 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14679657#comment-14679657
 ] 

Vinayakumar B commented on HDFS-8882:
-

Thanks [~walter.k.su] for the heads-up. I will hold this till the other one is in.

> Use datablocks, parityblocks and cell size from ec zone
> ---
>
> Key: HDFS-8882
> URL: https://issues.apache.org/jira/browse/HDFS-8882
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8882-HDFS-7285-01.patch
>
>
> As part of earlier development, constants were used for datablocks, parity 
> blocks and cellsize.
> Now all of these are available in the EC zone. Use them from there and stop 
> using constant values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)