[ 
https://issues.apache.org/jira/browse/HDFS-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131661#comment-16131661
 ] 

Yuanbo Liu commented on HDFS-12283:
-----------------------------------

[~anu]/[~cheersyang] Thanks a lot for your comments. 
Apart from the points you've already discussed, here are my replies to Anu's questions.
{quote}
DeletedBlockLogImpl.java#commitTransactions: This is a hypothetical question. 
During the commitTransaction call...
{quote}
If one of the txIDs is invalid, commitTransactions will fail and those txIDs 
will be added to the retry queue again. I used a batch operation here for 
efficiency, but your comment made me realize that we should commit txIDs one by 
one, so that a single invalid txID doesn't cause the other txIDs to be retried 
many times.
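Something like the following is what I have in mind. This is only a minimal sketch of the per-txID commit idea; the in-memory store and retry queue (and the class/method names) are made-up stand-ins, not the RocksDB-backed code in the patch.
{code:java}
import java.io.IOException;
import java.util.ArrayDeque;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: commit each txID individually so that one invalid
// txID cannot fail the whole batch and force unrelated txIDs to retry.
public class PerTxidCommitSketch {
  private final Map<Long, byte[]> store = new ConcurrentHashMap<>();
  private final Queue<Long> retryQueue = new ArrayDeque<>();

  public void commitTransactions(List<Long> txIDs) {
    for (Long txID : txIDs) {
      try {
        commitSingle(txID);
      } catch (IOException e) {
        // Only the failing txID goes back to the retry queue.
        retryQueue.add(txID);
      }
    }
  }

  private void commitSingle(Long txID) throws IOException {
    if (store.remove(txID) == null) {
      throw new IOException("Invalid or unknown txID " + txID);
    }
  }
}
{code}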
{quote}
addTransactions(Map<String, List<String>> blockMap)  in this call is there a 
size limit to the list of blocks in the argument. 
{quote}
I think the answer is yes, because this method is invoked when KSM sends the 
delete-keys command to SCM, and we will have that kind of limit on the KSM side. 
This will be addressed in HDFS-12235.
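Just to illustrate the kind of limit I mean, a defensive check on the SCM side could look roughly like the sketch below. The constant and class/method names are assumptions for illustration; the real limit is expected to come from the KSM side in HDFS-12235.
{code:java}
import java.io.IOException;
import java.util.List;
import java.util.Map;

// Illustrative only: reject an addTransactions call that carries too many blocks.
public class AddTransactionsSketch {
  // Assumed value; the real limit would be configured on the KSM side.
  private static final int MAX_BLOCKS_PER_CALL = 1000;

  public void addTransactions(Map<String, List<String>> containerBlocksMap)
      throws IOException {
    int totalBlocks = containerBlocksMap.values().stream()
        .mapToInt(List::size).sum();
    if (totalBlocks > MAX_BLOCKS_PER_CALL) {
      throw new IOException("Rejecting request with " + totalBlocks
          + " blocks; limit is " + MAX_BLOCKS_PER_CALL);
    }
    // ... persist one transaction per container entry here ...
  }
}
{code}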
The other comments make sense to me and I will address them in the v4 patch. 
Thanks again for your kind review.

> Ozone: DeleteKey-5: Implement SCM DeletedBlockLog
> -------------------------------------------------
>
>                 Key: HDFS-12283
>                 URL: https://issues.apache.org/jira/browse/HDFS-12283
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone, scm
>            Reporter: Weiwei Yang
>            Assignee: Yuanbo Liu
>         Attachments: HDFS-12283.001.patch, HDFS-12283-HDFS-7240.001.patch, 
> HDFS-12283-HDFS-7240.002.patch, HDFS-12283-HDFS-7240.003.patch
>
>
> The DeletedBlockLog is a persisted log in SCM that keeps track of container 
> blocks which are under deletion. It maintains info about under-deletion 
> container blocks notified by KSM, and the state of how they are processed. We 
> can use RocksDB to implement the 1st version of the log; the schema looks like
> ||TxID||ContainerName||Block List||ProcessedCount||
> |0|c1|b1,b2,b3|0|
> |1|c2|b1|3|
> |2|c2|b2, b3|-1|
> Some explanations:
> # TxID is an incremental long transaction ID covering ONE container and 
> multiple blocks.
> # ContainerName is the name of the container.
> # Block list is the list of block IDs under deletion in that container.
> # ProcessedCount is the number of times SCM has sent this record to a 
> datanode. It represents the "state" of the transaction and is in the range 
> \[-1, 5\]: -1 means the transaction eventually failed after some retries, and 
> 5 is the maximum number of retries.
> We need to define {{DeletedBlockLog}} as an interface and implement this with 
> RocksDB {{MetadataStore}} as the first version.
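For reference, a rough sketch of what a {{DeletedBlockLog}} interface matching the description above could look like is shown below; the method names and signatures are illustrative assumptions, not the committed API.
{code:java}
import java.io.Closeable;
import java.io.IOException;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the interface; the concrete implementation would
// be backed by the RocksDB MetadataStore.
public interface DeletedBlockLogSketch extends Closeable {
  /** Fetch up to {@code count} pending txIDs to dispatch to datanodes. */
  List<Long> getTransactions(int count) throws IOException;

  /** Add one transaction per container, mapping container name to block IDs. */
  void addTransactions(Map<String, List<String>> containerBlocksMap)
      throws IOException;

  /** Increment ProcessedCount for the given txIDs, capping retries at 5. */
  void incrementCount(List<Long> txIDs) throws IOException;

  /** Remove txIDs whose blocks datanodes have confirmed as deleted. */
  void commitTransactions(List<Long> txIDs) throws IOException;
}
{code}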


