[ 
https://issues.apache.org/jira/browse/PHOENIX-2154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703608#comment-14703608
 ] 

James Taylor commented on PHOENIX-2154:
---------------------------------------

Thanks for the patch, [~maghamravikiran]. It's my understanding (from 
[~lhofhansl]), that if you use this command:
{code}
            TableMapReduceUtil.initTableReducerJob(logicalIndexTable, null, 
job);
{code}
that the same context.write(outputKey, kv) calls we already do will work, but the 
MR framework will issue the required batched mutations for the KeyValues we 
write, instead of us making direct HBase calls. Is that not the case?

If that works, then the code changes should be much smaller. I'm not sure what 
controls the amount of batching the HBase calls will do.
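To illustrate, here is a minimal sketch of the reducer-less setup being proposed. This is an assumption-laden illustration, not the actual patch: the class name IndexBuildJobSketch and the job wiring are hypothetical, and it assumes the mapper emits Mutation objects (e.g. Put), since TableOutputFormat writes Mutation values.
{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

// Hypothetical sketch only; names are illustrative, not from the patch.
public class IndexBuildJobSketch {
    public static Job configureJob(Configuration conf, String logicalIndexTable)
            throws IOException {
        Job job = Job.getInstance(conf, "phoenix-index-build");
        job.setJarByClass(IndexBuildJobSketch.class);
        // Passing null for the reducer class: TableMapReduceUtil wires up
        // TableOutputFormat against the index table, so the framework
        // batches the mutations the mapper writes via
        // context.write(outputKey, mutation).
        TableMapReduceUtil.initTableReducerJob(logicalIndexTable, null, job);
        // Map-only: mapper output goes straight to TableOutputFormat.
        job.setNumReduceTasks(0);
        return job;
    }
}
{code}
This cannot run outside a Hadoop/HBase deployment; it only shows where the batching responsibility would move.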

> Failure of one mapper should not affect other mappers in MR index build
> -----------------------------------------------------------------------
>
>                 Key: PHOENIX-2154
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2154
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: James Taylor
>            Assignee: maghamravikiran
>         Attachments: IndexTool.java, PHOENIX-2154-WIP.patch
>
>
> Once a mapper in the MR index job succeeds, it should not need to be re-done 
> in the event of the failure of one of the other mappers. The initial 
> population of an index is based on a snapshot in time, so new rows written 
> *after* the index build has started and/or failed do not impact it.
> Also, there's a 1:1 correspondence between index rows and table rows, so 
> there's really no need to dedup. However, the index rows will have a 
> different row key than the data table, so I'm not sure how the HFiles are 
> split. Will they potentially overlap and is this an issue?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
