[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16442938#comment-16442938 ]

Sankar Hariappan commented on HIVE-19219:
-----------------------------------------

Attached 04.patch after rebasing on master.

Waiting for the ptest run.

> Incremental REPL DUMP should throw error if requested events are cleaned-up.
> ----------------------------------------------------------------------------
>
>                 Key: HIVE-19219
>                 URL: https://issues.apache.org/jira/browse/HIVE-19219
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2, repl
>    Affects Versions: 3.0.0
>            Reporter: Sankar Hariappan
>            Assignee: Sankar Hariappan
>            Priority: Major
>              Labels: DR, pull-request-available, replication
>             Fix For: 3.1.0
>
>         Attachments: HIVE-19219.01.patch, HIVE-19219.02.patch, 
> HIVE-19219.03.patch, HIVE-19219.04.patch
>
>
> This is the case where events were deleted on the source by old-event 
> purging, so min(source event id) > target event id (the last replicated 
> event id).
> REPL DUMP should fail in this case so that the user can drop the target 
> database and bootstrap it again.
> The cleaner thread concurrently removes expired events from the 
> NOTIFICATION_LOG table, so the dump must check whether it missed any 
> events. After fetching events in batches, we should verify that the 
> fetched event ids form a contiguous sequence; if they do not, some events 
> were likely missed in the dump, and an error should be thrown (see the 
> sketch below).
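
A minimal sketch of the two validations described above. This is not the actual 04.patch; the class and method names (ReplDumpValidator, checkEventsNotPurged, checkContiguous) and the use of IllegalStateException are illustrative assumptions only.

{code:java}
import java.util.List;

// Hypothetical sketch, not the HIVE-19219 patch: names and exception types
// are illustrative only.
public class ReplDumpValidator {

    /**
     * Fail the incremental dump when the requested starting event has already
     * been purged from NOTIFICATION_LOG, i.e. min(source event id) is greater
     * than the last replicated event id requested by the target.
     */
    public static void checkEventsNotPurged(long requestedFromEventId, long minSourceEventId) {
        if (minSourceEventId > requestedFromEventId) {
            throw new IllegalStateException(
                "Events from id " + requestedFromEventId
                + " are already purged (oldest available id: " + minSourceEventId
                + "). Drop the target database and bootstrap again.");
        }
    }

    /**
     * After fetching a batch of events, verify the ids form a contiguous
     * sequence starting at expectedNextId; a gap means the cleaner thread
     * purged events concurrently while the dump was running.
     */
    public static void checkContiguous(List<Long> batchEventIds, long expectedNextId) {
        long expected = expectedNextId;
        for (long id : batchEventIds) {
            if (id != expected) {
                throw new IllegalStateException(
                    "Event id gap detected: expected " + expected + " but got " + id
                    + ". Some events were likely cleaned up during the dump.");
            }
            expected++;
        }
    }
}
{code}

In Hive itself the failure would presumably surface as a replication-specific error rather than a plain runtime exception, but that detail is outside this sketch.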



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
