Sankar Hariappan created HIVE-19248:
---------------------------------------

             Summary: Hive replication causes file copy failures if HDFS block size differs across clusters
                 Key: HIVE-19248
                 URL: https://issues.apache.org/jira/browse/HIVE-19248
             Project: Hive
          Issue Type: Bug
          Components: HiveServer2, repl
    Affects Versions: 3.0.0
            Reporter: Sankar Hariappan
            Assignee: Sankar Hariappan
             Fix For: 3.1.0


This is the case where events were deleted on the source because of old event 
purging, and hence min(source event id) > target event id (the last replicated 
event id).

Repl dump should fail in this case so that the user can drop the database and 
bootstrap it again.
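A minimal sketch of that guard, using illustrative names (checkEventsAvailable, minSourceEventId, lastReplicatedEventId) rather than Hive's actual classes, and mirroring the condition stated above:

{code:java}
/**
 * Illustrative guard only; not Hive's actual API.
 * Mirrors the condition described in this ticket: if the oldest event still
 * present on the source is newer than the last event replicated to the
 * target, the intervening events have been purged and the dump must fail.
 */
public class IncrementalDumpGuard {

  public static void checkEventsAvailable(long minSourceEventId, long lastReplicatedEventId) {
    if (minSourceEventId > lastReplicatedEventId) {
      // Events needed for the incremental dump were already purged on the source,
      // so abort and ask the user to bootstrap the target database again.
      throw new IllegalStateException("Notification events required for incremental dump "
          + "have been purged on the source; drop the target database and bootstrap again.");
    }
  }
}
{code}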

The cleaner thread concurrently removes expired events from the 
NOTIFICATION_LOG table, so it is necessary to check whether the current dump 
missed any events. After fetching events in batches, we should check that they 
form a contiguous sequence of event ids. If the sequence is not contiguous, 
some events were likely missed in the dump, and an error should be thrown (see 
the sketch below).
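A minimal sketch of such a contiguity check; the NotificationEventStub type and assertContiguous helper are assumptions for illustration, not Hive's actual event classes:

{code:java}
import java.util.List;

public class EventBatchValidator {

  /** Simplified stand-in for a notification event, carrying only its id. */
  public static class NotificationEventStub {
    final long eventId;
    public NotificationEventStub(long eventId) { this.eventId = eventId; }
  }

  /**
   * Verifies that the fetched batch is a contiguous run of event ids starting
   * at expectedFirstId. A gap means the cleaner thread purged events while the
   * dump was in progress, so the dump should abort instead of writing an
   * incomplete event stream.
   */
  public static void assertContiguous(List<NotificationEventStub> batch, long expectedFirstId) {
    long expected = expectedFirstId;
    for (NotificationEventStub event : batch) {
      if (event.eventId != expected) {
        throw new IllegalStateException("Event id gap detected: expected " + expected
            + " but got " + event.eventId + "; some events were purged during the dump.");
      }
      expected++;
    }
  }
}
{code}

Running this check per fetched batch keeps the dump from silently producing a partial event stream when the cleaner thread races with it.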



