[ https://issues.apache.org/jira/browse/HIVE-17367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16142152#comment-16142152 ]

Hive QA commented on HIVE-17367:
--------------------------------



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12883786/HIVE-17367.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 11005 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=231)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_vectorized_dynamic_partition_pruning] (batchId=169)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] (batchId=100)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] (batchId=235)
org.apache.hadoop.hive.metastore.TestHiveMetaStoreWithEnvironmentContext.testEnvironmentContext (batchId=209)
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testHttpRetryOnServerIdleTimeout (batchId=228)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/6543/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/6543/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-6543/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12883786 - PreCommit-HIVE-Build

> IMPORT table doesn't load from data dump if a metadata-only dump was already 
> imported.
> --------------------------------------------------------------------------------------
>
>                 Key: HIVE-17367
>                 URL: https://issues.apache.org/jira/browse/HIVE-17367
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2, Import/Export, repl
>    Affects Versions: 3.0.0
>            Reporter: Sankar Hariappan
>            Assignee: Sankar Hariappan
>              Labels: DR, replication
>             Fix For: 3.0.0
>
>         Attachments: HIVE-17367.01.patch, HIVE-17367.02.patch
>
>
> Repl v1 creates a set of EXPORT/IMPORT commands to replicate modified data 
> (as per events) across clusters.
> For instance, let's say an INSERT generates 2 events:
> ALTER_TABLE (ID: 10)
> INSERT (ID: 11)
> Each event generates a set of EXPORT and IMPORT commands.
> The ALTER_TABLE event generates a metadata-only export/import.
> The INSERT event generates a metadata+data export/import.
> As Hive always dumps the latest copy of the table during export, it sets the 
> latest notification event ID as the table's current state. So, in this 
> example, the metadata import for the ALTER_TABLE event sets the current state 
> of the table to 11.
> Now, when we try to import the data dumped by the INSERT event, the import is 
> a no-op because the table's current state (11) equals the dump state (11), so 
> the data never gets replicated to the target cluster.
> So, it is necessary to allow overwriting a table/partition when its current 
> state equals the dump state.
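
To make the failure mode concrete, here is a minimal, self-contained sketch of the state comparison described above. The names (shouldSkipOld, shouldSkipNew, the long-valued state IDs) are placeholders chosen for illustration and do not reflect Hive's actual ReplicationSpec API; only the comparison logic mirrors the behaviour described in the issue.

{code:java}
// Hypothetical illustration of the replication-state check described in HIVE-17367.
// All identifiers are placeholders, not Hive's real import/replication classes.
public class ReplStateCheckSketch {

    // Old behaviour (as described): skip the import whenever the table's current
    // state is >= the dump's state, so a data dump carrying the same event ID as a
    // previously imported metadata-only dump becomes a no-op and the data is lost.
    static boolean shouldSkipOld(long currentStateId, long dumpStateId) {
        return currentStateId >= dumpStateId;
    }

    // Proposed behaviour: allow overwrite when the states are equal, so the data
    // from the INSERT event's dump is still loaded; only strictly older dumps skip.
    static boolean shouldSkipNew(long currentStateId, long dumpStateId) {
        return currentStateId > dumpStateId;
    }

    public static void main(String[] args) {
        long tableState = 11;  // set by the metadata-only ALTER_TABLE import
        long insertDump = 11;  // state carried by the INSERT event's data dump

        System.out.println("old: skip=" + shouldSkipOld(tableState, insertDump)); // true  -> data never replicated
        System.out.println("new: skip=" + shouldSkipNew(tableState, insertDump)); // false -> data loaded
    }
}
{code}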



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
