[ https://issues.apache.org/jira/browse/HIVE-10165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14596613#comment-14596613 ]
Hive QA commented on HIVE-10165:
--------------------------------

{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12741056/HIVE-10165.9.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 9104 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_join28
{noformat}

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4337/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4337/console
Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-4337/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12741056 - PreCommit-HIVE-TRUNK-Build

> Improve hive-hcatalog-streaming extensibility and support updates and deletes.
> ------------------------------------------------------------------------------
>
>                 Key: HIVE-10165
>                 URL: https://issues.apache.org/jira/browse/HIVE-10165
>             Project: Hive
>          Issue Type: Improvement
>          Components: HCatalog
>    Affects Versions: 1.2.0
>            Reporter: Elliot West
>            Assignee: Elliot West
>              Labels: streaming_api
>         Attachments: HIVE-10165.0.patch, HIVE-10165.4.patch, HIVE-10165.5.patch, HIVE-10165.6.patch, HIVE-10165.7.patch, HIVE-10165.9.patch, mutate-system-overview.png
>
>
> h3. Overview
> I'd like to extend the [hive-hcatalog-streaming|https://cwiki.apache.org/confluence/display/Hive/Streaming+Data+Ingest] API so that it also supports the writing of record updates and deletes in addition to the already supported inserts.
> h3. Motivation
> We have many Hadoop processes outside of Hive that merge changed facts into existing datasets. Traditionally we achieve this by reading in a ground-truth dataset and a modified dataset, grouping by a key, sorting by a sequence, and then applying a function to determine the inserted, updated, and deleted rows (a sketch of this classification step appears below). However, in our current scheme we must rewrite all partitions that may potentially contain changes. In practice the number of mutated records is very small compared with the number of records contained in a partition. This approach results in a number of operational issues:
> * An excessive amount of write activity is required for small data changes.
> * Downstream applications cannot robustly read these datasets while they are being updated.
> * Due to the scale of the updates (hundreds of partitions), the scope for contention is high.
> I believe we can address this problem by instead writing only the changed records to a Hive transactional table. This should drastically reduce the amount of data that we need to write and also provide a means for managing concurrent access to the data. Our existing merge processes can read and retain each record's {{ROW_ID}}/{{RecordIdentifier}} and pass it through to an updated form of the hive-hcatalog-streaming API, which will then have the required data to perform an update or delete in a transactional manner.
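> The classification step described above could look like the following minimal sketch. It assumes simple string keys and values and ignores the sequence-based ordering for brevity; all names here are illustrative and not part of any Hive API.
> {code:java}
> import java.util.HashMap;
> import java.util.Map;
>
> public class MergeClassifier {
>
>   enum MutationType { INSERT, UPDATE, DELETE }
>
>   // Compares a modified dataset against the ground truth, keyed by a natural
>   // key, and emits only the keys that actually changed.
>   static Map<String, MutationType> classify(Map<String, String> groundTruth,
>                                             Map<String, String> modified) {
>     Map<String, MutationType> mutations = new HashMap<>();
>     for (Map.Entry<String, String> entry : modified.entrySet()) {
>       String key = entry.getKey();
>       if (!groundTruth.containsKey(key)) {
>         mutations.put(key, MutationType.INSERT);   // new key: insert
>       } else if (!groundTruth.get(key).equals(entry.getValue())) {
>         mutations.put(key, MutationType.UPDATE);   // value changed: update
>       }                                            // identical rows are skipped
>     }
>     for (String key : groundTruth.keySet()) {
>       if (!modified.containsKey(key)) {
>         mutations.put(key, MutationType.DELETE);   // key vanished: delete
>       }
>     }
>     return mutations;
>   }
>
>   public static void main(String[] args) {
>     Map<String, String> truth = Map.of("1", "a", "2", "b", "3", "c");
>     Map<String, String> delta = Map.of("1", "a", "2", "B", "4", "d");
>     // Prints {2=UPDATE, 3=DELETE, 4=INSERT} (iteration order may vary).
>     System.out.println(classify(truth, delta));
>   }
> }
> {code}
> Because only the changed keys are emitted, the mutation set stays tiny relative to the partitions it touches, which is what makes the transactional-table approach attractive.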
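> To illustrate how a retained {{RecordIdentifier}} would flow into the extended API, here is a hypothetical sketch. None of these types exist in hive-hcatalog-streaming today; the interface only shows the intended shape of the extension, with a stand-in for Hive's {{ROW_ID}}.
> {code:java}
> public class MutationSketch {
>
>   // Stand-in for Hive's ROW_ID / RecordIdentifier: (transaction, bucket, row).
>   static final class RecordId {
>     final long transactionId;
>     final int bucketId;
>     final long rowId;
>     RecordId(long transactionId, int bucketId, long rowId) {
>       this.transactionId = transactionId;
>       this.bucketId = bucketId;
>       this.rowId = rowId;
>     }
>   }
>
>   // Hypothetical extension of the streaming writer: updates and deletes
>   // address an existing row via the RecordId retained when it was read;
>   // inserts need no id because the row does not exist yet.
>   interface MutationWriter {
>     void insert(byte[] record);
>     void update(RecordId id, byte[] record);
>     void delete(RecordId id);
>   }
>
>   // Replays one classified mutation against the writer.
>   static void apply(MutationWriter writer, RecordId id, byte[] record, String type) {
>     switch (type) {
>       case "INSERT": writer.insert(record);     break;
>       case "UPDATE": writer.update(id, record); break;
>       case "DELETE": writer.delete(id);         break;
>       default: throw new IllegalArgumentException("Unknown mutation type: " + type);
>     }
>   }
> }
> {code}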
> h3. Benefits
> * Enables the creation of large-scale dataset merge processes.
> * Opens up Hive transactional functionality in an accessible manner to processes that operate outside of Hive.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)