[ https://issues.apache.org/jira/browse/SPARK-15348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16337760#comment-16337760 ]

Arvind Jajoo commented on SPARK-15348:
--------------------------------------

I think that in order to have an end-to-end streaming ETL implementation within 
Spark, this feature needs to be supported in Spark SQL, especially now that 
Structured Streaming is available.

That is, a MERGE INTO statement should be runnable directly from Spark SQL for 
batch or micro-batch incremental updates.

Currently this has to be done outside of Spark using Hive, which breaks the 
end-to-end streaming ETL semantics.
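
For concreteness, here is a rough sketch (not working code today) of what such 
an end-to-end flow could look like if Spark SQL accepted MERGE INTO against a 
Hive ACID table. All table, path and column names below are made up for 
illustration; the MERGE statement inside foreachBatch is exactly the piece that 
is currently missing:

import org.apache.spark.sql.{DataFrame, SparkSession}

val spark = SparkSession.builder()
  .appName("streaming-merge-sketch")
  .enableHiveSupport()
  .getOrCreate()

// Hypothetical stream of change records landing as JSON files.
val updates = spark.readStream
  .schema("order_id BIGINT, amount DOUBLE, updated_at TIMESTAMP")
  .json("/data/incoming/orders_changes")

// Upsert one micro-batch into a Hive ACID table. The MERGE statement below is
// the capability this ticket asks for; Spark SQL does not currently parse or
// execute it against Hive transactional tables.
def upsertBatch(batch: DataFrame, batchId: Long): Unit = {
  batch.createOrReplaceTempView("updates_batch")
  batch.sparkSession.sql(
    """MERGE INTO warehouse.orders t
      |USING updates_batch s
      |ON t.order_id = s.order_id
      |WHEN MATCHED THEN UPDATE SET amount = s.amount, updated_at = s.updated_at
      |WHEN NOT MATCHED THEN INSERT VALUES (s.order_id, s.amount, s.updated_at)
      |""".stripMargin)
}

val query = updates.writeStream
  .foreachBatch(upsertBatch _)
  .option("checkpointLocation", "/data/checkpoints/orders_merge")
  .start()

query.awaitTermination()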

> Hive ACID
> ---------
>
>                 Key: SPARK-15348
>                 URL: https://issues.apache.org/jira/browse/SPARK-15348
>             Project: Spark
>          Issue Type: New Feature
>          Components: SQL
>    Affects Versions: 1.6.3, 2.0.2, 2.1.2, 2.2.0
>            Reporter: Ran Haim
>            Priority: Major
>
> Spark does not support any features of Hive's transactional (ACID) tables:
> you cannot use Spark to delete or update a table, and Spark also has problems 
> reading the aggregated data when no compaction has been done.
> It also seems that compaction is not supported - alter table ... partition 
> .... COMPACT 'major'
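
Spelled out, the compaction command the report above refers to has roughly the 
following shape (database, table and partition names are made up for 
illustration); per the report, Spark does not accept it, so today it has to be 
submitted through Hive itself:

// Hypothetical example of triggering a major compaction on one partition of a
// Hive ACID table. Per the report above, this statement is not supported when
// issued through Spark and must instead be run in Hive (e.g. via beeline).
spark.sql("ALTER TABLE warehouse.orders PARTITION (dt = '2018-01-24') COMPACT 'major'")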



