So you want to push data from Spark Streaming to both Hive and SAP HANA
tables.

Let us take them one at a time.

So Spark is writing to a Hive table, but you need to delete some rows from
Hive beforehand?

Have you defined your ORC table as transactional, or are you just marking
rows as deleted with two additional columns, op_type and op_time, keeping
the data itself immutable?

For example, set op_type = 3 and op_time = cast(from_unixtime(unix_timestamp())
AS timestamp) for deleted records, and when you read the table you simply
filter out rows flagged with that op_type.
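
In Spark this soft-delete approach is just DataFrame appends. A minimal
sketch in Scala follows; the table name aggregates and the key/total
columns are invented for illustration, and only op_type and op_time come
from the scheme above:

// A minimal sketch of the soft-delete approach. The table name
// "aggregates" and the key/total columns are hypothetical; only
// op_type and op_time come from the scheme described above.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window

val spark = SparkSession.builder()
  .appName("HiveSoftDelete")
  .enableHiveSupport()
  .getOrCreate()
import spark.implicits._

// Immutable ORC table holding every version of every aggregate row.
spark.sql(
  """CREATE TABLE IF NOT EXISTS aggregates
    |(key STRING, total BIGINT, op_type INT, op_time TIMESTAMP)
    |STORED AS ORC""".stripMargin)

// Aggregates produced by the current streaming interval (stand-in data).
val newAggregates = Seq(("k1", 42L)).toDF("key", "total")

// Instead of deleting superseded rows, append tombstones with op_type = 3.
val tombstones = spark.table("aggregates")
  .join(newAggregates.select("key"), Seq("key"))
  .select("key", "total")
  .withColumn("op_type", lit(3))
  .withColumn("op_time", current_timestamp())
tombstones.write.insertInto("aggregates")

// Append the fresh aggregates as ordinary rows (op_type = 1, say).
newAggregates
  .withColumn("op_type", lit(1))
  .withColumn("op_time", current_timestamp())
  .write.insertInto("aggregates")

// Readers keep the latest record per key and drop the deleted ones.
val w = Window.partitionBy("key").orderBy(col("op_time").desc)
val current = spark.table("aggregates")
  .withColumn("rn", row_number().over(w))
  .where(col("rn") === 1 && col("op_type") =!= 3)
  .drop("rn", "op_type", "op_time")
current.show()

Because every interval only appends tombstones and new rows, the ORC files
stay immutable; if the table grows too large you can periodically compact
it by rewriting only the latest row per key.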

HTH

Dr Mich Talebzadeh



LinkedIn:
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 25 August 2016 at 00:08, Oldskoola <sascha.schm...@outlook.com> wrote:

> Hi,
>
> I'm building aggregates over streaming data. When new data affects
> previously processed aggregates, I'll need to update the affected rows or
> delete them before writing the new processed aggregates back to HDFS (Hive
> Metastore) and a SAP HANA table. How would you do this, when writing a
> complete dataframe every interval is not an option?
>
> Somewhat related is the question of custom JDBC SQL for writing to the SAP
> HANA DB. How would you implement SAP HANA specific commands if the built-in
> JDBC df writer is not sufficient for your needs? In this case I primarily
> want to do the incremental updates described before, and maybe also
> want to send specific CREATE TABLE syntax for columnar store and time
> table.
>
> Thank you very much in advance. I'm a little stuck on this one.
>
> Regards
> Sascha
>
>
>
> --
> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Incremental-Updates-and-custom-SQL-via-JDBC-tp27598.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>
