Hello!
Can you please share some code/thoughts on how to publish data from a
dataframe to RabbitMQ?
Thanks.
Regards,
Florin
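There is no built-in RabbitMQ sink in Spark, so one common pattern is to open one connection per partition and publish each row from the executors. The sketch below assumes the `pika` RabbitMQ client and a running broker; the host, queue name, and function names are placeholders, not from this thread.

```python
import json

def row_to_message(row_dict):
    """Serialize one DataFrame row (as a dict) to a JSON byte payload."""
    return json.dumps(row_dict, sort_keys=True).encode("utf-8")

def publish_partition(rows, host="localhost", queue="spark_events"):
    """Open one broker connection per partition and publish each row."""
    import pika  # imported lazily so the serializer works without a broker
    conn = pika.BlockingConnection(pika.ConnectionParameters(host=host))
    channel = conn.channel()
    channel.queue_declare(queue=queue, durable=True)
    for row in rows:
        channel.basic_publish(exchange="", routing_key=queue,
                              body=row_to_message(row.asDict()))
    conn.close()

# Driver side (sketch):
#   df.rdd.foreachPartition(publish_partition)
```

Opening the connection inside the partition function matters: connection objects are not serializable, so they cannot be created on the driver and shipped to executors.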
Good to hear, that was what I thought.
Hard to validate without the actual configuration
(did not have time to set up Ambari).
On Fri, Jun 21, 2019, 15:44 Nirmal Kumar wrote:
Hey Raymond,
The root cause of the problem was that the Hive database location was
'file:/home/hive/spark-warehouse/testdb.db/employee_orc'.
I checked that using desc extended testdb.employee.
It might have been some config issue in the cluster at that time that made
the location point to the local filesystem.
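For reference, the location check described above looks like this; the table name matches the one in the message, and the expected scheme is an assumption about a typical cluster setup:

```sql
-- Sketch: verify where the metastore thinks the table lives.
DESC EXTENDED testdb.employee;
-- Inspect the "Location" row of the output. On a correctly configured
-- cluster it would typically start with hdfs:// (or s3a://), not file:/.
```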
Hi Jürgen,
Did you ever find a way to resolve this issue?
Looking at the implementation of the application master, it seems that there
is no heartbeat/keepalive mechanism for the communication between the driver
and the AM, so when something closes the connection for inactivity, the AM
shuts down.
Hi,
Thanks for the confirmation. We are using the workaround of creating a
separate Hive external table STORED AS PARQUET with the exact location of
the Delta table. Our use case is batch-driven and we are running VACUUM with
0 retention after every batch is completed. Do you see any potential problem
with this approach?
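A minimal sketch of the workaround described above; the database name, table name, and path are placeholders, not from the thread. Note that VACUUM with zero retention requires disabling Delta's retention-duration safety check:

```sql
-- Hive external table pointing at the Delta table's data directory.
CREATE EXTERNAL TABLE mydb.events_parquet
STORED AS PARQUET
LOCATION '/data/delta/events';

-- After each batch, remove files no longer referenced by the Delta log.
-- Requires: SET spark.databricks.delta.retentionDurationCheck.enabled = false;
VACUUM delta.`/data/delta/events` RETAIN 0 HOURS;
```

One caveat worth noting with this pattern: a plain Parquet reader simply lists every file under the location, so between a failed or concurrent write and the next VACUUM it can see files the Delta log has not committed or has already logically removed. Running VACUUM with 0 retention after each successful batch narrows that window but does not close it.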
@ayan guha @Gourav Sengupta
Delta Lake OSS currently does not support defining tables in the Hive
metastore using DDL commands. We are hoping to add the necessary
compatibility fixes in Apache Spark to make Delta Lake work with tables and
DDL commands, so we will support them in a future release.
Hi Ayan,
I may be wrong about this, but I think that Delta files are in Parquet
format. But I am sure that you have already checked this. Am I missing
something?
Regards,
Gourav Sengupta
On Fri, Jun 21, 2019 at 6:39 AM ayan guha wrote:
> Hi
> We used spark.sql to create a table using DELTA.