Unsubscribe
Thanks!
Guo
Wishing you smooth work and success in everything
OK, so as expected the underlying database is Hive, and Hive uses HDFS for storage. You said you encountered limitations on concurrent writes. Those ordering and concurrency limitations are introduced by the Hive metastore, so to speak. Since this is all happening through Spark, the default Hive metastore integration provides no transactional coordination between simultaneous writers to the same table.
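To make the hazard concrete, here is a minimal, Spark-free sketch (all names illustrative) of the classic read-modify-write race that an uncoordinated table write cannot protect against: two writers read the same snapshot, and the second commit silently overwrites the first.

```python
# Hypothetical in-memory "table"; stands in for table state on HDFS.
table = {"rows": ["r1"]}

# Writer A and writer B both read the table while it holds ["r1"].
a_view = list(table["rows"])
b_view = list(table["rows"])

table["rows"] = a_view + ["from_A"]   # A commits first
table["rows"] = b_view + ["from_B"]   # B blindly overwrites: A's row is lost

print(table["rows"])                  # ['r1', 'from_B']
```

Without a commit protocol that detects the conflict, the loser's data disappears rather than the write failing loudly.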
Hi Mich and Pol,
Thanks for the feedback. The database layer is Hadoop 3.3.5. The cluster
restarted, so I lost the stack trace in the application UI. In the snippets
I saved, it looks like the exception being thrown was from Hive. Given the
feedback you've provided, I suspect the issue is with how
Hi Patrick,
You can have multiple writers simultaneously writing to the same table in
HDFS by using an open table format with concurrency control. Several
formats, such as Apache Hudi, Apache Iceberg, Delta Lake, and Qbeast
Format, offer this capability. All of them provide advanced features
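At their core, these formats resolve concurrent writes with optimistic concurrency control: each commit names the snapshot version it was based on, and an atomic compare-and-swap rejects stale commits so the loser must rebase and retry instead of clobbering data. A hedged, self-contained sketch of that idea (the names here are illustrative, not any format's real API):

```python
class Table:
    """Toy stand-in for a table whose metadata tracks a snapshot version."""

    def __init__(self):
        self.version = 0
        self.rows = []

    def try_commit(self, based_on_version, new_rows):
        """Commit iff no other writer committed since we read our snapshot."""
        if based_on_version != self.version:
            return False                  # stale snapshot: caller must retry
        self.rows = self.rows + new_rows
        self.version += 1
        return True

def write(table, new_rows):
    """Optimistic write loop: re-read the version and retry until we win."""
    while True:
        base = table.version
        if table.try_commit(base, new_rows):
            return

t = Table()
base = t.version                           # two writers read version 0
assert t.try_commit(base, ["from_A"])      # A's commit succeeds
assert not t.try_commit(base, ["from_B"])  # B is rejected, nothing is lost
write(t, ["from_B"])                       # B rebases and retries successfully
print(t.rows, t.version)                   # ['from_A', 'from_B'] 2
```

The real formats do the compare-and-swap against a metadata file or catalog entry rather than an in-memory field, but the conflict-detection shape is the same.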