[GitHub] [hudi] yihua commented on issue #7577: [SUPPORT]

2023-01-05 Thread GitBox
yihua commented on issue #7577: URL: https://github.com/apache/hudi/issues/7577#issuecomment-1371893173 Also, if sync table services are triggered by the streaming job, the [concurrency control configs](https://hudi.apache.org/docs/metadata#deployment-model-b-single-writer-with-async-table- …
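The comment above is truncated by the archive, but the concurrency control it references is Hudi's optimistic concurrency control (OCC), which requires a lock provider when table services run alongside the writer. A minimal hedged sketch of such write options — the config keys are standard Hudi names, while the ZooKeeper endpoint, lock key, and base path below are hypothetical placeholders:

```python
# Hedged sketch: Hudi write options enabling optimistic concurrency control
# for a single writer with async table services. Key names are standard Hudi
# configs; the ZooKeeper host, lock key, and base path are placeholders.
hudi_lock_options = {
    "hoodie.write.concurrency.mode": "optimistic_concurrency_control",
    "hoodie.write.lock.provider":
        "org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider",
    "hoodie.write.lock.zookeeper.url": "zk-host",        # placeholder endpoint
    "hoodie.write.lock.zookeeper.port": "2181",          # placeholder port
    "hoodie.write.lock.zookeeper.lock_key": "my_table",  # placeholder lock key
    "hoodie.write.lock.zookeeper.base_path": "/hudi/locks",  # placeholder path
    "hoodie.cleaner.policy.failed.writes": "LAZY",
}

def validate_lock_config(opts):
    """Return the required lock keys missing when OCC is enabled."""
    required = ["hoodie.write.lock.provider"]
    if opts.get("hoodie.write.concurrency.mode") == "optimistic_concurrency_control":
        return [k for k in required if k not in opts]
    return []

print(validate_lock_config(hudi_lock_options))  # -> []
```

These options would be passed to the writer (e.g. via `.options(**hudi_lock_options)` on a Spark DataFrame writer); the exact deployment model depends on whether table services run inline or async, per the linked docs.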

[GitHub] [hudi] yihua commented on issue #7577: [SUPPORT]

2023-01-04 Thread GitBox
yihua commented on issue #7577: URL: https://github.com/apache/hudi/issues/7577#issuecomment-1371886902 The issue description mentions that the table type is MOR (`What table type cow or mor - MOR`), but the write config shows it's using COW (`hoodie.datasource.write.table.type=COPY_ON_WRITE`).
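The mismatch above is easy to sanity-check before filing or debugging an issue. A hedged sketch of such a check — `hoodie.datasource.write.table.type` is the standard Hudi datasource option, and `COPY_ON_WRITE` / `MERGE_ON_READ` are its two valid values; the helper itself is illustrative, not part of Hudi:

```python
# Hedged sketch: verify that the table type declared in the write configs
# matches the type you believe the table has (MOR vs COW). The config key
# and values are standard Hudi; the helper function is hypothetical.
TABLE_TYPE_KEY = "hoodie.datasource.write.table.type"
VALID_TYPES = {"COPY_ON_WRITE", "MERGE_ON_READ"}

def check_table_type(write_configs, expected):
    """Return True if the configured table type matches the expected one."""
    actual = write_configs.get(TABLE_TYPE_KEY, "COPY_ON_WRITE")  # Hudi defaults to COW
    if expected not in VALID_TYPES or actual not in VALID_TYPES:
        raise ValueError(f"table type must be one of {VALID_TYPES}")
    return actual == expected

# The issue reported MOR while the write config declared COW:
print(check_table_type({TABLE_TYPE_KEY: "COPY_ON_WRITE"}, "MERGE_ON_READ"))  # -> False
```

Note that once a table is created, its type is fixed in `hoodie.properties`; writing with a conflicting `table.type` option is a common source of confusing behavior.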

[GitHub] [hudi] yihua commented on issue #7577: [SUPPORT]

2023-01-03 Thread GitBox
yihua commented on issue #7577: URL: https://github.com/apache/hudi/issues/7577#issuecomment-1370366726 Hi @Shagish, thanks for raising this issue. Could you share the write configs of your Hudi Spark job? One possibility is that the metadata table might be out of sync with the data table …
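When the metadata table is suspected to be out of sync, a common mitigation (per the Hudi docs) is to disable it on the writer, which cleans it up, and then re-enable it so it is rebuilt. A hedged sketch of toggling the relevant option — `hoodie.metadata.enable` is the standard Hudi switch, while the toggle helper is hypothetical:

```python
# Hedged sketch: toggle the Hudi metadata table on the writer.
# "hoodie.metadata.enable" is the standard Hudi config; disabling it on the
# writer cleans up the metadata table, and re-enabling rebuilds it.
def metadata_toggle(options, enable):
    """Return a copy of the write options with the metadata table toggled."""
    updated = dict(options)
    updated["hoodie.metadata.enable"] = "true" if enable else "false"
    return updated

opts = {"hoodie.datasource.write.table.type": "MERGE_ON_READ"}
print(metadata_toggle(opts, False)["hoodie.metadata.enable"])  # -> false
```

The rebuild happens on the next commit after re-enabling, so this is a write-path fix; readers only need the flag if they should query through the metadata table.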