[GitHub] [hudi] yihua commented on issue #7589: [Support] Keep only clustered file(all) after cleaning

2023-05-22 Thread via GitHub
yihua commented on issue #7589: URL: https://github.com/apache/hudi/issues/7589#issuecomment-1557557503 @maheshguptags Cool, thanks. Just to clarify, for a Hudi table on storage, you can always create a savepoint using the base path, regardless of whether the table is registered in the tem

[GitHub] [hudi] yihua commented on issue #7589: [Support] Keep only clustered file(all) after cleaning

2023-05-19 Thread via GitHub
yihua commented on issue #7589: URL: https://github.com/apache/hudi/issues/7589#issuecomment-1555373180 Hi @maheshguptags I think your CALL statement is missing a right bracket. This should be the right command: ``` spark.sql("""call create_savepoint(path => table_path, commit_time => '2
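For reference, a minimal sketch of what the complete, properly closed call might look like; the base path and commit timestamp below are illustrative placeholders, not values taken from this thread:

```scala
// Hypothetical values for illustration only; substitute your own table
// base path and an actual instant time from the Hudi timeline.
val table_path = "s3://my-bucket/warehouse/my_hudi_table"
val commit_time = "20230518121530000"

// The procedure call and the spark.sql(...) invocation are both closed
// with their right brackets, which was the fix suggested above.
spark.sql(
  s"""call create_savepoint(path => '$table_path', commit_time => '$commit_time')"""
)
```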

[GitHub] [hudi] yihua commented on issue #7589: [Support] Keep only clustered file(all) after cleaning

2023-03-22 Thread via GitHub
yihua commented on issue #7589: URL: https://github.com/apache/hudi/issues/7589#issuecomment-1480379081 Hi @maheshguptags, sorry for the late reply. I put up a PR to support the savepoint call procedure with the base path of the Hudi table: #8271. I tested the changes locally and they work. Cou