[
https://issues.apache.org/jira/browse/HUDI-5232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
JinxinTang reassigned HUDI-5232:
Assignee: JinxinTang
> Add flushTasks config in StreamWriteFunction in hudi-flink
>
JinxinTang created HUDI-5232:
Summary: Add flushTasks config in StreamWriteFunction in hudi-flink
Key: HUDI-5232
URL: https://issues.apache.org/jira/browse/HUDI-5232
Project: Apache Hudi
Issue
[
https://issues.apache.org/jira/browse/HUDI-5230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
JinxinTang updated HUDI-5230:
-
Issue Type: Improvement (was: Bug)
> Lazy init secondaryView in PriorityBasedFileSystemView
>
[
https://issues.apache.org/jira/browse/HUDI-5230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
JinxinTang reassigned HUDI-5230:
Assignee: JinxinTang
> Lazy init secondaryView in PriorityBasedFileSystemView
>
JinxinTang created HUDI-5230:
Summary: Lazy init secondaryView in PriorityBasedFileSystemView
Key: HUDI-5230
URL: https://issues.apache.org/jira/browse/HUDI-5230
Project: Apache Hudi
Issue Type: Bug
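The HUDI-5230 title only names the change, but the pattern is standard: defer building the secondary view until first access instead of eagerly in the constructor. A minimal sketch of such lazy initialization, using illustrative names rather than Hudi's actual PriorityBasedFileSystemView API:

```java
import java.util.function.Supplier;

// Illustrative sketch of lazy initialization: the expensive secondary
// view is only built on first access, not when the owner is constructed.
// Class and method names are hypothetical, not Hudi's.
class LazyView {
    private final Supplier<String> secondaryViewFactory;
    private volatile String secondaryView; // null until first use

    LazyView(Supplier<String> secondaryViewFactory) {
        this.secondaryViewFactory = secondaryViewFactory;
    }

    // Double-checked locking so concurrent callers build the view once.
    String getSecondaryView() {
        String view = secondaryView;
        if (view == null) {
            synchronized (this) {
                if (secondaryView == null) {
                    secondaryView = secondaryViewFactory.get();
                }
                view = secondaryView;
            }
        }
        return view;
    }

    boolean isInitialized() {
        return secondaryView != null;
    }
}
```

The `volatile` field plus the second null check inside the lock is what makes the pattern safe under concurrent first access.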
[
https://issues.apache.org/jira/browse/HUDI-5128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
JinxinTang reassigned HUDI-5128:
Assignee: JinxinTang
> Fix the way getFileSystem is obtained in FileSystemBackedTableMetadata,
>
[
https://issues.apache.org/jira/browse/HUDI-5128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
JinxinTang updated HUDI-5128:
-
Summary: Fix the way getFileSystem is obtained in FileSystemBackedTableMetadata,
DatePartitionPathSelector and
JinxinTang created HUDI-5128:
Summary: Unify the way getFileSystem is obtained in FileSystemBackedTableMetadata,
DatePartitionPathSelector and BootstrapUtils
Key: HUDI-5128
URL: https://issues.apache.org/jira/browse/HUDI-5128
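The HUDI-5128 title suggests the shape of the fix: several classes each obtained a FileSystem their own way, and the cure is one shared entry point. Below is a hedged, Hadoop-free sketch of that "single helper, cached per scheme" idea; `FsRegistry` and its factory are hypothetical names, not Hudi's FSUtils API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative sketch: rather than each component constructing its own
// file-system handle (possibly from different configs), every call site
// goes through one helper that normalizes the path and caches by scheme.
class FsRegistry {
    private final Map<String, Object> cache = new ConcurrentHashMap<>();
    private final Function<String, Object> factory;

    FsRegistry(Function<String, Object> factory) {
        this.factory = factory;
    }

    // Extract the URI scheme; paths without one default to "file".
    static String schemeOf(String path) {
        int i = path.indexOf("://");
        return i < 0 ? "file" : path.substring(0, i);
    }

    // One entry point for every component; same scheme -> same handle.
    Object getFs(String path) {
        return cache.computeIfAbsent(schemeOf(path), factory);
    }
}
```

With all call sites routed through `getFs`, a config change in the factory is picked up everywhere at once, which is exactly the consistency the issue asks for.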
JinxinTang created HUDI-5107:
Summary: Fix inconsistent hadoop config among DirectWriteMarkers,
HoodieFlinkEngineContext and StreamerUtil
Key: HUDI-5107
URL: https://issues.apache.org/jira/browse/HUDI-5107
[
https://issues.apache.org/jira/browse/HUDI-5107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
JinxinTang reassigned HUDI-5107:
Assignee: JinxinTang
> Fix inconsistent hadoop config among DirectWriteMarkers, HoodieFlinkEngineContext and
>
JinxinTang created HUDI-5086:
Summary: The doc of
org.apache.hudi.sink.meta.CkpMetadata#bootstrap is not correct
Key: HUDI-5086
URL: https://issues.apache.org/jira/browse/HUDI-5086
Project: Apache Hudi
[
https://issues.apache.org/jira/browse/HUDI-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
JinxinTang reassigned HUDI-5086:
Assignee: JinxinTang
> The doc of org.apache.hudi.sink.meta.CkpMetadata#bootstrap is not correct
>
[
https://issues.apache.org/jira/browse/HUDI-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
JinxinTang updated HUDI-5005:
-
Description:
# When a stream write reuses an aborted instant, there is a chance this one is older
than the instant
[
https://issues.apache.org/jira/browse/HUDI-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
JinxinTang reassigned HUDI-5005:
Assignee: JinxinTang
> Flink stream write reusing an aborted instant will lead the coordinator to delete files
JinxinTang created HUDI-5005:
Summary: Flink stream write reusing an aborted instant will lead the
coordinator to delete the wrong files.
Key: HUDI-5005
URL: https://issues.apache.org/jira/browse/HUDI-5005
Project:
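HUDI-5005's description hints at the invariant behind the fix: a reused aborted instant must not be older than what the timeline has already completed, or the coordinator's cleanup removes the wrong files. A speculative sketch of such a guard; the names and the lexicographic instant comparison (valid for Hudi's sortable timestamp-style instant strings) are assumptions, not Hudi's actual code:

```java
// Illustrative guard: only reuse a previously aborted instant when it is
// strictly newer than the latest completed instant; otherwise request a
// fresh one, so cleanup keyed on the instant never touches files that
// belong to an already-committed write.
class InstantGuard {
    static String instantToUse(String abortedInstant, String latestCompleted,
                               String freshInstant) {
        if (abortedInstant != null && abortedInstant.compareTo(latestCompleted) > 0) {
            return abortedInstant; // safe to reuse: strictly newer
        }
        return freshInstant; // too old (or absent): start a new instant
    }
}
```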
[
https://issues.apache.org/jira/browse/HUDI-4950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
JinxinTang reassigned HUDI-4950:
Assignee: JinxinTang
> Using HoodieHiveCatalog to infer by reading log files will exit due to OOM
>
JinxinTang created HUDI-4950:
Summary: Using HoodieHiveCatalog to infer by reading log files will exit
due to OOM
Key: HUDI-4950
URL: https://issues.apache.org/jira/browse/HUDI-4950
Project: Apache Hudi
JinxinTang created HUDI-4877:
Summary:
org.apache.hudi.index.bucket.TestHoodieSimpleBucketIndex#testTagLocation does not
work correctly
Key: HUDI-4877
URL: https://issues.apache.org/jira/browse/HUDI-4877
JinxinTang created HUDI-4813:
Summary: Keygen inference does not work on the Spark SQL side
Key: HUDI-4813
URL: https://issues.apache.org/jira/browse/HUDI-4813
Project: Apache Hudi
Issue Type: Bug
JinxinTang created HUDI-4808:
Summary: HoodieSimpleBucketIndex should also consider the bucket number
in log files, not only base files, written by a Flink MOR table
Key: HUDI-4808
URL:
[
https://issues.apache.org/jira/browse/HUDI-4777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
JinxinTang updated HUDI-4777:
-
Summary: Flink-generated bucket index of a MOR table is not consistent with Spark,
leading to duplicate buckets
JinxinTang created HUDI-4777:
Summary: Flink-generated bucket index of a MOR table is not consistent with
Spark, leading to duplicate buckets
Key: HUDI-4777
URL: https://issues.apache.org/jira/browse/HUDI-4777
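HUDI-4777 is a cross-engine parity bug: if Flink and Spark compute bucket ids differently for the same record key, each engine writes to its "own" bucket and the key is duplicated. One common remedy is a single shared hash function, sketched here with an illustrative class that is not Hudi's actual BucketIdentifier:

```java
// Illustrative sketch of a shared bucket-id function. Any engine that
// calls this same function for the same record key and bucket count gets
// the same bucket, which is the parity property the issue is about.
class BucketId {
    static int bucketFor(String recordKey, int numBuckets) {
        // Mask the sign bit so negative hash codes still map into range.
        return (recordKey.hashCode() & Integer.MAX_VALUE) % numBuckets;
    }
}
```

The design point is less the specific hash than that both writers link against one implementation instead of two look-alike ones.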
[
https://issues.apache.org/jira/browse/HUDI-4767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
JinxinTang updated HUDI-4767:
-
Summary: non-partitioned table in hudi-flink module should also respect
KEYGEN_CLASS_NAME in conf (was:
JinxinTang created HUDI-4767:
Summary: non-partitioned table in hudi-flink module should also respect
KEYGEN_CLASS_NAME in conf
Key: HUDI-4767
URL: https://issues.apache.org/jira/browse/HUDI-4767
Project: Apache
JinxinTang created HUDI-4628:
Summary: hudi-flink support GLOBAL_BLOOM,GLOBAL_SIMPLE,BUCKET
index type
Key: HUDI-4628
URL: https://issues.apache.org/jira/browse/HUDI-4628
Project: Apache Hudi
[
https://issues.apache.org/jira/browse/HUDI-4461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
JinxinTang updated HUDI-4461:
-
Description:
org.apache.hudi.exception.HoodieException: Error while checking whether table
exists under
[
https://issues.apache.org/jira/browse/HUDI-4461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
JinxinTang updated HUDI-4461:
-
Description: (was: org.apache.hudi.exception.HoodieException: Error
while checking whether table
[
https://issues.apache.org/jira/browse/HUDI-4461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
JinxinTang updated HUDI-4461:
-
Description:
org.apache.hudi.exception.HoodieException: Error while checking whether table
exists under
JinxinTang created HUDI-4461:
Summary: org.apache.hudi.sink.TestWriteCopyOnWrite will fail
when a local hadoop env exists
Key: HUDI-4461
URL: https://issues.apache.org/jira/browse/HUDI-4461
Project:
JinxinTang created HUDI-4460:
Summary: org.apache.hudi.sink.TestWriteCopyOnWrite will fail
when a local hadoop env exists
Key: HUDI-4460
URL: https://issues.apache.org/jira/browse/HUDI-4460
Project:
[
https://issues.apache.org/jira/browse/HUDI-4422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
JinxinTang updated HUDI-4422:
-
Description:
Caused by: java.lang.RuntimeException:
[
https://issues.apache.org/jira/browse/HUDI-4422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17568442#comment-17568442
]
JinxinTang commented on HUDI-4422:
--
Please assign to me, I can fix it.
> read parquet failed due to
[
https://issues.apache.org/jira/browse/HUDI-4422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
JinxinTang updated HUDI-4422:
-
Description:
Caused by: java.lang.RuntimeException:
[
https://issues.apache.org/jira/browse/HUDI-4422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
JinxinTang updated HUDI-4422:
-
Description:
Caused by: java.lang.RuntimeException:
JinxinTang created HUDI-4422:
Summary: reading parquet fails due to file length 0 or a corrupt parquet
file
Key: HUDI-4422
URL: https://issues.apache.org/jira/browse/HUDI-4422
Project: Apache Hudi
[
https://issues.apache.org/jira/browse/HUDI-4397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17568409#comment-17568409
]
JinxinTang commented on HUDI-4397:
--
great ~
> Flink Inline Cluster and Compact plan distribute strategy