[
https://issues.apache.org/jira/browse/HUDI-4756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757002#comment-17757002
]
Lin Liu commented on HUDI-4756:
---
Today I will fix all the test failures and send the patch for review
hudi-bot commented on PR #9422:
URL: https://github.com/apache/hudi/pull/9422#issuecomment-1686583717
## CI report:
* 42d026cd694d6368e45b058a4ff7a9bd36b0d3a2 UNKNOWN
* f7c426b7906a2d64aaabbe96c6d9a011ab9b441a Azure:
[FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2
hudi-bot commented on PR #9422:
URL: https://github.com/apache/hudi/pull/9422#issuecomment-1686569043
## CI report:
* 42d026cd694d6368e45b058a4ff7a9bd36b0d3a2 UNKNOWN
* f7c426b7906a2d64aaabbe96c6d9a011ab9b441a Azure:
[FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2
Prasagnya commented on issue #9493:
URL: https://github.com/apache/hudi/issues/9493#issuecomment-1686516725
Hey @ad1happy2go, is there any way _I can overwrite a specific partition_?
I came to this delete use case from my use case of overwriting one
partition on a given weekday. Other
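For reference, a minimal sketch (not from the thread) of what overwriting a single partition can look like with Hudi's Spark datasource writes. The table name, key field, partition field, and path below are hypothetical; the relevant piece is the `insert_overwrite` operation, which replaces only the partitions present in the incoming DataFrame and leaves other partitions untouched.

```python
# Hedged sketch: write options for overwriting one partition in a Hudi table.
# All field/table names are illustrative placeholders, not from the thread.
hudi_overwrite_options = {
    "hoodie.table.name": "my_table",                          # hypothetical
    "hoodie.datasource.write.operation": "insert_overwrite",  # replaces incoming partitions only
    "hoodie.datasource.write.recordkey.field": "uuid",        # hypothetical
    "hoodie.datasource.write.partitionpath.field": "partition_path",
}

# Assuming `df` holds only the rows for the target partition:
# df.write.format("hudi").options(**hudi_overwrite_options) \
#     .mode("append").save("/tmp/my_table")  # path is illustrative
```

Verify the exact option names against the Hudi version in use; they can vary across releases.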
jonvex commented on code in PR #9422:
URL: https://github.com/apache/hudi/pull/9422#discussion_r1300217569
##
hudi-spark-datasource/hudi-spark/src/test/java/org/apache/hudi/functional/TestMORColstats.java:
##
@@ -237,24 +232,27 @@ private void
testBaseFileAndLogFileUpdateMatche
jonvex commented on code in PR #9422:
URL: https://github.com/apache/hudi/pull/9422#discussion_r1300213317
##
hudi-spark-datasource/hudi-spark/src/test/java/org/apache/hudi/functional/TestMORColstats.java:
##
@@ -202,10 +246,23 @@ private void
testBaseFileAndLogFileUpdateMatche
hudi-bot commented on PR #9472:
URL: https://github.com/apache/hudi/pull/9472#issuecomment-1686459322
## CI report:
* 1e493605d0a26b442efbf1518b063dbb1e616872 Azure:
[PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=1939
codope closed issue #9348: [SUPPORT] hide soft-deleted rows
URL: https://github.com/apache/hudi/issues/9348
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-ma
ad1happy2go commented on issue #9351:
URL: https://github.com/apache/hudi/issues/9351#issuecomment-1686456616
@zbbkeepgoing Ideally, Delta and Hudi should be scanning a similar
number of files if both are skipping files based on column stats. Can you confirm
whether Hudi is reading all the files
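To make the comparison above concrete, here is a simplified, illustrative model (an assumption, not Hudi's actual implementation) of how column-stats-based data skipping decides which files to scan: each file carries min/max statistics for a column, and a file can be skipped for an equality predicate when the value falls outside its range.

```python
# Simplified sketch of column-stats data skipping (illustrative only).
def files_to_scan(file_stats, value):
    """file_stats: {file_name: (col_min, col_max)} -> files that may match col == value."""
    return [f for f, (lo, hi) in file_stats.items() if lo <= value <= hi]

stats = {
    "f1.parquet": (0, 10),
    "f2.parquet": (11, 20),
    "f3.parquet": (21, 30),
}
print(files_to_scan(stats, 15))  # only f2.parquet needs scanning
```

Two engines with the same per-file stats should prune to a similar file set; a large gap suggests one of them is not applying the stats.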
ad1happy2go commented on issue #9348:
URL: https://github.com/apache/hudi/issues/9348#issuecomment-1686457776
@ys8 Closing out this issue. Please let us know or reopen in case of any
concerns.
jonvex commented on code in PR #9422:
URL: https://github.com/apache/hudi/pull/9422#discussion_r1300201256
##
hudi-spark-datasource/hudi-spark/src/test/java/org/apache/hudi/functional/TestMORColstats.java:
##
@@ -0,0 +1,481 @@
+/*
+ * Licensed to the Apache Software Foundation (
hudi-bot commented on PR #9472:
URL: https://github.com/apache/hudi/pull/9472#issuecomment-1686443773
## CI report:
* c0019c0fc1d1803b9e0ccfbd1c9de953d6aba4f1 Azure:
[FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=1937
ad1happy2go commented on issue #9354:
URL: https://github.com/apache/hudi/issues/9354#issuecomment-1686428832
@andreacfm Sorry for the delay in response here. You should use JDK 1.8 to
compile the code; it looks like you are using a later version of Java.
ad1happy2go commented on issue #9493:
URL: https://github.com/apache/hudi/issues/9493#issuecomment-1686418063
@Prasagnya What are the cleaner configurations you are using? If the default
settings are in use, data cleaning will occur once there are 10 or more
subsequent commits.
It's po
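The defaults the comment above refers to can be sketched as the following cleaner-related write options (shown as a plain options map; verify the exact keys and defaults against the docs for your Hudi version):

```python
# Sketch of Hudi cleaner write options; values shown reflect the defaults
# mentioned above, but confirm them for your Hudi release.
cleaner_options = {
    "hoodie.clean.automatic": "true",             # run cleaning as part of writes
    "hoodie.cleaner.policy": "KEEP_LATEST_COMMITS",
    "hoodie.cleaner.commits.retained": "10",      # clean once 10+ commits accumulate
}
```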
Prasagnya opened a new issue, #9493:
URL: https://github.com/apache/hudi/issues/9493
I am trying to delete partitions by issuing a save command on an empty Spark
Data Frame. I expect Hudi to update the metadata as well as delete the actual
Parquet files in the destination root folder (bas
nsivabalan commented on code in PR #4913:
URL: https://github.com/apache/hudi/pull/4913#discussion_r1300154539
##
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/io/HoodieWriteHandle.java:
##
@@ -273,4 +280,31 @@ protected static Option
toAvroRecord(HoodieRecord re
codope closed issue #9391: [ENHANCEMENT] Kafka Key as part of hudi metadata
columns
URL: https://github.com/apache/hudi/issues/9391
ad1happy2go commented on issue #9391:
URL: https://github.com/apache/hudi/issues/9391#issuecomment-1686390391
Closing out the issue as the PR is merged. Thanks a lot, @prathit06.
ad1happy2go commented on issue #9418:
URL: https://github.com/apache/hudi/issues/9418#issuecomment-1686383133
@JoshuaZhuCN When I checked, it looks like it is querying the view only. Below
is the screenshot -
https://github.com/apache/hudi/assets/63430370/108ddd80-ab8c-4201-a64d-e2b2ae80
majian1998 commented on PR #9472:
URL: https://github.com/apache/hudi/pull/9472#issuecomment-1686382580
@hudi-bot run azure
[
https://issues.apache.org/jira/browse/HUDI-4631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sagar Sumit updated HUDI-4631:
--
Status: In Progress (was: Open)
> Enhance retries for failed writes w/ write conflicts in a multi write
[
https://issues.apache.org/jira/browse/HUDI-4631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sagar Sumit updated HUDI-4631:
--
Status: Patch Available (was: In Progress)
> Enhance retries for failed writes w/ write conflicts in a
[
https://issues.apache.org/jira/browse/HUDI-2772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sagar Sumit closed HUDI-2772.
-
Resolution: Fixed
> Deltastreamer fails to read checkpoint from previous commit metadata by spark
> write
[
https://issues.apache.org/jira/browse/HUDI-2772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17756919#comment-17756919
]
Sagar Sumit commented on HUDI-2772:
---
Fixed by HUDI-2793 and HUDI-2947
> Deltastreamer f
[
https://issues.apache.org/jira/browse/HUDI-2793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sagar Sumit closed HUDI-2793.
-
Resolution: Fixed
> Fix copying of deltastreamer checkpoint
> ---
>
>
hudi-bot commented on PR #9472:
URL: https://github.com/apache/hudi/pull/9472#issuecomment-1686331544
## CI report:
* c0019c0fc1d1803b9e0ccfbd1c9de953d6aba4f1 Azure:
[FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=1937
beyond1920 commented on code in PR #9485:
URL: https://github.com/apache/hudi/pull/9485#discussion_r1300097311
##
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/spark/sql/hudi/HoodieSqlCommonUtils.scala:
##
@@ -223,6 +223,17 @@ object HoodieSqlCommonUtils exte
beyond1920 commented on code in PR #9485:
URL: https://github.com/apache/hudi/pull/9485#discussion_r1300104947
##
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/DefaultSource.scala:
##
@@ -102,7 +102,7 @@ class DefaultSource extends RelationProvider
hudi-bot commented on PR #9472:
URL: https://github.com/apache/hudi/pull/9472#issuecomment-1686257671
## CI report:
* c0019c0fc1d1803b9e0ccfbd1c9de953d6aba4f1 Azure:
[FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=1937
hudi-bot commented on PR #9491:
URL: https://github.com/apache/hudi/pull/9491#issuecomment-1686244964
## CI report:
* f7ed2de10ffe2bf3aa02cd0d83834c28363ab3d8 Azure:
[FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=1938
hudi-bot commented on PR #9472:
URL: https://github.com/apache/hudi/pull/9472#issuecomment-1686244791
## CI report:
* c0019c0fc1d1803b9e0ccfbd1c9de953d6aba4f1 Azure:
[FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=1937
majian1998 commented on PR #9472:
URL: https://github.com/apache/hudi/pull/9472#issuecomment-1686215731
@hudi-bot run azure
ad1happy2go commented on issue #9469:
URL: https://github.com/apache/hudi/issues/9469#issuecomment-1686194536
@praneethh I tried similar steps using spark-sql, and it gave me only two
records. Below is the code I used -
```
CREATE TABLE issue_9469_23 USING HUDI
PARTITIONED BY(l
This is an automated email from the ASF dual-hosted git repository.
danny0405 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hudi.git
from 5a7b5f28d99 [MINOR] Close record readers after use during tests (#9457)
add ad1f7173631 [HUDI-6156] Prevent leav
danny0405 merged PR #9483:
URL: https://github.com/apache/hudi/pull/9483
danny0405 commented on PR #9483:
URL: https://github.com/apache/hudi/pull/9483#issuecomment-1686168968
Tests have passed:
https://dev.azure.com/apache-hudi-ci-org/apache-hudi-ci/_build/results?buildId=19369&view=results
pratyakshsharma opened a new pull request, #9492:
URL: https://github.com/apache/hudi/pull/9492
### Change Logs
_Describe context and summary for this change. Highlight if any code was
copied._
### Impact
This PR adds RFC for HoodieReverseStreamer. This is the first cut
hudi-bot commented on PR #9491:
URL: https://github.com/apache/hudi/pull/9491#issuecomment-1686004018
## CI report:
* 4596c13f890886806d26a7ca9d39bafe44226c86 Azure:
[CANCELED](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=193
hudi-bot commented on PR #9491:
URL: https://github.com/apache/hudi/pull/9491#issuecomment-1685990212
## CI report:
* 4596c13f890886806d26a7ca9d39bafe44226c86 Azure:
[PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=1938
xushiyan commented on code in PR #8929:
URL: https://github.com/apache/hudi/pull/8929#discussion_r1299845069
##
hudi-aws/src/main/java/org/apache/hudi/aws/sync/AWSGlueCatalogSyncClient.java:
##
@@ -91,16 +92,23 @@ public class AWSGlueCatalogSyncClient extends
HoodieSyncClient {
lokeshj1703 commented on PR #8958:
URL: https://github.com/apache/hudi/pull/8958#issuecomment-1685949683
@yihua I tried Spark 3.4 without the option `--conf
'spark.kryo.registrator=org.apache.spark.HoodieSparkKryoRegistrar'`. I was able
to get some basic Hudi operations working locally. Do we know
PankajKaushal commented on issue #9478:
URL: https://github.com/apache/hudi/issues/9478#issuecomment-1685940286
@ad1happy2go Please find the timeline attached for both the table and the
metadata table.
Yes, you are right: for the metadata table there is no commit with
`replacecommit` as the action. We wan
LittleWat commented on issue #7689:
URL: https://github.com/apache/hudi/issues/7689#issuecomment-1685922869
I'm also facing this error. The pod is istio-injected. Is this related...?
hudi-bot commented on PR #9491:
URL: https://github.com/apache/hudi/pull/9491#issuecomment-1685908568
## CI report:
* 4596c13f890886806d26a7ca9d39bafe44226c86 Azure:
[PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=1938
ad1happy2go commented on issue #9478:
URL: https://github.com/apache/hudi/issues/9478#issuecomment-1685907856
@PankajKaushal The configuration looks okay.
The metadata table doesn't look to be archiving, as no compaction is
happening for it. Can you check and confirm
hudi-bot commented on PR #9491:
URL: https://github.com/apache/hudi/pull/9491#issuecomment-1685896287
## CI report:
* 4596c13f890886806d26a7ca9d39bafe44226c86 UNKNOWN
Bot commands
@hudi-bot supports the following commands:
- `@hudi-bot run azure` re-run the
hudi-bot commented on PR #9467:
URL: https://github.com/apache/hudi/pull/9467#issuecomment-1685883384
## CI report:
* d20d5b2e45e0eccf8f3ec40077696eecf9dfc4bb Azure:
[FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=1936
[
https://issues.apache.org/jira/browse/HUDI-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
ASF GitHub Bot updated HUDI-6732:
-
Labels: pull-request-available (was: )
> Handle wildcards for partition paths passed in via spark
voonhous opened a new pull request, #9491:
URL: https://github.com/apache/hudi/pull/9491
…tion DDL
### Change Logs
The drop partition DDL is not handling wildcards properly, specifically for
partitions with wildcards submitted via the Spark-SQL entry point.
```
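As an illustration of the problem space (this is a standalone sketch, not the PR's implementation), resolving a wildcard partition spec against concrete partition paths can be modeled with shell-style globbing:

```python
# Illustrative sketch: expanding a wildcard partition spec to concrete
# partition paths, as a drop-partition entry point might do.
from fnmatch import fnmatch

def resolve_partitions(all_partitions, pattern):
    """Return the concrete partition paths matching a wildcard spec."""
    return [p for p in all_partitions if fnmatch(p, pattern)]

partitions = ["dt=2023-08-01", "dt=2023-08-02", "dt=2023-09-01"]
print(resolve_partitions(partitions, "dt=2023-08-*"))
# -> ['dt=2023-08-01', 'dt=2023-08-02']
```

The DDL layer then needs to pass the expanded concrete paths, not the raw pattern, to the delete path; mishandling that expansion is the kind of bug the PR describes.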
hudi-bot commented on PR #9467:
URL: https://github.com/apache/hudi/pull/9467#issuecomment-1685814810
## CI report:
* d20d5b2e45e0eccf8f3ec40077696eecf9dfc4bb Azure:
[FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=1936
hudi-bot commented on PR #9472:
URL: https://github.com/apache/hudi/pull/9472#issuecomment-1685802938
## CI report:
* c0019c0fc1d1803b9e0ccfbd1c9de953d6aba4f1 Azure:
[FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=1937
xushiyan commented on code in PR #9221:
URL: https://github.com/apache/hudi/pull/9221#discussion_r1299711286
##
hudi-sync/hudi-hive-sync/src/main/java/org/apache/hudi/hive/HiveSyncConfig.java:
##
@@ -98,8 +98,9 @@ public HiveSyncConfig(Properties props) {
public HiveSyncCon
JingFengWang commented on issue #9424:
URL: https://github.com/apache/hudi/issues/9424#issuecomment-1685782248
> @JingFengWang, you could use the `TIMESTAMP_LTZ` type to solve the above
problem. I have tested using the `TIMESTAMP_LTZ` type and it worked well.
Meanwhile, I think we could support th
ad1happy2go commented on issue #9481:
URL: https://github.com/apache/hudi/issues/9481#issuecomment-1685773972
@cbomgit I tried creating a table on an existing Hudi table using Hudi 0.12.3
and it worked fine for me; there is no need to provide any table properties.
Can you try with Hudi 0.12.3 or 0.13.
[
https://issues.apache.org/jira/browse/HUDI-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
voon updated HUDI-6732:
---
Attachment: image-2023-08-21-14-59-27-095.png
Description:
The drop partition DDL is not handling wildcards prope