rishabhbandi closed issue #6055: Hudi Partial Update not working by using MERGE
statement on Hudi External Table
URL: https://github.com/apache/hudi/issues/6055
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL
rishabhbandi commented on issue #6055:
URL: https://github.com/apache/hudi/issues/6055#issuecomment-1303007672
Hi Team, we created a separate custom java class to perform the partial
update.
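For reference, a minimal sketch of the kind of partial-update MERGE the issue concerns, written as a Spark SQL string (the table and column names are hypothetical, not taken from the issue):

```python
# Hypothetical partial-update MERGE against a Hudi table; in practice this
# string would be run via spark.sql(merge_sql). Only `price` and `ts` are
# set, so every other target column must be preserved by the record payload,
# which is the behavior the issue reports as not working out of the box.
merge_sql = """
MERGE INTO hudi_target AS t
USING staged_updates AS s
ON t.id = s.id
WHEN MATCHED THEN UPDATE SET t.price = s.price, t.ts = s.ts
WHEN NOT MATCHED THEN INSERT *
"""
```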
hudi-bot commented on PR #6983:
URL: https://github.com/apache/hudi/pull/6983#issuecomment-1302963014
## CI report:
* f5a8b04cb184f9c9f00961884c479856594f57f2 Azure:
hudi-bot commented on PR #6983:
URL: https://github.com/apache/hudi/pull/6983#issuecomment-1302960236
## CI report:
* f5a8b04cb184f9c9f00961884c479856594f57f2 Azure:
hudi-bot commented on PR #7063:
URL: https://github.com/apache/hudi/pull/7063#issuecomment-1302957486
## CI report:
* 77487796a68b54304f55efc71097ab8ca50b428b UNKNOWN
* 8240e1e8280cd8842d4ba11ef6f781feb3d8a9bd UNKNOWN
* 85b70221d74d0d04900acda25e1ea9b7c71bcb0a UNKNOWN
*
hudi-bot commented on PR #6725:
URL: https://github.com/apache/hudi/pull/6725#issuecomment-1302957195
## CI report:
* 81f856d99da09e5a9438fad2a0d111bc9062aba4 Azure:
hudi-bot commented on PR #5165:
URL: https://github.com/apache/hudi/pull/5165#issuecomment-1302956460
## CI report:
* d690f80ac9cc19c3c97ded93381824bfdb6d7798 Azure:
xiarixiaoyao commented on PR #5165:
URL: https://github.com/apache/hudi/pull/5165#issuecomment-1302942885
@hudi-bot run azure
danny0405 commented on issue #6019:
URL: https://github.com/apache/hudi/issues/6019#issuecomment-1302935085
Yeah, let's close it out. Use release 0.12.1, and if there are still
problems, feel free to re-open it again ~
danny0405 closed issue #6019: [SUPPORT] How can data be flushed to Hudi as quickly as possible?
URL: https://github.com/apache/hudi/issues/6019
danny0405 commented on issue #6052:
URL: https://github.com/apache/hudi/issues/6052#issuecomment-1302933977
Did you try release 0.12.1 then? It is expected to work correctly now.
danny0405 commented on issue #5979:
URL: https://github.com/apache/hudi/issues/5979#issuecomment-1302933052
Does table hudi C have changelog mode enabled, then?
danny0405 commented on issue #4978:
URL: https://github.com/apache/hudi/issues/4978#issuecomment-1302929509
No, we have not fixed it; neither Hive nor Trino can access a file group with
pure logs. Can we move it to higher priority for release 0.13.0 and solve it
then?
KevinyhZou created HUDI-5159:
Summary: Support write a success file to partition when it
finished in flink streaming append writer
Key: HUDI-5159
URL: https://issues.apache.org/jira/browse/HUDI-5159
TengHuo commented on issue #7106:
URL: https://github.com/apache/hudi/issues/7106#issuecomment-1302908283
Attach RFC-46 link here:
https://github.com/apache/hudi/blob/master/rfc/rfc-46/rfc-46.md
TengHuo commented on issue #7106:
URL: https://github.com/apache/hudi/issues/7106#issuecomment-1302906771
Sure, np. Thanks @nsivabalan
Let me start a dev email thread.
hudi-bot commented on PR #7129:
URL: https://github.com/apache/hudi/pull/7129#issuecomment-1302906313
## CI report:
* e86e785602cfed876c75273a4c8a669f0143b77c Azure:
nsivabalan closed issue #4864: Insert with INSERT_DROP_DUPS_OPT_KEY fails
URL: https://github.com/apache/hudi/issues/4864
nsivabalan commented on issue #4864:
URL: https://github.com/apache/hudi/issues/4864#issuecomment-1302904205
Since we haven't heard back from you for the past 6+ months, we are going ahead
and closing it out. Feel free to reach out to us if you need further assistance.
nsivabalan commented on issue #4864:
URL: https://github.com/apache/hudi/issues/4864#issuecomment-1302903706
Insert drop dups will consider file groups for matching partitions only. So,
if your incoming batch contains records for 1 partition, Hudi will do an index
look-up only in 1
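As a sketch of the configuration being discussed (the option key is Hudi's `hoodie.datasource.write.insert.drop.duplicates`, which backs `INSERT_DROP_DUPS_OPT_KEY`; the table name, record key, and partition field below are hypothetical):

```python
# Write options for an insert that drops records already present in the
# table. With these set, the pre-write index lookup only scans file groups
# under the partitions that appear in the incoming batch.
hudi_options = {
    "hoodie.table.name": "events",                          # hypothetical
    "hoodie.datasource.write.recordkey.field": "event_id",  # hypothetical
    "hoodie.datasource.write.partitionpath.field": "dt",    # hypothetical
    "hoodie.datasource.write.operation": "insert",
    "hoodie.datasource.write.insert.drop.duplicates": "true",
}
```

These would typically be passed via `df.write.format("hudi").options(**hudi_options)` in a PySpark job.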
hudi-bot commented on PR #7129:
URL: https://github.com/apache/hudi/pull/7129#issuecomment-1302903150
## CI report:
* 00ff1d41fae07715d44bc4a2551b76b1cb3eca1f Azure:
nsivabalan commented on issue #4978:
URL: https://github.com/apache/hudi/issues/4978#issuecomment-1302901252
@danny0405 @xiarixiaoyao : do we know if we have fixed this at any point?
nsivabalan commented on issue #5036:
URL: https://github.com/apache/hudi/issues/5036#issuecomment-1302900811
@pratyakshsharma @jasondavindev : sorry. If you can explain the issue, I can
try to see how I can help you here.
nsivabalan closed issue #5083: [SUPPORT] Doing clustering for bulked insert
table, could cause: Can't redefine: list
URL: https://github.com/apache/hudi/issues/5083
nsivabalan commented on issue #5083:
URL: https://github.com/apache/hudi/issues/5083#issuecomment-1302900460
thanks for the update @boneanxs .
hudi-bot commented on PR #7129:
URL: https://github.com/apache/hudi/pull/7129#issuecomment-1302900337
## CI report:
* 00ff1d41fae07715d44bc4a2551b76b1cb3eca1f Azure:
nsivabalan commented on issue #5211:
URL: https://github.com/apache/hudi/issues/5211#issuecomment-1302899692
@kartik18 : any updates on this, please?
nsivabalan commented on issue #5351:
URL: https://github.com/apache/hudi/issues/5351#issuecomment-1302899438
@p-powell : for immutable use-cases, we recommend setting some configs to
get better performance.
https://hudi.apache.org/docs/performance#bulk-insert
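A sketch of the kind of configs meant for immutable workloads, per the linked performance docs (assuming the standard datasource option keys; whether to disable sorting entirely depends on the workload):

```python
# Bulk-insert options for append-only / immutable data. bulk_insert skips
# upsert-style small-file handling and (optionally) sorting, trading some
# read layout quality for much faster writes.
bulk_insert_options = {
    "hoodie.datasource.write.operation": "bulk_insert",
    # NONE avoids the sort entirely; GLOBAL_SORT (the default) gives the
    # best file layout at higher write cost; PARTITION_SORT is in between.
    "hoodie.bulkinsert.sort.mode": "NONE",
}
```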
let us know if you
china-shang opened a new issue, #7133:
URL: https://github.com/apache/hudi/issues/7133
Why does lazyReading need to be turned on? It looks like it needs to seek
back and forth, and it needs to keep the file open all the time. If you don't
turn it on, can't you always read forward? Is it to save
nsivabalan commented on issue #5372:
URL: https://github.com/apache/hudi/issues/5372#issuecomment-1302896768
hey @meitianjinbu : are you still looking for any assistance on this?
btw, we added an FAQ on hbase conflicting w/ metadata table
nsivabalan commented on issue #5481:
URL: https://github.com/apache/hudi/issues/5481#issuecomment-1302889177
@MikeBuh : did you get a chance to try out the suggestions from Ethan above?
Let us know of any updates you have; we would love to learn how the tuning went.
nsivabalan commented on issue #5482:
URL: https://github.com/apache/hudi/issues/5482#issuecomment-1302888669
If you are having other problems, can you help clarify?
nsivabalan commented on issue #5482:
URL: https://github.com/apache/hudi/issues/5482#issuecomment-1302888141
The gist seems to be S3 connection timeouts from the connection pool.
Can you try bumping the number of connections?
```
--conf spark.hadoop.fs.s3a.connection.maximum=1000
```
nsivabalan commented on issue #5492:
URL: https://github.com/apache/hudi/issues/5492#issuecomment-1302887335
@ashah-lightbox : gentle ping. Any updates, please? If you got the issue
resolved, can we close it out?
nsivabalan commented on issue #5514:
URL: https://github.com/apache/hudi/issues/5514#issuecomment-1302886962
closing this since we already landed the fix. Feel free to open a new issue
if you are looking for further assistance.
nsivabalan closed issue #5514: [SUPPORT] Read optimized query on MOR table
lists files without any Spark action
URL: https://github.com/apache/hudi/issues/5514
nsivabalan commented on issue #5519:
URL: https://github.com/apache/hudi/issues/5519#issuecomment-1302886502
@xiarixiaoyao : can we follow up on this? Did we get to reproduce it and make
any fix in this regard?
@Zhangshunyu : if you don't use the bulk_insert row writer path, are things OK?
nsivabalan commented on issue #5537:
URL: https://github.com/apache/hudi/issues/5537#issuecomment-1302885962
@YannByron : it looks like the author has given a somewhat hacky solution. Is
there any enhancement we can add to Hudi based on that?
nsivabalan commented on issue #5537:
URL: https://github.com/apache/hudi/issues/5537#issuecomment-1302885402
@melin : gentle ping.
nsivabalan commented on issue #5539:
URL: https://github.com/apache/hudi/issues/5539#issuecomment-1302885245
@nleena123 : would you mind closing the issue if you are not looking for any
further assistance?
nsivabalan commented on issue #5673:
URL: https://github.com/apache/hudi/issues/5673#issuecomment-1302884770
@sunke38 : did you get a chance to give it a try? Are we good to close it
out, or is there anything you need more assistance with?
nsivabalan commented on issue #5777:
URL: https://github.com/apache/hudi/issues/5777#issuecomment-1302883337
@jiangjiguang : oops, sorry.
@jjtjiang : can you respond to my above comments when you can?
nsivabalan commented on issue #5826:
URL: https://github.com/apache/hudi/issues/5826#issuecomment-1302882737
@minihippo : can we close this if you can confirm that it is not an issue?
nsivabalan commented on issue #6055:
URL: https://github.com/apache/hudi/issues/6055#issuecomment-1302881510
hey @rishabhbandi @hassan-ammar : were you folks able to resolve the issue?
Did any fix go into Hudi in this regard?
Can you help me understand whether the issue still persists?
xiarixiaoyao closed pull request #7129: [MINOR] Support column type evolution
for Hive
URL: https://github.com/apache/hudi/pull/7129
nsivabalan commented on issue #6019:
URL: https://github.com/apache/hudi/issues/6019#issuecomment-1302878499
@yuzhaojing : can we follow up here, please? If it's already fixed in an
already released version of Hudi, can we close it out?
nsivabalan commented on issue #6014:
URL: https://github.com/apache/hudi/issues/6014#issuecomment-1302878056
@veenaypatil : I see you are using the non-partitioned key generator. So, the
index lookup is going to be relative to the total number of file groups you
have. Do you know what the total file
nsivabalan commented on issue #5984:
URL: https://github.com/apache/hudi/issues/5984#issuecomment-1302875447
@rubenssoto : hey, unless we get more info to reproduce this, it is going to
be tough for us to investigate further. Closing it due to no activity.
OOM w/ global sort
nsivabalan closed issue #5984: [SUPPORT] Error on GlobalSortPartitioner using
0.9.0
URL: https://github.com/apache/hudi/issues/5984
nsivabalan commented on issue #5979:
URL: https://github.com/apache/hudi/issues/5979#issuecomment-1302874631
@yuzhaojing @danny0405 : can we follow up on this issue? With the latest CDC
support, would the issue reported in this ticket be solved?
nsivabalan closed issue #5952: [SUPPORT] HudiDeltaStreamer S3EventSource SQS
optimize for reading large number of files in parallel fashion
URL: https://github.com/apache/hudi/issues/5952
nsivabalan commented on issue #5952:
URL: https://github.com/apache/hudi/issues/5952#issuecomment-1302873977
Since we have a patch addressing the proposed fix, closing out the issue.
Feel free to reach out to us if you need any further assistance.
nsivabalan commented on issue #6052:
URL: https://github.com/apache/hudi/issues/6052#issuecomment-1302872944
@shqiprimbkodelabs @danny0405 : are we good to close this one, or is there
anything still pending?
nsivabalan closed issue #6048: [SUPPORT] S3 throttling while loading a table
written with "hoodie.metadata.enable" = true
URL: https://github.com/apache/hudi/issues/6048
nsivabalan commented on issue #6048:
URL: https://github.com/apache/hudi/issues/6048#issuecomment-1302872527
@noahtaite : going ahead and closing this one for now. Feel free to raise a
new issue if you are looking for further assistance.
nsivabalan closed issue #6038: [SUPPORT] MOR taking more time than COW using
HoodieJavaWriteClient
URL: https://github.com/apache/hudi/issues/6038
nsivabalan commented on issue #6038:
URL: https://github.com/apache/hudi/issues/6038#issuecomment-1302872105
feel free to raise a new issue if you are looking for further enhancement.
nsivabalan commented on issue #7049:
URL: https://github.com/apache/hudi/issues/7049#issuecomment-1302871037
thanks.
nsivabalan closed issue #7049: [SUPPORT] SQLQueryBasedTransformer Not writing
transformed parquet data
URL: https://github.com/apache/hudi/issues/7049
fsilent commented on code in PR #7129:
URL: https://github.com/apache/hudi/pull/7129#discussion_r1013541122
##
hudi-hadoop-mr/src/main/java/org/apache/hudi/hadoop/HoodieParquetInputFormat.java:
##
@@ -45,7 +46,8 @@
*/
@UseRecordReaderFromInputFormat
fsilent commented on code in PR #7129:
URL: https://github.com/apache/hudi/pull/7129#discussion_r1013540860
##
hudi-spark-datasource/hudi-spark-common/pom.xml:
##
@@ -222,6 +222,20 @@
test
+
+
Review Comment:
Changed; now we don't need to add Hive.
YannByron commented on PR #7128:
URL: https://github.com/apache/hudi/pull/7128#issuecomment-1302868990
Basically, it's not related to CDC;
https://github.com/apache/hudi/pull/7042 can work without any other changes.
This PR should just consider whether the `?` optional wildcard
hudi-bot commented on PR #7063:
URL: https://github.com/apache/hudi/pull/7063#issuecomment-1302867429
## CI report:
* 77487796a68b54304f55efc71097ab8ca50b428b UNKNOWN
* 8240e1e8280cd8842d4ba11ef6f781feb3d8a9bd UNKNOWN
* 85b70221d74d0d04900acda25e1ea9b7c71bcb0a UNKNOWN
*
hudi-bot commented on PR #7063:
URL: https://github.com/apache/hudi/pull/7063#issuecomment-1302864990
## CI report:
* 77487796a68b54304f55efc71097ab8ca50b428b UNKNOWN
* 8240e1e8280cd8842d4ba11ef6f781feb3d8a9bd UNKNOWN
* 85b70221d74d0d04900acda25e1ea9b7c71bcb0a UNKNOWN
*
hudi-bot commented on PR #6725:
URL: https://github.com/apache/hudi/pull/6725#issuecomment-1302864792
## CI report:
* 81f856d99da09e5a9438fad2a0d111bc9062aba4 Azure:
boneanxs commented on PR #6725:
URL: https://github.com/apache/hudi/pull/6725#issuecomment-1302863626
@hudi-bot run azure
boneanxs commented on PR #6725:
URL: https://github.com/apache/hudi/pull/6725#issuecomment-1302863524
@alexeykudinkin @xushiyan could you please review the new commit? The test
failure is 137, not related to this PR.
fsilent commented on code in PR #7129:
URL: https://github.com/apache/hudi/pull/7129#discussion_r1013522502
##
hudi-spark-datasource/hudi-spark-common/pom.xml:
##
@@ -222,6 +222,20 @@
test
+
+
Review Comment:
because support column type evolution for
xiarixiaoyao commented on code in PR #7129:
URL: https://github.com/apache/hudi/pull/7129#discussion_r1013530563
##
hudi-hadoop-mr/src/main/java/org/apache/hudi/hadoop/HoodieParquetInputFormat.java:
##
@@ -45,7 +46,8 @@
*/
@UseRecordReaderFromInputFormat
hudi-bot commented on PR #7132:
URL: https://github.com/apache/hudi/pull/7132#issuecomment-1302822386
## CI report:
* 23edfddd3ba7aff627930cf60fbca8255c3b40d4 Azure:
nsivabalan commented on issue #7106:
URL: https://github.com/apache/hudi/issues/7106#issuecomment-1302809739
https://issues.apache.org/jira/browse/HUDI-5158
sivabalan narayanan created HUDI-5158:
-
Summary: Add column pruning support to any payload
Key: HUDI-5158
URL: https://issues.apache.org/jira/browse/HUDI-5158
Project: Apache Hudi
Issue
[
https://issues.apache.org/jira/browse/HUDI-5064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ethan Guo updated HUDI-5064:
Fix Version/s: 0.13.0
> Improve docs around concurrency control and deployment models
>
[
https://issues.apache.org/jira/browse/HUDI-5064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ethan Guo updated HUDI-5064:
Description: Currently, the concurrency control-related configurations for
different deployment models are
[
https://issues.apache.org/jira/browse/HUDI-5064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ethan Guo updated HUDI-5064:
Component/s: docs
> Improve docs around concurrency control and deployment models
>
nsivabalan commented on issue #7060:
URL: https://github.com/apache/hudi/issues/7060#issuecomment-1302808470
@navbalaraman : gentle ping.
nsivabalan commented on issue #7062:
URL: https://github.com/apache/hudi/issues/7062#issuecomment-1302808274
also, I don't get this statement of yours "We noticed that the class
HoodieMergeHandle is not being used due to PARQUET_SMALL_FILE_LIMIT = 0 and the
job passes successfully.". can
nsivabalan commented on issue #7062:
URL: https://github.com/apache/hudi/issues/7062#issuecomment-1302807726
hey @HEPBO3AH : do you mean to say that, even after our fix
https://github.com/apache/hudi/pull/6864, your avg record size estimate is
wrong in some cases, and as a result you are
nsivabalan commented on issue #7102:
URL: https://github.com/apache/hudi/issues/7102#issuecomment-1302804138
Do you still have the ".hoodie" w/ your old state from when you ran into the
exception? We can inspect the timeline (".hoodie") to see what the issue was.
If not, we can't do much now.
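Inspecting the timeline can be as simple as listing the files under `.hoodie` — each instant leaves files named `<instant>.<action>[.<state>]`. A minimal sketch (the table path is whatever yours is):

```python
import os

def list_timeline(table_path):
    """List Hudi timeline files under <table_path>/.hoodie, e.g.
    20221104120000.commit, 20221104120000.commit.requested, ..."""
    hoodie_dir = os.path.join(table_path, ".hoodie")
    return sorted(
        name
        for name in os.listdir(hoodie_dir)
        if not name.startswith(".")                           # skip hidden entries
        and not os.path.isdir(os.path.join(hoodie_dir, name)) # skip subdirs like archived/
    )
```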
nsivabalan commented on issue #7102:
URL: https://github.com/apache/hudi/issues/7102#issuecomment-1302803751
I guess there is some misunderstanding. As of now, you ran into issues while
writing to the Hudi table with the metadata table enabled. Still, on the read
path (Hive), your read should
slfan1989 commented on PR #7127:
URL: https://github.com/apache/hudi/pull/7127#issuecomment-1302781439
> LGTM. @slfan1989 could you check the CI failures?
@yanghua Thanks a lot for your help reviewing the code, I will check the CI
failures.
lewyh commented on issue #7130:
URL: https://github.com/apache/hudi/issues/7130#issuecomment-1302781431
It seems that setting the config value
`.set("spark.hadoop.fs.s3.maxConnections", "1000")` fixes the problem. There is
no longer any server 500 error, or timeout waiting for connection
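For completeness, a sketch of where such a setting goes. Note the `fs.s3.maxConnections` key applies to EMRFS (`s3://`) paths; on the open-source s3a connector the analogous key is `fs.s3a.connection.maximum`. The helper below is illustrative, not part of any Spark API:

```python
# Connection-pool settings to apply when building the SparkSession.
S3_POOL_CONF = {
    "spark.hadoop.fs.s3.maxConnections": "1000",        # EMRFS (s3://)
    # "spark.hadoop.fs.s3a.connection.maximum": "1000", # s3a equivalent
}

def apply_conf(builder, conf=S3_POOL_CONF):
    """Apply each key/value to a SparkSession.Builder-like object via its
    .config(key, value) method and return the builder."""
    for key, value in conf.items():
        builder = builder.config(key, value)
    return builder
```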
hudi-bot commented on PR #7132:
URL: https://github.com/apache/hudi/pull/7132#issuecomment-1302779770
## CI report:
* 23edfddd3ba7aff627930cf60fbca8255c3b40d4 Azure:
hudi-bot commented on PR #7039:
URL: https://github.com/apache/hudi/pull/7039#issuecomment-1302779619
## CI report:
* 5ff96812e74f348af76c942f58e67445afbb765e Azure:
This is an automated email from the ASF dual-hosted git repository.
sivabalan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hudi.git
The following commit(s) were added to refs/heads/master by this push:
new d36cc05ed5 [HUDI-5126] Delete duplicate
nsivabalan merged PR #7103:
URL: https://github.com/apache/hudi/pull/7103
nsivabalan commented on issue #7106:
URL: https://github.com/apache/hudi/issues/7106#issuecomment-1302755348
this sounds interesting. We have RFC-46 nearing landing, so we might have to
replay this on top of RFC-46.
But can you start a dev email thread, and we can go from there?
Def
nsivabalan closed issue #7106: [PROPOSE] Add column prune support for other
payload class
URL: https://github.com/apache/hudi/issues/7106
nsivabalan commented on issue #7116:
URL: https://github.com/apache/hudi/issues/7116#issuecomment-1302753334
@yuzhaojing @danny0405 : Can you folks follow up when you get a chance.
nsivabalan commented on issue #7122:
URL: https://github.com/apache/hudi/issues/7122#issuecomment-1302751446
Looks like this is an older version of Hudi; with 0.12.0, I am not seeing a
class named HoodieTimelineArchiveLog.
Or are you using an internal Hudi version that you maintain
nsivabalan commented on issue #7064:
URL: https://github.com/apache/hudi/issues/7064#issuecomment-1302739800
I don't see any issue just from gleaning the code.
Can you post the info logs you see in both cases for the below statement? I
am expecting it is the same for both (schema from file
lewyh commented on issue #7130:
URL: https://github.com/apache/hudi/issues/7130#issuecomment-1302738763
Thanks for the quick response. I've tried setting the following when
initializing the Spark session:
```
conf = (
SparkConf()
.setAppName(app_name)
hudi-bot commented on PR #7132:
URL: https://github.com/apache/hudi/pull/7132#issuecomment-1302735029
## CI report:
* 23edfddd3ba7aff627930cf60fbca8255c3b40d4 UNKNOWN
Bot commands
@hudi-bot supports the following commands:
- `@hudi-bot run azure` re-run the
[
https://issues.apache.org/jira/browse/HUDI-5157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
sivabalan narayanan updated HUDI-5157:
--
Story Points: 2
> Duplicate partition path for chained hudi tables.
>
[
https://issues.apache.org/jira/browse/HUDI-5157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
sivabalan narayanan updated HUDI-5157:
--
Sprint: 2022/11/01
> Duplicate partition path for chained hudi tables.
>
nsivabalan commented on issue #5189:
URL: https://github.com/apache/hudi/issues/5189#issuecomment-1302727444
https://github.com/apache/hudi/pull/7132
nsivabalan opened a new pull request, #7132:
URL: https://github.com/apache/hudi/pull/7132
### Change Logs
HoodieIncrSource was dropping every meta field from the source except the
partition path. This was resulting in a duplicate meta field
(_hoodie_partition_path) when reading the 2nd table
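The fix can be pictured as stripping all five Hudi meta columns (these column names are part of Hudi's on-disk format) from the source frame before handing it to the downstream table, instead of leaving `_hoodie_partition_path` behind. A sketch:

```python
# Hudi's standard meta columns, prepended to every record on write.
HOODIE_META_FIELDS = [
    "_hoodie_commit_time",
    "_hoodie_commit_seqno",
    "_hoodie_record_key",
    "_hoodie_partition_path",
    "_hoodie_file_name",
]

def strip_meta_fields(columns):
    """Drop every Hudi meta column so a downstream Hudi write can add its
    own copies without colliding on _hoodie_partition_path."""
    return [c for c in columns if c not in HOODIE_META_FIELDS]
```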
nsivabalan closed issue #5189: [SUPPORT] Multiple chaining of hudi tables via
incremental source results in duplicate partition meta column
URL: https://github.com/apache/hudi/issues/5189
nsivabalan commented on issue #5189:
URL: https://github.com/apache/hudi/issues/5189#issuecomment-1302725013
https://issues.apache.org/jira/browse/HUDI-5157