danny0405 commented on issue #8848:
URL: https://github.com/apache/hudi/issues/8848#issuecomment-1652958087
I'm sure the Calcite jar should be included in the hive-exec jar. Which
hive-exec jar did you use for your Flink bundle?
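A quick way to settle this kind of question is to inspect the jar directly. Below is a minimal, self-contained Python sketch of that check; the real invocation would pass the path to an actual hive-exec bundle (a hypothetical path, not one named in this thread), so the demo builds a tiny stand-in jar instead.

```python
import os
import tempfile
import zipfile

def jar_contains(jar_path, prefix):
    """Return True if any entry in the jar (a zip archive) contains `prefix`."""
    with zipfile.ZipFile(jar_path) as jar:
        return any(prefix in name for name in jar.namelist())

# Demo: build a tiny stand-in jar. Against a real bundle you would pass
# something like "hive-exec-2.3.1.jar" (hypothetical path) instead.
demo_jar = os.path.join(tempfile.mkdtemp(), "demo.jar")
with zipfile.ZipFile(demo_jar, "w") as jar:
    jar.writestr("org/apache/calcite/plan/RelOptRule.class", b"")

print(jar_contains(demo_jar, "org/apache/calcite/"))  # → True
```

The same check against a jar with no `org/apache/calcite/` entries would print `False`, which would confirm the Calcite classes are missing from the bundle.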
--
This is an automated message from the Apache Git Service.
danny0405 commented on code in PR #9297:
URL: https://github.com/apache/hudi/pull/9297#discussion_r1275766041
##
hudi-sync/hudi-hive-sync/pom.xml:
##
@@ -200,6 +200,9 @@
+
+ false
+
Review Comment:
true or false ?
danny0405 commented on issue #5457:
URL: https://github.com/apache/hudi/issues/5457#issuecomment-1652955532
Thanks for the feedback @zhangyue19921010. How did you finally solve this issue?
empcl opened a new pull request, #9297:
URL: https://github.com/apache/hudi/pull/9297
…ion for the maven-jar-plugin
### Change Logs
in the hudi-utilities and hudi-hive-sync modules, add skip configuration for
the maven-jar-plugin
### Risk level (write none, low medium or
anagha-google closed issue #9294: HUDI insert not working
URL: https://github.com/apache/hudi/issues/9294
aib628 commented on issue #8848:
URL: https://github.com/apache/hudi/issues/8848#issuecomment-1652858744
> @aib628 What is your issue then ? The Calcite jar is also missing ?
Yeah, the same problem reproduced when using the Flink connector. We can see
the error message in the jobmanager log as
fuxiangkui commented on issue #9267:
URL: https://github.com/apache/hudi/issues/9267#issuecomment-1652854854
It works, thank you.
big-doudou commented on PR #9182:
URL: https://github.com/apache/hudi/pull/9182#issuecomment-1652846394
> > failover causes the bootstrap event not to be sent
>
> Why must we send the bootstrap event in this case? The bootstrap event
itself is an empty event without any metadata.
zhangyue19921010 commented on issue #5457:
URL: https://github.com/apache/hudi/issues/5457#issuecomment-1652825409
+1. Using the Flink bulk_insert action, this exception still happens in Hudi
0.13.1 + Flink 1.14.
danny0405 commented on PR #9113:
URL: https://github.com/apache/hudi/pull/9113#issuecomment-1652823930
@flashJd Since we already have a conclusion, are you interested in fixing it
for release 0.14.0?
danny0405 commented on code in PR #9277:
URL: https://github.com/apache/hudi/pull/9277#discussion_r1275665153
##
hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/hudi/command/UpdateHoodieTableCommand.scala:
##
@@ -31,6 +34,22 @@ import
danny0405 commented on PR #9290:
URL: https://github.com/apache/hudi/pull/9290#issuecomment-1652815953
Can you show us the dependency tree? Which jar/module depends on the
`httpcore` jar?
danny0405 commented on code in PR #9291:
URL: https://github.com/apache/hudi/pull/9291#discussion_r1275646230
##
website/docs/timeline.md:
##
@@ -39,4 +44,36 @@ organization reflects the actual time or `event time`, the
data was intended for
When there is late arriving data
nsivabalan opened a new pull request, #9296:
URL: https://github.com/apache/hudi/pull/9296
### Change Logs
_Describe context and summary for this change. Highlight if any code was
copied._
### Impact
_Describe any public API or user-facing feature change or any
hudi-bot commented on PR #9295:
URL: https://github.com/apache/hudi/pull/9295#issuecomment-1652703877
## CI report:
* df79b0c99551cff506dd0d81c1f742ce0bfc5ccf Azure:
hudi-bot commented on PR #9293:
URL: https://github.com/apache/hudi/pull/9293#issuecomment-1652655498
## CI report:
* 6cba4c968bcac4760291e3156f5a4b4c364f7e3f Azure:
yihua merged PR #9258:
URL: https://github.com/apache/hudi/pull/9258
hudi-bot commented on PR #9295:
URL: https://github.com/apache/hudi/pull/9295#issuecomment-1652555342
## CI report:
* df79b0c99551cff506dd0d81c1f742ce0bfc5ccf Azure:
hudi-bot commented on PR #9295:
URL: https://github.com/apache/hudi/pull/9295#issuecomment-1652503270
## CI report:
* df79b0c99551cff506dd0d81c1f742ce0bfc5ccf UNKNOWN
Bot commands
@hudi-bot supports the following commands:
- `@hudi-bot run azure` re-run the
hudi-bot commented on PR #7359:
URL: https://github.com/apache/hudi/pull/7359#issuecomment-1652499907
## CI report:
* 2eadc7045fd0cb74b76581797c63dc7ed7815020 Azure:
yihua commented on code in PR #9258:
URL: https://github.com/apache/hudi/pull/9258#discussion_r1275461682
##
website/docs/basic_configurations.md:
##
@@ -1,25 +1,19 @@
---
title: Basic Configurations
summary: This page covers the basic configurations you may use to
yihua opened a new pull request, #9295:
URL: https://github.com/apache/hudi/pull/9295
### Change Logs
As above.
### Impact
none
### Risk level
none
### Documentation Update
As above
### Contributor's checklist
- [ ] Read through
anagha-google opened a new issue, #9294:
URL: https://github.com/apache/hudi/issues/9294
***Attempted:***
Insert of a non-duplicate new record into a CoW table from Spark with the
following Hudi options:
`hudi_options = {
'hoodie.database.name': DATABASE_NAME,
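The reporter's options dict is clipped above. For context, a typical set of Spark write options for a Hudi CoW insert looks like the sketch below; the key names are standard Hudi write configs, but the values (table name, field names) are hypothetical placeholders, not the reporter's actual settings.

```python
# Hypothetical example of Hudi write options for a copy-on-write insert from
# Spark. Key names are standard Hudi configs; values are placeholders only.
hudi_options = {
    "hoodie.table.name": "my_table",
    "hoodie.datasource.write.table.type": "COPY_ON_WRITE",
    "hoodie.datasource.write.operation": "insert",
    "hoodie.datasource.write.recordkey.field": "uuid",
    "hoodie.datasource.write.precombine.field": "ts",
    "hoodie.datasource.write.partitionpath.field": "partition",
}

# Usage (requires a SparkSession with the Hudi bundle on the classpath):
# df.write.format("hudi").options(**hudi_options).mode("append").save(base_path)
```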
hudi-bot commented on PR #9293:
URL: https://github.com/apache/hudi/pull/9293#issuecomment-1652437646
## CI report:
* 6cba4c968bcac4760291e3156f5a4b4c364f7e3f Azure:
hudi-bot commented on PR #9293:
URL: https://github.com/apache/hudi/pull/9293#issuecomment-1652427622
## CI report:
* 6cba4c968bcac4760291e3156f5a4b4c364f7e3f UNKNOWN
hudi-bot commented on PR #9292:
URL: https://github.com/apache/hudi/pull/9292#issuecomment-1652418178
## CI report:
* 4c7e7f5c14fa898e406bfce45ea2daccda02c662 Azure:
nsivabalan commented on code in PR #9293:
URL: https://github.com/apache/hudi/pull/9293#discussion_r1275417953
##
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/client/HoodieTimelineArchiver.java:
##
@@ -726,4 +687,64 @@ private IndexedRecord
Krishen Bhan created HUDI-6596:
--
Summary: Propose rollback implementation changes to guard against
concurrent jobs
Key: HUDI-6596
URL: https://issues.apache.org/jira/browse/HUDI-6596
Project: Apache
[
https://issues.apache.org/jira/browse/HUDI-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
ASF GitHub Bot updated HUDI-6595:
-
Labels: pull-request-available (was: )
> Refactor HoodieTimelineArchiver to improve code reuse.
>
vinishjail97 opened a new pull request, #9293:
URL: https://github.com/apache/hudi/pull/9293
### Change Logs
Moving the min and max instants functionality to a static function to
improve usability.
### Impact
No impact, refactoring.
### Risk level (write none,
[
https://issues.apache.org/jira/browse/HUDI-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
kazdy closed HUDI-5266.
---
Resolution: Fixed
> Incremental Query for spark-sql
> ---
>
> Key:
[
https://issues.apache.org/jira/browse/HUDI-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17747644#comment-17747644
]
kazdy commented on HUDI-5266:
-
this was implemented in https://issues.apache.org/jira/browse/HUDI-6223,
[
https://issues.apache.org/jira/browse/HUDI-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
kazdy updated HUDI-5266:
Fix Version/s: 0.14.0
> Incremental Query for spark-sql
> ---
>
> Key:
[
https://issues.apache.org/jira/browse/HUDI-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
kazdy resolved HUDI-5266.
-
> Incremental Query for spark-sql
> ---
>
> Key: HUDI-5266
>
[
https://issues.apache.org/jira/browse/HUDI-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
kazdy reassigned HUDI-5266:
---
Assignee: kazdy
> Incremental Query for spark-sql
> ---
>
> Key:
hudi-bot commented on PR #9277:
URL: https://github.com/apache/hudi/pull/9277#issuecomment-1652338676
## CI report:
* cf692aeb6c7774b01a236cf058225debb8caff53 UNKNOWN
* 5df6520f0b0762555349e164a17cfada9a3e7548 Azure:
Vinish Reddy created HUDI-6595:
--
Summary: Refactor HoodieTimelineArchiver to improve code reuse.
Key: HUDI-6595
URL: https://issues.apache.org/jira/browse/HUDI-6595
Project: Apache Hudi
Issue
bvaradar commented on PR #9226:
URL: https://github.com/apache/hudi/pull/9226#issuecomment-1652319444
@hbgstc123 : Added some comments. Please take a look
bvaradar commented on code in PR #9226:
URL: https://github.com/apache/hudi/pull/9226#discussion_r1275360526
##
hudi-common/src/main/java/org/apache/hudi/common/util/ClusteringUtils.java:
##
@@ -258,26 +258,29 @@ public static Option
getOldestInstantToRetainForClustering(
hudi-bot commented on PR #7359:
URL: https://github.com/apache/hudi/pull/7359#issuecomment-1652282906
## CI report:
* 2999c56d853134e8476908b79ce77737293ce867 Azure:
hudi-bot commented on PR #7359:
URL: https://github.com/apache/hudi/pull/7359#issuecomment-1652272057
## CI report:
* 2999c56d853134e8476908b79ce77737293ce867 Azure:
yihua commented on code in PR #7359:
URL: https://github.com/apache/hudi/pull/7359#discussion_r1275296193
##
hudi-client/hudi-java-client/src/main/java/org/apache/hudi/table/action/commit/JavaWriteHelper.java:
##
@@ -69,17 +70,26 @@ public List> deduplicateRecords(
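The diff above is clipped by the digest. Conceptually, write-helper deduplication groups incoming records by record key and keeps the one with the greatest precombine value. A language-neutral Python sketch of that idea (not Hudi's actual Java implementation) looks like this:

```python
def deduplicate_records(records):
    """Keep one record per key: the one with the greatest precombine value.

    `records` is an iterable of (key, precombine, payload) tuples. This is a
    conceptual sketch of write-helper deduplication, not Hudi's actual code.
    """
    best = {}
    for key, precombine, payload in records:
        if key not in best or precombine > best[key][0]:
            best[key] = (precombine, payload)
    return [(k, p, v) for k, (p, v) in best.items()]

rows = [("a", 1, "old"), ("a", 3, "new"), ("b", 2, "only")]
print(deduplicate_records(rows))  # keeps ("a", 3, "new") and ("b", 2, "only")
```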
bhasudha commented on PR #8869:
URL: https://github.com/apache/hudi/pull/8869#issuecomment-1652251468
@ad1happy2go @yihua I also bumped into this issue recently. My questions
based on this PR:
- This PR checks for the partition path fields leaving out the type of the
hudi-bot commented on PR #9292:
URL: https://github.com/apache/hudi/pull/9292#issuecomment-1652199349
## CI report:
* 4c7e7f5c14fa898e406bfce45ea2daccda02c662 Azure:
hudi-bot commented on PR #9292:
URL: https://github.com/apache/hudi/pull/9292#issuecomment-1652187318
## CI report:
* 4c7e7f5c14fa898e406bfce45ea2daccda02c662 UNKNOWN
codope commented on code in PR #9292:
URL: https://github.com/apache/hudi/pull/9292#discussion_r1275206975
##
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieMetadataLogRecordReader.java:
##
@@ -110,7 +110,7 @@ public Map>
getRecordsByKeys(List (HoodieRecord)
hudi-bot commented on PR #9277:
URL: https://github.com/apache/hudi/pull/9277#issuecomment-1652125752
## CI report:
* cf692aeb6c7774b01a236cf058225debb8caff53 UNKNOWN
* da9dd1fc203c01d0a000d49dcbd58a0a1d729354 Azure:
codope commented on code in PR #9292:
URL: https://github.com/apache/hudi/pull/9292#discussion_r1275203971
##
hudi-common/src/main/java/org/apache/hudi/metadata/BaseTableMetadata.java:
##
@@ -418,6 +417,8 @@ private void checkForSpuriousDeletes(HoodieMetadataPayload
[
https://issues.apache.org/jira/browse/HUDI-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
ASF GitHub Bot updated HUDI-6594:
-
Labels: pull-request-available (was: )
> Support duplicate with HFile Reader/Writer and by
codope opened a new pull request, #9292:
URL: https://github.com/apache/hudi/pull/9292
### Change Logs
Support duplicate with HFile Reader/Writer.
TODO: Metadata reader changes pending.
### Impact
MDT can have duplicates. Useful for record index to work as regular
Sagar Sumit created HUDI-6594:
-
Summary: Support duplicate with HFile Reader/Writer and by
extension MDT records
Key: HUDI-6594
URL: https://issues.apache.org/jira/browse/HUDI-6594
Project: Apache Hudi
hudi-bot commented on PR #9277:
URL: https://github.com/apache/hudi/pull/9277#issuecomment-1652113014
## CI report:
* cf692aeb6c7774b01a236cf058225debb8caff53 UNKNOWN
* da9dd1fc203c01d0a000d49dcbd58a0a1d729354 Azure:
hudi-bot commented on PR #9290:
URL: https://github.com/apache/hudi/pull/9290#issuecomment-1651873520
## CI report:
* 9a009f58a8282d606c945179e62325bf3c3727f3 Azure:
ad1happy2go commented on issue #8890:
URL: https://github.com/apache/hudi/issues/8890#issuecomment-1651832676
@psendyk This is the gist with code I am using to reproduce -
https://gist.github.com/ad1happy2go/a2df5b11c3aff1a15b205b458b6b480a
rmnlchh commented on issue #9282:
URL: https://github.com/apache/hudi/issues/9282#issuecomment-1651743365
@ad1happy2go
Kafka topic record schema:
{
"type": "record",
"name": "Creative",
"namespace": "Cardlytics.Ops.Messages.Portal",
"fields": [
{
SteNicholas commented on code in PR #9287:
URL: https://github.com/apache/hudi/pull/9287#discussion_r1274853953
##
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/table/HoodieTableSink.java:
##
@@ -145,18 +145,12 @@ public String asSummaryString() {
big-doudou commented on PR #9182:
URL: https://github.com/apache/hudi/pull/9182#issuecomment-1651615360
> > How does this affect metadata cleaning?
>
> It removes the preceding partial metadata if there is any.
Before the checkpoint is completed, BucketStreamWrite flush buffer
bhasudha commented on code in PR #9291:
URL: https://github.com/apache/hudi/pull/9291#discussion_r1274820045
##
website/docs/table_types.md:
##
@@ -36,17 +38,28 @@ Hudi supports the following query types
- **Snapshot Queries** : Queries see the latest snapshot of the table
bhasudha commented on PR #9291:
URL: https://github.com/apache/hudi/pull/9291#issuecomment-1651600924
Tested locally
![Screenshot 2023-07-26 at 4 23 40
AM](https://github.com/apache/hudi/assets/2179254/dce65e85-0187-4483-86e9-f8ceb1ffbc0d)
![Screenshot 2023-07-26 at 4 23 57
bhasudha opened a new pull request, #9291:
URL: https://github.com/apache/hudi/pull/9291
- Add local configs
- Keep up to date on different query types
### Change Logs
Docs update
### Impact
Impacts website info
### Risk level (write none, low medium or
hudi-bot commented on PR #9290:
URL: https://github.com/apache/hudi/pull/9290#issuecomment-1651577619
## CI report:
* 9a009f58a8282d606c945179e62325bf3c3727f3 Azure:
hudi-bot commented on PR #9290:
URL: https://github.com/apache/hudi/pull/9290#issuecomment-1651566362
## CI report:
* 9a009f58a8282d606c945179e62325bf3c3727f3 UNKNOWN
voonhous commented on issue #9256:
URL: https://github.com/apache/hudi/issues/9256#issuecomment-1651552403
> Also, one thing I noticed is that hoodie.properties does not change to
reflect the presence of my new columns. Let me know if you still want me to
test a primitive, non-nested
JingFengWang opened a new pull request, #9290:
URL: https://github.com/apache/hudi/pull/9290
…org.apache.http package cannot be found
### Change Logs
Add http dependency in ../hudi-common/pom.xml
```xml
...
org.apache.httpcomponents
```
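The pom snippet above is clipped by the digest. A Maven dependency block of the shape being proposed would look roughly like the sketch below; the artifact, version property, and placement are illustrative assumptions, not the PR's exact change.

```xml
<!-- Illustrative sketch of the kind of addition described for
     hudi-common/pom.xml; artifactId and version are assumptions,
     not the PR's exact values. -->
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpcore</artifactId>
  <version>${httpcore.version}</version>
</dependency>
```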
ad1happy2go commented on issue #9282:
URL: https://github.com/apache/hudi/issues/9282#issuecomment-1651511555
@rmnlchh I couldn't reproduce this issue.
Code I tried -
https://gist.github.com/ad1happy2go/1391a679de49efa1872563062f04e29b
Can you let us know the schema of the
jlloh commented on issue #9256:
URL: https://github.com/apache/hudi/issues/9256#issuecomment-1651495991
> P.S. Can you please help to check if the configs below contain the new
column spec definition that was added via Spark?
```
getOrderedColumnExpr()
danny0405 commented on code in PR #9287:
URL: https://github.com/apache/hudi/pull/9287#discussion_r1274732834
##
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/table/HoodieTableSink.java:
##
@@ -145,18 +145,12 @@ public String asSummaryString() {
@Override
hudi-bot commented on PR #9287:
URL: https://github.com/apache/hudi/pull/9287#issuecomment-1651453920
## CI report:
* 898d92a4a78fa92f4ce2ce878a8ad95a7c3e0590 Azure:
Bhavani Sudha created HUDI-6593:
---
Summary: Refactor deltastreamer command-line config params to use
HoodieConfig/ConfigProperty
Key: HUDI-6593
URL: https://issues.apache.org/jira/browse/HUDI-6593
danny0405 commented on PR #9182:
URL: https://github.com/apache/hudi/pull/9182#issuecomment-1651449174
> How does this affect metadata cleaning?
It removes the preceding partial metadata if there is any.
danny0405 commented on issue #8848:
URL: https://github.com/apache/hudi/issues/8848#issuecomment-1651446994
@aib628 What is your issue then ? The Calcite jar is also missing ?
hudi-bot commented on PR #9286:
URL: https://github.com/apache/hudi/pull/9286#issuecomment-1651438963
## CI report:
* e5eb56bc34913e0e2ea0f3c5e47e7adcc88d0c4b Azure:
danny0405 commented on code in PR #9229:
URL: https://github.com/apache/hudi/pull/9229#discussion_r1274711375
##
hudi-utilities/src/main/java/org/apache/hudi/utilities/HoodieCompactor.java:
##
@@ -101,6 +104,12 @@ public static class Config implements Serializable {
public
ksmou commented on code in PR #9229:
URL: https://github.com/apache/hudi/pull/9229#discussion_r1274692248
##
hudi-utilities/src/main/java/org/apache/hudi/utilities/HoodieCompactor.java:
##
@@ -101,6 +104,12 @@ public static class Config implements Serializable {
public
dengd1937 opened a new issue, #9288:
URL: https://github.com/apache/hudi/issues/9288
**_Tips before filing an issue_**
- Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)?
- Join the mailing list to engage in conversations and get faster support at
Armelabdelkbir commented on issue #9213:
URL: https://github.com/apache/hudi/issues/9213#issuecomment-1651168356
With this configuration I sometimes got errors:
```
23/07/26 07:48:10 ERROR streaming.MicroBatchExecution: Query [id =
ade16080-efa5-42d5-8432-b01c3216a566, runId =
hudi-bot commented on PR #9287:
URL: https://github.com/apache/hudi/pull/9287#issuecomment-1651117040
## CI report:
* 898d92a4a78fa92f4ce2ce878a8ad95a7c3e0590 Azure:
hudi-bot commented on PR #9287:
URL: https://github.com/apache/hudi/pull/9287#issuecomment-1651103898
## CI report:
* 898d92a4a78fa92f4ce2ce878a8ad95a7c3e0590 UNKNOWN
hudi-bot commented on PR #9286:
URL: https://github.com/apache/hudi/pull/9286#issuecomment-1651103849
## CI report:
* e5eb56bc34913e0e2ea0f3c5e47e7adcc88d0c4b Azure:
hudi-bot commented on PR #9286:
URL: https://github.com/apache/hudi/pull/9286#issuecomment-1651093441
## CI report:
* e5eb56bc34913e0e2ea0f3c5e47e7adcc88d0c4b UNKNOWN
aib628 commented on issue #8848:
URL: https://github.com/apache/hudi/issues/8848#issuecomment-1651074254
>
Hi @danny0405, I have tried it, and the same problem reproduced.
```
CREATE TABLE t1(
  uuid VARCHAR(20) PRIMARY KEY NOT ENFORCED,
  name VARCHAR(10),
  age INT,
```
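The statement above is cut off by the digest. For reference, a complete Flink SQL definition of a Hudi table along these lines looks like the sketch below; the path and partition column are hypothetical placeholders, while `connector`, `path`, and `table.type` are the standard Flink Hudi connector option keys.

```sql
-- Hypothetical complete example of a Flink SQL Hudi table definition;
-- the path is a placeholder, option keys are the standard connector options.
CREATE TABLE t1(
  uuid VARCHAR(20) PRIMARY KEY NOT ENFORCED,
  name VARCHAR(10),
  age INT,
  ts TIMESTAMP(3),
  `partition` VARCHAR(20)
) PARTITIONED BY (`partition`) WITH (
  'connector' = 'hudi',
  'path' = 'file:///tmp/t1',
  'table.type' = 'MERGE_ON_READ'
);
```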
Zouxxyy closed pull request #7615: [HUDI-5510] Reload active timeline when
commit finish
URL: https://github.com/apache/hudi/pull/7615
SteNicholas commented on code in PR #9211:
URL: https://github.com/apache/hudi/pull/9211#discussion_r1274428725
##
hudi-flink-datasource/hudi-flink/src/test/java/org/apache/hudi/sink/TestWriteCopyOnWrite.java:
##
@@ -114,10 +116,28 @@ public void testCheckpointFails() throws
[
https://issues.apache.org/jira/browse/HUDI-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
ASF GitHub Bot updated HUDI-6592:
-
Labels: pull-request-available (was: )
> Flink insert overwrite should support dynamic partition
SteNicholas opened a new pull request, #9287:
URL: https://github.com/apache/hudi/pull/9287
### Change Logs
Flink insert overwrite should support dynamic partitions instead of the whole
table, which is consistent with the semantics of insert overwrite in Flink.
###
Nicholas Jiang created HUDI-6592:
Summary: Flink insert overwrite should support dynamic partition
instead of whole table
Key: HUDI-6592
URL: https://issues.apache.org/jira/browse/HUDI-6592
Project:
codope opened a new pull request, #9286:
URL: https://github.com/apache/hudi/pull/9286
### Change Logs
Add rowId field to HoodieRecordIndexInfo in metadata payload. The rowId is
not being read/written right now. Default is 0L. In the future, we would want
to map a record to file and