[jira] [Updated] (HUDI-3349) Revisit HoodieRecord API to be able to replace HoodieRecordPayload

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3349:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> Revisit HoodieRecord API to be able to replace HoodieRecordPayload
> --
>
> Key: HUDI-3349
> URL: https://issues.apache.org/jira/browse/HUDI-3349
> Project: Apache Hudi
>  Issue Type: Task
>Reporter: Alexey Kudinkin
>Assignee: Alexey Kudinkin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.12.0
>
>
> From RFC-46:
> To promote `HoodieRecord` to a standardized API for interacting with a
> single record, we need to:
>  # Rebase usages of `HoodieRecordPayload` onto `HoodieRecord`
>  # Implement new standardized record-level APIs (like `getPartitionKey`,
> `getRecordKey`, etc.) in `HoodieRecord`
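A minimal, purely illustrative sketch of what such a standardized record API could look like (the class and method names below follow the list above and are assumptions, not the final `HoodieRecord` signatures):

{code:java}
// Hypothetical sketch only: a payload-agnostic record abstraction.
public abstract class HoodieRecordSketch<T> {
  private final String recordKey;
  private final String partitionKey;
  private final T data; // engine-specific payload (Avro, InternalRow, RowData, ...)

  protected HoodieRecordSketch(String recordKey, String partitionKey, T data) {
    this.recordKey = recordKey;
    this.partitionKey = partitionKey;
    this.data = data;
  }

  // Standardized record-level APIs replacing lookups through HoodieRecordPayload.
  public String getRecordKey() { return recordKey; }
  public String getPartitionKey() { return partitionKey; }
  public T getData() { return data; }
}
{code}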



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3353) Rebase `HoodieFileWriter` to accept `HoodieRecord`

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3353:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> Rebase `HoodieFileWriter` to accept `HoodieRecord`
> --
>
> Key: HUDI-3353
> URL: https://issues.apache.org/jira/browse/HUDI-3353
> Project: Apache Hudi
>  Issue Type: Task
>Reporter: Alexey Kudinkin
>Assignee: Alexey Kudinkin
>Priority: Major
> Fix For: 0.12.0
>
>
> From RFC-46:
> `HoodieFileWriter`s will be
> 1. Accepting `HoodieRecord`
> 2. Engine-specific (so that they're able to handle the internal record
> representation)
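A rough sketch of the shape such a writer interface could take (hypothetical names under the assumptions above; the real `HoodieFileWriter` interface may differ):

{code:java}
import java.io.IOException;

// Hypothetical sketch: a file writer that consumes records directly in their
// engine-native representation; R stands in for an engine-specific HoodieRecord.
public interface RecordFileWriterSketch<R> extends AutoCloseable {
  boolean canWrite();                        // whether this file can accept more records
  void write(String recordKey, R record) throws IOException;
  @Override
  void close() throws IOException;
}
{code}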



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3381) Rebase `HoodieMergeHandle` to operate on `HoodieRecord`

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3381:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> Rebase `HoodieMergeHandle` to operate on `HoodieRecord`
> ---
>
> Key: HUDI-3381
> URL: https://issues.apache.org/jira/browse/HUDI-3381
> Project: Apache Hudi
>  Issue Type: Bug
>Reporter: Alexey Kudinkin
>Assignee: Alexey Kudinkin
>Priority: Major
> Fix For: 0.12.0
>
>
> From RFC-46:
> `HoodieWriteHandle`s will be
>    1. Accepting `HoodieRecord` instead of raw Avro payload (avoiding Avro
> conversion)
>    2. Using the combining engine API to merge records (when necessary)
>    3. Passing `HoodieRecord` as-is to `FileWriter`
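A simplified sketch of the write path those three points describe (hypothetical helper types, not the actual `HoodieMergeHandle` code):

{code:java}
import java.io.IOException;

// Hypothetical sketch: a write handle that keeps records in HoodieRecord form
// end to end (no Avro conversion on the write path).
public class WriteHandleSketch<R> {

  /** Stand-in for an engine-specific file writer (see the writer sketch above). */
  public interface FileWriter<T> { void write(T record) throws IOException; }

  /** Stand-in for the record-combining engine. */
  public interface RecordCombiner<T> { T combine(T older, T newer); }

  private final FileWriter<R> fileWriter;
  private final RecordCombiner<R> combiner;

  public WriteHandleSketch(FileWriter<R> fileWriter, RecordCombiner<R> combiner) {
    this.fileWriter = fileWriter;
    this.combiner = combiner;
  }

  // 1. accepts the record as-is, 2. merges it with an existing version via the
  // combining engine when one is present, 3. hands the result straight to the
  // file writer.
  public void write(R incoming, R existingOrNull) throws IOException {
    R toWrite = (existingOrNull == null) ? incoming : combiner.combine(existingOrNull, incoming);
    fileWriter.write(toWrite);
  }
}
{code}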



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3384) Implement Spark-specific FileWriters

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3384:
-
Issue Type: Improvement  (was: Task)

> Implement Spark-specific FileWriters
> 
>
> Key: HUDI-3384
> URL: https://issues.apache.org/jira/browse/HUDI-3384
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Alexey Kudinkin
>Assignee: Alexey Kudinkin
>Priority: Major
> Fix For: 0.11.0
>
>
> As per RFC-46,
> `HoodieFileWriter`s will be
>  # Accepting `HoodieRecord`
>  # Engine-specific (so that they're able to handle the internal record
> representation)
>  
> Initially, we will focus on Spark, with other engines to follow.
>  
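A hedged illustration of what "Spark-specific" could mean here: a writer whose payload is Spark's own `InternalRow`, so no Avro round-trip is needed (hypothetical interface, not the actual Hudi implementation):

{code:java}
import org.apache.spark.sql.catalyst.InternalRow;
import java.io.IOException;

// Hypothetical sketch: a Spark-flavoured file writer keyed on InternalRow.
public interface SparkFileWriterSketch extends AutoCloseable {
  void writeRow(String recordKey, InternalRow row) throws IOException;
  @Override
  void close() throws IOException;
}
{code}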



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3385) Implement Spark-specific `FileReader`s

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3385:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> Implement Spark-specific `FileReader`s
> --
>
> Key: HUDI-3385
> URL: https://issues.apache.org/jira/browse/HUDI-3385
> Project: Apache Hudi
>  Issue Type: Task
>Reporter: Alexey Kudinkin
>Assignee: Alexey Kudinkin
>Priority: Major
> Fix For: 0.12.0
>
>
> To fully avoid using any intermediate representation (Avro), we will also
> have to implement engine-specific `FileReader`s.
>  
> Initially, we will focus on Spark, with other engines to follow.
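For illustration only, a Spark-specific reader could hand back the engine's native rows rather than Avro records (hypothetical interface, assuming Spark's `InternalRow` as the native representation):

{code:java}
import org.apache.spark.sql.catalyst.InternalRow;
import java.io.IOException;
import java.util.Iterator;

// Hypothetical sketch: an engine-specific reader that never materializes Avro.
public interface SparkFileReaderSketch extends AutoCloseable {
  Iterator<InternalRow> readRows() throws IOException;
  long getTotalRecords();
  @Override
  void close() throws IOException;
}
{code}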



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3379) Rebase `HoodieAppendHandle` to operate on `HoodieRecord`

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3379:
-
Issue Type: Improvement  (was: Task)

> Rebase `HoodieAppendHandle` to operate on `HoodieRecord`
> 
>
> Key: HUDI-3379
> URL: https://issues.apache.org/jira/browse/HUDI-3379
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Alexey Kudinkin
>Assignee: Alexey Kudinkin
>Priority: Major
> Fix For: 0.11.0
>
>
> From RFC-46:
> `HoodieWriteHandle`s will be
>    1. Accepting `HoodieRecord` instead of raw Avro payload (avoiding Avro
> conversion)
>    2. Using the combining engine API to merge records (when necessary)
>    3. Passing `HoodieRecord` as-is to `FileWriter`



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3380) Rebase `HoodieDataBlock`s to operate on `HoodieRecord`

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3380:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> Rebase `HoodieDataBlock`s to operate on `HoodieRecord`
> --
>
> Key: HUDI-3380
> URL: https://issues.apache.org/jira/browse/HUDI-3380
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Alexey Kudinkin
>Assignee: Alexey Kudinkin
>Priority: Major
> Fix For: 0.12.0
>
>
> The HoodieDataBlock implementations for Avro, Parquet, and HFile have to be
> rebased on HoodieRecord to unblock HUDI-3379.
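A rough sketch of the direction (hypothetical abstract class; the real `HoodieDataBlock` API is more involved):

{code:java}
import java.io.IOException;
import java.util.List;

// Hypothetical sketch: a log data block whose payload is a list of records
// rather than raw Avro objects; each format (Avro, Parquet, HFile) supplies
// its own serialization of that list.
public abstract class DataBlockSketch<R> {
  protected final List<R> records;

  protected DataBlockSketch(List<R> records) {
    this.records = records;
  }

  /** Serialize the records into the block's on-disk byte layout. */
  public abstract byte[] getContentBytes() throws IOException;

  /** Deserialize records back from the block's byte layout. */
  public abstract List<R> readRecords(byte[] content) throws IOException;
}
{code}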



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3381) Rebase `HoodieMergeHandle` to operate on `HoodieRecord`

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3381:
-
Issue Type: Improvement  (was: Bug)

> Rebase `HoodieMergeHandle` to operate on `HoodieRecord`
> ---
>
> Key: HUDI-3381
> URL: https://issues.apache.org/jira/browse/HUDI-3381
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Alexey Kudinkin
>Assignee: Alexey Kudinkin
>Priority: Major
> Fix For: 0.12.0
>
>
> From RFC-46:
> `HoodieWriteHandle`s will be
>    1. Accepting `HoodieRecord` instead of raw Avro payload (avoiding Avro
> conversion)
>    2. Using the combining engine API to merge records (when necessary)
>    3. Passing `HoodieRecord` as-is to `FileWriter`



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3379) Rebase `HoodieAppendHandle` to operate on `HoodieRecord`

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3379:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> Rebase `HoodieAppendHandle` to operate on `HoodieRecord`
> 
>
> Key: HUDI-3379
> URL: https://issues.apache.org/jira/browse/HUDI-3379
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Alexey Kudinkin
>Assignee: Alexey Kudinkin
>Priority: Major
> Fix For: 0.12.0
>
>
> From RFC-46:
> `HoodieWriteHandle`s will be
>    1. Accepting `HoodieRecord` instead of raw Avro payload (avoiding Avro
> conversion)
>    2. Using the combining engine API to merge records (when necessary)
>    3. Passing `HoodieRecord` as-is to `FileWriter`



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3385) Implement Spark-specific `FileReader`s

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3385:
-
Issue Type: Improvement  (was: Task)

> Implement Spark-specific `FileReader`s
> --
>
> Key: HUDI-3385
> URL: https://issues.apache.org/jira/browse/HUDI-3385
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Alexey Kudinkin
>Assignee: Alexey Kudinkin
>Priority: Major
> Fix For: 0.12.0
>
>
> To fully avoid using any intermediate representation (Avro), we will also
> have to implement engine-specific `FileReader`s.
>  
> Initially, we will focus on Spark, with other engines to follow.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3380) Rebase `HoodieDataBlock`s to operate on `HoodieRecord`

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3380:
-
Issue Type: Improvement  (was: Bug)

> Rebase `HoodieDataBlock`s to operate on `HoodieRecord`
> --
>
> Key: HUDI-3380
> URL: https://issues.apache.org/jira/browse/HUDI-3380
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Alexey Kudinkin
>Assignee: Alexey Kudinkin
>Priority: Major
> Fix For: 0.11.0
>
>
> The HoodieDataBlock implementations for Avro, Parquet, and HFile have to be
> rebased on HoodieRecord to unblock HUDI-3379.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3384) Implement Spark-specific FileWriters

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3384:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> Implement Spark-specific FileWriters
> 
>
> Key: HUDI-3384
> URL: https://issues.apache.org/jira/browse/HUDI-3384
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Alexey Kudinkin
>Assignee: Alexey Kudinkin
>Priority: Major
> Fix For: 0.12.0
>
>
> As per RFC-46,
> `HoodieFileWriter`s will be
>  # Accepting `HoodieRecord`
>  # Engine-specific (so that they're able to handle the internal record
> representation)
>  
> Initially, we will focus on Spark, with other engines to follow.
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3354) Rebase `HoodieRealtimeRecordReader` to return `HoodieRecord`

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3354:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> Rebase `HoodieRealtimeRecordReader` to return `HoodieRecord`
> 
>
> Key: HUDI-3354
> URL: https://issues.apache.org/jira/browse/HUDI-3354
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Alexey Kudinkin
>Assignee: Alexey Kudinkin
>Priority: Major
> Fix For: 0.12.0
>
>
> From RFC-46:
> `HoodieRealtimeRecordReader`s
> 1. The API will return an opaque `HoodieRecord` instead of a raw Avro payload
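Illustratively (hypothetical signature, not the actual Hadoop RecordReader contract), the change means the realtime reader surfaces records as opaque `HoodieRecord`-like objects:

{code:java}
import java.io.IOException;

// Hypothetical sketch: a realtime reader that merges base-file and log records
// and exposes them as opaque HoodieRecord-like objects instead of Avro.
public interface RealtimeRecordReaderSketch<R> extends AutoCloseable {
  boolean hasNext() throws IOException;
  R next() throws IOException;   // R stands in for HoodieRecord
  @Override
  void close() throws IOException;
}
{code}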



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3354) Rebase `HoodieRealtimeRecordReader` to return `HoodieRecord`

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3354:
-
Issue Type: Improvement  (was: Task)

> Rebase `HoodieRealtimeRecordReader` to return `HoodieRecord`
> 
>
> Key: HUDI-3354
> URL: https://issues.apache.org/jira/browse/HUDI-3354
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Alexey Kudinkin
>Assignee: Alexey Kudinkin
>Priority: Major
> Fix For: 0.11.0
>
>
> From RFC-46:
> `HoodieRealtimeRecordReader`s
> 1. The API will return an opaque `HoodieRecord` instead of a raw Avro payload



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3351) Rebase Record combining semantic into `HoodieRecordCombiningEngine`

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3351:
-
Issue Type: Improvement  (was: Task)

> Rebase Record combining semantic into `HoodieRecordCombiningEngine`
> ---
>
> Key: HUDI-3351
> URL: https://issues.apache.org/jira/browse/HUDI-3351
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Alexey Kudinkin
>Assignee: Alexey Kudinkin
>Priority: Major
> Fix For: 0.12.0
>
>
> From RFC-46:
> Extract the record-combining (merge) API from `HoodieRecordPayload` into a
> standalone, stateless component – `HoodieRecordCombiningEngine`.
> Such a component will be
> 1. Abstracted as a stateless object providing an API to combine records
> (according to predefined semantics) for the engines of interest (Spark, Flink)
> 2. A plug-in point for user-defined combination semantics
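A minimal sketch of what a stateless combining engine could look like (hypothetical interface; the "latest ordering value wins" default below is just one possible semantic, assuming a comparator over the records is available):

{code:java}
import java.util.Comparator;

// Hypothetical sketch: stateless record-combining engine; engine-specific
// implementations (Spark, Flink) would work on their native record types,
// and users could plug in custom combination semantics.
public interface RecordCombiningEngineSketch<R> {

  /** Combine an existing (older) record with an incoming (newer) one. */
  R combine(R older, R newer);

  /** Default "latest ordering value wins" semantics, for illustration only. */
  static <T> RecordCombiningEngineSketch<T> latestWins(Comparator<T> byOrderingValue) {
    return (older, newer) -> byOrderingValue.compare(newer, older) >= 0 ? newer : older;
  }
}
{code}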



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3353) Rebase `HoodieFileWriter` to accept `HoodieRecord`

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3353:
-
Issue Type: Improvement  (was: Task)

> Rebase `HoodieFileWriter` to accept `HoodieRecord`
> --
>
> Key: HUDI-3353
> URL: https://issues.apache.org/jira/browse/HUDI-3353
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Alexey Kudinkin
>Assignee: Alexey Kudinkin
>Priority: Major
> Fix For: 0.12.0
>
>
> From RFC-46:
> `HoodieFileWriter`s will be
> 1. Accepting `HoodieRecord`
> 2. Engine-specific (so that they're able to handle the internal record
> representation)



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3378) Rebase `HoodieCreateHandle` to operate on `HoodieRecord`

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3378:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> Rebase `HoodieCreateHandle` to operate on `HoodieRecord`
> 
>
> Key: HUDI-3378
> URL: https://issues.apache.org/jira/browse/HUDI-3378
> Project: Apache Hudi
>  Issue Type: Task
>Reporter: Alexey Kudinkin
>Assignee: Alexey Kudinkin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.12.0
>
>
> From RFC-46:
> `HoodieWriteHandle`s will be
>    1. Accepting `HoodieRecord` instead of raw Avro payload (avoiding Avro
> conversion)
>    2. Using the combining engine API to merge records (when necessary)
>    3. Passing `HoodieRecord` as-is to `FileWriter`



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3378) Rebase `HoodieCreateHandle` to operate on `HoodieRecord`

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3378:
-
Issue Type: Improvement  (was: Task)

> Rebase `HoodieCreateHandle` to operate on `HoodieRecord`
> 
>
> Key: HUDI-3378
> URL: https://issues.apache.org/jira/browse/HUDI-3378
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Alexey Kudinkin
>Assignee: Alexey Kudinkin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.12.0
>
>
> From RFC-46:
> `HoodieWriteHandle`s will be
>    1. Accepting `HoodieRecord` instead of raw Avro payload (avoiding Avro
> conversion)
>    2. Using the combining engine API to merge records (when necessary)
>    3. Passing `HoodieRecord` as-is to `FileWriter`



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3321) HFileWriter, HFileReader and HFileDataBlock should avoid hardcoded key field name

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3321:
-
Component/s: metadata

> HFileWriter, HFileReader and HFileDataBlock should avoid hardcoded key field 
> name
> -
>
> Key: HUDI-3321
> URL: https://issues.apache.org/jira/browse/HUDI-3321
> Project: Apache Hudi
>  Issue Type: Improvement
>  Components: code-quality, metadata
>Reporter: Manoj Govindassamy
>Assignee: Ethan Guo
>Priority: Major
>  Labels: code-cleanup
> Fix For: 0.11.0
>
>
> Today HFileReader has a hardcoded key field name for the schema. This key
> field is used by HFileWriter and HFileDataBlock for the HFile key
> deduplication feature. When users want to use the HFile format, this
> hardcoded key field name can prevent the use case. We need a way to pass the
> writer/reader/query configs into the HFile storage layer so that the right
> key field name is used.
>  
> Related: HUDI-2763
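One hedged way this could surface (illustrative only; the config key and plumbing below are hypothetical, not an existing Hudi option):

{code:java}
import java.util.Properties;

// Hypothetical sketch: resolve the HFile key field name from writer/reader
// configs instead of a hardcoded constant.
public final class HFileKeyFieldSketch {
  // Hypothetical property name, used purely for illustration.
  public static final String KEY_FIELD_PROP = "hoodie.hfile.key.field.name";
  public static final String DEFAULT_KEY_FIELD = "key"; // assumed current hardcoded default

  private HFileKeyFieldSketch() {}

  public static String resolveKeyField(Properties props) {
    return props.getProperty(KEY_FIELD_PROP, DEFAULT_KEY_FIELD);
  }
}
{code}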



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3349) Revisit HoodieRecord API to be able to replace HoodieRecordPayload

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3349:
-
Issue Type: Improvement  (was: Task)

> Revisit HoodieRecord API to be able to replace HoodieRecordPayload
> --
>
> Key: HUDI-3349
> URL: https://issues.apache.org/jira/browse/HUDI-3349
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Alexey Kudinkin
>Assignee: Alexey Kudinkin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.12.0
>
>
> From RFC-46:
> To promote `HoodieRecord` to a standardized API for interacting with a
> single record, we need to:
>  # Rebase usages of `HoodieRecordPayload` onto `HoodieRecord`
>  # Implement new standardized record-level APIs (like `getPartitionKey`,
> `getRecordKey`, etc.) in `HoodieRecord`



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3321) HFileWriter, HFileReader and HFileDataBlock should avoid hardcoded key field name

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3321:
-
Issue Type: Improvement  (was: Task)

> HFileWriter, HFileReader and HFileDataBlock should avoid hardcoded key field 
> name
> -
>
> Key: HUDI-3321
> URL: https://issues.apache.org/jira/browse/HUDI-3321
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Manoj Govindassamy
>Assignee: Ethan Guo
>Priority: Major
>  Labels: code-cleanup
> Fix For: 0.11.0
>
>
> Today HFileReader has a hardcoded key field name for the schema. This key
> field is used by HFileWriter and HFileDataBlock for the HFile key
> deduplication feature. When users want to use the HFile format, this
> hardcoded key field name can prevent the use case. We need a way to pass the
> writer/reader/query configs into the HFile storage layer so that the right
> key field name is used.
>  
> Related: HUDI-2763



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3321) HFileWriter, HFileReader and HFileDataBlock should avoid hardcoded key field name

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3321:
-
Component/s: code-quality

> HFileWriter, HFileReader and HFileDataBlock should avoid hardcoded key field 
> name
> -
>
> Key: HUDI-3321
> URL: https://issues.apache.org/jira/browse/HUDI-3321
> Project: Apache Hudi
>  Issue Type: Improvement
>  Components: code-quality
>Reporter: Manoj Govindassamy
>Assignee: Ethan Guo
>Priority: Major
>  Labels: code-cleanup
> Fix For: 0.11.0
>
>
> Today HFileReader has a hardcoded key field name for the schema. This key
> field is used by HFileWriter and HFileDataBlock for the HFile key
> deduplication feature. When users want to use the HFile format, this
> hardcoded key field name can prevent the use case. We need a way to pass the
> writer/reader/query configs into the HFile storage layer so that the right
> key field name is used.
>  
> Related: HUDI-2763



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3350) Create Engine-specific Implementations of `HoodieRecord`

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3350:
-
Issue Type: Improvement  (was: Task)

> Create Engine-specific Implementations of `HoodieRecord`
> 
>
> Key: HUDI-3350
> URL: https://issues.apache.org/jira/browse/HUDI-3350
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Alexey Kudinkin
>Assignee: Alexey Kudinkin
>Priority: Major
> Fix For: 0.12.0
>
>
> To achieve the goals of RFC-46, `HoodieRecord` will have to hold internal
> representations of the records based on the engine Hudi is being used with.
> For that we need to split `HoodieRecord` into an "interface" (or base class)
> and engine-specific implementations (holding the internal engine-specific
> representation of the payload).
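A hedged sketch of that split (hypothetical class names, tying back to the record sketch under HUDI-3349 above; Spark's `InternalRow` is assumed as one engine-native payload):

{code:java}
import org.apache.spark.sql.catalyst.InternalRow;

// Hypothetical sketch: a common base plus engine-specific subclasses that
// carry the engine's native payload representation.
abstract class EngineRecordSketch<T> {
  protected final String recordKey;
  protected final T data;

  protected EngineRecordSketch(String recordKey, T data) {
    this.recordKey = recordKey;
    this.data = data;
  }

  public String getRecordKey() { return recordKey; }
  public T getData() { return data; }
}

// Spark flavour: the payload is Spark's InternalRow, no Avro involved.
class SparkRecordSketch extends EngineRecordSketch<InternalRow> {
  SparkRecordSketch(String recordKey, InternalRow row) { super(recordKey, row); }
}
{code}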



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3305) Drop deprecated util HDFSParquetImporter

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3305:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> Drop deprecated util HDFSParquetImporter
> 
>
> Key: HUDI-3305
> URL: https://issues.apache.org/jira/browse/HUDI-3305
> Project: Apache Hudi
>  Issue Type: Task
>  Components: deltastreamer
>Reporter: Xianghu Wang
>Assignee: Xianghu Wang
>Priority: Major
> Fix For: 0.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3304) support partial update on mor table

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3304:
-
Issue Type: New Feature  (was: Improvement)

> support partial update on mor table 
> 
>
> Key: HUDI-3304
> URL: https://issues.apache.org/jira/browse/HUDI-3304
> Project: Apache Hudi
>  Issue Type: New Feature
>  Components: writer-core
>Reporter: Jian Feng
>Assignee: Jian Feng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.12.0
>
> Attachments: image2022-1-13_0-33-5.png
>
>
> h2. Current status
>  * OverwriteNonDefaultsWithLatestAvroPayload implements partial-update
> behavior in its combineAndGetUpdateValue method
>  * Spark SQL also has a 'MERGE INTO' syntax that supports partial update via
> ExpressionPayload
>  * neither OverwriteNonDefaultsWithLatestAvroPayload nor ExpressionPayload
> can handle partial update in the preCombine method, so they can only support
> partial update on COW tables
> h2. Solution
> Make the preCombine function also support partial update (this requires
> passing the schema as a parameter).
> !image2022-1-13_0-33-5.png|width=832,height=516!
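A hedged sketch of what a schema-aware partial-update preCombine could do (illustrative only; the real payload API takes different arguments, and the "newer non-null field wins" rule below is just one possible semantic):

{code:java}
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

// Hypothetical sketch: merge the newer record over the older one field by
// field, keeping the older value wherever the newer one is null.
public final class PartialUpdateSketch {
  private PartialUpdateSketch() {}

  public static GenericRecord preCombine(GenericRecord older, GenericRecord newer, Schema schema) {
    GenericRecord merged = new GenericData.Record(schema);
    for (Schema.Field field : schema.getFields()) {
      Object newerValue = newer.get(field.name());
      merged.put(field.name(), newerValue != null ? newerValue : older.get(field.name()));
    }
    return merged;
  }
}
{code}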



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3304) support partial update on mor table

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3304:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> support partial update on mor table 
> 
>
> Key: HUDI-3304
> URL: https://issues.apache.org/jira/browse/HUDI-3304
> Project: Apache Hudi
>  Issue Type: Improvement
>  Components: writer-core
>Reporter: Jian Feng
>Assignee: Jian Feng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.12.0
>
> Attachments: image2022-1-13_0-33-5.png
>
>
> h2. Current status
>  * OverwriteNonDefaultsWithLatestAvroPayload implements partial-update
> behavior in its combineAndGetUpdateValue method
>  * Spark SQL also has a 'MERGE INTO' syntax that supports partial update via
> ExpressionPayload
>  * neither OverwriteNonDefaultsWithLatestAvroPayload nor ExpressionPayload
> can handle partial update in the preCombine method, so they can only support
> partial update on COW tables
> h2. Solution
> Make the preCombine function also support partial update (this requires
> passing the schema as a parameter).
> !image2022-1-13_0-33-5.png|width=832,height=516!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3255) Add HoodieFlinkSink for flink datastream api

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3255:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> Add HoodieFlinkSink for flink datastream api
> 
>
> Key: HUDI-3255
> URL: https://issues.apache.org/jira/browse/HUDI-3255
> Project: Apache Hudi
>  Issue Type: New Feature
>  Components: flink
>Reporter: Forward Xu
>Assignee: Forward Xu
>Priority: Major
>  Labels: sev:normal
> Fix For: 0.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3255) Add HoodieFlinkSink for flink datastream api

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3255:
-
Labels:   (was: sev:normal)

> Add HoodieFlinkSink for flink datastream api
> 
>
> Key: HUDI-3255
> URL: https://issues.apache.org/jira/browse/HUDI-3255
> Project: Apache Hudi
>  Issue Type: New Feature
>  Components: flink
>Reporter: Forward Xu
>Assignee: Forward Xu
>Priority: Major
> Fix For: 0.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3251) Add HoodieFlinkSource for flink datastream api

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3251:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> Add HoodieFlinkSource for flink datastream api
> --
>
> Key: HUDI-3251
> URL: https://issues.apache.org/jira/browse/HUDI-3251
> Project: Apache Hudi
>  Issue Type: New Feature
>  Components: flink
>Reporter: Forward Xu
>Assignee: Forward Xu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.12.0
>
>
> {code:java}
> // code placeholder
> DataStream ds = HoodieFlinkSource.builder()
> .env(execEnv)
> .tableSchema(sourceContext.getSchema())
> .path(sourceContext.getPath())
> .partitionKeys(sourceContext.getPartitionKeys())
> .defaultPartName(sourceContext.getDefaultPartName())
> .conf(sourceContext.getConf())
> .requiredPartitions(sourceContext.getRequiredPartitions())
> .requiredPos(sourceContext.getRequiredPos())
> .limit(sourceContext.getLimit())
> .filters(sourceContext.getFilters())
> .build(); {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3026) HoodieAppendhandle may result in duplicate key for hbase index

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3026:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> HoodieAppendhandle may result in duplicate key for hbase index
> --
>
> Key: HUDI-3026
> URL: https://issues.apache.org/jira/browse/HUDI-3026
> Project: Apache Hudi
>  Issue Type: Bug
>Reporter: ZiyueGuan
>Assignee: ZiyueGuan
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.12.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Problem: the same key may occur in two file groups when the HBase index is
> used. These two file groups will have the same FileID prefix. As the HBase
> index is global, this is unexpected.
> How to repro:
> We need a table whose records are not sorted in Spark. Let's say we have five
> records with keys 1,2,3,4,5 to write. They may be iterated in a different
> order. In the first task attempt (attempt 1), we write three records 5,4,3 to
> fileID_1_log.1_attempt1, but this attempt fails. Spark retries in the second
> task attempt (attempt 2), and we write four records 1,2,3,4 to
> fileID_1_log.1_attempt2. Then we find this file group is large enough by
> calling canWrite, so Hudi writes record 5 to fileID_2_log.1_attempt2 and
> finishes the commit.
> When we do compaction, fileID_1_log.1_attempt1 and fileID_1_log.1_attempt2
> will both be compacted, so we finally get 543 + 1234 = 12345 in fileID_1
> while we also get 5 in fileID_2. Record 5 appears in two file groups.
> Reason: the marker file doesn't reconcile log files, as the code shows in
> [https://github.com/apache/hudi/blob/9a2030ab3190acf600ce4820be9a08929595763e/hudi-client/hudi-client-common/src/main/java/org/apache/hudi/table/HoodieTable.java#L553.]
> And log files are actually not fail-safe.
> I'm not sure whether [~danny0405] has found this problem too, as I see that
> FlinkAppendHandle had been made to always return true, but it was just
> changed back recently.
> Solution:
> We may have a quick fix by making canWrite in HoodieAppendHandle always
> return true. However, I think a more elegant solution is to use the append
> result to generate the compaction plan rather than listing log files, which
> gives more granular control at the log-block level instead of the log-file
> level.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3021) HoodieAppendHandle#appendDataAndDeleteBlocks writer object occurred NPE

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3021:
-
Fix Version/s: (was: 0.11.0)

> HoodieAppendHandle#appendDataAndDeleteBlocks writer object occurred NPE
> ---
>
> Key: HUDI-3021
> URL: https://issues.apache.org/jira/browse/HUDI-3021
> Project: Apache Hudi
>  Issue Type: Bug
>  Components: cli, Common Core, flink
>Affects Versions: 0.10.0
>Reporter: WangMinChao
>Assignee: WangMinChao
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2021-12-15-14-47-07-756.png, 
> image-2021-12-15-14-47-20-965.png
>
>
> !image-2021-12-15-14-47-20-965.png!!image-2021-12-15-14-47-07-756.png!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Closed] (HUDI-3021) HoodieAppendHandle#appendDataAndDeleteBlocks writer object occurred NPE

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu closed HUDI-3021.

Resolution: Fixed

> HoodieAppendHandle#appendDataAndDeleteBlocks writer object occurred NPE
> ---
>
> Key: HUDI-3021
> URL: https://issues.apache.org/jira/browse/HUDI-3021
> Project: Apache Hudi
>  Issue Type: Bug
>  Components: cli, Common Core, flink
>Affects Versions: 0.10.0
>Reporter: WangMinChao
>Assignee: WangMinChao
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2021-12-15-14-47-07-756.png, 
> image-2021-12-15-14-47-20-965.png
>
>
> !image-2021-12-15-14-47-20-965.png!!image-2021-12-15-14-47-07-756.png!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-2832) [Umbrella] [RFC-40] Implement SnowflakeSyncTool to support Hudi to Snowflake Integration

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-2832:
-
Priority: Blocker  (was: Major)

> [Umbrella] [RFC-40] Implement SnowflakeSyncTool to support Hudi to Snowflake 
> Integration
> 
>
> Key: HUDI-2832
> URL: https://issues.apache.org/jira/browse/HUDI-2832
> Project: Apache Hudi
>  Issue Type: Epic
>  Components: Common Core
>Reporter: Vinoth Govindarajan
>Assignee: Vinoth Govindarajan
>Priority: Blocker
>  Labels: BigQuery, Integration, pull-request-available
> Fix For: 0.12.0
>
>
> Snowflake is a fully managed service that’s simple to use but can power a
> near-unlimited number of concurrent workloads. Snowflake is a solution for
> data warehousing, data lakes, data engineering, data science, data
> application development, and securely sharing and consuming shared data.
> Snowflake [doesn’t
> support|https://docs.snowflake.com/en/sql-reference/sql/alter-file-format.html]
>  the Apache Hudi file format yet, but it has support for the Parquet, ORC,
> and Delta file formats. This proposal is to implement a SnowflakeSync,
> similar to HiveSync, to sync a Hudi table as a Snowflake external Parquet
> table so that users can query Hudi tables using Snowflake. Many users have
> asked, in Hudi and other support channels, to integrate Hudi with Snowflake;
> this will unlock new use cases for Hudi.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-2832) [Umbrella] [RFC-40] Implement SnowflakeSyncTool to support Hudi to Snowflake Integration

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-2832:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> [Umbrella] [RFC-40] Implement SnowflakeSyncTool to support Hudi to Snowflake 
> Integration
> 
>
> Key: HUDI-2832
> URL: https://issues.apache.org/jira/browse/HUDI-2832
> Project: Apache Hudi
>  Issue Type: Epic
>  Components: Common Core
>Reporter: Vinoth Govindarajan
>Assignee: Vinoth Govindarajan
>Priority: Major
>  Labels: BigQuery, Integration, pull-request-available
> Fix For: 0.12.0
>
>
> Snowflake is a fully managed service that’s simple to use but can power a
> near-unlimited number of concurrent workloads. Snowflake is a solution for
> data warehousing, data lakes, data engineering, data science, data
> application development, and securely sharing and consuming shared data.
> Snowflake [doesn’t
> support|https://docs.snowflake.com/en/sql-reference/sql/alter-file-format.html]
>  the Apache Hudi file format yet, but it has support for the Parquet, ORC,
> and Delta file formats. This proposal is to implement a SnowflakeSync,
> similar to HiveSync, to sync a Hudi table as a Snowflake external Parquet
> table so that users can query Hudi tables using Snowflake. Many users have
> asked, in Hudi and other support channels, to integrate Hudi with Snowflake;
> this will unlock new use cases for Hudi.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-1280) Add tool to capture earliest or latest offsets in kafka topics

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-1280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-1280:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> Add tool to capture earliest or latest offsets in kafka topics 
> ---
>
> Key: HUDI-1280
> URL: https://issues.apache.org/jira/browse/HUDI-1280
> Project: Apache Hudi
>  Issue Type: Improvement
>  Components: deltastreamer
>Reporter: Balaji Varadarajan
>Assignee: Trevorzhang
>Priority: Major
> Fix For: 0.12.0
>
>
> For bootstrapping cases using spark.write(), we need to capture offsets from
> the Kafka topic and use them as the checkpoint for subsequent reads from
> Kafka topics.
>  
> [https://github.com/apache/hudi/issues/1985]
> We need to build this integration for a smooth transition to DeltaStreamer.
>  
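For illustration, a minimal way to capture the earliest or latest offsets of a topic with the plain Kafka consumer API (a hedged sketch; the actual tool's interface and checkpoint format are not defined here):

{code:java}
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

// Hypothetical sketch: fetch earliest or latest offsets per partition so they
// can be persisted as a DeltaStreamer-style checkpoint.
// The props must include bootstrap.servers and byte-array deserializers.
public final class KafkaOffsetCaptureSketch {
  private KafkaOffsetCaptureSketch() {}

  public static Map<TopicPartition, Long> capture(Properties props, String topic, boolean latest) {
    try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
      List<TopicPartition> partitions = consumer.partitionsFor(topic).stream()
          .map(info -> new TopicPartition(topic, info.partition()))
          .collect(Collectors.toList());
      return latest ? consumer.endOffsets(partitions) : consumer.beginningOffsets(partitions);
    }
  }
}
{code}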



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-1280) Add tool to capture earliest or latest offsets in kafka topics

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-1280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-1280:
-
Issue Type: New Feature  (was: Improvement)

> Add tool to capture earliest or latest offsets in kafka topics 
> ---
>
> Key: HUDI-1280
> URL: https://issues.apache.org/jira/browse/HUDI-1280
> Project: Apache Hudi
>  Issue Type: New Feature
>  Components: deltastreamer
>Reporter: Balaji Varadarajan
>Assignee: Trevorzhang
>Priority: Major
> Fix For: 0.12.0
>
>
> For bootstrapping cases using spark.write(), we need to capture offsets from
> the Kafka topic and use them as the checkpoint for subsequent reads from
> Kafka topics.
>  
> [https://github.com/apache/hudi/issues/1985]
> We need to build this integration for a smooth transition to DeltaStreamer.
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-1602) Corrupted Avro schema extracted from parquet file

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-1602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-1602:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> Corrupted Avro schema extracted from parquet file
> -
>
> Key: HUDI-1602
> URL: https://issues.apache.org/jira/browse/HUDI-1602
> Project: Apache Hudi
>  Issue Type: Bug
>Affects Versions: 0.9.0
>Reporter: Alexander Filipchik
>Assignee: Nishith Agarwal
>Priority: Major
>  Labels: core-flow-ds, pull-request-available, sev:critical
> Fix For: 0.12.0
>
>
> We are running a Hudi DeltaStreamer on a very complex stream. The schema is
> deeply nested, with several levels of hierarchy (the Avro schema is around
> 6600 LOC).
>  
> The version of Hudi that writes the dataset is 0.5-SNAPSHOT, and we recently
> started attempting to upgrade to the latest. However, the latest Hudi can't
> read the provided dataset. The exception I get:
>  
>  
> {code:java}
> Got exception while parsing the arguments:Got exception while parsing the 
> arguments:Found recursive reference in Avro schema, which can not be 
> processed by Spark:{  "type" : "record",  "name" : "array",  "fields" : [ {   
>  "name" : "id",    "type" : [ "null", "string" ],    "default" : null  }, {   
>  "name" : "type",    "type" : [ "null", "string" ],    "default" : null  }, { 
>    "name" : "exist",    "type" : [ "null", "boolean" ],    "default" : null  
> } ]}          Stack 
> trace:org.apache.spark.sql.avro.IncompatibleSchemaException:Found recursive 
> reference in Avro schema, which can not be processed by Spark:{  "type" : 
> "record",  "name" : "array",  "fields" : [ {    "name" : "id",    "type" : [ 
> "null", "string" ],    "default" : null  }, {    "name" : "type",    "type" : 
> [ "null", "string" ],    "default" : null  }, {    "name" : "exist",    
> "type" : [ "null", "boolean" ],    "default" : null  } ]}
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:75)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:89)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:105)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:82)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:81)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:81)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:105)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:82)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:81)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:81)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:105)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:82)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:81)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.s

[jira] [Updated] (HUDI-1602) Corrupted Avro schema extracted from parquet file

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-1602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-1602:
-
Status: Open  (was: Patch Available)

> Corrupted Avro schema extracted from parquet file
> -
>
> Key: HUDI-1602
> URL: https://issues.apache.org/jira/browse/HUDI-1602
> Project: Apache Hudi
>  Issue Type: Bug
>Affects Versions: 0.9.0
>Reporter: Alexander Filipchik
>Assignee: Nishith Agarwal
>Priority: Major
>  Labels: core-flow-ds, pull-request-available, sev:critical
> Fix For: 0.12.0
>
>
> We are running a Hudi DeltaStreamer on a very complex stream. The schema is
> deeply nested, with several levels of hierarchy (the Avro schema is around
> 6600 LOC).
>  
> The version of Hudi that writes the dataset is 0.5-SNAPSHOT, and we recently
> started attempting to upgrade to the latest. However, the latest Hudi can't
> read the provided dataset. The exception I get:
>  
>  
> {code:java}
> Got exception while parsing the arguments:Got exception while parsing the 
> arguments:Found recursive reference in Avro schema, which can not be 
> processed by Spark:{  "type" : "record",  "name" : "array",  "fields" : [ {   
>  "name" : "id",    "type" : [ "null", "string" ],    "default" : null  }, {   
>  "name" : "type",    "type" : [ "null", "string" ],    "default" : null  }, { 
>    "name" : "exist",    "type" : [ "null", "boolean" ],    "default" : null  
> } ]}          Stack 
> trace:org.apache.spark.sql.avro.IncompatibleSchemaException:Found recursive 
> reference in Avro schema, which can not be processed by Spark:{  "type" : 
> "record",  "name" : "array",  "fields" : [ {    "name" : "id",    "type" : [ 
> "null", "string" ],    "default" : null  }, {    "name" : "type",    "type" : 
> [ "null", "string" ],    "default" : null  }, {    "name" : "exist",    
> "type" : [ "null", "boolean" ],    "default" : null  } ]}
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:75)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:89)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:105)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:82)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:81)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:81)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:105)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:82)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:81)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:81)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:105)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:82)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:81)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.c

[jira] [Commented] (HUDI-1602) Corrupted Avro schema extracted from parquet file

2022-03-29 Thread Raymond Xu (Jira)


[ 
https://issues.apache.org/jira/browse/HUDI-1602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17514258#comment-17514258
 ] 

Raymond Xu commented on HUDI-1602:
--

We should triage this and see if it is still a problem with the Spark 3.2 bundle.

> Corrupted Avro schema extracted from parquet file
> -
>
> Key: HUDI-1602
> URL: https://issues.apache.org/jira/browse/HUDI-1602
> Project: Apache Hudi
>  Issue Type: Bug
>Affects Versions: 0.9.0
>Reporter: Alexander Filipchik
>Assignee: Nishith Agarwal
>Priority: Major
>  Labels: core-flow-ds, pull-request-available, sev:critical
> Fix For: 0.12.0
>
>
> We are running a Hudi DeltaStreamer on a very complex stream. The schema is
> deeply nested, with several levels of hierarchy (the Avro schema is around
> 6600 LOC).
>  
> The version of Hudi that writes the dataset is 0.5-SNAPSHOT, and we recently
> started attempting to upgrade to the latest. However, the latest Hudi can't
> read the provided dataset. The exception I get:
>  
>  
> {code:java}
> Got exception while parsing the arguments:Got exception while parsing the 
> arguments:Found recursive reference in Avro schema, which can not be 
> processed by Spark:{  "type" : "record",  "name" : "array",  "fields" : [ {   
>  "name" : "id",    "type" : [ "null", "string" ],    "default" : null  }, {   
>  "name" : "type",    "type" : [ "null", "string" ],    "default" : null  }, { 
>    "name" : "exist",    "type" : [ "null", "boolean" ],    "default" : null  
> } ]}          Stack 
> trace:org.apache.spark.sql.avro.IncompatibleSchemaException:Found recursive 
> reference in Avro schema, which can not be processed by Spark:{  "type" : 
> "record",  "name" : "array",  "fields" : [ {    "name" : "id",    "type" : [ 
> "null", "string" ],    "default" : null  }, {    "name" : "type",    "type" : 
> [ "null", "string" ],    "default" : null  }, {    "name" : "exist",    
> "type" : [ "null", "boolean" ],    "default" : null  } ]}
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:75)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:89)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:105)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:82)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:81)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:81)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:105)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:82)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:81)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:81)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:105)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:82)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:81)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72

[jira] [Updated] (HUDI-1602) Corrupted Avro schema extracted from parquet file

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-1602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-1602:
-
Fix Version/s: 0.11.0
   (was: 0.12.0)

> Corrupted Avro schema extracted from parquet file
> -
>
> Key: HUDI-1602
> URL: https://issues.apache.org/jira/browse/HUDI-1602
> Project: Apache Hudi
>  Issue Type: Bug
>Affects Versions: 0.9.0
>Reporter: Alexander Filipchik
>Assignee: Nishith Agarwal
>Priority: Major
>  Labels: core-flow-ds, pull-request-available, sev:critical
> Fix For: 0.11.0
>
>
> We are running a Hudi DeltaStreamer on a very complex stream. The schema is
> deeply nested, with several levels of hierarchy (the Avro schema is around
> 6600 LOC).
>  
> The version of Hudi that writes the dataset is 0.5-SNAPSHOT, and we recently
> started attempting to upgrade to the latest. However, the latest Hudi can't
> read the provided dataset. The exception I get:
>  
>  
> {code:java}
> Got exception while parsing the arguments:Got exception while parsing the 
> arguments:Found recursive reference in Avro schema, which can not be 
> processed by Spark:{  "type" : "record",  "name" : "array",  "fields" : [ {   
>  "name" : "id",    "type" : [ "null", "string" ],    "default" : null  }, {   
>  "name" : "type",    "type" : [ "null", "string" ],    "default" : null  }, { 
>    "name" : "exist",    "type" : [ "null", "boolean" ],    "default" : null  
> } ]}          Stack 
> trace:org.apache.spark.sql.avro.IncompatibleSchemaException:Found recursive 
> reference in Avro schema, which can not be processed by Spark:{  "type" : 
> "record",  "name" : "array",  "fields" : [ {    "name" : "id",    "type" : [ 
> "null", "string" ],    "default" : null  }, {    "name" : "type",    "type" : 
> [ "null", "string" ],    "default" : null  }, {    "name" : "exist",    
> "type" : [ "null", "boolean" ],    "default" : null  } ]}
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:75)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:89)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:105)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:82)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:81)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:81)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:105)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:82)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:81)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:81)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:105)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:82)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:81)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.s

[jira] [Assigned] (HUDI-1602) Corrupted Avro schema extracted from parquet file

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-1602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu reassigned HUDI-1602:


Assignee: (was: Nishith Agarwal)

> Corrupted Avro schema extracted from parquet file
> -
>
> Key: HUDI-1602
> URL: https://issues.apache.org/jira/browse/HUDI-1602
> Project: Apache Hudi
>  Issue Type: Bug
>Affects Versions: 0.9.0
>Reporter: Alexander Filipchik
>Priority: Major
>  Labels: core-flow-ds, pull-request-available, sev:critical
> Fix For: 0.11.0
>
>
> we are running a HUDI deltastreamer on a very complex stream. Schema is 
> deeply nested, with several levels of hierarchy (avro schema is around 6600 
> LOC).
>  
> The version of HUDI that writes the dataset is 0.5-SNAPSHOT, and we recently 
> started attempting to upgrade to the latest. However, the latest HUDI can't read 
> the provided dataset. The exception I get: 
>  
>  
> {code:java}
> Got exception while parsing the arguments:Got exception while parsing the 
> arguments:Found recursive reference in Avro schema, which can not be 
> processed by Spark:{  "type" : "record",  "name" : "array",  "fields" : [ {   
>  "name" : "id",    "type" : [ "null", "string" ],    "default" : null  }, {   
>  "name" : "type",    "type" : [ "null", "string" ],    "default" : null  }, { 
>    "name" : "exist",    "type" : [ "null", "boolean" ],    "default" : null  
> } ]}          Stack 
> trace:org.apache.spark.sql.avro.IncompatibleSchemaException:Found recursive 
> reference in Avro schema, which can not be processed by Spark:{  "type" : 
> "record",  "name" : "array",  "fields" : [ {    "name" : "id",    "type" : [ 
> "null", "string" ],    "default" : null  }, {    "name" : "type",    "type" : 
> [ "null", "string" ],    "default" : null  }, {    "name" : "exist",    
> "type" : [ "null", "boolean" ],    "default" : null  } ]}
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:75)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:89)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:105)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:82)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:81)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:81)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:105)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:82)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:81)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:81)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$.toSqlTypeHelper(SchemaConverters.scala:105)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:82)
>  at 
> org.apache.spark.sql.avro.SchemaConverters$$anonfun$1.apply(SchemaConverters.scala:81)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableLike$class.
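
For readers unfamiliar with the failure mode, a minimal, hypothetical reproduction of the
schema shape Spark rejects (names are illustrative): Avro itself accepts a record that
references its own type by name, but Spark's SchemaConverters cannot map such a schema to a
StructType and throws IncompatibleSchemaException, as in the trace quoted above.

{code:java}
// Hypothetical sketch: a self-referencing Avro record that is valid Avro but is
// rejected by Spark's SchemaConverters with "Found recursive reference in Avro schema".
import org.apache.avro.Schema;

public class RecursiveAvroSchemaExample {
  public static void main(String[] args) {
    String json =
        "{\"type\":\"record\",\"name\":\"Node\",\"fields\":["
            + "{\"name\":\"id\",\"type\":[\"null\",\"string\"],\"default\":null},"
            + "{\"name\":\"child\",\"type\":[\"null\",\"Node\"],\"default\":null}]}";
    Schema schema = new Schema.Parser().parse(json);   // parses fine in plain Avro
    System.out.println(schema.toString(true));
    // Passing this schema to org.apache.spark.sql.avro.SchemaConverters (as in the
    // stack trace above) is what produces the IncompatibleSchemaException.
  }
}
{code}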

[jira] [Updated] (HUDI-1864) Support for java.time.LocalDate in TimestampBasedAvroKeyGenerator

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-1864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-1864:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> Support for java.time.LocalDate in TimestampBasedAvroKeyGenerator
> -
>
> Key: HUDI-1864
> URL: https://issues.apache.org/jira/browse/HUDI-1864
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Vaibhav Sinha
>Assignee: Vaibhav Sinha
>Priority: Major
>  Labels: pull-request-available, query-eng, sev:high
> Fix For: 0.12.0
>
>
> When we read data from MySQL which has a column of type {{Date}}, Spark 
> represents it as an instance of {{java.time.LocalDate}}. If I try and use 
> this column for partitioning while doing a write to Hudi, I get the following 
> exception
>  
> {code:java}
> Caused by: org.apache.hudi.exception.HoodieKeyGeneratorException: Unable to 
> parse input partition field :2021-04-21
>   at 
> org.apache.hudi.keygen.TimestampBasedAvroKeyGenerator.getPartitionPath(TimestampBasedAvroKeyGenerator.java:136)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.CustomAvroKeyGenerator.getPartitionPath(CustomAvroKeyGenerator.java:89)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.CustomKeyGenerator.getPartitionPath(CustomKeyGenerator.java:64)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.BaseKeyGenerator.getKey(BaseKeyGenerator.java:62) 
> ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.HoodieSparkSqlWriter$.$anonfun$write$2(HoodieSparkSqlWriter.scala:160)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at scala.collection.Iterator$$anon$10.next(Iterator.scala:459) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator$SliceIterator.next(Iterator.scala:271) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator.foreach(Iterator.scala:941) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator.foreach$(Iterator.scala:941) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.to(TraversableOnce.scala:315) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.to$(TraversableOnce.scala:313) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.to(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:307) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:307) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:294) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:288) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.toArray(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at org.apache.spark.rdd.RDD.$anonfun$take$2(RDD.scala:1449) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at 
> org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2242) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at org.apache.spark.scheduler.Task.run(Task.scala:131) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at 
> org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
>  ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[?:1.8.0_171]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ~[?:1.8.0_171]
>   at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_171]
>
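
Until the key generator understands java.time.LocalDate natively, one workaround is to
pre-convert the date column to an epoch value before the Hudi write. A minimal sketch,
assuming the Hudi Spark bundle is on the classpath; table and column names are illustrative,
and the time-based key generator option keys follow the 0.8.x naming, so they should be
verified against the Hudi version in use.

{code:java}
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.unix_timestamp;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class LocalDatePartitionWorkaround {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("hudi-localdate-partition-workaround")
        .master("local[*]")
        .getOrCreate();

    // Stand-in for a JDBC read from MySQL: 'created_date' is a DATE column that Spark
    // surfaces as java.time.LocalDate.
    Dataset<Row> df = spark.sql("select 1 as id, current_date() as created_date");

    // Convert the date to epoch milliseconds so the timestamp-based key generator
    // receives a numeric value it already knows how to parse.
    Dataset<Row> withTs = df.withColumn(
        "partition_ts", unix_timestamp(col("created_date")).multiply(1000).cast("long"));

    withTs.write().format("hudi")
        .option("hoodie.table.name", "example_table")                 // hypothetical table
        .option("hoodie.datasource.write.recordkey.field", "id")
        .option("hoodie.datasource.write.precombine.field", "partition_ts")
        .option("hoodie.datasource.write.partitionpath.field", "partition_ts")
        .option("hoodie.datasource.write.keygenerator.class",
            "org.apache.hudi.keygen.TimestampBasedKeyGenerator")
        // Option keys below may differ across Hudi versions; verify before use.
        .option("hoodie.deltastreamer.keygen.timebased.timestamp.type", "EPOCHMILLISECONDS")
        .option("hoodie.deltastreamer.keygen.timebased.output.dateformat", "yyyy/MM/dd")
        .mode("append")
        .save("/tmp/hudi/example_table");
  }
}
{code}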

[jira] [Assigned] (HUDI-1864) Support for java.time.LocalDate in TimestampBasedAvroKeyGenerator

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-1864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu reassigned HUDI-1864:


Assignee: sivabalan narayanan  (was: Vaibhav Sinha)

> Support for java.time.LocalDate in TimestampBasedAvroKeyGenerator
> -
>
> Key: HUDI-1864
> URL: https://issues.apache.org/jira/browse/HUDI-1864
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Vaibhav Sinha
>Assignee: sivabalan narayanan
>Priority: Major
>  Labels: pull-request-available, query-eng, sev:high
> Fix For: 0.12.0
>
>
> When we read data from MySQL which has a column of type {{Date}}, Spark 
> represents it as an instance of {{java.time.LocalDate}}. If I try and use 
> this column for partitioning while doing a write to Hudi, I get the following 
> exception
>  
> {code:java}
> Caused by: org.apache.hudi.exception.HoodieKeyGeneratorException: Unable to 
> parse input partition field :2021-04-21
>   at 
> org.apache.hudi.keygen.TimestampBasedAvroKeyGenerator.getPartitionPath(TimestampBasedAvroKeyGenerator.java:136)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.CustomAvroKeyGenerator.getPartitionPath(CustomAvroKeyGenerator.java:89)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.CustomKeyGenerator.getPartitionPath(CustomKeyGenerator.java:64)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.BaseKeyGenerator.getKey(BaseKeyGenerator.java:62) 
> ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.HoodieSparkSqlWriter$.$anonfun$write$2(HoodieSparkSqlWriter.scala:160)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at scala.collection.Iterator$$anon$10.next(Iterator.scala:459) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator$SliceIterator.next(Iterator.scala:271) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator.foreach(Iterator.scala:941) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator.foreach$(Iterator.scala:941) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.to(TraversableOnce.scala:315) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.to$(TraversableOnce.scala:313) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.to(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:307) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:307) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:294) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:288) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.toArray(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at org.apache.spark.rdd.RDD.$anonfun$take$2(RDD.scala:1449) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at 
> org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2242) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at org.apache.spark.scheduler.Task.run(Task.scala:131) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at 
> org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
>  ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[?:1.8.0_171]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ~[?:1.8.0_171]
>   at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_

[jira] (HUDI-1864) Support for java.time.LocalDate in TimestampBasedAvroKeyGenerator

2022-03-29 Thread Raymond Xu (Jira)


[ https://issues.apache.org/jira/browse/HUDI-1864 ]


Raymond Xu deleted comment on HUDI-1864:
--

was (Author: githubbot):
hudi-bot edited a comment on pull request #2923:
URL: https://github.com/apache/hudi/pull/2923#issuecomment-865487972


   
   ## CI report:
   
   * 1e50c60ed78027605aa0f3765eb504824e041910 Azure: 
[FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=337)
 
   
   
   Bot commands
 @hudi-bot supports the following commands:
   
- `@hudi-bot run travis` re-run the last Travis build
- `@hudi-bot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support for java.time.LocalDate in TimestampBasedAvroKeyGenerator
> -
>
> Key: HUDI-1864
> URL: https://issues.apache.org/jira/browse/HUDI-1864
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Vaibhav Sinha
>Assignee: sivabalan narayanan
>Priority: Major
>  Labels: pull-request-available, query-eng, sev:high
> Fix For: 0.12.0
>
>
> When we read data from MySQL which has a column of type {{Date}}, Spark 
> represents it as an instance of {{java.time.LocalDate}}. If I try and use 
> this column for partitioning while doing a write to Hudi, I get the following 
> exception
>  
> {code:java}
> Caused by: org.apache.hudi.exception.HoodieKeyGeneratorException: Unable to 
> parse input partition field :2021-04-21
>   at 
> org.apache.hudi.keygen.TimestampBasedAvroKeyGenerator.getPartitionPath(TimestampBasedAvroKeyGenerator.java:136)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.CustomAvroKeyGenerator.getPartitionPath(CustomAvroKeyGenerator.java:89)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.CustomKeyGenerator.getPartitionPath(CustomKeyGenerator.java:64)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.BaseKeyGenerator.getKey(BaseKeyGenerator.java:62) 
> ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.HoodieSparkSqlWriter$.$anonfun$write$2(HoodieSparkSqlWriter.scala:160)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at scala.collection.Iterator$$anon$10.next(Iterator.scala:459) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator$SliceIterator.next(Iterator.scala:271) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator.foreach(Iterator.scala:941) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator.foreach$(Iterator.scala:941) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.to(TraversableOnce.scala:315) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.to$(TraversableOnce.scala:313) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.to(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:307) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:307) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:294) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:288) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.toArray(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at org.apache.spark.rdd.RDD.$anonfun$take$2(RDD.scala:1449) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at 
> org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2242) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   

[jira] (HUDI-1864) Support for java.time.LocalDate in TimestampBasedAvroKeyGenerator

2022-03-29 Thread Raymond Xu (Jira)


[ https://issues.apache.org/jira/browse/HUDI-1864 ]


Raymond Xu deleted comment on HUDI-1864:
--

was (Author: githubbot):
hudi-bot edited a comment on pull request #2923:
URL: https://github.com/apache/hudi/pull/2923#issuecomment-865487972


   
   ## CI report:
   
   * 1e50c60ed78027605aa0f3765eb504824e041910 Azure: 
[FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=337)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support for java.time.LocalDate in TimestampBasedAvroKeyGenerator
> -
>
> Key: HUDI-1864
> URL: https://issues.apache.org/jira/browse/HUDI-1864
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Vaibhav Sinha
>Assignee: sivabalan narayanan
>Priority: Major
>  Labels: pull-request-available, query-eng, sev:high
> Fix For: 0.12.0
>
>
> When we read data from MySQL which has a column of type {{Date}}, Spark 
> represents it as an instance of {{java.time.LocalDate}}. If I try and use 
> this column for partitioning while doing a write to Hudi, I get the following 
> exception
>  
> {code:java}
> Caused by: org.apache.hudi.exception.HoodieKeyGeneratorException: Unable to 
> parse input partition field :2021-04-21
>   at 
> org.apache.hudi.keygen.TimestampBasedAvroKeyGenerator.getPartitionPath(TimestampBasedAvroKeyGenerator.java:136)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.CustomAvroKeyGenerator.getPartitionPath(CustomAvroKeyGenerator.java:89)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.CustomKeyGenerator.getPartitionPath(CustomKeyGenerator.java:64)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.BaseKeyGenerator.getKey(BaseKeyGenerator.java:62) 
> ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.HoodieSparkSqlWriter$.$anonfun$write$2(HoodieSparkSqlWriter.scala:160)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at scala.collection.Iterator$$anon$10.next(Iterator.scala:459) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator$SliceIterator.next(Iterator.scala:271) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator.foreach(Iterator.scala:941) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator.foreach$(Iterator.scala:941) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.to(TraversableOnce.scala:315) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.to$(TraversableOnce.scala:313) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.to(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:307) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:307) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:294) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:288) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.toArray(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at org.apache.spark.rdd.RDD.$anonfun$take$2(RDD.scala:1449) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at 
> org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2242) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) 
> ~[spark-core_2.12-3.1.1.jar:3.1

[jira] (HUDI-1864) Support for java.time.LocalDate in TimestampBasedAvroKeyGenerator

2022-03-29 Thread Raymond Xu (Jira)


[ https://issues.apache.org/jira/browse/HUDI-1864 ]


Raymond Xu deleted comment on HUDI-1864:
--

was (Author: githubbot):
n3nash commented on pull request #2923:
URL: https://github.com/apache/hudi/pull/2923#issuecomment-872765957


   @vaibhav-sinha For your second suggestion, what would we represent the 
`timestamp`-based LogicalType as instead of `LocalDateTime`? Are you 
suggesting using long? 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support for java.time.LocalDate in TimestampBasedAvroKeyGenerator
> -
>
> Key: HUDI-1864
> URL: https://issues.apache.org/jira/browse/HUDI-1864
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Vaibhav Sinha
>Assignee: sivabalan narayanan
>Priority: Major
>  Labels: pull-request-available, query-eng, sev:high
> Fix For: 0.12.0
>
>
> When we read data from MySQL which has a column of type {{Date}}, Spark 
> represents it as an instance of {{java.time.LocalDate}}. If I try and use 
> this column for partitioning while doing a write to Hudi, I get the following 
> exception
>  
> {code:java}
> Caused by: org.apache.hudi.exception.HoodieKeyGeneratorException: Unable to 
> parse input partition field :2021-04-21
>   at 
> org.apache.hudi.keygen.TimestampBasedAvroKeyGenerator.getPartitionPath(TimestampBasedAvroKeyGenerator.java:136)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.CustomAvroKeyGenerator.getPartitionPath(CustomAvroKeyGenerator.java:89)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.CustomKeyGenerator.getPartitionPath(CustomKeyGenerator.java:64)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.BaseKeyGenerator.getKey(BaseKeyGenerator.java:62) 
> ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.HoodieSparkSqlWriter$.$anonfun$write$2(HoodieSparkSqlWriter.scala:160)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at scala.collection.Iterator$$anon$10.next(Iterator.scala:459) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator$SliceIterator.next(Iterator.scala:271) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator.foreach(Iterator.scala:941) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator.foreach$(Iterator.scala:941) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.to(TraversableOnce.scala:315) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.to$(TraversableOnce.scala:313) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.to(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:307) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:307) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:294) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:288) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.toArray(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at org.apache.spark.rdd.RDD.$anonfun$take$2(RDD.scala:1449) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at 
> org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2242) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at org.apache.spark.scheduler.Task.run(Task.scala:131) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at 
> org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
>  ~[spark-core_2.12-3.1.1.jar:3.1.1]
>  

[jira] (HUDI-1864) Support for java.time.LocalDate in TimestampBasedAvroKeyGenerator

2022-03-29 Thread Raymond Xu (Jira)


[ https://issues.apache.org/jira/browse/HUDI-1864 ]


Raymond Xu deleted comment on HUDI-1864:
--

was (Author: githubbot):
hudi-bot edited a comment on pull request #3039:
URL: https://github.com/apache/hudi/pull/3039#issuecomment-914648327


   
   ## CI report:
   
   * 582c64d8330d0827eb7e06784db004dec4658749 Azure: 
[PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=2087)
 
   
   
   Bot commands
 @hudi-bot supports the following commands:
   
- `@hudi-bot run travis` re-run the last Travis build
- `@hudi-bot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support for java.time.LocalDate in TimestampBasedAvroKeyGenerator
> -
>
> Key: HUDI-1864
> URL: https://issues.apache.org/jira/browse/HUDI-1864
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Vaibhav Sinha
>Assignee: sivabalan narayanan
>Priority: Major
>  Labels: pull-request-available, query-eng, sev:high
> Fix For: 0.12.0
>
>
> When we read data from MySQL which has a column of type {{Date}}, Spark 
> represents it as an instance of {{java.time.LocalDate}}. If I try and use 
> this column for partitioning while doing a write to Hudi, I get the following 
> exception
>  
> {code:java}
> Caused by: org.apache.hudi.exception.HoodieKeyGeneratorException: Unable to 
> parse input partition field :2021-04-21
>   at 
> org.apache.hudi.keygen.TimestampBasedAvroKeyGenerator.getPartitionPath(TimestampBasedAvroKeyGenerator.java:136)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.CustomAvroKeyGenerator.getPartitionPath(CustomAvroKeyGenerator.java:89)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.CustomKeyGenerator.getPartitionPath(CustomKeyGenerator.java:64)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.BaseKeyGenerator.getKey(BaseKeyGenerator.java:62) 
> ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.HoodieSparkSqlWriter$.$anonfun$write$2(HoodieSparkSqlWriter.scala:160)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at scala.collection.Iterator$$anon$10.next(Iterator.scala:459) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator$SliceIterator.next(Iterator.scala:271) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator.foreach(Iterator.scala:941) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator.foreach$(Iterator.scala:941) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.to(TraversableOnce.scala:315) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.to$(TraversableOnce.scala:313) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.to(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:307) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:307) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:294) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:288) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.toArray(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at org.apache.spark.rdd.RDD.$anonfun$take$2(RDD.scala:1449) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at 
> org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2242) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>  

[jira] (HUDI-1864) Support for java.time.LocalDate in TimestampBasedAvroKeyGenerator

2022-03-29 Thread Raymond Xu (Jira)


[ https://issues.apache.org/jira/browse/HUDI-1864 ]


Raymond Xu deleted comment on HUDI-1864:
--

was (Author: githubbot):
hudi-bot edited a comment on pull request #3039:
URL: https://github.com/apache/hudi/pull/3039#issuecomment-914648327


   
   ## CI report:
   
   * 582c64d8330d0827eb7e06784db004dec4658749 Azure: 
[FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=2087)
 
   
   
   Bot commands
 @hudi-bot supports the following commands:
   
- `@hudi-bot run travis` re-run the last Travis build
- `@hudi-bot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support for java.time.LocalDate in TimestampBasedAvroKeyGenerator
> -
>
> Key: HUDI-1864
> URL: https://issues.apache.org/jira/browse/HUDI-1864
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Vaibhav Sinha
>Assignee: sivabalan narayanan
>Priority: Major
>  Labels: pull-request-available, query-eng, sev:high
> Fix For: 0.12.0
>
>
> When we read data from MySQL which has a column of type {{Date}}, Spark 
> represents it as an instance of {{java.time.LocalDate}}. If I try and use 
> this column for partitioning while doing a write to Hudi, I get the following 
> exception
>  
> {code:java}
> Caused by: org.apache.hudi.exception.HoodieKeyGeneratorException: Unable to 
> parse input partition field :2021-04-21
>   at 
> org.apache.hudi.keygen.TimestampBasedAvroKeyGenerator.getPartitionPath(TimestampBasedAvroKeyGenerator.java:136)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.CustomAvroKeyGenerator.getPartitionPath(CustomAvroKeyGenerator.java:89)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.CustomKeyGenerator.getPartitionPath(CustomKeyGenerator.java:64)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.BaseKeyGenerator.getKey(BaseKeyGenerator.java:62) 
> ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.HoodieSparkSqlWriter$.$anonfun$write$2(HoodieSparkSqlWriter.scala:160)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at scala.collection.Iterator$$anon$10.next(Iterator.scala:459) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator$SliceIterator.next(Iterator.scala:271) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator.foreach(Iterator.scala:941) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator.foreach$(Iterator.scala:941) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.to(TraversableOnce.scala:315) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.to$(TraversableOnce.scala:313) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.to(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:307) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:307) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:294) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:288) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.toArray(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at org.apache.spark.rdd.RDD.$anonfun$take$2(RDD.scala:1449) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at 
> org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2242) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>  

[jira] (HUDI-1864) Support for java.time.LocalDate in TimestampBasedAvroKeyGenerator

2022-03-29 Thread Raymond Xu (Jira)


[ https://issues.apache.org/jira/browse/HUDI-1864 ]


Raymond Xu deleted comment on HUDI-1864:
--

was (Author: githubbot):
vaibhav-sinha commented on pull request #2923:
URL: https://github.com/apache/hudi/pull/2923#issuecomment-872778115


   @n3nash Essentially we did two things in this PR. We changed the 
representation of the `timestamp`-based LogicalType and added support for 
Date/Timestamp-based objects in `TimestampBasedAvroKeyGenerator`. Both of these 
are independent of each other.
   
   Hence, we can continue using `long` to represent `timestamp` if the memory 
footprint is a concern. In that case, I would suggest we represent 
`Date` as `integer` just to be consistent with the representations.
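
For context on the representations being discussed: the Avro specification models the
`date` logical type as an int (days since the Unix epoch) and `timestamp-millis` as a
long. A minimal illustration (class and variable names are mine, not from the PR):

{code:java}
import java.time.Instant;
import java.time.LocalDate;

public class AvroLogicalTypeRepresentation {
  public static void main(String[] args) {
    LocalDate date = LocalDate.parse("2021-04-21");
    int avroDate = (int) date.toEpochDay();                   // Avro `date`: days since epoch
    long avroTimestampMillis = Instant.now().toEpochMilli();  // Avro `timestamp-millis`: long
    System.out.println("date -> " + avroDate + ", timestamp-millis -> " + avroTimestampMillis);
  }
}
{code}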


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support for java.time.LocalDate in TimestampBasedAvroKeyGenerator
> -
>
> Key: HUDI-1864
> URL: https://issues.apache.org/jira/browse/HUDI-1864
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Vaibhav Sinha
>Assignee: sivabalan narayanan
>Priority: Major
>  Labels: pull-request-available, query-eng, sev:high
> Fix For: 0.12.0
>
>
> When we read data from MySQL which has a column of type {{Date}}, Spark 
> represents it as an instance of {{java.time.LocalDate}}. If I try and use 
> this column for partitioning while doing a write to Hudi, I get the following 
> exception
>  
> {code:java}
> Caused by: org.apache.hudi.exception.HoodieKeyGeneratorException: Unable to 
> parse input partition field :2021-04-21
>   at 
> org.apache.hudi.keygen.TimestampBasedAvroKeyGenerator.getPartitionPath(TimestampBasedAvroKeyGenerator.java:136)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.CustomAvroKeyGenerator.getPartitionPath(CustomAvroKeyGenerator.java:89)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.CustomKeyGenerator.getPartitionPath(CustomKeyGenerator.java:64)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.BaseKeyGenerator.getKey(BaseKeyGenerator.java:62) 
> ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.HoodieSparkSqlWriter$.$anonfun$write$2(HoodieSparkSqlWriter.scala:160)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at scala.collection.Iterator$$anon$10.next(Iterator.scala:459) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator$SliceIterator.next(Iterator.scala:271) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator.foreach(Iterator.scala:941) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator.foreach$(Iterator.scala:941) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.to(TraversableOnce.scala:315) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.to$(TraversableOnce.scala:313) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.to(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:307) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:307) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:294) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:288) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.toArray(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at org.apache.spark.rdd.RDD.$anonfun$take$2(RDD.scala:1449) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at 
> org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2242) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at org.apache.spark.schedule

[jira] (HUDI-1864) Support for java.time.LocalDate in TimestampBasedAvroKeyGenerator

2022-03-29 Thread Raymond Xu (Jira)


[ https://issues.apache.org/jira/browse/HUDI-1864 ]


Raymond Xu deleted comment on HUDI-1864:
--

was (Author: githubbot):
hudi-bot commented on pull request #3039:
URL: https://github.com/apache/hudi/pull/3039#issuecomment-914648327


   
   ## CI report:
   
   * 582c64d8330d0827eb7e06784db004dec4658749 UNKNOWN
   
   
   Bot commands
 @hudi-bot supports the following commands:
   
- `@hudi-bot run travis` re-run the last Travis build
- `@hudi-bot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Support for java.time.LocalDate in TimestampBasedAvroKeyGenerator
> -
>
> Key: HUDI-1864
> URL: https://issues.apache.org/jira/browse/HUDI-1864
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Vaibhav Sinha
>Assignee: sivabalan narayanan
>Priority: Major
>  Labels: pull-request-available, query-eng, sev:high
> Fix For: 0.12.0
>
>
> When we read data from MySQL which has a column of type {{Date}}, Spark 
> represents it as an instance of {{java.time.LocalDate}}. If I try and use 
> this column for partitioning while doing a write to Hudi, I get the following 
> exception
>  
> {code:java}
> Caused by: org.apache.hudi.exception.HoodieKeyGeneratorException: Unable to 
> parse input partition field :2021-04-21
>   at 
> org.apache.hudi.keygen.TimestampBasedAvroKeyGenerator.getPartitionPath(TimestampBasedAvroKeyGenerator.java:136)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.CustomAvroKeyGenerator.getPartitionPath(CustomAvroKeyGenerator.java:89)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.CustomKeyGenerator.getPartitionPath(CustomKeyGenerator.java:64)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.keygen.BaseKeyGenerator.getKey(BaseKeyGenerator.java:62) 
> ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at 
> org.apache.hudi.HoodieSparkSqlWriter$.$anonfun$write$2(HoodieSparkSqlWriter.scala:160)
>  ~[hudi-spark3-bundle_2.12-0.8.0.jar:0.8.0]
>   at scala.collection.Iterator$$anon$10.next(Iterator.scala:459) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator$SliceIterator.next(Iterator.scala:271) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator.foreach(Iterator.scala:941) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.Iterator.foreach$(Iterator.scala:941) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.to(TraversableOnce.scala:315) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.to$(TraversableOnce.scala:313) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.to(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:307) 
> ~[scala-library-2.12.10.jar:?]
>   at 
> scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:307) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:294) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:288) 
> ~[scala-library-2.12.10.jar:?]
>   at scala.collection.AbstractIterator.toArray(Iterator.scala:1429) 
> ~[scala-library-2.12.10.jar:?]
>   at org.apache.spark.rdd.RDD.$anonfun$take$2(RDD.scala:1449) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at 
> org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2242) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at org.apache.spark.scheduler.Task.run(Task.scala:131) 
> ~[spark-core_2.12-3.1.1.jar:3.1.1]
>   at 
> org.apache.spa

[jira] [Updated] (HUDI-1872) Move HoodieFlinkStreamer into hudi-utilities module

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-1872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-1872:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> Move HoodieFlinkStreamer into hudi-utilities module
> ---
>
> Key: HUDI-1872
> URL: https://issues.apache.org/jira/browse/HUDI-1872
> Project: Apache Hudi
>  Issue Type: Task
>  Components: flink
>Reporter: Danny Chen
>Assignee: Vinay
>Priority: Major
>  Labels: pull-request-available, sev:normal
> Fix For: 0.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-2173) Enhancing DynamoDB based LockProvider

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-2173:
-
Issue Type: New Feature  (was: Task)

> Enhancing DynamoDB based LockProvider
> -
>
> Key: HUDI-2173
> URL: https://issues.apache.org/jira/browse/HUDI-2173
> Project: Apache Hudi
>  Issue Type: New Feature
>  Components: writer-core
>Reporter: Vinoth Chandar
>Assignee: Dave Hagman
>Priority: Major
> Fix For: 0.11.0
>
>
> Currently, we have ZK- and HMS-based lock providers, which are limited to 
> coordinating within a single EMR or Hadoop cluster. 
> For AWS users, DynamoDB is a readily available, fully managed, geo-replicated 
> datastore that can be used to hold locks spanning EMR/Hadoop clusters. 
> This effort involves supporting a new `DynamoDB` lock provider that 
> implements org.apache.hudi.common.lock.LockProvider. We can place the 
> implementation itself in hudi-client-common, so it can be used across Spark, 
> Flink, Deltastreamer, etc. 
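
As a rough sketch of the underlying DynamoDB pattern such a provider could build on (not
Hudi's actual implementation; table, attribute, and method names are assumptions): a
conditional PutItem acquires the lock only when no item with the same key exists, and
DeleteItem releases it.

{code:java}
import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.ConditionalCheckFailedException;
import software.amazon.awssdk.services.dynamodb.model.DeleteItemRequest;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;

public class DynamoDbTableLockSketch {
  private final DynamoDbClient dynamo = DynamoDbClient.create();
  private final String lockTable = "hudi_locks";       // hypothetical lock table
  private final String lockKey = "s3://bucket/table";  // one lock row per Hudi table

  /** Try to acquire the table lock; returns false if another writer already holds it. */
  public boolean tryLock(String owner) {
    try {
      dynamo.putItem(PutItemRequest.builder()
          .tableName(lockTable)
          .item(Map.of(
              "lock_key", AttributeValue.builder().s(lockKey).build(),
              "owner", AttributeValue.builder().s(owner).build()))
          .conditionExpression("attribute_not_exists(lock_key)")  // only if no one holds it
          .build());
      return true;
    } catch (ConditionalCheckFailedException held) {
      return false;
    }
  }

  /** Release the lock by deleting the row. */
  public void unlock() {
    dynamo.deleteItem(DeleteItemRequest.builder()
        .tableName(lockTable)
        .key(Map.of("lock_key", AttributeValue.builder().s(lockKey).build()))
        .build());
  }
}
{code}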



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (HUDI-2173) Enhancing DynamoDB based LockProvider

2022-03-29 Thread Raymond Xu (Jira)


[ 
https://issues.apache.org/jira/browse/HUDI-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17514261#comment-17514261
 ] 

Raymond Xu commented on HUDI-2173:
--

hey [~dave_hagman] any update on this work? 

> Enhancing DynamoDB based LockProvider
> -
>
> Key: HUDI-2173
> URL: https://issues.apache.org/jira/browse/HUDI-2173
> Project: Apache Hudi
>  Issue Type: Task
>  Components: writer-core
>Reporter: Vinoth Chandar
>Assignee: Dave Hagman
>Priority: Major
> Fix For: 0.11.0
>
>
> Currently, we have ZK- and HMS-based lock providers, which are limited to 
> coordinating within a single EMR or Hadoop cluster. 
> For AWS users, DynamoDB is a readily available, fully managed, geo-replicated 
> datastore that can be used to hold locks spanning EMR/Hadoop clusters. 
> This effort involves supporting a new `DynamoDB` lock provider that 
> implements org.apache.hudi.common.lock.LockProvider. We can place the 
> implementation itself in hudi-client-common, so it can be used across Spark, 
> Flink, Deltastreamer, etc. 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-2808) Supports deduplication for streaming write

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-2808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-2808:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> Supports deduplication for streaming write
> --
>
> Key: HUDI-2808
> URL: https://issues.apache.org/jira/browse/HUDI-2808
> Project: Apache Hudi
>  Issue Type: New Feature
>  Components: flink
>Reporter: WangMinChao
>Assignee: WangMinChao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.12.0
>
>
> Currently, a Flink changelog stream written to a MOR table can be 
> deduplicated during batch reading, but it will not be deduplicated during 
> streaming reading. However, many users hope that streaming reads can also 
> achieve upsert semantics.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-2873) Support optimize data layout by sql and make the build more fast

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-2873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-2873:
-
Issue Type: New Feature  (was: Task)

> Support optimize data layout by sql and make the build more fast
> 
>
> Key: HUDI-2873
> URL: https://issues.apache.org/jira/browse/HUDI-2873
> Project: Apache Hudi
>  Issue Type: New Feature
>  Components: performance, spark
>Reporter: tao meng
>Assignee: shibei
>Priority: Critical
>  Labels: sev:high
> Fix For: 0.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-2873) Support optimize data layout by sql and make the build more fast

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-2873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-2873:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> Support optimize data layout by sql and make the build more fast
> 
>
> Key: HUDI-2873
> URL: https://issues.apache.org/jira/browse/HUDI-2873
> Project: Apache Hudi
>  Issue Type: Task
>  Components: performance, spark
>Reporter: tao meng
>Assignee: shibei
>Priority: Critical
>  Labels: sev:high
> Fix For: 0.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-2873) Support optimize data layout by sql and make the build more fast

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-2873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-2873:
-
Labels:   (was: sev:high)

> Support optimize data layout by sql and make the build more fast
> 
>
> Key: HUDI-2873
> URL: https://issues.apache.org/jira/browse/HUDI-2873
> Project: Apache Hudi
>  Issue Type: New Feature
>  Components: performance, spark
>Reporter: tao meng
>Assignee: shibei
>Priority: Critical
> Fix For: 0.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-2296) flink support ConsistencyGuard plugin

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-2296:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> flink support  ConsistencyGuard plugin 
> ---
>
> Key: HUDI-2296
> URL: https://issues.apache.org/jira/browse/HUDI-2296
> Project: Apache Hudi
>  Issue Type: New Feature
>  Components: flink
>Reporter: Gengxuhan
>Assignee: Gengxuhan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.12.0
>
>
> Flink should support a ConsistencyGuard plugin. When there is metadata latency in the 
> file system, a checkpoint (CKP) is submitted and a new instant is started; because the 
> commit cannot yet be seen, the data will be rolled back.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-2780) Mor reads the log file and skips the complete block as a bad block, resulting in data loss

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-2780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-2780:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> Mor reads the log file and skips the complete block as a bad block, resulting 
> in data loss
> --
>
> Key: HUDI-2780
> URL: https://issues.apache.org/jira/browse/HUDI-2780
> Project: Apache Hudi
>  Issue Type: Bug
>Reporter: jing
>Assignee: jing
>Priority: Critical
>  Labels: core-flow-ds, pull-request-available, sev:critical
> Fix For: 0.12.0
>
> Attachments: image-2021-11-17-15-45-33-031.png, 
> image-2021-11-17-15-46-04-313.png, image-2021-11-17-15-46-14-694.png
>
>
> Debugging the data in the middle of the "bad" block shows that the lost records sit 
> within the bad block's offset range: because of the EOF skip during reading, the 
> compaction merge never writes them to Parquet, even though the delta commit of that 
> instant succeeds. There are two consecutive HUDI magic markers in the middle of the 
> bad block, so reading the block size at the next position actually reads the binary 
> value of #HUDI# as 1227030528, which exceeds the file size and triggers the EOF 
> exception.
> !image-2021-11-17-15-45-33-031.png!
> When detecting the position of the next block in order to skip the bad block, the 
> scan should not start from the position after the block size was read, but from the 
> position before the block size was read.
> !image-2021-11-17-15-46-04-313.png!
> !image-2021-11-17-15-46-14-694.png!
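
For readers following the fix, a hedged illustration of the corrected scan (method and
constant names are illustrative, not the actual Hudi log reader code): remember the stream
position before the block-size field is read and, when the block turns out to be corrupt,
rescan for the magic marker from that remembered position so a valid block starting inside
the "bad" region is not silently skipped.

{code:java}
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.Arrays;

public class LogBlockRecoverySketch {
  private static final byte[] MAGIC = "#HUDI#".getBytes();   // marker assumed for illustration

  /** Returns the offset of the next magic marker at or after 'from', or -1 if none is found. */
  static long scanForNextMagic(RandomAccessFile file, long from) throws IOException {
    byte[] window = new byte[MAGIC.length];
    for (long pos = from; pos + MAGIC.length <= file.length(); pos++) {
      file.seek(pos);
      file.readFully(window);
      if (Arrays.equals(window, MAGIC)) {
        return pos;
      }
    }
    return -1;
  }

  /** Skeleton of the corrected control flow around a corrupt block. */
  static long handleCorruptBlock(RandomAccessFile file, long posBeforeBlockSize) throws IOException {
    // Seek back to where the block-size field started, not to where it ended,
    // and rescan from there so no trailing valid block is lost.
    return scanForNextMagic(file, posBeforeBlockSize);
  }
}
{code}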



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-2875) Concurrent call to HoodieMergeHandler cause parquet corruption

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-2875:
-
Component/s: writer-core

> Concurrent call to HoodieMergeHandler cause parquet corruption
> --
>
> Key: HUDI-2875
> URL: https://issues.apache.org/jira/browse/HUDI-2875
> Project: Apache Hudi
>  Issue Type: Bug
>  Components: Common Core, writer-core
>Reporter: ZiyueGuan
>Assignee: ZiyueGuan
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.11.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Problem:
> Some corrupted parquet files are generated and exceptions will be thrown when 
> read.
> e.g.
>  
> Caused by: org.apache.parquet.io.ParquetDecodingException: Can not read value 
> at 0 in block -1 in file 
>     at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:251)
>     at org.apache.parquet.hadoop.ParquetReader.read(ParquetReader.java:132)
>     at org.apache.parquet.hadoop.ParquetReader.read(ParquetReader.java:136)
>     at 
> org.apache.hudi.common.util.ParquetReaderIterator.hasNext(ParquetReaderIterator.java:49)
>     at 
> org.apache.hudi.common.util.queue.IteratorBasedQueueProducer.produce(IteratorBasedQueueProducer.java:45)
>     at 
> org.apache.hudi.common.util.queue.BoundedInMemoryExecutor.lambda$null$0(BoundedInMemoryExecutor.java:112)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     ... 4 more
> Caused by: org.apache.parquet.io.ParquetDecodingException: could not read 
> page Page [bytes.size=1054316, valueCount=237, uncompressedSize=1054316] in 
> col  required binary col
>     at 
> org.apache.parquet.column.impl.ColumnReaderImpl.readPageV1(ColumnReaderImpl.java:599)
>     at 
> org.apache.parquet.column.impl.ColumnReaderImpl.access$300(ColumnReaderImpl.java:57)
>     at 
> org.apache.parquet.column.impl.ColumnReaderImpl$3.visit(ColumnReaderImpl.java:536)
>     at 
> org.apache.parquet.column.impl.ColumnReaderImpl$3.visit(ColumnReaderImpl.java:533)
>     at org.apache.parquet.column.page.DataPageV1.accept(DataPageV1.java:95)
>     at 
> org.apache.parquet.column.impl.ColumnReaderImpl.readPage(ColumnReaderImpl.java:533)
>     at 
> org.apache.parquet.column.impl.ColumnReaderImpl.checkRead(ColumnReaderImpl.java:525)
>     at 
> org.apache.parquet.column.impl.ColumnReaderImpl.consume(ColumnReaderImpl.java:638)
>     at 
> org.apache.parquet.column.impl.ColumnReaderImpl.(ColumnReaderImpl.java:353)
>     at 
> org.apache.parquet.column.impl.ColumnReadStoreImpl.newMemColumnReader(ColumnReadStoreImpl.java:80)
>     at 
> org.apache.parquet.column.impl.ColumnReadStoreImpl.getColumnReader(ColumnReadStoreImpl.java:75)
>     at 
> org.apache.parquet.io.RecordReaderImplementation.(RecordReaderImplementation.java:271)
>     at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:147)
>     at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:109)
>     at 
> org.apache.parquet.filter2.compat.FilterCompat$NoOpFilter.accept(FilterCompat.java:165)
>     at 
> org.apache.parquet.io.MessageColumnIO.getRecordReader(MessageColumnIO.java:109)
>     at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:137)
>     at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:222)
>     ... 11 more
> Caused by: java.io.EOFException
>     at java.io.DataInputStream.readFully(DataInputStream.java:197)
>     at java.io.DataInputStream.readFully(DataInputStream.java:169)
>     at 
> org.apache.parquet.bytes.BytesInput$StreamBytesInput.toByteArray(BytesInput.java:286)
>     at org.apache.parquet.bytes.BytesInput.toByteBuffer(BytesInput.java:237)
>     at org.apache.parquet.bytes.BytesInput.toInputStream(BytesInput.java:246)
>     at 
> org.apache.parquet.column.impl.ColumnReaderImpl.readPageV1(ColumnReaderImpl.java:592)
>  
> How to reproduce:
> We need a way to interrupt one task without shutting down the JVM, for example 
> speculation. When speculation is triggered, other tasks running on the same 
> executor risk producing a wrong parquet file. This does not always result in a 
> corrupted parquet file: roughly half of the runs throw an exception while a few 
> tasks succeed without any signal.
> Root cause:
> ParquetWriter is not thread safe. Callers must ensure that it is never 
> invoked concurrently.
> In the following code: 
> [https://github.com/apache/hudi/blob/master/hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/table/action/commit/SparkMergeHelper.java#L103]
> We ca
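> A minimal sketch of the kind of guard that prevents this (a hypothetical wrapper, 
> not the actual Hudi change): serialize all calls so that no two threads ever reach 
> the non-thread-safe ParquetWriter at the same time.
> {code:java}
> // Hypothetical sketch only: serialize access to a non-thread-safe writer.
> import java.io.IOException;
> import java.util.concurrent.locks.ReentrantLock;
>
> interface RecordWriter<T> {
>   void write(T record) throws IOException;
>   void close() throws IOException;
> }
>
> class SynchronizedWriter<T> implements RecordWriter<T> {
>   private final RecordWriter<T> delegate;
>   private final ReentrantLock lock = new ReentrantLock();
>
>   SynchronizedWriter(RecordWriter<T> delegate) {
>     this.delegate = delegate;
>   }
>
>   @Override
>   public void write(T record) throws IOException {
>     lock.lock();
>     try {
>       delegate.write(record);  // only one thread at a time reaches the delegate
>     } finally {
>       lock.unlock();
>     }
>   }
>
>   @Override
>   public void close() throws IOException {
>     lock.lock();
>     try {
>       delegate.close();
>     } finally {
>       lock.unlock();
>     }
>   }
> }
> {code}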

[jira] [Updated] (HUDI-2875) Concurrent call to HoodieMergeHandler cause parquet corruption

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-2875:
-
Priority: Critical  (was: Major)

> Concurrent call to HoodieMergeHandler cause parquet corruption
> --
>
> Key: HUDI-2875
> URL: https://issues.apache.org/jira/browse/HUDI-2875
> Project: Apache Hudi
>  Issue Type: Bug
>  Components: Common Core
>Reporter: ZiyueGuan
>Assignee: ZiyueGuan
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.11.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Problem:
> Some corrupted parquet files are generated and exceptions will be thrown when 
> read.
> e.g.
>  
> Caused by: org.apache.parquet.io.ParquetDecodingException: Can not read value 
> at 0 in block -1 in file 
>     at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:251)
>     at org.apache.parquet.hadoop.ParquetReader.read(ParquetReader.java:132)
>     at org.apache.parquet.hadoop.ParquetReader.read(ParquetReader.java:136)
>     at 
> org.apache.hudi.common.util.ParquetReaderIterator.hasNext(ParquetReaderIterator.java:49)
>     at 
> org.apache.hudi.common.util.queue.IteratorBasedQueueProducer.produce(IteratorBasedQueueProducer.java:45)
>     at 
> org.apache.hudi.common.util.queue.BoundedInMemoryExecutor.lambda$null$0(BoundedInMemoryExecutor.java:112)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     ... 4 more
> Caused by: org.apache.parquet.io.ParquetDecodingException: could not read 
> page Page [bytes.size=1054316, valueCount=237, uncompressedSize=1054316] in 
> col  required binary col
>     at 
> org.apache.parquet.column.impl.ColumnReaderImpl.readPageV1(ColumnReaderImpl.java:599)
>     at 
> org.apache.parquet.column.impl.ColumnReaderImpl.access$300(ColumnReaderImpl.java:57)
>     at 
> org.apache.parquet.column.impl.ColumnReaderImpl$3.visit(ColumnReaderImpl.java:536)
>     at 
> org.apache.parquet.column.impl.ColumnReaderImpl$3.visit(ColumnReaderImpl.java:533)
>     at org.apache.parquet.column.page.DataPageV1.accept(DataPageV1.java:95)
>     at 
> org.apache.parquet.column.impl.ColumnReaderImpl.readPage(ColumnReaderImpl.java:533)
>     at 
> org.apache.parquet.column.impl.ColumnReaderImpl.checkRead(ColumnReaderImpl.java:525)
>     at 
> org.apache.parquet.column.impl.ColumnReaderImpl.consume(ColumnReaderImpl.java:638)
>     at 
> org.apache.parquet.column.impl.ColumnReaderImpl.(ColumnReaderImpl.java:353)
>     at 
> org.apache.parquet.column.impl.ColumnReadStoreImpl.newMemColumnReader(ColumnReadStoreImpl.java:80)
>     at 
> org.apache.parquet.column.impl.ColumnReadStoreImpl.getColumnReader(ColumnReadStoreImpl.java:75)
>     at 
> org.apache.parquet.io.RecordReaderImplementation.(RecordReaderImplementation.java:271)
>     at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:147)
>     at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:109)
>     at 
> org.apache.parquet.filter2.compat.FilterCompat$NoOpFilter.accept(FilterCompat.java:165)
>     at 
> org.apache.parquet.io.MessageColumnIO.getRecordReader(MessageColumnIO.java:109)
>     at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:137)
>     at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:222)
>     ... 11 more
> Caused by: java.io.EOFException
>     at java.io.DataInputStream.readFully(DataInputStream.java:197)
>     at java.io.DataInputStream.readFully(DataInputStream.java:169)
>     at 
> org.apache.parquet.bytes.BytesInput$StreamBytesInput.toByteArray(BytesInput.java:286)
>     at org.apache.parquet.bytes.BytesInput.toByteBuffer(BytesInput.java:237)
>     at org.apache.parquet.bytes.BytesInput.toInputStream(BytesInput.java:246)
>     at 
> org.apache.parquet.column.impl.ColumnReaderImpl.readPageV1(ColumnReaderImpl.java:592)
>  
> How to reproduce:
> We need a way to interrupt one task without shutting down the JVM, for example 
> speculation. When speculation is triggered, other tasks running on the same 
> executor risk producing a wrong parquet file. This does not always result in a 
> corrupted parquet file: roughly half of the runs throw an exception while a few 
> tasks succeed without any signal.
> Root cause:
> ParquetWriter is not thread safe. Callers must ensure that it is never 
> invoked concurrently.
> In the following code: 
> [https://github.com/apache/hudi/blob/master/hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/table/action/commit/SparkMergeHelper.java#L103]
> We call bo

[jira] [Created] (HUDI-3746) CI ignored test failure in TestDataSkippingUtils

2022-03-29 Thread Raymond Xu (Jira)
Raymond Xu created HUDI-3746:


 Summary: CI ignored test failure in TestDataSkippingUtils
 Key: HUDI-3746
 URL: https://issues.apache.org/jira/browse/HUDI-3746
 Project: Apache Hudi
  Issue Type: Bug
  Components: tests-ci
Reporter: Raymond Xu
 Fix For: 0.11.0
 Attachments: ci.log.zip

A test failure in TestDataSkippingUtils was ignored by CI; possibly something to do 
with JUnit in Scala.

See the attached CI logs and search for `TestDataSkippingUtils`



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3746) CI ignored test failure in TestDataSkippingUtils

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3746:
-
Attachment: ci.log.zip

> CI ignored test failure in TestDataSkippingUtils
> 
>
> Key: HUDI-3746
> URL: https://issues.apache.org/jira/browse/HUDI-3746
> Project: Apache Hudi
>  Issue Type: Bug
>  Components: tests-ci
>Reporter: Raymond Xu
>Priority: Critical
> Fix For: 0.11.0
>
> Attachments: ci.log.zip
>
>
> A test failure in TestDataSkippingUtils was ignored by CI; possibly something to 
> do with JUnit in Scala.
> See the attached CI logs and search for `TestDataSkippingUtils`



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Closed] (HUDI-2520) Certify sync with Hive 3

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-2520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu closed HUDI-2520.

 Reviewers: Raymond Xu, rex xiong  (was: Raymond Xu)
Resolution: Fixed

> Certify sync with Hive 3
> 
>
> Key: HUDI-2520
> URL: https://issues.apache.org/jira/browse/HUDI-2520
> Project: Apache Hudi
>  Issue Type: Task
>  Components: hive, meta-sync
>Reporter: Sagar Sumit
>Assignee: Forward Xu
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.11.0
>
> Attachments: image-2022-03-14-15-52-02-021.png, 
> image-2022-03-27-20-11-38-189.png, image-2022-03-27-20-13-12-001.png
>
>
> # When executing a CTAS statement, the query fails because the meta is synced twice: 
> HoodieSparkSqlWriter syncs the meta the first time, and then 
> HoodieCatalog.createHoodieTable syncs it a second time from 
> HoodieStagedTable.commitStagedChanges
> {code:java}
> create table if not exists h3_cow using hudi partitioned by (dt) options 
> (type = 'cow', primaryKey = 'id,name') as select 1 as id, 'a1' as name, 20 as 
> price, '2021-01-03' as dt;
> 22/03/14 14:26:21 ERROR [main] Utils: Aborting task
> org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException: Table or 
> view 'h3_cow' already exists in database 'default'
>         at 
> org.apache.spark.sql.hudi.command.CreateHoodieTableCommand$.createHiveDataSourceTable(CreateHoodieTableCommand.scala:172)
>         at 
> org.apache.spark.sql.hudi.command.CreateHoodieTableCommand$.createTableInCatalog(CreateHoodieTableCommand.scala:148)
>         at 
> org.apache.spark.sql.hudi.catalog.HoodieCatalog.createHoodieTable(HoodieCatalog.scala:254)
>         at 
> org.apache.spark.sql.hudi.catalog.HoodieStagedTable.commitStagedChanges(HoodieStagedTable.scala:62)
>         at 
> org.apache.spark.sql.execution.datasources.v2.TableWriteExecHelper.$anonfun$writeToTable$1(WriteToDataSourceV2Exec.scala:484)
>         at 
> org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1496)
>         at 
> org.apache.spark.sql.execution.datasources.v2.TableWriteExecHelper.writeToTable(WriteToDataSourceV2Exec.scala:468)
>         at 
> org.apache.spark.sql.execution.datasources.v2.TableWriteExecHelper.writeToTable$(WriteToDataSourceV2Exec.scala:463)
>         at 
> org.apache.spark.sql.execution.datasources.v2.AtomicCreateTableAsSelectExec.writeToTable(WriteToDataSourceV2Exec.scala:106)
>         at 
> org.apache.spark.sql.execution.datasources.v2.AtomicCreateTableAsSelectExec.run(WriteToDataSourceV2Exec.scala:127)
>         at 
> org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
>         at 
> org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
>         at 
> org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
>         at 
> org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:110)
>         at 
> org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
>         at 
> org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
>         at 
> org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
>         at 
> org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
>         at 
> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
>         at 
> org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:110)
>         at 
> org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:106)
>         at 
> org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:481)
>         at 
> org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:82)
>         at 
> org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:481){code}
> 2. When truncating a partition table, neither the metadata nor the data is 
> truncated, and truncating with partition specs fails 
> {code:java}
> // truncating the partition table without a partition spec succeeds but 
> // never deletes any data
> spark-sql> truncate table mor_partition_table_0314;
> Time taken: 0.256 seconds
> // truncating the partition table with a partition spec fails
> spark-sql> truncate table mor_partition_table_0314 partition(dt=3);
> Error in query: Table spark_catalog.default.mor_partition_table_0314 does not 
> support partition management.;
> 'TruncatePartition unresolvedpartitionspec((dt,3), None)
> +- ResolvedTable org.apache

[jira] [Updated] (HUDI-2473) Fix compaction action type in commit metadata

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-2473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-2473:
-
Sprint: Hudi-Sprint-Mar-22

> Fix compaction action type in commit metadata
> -
>
> Key: HUDI-2473
> URL: https://issues.apache.org/jira/browse/HUDI-2473
> Project: Apache Hudi
>  Issue Type: Bug
>  Components: table-service
>Reporter: sivabalan narayanan
>Assignee: sivabalan narayanan
>Priority: Blocker
> Fix For: 0.11.0
>
>
> Fix the compaction action type in the commit metadata.
> As of now, it is empty for compaction commits. 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3743) Support DELETE_PARTITION for metadata table

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3743:
-
Sprint: Hudi-Sprint-Mar-22

> Support DELETE_PARTITION for metadata table
> ---
>
> Key: HUDI-3743
> URL: https://issues.apache.org/jira/browse/HUDI-3743
> Project: Apache Hudi
>  Issue Type: Task
>  Components: metadata
>Reporter: Sagar Sumit
>Assignee: Sagar Sumit
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.11.0
>
>
> In order to drop any metadata partition (index), we can reuse the 
> DELETE_PARTITION operation on the metadata table, as in the sketch below.
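> A heavily simplified sketch of the idea (the client interface and method below are 
> assumptions, not the actual metadata-table write path): dropping an index is 
> expressed as a delete-partition style commit against the corresponding metadata 
> partition.
> {code:java}
> // Hypothetical sketch only; names are illustrative, not Hudi APIs.
> import java.util.Collections;
> import java.util.List;
>
> interface MetadataWriteClient {
>   // Issues a commit that marks all file groups of the given metadata partitions as deleted.
>   void deletePartitions(List<String> metadataPartitions, String instantTime);
> }
>
> class DropIndexSketch {
>   static void dropColumnStatsIndex(MetadataWriteClient client, String instantTime) {
>     // "column_stats" stands in for the metadata partition backing the index being dropped.
>     client.deletePartitions(Collections.singletonList("column_stats"), instantTime);
>   }
> }
> {code}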



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3724) Too many open files w/ COW spark long running tests

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3724:
-
Sprint: Hudi-Sprint-Mar-22

> Too many open files w/ COW spark long running tests
> ---
>
> Key: HUDI-3724
> URL: https://issues.apache.org/jira/browse/HUDI-3724
> Project: Apache Hudi
>  Issue Type: Bug
>Reporter: sivabalan narayanan
>Assignee: sivabalan narayanan
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.11.0
>
>
> We run integration tests against Hudi, and recently our Spark long-running tests 
> have been failing for the COW table with "too many open files". Maybe we have some 
> file handle leaks that need to be chased down and closed. 
> {code:java}
>   ... 6 more
> Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: 
> Task 0 in stage 6808.0 failed 1 times, most recent failure: Lost task 0.0 in 
> stage 6808.0 (TID 109960) (ip-10-0-40-161.us-west-1.compute.internal executor 
> driver): java.io.FileNotFoundException: 
> /tmp/blockmgr-96dd9c25-86c7-4d00-a20a-d6515eef9a37/39/temp_shuffle_9149fce7-e9b0-4fee-bb21-1eba16dd89a3
>  (Too many open files)
>   at java.io.FileOutputStream.open0(Native Method)
>   at java.io.FileOutputStream.open(FileOutputStream.java:270)
>   at java.io.FileOutputStream.(FileOutputStream.java:213)
>   at 
> org.apache.spark.storage.DiskBlockObjectWriter.initialize(DiskBlockObjectWriter.scala:133)
>   at 
> org.apache.spark.storage.DiskBlockObjectWriter.open(DiskBlockObjectWriter.scala:152)
>   at 
> org.apache.spark.storage.DiskBlockObjectWriter.write(DiskBlockObjectWriter.scala:279)
>   at 
> org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:171)
>   at 
> org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
>   at org.apache.spark.scheduler.Task.run(Task.scala:131)
>   at 
> org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
>   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3062) savepoint rollback of last but one savepoint fails

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3062:
-
Sprint: Hudi-Sprint-Mar-22

> savepoint rollback of last but one savepoint fails
> --
>
> Key: HUDI-3062
> URL: https://issues.apache.org/jira/browse/HUDI-3062
> Project: Apache Hudi
>  Issue Type: Bug
>Reporter: sivabalan narayanan
>Priority: Blocker
>  Labels: sev:critical
> Fix For: 0.11.0
>
>
> I created 2 savepoints as below:
> c1, c2, c3, sp1, c4, sp2, c5.
> Savepoint rollback for sp2 worked, but it left trailing rollback 
> meta files. 
> Savepoint rollback for sp1 then failed, and the stacktrace does 
> not have sufficient info. 
> {code:java}
> 21/12/18 06:20:00 INFO HoodieActiveTimeline: Loaded instants upto : 
> Option{val=[==>20211218061954430__rollback__REQUESTED]}
> 21/12/18 06:20:00 INFO BaseRollbackPlanActionExecutor: Requesting Rollback 
> with instant time [==>20211218061954430__rollback__REQUESTED]
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 66
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 26
> 21/12/18 06:20:00 INFO BlockManagerInfo: Removed broadcast_3_piece0 on 
> 192.168.1.4:54359 in memory (size: 25.5 KB, free: 366.2 MB)
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 110
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 99
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 47
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 21
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 43
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 55
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 104
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 124
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 29
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 91
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 123
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 120
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 25
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 32
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 92
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 76
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 89
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 102
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 50
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 49
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 116
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 96
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 118
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 44
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 60
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 87
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 77
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 75
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 9
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 72
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 2
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 37
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 113
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 67
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 28
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 95
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 59
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 68
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 45
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 39
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 74
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 20
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 90
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 56
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 58
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 61
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 13
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 46
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 101
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 105
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 81
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 63
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 78
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 4
> 21/12/18 06:20:00 INFO ContextCleaner: Cleaned accumulator 31
> 21/12/18 0

[jira] [Updated] (HUDI-3729) Enable Spark vectorized read for non-incremental read paths

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3729:
-
Status: Patch Available  (was: In Progress)

> Enable Spark vectorized read for non-incremental read paths
> ---
>
> Key: HUDI-3729
> URL: https://issues.apache.org/jira/browse/HUDI-3729
> Project: Apache Hudi
>  Issue Type: Bug
>  Components: performance, spark
>Reporter: Raymond Xu
>Assignee: Tao Meng
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.11.0
>
>
> context
> https://github.com/apache/hudi/pull/4709#issuecomment-1079624811



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3729) Enable Spark vectorized read for non-incremental read paths

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3729:
-
Status: In Progress  (was: Open)

> Enable Spark vectorized read for non-incremental read paths
> ---
>
> Key: HUDI-3729
> URL: https://issues.apache.org/jira/browse/HUDI-3729
> Project: Apache Hudi
>  Issue Type: Bug
>  Components: performance, spark
>Reporter: Raymond Xu
>Assignee: Tao Meng
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.11.0
>
>
> context
> https://github.com/apache/hudi/pull/4709#issuecomment-1079624811



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-2458) Relax compaction in metadata being fenced based on inflight requests in data table

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-2458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-2458:
-
Fix Version/s: (was: 0.11.0)

> Relax compaction in metadata being fenced based on inflight requests in data 
> table
> --
>
> Key: HUDI-2458
> URL: https://issues.apache.org/jira/browse/HUDI-2458
> Project: Apache Hudi
>  Issue Type: Improvement
>  Components: metadata
>Reporter: sivabalan narayanan
>Assignee: Ethan Guo
>Priority: Major
> Fix For: 0.12.0
>
>
> Relax compaction in metadata being fenced based on inflight requests in data 
> table.
> Compaction in metadata is triggered only if there are no inflight requests in the 
> data table. This might cause a liveness problem since, for very large 
> deployments, we could have compaction or clustering always in 
> progress. So we should see how we can relax this constraint.
>  
> Proposal to remove this dependency:
> With recent addition of spurious deletes config, we can actually get away 
> with this. 
> As of now, we have 3 interlinked nuances.
>  - Compaction in metadata may not kick in if there are any inflight 
> operations in the data table. 
>  - Rollback, when applied to the metadata table, depends on the last 
> compaction instant in the metadata table. We may even throw an exception if the 
> instant being rolled back is earlier than the latest metadata compaction instant time. 
>  - Archival in the data table is fenced by the latest compaction in the metadata table. 
>  
> So, in case the data timeline has any dangling inflight operation (let's say 
> someone tried clustering, killed it midway, and never attempted it again), 
> metadata compaction will never kick in at all. I need to check what 
> archival does for such inflight operations in the data table when it 
> tries to archive nearby commits. 
>  
> So, with spurious deletes support which we added recently, all these can be 
> much simplified. 
> Whenever we want to apply a rollback commit, we don't need to take different 
> actions based on whether the commit being rolled back was already committed to the 
> metadata table or not. Just go ahead and apply the rollback; merging of 
> metadata payload records will take care of this. If the commit was already 
> synced, the final merged payload will not have spurious deletes. If the commit 
> being rolled back was never committed to metadata, the final merged payload may 
> have some spurious deletes, which we can ignore. 
> With this, compaction in metadata does not need to have any dependency on 
> inflight operations in data table. 
> And we can loosen up the dependency of archival in data table on metadata 
> table compaction as well. 
> So, in summary, all the 3 dependencies quoted above will be moot if we go 
> with this approach. Archival in data table does not have any dependency on 
> metadata table compaction. Rollback when being applied to metadata table does 
> not care about last metadata table compaction. Compaction in metadata table 
> can proceed even if there are inflight operations in data table. 
>  
> Especially our logic to apply rollback metadata to metadata table will become 
> a lot simpler and is easy to reason about. 
>  
>  
>  
>  
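> A minimal sketch of the merging idea above (hypothetical types, not the actual 
> HoodieMetadataPayload merge): when a rollback's deletions are applied to the file 
> listing, deletes for files that were never committed to the metadata table are 
> simply no-ops and can be ignored as spurious.
> {code:java}
> // Hypothetical sketch: merge a rollback's file deletions into the current file list.
> import java.util.HashSet;
> import java.util.Set;
>
> class FileListingMergeSketch {
>   static Set<String> applyRollback(Set<String> currentFiles, Set<String> filesDeletedByRollback) {
>     Set<String> merged = new HashSet<>(currentFiles);
>     for (String file : filesDeletedByRollback) {
>       // If the rolled-back commit was never synced to the metadata table, this
>       // remove is a no-op (a spurious delete) and is safely ignored.
>       merged.remove(file);
>     }
>     return merged;
>   }
> }
> {code}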



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-2736) Redundant metadata table initialization by the metadata writer

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-2736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-2736:
-
Component/s: code-quality

> Redundant metadata table initialization by the metadata writer
> --
>
> Key: HUDI-2736
> URL: https://issues.apache.org/jira/browse/HUDI-2736
> Project: Apache Hudi
>  Issue Type: Improvement
>  Components: code-quality, metadata
>Reporter: Manoj Govindassamy
>Assignee: Ethan Guo
>Priority: Minor
> Fix For: 0.11.0
>
>
>  
> HoodieBackedTableMetadataWriter has redundant table initialization in the 
> following code paths.
>  # Constructor => initialize() => bootstrap  => initTableMetadata()
>  # Constructor => initTableMetadata()
>  # Flink client => preCommit => initTableMetadata()
> Apart from that, refreshing of the timeline also happens in initTableMetadata. 
> Before removing the redundant calls, we need to verify whether they were added 
> in order to get the latest timeline. 
>  
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-2736) Redundant metadata table initialization by the metadata writer

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-2736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-2736:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> Redundant metadata table initialization by the metadata writer
> --
>
> Key: HUDI-2736
> URL: https://issues.apache.org/jira/browse/HUDI-2736
> Project: Apache Hudi
>  Issue Type: Improvement
>  Components: code-quality, metadata
>Reporter: Manoj Govindassamy
>Assignee: Ethan Guo
>Priority: Minor
> Fix For: 0.12.0
>
>
>  
> HoodieBackedTableMetadataWriter has redundant table initialization in the 
> following code paths.
>  # Constructor => initialize() => bootstrap  => initTableMetadata()
>  # Constructor => initTableMetadata()
>  # Flink client => preCommit => initTableMetadata()
> Apart from that, refreshing of the timeline also happens in initTableMetadata. 
> Before removing the redundant calls, we need to verify whether they were added 
> in order to get the latest timeline. 
>  
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-2839) Align configs across Spark datasource, write client, etc

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-2839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-2839:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> Align configs across Spark datasource, write client, etc
> 
>
> Key: HUDI-2839
> URL: https://issues.apache.org/jira/browse/HUDI-2839
> Project: Apache Hudi
>  Issue Type: Improvement
>  Components: configs, spark
>Reporter: Ethan Guo
>Priority: Major
> Fix For: 0.12.0
>
>
> This arose when discussing HUDI-2818.  For the same logic, such as the 
> keygenerator, compaction, clustering, etc., there are different configs in the 
> Spark datasource and the write client, and they may conflict.  This can 
> cause unexpected behavior on the write path.
>  
> Raymond: I encountered this NPE when trying to run 0.10 over a 0.8 table: 
> https://issues.apache.org/jira/browse/HUDI-2818.
> to align configs, do you think we should auto set 
> {{hoodie.table.keygenerator.class}} when user sets 
> {{hoodie.datasource.write.keygenerator.class}} and also the other way around?
> Siva: I guess in the regular write path (HoodieSparkSqlWriter) this is what 
> happens, i.e. users set only 
> {{hoodie.datasource.write.keygenerator.class}}, but internally we set 
> {{hoodie.table.keygenerator.class}} from the datasource write config.
> Vinoth: {{HoodieConfig}} has an alternatives/fallback mechanism. Something 
> to consider, but overall we should fix these.
> Ethan: when working on compaction/clustering, I also see different configs 
> around the same logic between the Spark datasource and the write client.  Maybe 
> we can take a pass over all configs later and make them consistent.
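> A small sketch of the fallback idea mentioned above (a hypothetical helper, not 
> HoodieConfig itself): resolve the table-level key first and fall back to the 
> datasource-level key, so the two spellings of the same setting cannot silently 
> diverge.
> {code:java}
> // Hypothetical helper illustrating a primary-key / alternate-key fallback lookup.
> import java.util.Map;
> import java.util.Optional;
>
> class ConfigFallbackSketch {
>   static Optional<String> lookup(Map<String, String> props, String primaryKey, String... alternateKeys) {
>     if (props.containsKey(primaryKey)) {
>       return Optional.of(props.get(primaryKey));
>     }
>     for (String alt : alternateKeys) {
>       if (props.containsKey(alt)) {
>         return Optional.of(props.get(alt));  // fall back to the datasource-level key
>       }
>     }
>     return Optional.empty();
>   }
> }
>
> // Usage, with the two keys discussed above:
> // lookup(props, "hoodie.table.keygenerator.class", "hoodie.datasource.write.keygenerator.class");
> {code}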



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-2893) Metadata table not found warn logs repeated

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-2893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-2893:
-
Priority: Blocker  (was: Major)

> Metadata table not found warn logs repeated 
> 
>
> Key: HUDI-2893
> URL: https://issues.apache.org/jira/browse/HUDI-2893
> Project: Apache Hudi
>  Issue Type: Bug
>  Components: metadata
>Reporter: sivabalan narayanan
>Assignee: Ethan Guo
>Priority: Blocker
> Fix For: 0.11.0
>
>
> When a new table is created for the first time, we see repeated warn log 
> statements that the metadata table is not found. We need to trim this down to 
> just once.
> // steps to reproduce: just run our quick start guide inserts. 
> {code:java}
> scala> df.write.format("hudi").
>      |   options(getQuickstartWriteConfigs).
>      |   option(PRECOMBINE_FIELD.key(), "ts").
>      |   option(PARTITIONPATH_FIELD.key(), "partitionpath").
>      |   option(TBL_NAME.key(), tableName).
>      |   mode(Overwrite).
>      |   save(basePath)
> 21/11/30 11:12:20 WARN DFSPropertiesConfiguration: Cannot find HUDI_CONF_DIR, 
> please set it as the dir of hudi-defaults.conf
> 21/11/30 11:12:20 WARN HoodieSparkSqlWriter$: hoodie table at 
> file:/tmp/hudi_trips_cow already exists. Deleting existing data & overwriting 
> with new data.
> 21/11/30 11:12:21 WARN HoodieBackedTableMetadata: Metadata table was not 
> found at path file:///tmp/hudi_trips_cow/.hoodie/metadata
> 21/11/30 11:12:21 WARN HoodieBackedTableMetadata: Metadata table was not 
> found at path file:///tmp/hudi_trips_cow/.hoodie/metadata
> 21/11/30 11:12:26 WARN HoodieBackedTableMetadata: Metadata table was not 
> found at path file:///tmp/hudi_trips_cow/.hoodie/metadata
> 21/11/30 11:12:26 WARN HoodieBackedTableMetadata: Metadata table was not 
> found at path file:///tmp/hudi_trips_cow/.hoodie/metadata {code}
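> A minimal sketch of the "warn only once" guard suggested above (a hypothetical 
> utility, not the actual fix):
> {code:java}
> // Hypothetical "warn once" guard so the same warning is emitted at most one time.
> import java.util.concurrent.atomic.AtomicBoolean;
> import java.util.function.Consumer;
>
> class WarnOnce {
>   private final AtomicBoolean warned = new AtomicBoolean(false);
>
>   void warn(Consumer<String> logger, String message) {
>     // compareAndSet guarantees a single emission even when called concurrently.
>     if (warned.compareAndSet(false, true)) {
>       logger.accept(message);
>     }
>   }
> }
> {code}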



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (HUDI-3050) Unable to find server port

2022-03-29 Thread Raymond Xu (Jira)


[ 
https://issues.apache.org/jira/browse/HUDI-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17476358#comment-17476358
 ] 

Raymond Xu edited comment on HUDI-3050 at 3/30/22, 3:44 AM:


[~kinger0610] On which Hudi version do you hit this?  Looking at the code, the 
exception is thrown when the write client is trying to start the timeline 
service without the hostname so the dummy host is generated, which fails.  In 
this case, maybe "google.com" cannot be resolved so it failed.  There is one 
fix around that on master: 
[https://github.com/apache/hudi/pull/4355/|https://github.com/apache/hudi/pull/4355/]
  Could you try it?


was (Author: guoyihua):
[~kinger0610] On which Hudi version do you hit this?  Looking at the code, the 
exception is thrown when the write client is trying to start the timeline 
service without the hostname so the dummy host is generated, which fails.  In 
this case, maybe "google.com" cannot be resolved so it failed.  There is one 
fix around that on master: 
[https://github.com/apache/hudi/pull/4355/|https://github.com/apache/hudi/pull/4355/files.]
  Could you try it?

> Unable to find server port
> --
>
> Key: HUDI-3050
> URL: https://issues.apache.org/jira/browse/HUDI-3050
> Project: Apache Hudi
>  Issue Type: Bug
>  Components: kafka-connect
>Reporter: Di Wang
>Assignee: Ethan Guo
>Priority: Critical
>  Labels: user-support-issues
> Fix For: 0.11.0
>
>
>  
> {code:java}
> // code placeholder
> Caused by: org.apache.hudi.exception.HoodieException: Unable to find server 
> port
> at 
> org.apache.hudi.common.util.NetworkUtils.getHostname(NetworkUtils.java:41)
> at 
> org.apache.hudi.client.embedded.EmbeddedTimelineService.setHostAddr(EmbeddedTimelineService.java:104)
> at 
> org.apache.hudi.client.embedded.EmbeddedTimelineService.(EmbeddedTimelineService.java:55)
> at 
> org.apache.hudi.client.embedded.EmbeddedTimelineServerHelper.startTimelineService(EmbeddedTimelineServerHelper.java:70)
> at 
> org.apache.hudi.client.embedded.EmbeddedTimelineServerHelper.createEmbeddedTimelineService(EmbeddedTimelineServerHelper.java:51)
> at 
> org.apache.hudi.client.AbstractHoodieClient.startEmbeddedServerView(AbstractHoodieClient.java:109)
> at 
> org.apache.hudi.client.AbstractHoodieClient.(AbstractHoodieClient.java:77)
> at 
> org.apache.hudi.client.AbstractHoodieWriteClient.(AbstractHoodieWriteClient.java:139)
> at 
> org.apache.hudi.client.AbstractHoodieWriteClient.(AbstractHoodieWriteClient.java:127)
> at 
> org.apache.hudi.client.HoodieFlinkWriteClient.(HoodieFlinkWriteClient.java:97)
> at 
> org.apache.hudi.util.StreamerUtil.createWriteClient(StreamerUtil.java:402)
> at 
> org.apache.hudi.sink.StreamWriteOperatorCoordinator.start(StreamWriteOperatorCoordinator.java:166)
> at 
> org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.start(OperatorCoordinatorHolder.java:198)
> at 
> org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:85)
> ... 24 common frames omitted
> Caused by: java.net.ConnectException: Connection timed out (Connection timed 
> out)
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
> at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
> at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:589)
> at java.net.Socket.connect(Socket.java:538)
> at 
> org.apache.hudi.common.util.NetworkUtils.getHostname(NetworkUtils.java:38)
> ... 37 common frames omitted
> {code}
> An exception happens. I think the relevant code is below (see also
> https://issues.apache.org/jira/browse/HUDI-3037):
>   s = new Socket();
>   // see 
> https://stackoverflow.com/questions/9481865/getting-the-ip-address-of-the-current-machine-using-java
>   // for details.
>   s.connect(new InetSocketAddress("google.com", 80));
>   return s.getLocalAddress().getHostAddress();
>  
>  
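> One illustrative alternative (not necessarily what the linked fix does): a UDP 
> "connect" never sends packets and needs no reachable remote host, so it can pick a 
> routable local address even when google.com cannot be resolved.
> {code:java}
> // Hypothetical sketch only; not the change made in the referenced PR.
> import java.net.DatagramSocket;
> import java.net.InetAddress;
>
> class LocalAddressSketch {
>   /** Picks the local address that would be used to reach an external IP; falls back to loopback. */
>   static String resolveLocalAddress() {
>     try (DatagramSocket socket = new DatagramSocket()) {
>       // 8.8.8.8 is a literal IP (no DNS lookup), and UDP connect() sends no traffic;
>       // it only asks the OS which local interface/route would be used.
>       socket.connect(InetAddress.getByName("8.8.8.8"), 80);
>       return socket.getLocalAddress().getHostAddress();
>     } catch (Exception e) {
>       return "127.0.0.1";  // fallback when no route is available
>     }
>   }
> }
> {code}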



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (HUDI-3050) Unable to find server port

2022-03-29 Thread Raymond Xu (Jira)


[ 
https://issues.apache.org/jira/browse/HUDI-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17476358#comment-17476358
 ] 

Raymond Xu edited comment on HUDI-3050 at 3/30/22, 3:44 AM:


[~kinger0610] On which Hudi version do you hit this?  Looking at the code, the 
exception is thrown when the write client is trying to start the timeline 
service without the hostname so the dummy host is generated, which fails.  In 
this case, maybe "google.com" cannot be resolved so it failed.  There is one 
fix around that on master: 
[https://github.com/apache/hudi/pull/4355/|https://github.com/apache/hudi/pull/4355/files.]
  Could you try it?


was (Author: guoyihua):
[~kinger0610] On which Hudi version do you hit this?  Looking at the code, the 
exception is thrown when the write client is trying to start the timeline 
service without the hostname so the dummy host is generated, which fails.  In 
this case, maybe "google.com" cannot be resolved so it failed.  There is one 
fix around that on master: [https://github.com/apache/hudi/pull/4355/files.]  
Could you try it?

> Unable to find server port
> --
>
> Key: HUDI-3050
> URL: https://issues.apache.org/jira/browse/HUDI-3050
> Project: Apache Hudi
>  Issue Type: Bug
>  Components: kafka-connect
>Reporter: Di Wang
>Assignee: Ethan Guo
>Priority: Critical
>  Labels: user-support-issues
> Fix For: 0.11.0
>
>
>  
> {code:java}
> // code placeholder
> Caused by: org.apache.hudi.exception.HoodieException: Unable to find server 
> port
> at 
> org.apache.hudi.common.util.NetworkUtils.getHostname(NetworkUtils.java:41)
> at 
> org.apache.hudi.client.embedded.EmbeddedTimelineService.setHostAddr(EmbeddedTimelineService.java:104)
> at 
> org.apache.hudi.client.embedded.EmbeddedTimelineService.(EmbeddedTimelineService.java:55)
> at 
> org.apache.hudi.client.embedded.EmbeddedTimelineServerHelper.startTimelineService(EmbeddedTimelineServerHelper.java:70)
> at 
> org.apache.hudi.client.embedded.EmbeddedTimelineServerHelper.createEmbeddedTimelineService(EmbeddedTimelineServerHelper.java:51)
> at 
> org.apache.hudi.client.AbstractHoodieClient.startEmbeddedServerView(AbstractHoodieClient.java:109)
> at 
> org.apache.hudi.client.AbstractHoodieClient.(AbstractHoodieClient.java:77)
> at 
> org.apache.hudi.client.AbstractHoodieWriteClient.(AbstractHoodieWriteClient.java:139)
> at 
> org.apache.hudi.client.AbstractHoodieWriteClient.(AbstractHoodieWriteClient.java:127)
> at 
> org.apache.hudi.client.HoodieFlinkWriteClient.(HoodieFlinkWriteClient.java:97)
> at 
> org.apache.hudi.util.StreamerUtil.createWriteClient(StreamerUtil.java:402)
> at 
> org.apache.hudi.sink.StreamWriteOperatorCoordinator.start(StreamWriteOperatorCoordinator.java:166)
> at 
> org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.start(OperatorCoordinatorHolder.java:198)
> at 
> org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:85)
> ... 24 common frames omitted
> Caused by: java.net.ConnectException: Connection timed out (Connection timed 
> out)
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
> at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
> at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:589)
> at java.net.Socket.connect(Socket.java:538)
> at 
> org.apache.hudi.common.util.NetworkUtils.getHostname(NetworkUtils.java:38)
> ... 37 common frames omitted
> {code}
> An exception happens. I think the relevant code is below (see also
> https://issues.apache.org/jira/browse/HUDI-3037):
>   s = new Socket();
>   // see 
> https://stackoverflow.com/questions/9481865/getting-the-ip-address-of-the-current-machine-using-java
>   // for details.
>   s.connect(new InetSocketAddress("google.com", 80));
>   return s.getLocalAddress().getHostAddress();
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3050) Unable to find server port

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3050:
-
Component/s: (was: kafka-connect)

> Unable to find server port
> --
>
> Key: HUDI-3050
> URL: https://issues.apache.org/jira/browse/HUDI-3050
> Project: Apache Hudi
>  Issue Type: Bug
>Reporter: Di Wang
>Assignee: Ethan Guo
>Priority: Critical
>  Labels: user-support-issues
> Fix For: 0.11.0
>
>
>  
> {code:java}
> // code placeholder
> Caused by: org.apache.hudi.exception.HoodieException: Unable to find server 
> port
> at 
> org.apache.hudi.common.util.NetworkUtils.getHostname(NetworkUtils.java:41)
> at 
> org.apache.hudi.client.embedded.EmbeddedTimelineService.setHostAddr(EmbeddedTimelineService.java:104)
> at 
> org.apache.hudi.client.embedded.EmbeddedTimelineService.(EmbeddedTimelineService.java:55)
> at 
> org.apache.hudi.client.embedded.EmbeddedTimelineServerHelper.startTimelineService(EmbeddedTimelineServerHelper.java:70)
> at 
> org.apache.hudi.client.embedded.EmbeddedTimelineServerHelper.createEmbeddedTimelineService(EmbeddedTimelineServerHelper.java:51)
> at 
> org.apache.hudi.client.AbstractHoodieClient.startEmbeddedServerView(AbstractHoodieClient.java:109)
> at 
> org.apache.hudi.client.AbstractHoodieClient.(AbstractHoodieClient.java:77)
> at 
> org.apache.hudi.client.AbstractHoodieWriteClient.(AbstractHoodieWriteClient.java:139)
> at 
> org.apache.hudi.client.AbstractHoodieWriteClient.(AbstractHoodieWriteClient.java:127)
> at 
> org.apache.hudi.client.HoodieFlinkWriteClient.(HoodieFlinkWriteClient.java:97)
> at 
> org.apache.hudi.util.StreamerUtil.createWriteClient(StreamerUtil.java:402)
> at 
> org.apache.hudi.sink.StreamWriteOperatorCoordinator.start(StreamWriteOperatorCoordinator.java:166)
> at 
> org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.start(OperatorCoordinatorHolder.java:198)
> at 
> org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:85)
> ... 24 common frames omitted
> Caused by: java.net.ConnectException: Connection timed out (Connection timed 
> out)
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
> at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
> at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:589)
> at java.net.Socket.connect(Socket.java:538)
> at 
> org.apache.hudi.common.util.NetworkUtils.getHostname(NetworkUtils.java:38)
> ... 37 common frames omitted
> {code}
> An exception happens. I think the relevant code is below (see also
> https://issues.apache.org/jira/browse/HUDI-3037):
>   s = new Socket();
>   // see 
> https://stackoverflow.com/questions/9481865/getting-the-ip-address-of-the-current-machine-using-java
>   // for details.
>   s.connect(new InetSocketAddress("google.com", 80));
>   return s.getLocalAddress().getHostAddress();
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Closed] (HUDI-3050) Unable to find server port

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu closed HUDI-3050.

  Assignee: Danny Chen  (was: Ethan Guo)
Resolution: Fixed

> Unable to find server port
> --
>
> Key: HUDI-3050
> URL: https://issues.apache.org/jira/browse/HUDI-3050
> Project: Apache Hudi
>  Issue Type: Bug
>Reporter: Di Wang
>Assignee: Danny Chen
>Priority: Critical
>  Labels: user-support-issues
> Fix For: 0.11.0
>
>
>  
> {code:java}
> // code placeholder
> Caused by: org.apache.hudi.exception.HoodieException: Unable to find server 
> port
> at 
> org.apache.hudi.common.util.NetworkUtils.getHostname(NetworkUtils.java:41)
> at 
> org.apache.hudi.client.embedded.EmbeddedTimelineService.setHostAddr(EmbeddedTimelineService.java:104)
> at 
> org.apache.hudi.client.embedded.EmbeddedTimelineService.(EmbeddedTimelineService.java:55)
> at 
> org.apache.hudi.client.embedded.EmbeddedTimelineServerHelper.startTimelineService(EmbeddedTimelineServerHelper.java:70)
> at 
> org.apache.hudi.client.embedded.EmbeddedTimelineServerHelper.createEmbeddedTimelineService(EmbeddedTimelineServerHelper.java:51)
> at 
> org.apache.hudi.client.AbstractHoodieClient.startEmbeddedServerView(AbstractHoodieClient.java:109)
> at 
> org.apache.hudi.client.AbstractHoodieClient.(AbstractHoodieClient.java:77)
> at 
> org.apache.hudi.client.AbstractHoodieWriteClient.(AbstractHoodieWriteClient.java:139)
> at 
> org.apache.hudi.client.AbstractHoodieWriteClient.(AbstractHoodieWriteClient.java:127)
> at 
> org.apache.hudi.client.HoodieFlinkWriteClient.(HoodieFlinkWriteClient.java:97)
> at 
> org.apache.hudi.util.StreamerUtil.createWriteClient(StreamerUtil.java:402)
> at 
> org.apache.hudi.sink.StreamWriteOperatorCoordinator.start(StreamWriteOperatorCoordinator.java:166)
> at 
> org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.start(OperatorCoordinatorHolder.java:198)
> at 
> org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:85)
> ... 24 common frames omitted
> Caused by: java.net.ConnectException: Connection timed out (Connection timed 
> out)
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
> at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
> at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:589)
> at java.net.Socket.connect(Socket.java:538)
> at 
> org.apache.hudi.common.util.NetworkUtils.getHostname(NetworkUtils.java:38)
> ... 37 common frames omitted
> {code}
> An exception happens. I think the relevant code is below (see also
> https://issues.apache.org/jira/browse/HUDI-3037):
>   s = new Socket();
>   // see 
> https://stackoverflow.com/questions/9481865/getting-the-ip-address-of-the-current-machine-using-java
>   // for details.
>   s.connect(new InetSocketAddress("google.com", 80));
>   return s.getLocalAddress().getHostAddress();
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3066) Very slow file listing after enabling metadata for existing tables in 0.10.0 release

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3066:
-
Labels: metadata performance pull-request-available  (was: performance 
pull-request-available)

> Very slow file listing after enabling metadata for existing tables in 0.10.0 
> release
> 
>
> Key: HUDI-3066
> URL: https://issues.apache.org/jira/browse/HUDI-3066
> Project: Apache Hudi
>  Issue Type: Bug
>  Components: metadata, reader-core
>Affects Versions: 0.10.0
> Environment: EMR 6.4.0
> Hudi version : 0.10.0
>Reporter: Harsha Teja Kanna
>Assignee: Ethan Guo
>Priority: Critical
>  Labels: metadata, performance, pull-request-available
> Fix For: 0.11.0
>
> Attachments: Screen Shot 2021-12-18 at 6.16.29 PM.png, Screen Shot 
> 2021-12-20 at 10.05.50 PM.png, Screen Shot 2021-12-20 at 10.17.44 PM.png, 
> Screen Shot 2021-12-21 at 10.22.54 PM.png, Screen Shot 2021-12-21 at 10.24.12 
> PM.png, metadata_files.txt, metadata_files_compacted.txt, 
> metadata_timeline.txt, metadata_timeline_archived.txt, 
> metadata_timeline_compacted.txt, stderr_part1.txt, stderr_part2.txt, 
> timeline.txt, writer_log.txt
>
>
> After the metadata table is enabled, file listing takes a long time.
> If metadata is enabled on the reader side (as shown below), it takes even more 
> time per file-listing task
> {code:java}
> import org.apache.hudi.DataSourceReadOptions
> import org.apache.hudi.common.config.HoodieMetadataConfig
> val hadoopConf = spark.conf
> hadoopConf.set(HoodieMetadataConfig.ENABLE.key(), "true")
> val basePath = "s3a://datalake-hudi"
> val sessions = spark
> .read
> .format("org.apache.hudi")
> .option(DataSourceReadOptions.QUERY_TYPE.key(), 
> DataSourceReadOptions.QUERY_TYPE_SNAPSHOT_OPT_VAL)
> .option(DataSourceReadOptions.READ_PATHS.key(), 
> s"${basePath}/sessions_by_entrydate/entrydate=2021/*/*/*")
> .load()
> sessions.createOrReplaceTempView("sessions") {code}
> Existing tables (COW) have inline clustering enabled and have many replace commits.
> Logs suggest the delay is in the view.AbstractTableFileSystemView 
> resetFileGroupsReplaced function or in metadata.HoodieBackedTableMetadata.
> There are also many log messages from AbstractHoodieLogRecordReader.
>  
> 2021-12-18 23:17:54,056 INFO view.AbstractTableFileSystemView: Took 4118 ms 
> to read  136 instants, 9731 replaced file groups
> 2021-12-18 23:37:46,086 INFO log.AbstractHoodieLogRecordReader: Number of 
> remaining logblocks to merge 1
> 2021-12-18 23:37:46,090 INFO log.AbstractHoodieLogRecordReader: Reading a 
> data block from file 
> s3a://datalake-hudi/sessions/.hoodie/metadata/files/.files-_20211216144130775001.log.76_0-20-515
>  at instant 20211217035105329
> 2021-12-18 23:37:46,090 INFO log.AbstractHoodieLogRecordReader: Number of 
> remaining logblocks to merge 1
> 2021-12-18 23:37:46,094 INFO log.HoodieLogFormatReader: Moving to the next 
> reader for logfile 
> HoodieLogFile\{pathStr='s3a://datalake-hudi/sessions/.hoodie/metadata/files/.files-_20211216144130775001.log.121_0-57-663',
>  fileLen=0}
> 2021-12-18 23:37:46,095 INFO log.AbstractHoodieLogRecordReader: Scanning log 
> file 
> HoodieLogFile\{pathStr='s3a://datalake-hudi/sessions/.hoodie/metadata/files/.files-_20211216144130775001.log.20_0-35-613',
>  fileLen=0}
> 2021-12-18 23:37:46,095 INFO s3a.S3AInputStream: Switching to Random IO seek 
> policy
> 2021-12-18 23:37:46,096 INFO log.AbstractHoodieLogRecordReader: Reading a 
> data block from file 
> s3a://datalake-hudi/sessions/.hoodie/metadata/files/.files-_20211216144130775001.log.62_0-34-377
>  at instant 20211217022049877
> 2021-12-18 23:37:46,096 INFO log.AbstractHoodieLogRecordReader: Number of 
> remaining logblocks to merge 1
> 2021-12-18 23:37:46,105 INFO log.HoodieLogFormatReader: Moving to the next 
> reader for logfile 
> HoodieLogFile\{pathStr='s3a://datalake-hudi/sessions/.hoodie/metadata/files/.files-_20211216144130775001.log.86_0-20-362',
>  fileLen=0}
> 2021-12-18 23:37:46,109 INFO log.AbstractHoodieLogRecordReader: Scanning log 
> file 
> HoodieLogFile\{pathStr='s3a://datalake-hudi/sessions/.hoodie/metadata/files/.files-_20211216144130775001.log.121_0-57-663',
>  fileLen=0}
> 2021-12-18 23:37:46,109 INFO s3a.S3AInputStream: Switching to Random IO seek 
> policy
> 2021-12-18 23:37:46,110 INFO log.HoodieLogFormatReader: Moving to the next 
> reader for logfile 
> HoodieLogFile\{pathStr='s3a://datalake-hudi/sessions/.hoodie/metadata/files/.files-_20211216144130775001.log.77_0-35-590',
>  fileLen=0}
> 2021-12-18 23:37:46,112 INFO log.AbstractHoodieLogRecordReader: Reading a 
> data block from file 
> s3a://datalake-hudi/sessions/.hoodie/metadata/files/.files-_20211216144130775001.log.20_0-35-613
>

[jira] [Updated] (HUDI-3066) Very slow file listing after enabling metadata for existing tables in 0.10.0 release

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3066:
-
Epic Link: HUDI-1292

> Very slow file listing after enabling metadata for existing tables in 0.10.0 
> release
> 
>
> Key: HUDI-3066
> URL: https://issues.apache.org/jira/browse/HUDI-3066
> Project: Apache Hudi
>  Issue Type: Bug
>  Components: metadata, reader-core
>Affects Versions: 0.10.0
> Environment: EMR 6.4.0
> Hudi version : 0.10.0
>Reporter: Harsha Teja Kanna
>Assignee: Ethan Guo
>Priority: Critical
>  Labels: metadata, performance, pull-request-available
> Fix For: 0.11.0
>
> Attachments: Screen Shot 2021-12-18 at 6.16.29 PM.png, Screen Shot 
> 2021-12-20 at 10.05.50 PM.png, Screen Shot 2021-12-20 at 10.17.44 PM.png, 
> Screen Shot 2021-12-21 at 10.22.54 PM.png, Screen Shot 2021-12-21 at 10.24.12 
> PM.png, metadata_files.txt, metadata_files_compacted.txt, 
> metadata_timeline.txt, metadata_timeline_archived.txt, 
> metadata_timeline_compacted.txt, stderr_part1.txt, stderr_part2.txt, 
> timeline.txt, writer_log.txt
>
>
> After the 'metadata table' is enabled, file listing takes a long time.
> If metadata is enabled on the reader side (as shown below), it takes even more 
> time per file-listing task:
> {code:java}
> import org.apache.hudi.DataSourceReadOptions
> import org.apache.hudi.common.config.HoodieMetadataConfig
> val hadoopConf = spark.conf
> hadoopConf.set(HoodieMetadataConfig.ENABLE.key(), "true")
> val basePath = "s3a://datalake-hudi"
> val sessions = spark
> .read
> .format("org.apache.hudi")
> .option(DataSourceReadOptions.QUERY_TYPE.key(), 
> DataSourceReadOptions.QUERY_TYPE_SNAPSHOT_OPT_VAL)
> .option(DataSourceReadOptions.READ_PATHS.key(), 
> s"${basePath}/sessions_by_entrydate/entrydate=2021/*/*/*")
> .load()
> sessions.createOrReplaceTempView("sessions") {code}
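For comparison, here is a minimal sketch (not part of the original report) of the same snapshot read with metadata-based listing turned off, so listing falls back to direct file system calls. The Java wrapper class and the string config keys are illustrative assumptions mirroring the constants used in the snippet above.

{code:java}
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class MetadataListingComparison {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder().appName("hudi-listing-check").getOrCreate();
    String basePath = "s3a://datalake-hudi";

    // Same snapshot read as above, but with metadata-based file listing turned off,
    // to check whether the metadata table path is what slows the listing down.
    // String keys are assumed to mirror the constants used in the Scala snippet.
    Dataset<Row> sessionsNoMetadata = spark.read()
        .format("org.apache.hudi")
        .option("hoodie.datasource.query.type", "snapshot")
        .option("hoodie.metadata.enable", "false")
        .load(basePath + "/sessions_by_entrydate/entrydate=2021/*/*/*");

    sessionsNoMetadata.createOrReplaceTempView("sessions_no_metadata");
    spark.sql("SELECT COUNT(*) FROM sessions_no_metadata").show();
  }
}
{code}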
> Existing tables (COW) have inline clustering enabled and have many replace commits.
> Logs seem to suggest the delay is in the view.AbstractTableFileSystemView 
> resetFileGroupsReplaced function or in metadata.HoodieBackedTableMetadata.
> There are also many log messages from AbstractHoodieLogRecordReader.
>  
> 2021-12-18 23:17:54,056 INFO view.AbstractTableFileSystemView: Took 4118 ms 
> to read  136 instants, 9731 replaced file groups
> 2021-12-18 23:37:46,086 INFO log.AbstractHoodieLogRecordReader: Number of 
> remaining logblocks to merge 1
> 2021-12-18 23:37:46,090 INFO log.AbstractHoodieLogRecordReader: Reading a 
> data block from file 
> s3a://datalake-hudi/sessions/.hoodie/metadata/files/.files-_20211216144130775001.log.76_0-20-515
>  at instant 20211217035105329
> 2021-12-18 23:37:46,090 INFO log.AbstractHoodieLogRecordReader: Number of 
> remaining logblocks to merge 1
> 2021-12-18 23:37:46,094 INFO log.HoodieLogFormatReader: Moving to the next 
> reader for logfile 
> HoodieLogFile\{pathStr='s3a://datalake-hudi/sessions/.hoodie/metadata/files/.files-_20211216144130775001.log.121_0-57-663',
>  fileLen=0}
> 2021-12-18 23:37:46,095 INFO log.AbstractHoodieLogRecordReader: Scanning log 
> file 
> HoodieLogFile\{pathStr='s3a://datalake-hudi/sessions/.hoodie/metadata/files/.files-_20211216144130775001.log.20_0-35-613',
>  fileLen=0}
> 2021-12-18 23:37:46,095 INFO s3a.S3AInputStream: Switching to Random IO seek 
> policy
> 2021-12-18 23:37:46,096 INFO log.AbstractHoodieLogRecordReader: Reading a 
> data block from file 
> s3a://datalake-hudi/sessions/.hoodie/metadata/files/.files-_20211216144130775001.log.62_0-34-377
>  at instant 20211217022049877
> 2021-12-18 23:37:46,096 INFO log.AbstractHoodieLogRecordReader: Number of 
> remaining logblocks to merge 1
> 2021-12-18 23:37:46,105 INFO log.HoodieLogFormatReader: Moving to the next 
> reader for logfile 
> HoodieLogFile\{pathStr='s3a://datalake-hudi/sessions/.hoodie/metadata/files/.files-_20211216144130775001.log.86_0-20-362',
>  fileLen=0}
> 2021-12-18 23:37:46,109 INFO log.AbstractHoodieLogRecordReader: Scanning log 
> file 
> HoodieLogFile\{pathStr='s3a://datalake-hudi/sessions/.hoodie/metadata/files/.files-_20211216144130775001.log.121_0-57-663',
>  fileLen=0}
> 2021-12-18 23:37:46,109 INFO s3a.S3AInputStream: Switching to Random IO seek 
> policy
> 2021-12-18 23:37:46,110 INFO log.HoodieLogFormatReader: Moving to the next 
> reader for logfile 
> HoodieLogFile\{pathStr='s3a://datalake-hudi/sessions/.hoodie/metadata/files/.files-_20211216144130775001.log.77_0-35-590',
>  fileLen=0}
> 2021-12-18 23:37:46,112 INFO log.AbstractHoodieLogRecordReader: Reading a 
> data block from file 
> s3a://datalake-hudi/sessions/.hoodie/metadata/files/.files-_20211216144130775001.log.20_0-35-613
>  at instant 20211216183448389
> 2021-12-18 23:37:46,112 INFO log.AbstractH

[jira] [Updated] (HUDI-3066) Very slow file listing after enabling metadata for existing tables in 0.10.0 release

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3066:
-
Priority: Blocker  (was: Critical)

> Very slow file listing after enabling metadata for existing tables in 0.10.0 
> release
> 
>
> Key: HUDI-3066
> URL: https://issues.apache.org/jira/browse/HUDI-3066
> Project: Apache Hudi
>  Issue Type: Bug
>  Components: metadata, reader-core
>Affects Versions: 0.10.0
> Environment: EMR 6.4.0
> Hudi version : 0.10.0
>Reporter: Harsha Teja Kanna
>Assignee: Ethan Guo
>Priority: Blocker
>  Labels: metadata, performance, pull-request-available
> Fix For: 0.11.0
>
> Attachments: Screen Shot 2021-12-18 at 6.16.29 PM.png, Screen Shot 
> 2021-12-20 at 10.05.50 PM.png, Screen Shot 2021-12-20 at 10.17.44 PM.png, 
> Screen Shot 2021-12-21 at 10.22.54 PM.png, Screen Shot 2021-12-21 at 10.24.12 
> PM.png, metadata_files.txt, metadata_files_compacted.txt, 
> metadata_timeline.txt, metadata_timeline_archived.txt, 
> metadata_timeline_compacted.txt, stderr_part1.txt, stderr_part2.txt, 
> timeline.txt, writer_log.txt
>
>
> After the 'metadata table' is enabled, file listing takes a long time.
> If metadata is enabled on the reader side (as shown below), it takes even more 
> time per file-listing task:
> {code:java}
> import org.apache.hudi.DataSourceReadOptions
> import org.apache.hudi.common.config.HoodieMetadataConfig
> val hadoopConf = spark.conf
> hadoopConf.set(HoodieMetadataConfig.ENABLE.key(), "true")
> val basePath = "s3a://datalake-hudi"
> val sessions = spark
> .read
> .format("org.apache.hudi")
> .option(DataSourceReadOptions.QUERY_TYPE.key(), 
> DataSourceReadOptions.QUERY_TYPE_SNAPSHOT_OPT_VAL)
> .option(DataSourceReadOptions.READ_PATHS.key(), 
> s"${basePath}/sessions_by_entrydate/entrydate=2021/*/*/*")
> .load()
> sessions.createOrReplaceTempView("sessions") {code}
> Existing tables (COW) have inline clustering enabled and have many replace commits.
> Logs seem to suggest the delay is in the view.AbstractTableFileSystemView 
> resetFileGroupsReplaced function or in metadata.HoodieBackedTableMetadata.
> There are also many log messages from AbstractHoodieLogRecordReader.
>  
> 2021-12-18 23:17:54,056 INFO view.AbstractTableFileSystemView: Took 4118 ms 
> to read  136 instants, 9731 replaced file groups
> 2021-12-18 23:37:46,086 INFO log.AbstractHoodieLogRecordReader: Number of 
> remaining logblocks to merge 1
> 2021-12-18 23:37:46,090 INFO log.AbstractHoodieLogRecordReader: Reading a 
> data block from file 
> s3a://datalake-hudi/sessions/.hoodie/metadata/files/.files-_20211216144130775001.log.76_0-20-515
>  at instant 20211217035105329
> 2021-12-18 23:37:46,090 INFO log.AbstractHoodieLogRecordReader: Number of 
> remaining logblocks to merge 1
> 2021-12-18 23:37:46,094 INFO log.HoodieLogFormatReader: Moving to the next 
> reader for logfile 
> HoodieLogFile\{pathStr='s3a://datalake-hudi/sessions/.hoodie/metadata/files/.files-_20211216144130775001.log.121_0-57-663',
>  fileLen=0}
> 2021-12-18 23:37:46,095 INFO log.AbstractHoodieLogRecordReader: Scanning log 
> file 
> HoodieLogFile\{pathStr='s3a://datalake-hudi/sessions/.hoodie/metadata/files/.files-_20211216144130775001.log.20_0-35-613',
>  fileLen=0}
> 2021-12-18 23:37:46,095 INFO s3a.S3AInputStream: Switching to Random IO seek 
> policy
> 2021-12-18 23:37:46,096 INFO log.AbstractHoodieLogRecordReader: Reading a 
> data block from file 
> s3a://datalake-hudi/sessions/.hoodie/metadata/files/.files-_20211216144130775001.log.62_0-34-377
>  at instant 20211217022049877
> 2021-12-18 23:37:46,096 INFO log.AbstractHoodieLogRecordReader: Number of 
> remaining logblocks to merge 1
> 2021-12-18 23:37:46,105 INFO log.HoodieLogFormatReader: Moving to the next 
> reader for logfile 
> HoodieLogFile\{pathStr='s3a://datalake-hudi/sessions/.hoodie/metadata/files/.files-_20211216144130775001.log.86_0-20-362',
>  fileLen=0}
> 2021-12-18 23:37:46,109 INFO log.AbstractHoodieLogRecordReader: Scanning log 
> file 
> HoodieLogFile\{pathStr='s3a://datalake-hudi/sessions/.hoodie/metadata/files/.files-_20211216144130775001.log.121_0-57-663',
>  fileLen=0}
> 2021-12-18 23:37:46,109 INFO s3a.S3AInputStream: Switching to Random IO seek 
> policy
> 2021-12-18 23:37:46,110 INFO log.HoodieLogFormatReader: Moving to the next 
> reader for logfile 
> HoodieLogFile\{pathStr='s3a://datalake-hudi/sessions/.hoodie/metadata/files/.files-_20211216144130775001.log.77_0-35-590',
>  fileLen=0}
> 2021-12-18 23:37:46,112 INFO log.AbstractHoodieLogRecordReader: Reading a 
> data block from file 
> s3a://datalake-hudi/sessions/.hoodie/metadata/files/.files-_20211216144130775001.log.20_0-35-613
>  at instant 20211216183448389
> 2021-12-18 23:37:46,112 INFO 

[jira] [Updated] (HUDI-3298) Explore and Try Byteman for error injection on Hudi write flow

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3298:
-
Issue Type: Test  (was: Task)

> Explore and Try Byteman for error injection on Hudi write flow
> --
>
> Key: HUDI-3298
> URL: https://issues.apache.org/jira/browse/HUDI-3298
> Project: Apache Hudi
>  Issue Type: Test
>  Components: tests-ci
>Reporter: Ethan Guo
>Assignee: Ethan Guo
>Priority: Critical
> Fix For: 0.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3298) Explore and Try Byteman for error injection on Hudi write flow

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3298:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> Explore and Try Byteman for error injection on Hudi write flow
> --
>
> Key: HUDI-3298
> URL: https://issues.apache.org/jira/browse/HUDI-3298
> Project: Apache Hudi
>  Issue Type: Task
>  Components: tests-ci
>Reporter: Ethan Guo
>Assignee: Ethan Guo
>Priority: Critical
> Fix For: 0.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3298) Explore and Try Byteman for error injection on Hudi write flow

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3298:
-
Issue Type: Improvement  (was: Test)

> Explore and Try Byteman for error injection on Hudi write flow
> --
>
> Key: HUDI-3298
> URL: https://issues.apache.org/jira/browse/HUDI-3298
> Project: Apache Hudi
>  Issue Type: Improvement
>  Components: tests-ci
>Reporter: Ethan Guo
>Assignee: Ethan Guo
>Priority: Critical
> Fix For: 0.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3298) Explore and Try Byteman for error injection on Hudi write flow

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3298:
-
Epic Link: HUDI-1236  (was: HUDI-1292)

> Explore and Try Byteman for error injection on Hudi write flow
> --
>
> Key: HUDI-3298
> URL: https://issues.apache.org/jira/browse/HUDI-3298
> Project: Apache Hudi
>  Issue Type: Test
>  Components: tests-ci
>Reporter: Ethan Guo
>Assignee: Ethan Guo
>Priority: Critical
> Fix For: 0.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3298) Explore and Try Byteman for error injection on Hudi write flow

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3298:
-
Priority: Minor  (was: Critical)

> Explore and Try Byteman for error injection on Hudi write flow
> --
>
> Key: HUDI-3298
> URL: https://issues.apache.org/jira/browse/HUDI-3298
> Project: Apache Hudi
>  Issue Type: Improvement
>  Components: tests-ci
>Reporter: Ethan Guo
>Assignee: Ethan Guo
>Priority: Minor
> Fix For: 0.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3298) Explore and Try Byteman for error injection on Hudi write flow

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3298:
-
Fix Version/s: (was: 0.12.0)

> Explore and Try Byteman for error injection on Hudi write flow
> --
>
> Key: HUDI-3298
> URL: https://issues.apache.org/jira/browse/HUDI-3298
> Project: Apache Hudi
>  Issue Type: Improvement
>  Components: tests-ci
>Reporter: Ethan Guo
>Assignee: Ethan Guo
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3300) Timeline server FSViewManager should avoid point lookup for metadata file partition

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3300:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> Timeline server FSViewManager should avoid point lookup for metadata file 
> partition
> ---
>
> Key: HUDI-3300
> URL: https://issues.apache.org/jira/browse/HUDI-3300
> Project: Apache Hudi
>  Issue Type: Task
>  Components: metadata
>Reporter: Manoj Govindassamy
>Assignee: Yue Zhang
>Priority: Critical
> Fix For: 0.12.0
>
>
> When inline reading is enabled, that is 
> hoodie.metadata.enable.full.scan.log.files = false, 
> MetadataMergedLogRecordReader doesn't cache the file listing records via the 
> ExternalSpillableMap. So every file listing leads to re-reading the metadata 
> table's files partition log and base files. Since the files partition is small, 
> even when inline reading is enabled, the TimelineServer should construct the 
> FSViewManager with inline reading disabled for the metadata files partition.
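
A minimal sketch of the proposed behavior, using only the config key quoted above; the helper method and its wiring into the timeline server are hypothetical, not Hudi's actual API:

{code:java}
import java.util.Properties;

public class TimelineServerMetadataProps {

  /**
   * Hypothetical helper: builds the properties the timeline server could use when
   * creating its FSViewManager, forcing a full scan (i.e. inline reading disabled)
   * for the metadata table's files partition even if the writer disabled it.
   */
  public static Properties filesPartitionViewProps(Properties writerProps) {
    Properties props = new Properties();
    props.putAll(writerProps);
    // Key taken from the description above; forcing it to true avoids the
    // repeated point lookups (log + base file re-reads) on every listing call.
    props.setProperty("hoodie.metadata.enable.full.scan.log.files", "true");
    return props;
  }

  public static void main(String[] args) {
    Properties writerProps = new Properties();
    writerProps.setProperty("hoodie.metadata.enable.full.scan.log.files", "false");
    System.out.println(filesPartitionViewProps(writerProps));
  }
}
{code}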



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3300) Timeline server FSViewManager should avoid point lookup for metadata file partition

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3300:
-
Issue Type: Bug  (was: Task)

> Timeline server FSViewManager should avoid point lookup for metadata file 
> partition
> ---
>
> Key: HUDI-3300
> URL: https://issues.apache.org/jira/browse/HUDI-3300
> Project: Apache Hudi
>  Issue Type: Bug
>  Components: metadata
>Reporter: Manoj Govindassamy
>Assignee: Yue Zhang
>Priority: Critical
> Fix For: 0.12.0
>
>
> When inline reading is enabled, that is 
> hoodie.metadata.enable.full.scan.log.files = false, 
> MetadataMergedLogRecordReader doesn't cache the file listing records via the 
> ExternalSpillableMap. So every file listing leads to re-reading the metadata 
> table's files partition log and base files. Since the files partition is small, 
> even when inline reading is enabled, the TimelineServer should construct the 
> FSViewManager with inline reading disabled for the metadata files partition.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3301) MergedLogRecordReader inline reading should be stateless and thread safe

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3301:
-
Labels:   (was: HUDI-bug)

> MergedLogRecordReader inline reading should be stateless and thread safe
> 
>
> Key: HUDI-3301
> URL: https://issues.apache.org/jira/browse/HUDI-3301
> Project: Apache Hudi
>  Issue Type: Bug
>  Components: metadata
>Reporter: Manoj Govindassamy
>Assignee: Yue Zhang
>Priority: Critical
> Fix For: 0.12.0
>
>
> Metadata table inline reading (enable.full.scan.log.files = false) today 
> alters instance member fields and is not thread safe.
>  
> When inline reading is enabled, HoodieMetadataMergedLogRecordReader 
> doesn't do a full read of the log and base files and doesn't fill in the 
> ExternalSpillableMap records cache. Each getRecordsByKeys() call will therefore 
> re-read the log and base files by design. The issue is that this reading 
> alters the instance members, and the filled-in records are relevant only for 
> that request. Any concurrent getRecordsByKeys() call also modifies the member 
> variables, leading to NPEs.
>  
> To avoid this, a temporary fix of making getRecordsByKeys() a synchronized 
> method has been pushed to master. But this fix doesn't cover all use cases. We 
> need to make the whole class stateless and thread safe for inline reading.
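
A simplified sketch of the stateless pattern suggested above, where each lookup builds per-call state instead of mutating shared fields; the class and record handling are hypothetical stand-ins, not Hudi's actual HoodieMetadataMergedLogRecordReader:

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Hypothetical sketch: each lookup builds its own result map instead of
 * mutating shared instance fields, so concurrent callers cannot observe
 * each other's partial state and no synchronization is needed.
 */
public class StatelessKeyLookupSketch {

  private final List<String> logFilePaths; // immutable after construction

  public StatelessKeyLookupSketch(List<String> logFilePaths) {
    this.logFilePaths = Collections.unmodifiableList(new ArrayList<>(logFilePaths));
  }

  /** Re-reads the (pretend) log files on every call; state lives only on the stack. */
  public Map<String, String> getRecordsByKeys(List<String> keys) {
    Map<String, String> result = new HashMap<>(); // per-call state only
    for (String key : keys) {
      // Stand-in for scanning log blocks and merging records for this key.
      result.put(key, "merged-record-for-" + key);
    }
    return result;
  }

  public static void main(String[] args) {
    StatelessKeyLookupSketch reader =
        new StatelessKeyLookupSketch(List.of(".files-0001.log.1"));
    System.out.println(reader.getRecordsByKeys(List.of("partitionA", "partitionB")));
  }
}
{code}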



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3301) MergedLogRecordReader inline reading should be stateless and thread safe

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3301:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> MergedLogRecordReader inline reading should be stateless and thread safe
> 
>
> Key: HUDI-3301
> URL: https://issues.apache.org/jira/browse/HUDI-3301
> Project: Apache Hudi
>  Issue Type: Bug
>  Components: metadata
>Reporter: Manoj Govindassamy
>Assignee: Yue Zhang
>Priority: Critical
>  Labels: HUDI-bug
> Fix For: 0.12.0
>
>
> Metadata table inline reading (enable.full.scan.log.files = false) today 
> alters instance member fields and is not thread safe.
>  
> When inline reading is enabled, HoodieMetadataMergedLogRecordReader 
> doesn't do a full read of the log and base files and doesn't fill in the 
> ExternalSpillableMap records cache. Each getRecordsByKeys() call will therefore 
> re-read the log and base files by design. The issue is that this reading 
> alters the instance members, and the filled-in records are relevant only for 
> that request. Any concurrent getRecordsByKeys() call also modifies the member 
> variables, leading to NPEs.
>  
> To avoid this, a temporary fix of making getRecordsByKeys() a synchronized 
> method has been pushed to master. But this fix doesn't cover all use cases. We 
> need to make the whole class stateless and thread safe for inline reading.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3307) Implement a repair tool for bring back table into a clean state

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3307:
-
Component/s: cli

> Implement a repair tool for bring back table into a clean state
> ---
>
> Key: HUDI-3307
> URL: https://issues.apache.org/jira/browse/HUDI-3307
> Project: Apache Hudi
>  Issue Type: Task
>  Components: cli, metadata
>Reporter: Ethan Guo
>Assignee: Ethan Guo
>Priority: Major
> Fix For: 0.11.0
>
>
> Implement a repair tool to bring the table back into a clean state, in case 
> the metadata table corrupts the data table.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3307) Implement a repair tool for bring back table into a clean state

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3307:
-
Issue Type: New Feature  (was: Task)

> Implement a repair tool for bring back table into a clean state
> ---
>
> Key: HUDI-3307
> URL: https://issues.apache.org/jira/browse/HUDI-3307
> Project: Apache Hudi
>  Issue Type: New Feature
>  Components: cli, metadata
>Reporter: Ethan Guo
>Assignee: sivabalan narayanan
>Priority: Major
> Fix For: 0.11.0
>
>
> Implement a repair tool to bring the table back into a clean state, in case 
> the metadata table corrupts the data table.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (HUDI-3307) Implement a repair tool for bring back table into a clean state

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu reassigned HUDI-3307:


Assignee: sivabalan narayanan  (was: Ethan Guo)

> Implement a repair tool for bring back table into a clean state
> ---
>
> Key: HUDI-3307
> URL: https://issues.apache.org/jira/browse/HUDI-3307
> Project: Apache Hudi
>  Issue Type: Task
>  Components: cli, metadata
>Reporter: Ethan Guo
>Assignee: sivabalan narayanan
>Priority: Major
> Fix For: 0.11.0
>
>
> Implement a repair tool to bring the table back into a clean state, in case 
> the metadata table corrupts the data table.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Closed] (HUDI-3307) Implement a repair tool for bring back table into a clean state

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu closed HUDI-3307.

Resolution: Done

> Implement a repair tool for bring back table into a clean state
> ---
>
> Key: HUDI-3307
> URL: https://issues.apache.org/jira/browse/HUDI-3307
> Project: Apache Hudi
>  Issue Type: New Feature
>  Components: cli, metadata
>Reporter: Ethan Guo
>Assignee: sivabalan narayanan
>Priority: Major
> Fix For: 0.11.0
>
>
> Implement a repair tool to bring the table back into a clean state, in case 
> the metadata table corrupts the data table.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3321) HFileWriter, HFileReader and HFileDataBlock should avoid hardcoded key field name

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3321:
-
Labels:   (was: code-cleanup)

> HFileWriter, HFileReader and HFileDataBlock should avoid hardcoded key field 
> name
> -
>
> Key: HUDI-3321
> URL: https://issues.apache.org/jira/browse/HUDI-3321
> Project: Apache Hudi
>  Issue Type: Improvement
>  Components: code-quality, metadata
>Reporter: Manoj Govindassamy
>Assignee: Ethan Guo
>Priority: Major
> Fix For: 0.11.0
>
>
> Today HFileReader has a hardcoded key field name for the schema. This key 
> field is used by HFileWriter and HFileDataBlock for the HFile key deduplication 
> feature. When users want to use the HFile format, this hardcoded key field 
> name can prevent that use case. We need a way to pass the 
> writer/reader/query configs down to the HFile storage layer so that the right 
> key field name is used.
>  
> Related: HUDI-2763
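
A small sketch of the configurable key field idea described above; the class, config key name, and default are illustrative assumptions, not Hudi's actual reader/writer API:

{code:java}
import java.util.Properties;

/**
 * Hypothetical sketch: the key field name comes from reader/writer config with
 * a default, instead of a hardcoded constant inside the HFile reader/writer.
 */
public class HFileKeyFieldConfigSketch {

  public static final String KEY_FIELD_PROP = "hoodie.hfile.key.field"; // assumed name
  public static final String DEFAULT_KEY_FIELD = "key";                 // assumed default

  private final String keyFieldName;

  public HFileKeyFieldConfigSketch(Properties props) {
    this.keyFieldName = props.getProperty(KEY_FIELD_PROP, DEFAULT_KEY_FIELD);
  }

  /** Returns the configured dedup key field rather than a hardcoded constant. */
  public String keyFieldName() {
    return keyFieldName;
  }

  public static void main(String[] args) {
    Properties props = new Properties();
    props.setProperty(KEY_FIELD_PROP, "_row_key");
    System.out.println(new HFileKeyFieldConfigSketch(props).keyFieldName());
  }
}
{code}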



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3427) Investigate timeouts in Hudi Kafka Connect Sink

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3427:
-
Fix Version/s: 0.12.0
   (was: 0.11.0)

> Investigate timeouts in Hudi Kafka Connect Sink
> ---
>
> Key: HUDI-3427
> URL: https://issues.apache.org/jira/browse/HUDI-3427
> Project: Apache Hudi
>  Issue Type: Improvement
>Reporter: Ethan Guo
>Priority: Critical
> Fix For: 0.12.0
>
>
> When following the Quick Start Guide for the Hudi Kafka Connect Sink, the sink 
> periodically throws timeout exceptions, though they do not affect delta 
> commits:
> {code:java}
> [2022-02-14 15:02:39,832] DEBUG [hudi-sink|task-0] ignored: 
> WriteFlusher@349724cf{IDLE}->null 
> (org.apache.hudi.org.eclipse.jetty.io.WriteFlusher:471)
> java.nio.channels.ClosedChannelException
>   at 
> org.apache.hudi.org.eclipse.jetty.io.WriteFlusher.onClose(WriteFlusher.java:502)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.AbstractEndPoint.onClose(AbstractEndPoint.java:353)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.ChannelEndPoint.onClose(ChannelEndPoint.java:215)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.AbstractEndPoint.doOnClose(AbstractEndPoint.java:225)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.AbstractEndPoint.close(AbstractEndPoint.java:192)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.AbstractEndPoint.close(AbstractEndPoint.java:175)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.AbstractConnection.onFillInterestedFailed(AbstractConnection.java:174)
>   at 
> org.apache.hudi.org.eclipse.jetty.server.HttpConnection.onFillInterestedFailed(HttpConnection.java:503)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.AbstractConnection$ReadCallback.failed(AbstractConnection.java:311)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.FillInterest.onFail(FillInterest.java:138)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.AbstractEndPoint.onIdleExpired(AbstractEndPoint.java:406)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.IdleTimeout.checkIdleTimeout(IdleTimeout.java:166)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.IdleTimeout$1.run(IdleTimeout.java:50)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> [2022-02-14 15:02:39,833] DEBUG [hudi-sink|task-0] onClose 
> FillInterest@1c255bc0{null} 
> (org.apache.hudi.org.eclipse.jetty.io.FillInterest:147)
> [2022-02-14 15:02:39,833] DEBUG [hudi-sink|task-0] ignored: 
> WriteFlusher@349724cf{IDLE}->null 
> (org.apache.hudi.org.eclipse.jetty.io.WriteFlusher:471)
> java.util.concurrent.TimeoutException: Idle timeout expired: 30003/3 ms
>   at 
> org.apache.hudi.org.eclipse.jetty.io.IdleTimeout.checkIdleTimeout(IdleTimeout.java:166)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.IdleTimeout$1.run(IdleTimeout.java:50)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748) {code}
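
For context on the "Idle timeout expired" entries above, here is a minimal plain-Jetty sketch of how a connector's idle timeout is configured; it uses unshaded Jetty directly and is not the actual wiring of Hudi's embedded timeline server or the Kafka Connect sink:

{code:java}
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

/**
 * Plain-Jetty illustration only: when a connection stays idle past the
 * connector's idle timeout, Jetty closes it and logs the kind of
 * "Idle timeout expired" / ClosedChannelException messages shown above.
 */
public class JettyIdleTimeoutSketch {
  public static void main(String[] args) throws Exception {
    Server server = new Server();
    ServerConnector connector = new ServerConnector(server);
    connector.setPort(8080);
    // Raise the idle timeout (commonly 30s by default) so idle keep-alive
    // connections are closed less aggressively.
    connector.setIdleTimeout(120_000L);
    server.addConnector(connector);
    server.start();
    server.join();
  }
}
{code}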



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3427) Investigate timeouts in Hudi Kafka Connect Sink

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3427:
-
Component/s: kafka-connect

> Investigate timeouts in Hudi Kafka Connect Sink
> ---
>
> Key: HUDI-3427
> URL: https://issues.apache.org/jira/browse/HUDI-3427
> Project: Apache Hudi
>  Issue Type: Improvement
>  Components: kafka-connect
>Reporter: Ethan Guo
>Priority: Critical
> Fix For: 0.12.0
>
>
> When following the Quick Start Guide for the Hudi Kafka Connect Sink, the sink 
> periodically throws timeout exceptions, though they do not affect delta 
> commits:
> {code:java}
> [2022-02-14 15:02:39,832] DEBUG [hudi-sink|task-0] ignored: 
> WriteFlusher@349724cf{IDLE}->null 
> (org.apache.hudi.org.eclipse.jetty.io.WriteFlusher:471)
> java.nio.channels.ClosedChannelException
>   at 
> org.apache.hudi.org.eclipse.jetty.io.WriteFlusher.onClose(WriteFlusher.java:502)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.AbstractEndPoint.onClose(AbstractEndPoint.java:353)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.ChannelEndPoint.onClose(ChannelEndPoint.java:215)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.AbstractEndPoint.doOnClose(AbstractEndPoint.java:225)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.AbstractEndPoint.close(AbstractEndPoint.java:192)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.AbstractEndPoint.close(AbstractEndPoint.java:175)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.AbstractConnection.onFillInterestedFailed(AbstractConnection.java:174)
>   at 
> org.apache.hudi.org.eclipse.jetty.server.HttpConnection.onFillInterestedFailed(HttpConnection.java:503)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.AbstractConnection$ReadCallback.failed(AbstractConnection.java:311)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.FillInterest.onFail(FillInterest.java:138)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.AbstractEndPoint.onIdleExpired(AbstractEndPoint.java:406)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.IdleTimeout.checkIdleTimeout(IdleTimeout.java:166)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.IdleTimeout$1.run(IdleTimeout.java:50)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> [2022-02-14 15:02:39,833] DEBUG [hudi-sink|task-0] onClose 
> FillInterest@1c255bc0{null} 
> (org.apache.hudi.org.eclipse.jetty.io.FillInterest:147)
> [2022-02-14 15:02:39,833] DEBUG [hudi-sink|task-0] ignored: 
> WriteFlusher@349724cf{IDLE}->null 
> (org.apache.hudi.org.eclipse.jetty.io.WriteFlusher:471)
> java.util.concurrent.TimeoutException: Idle timeout expired: 30003/3 ms
>   at 
> org.apache.hudi.org.eclipse.jetty.io.IdleTimeout.checkIdleTimeout(IdleTimeout.java:166)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.IdleTimeout$1.run(IdleTimeout.java:50)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748) {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HUDI-3427) Investigate timeouts in Hudi Kafka Connect Sink

2022-03-29 Thread Raymond Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-3427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-3427:
-
Issue Type: Bug  (was: Improvement)

> Investigate timeouts in Hudi Kafka Connect Sink
> ---
>
> Key: HUDI-3427
> URL: https://issues.apache.org/jira/browse/HUDI-3427
> Project: Apache Hudi
>  Issue Type: Bug
>  Components: kafka-connect
>Reporter: Ethan Guo
>Priority: Blocker
> Fix For: 0.12.0
>
>
> When following the Quick Start Guide for the Hudi Kafka Connect Sink, the sink 
> periodically throws timeout exceptions, though they do not affect delta 
> commits:
> {code:java}
> [2022-02-14 15:02:39,832] DEBUG [hudi-sink|task-0] ignored: 
> WriteFlusher@349724cf{IDLE}->null 
> (org.apache.hudi.org.eclipse.jetty.io.WriteFlusher:471)
> java.nio.channels.ClosedChannelException
>   at 
> org.apache.hudi.org.eclipse.jetty.io.WriteFlusher.onClose(WriteFlusher.java:502)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.AbstractEndPoint.onClose(AbstractEndPoint.java:353)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.ChannelEndPoint.onClose(ChannelEndPoint.java:215)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.AbstractEndPoint.doOnClose(AbstractEndPoint.java:225)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.AbstractEndPoint.close(AbstractEndPoint.java:192)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.AbstractEndPoint.close(AbstractEndPoint.java:175)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.AbstractConnection.onFillInterestedFailed(AbstractConnection.java:174)
>   at 
> org.apache.hudi.org.eclipse.jetty.server.HttpConnection.onFillInterestedFailed(HttpConnection.java:503)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.AbstractConnection$ReadCallback.failed(AbstractConnection.java:311)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.FillInterest.onFail(FillInterest.java:138)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.AbstractEndPoint.onIdleExpired(AbstractEndPoint.java:406)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.IdleTimeout.checkIdleTimeout(IdleTimeout.java:166)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.IdleTimeout$1.run(IdleTimeout.java:50)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> [2022-02-14 15:02:39,833] DEBUG [hudi-sink|task-0] onClose 
> FillInterest@1c255bc0{null} 
> (org.apache.hudi.org.eclipse.jetty.io.FillInterest:147)
> [2022-02-14 15:02:39,833] DEBUG [hudi-sink|task-0] ignored: 
> WriteFlusher@349724cf{IDLE}->null 
> (org.apache.hudi.org.eclipse.jetty.io.WriteFlusher:471)
> java.util.concurrent.TimeoutException: Idle timeout expired: 30003/3 ms
>   at 
> org.apache.hudi.org.eclipse.jetty.io.IdleTimeout.checkIdleTimeout(IdleTimeout.java:166)
>   at 
> org.apache.hudi.org.eclipse.jetty.io.IdleTimeout$1.run(IdleTimeout.java:50)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748) {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

