[jira] [Updated] (FLINK-35828) NullArgumentException when accessing the checkpoint API concurrently

2024-07-12 Thread Qishang Zhong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qishang Zhong updated FLINK-35828:
--
Description: 
 
{code:java}
2024-07-12 16:53:02,664 TRACE org.apache.flink.runtime.rest.FileUploadHandler [] - Received request. URL:/jobs/e8eae2ce679206b9df19ec3ae07650c4/checkpoints Method:GET
2024-07-12 16:53:02,664 TRACE org.apache.flink.runtime.rest.FileUploadHandler [] - Received request. URL:/jobs/e8eae2ce679206b9df19ec3ae07650c4/checkpoints Method:GET
2024-07-12 16:53:02,664 TRACE org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Received request /jobs/e8eae2ce679206b9df19ec3ae07650c4/checkpoints.
2024-07-12 16:53:02,664 TRACE org.apache.flink.runtime.rest.FileUploadHandler [] - Received request. URL:/jobs/e8eae2ce679206b9df19ec3ae07650c4/checkpoints Method:GET
2024-07-12 16:53:02,664 TRACE org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Received request /jobs/e8eae2ce679206b9df19ec3ae07650c4/checkpoints.
2024-07-12 16:53:02,665 TRACE org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Starting request processing.
2024-07-12 16:53:02,665 TRACE org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Received request /jobs/e8eae2ce679206b9df19ec3ae07650c4/checkpoints.
2024-07-12 16:53:02,665 TRACE org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Starting request processing.
2024-07-12 16:53:02,665 TRACE org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Starting request processing.
2024-07-12 16:53:02,665 TRACE org.apache.flink.runtime.rest.FileUploadHandler [] - Received request. URL:/jobs/e8eae2ce679206b9df19ec3ae07650c4/checkpoints Method:GET
2024-07-12 16:53:02,667 TRACE org.apache.flink.runtime.rest.FileUploadHandler [] - Received request. URL:/jobs/e8eae2ce679206b9df19ec3ae07650c4/checkpoints Method:GET
2024-07-12 16:53:02,667 TRACE org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Received request /jobs/e8eae2ce679206b9df19ec3ae07650c4/checkpoints.
2024-07-12 16:53:02,667 TRACE org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Starting request processing.
2024-07-12 16:53:02,668 TRACE org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Received request /jobs/e8eae2ce679206b9df19ec3ae07650c4/checkpoints.
2024-07-12 16:53:02,668 TRACE org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Starting request processing.
2024-07-12 16:53:02,668 TRACE org.apache.flink.runtime.rest.FileUploadHandler [] - Received request. URL:/jobs/e8eae2ce679206b9df19ec3ae07650c4/checkpoints Method:GET
2024-07-12 16:53:02,669 TRACE org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Received request /jobs/e8eae2ce679206b9df19ec3ae07650c4/checkpoints.
2024-07-12 16:53:02,669 TRACE org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Starting request processing.
2024-07-12 16:53:02,692 ERROR org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Unhandled exception.
org.apache.commons.math3.exception.NullArgumentException: input array
	at org.apache.commons.math3.util.MathArrays.verifyValues(MathArrays.java:1753) ~[flink-dist_2.12-1.18.1.jar:1.18.1]
	at org.apache.commons.math3.stat.descriptive.AbstractUnivariateStatistic.test(AbstractUnivariateStatistic.java:158) ~[flink-dist_2.12-1.18.1.jar:1.18.1]
	at org.apache.commons.math3.stat.descriptive.rank.Percentile.evaluate(Percentile.java:272) ~[flink-dist_2.12-1.18.1.jar:1.18.1]
	at org.apache.commons.math3.stat.descriptive.rank.Percentile.evaluate(Percentile.java:241) ~[flink-dist_2.12-1.18.1.jar:1.18.1]
	at org.apache.flink.runtime.metrics.DescriptiveStatisticsHistogramStatistics$CommonMetricsSnapshot.getPercentile(DescriptiveStatisticsHistogramStatistics.java:163) ~[flink-dist_2.12-1.18.1.jar:1.18.1]
	at org.apache.flink.runtime.metrics.DescriptiveStatisticsHistogramStatistics.getQuantile(DescriptiveStatisticsHistogramStatistics.java:57) ~[flink-dist_2.12-1.18.1.jar:1.18.1]
	at org.apache.flink.runtime.checkpoint.StatsSummarySnapshot.getQuantile(StatsSummarySnapshot.java:108) ~[flink-dist_2.12-1.18.1.jar:1.18.1]
	at org.apache.flink.runtime.rest.messages.checkpoints.StatsSummaryDto.valueOf(StatsSummaryDto.java:81) ~[flink-dist_2.12-1.18.1.jar:1.18.1]
	at org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler.createCheckpointingStatistics(CheckpointingStatisticsHandler.java:127) ~[flink-dist_2.12-1.18.1.jar:1.18.1]
	at
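The trace shows several REST threads inside CheckpointingStatisticsHandler within the same millisecond right before the ERROR, which suggests the histogram snapshot's lazily computed state is not safely published between threads, so one thread can hand a null input array to commons-math's Percentile. A minimal sketch of safe lazy publication under that assumption (all class and method names here are hypothetical, not the actual Flink internals):

```java
import java.util.Arrays;

// Hypothetical model of a histogram snapshot whose sorted values are
// computed lazily. The volatile field plus double-checked locking ensure
// a concurrent reader never observes a null or half-built array.
class PercentileSnapshot {
    private final double[] raw;
    private volatile double[] sorted; // published exactly once

    PercentileSnapshot(double[] raw) {
        this.raw = raw.clone();
    }

    double quantile(double q) {
        double[] v = sorted;          // read the volatile field once
        if (v == null) {
            synchronized (this) {
                v = sorted;
                if (v == null) {
                    v = raw.clone();
                    Arrays.sort(v);
                    sorted = v;       // safe publication via volatile write
                }
            }
        }
        int idx = (int) Math.ceil(q * v.length) - 1;
        return v[Math.max(idx, 0)];
    }
}
```

Without such a publication barrier, two concurrent GET /checkpoints calls can race on the snapshot's internal array, which would match the NullArgumentException in the log.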

[jira] [Created] (FLINK-35828) NullArgumentException when accessing the checkpoint API concurrently

2024-07-12 Thread Qishang Zhong (Jira)
Qishang Zhong created FLINK-35828:
-

 Summary: NullArgumentException when accessing the checkpoint API 
concurrently
 Key: FLINK-35828
 URL: https://issues.apache.org/jira/browse/FLINK-35828
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Metrics
Affects Versions: 1.18.1
Reporter: Qishang Zhong


 
{code:java}
2024-07-12 16:53:02,664 TRACE org.apache.flink.runtime.rest.FileUploadHandler [] - Received request. URL:/jobs/e8eae2ce679206b9df19ec3ae07650c4/checkpoints Method:GET
2024-07-12 16:53:02,664 TRACE org.apache.flink.runtime.rest.FileUploadHandler [] - Received request. URL:/jobs/e8eae2ce679206b9df19ec3ae07650c4/checkpoints Method:GET
2024-07-12 16:53:02,664 TRACE org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Received request /jobs/e8eae2ce679206b9df19ec3ae07650c4/checkpoints.
2024-07-12 16:53:02,664 TRACE org.apache.flink.runtime.rest.FileUploadHandler [] - Received request. URL:/jobs/e8eae2ce679206b9df19ec3ae07650c4/checkpoints Method:GET
2024-07-12 16:53:02,664 TRACE org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Received request /jobs/e8eae2ce679206b9df19ec3ae07650c4/checkpoints.
2024-07-12 16:53:02,665 TRACE org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Starting request processing.
2024-07-12 16:53:02,665 TRACE org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Received request /jobs/e8eae2ce679206b9df19ec3ae07650c4/checkpoints.
2024-07-12 16:53:02,665 TRACE org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Starting request processing.
2024-07-12 16:53:02,665 TRACE org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Starting request processing.
2024-07-12 16:53:02,665 TRACE org.apache.flink.runtime.rest.FileUploadHandler [] - Received request. URL:/jobs/e8eae2ce679206b9df19ec3ae07650c4/checkpoints Method:GET
2024-07-12 16:53:02,667 TRACE org.apache.flink.runtime.rest.FileUploadHandler [] - Received request. URL:/jobs/e8eae2ce679206b9df19ec3ae07650c4/checkpoints Method:GET
2024-07-12 16:53:02,667 TRACE org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Received request /jobs/e8eae2ce679206b9df19ec3ae07650c4/checkpoints.
2024-07-12 16:53:02,667 TRACE org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Starting request processing.
2024-07-12 16:53:02,668 TRACE org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Received request /jobs/e8eae2ce679206b9df19ec3ae07650c4/checkpoints.
2024-07-12 16:53:02,668 TRACE org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Starting request processing.
2024-07-12 16:53:02,668 TRACE org.apache.flink.runtime.rest.FileUploadHandler [] - Received request. URL:/jobs/e8eae2ce679206b9df19ec3ae07650c4/checkpoints Method:GET
2024-07-12 16:53:02,669 TRACE org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Received request /jobs/e8eae2ce679206b9df19ec3ae07650c4/checkpoints.
2024-07-12 16:53:02,669 TRACE org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Starting request processing.
2024-07-12 16:53:02,692 ERROR org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler [] - Unhandled exception.
org.apache.commons.math3.exception.NullArgumentException: input array
	at org.apache.commons.math3.util.MathArrays.verifyValues(MathArrays.java:1753) ~[flink-dist_2.12-1.18.1-QCC.jar:1.18.1-QCC]
	at org.apache.commons.math3.stat.descriptive.AbstractUnivariateStatistic.test(AbstractUnivariateStatistic.java:158) ~[flink-dist_2.12-1.18.1-QCC.jar:1.18.1-QCC]
	at org.apache.commons.math3.stat.descriptive.rank.Percentile.evaluate(Percentile.java:272) ~[flink-dist_2.12-1.18.1-QCC.jar:1.18.1-QCC]
	at org.apache.commons.math3.stat.descriptive.rank.Percentile.evaluate(Percentile.java:241) ~[flink-dist_2.12-1.18.1-QCC.jar:1.18.1-QCC]
	at org.apache.flink.runtime.metrics.DescriptiveStatisticsHistogramStatistics$CommonMetricsSnapshot.getPercentile(DescriptiveStatisticsHistogramStatistics.java:163) ~[flink-dist_2.12-1.18.1-QCC.jar:1.18.1-QCC]
	at org.apache.flink.runtime.metrics.DescriptiveStatisticsHistogramStatistics.getQuantile(DescriptiveStatisticsHistogramStatistics.java:57) ~[flink-dist_2.12-1.18.1-QCC.jar:1.18.1-QCC]
	at org.apache.flink.runtime.checkpoint.StatsSummarySnapshot.getQuantile(StatsSummarySnapshot.java:108) ~[flink-dist_2.12-1.18.1-QCC.jar:1.18.1-QCC]
	at

[jira] [Updated] (FLINK-35119) UPDATE DataChangeEvent deserialized data is incorrect

2024-04-16 Thread Qishang Zhong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qishang Zhong updated FLINK-35119:
--
Description: 
When DebeziumChangelogMode is upsert, the {{before}} of a DataChangeEvent is null, so the deserialized data is incorrect.

Add test data in org.apache.flink.cdc.runtime.serializer.event.DataChangeEventSerializerTest
 
{code:java}
DataChangeEvent.updateEvent(
TableId.tableId("namespace", "schema", "table"), null, after),
DataChangeEvent.updateEvent(
TableId.tableId("namespace", "schema", "table"), null, after, meta) 
{code}
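The failing case can be modeled without any Flink CDC classes: a serializer for an update event whose {{before}} image may be null must write an explicit presence flag, otherwise the reader misinterprets the bytes of the next field. A self-contained sketch (all names hypothetical, not the actual CDC serializer):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical model: an update event carries a nullable `before` and a
// non-null `after`. A boolean presence flag keeps the round-trip correct
// when `before` is null (the upsert changelog mode case).
class EventSerde {
    static byte[] write(String before, String after) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeBoolean(before != null); // presence flag for nullable field
        if (before != null) {
            out.writeUTF(before);
        }
        out.writeUTF(after);
        return bos.toByteArray();
    }

    static String[] read(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        String before = in.readBoolean() ? in.readUTF() : null;
        String after = in.readUTF();
        return new String[] {before, after};
    }
}
```

For real row payloads the same idea applies per field, typically via a null bitmap rather than one boolean per column.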

  was:
When DebeziumChangelogMode is upsert, the {{before}} of a DataChangeEvent is null, so the deserialized data is incorrect.

Add test data in org.apache.flink.cdc.runtime.serializer.event.DataChangeEventSerializerTest
 
{code:java}
DataChangeEvent.updateEvent(
TableId.tableId("namespace", "schema", "table"), null, after, meta) 
{code}


> UPDATE DataChangeEvent deserialized data is incorrect
> -
>
> Key: FLINK-35119
> URL: https://issues.apache.org/jira/browse/FLINK-35119
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Qishang Zhong
>Priority: Minor
>  Labels: pull-request-available
>
> When DebeziumChangelogMode is upsert, the {{before}} of a DataChangeEvent is null, so the deserialized data is incorrect.
>  
> Add test data in org.apache.flink.cdc.runtime.serializer.event.DataChangeEventSerializerTest
>  
> {code:java}
> DataChangeEvent.updateEvent(
> TableId.tableId("namespace", "schema", "table"), null, after),
> DataChangeEvent.updateEvent(
> TableId.tableId("namespace", "schema", "table"), null, after, meta) 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35119) UPDATE DataChangeEvent deserialized data is incorrect

2024-04-15 Thread Qishang Zhong (Jira)
Qishang Zhong created FLINK-35119:
-

 Summary: UPDATE DataChangeEvent deserialized data is incorrect
 Key: FLINK-35119
 URL: https://issues.apache.org/jira/browse/FLINK-35119
 Project: Flink
  Issue Type: Bug
  Components: Flink CDC
Affects Versions: cdc-3.1.0
Reporter: Qishang Zhong


When DebeziumChangelogMode is upsert, the {{before}} of a DataChangeEvent is null, so the deserialized data is incorrect.

Add test data in org.apache.flink.cdc.runtime.serializer.event.DataChangeEventSerializerTest
 
{code:java}
DataChangeEvent.updateEvent(
TableId.tableId("namespace", "schema", "table"), null, after, meta) 
{code}





[jira] [Updated] (FLINK-35119) UPDATE DataChangeEvent deserialized data is incorrect

2024-04-15 Thread Qishang Zhong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qishang Zhong updated FLINK-35119:
--
Description: 
When DebeziumChangelogMode is upsert, the {{before}} of a DataChangeEvent is null, so the deserialized data is incorrect.

Add test data in org.apache.flink.cdc.runtime.serializer.event.DataChangeEventSerializerTest
 
{code:java}
DataChangeEvent.updateEvent(
TableId.tableId("namespace", "schema", "table"), null, after, meta) 
{code}

  was:
When DebeziumChangelogMode is upsert, the {{before}} of a DataChangeEvent is null, so the deserialized data is incorrect.

Add test data in org.apache.flink.cdc.runtime.serializer.event.DataChangeEventSerializerTest
 
{code:java}
DataChangeEvent.updateEvent(
TableId.tableId("namespace", "schema", "table"), null, after, meta) 
{code}


> UPDATE DataChangeEvent deserialized data is incorrect
> -
>
> Key: FLINK-35119
> URL: https://issues.apache.org/jira/browse/FLINK-35119
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Qishang Zhong
>Priority: Minor
>
> When DebeziumChangelogMode is upsert, the {{before}} of a DataChangeEvent is null, so the deserialized data is incorrect.
>  
> Add test data in org.apache.flink.cdc.runtime.serializer.event.DataChangeEventSerializerTest
>  
> {code:java}
> DataChangeEvent.updateEvent(
> TableId.tableId("namespace", "schema", "table"), null, after, meta) 
> {code}





[jira] [Comment Edited] (FLINK-34948) CDC RowType can not convert to flink row type

2024-03-27 Thread Qishang Zhong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831316#comment-17831316
 ] 

Qishang Zhong edited comment on FLINK-34948 at 3/27/24 12:06 PM:
-

[~loserwang1024] There is no corresponding scenario in Flink CDC. I used this conversion in a custom program connected to a sink based on Flink RowType.

You can refer to the test code in the [PR](https://github.com/apache/flink-cdc/pull/3130).


was (Author: zhongqishang):
[~loserwang1024] There is no corresponding scenario in Flink CDC. I used this conversion in a custom program connected to a sink based on Flink RowType.

You can refer to the test code in the PR.

> CDC RowType can not convert to flink row type
> -
>
> Key: FLINK-34948
> URL: https://issues.apache.org/jira/browse/FLINK-34948
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Reporter: Qishang Zhong
>Priority: Minor
> Fix For: cdc-3.1.0
>
>
> Fix: CDC {{RowType}} cannot be converted to a Flink type.
> I encountered the following exception:
>  
> {code:java}
> java.lang.ArrayStoreException
>     at java.lang.System.arraycopy(Native Method)
>     at java.util.Arrays.copyOf(Arrays.java:3213)
>     at java.util.ArrayList.toArray(ArrayList.java:413)
>     at 
> java.util.Collections$UnmodifiableCollection.toArray(Collections.java:1036) 
> {code}





[jira] [Comment Edited] (FLINK-34948) CDC RowType can not convert to flink row type

2024-03-27 Thread Qishang Zhong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831316#comment-17831316
 ] 

Qishang Zhong edited comment on FLINK-34948 at 3/27/24 12:05 PM:
-

[~loserwang1024] There is no corresponding scenario in Flink CDC. I used this conversion in a custom program connected to a sink based on Flink RowType.

You can refer to the test code in the PR.


was (Author: zhongqishang):
[~loserwang1024] There is no corresponding scenario in Flink CDC. I used this conversion in a custom program.

You can refer to the test code in the PR.

> CDC RowType can not convert to flink row type
> -
>
> Key: FLINK-34948
> URL: https://issues.apache.org/jira/browse/FLINK-34948
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Reporter: Qishang Zhong
>Priority: Minor
> Fix For: cdc-3.1.0
>
>
> Fix: CDC {{RowType}} cannot be converted to a Flink type.
> I encountered the following exception:
>  
> {code:java}
> java.lang.ArrayStoreException
>     at java.lang.System.arraycopy(Native Method)
>     at java.util.Arrays.copyOf(Arrays.java:3213)
>     at java.util.ArrayList.toArray(ArrayList.java:413)
>     at 
> java.util.Collections$UnmodifiableCollection.toArray(Collections.java:1036) 
> {code}





[jira] [Updated] (FLINK-34948) CDC RowType can not convert to flink row type

2024-03-27 Thread Qishang Zhong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qishang Zhong updated FLINK-34948:
--
Summary: CDC RowType can not convert to flink row type  (was: CDC RowType 
can not convert to flink type)

> CDC RowType can not convert to flink row type
> -
>
> Key: FLINK-34948
> URL: https://issues.apache.org/jira/browse/FLINK-34948
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Reporter: Qishang Zhong
>Priority: Minor
> Fix For: cdc-3.1.0
>
>
> Fix: CDC {{RowType}} cannot be converted to a Flink type.
> I encountered the following exception:
>  
> {code:java}
> java.lang.ArrayStoreException
>     at java.lang.System.arraycopy(Native Method)
>     at java.util.Arrays.copyOf(Arrays.java:3213)
>     at java.util.ArrayList.toArray(ArrayList.java:413)
>     at 
> java.util.Collections$UnmodifiableCollection.toArray(Collections.java:1036) 
> {code}





[jira] [Commented] (FLINK-34948) CDC RowType can not convert to flink type

2024-03-27 Thread Qishang Zhong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831316#comment-17831316
 ] 

Qishang Zhong commented on FLINK-34948:
---

[~loserwang1024] There is no corresponding scenario in Flink CDC. I used this conversion in a custom program.

You can refer to the test code in the PR.

> CDC RowType can not convert to flink type
> -
>
> Key: FLINK-34948
> URL: https://issues.apache.org/jira/browse/FLINK-34948
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Reporter: Qishang Zhong
>Priority: Minor
> Fix For: cdc-3.1.0
>
>
> Fix: CDC {{RowType}} cannot be converted to a Flink type.
> I encountered the following exception:
>  
> {code:java}
> java.lang.ArrayStoreException
>     at java.lang.System.arraycopy(Native Method)
>     at java.util.Arrays.copyOf(Arrays.java:3213)
>     at java.util.ArrayList.toArray(ArrayList.java:413)
>     at 
> java.util.Collections$UnmodifiableCollection.toArray(Collections.java:1036) 
> {code}





[jira] [Updated] (FLINK-34948) CDC RowType can not convert to flink type

2024-03-27 Thread Qishang Zhong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qishang Zhong updated FLINK-34948:
--
Fix Version/s: cdc-3.1.0

> CDC RowType can not convert to flink type
> -
>
> Key: FLINK-34948
> URL: https://issues.apache.org/jira/browse/FLINK-34948
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Qishang Zhong
>Priority: Minor
> Fix For: cdc-3.1.0
>
>
> Fix: CDC {{RowType}} cannot be converted to a Flink type.
> I encountered the following exception:
>  
> {code:java}
> java.lang.ArrayStoreException
>     at java.lang.System.arraycopy(Native Method)
>     at java.util.Arrays.copyOf(Arrays.java:3213)
>     at java.util.ArrayList.toArray(ArrayList.java:413)
>     at 
> java.util.Collections$UnmodifiableCollection.toArray(Collections.java:1036) 
> {code}





[jira] [Updated] (FLINK-34948) CDC RowType can not convert to flink type

2024-03-27 Thread Qishang Zhong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qishang Zhong updated FLINK-34948:
--
Affects Version/s: (was: cdc-3.1.0)

> CDC RowType can not convert to flink type
> -
>
> Key: FLINK-34948
> URL: https://issues.apache.org/jira/browse/FLINK-34948
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Reporter: Qishang Zhong
>Priority: Minor
> Fix For: cdc-3.1.0
>
>
> Fix: CDC {{RowType}} cannot be converted to a Flink type.
> I encountered the following exception:
>  
> {code:java}
> java.lang.ArrayStoreException
>     at java.lang.System.arraycopy(Native Method)
>     at java.util.Arrays.copyOf(Arrays.java:3213)
>     at java.util.ArrayList.toArray(ArrayList.java:413)
>     at 
> java.util.Collections$UnmodifiableCollection.toArray(Collections.java:1036) 
> {code}





[jira] [Updated] (FLINK-34948) CDC RowType can not convert to flink type

2024-03-27 Thread Qishang Zhong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qishang Zhong updated FLINK-34948:
--
Affects Version/s: cdc-3.1.0

> CDC RowType can not convert to flink type
> -
>
> Key: FLINK-34948
> URL: https://issues.apache.org/jira/browse/FLINK-34948
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Qishang Zhong
>Priority: Minor
>
> Fix: CDC {{RowType}} cannot be converted to a Flink type.
> I encountered the following exception:
>  
> {code:java}
> java.lang.ArrayStoreException
>     at java.lang.System.arraycopy(Native Method)
>     at java.util.Arrays.copyOf(Arrays.java:3213)
>     at java.util.ArrayList.toArray(ArrayList.java:413)
>     at 
> java.util.Collections$UnmodifiableCollection.toArray(Collections.java:1036) 
> {code}





[jira] [Created] (FLINK-34948) CDC RowType can not convert to flink type

2024-03-27 Thread Qishang Zhong (Jira)
Qishang Zhong created FLINK-34948:
-

 Summary: CDC RowType can not convert to flink type
 Key: FLINK-34948
 URL: https://issues.apache.org/jira/browse/FLINK-34948
 Project: Flink
  Issue Type: Bug
  Components: Flink CDC
Reporter: Qishang Zhong


Fix: CDC {{RowType}} cannot be converted to a Flink type.

I encountered the following exception:

 
{code:java}
java.lang.ArrayStoreException
    at java.lang.System.arraycopy(Native Method)
    at java.util.Arrays.copyOf(Arrays.java:3213)
    at java.util.ArrayList.toArray(ArrayList.java:413)
    at 
java.util.Collections$UnmodifiableCollection.toArray(Collections.java:1036) 
{code}





[jira] [Created] (FLINK-34905) The default length of CHAR/BINARY data type of Add column DDL

2024-03-21 Thread Qishang Zhong (Jira)
Qishang Zhong created FLINK-34905:
-

 Summary: The default length of CHAR/BINARY data type of Add column 
DDL
 Key: FLINK-34905
 URL: https://issues.apache.org/jira/browse/FLINK-34905
 Project: Flink
  Issue Type: Bug
  Components: Flink CDC
Reporter: Qishang Zhong


I ran the following DDL in MySQL:
{code:java}
ALTER TABLE test.products ADD Column1 BINARY NULL;  
ALTER TABLE test.products ADD Column2 CHAR NULL; {code}
Encountered the following error:
{code:java}

Caused by: java.lang.IllegalArgumentException: Binary string length must be between 1 and 2147483647 (both inclusive).
	at org.apache.flink.cdc.common.types.BinaryType.<init>(BinaryType.java:53)
	at org.apache.flink.cdc.common.types.BinaryType.<init>(BinaryType.java:61)
	at org.apache.flink.cdc.common.types.DataTypes.BINARY(DataTypes.java:42)
	at org.apache.flink.cdc.connectors.mysql.utils.MySqlTypeUtils.convertFromColumn(MySqlTypeUtils.java:221)
	at org.apache.flink.cdc.connectors.mysql.utils.MySqlTypeUtils.fromDbzColumn(MySqlTypeUtils.java:111)
	at org.apache.flink.cdc.connectors.mysql.source.parser.CustomAlterTableParserListener.toCdcColumn(CustomAlterTableParserListener.java:256)
	at org.apache.flink.cdc.connectors.mysql.source.parser.CustomAlterTableParserListener.lambda$exitAlterByAddColumn$0(CustomAlterTableParserListener.java:126)
	at io.debezium.connector.mysql.antlr.MySqlAntlrDdlParser.runIfNotNull(MySqlAntlrDdlParser.java:358)
	at org.apache.flink.cdc.connectors.mysql.source.parser.CustomAlterTableParserListener.exitAlterByAddColumn(CustomAlterTableParserListener.java:98)
	at io.debezium.ddl.parser.mysql.generated.MySqlParser$AlterByAddColumnContext.exitRule(MySqlParser.java:15459)
	at io.debezium.antlr.ProxyParseTreeListenerUtil.delegateExitRule(ProxyParseTreeListenerUtil.java:64)
	at org.apache.flink.cdc.connectors.mysql.source.parser.CustomMySqlAntlrDdlParserListener.exitEveryRule(CustomMySqlAntlrDdlParserListener.java:124)
	at org.antlr.v4.runtime.tree.ParseTreeWalker.exitRule(ParseTreeWalker.java:48)
	at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:30)
	at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:28)
	at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:28)
	at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:28)
	at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:28)
	at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:28)
	at io.debezium.antlr.AntlrDdlParser.parse(AntlrDdlParser.java:87)
	at org.apache.flink.cdc.connectors.mysql.source.MySqlEventDeserializer.deserializeSchemaChangeRecord(MySqlEventDeserializer.java:88)
	at org.apache.flink.cdc.debezium.event.SourceRecordEventDeserializer.deserialize(SourceRecordEventDeserializer.java:52)
	at org.apache.flink.cdc.debezium.event.DebeziumEventDeserializationSchema.deserialize(DebeziumEventDeserializationSchema.java:93)
	at org.apache.flink.cdc.connectors.mysql.source.reader.MySqlRecordEmitter.emitElement(MySqlRecordEmitter.java:119)
	at org.apache.flink.cdc.connectors.mysql.source.reader.MySqlRecordEmitter.processElement(MySqlRecordEmitter.java:96)
	at org.apache.flink.cdc.connectors.mysql.source.reader.MySqlPipelineRecordEmitter.processElement(MySqlPipelineRecordEmitter.java:120)
	at org.apache.flink.cdc.connectors.mysql.source.reader.MySqlRecordEmitter.emitRecord(MySqlRecordEmitter.java:73)
	at org.apache.flink.cdc.connectors.mysql.source.reader.MySqlRecordEmitter.emitRecord(MySqlRecordEmitter.java:46)
	at org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:160)
	at org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:419)
	at org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68)
	at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:562)
	at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:858)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:807)
	at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:953)
	at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:932)
	at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:746)
	at
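In MySQL, CHAR and BINARY declared without an explicit length default to length 1, so a DDL-to-type converter has to substitute that default when the parsed column carries no length; passing 0 (or an unset value) straight into BinaryType trips the range check in the trace. A hedged sketch of the defaulting step (helper name hypothetical, not the actual MySqlTypeUtils code):

```java
// When ALTER TABLE adds "Column1 BINARY NULL" with no "(n)", the parsed
// length is absent; MySQL semantics treat the column as length 1, and the
// converter should do the same instead of forwarding 0.
class DdlTypeDefaults {
    static int resolveLength(Integer declaredLength) {
        return declaredLength != null ? declaredLength : 1; // MySQL default
    }
}
```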

[jira] [Commented] (FLINK-28817) NullPointerException in HybridSource when restoring from checkpoint

2022-08-10 Thread Qishang Zhong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17577822#comment-17577822
 ] 

Qishang Zhong commented on FLINK-28817:
---

[~thw]

I have opened a PR for this issue.

[~Benenson] has already tested the PR.

 

> NullPointerException in HybridSource when restoring from checkpoint
> ---
>
> Key: FLINK-28817
> URL: https://issues.apache.org/jira/browse/FLINK-28817
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.14.4, 1.15.1
>Reporter: Michael
>Priority: Major
> Attachments: Preconditions-checkNotNull-error.zip, 
> bf-29-JM-err-analysis.log
>
>
> Scenario:
>  # CheckpointCoordinator - Completed checkpoint 14 for job 
> 
>  # HybridSource successfully completed processing a few SourceFactories that read from S3
>  # HybridSourceSplitEnumerator.switchEnumerator failed with com.amazonaws.SdkClientException: Unable to execute HTTP request: Read timed out. This is an intermittent error; it is usually fixed when Flink recovers from the checkpoint and repeats the operation.
>  # Flink starts recovering from checkpoint, 
>  # HybridSourceSplitEnumerator receives 
> SourceReaderFinishedEvent\{sourceIndex=-1}
>  # Processing this event causes 
> 2022/08/08 08:39:34.862 ERROR o.a.f.r.s.c.SourceCoordinator - Uncaught 
> exception in the SplitEnumerator for Source Source: hybrid-source while 
> handling operator event SourceEventWrapper[SourceReaderFinishedEvent
> {sourceIndex=-1}
> ] from subtask 6. Triggering job failover.
> java.lang.NullPointerException: Source for index=0 is not available from 
> sources: \{788=org.apache.flink.connector.file.src.SppFileSource@5a3803f3}
> at org.apache.flink.util.Preconditions.checkNotNull(Preconditions.java:104)
> at 
> org.apache.flink.connector.base.source.hybridspp.SwitchedSources.sourceOf(SwitchedSources.java:36)
> at 
> org.apache.flink.connector.base.source.hybridspp.HybridSourceSplitEnumerator.sendSwitchSourceEvent(HybridSourceSplitEnumerator.java:152)
> at 
> org.apache.flink.connector.base.source.hybridspp.HybridSourceSplitEnumerator.handleSourceEvent(HybridSourceSplitEnumerator.java:226)
> ...
> I'm running my version of the Hybrid Sources with additional logging, so line 
> numbers & some names could be different from Flink Github.
> My observation: the problem is intermittent; sometimes it works OK, i.e. 
> SourceReaderFinishedEvent comes with the correct sourceIndex. As I see from my 
> log, it happens if my SourceFactory.create() is executed BEFORE 
> HybridSourceSplitEnumerator - handleSourceEvent 
> SourceReaderFinishedEvent\{sourceIndex=-1}.
> If HybridSourceSplitEnumerator - handleSourceEvent is executed before my 
> SourceFactory.create(), then sourceIndex=-1 in the SourceReaderFinishedEvent.
> Preconditions-checkNotNull-error log from JobMgr is attached





[jira] [Commented] (FLINK-28817) NullPointerException in HybridSource when restoring from checkpoint

2022-08-08 Thread Qishang Zhong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17576669#comment-17576669
 ] 

Qishang Zhong commented on FLINK-28817:
---

I encountered a similar problem.

[~thw] Please take a look at the following case.

 

org.apache.flink.connector.base.source.hybrid.HybridSourceSplitEnumeratorTest
{code:java}
@Test
public void testRestoreEnumeratorWith2ndSource() throws Exception {
    setupEnumeratorAndTriggerSourceSwitch();
    HybridSourceEnumeratorState enumeratorState = enumerator.snapshotState(0);
    MockSplitEnumerator underlyingEnumerator = getCurrentEnumerator(enumerator);
    assertThat((List) Whitebox.getInternalState(underlyingEnumerator, "splits"))
            .hasSize(0);
    enumerator =
            (HybridSourceSplitEnumerator) source.restoreEnumerator(context, enumeratorState);
    enumerator.start();
    enumerator.handleSourceEvent(SUBTASK0, new SourceReaderFinishedEvent(-1));
    underlyingEnumerator = getCurrentEnumerator(enumerator);
    assertThat((List) Whitebox.getInternalState(underlyingEnumerator, "splits"))
            .hasSize(0);
}
 {code}
 

> NullPointerException in HybridSource when restoring from checkpoint
> ---
>
> Key: FLINK-28817
> URL: https://issues.apache.org/jira/browse/FLINK-28817
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.14.4, 1.15.1
>Reporter: Michael
>Priority: Major
> Attachments: bf-29-JM-err-analysis.log
>
>
> Scenario:
>  # CheckpointCoordinator - Completed checkpoint 14 for job 
> 
>  # HybridSource successfully completed processing a few SourceFactories, that 
> reads from s3
>  # Next SourceFactory try to read contents of s3 dir, and it cause an error 
> Unable to execute HTTP request: Read timed out
>  # CheckpointCoordinator - Restoring job  
> from Checkpoint 14
>  # HybridSourceSplitEnumerator - Restoring enumerator for sourceIndex=47
>  # This restoring fail, because of NullPointerException: in 
> HybridSourceSplitEnumerator.close:
>  # Because of this issue, all future restoring from checkpoint also failed
> Extract from the log: --
> 2022/08/02 22:26:51.227 INFO  o.a.f.r.c.CheckpointCoordinator - Restoring job 
>  from Checkpoint 14 @ 1659478803949 for 
>  located at 
> s3://spp-state-371299021277-tech-aidata-di/mb-backfill-jul-20-backfill-prd/2/checkpoints//chk-14.
> 2022/08/02 22:26:51.240 INFO  o.a.f.r.c.CheckpointCoordinator - No master 
> state to restore
> 2022/08/02 22:26:51.240 INFO  o.a.f.r.o.c.RecreateOnResetOperatorCoordinator 
> - Resetting coordinator to checkpoint.
> 2022/08/02 22:26:51.241 INFO  o.a.f.r.s.c.SourceCoordinator - Closing 
> SourceCoordinator for source Source: hybrid-source.
> 2022/08/02 22:26:51.424 INFO  o.a.f.r.s.c.SourceCoordinator - Restoring 
> SplitEnumerator of source Source: hybrid-source from checkpoint.
> 2022/08/02 22:26:51.425 INFO  o.a.f.r.s.c.SourceCoordinator - Starting split 
> enumerator for source Source: hybrid-source.
> 2022/08/02 22:26:51.426 INFO  c.i.d.s.f.s.c.b.HourlyFileSourceFactory - 
> Reading input data from path 
> s3://idl-kafka-connect-ued-raw-uw2-data-lake-prd/data/topics/sbseg-qbo-clickstream/d_20220729-2300
>  for 2022-07-29T23:00:00Z
> 2022/08/02 22:26:51.426 INFO  o.a.f.c.b.s.h.HybridSourceSplitEnumerator - 
> Restoring enumerator for sourceIndex=47
>  
> 2022/08/02 22:26:51.435 INFO  o.a.f.runtime.jobmaster.JobMaster - Trying to 
> recover from a global failure.
> org.apache.flink.util.FlinkException: Global failure triggered by 
> OperatorCoordinator for 'Source: hybrid-source -> decrypt -> map2Events -> 
> filterOutNulls -> assignTimestampsAndWatermarks -> logRawJson' (operator 
> fd9fbc680ee884c4eafd0b9c2d3d007f).
> at 
> org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder$LazyInitializedCoordinatorContext.failJob(OperatorCoordinatorHolder.java:545)
> at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
> ...
> Caused by: java.lang.NullPointerException: null
> at 
> org.apache.flink.connector.base.source.hybridspp.HybridSourceSplitEnumerator.close(HybridSourceSplitEnumerator.java:246)
> at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinator.close(SourceCoordinator.java:151)
> at 
> org.apache.flink.runtime.operators.coordination.ComponentClosingUtils.lambda$closeAsyncWithTimeout$0(ComponentClosingUtils.java:70)
> at java.lang.Thread.run(Thread.java:750)
> ---
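Assuming, as the trace suggests, that the underlying enumerator can still be null when close() runs during a restore, a null-guarded close along these lines would avoid the NPE. This is a sketch only; the names are illustrative and not the actual Flink implementation.

```java
// Sketch of a defensive close(): currentEnumerator may still be null when the
// coordinator is torn down before a restore has finished.
class GuardedEnumerator implements AutoCloseable {
    AutoCloseable currentEnumerator; // null until restore completes

    @Override
    public void close() throws Exception {
        AutoCloseable e = currentEnumerator;
        if (e != null) { // prevents the NullPointerException seen in the log
            e.close();
        }
    }
}
```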





[jira] [Commented] (FLINK-21573) Support expression reuse in codegen

2021-06-01 Thread Qishang Zhong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17354966#comment-17354966
 ] 

Qishang Zhong commented on FLINK-21573:
---

[~lzljs3620320] Yes, I tried. Whether `isDeterministic` returns true or false does 
not affect how many times the UDF is executed.

> Support expression reuse in codegen
> ---
>
> Key: FLINK-21573
> URL: https://issues.apache.org/jira/browse/FLINK-21573
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: Benchao Li
>Priority: Major
>
> Currently there is no expression reuse in codegen, and this may result in 
> more CPU overhead in some cases. E.g.
> {code:java}
> SELECT my_map['key1'] as key1, my_map['key2'] as key2, my_map['key3'] as key3
> FROM (
>   SELECT dump_json_to_map(col1) as my_map
>   FROM T
> )
> {code}
> `dump_json_to_map` will be called 3 times.
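The overhead is easy to demonstrate outside Flink. In this minimal plain-Java sketch, the map-returning function stands in for the `dump_json_to_map` UDF; nothing here is actual generated code, it only compares the shape codegen produces today with what expression reuse would emit.

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java illustration of the missing expression reuse; not generated code.
class ReuseDemo {
    static int calls = 0;

    // stands in for the dump_json_to_map UDF
    static Map<String, String> dumpJsonToMap(String col) {
        calls++;
        Map<String, String> m = new HashMap<>();
        m.put("key1", "v1");
        m.put("key2", "v2");
        m.put("key3", "v3");
        return m;
    }

    // shape of today's codegen: one UDF call per projected field
    static String[] withoutReuse(String col) {
        return new String[] {
            dumpJsonToMap(col).get("key1"),
            dumpJsonToMap(col).get("key2"),
            dumpJsonToMap(col).get("key3"),
        };
    }

    // shape with expression reuse: the common subexpression is evaluated once
    static String[] withReuse(String col) {
        Map<String, String> m = dumpJsonToMap(col);
        return new String[] {m.get("key1"), m.get("key2"), m.get("key3")};
    }
}
```

Both variants return the same row, but the first invokes the UDF three times and the second only once.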





[jira] [Commented] (FLINK-21573) Support expression reuse in codegen

2021-05-19 Thread Qishang Zhong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17347654#comment-17347654
 ] 

Qishang Zhong commented on FLINK-21573:
---

Hi [~lzljs3620320]. In this case, the UDF `split_str` will be called twice.
{code:java}
public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    EnvironmentSettings settings =
            EnvironmentSettings.newInstance().inStreamingMode().useBlinkPlanner().build();
    StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);
    DataStream<Order> orderA =
            env.fromCollection(Arrays.asList(new Order(1L, "beer", 3)));

    Table tableA = tEnv.fromDataStream(orderA, $("user"), $("product"), $("amount"));
    tEnv.createTemporaryFunction("split_str", new SubstringFunction());

    Table table =
            tEnv.sqlQuery(
                    "SELECT user, product, amount, d, UPPER(d) as e\n"
                            + "FROM (\n"
                            + "  SELECT split_str(product,1,2) as d, *\n"
                            + "  FROM " + tableA + "\n"
                            + ") T");
    tEnv.toAppendStream(table, Row.class).print();
    env.execute();
}

public static class Order {
    public Long user;
    public String product;
    public int amount;

    public Order() {}

    public Order(Long user, String product, int amount) {
        this.user = user;
        this.product = product;
        this.amount = amount;
    }
}

public static class SubstringFunction extends ScalarFunction {
    public String eval(String s, Integer begin, Integer end) {
        System.out.println(s);
        return s.substring(begin, end);
    }
}
{code}
I have some ETL jobs whose UDFs make JDBC or RPC calls; duplicated invocations 
have a big performance impact there.
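Until expression reuse exists, one workaround I would consider is memoizing the expensive call inside the UDF itself. A minimal sketch, assuming duplicate invocations arrive with identical arguments; the class name is illustrative, the substring stands in for a JDBC/RPC lookup, and the unbounded cache policy is deliberately naive.

```java
import java.util.HashMap;
import java.util.Map;

// Naive memoizing UDF sketch; the substring stands in for a JDBC/RPC call.
class CachingSubstring {
    static int backendCalls = 0;
    private final Map<String, String> cache = new HashMap<>();

    public String eval(String s, int begin, int end) {
        String key = s + ":" + begin + ":" + end;
        return cache.computeIfAbsent(key, k -> {
            backendCalls++; // only the first call per key hits the backend
            return s.substring(begin, end);
        });
    }
}
```

In a real job the cache would need a size bound (or per-record scope) to avoid unbounded state.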

> Support expression reuse in codegen
> ---
>
> Key: FLINK-21573
> URL: https://issues.apache.org/jira/browse/FLINK-21573
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: Benchao Li
>Priority: Major
>
> Currently there is no expression reuse in codegen, and this may result in 
> more CPU overhead in some cases. E.g.
> {code:java}
> SELECT my_map['key1'] as key1, my_map['key2'] as key2, my_map['key3'] as key3
> FROM (
>   SELECT dump_json_to_map(col1) as my_map
>   FROM T
> )
> {code}
> `dump_json_to_map` will be called 3 times.





[jira] [Commented] (FLINK-19873) Skip DDL change events for Canal data

2021-04-08 Thread Qishang Zhong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17317678#comment-17317678
 ] 

Qishang Zhong commented on FLINK-19873:
---

[~jark]  

The ticket only skips the `CREATE` type; I think we should skip all of the DDL 
change events for Canal data.

https://github.com/alibaba/canal/blob/master/protocol/src/main/java/com/alibaba/otter/canal/protocol/CanalEntry.java#L139
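In other words, the format would treat every non-row-change event type as skippable. A sketch of that predicate follows; the enum mirrors only an illustrative subset of `CanalEntry.EventType`, and the class is not the actual canal-json implementation.

```java
// Illustrative subset of com.alibaba.otter.canal.protocol.CanalEntry.EventType.
enum EventType { INSERT, UPDATE, DELETE, CREATE, ALTER, ERASE, QUERY, TRUNCATE, RENAME }

class CanalDdlFilter {
    // row-change (DML) events that canal-json should deserialize
    static boolean isRowChange(EventType t) {
        return t == EventType.INSERT || t == EventType.UPDATE || t == EventType.DELETE;
    }

    // everything else (all DDL/query events) should be skipped, not fail the job
    static boolean shouldSkip(EventType t) {
        return !isRowChange(t);
    }
}
```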

> Skip DDL change events for Canal data
> -
>
> Key: FLINK-19873
> URL: https://issues.apache.org/jira/browse/FLINK-19873
> Project: Flink
>  Issue Type: Sub-task
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Reporter: Jark Wu
>Assignee: Fangliang Liu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
> Attachments: image-2020-10-29-15-11-03-514.png, 
> image-2020-10-29-15-11-16-137.png
>
>
> Currently, "canal-json" can't skip DDL change event and will fail the job. 
> This is not very friendly for users. 
>  !image-2020-10-29-15-11-03-514.png! 
>  !image-2020-10-29-15-11-16-137.png! 





[jira] [Closed] (FLINK-18277) Elasticsearch6DynamicSink#asSummaryString() return identifier typo

2020-06-14 Thread Qishang Zhong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qishang Zhong closed FLINK-18277.
-
Resolution: Fixed

> Elasticsearch6DynamicSink#asSummaryString() return identifier typo
> --
>
> Key: FLINK-18277
> URL: https://issues.apache.org/jira/browse/FLINK-18277
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Reporter: Qishang Zhong
>Assignee: Qishang Zhong
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> The identifier is misspelled in 
> `org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch6DynamicSink#asSummaryString`:
> {code:java}
>   @Override
>   public String asSummaryString() {
>   return "Elasticsearch7";
>   }
> {code}





[jira] [Commented] (FLINK-18277) Elasticsearch6DynamicSink#asSummaryString() return identifier typo

2020-06-12 Thread Qishang Zhong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17134214#comment-17134214
 ] 

Qishang Zhong commented on FLINK-18277:
---

[~jark] Could you assign this to me? I would like to get familiar with the contribution process.

> Elasticsearch6DynamicSink#asSummaryString() return identifier typo
> --
>
> Key: FLINK-18277
> URL: https://issues.apache.org/jira/browse/FLINK-18277
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Reporter: Qishang Zhong
>Priority: Trivial
> Fix For: 1.11.0
>
>
> The identifier is misspelled in 
> `org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch6DynamicSink#asSummaryString`:
> {code:java}
>   @Override
>   public String asSummaryString() {
>   return "Elasticsearch7";
>   }
> {code}





[jira] [Created] (FLINK-18277) Elasticsearch6DynamicSink#asSummaryString() return identifier typo

2020-06-12 Thread Qishang Zhong (Jira)
Qishang Zhong created FLINK-18277:
-

 Summary: Elasticsearch6DynamicSink#asSummaryString() return 
identifier typo
 Key: FLINK-18277
 URL: https://issues.apache.org/jira/browse/FLINK-18277
 Project: Flink
  Issue Type: Bug
  Components: Connectors / ElasticSearch
Reporter: Qishang Zhong
 Fix For: 1.11.0


The identifier is misspelled in
`org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch6DynamicSink#asSummaryString`:
{code:java}
@Override
public String asSummaryString() {
return "Elasticsearch7";
}
{code}



