[jira] [Created] (FLINK-29373) DataStream to Table conversion does not support BigDecimalTypeInfo

2022-09-21 Thread hk__lrzy (Jira)
hk__lrzy created FLINK-29373:


 Summary: DataStream to Table conversion does not support BigDecimalTypeInfo
 Key: FLINK-29373
 URL: https://issues.apache.org/jira/browse/FLINK-29373
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.17.0
Reporter: hk__lrzy
 Attachments: image-2022-09-21-15-12-11-082.png

When we transform a DataStream into a Table, `TypeInfoDataTypeConverter` will try to 
convert the `TypeInformation` to a `DataType`. But if the DataStream's produced 
type contains `BigDecimalTypeInfo`, `TypeInfoDataTypeConverter` will convert it 
to a RAW `DataType`.

Then, when we transform the Table back into a DataStream, an exception is thrown.

!image-2022-09-21-15-12-11-082.png!
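
For reference, a minimal reproduction sketch (the pipeline below is illustrative, and it assumes `BigDecimalTypeInfo` refers to `org.apache.flink.table.runtime.typeutils.BigDecimalTypeInfo`):
{code:java}
import java.math.BigDecimal;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.table.runtime.typeutils.BigDecimalTypeInfo;

public class BigDecimalTypeInfoRepro {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // The produced type of the stream is BigDecimalTypeInfo(10, 2).
        DataStream<BigDecimal> stream =
                env.fromElements(new BigDecimal("1.23"), new BigDecimal("4.56"))
                        .returns(new BigDecimalTypeInfo(10, 2));

        // TypeInfoDataTypeConverter maps BigDecimalTypeInfo to a RAW DataType here
        // instead of DECIMAL(10, 2).
        Table table = tEnv.fromDataStream(stream);

        // Converting back to a DataStream then fails with the attached exception.
        tEnv.toDataStream(table).print();
        env.execute();
    }
}
{code}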

 





[jira] [Created] (FLINK-29944) Support accumulators in source reader

2022-11-08 Thread hk__lrzy (Jira)
hk__lrzy created FLINK-29944:


 Summary: Support accumulators in source reader
 Key: FLINK-29944
 URL: https://issues.apache.org/jira/browse/FLINK-29944
 Project: Flink
  Issue Type: Improvement
  Components: API / Core
Affects Versions: 1.17.0
Reporter: hk__lrzy


The Source Reader API mainly unifies batch and streaming logic in a single 
interface, which is a good point for developers. But in the legacy {{SourceFunction}} 
we could access the {{RuntimeContext}} to use accumulators, while the 
{{SourceReaderContext}} currently has no method for it; this PR is mainly to support that.
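
For comparison, a minimal sketch of how accumulators can be reached from the legacy {{SourceFunction}} stack through the {{RuntimeContext}} (the source class and accumulator name below are illustrative):
{code:java}
import org.apache.flink.api.common.accumulators.IntCounter;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;

/** Illustrative legacy source that reports progress through an accumulator. */
public class CountingLegacySource extends RichParallelSourceFunction<Long> {

    private transient IntCounter recordsEmitted;
    private volatile boolean running = true;

    @Override
    public void open(Configuration parameters) {
        // The RuntimeContext gives legacy sources access to accumulators.
        recordsEmitted = new IntCounter();
        getRuntimeContext().addAccumulator("records-emitted", recordsEmitted);
    }

    @Override
    public void run(SourceContext<Long> ctx) throws Exception {
        long value = 0L;
        while (running && value < 100L) {
            ctx.collect(value++);
            // SourceReaderContext currently offers no equivalent hook for this.
            recordsEmitted.add(1);
        }
    }

    @Override
    public void cancel() {
        running = false;
    }
}
{code}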





[jira] [Created] (FLINK-35421) Schema Operator blocks forever when Akka RPC times out

2024-05-22 Thread hk__lrzy (Jira)
hk__lrzy created FLINK-35421:


 Summary: Schema Operator blocks forever when Akka RPC times out
 Key: FLINK-35421
 URL: https://issues.apache.org/jira/browse/FLINK-35421
 Project: Flink
  Issue Type: Bug
  Components: Flink CDC
Reporter: hk__lrzy








[jira] [Created] (FLINK-35432) Support catching MODIFY COLUMN events for mysql-cdc

2024-05-23 Thread hk__lrzy (Jira)
hk__lrzy created FLINK-35432:


 Summary: Support catching MODIFY COLUMN events for mysql-cdc
 Key: FLINK-35432
 URL: https://issues.apache.org/jira/browse/FLINK-35432
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Reporter: hk__lrzy


Users now commonly use SQL like the following to modify a column type in MySQL:
{code:java}
ALTER TABLE `table_name` MODIFY COLUMN `column_name` new_type;{code}
 

Flink CDC currently uses *CustomAlterTableParserListener* to parse the DDL and wrap it as 
a ChangeEvent. But I noticed that *CustomAlterTableParserListener* does not 
implement the methods *enterAlterByModifyColumn* and 
*exitAlterByModifyColumn*, which means we never receive the 
*AlterColumnTypeEvent* for such statements.
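
A minimal sketch of the shape of the missing hooks (it assumes Debezium's generated ANTLR listener classes under io.debezium.ddl.parser.mysql.generated; the method bodies are placeholders, not the real Flink CDC implementation):
{code:java}
import io.debezium.ddl.parser.mysql.generated.MySqlParser;
import io.debezium.ddl.parser.mysql.generated.MySqlParserBaseListener;

/** Illustrative listener showing where MODIFY COLUMN handling would plug in. */
public class ModifyColumnAwareParserListener extends MySqlParserBaseListener {

    @Override
    public void enterAlterByModifyColumn(MySqlParser.AlterByModifyColumnContext ctx) {
        // Start collecting the new column definition here, mirroring how the
        // existing CHANGE COLUMN handlers in CustomAlterTableParserListener work.
    }

    @Override
    public void exitAlterByModifyColumn(MySqlParser.AlterByModifyColumnContext ctx) {
        // Emit an AlterColumnTypeEvent for the parsed column so downstream
        // operators see the type change.
    }
}
{code}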

 





[jira] [Created] (FLINK-35436) Job can't launch when setting the option schema.change.behavior to IGNORE

2024-05-23 Thread hk__lrzy (Jira)
hk__lrzy created FLINK-35436:


 Summary: Job can't launch when setting the option 
schema.change.behavior to IGNORE
 Key: FLINK-35436
 URL: https://issues.apache.org/jira/browse/FLINK-35436
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Reporter: hk__lrzy


Now, in the 3.0 pipeline, the *SchemaOperator* is already a required operator in 
the Flink DAG: both *PrePartitionOperator* and *DataSinkWriterOperator* talk to 
the *SchemaRegister* through the *schemaEvolutionClient*. But when we set the 
option schema.change.behavior to ignore or exception, the pipeline adds a filter 
operator instead of the *SchemaOperator*, which finally causes the job to fail.

I think we still need to keep the schema.change.behavior option to cover the 
different cases, so I suggest moving the handling of schema.change.behavior into the 
*SchemaRegister*: the *SchemaOperator* would then always be part of the DAG, and the 
*SchemaRegister* would decide whether to apply the schema change or not.

 





[jira] [Created] (FLINK-35463) CDC job fails when restored from a checkpoint after a route rule change.

2024-05-27 Thread hk__lrzy (Jira)
hk__lrzy created FLINK-35463:


 Summary: CDC job fails when restored from a checkpoint after a route rule change.
 Key: FLINK-35463
 URL: https://issues.apache.org/jira/browse/FLINK-35463
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Reporter: hk__lrzy


*Exception:* 

 
{code:java}
java.lang.IllegalStateException: Unable to get latest schema for table "demo.partition_all"
{code}
*Reason:* 

Now the *SchemaRegister* stores the schemas of all original tables coming from the 
source and of all derived tables produced by the route rules.

Assume we have a MySQL table named *demo.partition_all* and a route rule that maps 
it to *demo1.partition_all*.

1. Before the first checkpoint is triggered, the *SchemaRegister* stores the schemas 
of both *demo.partition_all* and *demo1.partition_all* in the 
*SchemaManager*.

2. Stop the job, change the route rule so that the target is *demo1.partition_all_1*, 
and restart the job from the checkpoint.

3. According to the following logic:
{code:java}
if (request.getSchemaChangeEvent() instanceof CreateTableEvent
        && schemaManager.schemaExists(request.getTableId())) {
    return CompletableFuture.completedFuture(
            wrap(new SchemaChangeResponse(Collections.emptyList())));
}
{code}
We will not create a schema for the table *demo1.partition_all_1*, and the job will 
fail when we request the schema from the *SchemaOperator*.

 


