[jira] [Created] (FLINK-35936) paimon cdc schema evolution failure when restart job

2024-07-30 Thread MOBIN (Jira)
MOBIN created FLINK-35936:
-

 Summary: paimon cdc schema evolution failure when restart job
 Key: FLINK-35936
 URL: https://issues.apache.org/jira/browse/FLINK-35936
 Project: Flink
  Issue Type: Bug
  Components: Flink CDC
Affects Versions: cdc-3.2.0
 Environment: Flink 1.19

cdc master
Reporter: MOBIN


Paimon CDC schema evolution fails when the job is restarted.

Minimal reproduction steps:
 # Stop the flink-cdc-mysql-to-paimon pipeline job
 # Alter the MySQL table schema, e.g. add a column
 # Start the pipeline job again
 # Observe that the newly added column is not synchronized to the Paimon table
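Step 2 above can be reproduced with a DDL statement like the following (the table and column names are hypothetical, for illustration only):
{code:sql}
-- Run against the source MySQL database while the pipeline job is stopped;
-- app_db.orders and the remark column are hypothetical names.
ALTER TABLE app_db.orders ADD COLUMN remark VARCHAR(255);
{code}
After restarting the pipeline, the new column would be expected to appear in the Paimon table, but it does not.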



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-35058) Encountered change event for table db.table whose schema isn't known to this connector

2024-04-21 Thread MOBIN (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MOBIN closed FLINK-35058.
-
Resolution: Not A Bug

> Encountered change event for table db.table whose schema isn't known to this 
> connector
> --
>
> Key: FLINK-35058
> URL: https://issues.apache.org/jira/browse/FLINK-35058
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: 1.17.1
>Reporter: MOBIN
>Priority: Major
>
> Flink 1.17.1
> flink-cdc:flink-sql-connector-mysql-cdc-2.4.1.jar
> {code:java}
> CREATE TABLE `test_cdc_timestamp` (
>   `id` BIGINT COMMENT '主键id',
>    
>    proctime AS PROCTIME(),
>    PRIMARY KEY(id) NOT ENFORCED
> ) WITH (
>       'connector' = 'mysql-cdc',
>       'hostname' = 'x',
>       'scan.startup.mode' = 'timestamp',
>       'scan.startup.timestamp-millis' = '171241920' ,
>       'port' = '3306',
>       'username' = 'xxx',
>       'password' = 'xxx',
>       'database-name' = 'xxdatabase',
>       'table-name' = 'xxtablename',
>       'scan.incremental.snapshot.enabled' = 'false',
>        'debezium.snapshot.locking.mode' = 'none',
>        'server-id' = '5701',
>      'server-time-zone' = 'Asia/Shanghai',
>     'debezium.skipped.operations' = 'd'
>       ); {code}
> When I use 'scan.startup.mode' = 'latest-offset' or 'initial', data is 
> synchronized normally; when I use 'scan.startup.mode' = 'timestamp', the 
> following error is reported:
> {code:java}
> 2024-04-09 11:11:15.619 [debezium-engine] INFO  io.debezium.util.Threads  - 
> Requested thread factory for connector MySqlConnector, id = 
> mysql_binlog_source named = change-event-source-coordinator
> 2024-04-09 11:11:15.621 [debezium-engine] INFO  io.debezium.util.Threads  - 
> Creating thread 
> debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator
> 2024-04-09 11:11:15.629 
> [debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator] 
> INFO  io.debezium.pipeline.ChangeEventSourceCoordinator  - Metrics registered
> 2024-04-09 11:11:15.630 
> [debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator] 
> INFO  io.debezium.pipeline.ChangeEventSourceCoordinator  - Context created
> 2024-04-09 11:11:15.642 
> [debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator] 
> INFO  io.debezium.connector.mysql.MySqlSnapshotChangeEventSource  - No 
> previous offset has been found
> 2024-04-09 11:11:15.642 
> [debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator] 
> INFO  io.debezium.connector.mysql.MySqlSnapshotChangeEventSource  - According 
> to the connector configuration only schema will be snapshotted
> 2024-04-09 11:11:15.644 
> [debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator] 
> INFO  io.debezium.pipeline.ChangeEventSourceCoordinator  - Snapshot ended 
> with SnapshotResult [status=SKIPPED, offset=null]
> 2024-04-09 11:11:15.652 
> [debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator] 
> INFO  io.debezium.util.Threads  - Requested thread factory for connector 
> MySqlConnector, id = mysql_binlog_source named = binlog-client
> 2024-04-09 11:11:15.656 
> [debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator] 
> INFO  io.debezium.pipeline.ChangeEventSourceCoordinator  - Starting streaming
> 2024-04-09 11:11:15.682 
> [debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator] 
> INFO  io.debezium.connector.mysql.MySqlStreamingChangeEventSource  - GTID set 
> purged on server: 
> 0969640a-1d48-11ed-b6cf-28dee561557c:1-27603868993,70958f24-2253-11eb-891d-f875a48ad7b1:1-50323,ec1e6593-2251-11eb-9c18-f875a48ad539:1-25345454762
> 2024-04-09 11:11:15.682 
> [debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator] 
> INFO  io.debezium.connector.mysql.MySqlStreamingChangeEventSource  - Skip 0 
> events on streaming start
> 2024-04-09 11:11:15.682 
> [debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator] 
> INFO  io.debezium.connector.mysql.MySqlStreamingChangeEventSource  - Skip 0 
> rows on streaming start
> 2024-04-09 11:11:15.683 
> [debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator] 
> INFO  io.debezium.util.Threads  - Creating thread 
> debezium-mysqlconnector-mysql_binlog_source-binlog-client
> 2024-04-09 11:11:15.686 [blc.mysql.com:3306] INFO  
> io.debezium.util.Threads  - Creating thread 
> debezium-mysqlconnector-mysql_binlog_source-binlog-client
> 2024-04-09 11:11:15.700 [blc.mysql.com:3306] INFO  
> io.debezium.connector.mysql.MySqlStreamingChangeEventSource  - Connected to 
> MySQL binlog at xxx.mysql.com:3306, starting at MySqlOffsetContext 
> 

[jira] [Created] (FLINK-35058) Encountered change event for table db.table whose schema isn't known to this connector

2024-04-08 Thread MOBIN (Jira)
MOBIN created FLINK-35058:
-

 Summary: Encountered change event for table db.table whose schema 
isn't known to this connector
 Key: FLINK-35058
 URL: https://issues.apache.org/jira/browse/FLINK-35058
 Project: Flink
  Issue Type: Bug
  Components: Flink CDC
Affects Versions: 1.17.1
Reporter: MOBIN


Flink 1.17.1

flink-cdc:flink-sql-connector-mysql-cdc-2.4.1.jar
{code:java}
CREATE TABLE `test_cdc_timestamp` (
  `id` BIGINT COMMENT 'primary key id',
   
   proctime AS PROCTIME(),
   PRIMARY KEY(id) NOT ENFORCED
) WITH (
      'connector' = 'mysql-cdc',
      'hostname' = 'x',
      'scan.startup.mode' = 'timestamp',
      'scan.startup.timestamp-millis' = '171241920' ,
      'port' = '3306',
      'username' = 'xxx',
      'password' = 'xxx',
      'database-name' = 'xxdatabase',
      'table-name' = 'xxtablename',
      'scan.incremental.snapshot.enabled' = 'false',
       'debezium.snapshot.locking.mode' = 'none',
       'server-id' = '5701',
     'server-time-zone' = 'Asia/Shanghai',
    'debezium.skipped.operations' = 'd'
      ); {code}
When I use 'scan.startup.mode' = 'latest-offset' or 'initial', data is 
synchronized normally; when I use 'scan.startup.mode' = 'timestamp', the 
following error is reported:
{code:java}
2024-04-09 11:11:15.619 [debezium-engine] INFO  io.debezium.util.Threads  - 
Requested thread factory for connector MySqlConnector, id = mysql_binlog_source 
named = change-event-source-coordinator
2024-04-09 11:11:15.621 [debezium-engine] INFO  io.debezium.util.Threads  - 
Creating thread 
debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator
2024-04-09 11:11:15.629 
[debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator] 
INFO  io.debezium.pipeline.ChangeEventSourceCoordinator  - Metrics registered
2024-04-09 11:11:15.630 
[debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator] 
INFO  io.debezium.pipeline.ChangeEventSourceCoordinator  - Context created
2024-04-09 11:11:15.642 
[debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator] 
INFO  io.debezium.connector.mysql.MySqlSnapshotChangeEventSource  - No previous 
offset has been found
2024-04-09 11:11:15.642 
[debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator] 
INFO  io.debezium.connector.mysql.MySqlSnapshotChangeEventSource  - According 
to the connector configuration only schema will be snapshotted
2024-04-09 11:11:15.644 
[debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator] 
INFO  io.debezium.pipeline.ChangeEventSourceCoordinator  - Snapshot ended with 
SnapshotResult [status=SKIPPED, offset=null]
2024-04-09 11:11:15.652 
[debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator] 
INFO  io.debezium.util.Threads  - Requested thread factory for connector 
MySqlConnector, id = mysql_binlog_source named = binlog-client
2024-04-09 11:11:15.656 
[debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator] 
INFO  io.debezium.pipeline.ChangeEventSourceCoordinator  - Starting streaming
2024-04-09 11:11:15.682 
[debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator] 
INFO  io.debezium.connector.mysql.MySqlStreamingChangeEventSource  - GTID set 
purged on server: 
0969640a-1d48-11ed-b6cf-28dee561557c:1-27603868993,70958f24-2253-11eb-891d-f875a48ad7b1:1-50323,ec1e6593-2251-11eb-9c18-f875a48ad539:1-25345454762
2024-04-09 11:11:15.682 
[debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator] 
INFO  io.debezium.connector.mysql.MySqlStreamingChangeEventSource  - Skip 0 
events on streaming start
2024-04-09 11:11:15.682 
[debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator] 
INFO  io.debezium.connector.mysql.MySqlStreamingChangeEventSource  - Skip 0 
rows on streaming start
2024-04-09 11:11:15.683 
[debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator] 
INFO  io.debezium.util.Threads  - Creating thread 
debezium-mysqlconnector-mysql_binlog_source-binlog-client
2024-04-09 11:11:15.686 [blc.mysql.com:3306] INFO  io.debezium.util.Threads 
 - Creating thread debezium-mysqlconnector-mysql_binlog_source-binlog-client
2024-04-09 11:11:15.700 [blc.mysql.com:3306] INFO  
io.debezium.connector.mysql.MySqlStreamingChangeEventSource  - Connected to 
MySQL binlog at xxx.mysql.com:3306, starting at MySqlOffsetContext 
[sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, 
sourceInfo=SourceInfo [currentGtid=null, currentBinlogFilename=, 
currentBinlogPosition=0, currentRowNumber=0, serverId=0, sourceTime=null, 
threadId=-1, currentQuery=null, tableIds=[], databaseName=null], 
snapshotCompleted=false, transactionContext=TransactionContext 
[currentTransactionId=null, perTableEventCount={}, totalEventCount=0], 
restartGtidSet=null, currentGtidSet=null, 

[jira] [Updated] (FLINK-33971) Specifies whether to use HBase table that supports dynamic columns.

2024-01-03 Thread MOBIN (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MOBIN updated FLINK-33971:
--
Description: 
Specifies whether to use HBase table that supports dynamic columns.

Refer to the dynamic.table parameter in this document: 
https://www.alibabacloud.com/help/en/flink/developer-reference/apsaradb-for-hbase-connector#section-ltp-3fy-9qv

Sample code for a result table that supports dynamic columns
{code:java}
CREATE TEMPORARY TABLE datagen_source (
  id INT,
  f1hour STRING,
  f1deal BIGINT,
  f2day STRING,
  f2deal BIGINT
) WITH (
  'connector'='datagen'
);
CREATE TEMPORARY TABLE hbase_sink (
  rowkey INT,
  f1 ROW<`hour` STRING, deal BIGINT>,
  f2 ROW<`day` STRING, deal BIGINT>
) WITH (
  'connector'='hbase-2.2',
  'table-name'='',
  'zookeeper.quorum'='',
  'dynamic.table'='true'
);
INSERT INTO hbase_sink
SELECT id, ROW(f1hour, f1deal), ROW(f2day, f2deal) FROM datagen_source; {code}

If dynamic.table is set to true, an HBase table that supports dynamic columns 
is used.
The ROW type declared for each column family must contain two fields: the value 
of the first field is the name of the dynamic column, and the value of the 
second field is the value of the dynamic column.

For example, suppose the datagen_source table contains a row indicating that 
the ID of the commodity is 1, the transaction amount of the commodity between 
10:00 and 11:00 is 100, and the transaction amount of the commodity on July 26, 
2020 is 1. In this case, a row whose rowkey is 1 is inserted into the HBase 
table: f1:10 is 100, and f2:2020-7-26 is 1.
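The mapping in the paragraph above can be sketched as follows (the values are taken from the example; the cell layout is an illustration, not actual connector output):
{code:sql}
-- Source row in datagen_source:
--   id = 1, f1hour = '10', f1deal = 100, f2day = '2020-7-26', f2deal = 1
-- Resulting HBase row with rowkey = 1:
--   f1:10        -> 100   (dynamic column '10' in family f1)
--   f2:2020-7-26 -> 1     (dynamic column '2020-7-26' in family f2)
{code}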

  was:
Specifies whether to use HBase table that supports dynamic columns.

Refer to the dynamic.table parameter in this document: 
https://www.alibabacloud.com/help/en/flink/developer-reference/apsaradb-for-hbase-connector#section-ltp-3fy-9qv

Sample code for a result table that supports dynamic columns

CREATE TEMPORARY TABLE datagen_source (
  id INT,
  f1hour STRING,
  f1deal BIGINT,
  f2day STRING,
  f2deal BIGINT
) WITH (
  'connector'='datagen'
);

CREATE TEMPORARY TABLE hbase_sink (
  rowkey INT,
  f1 ROW<`hour` STRING, deal BIGINT>,
  f2 ROW<`day` STRING, deal BIGINT>
) WITH (
  'connector'='hbase-2.2',
  'table-name'='',
  'zookeeper.quorum'='',
  'dynamic.table'='true'
);

INSERT INTO hbase_sink
SELECT id, ROW(f1hour, f1deal), ROW(f2day, f2deal) FROM datagen_source;
If dynamic.table is set to true, an HBase table that supports dynamic columns 
is used.
The ROW type declared for each column family must contain two fields: the value 
of the first field is the name of the dynamic column, and the value of the 
second field is the value of the dynamic column.

For example, suppose the datagen_source table contains a row indicating that 
the ID of the commodity is 1, the transaction amount of the commodity between 
10:00 and 11:00 is 100, and the transaction amount of the commodity on July 26, 
2020 is 1. In this case, a row whose rowkey is 1 is inserted into the HBase 
table: f1:10 is 100, and f2:2020-7-26 is 1.


> Specifies whether to use HBase table that supports dynamic columns.
> ---
>
> Key: FLINK-33971
> URL: https://issues.apache.org/jira/browse/FLINK-33971
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / HBase
>Reporter: MOBIN
>Priority: Minor
>
> Specifies whether to use HBase table that supports dynamic columns.
> Refer to the dynamic.table parameter in this document: 
> https://www.alibabacloud.com/help/en/flink/developer-reference/apsaradb-for-hbase-connector#section-ltp-3fy-9qv
> Sample code for a result table that supports dynamic columns
> {code:java}
> CREATE TEMPORARY TABLE datagen_source (
>   id INT,
>   f1hour STRING,
>   f1deal BIGINT,
>   f2day STRING,
>   f2deal BIGINT
> ) WITH (
>   'connector'='datagen'
> );
> CREATE TEMPORARY TABLE hbase_sink (
>   rowkey INT,
>   f1 ROW<`hour` STRING, deal BIGINT>,
>   f2 ROW<`day` STRING, deal BIGINT>
> ) WITH (
>   'connector'='hbase-2.2',
>   'table-name'='',
>   'zookeeper.quorum'='',
>   'dynamic.table'='true'
> );
> INSERT INTO hbase_sink
> SELECT id, ROW(f1hour, f1deal), ROW(f2day, f2deal) FROM datagen_source; {code}
> If dynamic.table is set to true, an HBase table that supports dynamic columns 
> is used.
> The ROW type declared for each column family must contain two fields: the 
> value of the first field is the name of the dynamic column, and the value of 
> the second field is the value of the dynamic column.
> For example, suppose the datagen_source table contains a row indicating that 
> the ID of the commodity is 1, the transaction amount of the commodity between 
> 10:00 and 11:00 

[jira] [Updated] (FLINK-33971) Specifies whether to use HBase table that supports dynamic columns.

2024-01-03 Thread MOBIN (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MOBIN updated FLINK-33971:
--
Description: 
Specifies whether to use HBase table that supports dynamic columns.

Refer to the dynamic.table parameter in this document: 
https://www.alibabacloud.com/help/en/flink/developer-reference/apsaradb-for-hbase-connector#section-ltp-3fy-9qv

Sample code for a result table that supports dynamic columns

CREATE TEMPORARY TABLE datagen_source (
  id INT,
  f1hour STRING,
  f1deal BIGINT,
  f2day STRING,
  f2deal BIGINT
) WITH (
  'connector'='datagen'
);

CREATE TEMPORARY TABLE hbase_sink (
  rowkey INT,
  f1 ROW<`hour` STRING, deal BIGINT>,
  f2 ROW<`day` STRING, deal BIGINT>
) WITH (
  'connector'='hbase-2.2',
  'table-name'='',
  'zookeeper.quorum'='',
  'dynamic.table'='true'
);

INSERT INTO hbase_sink
SELECT id, ROW(f1hour, f1deal), ROW(f2day, f2deal) FROM datagen_source;
If dynamic.table is set to true, an HBase table that supports dynamic columns 
is used.
The ROW type declared for each column family must contain two fields: the value 
of the first field is the name of the dynamic column, and the value of the 
second field is the value of the dynamic column.

For example, suppose the datagen_source table contains a row indicating that 
the ID of the commodity is 1, the transaction amount of the commodity between 
10:00 and 11:00 is 100, and the transaction amount of the commodity on July 26, 
2020 is 1. In this case, a row whose rowkey is 1 is inserted into the HBase 
table: f1:10 is 100, and f2:2020-7-26 is 1.

  was:
Specifies whether to use HBase table that supports dynamic columns.

Refer to the dynamic.table parameter in this document: 
https://www.alibabacloud.com/help/en/flink/developer-reference/apsaradb-for-hbase-connector#section-ltp-3fy-9qv

Sample code for a result table that supports dynamic columns

CREATE TEMPORARY TABLE datagen_source (
  id INT,
  f1hour STRING,
  f1deal BIGINT,
  f2day STRING,
  f2deal BIGINT
) WITH (
  'connector'='datagen'
);

CREATE TEMPORARY TABLE hbase_sink (
  rowkey INT,
  f1 ROW<`hour` STRING, deal BIGINT>,
  f2 ROW<`day` STRING, deal BIGINT>
) WITH (
  'connector'='hbase-2.2',
  'table-name'='',
  'zookeeper.quorum'='',
  'dynamic.table'='true'
);

INSERT INTO hbase_sink
SELECT id, ROW(f1hour, f1deal), ROW(f2day, f2deal) FROM datagen_source;
If dynamic.table is set to true, an HBase table that supports dynamic columns 
is used.
The ROW type declared for each column family must contain two fields: the value 
of the first field is the name of the dynamic column, and the value of the 
second field is the value of the dynamic column.

For example, suppose the datagen_source table contains a row indicating that 
the ID of the commodity is 1, the transaction amount of the commodity between 
10:00 and 11:00 is 100, and the transaction amount of the commodity on July 26, 
2020 is 1. In this case, a row whose rowkey is 1 is inserted into the ApsaraDB 
for HBase table: f1:10 is 100, and f2:2020-7-26 is 1.


> Specifies whether to use HBase table that supports dynamic columns.
> ---
>
> Key: FLINK-33971
> URL: https://issues.apache.org/jira/browse/FLINK-33971
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / HBase
>Reporter: MOBIN
>Priority: Minor
>
> Specifies whether to use HBase table that supports dynamic columns.
> Refer to the dynamic.table parameter in this document: 
> https://www.alibabacloud.com/help/en/flink/developer-reference/apsaradb-for-hbase-connector#section-ltp-3fy-9qv
> Sample code for a result table that supports dynamic columns
> CREATE TEMPORARY TABLE datagen_source (
>   id INT,
>   f1hour STRING,
>   f1deal BIGINT,
>   f2day STRING,
>   f2deal BIGINT
> ) WITH (
>   'connector'='datagen'
> );
> CREATE TEMPORARY TABLE hbase_sink (
>   rowkey INT,
>   f1 ROW<`hour` STRING, deal BIGINT>,
>   f2 ROW<`day` STRING, deal BIGINT>
> ) WITH (
>   'connector'='hbase-2.2',
>   'table-name'='',
>   'zookeeper.quorum'='',
>   'dynamic.table'='true'
> );
> INSERT INTO hbase_sink
> SELECT id, ROW(f1hour, f1deal), ROW(f2day, f2deal) FROM datagen_source;
> If dynamic.table is set to true, an HBase table that supports dynamic columns 
> is used.
> The ROW type declared for each column family must contain two fields: the 
> value of the first field is the name of the dynamic column, and the value of 
> the second field is the value of the dynamic column.
> For example, suppose the datagen_source table contains a row indicating that 
> the ID 

[jira] [Commented] (FLINK-33971) Specifies whether to use HBase table that supports dynamic columns.

2024-01-02 Thread MOBIN (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802045#comment-17802045
 ] 

MOBIN commented on FLINK-33971:
---

https://github.com/apache/flink-connector-hbase/pull/36

> Specifies whether to use HBase table that supports dynamic columns.
> ---
>
> Key: FLINK-33971
> URL: https://issues.apache.org/jira/browse/FLINK-33971
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / HBase
>Reporter: MOBIN
>Priority: Minor
>
> Specifies whether to use HBase table that supports dynamic columns.
> Refer to the dynamic.table parameter in this document: 
> https://www.alibabacloud.com/help/en/flink/developer-reference/apsaradb-for-hbase-connector#section-ltp-3fy-9qv
> Sample code for a result table that supports dynamic columns
> CREATE TEMPORARY TABLE datagen_source (
>   id INT,
>   f1hour STRING,
>   f1deal BIGINT,
>   f2day STRING,
>   f2deal BIGINT
> ) WITH (
>   'connector'='datagen'
> );
> CREATE TEMPORARY TABLE hbase_sink (
>   rowkey INT,
>   f1 ROW<`hour` STRING, deal BIGINT>,
>   f2 ROW<`day` STRING, deal BIGINT>
> ) WITH (
>   'connector'='hbase-2.2',
>   'table-name'='',
>   'zookeeper.quorum'='',
>   'dynamic.table'='true'
> );
> INSERT INTO hbase_sink
> SELECT id, ROW(f1hour, f1deal), ROW(f2day, f2deal) FROM datagen_source;
> If dynamic.table is set to true, an HBase table that supports dynamic columns 
> is used.
> The ROW type declared for each column family must contain two fields: the 
> value of the first field is the name of the dynamic column, and the value of 
> the second field is the value of the dynamic column.
> For example, suppose the datagen_source table contains a row indicating that 
> the ID of the commodity is 1, the transaction amount of the commodity between 
> 10:00 and 11:00 is 100, and the transaction amount of the commodity on July 
> 26, 2020 is 1. In this case, a row whose rowkey is 1 is inserted into the 
> ApsaraDB for HBase table: f1:10 is 100, and f2:2020-7-26 is 1.





[jira] (FLINK-33971) Specifies whether to use HBase table that supports dynamic columns.

2024-01-02 Thread MOBIN (Jira)


[ https://issues.apache.org/jira/browse/FLINK-33971 ]


MOBIN deleted comment on FLINK-33971:
---

was (Author: mobin):
https://github.com/apache/flink-connector-hbase/pull/36

> Specifies whether to use HBase table that supports dynamic columns.
> ---
>
> Key: FLINK-33971
> URL: https://issues.apache.org/jira/browse/FLINK-33971
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / HBase
>Reporter: MOBIN
>Priority: Minor
>
> Specifies whether to use HBase table that supports dynamic columns.
> Refer to the dynamic.table parameter in this document: 
> https://www.alibabacloud.com/help/en/flink/developer-reference/apsaradb-for-hbase-connector#section-ltp-3fy-9qv
> Sample code for a result table that supports dynamic columns
> CREATE TEMPORARY TABLE datagen_source (
>   id INT,
>   f1hour STRING,
>   f1deal BIGINT,
>   f2day STRING,
>   f2deal BIGINT
> ) WITH (
>   'connector'='datagen'
> );
> CREATE TEMPORARY TABLE hbase_sink (
>   rowkey INT,
>   f1 ROW<`hour` STRING, deal BIGINT>,
>   f2 ROW<`day` STRING, deal BIGINT>
> ) WITH (
>   'connector'='hbase-2.2',
>   'table-name'='',
>   'zookeeper.quorum'='',
>   'dynamic.table'='true'
> );
> INSERT INTO hbase_sink
> SELECT id, ROW(f1hour, f1deal), ROW(f2day, f2deal) FROM datagen_source;
> If dynamic.table is set to true, an HBase table that supports dynamic columns 
> is used.
> The ROW type declared for each column family must contain two fields: the 
> value of the first field is the name of the dynamic column, and the value of 
> the second field is the value of the dynamic column.
> For example, suppose the datagen_source table contains a row indicating that 
> the ID of the commodity is 1, the transaction amount of the commodity between 
> 10:00 and 11:00 is 100, and the transaction amount of the commodity on July 
> 26, 2020 is 1. In this case, a row whose rowkey is 1 is inserted into the 
> ApsaraDB for HBase table: f1:10 is 100, and f2:2020-7-26 is 1.





[jira] [Created] (FLINK-33971) Specifies whether to use HBase table that supports dynamic columns.

2024-01-02 Thread MOBIN (Jira)
MOBIN created FLINK-33971:
-

 Summary: Specifies whether to use HBase table that supports 
dynamic columns.
 Key: FLINK-33971
 URL: https://issues.apache.org/jira/browse/FLINK-33971
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / HBase
Reporter: MOBIN


Specifies whether to use HBase table that supports dynamic columns.

Refer to the dynamic.table parameter in this document: 
https://www.alibabacloud.com/help/en/flink/developer-reference/apsaradb-for-hbase-connector#section-ltp-3fy-9qv

Sample code for a result table that supports dynamic columns

CREATE TEMPORARY TABLE datagen_source (
  id INT,
  f1hour STRING,
  f1deal BIGINT,
  f2day STRING,
  f2deal BIGINT
) WITH (
  'connector'='datagen'
);

CREATE TEMPORARY TABLE hbase_sink (
  rowkey INT,
  f1 ROW<`hour` STRING, deal BIGINT>,
  f2 ROW<`day` STRING, deal BIGINT>
) WITH (
  'connector'='hbase-2.2',
  'table-name'='',
  'zookeeper.quorum'='',
  'dynamic.table'='true'
);

INSERT INTO hbase_sink
SELECT id, ROW(f1hour, f1deal), ROW(f2day, f2deal) FROM datagen_source;
If dynamic.table is set to true, an HBase table that supports dynamic columns 
is used.
The ROW type declared for each column family must contain two fields: the value 
of the first field is the name of the dynamic column, and the value of the 
second field is the value of the dynamic column.

For example, suppose the datagen_source table contains a row indicating that 
the ID of the commodity is 1, the transaction amount of the commodity between 
10:00 and 11:00 is 100, and the transaction amount of the commodity on July 26, 
2020 is 1. In this case, a row whose rowkey is 1 is inserted into the ApsaraDB 
for HBase table: f1:10 is 100, and f2:2020-7-26 is 1.





[jira] [Updated] (FLINK-29252) Support create table-store table with 'connector'='table-store'

2022-09-09 Thread MOBIN (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MOBIN updated FLINK-29252:
--
Description: 
Support creating a table-store table with 'connector'='table-store':

sink to table-store:
{code:java}
SET 'execution.checkpointing.interval' = '10 s';
CREATE TEMPORARY TABLE word_table (
word STRING
) WITH (
'connector' = 'datagen',
'fields.word.length' = '1'
);
CREATE TABLE word_count (
word STRING PRIMARY KEY NOT ENFORCED,
cnt BIGINT
) WITH(
  'connector' = 'table-store',
  'catalog-name' = 'test-catalog',
  'default-database' = 'test-db',  -- should this be renamed to 'catalog-database'?
  'catalog-table' = 'test-tb',
  'warehouse'='file:/tmp/table_store'
);
INSERT INTO word_count SELECT word, COUNT(*) FROM word_table GROUP BY word; 
{code}
source from table-store:
{code:java}
SET 'execution.checkpointing.interval' = '10 s';
CREATE TABLE word_count (
word STRING PRIMARY KEY NOT ENFORCED,
cnt BIGINT
) WITH(
  'connector' = 'table-store',
  'catalog-name' = 'test-catalog',
  'default-database' = 'test-db',
  'catalog-table' = 'test-tb',
  'warehouse'='file:/tmp/table_store'
);
CREATE TEMPORARY TABLE word_table (
word STRING
) WITH (
'connector' = 'print'
);
INSERT INTO word_table SELECT word FROM word_count;{code}

  was:
Support creating a table-store table with 'connector'='table-store':
{code:java}
SET 'execution.checkpointing.interval' = '10 s';
CREATE TEMPORARY TABLE word_table (
word STRING
) WITH (
'connector' = 'datagen',
'fields.word.length' = '1'
);
CREATE TABLE word_count (
word STRING PRIMARY KEY NOT ENFORCED,
cnt BIGINT
) WITH(
  'connector' = 'table-store',
  'catalog-name' = 'test-catalog',
  'default-database' = 'test-db',  -- should this be renamed to 'catalog-database'?
  'catalog-table' = 'test-tb',
  'warehouse'='file:/tmp/table_store'
);
INSERT INTO word_count SELECT word, COUNT(*) FROM word_table GROUP BY word; 
{code}


> Support create table-store table with 'connector'='table-store'
> ---
>
> Key: FLINK-29252
> URL: https://issues.apache.org/jira/browse/FLINK-29252
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: MOBIN
>Priority: Minor
>
> Support creating a table-store table with 'connector'='table-store':
> sink to table-store:
> {code:java}
> SET 'execution.checkpointing.interval' = '10 s';
> CREATE TEMPORARY TABLE word_table (
> word STRING
> ) WITH (
> 'connector' = 'datagen',
> 'fields.word.length' = '1'
> );
> CREATE TABLE word_count (
> word STRING PRIMARY KEY NOT ENFORCED,
> cnt BIGINT
> ) WITH(
>   'connector' = 'table-store',
>   'catalog-name' = 'test-catalog',
>   'default-database' = 'test-db',  -- should this be renamed to 'catalog-database'?
>   'catalog-table' = 'test-tb',
>   'warehouse'='file:/tmp/table_store'
> );
> INSERT INTO word_count SELECT word, COUNT(*) FROM word_table GROUP BY word; 
> {code}
> source from table-store:
> {code:java}
> SET 'execution.checkpointing.interval' = '10 s';
> CREATE TABLE word_count (
> word STRING PRIMARY KEY NOT ENFORCED,
> cnt BIGINT
> ) WITH(
>   'connector' = 'table-store',
>   'catalog-name' = 'test-catalog',
>   'default-database' = 'test-db',
>   'catalog-table' = 'test-tb',
>   'warehouse'='file:/tmp/table_store'
> );
> CREATE TEMPORARY TABLE word_table (
> word STRING
> ) WITH (
> 'connector' = 'print'
> );
> INSERT INTO word_table SELECT word FROM word_count;{code}





[jira] [Created] (FLINK-29252) Support create table-store table with 'connector'='table-store'

2022-09-09 Thread MOBIN (Jira)
MOBIN created FLINK-29252:
-

 Summary: Support create table-store table with 
'connector'='table-store'
 Key: FLINK-29252
 URL: https://issues.apache.org/jira/browse/FLINK-29252
 Project: Flink
  Issue Type: Improvement
  Components: Table Store
Affects Versions: table-store-0.3.0
Reporter: MOBIN


Support creating a table-store table with 'connector'='table-store':
{code:java}
SET 'execution.checkpointing.interval' = '10 s';
CREATE TEMPORARY TABLE word_table (
word STRING
) WITH (
'connector' = 'datagen',
'fields.word.length' = '1'
);
CREATE TABLE word_count (
word STRING PRIMARY KEY NOT ENFORCED,
cnt BIGINT
) WITH(
  'connector' = 'table-store',
  'catalog-name' = 'test-catalog',
  'default-database' = 'test-db',  -- should this be renamed to 'catalog-database'?
  'catalog-table' = 'test-tb',
  'warehouse'='file:/tmp/table_store'
);
INSERT INTO word_count SELECT word, COUNT(*) FROM word_table GROUP BY word; 
{code}





[jira] [Updated] (FLINK-29213) Pretty schema/snapshot json file

2022-09-06 Thread MOBIN (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MOBIN updated FLINK-29213:
--
Description: 
Pretty-print the schema/snapshot JSON file so it is easy to read.

before improvement:

  !before.jpeg|width=1669,height=115!

after improvement:
!after.jpeg|width=559,height=435! 
 
 

  was:
pretty schema json file,easy to read

before improvement:

  !before.jpeg|width=1669,height=115!

after improvement:
!after.jpeg|width=559,height=435! 
 
 


> Pretty schema/snapshot json file
> 
>
> Key: FLINK-29213
> URL: https://issues.apache.org/jira/browse/FLINK-29213
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: MOBIN
>Priority: Minor
>  Labels: pull-request-available
> Attachments: after.jpeg, before.jpeg
>
>
> Pretty-print the schema/snapshot JSON file so it is easy to read.
> before improvement:
>   !before.jpeg|width=1669,height=115!
> after improvement:
> !after.jpeg|width=559,height=435! 
>  
>  





[jira] [Updated] (FLINK-29213) Pretty schema/snapshot json file

2022-09-06 Thread MOBIN (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MOBIN updated FLINK-29213:
--
Summary: Pretty schema/snapshot json file  (was: Pretty schema json file)

> Pretty schema/snapshot json file
> 
>
> Key: FLINK-29213
> URL: https://issues.apache.org/jira/browse/FLINK-29213
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: MOBIN
>Priority: Minor
>  Labels: pull-request-available
> Attachments: after.jpeg, before.jpeg
>
>
> pretty schema json file,easy to read
> before improvement:
>   !before.jpeg|width=1669,height=115!
> after improvement:
> !after.jpeg|width=559,height=435! 
>  
>  





[jira] [Updated] (FLINK-29213) Pretty schema json file

2022-09-06 Thread MOBIN (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MOBIN updated FLINK-29213:
--
Description: 
pretty schema json file,easy to read

before improvement:

  !before.jpeg|width=1669,height=115!

after improvement:
!after.jpeg|width=559,height=435! 
 
 

  was:
pretty schema json file,easy to read

before improvement:

  !before.jpeg|width=1669,height=115!

after improvement:
!after.jpeg|width=559,height=435! 
 


> Pretty schema json file
> ---
>
> Key: FLINK-29213
> URL: https://issues.apache.org/jira/browse/FLINK-29213
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: MOBIN
>Priority: Minor
>  Labels: pull-request-available
> Attachments: after.jpeg, before.jpeg
>
>
> pretty schema json file,easy to read
> before improvement:
>   !before.jpeg|width=1669,height=115!
> after improvement:
> !after.jpeg|width=559,height=435! 
>  
>  





[jira] [Updated] (FLINK-29213) Pretty schema json file

2022-09-06 Thread MOBIN (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MOBIN updated FLINK-29213:
--
Description: 
pretty schema json file,easy to read

before improvement:

  !before.jpeg|width=1669,height=115!

after improvement:
!after.jpeg|width=559,height=435! 
 

  was:
pretty schema json file,Easy to read

before improvement:

  !before.jpeg|width=1669,height=115!

after improvement:
!after.jpeg|width=559,height=435! 
 


> Pretty schema json file
> ---
>
> Key: FLINK-29213
> URL: https://issues.apache.org/jira/browse/FLINK-29213
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: MOBIN
>Priority: Minor
> Attachments: after.jpeg, before.jpeg
>
>
> pretty schema json file,easy to read
> before improvement:
>   !before.jpeg|width=1669,height=115!
> after improvement:
> !after.jpeg|width=559,height=435! 
>  





[jira] [Updated] (FLINK-29213) Pretty schema json file

2022-09-06 Thread MOBIN (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MOBIN updated FLINK-29213:
--
Description: 
pretty schema json file,Easy to read

before improvement:

  !before.jpeg! 

after improvement:

 

  was:
pretty schema json file,Easy to read

before improvement:

 

after improvement:

 


> Pretty schema json file
> ---
>
> Key: FLINK-29213
> URL: https://issues.apache.org/jira/browse/FLINK-29213
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: MOBIN
>Priority: Minor
> Attachments: after.jpeg, before.jpeg
>
>
> pretty schema json file,Easy to read
> before improvement:
>   !before.jpeg! 
> after improvement:
>  





[jira] [Updated] (FLINK-29213) Pretty schema json file

2022-09-06 Thread MOBIN (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MOBIN updated FLINK-29213:
--
Description: 
pretty schema json file,Easy to read

before improvement:

  !before.jpeg! 

after improvement:
 !after.jpeg! 
 

  was:
pretty schema json file,Easy to read

before improvement:

  !before.jpeg! 

after improvement:

 


> Pretty schema json file
> ---
>
> Key: FLINK-29213
> URL: https://issues.apache.org/jira/browse/FLINK-29213
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: MOBIN
>Priority: Minor
> Attachments: after.jpeg, before.jpeg
>
>
> pretty schema json file,Easy to read
> before improvement:
>   !before.jpeg! 
> after improvement:
>  !after.jpeg! 
>  





[jira] [Updated] (FLINK-29213) Pretty schema json file

2022-09-06 Thread MOBIN (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MOBIN updated FLINK-29213:
--
Attachment: before.jpeg

> Pretty schema json file
> ---
>
> Key: FLINK-29213
> URL: https://issues.apache.org/jira/browse/FLINK-29213
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: MOBIN
>Priority: Minor
> Attachments: after.jpeg, before.jpeg
>
>
> pretty schema json file,Easy to read
> before improvement:
>  
> after improvement:
>  





[jira] [Updated] (FLINK-29213) Pretty schema json file

2022-09-06 Thread MOBIN (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MOBIN updated FLINK-29213:
--
Attachment: after.jpeg

> Pretty schema json file
> ---
>
> Key: FLINK-29213
> URL: https://issues.apache.org/jira/browse/FLINK-29213
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: MOBIN
>Priority: Minor
> Attachments: after.jpeg, before.jpeg
>
>
> pretty schema json file,Easy to read
> before improvement:
>  
> after improvement:
>  





[jira] [Updated] (FLINK-29213) Pretty schema json file

2022-09-06 Thread MOBIN (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MOBIN updated FLINK-29213:
--
Description: 
pretty schema json file,Easy to read

before improvement:

  !before.jpeg|width=1669,height=115!

after improvement:
!after.jpeg|width=559,height=435! 
 

  was:
pretty schema json file,Easy to read

before improvement:

  !before.jpeg! 

after improvement:
 !after.jpeg! 
 


> Pretty schema json file
> ---
>
> Key: FLINK-29213
> URL: https://issues.apache.org/jira/browse/FLINK-29213
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: MOBIN
>Priority: Minor
> Attachments: after.jpeg, before.jpeg
>
>
> pretty schema json file,Easy to read
> before improvement:
>   !before.jpeg|width=1669,height=115!
> after improvement:
> !after.jpeg|width=559,height=435! 
>  





[jira] [Updated] (FLINK-29213) Pretty schema json file

2022-09-06 Thread MOBIN (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MOBIN updated FLINK-29213:
--
Description: 
pretty schema json file,Easy to read

before improvement:

 

after improvement:

 

  was:
pretty schema json file,Easy to read

before improvement:


after improvement:

 


> Pretty schema json file
> ---
>
> Key: FLINK-29213
> URL: https://issues.apache.org/jira/browse/FLINK-29213
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: MOBIN
>Priority: Minor
>
> pretty schema json file,Easy to read
> before improvement:
>  
> after improvement:
>  





[jira] [Updated] (FLINK-29213) Pretty schema json file

2022-09-06 Thread MOBIN (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MOBIN updated FLINK-29213:
--
Description: 
pretty schema json file,Easy to read

before improvement:


after improvement:

 

  was:
pretty schema json file,Easy to read

before improvement:

 

after improvement:

 


> Pretty schema json file
> ---
>
> Key: FLINK-29213
> URL: https://issues.apache.org/jira/browse/FLINK-29213
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: MOBIN
>Priority: Minor
>
> pretty schema json file,Easy to read
> before improvement:
> after improvement:
>  





[jira] [Created] (FLINK-29213) Pretty schema json file

2022-09-06 Thread MOBIN (Jira)
MOBIN created FLINK-29213:
-

 Summary: Pretty schema json file
 Key: FLINK-29213
 URL: https://issues.apache.org/jira/browse/FLINK-29213
 Project: Flink
  Issue Type: Improvement
  Components: Table Store
Affects Versions: table-store-0.3.0
Reporter: MOBIN


Pretty-print the schema JSON file so it is easy to read.

before improvement:

!image-2022-09-07-00-18-31-990.png!

after improvement:

!image-2022-09-07-00-18-56-858.png!
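The change amounts to serializing the schema/snapshot file with indentation instead of as one long line. A minimal Python sketch of the idea (the actual Table Store implementation is Java; the schema fields below are made up for illustration):

```python
import json

# Hypothetical schema content; real schema files contain more fields.
schema = {
    "id": 0,
    "fields": [
        {"id": 0, "name": "word", "type": "STRING"},
        {"id": 1, "name": "cnt", "type": "BIGINT"},
    ],
    "primaryKeys": ["word"],
}

compact = json.dumps(schema)           # before: a single hard-to-read line
pretty = json.dumps(schema, indent=2)  # after: indented and easy to read

print(pretty)
```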





[jira] [Updated] (FLINK-29213) Pretty schema json file

2022-09-06 Thread MOBIN (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MOBIN updated FLINK-29213:
--
Description: 
pretty schema json file,Easy to read

before improvement:

 

after improvement:

 

  was:
pretty schema json file,Easy to read

before improvement:

!image-2022-09-07-00-18-31-990.png!

after improvement:

!image-2022-09-07-00-18-56-858.png!


> Pretty schema json file
> ---
>
> Key: FLINK-29213
> URL: https://issues.apache.org/jira/browse/FLINK-29213
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: MOBIN
>Priority: Minor
>
> pretty schema json file,Easy to read
> before improvement:
>  
> after improvement:
>  





[jira] [Updated] (FLINK-23195) No value for kafka metrics in Flink Web ui

2021-06-30 Thread MOBIN (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MOBIN updated FLINK-23195:
--
Description: 
The Kafka metrics in the Flink Web UI show no value, while the other metrics are displayed normally.

!image-2021-07-01-10-02-04-483.png|width=698,height=445!

!image-2021-07-01-10-03-02-214.png|width=683,height=391!

  was:
The kafka metrics in the Flink Web ui have no value, but the other metrics are 
displayed normally

!image-2021-07-01-10-02-04-483.png|width=698,height=445!

!image-2021-07-01-10-03-02-214.png!


> No value for kafka metrics in Flink Web ui
> --
>
> Key: FLINK-23195
> URL: https://issues.apache.org/jira/browse/FLINK-23195
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Runtime / Metrics
>Affects Versions: 1.11.2
>Reporter: MOBIN
>Priority: Minor
> Attachments: image-2021-07-01-10-02-04-483.png, 
> image-2021-07-01-10-03-02-214.png
>
>
> The kafka metrics in the Flink Web ui have no value, but the other metrics 
> are displayed normally
> !image-2021-07-01-10-02-04-483.png|width=698,height=445!
> !image-2021-07-01-10-03-02-214.png|width=683,height=391!





[jira] [Created] (FLINK-23195) No value for kafka metrics in Flink Web ui

2021-06-30 Thread MOBIN (Jira)
MOBIN created FLINK-23195:
-

 Summary: No value for kafka metrics in Flink Web ui
 Key: FLINK-23195
 URL: https://issues.apache.org/jira/browse/FLINK-23195
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka, Runtime / Metrics
Affects Versions: 1.11.2
Reporter: MOBIN
 Attachments: image-2021-07-01-10-02-04-483.png, 
image-2021-07-01-10-03-02-214.png

The Kafka metrics in the Flink Web UI show no value, while the other metrics are displayed normally.

!image-2021-07-01-10-02-04-483.png|width=698,height=445!

!image-2021-07-01-10-03-02-214.png!





[jira] [Updated] (FLINK-23026) OVER WINDOWS function lost data

2021-06-18 Thread MOBIN (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MOBIN updated FLINK-23026:
--
Description: 
{code:java}
Flink SQL> CREATE TABLE tmall_item(
>   itemID VARCHAR,
>   itemType VARCHAR,
>   eventtime varchar,
>   onSellTime AS TO_TIMESTAMP(eventtime),
>   price DOUBLE,
>   WATERMARK FOR onSellTime AS onSellTime - INTERVAL '0' SECOND
> ) with (
>   'connector.type' = 'kafka',
>'connector.version' = 'universal',
>'connector.topic' = 'items',
>'format.type' = 'csv',
>'connector.properties.bootstrap.servers' = 'localhost:9092'
> );
>
[INFO] Table has been created.

Flink SQL> SELECT
> itemType,
> COUNT(itemID) OVER (
> PARTITION BY itemType
> ORDER BY onSellTime
> RANGE BETWEEN INTERVAL '1' DAY preceding AND CURRENT ROW) AS cot
> FROM tmall_item;
{code}
When I feed the rows below into the topic, the Electronic count comes out as 3 when it should be 4: if two rows share the same event time and the same partition-field value, one of them is lost.

ITEM001,Electronic,2017-11-11 10:01:00,20
 {color:#ff}ITEM002{color},Electronic,{color:#ff}2017-11-11 
10:02:00{color},50
 {color:#ff}ITEM002{color},Electronic,{color:#ff}2017-11-11 
10:02:00{color},50
 ITEM003,Electronic,2017-11-11 10:03:00,50

!image-2021-06-18-10-54-18-125.png|width=1156,height=192!

  was:
{code:java}
Flink SQL> CREATE TABLE tmall_item(
>   itemID VARCHAR,
>   itemType VARCHAR,
>   eventtime varchar,
>   onSellTime AS TO_TIMESTAMP(eventtime),
>   price DOUBLE,
>   WATERMARK FOR onSellTime AS onSellTime - INTERVAL '0' SECOND
> ) with (
>   'connector.type' = 'kafka',
>'connector.version' = 'universal',
>'connector.topic' = 'items',
>'format.type' = 'csv',
>'connector.properties.bootstrap.servers' = 'localhost:9092'
> );
>
[INFO] Table has been created.

Flink SQL> SELECT
> itemType,
> COUNT(itemID) OVER (
> PARTITION BY itemType
> ORDER BY onSellTime
> RANGE BETWEEN INTERVAL '1' DAY preceding AND CURRENT ROW) AS cot
> FROM tmall_item;
{code}
When I enter the following data into the topic, its Electronic count value is 
3, which should normally be 4. If the event time and the value of the partition 
field are the same, data will be lost

ITEM001,Electronic,2017-11-11 10:01:00,20
 {color:#ff}ITEM002{color},Electronic,{color:#ff}2017-11-11 
10:02:00{color},50
 {color:#ff}ITEM002{color},Electronic,{color:#ff}2017-11-11 
10:02:00{color},50
 ITEM003,Electronic,2017-11-11 10:03:00,50

!image-2021-06-18-10-54-18-125.png|width=963,height=160!


> OVER WINDOWS function lost data
> ---
>
> Key: FLINK-23026
> URL: https://issues.apache.org/jira/browse/FLINK-23026
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Table SQL / Client
>Affects Versions: 1.12.1
>Reporter: MOBIN
>Priority: Critical
> Attachments: image-2021-06-18-10-54-18-125.png
>
>
> {code:java}
> Flink SQL> CREATE TABLE tmall_item(
> >   itemID VARCHAR,
> >   itemType VARCHAR,
> >   eventtime varchar,
> >   onSellTime AS TO_TIMESTAMP(eventtime),
> >   price DOUBLE,
> >   WATERMARK FOR onSellTime AS onSellTime - INTERVAL '0' SECOND
> > ) with (
> >   'connector.type' = 'kafka',
> >'connector.version' = 'universal',
> >'connector.topic' = 'items',
> >'format.type' = 'csv',
> >'connector.properties.bootstrap.servers' = 'localhost:9092'
> > );
> >
> [INFO] Table has been created.
> Flink SQL> SELECT
> > itemType,
> > COUNT(itemID) OVER (
> > PARTITION BY itemType
> > ORDER BY onSellTime
> > RANGE BETWEEN INTERVAL '1' DAY preceding AND CURRENT ROW) AS cot
> > FROM tmall_item;
> {code}
> When I enter the following data into the topic, its Electronic count value is 
> 3, which should normally be 4. If the event time and the value of the 
> partition field are the same, data will be lost
> ITEM001,Electronic,2017-11-11 10:01:00,20
>  {color:#ff}ITEM002{color},Electronic,{color:#ff}2017-11-11 
> 10:02:00{color},50
>  {color:#ff}ITEM002{color},Electronic,{color:#ff}2017-11-11 
> 10:02:00{color},50
>  ITEM003,Electronic,2017-11-11 10:03:00,50
> !image-2021-06-18-10-54-18-125.png|width=1156,height=192!





[jira] [Updated] (FLINK-23026) OVER WINDOWS function lost data

2021-06-18 Thread MOBIN (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MOBIN updated FLINK-23026:
--
Description: 
{code:java}
Flink SQL> CREATE TABLE tmall_item(
>   itemID VARCHAR,
>   itemType VARCHAR,
>   eventtime varchar,
>   onSellTime AS TO_TIMESTAMP(eventtime),
>   price DOUBLE,
>   WATERMARK FOR onSellTime AS onSellTime - INTERVAL '0' SECOND
> ) with (
>   'connector.type' = 'kafka',
>'connector.version' = 'universal',
>'connector.topic' = 'items',
>'format.type' = 'csv',
>'connector.properties.bootstrap.servers' = 'localhost:9092'
> );
>
[INFO] Table has been created.

Flink SQL> SELECT
> itemType,
> COUNT(itemID) OVER (
> PARTITION BY itemType
> ORDER BY onSellTime
> RANGE BETWEEN INTERVAL '1' DAY preceding AND CURRENT ROW) AS cot
> FROM tmall_item;
{code}
When I enter the following data into the topic, its Electronic count value is 
3, which should normally be 4. If the event time and the value of the partition 
field are the same, data will be lost

ITEM001,Electronic,2017-11-11 10:01:00,20
 {color:#ff}ITEM002{color},Electronic,{color:#ff}2017-11-11 
10:02:00{color},50
 {color:#ff}ITEM002{color},Electronic,{color:#ff}2017-11-11 
10:02:00{color},50
 ITEM003,Electronic,2017-11-11 10:03:00,50

!image-2021-06-18-10-54-18-125.png|width=963,height=160!

  was:
{code:java}
Flink SQL> CREATE TABLE tmall_item(
>   itemID VARCHAR,
>   itemType VARCHAR,
>   eventtime varchar,
>   onSellTime AS TO_TIMESTAMP(eventtime),
>   price DOUBLE,
>   WATERMARK FOR onSellTime AS onSellTime - INTERVAL '0' SECOND
> ) with (
>   'connector.type' = 'kafka',
>'connector.version' = 'universal',
>'connector.topic' = 'items',
>'format.type' = 'csv',
>'connector.properties.bootstrap.servers' = 'localhost:9092'
> );
>
[INFO] Table has been created.

Flink SQL> SELECT
> itemType,
> COUNT(itemID) OVER (
> PARTITION BY itemType
> ORDER BY onSellTime
> RANGE BETWEEN INTERVAL '1' DAY preceding AND CURRENT ROW) AS cot
> FROM tmall_item;
{code}
When I enter the following data into the topic, its Electronic count value is 
3, which should normally be 4. If the event time and the value of the partition 
field are the same, data will be lost

ITEM001,Electronic,2017-11-11 10:01:00,20
 {color:#FF}ITEM002{color},Electronic,{color:#FF}2017-11-11 
10:02:00{color},50
 {color:#FF}ITEM002{color},Electronic,{color:#FF}2017-11-11 
10:02:00{color},50
 ITEM003,Electronic,2017-11-11 10:03:00,50

!image-2021-06-18-10-54-18-125.png|width=656,height=109!


> OVER WINDOWS function lost data
> ---
>
> Key: FLINK-23026
> URL: https://issues.apache.org/jira/browse/FLINK-23026
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Table SQL / Client
>Affects Versions: 1.12.1
>Reporter: MOBIN
>Priority: Critical
> Attachments: image-2021-06-18-10-54-18-125.png
>
>
> {code:java}
> Flink SQL> CREATE TABLE tmall_item(
> >   itemID VARCHAR,
> >   itemType VARCHAR,
> >   eventtime varchar,
> >   onSellTime AS TO_TIMESTAMP(eventtime),
> >   price DOUBLE,
> >   WATERMARK FOR onSellTime AS onSellTime - INTERVAL '0' SECOND
> > ) with (
> >   'connector.type' = 'kafka',
> >'connector.version' = 'universal',
> >'connector.topic' = 'items',
> >'format.type' = 'csv',
> >'connector.properties.bootstrap.servers' = 'localhost:9092'
> > );
> >
> [INFO] Table has been created.
> Flink SQL> SELECT
> > itemType,
> > COUNT(itemID) OVER (
> > PARTITION BY itemType
> > ORDER BY onSellTime
> > RANGE BETWEEN INTERVAL '1' DAY preceding AND CURRENT ROW) AS cot
> > FROM tmall_item;
> {code}
> When I enter the following data into the topic, its Electronic count value is 
> 3, which should normally be 4. If the event time and the value of the 
> partition field are the same, data will be lost
> ITEM001,Electronic,2017-11-11 10:01:00,20
>  {color:#ff}ITEM002{color},Electronic,{color:#ff}2017-11-11 
> 10:02:00{color},50
>  {color:#ff}ITEM002{color},Electronic,{color:#ff}2017-11-11 
> 10:02:00{color},50
>  ITEM003,Electronic,2017-11-11 10:03:00,50
> !image-2021-06-18-10-54-18-125.png|width=963,height=160!





[jira] [Updated] (FLINK-23026) OVER WINDOWS function lost data

2021-06-17 Thread MOBIN (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MOBIN updated FLINK-23026:
--
Description: 
{code:java}
Flink SQL> CREATE TABLE tmall_item(
>   itemID VARCHAR,
>   itemType VARCHAR,
>   eventtime varchar,
>   onSellTime AS TO_TIMESTAMP(eventtime),
>   price DOUBLE,
>   WATERMARK FOR onSellTime AS onSellTime - INTERVAL '0' SECOND
> ) with (
>   'connector.type' = 'kafka',
>'connector.version' = 'universal',
>'connector.topic' = 'items',
>'format.type' = 'csv',
>'connector.properties.bootstrap.servers' = 'localhost:9092'
> );
>
[INFO] Table has been created.

Flink SQL> SELECT
> itemType,
> COUNT(itemID) OVER (
> PARTITION BY itemType
> ORDER BY onSellTime
> RANGE BETWEEN INTERVAL '1' DAY preceding AND CURRENT ROW) AS cot
> FROM tmall_item;
{code}
When I enter the following data into the topic, its Electronic count value is 
3, which should normally be 4. If the event time and the value of the partition 
field are the same, data will be lost

ITEM001,Electronic,2017-11-11 10:01:00,20
 {color:#FF}ITEM002{color},Electronic,{color:#FF}2017-11-11 
10:02:00{color},50
 {color:#FF}ITEM002{color},Electronic,{color:#FF}2017-11-11 
10:02:00{color},50
 ITEM003,Electronic,2017-11-11 10:03:00,50

!image-2021-06-18-10-54-18-125.png|width=656,height=109!

  was:
{code:java}
Flink SQL> CREATE TABLE tmall_item(
>   itemID VARCHAR,
>   itemType VARCHAR,
>   eventtime varchar,
>   onSellTime AS TO_TIMESTAMP(eventtime),
>   price DOUBLE,
>   WATERMARK FOR onSellTime AS onSellTime - INTERVAL '0' SECOND
> ) with (
>   'connector.type' = 'kafka',
>'connector.version' = 'universal',
>'connector.topic' = 'items',
>'format.type' = 'csv',
>'connector.properties.bootstrap.servers' = 'localhost:9092'
> );
>
[INFO] Table has been created.

Flink SQL> SELECT
> itemType,
> COUNT(itemID) OVER (
> PARTITION BY itemType
> ORDER BY onSellTime
> RANGE BETWEEN INTERVAL '1' DAY preceding AND CURRENT ROW) AS cot
> FROM tmall_item;
{code}

When I enter the following data into the topic, its Electronic count value is 
3, which should normally be 4. If the event time and the value of the partition 
field are the same, data will be lost

ITEM001,Electronic,2017-11-11 10:01:00,20
{color:red}ITEM002{color},Electronic,{color:red}2017-11-11 10:02:00{color},50
{color:red}ITEM002{color},Electronic,{color:red}2017-11-11 10:02:00{color},50
ITEM003,Electronic,2017-11-11 10:03:00,50

!image-2021-06-18-10-54-18-125.png|width=1066,height=177!


> OVER WINDOWS function lost data
> ---
>
> Key: FLINK-23026
> URL: https://issues.apache.org/jira/browse/FLINK-23026
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Table SQL / Client
>Affects Versions: 1.12.1
>Reporter: MOBIN
>Priority: Critical
> Attachments: image-2021-06-18-10-54-18-125.png
>
>
> {code:java}
> Flink SQL> CREATE TABLE tmall_item(
> >   itemID VARCHAR,
> >   itemType VARCHAR,
> >   eventtime varchar,
> >   onSellTime AS TO_TIMESTAMP(eventtime),
> >   price DOUBLE,
> >   WATERMARK FOR onSellTime AS onSellTime - INTERVAL '0' SECOND
> > ) with (
> >   'connector.type' = 'kafka',
> >'connector.version' = 'universal',
> >'connector.topic' = 'items',
> >'format.type' = 'csv',
> >'connector.properties.bootstrap.servers' = 'localhost:9092'
> > );
> >
> [INFO] Table has been created.
> Flink SQL> SELECT
> > itemType,
> > COUNT(itemID) OVER (
> > PARTITION BY itemType
> > ORDER BY onSellTime
> > RANGE BETWEEN INTERVAL '1' DAY preceding AND CURRENT ROW) AS cot
> > FROM tmall_item;
> {code}
> When I enter the following data into the topic, its Electronic count value is 
> 3, which should normally be 4. If the event time and the value of the 
> partition field are the same, data will be lost
> ITEM001,Electronic,2017-11-11 10:01:00,20
>  {color:#FF}ITEM002{color},Electronic,{color:#FF}2017-11-11 
> 10:02:00{color},50
>  {color:#FF}ITEM002{color},Electronic,{color:#FF}2017-11-11 
> 10:02:00{color},50
>  ITEM003,Electronic,2017-11-11 10:03:00,50
> !image-2021-06-18-10-54-18-125.png|width=656,height=109!





[jira] [Created] (FLINK-23026) OVER WINDOWS function lost data

2021-06-17 Thread MOBIN (Jira)
MOBIN created FLINK-23026:
-

 Summary: OVER WINDOWS function lost data
 Key: FLINK-23026
 URL: https://issues.apache.org/jira/browse/FLINK-23026
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API, Table SQL / Client
Affects Versions: 1.12.1
Reporter: MOBIN
 Attachments: image-2021-06-18-10-54-18-125.png

{code:java}
Flink SQL> CREATE TABLE tmall_item(
>   itemID VARCHAR,
>   itemType VARCHAR,
>   eventtime varchar,
>   onSellTime AS TO_TIMESTAMP(eventtime),
>   price DOUBLE,
>   WATERMARK FOR onSellTime AS onSellTime - INTERVAL '0' SECOND
> ) with (
>   'connector.type' = 'kafka',
>'connector.version' = 'universal',
>'connector.topic' = 'items',
>'format.type' = 'csv',
>'connector.properties.bootstrap.servers' = 'localhost:9092'
> );
>
[INFO] Table has been created.

Flink SQL> SELECT
> itemType,
> COUNT(itemID) OVER (
> PARTITION BY itemType
> ORDER BY onSellTime
> RANGE BETWEEN INTERVAL '1' DAY preceding AND CURRENT ROW) AS cot
> FROM tmall_item;
{code}

When I feed the rows below into the topic, the Electronic count comes out as 3 when it should be 4: if two rows share the same event time and the same partition-field value, one of them is lost.

ITEM001,Electronic,2017-11-11 10:01:00,20
{color:red}ITEM002{color},Electronic,{color:red}2017-11-11 10:02:00{color},50
{color:red}ITEM002{color},Electronic,{color:red}2017-11-11 10:02:00{color},50
ITEM003,Electronic,2017-11-11 10:03:00,50

!image-2021-06-18-10-54-18-125.png|width=1066,height=177!
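The expected behavior can be checked with a small Python simulation (illustrative only, not Flink code): a `RANGE BETWEEN INTERVAL '1' DAY PRECEDING AND CURRENT ROW` window should count every row of the partition whose event time falls in the range, including rows that share the same event time.

```python
from datetime import datetime, timedelta

# The rows from the report: (itemID, itemType, eventtime, price).
rows = [
    ("ITEM001", "Electronic", "2017-11-11 10:01:00", 20),
    ("ITEM002", "Electronic", "2017-11-11 10:02:00", 50),
    ("ITEM002", "Electronic", "2017-11-11 10:02:00", 50),  # same time and key
    ("ITEM003", "Electronic", "2017-11-11 10:03:00", 50),
]

def over_count(rows, idx, window=timedelta(days=1)):
    """Count partition rows with event time in [current - window, current],
    i.e. RANGE BETWEEN INTERVAL '1' DAY PRECEDING AND CURRENT ROW."""
    _, part, ts, _ = rows[idx]
    cur = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
    return sum(
        1
        for _, p, t, _ in rows
        if p == part
        and cur - window <= datetime.strptime(t, "%Y-%m-%d %H:%M:%S") <= cur
    )

print(over_count(rows, 3))  # expected: 4, but the job reports 3
```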


