[jira] [Created] (FLINK-34376) FLINK SQL SUM() causes a precision error

2024-02-05 Thread Fangliang Liu (Jira)
Fangliang Liu created FLINK-34376:
-

 Summary: FLINK SQL SUM() causes a precision error
 Key: FLINK-34376
 URL: https://issues.apache.org/jira/browse/FLINK-34376
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.18.1, 1.14.3
Reporter: Fangliang Liu
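The report body is empty. As a hedged illustration of the class of problem named in the summary (not the actual FLINK-34376 reproduction, which the ticket does not give), summing with binary doubles accumulates rounding error that exact decimal arithmetic avoids:

```java
import java.math.BigDecimal;

public class SumPrecision {
    // Summing 0.1 ten times with binary doubles drifts away from 1.0.
    static double doubleSum() {
        double s = 0.0;
        for (int i = 0; i < 10; i++) s += 0.1;
        return s;
    }

    // The same sum with exact decimal arithmetic is precisely 1.0.
    static BigDecimal decimalSum() {
        BigDecimal s = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) s = s.add(new BigDecimal("0.1"));
        return s;
    }

    public static void main(String[] args) {
        System.out.println(doubleSum());  // slightly off from 1.0
        System.out.println(decimalSum()); // exactly 1.0
    }
}
```

This is why SQL engines compute SUM() over DECIMAL columns in decimal arithmetic; a precision bug typically means a result type or conversion ended up in the approximate domain.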






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33079) The gap between the checkpoint timeout and the interval settings is too large

2023-09-13 Thread Fangliang Liu (Jira)
Fangliang Liu created FLINK-33079:
-

 Summary: The gap between the checkpoint timeout and the interval 
settings is too large
 Key: FLINK-33079
 URL: https://issues.apache.org/jira/browse/FLINK-33079
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.19.0
Reporter: Fangliang Liu


The gap between the checkpoint timeout and the interval settings is too large

Some users assume the documented demo configuration is an optimal setting and copy it verbatim.
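For context, a sketch of the two settings in question using the standard checkpointing API; the values here are illustrative only, not a recommendation:

```java
// Sketch only: assumes a StreamExecutionEnvironment is already available.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(60_000L);                         // trigger a checkpoint every 60 s
env.getCheckpointConfig().setCheckpointTimeout(600_000L); // declare it failed after 10 min
```

The complaint is that the documentation's demo puts a much larger gap between these two values than most jobs need.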



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31357) If a record is deleted before being inserted, it will be deleted

2023-03-07 Thread Fangliang Liu (Jira)
Fangliang Liu created FLINK-31357:
-

 Summary: If a record is deleted before being inserted, it will be deleted
 Key: FLINK-31357
 URL: https://issues.apache.org/jira/browse/FLINK-31357
 Project: Flink
  Issue Type: Bug
  Components: Connectors / JDBC
Affects Versions: 1.14.3
Reporter: Fangliang Liu






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-27275) Support null value not update in flink-connector-jdbc

2022-04-17 Thread Fangliang Liu (Jira)
Fangliang Liu created FLINK-27275:
-

 Summary: Support null value not update in flink-connector-jdbc
 Key: FLINK-27275
 URL: https://issues.apache.org/jira/browse/FLINK-27275
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / JDBC
Affects Versions: 1.14.3
Reporter: Fangliang Liu


The following DDL:
{code:java}
CREATE TABLE IF NOT EXISTS `db`.`tablea` (
`user_id`  bigint,
`A` string,
`B` string,
`C` string,
`flag` varchar(256),
PRIMARY KEY (`user_id`) NOT ENFORCED
)WITH (
'connector' = 'jdbc',
'url' = 'jdbc:mysql://xx.xx.xx.xx:xxx/test',
'table-name' = 'user',
'username'='root',
'password'='root',
'sink.buffer-flush.interval'='1s',
'sink.buffer-flush.max-rows'='50',
'sink.parallelism'='2'
); {code}
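The requested behavior (keep the stored column value whenever the incoming field is NULL, instead of overwriting it) could be expressed for a MySQL dialect by generating IFNULL-style update clauses. A minimal, hypothetical sketch; `NullSkippingUpdateBuilder` is not part of flink-connector-jdbc:

```java
import java.util.List;
import java.util.StringJoiner;

public class NullSkippingUpdateBuilder {
    /**
     * Builds a MySQL upsert whose UPDATE part keeps the stored value
     * whenever the incoming field is NULL, via IFNULL(VALUES(c), c).
     */
    static String buildUpsert(String table, List<String> columns, String pk) {
        StringJoiner cols = new StringJoiner(", ");
        StringJoiner marks = new StringJoiner(", ");
        StringJoiner updates = new StringJoiner(", ");
        for (String c : columns) {
            cols.add("`" + c + "`");
            marks.add("?");
            if (!c.equals(pk)) {
                // NULL input -> IFNULL falls back to the column's current value
                updates.add("`" + c + "` = IFNULL(VALUES(`" + c + "`), `" + c + "`)");
            }
        }
        return "INSERT INTO `" + table + "` (" + cols + ") VALUES (" + marks + ")"
                + " ON DUPLICATE KEY UPDATE " + updates;
    }

    public static void main(String[] args) {
        System.out.println(buildUpsert("user",
                List.of("user_id", "A", "B", "C", "flag"), "user_id"));
    }
}
```

The same effect could also be reached by filtering null fields out of the generated SET list entirely; the IFNULL form keeps the prepared-statement shape fixed.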
 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26380) Pending record count must be zero at this point: 5592

2022-02-26 Thread Fangliang Liu (Jira)
Fangliang Liu created FLINK-26380:
-

 Summary: Pending record count must be zero at this point: 5592
 Key: FLINK-26380
 URL: https://issues.apache.org/jira/browse/FLINK-26380
 Project: Flink
  Issue Type: Bug
Reporter: Fangliang Liu


{code:java}
Caused by: java.lang.IllegalStateException: Pending record count must be zero at this point: 5592
{code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-24961) When the DDL statement is different from the actual schema in the database, ArrayIndexOutOfBoundsException will be reported

2021-11-19 Thread Fangliang Liu (Jira)
Fangliang Liu created FLINK-24961:
-

 Summary: When the DDL statement is different from the actual 
schema in the database, ArrayIndexOutOfBoundsException will be reported 
 Key: FLINK-24961
 URL: https://issues.apache.org/jira/browse/FLINK-24961
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.13.2
Reporter: Fangliang Liu


{code:java}
[] - Source: TableSourceScan(table=[[default_catalog, kafka_rt_ods_bybitprod, withdraws]], fields=[user_id, id, position_id, coin, status, transaction_id, amount, fee, address, admin_id, reason, confirm_code, txid, submited_at, confirmed_at, verified_at, packed_at, broadcasted_at, successed_at, canceled_at, rejected_at, expired_at, destination_tag, updated_at, risk_tags, risk_level, risk_status, first_review_result, first_review_admin_id, first_review_desc, first_review_at, final_review_result]) -> DropUpdateBefore -> Sink: Sink(table=[default_catalog.tidb_rt_ods_bybitprod.withdraws], fields=[user_id, id, position_id, coin, status, transaction_id, amount, fee, address, admin_id, reason, confirm_code, txid, submited_at, confirmed_at, verified_at, packed_at, broadcasted_at, successed_at, canceled_at, rejected_at, expired_at, destination_tag, updated_at, risk_tags, risk_level, risk_status, first_review_result, first_review_admin_id, first_review_desc, first_review_at, final_review_result]) (1/1) (238d9e5c8a275d7427fa87d908cda1a3) switched from INITIALIZING to FAILED on container_e14_1627389692587_137379_01_02 @ ip-10-60-53-37.ap-southeast-1.compute.internal (dataPort=41325).
java.lang.ArrayIndexOutOfBoundsException: -1
    at org.apache.flink.connector.jdbc.table.JdbcDynamicOutputFormatBuilder.lambda$createBufferReduceExecutor$1(JdbcDynamicOutputFormatBuilder.java:145) ~[flink-connector-jdbc_2.12-1.13.2.jar:1.13.2]
    at java.util.stream.IntPipeline$4$1.accept(IntPipeline.java:250) ~[?:1.8.0_291]
    at java.util.Spliterators$IntArraySpliterator.forEachRemaining(Spliterators.java:1032) ~[?:1.8.0_291]
    at java.util.Spliterator$OfInt.forEachRemaining(Spliterator.java:693) ~[?:1.8.0_291]
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) ~[?:1.8.0_291]
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) ~[?:1.8.0_291]
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:546) ~[?:1.8.0_291]
    at java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260) ~[?:1.8.0_291]
    at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:438) ~[?:1.8.0_291]
    at org.apache.flink.connector.jdbc.table.JdbcDynamicOutputFormatBuilder.createBufferReduceExecutor(JdbcDynamicOutputFormatBuilder.java:145) ~[flink-connector-jdbc_2.12-1.13.2.jar:1.13.2]
    at org.apache.flink.connector.jdbc.table.JdbcDynamicOutputFormatBuilder.lambda$build$edc08011$1(JdbcDynamicOutputFormatBuilder.java:106) ~[flink-connector-jdbc_2.12-1.13.2.jar:1.13.2]
    at org.apache.flink.connector.jdbc.internal.JdbcBatchingOutputFormat.createAndOpenStatementExecutor(JdbcBatchingOutputFormat.java:142) ~[flink-connector-jdbc_2.12-1.13.2.jar:1.13.2]
    at org.apache.flink.connector.jdbc.internal.JdbcBatchingOutputFormat.open(JdbcBatchingOutputFormat.java:116) ~[flink-connector-jdbc_2.12-1.13.2.jar:1.13.2]
    at org.apache.flink.connector.jdbc.internal.GenericJdbcSinkFunction.open(GenericJdbcSinkFunction.java:49) ~[flink-connector-jdbc_2.12-1.13.2.jar:1.13.2]
    at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34) ~[flink-tidb-connector-1.13-0.0.4.jar:?]
    at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
    at org.apache.flink.table.runtime.operators.sink.SinkOperator.open(SinkOperator.java:58) ~[flink-table-blink_2.12-1.13.2.jar:1.13.2]
    at org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:442) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:582) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
    at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.call(StreamTaskActionExecutor.java:100) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.executeRestore(StreamTask.java:562) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.runWithCleanUpOnFail(StreamTask.java:647) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:537) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:759) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566)
{code}

[jira] [Created] (FLINK-23433) Metrics cannot be initialized in format

2021-07-19 Thread Fangliang Liu (Jira)
Fangliang Liu created FLINK-23433:
-

 Summary: Metrics cannot be initialized in format
 Key: FLINK-23433
 URL: https://issues.apache.org/jira/browse/FLINK-23433
 Project: Flink
  Issue Type: Bug
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Affects Versions: 1.11.3
Reporter: Fangliang Liu


 
I want to use metrics in a custom format, so I wrote the following:
{code:java}
public class ProtobufRowDeserializationSchema implements DeserializationSchema {

    private transient MetricGroup metrics;

    @Override
    public void open(InitializationContext context) throws Exception {
        metrics = context.getMetricGroup();
    }
}
{code}
However, I get a NullPointerException on `metrics`, even though the metrics should already have been initialized by that point.
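For reference, a sketch of how a metric registered in open() is normally used afterwards; the counter name `numParsedRecords` is illustrative and this fragment is not the reporter's actual class:

```java
// Sketch only: assumes Flink's DeserializationSchema.InitializationContext API.
private transient Counter numParsedRecords;

@Override
public void open(InitializationContext context) throws Exception {
    // The metric group is only wired up by the runtime when open() is called,
    // so registration must happen here rather than in the constructor.
    numParsedRecords = context.getMetricGroup().counter("numParsedRecords");
}
```

An NPE despite this pattern suggests open() was never invoked on the instance that later touches `metrics`, which is what the question to the maintainers is about.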

 

[~jark], [~lzljs3620320] , Looking forward to your reply.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23270) The examples in the documentation are inappropriate

2021-07-06 Thread Fangliang Liu (Jira)
Fangliang Liu created FLINK-23270:
-

 Summary: The examples in the documentation are inappropriate
 Key: FLINK-23270
 URL: https://issues.apache.org/jira/browse/FLINK-23270
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Reporter: Fangliang Liu


[https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/sql/queries/joins/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23193) Support to display Options when using desc table statement

2021-06-30 Thread Fangliang Liu (Jira)
Fangliang Liu created FLINK-23193:
-

 Summary: Support to display Options when using desc table statement
 Key: FLINK-23193
 URL: https://issues.apache.org/jira/browse/FLINK-23193
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / Client
Reporter: Fangliang Liu


I use the following statement to create a table.
{code:java}
CREATE TABLE datagen (
  f_sequence INT,
  f_key1 INT,
  f_key2 INT,
  f_random_str STRING,
  log_ts  TIMESTAMP(3),
  WATERMARK FOR log_ts AS log_ts
) WITH (
  'connector' = 'datagen',
  'rows-per-second' = '10',
  'fields.f_sequence.kind' = 'sequence',
  'fields.f_sequence.start' = '1',
  'fields.f_sequence.end' = '1',
  'fields.f_key1.min' = '1',
  'fields.f_key1.max' = '20',
  'fields.f_key2.min' = '1',
  'fields.f_key2.max' = '20',
  'fields.f_random_str.length' = '5'
);
{code}
When I use `desc datagen` to view the table, I get the following result.
{code:java}
+--------------+------------------------+------+-----+--------+-----------+
|         name |                   type | null | key | extras | watermark |
+--------------+------------------------+------+-----+--------+-----------+
|   f_sequence |                    INT | true |     |        |           |
|       f_key1 |                    INT | true |     |        |           |
|       f_key2 |                    INT | true |     |        |           |
| f_random_str |                 STRING | true |     |        |           |
|       log_ts | TIMESTAMP(3) *ROWTIME* | true |     |        |  `log_ts` |
+--------------+------------------------+------+-----+--------+-----------+
5 rows in set
{code}
The options from the WITH clause are not displayed.

I think the following information should also be shown when the desc statement is executed.

 
{code:java}
'connector' = 'datagen', 
'rows-per-second' = '10', 
'fields.f_sequence.kind' = 'sequence', 
'fields.f_sequence.start' = '1', 
'fields.f_sequence.end' = '1', 
'fields.f_key1.min' = '1', 
'fields.f_key1.max' = '20', 
'fields.f_key2.min' = '1', 
'fields.f_key2.max' = '20', 
'fields.f_random_str.length' = '5'
{code}
 

[~jark], what do you think? Looking forward to your reply.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-22896) Flink sql supports parallelism at the operator level

2021-06-07 Thread Fangliang Liu (Jira)
Fangliang Liu created FLINK-22896:
-

 Summary: Flink sql supports parallelism at the operator level
 Key: FLINK-22896
 URL: https://issues.apache.org/jira/browse/FLINK-22896
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / Runtime
Reporter: Fangliang Liu


The DataStream API supports setting parallelism per operator through setParallelism(), but the Table API can only use a global parallelism.

We should let the Table API also have the ability to set the appropriate 
degree of parallelism for each operator on the generated execution graph.
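For comparison, a sketch of what exists today: per-operator parallelism in the DataStream API, and only a job-wide default for SQL jobs. The option key `table.exec.resource.default-parallelism` is the existing global setting, not the proposed per-operator one, and the variables here are illustrative fragments:

```java
// DataStream: parallelism can be set on an individual operator.
DataStream<String> upper = input
        .map(String::toUpperCase)
        .setParallelism(4);

// Table/SQL: today only a job-wide default is configurable.
tableEnv.getConfig().getConfiguration()
        .setString("table.exec.resource.default-parallelism", "8");
```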

CC [~becket_qin], [~jark]. Looking forward to your reply, thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21250) Failure to finalize checkpoint due to org.apache.hadoop.fs.FileAlreadyExistsException: /user/flink/checkpoints/9e36ed4ec2f6685f836e5ee5395f5f2e/chk-11096/_metadata for

2021-02-03 Thread Fangliang Liu (Jira)
Fangliang Liu created FLINK-21250:
-

 Summary:  Failure to finalize checkpoint due to 
org.apache.hadoop.fs.FileAlreadyExistsException: 
/user/flink/checkpoints/9e36ed4ec2f6685f836e5ee5395f5f2e/chk-11096/_metadata 
for client xxx.xx.xx.xxx already exists
 Key: FLINK-21250
 URL: https://issues.apache.org/jira/browse/FLINK-21250
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Affects Versions: 1.10.1
Reporter: Fangliang Liu


Flink Version: 1.10.1

The following exception is occasionally thrown when the Flink job runs on YARN.
{code:java}
2021-02-03 15:36:51,762 qujianpan_server_pbv2_kafka2hive WARN org.apache.flink.runtime.jobmaster.JobMaster - Error while processing checkpoint acknowledgement message
location: org.apache.flink.runtime.scheduler.SchedulerBase.lambda$acknowledgeCheckpoint$4(SchedulerBase.java:796)
throwable: org.apache.flink.runtime.checkpoint.CheckpointException: Could not finalize the pending checkpoint 11096. Failure reason: Failure to finalize checkpoint.
    at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.completePendingCheckpoint(CheckpointCoordinator.java:863)
    at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.receiveAcknowledgeMessage(CheckpointCoordinator.java:781)
    at org.apache.flink.runtime.scheduler.SchedulerBase.lambda$acknowledgeCheckpoint$4(SchedulerBase.java:794)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.fs.FileAlreadyExistsException: /user/flink/checkpoints/9e36ed4ec2f6685f836e5ee5395f5f2e/chk-11096/_metadata for client 172.16.190.74 already exists
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:3021)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2908)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2792)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:615)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:117)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:413)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2278)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2274)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2272)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1841)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1698)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1633)
    at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448)
    at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:459)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:910)
{code}

[jira] [Created] (FLINK-19872) Support to parse millisecond for TIME type in CSV format

2020-10-29 Thread Fangliang Liu (Jira)
Fangliang Liu created FLINK-19872:
-

 Summary: Support to parse millisecond for TIME type in CSV format
 Key: FLINK-19872
 URL: https://issues.apache.org/jira/browse/FLINK-19872
 Project: Flink
  Issue Type: Sub-task
Reporter: Fangliang Liu
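The ticket body is empty. As a hedged illustration of the parsing gap (not Flink's CSV code), java.time handles a TIME value with millisecond precision once the pattern carries a fractional-second part:

```java
import java.time.LocalTime;
import java.time.format.DateTimeFormatter;

public class TimeMillisParse {
    private static final DateTimeFormatter TIME_WITH_MILLIS =
            DateTimeFormatter.ofPattern("HH:mm:ss.SSS");

    // Parses "12:34:56.789" into a LocalTime that retains the milliseconds.
    static LocalTime parse(String s) {
        return LocalTime.parse(s, TIME_WITH_MILLIS);
    }

    public static void main(String[] args) {
        System.out.println(parse("12:34:56.789"));
    }
}
```

A pattern without the `.SSS` part would reject such input, which is the behavior this sub-task asks the CSV format to improve.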






--
This message was sent by Atlassian Jira
(v8.3.4#803005)