[jira] [Created] (FLINK-31461) Supports schema historical version expiring
Shammon created FLINK-31461: --- Summary: Supports schema historical version expiring Key: FLINK-31461 URL: https://issues.apache.org/jira/browse/FLINK-31461 Project: Flink Issue Type: Improvement Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon Schema evolution generates multiple versions of a schema. When a schema version is no longer referenced by any snapshot, it should be deleted. -- This message was sent by Atlassian Jira (v8.20.10#820010)
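The expiry rule proposed above ("delete a schema version once no snapshot references it") can be sketched as follows. This is a minimal illustration, not the table store's actual API; the class and method names (`SchemaExpiry`, `expirableSchemas`) are hypothetical.

```java
import java.util.*;
import java.util.stream.*;

// Hedged sketch of the proposed expiry rule: a schema version may be
// deleted once no live snapshot references it. All names are illustrative.
public class SchemaExpiry {

    /** Returns the schema versions that can be safely deleted. */
    static Set<Long> expirableSchemas(Set<Long> allSchemaVersions,
                                      Collection<Long> snapshotSchemaRefs) {
        Set<Long> live = new HashSet<>(snapshotSchemaRefs);
        return allSchemaVersions.stream()
                .filter(v -> !live.contains(v))   // unreferenced => expirable
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        Set<Long> all = new HashSet<>(Arrays.asList(0L, 1L, 2L, 3L));
        // Live snapshots still reference schema versions 2 and 3,
        // so only versions 0 and 1 are candidates for deletion.
        System.out.println(expirableSchemas(all, Arrays.asList(2L, 3L, 3L)));
    }
}
```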
[jira] [Created] (FLINK-31412) HiveCatalog rollback schema if alter table failed in table store
Shammon created FLINK-31412: --- Summary: HiveCatalog rollback schema if alter table failed in table store Key: FLINK-31412 URL: https://issues.apache.org/jira/browse/FLINK-31412 Project: Flink Issue Type: Improvement Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon Roll back the schema in DFS if HiveCatalog's alter table fails. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-31368) Move operation execution logic out from TableEnvironmentImpl
[ https://issues.apache.org/jira/browse/FLINK-31368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697814#comment-17697814 ] Shammon commented on FLINK-31368: - Hi [~jark] +1 for this proposal. In fact, there are similar problems in the interaction between the sql-gateway and the table environment. Currently the sql-gateway parses SQL statements into `Operation`s and handles them in `OperationExecutor`, while similar logic lives in `TableEnvironmentImpl.executeInternal`. Sometimes I wonder whether a new Operation should be placed in the sql-gateway or in `TableEnvironmentImpl`. > Move operation execution logic out from TableEnvironmentImpl > > > Key: FLINK-31368 > URL: https://issues.apache.org/jira/browse/FLINK-31368 > Project: Flink > Issue Type: Technical Debt > Components: Table SQL / Planner > Reporter: Jark Wu > Priority: Major > > Currently, {{TableEnvironmentImpl}} is a bit bloated. The implementation of {{TableEnvironmentImpl}} is filled with lots of operation execution logic, which makes the class hard to read and maintain. Once you want to add/update an operation, you have to touch {{TableEnvironmentImpl}}, which is unnecessary and not clean. > An improvement idea is to extract the operation execution logic (mainly the command operations, which don't trigger a Flink job) out from {{TableEnvironmentImpl}} and put it close to the corresponding operation. This is how Spark does it with {{RunnableCommand}} and {{V2CommandExec}}. In this way, we only need to add a new {{Operation}} class, without modifying {{TableEnvironmentImpl}}, to support a new command. > This is just an internal refactoring that doesn't affect user APIs and is backward-compatible. -- This message was sent by Atlassian Jira (v8.20.10#820010)
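The refactoring idea quoted above (each command operation carries its own execution logic, so new commands don't touch the environment class) can be sketched like this. This is an illustrative sketch, not Flink's actual API: `ExecutableOperation`, `CreateDatabaseOperation`, and the simplified `Map`-based catalog are all assumed names.

```java
import java.util.*;

// Hedged sketch of the proposed refactoring, loosely modeled on Spark's
// RunnableCommand: each command operation implements its own execute(),
// and the environment merely dispatches to it.
public class OperationSketch {

    interface ExecutableOperation {
        // A real context would expose the catalog manager, config, etc.;
        // a plain Map stands in for it here.
        String execute(Map<String, String> catalog);
    }

    // A sample "command" operation that mutates the catalog and returns a result.
    static final class CreateDatabaseOperation implements ExecutableOperation {
        private final String name;
        CreateDatabaseOperation(String name) { this.name = name; }
        @Override public String execute(Map<String, String> catalog) {
            catalog.put(name, "database");
            return "OK";
        }
    }

    // The environment no longer owns per-command logic: adding a command
    // means adding a new ExecutableOperation class, nothing else changes.
    static String executeInternal(ExecutableOperation op, Map<String, String> catalog) {
        return op.execute(catalog);
    }

    public static void main(String[] args) {
        Map<String, String> catalog = new HashMap<>();
        System.out.println(executeInternal(new CreateDatabaseOperation("db1"), catalog));
        System.out.println(catalog.containsKey("db1"));
    }
}
```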
[jira] [Commented] (FLINK-31309) Rollback DFS schema if hive sync fail in HiveCatalog.createTable
[ https://issues.apache.org/jira/browse/FLINK-31309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17696007#comment-17696007 ] Shammon commented on FLINK-31309: - [~lzljs3620320] Could you assign this to me? Thanks > Rollback DFS schema if hive sync fail in HiveCatalog.createTable > > > Key: FLINK-31309 > URL: https://issues.apache.org/jira/browse/FLINK-31309 > Project: Flink > Issue Type: Improvement > Components: Table Store > Reporter: Jingsong Lee > Priority: Major > Labels: pull-request-available > Fix For: table-store-0.4.0 > > > Avoid schema residue on DFS. -- This message was sent by Atlassian Jira (v8.20.10#820010)
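The rollback behavior this issue (and FLINK-31412 above) asks for follows a standard compensate-on-failure pattern: write the DFS schema first, then sync to the Hive metastore, and delete the just-written schema if the sync throws. A minimal sketch under assumed names (`Dfs`, `HiveSync`, `createTable` are hypothetical, not the table store's real interfaces):

```java
import java.util.*;

// Hedged sketch of the rollback pattern: no schema residue is left on DFS
// when the Hive metastore sync fails. All interface names are illustrative.
public class CreateTableRollback {

    interface Dfs {
        void writeSchema(String table);
        void deleteSchema(String table);
    }

    interface HiveSync {
        void sync(String table) throws Exception;
    }

    static void createTable(String table, Dfs dfs, HiveSync hive) throws Exception {
        dfs.writeSchema(table);
        try {
            hive.sync(table);
        } catch (Exception e) {
            dfs.deleteSchema(table);   // roll back the DFS schema on sync failure
            throw e;
        }
    }

    public static void main(String[] args) {
        Set<String> dfsState = new HashSet<>();
        Dfs dfs = new Dfs() {
            public void writeSchema(String t) { dfsState.add(t); }
            public void deleteSchema(String t) { dfsState.remove(t); }
        };
        try {
            createTable("t1", dfs, t -> { throw new Exception("hive sync failed"); });
        } catch (Exception e) {
            // After the failed sync, no schema is left behind on "DFS".
            System.out.println(dfsState.isEmpty());
        }
    }
}
```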
[jira] [Commented] (FLINK-28691) Improve cache hit rate of generated class
[ https://issues.apache.org/jira/browse/FLINK-28691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17695973#comment-17695973 ] Shammon commented on FLINK-28691: - Hi [~jark] We found the metaspace full-GC problem in codegen, and [~FrankZou] fixed it in our internal branch. This may involve both codegen and the planner; what do you think? I found that other users have also reported the same problem, such as https://issues.apache.org/jira/browse/FLINK-31308 > Improve cache hit rate of generated class > - > > Key: FLINK-28691 > URL: https://issues.apache.org/jira/browse/FLINK-28691 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / Runtime > Reporter: Zou > Priority: Major > > In OLAP scenarios, compiling generated classes happens very frequently; it consumes a lot of CPU, and the large number of generated classes also takes up a lot of space in the metaspace, which leads to frequent full GCs. > As we use a self-incrementing counter in CodeGenUtils#newName, we cannot get the same generated class for two queries even when they are exactly the same. Maybe we could share the same generated class between different queries if they have the same logic; it would be good for job latency and resource consumption. -- This message was sent by Atlassian Jira (v8.20.10#820010)
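The caching idea discussed in this thread can be sketched as follows: if generated class names are derived from the code's content rather than a self-incrementing counter, two identical queries produce identical source text, and a cache keyed by that text can return the already-compiled artifact instead of compiling again. This is an illustrative sketch, not Flink's implementation; `compileOrGetCached` is a hypothetical name and a plain `Object` stands in for the compiled class.

```java
import java.util.concurrent.*;

// Hedged sketch of a compile cache keyed by the generated source text.
// A cache hit means no new class is loaded, so the metaspace stops growing
// with every repeated query.
public class CodegenCache {

    private final ConcurrentMap<String, Object> cache = new ConcurrentHashMap<>();
    private int compilations = 0;

    Object compileOrGetCached(String generatedCode) {
        return cache.computeIfAbsent(generatedCode, code -> {
            compilations++;      // stand-in for an expensive Janino compile
            return new Object(); // stand-in for the compiled class
        });
    }

    public static void main(String[] args) {
        CodegenCache c = new CodegenCache();
        // With a counter-based name the two texts would differ; with a
        // deterministic name they are identical, so the second call hits.
        String code = "public class Proj { /* deterministic name, no counter */ }";
        Object first = c.compileOrGetCached(code);
        Object second = c.compileOrGetCached(code);
        System.out.println(first == second); // cache hit: same compiled artifact
        System.out.println(c.compilations);  // compiled only once
    }
}
```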
[jira] [Commented] (FLINK-31308) JobManager's metaspace out-of-memory when submit a flinksessionjobs
[ https://issues.apache.org/jira/browse/FLINK-31308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17695968#comment-17695968 ] Shammon commented on FLINK-31308: - I think it's the same kind of issue as https://issues.apache.org/jira/browse/FLINK-28691 cc [~FrankZou] > JobManager's metaspace out-of-memory when submit a flinksessionjobs > --- > > Key: FLINK-31308 > URL: https://issues.apache.org/jira/browse/FLINK-31308 > Project: Flink > Issue Type: Bug > Components: Kubernetes Operator, Table SQL / API > Affects Versions: 1.16.1, kubernetes-operator-1.4.0 > Reporter: tanjialiang > Priority: Major > Attachments: image-2023-03-03-10-34-46-681.png > > > Hello teams, when I repeatedly submit a FlinkSessionJob via the Flink operator, it makes the JobManager's metaspace OOM. My job contains some Flink SQL logic; is it that the user classloader isn't closed? Or maybe it's because of Flink SQL's codegen? By the way, it doesn't happen when I submit via flink-sql-gateway. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-31275) Flink supports reporting and storage of source/sink tables relationship
[ https://issues.apache.org/jira/browse/FLINK-31275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shammon updated FLINK-31275: Summary: Flink supports reporting and storage of source/sink tables relationship (was: Flink supports reporting and storage of source/sink tables) > Flink supports reporting and storage of source/sink tables relationship > --- > > Key: FLINK-31275 > URL: https://issues.apache.org/jira/browse/FLINK-31275 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Planner >Affects Versions: 1.18.0 >Reporter: Shammon >Priority: Major > > Currently flink generates job id in `JobGraph` which can identify a job. On > the other hand, flink create source/sink table in planner. We need to create > relations between source and sink tables for the job with an identifier id -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-31275) Flink supports reporting and storage of source/sink tables
[ https://issues.apache.org/jira/browse/FLINK-31275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17695938#comment-17695938 ] Shammon commented on FLINK-31275: - [~jark] Got it. To describe what I want more accurately, I have updated this issue > Flink supports reporting and storage of source/sink tables > -- > > Key: FLINK-31275 > URL: https://issues.apache.org/jira/browse/FLINK-31275 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Planner >Affects Versions: 1.18.0 >Reporter: Shammon >Priority: Major > > Currently flink generates job id in `JobGraph` which can identify a job. On > the other hand, flink create source/sink table in planner. We need to create > relations between source and sink tables for the job with an identifier id -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-31275) Flink supports reporting and storage of source/sink tables
[ https://issues.apache.org/jira/browse/FLINK-31275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shammon updated FLINK-31275: Description: Currently flink generates job id in `JobGraph` which can identify a job. On the other hand, flink create source/sink table in planner. We need to create relations between source and sink tables for the job with an identifier id (was: Currently flink generates job id in `JobGraph` which can identify a job. On the other hand, flink create source/sink table in planner. We need to create relations between source and sink tables for the job, so we should generate a planner id in planner.) > Flink supports reporting and storage of source/sink tables > -- > > Key: FLINK-31275 > URL: https://issues.apache.org/jira/browse/FLINK-31275 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Planner >Affects Versions: 1.18.0 >Reporter: Shammon >Priority: Major > > Currently flink generates job id in `JobGraph` which can identify a job. On > the other hand, flink create source/sink table in planner. We need to create > relations between source and sink tables for the job with an identifier id -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-31275) Flink supports reporting and storage of source/sink tables
[ https://issues.apache.org/jira/browse/FLINK-31275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shammon updated FLINK-31275: Summary: Flink supports reporting and storage of source/sink tables (was: Generate planner id in planner) > Flink supports reporting and storage of source/sink tables > -- > > Key: FLINK-31275 > URL: https://issues.apache.org/jira/browse/FLINK-31275 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Planner >Affects Versions: 1.18.0 >Reporter: Shammon >Priority: Major > > Currently flink generates job id in `JobGraph` which can identify a job. On > the other hand, flink create source/sink table in planner. We need to create > relations between source and sink tables for the job, so we should generate a > planner id in planner. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-31275) Generate planner id in planner
[ https://issues.apache.org/jira/browse/FLINK-31275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17695671#comment-17695671 ] Shammon commented on FLINK-31275: - Thanks [~jark], please assign this issue to me and I will create a FLIP for it > Generate planner id in planner > -- > > Key: FLINK-31275 > URL: https://issues.apache.org/jira/browse/FLINK-31275 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Planner >Affects Versions: 1.18.0 >Reporter: Shammon >Priority: Major > > Currently flink generates job id in `JobGraph` which can identify a job. On > the other hand, flink create source/sink table in planner. We need to create > relations between source and sink tables for the job, so we should generate a > planner id in planner. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-31275) Generate planner id in planner
[ https://issues.apache.org/jira/browse/FLINK-31275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17695598#comment-17695598 ] Shammon commented on FLINK-31275: - [~jark] As described in FLIP-276, we'd like to store the relationship between source/sink tables and ETL jobs, so that users can manage their ETL jobs and tables. Currently Flink gets source/sink tables from the `CatalogManager` in the planner; we need to assign a planner id to identify the job. > Generate planner id in planner > -- > > Key: FLINK-31275 > URL: https://issues.apache.org/jira/browse/FLINK-31275 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Planner >Affects Versions: 1.18.0 >Reporter: Shammon >Priority: Major > > Currently flink generates job id in `JobGraph` which can identify a job. On > the other hand, flink create source/sink table in planner. We need to create > relations between source and sink tables for the job, so we should generate a > planner id in planner. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-31259) Gateway supports initialization of catalog at startup
[ https://issues.apache.org/jira/browse/FLINK-31259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17695539#comment-17695539 ] Shammon commented on FLINK-31259: - Thanks [~jark], I get it. If there is demand, I think we can consider sharing the `CatalogManager` instead of the `Catalog` between sessions in the future :) > Gateway supports initialization of catalog at startup > - > > Key: FLINK-31259 > URL: https://issues.apache.org/jira/browse/FLINK-31259 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Gateway >Affects Versions: 1.18.0 >Reporter: Shammon >Assignee: Shammon >Priority: Major > > Support to initializing catalogs in gateway when it starts -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-31275) Generate planner id in planner
[ https://issues.apache.org/jira/browse/FLINK-31275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shammon updated FLINK-31275: Description: Currently flink generates job id in `JobGraph` which can identify a job. On the other hand, flink create source/sink table in planner. We need to create relations between source and sink tables for the job, so we should generate a planner id in planner. (was: Currently flink generates job id in `JobGraph`. We need to generate job id in planner and create relations between source and sink.) > Generate planner id in planner > -- > > Key: FLINK-31275 > URL: https://issues.apache.org/jira/browse/FLINK-31275 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Planner >Affects Versions: 1.18.0 >Reporter: Shammon >Priority: Major > > Currently flink generates job id in `JobGraph` which can identify a job. On > the other hand, flink create source/sink table in planner. We need to create > relations between source and sink tables for the job, so we should generate a > planner id in planner. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-31275) Generate planner id in planner
[ https://issues.apache.org/jira/browse/FLINK-31275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shammon updated FLINK-31275: Summary: Generate planner id in planner (was: Generate job id in planner) > Generate planner id in planner > -- > > Key: FLINK-31275 > URL: https://issues.apache.org/jira/browse/FLINK-31275 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Planner >Affects Versions: 1.18.0 >Reporter: Shammon >Priority: Major > > Currently flink generates job id in `JobGraph`. We need to generate job id in > planner and create relations between source and sink. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-31275) Generate job id in planner
[ https://issues.apache.org/jira/browse/FLINK-31275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17694913#comment-17694913 ] Shammon commented on FLINK-31275: - Hi [~jark] [~lzljs3620320] What do you think of this issue? thanks > Generate job id in planner > -- > > Key: FLINK-31275 > URL: https://issues.apache.org/jira/browse/FLINK-31275 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Planner >Affects Versions: 1.18.0 >Reporter: Shammon >Priority: Major > > Currently flink generates job id in `JobGraph`. We need to generate job id in > planner and create relations between source and sink. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-31275) Generate job id in planner
Shammon created FLINK-31275: --- Summary: Generate job id in planner Key: FLINK-31275 URL: https://issues.apache.org/jira/browse/FLINK-31275 Project: Flink Issue Type: Improvement Components: Table SQL / Planner Affects Versions: 1.18.0 Reporter: Shammon Currently Flink generates the job id in `JobGraph`. We need to generate a job id in the planner and record the relations between sources and sinks. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-31259) Gateway supports initialization of catalog at startup
[ https://issues.apache.org/jira/browse/FLINK-31259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17694894#comment-17694894 ] Shammon commented on FLINK-31259: - [~jark] I'm not sure whether my understanding of `global catalog` is correct. Currently each `Session` in the gateway has its own `CatalogManager`; I'd like to create independent catalogs from the initialization file for each `Session`'s `CatalogManager` when the session is created. The initialized `Catalog` instances will not be shared between sessions. Related to this, I discussed with [~fsk119] whether it is necessary for all sessions to share one `CatalogManager`; we think the demand for this is still uncertain at present > Gateway supports initialization of catalog at startup > - > > Key: FLINK-31259 > URL: https://issues.apache.org/jira/browse/FLINK-31259 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Gateway >Affects Versions: 1.18.0 >Reporter: Shammon >Assignee: Shammon >Priority: Major > > Support to initializing catalogs in gateway when it starts -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-31271) Introduce system database for catalog in table store
Shammon created FLINK-31271: --- Summary: Introduce system database for catalog in table store Key: FLINK-31271 URL: https://issues.apache.org/jira/browse/FLINK-31271 Project: Flink Issue Type: Improvement Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon Introduce a system database for each catalog in the table store to manage catalog information, such as table dependencies and the relations between snapshots and checkpoints for each table. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-31270) Fix flink jar name in docs for table store
Shammon created FLINK-31270: --- Summary: Fix flink jar name in docs for table store Key: FLINK-31270 URL: https://issues.apache.org/jira/browse/FLINK-31270 Project: Flink Issue Type: Bug Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-31269) Split hive connector to each module of each version
Shammon created FLINK-31269: --- Summary: Split hive connector to each module of each version Key: FLINK-31269 URL: https://issues.apache.org/jira/browse/FLINK-31269 Project: Flink Issue Type: Improvement Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-31259) Gateway supports initialization of catalog at startup
[ https://issues.apache.org/jira/browse/FLINK-31259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17694821#comment-17694821 ] Shammon commented on FLINK-31259: - Hi [~jark] We'd like to initialize catalogs in a session when it is created in the gateway, for example, creating MySQL and Hive catalogs. Currently, business users can initialize them with an initialization file in the sql client. There may be sensitive data in the catalog script, such as usernames/passwords/Hive URIs, which we don't want to show to them. So we want to add this ability to the sql gateway too. When we start the gateway with an initialization file, it will create catalogs from the file for each new session. Users can query them directly and cannot drop these catalogs. What do you think? > Gateway supports initialization of catalog at startup > - > > Key: FLINK-31259 > URL: https://issues.apache.org/jira/browse/FLINK-31259 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Gateway >Affects Versions: 1.18.0 >Reporter: Shammon >Assignee: Shammon >Priority: Major > > Support to initializing catalogs in gateway when it starts -- This message was sent by Atlassian Jira (v8.20.10#820010)
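The per-session initialization flow described above could look roughly like this. This is a minimal sketch under assumed names: `SessionCatalogs`, `newSession`, and the one-entry-per-line script format are all hypothetical, not the gateway's real API; a real init script would contain SQL DDL with the sensitive credentials.

```java
import java.util.*;

// Hedged sketch: the gateway loads an initialization script once at startup;
// every new session replays it into that session's own catalog registry, and
// catalogs created this way are protected from DROP, so end users never see
// or remove the credentials behind them.
public class GatewayInitSketch {

    static final class SessionCatalogs {
        final Map<String, String> catalogs = new HashMap<>();
        final Set<String> protectedCatalogs = new HashSet<>();

        void register(String name, String ddl, boolean fromInitScript) {
            catalogs.put(name, ddl);
            if (fromInitScript) {
                protectedCatalogs.add(name);
            }
        }

        void drop(String name) {
            if (protectedCatalogs.contains(name)) {
                throw new IllegalStateException("Cannot drop initialized catalog: " + name);
            }
            catalogs.remove(name);
        }
    }

    // One "name = DDL" entry per line; each new session replays the script.
    static SessionCatalogs newSession(List<String> initStatements) {
        SessionCatalogs session = new SessionCatalogs();
        for (String stmt : initStatements) {
            String name = stmt.split("=", 2)[0].trim();
            session.register(name, stmt, true);
        }
        return session;
    }

    public static void main(String[] args) {
        SessionCatalogs s = newSession(Arrays.asList("hive_cat = CREATE CATALOG ..."));
        try {
            s.drop("hive_cat"); // users cannot drop an initialized catalog
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```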
[jira] [Created] (FLINK-31259) Gateway supports initialization of catalog at startup
Shammon created FLINK-31259: --- Summary: Gateway supports initialization of catalog at startup Key: FLINK-31259 URL: https://issues.apache.org/jira/browse/FLINK-31259 Project: Flink Issue Type: Improvement Components: Table SQL / Gateway Affects Versions: 1.18.0 Reporter: Shammon Support initializing catalogs in the gateway when it starts. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-31251) Deprecate deserialize method in DeserializationSchema
Shammon created FLINK-31251: --- Summary: Deprecate deserialize method in DeserializationSchema Key: FLINK-31251 URL: https://issues.apache.org/jira/browse/FLINK-31251 Project: Flink Issue Type: Improvement Components: Table SQL / Runtime Affects Versions: 1.18.0 Reporter: Shammon Deprecate the method `T deserialize(byte[] message)` in `DeserializationSchema` and use `void deserialize(byte[] message, Collector<T> out)` instead. -- This message was sent by Atlassian Jira (v8.20.10#820010)
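The difference between the two signatures is that the `Collector`-based variant can emit zero or many records per input message, which the single-return variant cannot express. A self-contained illustration using a minimal stand-in for Flink's interfaces (`DeserializeSketch` and its nested `Collector` are stand-ins, not Flink classes):

```java
import java.util.*;

// Hedged sketch contrasting the two deserialize signatures. The nested
// Collector interface mimics the shape of Flink's Collector for illustration.
public class DeserializeSketch {

    interface Collector<T> {
        void collect(T record);
    }

    // Old style: exactly one record per message, no way to emit zero or many.
    static String deserializeSingle(byte[] message) {
        return new String(message);
    }

    // New style: split one message into multiple records via the collector.
    static void deserializeMulti(byte[] message, Collector<String> out) {
        for (String part : new String(message).split(",")) {
            out.collect(part);
        }
    }

    public static void main(String[] args) {
        List<String> results = new ArrayList<>();
        deserializeMulti("a,b,c".getBytes(), results::add);
        System.out.println(results); // one input message, three emitted records
    }
}
```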
[jira] [Created] (FLINK-31204) HiveCatalogITCase fails due to avro conflict in table store
Shammon created FLINK-31204: --- Summary: HiveCatalogITCase fails due to avro conflict in table store Key: FLINK-31204 URL: https://issues.apache.org/jira/browse/FLINK-31204 Project: Flink Issue Type: Bug Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon Test fails in IDEA:
at akka.actor.ActorCell.invoke(ActorCell.scala:548)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
at akka.dispatch.Mailbox.run(Mailbox.scala:231)
at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
... 4 more
Caused by: java.lang.NoSuchMethodError: org.apache.avro.Schema.isNullable()Z
at org.apache.flink.table.store.format.avro.AvroSchemaConverter.nullableSchema(AvroSchemaConverter.java:203)
at org.apache.flink.table.store.format.avro.AvroSchemaConverter.convertToSchema(AvroSchemaConverter.java:172)
at org.apache.flink.table.store.format.avro.AvroSchemaConverter.convertToSchema(AvroSchemaConverter.java:147)
at org.apache.flink.table.store.format.avro.AvroSchemaConverter.convertToSchema(AvroSchemaConverter.java:147)
at org.apache.flink.table.store.format.avro.AvroSchemaConverter.convertToSchema(AvroSchemaConverter.java:55)
at org.apache.flink.table.store.format.avro.AvroFileFormat$AvroGenericRecordBulkFormat.<init>(AvroFileFormat.java:95)
at org.apache.flink.table.store.format.avro.AvroFileFormat.createReaderFactory(AvroFileFormat.java:80)
at org.apache.flink.table.store.format.FileFormat.createReaderFactory(FileFormat.java:71)
at org.apache.flink.table.store.format.FileFormat.createReaderFactory(FileFormat.java:67)
at org.apache.flink.table.store.file.manifest.ManifestList$Factory.create(ManifestList.java:130)
at org.apache.flink.table.store.file.operation.AbstractFileStoreScan.<init>(AbstractFileStoreScan.java:95)
at org.apache.flink.table.store.file.operation.KeyValueFileStoreScan.<init>(KeyValueFileStoreScan.java:57)
at org.apache.flink.table.store.file.KeyValueFileStore.newScan(KeyValueFileStore.java:118)
at org.apache.flink.table.store.file.KeyValueFileStore.newScan(KeyValueFileStore.java:71)
at org.apache.flink.table.store.file.KeyValueFileStore.newScan(KeyValueFileStore.java:38)
at org.apache.flink.table.store.file.AbstractFileStore.newCommit(AbstractFileStore.java:116)
at org.apache.flink.table.store.file.AbstractFileStore.newCommit(AbstractFileStore.java:43)
at org.apache.flink.table.store.table.AbstractFileStoreTable.newCommit(AbstractFileStoreTable.java:121)
at org.apache.flink.table.store.connector.sink.FileStoreSink.lambda$createCommitterFactory$63124b4e$1(FileStoreSink.java:69)
at org.apache.flink.table.store.connector.sink.CommitterOperator.initializeState(CommitterOperator.java:104)
at org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:122)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:283)
at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:726)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:702)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:669)
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:935)
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:904)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:728)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550)
at java.lang.Thread.run(Thread.java:750)
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-31129) Introduce FlinkEmbeddedHiveRunner for junit5 in table store
Shammon created FLINK-31129: --- Summary: Introduce FlinkEmbeddedHiveRunner for junit5 in table store Key: FLINK-31129 URL: https://issues.apache.org/jira/browse/FLINK-31129 Project: Flink Issue Type: Sub-task Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon Introduce FlinkEmbeddedHiveRunner for junit5 in table store -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-30810) Rework the CliClientITCase to extend AbstractSqlGatewayStatementITCase
[ https://issues.apache.org/jira/browse/FLINK-30810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17691003#comment-17691003 ] Shammon commented on FLINK-30810: - Hi [~fsk119] I'd like to fix this issue, can you assign it to me? Thanks > Rework the CliClientITCase to extend AbstractSqlGatewayStatementITCase > -- > > Key: FLINK-30810 > URL: https://issues.apache.org/jira/browse/FLINK-30810 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / Client, Tests >Affects Versions: 1.17.0 >Reporter: Shengkai Fang >Priority: Major > > We should always use the AbstractSqlGatewayStatementITCase to cover the > statement tests in the sql-client/sql-gateway. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-31121) KafkaSink should be able to catch and ignore exp via config on/off
[ https://issues.apache.org/jira/browse/FLINK-31121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17690349#comment-17690349 ] Shammon commented on FLINK-31121: - I'd like to fix it, could you assign it to me? Thanks [~jingge] > KafkaSink should be able to catch and ignore exp via config on/off > -- > > Key: FLINK-31121 > URL: https://issues.apache.org/jira/browse/FLINK-31121 > Project: Flink > Issue Type: Improvement > Components: Connectors / Kafka >Affects Versions: 1.17.0 >Reporter: Jing Ge >Priority: Major > Fix For: 1.18.0 > > > It is a common requirement for users to catch and ignore exceptions while sinking events to a downstream system like Kafka. It would be convenient for some use cases if the Flink sink provided built-in functionality and a config to turn it on and off, especially for cases where data consistency is not very important or the stream contains dirty events. [1][2] > First of all, consider doing it for KafkaSink. Long term, a common solution that can be used by any connector would be even better. > > [1][https://lists.apache.org/thread/wy31s8wb9qnskq29wn03kp608z4vrwv8] > [2]https://stackoverflow.com/questions/52308911/how-to-handle-exceptions-in-kafka-sink > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-31113) Support AND filter for flink-orc
Shammon created FLINK-31113: --- Summary: Support AND filter for flink-orc Key: FLINK-31113 URL: https://issues.apache.org/jira/browse/FLINK-31113 Project: Flink Issue Type: Improvement Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) Affects Versions: 1.17.0 Reporter: Shammon Support AND filter in flink-orc -- This message was sent by Atlassian Jira (v8.20.10#820010)
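An AND filter for file-level pruning works by keeping a file only when every child predicate keeps it, so it can prune strictly more files than any single child. A minimal sketch of the semantics (illustrative only; this is not flink-orc's `SearchArgument` API, and `AndFilterSketch` is a hypothetical name):

```java
import java.util.*;
import java.util.function.*;

// Hedged sketch of AND-predicate semantics for file pruning: a file passes
// the AND node only if it passes every child predicate.
public class AndFilterSketch {

    static <T> Predicate<T> and(List<Predicate<T>> children) {
        return t -> children.stream().allMatch(p -> p.test(t));
    }

    public static void main(String[] args) {
        // Each file is represented by its {min, max} stats for one column.
        Predicate<long[]> maxOk = range -> range[1] >= 10; // max >= 10
        Predicate<long[]> minOk = range -> range[0] <= 50; // min <= 50
        Predicate<long[]> both = and(Arrays.asList(maxOk, minOk));

        System.out.println(both.test(new long[]{0, 100})); // passes both children
        System.out.println(both.test(new long[]{60, 5}));  // fails maxOk: pruned
    }
}
```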
[jira] [Comment Edited] (FLINK-25537) [JUnit5 Migration] Module: flink-core
[ https://issues.apache.org/jira/browse/FLINK-25537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17689709#comment-17689709 ] Shammon edited comment on FLINK-25537 at 2/16/23 11:48 AM: --- Hi [~Aiden Gong], what's the status of this issue? I found that it has not been updated for more than 6 months, I'd like to take it. Can anyone help to assign to me? THX was (Author: zjureel): Hi, what's the status of this issue? I found that it has not been updated for more than 6 months, I'd like to take it. Can anyone help to assign to me? THX > [JUnit5 Migration] Module: flink-core > - > > Key: FLINK-25537 > URL: https://issues.apache.org/jira/browse/FLINK-25537 > Project: Flink > Issue Type: Sub-task > Components: API / Core >Reporter: Qingsheng Ren >Assignee: Aiden Gong >Priority: Minor > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (FLINK-25537) [JUnit5 Migration] Module: flink-core
[ https://issues.apache.org/jira/browse/FLINK-25537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17689709#comment-17689709 ] Shammon edited comment on FLINK-25537 at 2/16/23 11:48 AM: --- Hi [~Aiden Gong], [~renqs], what's the status of this issue? I found that it has not been updated for more than 6 months, I'd like to take it. Can anyone help to assign to me? THX was (Author: zjureel): Hi [~Aiden Gong], what's the status of this issue? I found that it has not been updated for more than 6 months, I'd like to take it. Can anyone help to assign to me? THX > [JUnit5 Migration] Module: flink-core > - > > Key: FLINK-25537 > URL: https://issues.apache.org/jira/browse/FLINK-25537 > Project: Flink > Issue Type: Sub-task > Components: API / Core >Reporter: Qingsheng Ren >Assignee: Aiden Gong >Priority: Minor > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-25537) [JUnit5 Migration] Module: flink-core
[ https://issues.apache.org/jira/browse/FLINK-25537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17689709#comment-17689709 ] Shammon commented on FLINK-25537: - Hi, what's the status of this issue? I found that it has not been updated for more than 6 months; I'd like to take it. Could anyone assign it to me? Thanks > [JUnit5 Migration] Module: flink-core > - > > Key: FLINK-25537 > URL: https://issues.apache.org/jira/browse/FLINK-25537 > Project: Flink > Issue Type: Sub-task > Components: API / Core >Reporter: Qingsheng Ren >Assignee: Aiden Gong >Priority: Minor > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-31096) Unify versions in pom of table store
Shammon created FLINK-31096: --- Summary: Unify versions in pom of table store Key: FLINK-31096 URL: https://issues.apache.org/jira/browse/FLINK-31096 Project: Flink Issue Type: Improvement Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-31088) Flink free for flink-table-store-docs
Shammon created FLINK-31088: --- Summary: Flink free for flink-table-store-docs Key: FLINK-31088 URL: https://issues.apache.org/jira/browse/FLINK-31088 Project: Flink Issue Type: Sub-task Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-31078) Flink-table-planner free for flink-table-store-core
Shammon created FLINK-31078: --- Summary: Flink-table-planner free for flink-table-store-core Key: FLINK-31078 URL: https://issues.apache.org/jira/browse/FLINK-31078 Project: Flink Issue Type: Sub-task Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-31075) Upgrade test units in table store to junit5
Shammon created FLINK-31075: --- Summary: Upgrade test units in table store to junit5 Key: FLINK-31075 URL: https://issues.apache.org/jira/browse/FLINK-31075 Project: Flink Issue Type: Sub-task Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon Currently there are both JUnit 4 and JUnit 5 test cases in the table store; we should upgrade all JUnit 4 test cases to JUnit 5 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-31074) Introduce value filter for table store
Shammon created FLINK-31074: --- Summary: Introduce value filter for table store Key: FLINK-31074 URL: https://issues.apache.org/jira/browse/FLINK-31074 Project: Flink Issue Type: Improvement Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon Currently the table store supports filtering by key/partition to select files from a partition. Besides key stats, the table store also has value stats, which can be used for filtering too. -- This message was sent by Atlassian Jira (v8.20.10#820010)
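The idea behind FLINK-31074 can be illustrated with a plain-Java toy: each data file carries min/max statistics for a value column, and a file can be skipped whenever the queried value cannot fall inside that range. This is only a sketch; the class and field names below are hypothetical and not the actual table store API.

```java
// Hypothetical sketch of pruning data files with per-file value min/max stats.
// FileStats and filterEquals are illustrative names, not table store classes.
import java.util.ArrayList;
import java.util.List;

public class ValueStatsFilter {
    // Per-file statistics for a single value column.
    static final class FileStats {
        final String file;
        final long min;
        final long max;
        FileStats(String file, long min, long max) {
            this.file = file;
            this.min = min;
            this.max = max;
        }
    }

    // Keep only files whose [min, max] range could contain a row equal to `target`.
    static List<String> filterEquals(List<FileStats> files, long target) {
        List<String> hits = new ArrayList<>();
        for (FileStats f : files) {
            if (f.min <= target && target <= f.max) {
                hits.add(f.file);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        List<FileStats> files = List.of(
                new FileStats("data-0.orc", 1, 10),
                new FileStats("data-1.orc", 20, 30));
        // Only data-1.orc can contain the value 25.
        System.out.println(filterEquals(files, 25));
    }
}
```

The same shape of check works for key stats today; the improvement is to apply it to value stats as well.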
[jira] [Created] (FLINK-31039) ChangelogWithKeyFileStoreTableITCase in table store is not stable
Shammon created FLINK-31039: --- Summary: ChangelogWithKeyFileStoreTableITCase in table store is not stable Key: FLINK-31039 URL: https://issues.apache.org/jira/browse/FLINK-31039 Project: Flink Issue Type: Bug Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon FAILURE! - in org.apache.flink.table.store.connector.ChangelogWithKeyFileStoreTableITCase Error: testFullCompactionChangelogProducerStreamingRandom Time elapsed: 600.077 s <<< ERROR! org.junit.runners.model.TestTimedOutException: test timed out after 60 milliseconds at java.lang.Thread.sleep(Native Method) at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:244) at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:114) at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106) at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80) at org.apache.flink.table.planner.connectors.CollectDynamicSink$CloseableRowIteratorWrapper.hasNext(CollectDynamicSink.java:222) at org.apache.flink.table.store.connector.ChangelogWithKeyFileStoreTableITCase.checkFullCompactionTestResult(ChangelogWithKeyFileStoreTableITCase.java:395) at org.apache.flink.table.store.connector.ChangelogWithKeyFileStoreTableITCase.testFullCompactionChangelogProducerRandom(ChangelogWithKeyFileStoreTableITCase.java:343) at org.apache.flink.table.store.connector.ChangelogWithKeyFileStoreTableITCase.testFullCompactionChangelogProducerStreamingRandom(ChangelogWithKeyFileStoreTableITCase.java:300) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) [INFO] [INFO] Results: [INFO] Error: Errors: Error: ChangelogWithKeyFileStoreTableITCase.testFullCompactionChangelogProducerStreamingRandom:300->testFullCompactionChangelogProducerRandom:343->checkFullCompactionTestResult:395 » TestTimedOut https://github.com/apache/flink-table-store/actions/runs/4161755735/jobs/7200106408 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-31032) Supports AND predicate in orc format for table store
Shammon created FLINK-31032: --- Summary: Supports AND predicate in orc format for table store Key: FLINK-31032 URL: https://issues.apache.org/jira/browse/FLINK-31032 Project: Flink Issue Type: Improvement Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon Support `AND` predicate pushdown in the ORC format -- This message was sent by Atlassian Jira (v8.20.10#820010)
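To show why pushing an `AND` predicate down is worthwhile: a file can only contain a matching row if every conjunct is satisfiable against the file's min/max statistics, so the conjunction prunes at least as many files as either conjunct alone. The following is a minimal stand-alone sketch of that logic, not the actual ORC `SearchArgument` API.

```java
// Sketch: an AND of range predicates evaluated against file-level [min, max]
// stats. A file may be skipped when any conjunct is impossible for its range.
import java.util.List;

public class AndPushdown {
    // A stats predicate answers: could any value in [min, max] satisfy me?
    interface StatsPredicate {
        boolean mayMatch(long min, long max);
    }

    static StatsPredicate greaterThan(long v) {
        return (min, max) -> max > v;
    }

    static StatsPredicate lessThan(long v) {
        return (min, max) -> min < v;
    }

    // AND is satisfiable on a file only if every conjunct is satisfiable.
    static StatsPredicate and(List<StatsPredicate> conjuncts) {
        return (min, max) -> conjuncts.stream().allMatch(p -> p.mayMatch(min, max));
    }

    public static void main(String[] args) {
        StatsPredicate p = and(List.of(greaterThan(10), lessThan(20)));
        System.out.println(p.mayMatch(0, 5));   // false: no value above 10 in this file
        System.out.println(p.mayMatch(15, 40)); // true: range overlaps (10, 20)
    }
}
```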
[jira] [Created] (FLINK-31027) Introduce annotation for table store
Shammon created FLINK-31027: --- Summary: Introduce annotation for table store Key: FLINK-31027 URL: https://issues.apache.org/jira/browse/FLINK-31027 Project: Flink Issue Type: Sub-task Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon Introduce annotation for table store -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-31012) Update docs for files table in table store
Shammon created FLINK-31012: --- Summary: Update docs for files table in table store Key: FLINK-31012 URL: https://issues.apache.org/jira/browse/FLINK-31012 Project: Flink Issue Type: Bug Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon Update docs to add partition -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-31004) Introduce data input and output stream for table store
Shammon created FLINK-31004: --- Summary: Introduce data input and output stream for table store Key: FLINK-31004 URL: https://issues.apache.org/jira/browse/FLINK-31004 Project: Flink Issue Type: Sub-task Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon Introduce data input/output stream for table store -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-31000) Upgrade test units in flink-table-store-common to junit5
Shammon created FLINK-31000: --- Summary: Upgrade test units in flink-table-store-common to junit5 Key: FLINK-31000 URL: https://issues.apache.org/jira/browse/FLINK-31000 Project: Flink Issue Type: Sub-task Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30999) Introduce flink-table-store-test-utils for table store
Shammon created FLINK-30999: --- Summary: Introduce flink-table-store-test-utils for table store Key: FLINK-30999 URL: https://issues.apache.org/jira/browse/FLINK-30999 Project: Flink Issue Type: Sub-task Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon Introduce flink-table-store-test-utils module for table store -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30997) Refactor tests in connector to extend AbstractTestBase
Shammon created FLINK-30997: --- Summary: Refactor tests in connector to extend AbstractTestBase Key: FLINK-30997 URL: https://issues.apache.org/jira/browse/FLINK-30997 Project: Flink Issue Type: Sub-task Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon Refactor tests in connector to extend `AbstractTestBase` -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-30968) Sql-client supports dynamic config to open session
[ https://issues.apache.org/jira/browse/FLINK-30968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shammon updated FLINK-30968: Description: Currently the sql client opens a session with the configuration in flink-conf.yaml when it creates a connection to the gateway. For the convenience of users, it could support dynamic configuration with `--conf`, as in `bin/sql-client.sh gateway --endpoint host:port --conf k1=v1 --conf k2=v2` (was: Currently sql client will open session with configuration in flink-conf.yaml when it creates connection to gateway. For the convenience of users, it can supports dynamic config with `--conf` as `bin/sql-client.sh gateway --endpoint {host}:{port} --conf k1=v1 --conf k2=v2`) > Sql-client supports dynamic config to open session > -- > > Key: FLINK-30968 > URL: https://issues.apache.org/jira/browse/FLINK-30968 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Client >Affects Versions: 1.17.0 >Reporter: Shammon >Priority: Major > > Currently the sql client opens a session with the configuration in flink-conf.yaml > when it creates a connection to the gateway. For the convenience of users, it could > support dynamic configuration with `--conf`, as in `bin/sql-client.sh gateway > --endpoint host:port --conf k1=v1 --conf k2=v2` -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30968) Sql-client supports dynamic config to open session
Shammon created FLINK-30968: --- Summary: Sql-client supports dynamic config to open session Key: FLINK-30968 URL: https://issues.apache.org/jira/browse/FLINK-30968 Project: Flink Issue Type: Improvement Components: Table SQL / Client Affects Versions: 1.17.0 Reporter: Shammon Currently the sql client opens a session with the configuration in flink-conf.yaml when it creates a connection to the gateway. For the convenience of users, it could support dynamic configuration with `--conf`, as in `bin/sql-client.sh gateway --endpoint {host}:{port} --conf k1=v1 --conf k2=v2` -- This message was sent by Atlassian Jira (v8.20.10#820010)
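The proposed `--conf k=v` flags would need to be collected into the session configuration before the session is opened. A minimal sketch of that parsing step is below; `ConfArgsParser` is a hypothetical name and this is not the actual sql-client implementation.

```java
// Illustrative sketch: collect repeated `--conf key=value` arguments into a
// map that could seed the session configuration. Hypothetical, not sql-client code.
import java.util.HashMap;
import java.util.Map;

public class ConfArgsParser {
    static Map<String, String> parse(String[] args) {
        Map<String, String> conf = new HashMap<>();
        // Stop at length - 1 so every `--conf` is guaranteed a following value.
        for (int i = 0; i < args.length - 1; i++) {
            if ("--conf".equals(args[i])) {
                String kv = args[++i];
                int eq = kv.indexOf('=');
                if (eq <= 0) {
                    throw new IllegalArgumentException("Expected key=value after --conf: " + kv);
                }
                conf.put(kv.substring(0, eq), kv.substring(eq + 1));
            }
        }
        return conf;
    }

    public static void main(String[] args) {
        Map<String, String> conf = parse(new String[] {
                "--endpoint", "host:port", "--conf", "k1=v1", "--conf", "k2=v2"});
        System.out.println(conf); // the two dynamic entries, flags like --endpoint ignored
    }
}
```

Entries parsed this way would override the corresponding keys loaded from flink-conf.yaml.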
[jira] [Created] (FLINK-30900) Introduce utils for table store
Shammon created FLINK-30900: --- Summary: Introduce utils for table store Key: FLINK-30900 URL: https://issues.apache.org/jira/browse/FLINK-30900 Project: Flink Issue Type: Sub-task Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon Introduce utils from flink-core for table store -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-30874) Introduce configuration for table store
[ https://issues.apache.org/jira/browse/FLINK-30874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17683660#comment-17683660 ] Shammon commented on FLINK-30874: - [~zhoupeijie] Sure! Hi [~lzljs3620320] can you help to assign it? > Introduce configuration for table store > --- > > Key: FLINK-30874 > URL: https://issues.apache.org/jira/browse/FLINK-30874 > Project: Flink > Issue Type: Sub-task > Components: Table Store >Affects Versions: table-store-0.4.0 >Reporter: Shammon >Priority: Major > > Introduce configuration related classes similar to flink in flink-core for > table store -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30891) Introduce Preconditions for table store
Shammon created FLINK-30891: --- Summary: Introduce Preconditions for table store Key: FLINK-30891 URL: https://issues.apache.org/jira/browse/FLINK-30891 Project: Flink Issue Type: Sub-task Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon Introduce Preconditions for table store from flink-core -- This message was sent by Atlassian Jira (v8.20.10#820010)
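For readers unfamiliar with the class being copied in FLINK-30891: flink-core's `Preconditions` centralizes argument and null checks. A minimal sketch of the common checks is below; the real class has more variants and richer message formatting.

```java
// Minimal sketch in the spirit of flink-core's Preconditions utility.
// Only the two most common checks are mirrored here.
public class Preconditions {
    public static <T> T checkNotNull(T reference, String message) {
        if (reference == null) {
            throw new NullPointerException(message);
        }
        return reference;
    }

    public static void checkArgument(boolean condition, String message) {
        if (!condition) {
            throw new IllegalArgumentException(message);
        }
    }

    public static void main(String[] args) {
        String name = checkNotNull("table-store", "name must not be null");
        checkArgument(!name.isEmpty(), "name must not be empty");
        System.out.println("checks passed for " + name);
    }
}
```

Copying this into the table store removes a reason to depend on flink-core, which fits the "Flink free" sub-tasks above.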
[jira] [Created] (FLINK-30874) Introduce configuration for table store
Shammon created FLINK-30874: --- Summary: Introduce configuration for table store Key: FLINK-30874 URL: https://issues.apache.org/jira/browse/FLINK-30874 Project: Flink Issue Type: Sub-task Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon Introduce configuration related classes similar to flink in flink-core for table store -- This message was sent by Atlassian Jira (v8.20.10#820010)
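The "configuration related classes similar to flink" mentioned in FLINK-30874 pair a typed option (key plus default value) with a configuration holder. The sketch below shows that shape with hypothetical names; it is not Flink's actual `ConfigOption`/`Configuration` code.

```java
// Hypothetical sketch of a typed config option with a default value, in the
// spirit of Flink's ConfigOption/Configuration pattern. Names are illustrative.
import java.util.HashMap;
import java.util.Map;

public class MiniConfig {
    // A typed option: a string key plus a typed default value.
    static final class Option<T> {
        final String key;
        final T defaultValue;
        Option(String key, T defaultValue) {
            this.key = key;
            this.defaultValue = defaultValue;
        }
    }

    private final Map<String, Object> values = new HashMap<>();

    <T> void set(Option<T> option, T value) {
        values.put(option.key, value);
    }

    @SuppressWarnings("unchecked")
    <T> T get(Option<T> option) {
        // Fall back to the option's default when the key was never set.
        return (T) values.getOrDefault(option.key, option.defaultValue);
    }

    public static void main(String[] args) {
        Option<Integer> bucket = new Option<>("bucket", 1);
        MiniConfig conf = new MiniConfig();
        System.out.println(conf.get(bucket)); // default value
        conf.set(bucket, 4);
        System.out.println(conf.get(bucket)); // explicit value
    }
}
```

The benefit of the pattern is that defaults live next to the option definition instead of being set manually elsewhere (compare FLINK-30590 below).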
[jira] [Created] (FLINK-30820) Support varchar and char for spark in table store
Shammon created FLINK-30820: --- Summary: Support varchar and char for spark in table store Key: FLINK-30820 URL: https://issues.apache.org/jira/browse/FLINK-30820 Project: Flink Issue Type: Improvement Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon Support converting the varchar and char types in SparkTypeUtils -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-30750) CompactActionITCase.testBatchCompact in table store is unstable
[ https://issues.apache.org/jira/browse/FLINK-30750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shammon updated FLINK-30750: Description: https://github.com/apache/flink-table-store/actions/runs/3954989166/jobs/6772877033 2023-01-17T11:45:17.9511390Z [INFO] Results: 2023-01-17T11:45:17.9511641Z [INFO] 2023-01-17T11:45:17.9511838Z [ERROR] Errors: 2023-01-17T11:45:17.9512585Z [ERROR] CompactActionITCase.testBatchCompact » JobExecution Job execution failed. 2023-01-17T11:45:17.9512964Z [INFO] 2023-01-17T11:45:17.9513223Z [ERROR] Tests run: 224, Failures: 0, Errors: 1, Skipped: 4 Besides above error, there's another exception as followed https://github.com/apache/flink-table-store/actions/runs/3964547232/jobs/6793496230 Caused by: java.util.concurrent.CompletionException: java.util.concurrent.CompletionException: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Slot request bulk is not fulfillable! Could not allocate the required slot within slot request timeout at org.apache.flink.runtime.scheduler.DefaultExecutionDeployer.lambda$assignResource$4(DefaultExecutionDeployer.java:227) ... 37 more Caused by: java.util.concurrent.CompletionException: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Slot request bulk is not fulfillable! Could not allocate the required slot within slot request timeout at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292) at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308) at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:607) at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) ... 35 more Caused by: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Slot request bulk is not fulfillable! 
Could not allocate the required slot within slot request timeout at org.apache.flink.runtime.jobmaster.slotpool.PhysicalSlotRequestBulkCheckerImpl.lambda$schedulePendingRequestBulkWithTimestampCheck$0(PhysicalSlotRequestBulkCheckerImpl.java:86) ... 28 more Caused by: java.util.concurrent.TimeoutException: Timeout has occurred: 30 ms ... 29 more was: https://github.com/apache/flink-table-store/actions/runs/3954989166/jobs/6772877033 2023-01-17T11:45:17.9511390Z [INFO] Results: 2023-01-17T11:45:17.9511641Z [INFO] 2023-01-17T11:45:17.9511838Z [ERROR] Errors: 2023-01-17T11:45:17.9512585Z [ERROR] CompactActionITCase.testBatchCompact » JobExecution Job execution failed. 2023-01-17T11:45:17.9512964Z [INFO] 2023-01-17T11:45:17.9513223Z [ERROR] Tests run: 224, Failures: 0, Errors: 1, Skipped: 4 > CompactActionITCase.testBatchCompact in table store is unstable > --- > > Key: FLINK-30750 > URL: https://issues.apache.org/jira/browse/FLINK-30750 > Project: Flink > Issue Type: Bug > Components: Table Store >Affects Versions: table-store-0.4.0 >Reporter: Shammon >Priority: Major > > https://github.com/apache/flink-table-store/actions/runs/3954989166/jobs/6772877033 > 2023-01-17T11:45:17.9511390Z [INFO] Results: > 2023-01-17T11:45:17.9511641Z [INFO] > 2023-01-17T11:45:17.9511838Z [ERROR] Errors: > 2023-01-17T11:45:17.9512585Z [ERROR] CompactActionITCase.testBatchCompact » > JobExecution Job execution failed. > 2023-01-17T11:45:17.9512964Z [INFO] > 2023-01-17T11:45:17.9513223Z [ERROR] Tests run: 224, Failures: 0, Errors: 1, > Skipped: 4 > Besides above error, there's another exception as followed > https://github.com/apache/flink-table-store/actions/runs/3964547232/jobs/6793496230 > Caused by: java.util.concurrent.CompletionException: > java.util.concurrent.CompletionException: > org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: > Slot request bulk is not fulfillable! 
Could not allocate the required slot > within slot request timeout > at > org.apache.flink.runtime.scheduler.DefaultExecutionDeployer.lambda$assignResource$4(DefaultExecutionDeployer.java:227) > ... 37 more > Caused by: java.util.concurrent.CompletionException: > org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: > Slot request bulk is not fulfillable! Could not allocate the required slot > within slot request timeout > at > java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292) > at > java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308) > at > java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:607) > at > java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) > ... 35 more > Caused by: >
[jira] [Updated] (FLINK-30750) CompactActionITCase.testBatchCompact in table store is unstable
[ https://issues.apache.org/jira/browse/FLINK-30750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shammon updated FLINK-30750: Description: https://github.com/apache/flink-table-store/actions/runs/3954989166/jobs/6772877033 2023-01-17T11:45:17.9511390Z [INFO] Results: 2023-01-17T11:45:17.9511641Z [INFO] 2023-01-17T11:45:17.9511838Z [ERROR] Errors: 2023-01-17T11:45:17.9512585Z [ERROR] CompactActionITCase.testBatchCompact » JobExecution Job execution failed. 2023-01-17T11:45:17.9512964Z [INFO] 2023-01-17T11:45:17.9513223Z [ERROR] Tests run: 224, Failures: 0, Errors: 1, Skipped: 4 Besides above error, there's another exception as followed https://github.com/apache/flink-table-store/actions/runs/3964547232/jobs/6793496230 Caused by: java.util.concurrent.CompletionException: java.util.concurrent.CompletionException: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Slot request bulk is not fulfillable! Could not allocate the required slot within slot request timeout at org.apache.flink.runtime.scheduler.DefaultExecutionDeployer.lambda$assignResource$4(DefaultExecutionDeployer.java:227) ... 37 more Caused by: java.util.concurrent.CompletionException: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Slot request bulk is not fulfillable! Could not allocate the required slot within slot request timeout at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292) at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308) at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:607) at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) ... 35 more Caused by: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Slot request bulk is not fulfillable! 
Could not allocate the required slot within slot request timeout at org.apache.flink.runtime.jobmaster.slotpool.PhysicalSlotRequestBulkCheckerImpl.lambda$schedulePendingRequestBulkWithTimestampCheck$0(PhysicalSlotRequestBulkCheckerImpl.java:86) ... 28 more Caused by: java.util.concurrent.TimeoutException: Timeout has occurred: 30 ms ... 29 more was: https://github.com/apache/flink-table-store/actions/runs/3954989166/jobs/6772877033 2023-01-17T11:45:17.9511390Z [INFO] Results: 2023-01-17T11:45:17.9511641Z [INFO] 2023-01-17T11:45:17.9511838Z [ERROR] Errors: 2023-01-17T11:45:17.9512585Z [ERROR] CompactActionITCase.testBatchCompact » JobExecution Job execution failed. 2023-01-17T11:45:17.9512964Z [INFO] 2023-01-17T11:45:17.9513223Z [ERROR] Tests run: 224, Failures: 0, Errors: 1, Skipped: 4 Besides above error, there's another exception as followed https://github.com/apache/flink-table-store/actions/runs/3964547232/jobs/6793496230 Caused by: java.util.concurrent.CompletionException: java.util.concurrent.CompletionException: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Slot request bulk is not fulfillable! Could not allocate the required slot within slot request timeout at org.apache.flink.runtime.scheduler.DefaultExecutionDeployer.lambda$assignResource$4(DefaultExecutionDeployer.java:227) ... 37 more Caused by: java.util.concurrent.CompletionException: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Slot request bulk is not fulfillable! Could not allocate the required slot within slot request timeout at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292) at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308) at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:607) at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) ... 
35 more Caused by: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Slot request bulk is not fulfillable! Could not allocate the required slot within slot request timeout at org.apache.flink.runtime.jobmaster.slotpool.PhysicalSlotRequestBulkCheckerImpl.lambda$schedulePendingRequestBulkWithTimestampCheck$0(PhysicalSlotRequestBulkCheckerImpl.java:86) ... 28 more Caused by: java.util.concurrent.TimeoutException: Timeout has occurred: 30 ms ... 29 more > CompactActionITCase.testBatchCompact in table store is unstable > --- > > Key: FLINK-30750 > URL: https://issues.apache.org/jira/browse/FLINK-30750 > Project: Flink > Issue Type: Bug > Components: Table Store >Affects Versions: table-store-0.4.0 >Reporter: Shammon >Priority: Major > >
[jira] [Commented] (FLINK-30680) Consider using the autoscaler to detect slow taskmanagers
[ https://issues.apache.org/jira/browse/FLINK-30680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678496#comment-17678496 ] Shammon commented on FLINK-30680: - Thanks [~gyfora] for creating this issue. In fact, our team at ByteDance has developed a similar feature in our Flink cluster, and we are trying to apply it in production. Our test results show that it works very well against slow nodes in streaming jobs. As for the difference between detecting a slow TM and restarting the job, and the overall proposal, [~wangm92] and [~Zhanghao Chen] can give more input > Consider using the autoscaler to detect slow taskmanagers > - > > Key: FLINK-30680 > URL: https://issues.apache.org/jira/browse/FLINK-30680 > Project: Flink > Issue Type: New Feature > Components: Autoscaler, Kubernetes Operator >Reporter: Gyula Fora >Priority: Major > > We could leverage logic in the autoscaler to detect slow taskmanagers by > comparing the per-record processing times between them. > If we notice that all subtasks on a single TM are considerably slower than > the rest (at similar input rates) we should try simply restarting the job > instead of scaling it up. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-30750) CompactActionITCase.testBatchCompact in table store is unstable
[ https://issues.apache.org/jira/browse/FLINK-30750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shammon updated FLINK-30750: Description: https://github.com/apache/flink-table-store/actions/runs/3954989166/jobs/6772877033 2023-01-17T11:45:17.9511390Z [INFO] Results: 2023-01-17T11:45:17.9511641Z [INFO] 2023-01-17T11:45:17.9511838Z [ERROR] Errors: 2023-01-17T11:45:17.9512585Z [ERROR] CompactActionITCase.testBatchCompact » JobExecution Job execution failed. 2023-01-17T11:45:17.9512964Z [INFO] 2023-01-17T11:45:17.9513223Z [ERROR] Tests run: 224, Failures: 0, Errors: 1, Skipped: 4 was: https://github.com/apache/flink-table-store/actions/runs/3938700717/jobs/6737695722 2023-01-17T11:45:17.9511390Z [INFO] Results: 2023-01-17T11:45:17.9511641Z [INFO] 2023-01-17T11:45:17.9511838Z [ERROR] Errors: 2023-01-17T11:45:17.9512585Z [ERROR] CompactActionITCase.testBatchCompact » JobExecution Job execution failed. 2023-01-17T11:45:17.9512964Z [INFO] 2023-01-17T11:45:17.9513223Z [ERROR] Tests run: 224, Failures: 0, Errors: 1, Skipped: 4 > CompactActionITCase.testBatchCompact in table store is unstable > --- > > Key: FLINK-30750 > URL: https://issues.apache.org/jira/browse/FLINK-30750 > Project: Flink > Issue Type: Bug > Components: Table Store >Affects Versions: table-store-0.4.0 >Reporter: Shammon >Priority: Major > > https://github.com/apache/flink-table-store/actions/runs/3954989166/jobs/6772877033 > 2023-01-17T11:45:17.9511390Z [INFO] Results: > 2023-01-17T11:45:17.9511641Z [INFO] > 2023-01-17T11:45:17.9511838Z [ERROR] Errors: > 2023-01-17T11:45:17.9512585Z [ERROR] CompactActionITCase.testBatchCompact » > JobExecution Job execution failed. > 2023-01-17T11:45:17.9512964Z [INFO] > 2023-01-17T11:45:17.9513223Z [ERROR] Tests run: 224, Failures: 0, Errors: 1, > Skipped: 4 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30750) CompactActionITCase.testBatchCompact in table store is unstable
Shammon created FLINK-30750: --- Summary: CompactActionITCase.testBatchCompact in table store is unstable Key: FLINK-30750 URL: https://issues.apache.org/jira/browse/FLINK-30750 Project: Flink Issue Type: Bug Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon https://github.com/apache/flink-table-store/actions/runs/3938700717/jobs/6737695722 2023-01-17T11:45:17.9511390Z [INFO] Results: 2023-01-17T11:45:17.9511641Z [INFO] 2023-01-17T11:45:17.9511838Z [ERROR] Errors: 2023-01-17T11:45:17.9512585Z [ERROR] CompactActionITCase.testBatchCompact » JobExecution Job execution failed. 2023-01-17T11:45:17.9512964Z [INFO] 2023-01-17T11:45:17.9513223Z [ERROR] Tests run: 224, Failures: 0, Errors: 1, Skipped: 4 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30723) Introduce filter pushdown for parquet format
Shammon created FLINK-30723: --- Summary: Introduce filter pushdown for parquet format Key: FLINK-30723 URL: https://issues.apache.org/jira/browse/FLINK-30723 Project: Flink Issue Type: Improvement Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon Introduce filter pushdown for parquet format -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30710) Fix invalid field id for nested type in spark catalog
Shammon created FLINK-30710: --- Summary: Fix invalid field id for nested type in spark catalog Key: FLINK-30710 URL: https://issues.apache.org/jira/browse/FLINK-30710 Project: Flink Issue Type: Bug Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon Currently users can create tables via Spark SQL, but the field id starts from 0 for nested types, which causes an exception -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30701) Remove SimpleTableTestHelper and use sql in SparkITCase
Shammon created FLINK-30701: --- Summary: Remove SimpleTableTestHelper and use sql in SparkITCase Key: FLINK-30701 URL: https://issues.apache.org/jira/browse/FLINK-30701 Project: Flink Issue Type: Sub-task Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon Remove `SimpleTableTestHelper` in SparkITCase and use create/select/insert SQL statements instead -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (FLINK-30603) CompactActionITCase in table store is unstable
[ https://issues.apache.org/jira/browse/FLINK-30603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17677208#comment-17677208 ] Shammon edited comment on FLINK-30603 at 1/16/23 7:45 AM: -- This test case is still unstable cc [~lzljs3620320] [~TsReaper] was (Author: zjureel): This test case is still unstable cc [~lzljs3620320][~TsReaper] > CompactActionITCase in table store is unstable > -- > > Key: FLINK-30603 > URL: https://issues.apache.org/jira/browse/FLINK-30603 > Project: Flink > Issue Type: Bug > Components: Table Store >Affects Versions: table-store-0.4.0 >Reporter: Shammon >Assignee: Caizhi Weng >Priority: Major > Labels: pull-request-available > Fix For: table-store-0.4.0, table-store-0.3.1 > > > https://github.com/apache/flink-table-store/actions/runs/3927960511/jobs/6715071149 > Error: Failures: > Error:CompactActionITCase.testStreamingCompact:193 expected:<[+I > 1|100|15|20221208, +I 1|100|15|20221209]> but was:<[+I 1|100|15|20221208]> > [INFO] > Error: Tests run: 221, Failures: 1, Errors: 0, Skipped: 4 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (FLINK-30603) CompactActionITCase in table store is unstable
[ https://issues.apache.org/jira/browse/FLINK-30603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17677208#comment-17677208 ] Shammon edited comment on FLINK-30603 at 1/16/23 7:45 AM: -- This test case is still unstable cc [~lzljs3620320][~TsReaper] was (Author: zjureel): This test case is still unstable > CompactActionITCase in table store is unstable > -- > > Key: FLINK-30603 > URL: https://issues.apache.org/jira/browse/FLINK-30603 > Project: Flink > Issue Type: Bug > Components: Table Store >Affects Versions: table-store-0.4.0 >Reporter: Shammon >Assignee: Caizhi Weng >Priority: Major > Labels: pull-request-available > Fix For: table-store-0.4.0, table-store-0.3.1 > > > https://github.com/apache/flink-table-store/actions/runs/3927960511/jobs/6715071149 > Error: Failures: > Error:CompactActionITCase.testStreamingCompact:193 expected:<[+I > 1|100|15|20221208, +I 1|100|15|20221209]> but was:<[+I 1|100|15|20221208]> > [INFO] > Error: Tests run: 221, Failures: 1, Errors: 0, Skipped: 4 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-30603) CompactActionITCase in table store is unstable
[ https://issues.apache.org/jira/browse/FLINK-30603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shammon updated FLINK-30603: Description: https://github.com/apache/flink-table-store/actions/runs/3927960511/jobs/6715071149 Error: Failures: Error:CompactActionITCase.testStreamingCompact:193 expected:<[+I 1|100|15|20221208, +I 1|100|15|20221209]> but was:<[+I 1|100|15|20221208]> [INFO] Error: Tests run: 221, Failures: 1, Errors: 0, Skipped: 4 was: https://github.com/apache/flink-table-store/actions/runs/3871130631/jobs/6598625030 [INFO] Results: [INFO] Error: Failures: Error:CompactActionITCase.testStreamingCompact:187 expected:<[+I 1|100|15|20221208, +I 1|100|15|20221209]> but was:<[+I 1|100|15|20221209]> > CompactActionITCase in table store is unstable > -- > > Key: FLINK-30603 > URL: https://issues.apache.org/jira/browse/FLINK-30603 > Project: Flink > Issue Type: Bug > Components: Table Store >Affects Versions: table-store-0.4.0 >Reporter: Shammon >Assignee: Caizhi Weng >Priority: Major > Labels: pull-request-available > Fix For: table-store-0.4.0, table-store-0.3.1 > > > https://github.com/apache/flink-table-store/actions/runs/3927960511/jobs/6715071149 > Error: Failures: > Error:CompactActionITCase.testStreamingCompact:193 expected:<[+I > 1|100|15|20221208, +I 1|100|15|20221209]> but was:<[+I 1|100|15|20221208]> > [INFO] > Error: Tests run: 221, Failures: 1, Errors: 0, Skipped: 4 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Reopened] (FLINK-30603) CompactActionITCase in table store is unstable
[ https://issues.apache.org/jira/browse/FLINK-30603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shammon reopened FLINK-30603: - This test case is still unstable > CompactActionITCase in table store is unstable > -- > > Key: FLINK-30603 > URL: https://issues.apache.org/jira/browse/FLINK-30603 > Project: Flink > Issue Type: Bug > Components: Table Store >Affects Versions: table-store-0.4.0 >Reporter: Shammon >Assignee: Caizhi Weng >Priority: Major > Labels: pull-request-available > Fix For: table-store-0.4.0, table-store-0.3.1 > > > https://github.com/apache/flink-table-store/actions/runs/3871130631/jobs/6598625030 > [INFO] Results: > [INFO] > Error: Failures: > Error:CompactActionITCase.testStreamingCompact:187 expected:<[+I > 1|100|15|20221208, +I 1|100|15|20221209]> but was:<[+I 1|100|15|20221209]> -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30603) CompactActionITCase in table store is unstable
Shammon created FLINK-30603: --- Summary: CompactActionITCase in table store is unstable Key: FLINK-30603 URL: https://issues.apache.org/jira/browse/FLINK-30603 Project: Flink Issue Type: Bug Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon https://github.com/apache/flink-table-store/actions/runs/3871130631/jobs/6598625030 [INFO] Results: [INFO] Error: Failures: Error:CompactActionITCase.testStreamingCompact:187 expected:<[+I 1|100|15|20221208, +I 1|100|15|20221209]> but was:<[+I 1|100|15|20221209]> -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30602) Remove FileStoreTableITCase in table store
Shammon created FLINK-30602: --- Summary: Remove FileStoreTableITCase in table store Key: FLINK-30602 URL: https://issues.apache.org/jira/browse/FLINK-30602 Project: Flink Issue Type: Sub-task Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon Remove `FileStoreTableITCase` in table store -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-30590) Remove set default value manually for table options
[ https://issues.apache.org/jira/browse/FLINK-30590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17655787#comment-17655787 ] Shammon commented on FLINK-30590: - [~nicholasjiang] Currently `CoreOptions.startupMode` checks the specified configuration and determines the scan mode, so `CoreOptions.setDefault` is duplicated > Remove set default value manually for table options > --- > > Key: FLINK-30590 > URL: https://issues.apache.org/jira/browse/FLINK-30590 > Project: Flink > Issue Type: Improvement > Components: Table Store >Affects Versions: table-store-0.4.0 >Reporter: Shammon >Priority: Major > > Remove setting default values manually in `CoreOptions.setDefaultValues`, which may > cause wrong error messages and is not needed anymore -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-30590) Remove set default value manually for table options
[ https://issues.apache.org/jira/browse/FLINK-30590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shammon updated FLINK-30590: Summary: Remove set default value manually for table options (was: Remove set default value manual for table options) > Remove set default value manually for table options > --- > > Key: FLINK-30590 > URL: https://issues.apache.org/jira/browse/FLINK-30590 > Project: Flink > Issue Type: Improvement > Components: Table Store >Affects Versions: table-store-0.4.0 >Reporter: Shammon >Priority: Major > > Remove setting default values manually in `CoreOptions.setDefaultValues`, which may > cause wrong error messages and is not needed anymore -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30590) Remove set default value manual for table options
Shammon created FLINK-30590: --- Summary: Remove set default value manual for table options Key: FLINK-30590 URL: https://issues.apache.org/jira/browse/FLINK-30590 Project: Flink Issue Type: Improvement Components: Table Store Affects Versions: table-store-0.4.0 Reporter: Shammon Remove setting default values manually in `CoreOptions.setDefaultValues`, which may cause wrong error messages and is not needed anymore -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30587) Validate primary key in an append-only table in ddl
Shammon created FLINK-30587: --- Summary: Validate primary key in an append-only table in ddl Key: FLINK-30587 URL: https://issues.apache.org/jira/browse/FLINK-30587 Project: Flink Issue Type: Improvement Components: Table Store Affects Versions: table-store-0.3.0, table-store-0.4.0 Reporter: Shammon Currently the table store checks the primary key in an append-only table; it should be checked in the catalog table too -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-30516) Support files table in table store
[ https://issues.apache.org/jira/browse/FLINK-30516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shammon updated FLINK-30516: Description: Add a files table in Table Store so that users can query row counts from `mytable$files` (was: Add file count and row count in mytable$snapshots table) > Support files table in table store > -- > > Key: FLINK-30516 > URL: https://issues.apache.org/jira/browse/FLINK-30516 > Project: Flink > Issue Type: Sub-task > Components: Table Store >Affects Versions: table-store-0.3.0, table-store-0.4.0 >Reporter: Shammon >Priority: Major > > Add a files table in Table Store so that users can query row counts from > `mytable$files` -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-30516) Support files table in table store
[ https://issues.apache.org/jira/browse/FLINK-30516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shammon updated FLINK-30516: Summary: Support files table in table store (was: Add file count and row count in snapshots table) > Support files table in table store > -- > > Key: FLINK-30516 > URL: https://issues.apache.org/jira/browse/FLINK-30516 > Project: Flink > Issue Type: Sub-task > Components: Table Store >Affects Versions: table-store-0.3.0, table-store-0.4.0 >Reporter: Shammon >Priority: Major > > Add file count and row count in mytable$snapshots table -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-30516) Add file count and row count in snapshots table
[ https://issues.apache.org/jira/browse/FLINK-30516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17654813#comment-17654813 ] Shammon commented on FLINK-30516: - For this issue, I think maybe we should add statistics such as total row count and file count to `Snapshot`, to avoid reading all meta files when users query mytable$snapshots. What do you think? [~lzljs3620320] > Add file count and row count in snapshots table > --- > > Key: FLINK-30516 > URL: https://issues.apache.org/jira/browse/FLINK-30516 > Project: Flink > Issue Type: Sub-task > Components: Table Store >Affects Versions: table-store-0.3.0, table-store-0.4.0 >Reporter: Shammon >Priority: Major > > Add file count and row count in mytable$snapshots table -- This message was sent by Atlassian Jira (v8.20.10#820010)
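The proposal in the comment above could be sketched as snapshot-level counters that are rolled forward on each commit, so a query on mytable$snapshots never needs to scan manifest files. This is only an illustration; the field and method names below are assumptions, not the table store's actual `Snapshot` format.

```java
// Hypothetical sketch of aggregate statistics carried in snapshot metadata;
// names are illustrative, not the table store's actual Snapshot layout.
class SnapshotStats {
    final long totalFileCount;
    final long totalRecordCount;

    SnapshotStats(long totalFileCount, long totalRecordCount) {
        this.totalFileCount = totalFileCount;
        this.totalRecordCount = totalRecordCount;
    }

    // Stats of the next snapshot = previous totals plus this commit's delta
    // (deltas may be negative, e.g. when a compaction deletes files).
    SnapshotStats commit(long fileCountDelta, long recordCountDelta) {
        return new SnapshotStats(
                totalFileCount + fileCountDelta, totalRecordCount + recordCountDelta);
    }
}
```

With this shape, answering `SELECT file_count, record_count FROM mytable$snapshots` is a pure metadata read.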
[jira] [Created] (FLINK-30516) Add file count and row count in snapshots table
Shammon created FLINK-30516: --- Summary: Add file count and row count in snapshots table Key: FLINK-30516 URL: https://issues.apache.org/jira/browse/FLINK-30516 Project: Flink Issue Type: Sub-task Components: Table Store Affects Versions: table-store-0.3.0, table-store-0.4.0 Reporter: Shammon Add file count and row count in mytable$snapshots table -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30428) SparkReadITCase.testSnapshotsTable is unstable
Shammon created FLINK-30428: --- Summary: SparkReadITCase.testSnapshotsTable is unstable Key: FLINK-30428 URL: https://issues.apache.org/jira/browse/FLINK-30428 Project: Flink Issue Type: Bug Components: Table Store Affects Versions: table-store-0.3.0 Reporter: Shammon https://github.com/apache/flink-table-store/actions/runs/3702469144/jobs/6272779158 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30423) Introduce cast executor codegen for column type evolution
Shammon created FLINK-30423: --- Summary: Introduce cast executor codegen for column type evolution Key: FLINK-30423 URL: https://issues.apache.org/jira/browse/FLINK-30423 Project: Flink Issue Type: Sub-task Components: Table Store Affects Versions: table-store-0.3.0 Reporter: Shammon Introduce cast executor codegen for column type evolution -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30324) Support column/partition statistics in table store
Shammon created FLINK-30324: --- Summary: Support column/partition statistics in table store Key: FLINK-30324 URL: https://issues.apache.org/jira/browse/FLINK-30324 Project: Flink Issue Type: Sub-task Components: Table Store Affects Versions: table-store-0.3.0 Reporter: Shammon -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30323) Add statistics and support table statistics in table store
Shammon created FLINK-30323: --- Summary: Add statistics and support table statistics in table store Key: FLINK-30323 URL: https://issues.apache.org/jira/browse/FLINK-30323 Project: Flink Issue Type: Sub-task Components: Table Store Affects Versions: table-store-0.3.0 Reporter: Shammon Add statistics-related classes in table store and support table statistics -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30322) Support table/column/partition statistics in table store
Shammon created FLINK-30322: --- Summary: Support table/column/partition statistics in table store Key: FLINK-30322 URL: https://issues.apache.org/jira/browse/FLINK-30322 Project: Flink Issue Type: Improvement Components: Table Store Affects Versions: table-store-0.3.0 Reporter: Shammon Support table/column/partition statistics in table store -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30319) Validate illegal column name in table store
Shammon created FLINK-30319: --- Summary: Validate illegal column name in table store Key: FLINK-30319 URL: https://issues.apache.org/jira/browse/FLINK-30319 Project: Flink Issue Type: Bug Components: Table Store Affects Versions: table-store-0.3.0 Reporter: Shammon There are some reserved column names, such as the `_KEY_` prefix, `_VALUE_COUNT`, `_SEQUENCE_NUMBER` and `_VALUE_KIND`; we should validate column names and throw an exception in DDL -- This message was sent by Atlassian Jira (v8.20.10#820010)
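A DDL-time check like the one proposed above could look like the following minimal sketch. The reserved names come from the issue text; the class and method names are assumptions for illustration.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the proposed validation; only the reserved names
// are taken from the issue description, everything else is illustrative.
class SystemColumnNameValidator {
    private static final List<String> RESERVED =
            Arrays.asList("_VALUE_COUNT", "_SEQUENCE_NUMBER", "_VALUE_KIND");

    // Reject column names that collide with the table store's system columns.
    static void validate(String columnName) {
        if (columnName.startsWith("_KEY_") || RESERVED.contains(columnName)) {
            throw new IllegalArgumentException(
                    "Column name '" + columnName + "' conflicts with a system column");
        }
    }
}
```

Calling such a check for every field while building the `TableSchema` makes the failure surface at DDL time instead of at read time.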
[jira] [Created] (FLINK-30303) Support max column width in sql client
Shammon created FLINK-30303: --- Summary: Support max column width in sql client Key: FLINK-30303 URL: https://issues.apache.org/jira/browse/FLINK-30303 Project: Flink Issue Type: Improvement Components: Table SQL / Client Affects Versions: 1.17.0 Reporter: Shammon Currently users can use `sql-client.display.max-column-width` to set the column width in the sql-client in streaming mode; this should be supported in batch mode too. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30288) Use visitor to convert predicate for orc
Shammon created FLINK-30288: --- Summary: Use visitor to convert predicate for orc Key: FLINK-30288 URL: https://issues.apache.org/jira/browse/FLINK-30288 Project: Flink Issue Type: Improvement Components: Table Store Affects Versions: table-store-0.3.0 Reporter: Shammon Use `PredicateVisitor` to convert `Predicate` in table store for orc -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30269) Validate table name for metadata table
Shammon created FLINK-30269: --- Summary: Validate table name for metadata table Key: FLINK-30269 URL: https://issues.apache.org/jira/browse/FLINK-30269 Project: Flink Issue Type: Sub-task Components: Table Store Affects Versions: table-store-0.3.0 Reporter: Shammon Currently users can create tables `tablename` and `tablename$snapshots`, but cannot insert values into `tablename$snapshots` or execute a query on it. At the same time, users can create a table `tablename$aaa$bbb`, but cannot execute a query on it and cannot even drop it. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-30263) Introduce schemas meta table
[ https://issues.apache.org/jira/browse/FLINK-30263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shammon updated FLINK-30263: Summary: Introduce schemas meta table (was: Introduce schemas table to table store) > Introduce schemas meta table > > > Key: FLINK-30263 > URL: https://issues.apache.org/jira/browse/FLINK-30263 > Project: Flink > Issue Type: Sub-task > Components: Table Store >Affects Versions: table-store-0.3.0 >Reporter: Shammon >Priority: Major > > You can query the historical schemas of the table through SQL, for example, > query the historical schemas of table "T" through the following SQL: > SELECT * FROM T$schemas; -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-30263) Introduce schemas table to table store
[ https://issues.apache.org/jira/browse/FLINK-30263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shammon updated FLINK-30263: Parent: FLINK-29735 Issue Type: Sub-task (was: New Feature) > Introduce schemas table to table store > -- > > Key: FLINK-30263 > URL: https://issues.apache.org/jira/browse/FLINK-30263 > Project: Flink > Issue Type: Sub-task > Components: Table Store >Affects Versions: table-store-0.3.0 >Reporter: Shammon >Priority: Major > > You can query the historical schemas of the table through SQL, for example, > query the historical schemas of table "T" through the following SQL: > SELECT * FROM T$schemas; -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30263) Introduce schemas table to table store
Shammon created FLINK-30263: --- Summary: Introduce schemas table to table store Key: FLINK-30263 URL: https://issues.apache.org/jira/browse/FLINK-30263 Project: Flink Issue Type: New Feature Components: Table Store Affects Versions: table-store-0.3.0 Reporter: Shammon You can query the historical schemas of the table through SQL, for example, query the historical schemas of table "T" through the following SQL: SELECT * FROM T$schemas; -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-30255) Throw exception for upper case fields are used in hive metastore
[ https://issues.apache.org/jira/browse/FLINK-30255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shammon updated FLINK-30255: Summary: Throw exception for upper case fields are used in hive metastore (was: Throw exception for upper case fields is used in hive metastore) > Throw exception for upper case fields are used in hive metastore > > > Key: FLINK-30255 > URL: https://issues.apache.org/jira/browse/FLINK-30255 > Project: Flink > Issue Type: Sub-task > Components: Table Store >Affects Versions: table-store-0.3.0 >Reporter: Shammon >Priority: Major > > Currently there will be an incompatibility when users use upper case in the hive > metastore and table store; we should throw an exception for it and find a more > elegant compatibility mode later -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30255) Throw exception for upper case fields is used in hive metastore
Shammon created FLINK-30255: --- Summary: Throw exception for upper case fields is used in hive metastore Key: FLINK-30255 URL: https://issues.apache.org/jira/browse/FLINK-30255 Project: Flink Issue Type: Sub-task Components: Table Store Affects Versions: table-store-0.3.0 Reporter: Shammon Currently there will be an incompatibility when users use upper case in the hive metastore and table store; we should throw an exception for it and find a more elegant compatibility mode later -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30227) Add LeafEmptyFunction for predicate without fields
Shammon created FLINK-30227: --- Summary: Add LeafEmptyFunction for predicate without fields Key: FLINK-30227 URL: https://issues.apache.org/jira/browse/FLINK-30227 Project: Flink Issue Type: Improvement Components: Table Store Affects Versions: table-store-0.3.0 Reporter: Shammon In addition to `LeafBinaryFunction` and `LeafUnaryFunction`, we should add `LeafEmptyFunction` for predicates without fields -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30161) Add TableSchema validation before it is committed
Shammon created FLINK-30161: --- Summary: Add TableSchema validation before it is committed Key: FLINK-30161 URL: https://issues.apache.org/jira/browse/FLINK-30161 Project: Flink Issue Type: Improvement Components: Table Store Affects Versions: table-store-0.3.0 Reporter: Shammon When creating or altering a table, there may be some configuration or DDL conflicts; we need to check them before committing the table schema -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-29988) Improve upper case fields for hive metastore
[ https://issues.apache.org/jira/browse/FLINK-29988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shammon updated FLINK-29988: Component/s: Table Store > Improve upper case fields for hive metastore > > > Key: FLINK-29988 > URL: https://issues.apache.org/jira/browse/FLINK-29988 > Project: Flink > Issue Type: Improvement > Components: Table Store >Reporter: Jingsong Lee >Priority: Major > > If the fields in the fts table are uppercase, there will be a mismatch > exception when used in Hive. > 1. If it is not supported at the beginning, throw an exception when flink > creates a table in the hive metastore. > 2. If it is supported, make sure no error is reported in the whole process, but > save lower case in the hive metastore. We can check columns with the same name > when creating a table in Flink with the hive metastore. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30038) HiveE2E test is not stable
Shammon created FLINK-30038: --- Summary: HiveE2E test is not stable Key: FLINK-30038 URL: https://issues.apache.org/jira/browse/FLINK-30038 Project: Flink Issue Type: Bug Components: Table Store Affects Versions: table-store-0.3.0 Reporter: Shammon https://github.com/apache/flink-table-store/actions/runs/3476726197/jobs/5812201704 Caused by: org.testcontainers.containers.ContainerLaunchException: Timed out waiting for log output matching '.*Starting HiveServer2.*' at org.testcontainers.containers.wait.strategy.LogMessageWaitStrategy.waitUntilReady(LogMessageWaitStrategy.java:49) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30033) Add primary key data type validation
Shammon created FLINK-30033: --- Summary: Add primary key data type validation Key: FLINK-30033 URL: https://issues.apache.org/jira/browse/FLINK-30033 Project: Flink Issue Type: Improvement Components: Table Store Affects Versions: table-store-0.3.0 Reporter: Shammon Add primary key data type validation, table store can refer to hive in https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-CreateTableCreate/Drop/TruncateTable -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30027) Fields min and max in BinaryTableStats support lazy deserialization
Shammon created FLINK-30027: --- Summary: Fields min and max in BinaryTableStats support lazy deserialization Key: FLINK-30027 URL: https://issues.apache.org/jira/browse/FLINK-30027 Project: Flink Issue Type: Improvement Components: Table Store Affects Versions: table-store-0.3.0 Reporter: Shammon Predicates get min and max from BinaryRowData with lazy deserialization -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30013) Add data type compatibility check in SchemaChange.updateColumnType
Shammon created FLINK-30013: --- Summary: Add data type compatibility check in SchemaChange.updateColumnType Key: FLINK-30013 URL: https://issues.apache.org/jira/browse/FLINK-30013 Project: Flink Issue Type: Sub-task Components: Table Store Affects Versions: table-store-0.3.0 Reporter: Shammon Add a LogicalTypeCasts.supportsImplicitCast check for the operation in SchemaChange.updateColumnType to avoid data type conversion failures when reading data -- This message was sent by Atlassian Jira (v8.20.10#820010)
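The intent of the check above can be illustrated with a simplified stand-in for Flink's `LogicalTypeCasts.supportsImplicitCast`. The widening chain below is a toy assumption covering numeric types only; the real implementation would delegate to Flink's full type system.

```java
import java.util.Arrays;
import java.util.List;

// Toy stand-in for LogicalTypeCasts.supportsImplicitCast: each type in the
// chain can be implicitly cast to any later (wider) one. Illustrative only.
class UpdateColumnTypeCheck {
    private static final List<String> WIDENING_CHAIN =
            Arrays.asList("TINYINT", "SMALLINT", "INT", "BIGINT", "FLOAT", "DOUBLE");

    static boolean supportsImplicitCast(String source, String target) {
        int s = WIDENING_CHAIN.indexOf(source);
        int t = WIDENING_CHAIN.indexOf(target);
        return s >= 0 && t >= 0 && s <= t;
    }

    // updateColumnType would reject the change when the cast is unsupported,
    // so data files written under the old schema stay readable under the new one.
    static void checkUpdateColumnType(String oldType, String newType) {
        if (!supportsImplicitCast(oldType, newType)) {
            throw new IllegalStateException(
                    "Cannot update column type from " + oldType + " to " + newType);
        }
    }
}
```

The key design point is that the check runs at schema-commit time, turning a deferred read failure into an immediate DDL error.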
[jira] [Commented] (FLINK-27846) Schema evolution for data file reading
[ https://issues.apache.org/jira/browse/FLINK-27846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17630689#comment-17630689 ] Shammon commented on FLINK-27846: - Thanks [~lzljs3620320] , then we can transform different data columns by schema id, I will try it > Schema evolution for data file reading > -- > > Key: FLINK-27846 > URL: https://issues.apache.org/jira/browse/FLINK-27846 > Project: Flink > Issue Type: Sub-task > Components: Table Store >Reporter: Jingsong Lee >Priority: Minor > Fix For: table-store-0.3.0 > > > For file reads, we need to. > - Adjust the correspondence of specific fields > - If there is a type evolution, we need to upgrade the corresponding data -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-29930) ParquetFileStatsExtractorTest is unstable
Shammon created FLINK-29930: --- Summary: ParquetFileStatsExtractorTest is unstable Key: FLINK-29930 URL: https://issues.apache.org/jira/browse/FLINK-29930 Project: Flink Issue Type: Bug Components: Table Store Affects Versions: table-store-0.3.0 Reporter: Shammon https://github.com/apache/flink-table-store/actions/runs/3418971069/jobs/5691913303 [INFO] Running org.apache.flink.table.store.format.parquet.ParquetFileStatsExtractorTest Error: Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.919 s <<< FAILURE! - in org.apache.flink.table.store.format.parquet.ParquetFileStatsExtractorTest Error: testExtract Time elapsed: 0.91 s <<< ERROR! java.lang.UnsupportedOperationException: type CHAR not supported for extracting statistics in parquet format at org.apache.flink.table.store.format.parquet.ParquetFileStatsExtractor.toFieldStats(ParquetFileStatsExtractor.java:85) at org.apache.flink.table.store.format.parquet.ParquetFileStatsExtractor.lambda$extract$0(ParquetFileStatsExtractor.java:73) at java.util.stream.IntPipeline$4$1.accept(IntPipeline.java:250) at java.util.stream.Streams$RangeIntSpliterator.forEachRemaining(Streams.java:110) at java.util.Spliterator$OfInt.forEachRemaining(Spliterator.java:693) at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:546) at java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260) at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:505) at org.apache.flink.table.store.format.parquet.ParquetFileStatsExtractor.extract(ParquetFileStatsExtractor.java:75) at org.apache.flink.table.store.format.FileStatsExtractorTestBase.testExtract(FileStatsExtractorTestBase.java:84) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210) at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) at
[jira] [Comment Edited] (FLINK-27846) Schema evolution for data file reading
[ https://issues.apache.org/jira/browse/FLINK-27846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17630272#comment-17630272 ] Shammon edited comment on FLINK-27846 at 11/8/22 10:49 AM: --- Hi [~lzljs3620320] [~TsReaper] Currently flink table store writes and reads data according to the column name in avro/orc/parquet. When performing column modification ddl, such as renaming column, deleting column, and adding new column with old name, it will cause data reading errors. One current idea is to store the column id as the name to avro/orc/parquet. The problem with this modification is that it will cause incompatibility between the new version and the previous version of the data file. What do you think of this solution? Hope to hear from you, THX was (Author: zjureel): Hi [~lzljs3620320] [~TsReaper] Currently flink table store writes and reads according to the column name in avro/orc/parquet. When performing column modification ddl, such as renaming column, deleting column, and adding new column with old name, it will cause data reading errors. One current idea is to store the column id as the name to avro/orc/parquet. The problem with this modification is that it will cause incompatibility between the new version and the previous version of the data file. What do you think of this solution? Hope to hear from you, THX > Schema evolution for data file reading > -- > > Key: FLINK-27846 > URL: https://issues.apache.org/jira/browse/FLINK-27846 > Project: Flink > Issue Type: Sub-task > Components: Table Store >Reporter: Jingsong Lee >Priority: Minor > Fix For: table-store-0.3.0 > > > For file reads, we need to. > - Adjust the correspondence of specific fields > - If there is a type evolution, we need to upgrade the corresponding data -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-27846) Schema evolution for data file reading
[ https://issues.apache.org/jira/browse/FLINK-27846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17630272#comment-17630272 ] Shammon commented on FLINK-27846: - Hi [~lzljs3620320] [~TsReaper] Currently flink table store writes and reads according to the column name in avro/orc/parquet. When performing column modification DDL, such as renaming a column, deleting a column, or adding a new column with an old name, this will cause data reading errors. One current idea is to store the column id as the name in avro/orc/parquet. The problem with this modification is that it will cause incompatibility between the new version and the previous version of the data files. What do you think of this solution? Hope to hear from you, THX > Schema evolution for data file reading > -- > > Key: FLINK-27846 > URL: https://issues.apache.org/jira/browse/FLINK-27846 > Project: Flink > Issue Type: Sub-task > Components: Table Store >Reporter: Jingsong Lee >Priority: Minor > Fix For: table-store-0.3.0 > > > For file reads, we need to: > - Adjust the correspondence of specific fields > - If there is a type evolution, we need to upgrade the corresponding data -- This message was sent by Atlassian Jira (v8.20.10#820010)
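The column-id idea discussed in the comments above can be sketched as a per-file projection: match the columns of the current read schema to the file's columns by stable field id instead of by name, so renames and drops no longer corrupt reads. All names here are hypothetical, not the table store's actual classes.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of id-based field mapping for schema evolution.
class FieldIdMapping {
    // For each field id of the current (read) schema, return the index of the
    // matching field in the data file's schema, or -1 if the column was added
    // after the file was written (the reader then fills nulls/defaults).
    static int[] projectByFieldId(int[] readFieldIds, int[] fileFieldIds) {
        Map<Integer, Integer> fileIndexById = new HashMap<>();
        for (int i = 0; i < fileFieldIds.length; i++) {
            fileIndexById.put(fileFieldIds[i], i);
        }
        int[] projection = new int[readFieldIds.length];
        for (int i = 0; i < readFieldIds.length; i++) {
            projection[i] = fileIndexById.getOrDefault(readFieldIds[i], -1);
        }
        return projection;
    }
}
```

For example, a file written with field ids {0, 1, 2} read under a schema that dropped id 1 and added id 3 yields the projection {0, 2, -1}: two columns resolved by id, one filled with defaults.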
[jira] [Updated] (FLINK-29832) Improve switch to default database in docs
[ https://issues.apache.org/jira/browse/FLINK-29832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shammon updated FLINK-29832: Summary: Improve switch to default database in docs (was: Refactor default database name in flink table store) > Improve switch to default database in docs > -- > > Key: FLINK-29832 > URL: https://issues.apache.org/jira/browse/FLINK-29832 > Project: Flink > Issue Type: Bug > Components: Table Store >Affects Versions: table-store-0.3.0, table-store-0.2.2 >Reporter: Shammon >Priority: Major > Labels: pull-request-available > Attachments: image-2022-11-01-16-40-47-539.png > > > `FlinkCatalogFactory` creates a default database named `default` in table > store. `default` is a keyword in SQL, so after we create a new database, > we can't execute `use default` to switch back to `default` directly. We can switch > to the default database with "use `default`;" in flink table store -- This message was sent by Atlassian Jira (v8.20.10#820010)