[GitHub] [flink] flinkbot commented on pull request #23266: [FLINK-32942][test-utils] ParameterizedTestExtension's parameter provider can be private
flinkbot commented on PR #23266: URL: https://github.com/apache/flink/pull/23266#issuecomment-1689389680

## CI report:

* aae54e2511755fc6ea10ff1a029889064e41f841 UNKNOWN

Bot commands: The @flinkbot bot supports the following commands:
- `@flinkbot run azure` re-run the last Azure build

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-32942) JUnit5 ParameterizedTestExtension's parameter provider can be private.
[ https://issues.apache.org/jira/browse/FLINK-32942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiabao Sun updated FLINK-32942: --- Summary: JUnit5 ParameterizedTestExtension's parameter provider can be private. (was: JUnit5 parameter test's parameter provider can be private. ) > JUnit5 ParameterizedTestExtension's parameter provider can be private. > --- > > Key: FLINK-32942 > URL: https://issues.apache.org/jira/browse/FLINK-32942 > Project: Flink > Issue Type: Improvement > Components: Tests >Affects Versions: 1.17.1 >Reporter: Jiabao Sun >Priority: Major > Labels: pull-request-available > > Currently parameters provider must be public. > If we make the parameterProvider accessible before invocation, the test case > will get better isolation. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink] XComp commented on a diff in pull request #22341: [FLINK-27204][flink-runtime] Refract FileSystemJobResultStore to execute I/O operations on the ioExecutor
XComp commented on code in PR #22341: URL: https://github.com/apache/flink/pull/22341#discussion_r1302560535

## flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/Dispatcher.java:

@@ -513,36 +514,39 @@ private void stopDispatcherServices() throws Exception {
 public CompletableFuture<Acknowledge> submitJob(JobGraph jobGraph, Time timeout) {
     final JobID jobID = jobGraph.getJobID();
     log.info("Received JobGraph submission '{}' ({}).", jobGraph.getName(), jobID);
-
-    try {
-        if (isInGloballyTerminalState(jobID)) {
-            log.warn(
-                    "Ignoring JobGraph submission '{}' ({}) because the job already reached a globally-terminal state (i.e. {}) in a previous execution.",
-                    jobGraph.getName(),
-                    jobID,
-                    Arrays.stream(JobStatus.values())
-                            .filter(JobStatus::isGloballyTerminalState)
-                            .map(JobStatus::name)
-                            .collect(Collectors.joining(", ")));
-            return FutureUtils.completedExceptionally(
-                    DuplicateJobSubmissionException.ofGloballyTerminated(jobID));
-        } else if (jobManagerRunnerRegistry.isRegistered(jobID)
-                || submittedAndWaitingTerminationJobIDs.contains(jobID)) {
-            // job with the given jobID is not terminated, yet
-            return FutureUtils.completedExceptionally(
-                    DuplicateJobSubmissionException.of(jobID));
-        } else if (isPartialResourceConfigured(jobGraph)) {
-            return FutureUtils.completedExceptionally(
-                    new JobSubmissionException(
-                            jobID,
-                            "Currently jobs is not supported if parts of the vertices have "
-                                    + "resources configured. The limitation will be removed in future versions."));
-        } else {
-            return internalSubmitJob(jobGraph);
-        }
-    } catch (FlinkException e) {
-        return FutureUtils.completedExceptionally(e);
-    }
+    return isInGloballyTerminalState(jobID)
+            .thenCompose(
+                    isTerminated -> {
+                        if (isTerminated) {
+                            log.warn(
+                                    "Ignoring JobGraph submission '{}' ({}) because the job already "
+                                            + "reached a globally-terminal state (i.e. {}) in a "
+                                            + "previous execution.",
+                                    jobGraph.getName(),
+                                    jobID,
+                                    Arrays.stream(JobStatus.values())
+                                            .filter(JobStatus::isGloballyTerminalState)
+                                            .map(JobStatus::name)
+                                            .collect(Collectors.joining(", ")));
+                            return FutureUtils.completedExceptionally(
+                                    DuplicateJobSubmissionException.ofGloballyTerminated(
+                                            jobID));
+                        } else if (jobManagerRunnerRegistry.isRegistered(jobID)

Review Comment: You didn't do a pull before adding the changes (I did a [force-push](https://github.com/apache/flink/pull/22341#pullrequestreview-1588695382) to include a few minor changes previously). These changes were reverted with your most-recent push.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
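For readers following the thread: the change under review replaces a blocking duplicate-submission check with a non-blocking `CompletableFuture` chain. The minimal sketch below illustrates only that composition pattern; the names (`isTerminatedAsync`, `submit`) and the string result are illustrative stand-ins, not the actual Dispatcher API.

```java
import java.util.concurrent.CompletableFuture;

// Sketch of turning a blocking boolean check into an async lookup that the
// caller composes with thenCompose, as in the diff above. Assumed names,
// not the real Flink Dispatcher code.
public class AsyncSubmitSketch {

    CompletableFuture<Boolean> isTerminatedAsync(String jobId) {
        // stands in for an I/O-bound JobResultStore lookup running on an ioExecutor
        return CompletableFuture.supplyAsync(() -> false);
    }

    CompletableFuture<String> submit(String jobId) {
        return isTerminatedAsync(jobId)
                .thenCompose(
                        terminated -> {
                            if (terminated) {
                                // mirrors FutureUtils.completedExceptionally(...)
                                CompletableFuture<String> failed = new CompletableFuture<>();
                                failed.completeExceptionally(
                                        new IllegalStateException("duplicate submission: " + jobId));
                                return failed;
                            }
                            return CompletableFuture.completedFuture("ACK " + jobId);
                        });
    }
}
```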
[jira] [Updated] (FLINK-32942) JUnit5 parameter test's parameter provider can be private.
[ https://issues.apache.org/jira/browse/FLINK-32942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-32942: --- Labels: pull-request-available (was: ) > JUnit5 parameter test's parameter provider can be private. > --- > > Key: FLINK-32942 > URL: https://issues.apache.org/jira/browse/FLINK-32942 > Project: Flink > Issue Type: Improvement > Components: Tests >Affects Versions: 1.17.1 >Reporter: Jiabao Sun >Priority: Major > Labels: pull-request-available > > Currently parameters provider must be public. > If we make the parameterProvider accessible before invocation, the test case > will get better isolation. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink] Jiabao-Sun opened a new pull request, #23266: [FLINK-32942][test-utils] ParameterizedTestExtension's parameter provider can be private
Jiabao-Sun opened a new pull request, #23266: URL: https://github.com/apache/flink/pull/23266

ParameterizedTestExtension's parameter provider can be private

## What is the purpose of the change

ParameterizedTestExtension's parameter provider can be private to make test cases have better isolation.

## Brief change log

ParameterizedTestExtension's parameter provider can be private.

Old
```java
@ExtendWith(ParameterizedTestExtension.class)
public class ParameterizedTestExtensionTest {

    private static final List<Integer> PARAMETERS = Arrays.asList(1, 2);

    @Parameters
    public static List<Integer> parameters() {
        return PARAMETERS;
    }

    @TestTemplate
    void testWithParameters(int parameter) {
        assertThat(parameter).isIn(PARAMETERS);
    }
}
```

After change
```java
@ExtendWith(ParameterizedTestExtension.class)
class ParameterizedTestExtensionTest {

    private static final List<Integer> PARAMETERS = Arrays.asList(1, 2);

    @Parameters
    private static List<Integer> parameters() {
        return PARAMETERS;
    }

    @TestTemplate
    void testWithParameters(int parameter) {
        assertThat(parameter).isIn(PARAMETERS);
    }
}
```

## Verifying this change

Please make sure both new and modified tests in this PR follow the conventions defined in our code quality guide: https://flink.apache.org/contributing/code-style-and-quality-common.html#testing

Added a unit test to check that the parameters provider can be private.

## Does this pull request potentially affect one of the following parts:

- Dependencies (does it add or upgrade a dependency): (no)
- The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no)
- The serializers: (no)
- The runtime per-record code paths (performance sensitive): (no)
- Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
- The S3 file system connector: (no)

## Documentation

- Does this pull request introduce a new feature? (yes / no)
- If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
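A private provider normally fails because reflective invocation of a non-public member trips Java's access check. The sketch below shows the core idea behind the change, assuming the extension locates the annotated method reflectively; `ProviderInvoker` and `invokeParameterProvider` are hypothetical names, not the actual `ParameterizedTestExtension` internals.

```java
import java.lang.reflect.Method;
import java.util.List;

// Minimal sketch: make a private static @Parameters method invocable
// by calling setAccessible(true) before invoking it.
final class ProviderInvoker {

    @SuppressWarnings("unchecked")
    static List<Object> invokeParameterProvider(Class<?> testClass, String methodName)
            throws Exception {
        Method provider = testClass.getDeclaredMethod(methodName);
        // Without this call, invoking a private method throws
        // IllegalAccessException, which is what forced providers to be public.
        provider.setAccessible(true);
        return (List<Object>) provider.invoke(null); // static method: null receiver
    }
}
```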
[GitHub] [flink-connector-pulsar] tisonkun commented on a diff in pull request #56: [FLINK-26203] Basic table factory for Pulsar connector
tisonkun commented on code in PR #56: URL: https://github.com/apache/flink-connector-pulsar/pull/56#discussion_r1302557629

## flink-connector-pulsar/src/main/java/org/apache/flink/connector/pulsar/table/PulsarTableOptions.java:

@@ -0,0 +1,282 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.pulsar.table;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ConfigOptions;
+import org.apache.flink.configuration.description.Description;
+import org.apache.flink.connector.pulsar.sink.writer.router.TopicRoutingMode;
+
+import org.apache.pulsar.client.api.SubscriptionType;
+
+import java.time.Duration;
+import java.util.List;
+
+import static org.apache.flink.configuration.description.TextElement.code;
+import static org.apache.flink.configuration.description.TextElement.text;
+import static org.apache.flink.table.factories.FactoryUtil.FORMAT_SUFFIX;
+
+/**
+ * Config options that are used to configure a Pulsar SQL Connector. These config options are
+ * specific to SQL Connectors only. Other runtime configurations can be found in {@link
+ * org.apache.flink.connector.pulsar.common.config.PulsarOptions}, {@link
+ * org.apache.flink.connector.pulsar.source.PulsarSourceOptions}, and {@link
+ * org.apache.flink.connector.pulsar.sink.PulsarSinkOptions}.
+ */
+@PublicEvolving
+public final class PulsarTableOptions {
+
+    private PulsarTableOptions() {}
+
+    public static final ConfigOption<List<String>> TOPICS =
+            ConfigOptions.key("topics")
+                    .stringType()
+                    .asList()
+                    .noDefaultValue()
+                    .withDescription(
+                            Description.builder()
+                                    .text(
+                                            "Topic name(s) the table reads data from. It can be a single topic name or a list of topic names separated by a semicolon symbol (%s) like %s.",
+                                            code(";"), code("topic-1;topic-2"))
+                                    .build());
+
+    //
+    // Table Source Options
+    //
+
+    public static final ConfigOption<SubscriptionType> SOURCE_SUBSCRIPTION_TYPE =
+            ConfigOptions.key("source.subscription-type")
+                    .enumType(SubscriptionType.class)
+                    .defaultValue(SubscriptionType.Exclusive)
+                    .withDescription(
+                            Description.builder()
+                                    .text(
+                                            "The [subscription type](https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/pulsar/#pulsar-subscriptions) that is supported by the [Pulsar DataStream source connector](https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/pulsar/#pulsar-source). Currently, only %s and %s subscription types are supported.",
+                                            code("Exclusive"), code("Shared"))
+                                    .build());
+
+    /**
+     * Exactly same as {@link
+     * org.apache.flink.connector.pulsar.source.PulsarSourceOptions#PULSAR_SUBSCRIPTION_NAME}.
+     * Copied because we want to have a default value for it.
+     */
+    public static final ConfigOption<String> SOURCE_SUBSCRIPTION_NAME =
+            ConfigOptions.key("source.subscription-name")
+                    .stringType()
+                    .noDefaultValue()
+                    .withDescription(
+                            Description.builder()
+                                    .text(
+                                            "The subscription name of the consumer that is used by the runtime [Pulsar DataStream source connector](https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/pulsar/#pulsar-source). This argument is required for constructing the consumer.")
+
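A short usage sketch for a list-typed option such as `TOPICS` above, using only the `ConfigOptions`/`Configuration` API already visible in the diff; the option key and values here are illustrative.

```java
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;
import org.apache.flink.configuration.Configuration;

import java.util.Arrays;
import java.util.List;

// Demonstrates how a semicolon-separated DDL value like "topic-1;topic-2"
// surfaces as a List<String> through a list-typed ConfigOption.
public class TopicsOptionDemo {

    static final ConfigOption<List<String>> TOPICS =
            ConfigOptions.key("topics").stringType().asList().noDefaultValue();

    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set(TOPICS, Arrays.asList("topic-1", "topic-2"));
        List<String> topics = conf.get(TOPICS);
        System.out.println(topics); // [topic-1, topic-2]
    }
}
```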
[jira] [Commented] (FLINK-32794) Release Testing: Verify FLIP-293: Introduce Flink Jdbc Driver For Sql Gateway
[ https://issues.apache.org/jira/browse/FLINK-32794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757845#comment-17757845 ] Qingsheng Ren commented on FLINK-32794: --- [~xiangyu0xf] Thanks for taking the issue! Assigned to you just now > Release Testing: Verify FLIP-293: Introduce Flink Jdbc Driver For Sql Gateway > - > > Key: FLINK-32794 > URL: https://issues.apache.org/jira/browse/FLINK-32794 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.18.0 >Reporter: Qingsheng Ren >Assignee: xiangyu feng >Priority: Major > Fix For: 1.18.0 > > > Document for jdbc driver: > https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/jdbcdriver/ -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (FLINK-32794) Release Testing: Verify FLIP-293: Introduce Flink Jdbc Driver For Sql Gateway
[ https://issues.apache.org/jira/browse/FLINK-32794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Qingsheng Ren reassigned FLINK-32794: - Assignee: xiangyu feng > Release Testing: Verify FLIP-293: Introduce Flink Jdbc Driver For Sql Gateway > - > > Key: FLINK-32794 > URL: https://issues.apache.org/jira/browse/FLINK-32794 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.18.0 >Reporter: Qingsheng Ren >Assignee: xiangyu feng >Priority: Major > Fix For: 1.18.0 > > > Document for jdbc driver: > https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/jdbcdriver/ -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-32794) Release Testing: Verify FLIP-293: Introduce Flink Jdbc Driver For Sql Gateway
[ https://issues.apache.org/jira/browse/FLINK-32794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757844#comment-17757844 ] xiangyu feng commented on FLINK-32794: -- [~renqs] Hello, I would like to take this issue. > Release Testing: Verify FLIP-293: Introduce Flink Jdbc Driver For Sql Gateway > - > > Key: FLINK-32794 > URL: https://issues.apache.org/jira/browse/FLINK-32794 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.18.0 >Reporter: Qingsheng Ren >Priority: Major > Fix For: 1.18.0 > > > Document for jdbc driver: > https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/jdbcdriver/ -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-32942) JUnit5 parameter test's parameter provider can be private.
[ https://issues.apache.org/jira/browse/FLINK-32942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiabao Sun updated FLINK-32942: --- Component/s: Tests Affects Version/s: 1.17.1 Description: Currently parameters provider must be public. If we make the parameterProvider accessible before invocation, the test case will get better isolation. > JUnit5 parameter test's parameter provider can be private. > --- > > Key: FLINK-32942 > URL: https://issues.apache.org/jira/browse/FLINK-32942 > Project: Flink > Issue Type: Improvement > Components: Tests >Affects Versions: 1.17.1 >Reporter: Jiabao Sun >Priority: Major > > Currently parameters provider must be public. > If we make the parameterProvider accessible before invocation, the test case > will get better isolation. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-26999) Introduce ClickHouse Connector
[ https://issues.apache.org/jira/browse/FLINK-26999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757843#comment-17757843 ] ConradJam commented on FLINK-26999: --- It seems this topic has not been discussed for a long time. I would like to restart it and drive the implementation. I have some ideas to share with everyone, replying to the questions raised above:
● Regarding the migration repo: most connectors have already been migrated to external repositories. Specifically, [~martijnvisser] can help create a flink-connector-clickhouse repository.
● Regarding the existing connector mentioned by [~rmetzger]: I do not recommend building on it directly, because it is implemented with an unofficial JDBC dependency whose library is no longer maintained. I recommend re-implementing it with the official JDBC driver; the details will be described in the new FLIP.
● We need to start a community discussion about which features everyone expects from the Flink ClickHouse connector. I will open that discussion after I finish writing the new FLIP.
In the meantime, I am collecting feedback and writing a new FLIP-202; I will start a discussion once it is complete. If you have any ideas, please share them. cc~ [~monster#12] [~RocMarshal] [~rmetzger] [~martijnvisser] [~subkanthi] > Introduce ClickHouse Connector > -- > > Key: FLINK-26999 > URL: https://issues.apache.org/jira/browse/FLINK-26999 > Project: Flink > Issue Type: New Feature > Components: Connectors / Common >Affects Versions: 1.15.0 >Reporter: ZhuoYu Chen >Priority: Major > > [https://cwiki.apache.org/confluence/display/FLINK/FLIP-202%3A+Introduce+ClickHouse+Connector] > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-32942) JUnit5 parameter test's parameter provider can be private.
[ https://issues.apache.org/jira/browse/FLINK-32942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiabao Sun updated FLINK-32942: --- Summary: JUnit5 parameter test's parameter provider can be private. (was: JUnit5 parameter tests) > JUnit5 parameter test's parameter provider can be private. > --- > > Key: FLINK-32942 > URL: https://issues.apache.org/jira/browse/FLINK-32942 > Project: Flink > Issue Type: Improvement >Reporter: Jiabao Sun >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-32942) JUnit5 parameter tests
Jiabao Sun created FLINK-32942: -- Summary: JUnit5 parameter tests Key: FLINK-32942 URL: https://issues.apache.org/jira/browse/FLINK-32942 Project: Flink Issue Type: Improvement Reporter: Jiabao Sun -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink] wanglijie95 commented on a diff in pull request #23225: [FLINK-32827][table-runtime] Fix the operator fusion codegen may not take effect when enabling runtime filter
wanglijie95 commented on code in PR #23225: URL: https://github.com/apache/flink/pull/23225#discussion_r1302525176

## flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/plan/fusion/spec/RuntimeFilterFusionCodegenSpec.scala:

@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.table.planner.plan.fusion.spec
+
+import org.apache.flink.runtime.operators.util.BloomFilter
+import org.apache.flink.table.data.binary.BinaryRowData
+import org.apache.flink.table.planner.codegen.{CodeGeneratorContext, GeneratedExpression}
+import org.apache.flink.table.planner.codegen.CodeGenUtils.{className, newName, newNames}
+import org.apache.flink.table.planner.plan.fusion.{OpFusionCodegenSpecBase, OpFusionContext}
+import org.apache.flink.table.planner.typeutils.RowTypeUtils
+import org.apache.flink.table.types.logical.RowType
+import org.apache.flink.util.Preconditions
+
+import java.util
+
+/** The operator fusion codegen spec for RuntimeFilter. */
+class RuntimeFilterFusionCodegenSpec(opCodegenCtx: CodeGeneratorContext, probeIndices: Array[Int])
+  extends OpFusionCodegenSpecBase(opCodegenCtx) {
+
+  private lazy val buildInputId = 1
+
+  private var buildContext: OpFusionContext = _
+  private var probeContext: OpFusionContext = _
+  private var buildType: RowType = _
+  private var probeType: RowType = _
+
+  private var buildComplete: String = _
+  private var filterTerm: String = _
+
+  override def setup(opFusionContext: OpFusionContext): Unit = {
+    super.setup(opFusionContext)
+    val inputContexts = fusionContext.getInputFusionContexts
+    assert(inputContexts.size == 2)
+    buildContext = inputContexts.get(0)
+    probeContext = inputContexts.get(1)
+
+    buildType = buildContext.getOutputType
+    probeType = probeContext.getOutputType
+  }
+
+  override def variablePrefix(): String = "rFilter"
+
+  override def doProcessProduce(codegenCtx: CodeGeneratorContext): Unit = {
+    // call build side first, then call probe side
+    buildContext.processProduce(codegenCtx)
+    probeContext.processProduce(codegenCtx)
+  }
+
+  override def doEndInputProduce(codegenCtx: CodeGeneratorContext): Unit = {
+    // call build side first, then call probe side
+    buildContext.endInputProduce(codegenCtx)
+    probeContext.endInputProduce(codegenCtx)
+  }
+
+  override def doProcessConsume(
+      inputId: Int,
+      inputVars: util.List[GeneratedExpression],
+      row: GeneratedExpression): String = {
+    if (inputId == buildInputId) {
+      buildComplete = newName("buildComplete")
+      opCodegenCtx.addReusableMember(s"private transient boolean $buildComplete;")
+      opCodegenCtx.addReusableOpenStatement(s"$buildComplete = false;")
+
+      filterTerm = newName("filter")
+      val filterClass = className[BloomFilter]
+      opCodegenCtx.addReusableMember(s"private transient $filterClass $filterTerm;")
+
+      s"""
+         |${className[Preconditions]}.checkState(!$buildComplete, "Should not build completed.");
+         |if ($filterTerm == null && !${row.resultTerm}.isNullAt(1)) {
+         |  $filterTerm = $filterClass.fromBytes(${row.resultTerm}.getBinary(1));
+         |}
+         |""".stripMargin
+    } else {
+      val Seq(probeKeyTerm, probeKeyWriterTerm) = newNames("probeKeyTerm", "probeKeyWriterTerm")
+      // project probe key row from input
+      val probeKeyExprs = probeIndices.map(idx => inputVars.get(idx))
+      val keyProjectionCode = getExprCodeGenerator
+        .generateResultExpression(
+          probeKeyExprs,
+          RowTypeUtils.projectRowType(probeType, probeIndices),
+          classOf[BinaryRowData],
+          probeKeyTerm,
+          outRowWriter = Option(probeKeyWriterTerm))
+        .code
+
+      val found = newName("found")
+      val consumeCode = fusionContext.processConsume(null, row.resultTerm)
+      s"""
+         |${className[Preconditions]}.checkState($buildComplete, "Should build completed.");
+         |
+         |boolean $found = true;
+         |if ($filterTerm != null) {
+         |  // compute the hash code of probe key
+         |  $keyProjectionCode
+         |  final int hashCode = $probeKeyTerm.
[jira] [Commented] (FLINK-32802) Release Testing: Verify FLIP-291: Externalized Declarative Resource Management
[ https://issues.apache.org/jira/browse/FLINK-32802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757838#comment-17757838 ] Qingsheng Ren commented on FLINK-32802: --- [~ConradJam] Thanks for volunteering! Assigned to you just now. > Release Testing: Verify FLIP-291: Externalized Declarative Resource Management > -- > > Key: FLINK-32802 > URL: https://issues.apache.org/jira/browse/FLINK-32802 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.18.0 >Reporter: Qingsheng Ren >Assignee: ConradJam >Priority: Major > Fix For: 1.18.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (FLINK-32802) Release Testing: Verify FLIP-291: Externalized Declarative Resource Management
[ https://issues.apache.org/jira/browse/FLINK-32802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Qingsheng Ren reassigned FLINK-32802: - Assignee: ConradJam > Release Testing: Verify FLIP-291: Externalized Declarative Resource Management > -- > > Key: FLINK-32802 > URL: https://issues.apache.org/jira/browse/FLINK-32802 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.18.0 >Reporter: Qingsheng Ren >Assignee: ConradJam >Priority: Major > Fix For: 1.18.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Closed] (FLINK-31573) Nexmark performance drops in 1.17 compared to 1.13
[ https://issues.apache.org/jira/browse/FLINK-31573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Qingsheng Ren closed FLINK-31573. - Resolution: Invalid > Nexmark performance drops in 1.17 compared to 1.13 > -- > > Key: FLINK-31573 > URL: https://issues.apache.org/jira/browse/FLINK-31573 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.17.0 >Reporter: Qingsheng Ren >Priority: Critical > > The case was originally > [reported|https://lists.apache.org/thread/n4x2xgcr2kb8vcbby7c6gwb22n0thwhz] > in the voting thread of 1.17.0 RC3. > Compared to Flink 1.13, the performance of Nexmark in 1.17.0 RC3 drops ~8% in > query 18. Some details could be found in the [mailing > list|https://lists.apache.org/thread/n4x2xgcr2kb8vcbby7c6gwb22n0thwhz]. > A further investigation showed that with configuration > {{execution.checkpointing.checkpoints-after-tasks-finish.enabled}} set to > false, the performance of 1.17 is better than 1.16. > A full comparison of Nexmark results between 1.16 and 1.17 is ongoing. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-31573) Nexmark performance drops in 1.17 compared to 1.13
[ https://issues.apache.org/jira/browse/FLINK-31573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757837#comment-17757837 ] Qingsheng Ren commented on FLINK-31573: --- [~masteryhx] Thanks for bringing this up. I didn't see further evidence showing the performance issues. Feel free to close it and we can reopen it if there's any update > Nexmark performance drops in 1.17 compared to 1.13 > -- > > Key: FLINK-31573 > URL: https://issues.apache.org/jira/browse/FLINK-31573 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.17.0 >Reporter: Qingsheng Ren >Priority: Critical > > The case was originally > [reported|https://lists.apache.org/thread/n4x2xgcr2kb8vcbby7c6gwb22n0thwhz] > in the voting thread of 1.17.0 RC3. > Compared to Flink 1.13, the performance of Nexmark in 1.17.0 RC3 drops ~8% in > query 18. Some details could be found in the [mailing > list|https://lists.apache.org/thread/n4x2xgcr2kb8vcbby7c6gwb22n0thwhz]. > A further investigation showed that with configuration > {{execution.checkpointing.checkpoints-after-tasks-finish.enabled}} set to > false, the performance of 1.17 is better than 1.16. > A full comparison of Nexmark results between 1.16 and 1.17 is ongoing. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink] zoltar9264 commented on pull request #17443: [FLINK-9465][Runtime/Checkpointing] Specify a separate savepoint timeout option via CLI and REST API
zoltar9264 commented on PR #17443: URL: https://github.com/apache/flink/pull/17443#issuecomment-1689351227 Thanks for the reminder @masteryhx , this work was indeed interrupted for a long time. I would love to pick up this job next week. Thanks ! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Comment Edited] (FLINK-32802) Release Testing: Verify FLIP-291: Externalized Declarative Resource Management
[ https://issues.apache.org/jira/browse/FLINK-32802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757836#comment-17757836 ] ConradJam edited comment on FLINK-32802 at 8/23/23 6:23 AM: I would like to try this task; what should I do? Before that, I carefully read the relevant content of FLIP-291, and I have also written related documentation for review in FLINK-32671. cc [~renqs] [~dmvk] was (Author: JIRAUSER285483): I would like to try this task; what should I do? Before that, I carefully read the relevant content of FLIP-291, and I have also written related documentation for review in FLINK-32671. [~renqs] [~dmvk] > Release Testing: Verify FLIP-291: Externalized Declarative Resource Management > -- > > Key: FLINK-32802 > URL: https://issues.apache.org/jira/browse/FLINK-32802 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.18.0 >Reporter: Qingsheng Ren >Priority: Major > Fix For: 1.18.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-32802) Release Testing: Verify FLIP-291: Externalized Declarative Resource Management
[ https://issues.apache.org/jira/browse/FLINK-32802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757836#comment-17757836 ] ConradJam commented on FLINK-32802: --- I would like to try this task; what should I do? Before that, I carefully read the relevant content of FLIP-291, and I have also written related documentation for review in FLINK-32671. [~renqs] [~dmvk] > Release Testing: Verify FLIP-291: Externalized Declarative Resource Management > -- > > Key: FLINK-32802 > URL: https://issues.apache.org/jira/browse/FLINK-32802 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.18.0 >Reporter: Qingsheng Ren >Priority: Major > Fix For: 1.18.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (FLINK-32803) Release Testing: Verify FLINK-32165 - Improve observability of fine-grained resource management
[ https://issues.apache.org/jira/browse/FLINK-32803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Qingsheng Ren reassigned FLINK-32803: - Assignee: Weihua Hu > Release Testing: Verify FLINK-32165 - Improve observability of fine-grained > resource management > --- > > Key: FLINK-32803 > URL: https://issues.apache.org/jira/browse/FLINK-32803 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.18.0 >Reporter: Qingsheng Ren >Assignee: Weihua Hu >Priority: Major > Fix For: 1.18.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-32803) Release Testing: Verify FLINK-32165 - Improve observability of fine-grained resource management
[ https://issues.apache.org/jira/browse/FLINK-32803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757834#comment-17757834 ] Qingsheng Ren commented on FLINK-32803: --- [~huweihua] Thanks for taking this! Assigned to you just now. > Release Testing: Verify FLINK-32165 - Improve observability of fine-grained > resource management > --- > > Key: FLINK-32803 > URL: https://issues.apache.org/jira/browse/FLINK-32803 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.18.0 >Reporter: Qingsheng Ren >Assignee: Weihua Hu >Priority: Major > Fix For: 1.18.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (FLINK-32804) Release Testing: Verify FLIP-304: Pluggable failure handling for Apache Flink
[ https://issues.apache.org/jira/browse/FLINK-32804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Qingsheng Ren reassigned FLINK-32804: - Assignee: Matt Wang > Release Testing: Verify FLIP-304: Pluggable failure handling for Apache Flink > - > > Key: FLINK-32804 > URL: https://issues.apache.org/jira/browse/FLINK-32804 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.18.0 >Reporter: Qingsheng Ren >Assignee: Matt Wang >Priority: Major > Fix For: 1.18.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-32804) Release Testing: Verify FLIP-304: Pluggable failure handling for Apache Flink
[ https://issues.apache.org/jira/browse/FLINK-32804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757832#comment-17757832 ] Qingsheng Ren commented on FLINK-32804: --- [~wangm92] Thanks for volunteering! Assigned to you just now. > Release Testing: Verify FLIP-304: Pluggable failure handling for Apache Flink > - > > Key: FLINK-32804 > URL: https://issues.apache.org/jira/browse/FLINK-32804 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.18.0 >Reporter: Qingsheng Ren >Assignee: Matt Wang >Priority: Major > Fix For: 1.18.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink] Jiabao-Sun commented on pull request #23233: [FLINK-32847][flink-runtime][JUnit5 Migration] Module: The operators package of flink-runtime
Jiabao-Sun commented on PR #23233: URL: https://github.com/apache/flink/pull/23233#issuecomment-1689345268 Thanks @wangzzu for the detailed review again. PTAL when you have time. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] Jiabao-Sun commented on a diff in pull request #23233: [FLINK-32847][flink-runtime][JUnit5 Migration] Module: The operators package of flink-runtime
Jiabao-Sun commented on code in PR #23233: URL: https://github.com/apache/flink/pull/23233#discussion_r1302527806

## flink-runtime/src/test/java/org/apache/flink/runtime/operators/util/BitSetTest.java:

@@ -19,17 +19,20 @@
 import org.apache.flink.core.memory.MemorySegment;
 import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.testutils.junit.extensions.parameterized.ParameterizedTestExtension;
+import org.apache.flink.testutils.junit.extensions.parameterized.Parameters;

-import org.junit.Before;
-import org.junit.Test;
-import org.junit.runner.RunWith;
-import org.junit.runners.Parameterized;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;

-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
+import java.util.Arrays;
+import java.util.List;

-@RunWith(Parameterized.class)
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
+
+@ExtendWith(ParameterizedTestExtension.class)
 public class BitSetTest {

Review Comment: I'm afraid we can't remove the `public` modifier from a test class that uses `ParameterizedTestExtension` yet. It will cause exceptions:
```
java.lang.IllegalStateException: Failed to invoke parameter provider
    at org.apache.flink.testutils.junit.extensions.parameterized.ParameterizedTestExtension.provideTestTemplateInvocationContexts(ParameterizedTestExtension.java:89)
    at org.junit.jupiter.engine.descriptor.TestTemplateTestDescriptor.lambda$execute$0(TestTemplateTestDescriptor.java:106)
    at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267)
    at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384)
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
    at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
    at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
    at org.junit.jupiter.engine.descriptor.TestTemplateTestDescriptor.execute(TestTemplateTestDescriptor.java:110)
    at org.junit.jupiter.engine.descriptor.TestTemplateTestDescriptor.execute(TestTemplateTestDescriptor.java:44)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
    at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
    at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService$ExclusiveTask.compute(ForkJoinPoolHierarchicalTestExecutorService.java:185)
    at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.executeNonConcurrentTasks(ForkJoinPoolHierarchicalTestExecutorService.java:155)
    at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.invokeAll(ForkJoinPoolHierarchicalTestExecutorService.java:135)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
    at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:9
```
[jira] [Resolved] (FLINK-32799) Release Testing: Verify FLINK-26603 -Decouple Hive with Flink planner
[ https://issues.apache.org/jira/browse/FLINK-32799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] luoyuxia resolved FLINK-32799. -- Resolution: Fixed [~ruanhang1993] Thanks for verifying... Close it now... > Release Testing: Verify FLINK-26603 -Decouple Hive with Flink planner > - > > Key: FLINK-32799 > URL: https://issues.apache.org/jira/browse/FLINK-32799 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.18.0 >Reporter: Qingsheng Ren >Assignee: Hang Ruan >Priority: Major > Fix For: 1.18.0 > > Attachments: hive.png, lib.png > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink] wangzzu commented on a diff in pull request #23233: [FLINK-32847][flink-runtime][JUnit5 Migration] Module: The operators package of flink-runtime
wangzzu commented on code in PR #23233: URL: https://github.com/apache/flink/pull/23233#discussion_r1302499459

## flink-runtime/src/test/java/org/apache/flink/runtime/operators/CrossTaskTest.java:

@@ -244,19 +249,13 @@ public void testFailingStreamCrossTask() {
     final CrossDriver testTask = new CrossDriver<>();
-    try {
-        testDriver(testTask, MockFailingCrossStub.class);
-        Assert.fail("Exception not forwarded.");
-    } catch (ExpectedTestException etex) {
-        // good!
-    } catch (Exception e) {
-        e.printStackTrace();
-        Assert.fail("Test failed due to an exception.");
-    }
+    assertThatThrownBy(() -> testDriver(testTask, MockFailingCrossStub.class))
+            .withFailMessage("Exception not forwarded.")

Review Comment: same as above, you can check it globally

## flink-runtime/src/test/java/org/apache/flink/runtime/operators/CrossTaskTest.java:

@@ -155,19 +162,13 @@ public void testFailingBlockCrossTask2() {
     final CrossDriver testTask = new CrossDriver<>();
-    try {
-        testDriver(testTask, MockFailingCrossStub.class);
-        Assert.fail("Exception not forwarded.");
-    } catch (ExpectedTestException etex) {
-        // good!
-    } catch (Exception e) {
-        e.printStackTrace();
-        Assert.fail("Test failed due to an exception.");
-    }
+    assertThatThrownBy(() -> testDriver(testTask, MockFailingCrossStub.class))
+            .withFailMessage("Exception not forwarded.")

Review Comment: I think this message can be removed

## flink-runtime/src/test/java/org/apache/flink/runtime/operators/JoinTaskTest.java:

@@ -414,19 +415,13 @@ public void testFailingMatchTask() {
     addInput(new UniformRecordGenerator(keyCnt1, valCnt1, true));
     addInput(new UniformRecordGenerator(keyCnt2, valCnt2, true));
-    try {
-        testDriver(testTask, MockFailingMatchStub.class);
-        Assert.fail("Driver did not forward Exception.");
-    } catch (ExpectedTestException e) {
-        // good!
-    } catch (Exception e) {
-        e.printStackTrace();
-        Assert.fail("The test caused an exception.");
-    }
+    assertThatThrownBy(() -> testDriver(testTask, MockFailingMatchStub.class))
+            .withFailMessage("Driver did not forward Exception.")

Review Comment: same as above

## flink-runtime/src/test/java/org/apache/flink/runtime/operators/hash/MutableHashTableTestBase.java:

@@ -122,14 +122,16 @@ public void testDifferentProbers() {
     AbstractHashTableProber prober2 =
             table.getProber(intPairComparator, pairComparator);
-    assertFalse(prober1 == prober2);
+    assertThat(prober1).isNotEqualTo(prober2);
     table.close();
     // (This also tests calling close without calling open first.)
-    assertEquals("Memory lost", NUM_MEM_PAGES, table.getFreeMemory().size());
+    assertThat(table.getFreeMemory().size())
+            .withFailMessage("Memory lost")
+            .isEqualTo(NUM_MEM_PAGES);

Review Comment: `hasSize()`

## flink-runtime/src/test/java/org/apache/flink/runtime/operators/util/BitSetTest.java:

@@ -19,17 +19,20 @@
 import org.apache.flink.core.memory.MemorySegment;
 import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.testutils.junit.extensions.parameterized.ParameterizedTestExtension;
+import org.apache.flink.testutils.junit.extensions.parameterized.Parameters;

-import org.junit.Before;
-import org.junit.Test;
-import org.junit.runner.RunWith;
-import org.junit.runners.Parameterized;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;

-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
+import java.util.Arrays;
+import java.util.List;

-@RunWith(Parameterized.class)
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
+
+@ExtendWith(ParameterizedTestExtension.class)
 public class BitSetTest {

Review Comment: `public` is not necessary

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-32941) Table API Bridge `toDataStream(targetDataType)` function not working correctly for Java List
[ https://issues.apache.org/jira/browse/FLINK-32941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tan Kim updated FLINK-32941: Priority: Critical (was: Major) > Table API Bridge `toDataStream(targetDataType)` function not working > correctly for Java List > > > Key: FLINK-32941 > URL: https://issues.apache.org/jira/browse/FLINK-32941 > Project: Flink > Issue Type: Bug > Components: Table SQL / API >Reporter: Tan Kim >Priority: Critical > Labels: bridge > > When the code below is executed, only the first element of the list is > repeatedly assigned to the List variable in MyPojo. > {code:java} > case class Item( > name: String > ) > case class MyPojo( > @DataTypeHint("RAW") items: java.util.List[Item] > ) > ... > tableEnv > .sqlQuery("select items from table") > .toDataStream(DataTypes.of(classOf[MyPojo])) {code} > > For example, if you have the following list coming in as input, > ["a","b","c"] > The value actually stored in MyPojo's list variable is > ["a","a","a"] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink] flinkbot commented on pull request #23265: [FLINK-32853][runtime][JUnit5 Migration] The security, taskmanager an…
flinkbot commented on PR #23265: URL: https://github.com/apache/flink/pull/23265#issuecomment-1689269938

## CI report:

* 461f13bcd3383e07cc1f2af915338d76a77a46c0 UNKNOWN

Bot commands: The @flinkbot bot supports the following commands:
- `@flinkbot run azure` re-run the last Azure build

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-32853) [JUnit5 Migration] The security, taskmanager and source packages of flink-runtime module
[ https://issues.apache.org/jira/browse/FLINK-32853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-32853: --- Labels: pull-request-available (was: ) > [JUnit5 Migration] The security, taskmanager and source packages of > flink-runtime module > > > Key: FLINK-32853 > URL: https://issues.apache.org/jira/browse/FLINK-32853 > Project: Flink > Issue Type: Sub-task > Components: Tests >Reporter: Rui Fan >Assignee: Yangyang ZHANG >Priority: Minor > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink] zhangyy91 opened a new pull request, #23265: [FLINK-32853][runtime][JUnit5 Migration] The security, taskmanager an…
zhangyy91 opened a new pull request, #23265: URL: https://github.com/apache/flink/pull/23265

## What is the purpose of the change

Migrate the security, taskmanager and source packages of the flink-runtime module to JUnit5.

## Brief change log

Migrate the security, taskmanager and source packages of the flink-runtime module to JUnit5.

## Verifying this change

This change is already covered by existing tests.

## Does this pull request potentially affect one of the following parts:

- Dependencies (does it add or upgrade a dependency): (no)
- The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no)
- The serializers: (no)
- The runtime per-record code paths (performance sensitive): (no)
- Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
- The S3 file system connector: (no)

## Documentation

- Does this pull request introduce a new feature? (no)
- If yes, how is the feature documented? (not applicable)

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-32804) Release Testing: Verify FLIP-304: Pluggable failure handling for Apache Flink
[ https://issues.apache.org/jira/browse/FLINK-32804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757795#comment-17757795 ] Matt Wang commented on FLINK-32804: --- [~renqs] Can I take this? I'm interested in it. > Release Testing: Verify FLIP-304: Pluggable failure handling for Apache Flink > - > > Key: FLINK-32804 > URL: https://issues.apache.org/jira/browse/FLINK-32804 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.18.0 >Reporter: Qingsheng Ren >Priority: Major > Fix For: 1.18.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-32803) Release Testing: Verify FLINK-32165 - Improve observability of fine-grained resource management
[ https://issues.apache.org/jira/browse/FLINK-32803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757793#comment-17757793 ] Weihua Hu commented on FLINK-32803: --- I would like to take this issue. [~chesnay] [~renqs] > Release Testing: Verify FLINK-32165 - Improve observability of fine-grained > resource management > --- > > Key: FLINK-32803 > URL: https://issues.apache.org/jira/browse/FLINK-32803 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.18.0 >Reporter: Qingsheng Ren >Priority: Major > Fix For: 1.18.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-32799) Release Testing: Verify FLINK-26603 -Decouple Hive with Flink planner
[ https://issues.apache.org/jira/browse/FLINK-32799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757791#comment-17757791 ] Hang Ruan commented on FLINK-32799: --- I have also tested one insert and select. It looks good. > Release Testing: Verify FLINK-26603 -Decouple Hive with Flink planner > - > > Key: FLINK-32799 > URL: https://issues.apache.org/jira/browse/FLINK-32799 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.18.0 >Reporter: Qingsheng Ren >Assignee: Hang Ruan >Priority: Major > Fix For: 1.18.0 > > Attachments: hive.png, lib.png > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink] flinkbot commented on pull request #23264: Refactor @Test(expected) with assertThrows
flinkbot commented on PR #23264: URL: https://github.com/apache/flink/pull/23264#issuecomment-1689208523

## CI report:

* 998a2e1a6bb835bceffc405e82acc815be87b3d9 UNKNOWN

Bot commands: The @flinkbot bot supports the following commands:
- `@flinkbot run azure` re-run the last Azure build

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-32731) SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException
[ https://issues.apache.org/jira/browse/FLINK-32731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757780#comment-17757780 ] Shengkai Fang commented on FLINK-32731: --- Thanks for sharing. I think we should add a retry mechanism to restart the container when the namenode fails. I will open a PR to fix this soon.
> SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException
> -
>
> Key: FLINK-32731
> URL: https://issues.apache.org/jira/browse/FLINK-32731
> Project: Flink
> Issue Type: Bug
> Components: Table SQL / Gateway
>Affects Versions: 1.18.0
>Reporter: Matthias Pohl
>Assignee: Shengkai Fang
>Priority: Major
> Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=51891&view=logs&j=fb37c667-81b7-5c22-dd91-846535e99a97&t=011e961e-597c-5c96-04fe-7941c8b83f23&l=10987
> {code}
> Aug 02 02:14:04 02:14:04.957 [ERROR] Tests run: 3, Failures: 0, Errors: 1,
> Skipped: 0, Time elapsed: 198.658 s <<< FAILURE! - in
> org.apache.flink.table.gateway.SqlGatewayE2ECase
> Aug 02 02:14:04 02:14:04.966 [ERROR]
> org.apache.flink.table.gateway.SqlGatewayE2ECase.testHiveServer2ExecuteStatement
> Time elapsed: 31.437 s <<< ERROR!
> Aug 02 02:14:04 java.util.concurrent.ExecutionException:
> Aug 02 02:14:04 java.sql.SQLException:
> org.apache.flink.table.gateway.service.utils.SqlExecutionException: Failed to
> execute the operation d440e6e7-0fed-49c9-933e-c7be5bbae50d.
> Aug 02 02:14:04 at
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.processThrowable(OperationManager.java:414)
> Aug 02 02:14:04 at
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:267)
> Aug 02 02:14:04 at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04 at
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04 at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04 at
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04 at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Aug 02 02:14:04 at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Aug 02 02:14:04 at java.lang.Thread.run(Thread.java:750)
> Aug 02 02:14:04 Caused by: org.apache.flink.table.api.TableException: Could
> not execute CreateTable in path `hive`.`default`.`CsvTable`
> Aug 02 02:14:04 at
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1289)
> Aug 02 02:14:04 at
> org.apache.flink.table.catalog.CatalogManager.createTable(CatalogManager.java:939)
> Aug 02 02:14:04 at
> org.apache.flink.table.operations.ddl.CreateTableOperation.execute(CreateTableOperation.java:84)
> Aug 02 02:14:04 at
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1080)
> Aug 02 02:14:04 at
> org.apache.flink.table.gateway.service.operation.OperationExecutor.callOperation(OperationExecutor.java:570)
> Aug 02 02:14:04 at
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeOperation(OperationExecutor.java:458)
> Aug 02 02:14:04 at
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:210)
> Aug 02 02:14:04 at
> org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:212)
> Aug 02 02:14:04 at
> org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
> Aug 02 02:14:04 at
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
> Aug 02 02:14:04 ... 7 more
> Aug 02 02:14:04 Caused by:
> org.apache.flink.table.catalog.exceptions.CatalogException: Failed to create
> table default.CsvTable
> Aug 02 02:14:04 at
> org.apache.flink.table.catalog.hive.HiveCatalog.createTable(HiveCatalog.java:547)
> Aug 02 02:14:04 at
> org.apache.flink.table.catalog.CatalogManager.lambda$createTable$16(CatalogManager.java:950)
> Aug 02 02:14:04 at
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1283)
> Aug 02 02:14:04 ... 16 more
> Aug 02 02:14:04 Caused by: MetaException(message:Got exception:
> java.net.ConnectException Call From 70d5c7217fe8/172.17.0.2 to
> hadoop-master:9000 failed on connection exception: java.net.ConnectException:
> Connection refused; For more d
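The retry mechanism mentioned above could take several forms; one generic shape is a bounded retry wrapper with simple backoff around the flaky container startup. The sketch below is an illustration under assumed semantics, not the actual fix that landed for FLINK-32731.

```java
// Retry a flaky startup action a bounded number of times before giving up.
public final class Retry {

    static void runWithRetry(Runnable action, int maxAttempts) throws InterruptedException {
        for (int attempt = 1; ; attempt++) {
            try {
                action.run();
                return; // success
            } catch (RuntimeException e) {
                if (attempt >= maxAttempts) {
                    throw e; // exhausted all attempts, fail the test
                }
                // simple linear backoff before restarting the container
                Thread.sleep(1_000L * attempt);
            }
        }
    }
}
```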
[GitHub] [flink] 1996fanrui commented on a diff in pull request #23219: [FLINK-20681][yarn] Support specifying the hdfs path when ship archives or files
1996fanrui commented on code in PR #23219: URL: https://github.com/apache/flink/pull/23219#discussion_r1302400685

## flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java:

@@ -156,9 +158,9 @@ public class YarnClusterDescriptor implements ClusterDescriptor<ApplicationId> {
     private final boolean sharedYarnClient;
     /** Lazily initialized list of files to ship. */
-    private final List<File> shipFiles = new LinkedList<>();
+    private final List<Path> shipFiles = new LinkedList<>();
-    private final List<File> shipArchives = new LinkedList<>();
+    private final List<Path> shipArchives = new LinkedList<>();

Review Comment: How about adding a comment describing that we have already converted the option path string to a Path with a scheme and an absolute path? That makes the intent clearer for other developers and the fields safer to use.

## flink-yarn/src/main/java/org/apache/flink/yarn/configuration/YarnConfigOptions.java:

@@ -272,17 +272,25 @@ public class YarnConfigOptions {
             .noDefaultValue()
             .withDeprecatedKeys("yarn.ship-directories")
             .withDescription(
-                    "A semicolon-separated list of files and/or directories to be shipped to the YARN cluster.");
+                    "A semicolon-separated list of files and/or directories to be shipped to the YARN "
+                            + "cluster. These files/directories can come from the local path of flink client "
+                            + "or HDFS. For example, "

Review Comment: How about updating `from the local client and/or HDFS` to `the local path of flink client or HDFS` here as well?

## flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java:

@@ -202,16 +204,27 @@ public YarnClusterDescriptor(
         this.nodeLabel = flinkConfiguration.getString(YarnConfigOptions.NODE_LABEL);
     }
-    private Optional<List<File>> decodeFilesToShipToCluster(
+    private Optional<List<Path>> decodeFilesToShipToCluster(
             final Configuration configuration, final ConfigOption<List<String>> configOption) {
         checkNotNull(configuration);
         checkNotNull(configOption);
-        final List<File> files =
-                ConfigUtils.decodeListFromConfig(configuration, configOption, File::new);
+        List<Path> files = ConfigUtils.decodeListFromConfig(configuration, configOption, Path::new);
+        files = files.stream().map(this::enrichPathSchemaIfNeeded).collect(Collectors.toList());
         return files.isEmpty() ? Optional.empty() : Optional.of(files);
     }
+    private Path enrichPathSchemaIfNeeded(Path path) {
+        if (isWithoutSchema(path)) {
+            return new Path(new File(path.toString()).toURI());

Review Comment: This class has a couple of `new Path(new File(pathStr).toURI())` calls to convert a local path string to an HDFS `Path`; could we extract one method to do that?

## flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java:

@@ -202,16 +204,27 @@ public YarnClusterDescriptor(
+    private Path enrichPathSchemaIfNeeded(Path path) {
+        if (isWithoutSchema(path)) {
+            return new Path(new File(path.toString()).toURI());
+        }
+        return path;
+    }
+
+    private boolean isWithoutSchema(Path path) {
+        return StringUtils.isNullOrWhitespaceOnly(path.toUri().getScheme());
+    }

Review Comment: How about renaming `Path enrichPathSchemaIfNeeded(Path path)` to `Path createPathWithSchema(String path)`? If so, the line `files = files.stream().map(this::enrichPathSchemaIfNeeded).collect(Collectors.toList());` can be removed; we would just call `List<Path> files = ConfigUtils.decodeListFromConfig(configuration, configOption, this::createPathWithSchema);`

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
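For concreteness, the refactoring suggested in the last two comments could look roughly like the sketch below. This is only an illustration against the names visible in the diff (Hadoop's `Path`, Flink's `StringUtils` and `ConfigUtils`), not the code that was eventually merged:

{code:java}
import java.io.File;
import org.apache.hadoop.fs.Path;
import org.apache.flink.util.StringUtils;

// Inside YarnClusterDescriptor: one factory method that both parses the
// configured string and qualifies scheme-less (local) paths, so callers no
// longer need a separate enrichment pass.
private Path createPathWithSchema(String path) {
    Path candidate = new Path(path);
    // A path without a scheme is treated as a local file; File#toURI()
    // yields an absolute file:// URI, which gives the Path a scheme.
    return StringUtils.isNullOrWhitespaceOnly(candidate.toUri().getScheme())
            ? new Path(new File(path).toURI())
            : candidate;
}
{code}

The decode method could then map configured strings directly, e.g. `List<Path> files = ConfigUtils.decodeListFromConfig(configuration, configOption, this::createPathWithSchema);`, dropping the extra stream pass.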
[GitHub] [flink] masteryhx commented on pull request #17443: [FLINK-9465][Runtime/Checkpointing] Specify a separate savepoint timeout option via CLI and REST API
masteryhx commented on PR #17443: URL: https://github.com/apache/flink/pull/17443#issuecomment-1689189718 Hi @zoltar9264, just a kind ping: are you still working on this PR? Could you rebase onto the newest master branch and resolve the comments? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (FLINK-15014) Refactor KeyedStateInputFormat to support multiple types of user functions
[ https://issues.apache.org/jira/browse/FLINK-15014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hangxiang Yu resolved FLINK-15014. -- Resolution: Fixed I saw the PR has been merged and the problem should have been resolved, so I marked this as resolved. > Refactor KeyedStateInputFormat to support multiple types of user functions > -- > > Key: FLINK-15014 > URL: https://issues.apache.org/jira/browse/FLINK-15014 > Project: Flink > Issue Type: Improvement > Components: API / State Processor >Affects Versions: 1.10.0 >Reporter: Seth Wiesman >Priority: Not a Priority > Labels: auto-deprioritized-major, auto-deprioritized-minor, > pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Closed] (FLINK-19917) RocksDBInitTest.testTempLibFolderDeletedOnFail fails on Windows
[ https://issues.apache.org/jira/browse/FLINK-19917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hangxiang Yu closed FLINK-19917. Resolution: Cannot Reproduce I closed this as it has not been reproduced for more than one year; please reopen it if it reproduces. > RocksDBInitTest.testTempLibFolderDeletedOnFail fails on Windows > > > Key: FLINK-19917 > URL: https://issues.apache.org/jira/browse/FLINK-19917 > Project: Flink > Issue Type: Bug > Components: Runtime / State Backends >Reporter: Andrey Zagrebin >Priority: Not a Priority > Labels: auto-deprioritized-major, auto-deprioritized-minor > > {code:java} > java.lang.AssertionError: > Expected :0 > Actual :2{code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-32785) Release Testing: Verify FLIP-292: Enhance COMPILED PLAN to support operator-level state TTL configuration
[ https://issues.apache.org/jira/browse/FLINK-32785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1775#comment-1775 ] Jane Chan commented on FLINK-32785: --- This ticket aims to verify FLINK-31791: Enhance COMPILED PLAN to support operator-level state TTL configuration. More details about this feature and how to use it can be found in this [documentation|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/concepts/overview/#configure-operator-level-state-ttl]. The verification steps are as follows. h3. Part I: Functionality Verification 1. Start the standalone session cluster and sql client. 2. Execute the following DDL statements. {code:sql} CREATE TABLE `default_catalog`.`default_database`.`Orders` ( `order_id` INT, `line_order_id` INT ) WITH ( 'connector' = 'datagen' ); CREATE TABLE `default_catalog`.`default_database`.`LineOrders` ( `line_order_id` INT, `ship_mode` STRING ) WITH ( 'connector' = 'datagen' ); CREATE TABLE `default_catalog`.`default_database`.`OrdersShipInfo` ( `order_id` INT, `line_order_id` INT, `ship_mode` STRING ) WITH ( 'connector' = 'print' ); {code} 3. Generate Compiled Plan {code:sql} COMPILE PLAN '/path/to/plan.json' FOR INSERT INTO OrdersShipInfo SELECT a.order_id, a.line_order_id, b.ship_mode FROM Orders a JOIN LineOrders b ON a.line_order_id = b.line_order_id; {code} 4. Verify JSON plan content The generated JSON file should contain the following "state" JSON array for StreamJoin ExecNode. {code:json} { "id" : 5, "type" : "stream-exec-join_1", "joinSpec" : { ... }, "state" : [ { "index" : 0, "ttl" : "0 ms", "name" : "leftState" }, { "index" : 1, "ttl" : "0 ms", "name" : "rightState" } ], "inputProperties": [...], "outputType": ..., "description": ... } {code} h3. Part II: Compatibility Verification Repeat the previously described steps using the flink-1.17 release, and then execute the generated plan using 1.18 via {code:sql} EXECUTE PLAN '/path/to/plan-generated-by-old-flink-version.json' {code} > Release Testing: Verify FLIP-292: Enhance COMPILED PLAN to support > operator-level state TTL configuration > - > > Key: FLINK-32785 > URL: https://issues.apache.org/jira/browse/FLINK-32785 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.18.0 >Reporter: Qingsheng Ren >Priority: Major > Fix For: 1.18.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
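As an editorial illustration of how the verified plan would then actually be used: the point of the feature is that the generated "ttl" values are editable before execution. Assuming, for example, the join's left state in plan.json is changed to "ttl" : "3600000 ms", the modified plan is submitted with the same statement shown in Part II:

{code:sql}
-- Run the edited plan; the join operator would now expire its left state
-- after one hour ("0 ms" disables TTL for that state).
EXECUTE PLAN '/path/to/plan.json';
{code}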
[jira] [Commented] (FLINK-20772) RocksDBValueState with TTL occurs NullPointerException when calling update(null) method
[ https://issues.apache.org/jira/browse/FLINK-20772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757776#comment-17757776 ] Hangxiang Yu commented on FLINK-20772: -- Hi, [~dorbae] Sorry for the late reply. I think you are right. We should make TTLValueState also follow the protocol of ValueState#update. > RocksDBValueState with TTL occurs NullPointerException when calling > update(null) method > > > Key: FLINK-20772 > URL: https://issues.apache.org/jira/browse/FLINK-20772 > Project: Flink > Issue Type: Bug > Components: Runtime / State Backends >Affects Versions: 1.11.2 > Environment: Flink version: 1.11.2 > Flink Cluster: Standalone cluster with 3 Job managers and Task managers on > CentOS 7 >Reporter: Seongbae Chang >Priority: Not a Priority > Labels: auto-deprioritized-major, auto-deprioritized-minor, > beginner > > h2. Problem > * I use ValueState for my custom trigger and set TTL for these ValueState in > RocksDB backend environment. > * I found an error when I used this code. I know that > ValueState.update(null) works equally to ValueState.clear() in general. > Unfortunately, this error occurs after using TTL > {code:java} > // My Code > ctx.getPartitionedState(batchTotalSizeStateDesc).update(null); > {code} > * I tested this in Flink 1.11.2, but I think it would be a problem in upper > versions. > * Plus, I'm a beginner. So, if there is any problem in this discussion > issue, please give me advice about that. And I'll fix it! > {code:java} > // Error Stacktrace > Caused by: TimerException{org.apache.flink.util.FlinkRuntimeException: Error > while adding data to RocksDB} > ... 12 more > Caused by: org.apache.flink.util.FlinkRuntimeException: Error while adding > data to RocksDB > at > org.apache.flink.contrib.streaming.state.RocksDBValueState.update(RocksDBValueState.java:108) > at > org.apache.flink.runtime.state.ttl.TtlValueState.update(TtlValueState.java:50) > at .onProcessingTime(ActionBatchTimeTrigger.java:102) > at .onProcessingTime(ActionBatchTimeTrigger.java:29) > at > org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.onProcessingTime(WindowOperator.java:902) > at > org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.onProcessingTime(WindowOperator.java:498) > at > org.apache.flink.streaming.api.operators.InternalTimerServiceImpl.onProcessingTime(InternalTimerServiceImpl.java:260) > at > org.apache.flink.streaming.runtime.tasks.StreamTask.invokeProcessingTimeCallback(StreamTask.java:1220) > ... 11 more > Caused by: java.lang.NullPointerException > at > org.apache.flink.api.common.typeutils.base.IntSerializer.serialize(IntSerializer.java:69) > at > org.apache.flink.api.common.typeutils.base.IntSerializer.serialize(IntSerializer.java:32) > at > org.apache.flink.api.common.typeutils.CompositeSerializer.serialize(CompositeSerializer.java:142) > at > org.apache.flink.contrib.streaming.state.AbstractRocksDBState.serializeValueInternal(AbstractRocksDBState.java:158) > at > org.apache.flink.contrib.streaming.state.AbstractRocksDBState.serializeValue(AbstractRocksDBState.java:178) > at > org.apache.flink.contrib.streaming.state.AbstractRocksDBState.serializeValue(AbstractRocksDBState.java:167) > at > org.apache.flink.contrib.streaming.state.RocksDBValueState.update(RocksDBValueState.java:106) > ... 18 more > {code} > > h2. 
Reason > * It relates to RocksDBValueState with TTLValueState > * In RocksDBValueState(as well as other types of ValueState), > *.update(null)* has to be caught in if-clauses(null checking). However, it > skips the null checking and then tries to serialize the null value. > {code:java} > // > https://github.com/apache/flink/blob/release-1.11/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBValueState.java#L96-L110 > @Override > public void update(V value) { > if (value == null) { > clear(); > return; > } > > try { > backend.db.put(columnFamily, writeOptions, > serializeCurrentKeyWithGroupAndNamespace(), serializeValue(value)); > } catch (Exception e) { > throw new FlinkRuntimeException("Error while adding data to RocksDB", > e); > } > }{code} > * It is because that TtlValueState wraps the value(null) with the > LastAccessTime and makes the new TtlValue Object with the null value. > {code:java} > // > https://github.com/apache/flink/blob/release-1.11/flink-runtime/src/main/java/org/apache/flink/runtime/state/ttl/TtlValueState.java#L47-L51 > @Override > public void update(T va
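To make the direction of the fix discussed above concrete, here is a hedged sketch of how TtlValueState#update could follow the ValueState#update contract. The field and helper names (accessCallback, original, wrapWithTs) are taken from the snippet quoted above; the actual patch may differ:

{code:java}
@Override
public void update(T value) throws IOException {
    if (value == null) {
        // Honor the ValueState#update contract: updating with null behaves
        // like clear(), instead of wrapping null into a TtlValue and then
        // failing with a NullPointerException during serialization.
        clear();
        return;
    }
    accessCallback.run();
    original.update(wrapWithTs(value));
}
{code}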
[GitHub] [flink] wangzzu commented on pull request #23260: [hotfix][docs] Update the parameter types of startRemoteMetricsRpcService in javadocs
wangzzu commented on PR #23260: URL: https://github.com/apache/flink/pull/23260#issuecomment-1689181414 @flinkbot run azure -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Closed] (FLINK-25814) AdaptiveSchedulerITCase.testStopWithSavepointFailOnFirstSavepointSucceedOnSecond failed due to stop-with-savepoint failed
[ https://issues.apache.org/jira/browse/FLINK-25814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hangxiang Yu closed FLINK-25814. Resolution: Cannot Reproduce Closed this as not reproduced more than one year. Please reopen it if reproduced. > AdaptiveSchedulerITCase.testStopWithSavepointFailOnFirstSavepointSucceedOnSecond > failed due to stop-with-savepoint failed > - > > Key: FLINK-25814 > URL: https://issues.apache.org/jira/browse/FLINK-25814 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.13.5, 1.14.6 >Reporter: Yun Gao >Priority: Not a Priority > Labels: auto-deprioritized-major, auto-deprioritized-minor, > test-stability > > {code:java} > 2022-01-25T05:37:28.6339368Z Jan 25 05:37:28 [ERROR] > testStopWithSavepointFailOnFirstSavepointSucceedOnSecond(org.apache.flink.test.scheduling.AdaptiveSchedulerITCase) > Time elapsed: 300.269 s <<< ERROR! > 2022-01-25T05:37:28.6340216Z Jan 25 05:37:28 > java.util.concurrent.ExecutionException: > org.apache.flink.util.FlinkException: Stop with savepoint operation could not > be completed. > 2022-01-25T05:37:28.6342330Z Jan 25 05:37:28 at > java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) > 2022-01-25T05:37:28.6343776Z Jan 25 05:37:28 at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) > 2022-01-25T05:37:28.6344983Z Jan 25 05:37:28 at > org.apache.flink.test.scheduling.AdaptiveSchedulerITCase.testStopWithSavepointFailOnFirstSavepointSucceedOnSecond(AdaptiveSchedulerITCase.java:231) > 2022-01-25T05:37:28.6346165Z Jan 25 05:37:28 at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2022-01-25T05:37:28.6347145Z Jan 25 05:37:28 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2022-01-25T05:37:28.6348207Z Jan 25 05:37:28 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2022-01-25T05:37:28.6349147Z Jan 25 05:37:28 at > java.lang.reflect.Method.invoke(Method.java:498) > 2022-01-25T05:37:28.6350068Z Jan 25 05:37:28 at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > 2022-01-25T05:37:28.6351116Z Jan 25 05:37:28 at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 2022-01-25T05:37:28.6352132Z Jan 25 05:37:28 at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > 2022-01-25T05:37:28.6353816Z Jan 25 05:37:28 at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 2022-01-25T05:37:28.6354863Z Jan 25 05:37:28 at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > 2022-01-25T05:37:28.6355983Z Jan 25 05:37:28 at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > 2022-01-25T05:37:28.6356958Z Jan 25 05:37:28 at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > 2022-01-25T05:37:28.6357871Z Jan 25 05:37:28 at > org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) > 2022-01-25T05:37:28.6358799Z Jan 25 05:37:28 at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > 2022-01-25T05:37:28.6359658Z Jan 25 05:37:28 at > org.junit.rules.RunRules.evaluate(RunRules.java:20) > 2022-01-25T05:37:28.6360506Z Jan 25 05:37:28 at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > 2022-01-25T05:37:28.6361425Z Jan 25 05:37:28 at > 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > 2022-01-25T05:37:28.6362486Z Jan 25 05:37:28 at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > 2022-01-25T05:37:28.6364531Z Jan 25 05:37:28 at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2022-01-25T05:37:28.6365709Z Jan 25 05:37:28 at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2022-01-25T05:37:28.6366600Z Jan 25 05:37:28 at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2022-01-25T05:37:28.6367488Z Jan 25 05:37:28 at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2022-01-25T05:37:28.6368333Z Jan 25 05:37:28 at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2022-01-25T05:37:28.6369236Z Jan 25 05:37:28 at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > 2022-01-25T05:37:28.6370133Z Jan 25 05:37:28 at > org.junit.rules.RunRules.evaluate(RunRules.java:20) > 2022-01-25T05:37:28.6371056Z Jan 25 05:37:28 at > org.junit.runners.ParentRunner.run(ParentRunner.java:363) > 2022-01-25T05:37:28.6371957Z Jan 25 05:37:28 at
[jira] [Commented] (FLINK-26490) Adjust the MaxParallelism or remove the MaxParallelism check when unnecessary.
[ https://issues.apache.org/jira/browse/FLINK-26490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757774#comment-17757774 ] Hangxiang Yu commented on FLINK-26490: -- Hi [~liufangqi], just a kind ping: are you still working on this? > Adjust the MaxParallelism or remove the MaxParallelism check when unnecessary. > -- > > Key: FLINK-26490 > URL: https://issues.apache.org/jira/browse/FLINK-26490 > Project: Flink > Issue Type: Improvement > Components: Runtime / State Backends >Reporter: chenfengLiu >Priority: Not a Priority > Labels: auto-deprioritized-major, auto-deprioritized-minor, > pull-request-available > > Since Flink introduced key groups and MaxParallelism, Flink can rescale with > less cost. > But when we want to raise the job parallelism above the MaxParallelism, it's > impossible because many MaxParallelism checks require the new parallelism to > be no bigger than the MaxParallelism. > Actually, for an operator that doesn't contain keyed state, there should be > no problem raising the parallelism above the MaxParallelism, because only > keyed state needs the MaxParallelism and key groups. > So should we remove this check, or auto-adjust the MaxParallelism, when we > restore operator state that doesn't contain keyed state? > It would make restoring a job from a checkpoint easier. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Closed] (FLINK-31678) NonHAQueryableStateFsBackendITCase.testAggregatingState: Query did no succeed
[ https://issues.apache.org/jira/browse/FLINK-31678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hangxiang Yu closed FLINK-31678. Resolution: Cannot Reproduce Closed this because: # it has not been reproduced for more than 5 months # Queryable State has been marked as deprecated in 1.18 > NonHAQueryableStateFsBackendITCase.testAggregatingState: Query did no succeed > - > > Key: FLINK-31678 > URL: https://issues.apache.org/jira/browse/FLINK-31678 > Project: Flink > Issue Type: Bug > Components: Runtime / Queryable State, Tests >Affects Versions: 1.18.0 >Reporter: Matthias Pohl >Assignee: Hangxiang Yu >Priority: Major > Labels: stale-assigned, test-stability > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47748&view=logs&j=d89de3df-4600-5585-dadc-9bbc9a5e661c&t=be5a4b15-4b23-56b1-7582-795f58a645a2&l=40484 > {code} > java.lang.AssertionError: Did not succeed query > Mar 31 01:24:32 at org.junit.Assert.fail(Assert.java:89) > Mar 31 01:24:32 at org.junit.Assert.assertTrue(Assert.java:42) > Mar 31 01:24:32 at > org.apache.flink.queryablestate.itcases.AbstractQueryableStateTestBase.testAggregatingState(AbstractQueryableStateTestBase.java:1094) > [...] > Mar 31 01:24:32 Suppressed: java.util.concurrent.TimeoutException > Mar 31 01:24:32 at > java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1769) > Mar 31 01:24:32 at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928) > Mar 31 01:24:32 at > org.apache.flink.queryablestate.itcases.AbstractQueryableStateTestBase$AutoCancellableJob.close(AbstractQueryableStateTestBase.java:1351) > Mar 31 01:24:32 at > org.apache.flink.queryablestate.itcases.AbstractQueryableStateTestBase.testAggregatingState(AbstractQueryableStateTestBase.java:1096) > Mar 31 01:24:32 ... 52 more > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-17755) Support side-output of expiring states with TTL.
[ https://issues.apache.org/jira/browse/FLINK-17755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757772#comment-17757772 ] Hangxiang Yu commented on FLINK-17755: -- Hi [~roeyshemtov], could you also share the specific scenario for this? Why do you want to get the expired states, and what do you want to do with them? > Support side-output of expiring states with TTL. > > > Key: FLINK-17755 > URL: https://issues.apache.org/jira/browse/FLINK-17755 > Project: Flink > Issue Type: New Feature > Components: API / State Processor >Reporter: Roey Shem Tov >Priority: Minor > Labels: auto-deprioritized-major > > When we set a StateTtlConfig on a StateDescriptor, a record that has expired > is deleted from the StateBackend. > I want to suggest a new feature: getting the expiring results as a side > output, so we can process them instead of just deleting them. > For example, if we have a ListState that has TTL enabled, we can get the > expiring records in the list as a side output. > What do you think? -- This message was sent by Atlassian Jira (v8.20.10#820010)
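For illustration only, the proposal could be pictured along the following lines. OutputTag and StateTtlConfig are real Flink classes, but everything side-output-related below is hypothetical and does not exist in Flink's API:

{code:java}
// Hypothetical sketch of the feature request, not a real Flink API.
OutputTag<String> expiredTag = new OutputTag<String>("expired-entries") {};

StateTtlConfig ttlConfig =
        StateTtlConfig.newBuilder(Time.hours(1))
                .sideOutputExpiredState(expiredTag) // hypothetical builder method
                .build();
{code}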
[jira] [Commented] (FLINK-32906) Release Testing: Verify FLINK-30025 Unified the max display column width for SqlClient and Table APi in both Streaming and Batch execMode
[ https://issues.apache.org/jira/browse/FLINK-32906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757771#comment-17757771 ] Yubin Li commented on FLINK-32906: -- [~jingge] Thanks for clarifying that. I found that both configurations also take effect on fixed-length types (e.g. CHAR); is that expected? I am very glad to contribute :) !image-2023-08-23-10-39-26-602.png! !image-2023-08-23-10-39-10-225.png! in another session, !image-2023-08-23-10-41-30-913.png! !image-2023-08-23-10-41-42-513.png! > Release Testing: Verify FLINK-30025 Unified the max display column width for > SqlClient and Table APi in both Streaming and Batch execMode > - > > Key: FLINK-32906 > URL: https://issues.apache.org/jira/browse/FLINK-32906 > Project: Flink > Issue Type: Sub-task > Components: Tests >Reporter: Jing Ge >Assignee: Yubin Li >Priority: Major > Fix For: 1.18.0 > > Attachments: image-2023-08-22-18-54-45-122.png, > image-2023-08-22-18-54-57-699.png, image-2023-08-22-18-58-11-241.png, > image-2023-08-22-18-58-20-509.png, image-2023-08-22-19-04-49-344.png, > image-2023-08-22-19-06-49-335.png, image-2023-08-23-10-39-10-225.png, > image-2023-08-23-10-39-26-602.png, image-2023-08-23-10-41-30-913.png, > image-2023-08-23-10-41-42-513.png > > > more info could be found at > FLIP-279 > [https://cwiki.apache.org/confluence/display/FLINK/FLIP-279+Unified+the+max+display+column+width+for+SqlClient+and+Table+APi+in+both+Streaming+and+Batch+execMode] > > Tests: > Both configs could be set with new value and following behaviours are > expected : > | | |sql-client.display.max-column-width, default value is > 30|table.display.max-column-width, default value is 30| > |sqlclient|Streaming|text longer than the value will be truncated and > replaced with “...”|Text longer than the value will be truncated and replaced > with “...”| > |sqlclient|Batch|text longer than the value will be truncated and replaced > with “...”|Text longer than the value will be truncated and replaced with > “...”| > |Table API|Streaming|No effect. > table.display.max-column-width with the default value 30 will be used|Text > longer than the value will be truncated and replaced with “...”| > |Table API|Batch|No effect. > table.display.max-column-width with the default value 30 will be used|Text > longer than the value will be truncated and replaced with “...”| > > Please pay attention that this task offers a backward compatible solution and > deprecated sql-client.display.max-column-width, which means once > sql-client.display.max-column-width is used, it falls into the old scenario > where table.display.max-column-width didn't exist, any changes of > table.display.max-column-width won't take effect. You should test it either > by only using table.display.max-column-width or by using > sql-client.display.max-column-width, but not both of them back and forth. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-31970) "Key group 0 is not in KeyGroupRange" when using CheckpointedFunction
[ https://issues.apache.org/jira/browse/FLINK-31970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757770#comment-17757770 ] Hangxiang Yu commented on FLINK-31970: -- Hi [~YordanPavlov], just a kind ping. I think [~pnowojski]'s analysis is right; has this been resolved after updating your code? > "Key group 0 is not in KeyGroupRange" when using CheckpointedFunction > - > > Key: FLINK-31970 > URL: https://issues.apache.org/jira/browse/FLINK-31970 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.17.0 >Reporter: Yordan Pavlov >Priority: Major > Attachments: fill-topic.sh, main.scala > > > I am experiencing a problem where the following exception would be thrown on > Flink stop (stop with savepoint): > > {code:java} > org.apache.flink.util.SerializedThrowable: > java.lang.IllegalArgumentException: Key group 0 is not in > KeyGroupRange{startKeyGroup=86, endKeyGroup=127}.{code} > > In fact I do not have a non-deterministic keyBy() operator; I use > {code:java} > .keyBy(_ => 1){code} > I believe the problem is related to using RocksDB state along with a > {code:java} > CheckpointedFunction{code} > In my test program I have commented out a reduction of the parallelism which > would make the problem go away. I am attaching a standalone program which > presents the problem and also a script which generates the input data. For > clarity I would paste here the essence of the job: > > > {code:scala} > env.fromSource(kafkaSource, watermarkStrategy, "KafkaSource") > .setParallelism(3) > .keyBy(_ => 1) > .window(TumblingEventTimeWindows.of(Time.of(1, TimeUnit.MILLISECONDS))) > .apply(new TestWindow()) > /* .setParallelism(1) this would prevent the problem */ > .uid("window tester") > .name("window tester") > .print() > class TestWindow() extends WindowFunction[(Long, Int), Long, Int, TimeWindow] > with CheckpointedFunction { > var state: ValueState[Long] = _ > var count = 0 > override def snapshotState(functionSnapshotContext: > FunctionSnapshotContext): Unit = { > state.update(count) > } > override def initializeState(context: FunctionInitializationContext): Unit > = { > val storeDescriptor = new > ValueStateDescriptor[Long]("state-xrp-dex-pricer", > createTypeInformation[Long]) > state = context.getKeyedStateStore.getState(storeDescriptor) > } > override def apply(key: Int, window: TimeWindow, input: Iterable[(Long, > Int)], out: Collector[Long]): Unit = { > count += input.size > out.collect(count) > } > }{code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-6912) Consider changing the RichFunction#open method signature to take no arguments.
[ https://issues.apache.org/jira/browse/FLINK-6912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-6912: -- Labels: pull-request-available (was: ) > Consider changing the RichFunction#open method signature to take no arguments. > -- > > Key: FLINK-6912 > URL: https://issues.apache.org/jira/browse/FLINK-6912 > Project: Flink > Issue Type: Sub-task > Components: API / DataSet, API / DataStream >Affects Versions: 1.3.0 >Reporter: Mikhail Pryakhin >Priority: Not a Priority > Labels: pull-request-available > Fix For: 2.0.0 > > > RichFunction#open(org.apache.flink.configuration.Configuration) method takes > a Configuration instance as an argument which is always [passed as a new > instance|https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/AbstractUdfStreamOperator.java#L111] > bearing no configuration parameters. As I figured out it is a remnant of the > past since that method signature originates from the Record API. Consider > changing the RichFunction#open method signature to take no arguments as well > as actualizing java docs. > You can find the complete discussion > [here|http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/RichMapFunction-setup-method-td13696.html] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink] WencongLiu opened a new pull request, #23058: [FLINK-6912] Remove parameter in RichFunction#open
WencongLiu opened a new pull request, #23058: URL: https://github.com/apache/flink/pull/23058 ## What is the purpose of the change Remove parameter in RichFunction#open. ## Brief change log - Add a new class OpenContext - Add a new method RichFunction#open(OpenConext openContext) ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (no) - If yes, how is the feature documented? (not applicable) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
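A hedged sketch of the API shape the PR describes; the names follow the change log above (OpenContext), but the final interface may differ, and the rest of RichFunction is omitted:

{code:java}
/** Extensible parameter object, so future additions don't break open() again. */
public interface OpenContext {}

public interface RichFunction extends Function {

    /**
     * Replaces open(Configuration): per FLINK-6912, the Configuration
     * argument was always passed as a fresh, empty instance anyway.
     */
    void open(OpenContext openContext) throws Exception;

    // close(), getRuntimeContext(), setRuntimeContext(...) omitted
}
{code}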
[GitHub] [flink] WencongLiu closed pull request #23058: [FLINK-6912] Remove parameter in RichFunction#open
WencongLiu closed pull request #23058: [FLINK-6912] Remove parameter in RichFunction#open URL: https://github.com/apache/flink/pull/23058 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-32906) Release Testing: Verify FLINK-30025 Unified the max display column width for SqlClient and Table APi in both Streaming and Batch execMode
[ https://issues.apache.org/jira/browse/FLINK-32906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yubin Li updated FLINK-32906: - Attachment: image-2023-08-23-10-41-42-513.png > Release Testing: Verify FLINK-30025 Unified the max display column width for > SqlClient and Table APi in both Streaming and Batch execMode > - > > Key: FLINK-32906 > URL: https://issues.apache.org/jira/browse/FLINK-32906 > Project: Flink > Issue Type: Sub-task > Components: Tests >Reporter: Jing Ge >Assignee: Yubin Li >Priority: Major > Fix For: 1.18.0 > > Attachments: image-2023-08-22-18-54-45-122.png, > image-2023-08-22-18-54-57-699.png, image-2023-08-22-18-58-11-241.png, > image-2023-08-22-18-58-20-509.png, image-2023-08-22-19-04-49-344.png, > image-2023-08-22-19-06-49-335.png, image-2023-08-23-10-39-10-225.png, > image-2023-08-23-10-39-26-602.png, image-2023-08-23-10-41-30-913.png, > image-2023-08-23-10-41-42-513.png > > > more info could be found at > FLIP-279 > [https://cwiki.apache.org/confluence/display/FLINK/FLIP-279+Unified+the+max+display+column+width+for+SqlClient+and+Table+APi+in+both+Streaming+and+Batch+execMode] > > Tests: > Both configs could be set with new value and following behaviours are > expected : > | | |sql-client.display.max-column-width, default value is > 30|table.display.max-column-width, default value is 30| > |sqlclient|Streaming|text longer than the value will be truncated and > replaced with “...”|Text longer than the value will be truncated and replaced > with “...”| > |sqlclient|Batch|text longer than the value will be truncated and replaced > with “...”|Text longer than the value will be truncated and replaced with > “...”| > |Table API|Streaming|No effect. > table.display.max-column-width with the default value 30 will be used|Text > longer than the value will be truncated and replaced with “...”| > |Table API|Batch|No effect. > table.display.max-column-width with the default value 30 will be used|Text > longer than the value will be truncated and replaced with “...”| > > Please pay attention that this task offers a backward compatible solution and > deprecated sql-client.display.max-column-width, which means once > sql-client.display.max-column-width is used, it falls into the old scenario > where table.display.max-column-width didn't exist, any changes of > table.display.max-column-width won't take effect. You should test it either > by only using table.display.max-column-width or by using > sql-client.display.max-column-width, but not both of them back and forth. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-32906) Release Testing: Verify FLINK-30025 Unified the max display column width for SqlClient and Table APi in both Streaming and Batch execMode
[ https://issues.apache.org/jira/browse/FLINK-32906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yubin Li updated FLINK-32906: - Attachment: image-2023-08-23-10-41-30-913.png > Release Testing: Verify FLINK-30025 Unified the max display column width for > SqlClient and Table APi in both Streaming and Batch execMode > - > > Key: FLINK-32906 > URL: https://issues.apache.org/jira/browse/FLINK-32906 > Project: Flink > Issue Type: Sub-task > Components: Tests >Reporter: Jing Ge >Assignee: Yubin Li >Priority: Major > Fix For: 1.18.0 > > Attachments: image-2023-08-22-18-54-45-122.png, > image-2023-08-22-18-54-57-699.png, image-2023-08-22-18-58-11-241.png, > image-2023-08-22-18-58-20-509.png, image-2023-08-22-19-04-49-344.png, > image-2023-08-22-19-06-49-335.png, image-2023-08-23-10-39-10-225.png, > image-2023-08-23-10-39-26-602.png, image-2023-08-23-10-41-30-913.png, > image-2023-08-23-10-41-42-513.png > > > more info could be found at > FLIP-279 > [https://cwiki.apache.org/confluence/display/FLINK/FLIP-279+Unified+the+max+display+column+width+for+SqlClient+and+Table+APi+in+both+Streaming+and+Batch+execMode] > > Tests: > Both configs could be set with new value and following behaviours are > expected : > | | |sql-client.display.max-column-width, default value is > 30|table.display.max-column-width, default value is 30| > |sqlclient|Streaming|text longer than the value will be truncated and > replaced with “...”|Text longer than the value will be truncated and replaced > with “...”| > |sqlclient|Batch|text longer than the value will be truncated and replaced > with “...”|Text longer than the value will be truncated and replaced with > “...”| > |Table API|Streaming|No effect. > table.display.max-column-width with the default value 30 will be used|Text > longer than the value will be truncated and replaced with “...”| > |Table API|Batch|No effect. > table.display.max-column-width with the default value 30 will be used|Text > longer than the value will be truncated and replaced with “...”| > > Please pay attention that this task offers a backward compatible solution and > deprecated sql-client.display.max-column-width, which means once > sql-client.display.max-column-width is used, it falls into the old scenario > where table.display.max-column-width didn't exist, any changes of > table.display.max-column-width won't take effect. You should test it either > by only using table.display.max-column-width or by using > sql-client.display.max-column-width, but not both of them back and forth. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-31053) Example Repair the log output format of the CheckpointCoordinator
[ https://issues.apache.org/jira/browse/FLINK-31053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757768#comment-17757768 ] Hangxiang Yu commented on FLINK-31053: -- Hi [~xzw0223], I saw you closed your PR; are you still working on this? I think it's better to print failure.getMessage(). WDYT? > Example Repair the log output format of the CheckpointCoordinator > - > > Key: FLINK-31053 > URL: https://issues.apache.org/jira/browse/FLINK-31053 > Project: Flink > Issue Type: Improvement > Components: Runtime / Checkpointing >Affects Versions: 1.16.1 >Reporter: xuzhiwen >Priority: Minor > Labels: pull-request-available > Attachments: image-2023-02-14-13-38-32-967.png > > > !image-2023-02-14-13-38-32-967.png|width=708,height=146! > The log output format is incorrect. -- This message was sent by Atlassian Jira (v8.20.10#820010)
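For what that suggestion would mean in code, a hedged sketch with illustrative placeholder names (the actual statement lives in CheckpointCoordinator and is not shown in this thread):

{code:java}
// Log only the failure message instead of the whole Throwable, so the
// line keeps a clean, single-line format.
LOG.info(
        "Decline checkpoint {} by task {} of job {}. Reason: {}",
        checkpointId, executionAttemptId, jobId, failure.getMessage());
{code}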
[jira] [Updated] (FLINK-32906) Release Testing: Verify FLINK-30025 Unified the max display column width for SqlClient and Table APi in both Streaming and Batch execMode
[ https://issues.apache.org/jira/browse/FLINK-32906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yubin Li updated FLINK-32906: - Attachment: image-2023-08-23-10-39-10-225.png > Release Testing: Verify FLINK-30025 Unified the max display column width for > SqlClient and Table APi in both Streaming and Batch execMode > - > > Key: FLINK-32906 > URL: https://issues.apache.org/jira/browse/FLINK-32906 > Project: Flink > Issue Type: Sub-task > Components: Tests >Reporter: Jing Ge >Assignee: Yubin Li >Priority: Major > Fix For: 1.18.0 > > Attachments: image-2023-08-22-18-54-45-122.png, > image-2023-08-22-18-54-57-699.png, image-2023-08-22-18-58-11-241.png, > image-2023-08-22-18-58-20-509.png, image-2023-08-22-19-04-49-344.png, > image-2023-08-22-19-06-49-335.png, image-2023-08-23-10-39-10-225.png, > image-2023-08-23-10-39-26-602.png > > > more info could be found at > FLIP-279 > [https://cwiki.apache.org/confluence/display/FLINK/FLIP-279+Unified+the+max+display+column+width+for+SqlClient+and+Table+APi+in+both+Streaming+and+Batch+execMode] > > Tests: > Both configs could be set with new value and following behaviours are > expected : > | | |sql-client.display.max-column-width, default value is > 30|table.display.max-column-width, default value is 30| > |sqlclient|Streaming|text longer than the value will be truncated and > replaced with “...”|Text longer than the value will be truncated and replaced > with “...”| > |sqlclient|Batch|text longer than the value will be truncated and replaced > with “...”|Text longer than the value will be truncated and replaced with > “...”| > |Table API|Streaming|No effect. > table.display.max-column-width with the default value 30 will be used|Text > longer than the value will be truncated and replaced with “...”| > |Table API|Batch|No effect. > table.display.max-column-width with the default value 30 will be used|Text > longer than the value will be truncated and replaced with “...”| > > Please pay attention that this task offers a backward compatible solution and > deprecated sql-client.display.max-column-width, which means once > sql-client.display.max-column-width is used, it falls into the old scenario > where table.display.max-column-width didn't exist, any changes of > table.display.max-column-width won't take effect. You should test it either > by only using table.display.max-column-width or by using > sql-client.display.max-column-width, but not both of them back and forth. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-31685) Checkpoint job folder not deleted after job is cancelled
[ https://issues.apache.org/jira/browse/FLINK-31685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757767#comment-17757767 ] Hangxiang Yu commented on FLINK-31685: -- I just linked many related tickets. The request is valid and many users want it resolved. I think we could introduce an option controlling whether the job-id directory is generated, and keep the behaviors compatible. As for the job-id layout, I think it's still useful if users want to keep some historical checkpoints with NO_CLAIM mode. [~tangyun] WDYT? > Checkpoint job folder not deleted after job is cancelled > > > Key: FLINK-31685 > URL: https://issues.apache.org/jira/browse/FLINK-31685 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.16.1 >Reporter: Sergio Sainz >Priority: Major > > When a flink job is being checkpointed and the job is then cancelled, the > checkpoint is indeed deleted (as per > {{{}execution.checkpointing.externalized-checkpoint-retention: > DELETE_ON_CANCELLATION{}}}), but the job-id folder still remains: > > [sergio@flink-cluster-54f7fc7c6-k6km8 JobCheckpoints]$ ls > 01eff17aa2910484b5aeb644bc531172 3a59309ef018541fc0c20856d0d89855 > 78ff2344dd7ef89f9fbcc9789fc0cd79 a6fd7cec89c0af78c3353d4a46a7d273 > dbc957868c08ebeb100d708bbd057593 > 04ff0abb9e860fc85f0e39d722367c3c 3e09166341615b1b4786efd6745a05d6 > 79efc000aa29522f0a9598661f485f67 a8c42bfe158abd78ebcb4adb135de61f > dc8e04b02c9d8a1bc04b21d2c8f21f74 > 05f48019475de40230900230c63cfe89 3f9fb467c9af91ef41d527fe92f9b590 > 7a6ad7407d7120eda635d71cd843916a a8db748c1d329407405387ac82040be4 > dfb2df1c25056e920d41c94b659dcdab > 09d30bc0ff786994a6a3bb06abd3 455525b76a1c6826b6eaebd5649c5b6b > 7b1458424496baaf3d020e9fece525a4 aa2ef9587b2e9c123744e8940a66a287 > All folders in the above list, like {{01eff17aa2910484b5aeb644bc531172}}, are > empty. > > *Expected behaviour:* > The job-id folder should also be deleted. -- This message was sent by Atlassian Jira (v8.20.10#820010)
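A hedged sketch of what such an option could look like; the key name and default are invented for illustration (ConfigOptions is Flink's real builder API):

{code:java}
// Hypothetical option: keep today's layout by default, but let users opt
// out of the per-job subdirectory so empty job-id folders are not left
// behind after cancellation.
public static final ConfigOption<Boolean> CREATE_JOB_ID_SUBDIR =
        ConfigOptions.key("state.checkpoints.create-job-id-subdir") // invented key
                .booleanType()
                .defaultValue(true)
                .withDescription(
                        "Whether to create a job-id subdirectory under the "
                                + "configured checkpoint directory.");
{code}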
[jira] [Updated] (FLINK-32906) Release Testing: Verify FLINK-30025 Unified the max display column width for SqlClient and Table APi in both Streaming and Batch execMode
[ https://issues.apache.org/jira/browse/FLINK-32906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yubin Li updated FLINK-32906: - Attachment: image-2023-08-23-10-39-26-602.png > Release Testing: Verify FLINK-30025 Unified the max display column width for > SqlClient and Table APi in both Streaming and Batch execMode > - > > Key: FLINK-32906 > URL: https://issues.apache.org/jira/browse/FLINK-32906 > Project: Flink > Issue Type: Sub-task > Components: Tests >Reporter: Jing Ge >Assignee: Yubin Li >Priority: Major > Fix For: 1.18.0 > > Attachments: image-2023-08-22-18-54-45-122.png, > image-2023-08-22-18-54-57-699.png, image-2023-08-22-18-58-11-241.png, > image-2023-08-22-18-58-20-509.png, image-2023-08-22-19-04-49-344.png, > image-2023-08-22-19-06-49-335.png, image-2023-08-23-10-39-10-225.png, > image-2023-08-23-10-39-26-602.png > > > more info could be found at > FLIP-279 > [https://cwiki.apache.org/confluence/display/FLINK/FLIP-279+Unified+the+max+display+column+width+for+SqlClient+and+Table+APi+in+both+Streaming+and+Batch+execMode] > > Tests: > Both configs could be set with new value and following behaviours are > expected : > | | |sql-client.display.max-column-width, default value is > 30|table.display.max-column-width, default value is 30| > |sqlclient|Streaming|text longer than the value will be truncated and > replaced with “...”|Text longer than the value will be truncated and replaced > with “...”| > |sqlclient|Batch|text longer than the value will be truncated and replaced > with “...”|Text longer than the value will be truncated and replaced with > “...”| > |Table API|Streaming|No effect. > table.display.max-column-width with the default value 30 will be used|Text > longer than the value will be truncated and replaced with “...”| > |Table API|Batch|No effect. > table.display.max-column-width with the default value 30 will be used|Text > longer than the value will be truncated and replaced with “...”| > > Please pay attention that this task offers a backward compatible solution and > deprecated sql-client.display.max-column-width, which means once > sql-client.display.max-column-width is used, it falls into the old scenario > where table.display.max-column-width didn't exist, any changes of > table.display.max-column-width won't take effect. You should test it either > by only using table.display.max-column-width or by using > sql-client.display.max-column-width, but not both of them back and forth. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-31573) Nexmark performance drops in 1.17 compared to 1.13
[ https://issues.apache.org/jira/browse/FLINK-31573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757765#comment-17757765 ] Hangxiang Yu commented on FLINK-31573: -- Hi [~renqs], just a kind ping: what's the status of this ticket? I saw the Nexmark PR to resolve this has been merged. Could we close this, or should we wait for the newest test results? > Nexmark performance drops in 1.17 compared to 1.13 > -- > > Key: FLINK-31573 > URL: https://issues.apache.org/jira/browse/FLINK-31573 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.17.0 >Reporter: Qingsheng Ren >Priority: Critical > > The case was originally > [reported|https://lists.apache.org/thread/n4x2xgcr2kb8vcbby7c6gwb22n0thwhz] > in the voting thread of 1.17.0 RC3. > Compared to Flink 1.13, the performance of Nexmark in 1.17.0 RC3 drops ~8% in > query 18. Some details could be found in the [mailing > list|https://lists.apache.org/thread/n4x2xgcr2kb8vcbby7c6gwb22n0thwhz]. > A further investigation showed that with configuration > {{execution.checkpointing.checkpoints-after-tasks-finish.enabled}} set to > false, the performance of 1.17 is better than 1.16. > A full comparison of Nexmark results between 1.16 and 1.17 is ongoing. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-32909) The jobmanager.sh pass arguments failed
[ https://issues.apache.org/jira/browse/FLINK-32909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757763#comment-17757763 ] Alex Wu commented on FLINK-32909: - Yes, I would be happy to. Emmm, I should prepare a PR next, right? > The jobmanager.sh pass arguments failed > --- > > Key: FLINK-32909 > URL: https://issues.apache.org/jira/browse/FLINK-32909 > Project: Flink > Issue Type: Bug > Components: Deployment / Scripts >Affects Versions: 1.16.2, 1.18.0, 1.17.1 >Reporter: Alex Wu >Priority: Major > > I'm trying to use the jobmanager.sh script to create a jobmanager instance > manually, and I need to pass arguments to the script dynamically, rather than > through flink-conf.yaml. But I did not succeed: with all configurations > commented out in flink-conf.yaml, I typed a command like: > > {code:java} > ./bin/jobmanager.sh start -D jobmanager.memory.flink.size=1024m -D > jobmanager.rpc.address=xx.xx.xx.xx -D jobmanager.rpc.port=xxx -D > jobmanager.bind-host=0.0.0.0 -D rest.address=xx.xx.xx.xx -D rest.port=xxx -D > rest.bind-address=0.0.0.0{code} > but I got the errors below: > > {code:java} > [ERROR] The execution result is empty. > [ERROR] Could not get JVM parameters and dynamic configurations properly. > [ERROR] Raw output from BashJavaUtils: > WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will > impact performance. > Exception in thread "main" > org.apache.flink.configuration.IllegalConfigurationException: JobManager > memory configuration failed: Either required fine-grained memory > (jobmanager.memory.heap.size), or Total Flink Memory size (Key: > 'jobmanager.memory.flink.size' , default: null (fallback keys: [])), or Total > Process Memory size (Key: 'jobmanager.memory.process.size' , default: null > (fallback keys: [])) need to be configured explicitly. > at > org.apache.flink.runtime.jobmanager.JobManagerProcessUtils.processSpecFromConfigWithNewOptionToInterpretLegacyHeap(JobManagerProcessUtils.java:78) > at > org.apache.flink.runtime.util.bash.BashJavaUtils.getJmResourceParams(BashJavaUtils.java:98) > at > org.apache.flink.runtime.util.bash.BashJavaUtils.runCommand(BashJavaUtils.java:69) > at > org.apache.flink.runtime.util.bash.BashJavaUtils.main(BashJavaUtils.java:56) > Caused by: org.apache.flink.configuration.IllegalConfigurationException: > Either required fine-grained memory (jobmanager.memory.heap.size), or Total > Flink Memory size (Key: 'jobmanager.memory.flink.size' , default: null > (fallback keys: [])), or Total Process Memory size (Key: > 'jobmanager.memory.process.size' , default: null (fallback keys: [])) need to > be configured explicitly. > at > org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.failBecauseRequiredOptionsNotConfigured(ProcessMemoryUtils.java:129) > at > org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.memoryProcessSpecFromConfig(ProcessMemoryUtils.java:86) > at > org.apache.flink.runtime.jobmanager.JobManagerProcessUtils.processSpecFromConfig(JobManagerProcessUtils.java:83) > at > org.apache.flink.runtime.jobmanager.JobManagerProcessUtils.processSpecFromConfigWithNewOptionToInterpretLegacyHeap(JobM > {code} > It seems to ask me to configure memory for the jobmanager instance explicitly, > but I had already passed the jobmanager.memory.flink.size parameter. So I > debugged the script and found a spelling error in the jobmanager.sh script at > line 54: > > {code:java} > parseJmArgsAndExportLogs "${ARGS[@]}" > {code} > the uppercase "$\{ARGS[@]}" is the wrong variable name here given the > context, causing an empty string to be passed to the function. I changed it to > "$\{args[@]}" and it works fine. > > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-6485) Use buffering to avoid frequent memtable flushes for short intervals in RockdDB incremental checkpoints
[ https://issues.apache.org/jira/browse/FLINK-6485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757764#comment-17757764 ] Hangxiang Yu commented on FLINK-6485: - IIUC, ChangelogStateBackend ([Generalized incremental checkpoints|https://cwiki.apache.org/confluence/display/FLINK/FLIP-158%3A+Generalized+incremental+checkpoints]) could basically resolve this. > Use buffering to avoid frequent memtable flushes for short intervals in > RockdDB incremental checkpoints > --- > > Key: FLINK-6485 > URL: https://issues.apache.org/jira/browse/FLINK-6485 > Project: Flink > Issue Type: Improvement > Components: Runtime / Checkpointing >Reporter: Stefan Richter >Priority: Not a Priority > Labels: auto-deprioritized-major, auto-deprioritized-minor > > The current implementation of incremental checkpoints in RocksDB needs to > flush the memtable to disk prior to a checkpoint and this will generate an SST > file. > What is required for fast checkpoint intervals is an alternative mechanism to > quickly determine a delta from the previous incremental checkpoint to avoid > this frequent flushing. This could be implemented through custom buffering > inside the backend, e.g. a changelog buffer that is maintained up to a certain > size. > The buffer's content becomes part of the private state in the incremental > snapshot and the buffer is dropped i) after each checkpoint or ii) after > exceeding a certain size that justifies flushing and writing a new SST file. > This mechanism should not be blocking, which we can achieve in the following > way: > 1) We have a clear upper limit to the buffer size (e.g. 64MB), once the limit > of diffs is reached, we can drop the buffer because we can assume enough work > was done to justify a new SST file > 2) We write the buffer to a local FS, so we can expect this to be reasonably > fast and that it will not suffer from the kind of blocking that we have in > DFS. I mean technically, also flushing the SST file can block. Then, in the > async part, we can transfer the locally written buffer file to DFS. > There might be other mechanisms in RocksDB that we could exploit for this, > such as the write ahead log, but this could already be a good solution. -- This message was sent by Atlassian Jira (v8.20.10#820010)
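As a usage pointer for the comment above, a hedged sketch of enabling the changelog state backend (available since Flink 1.15; treat the exact method name as an assumption):

{code:java}
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// State changes are continuously appended to a changelog and uploaded,
// so checkpoints no longer force frequent RocksDB memtable flushes.
env.enableChangelogStateBackend(true);
{code}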
[GitHub] [flink] lincoln-lil commented on a diff in pull request #23209: [FLINK-32824] Port Calcite's fix for the sql like operator
lincoln-lil commented on code in PR #23209: URL: https://github.com/apache/flink/pull/23209#discussion_r1302375790 ## flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/delegation/ParserImplTest.java: ## @@ -114,6 +115,18 @@ public void testCompletionTest() { verifySqlCompletion("SELECT a fram b", 10, new String[] {"FETCH", "FROM"}); } +@Test +public void testSqlLike() { Review Comment: Add a separate test class `SqlLikeUtilsTest` for this test, because it's not related to the parser. ## flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/runtime/batch/sql/CalcITCase.scala: ## @@ -416,6 +416,30 @@ class CalcITCase extends BatchTestBase { row(3, 2L, "Hello world"), row(4, 3L, "Hello world, how are you?") )) + +val rows = Seq(row(3, "H.llo"), row(3, "Hello")) +val dataId = TestValuesTableFactory.registerData(rows) + +val ddl = + s""" + |CREATE TABLE MyTable ( + | a int, + | c string + |) WITH ( + | 'connector' = 'values', + | 'data-id' = '$dataId', + | 'bounded' = 'true' + |) + |""".stripMargin +tEnv.executeSql(ddl) + +checkResult( + s""" + |SELECT c FROM MyTable + | WHERE c LIKE 'H.llo' + |""".stripMargin, + Seq(row("H.llo")) +) Review Comment: Also add SQL cases to cover the `SIMILAR TO` & `NOT SIMILAR TO` syntax -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
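A hedged sketch of the extra coverage the second comment asks for, reusing the MyTable rows from the diff ('H.llo' and 'Hello'). The expected results assume standard SQL semantics, where '.' is an ordinary character in SIMILAR TO patterns rather than a regex wildcard:

{code:sql}
-- Should return only 'H.llo': the dot is matched literally.
SELECT c FROM MyTable WHERE c SIMILAR TO 'H.llo';

-- Complementary case: should return only 'Hello'.
SELECT c FROM MyTable WHERE c NOT SIMILAR TO 'H.llo';
{code}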
[jira] [Updated] (FLINK-32794) Release Testing: Verify FLIP-293: Introduce Flink Jdbc Driver For Sql Gateway
[ https://issues.apache.org/jira/browse/FLINK-32794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fang Yong updated FLINK-32794: -- Description: Document for jdbc driver: https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/jdbcdriver/ > Release Testing: Verify FLIP-293: Introduce Flink Jdbc Driver For Sql Gateway > - > > Key: FLINK-32794 > URL: https://issues.apache.org/jira/browse/FLINK-32794 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.18.0 >Reporter: Qingsheng Ren >Priority: Major > Fix For: 1.18.0 > > > Document for jdbc driver: > https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/jdbcdriver/ -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-32794) Release Testing: Verify FLIP-293: Introduce Flink Jdbc Driver For Sql Gateway
[ https://issues.apache.org/jira/browse/FLINK-32794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757760#comment-17757760 ] Fang Yong commented on FLINK-32794: --- Thanks [~renqs], DONE > Release Testing: Verify FLIP-293: Introduce Flink Jdbc Driver For Sql Gateway > - > > Key: FLINK-32794 > URL: https://issues.apache.org/jira/browse/FLINK-32794 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.18.0 >Reporter: Qingsheng Ren >Priority: Major > Fix For: 1.18.0 > > > Document for jdbc driver: > https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/jdbcdriver/ -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-32798) Release Testing: Verify FLIP-294: Support Customized Catalog Modification Listener
[ https://issues.apache.org/jira/browse/FLINK-32798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757759#comment-17757759 ] Fang Yong commented on FLINK-32798: --- [~renqs] I have added the document link in the "Description" > Release Testing: Verify FLIP-294: Support Customized Catalog Modification > Listener > -- > > Key: FLINK-32798 > URL: https://issues.apache.org/jira/browse/FLINK-32798 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.18.0 >Reporter: Qingsheng Ren >Priority: Major > Fix For: 1.18.0 > > > The document about catalog modification listener is: > https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/catalogs/#catalog-modification-listener -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-4675) Remove Parameter from WindowAssigner.getDefaultTrigger()
[ https://issues.apache.org/jira/browse/FLINK-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-4675: -- Labels: pull-request-available (was: ) > Remove Parameter from WindowAssigner.getDefaultTrigger() > > > Key: FLINK-4675 > URL: https://issues.apache.org/jira/browse/FLINK-4675 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Reporter: Aljoscha Krettek >Priority: Major > Labels: pull-request-available > Fix For: 2.0.0 > > > For legacy reasons the method has {{StreamExecutionEnvironment}} as a > parameter. This is not needed anymore. > [~StephanEwen] do you think we should break this now? {{WindowAssigner}} is > {{PublicEvolving}} but I wanted to play it conservative for now. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-32798) Release Testing: Verify FLIP-294: Support Customized Catalog Modification Listener
[ https://issues.apache.org/jira/browse/FLINK-32798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fang Yong updated FLINK-32798: -- Description: The document about catalog modification listener is: https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/catalogs/#catalog-modification-listener > Release Testing: Verify FLIP-294: Support Customized Catalog Modification > Listener > -- > > Key: FLINK-32798 > URL: https://issues.apache.org/jira/browse/FLINK-32798 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.18.0 >Reporter: Qingsheng Ren >Priority: Major > Fix For: 1.18.0 > > > The document about catalog modification listener is: > https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/catalogs/#catalog-modification-listener -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink] WencongLiu opened a new pull request, #23073: [FLINK-4675] Remove parameter in WindowAssigner#getDefaultTrigger()
WencongLiu opened a new pull request, #23073: URL: https://github.com/apache/flink/pull/23073 ## What is the purpose of the change Remove parameter in WindowAssigner#getDefaultTrigger(). ## Brief change log - Remove parameter in WindowAssigner#getDefaultTrigger() ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (no) - If yes, how is the feature documented? (not applicable) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
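For context, the shape of the change described above, as a rough sketch (the real `WindowAssigner` declares more members; this trimmed class is only illustrative):

{code:java}
import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.windows.Window;

// Illustrative sketch only, not the actual WindowAssigner class.
abstract class WindowAssignerSketch<T, W extends Window> {

    // Before (legacy): the environment argument was carried along but unused.
    // public abstract Trigger<T, W> getDefaultTrigger(StreamExecutionEnvironment env);

    // After (FLINK-4675): the parameter is gone.
    public abstract Trigger<T, W> getDefaultTrigger();
}
{code}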
[GitHub] [flink] WencongLiu closed pull request #23073: [FLINK-4675] Remove parameter in WindowAssigner#getDefaultTrigger()
WencongLiu closed pull request #23073: [FLINK-4675] Remove parameter in WindowAssigner#getDefaultTrigger() URL: https://github.com/apache/flink/pull/23073 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-5336) Make Path immutable
[ https://issues.apache.org/jira/browse/FLINK-5336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-5336: -- Labels: pull-request-available (was: ) > Make Path immutable > --- > > Key: FLINK-5336 > URL: https://issues.apache.org/jira/browse/FLINK-5336 > Project: Flink > Issue Type: Sub-task > Components: API / DataSet >Reporter: Stephan Ewen >Priority: Major > Labels: pull-request-available > Fix For: 2.0.0 > > > The {{Path}} class is currently mutable to support the {{IOReadableWritable}} > serialization. Since that serialization is not used any more, I suggest > dropping that interface from Path and making the Path's URI final. > Being immutable, we can store configured paths properly without the chance of > them being mutated as side effects. > Many parts of the code assume that the Path is immutable and are therefore > susceptible to subtle errors. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink] WencongLiu opened a new pull request, #23072: [FLINK-5336] Deprecate IOReadableWritable serialization in Path
WencongLiu opened a new pull request, #23072: URL: https://github.com/apache/flink/pull/23072 ## What is the purpose of the change Deprecate IOReadableWritable serialization in Path. ## Brief change log - Deprecate IOReadableWritable serialization in Path ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (no) - If yes, how is the feature documented? (not applicable) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Closed] (FLINK-22937) rocksdb cause jvm to crash
[ https://issues.apache.org/jira/browse/FLINK-22937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hangxiang Yu closed FLINK-22937. Resolution: Cannot Reproduce Closing this, as there has been no response for more than two years and there is not enough information to reproduce/debug; please reopen it if necessary. > rocksdb cause jvm to crash > -- > > Key: FLINK-22937 > URL: https://issues.apache.org/jira/browse/FLINK-22937 > Project: Flink > Issue Type: Bug > Components: Runtime / State Backends >Affects Versions: 1.13.1 > Environment: deployment: native kubernates >Reporter: Piers >Priority: Major > Attachments: dump.txt, hs_err_pid1.log > > > JVM crash when running job. Possibly RocksDB caused this. > This link contains the JVM crash log. > Thanks. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-30461) Some rocksdb sst files will remain forever
[ https://issues.apache.org/jira/browse/FLINK-30461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757756#comment-17757756 ] Rui Fan commented on FLINK-30461: - Hi [~mason6345], I didn't notice in time that the sst files weren't cleaned up; instead, one of our Flink users reported that checkpoints were failing. After analysis, the root cause was: +_The shared directory of a flink job has more than 1 million files. It exceeded the hdfs upper limit, causing new files not to be written._+ With `state.checkpoints.num-retained=1`, I deserialized the _metadata file of the latest checkpoint: only 50k files are referenced, so the other 950k files should have been cleaned up. Analyzing the _metadata file can therefore reveal that some SST files were not being cleaned up. BTW, please follow FLINK-28984 as well; it also causes sst file leaks. > Some rocksdb sst files will remain forever > -- > > Key: FLINK-30461 > URL: https://issues.apache.org/jira/browse/FLINK-30461 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing, Runtime / State Backends >Affects Versions: 1.16.0, 1.17.0, 1.15.3 >Reporter: Rui Fan >Assignee: Rui Fan >Priority: Major > Labels: pull-request-available > Fix For: 1.17.0, 1.15.4, 1.16.2 > > Attachments: image-2022-12-20-18-45-32-948.png, > image-2022-12-20-18-47-42-385.png, screenshot-1.png > > > In rocksdb incremental checkpoint mode, if some files have been uploaded and > others have not when the checkpoint is canceled due to a checkpoint timeout, > the uploaded files will remain. > > h2. Impact: > The shared directory of a flink job has more than 1 million files. It > exceeded the hdfs upper limit, causing new files not to be written. > However, only 50k files are actually referenced; the other 950k files should > be cleaned up. > !https://user-images.githubusercontent.com/38427477/207588272-dda7ba69-c84c-4372-aeb4-c54657b9b956.png|width=1962,height=364! > h2. Root cause: > If an exception is thrown during the checkpoint async phase, flink will clean > up metaStateHandle, miscFiles and sstFiles. > However, sst files are only added to sstFiles once all of them have been > uploaded. If some sst files have been uploaded and others are still being > uploaded when the checkpoint is canceled due to a checkpoint timeout, none of > the sst files will be added to sstFiles. The uploaded sst files will remain > on hdfs. > [code > link|https://github.com/apache/flink/blob/49146cdec41467445de5fc81f100585142728bdf/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/snapshot/RocksIncrementalSnapshotStrategy.java#L328] > h2. Solution: > Use the CloseableRegistry as the tmpResourcesRegistry. If the async phase > fails, the tmpResourcesRegistry will clean up these temporary resources. > > POC code: > [https://github.com/1996fanrui/flink/commit/86a456b2bbdad6c032bf8e0bff71c4824abb3ce1] > > > !image-2022-12-20-18-45-32-948.png|width=1114,height=442! > !image-2022-12-20-18-47-42-385.png|width=1332,height=552! > -- This message was sent by Atlassian Jira (v8.20.10#820010)
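A rough Java sketch of the _metadata-based analysis described above: load the latest checkpoint's metadata, collect every file it still references, and diff that against the contents of the shared/ directory. `collectReferencedPaths` is a hypothetical helper (the traversal depends on the concrete state-handle classes in your Flink version), and `Checkpoints#loadCheckpointMetadata` is internal API, so treat this as an illustration rather than a supported tool:

{code:java}
import org.apache.flink.core.fs.FileStatus;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;
import org.apache.flink.runtime.checkpoint.Checkpoints;
import org.apache.flink.runtime.checkpoint.metadata.CheckpointMetadata;

import java.io.DataInputStream;
import java.util.HashSet;
import java.util.Set;

public class OrphanedSstFinder {

    public static void main(String[] args) throws Exception {
        Path metadataFile = new Path(args[0]); // e.g. .../chk-1234/_metadata
        Path sharedDir = new Path(args[1]);    // e.g. .../shared

        // Deserialize the checkpoint metadata (internal API, may change).
        CheckpointMetadata metadata;
        try (DataInputStream in =
                new DataInputStream(metadataFile.getFileSystem().open(metadataFile))) {
            metadata = Checkpoints.loadCheckpointMetadata(
                    in, OrphanedSstFinder.class.getClassLoader(), metadataFile.toString());
        }

        // Hypothetical helper: walk the operator/subtask state handles in the
        // metadata and collect every remote file they reference. The concrete
        // traversal depends on the state-handle classes of your Flink version.
        Set<Path> referenced = collectReferencedPaths(metadata);

        // Anything in shared/ that the retained checkpoint does not reference
        // is a candidate leak.
        FileSystem fs = sharedDir.getFileSystem();
        Set<Path> present = new HashSet<>();
        for (FileStatus status : fs.listStatus(sharedDir)) {
            present.add(status.getPath());
        }
        present.removeAll(referenced);
        present.forEach(p -> System.out.println("possibly orphaned: " + p));
    }

    private static Set<Path> collectReferencedPaths(CheckpointMetadata metadata) {
        throw new UnsupportedOperationException("sketch only, see comment above");
    }
}
{code}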
[GitHub] [flink] WencongLiu commented on a diff in pull request #22341: [FLINK-27204][flink-runtime] Refract FileSystemJobResultStore to execute I/O operations on the ioExecutor
WencongLiu commented on code in PR #22341: URL: https://github.com/apache/flink/pull/22341#discussion_r1302368927 ## flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/Dispatcher.java: ## @@ -513,36 +514,39 @@ private void stopDispatcherServices() throws Exception { public CompletableFuture submitJob(JobGraph jobGraph, Time timeout) { final JobID jobID = jobGraph.getJobID(); log.info("Received JobGraph submission '{}' ({}).", jobGraph.getName(), jobID); - -try { -if (isInGloballyTerminalState(jobID)) { -log.warn( -"Ignoring JobGraph submission '{}' ({}) because the job already reached a globally-terminal state (i.e. {}) in a previous execution.", -jobGraph.getName(), -jobID, -Arrays.stream(JobStatus.values()) -.filter(JobStatus::isGloballyTerminalState) -.map(JobStatus::name) -.collect(Collectors.joining(", "))); -return FutureUtils.completedExceptionally( - DuplicateJobSubmissionException.ofGloballyTerminated(jobID)); -} else if (jobManagerRunnerRegistry.isRegistered(jobID) -|| submittedAndWaitingTerminationJobIDs.contains(jobID)) { -// job with the given jobID is not terminated, yet -return FutureUtils.completedExceptionally( -DuplicateJobSubmissionException.of(jobID)); -} else if (isPartialResourceConfigured(jobGraph)) { -return FutureUtils.completedExceptionally( -new JobSubmissionException( -jobID, -"Currently jobs is not supported if parts of the vertices have " -+ "resources configured. The limitation will be removed in future versions.")); -} else { -return internalSubmitJob(jobGraph); -} -} catch (FlinkException e) { -return FutureUtils.completedExceptionally(e); -} +return isInGloballyTerminalState(jobID) +.thenCompose( +isTerminated -> { +if (isTerminated) { +log.warn( +"Ignoring JobGraph submission '{}' ({}) because the job already " ++ "reached a globally-terminal state (i.e. {}) in a " ++ "previous execution.", +jobGraph.getName(), +jobID, +Arrays.stream(JobStatus.values()) + .filter(JobStatus::isGloballyTerminalState) +.map(JobStatus::name) +.collect(Collectors.joining(", "))); +return FutureUtils.completedExceptionally( + DuplicateJobSubmissionException.ofGloballyTerminated( +jobID)); +} else if (jobManagerRunnerRegistry.isRegistered(jobID) Review Comment: Thanks for your detailed explanation! 😄 I've modified the `thenCompose` to `thenComposeAsync`. ## flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/Dispatcher.java: ## @@ -513,36 +514,39 @@ private void stopDispatcherServices() throws Exception { public CompletableFuture submitJob(JobGraph jobGraph, Time timeout) { final JobID jobID = jobGraph.getJobID(); log.info("Received JobGraph submission '{}' ({}).", jobGraph.getName(), jobID); - -try { -if (isInGloballyTerminalState(jobID)) { -log.warn( -"Ignoring JobGraph submission '{}' ({}) because the job already reached a globally-terminal state (i.e. {}) in a previous execution.", -jobGraph.getName(), -jobID, -Arrays.stream(JobStatus.values()) -.filter(JobStatus::isGloballyTerminalState) -.map(JobStatus::name) -.collect(Collectors.joining(", "))); -return FutureUtils.completedExceptionally( - DuplicateJobSubmissionException.ofGloballyTerminated(jobID)); -} else if (jobManagerRunnerRegistry.isRegistered(jobID) -|| submittedAndWaitingTerminationJobIDs.contains(jobID)) { -// job with the given jobID is not terminated, yet -retur
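For readers following the thread: the `thenCompose` vs. `thenComposeAsync` distinction matters because plain `thenCompose` may run its function on whichever thread completes the upstream future (here, potentially the ioExecutor), while `thenComposeAsync` re-dispatches it to a caller-supplied executor, such as the Dispatcher's main thread. A minimal standalone illustration (executor names are placeholders, not the actual Dispatcher wiring):

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ComposeThreading {
    public static void main(String[] args) throws Exception {
        ExecutorService ioExecutor =
                Executors.newSingleThreadExecutor(r -> new Thread(r, "io-executor"));
        ExecutorService mainThread =
                Executors.newSingleThreadExecutor(r -> new Thread(r, "main-thread"));

        CompletableFuture<Boolean> isTerminated =
                CompletableFuture.supplyAsync(() -> false, ioExecutor);

        // thenCompose: the function may run on "io-executor", which is unsafe
        // for state that must only be touched from a single main thread.
        isTerminated.thenCompose(terminated -> {
            System.out.println("thenCompose on " + Thread.currentThread().getName());
            return CompletableFuture.completedFuture(terminated);
        }).get();

        // thenComposeAsync: the function is re-dispatched to the given
        // executor, restoring single-threaded access to the guarded state.
        isTerminated.thenComposeAsync(terminated -> {
            System.out.println("thenComposeAsync on " + Thread.currentThread().getName());
            return CompletableFuture.completedFuture(terminated);
        }, mainThread).get();

        ioExecutor.shutdown();
        mainThread.shutdown();
    }
}
{code}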
[jira] [Updated] (FLINK-32909) The jobmanager.sh pass arguments failed
[ https://issues.apache.org/jira/browse/FLINK-32909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Wu updated FLINK-32909: Description: I'm trying to use the jobmanager.sh script to create a jobmanager instance manually, and I need to pass arguments to the script dynamically, rather than through flink-conf.yaml. But I didn't succeed. When I commented out all configurations in flink-conf.yaml and typed a command like: {code:java} ./bin/jobmanager.sh start -D jobmanager.memory.flink.size=1024m -D jobmanager.rpc.address=xx.xx.xx.xx -D jobmanager.rpc.port=xxx -D jobmanager.bind-host=0.0.0.0 -D rest.address=xx.xx.xx.xx -D rest.port=xxx -D rest.bind-address=0.0.0.0{code} I got the errors below: {code:java} [ERROR] The execution result is empty. [ERROR] Could not get JVM parameters and dynamic configurations properly. [ERROR] Raw output from BashJavaUtils: WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance. Exception in thread "main" org.apache.flink.configuration.IllegalConfigurationException: JobManager memory configuration failed: Either required fine-grained memory (jobmanager.memory.heap.size), or Total Flink Memory size (Key: 'jobmanager.memory.flink.size' , default: null (fallback keys: [])), or Total Process Memory size (Key: 'jobmanager.memory.process.size' , default: null (fallback keys: [])) need to be configured explicitly. at org.apache.flink.runtime.jobmanager.JobManagerProcessUtils.processSpecFromConfigWithNewOptionToInterpretLegacyHeap(JobManagerProcessUtils.java:78) at org.apache.flink.runtime.util.bash.BashJavaUtils.getJmResourceParams(BashJavaUtils.java:98) at org.apache.flink.runtime.util.bash.BashJavaUtils.runCommand(BashJavaUtils.java:69) at org.apache.flink.runtime.util.bash.BashJavaUtils.main(BashJavaUtils.java:56) Caused by: org.apache.flink.configuration.IllegalConfigurationException: Either required fine-grained memory (jobmanager.memory.heap.size), or Total Flink Memory size (Key: 'jobmanager.memory.flink.size' , default: null (fallback keys: [])), or Total Process Memory size (Key: 'jobmanager.memory.process.size' , default: null (fallback keys: [])) need to be configured explicitly. at org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.failBecauseRequiredOptionsNotConfigured(ProcessMemoryUtils.java:129) at org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.memoryProcessSpecFromConfig(ProcessMemoryUtils.java:86) at org.apache.flink.runtime.jobmanager.JobManagerProcessUtils.processSpecFromConfig(JobManagerProcessUtils.java:83) at org.apache.flink.runtime.jobmanager.JobManagerProcessUtils.processSpecFromConfigWithNewOptionToInterpretLegacyHeap(JobM {code} It seems to ask me to configure memory for the jobmanager instance explicitly, but I had already passed the jobmanager.memory.flink.size parameter. So I debugged the script and found a spelling error in the jobmanager.sh script at line 54: {code:java} parseJmArgsAndExportLogs "${ARGS[@]}" {code} the uppercase "$\{ARGS[@]}" is the wrong variable name in this context, causing an empty string to be passed to the function. I changed it to "$\{args[@]}" and it works fine. was: I' m trying to use the jobmanager.sh script to create a jobmanager instance manually, and I need to pass arugments to the script dynamically, rather than through flink-conf.yaml.
But I found that I didn't succeed in doing that when I commented out all configurations in the flink-conf.yaml, I typed command like: {code:java} ./bin/jobmanager.sh start -D jobmanager.memory.flink.size=1024m -D jobmanager.rpc.address=xx.xx.xx.xx -D jobmanager.rpc.port=xxx -D jobmanager.bind-host=0.0.0.0 -Drest.address=xx.xx.xx.xx -Drest.port=xxx -Drest.bind-address=0.0.0.0{code} but I got some errors below: {code:java} [ERROR] The execution result is empty. [ERROR] Could not get JVM parameters and dynamic configurations properly. [ERROR] Raw output from BashJavaUtils: WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance. Exception in thread "main" org.apache.flink.configuration.IllegalConfigurationException: JobManager memory configuration failed: Either required fine-grained memory (jobmanager.memory.heap.size), or Total Flink Memory size (Key: 'jobmanager.memory.flink.size' , default: null (fallback keys: [])), or Total Process Memory size (Key: 'jobmanager.memory.process.size' , default: null (fallback keys: [])) need to be configured explicitly. at org.apache.flink.runtime.jobmanager.JobManagerProcessUtils.processSpecFromConfigWithNewOptionToInterpretLegacyHeap(JobManagerProcessUtils.java:78) at org.apache.flink.runtime.util.bash.BashJavaUtils.getJmResourceParams(BashJavaUtils.java:98) at org.apache.flink.runti
[GitHub] [flink] flinkbot commented on pull request #23263: [FLINK-32671][docs] Document Externalized Declarative Resource Management
flinkbot commented on PR #23263: URL: https://github.com/apache/flink/pull/23263#issuecomment-1689128611 ## CI report: * 194010966689800b21dee2f182ec5162eccd9a5c UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-32671) Document Externalized Declarative Resource Management
[ https://issues.apache.org/jira/browse/FLINK-32671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-32671: --- Labels: pull-request-available (was: ) > Document Externalized Declarative Resource Management > - > > Key: FLINK-32671 > URL: https://issues.apache.org/jira/browse/FLINK-32671 > Project: Flink > Issue Type: Sub-task >Reporter: Konstantin Knauf >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink] czy006 opened a new pull request, #23263: [FLINK-32671][docs] Document Externalized Declarative Resource Management
czy006 opened a new pull request, #23263: URL: https://github.com/apache/flink/pull/23263 ## What is the purpose of the change Add documentation for Externalized Declarative Resource Management (FLIP-291) ## Brief change log - Externalized Declarative Resource Management (FLIP-291) docs ## Verifying this change The changes only affect the JavaDocs of the Adaptive Scheduler, adding a description of Externalized Declarative Resource Management ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): no - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: no - The serializers: no - The runtime per-record code paths (performance sensitive): no - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no - The S3 file system connector: no ## Documentation - Does this pull request introduce a new feature? no -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
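For readers unfamiliar with FLIP-291: it allows re-declaring a running job's per-vertex parallelism bounds through the REST API, with the adaptive scheduler rescaling in place. A hedged sketch of such a call; the endpoint path and JSON shape below are my reading of the FLIP, so verify them against the merged documentation:

{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DeclareResourceRequirements {
    public static void main(String[] args) throws Exception {
        String jobId = "<job-id>"; // placeholder, not a real job id
        // Assumed endpoint from FLIP-291; verify against the merged docs.
        String url = "http://localhost:8081/jobs/" + jobId + "/resource-requirements";
        // Assumed payload shape: vertex id -> parallelism bounds.
        String body =
                "{\"<vertex-id>\": {\"parallelism\": {\"lowerBound\": 1, \"upperBound\": 4}}}";

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
{code}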
[jira] [Assigned] (FLINK-32926) Create a release branch
[ https://issues.apache.org/jira/browse/FLINK-32926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Qingsheng Ren reassigned FLINK-32926: - Assignee: Jing Ge > Create a release branch > --- > > Key: FLINK-32926 > URL: https://issues.apache.org/jira/browse/FLINK-32926 > Project: Flink > Issue Type: Sub-task >Reporter: Sergey Nuyanzin >Assignee: Jing Ge >Priority: Major > > If you are doing a new minor release, you need to update the Flink version in > the following repositories and the [AzureCI project > configuration|https://dev.azure.com/apache-flink/apache-flink/]: > * [apache/flink|https://github.com/apache/flink] > * [apache/flink-docker|https://github.com/apache/flink-docker] > * [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks] > Patch releases don't require these repositories to be touched. Simply > check out the already existing branch for that version: > {code:java} > $ git checkout release-$SHORT_RELEASE_VERSION > {code} > h4. Flink repository > Create a branch for the new version that we want to release before updating > the master branch to the next development version: > {code:bash} > $ cd ./tools > tools $ releasing/create_snapshot_branch.sh > tools $ git checkout master > tools $ OLD_VERSION=$CURRENT_SNAPSHOT_VERSION > NEW_VERSION=$NEXT_SNAPSHOT_VERSION releasing/update_branch_version.sh > {code} > In the \{{master}} branch, add a new value (e.g. \{{v1_16("1.16")}}) to > [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java] > as the last entry: > {code:java} > // ... > v1_12("1.12"), > v1_13("1.13"), > v1_14("1.14"), > v1_15("1.15"), > v1_16("1.16"); > {code} > The newly created branch and updated \{{master}} branch need to be pushed to > the official repository. > h4. Flink Docker Repository > Afterwards fork off from \{{dev-master}} a \{{dev-x.y}} branch in the > [apache/flink-docker|https://github.com/apache/flink-docker] repository. Make > sure that > [apache/flink-docker:.github/workflows/ci.yml|https://github.com/apache/flink-docker/blob/dev-master/.github/workflows/ci.yml] > points to the correct snapshot version; for \{{dev-x.y}} it should point to > \{{x.y-SNAPSHOT}}, while for \{{dev-master}} it should point to the most > recent snapshot version (\{{$NEXT_SNAPSHOT_VERSION}}). > After pushing the new minor release branch, as the last step you should also > update the documentation workflow to build the documentation for the new > release branch as well. Check [Managing > Documentation|https://cwiki.apache.org/confluence/display/FLINK/Managing+Documentation] > for details on how to do that. You may also want to manually trigger a build > to make the changes visible as soon as possible. > h4. Flink Benchmark Repository > First of all, check out the \{{master}} branch to a \{{dev-x.y}} branch in > [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks], so that > we can have a branch named \{{dev-x.y}} which can be built on top of > (\{{$CURRENT_SNAPSHOT_VERSION}}). > Then, inside the repository you need to manually update the > \{{flink.version}} property inside the parent *pom.xml* file. It should point > to the most recent snapshot version (\{{$NEXT_SNAPSHOT_VERSION}}). For > example: > {code:xml} > 1.18-SNAPSHOT > {code} > h4. AzureCI Project Configuration > The new release branch needs to be configured within AzureCI to make Azure > aware of the new release branch. This matter can only be handled by Ververica > employees since they own the AzureCI setup. > > > h3. Expectations (Minor Version only if not stated otherwise) > * Release branch has been created and pushed > * Changes on the new release branch are picked up by [Azure > CI|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary] > * \{{master}} branch has the version information updated to the new version > (check pom.xml files and the > [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java] > enum) > * New version is added to the > [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java] > enum. > * Make sure [flink-docker|https://github.com/apache/flink-docker/] has a > \{{dev-x.y}} branch and docker e2e tests run against this branch in the > corresponding Apache Flink release branch (see > [apache/flink:flink-end-to-end-tests/test-scripts/common_docker.sh:51|https://github.com/apache/flink/blob/master/flink-end-to-end-tests/test-scripts/
[jira] [Commented] (FLINK-28866) Use DDL instead of legacy method to register the test source in JoinITCase
[ https://issues.apache.org/jira/browse/FLINK-28866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757748#comment-17757748 ] lincoln lee commented on FLINK-28866: - [~xu_shuai_] assigned to you. > Use DDL instead of legacy method to register the test source in JoinITCase > -- > > Key: FLINK-28866 > URL: https://issues.apache.org/jira/browse/FLINK-28866 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Planner >Reporter: godfrey he >Assignee: Shuai Xu >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (FLINK-28866) Use DDL instead of legacy method to register the test source in JoinITCase
[ https://issues.apache.org/jira/browse/FLINK-28866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lincoln lee reassigned FLINK-28866: --- Assignee: Shuai Xu > Use DDL instead of legacy method to register the test source in JoinITCase > -- > > Key: FLINK-28866 > URL: https://issues.apache.org/jira/browse/FLINK-28866 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Planner >Reporter: godfrey he >Assignee: Shuai Xu >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-32941) Table API Bridge `toDataStream(targetDataType)` function not working correctly for Java List
[ https://issues.apache.org/jira/browse/FLINK-32941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tan Kim updated FLINK-32941: Description: When the code below is executed, only the first element of the list is assigned to the List variable in MyPoJo repeatedly. {code:java} case class Item( name: String ) case class MyPojo( @DataTypeHist("RAW") items: java.util.List[Item] ) ... tableEnv .sqlQuery("select items from table") .toDataStream(DataTypes.of(classOf[MyPoJo])) {code} For example, if you have the following list coming in as input, ["a","b","c"] The value actually stored in MyPojo's list variable is ["a","a","a"] was: When the code below is executed, only the first element of the list is assigned to the List variable in MyPoJo repeatedly. {code:java} case class Item( id: String, name: String ) case class MyPojo( @DataTypeHist("RAW") items: java.util.List[Item] ) ... tableEnv .sqlQuery("select items from table") .toDataStream(DataTypes.of(classOf[MyPoJo])) {code} For example, if you have the following list coming in as input, ["a","b","c"] The value actually stored in MyPojo's list variable is ["a","a","a"] > Table API Bridge `toDataStream(targetDataType)` function not working > correctly for Java List > > > Key: FLINK-32941 > URL: https://issues.apache.org/jira/browse/FLINK-32941 > Project: Flink > Issue Type: Bug > Components: Table SQL / API >Reporter: Tan Kim >Priority: Major > Labels: bridge > > When the code below is executed, only the first element of the list is > assigned to the List variable in MyPoJo repeatedly. > {code:java} > case class Item( > name: String > ) > case class MyPojo( > @DataTypeHist("RAW") items: java.util.List[Item] > ) > ... > tableEnv > .sqlQuery("select items from table") > .toDataStream(DataTypes.of(classOf[MyPoJo])) {code} > > For example, if you have the following list coming in as input, > ["a","b","c"] > The value actually stored in MyPojo's list variable is > ["a","a","a"] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-32941) Table API Bridge `toDataStream(targetDataType)` function not working correctly for Java List
[ https://issues.apache.org/jira/browse/FLINK-32941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tan Kim updated FLINK-32941: Description: When the code below is executed, only the first element of the list is assigned to the List variable in MyPoJo repeatedly. {code:java} case class Item( id: String, name: String ) case class MyPojo( @DataTypeHist("RAW") items: java.util.List[Item] ) ... tableEnv .sqlQuery("select items from table") .toDataStream(DataTypes.of(classOf[MyPoJo])) {code} For example, if you have the following list coming in as input, ["a","b","c"] The value actually stored in MyPojo's list variable is ["a","a","a"] was: When the code below is executed, only the first element of the list is assigned to the List variable in MyPoJo repeatedly. {code:java} case class Item( id: String, name: String ) case class MyPojo( @DataTypeHist("RAW") items: java.util.List[Item] ) ... tableEnv .sqlQuery("select items from table") .toDataStream(DataTypes.of(classOf[MyPoJo])) {code} For example, if you have the following list coming in as input, ["a","b","c"] The value actually stored in MyPojo's list variable is ["a","a","a"] > Table API Bridge `toDataStream(targetDataType)` function not working > correctly for Java List > > > Key: FLINK-32941 > URL: https://issues.apache.org/jira/browse/FLINK-32941 > Project: Flink > Issue Type: Bug > Components: Table SQL / API >Reporter: Tan Kim >Priority: Major > Labels: bridge > > When the code below is executed, only the first element of the list is > assigned to the List variable in MyPoJo repeatedly. > {code:java} > case class Item( > id: String, > name: String > ) > case class MyPojo( > @DataTypeHist("RAW") items: java.util.List[Item] > ) > ... > tableEnv > .sqlQuery("select items from table") > .toDataStream(DataTypes.of(classOf[MyPoJo])) {code} > > For example, if you have the following list coming in as input, > ["a","b","c"] > The value actually stored in MyPojo's list variable is > ["a","a","a"] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-32941) Table API Bridge `toDataStream(targetDataType)` function not working correctly for Java List
[ https://issues.apache.org/jira/browse/FLINK-32941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tan Kim updated FLINK-32941: Labels: bridge (was: ) > Table API Bridge `toDataStream(targetDataType)` function not working > correctly for Java List > > > Key: FLINK-32941 > URL: https://issues.apache.org/jira/browse/FLINK-32941 > Project: Flink > Issue Type: Bug > Components: Table SQL / API >Reporter: Tan Kim >Priority: Major > Labels: bridge > > > When the code below is executed, only the first element of the list is > assigned to the List variable in MyPoJo repeatedly. > {code:java} > case class Item( > id: String, > name: String > ) > case class MyPojo( > @DataTypeHist("RAW") items: java.util.List[Item] > ) > ... > tableEnv > .sqlQuery("select items from table") > .toDataStream(DataTypes.of(classOf[MyPoJo])) {code} > > For example, if you have the following list coming in as input, > ["a","b","c"] > The value actually stored in MyPojo's list variable is > ["a","a","a"] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-32941) Table API Bridge `toDataStream(targetDataType)` function not working correctly for Java List
[ https://issues.apache.org/jira/browse/FLINK-32941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tan Kim updated FLINK-32941: Component/s: Table SQL / API > Table API Bridge `toDataStream(targetDataType)` function not working > correctly for Java List > > > Key: FLINK-32941 > URL: https://issues.apache.org/jira/browse/FLINK-32941 > Project: Flink > Issue Type: Bug > Components: Table SQL / API >Reporter: Tan Kim >Priority: Major > > > When the code below is executed, only the first element of the list is > assigned to the List variable in MyPoJo repeatedly. > {code:java} > case class Item( > id: String, > name: String > ) > case class MyPojo( > @DataTypeHist("RAW") items: java.util.List[Item] > ) > ... > tableEnv > .sqlQuery("select items from table") > .toDataStream(DataTypes.of(classOf[MyPoJo])) {code} > > For example, if you have the following list coming in as input, > ["a","b","c"] > The value actually stored in MyPojo's list variable is > ["a","a","a"] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-32941) Table API Bridge `toDataStream(targetDataType)` function not working correctly for Java List
Tan Kim created FLINK-32941: --- Summary: Table API Bridge `toDataStream(targetDataType)` function not working correctly for Java List Key: FLINK-32941 URL: https://issues.apache.org/jira/browse/FLINK-32941 Project: Flink Issue Type: Bug Reporter: Tan Kim When the code below is executed, only the first element of the list is assigned to the List variable in MyPoJo repeatedly. {code:java} case class Item( id: String, name: String ) case class MyPojo( @DataTypeHist("RAW") items: java.util.List[Item] ) ... tableEnv .sqlQuery("select items from table") .toDataStream(DataTypes.of(classOf[MyPoJo])) {code} For example, if you have the following list coming in as input, ["a","b","c"] The value actually stored in MyPojo's list variable is ["a","a","a"] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-30461) Some rocksdb sst files will remain forever
[ https://issues.apache.org/jira/browse/FLINK-30461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17757746#comment-17757746 ] Mason Chen commented on FLINK-30461: [~fanrui] Thanks for fixing this! Just curious, how did you figure out that some SST files were not being cleaned up? Are there any tricks for discovering the issue other than reading the code? I recently hit this issue too, but all I saw was continuous SST size growth in the RocksDB metrics. > Some rocksdb sst files will remain forever > -- > > Key: FLINK-30461 > URL: https://issues.apache.org/jira/browse/FLINK-30461 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing, Runtime / State Backends >Affects Versions: 1.16.0, 1.17.0, 1.15.3 >Reporter: Rui Fan >Assignee: Rui Fan >Priority: Major > Labels: pull-request-available > Fix For: 1.17.0, 1.15.4, 1.16.2 > > Attachments: image-2022-12-20-18-45-32-948.png, > image-2022-12-20-18-47-42-385.png, screenshot-1.png > > > In rocksdb incremental checkpoint mode, if some files have been uploaded and > others have not when the checkpoint is canceled due to a checkpoint timeout, > the uploaded files will remain. > > h2. Impact: > The shared directory of a flink job has more than 1 million files. It > exceeded the hdfs upper limit, causing new files not to be written. > However, only 50k files are actually referenced; the other 950k files should > be cleaned up. > !https://user-images.githubusercontent.com/38427477/207588272-dda7ba69-c84c-4372-aeb4-c54657b9b956.png|width=1962,height=364! > h2. Root cause: > If an exception is thrown during the checkpoint async phase, flink will clean > up metaStateHandle, miscFiles and sstFiles. > However, sst files are only added to sstFiles once all of them have been > uploaded. If some sst files have been uploaded and others are still being > uploaded when the checkpoint is canceled due to a checkpoint timeout, none of > the sst files will be added to sstFiles. The uploaded sst files will remain > on hdfs. > [code > link|https://github.com/apache/flink/blob/49146cdec41467445de5fc81f100585142728bdf/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/snapshot/RocksIncrementalSnapshotStrategy.java#L328] > h2. Solution: > Use the CloseableRegistry as the tmpResourcesRegistry. If the async phase > fails, the tmpResourcesRegistry will clean up these temporary resources. > > POC code: > [https://github.com/1996fanrui/flink/commit/86a456b2bbdad6c032bf8e0bff71c4824abb3ce1] > > > !image-2022-12-20-18-45-32-948.png|width=1114,height=442! > !image-2022-12-20-18-47-42-385.png|width=1332,height=552! > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-32940) Support projection pushdown to table source for column projections through UDTF
Venkata krishnan Sowrirajan created FLINK-32940: --- Summary: Support projection pushdown to table source for column projections through UDTF Key: FLINK-32940 URL: https://issues.apache.org/jira/browse/FLINK-32940 Project: Flink Issue Type: Bug Components: Table SQL / Planner Reporter: Venkata krishnan Sowrirajan Currently, Flink doesn't push down columns projected through a UDTF like _UNNEST_ to the table source. For example: {code:java} select t1.name, t2.ename from DEPT_NESTED as t1, unnest(t1.employees) as t2{code} For the above SQL, Flink projects all the columns of DEPT_NESTED rather than only _name_ and _employees_. If the table source supports nested-field column projection, ideally it should project only _t1.employees.ename_ from the table source. -- This message was sent by Atlassian Jira (v8.20.10#820010)
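For context, a source opts into this optimization through Flink's `SupportsProjectionPushDown` ability interface; with nested projection enabled, the planner hands the source index paths into the produced row type. A rough sketch (a real connector would also implement `DynamicTableSource`/`ScanTableSource`, and the two-argument `applyProjection` shown here is my understanding of the variant in recent Flink versions):

{code:java}
import org.apache.flink.table.connector.source.abilities.SupportsProjectionPushDown;
import org.apache.flink.table.types.DataType;

// Illustrative sketch, not a working connector.
public class DeptNestedSourceSketch implements SupportsProjectionPushDown {

    private int[][] projectedFields;
    private DataType producedDataType;

    @Override
    public boolean supportsNestedProjection() {
        // Returning true lets the planner push paths like employees.ename,
        // not just whole top-level columns.
        return true;
    }

    @Override
    public void applyProjection(int[][] projectedFields, DataType producedDataType) {
        // For the UNNEST query above, the planner would ideally pass index
        // paths such as {{0}, {2, 1}}: column 0 (name) and field 1 (ename)
        // inside column 2 (employees). Exact indices depend on the schema.
        this.projectedFields = projectedFields;
        this.producedDataType = producedDataType;
    }
}
{code}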
[GitHub] [flink] flinkbot commented on pull request #23262: [hotfix] Apply spotless
flinkbot commented on PR #23262: URL: https://github.com/apache/flink/pull/23262#issuecomment-1689052696 ## CI report: * ad08a5f854e31179019dbeaf87dca92f4e44ba7f UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] snuyanzin opened a new pull request, #23262: [hotfix] Apply spotless
snuyanzin opened a new pull request, #23262: URL: https://github.com/apache/flink/pull/23262 ## What is the purpose of the change The PR aims to fix the failing build ## Verifying this change This change is a trivial rework / code cleanup without any test coverage. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (no) - If yes, how is the feature documented? (not applicable) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-21871) Support watermark for Hive and Filesystem streaming source
[ https://issues.apache.org/jira/browse/FLINK-21871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-21871: --- Labels: auto-deprioritized-major auto-unassigned pull-request-available (was: auto-unassigned pull-request-available stale-major) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion. > Support watermark for Hive and Filesystem streaming source > -- > > Key: FLINK-21871 > URL: https://issues.apache.org/jira/browse/FLINK-21871 > Project: Flink > Issue Type: New Feature > Components: Connectors / FileSystem, Connectors / Hive, Table SQL / > API >Reporter: Jark Wu >Priority: Minor > Labels: auto-deprioritized-major, auto-unassigned, > pull-request-available > > Hive and Filesystem already support streaming sources. However, they don't > support watermarks on the source. That means users can't leverage the > streaming source to perform Flink's powerful streaming analysis, e.g. > window aggregates, interval joins, and so on. > In order to let more Hive users leverage Flink for streaming analysis, and > also to cooperate with the new optimized window-TVF operations (FLIP-145), > we need to support watermarks for Hive and Filesystem. > h2. How to emit watermark in Hive and Filesystem > Fact data in Hive is usually partitioned by date/time, e.g. > {{pt_day=2021-03-19, pt_hour=10}}. In this case, when the data of partition > {{pt_day=2021-03-19, pt_hour=10}} is emitted, we know that all the data > before {{2021-03-19 11:00:00}} has arrived, so we can emit a watermark value > of {{2021-03-19 11:00:00}}. We call this a partition watermark. > The partition watermark is much better than a record watermark (a watermark > extracted from records, e.g. {{ts - INTERVAL '1' MINUTE}}). In the above > example, if we are using a partition watermark, the window of [10:00, 11:00) > will be triggered when pt_hour=10 is finished. However, if we are using a > record watermark, the window of [10:00, 11:00) will only be triggered when > pt_hour=11 arrives, which gives the pipeline one more partition of delay. > Therefore, we first focus on supporting partition watermarks for Hive and > Filesystem. > h2. Example > In order to support such watermarks, we propose using the following DDL to > define a Hive table with a watermark defined: > {code:sql} > -- using hive dialect > CREATE TABLE hive_table ( > x int, > y string, > z int, > ts timestamp, > WATERMARK FOR ts AS SOURCE_WATERMARK > ) PARTITIONED BY (pt_day string, pt_hour string) > TBLPROPERTIES ( > 'streaming-source.enable'='true', > 'streaming-source.monitor-interval'='1s', > 'partition.time-extractor.timestamp-pattern'='$pt_day $pt_hour:00:00', > 'partition.time-interval'='1h' > ); > -- window aggregate on the hive table > SELECT window_start, window_end, COUNT(*), MAX(y), SUM(z) > FROM TABLE( >TUMBLE(TABLE hive_table, DESCRIPTOR(ts), INTERVAL '1' HOUR)) > GROUP BY window_start, window_end; > {code} > For the filesystem connector, the DDL can be: > {code:sql} > CREATE TABLE fs_table ( > x int, > y string, > z int, > ts TIMESTAMP(3), > pt_day string, > pt_hour string, > WATERMARK FOR ts AS SOURCE_WATERMARK > ) PARTITIONED BY (pt_day, pt_hour) > WITH ( > 'connector' = 'filesystem', > 'path' = '/path/to/file', > 'format' = 'parquet', > 'streaming-source.enable'='true', > 'streaming-source.monitor-interval'='1s', > 'partition.time-extractor.timestamp-pattern'='$pt_day $pt_hour:00:00', > 'partition.time-interval'='1h' > ); > {code} > I will explain the new function/configuration. > h2. SOURCE_WATERMARK built-in function > FLIP-66[1] proposed a {{SYSTEM_WATERMARK}} function for watermarks preserved > in the underlying source system. > However, the SYSTEM prefix sounds like a Flink-system-generated value, when > actually this is a source-system-generated value. > So I propose to use {{SOURCE_WATERMARK}} instead; this also keeps the concept > aligned with the API of > {{org.apache.flink.table.descriptors.Rowtime#watermarksFromSource}}. > h2. Table Options for Watermark > - {{partition.time-extractor.timestamp-pattern}}: this option already exists. > It is used to extract/convert the partition value to a timestamp value. > - {{partition.time-interval}}: this is a new option. It indicates the minimal > time interval of the partitions. It's used to calculate the correct watermark > when a partition is finished. The watermark = partition-timestamp + > time-interval. > h2. How to suppo
[jira] [Updated] (FLINK-22366) HiveSinkCompactionITCase fails on azure
[ https://issues.apache.org/jira/browse/FLINK-22366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-22366: --- Labels: auto-deprioritized-critical auto-deprioritized-major test-stability (was: auto-deprioritized-critical stale-major test-stability) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion. > HiveSinkCompactionITCase fails on azure > --- > > Key: FLINK-22366 > URL: https://issues.apache.org/jira/browse/FLINK-22366 > Project: Flink > Issue Type: Bug > Components: Table SQL / Ecosystem >Affects Versions: 1.13.0, 1.12.5 >Reporter: Dawid Wysakowicz >Priority: Minor > Labels: auto-deprioritized-critical, auto-deprioritized-major, > test-stability > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16818&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=e7f339b2-a7c3-57d9-00af-3712d4b15354&l=23420 > {code} > [ERROR] testNonPartition[format = > sequencefile](org.apache.flink.connectors.hive.HiveSinkCompactionITCase) > Time elapsed: 4.999 s <<< FAILURE! > Apr 19 22:25:10 java.lang.AssertionError: expected:<[+I[0, 0, 0], +I[0, 0, > 0], +I[1, 1, 1], +I[1, 1, 1], +I[2, 2, 2], +I[2, 2, 2], +I[3, 3, 3], +I[3, 3, > 3], +I[4, 4, 4], +I[4, 4, 4], +I[5, 5, 5], +I[5, 5, 5], +I[6, 6, 6], +I[6, 6, > 6], +I[7, 7, 7], +I[7, 7, 7], +I[8, 8, 8], +I[8, 8, 8], +I[9, 9, 9], +I[9, 9, > 9], +I[10, 0, 0], +I[10, 0, 0], +I[11, 1, 1], +I[11, 1, 1], +I[12, 2, 2], > +I[12, 2, 2], +I[13, 3, 3], +I[13, 3, 3], +I[14, 4, 4], +I[14, 4, 4], +I[15, > 5, 5], +I[15, 5, 5], +I[16, 6, 6], +I[16, 6, 6], +I[17, 7, 7], +I[17, 7, 7], > +I[18, 8, 8], +I[18, 8, 8], +I[19, 9, 9], +I[19, 9, 9], +I[20, 0, 0], +I[20, > 0, 0], +I[21, 1, 1], +I[21, 1, 1], +I[22, 2, 2], +I[22, 2, 2], +I[23, 3, 3], > +I[23, 3, 3], +I[24, 4, 4], +I[24, 4, 4], +I[25, 5, 5], +I[25, 5, 5], +I[26, > 6, 6], +I[26, 6, 6], +I[27, 7, 7], +I[27, 7, 7], +I[28, 8, 8], +I[28, 8, 8], > +I[29, 9, 9], +I[29, 9, 9], +I[30, 0, 0], +I[30, 0, 0], +I[31, 1, 1], +I[31, > 1, 1], +I[32, 2, 2], +I[32, 2, 2], +I[33, 3, 3], +I[33, 3, 3], +I[34, 4, 4], > +I[34, 4, 4], +I[35, 5, 5], +I[35, 5, 5], +I[36, 6, 6], +I[36, 6, 6], +I[37, > 7, 7], +I[37, 7, 7], +I[38, 8, 8], +I[38, 8, 8], +I[39, 9, 9], +I[39, 9, 9], > +I[40, 0, 0], +I[40, 0, 0], +I[41, 1, 1], +I[41, 1, 1], +I[42, 2, 2], +I[42, > 2, 2], +I[43, 3, 3], +I[43, 3, 3], +I[44, 4, 4], +I[44, 4, 4], +I[45, 5, 5], > +I[45, 5, 5], +I[46, 6, 6], +I[46, 6, 6], +I[47, 7, 7], +I[47, 7, 7], +I[48, > 8, 8], +I[48, 8, 8], +I[49, 9, 9], +I[49, 9, 9], +I[50, 0, 0], +I[50, 0, 0], > +I[51, 1, 1], +I[51, 1, 1], +I[52, 2, 2], +I[52, 2, 2], +I[53, 3, 3], +I[53, > 3, 3], +I[54, 4, 4], +I[54, 4, 4], +I[55, 5, 5], +I[55, 5, 5], +I[56, 6, 6], > +I[56, 6, 6], +I[57, 7, 7], +I[57, 7, 7], +I[58, 8, 8], +I[58, 8, 8], +I[59, > 9, 9], +I[59, 9, 9], +I[60, 0, 0], +I[60, 0, 0], +I[61, 1, 1], +I[61, 1, 1], > +I[62, 2, 2], +I[62, 2, 2], +I[63, 3, 3], +I[63, 3, 3], +I[64, 4, 4], +I[64, > 4, 4], +I[65, 5, 5], +I[65, 5, 5], +I[66, 6, 6], +I[66, 6, 6], +I[67, 7, 7], > +I[67, 7, 7], +I[68, 8, 8], +I[68, 8, 8], +I[69, 9, 9], +I[69, 9, 9], +I[70, > 0, 0], +I[70, 0, 0], +I[71, 1, 1], +I[71, 1, 1], +I[72, 2, 2], +I[72, 2, 2], > +I[73, 3, 3], +I[73, 3, 3], +I[74, 4, 4], +I[74, 4, 4], +I[75, 5, 5], +I[75, > 5, 5], +I[76, 6, 6], +I[76, 6, 6], +I[77, 7, 7], +I[77, 7, 7], +I[78, 8, 
8], > +I[78, 8, 8], +I[79, 9, 9], +I[79, 9, 9], +I[80, 0, 0], +I[80, 0, 0], +I[81, > 1, 1], +I[81, 1, 1], +I[82, 2, 2], +I[82, 2, 2], +I[83, 3, 3], +I[83, 3, 3], > +I[84, 4, 4], +I[84, 4, 4], +I[85, 5, 5], +I[85, 5, 5], +I[86, 6, 6], +I[86, > 6, 6], +I[87, 7, 7], +I[87, 7, 7], +I[88, 8, 8], +I[88, 8, 8], +I[89, 9, 9], > +I[89, 9, 9], +I[90, 0, 0], +I[90, 0, 0], +I[91, 1, 1], +I[91, 1, 1], +I[92, > 2, 2], +I[92, 2, 2], +I[93, 3, 3], +I[93, 3, 3], +I[94, 4, 4], +I[94, 4, 4], > +I[95, 5, 5], +I[95, 5, 5], +I[96, 6, 6], +I[96, 6, 6], +I[97, 7, 7], +I[97, > 7, 7], +I[98, 8, 8], +I[98, 8, 8], +I[99, 9, 9], +I[99, 9, 9]]> but > was:<[+I[0, 0, 0], +I[1, 1, 1], +I[2, 2, 2], +I[3, 3, 3], +I[4, 4, 4], +I[5, > 5, 5], +I[6, 6, 6], +I[7, 7, 7], +I[8, 8, 8], +I[9, 9, 9], +I[10, 0, 0], > +I[11, 1, 1], +I[12, 2, 2], +I[13, 3, 3], +I[14, 4, 4], +I[15, 5, 5], +I[16, > 6, 6], +I[17, 7, 7], +I[18, 8, 8], +I[19, 9, 9], +I[20, 0, 0], +I[21, 1, 1], > +I[22, 2, 2], +I[23, 3, 3], +I[24, 4, 4], +I[25, 5, 5], +I[26, 6, 6], +I[27, > 7, 7], +I[28, 8, 8], +I[29, 9, 9], +I[30, 0, 0], +I[31, 1, 1], +I[32, 2, 2], > +I[33, 3, 3], +I[34, 4, 4], +I[35, 5, 5], +I[36, 6, 6], +I[37, 7, 7], +I[38, > 8, 8], +I[39, 9, 9]
[jira] [Updated] (FLINK-24677) JdbcBatchingOutputFormat should not generate circulate chaining of exceptions when flushing fails in timer thread
[ https://issues.apache.org/jira/browse/FLINK-24677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-24677: --- Labels: auto-deprioritized-major pull-request-available (was: pull-request-available stale-major) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion. > JdbcBatchingOutputFormat should not generate circulate chaining of exceptions > when flushing fails in timer thread > - > > Key: FLINK-24677 > URL: https://issues.apache.org/jira/browse/FLINK-24677 > Project: Flink > Issue Type: Bug > Components: Connectors / JDBC >Affects Versions: 1.15.0 >Reporter: Caizhi Weng >Priority: Minor > Labels: auto-deprioritized-major, pull-request-available > > This is reported from the [user mailing > list|https://lists.apache.org/thread.html/r3e725f52e4f325b9dcb790635cc642bd6018c4bca39f86c71b8a60f4%40%3Cuser.flink.apache.org%3E]. > In the timer thread created in {{JdbcBatchingOutputFormat#open}}, the > {{flushException}} field will be recorded if the call to {{flush}} throws an > exception. This exception is used to fail the job in the main thread. > However, {{JdbcBatchingOutputFormat#flush}} will also check for this > exception and wrap it in a new layer of runtime exception. This will cause a > super long stack trace by the time the main thread finally discovers the > exception and fails. -- This message was sent by Atlassian Jira (v8.20.10#820010)
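A condensed model of the failure mode described above (names simplified, not the actual connector code): the timer thread records the first flush failure, and every later `flush()` call wraps the recorded exception in one more layer before the stored value is overwritten again, so the cause chain grows with each batch until the main thread finally fails the job:

{code:java}
import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Simplified model of the reported bug, not the real JDBC connector code.
class BatchingOutputFormat {

    private volatile Exception flushException;
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();

    void open() {
        timer.scheduleWithFixedDelay(() -> {
            synchronized (this) {
                try {
                    flush();
                } catch (Exception e) {
                    // The wrapped exception from flush() is stored back,
                    // so the next cycle wraps it yet again.
                    flushException = e;
                }
            }
        }, 1, 1, TimeUnit.SECONDS);
    }

    synchronized void flush() throws IOException {
        if (flushException != null) {
            // Bug: re-wrapping adds one more layer to the cause chain on
            // every call instead of surfacing the original failure as-is.
            throw new IOException("Writing records to JDBC failed.", flushException);
        }
        // ... execute the batch; a real failure here seeds flushException ...
    }
}
{code}

The fix direction implied by the issue title is to rethrow the recorded exception as-is instead of adding another wrapper on every call.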
[jira] [Updated] (FLINK-23238) EventTimeWindowCheckpointingITCase.testTumblingTimeWindowWithKVStateMaxMaxParallelism fails on azure
[ https://issues.apache.org/jira/browse/FLINK-23238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-23238: --- Labels: auto-deprioritized-major auto-deprioritized-minor test-stability (was: auto-deprioritized-major auto-deprioritized-minor stale-major test-stability) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion. > EventTimeWindowCheckpointingITCase.testTumblingTimeWindowWithKVStateMaxMaxParallelism > fails on azure > > > Key: FLINK-23238 > URL: https://issues.apache.org/jira/browse/FLINK-23238 > Project: Flink > Issue Type: Bug > Components: API / DataStream >Affects Versions: 1.12.4, 1.14.6, 1.15.3 >Reporter: Xintong Song >Priority: Minor > Labels: auto-deprioritized-major, auto-deprioritized-minor, > test-stability > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19873&view=logs&j=baf26b34-3c6a-54e8-f93f-cf269b32f802&t=6dff16b1-bf54-58f3-23c6-76282f49a185&l=4490 > {code} > [ERROR] Tests run: 42, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: > 261.311 s <<< FAILURE! - in > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase > [ERROR] testTumblingTimeWindowWithKVStateMaxMaxParallelism[statebackend type > =ROCKSDB_INCREMENTAL](org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase) > Time elapsed: 79.062 s <<< FAILURE! > java.lang.AssertionError: Job execution failed. > at org.junit.Assert.fail(Assert.java:88) > at > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase.doTestTumblingTimeWindowWithKVState(EventTimeWindowCheckpointingITCase.java:434) > at > org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase.testTumblingTimeWindowWithKVStateMaxMaxParallelism(EventTimeWindowCheckpointingITCase.java:350) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > at org.junit.runners.Suite.runChild(Suite.java:128) > at org.junit.runners.Suite.runChild(Suite.java:27) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Pr
[jira] [Updated] (FLINK-22068) FlinkKinesisConsumerTest.testPeriodicWatermark fails on azure
[ https://issues.apache.org/jira/browse/FLINK-22068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-22068: --- Labels: auto-deprioritized-major test-stability (was: stale-major test-stability) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion.
> FlinkKinesisConsumerTest.testPeriodicWatermark fails on Azure
> -
>
> Key: FLINK-22068
> URL: https://issues.apache.org/jira/browse/FLINK-22068
> Project: Flink
> Issue Type: Bug
> Components: Connectors / Kinesis
> Affects Versions: 1.13.0, 1.14.0, 1.15.0
> Reporter: Dawid Wysakowicz
> Priority: Minor
> Labels: auto-deprioritized-major, test-stability
>
> {code}
> [ERROR] Tests run: 11, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 5.567 s <<< FAILURE! - in org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumerTest
> [ERROR] testPeriodicWatermark(org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumerTest) Time elapsed: 0.845 s <<< FAILURE!
> java.lang.AssertionError:
> Expected: iterable containing [, ]
> but: item 0: was
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> at org.junit.Assert.assertThat(Assert.java:956)
> at org.junit.Assert.assertThat(Assert.java:923)
> at org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumerTest.testPeriodicWatermark(FlinkKinesisConsumerTest.java:988)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
> at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:326)
> at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:89)
> at org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:97)
> at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:310)
> at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:131)
> at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.access$100(PowerMockJUnit47RunnerDelegateImpl.java:59)
> at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner$TestExecutorStatement.evaluate(PowerMockJUnit47RunnerDelegateImpl.java:147)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.evaluateStatement(PowerMockJUnit47RunnerDelegateImpl.java:107)
> at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
> at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:298)
> at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:87)
> at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:50)
> at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:218)
> at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:160)
> at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:134)
> at org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34)
> at org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44)
> at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.run(PowerMockJUnit44RunnerDelegateImpl.java:136)
> at
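For readers unfamiliar with the API under test: the assertion above compares the watermarks emitted by the Kinesis consumer's periodic assigner against an expected sequence, and the timer-driven emission is what makes the outcome timing-sensitive. Below is a minimal, hypothetical sketch of that legacy wiring, assuming the pre-WatermarkStrategy setPeriodicWatermarkAssigner API of the old Kinesis connector; the stream name, region, and assigner logic are placeholders, not the test's actual fixture.

{code:java}
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks;
import org.apache.flink.streaming.api.watermark.Watermark;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.config.AWSConfigConstants;

public class PeriodicWatermarkSketch {

    public static FlinkKinesisConsumer<String> buildConsumer() {
        Properties props = new Properties();
        // Placeholder region; a real setup also needs credentials.
        props.setProperty(AWSConfigConstants.AWS_REGION, "us-east-1");

        FlinkKinesisConsumer<String> consumer =
                new FlinkKinesisConsumer<>("example-stream", new SimpleStringSchema(), props);

        // Legacy periodic assigner: getCurrentWatermark() is invoked on a
        // wall-clock timer, independently of record arrival.
        consumer.setPeriodicWatermarkAssigner(
                new AssignerWithPeriodicWatermarks<String>() {
                    private long maxTimestamp = Long.MIN_VALUE;

                    @Override
                    public long extractTimestamp(String element, long previousElementTimestamp) {
                        maxTimestamp = Math.max(maxTimestamp, previousElementTimestamp);
                        return previousElementTimestamp;
                    }

                    @Override
                    public Watermark getCurrentWatermark() {
                        // Timer-driven: how many watermarks are observed
                        // downstream depends on timing.
                        return new Watermark(maxTimestamp);
                    }
                });

        return consumer;
    }
}
{code}

Because the watermark callback fires on a wall-clock timer, a slow CI worker can observe one fewer (or one extra) watermark than a test expects, which is consistent with the flaky iterable assertion above.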
[jira] [Updated] (FLINK-22805) Dynamic configuration of Flink checkpoint interval
[ https://issues.apache.org/jira/browse/FLINK-22805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-22805: --- Labels: auto-deprioritized-critical auto-deprioritized-major (was: auto-deprioritized-critical stale-major) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion.
> Dynamic configuration of Flink checkpoint interval
> --
>
> Key: FLINK-22805
> URL: https://issues.apache.org/jira/browse/FLINK-22805
> Project: Flink
> Issue Type: New Feature
> Components: Runtime / Checkpointing
> Affects Versions: 1.13.1
> Reporter: Fu Kai
> Priority: Minor
> Labels: auto-deprioritized-critical, auto-deprioritized-major
>
> Flink currently does not support changing the checkpoint interval on the fly. This would be useful for use cases like backfills or cold starts from a stream containing the whole history.
>
> In the cold-start phase, resources are fully utilized and back-pressure is high for all upstream operators, causing checkpoints to time out constantly. Real production traffic is far smaller than that, and the provisioned resources can handle it.
>
> With a dynamically configurable checkpoint interval, the cold-start process could be sped up by checkpointing less frequently, or by turning checkpointing off entirely; once the backfill completes, the interval could be restored to normal.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
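For context, checkpointing today is configured statically when the job is built, via StreamExecutionEnvironment. The sketch below (interval values are arbitrary) shows the knobs that are fixed at submission time and that this ticket proposes to make adjustable at runtime; the current workaround is stop-with-savepoint plus resubmission with a new interval.

{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointConfigSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // All of these are baked into the job graph at submission; the
        // ticket asks for the interval to be adjustable while the job runs.
        env.enableCheckpointing(60_000L);                             // checkpoint every 60 s
        env.getCheckpointConfig().setCheckpointTimeout(10 * 60_000L); // fail a checkpoint after 10 min
        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(30_000L);

        // ... build and execute the pipeline ...
    }
}
{code}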
[jira] [Updated] (FLINK-23632) [DOCS] The link to setup-pyflink-virtual-env.sh is broken for page dev/python/faq
[ https://issues.apache.org/jira/browse/FLINK-23632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-23632: --- Labels: auto-deprioritized-major pull-request-available (was: pull-request-available stale-major) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion.
> [DOCS] The link to setup-pyflink-virtual-env.sh is broken for page dev/python/faq
> 
>
> Key: FLINK-23632
> URL: https://issues.apache.org/jira/browse/FLINK-23632
> Project: Flink
> Issue Type: Bug
> Components: Documentation
> Reporter: wuguihu
> Priority: Minor
> Labels: auto-deprioritized-major, pull-request-available
> Attachments: image-20210805021609756.png
>
> There is no setup-pyflink-virtual-env.sh file in the current version, and no download link can be found:
>
> 1. The file was available in previous versions: https://ci.apache.org/projects/flink/flink-docs-release-1.12/downloads/setup-pyflink-virtual-env.sh
> 2. The file has been missing since version 1.13: https://ci.apache.org/projects/flink/flink-docs-release-1.13/downloads/setup-pyflink-virtual-env.sh
> 3. The link below (in the page source) does not take effect:
> {code:java}
> [convenience script]({% link downloads/setup-pyflink-virtual-env.sh %})
> {code}
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-22194) KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers fails due to commit timeout
[ https://issues.apache.org/jira/browse/FLINK-22194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-22194: --- Labels: auto-deprioritized-major test-stability (was: stale-major test-stability) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion.
> KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers fails due to commit timeout
> --
>
> Key: FLINK-22194
> URL: https://issues.apache.org/jira/browse/FLINK-22194
> Project: Flink
> Issue Type: Bug
> Components: Connectors / Kafka
> Affects Versions: 1.13.0, 1.14.0, 1.12.4, 1.15.0
> Reporter: Guowei Ma
> Priority: Minor
> Labels: auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16308&view=logs&j=b0097207-033c-5d9a-b48c-6d4796fbe60d&t=e8fcc430-213e-5cce-59d4-6942acf09121&l=6535
> {code:java}
> [ERROR] testCommitOffsetsWithoutAliveFetchers(org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest) Time elapsed: 60.123 s <<< ERROR!
> java.util.concurrent.TimeoutException: The offset commit did not finish before timeout.
> at org.apache.flink.core.testutils.CommonTestUtils.waitUtil(CommonTestUtils.java:210)
> at org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.pollUntil(KafkaSourceReaderTest.java:285)
> at org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers(KafkaSourceReaderTest.java:129)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
> at org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> {code}
-- This message was sent by Atlassian Jira (v8.20.10#820010)
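The TimeoutException above is raised by a wait-until helper that polls for the offset-commit callback to fire. Below is a generic, self-contained sketch of that polling pattern; it is an illustration of the idiom only, not Flink's actual CommonTestUtils.waitUtil implementation, and the class and method names are hypothetical.

{code:java}
import java.time.Duration;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Supplier;

public final class PollUntil {

    private PollUntil() {}

    // Polls the condition until it holds or the deadline passes, then fails
    // with the given message, like the TimeoutException in the log above.
    public static void pollUntil(Supplier<Boolean> condition, Duration timeout, String errorMsg)
            throws TimeoutException, InterruptedException {
        long deadlineNanos = System.nanoTime() + timeout.toNanos();
        while (!condition.get()) {
            if (System.nanoTime() >= deadlineNanos) {
                throw new TimeoutException(errorMsg);
            }
            Thread.sleep(10); // brief back-off between polls
        }
    }

    public static void main(String[] args) throws Exception {
        // Toy usage: wait for a flag that a commit callback would normally set.
        AtomicBoolean committed = new AtomicBoolean(true);
        pollUntil(committed::get, Duration.ofSeconds(60),
                "The offset commit did not finish before timeout.");
    }
}
{code}

On a loaded CI worker, the polled condition can simply fail to become true within the fixed deadline, which is why tests built on this pattern tend to be labeled test-stability issues rather than functional bugs.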