[GitHub] [flink] flinkbot commented on issue #9888: [hotfix][doc]fix typos in richfunction
flinkbot commented on issue #9888: [hotfix][doc]fix typos in richfunction URL: https://github.com/apache/flink/pull/9888#issuecomment-541391905 ## CI report: * b37d86b0fbc58892a68e821239b7083dffc46d75 : UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] buptljy commented on a change in pull request #9843: [FLINK-14296] [Table SQL] Use Optional for optional parameters in parser module
buptljy commented on a change in pull request #9843: [FLINK-14296] [Table SQL] Use Optional for optional parameters in parser module URL: https://github.com/apache/flink/pull/9843#discussion_r334265001 ## File path: flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlCreateTable.java ## @@ -57,12 +60,15 @@ private final SqlNodeList propertyList; + @Nullable Review comment: @danny0405 `primaryKeyList` and `uniqueKeysList` are null if they're not defined, because we define them in parserImpls.ftl as null by default, which is different from other fields like propertyList and partitionColumns. @wuchong I somewhat misunderstood your comments in the JIRA issue. I thought we should keep the default values of `primaryKeyList` and `uniqueKeysList`. I will make them consistent with propertyList and partitionColumns. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
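For readers following the FLINK-14296 discussion, the sketch below shows one way such nullable parser fields can be exposed through `Optional`, as the PR title suggests. The class, constructor, and getter here are illustrative only and are not the actual `SqlCreateTable` API.

```java
import org.apache.calcite.sql.SqlNodeList;

import javax.annotation.Nullable;

import java.util.Optional;

// Illustrative sketch only: a nullable field populated by the parser,
// exposed to callers through Optional instead of a raw nullable getter.
public class OptionalFieldSketch {

    @Nullable
    private final SqlNodeList primaryKeyList;

    public OptionalFieldSketch(@Nullable SqlNodeList primaryKeyList) {
        this.primaryKeyList = primaryKeyList;
    }

    public Optional<SqlNodeList> getPrimaryKeyList() {
        // Callers no longer need to know that parserImpls.ftl defaults the field to null.
        return Optional.ofNullable(primaryKeyList);
    }
}
```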
[GitHub] [flink] flinkbot commented on issue #9888: [hotfix][doc]fix typos in richfunction
flinkbot commented on issue #9888: [hotfix][doc]fix typos in richfunction URL: https://github.com/apache/flink/pull/9888#issuecomment-541391094 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit b37d86b0fbc58892a68e821239b7083dffc46d75 (Sun Oct 13 06:29:09 UTC 2019) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] wangxlong opened a new pull request #9888: [hotfix][doc]fix typos in richfunction
wangxlong opened a new pull request #9888: [hotfix][doc]fix typos in richfunction URL: https://github.com/apache/flink/pull/9888 ## What is the purpose of the change Fix typos in the RichFunction Javadoc. Only RichFilterFunction has an open method to override, so the example should reference RichFilterFunction instead of FilterFunction. ## Brief change log Fix typos in the RichFunction Javadoc. ## Verifying this change This change is trivial. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (no) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (no) - If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
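To illustrate the point of the fix: plain FilterFunction exposes only filter(), while the rich variant adds the open/close lifecycle. The sketch below is a minimal, hypothetical example (the minimum-length logic is made up) showing where open() becomes available.

```java
import org.apache.flink.api.common.functions.RichFilterFunction;
import org.apache.flink.configuration.Configuration;

// Hypothetical example: only the rich variant gives access to open().
public class MinLengthFilter extends RichFilterFunction<String> {

    private transient int minLength;

    @Override
    public void open(Configuration parameters) throws Exception {
        // This hook exists on RichFilterFunction; plain FilterFunction has no open() to extend.
        minLength = 1;
    }

    @Override
    public boolean filter(String value) {
        return value != null && value.length() >= minLength;
    }
}
```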
[jira] [Commented] (FLINK-14275) Window Aggregate function support a RichFunction
[ https://issues.apache.org/jira/browse/FLINK-14275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950225#comment-16950225 ] hailong wang commented on FLINK-14275: -- Hi [~jark], I have some confusion. According to the implementation of AbstractUdfStreamOperator:
{code:java}
@Override
public void open() throws Exception {
    super.open();
    FunctionUtils.openFunction(userFunction, new Configuration());
}
{code}
the Configuration passed to the user's RichFunction is always a new one. What would be the correct behavior? > Window Aggregate function support a RichFunction > > > Key: FLINK-14275 > URL: https://issues.apache.org/jira/browse/FLINK-14275 > Project: Flink > Issue Type: Improvement > Components: API / DataStream >Affects Versions: 1.9.0 >Reporter: hailong wang >Priority: Major > > Now, window aggregate function cannot be a RichFunction. In other words, open > function can not be invoked and runtimeContext cannot be fetched in > RichAggregateFunction. -- This message was sent by Atlassian Jira (v8.3.4#803005)
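For context on the comment above: because the operator opens the user function with a fresh Configuration, rich functions generally cannot rely on the open() argument for job parameters. The sketch below is a minimal illustration, not Flink's own code, of a common workaround that reads the global job parameters from the RuntimeContext instead; the "prefix" key is hypothetical.
{code:java}
import org.apache.flink.api.common.ExecutionConfig;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

import java.util.Collections;
import java.util.Map;

public class ParameterAwareMapper extends RichMapFunction<String, String> {

    private transient Map<String, String> params;

    @Override
    public void open(Configuration parameters) throws Exception {
        // 'parameters' is the empty Configuration created in AbstractUdfStreamOperator#open,
        // so read the job-wide parameters from the runtime context instead.
        ExecutionConfig.GlobalJobParameters globalParams =
            getRuntimeContext().getExecutionConfig().getGlobalJobParameters();
        params = globalParams == null ? Collections.emptyMap() : globalParams.toMap();
    }

    @Override
    public String map(String value) {
        // "prefix" is a hypothetical key used only for this sketch.
        return params.getOrDefault("prefix", "") + value;
    }
}
{code}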
[GitHub] [flink] flinkbot edited a comment on issue #9184: [FLINK-13339][ml] Add an implementation of pipeline's api
flinkbot edited a comment on issue #9184: [FLINK-13339][ml] Add an implementation of pipeline's api URL: https://github.com/apache/flink/pull/9184#issuecomment-513425405 ## CI report: * bd7ade5e0b57dc8577d7f864afcbbb24c2513e56 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119869757) * 6a187929b931a4bd8cd7dbd0ec3d2c5a7a98278d : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121411219) * 4f2afd322f96aeaba6d9c0b67a82a051eff22df0 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121723032) * c4fc6905d3adf3ad9ff6f58c5d4f472fdfa7d52b : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121724039) * b1855d5dfff586e41f152a6861ae04f30042cfde : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/127193220) * 54210a098757726e439e71797c44a6c22d48bc27 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/128328713) * b089ac03c90e327467a903fcba5d61fbfdf2583e : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/128458501) * 67f96bda736cf31fb9edee77dc5ec8ccb37a63fd : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/128685328) * 54c4c47a573ea7d86ac3a226c5ee1693cf3bbaf5 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129062768) * 2983c5d535e3caa98389665c36fbfea229c232b9 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129233636) * 576ff18cd9d0174ffd6737f17ce59f4820a4d3a5 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129585067) * 16f1ec83d5e8562fafb69a3b2950b8b5c1a6340b : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129589023) * acb4479ac4cda25b7c9fd809dc7e4de4c47ebe4a : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131514727) * a13f249a530d00db2860166fdf392080bfc71626 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131646104) * 750515312b78b0c9d420c00c82cccbda3a8b474b : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131672900) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9184: [FLINK-13339][ml] Add an implementation of pipeline's api
flinkbot edited a comment on issue #9184: [FLINK-13339][ml] Add an implementation of pipeline's api URL: https://github.com/apache/flink/pull/9184#issuecomment-513425405 ## CI report: * bd7ade5e0b57dc8577d7f864afcbbb24c2513e56 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119869757) * 6a187929b931a4bd8cd7dbd0ec3d2c5a7a98278d : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121411219) * 4f2afd322f96aeaba6d9c0b67a82a051eff22df0 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121723032) * c4fc6905d3adf3ad9ff6f58c5d4f472fdfa7d52b : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121724039) * b1855d5dfff586e41f152a6861ae04f30042cfde : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/127193220) * 54210a098757726e439e71797c44a6c22d48bc27 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/128328713) * b089ac03c90e327467a903fcba5d61fbfdf2583e : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/128458501) * 67f96bda736cf31fb9edee77dc5ec8ccb37a63fd : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/128685328) * 54c4c47a573ea7d86ac3a226c5ee1693cf3bbaf5 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129062768) * 2983c5d535e3caa98389665c36fbfea229c232b9 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129233636) * 576ff18cd9d0174ffd6737f17ce59f4820a4d3a5 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129585067) * 16f1ec83d5e8562fafb69a3b2950b8b5c1a6340b : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129589023) * acb4479ac4cda25b7c9fd809dc7e4de4c47ebe4a : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131514727) * a13f249a530d00db2860166fdf392080bfc71626 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131646104) * 750515312b78b0c9d420c00c82cccbda3a8b474b : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/131672900) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] TisonKun commented on issue #9881: [FLINK-14377] Parse Executor-relevant ProgramOptions to ConfigOptions
TisonKun commented on issue #9881: [FLINK-14377] Parse Executor-relevant ProgramOptions to ConfigOptions URL: https://github.com/apache/flink/pull/9881#issuecomment-541385115 @kl0u thanks for opening this pull request. Its purpose looks good to me. My concerns are: 1. For providing configuration, so far we use the name pattern `XxxConfiguration`, such as `RestConfiguration`/`JobMasterConfiguration` and so on. `ExecutionParameterProvider` diverges from them. 2. `fromConfiguration` is so far implemented as a static method of the `XxxConfiguration` class. I wonder whether we have to introduce an `ExecutionParameterProviderBuilder`, which doesn't follow the builder pattern as described in our [code style guide](https://lists.apache.org/x/thread.html/58e7ed148ff1df7acfacc038a5f07a3a74547caf6674c959ea6f91b4@%3Cdev.flink.apache.org%3E). 3. Since we decided to make `Configuration` the unique view of configurations in Flink, `fromCommandLine` looks like a common pattern. Shall we try to establish how to parse a `CommandLine` into a `Configuration` so that it guides future development? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
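To make the naming and factory convention in points 1 and 2 concrete, here is a minimal sketch of the `XxxConfiguration` pattern with a static `fromConfiguration` method. The class name, config keys, and defaults are invented for illustration and are not part of Flink.

```java
import org.apache.flink.configuration.Configuration;

// Illustrative only: mirrors the RestConfiguration/JobMasterConfiguration style.
public final class ExampleExecutionConfiguration {

    private final String target;
    private final int retries;

    private ExampleExecutionConfiguration(String target, int retries) {
        this.target = target;
        this.retries = retries;
    }

    public String getTarget() {
        return target;
    }

    public int getRetries() {
        return retries;
    }

    // Static factory that derives the typed configuration object from a Configuration,
    // instead of going through a separate builder class.
    public static ExampleExecutionConfiguration fromConfiguration(Configuration config) {
        String target = config.getString("example.execution.target", "local");
        int retries = config.getInteger("example.execution.retries", 3);
        return new ExampleExecutionConfiguration(target, retries);
    }
}
```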
[GitHub] [flink] flinkbot edited a comment on issue #9184: [FLINK-13339][ml] Add an implementation of pipeline's api
flinkbot edited a comment on issue #9184: [FLINK-13339][ml] Add an implementation of pipeline's api URL: https://github.com/apache/flink/pull/9184#issuecomment-513425405 ## CI report: * bd7ade5e0b57dc8577d7f864afcbbb24c2513e56 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119869757) * 6a187929b931a4bd8cd7dbd0ec3d2c5a7a98278d : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121411219) * 4f2afd322f96aeaba6d9c0b67a82a051eff22df0 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121723032) * c4fc6905d3adf3ad9ff6f58c5d4f472fdfa7d52b : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121724039) * b1855d5dfff586e41f152a6861ae04f30042cfde : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/127193220) * 54210a098757726e439e71797c44a6c22d48bc27 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/128328713) * b089ac03c90e327467a903fcba5d61fbfdf2583e : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/128458501) * 67f96bda736cf31fb9edee77dc5ec8ccb37a63fd : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/128685328) * 54c4c47a573ea7d86ac3a226c5ee1693cf3bbaf5 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129062768) * 2983c5d535e3caa98389665c36fbfea229c232b9 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129233636) * 576ff18cd9d0174ffd6737f17ce59f4820a4d3a5 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129585067) * 16f1ec83d5e8562fafb69a3b2950b8b5c1a6340b : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129589023) * acb4479ac4cda25b7c9fd809dc7e4de4c47ebe4a : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131514727) * a13f249a530d00db2860166fdf392080bfc71626 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131646104) * 750515312b78b0c9d420c00c82cccbda3a8b474b : UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] xuyang1706 commented on issue #9184: [FLINK-13339][ml] Add an implementation of pipeline's api
xuyang1706 commented on issue #9184: [FLINK-13339][ml] Add an implementation of pipeline's api URL: https://github.com/apache/flink/pull/9184#issuecomment-541383212 > @xuyang1706 Thanks for updating the patch. the patch LGTM overall. I created a PR with some minor improvements against your branch. Can you check if that makes sense? If so, feel free to merge it to your branch. > > @walterddr Do you also want to take another look? I am thinking of merging the patch either on Sunday or Monday. Thanks. Thanks for your help @becketqin. I have merged your "minor improvement PR" to my branch. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] xuyang1706 commented on issue #9184: [FLINK-13339][ml] Add an implementation of pipeline's api
xuyang1706 commented on issue #9184: [FLINK-13339][ml] Add an implementation of pipeline's api URL: https://github.com/apache/flink/pull/9184#issuecomment-541380802 > Thanks for the contribution @xuyang1706 and the reviews @becketqin . Overall it looks good to me. I only left a minor comment. please kindly take a look. > +1 to merge to unblock future developments. Thanks @walterddr. Yes, this API is rarely used, but in some cases, if a user wants to release an unused MLEnvironment during processing, this is the only API that can be called. Thus, I prefer to keep it. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] xuyang1706 commented on a change in pull request #9184: [FLINK-13339][ml] Add an implementation of pipeline's api
xuyang1706 commented on a change in pull request #9184: [FLINK-13339][ml] Add an implementation of pipeline's api URL: https://github.com/apache/flink/pull/9184#discussion_r334260628 ## File path: flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/MLEnvironmentFactory.java ## @@ -0,0 +1,111 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.flink.ml.common; + +import java.util.HashMap; + +/** + * Factory to get the MLEnvironment using a MLEnvironmentId. + * + * The following code snippet shows how to interact with MLEnvironmentFactory. + * + * {@code + * long mlEnvId = MLEnvironmentFactory.getNewMLEnvironmentId(); + * MLEnvironment mlEnv = MLEnvironmentFactory.get(mlEnvId); + * } + * + */ +public class MLEnvironmentFactory { + + /** +* The default MLEnvironmentId. +*/ + public static final Long DEFAULT_ML_ENVIRONMENT_ID = 0L; + + /** +* A monotonically increasing id for the MLEnvironments. +* Each id uniquely identifies an MLEnvironment. +*/ + private static Long nextId = 1L; + + /** +* Map that hold the MLEnvironment and use the MLEnvironmentId as its key. +*/ + private static final HashMap map = new HashMap<>(); + + static { + map.put(DEFAULT_ML_ENVIRONMENT_ID, new MLEnvironment()); + } + + /** +* Get the MLEnvironment using a MLEnvironmentId. +* +* @param mlEnvId the MLEnvironmentId +* @return the MLEnvironment +*/ + public static synchronized MLEnvironment get(Long mlEnvId) { + if (!map.containsKey(mlEnvId)) { + throw new IllegalArgumentException( + String.format("Cannot find MLEnvironment for MLEnvironmentId %s." + + " Did you get the MLEnvironmentId by calling getNewMLEnvironmentId?", mlEnvId)); + } + + return map.get(mlEnvId); + } + + /** +* Get the MLEnvironment use the default MLEnvironmentId. +* +* @return the default MLEnvironment. +*/ + public static synchronized MLEnvironment getDefault() { + return get(DEFAULT_ML_ENVIRONMENT_ID); + } + + /** +* Create a unique MLEnvironment id and register a new MLEnvironment in the factory. +* +* @return the MLEnvironment id. +*/ + public static synchronized Long getNewMLEnvironmentId() { + return registerMLEnvironment(new MLEnvironment()); + } + + /** +* Register a new MLEnvironment to the factory and return a new MLEnvironment id. +* +* @param env the MLEnvironment that will be stored in the factory. +* @return the MLEnvironment id. +*/ + public static synchronized Long registerMLEnvironment(MLEnvironment env) { + map.put(nextId, env); + return nextId++; + } + + /** +* Remove the MLEnvironment using the MLEnvironmentId. +* +* @param mlEnvId the id. 
+* @return the removed MLEnvironment +*/ + public static synchronized MLEnvironment remove(Long mlEnvId) { Review comment: MLEnvironmentFactory can create multiple MLEnvironments. If some of them are only used for a limited period, the user can call this API to release their resources. In most cases, the user just uses the default MLEnvironment, so this API is rarely used. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
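For readers following this thread, the sketch below shows the register/get/remove lifecycle that motivates keeping remove(), using exactly the MLEnvironmentFactory methods quoted in the diff above; the surrounding main method is only illustrative.

```java
import org.apache.flink.ml.common.MLEnvironment;
import org.apache.flink.ml.common.MLEnvironmentFactory;

public class MLEnvironmentLifecycleExample {

    public static void main(String[] args) {
        // Register a dedicated environment for a short-lived job.
        Long mlEnvId = MLEnvironmentFactory.getNewMLEnvironmentId();
        MLEnvironment mlEnv = MLEnvironmentFactory.get(mlEnvId);

        // ... build and run a pipeline against mlEnv ...

        // Release the environment once it is no longer needed; this is the
        // use case described above for keeping remove() in the API.
        MLEnvironmentFactory.remove(mlEnvId);
    }
}
```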
[GitHub] [flink] flinkbot edited a comment on issue #8468: [FLINK-12399][table][table-planner] Fix FilterableTableSource does not change after applyPredicate
flinkbot edited a comment on issue #8468: [FLINK-12399][table][table-planner] Fix FilterableTableSource does not change after applyPredicate URL: https://github.com/apache/flink/pull/8468#issuecomment-524372146 ## CI report: * baae1632aabac35e6e08b402065857c4d67491f2 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124398771) * 6208617ff2d84bef7efaa7ee7cf96cba00031d88 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/127279857) * 33997c30f049e32a22cd6caa0427568a52d25e63 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/127650128) * 9366993839d7a63193b59d838e840b8e6e9e6679 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129781312) * 5195aa7aa89c09a40727cbcf7b46d2a646ffacaa : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131667337) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9887: [FLINK-14215] [docs] add environment variable configuration
flinkbot edited a comment on issue #9887: [FLINK-14215] [docs] add environment variable configuration URL: https://github.com/apache/flink/pull/9887#issuecomment-541371856 ## CI report: * 59ed4592c6d1d5eb5acc681bfa34bc279a781a3d : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131667344) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9887: [FLINK-14215] [docs] add environment variable configuration
flinkbot edited a comment on issue #9887: [FLINK-14215] [docs] add environment variable configuration URL: https://github.com/apache/flink/pull/9887#issuecomment-541371856 ## CI report: * 59ed4592c6d1d5eb5acc681bfa34bc279a781a3d : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/131667344) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #8468: [FLINK-12399][table][table-planner] Fix FilterableTableSource does not change after applyPredicate
flinkbot edited a comment on issue #8468: [FLINK-12399][table][table-planner] Fix FilterableTableSource does not change after applyPredicate URL: https://github.com/apache/flink/pull/8468#issuecomment-524372146 ## CI report: * baae1632aabac35e6e08b402065857c4d67491f2 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124398771) * 6208617ff2d84bef7efaa7ee7cf96cba00031d88 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/127279857) * 33997c30f049e32a22cd6caa0427568a52d25e63 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/127650128) * 9366993839d7a63193b59d838e840b8e6e9e6679 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129781312) * 5195aa7aa89c09a40727cbcf7b46d2a646ffacaa : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/131667337) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot commented on issue #9887: [FLINK-14215] [docs] add environment variable configuration
flinkbot commented on issue #9887: [FLINK-14215] [docs] add environment variable configuration URL: https://github.com/apache/flink/pull/9887#issuecomment-541371856 ## CI report: * 59ed4592c6d1d5eb5acc681bfa34bc279a781a3d : UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #8468: [FLINK-12399][table][table-planner] Fix FilterableTableSource does not change after applyPredicate
flinkbot edited a comment on issue #8468: [FLINK-12399][table][table-planner] Fix FilterableTableSource does not change after applyPredicate URL: https://github.com/apache/flink/pull/8468#issuecomment-524372146 ## CI report: * baae1632aabac35e6e08b402065857c4d67491f2 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124398771) * 6208617ff2d84bef7efaa7ee7cf96cba00031d88 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/127279857) * 33997c30f049e32a22cd6caa0427568a52d25e63 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/127650128) * 9366993839d7a63193b59d838e840b8e6e9e6679 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129781312) * 5195aa7aa89c09a40727cbcf7b46d2a646ffacaa : UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot commented on issue #9887: [FLINK-14215] add environment variable configuration
flinkbot commented on issue #9887: [FLINK-14215] add environment variable configuration URL: https://github.com/apache/flink/pull/9887#issuecomment-541370639 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 59ed4592c6d1d5eb5acc681bfa34bc279a781a3d (Sat Oct 12 23:51:53 UTC 2019) **Warnings:** * Documentation files were touched, but no `.zh.md` files: Update Chinese documentation or file Jira ticket. Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Updated] (FLINK-14215) Add Docs for TM and JM Environment Variable Setting
[ https://issues.apache.org/jira/browse/FLINK-14215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-14215: --- Labels: pull-request-available (was: ) > Add Docs for TM and JM Environment Variable Setting > --- > > Key: FLINK-14215 > URL: https://issues.apache.org/jira/browse/FLINK-14215 > Project: Flink > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.9.0 >Reporter: Zhenqiu Huang >Assignee: Zhenqiu Huang >Priority: Minor > Labels: pull-request-available > Fix For: 1.9.2 > > > Add description for > /** >* Prefix for passing custom environment variables to Flink's master > process. >* For example for passing LD_LIBRARY_PATH as an env variable to the > AppMaster, set: >* containerized.master.env.LD_LIBRARY_PATH: "/usr/lib/native" >* in the flink-conf.yaml. >*/ > public static final String CONTAINERIZED_MASTER_ENV_PREFIX = > "containerized.master.env."; > /** >* Similar to the {@see CONTAINERIZED_MASTER_ENV_PREFIX}, this > configuration prefix allows >* setting custom environment variables for the workers (TaskManagers). >*/ > public static final String CONTAINERIZED_TASK_MANAGER_ENV_PREFIX = > "containerized.taskmanager.env."; -- This message was sent by Atlassian Jira (v8.3.4#803005)
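As a quick reference for the documentation being added here, the two prefixes from the issue description are used in flink-conf.yaml roughly as follows; the taskmanager entry simply mirrors the master example and the value is illustrative.
{code}
# Pass LD_LIBRARY_PATH to Flink's master process (e.g. the YARN AppMaster),
# as in the issue description above.
containerized.master.env.LD_LIBRARY_PATH: "/usr/lib/native"

# The same mechanism for the workers (TaskManagers); the value is an example.
containerized.taskmanager.env.LD_LIBRARY_PATH: "/usr/lib/native"
{code}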
[GitHub] [flink] HuangZhenQiu opened a new pull request #9887: [FLINK-14215] add environment variable configuration
HuangZhenQiu opened a new pull request #9887: [FLINK-14215] add environment variable configuration URL: https://github.com/apache/flink/pull/9887 ## What is the purpose of the change *(For example: This pull request makes task deployment go through the blob server, rather than through RPC. That way we avoid re-transferring them on each deployment (during recovery).)* ## Brief change log *(for example:)* - *The TaskInfo is stored in the blob store on job creation time as a persistent artifact* - *Deployments RPC transmits only the blob storage reference* - *TaskManagers retrieve the TaskInfo from the blob cache* ## Verifying this change *(Please pick either of the following options)* This change is a trivial rework / code cleanup without any test coverage. *(or)* This change is already covered by existing tests, such as *(please describe tests)*. *(or)* This change added tests and can be verified as follows: *(example:)* - *Added integration tests for end-to-end deployment with large payloads (100MB)* - *Extended integration test for recovery after master (JobManager) failure* - *Added test that validates that TaskInfo is transferred only once across recoveries* - *Manually verified the change by running a 4 node cluser with 2 JobManagers and 4 TaskManagers, a stateful streaming program, and killing one JobManager and two TaskManagers during the execution, verifying that recovery happens correctly.* ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes / no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / no) - The serializers: (yes / no / don't know) - The runtime per-record code paths (performance sensitive): (yes / no / don't know) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / no / don't know) - The S3 file system connector: (yes / no / don't know) ## Documentation - Does this pull request introduce a new feature? (yes / no) - If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] walterddr commented on a change in pull request #9184: [FLINK-13339][ml] Add an implementation of pipeline's api
walterddr commented on a change in pull request #9184: [FLINK-13339][ml] Add an implementation of pipeline's api URL: https://github.com/apache/flink/pull/9184#discussion_r334254798 ## File path: flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/MLEnvironmentFactory.java ## @@ -0,0 +1,111 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.flink.ml.common; + +import java.util.HashMap; + +/** + * Factory to get the MLEnvironment using a MLEnvironmentId. + * + * The following code snippet shows how to interact with MLEnvironmentFactory. + * + * {@code + * long mlEnvId = MLEnvironmentFactory.getNewMLEnvironmentId(); + * MLEnvironment mlEnv = MLEnvironmentFactory.get(mlEnvId); + * } + * + */ +public class MLEnvironmentFactory { + + /** +* The default MLEnvironmentId. +*/ + public static final Long DEFAULT_ML_ENVIRONMENT_ID = 0L; + + /** +* A monotonically increasing id for the MLEnvironments. +* Each id uniquely identifies an MLEnvironment. +*/ + private static Long nextId = 1L; + + /** +* Map that hold the MLEnvironment and use the MLEnvironmentId as its key. +*/ + private static final HashMap map = new HashMap<>(); + + static { + map.put(DEFAULT_ML_ENVIRONMENT_ID, new MLEnvironment()); + } + + /** +* Get the MLEnvironment using a MLEnvironmentId. +* +* @param mlEnvId the MLEnvironmentId +* @return the MLEnvironment +*/ + public static synchronized MLEnvironment get(Long mlEnvId) { + if (!map.containsKey(mlEnvId)) { + throw new IllegalArgumentException( + String.format("Cannot find MLEnvironment for MLEnvironmentId %s." + + " Did you get the MLEnvironmentId by calling getNewMLEnvironmentId?", mlEnvId)); + } + + return map.get(mlEnvId); + } + + /** +* Get the MLEnvironment use the default MLEnvironmentId. +* +* @return the default MLEnvironment. +*/ + public static synchronized MLEnvironment getDefault() { + return get(DEFAULT_ML_ENVIRONMENT_ID); + } + + /** +* Create a unique MLEnvironment id and register a new MLEnvironment in the factory. +* +* @return the MLEnvironment id. +*/ + public static synchronized Long getNewMLEnvironmentId() { + return registerMLEnvironment(new MLEnvironment()); + } + + /** +* Register a new MLEnvironment to the factory and return a new MLEnvironment id. +* +* @param env the MLEnvironment that will be stored in the factory. +* @return the MLEnvironment id. +*/ + public static synchronized Long registerMLEnvironment(MLEnvironment env) { + map.put(nextId, env); + return nextId++; + } + + /** +* Remove the MLEnvironment using the MLEnvironmentId. +* +* @param mlEnvId the id. +* @return the removed MLEnvironment +*/ + public static synchronized MLEnvironment remove(Long mlEnvId) { Review comment: when would this API be called? seems like it is not necessary? 
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] becketqin commented on issue #9184: [FLINK-13339][ml] Add an implementation of pipeline's api
becketqin commented on issue #9184: [FLINK-13339][ml] Add an implementation of pipeline's api URL: https://github.com/apache/flink/pull/9184#issuecomment-541362792 @xuyang1706 Thanks for updating the patch. the patch LGTM overall. I created a PR with some minor improvements against your branch. Can you check if that makes sense? If so, feel free to merge it to your branch. @walterddr Do you also want to take another look? I am thinking of merging the patch either on Sunday or Monday. Thanks. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-14309) Kafka09ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink fails on Travis
[ https://issues.apache.org/jira/browse/FLINK-14309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950111#comment-16950111 ] Jiangjie Qin commented on FLINK-14309: -- [~trohrmann] This patch just fixes the test cases. So it is not useful if a branch is not under active development. Do we usually backport test stability fixes to earlier release branches? That may help if we plan to have further bug fix releases on those bug fix branch. But from release perspective, backporting or not seems not making much difference. > Kafka09ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink > fails on Travis > -- > > Key: FLINK-14309 > URL: https://issues.apache.org/jira/browse/FLINK-14309 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Affects Versions: 1.10.0 >Reporter: Till Rohrmann >Assignee: Jiayi Liao >Priority: Critical > Labels: pull-request-available, test-stability > Fix For: 1.10.0 > > Time Spent: 20m > Remaining Estimate: 0h > > The > {{Kafka09ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink}} > fails on Travis with > {code} > Test > testOneToOneAtLeastOnceRegularSink(org.apache.flink.streaming.connectors.kafka.Kafka09ProducerITCase) > failed with: > java.lang.AssertionError: Job should fail! > at org.junit.Assert.fail(Assert.java:88) > at > org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testOneToOneAtLeastOnce(KafkaProducerTestBase.java:280) > at > org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink(KafkaProducerTestBase.java:206) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > {code} > https://api.travis-ci.com/v3/job/240747188/log.txt -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] walterddr commented on a change in pull request #8468: [FLINK-12399][table][table-planner] Fix FilterableTableSource does not change after applyPredicate
walterddr commented on a change in pull request #8468: [FLINK-12399][table][table-planner] Fix FilterableTableSource does not change after applyPredicate URL: https://github.com/apache/flink/pull/8468#discussion_r334245784 ## File path: flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/api/stream/table/validation/TableSourceValidationTest.scala ## @@ -0,0 +1,72 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.api.stream.table.validation + +import org.apache.flink.api.common.typeinfo.TypeInformation +import org.apache.flink.api.java.typeutils.RowTypeInfo +import org.apache.flink.table.api.scala._ +import org.apache.flink.table.api.{TableException, TableSchema, Types} +import org.apache.flink.table.utils.{TableTestBase, TestFilterableTableSourceWithoutExplainSourceOverride, TestProjectableTableSourceWithoutExplainSourceOverride} +import org.hamcrest.Matchers +import org.junit.Test + +class TableSourceValidationTest extends TableTestBase { + + @Test + def testPushProjectTableSourceWithoutExplainSource(): Unit = { +expectedException.expectCause(Matchers.isA(classOf[TableException])) Review comment: I actually tried using the annotation but it doesnt work since the wrapped around exception is not `TableException`. had to unparse the cause of the top level. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Assigned] (FLINK-14027) Add documentation for Python user-defined functions
[ https://issues.apache.org/jira/browse/FLINK-14027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hequn Cheng reassigned FLINK-14027: --- Assignee: Wei Zhong > Add documentation for Python user-defined functions > --- > > Key: FLINK-14027 > URL: https://issues.apache.org/jira/browse/FLINK-14027 > Project: Flink > Issue Type: Sub-task > Components: API / Python, Documentation >Reporter: Dian Fu >Assignee: Wei Zhong >Priority: Major > Labels: pull-request-available > Fix For: 1.10.0 > > Time Spent: 10m > Remaining Estimate: 0h > > We should add documentation about how to use Python user-defined functions. > Python dependencies should be included in the document. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-14208) Optimize Python UDFs with parameters of constant values
[ https://issues.apache.org/jira/browse/FLINK-14208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hequn Cheng closed FLINK-14208. --- Resolution: Resolved > Optimize Python UDFs with parameters of constant values > --- > > Key: FLINK-14208 > URL: https://issues.apache.org/jira/browse/FLINK-14208 > Project: Flink > Issue Type: Sub-task > Components: API / Python >Reporter: Dian Fu >Assignee: Huang Xingbo >Priority: Major > Labels: pull-request-available > Fix For: 1.10.0 > > Time Spent: 10m > Remaining Estimate: 0h > > We need support Python UDFs with parameters of constant values. It should be > noticed that the constant parameters are not needed to be transferred between > the Java operator and the Python worker. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-14208) Optimize Python UDFs with parameters of constant values
[ https://issues.apache.org/jira/browse/FLINK-14208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950066#comment-16950066 ] Hequn Cheng commented on FLINK-14208: - Resolved in 1.10.0 via: ddfed72813281255f119fd6838c197433eb6eaf3 > Optimize Python UDFs with parameters of constant values > --- > > Key: FLINK-14208 > URL: https://issues.apache.org/jira/browse/FLINK-14208 > Project: Flink > Issue Type: Sub-task > Components: API / Python >Reporter: Dian Fu >Assignee: Huang Xingbo >Priority: Major > Labels: pull-request-available > Fix For: 1.10.0 > > Time Spent: 10m > Remaining Estimate: 0h > > We need support Python UDFs with parameters of constant values. It should be > noticed that the constant parameters are not needed to be transferred between > the Java operator and the Python worker. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-14208) Optimize Python UDFs with parameters of constant values
[ https://issues.apache.org/jira/browse/FLINK-14208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-14208: --- Labels: pull-request-available (was: ) > Optimize Python UDFs with parameters of constant values > --- > > Key: FLINK-14208 > URL: https://issues.apache.org/jira/browse/FLINK-14208 > Project: Flink > Issue Type: Sub-task > Components: API / Python >Reporter: Dian Fu >Assignee: Huang Xingbo >Priority: Major > Labels: pull-request-available > Fix For: 1.10.0 > > > We need support Python UDFs with parameters of constant values. It should be > noticed that the constant parameters are not needed to be transferred between > the Java operator and the Python worker. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] hequn8128 closed pull request #9858: [FLINK-14208][python] Optimize Python UDFs with parameters of constant values
hequn8128 closed pull request #9858: [FLINK-14208][python] Optimize Python UDFs with parameters of constant values URL: https://github.com/apache/flink/pull/9858 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9858: [FLINK-14208][python] Optimize Python UDFs with parameters of constant values
flinkbot edited a comment on issue #9858: [FLINK-14208][python] Optimize Python UDFs with parameters of constant values URL: https://github.com/apache/flink/pull/9858#issuecomment-539830457 ## CI report: * 5b6f4eb6008d7f2656fce8c2fbd7b43d7b8a0cf0 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131068689) * 8a89d38abd427c34219762155947f0cd46a8e83b : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131083277) * 1127b6ac73690b5e4a6895d9a1a5a9475ba4c770 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131102400) * 81769499b8d9773bd9c787de3d784f811cdc0656 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131268997) * 78899808f9b87373aa2526d436987403e09379c2 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131310010) * 08d762ab691321c42b231c8d641bb68f60659ea7 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131643307) * 60519a8d05474306ea99578138f08b5dc92bae2c : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131646730) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-14123) Change taskmanager.memory.fraction default value to 0.6
[ https://issues.apache.org/jira/browse/FLINK-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950063#comment-16950063 ] liupengcheng commented on FLINK-14123: -- [~xintongsong] I think it's a common case, because I just ran terasort test. Maybe the reason why this issue is not reported by any other people before is that they are using java 9+(G1 GC by default) or Flink are now mainly used on some streaming cases. > Change taskmanager.memory.fraction default value to 0.6 > --- > > Key: FLINK-14123 > URL: https://issues.apache.org/jira/browse/FLINK-14123 > Project: Flink > Issue Type: Improvement > Components: Runtime / Configuration >Affects Versions: 1.9.0 >Reporter: liupengcheng >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Currently, we are testing flink batch task, such as terasort, however, it > started only awhile then it failed due to OOM. > > {code:java} > org.apache.flink.client.program.ProgramInvocationException: Job failed. > (JobID: a807e1d635bd4471ceea4282477f8850) > at > org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:262) > at > org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338) > at > org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:326) > at > org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62) > at > org.apache.flink.api.scala.ExecutionEnvironment.execute(ExecutionEnvironment.scala:539) > at > com.github.ehiggs.spark.terasort.FlinkTeraSort$.main(FlinkTeraSort.scala:89) > at > com.github.ehiggs.spark.terasort.FlinkTeraSort.main(FlinkTeraSort.scala) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:604) > at > org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:466) > at > org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274) > at > org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746) > at > org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273) > at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205) > at > org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1007) > at > org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1080) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886) > at > org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) > at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1080) > Caused by: org.apache.flink.runtime.client.JobExecutionException: Job > execution failed. > at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146) > at > org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:259) > ... 
23 more > Caused by: java.lang.RuntimeException: Error obtaining the sorted input: > Thread 'SortMerger Reading Thread' terminated due to an exception: GC > overhead limit exceeded > at > org.apache.flink.runtime.operators.sort.UnilateralSortMerger.getIterator(UnilateralSortMerger.java:650) > at > org.apache.flink.runtime.operators.BatchTask.getInput(BatchTask.java:1109) > at org.apache.flink.runtime.operators.NoOpDriver.run(NoOpDriver.java:82) > at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:504) > at > org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:369) > at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705) > at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.io.IOException: Thread 'SortMerger Reading Thread' terminated > due to an exception: GC overhead limit exceeded > at > org.apache.flink.runtime.operators.sort.UnilateralSortMerger$ThreadBase.run(UnilateralSortMerger.java:831) > Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded > at > org.apache.flink.api.common.typeutils.base.arra
[GitHub] [flink] flinkbot edited a comment on issue #9184: [FLINK-13339][ml] Add an implementation of pipeline's api
flinkbot edited a comment on issue #9184: [FLINK-13339][ml] Add an implementation of pipeline's api URL: https://github.com/apache/flink/pull/9184#issuecomment-513425405 ## CI report: * bd7ade5e0b57dc8577d7f864afcbbb24c2513e56 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119869757) * 6a187929b931a4bd8cd7dbd0ec3d2c5a7a98278d : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121411219) * 4f2afd322f96aeaba6d9c0b67a82a051eff22df0 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121723032) * c4fc6905d3adf3ad9ff6f58c5d4f472fdfa7d52b : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121724039) * b1855d5dfff586e41f152a6861ae04f30042cfde : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/127193220) * 54210a098757726e439e71797c44a6c22d48bc27 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/128328713) * b089ac03c90e327467a903fcba5d61fbfdf2583e : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/128458501) * 67f96bda736cf31fb9edee77dc5ec8ccb37a63fd : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/128685328) * 54c4c47a573ea7d86ac3a226c5ee1693cf3bbaf5 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129062768) * 2983c5d535e3caa98389665c36fbfea229c232b9 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129233636) * 576ff18cd9d0174ffd6737f17ce59f4820a4d3a5 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129585067) * 16f1ec83d5e8562fafb69a3b2950b8b5c1a6340b : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129589023) * acb4479ac4cda25b7c9fd809dc7e4de4c47ebe4a : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131514727) * a13f249a530d00db2860166fdf392080bfc71626 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131646104) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-14275) Window Aggregate function support a RichFunction
[ https://issues.apache.org/jira/browse/FLINK-14275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950061#comment-16950061 ] Jark Wu commented on FLINK-14275: - Hi [~hailong wang], 1) we should not provide the full power of RuntimeContext, maybe some kind of {{RestrictedRuntimeContext}} 2) we should pass the real {{Configuration}} instead of a new one. > Window Aggregate function support a RichFunction > > > Key: FLINK-14275 > URL: https://issues.apache.org/jira/browse/FLINK-14275 > Project: Flink > Issue Type: Improvement > Components: API / DataStream >Affects Versions: 1.9.0 >Reporter: hailong wang >Priority: Major > > Now, window aggregate function cannot be a RichFunction. In other words, open > function can not be invoked and runtimeContext cannot be fetched in > RichAggregateFunction. -- This message was sent by Atlassian Jira (v8.3.4#803005)
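[Editor's note] For illustration only, a rough sketch of the {{RestrictedRuntimeContext}} idea mentioned above: a wrapper that exposes only a whitelisted subset of the RuntimeContext to the window aggregate function. The class name and the chosen subset are assumptions, not an agreed design.
{code:java}
import org.apache.flink.api.common.functions.RuntimeContext;
import org.apache.flink.metrics.MetricGroup;

/**
 * Hypothetical sketch: expose only a "safe" subset of RuntimeContext to a
 * (Rich)AggregateFunction used inside a window. Which methods are safe to
 * expose is exactly the open question in the discussion above.
 */
public final class RestrictedRuntimeContext {

	private final RuntimeContext delegate;

	public RestrictedRuntimeContext(RuntimeContext delegate) {
		this.delegate = delegate;
	}

	// Allowed: read-only task information and metrics.
	public String getTaskName() {
		return delegate.getTaskName();
	}

	public MetricGroup getMetricGroup() {
		return delegate.getMetricGroup();
	}

	// Deliberately NOT exposed: partitioned state, accumulators, broadcast variables, etc.
}
{code}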
[GitHub] [flink] flinkbot edited a comment on issue #9886: [FLINK-14027][python][doc] Add documentation for Python user-defined functions.
flinkbot edited a comment on issue #9886: [FLINK-14027][python][doc] Add documentation for Python user-defined functions. URL: https://github.com/apache/flink/pull/9886#issuecomment-541326107 ## CI report: * e431c1c28a35dd86e60a0892ff0e0d15b8da7245 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131646097) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9858: [FLINK-14208][python] Optimize Python UDFs with parameters of constant values
flinkbot edited a comment on issue #9858: [FLINK-14208][python] Optimize Python UDFs with parameters of constant values URL: https://github.com/apache/flink/pull/9858#issuecomment-539830457 ## CI report: * 5b6f4eb6008d7f2656fce8c2fbd7b43d7b8a0cf0 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131068689) * 8a89d38abd427c34219762155947f0cd46a8e83b : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131083277) * 1127b6ac73690b5e4a6895d9a1a5a9475ba4c770 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131102400) * 81769499b8d9773bd9c787de3d784f811cdc0656 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131268997) * 78899808f9b87373aa2526d436987403e09379c2 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131310010) * 08d762ab691321c42b231c8d641bb68f60659ea7 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131643307) * 60519a8d05474306ea99578138f08b5dc92bae2c : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/131646730) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-14275) Window Aggregate function support a RichFunction
[ https://issues.apache.org/jira/browse/FLINK-14275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950055#comment-16950055 ] hailong wang commented on FLINK-14275: -- Hi [~jark], thanks for your tip. On my side, we just need to check whether the aggregate function (reduce function, fold function) is a RichFunction in WindowOperator.open():
{code:java}
if (windowStateDescriptor instanceof AggregatingStateDescriptor) {
    AggregateFunction aggregateFunction = ((AggregatingStateDescriptor) windowStateDescriptor).getAggregateFunction();
    if (RichAggregateFunction.class.isAssignableFrom(aggregateFunction.getClass())) {
        ((RichAggregateFunction) aggregateFunction).setRuntimeContext(getRuntimeContext());
        ((RichAggregateFunction) aggregateFunction).open(new Configuration());
    }
}
{code}
Is there something I haven't considered? By doing this, we can change AggregateAggFunction to extend RichAggregateFunction, so that the UDAF can be initialized in GroupWindowAggregate. > Window Aggregate function support a RichFunction > > > Key: FLINK-14275 > URL: https://issues.apache.org/jira/browse/FLINK-14275 > Project: Flink > Issue Type: Improvement > Components: API / DataStream >Affects Versions: 1.9.0 >Reporter: hailong wang >Priority: Major > > Now, window aggregate function cannot be a RichFunction. In other words, open > function can not be invoked and runtimeContext cannot be fetched in > RichAggregateFunction. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on issue #9184: [FLINK-13339][ml] Add an implementation of pipeline's api
flinkbot edited a comment on issue #9184: [FLINK-13339][ml] Add an implementation of pipeline's api URL: https://github.com/apache/flink/pull/9184#issuecomment-513425405 ## CI report: * bd7ade5e0b57dc8577d7f864afcbbb24c2513e56 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119869757) * 6a187929b931a4bd8cd7dbd0ec3d2c5a7a98278d : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121411219) * 4f2afd322f96aeaba6d9c0b67a82a051eff22df0 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121723032) * c4fc6905d3adf3ad9ff6f58c5d4f472fdfa7d52b : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121724039) * b1855d5dfff586e41f152a6861ae04f30042cfde : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/127193220) * 54210a098757726e439e71797c44a6c22d48bc27 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/128328713) * b089ac03c90e327467a903fcba5d61fbfdf2583e : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/128458501) * 67f96bda736cf31fb9edee77dc5ec8ccb37a63fd : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/128685328) * 54c4c47a573ea7d86ac3a226c5ee1693cf3bbaf5 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129062768) * 2983c5d535e3caa98389665c36fbfea229c232b9 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129233636) * 576ff18cd9d0174ffd6737f17ce59f4820a4d3a5 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129585067) * 16f1ec83d5e8562fafb69a3b2950b8b5c1a6340b : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129589023) * acb4479ac4cda25b7c9fd809dc7e4de4c47ebe4a : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131514727) * a13f249a530d00db2860166fdf392080bfc71626 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/131646104) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9886: [FLINK-14027][python][doc] Add documentation for Python user-defined functions.
flinkbot edited a comment on issue #9886: [FLINK-14027][python][doc] Add documentation for Python user-defined functions. URL: https://github.com/apache/flink/pull/9886#issuecomment-541326107 ## CI report: * e431c1c28a35dd86e60a0892ff0e0d15b8da7245 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/131646097) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9858: [FLINK-14208][python] Optimize Python UDFs with parameters of constant values
flinkbot edited a comment on issue #9858: [FLINK-14208][python] Optimize Python UDFs with parameters of constant values URL: https://github.com/apache/flink/pull/9858#issuecomment-539830457 ## CI report: * 5b6f4eb6008d7f2656fce8c2fbd7b43d7b8a0cf0 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131068689) * 8a89d38abd427c34219762155947f0cd46a8e83b : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131083277) * 1127b6ac73690b5e4a6895d9a1a5a9475ba4c770 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131102400) * 81769499b8d9773bd9c787de3d784f811cdc0656 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131268997) * 78899808f9b87373aa2526d436987403e09379c2 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131310010) * 08d762ab691321c42b231c8d641bb68f60659ea7 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131643307) * 60519a8d05474306ea99578138f08b5dc92bae2c : UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9184: [FLINK-13339][ml] Add an implementation of pipeline's api
flinkbot edited a comment on issue #9184: [FLINK-13339][ml] Add an implementation of pipeline's api URL: https://github.com/apache/flink/pull/9184#issuecomment-513425405 ## CI report: * bd7ade5e0b57dc8577d7f864afcbbb24c2513e56 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119869757) * 6a187929b931a4bd8cd7dbd0ec3d2c5a7a98278d : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121411219) * 4f2afd322f96aeaba6d9c0b67a82a051eff22df0 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121723032) * c4fc6905d3adf3ad9ff6f58c5d4f472fdfa7d52b : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/121724039) * b1855d5dfff586e41f152a6861ae04f30042cfde : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/127193220) * 54210a098757726e439e71797c44a6c22d48bc27 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/128328713) * b089ac03c90e327467a903fcba5d61fbfdf2583e : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/128458501) * 67f96bda736cf31fb9edee77dc5ec8ccb37a63fd : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/128685328) * 54c4c47a573ea7d86ac3a226c5ee1693cf3bbaf5 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129062768) * 2983c5d535e3caa98389665c36fbfea229c232b9 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129233636) * 576ff18cd9d0174ffd6737f17ce59f4820a4d3a5 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129585067) * 16f1ec83d5e8562fafb69a3b2950b8b5c1a6340b : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129589023) * acb4479ac4cda25b7c9fd809dc7e4de4c47ebe4a : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131514727) * a13f249a530d00db2860166fdf392080bfc71626 : UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot commented on issue #9886: [FLINK-14027][python][doc] Add documentation for Python user-defined functions.
flinkbot commented on issue #9886: [FLINK-14027][python][doc] Add documentation for Python user-defined functions. URL: https://github.com/apache/flink/pull/9886#issuecomment-541326107 ## CI report: * e431c1c28a35dd86e60a0892ff0e0d15b8da7245 : UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] hequn8128 commented on issue #9858: [FLINK-14208][python] Optimize Python UDFs with parameters of constant values
hequn8128 commented on issue #9858: [FLINK-14208][python] Optimize Python UDFs with parameters of constant values URL: https://github.com/apache/flink/pull/9858#issuecomment-541325992 @HuangXingBo Thanks a lot for the update. LGTM, will merge this once Travis passes. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot commented on issue #9886: [FLINK-14027][python][doc] Add documentation for Python user-defined functions.
flinkbot commented on issue #9886: [FLINK-14027][python][doc] Add documentation for Python user-defined functions. URL: https://github.com/apache/flink/pull/9886#issuecomment-541324020 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit e431c1c28a35dd86e60a0892ff0e0d15b8da7245 (Sat Oct 12 13:14:36 UTC 2019) **Warnings:** * **1 pom.xml files were touched**: Check for build and licensing issues. * **This pull request references an unassigned [Jira ticket](https://issues.apache.org/jira/browse/FLINK-14027).** According to the [code contribution guide](https://flink.apache.org/contributing/contribute-code.html), tickets need to be assigned before starting with the implementation work. Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into to Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer of PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Updated] (FLINK-14027) Add documentation for Python user-defined functions
[ https://issues.apache.org/jira/browse/FLINK-14027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-14027: --- Labels: pull-request-available (was: ) > Add documentation for Python user-defined functions > --- > > Key: FLINK-14027 > URL: https://issues.apache.org/jira/browse/FLINK-14027 > Project: Flink > Issue Type: Sub-task > Components: API / Python, Documentation >Reporter: Dian Fu >Priority: Major > Labels: pull-request-available > Fix For: 1.10.0 > > > We should add documentation about how to use Python user-defined functions. > Python dependencies should be included in the document. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] xuyang1706 commented on a change in pull request #9184: [FLINK-13339][ml] Add an implementation of pipeline's api
xuyang1706 commented on a change in pull request #9184: [FLINK-13339][ml] Add an implementation of pipeline's api URL: https://github.com/apache/flink/pull/9184#discussion_r334236991 ## File path: flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/MLEnvironmentFactory.java ## @@ -0,0 +1,125 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.flink.ml.common; + +import java.util.HashMap; + +/** + * Factory to get the MLEnvironment using a MLEnvironmentId. + * + * The following code snippet shows how to interact with MLEnvironmentFactory. + * + * {@code + * long mlEnvId = MLEnvironmentFactory.getNewMLEnvironmentId(); + * MLEnvironment mlEnv = MLEnvironmentFactory.get(mlEnvId); + * } + * + */ +public class MLEnvironmentFactory { + + /** +* The default MLEnvironmentId. +*/ + public static final Long DEFAULT_ML_ENVIRONMENT_ID = 0L; + + /** +* A 'id' is a unique identifier of a MLEnvironment. +*/ + private static Long id = 1L; + + /** +* Map that hold the MLEnvironment and use the MLEnvironmentId as its key. +*/ + private static final HashMap map = new HashMap<>(); + + /** +* Get the MLEnvironment using a MLEnvironmentId. +* The default MLEnvironment will be set a new MLEnvironment +* when there is no default MLEnvironment. +* +* @param mlEnvId the MLEnvironmentId +* @return the MLEnvironment +*/ + public static synchronized MLEnvironment get(Long mlEnvId) { + if (!map.containsKey(mlEnvId)) { + if (mlEnvId.equals(DEFAULT_ML_ENVIRONMENT_ID)) { + setDefault(new MLEnvironment()); + } else { + throw new IllegalArgumentException("There is no Environment in factory. " + Review comment: Thanks for your advice, we refactored code to create the default ML Session in the static block and changed the exception message. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
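[Editor's note] A hedged sketch of the refactoring described in the comment above: creating the default MLEnvironment eagerly in a static block so that `get()` can fail fast for unknown ids. The class shape and the no-arg `MLEnvironment` constructor are assumptions based on the snippet in this review thread, not the final code.
{code:java}
import org.apache.flink.ml.common.MLEnvironment;

import java.util.HashMap;
import java.util.Map;

/** Sketch only: eager default-environment creation instead of lazy fallback inside get(). */
public class MLEnvironmentFactorySketch {

	public static final Long DEFAULT_ML_ENVIRONMENT_ID = 0L;

	private static final Map<Long, MLEnvironment> map = new HashMap<>();

	static {
		// The default environment always exists, so get(DEFAULT_ML_ENVIRONMENT_ID) never falls through.
		map.put(DEFAULT_ML_ENVIRONMENT_ID, new MLEnvironment());
	}

	public static synchronized MLEnvironment get(Long mlEnvId) {
		MLEnvironment env = map.get(mlEnvId);
		if (env == null) {
			throw new IllegalArgumentException(
				"Cannot find MLEnvironment for MLEnvironmentId " + mlEnvId
					+ ". Did you obtain the id from getNewMLEnvironmentId()?");
		}
		return env;
	}
}
{code}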
[GitHub] [flink] WeiZhong94 opened a new pull request #9886: [FLINK-14027][python][doc] Add documentation for Python user-defined functions.
WeiZhong94 opened a new pull request #9886: [FLINK-14027][python][doc] Add documentation for Python user-defined functions. URL: https://github.com/apache/flink/pull/9886 ## What is the purpose of the change *This pull request adds documentation for Python user-defined functions.* ## Brief change log - *adds documentation for Python user-defined functions to `udfs.md` and `udfs.zh.md`.* - *adds documentation for configurations of Python user-defined functions.* ## Verifying this change This change adds documentation without any test coverage. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (no) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (no) - If yes, how is the feature documented? (docs) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Updated] (FLINK-14367) Fully support for converting RexNode to Expression
[ https://issues.apache.org/jira/browse/FLINK-14367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu updated FLINK-14367: Description: Currently, the {{RexNodeToExpressionConverter}} in both planners is not fully implemented. There are many RexNodes that cannot be converted to Expressions. 1) RexFieldAccess -> GET call 2) Literals in interval types and complex types and TimeUnitRange 3) TRIM(BOTH/LEADING/HEADING, ..) ... We should have comprehensive tests to cover all cases. A good idea is to verify it together with {{ExpressionConverter}}. was: Currently, the {{RexNodeToExpressionConverter}} in both planners is not fully implemented. There are many RexNodes that cannot be converted to Expressions. 1) RexFieldAccess -> GET call 2) Literals in interval types and complex types 3) TRIM(BOTH/LEADING/HEADING, ..) ... We should have comprehensive tests to cover all cases. A good idea is to verify it together with {{ExpressionConverter}}. > Fully support for converting RexNode to Expression > -- > > Key: FLINK-14367 > URL: https://issues.apache.org/jira/browse/FLINK-14367 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Planner >Reporter: Jark Wu >Priority: Major > > Currently, the {{RexNodeToExpressionConverter}} in both planners is not > fully implemented. There are many RexNodes that cannot be converted to Expressions. > 1) RexFieldAccess -> GET call > 2) Literals in interval types and complex types and TimeUnitRange > 3) TRIM(BOTH/LEADING/HEADING, ..) > ... > We should have comprehensive tests to cover all cases. A good idea is to > verify it together with {{ExpressionConverter}}. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on issue #9858: [FLINK-14208][python] Optimize Python UDFs with parameters of constant values
flinkbot edited a comment on issue #9858: [FLINK-14208][python] Optimize Python UDFs with parameters of constant values URL: https://github.com/apache/flink/pull/9858#issuecomment-539830457 ## CI report: * 5b6f4eb6008d7f2656fce8c2fbd7b43d7b8a0cf0 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131068689) * 8a89d38abd427c34219762155947f0cd46a8e83b : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131083277) * 1127b6ac73690b5e4a6895d9a1a5a9475ba4c770 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131102400) * 81769499b8d9773bd9c787de3d784f811cdc0656 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131268997) * 78899808f9b87373aa2526d436987403e09379c2 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131310010) * 08d762ab691321c42b231c8d641bb68f60659ea7 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131643307) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] TisonKun commented on a change in pull request #9881: [FLINK-14377] Parse Executor-relevant ProgramOptions to ConfigOptions
TisonKun commented on a change in pull request #9881: [FLINK-14377] Parse Executor-relevant ProgramOptions to ConfigOptions URL: https://github.com/apache/flink/pull/9881#discussion_r334236287 ## File path: flink-clients/src/main/java/org/apache/flink/client/cli/ExecutionParameterProviderBuilder.java ## @@ -0,0 +1,161 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.client.cli; + +import org.apache.flink.annotation.Internal; +import org.apache.flink.api.common.ExecutionConfig; +import org.apache.flink.configuration.Configuration; +import org.apache.flink.configuration.ExecutorOptions; +import org.apache.flink.configuration.PipelineOptions; +import org.apache.flink.runtime.jobgraph.SavepointRestoreSettings; + +import org.apache.commons.cli.CommandLine; + +import java.io.File; +import java.net.MalformedURLException; +import java.net.URL; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; + +import static org.apache.flink.client.cli.CliFrontendParser.ARGS_OPTION; +import static org.apache.flink.client.cli.CliFrontendParser.CLASSPATH_OPTION; +import static org.apache.flink.client.cli.CliFrontendParser.CLASS_OPTION; +import static org.apache.flink.client.cli.CliFrontendParser.DETACHED_OPTION; +import static org.apache.flink.client.cli.CliFrontendParser.JAR_OPTION; +import static org.apache.flink.client.cli.CliFrontendParser.PARALLELISM_OPTION; +import static org.apache.flink.client.cli.CliFrontendParser.PYMODULE_OPTION; +import static org.apache.flink.client.cli.CliFrontendParser.PY_OPTION; +import static org.apache.flink.client.cli.CliFrontendParser.SAVEPOINT_ALLOW_NON_RESTORED_OPTION; +import static org.apache.flink.client.cli.CliFrontendParser.SAVEPOINT_PATH_OPTION; +import static org.apache.flink.client.cli.CliFrontendParser.SHUTDOWN_IF_ATTACHED_OPTION; +import static org.apache.flink.client.cli.CliFrontendParser.YARN_DETACHED_OPTION; +import static org.apache.flink.util.Preconditions.checkNotNull; + +/** + * Creates a {@link ExecutionParameterProvider} based either on a {@link Configuration}, or a {@link CommandLine command-line options}. + */ +@Internal +public class ExecutionParameterProviderBuilder { + + /** +* Creates a {@link ExecutionParameterProvider} based on the provided {@link Configuration}. +*/ + public static ExecutionParameterProvider fromConfiguration(final Configuration configuration) { + return new ExecutionParameterProvider(checkNotNull(configuration)); + } + + /** +* Creates a {@link ExecutionParameterProvider} based on the provided user-provided {@link CommandLine command-line options}. 
+*/ + public static ExecutionParameterProvider fromCommandLine(final CommandLine commandLine) throws CliArgsException { + final Configuration configuration = new Configuration(); + + final String[] args = commandLine.hasOption(ARGS_OPTION.getOpt()) ? + commandLine.getOptionValues(ARGS_OPTION.getOpt()) : + commandLine.getArgs(); + + final String entryPointClass = commandLine.hasOption(CLASS_OPTION.getOpt()) + ? commandLine.getOptionValue(CLASS_OPTION.getOpt()) + : null; + + final boolean isPython = commandLine.hasOption(PY_OPTION.getOpt()) + | commandLine.hasOption(PYMODULE_OPTION.getOpt()) + | "org.apache.flink.client.python.PythonGatewayServer".equals(entryPointClass); + + if (commandLine.hasOption(JAR_OPTION.getOpt())) { + parseJarURLToConfig(commandLine.getOptionValue(JAR_OPTION.getOpt()), configuration); + } else if (!isPython && args.length > 0) { + parseJarURLToConfig(args[0], configuration); + } + + parseClasspathURLsToConfig(commandLine, configuration); + parseParallelismToConfig(commandLine, configuration); + parseDetachedModeToConfig(commandLine, configuration); + parseShutdownOnExi
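[Editor's note] For context, a minimal, hypothetical illustration of the pattern this PR follows: mapping parsed command-line options onto Flink ConfigOptions instead of a dedicated ProgramOptions object. The helper class and method names are invented for the example; only `CoreOptions.DEFAULT_PARALLELISM`, `Configuration#setInteger`, and the commons-cli calls are existing APIs.
{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.CoreOptions;

import org.apache.commons.cli.CommandLine;

/** Hypothetical helper showing the CommandLine -> Configuration translation pattern. */
final class CommandLineToConfigurationExample {

	static Configuration parseParallelism(CommandLine commandLine, String parallelismOpt) {
		final Configuration configuration = new Configuration();
		if (commandLine.hasOption(parallelismOpt)) {
			// "parallelism.default" (CoreOptions.DEFAULT_PARALLELISM) is the existing config key.
			int parallelism = Integer.parseInt(commandLine.getOptionValue(parallelismOpt));
			configuration.setInteger(CoreOptions.DEFAULT_PARALLELISM, parallelism);
		}
		return configuration;
	}
}
{code}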
[GitHub] [flink] hequn8128 commented on a change in pull request #9858: [FLINK-14208][python] Optimize Python UDFs with parameters of constant values
hequn8128 commented on a change in pull request #9858: [FLINK-14208][python] Optimize Python UDFs with parameters of constant values URL: https://github.com/apache/flink/pull/9858#discussion_r334235662 ## File path: flink-python/pyflink/table/tests/test_udf.py ## @@ -112,6 +112,103 @@ def test_udf_in_join_condition_2(self): actual = source_sink_utils.results() self.assert_equals(actual, ["2,Hi,2,Flink"]) +def test_udf_with_constant_params(self): +def udf_with_constant_params(p, null_param, tinyint_param, smallint_param, int_param, + bigint_param, decimal_param, float_param, double_param, + boolean_param, str_param, + date_param, time_param, timestamp_param): +# decide two float whether equals Review comment: decide whether two floats are equal This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] hequn8128 commented on issue #9858: [FLINK-14208][python] Optimize Python UDFs with parameters of constant values
hequn8128 commented on issue #9858: [FLINK-14208][python] Optimize Python UDFs with parameters of constant values URL: https://github.com/apache/flink/pull/9858#issuecomment-541321281 I have created a new jira([FLINK-14383](https://issues.apache.org/jira/browse/FLINK-14383)) for the time interval types. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Assigned] (FLINK-14383) Support python UDFs with constant value of time interval types
[ https://issues.apache.org/jira/browse/FLINK-14383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hequn Cheng reassigned FLINK-14383: --- Assignee: Huang Xingbo > Support python UDFs with constant value of time interval types > -- > > Key: FLINK-14383 > URL: https://issues.apache.org/jira/browse/FLINK-14383 > Project: Flink > Issue Type: Sub-task > Components: API / Python >Reporter: Hequn Cheng >Assignee: Huang Xingbo >Priority: Major > Fix For: 1.10.0 > > > As discussed > [here|https://github.com/apache/flink/pull/9858#issuecomment-541312088], this > issue is dedicated to add support for python UDFs with constant value of time > interval types. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-14383) Support python UDFs with constant value of time interval types
Hequn Cheng created FLINK-14383: --- Summary: Support python UDFs with constant value of time interval types Key: FLINK-14383 URL: https://issues.apache.org/jira/browse/FLINK-14383 Project: Flink Issue Type: Sub-task Components: API / Python Reporter: Hequn Cheng Fix For: 1.10.0 As discussed [here|https://github.com/apache/flink/pull/9858#issuecomment-541312088], this issue is dedicated to add support for python UDFs with constant value of time interval types. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-14382) Wrong local FLINK_PLUGINS_DIR is set to flink-conf.yaml of jobmanager and taskmanager on Yarn
Yang Wang created FLINK-14382: - Summary: Wrong local FLINK_PLUGINS_DIR is set to flink-conf.yaml of jobmanager and taskmanager on Yarn Key: FLINK-14382 URL: https://issues.apache.org/jira/browse/FLINK-14382 Project: Flink Issue Type: Bug Components: Deployment / YARN Reporter: Yang Wang If we do not set FLINK_PLUGINS_DIR in flink-conf.yaml, it will be written into the [flink configuration|https://github.com/apache/flink/blob/9e6ff81e22d6f5f04abb50ca1aea84fd2542bf9d/flink-core/src/main/java/org/apache/flink/configuration/GlobalConfiguration.java#L158] based on the local environment. In yarn mode, this local path then ends up in the flink-conf.yaml used by the jobmanager and taskmanager, and we see a warning log like the one below.
{code:java}
2019-10-12 19:24:58,165 WARN org.apache.flink.core.plugin.PluginConfig - Environment variable [FLINK_PLUGINS_DIR] is set to [/Users/wangy/IdeaProjects/apache-flink/flink-dist/target/flink-1.10-SNAPSHOT-bin/flink-1.10-SNAPSHOT/plugins] but the directory doesn't exist
{code}
It was introduced by FLINK-12143. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on issue #9858: [FLINK-14208][python] Optimize Python UDFs with parameters of constant values
flinkbot edited a comment on issue #9858: [FLINK-14208][python] Optimize Python UDFs with parameters of constant values URL: https://github.com/apache/flink/pull/9858#issuecomment-539830457 ## CI report: * 5b6f4eb6008d7f2656fce8c2fbd7b43d7b8a0cf0 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131068689) * 8a89d38abd427c34219762155947f0cd46a8e83b : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131083277) * 1127b6ac73690b5e4a6895d9a1a5a9475ba4c770 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131102400) * 81769499b8d9773bd9c787de3d784f811cdc0656 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131268997) * 78899808f9b87373aa2526d436987403e09379c2 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131310010) * 08d762ab691321c42b231c8d641bb68f60659ea7 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/131643307) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] TisonKun commented on a change in pull request #9820: [FLINK-14290] Decouple plan translation from job execution/ClusterClient
TisonKun commented on a change in pull request #9820: [FLINK-14290] Decouple plan translation from job execution/ClusterClient URL: https://github.com/apache/flink/pull/9820#discussion_r334234669 ## File path: flink-clients/src/main/java/org/apache/flink/client/program/ClusterClient.java ## @@ -201,18 +157,15 @@ public JobSubmissionResult run( int parallelism, SavepointRestoreSettings savepointSettings) throws CompilerException, ProgramInvocationException { - OptimizedPlan optPlan = getOptimizedPlan(compiler, plan, parallelism); Review comment: With this change `ClusterClient` doesn't have any usage of `compiler`. Also we want to hide `compiler` inside translator. So we can remove `ClusterClient.compiler` field. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-14212) Support Python UDFs without arguments
[ https://issues.apache.org/jira/browse/FLINK-14212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950018#comment-16950018 ] Hequn Cheng commented on FLINK-14212: - Resolved in 1.10.0 via: 345bfefd91c5a1a309ff1a4b397eb9882788fa2a > Support Python UDFs without arguments > - > > Key: FLINK-14212 > URL: https://issues.apache.org/jira/browse/FLINK-14212 > Project: Flink > Issue Type: Sub-task > Components: API / Python >Reporter: Dian Fu >Assignee: Wei Zhong >Priority: Major > Labels: pull-request-available > Fix For: 1.10.0 > > Time Spent: 20m > Remaining Estimate: 0h > > We should support Python UDFs without arguments -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-14212) Support Python UDFs without arguments
[ https://issues.apache.org/jira/browse/FLINK-14212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hequn Cheng closed FLINK-14212. --- Resolution: Fixed > Support Python UDFs without arguments > - > > Key: FLINK-14212 > URL: https://issues.apache.org/jira/browse/FLINK-14212 > Project: Flink > Issue Type: Sub-task > Components: API / Python >Reporter: Dian Fu >Assignee: Wei Zhong >Priority: Major > Labels: pull-request-available > Fix For: 1.10.0 > > Time Spent: 20m > Remaining Estimate: 0h > > We should support Python UDFs without arguments -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on issue #9858: [FLINK-14208][python] Optimize Python UDFs with parameters of constant values
flinkbot edited a comment on issue #9858: [FLINK-14208][python] Optimize Python UDFs with parameters of constant values URL: https://github.com/apache/flink/pull/9858#issuecomment-539830457 ## CI report: * 5b6f4eb6008d7f2656fce8c2fbd7b43d7b8a0cf0 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131068689) * 8a89d38abd427c34219762155947f0cd46a8e83b : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131083277) * 1127b6ac73690b5e4a6895d9a1a5a9475ba4c770 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131102400) * 81769499b8d9773bd9c787de3d784f811cdc0656 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131268997) * 78899808f9b87373aa2526d436987403e09379c2 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131310010) * 08d762ab691321c42b231c8d641bb68f60659ea7 : UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] hequn8128 closed pull request #9865: [FLINK-14212][python] Support no-argument Python UDFs.
hequn8128 closed pull request #9865: [FLINK-14212][python] Support no-argument Python UDFs. URL: https://github.com/apache/flink/pull/9865 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] TisonKun commented on a change in pull request #9820: [FLINK-14290] Decouple plan translation from job execution/ClusterClient
TisonKun commented on a change in pull request #9820: [FLINK-14290] Decouple plan translation from job execution/ClusterClient URL: https://github.com/apache/flink/pull/9820#discussion_r334234669 ## File path: flink-clients/src/main/java/org/apache/flink/client/program/ClusterClient.java ## @@ -201,18 +157,15 @@ public JobSubmissionResult run( int parallelism, SavepointRestoreSettings savepointSettings) throws CompilerException, ProgramInvocationException { - OptimizedPlan optPlan = getOptimizedPlan(compiler, plan, parallelism); Review comment: With this change `ClusterClient` doesn't have any usage of `compiler`. Also we want to hide `compiler` inside translator. So we can remove `Cluster.compiler` field. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] HuangXingBo commented on issue #9858: [FLINK-14208][python] Optimize Python UDFs with parameters of constant values
HuangXingBo commented on issue #9858: [FLINK-14208][python] Optimize Python UDFs with parameters of constant values URL: https://github.com/apache/flink/pull/9858#issuecomment-541318216 > @HuangXingBo I find that there are problems with supporting the time interval type in Python due to the equals method of `TimeIntervalTypeInfo`. The equals method uses `==` to compare the two serializers. > > This is OK for Java because the `INTERVAL_MILLIS` in `TimeIntervalTypeInfo` is a static variable. However, for Python, we serialize the `INTERVAL_MILLIS` type in `PythonFunctionCodeGenerator` using `EncodingUtils.encodeObjectToString` and deserialize it later. This means the two objects can no longer be compared with `==`. > > I'm not sure if we can change the compare method in `TimeIntervalTypeInfo`, i.e., change `==` to `equals`. But definitely, it's not a good idea to solve the problem in this PR. We can discuss the problem of the equals method in another jira and add support for the time interval type later. > > What do you think? > > Best, Hequn @hequn8128 LGTM, we can support the interval type in another PR later, and I have addressed the comment in the latest commit. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
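[Editor's note] A self-contained sketch of the identity-vs-equality issue described above: after a serialize/deserialize round trip (roughly what a base64 object-encoding utility does under the hood), `==` against the original static instance fails even though a class-based `equals` would still match. The `IntervalMillisTypeInfo` class below is a stand-in for illustration, not the real `TimeIntervalTypeInfo`.
{code:java}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class IdentityVsEqualsExample {

	// Stand-in for TimeIntervalTypeInfo.INTERVAL_MILLIS: a serializable "singleton".
	static class IntervalMillisTypeInfo implements Serializable {
		static final IntervalMillisTypeInfo INSTANCE = new IntervalMillisTypeInfo();

		@Override
		public boolean equals(Object o) {
			// Comparing by class (instead of by reference) survives deserialization.
			return o instanceof IntervalMillisTypeInfo;
		}

		@Override
		public int hashCode() {
			return IntervalMillisTypeInfo.class.hashCode();
		}
	}

	public static void main(String[] args) throws Exception {
		// Serialize the singleton ...
		ByteArrayOutputStream bytes = new ByteArrayOutputStream();
		try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
			out.writeObject(IntervalMillisTypeInfo.INSTANCE);
		}
		// ... and deserialize it, which creates a brand-new object.
		Object copy;
		try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
			copy = in.readObject();
		}
		System.out.println(copy == IntervalMillisTypeInfo.INSTANCE);      // false: reference identity is lost
		System.out.println(copy.equals(IntervalMillisTypeInfo.INSTANCE)); // true: class-based equals still matches
	}
}
{code}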
[GitHub] [flink] TisonKun commented on a change in pull request #9820: [FLINK-14290] Decouple plan translation from job execution/ClusterClient
TisonKun commented on a change in pull request #9820: [FLINK-14290] Decouple plan translation from job execution/ClusterClient URL: https://github.com/apache/flink/pull/9820#discussion_r334234469 ## File path: flink-streaming-java/src/main/java/org/apache/flink/streaming/api/graph/StreamGraphTranslator.java ## @@ -0,0 +1,95 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.flink.streaming.api.graph; + +import org.apache.flink.api.common.Plan; +import org.apache.flink.api.dag.Pipeline; +import org.apache.flink.client.FlinkPipelineTranslator; +import org.apache.flink.configuration.Configuration; +import org.apache.flink.optimizer.DataStatistics; +import org.apache.flink.optimizer.Optimizer; +import org.apache.flink.optimizer.plan.OptimizedPlan; +import org.apache.flink.optimizer.plantranslate.JobGraphGenerator; +import org.apache.flink.runtime.jobgraph.JobGraph; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import static org.apache.flink.util.Preconditions.checkArgument; + +/** + * {@link FlinkPipelineTranslator} for DataStream API {@link StreamGraph StreamGraphs}. + * + * Note: this is used through reflection in + * {@link org.apache.flink.client.FlinkPipelineTranslationUtil}. + */ +@SuppressWarnings("unused") +public class StreamGraphTranslator implements FlinkPipelineTranslator { + + private static final Logger LOG = LoggerFactory.getLogger(StreamGraphTranslator.class); + + @Override + public JobGraph translateToJobGraph( + Pipeline pipeline, + Configuration optimizerConfiguration, + int defaultParallelism) { + checkArgument(pipeline instanceof StreamGraph, + "Given pipeline is not a DataStream StreamGraph."); + + StreamGraph streamGraph = (StreamGraph) pipeline; + return streamGraph.getJobGraph(null); + } + + private void setDefaultParallelism(Plan plan, int defaultParallelism) { Review comment: Doesn't used anywhere. Also I don't think StreamGraphTranslator should care about batch Plan This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] TisonKun commented on a change in pull request #9820: [FLINK-14290] Decouple plan translation from job execution/ClusterClient
TisonKun commented on a change in pull request #9820: [FLINK-14290] Decouple plan translation from job execution/ClusterClient URL: https://github.com/apache/flink/pull/9820#discussion_r334234314 ## File path: flink-streaming-java/src/main/java/org/apache/flink/streaming/api/graph/StreamGraphTranslator.java ## @@ -0,0 +1,95 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.flink.streaming.api.graph; + +import org.apache.flink.api.common.Plan; +import org.apache.flink.api.dag.Pipeline; +import org.apache.flink.client.FlinkPipelineTranslator; +import org.apache.flink.configuration.Configuration; +import org.apache.flink.optimizer.DataStatistics; +import org.apache.flink.optimizer.Optimizer; +import org.apache.flink.optimizer.plan.OptimizedPlan; +import org.apache.flink.optimizer.plantranslate.JobGraphGenerator; +import org.apache.flink.runtime.jobgraph.JobGraph; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import static org.apache.flink.util.Preconditions.checkArgument; + +/** + * {@link FlinkPipelineTranslator} for DataStream API {@link StreamGraph StreamGraphs}. + * + * Note: this is used through reflection in + * {@link org.apache.flink.client.FlinkPipelineTranslationUtil}. + */ +@SuppressWarnings("unused") +public class StreamGraphTranslator implements FlinkPipelineTranslator { + + private static final Logger LOG = LoggerFactory.getLogger(StreamGraphTranslator.class); + + @Override + public JobGraph translateToJobGraph( + Pipeline pipeline, + Configuration optimizerConfiguration, + int defaultParallelism) { + checkArgument(pipeline instanceof StreamGraph, + "Given pipeline is not a DataStream StreamGraph."); + + StreamGraph streamGraph = (StreamGraph) pipeline; + return streamGraph.getJobGraph(null); + } + + private void setDefaultParallelism(Plan plan, int defaultParallelism) { + if (defaultParallelism > 0 && plan.getDefaultParallelism() <= 0) { + LOG.debug("Changing plan default parallelism from {} to {}", + plan.getDefaultParallelism(), + defaultParallelism); + plan.setDefaultParallelism(defaultParallelism); + } + + LOG.debug("Set parallelism {}, plan default parallelism {}", + defaultParallelism, + plan.getDefaultParallelism()); + } + + @Override + public String translateToJSONExecutionPlan(Pipeline pipeline) { + checkArgument(pipeline instanceof StreamGraph, + "Given pipeline is not a DataStream StreamGraph."); + + StreamGraph streamGraph = (StreamGraph) pipeline; + + return streamGraph.getStreamingPlanAsJSON(); + } + + private JobGraph compilePlan(Plan plan, Configuration optimizerConfiguration) { Review comment: Doesn't used anywhere. Also I don't think StreamGraphTranslator should care about batch `Plan` This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] TisonKun commented on a change in pull request #9820: [FLINK-14290] Decouple plan translation from job execution/ClusterClient
TisonKun commented on a change in pull request #9820: [FLINK-14290] Decouple plan translation from job execution/ClusterClient URL: https://github.com/apache/flink/pull/9820#discussion_r334234276 ## File path: flink-clients/src/main/java/org/apache/flink/client/FlinkPipelineTranslationUtil.java ## @@ -0,0 +1,98 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.flink.client; + +import org.apache.flink.api.dag.Pipeline; +import org.apache.flink.configuration.Configuration; +import org.apache.flink.runtime.jobgraph.JobGraph; + +/** + * Utility for transforming {@link Pipeline FlinkPipelines} into a {@link JobGraph}. This uses + * reflection or service discovery to find the right {@link FlinkPipelineTranslator} for a given + * subclass of {@link Pipeline}. + */ +public final class FlinkPipelineTranslationUtil { + + /** +* Transmogrifies the given {@link Pipeline} to a {@link JobGraph}. +*/ + public static JobGraph getJobGraph( + Pipeline pipeline, + Configuration optimizerConfiguration, + int defaultParallelism) { + + FlinkPipelineTranslator pipelineTranslator = getPipelineTranslator(pipeline); + + return pipelineTranslator.translateToJobGraph(pipeline, + optimizerConfiguration, + defaultParallelism); + } + + /** +* Extracts the execution plan (as JSON) from the given {@link Pipeline}. +*/ + public static String translateToJSONExecutionPlan(Pipeline pipeline) { + FlinkPipelineTranslator pipelineTranslator = getPipelineTranslator(pipeline); + return pipelineTranslator.translateToJSONExecutionPlan(pipeline); + } + + private static FlinkPipelineTranslator getPipelineTranslator(Pipeline pipeline) { + PlanTranslator planToJobGraphTransmogrifier = new PlanTranslator(); Review comment: ```suggestion PlanTranslator planToJobGraphTranslator = new PlanTranslator(); ``` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-14123) Change taskmanager.memory.fraction default value to 0.6
[ https://issues.apache.org/jira/browse/FLINK-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950010#comment-16950010 ] Xintong Song commented on FLINK-14123: -- Good point. FLIP-49 does not fix this problem for bug fix versions of previous releases. My concern is that, changing this value may fix the GC overhead error, but it may also introduce regression for some other user cases. I think it's ok that some users have to change their configs when switching between different releases, as long as there's a good reason for that. But having to change configs for switching to debug versions of the same release sounds a little aggressive, especially for users who do not run into this error. I'm not sure which one is worse, having the error which can be fixed by manually adjust the configuration, or having the regression in cases that used to work. Does this error occur only in some rare cases or commonly for all flink batch jobs? I'm asking because I don't find any other people run into the same problem. If it's a common case, maybe we should fix it in the debug versions. [~sewen] What do you think? > Change taskmanager.memory.fraction default value to 0.6 > --- > > Key: FLINK-14123 > URL: https://issues.apache.org/jira/browse/FLINK-14123 > Project: Flink > Issue Type: Improvement > Components: Runtime / Configuration >Affects Versions: 1.9.0 >Reporter: liupengcheng >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Currently, we are testing flink batch task, such as terasort, however, it > started only awhile then it failed due to OOM. > > {code:java} > org.apache.flink.client.program.ProgramInvocationException: Job failed. > (JobID: a807e1d635bd4471ceea4282477f8850) > at > org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:262) > at > org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338) > at > org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:326) > at > org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62) > at > org.apache.flink.api.scala.ExecutionEnvironment.execute(ExecutionEnvironment.scala:539) > at > com.github.ehiggs.spark.terasort.FlinkTeraSort$.main(FlinkTeraSort.scala:89) > at > com.github.ehiggs.spark.terasort.FlinkTeraSort.main(FlinkTeraSort.scala) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:604) > at > org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:466) > at > org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274) > at > org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746) > at > org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273) > at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205) > at > org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1007) > at > org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1080) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886) > at > org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) > at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1080) > Caused by: org.apache.flink.runtime.client.JobExecutionException: Job > execution failed. > at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146) > at > org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:259) > ... 23 more > Caused by: java.lang.RuntimeException: Error obtaining the sorted input: > Thread 'SortMerger Reading Thread' terminated due to an exception: GC > overhead limit exceeded > at > org.apache.flink.runtime.operators.sort.UnilateralSortMerger.getIterator(UnilateralSortMerger.java:650) > at > org.apache.flink.runtime.operators.BatchTask.getInput(BatchTask.java:1109) > at org.apache.flink.runtime.operators.NoOpDriver.run(NoOpDriver.java:82) > at org.apache.flink.runtime.o
[jira] [Commented] (FLINK-14123) Change taskmanager.memory.fraction default value to 0.6
[ https://issues.apache.org/jira/browse/FLINK-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950006#comment-16950006 ] liupengcheng commented on FLINK-14123: -- [~xintongsong] I still have some doubts about your opinion. Even if, as you said, FLIP-49 is very likely to be released in Flink 1.10, the new design only affects Flink 1.10+, so how can we avoid the OOM risk for 1.9.x? I think this is a serious stability problem; we'd better fix it in later bugfix versions on 1.9. What do you think? cc [~sewen] > Change taskmanager.memory.fraction default value to 0.6 > --- > > Key: FLINK-14123 > URL: https://issues.apache.org/jira/browse/FLINK-14123 > Project: Flink > Issue Type: Improvement > Components: Runtime / Configuration >Affects Versions: 1.9.0 >Reporter: liupengcheng >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Currently, we are testing flink batch task, such as terasort, however, it > started only awhile then it failed due to OOM. > > {code:java} > org.apache.flink.client.program.ProgramInvocationException: Job failed. > (JobID: a807e1d635bd4471ceea4282477f8850) > at > org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:262) > at > org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338) > at > org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:326) > at > org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62) > at > org.apache.flink.api.scala.ExecutionEnvironment.execute(ExecutionEnvironment.scala:539) > at > com.github.ehiggs.spark.terasort.FlinkTeraSort$.main(FlinkTeraSort.scala:89) > at > com.github.ehiggs.spark.terasort.FlinkTeraSort.main(FlinkTeraSort.scala) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:604) > at > org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:466) > at > org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274) > at > org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746) > at > org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273) > at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205) > at > org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1007) > at > org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1080) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886) > at > org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) > at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1080) > Caused by: org.apache.flink.runtime.client.JobExecutionException: Job > execution failed. > at > org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146) > at > org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:259) > ... 
23 more > Caused by: java.lang.RuntimeException: Error obtaining the sorted input: > Thread 'SortMerger Reading Thread' terminated due to an exception: GC > overhead limit exceeded > at > org.apache.flink.runtime.operators.sort.UnilateralSortMerger.getIterator(UnilateralSortMerger.java:650) > at > org.apache.flink.runtime.operators.BatchTask.getInput(BatchTask.java:1109) > at org.apache.flink.runtime.operators.NoOpDriver.run(NoOpDriver.java:82) > at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:504) > at > org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:369) > at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705) > at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.io.IOException: Thread 'SortMerger Reading Thread' terminated > due to an exception: GC overhead limit exceeded > at > org.apache.flink.runtime.operators.sort.UnilateralSortMerger$ThreadBase.run(UnilateralSortMerger.java:831) > Ca
[GitHub] [flink] aljoscha commented on issue #9820: [FLINK-14290] Decouple plan translation from job execution/ClusterClient
aljoscha commented on issue #9820: [FLINK-14290] Decouple plan translation from job execution/ClusterClient URL: https://github.com/apache/flink/pull/9820#issuecomment-541313769 @TisonKun @kl0u you could have another look now. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] hequn8128 commented on issue #9858: [FLINK-14208][python] Optimize Python UDFs with parameters of constant values
hequn8128 commented on issue #9858: [FLINK-14208][python] Optimize Python UDFs with parameters of constant values URL: https://github.com/apache/flink/pull/9858#issuecomment-541312088 @HuangXingBo I found that there is a problem with supporting the time interval type in Python due to the equals method of `TimeIntervalTypeInfo`. The equals method uses `==` to compare the two serializers. This is fine for Java because `INTERVAL_MILLIS` in `TimeIntervalTypeInfo` is a static variable. However, for Python we serialize the `INTERVAL_MILLIS` type in `PythonFunctionCodeGenerator` using `EncodingUtils.encodeObjectToString` and deserialize it later, so the two objects can no longer be compared with `==`. I'm not sure whether we can change the comparison in `TimeIntervalTypeInfo`, i.e., change `==` to `equals`. But it's definitely not a good idea to solve that problem in this PR. We can discuss the problem of the equals method in another JIRA and support the time interval type later. What do you think? Best, Hequn This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
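To make the reference-equality pitfall above concrete, here is a minimal, self-contained Java sketch (not Flink code — the dummy serializer class is purely hypothetical) showing how a serialize/deserialize round trip breaks `==` while a value-based `equals` still holds:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class ReferenceEqualityDemo {

    // Hypothetical stand-in for a serializer kept in a static field (like INTERVAL_MILLIS above).
    static class DummySerializer implements Serializable {
        private static final long serialVersionUID = 1L;

        @Override
        public boolean equals(Object o) {
            return o instanceof DummySerializer; // value-based equality
        }

        @Override
        public int hashCode() {
            return 42;
        }
    }

    static final DummySerializer SINGLETON = new DummySerializer();

    public static void main(String[] args) throws Exception {
        // Round-trip the singleton, analogous to encoding the type info to a string
        // in the code generator and decoding it again later.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(SINGLETON);
        }
        Object copy;
        try (ObjectInputStream ois =
                new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            copy = ois.readObject();
        }

        System.out.println(SINGLETON == copy);      // false: deserialization created a new object
        System.out.println(SINGLETON.equals(copy)); // true: value comparison still succeeds
    }
}
```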
[GitHub] [flink] ifndef-SleePy commented on issue #9885: [FLINK-14344][checkpointing] Snapshots master hook state asynchronously
ifndef-SleePy commented on issue #9885: [FLINK-14344][checkpointing] Snapshots master hook state asynchronously URL: https://github.com/apache/flink/pull/9885#issuecomment-541311402 @flinkbot attention @pnowojski This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] hequn8128 commented on issue #9865: [FLINK-14212][python] Support no-argument Python UDFs.
hequn8128 commented on issue #9865: [FLINK-14212][python] Support no-argument Python UDFs. URL: https://github.com/apache/flink/pull/9865#issuecomment-541310126 @WeiZhong94 Thanks a lot for the update. Also thank you all for the review. LGTM, will merge this once travis passed. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9865: [FLINK-14212][python] Support no-argument Python UDFs.
flinkbot edited a comment on issue #9865: [FLINK-14212][python] Support no-argument Python UDFs. URL: https://github.com/apache/flink/pull/9865#issuecomment-539919841 ## CI report: * bf1d566ea2f91d61c2f436bb92b5337088b7 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131102433) * 3110f74ae60ed10bdb2bdbed7dd1facfde9fdeea : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131266172) * f48b910ceee34ff7eb9f6f75d7782b49005c587f : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/131275232) * c97adb28424b4518535c9922b011d7adcf5e842f : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131279546) * 4edf1202d5b9c34f329ca29ed96430b69a5807cd : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131468804) * 04b573f99c0e09a06e3db6f3d939e619e25a7eb0 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131633458) * dee89d05086821716490658ddcc573a7574eb1bd : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131637633) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9501: [FLINK-12697] [State Backends] Support on-disk state storage for spill-able heap backend
flinkbot edited a comment on issue #9501: [FLINK-12697] [State Backends] Support on-disk state storage for spill-able heap backend URL: https://github.com/apache/flink/pull/9501#issuecomment-523769032 ## CI report: * 6107a00876759dcef075592c8bfa56764f66b6fa : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124167653) * 4263f76529de2d80aa3554ee91c206fc0f9152e5 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124179375) * befcc101f408baca6bb2bf83eea14cb4a9f496bd : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124863326) * df676fa3d969772b8995538adc147bbb86c36189 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/124881729) * 24328ec47f13f4a445f3c431ea6025a368aaedc1 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/125238415) * 38d3ce4f2e78ec98aceea6a9d1f958f903ae5f3f : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/128211309) * cfbb8387223123f660bc5d5b1f02ec36616b6c11 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/128339902) * 2819c8dd4ede0cb088ac20ba3715e8d8bafb124e : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129069417) * 1cf8f18b890dcbb0e19870cb4be6efc1a600fea5 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129082047) * 20e2c58d7ba5b385a6b6e898a49b8ce9a0c81243 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129230744) * 8af6229634471ccd064c02b71fc60bc5262c8ab1 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129306669) * 80bde06accbe5b4a6bf24d4346b5f3a285998403 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129428242) * f7233cb92215a8e85515fc897e079f875e3e3db4 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/130546181) * 728f37b2743a9e969c3219b6ded682c9a497adc7 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131594441) * cda6e8090f29f406298c2ae8c922f1902f3c6122 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131602364) * 57b4b4b6f94058a194b2206f61236872040f6f11 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131612775) * 6c59df1414336a026a2e5a72087bff49b9627c19 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131636770) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Updated] (FLINK-12576) inputQueueLength metric does not work for LocalInputChannels
[ https://issues.apache.org/jira/browse/FLINK-12576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu updated FLINK-12576: Fix Version/s: (was: 1.9.0) 1.9.2 > inputQueueLength metric does not work for LocalInputChannels > > > Key: FLINK-12576 > URL: https://issues.apache.org/jira/browse/FLINK-12576 > Project: Flink > Issue Type: Bug > Components: Runtime / Metrics, Runtime / Network >Affects Versions: 1.6.4, 1.7.2, 1.8.0, 1.9.0 >Reporter: Piotr Nowojski >Priority: Major > Labels: pull-request-available > Fix For: 1.9.2 > > Attachments: Screen Shot 2019-09-24 at 3.11.15 PM.png, Screen Shot > 2019-09-24 at 3.13.05 PM.png, Screen Shot 2019-09-24 at 3.22.36 PM.png, > Screen Shot 2019-09-24 at 3.22.53 PM.png, > flink-1.8-2-single-slot-TMs-input.png, > flink-1.8-2-single-slot-TMs-output.png, flink-1.8-input-subtasks.png, > flink-1.8-output-subtasks.png, image-2019-09-26-11-34-24-878.png, > image-2019-09-26-11-36-06-027.png > > Time Spent: 20m > Remaining Estimate: 0h > > Currently {{inputQueueLength}} ignores LocalInputChannels > ({{SingleInputGate#getNumberOfQueuedBuffers}}). This can cause mistakes > when looking for causes of back pressure (if a task is back-pressuring the whole > Flink job, but there is data skew and only local input channels are being > used). -- This message was sent by Atlassian Jira (v8.3.4#803005)
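A purely hypothetical sketch of what a fixed gauge could look like — summing queued buffers over every input channel rather than only remote ones. The interfaces below are illustrative stand-ins, not Flink's actual channel or metric classes:
{code:java}
import java.util.List;

// Illustrative stand-in, not Flink's InputChannel API.
interface QueuedBufferCounter {
    int queuedBuffers();
}

// A gauge that counts buffers queued in ALL channels, so that back pressure caused by
// data skew on local channels is no longer invisible to the metric.
final class InputQueueLengthGauge {
    private final List<QueuedBufferCounter> channels;

    InputQueueLengthGauge(List<QueuedBufferCounter> channels) {
        this.channels = channels;
    }

    int getValue() {
        return channels.stream().mapToInt(QueuedBufferCounter::queuedBuffers).sum();
    }
}
{code}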
[jira] [Updated] (FLINK-14328) JobCluster cannot reach TaskManager in K8s
[ https://issues.apache.org/jira/browse/FLINK-14328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu updated FLINK-14328: Fix Version/s: (was: 1.9.0) 1.9.2 > JobCluster cannot reach TaskManager in K8s > -- > > Key: FLINK-14328 > URL: https://issues.apache.org/jira/browse/FLINK-14328 > Project: Flink > Issue Type: Bug >Reporter: Tim >Priority: Major > Fix For: 1.9.2 > > > I have a Job Cluster which I am running in K8s. It consists of > * job manager deployment (1) > * task manager deployment (1) > * service > This is more or less following the standard "Job Cluster" setup. > Additionally, (due to known issues of TMs talking to JMs), I have set > taskmanager.network.bind-policy to "ip", so that the task manager binds to > the IP of the pod rather than the pod name (which is not reachable via DNS). > So far so good. > > Once the cluster is started, I can see the job running. I also see that the > JM's resource manager has registered the TM. > {code:java} > 2019-10-05 20:37:14.554 [flink-akka.actor.default-dispatcher-4] DEBUG > org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl - Slot Pool Status: > status: connected to > akka.tcp://flink@data-capture-enrichedtrans-raw-jobcluster:6123/user/resourcemanager > registered TaskManagers: [f34656491b8dfae726d992d276dc6d39] > available slots: [] > allocated slots: [[AllocatedSlot a00f44d19f38ca36da3ae5083c2d02ae @ > f34656491b8dfae726d992d276dc6d39 @ > data-capture-enrichedtrans-raw-taskmanager-674476f57c-26kxr (dataPort=35815) > - 0]] > pending requests: [] > } > {code} > However, I see several errors like below, before the job eventually fails > (maybe after 5 minutes), and goes into recovery. This happens until all > restarts are exhausted, at which point the cluster completely fails. > {code:java} > 2019-10-05 20:42:14.768 [flink-akka.actor.default-dispatcher-19] WARN > akka.remote.ReliableDeliverySupervisor > flink-akka.remote.default-remote-dispatcher-6 - Association with remote > system [akka.tcp://flink@10.107.38.92:50100] has failed, address is now gated > for [50] ms. Reason: [Association failed with > [akka.tcp://flink@10.107.38.92:50100]] Caused by: [java.net.ConnectException: > Connection refused: /10.107.38.92:50100] > {code} > {{To me it looks like the JM is not able to make a connection on the RPC port > of the taskmanager (50100 is the taskmanager.rpc.port setting, and > 10.107.38.92 is the IP address of the task manager pod as seen by "kubectl > describe pod".)}} > {{Has anyone come across this issue?}} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-14215) Add Docs for TM and JM Environment Variable Setting
[ https://issues.apache.org/jira/browse/FLINK-14215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu updated FLINK-14215: Fix Version/s: (was: 1.9.0) 1.9.2 > Add Docs for TM and JM Environment Variable Setting > --- > > Key: FLINK-14215 > URL: https://issues.apache.org/jira/browse/FLINK-14215 > Project: Flink > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.9.0 >Reporter: Zhenqiu Huang >Assignee: Zhenqiu Huang >Priority: Minor > Fix For: 1.9.2 > > > Add description for > /** >* Prefix for passing custom environment variables to Flink's master > process. >* For example for passing LD_LIBRARY_PATH as an env variable to the > AppMaster, set: >* containerized.master.env.LD_LIBRARY_PATH: "/usr/lib/native" >* in the flink-conf.yaml. >*/ > public static final String CONTAINERIZED_MASTER_ENV_PREFIX = > "containerized.master.env."; > /** >* Similar to the {@see CONTAINERIZED_MASTER_ENV_PREFIX}, this > configuration prefix allows >* setting custom environment variables for the workers (TaskManagers). >*/ > public static final String CONTAINERIZED_TASK_MANAGER_ENV_PREFIX = > "containerized.taskmanager.env."; -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-14072) Add a third-party maven repository for flink-shaded .
[ https://issues.apache.org/jira/browse/FLINK-14072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu updated FLINK-14072: Fix Version/s: (was: 1.10.0) (was: 1.9.0) 1.9.2 > Add a third-party maven repository for flink-shaded . > - > > Key: FLINK-14072 > URL: https://issues.apache.org/jira/browse/FLINK-14072 > Project: Flink > Issue Type: Bug > Components: BuildSystem / Shaded >Affects Versions: 1.9.0, 1.10.0 >Reporter: Jeff Yang >Priority: Major > Labels: pull-request-available > Fix For: 1.9.2 > > Time Spent: 10m > Remaining Estimate: 0h > > Add a third-party maven repository so that packages can still be found when > building custom versions against vendor distributions such as CDH, HDP, or MapR. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-14327) Getting "Could not forward element to next operator" error
[ https://issues.apache.org/jira/browse/FLINK-14327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu updated FLINK-14327: Fix Version/s: (was: 1.9.0) 1.9.2 > Getting "Could not forward element to next operator" error > -- > > Key: FLINK-14327 > URL: https://issues.apache.org/jira/browse/FLINK-14327 > Project: Flink > Issue Type: Bug > Components: API / DataStream >Affects Versions: 1.9.0 >Reporter: ASK5 >Priority: Major > Fix For: 1.9.2 > > Attachments: so2.png > > > val TEMPERATURE_THRESHOLD: Double = 50.00 > val see: StreamExecutionEnvironment = > StreamExecutionEnvironment.getExecutionEnvironment > val properties = new Properties() > properties.setProperty("zookeeper.connect", "localhost:2181") > properties.setProperty("bootstrap.servers", "localhost:9092") > val src = see.addSource(new FlinkKafkaConsumer010[ObjectNode]("broadcast", > new JSONKeyValueDeserializationSchema(false), > properties)).name("kafkaSource") > case class Event(locationID: String, temp: Double) > var data = src.map { v => { > val loc = v.get("locationID").asInstanceOf[String] > val temperature = v.get("temp").asDouble() > (loc, temperature) > }} > data = data > .keyBy( > v => v._1 > ) > data.print() > see.execute() > ---* > And I'm getting the following error while consuming json file from Kafka:- > > {{Exception in thread "main" > org.apache.flink.runtime.client.JobExecutionException: Job execution > failed at flinkBroadcast1$.main(flinkBroadcast1.scala:59) at > flinkBroadcast1.main(flinkBroadcast1.scala)Caused by: java.lang.Exception: > org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: > Could not forward element to next operator...Caused by: > org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: > Could not forward element to next operator...Caused by: > java.lang.NullPointerException}} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on issue #9865: [FLINK-14212][python] Support no-argument Python UDFs.
flinkbot edited a comment on issue #9865: [FLINK-14212][python] Support no-argument Python UDFs. URL: https://github.com/apache/flink/pull/9865#issuecomment-539919841 ## CI report: * bf1d566ea2f91d61c2f436bb92b5337088b7 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131102433) * 3110f74ae60ed10bdb2bdbed7dd1facfde9fdeea : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131266172) * f48b910ceee34ff7eb9f6f75d7782b49005c587f : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/131275232) * c97adb28424b4518535c9922b011d7adcf5e842f : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131279546) * 4edf1202d5b9c34f329ca29ed96430b69a5807cd : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131468804) * 04b573f99c0e09a06e3db6f3d939e619e25a7eb0 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131633458) * dee89d05086821716490658ddcc573a7574eb1bd : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/131637633) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-14225) Travis is unable to parse one of the secure environment variables
[ https://issues.apache.org/jira/browse/FLINK-14225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16949981#comment-16949981 ] Gary Yao commented on FLINK-14225: -- If there are no objections, I will remove these variables on Monday, 2019-10-14, 11 am CEST. > Travis is unable to parse one of the secure environment variables > - > > Key: FLINK-14225 > URL: https://issues.apache.org/jira/browse/FLINK-14225 > Project: Flink > Issue Type: Bug > Components: Build System >Affects Versions: 1.10.0 >Reporter: Gary Yao >Priority: Blocker > > Example: https://travis-ci.org/apache/flink/jobs/589531009 > {noformat} > We were unable to parse one of your secure environment variables. > Please make sure to escape special characters such as ' ' (white space) and $ > (dollar symbol) with \ (backslash) . > For example, thi$isanexample would be typed as thi\$isanexample. See > https://docs.travis-ci.com/user/encryption-keys. > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on issue #9885: [FLINK-14344][checkpointing] Snapshots master hook state asynchronously
flinkbot edited a comment on issue #9885: [FLINK-14344][checkpointing] Snapshots master hook state asynchronously URL: https://github.com/apache/flink/pull/9885#issuecomment-541299624 ## CI report: * d0c426e6ff61dcad1de4c7d3e9a2a16416e77e54 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131633477) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Updated] (FLINK-14004) Define SourceReader interface to verify the integration with StreamOneInputProcessor
[ https://issues.apache.org/jira/browse/FLINK-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhijiang updated FLINK-14004: - Affects Version/s: 1.8.0 1.9.0 > Define SourceReader interface to verify the integration with > StreamOneInputProcessor > > > Key: FLINK-14004 > URL: https://issues.apache.org/jira/browse/FLINK-14004 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Task >Affects Versions: 1.8.0, 1.9.0 >Reporter: zhijiang >Assignee: zhijiang >Priority: Minor > Labels: pull-request-available > Fix For: 1.10.0 > > Time Spent: 20m > Remaining Estimate: 0h > > We already refactored the task input and output sides based on the new source > characters in FLIP-27. In order to further verify that the new source reader > could work well with the unified StreamOneInputProcessor in mailbox model, we > would design a unit test for integrating the whole process. In detail: > * Define SourceReader and SourceOutput relevant interfaces based on FLIP-27 > * Implement an example of stateless SourceReader (bounded sequence of > integers) > * Define SourceReaderOperator to integrate the SourceReader with > StreamOneInputProcessor > * Define SourceReaderStreamTask to execute the source input and implement a > unit test for it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-14004) Define SourceReaderOperator to verify the integration with StreamOneInputProcessor
[ https://issues.apache.org/jira/browse/FLINK-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhijiang updated FLINK-14004: - Summary: Define SourceReaderOperator to verify the integration with StreamOneInputProcessor (was: Define SourceReader interface to verify the integration with StreamOneInputProcessor) > Define SourceReaderOperator to verify the integration with > StreamOneInputProcessor > -- > > Key: FLINK-14004 > URL: https://issues.apache.org/jira/browse/FLINK-14004 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Task >Affects Versions: 1.8.0, 1.9.0 >Reporter: zhijiang >Assignee: zhijiang >Priority: Minor > Labels: pull-request-available > Fix For: 1.10.0 > > Time Spent: 20m > Remaining Estimate: 0h > > We already refactored the task input and output sides based on the new source > characters in FLIP-27. In order to further verify that the new source reader > could work well with the unified StreamOneInputProcessor in mailbox model, we > would design a unit test for integrating the whole process. In detail: > * Define SourceReader and SourceOutput relevant interfaces based on FLIP-27 > * Implement an example of stateless SourceReader (bounded sequence of > integers) > * Define SourceReaderOperator to integrate the SourceReader with > StreamOneInputProcessor > * Define SourceReaderStreamTask to execute the source input and implement a > unit test for it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-14004) Define SourceReaderOperator to verify the integration with StreamOneInputProcessor
[ https://issues.apache.org/jira/browse/FLINK-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhijiang updated FLINK-14004: - Description: We already refactored the task input and output in runtime stack for considering the requirements of FLIP-27. In order to further verify that the new source could work well with the unified StreamOneInputProcessor in mailbox model, we define the SourceReaderOperator as task input and implement a unit test for passing through the whole process. (was: We already refactored the task input and output sides based on the new source characters in FLIP-27. In order to further verify that the new source reader could work well with the unified StreamOneInputProcessor in mailbox model, we would design a unit test for integrating the whole process. In detail: * Define SourceReader and SourceOutput relevant interfaces based on FLIP-27 * Implement an example of stateless SourceReader (bounded sequence of integers) * Define SourceReaderOperator to integrate the SourceReader with StreamOneInputProcessor * Define SourceReaderStreamTask to execute the source input and implement a unit test for it.) > Define SourceReaderOperator to verify the integration with > StreamOneInputProcessor > -- > > Key: FLINK-14004 > URL: https://issues.apache.org/jira/browse/FLINK-14004 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Task >Affects Versions: 1.8.0, 1.9.0 >Reporter: zhijiang >Assignee: zhijiang >Priority: Minor > Labels: pull-request-available > Fix For: 1.10.0 > > Time Spent: 20m > Remaining Estimate: 0h > > We already refactored the task input and output in runtime stack for > considering the requirements of FLIP-27. In order to further verify that the > new source could work well with the unified StreamOneInputProcessor in > mailbox model, we define the SourceReaderOperator as task input and implement > a unit test for passing through the whole process. -- This message was sent by Atlassian Jira (v8.3.4#803005)
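As a rough illustration of the "bounded sequence of integers" reader mentioned in the original description, here is a hypothetical, self-contained sketch. The `SourceReader`/`SourceOutput` interfaces below are simplified stand-ins invented for this example — they are not the actual FLIP-27 or Flink runtime APIs:
{code:java}
// Simplified stand-ins for the interfaces discussed above; NOT the real FLIP-27 APIs.
interface SourceOutput<T> {
    void emit(T record);
}

interface SourceReader<T> {
    /** Emits at most one record; returns false once the input is exhausted. */
    boolean emitNext(SourceOutput<T> output);
}

/** A stateless, bounded reader that produces the integers [0, limit). */
class BoundedIntegerSequenceReader implements SourceReader<Integer> {
    private final int limit;
    private int next;

    BoundedIntegerSequenceReader(int limit) {
        this.limit = limit;
    }

    @Override
    public boolean emitNext(SourceOutput<Integer> output) {
        if (next >= limit) {
            return false;
        }
        output.emit(next++);
        return next < limit;
    }
}

public class SourceReaderSketch {
    public static void main(String[] args) {
        SourceReader<Integer> reader = new BoundedIntegerSequenceReader(5);
        SourceOutput<Integer> out = System.out::println;
        // In the real runtime the input processor would drive this loop from the mailbox;
        // here we simply pull until the bounded sequence is exhausted.
        while (reader.emitNext(out)) {
            // keep pulling
        }
    }
}
{code}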
[GitHub] [flink] flinkbot edited a comment on issue #9865: [FLINK-14212][python] Support no-argument Python UDFs.
flinkbot edited a comment on issue #9865: [FLINK-14212][python] Support no-argument Python UDFs. URL: https://github.com/apache/flink/pull/9865#issuecomment-539919841 ## CI report: * bf1d566ea2f91d61c2f436bb92b5337088b7 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131102433) * 3110f74ae60ed10bdb2bdbed7dd1facfde9fdeea : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131266172) * f48b910ceee34ff7eb9f6f75d7782b49005c587f : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/131275232) * c97adb28424b4518535c9922b011d7adcf5e842f : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131279546) * 4edf1202d5b9c34f329ca29ed96430b69a5807cd : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131468804) * 04b573f99c0e09a06e3db6f3d939e619e25a7eb0 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131633458) * dee89d05086821716490658ddcc573a7574eb1bd : UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Updated] (FLINK-14004) Define SourceReader interface to verify the integration with StreamOneInputProcessor
[ https://issues.apache.org/jira/browse/FLINK-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhijiang updated FLINK-14004: - Fix Version/s: 1.10.0 > Define SourceReader interface to verify the integration with > StreamOneInputProcessor > > > Key: FLINK-14004 > URL: https://issues.apache.org/jira/browse/FLINK-14004 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Task >Reporter: zhijiang >Assignee: zhijiang >Priority: Minor > Labels: pull-request-available > Fix For: 1.10.0 > > Time Spent: 20m > Remaining Estimate: 0h > > We already refactored the task input and output sides based on the new source > characters in FLIP-27. In order to further verify that the new source reader > could work well with the unified StreamOneInputProcessor in mailbox model, we > would design a unit test for integrating the whole process. In detail: > * Define SourceReader and SourceOutput relevant interfaces based on FLIP-27 > * Implement an example of stateless SourceReader (bounded sequence of > integers) > * Define SourceReaderOperator to integrate the SourceReader with > StreamOneInputProcessor > * Define SourceReaderStreamTask to execute the source input and implement a > unit test for it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (FLINK-14004) Define SourceReader interface to verify the integration with StreamOneInputProcessor
[ https://issues.apache.org/jira/browse/FLINK-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhijiang resolved FLINK-14004. -- Resolution: Fixed Merged in master: cee9bf0ab6c1d6b1678070ff2e05f0aa34f26dc0 > Define SourceReader interface to verify the integration with > StreamOneInputProcessor > > > Key: FLINK-14004 > URL: https://issues.apache.org/jira/browse/FLINK-14004 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Task >Reporter: zhijiang >Assignee: zhijiang >Priority: Minor > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > We already refactored the task input and output sides based on the new source > characters in FLIP-27. In order to further verify that the new source reader > could work well with the unified StreamOneInputProcessor in mailbox model, we > would design a unit test for integrating the whole process. In detail: > * Define SourceReader and SourceOutput relevant interfaces based on FLIP-27 > * Implement an example of stateless SourceReader (bounded sequence of > integers) > * Define SourceReaderOperator to integrate the SourceReader with > StreamOneInputProcessor > * Define SourceReaderStreamTask to execute the source input and implement a > unit test for it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] jinglining commented on issue #9798: [FLINK-14176][web]Web Ui add log url for taskmanager of vertex
jinglining commented on issue #9798: [FLINK-14176][web]Web Ui add log url for taskmanager of vertex URL: https://github.com/apache/flink/pull/9798#issuecomment-541305771 Hi, @zentol. I have added a screenshot. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] jinglining commented on issue #9798: [FLINK-14176][web]Web Ui add log url for taskmanager of vertex
jinglining commented on issue #9798: [FLINK-14176][web]Web Ui add log url for taskmanager of vertex URL: https://github.com/apache/flink/pull/9798#issuecomment-541305694 ![image](https://user-images.githubusercontent.com/3992588/66698878-82bc0600-ed14-11e9-934a-78712e1117e5.png) screenshot This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] zhijiangW merged pull request #9646: [FLINK-14004][runtime] Define SourceReaderOperator to verify the integration with StreamOneInputProcessor
zhijiangW merged pull request #9646: [FLINK-14004][runtime] Define SourceReaderOperator to verify the integration with StreamOneInputProcessor URL: https://github.com/apache/flink/pull/9646 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] zhijiangW commented on issue #9646: [FLINK-14004][runtime] Define SourceReaderOperator to verify the integration with StreamOneInputProcessor
zhijiangW commented on issue #9646: [FLINK-14004][runtime] Define SourceReaderOperator to verify the integration with StreamOneInputProcessor URL: https://github.com/apache/flink/pull/9646#issuecomment-541305664 Thanks for the review @pnowojski, I have fixed the nit comments above. The Travis build failed because of the known fragile `Kafka010ProducerITCase`, so I am ignoring that failure and merging. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-14318) JDK11 build stalls during shading
[ https://issues.apache.org/jira/browse/FLINK-14318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16949972#comment-16949972 ] Till Rohrmann commented on FLINK-14318: --- Another instance: https://api.travis-ci.org/v3/job/596712083/log.txt > JDK11 build stalls during shading > - > > Key: FLINK-14318 > URL: https://issues.apache.org/jira/browse/FLINK-14318 > Project: Flink > Issue Type: Bug > Components: Build System >Affects Versions: 1.10.0 >Reporter: Gary Yao >Priority: Critical > Labels: test-stability > > JDK11 build stalls during shading. > Travis stage: e2d - misc - jdk11 > https://travis-ci.org/apache/flink/builds/593022581?utm_source=slack&utm_medium=notification > https://api.travis-ci.org/v3/job/593022629/log.txt > Relevant excerpt from logs: > {noformat} > 01:53:43.889 [INFO] > > 01:53:43.889 [INFO] Building flink-metrics-reporter-prometheus-test > 1.10-SNAPSHOT > 01:53:43.889 [INFO] > > ... > 01:53:44.508 [INFO] Including > org.apache.flink:force-shading:jar:1.10-SNAPSHOT in the shaded jar. > 01:53:44.508 [INFO] Excluding org.slf4j:slf4j-api:jar:1.7.15 from the shaded > jar. > 01:53:44.508 [INFO] Excluding com.google.code.findbugs:jsr305:jar:1.3.9 from > the shaded jar. > 01:53:44.508 [INFO] No artifact matching filter io.netty:netty > 01:53:44.522 [INFO] Replacing original artifact with shaded artifact. > 01:53:44.523 [INFO] Replacing > /home/travis/build/apache/flink/flink-end-to-end-tests/flink-metrics-reporter-prometheus-test/target/flink-metrics-reporter-prometheus-test-1.10-SNAPSHOT.jar > with > /home/travis/build/apache/flink/flink-end-to-end-tests/flink-metrics-reporter-prometheus-test/target/flink-metrics-reporter-prometheus-test-1.10-SNAPSHOT-shaded.jar > 01:53:44.524 [INFO] Replacing original test artifact with shaded test > artifact. > 01:53:44.524 [INFO] Replacing > /home/travis/build/apache/flink/flink-end-to-end-tests/flink-metrics-reporter-prometheus-test/target/flink-metrics-reporter-prometheus-test-1.10-SNAPSHOT-tests.jar > with > /home/travis/build/apache/flink/flink-end-to-end-tests/flink-metrics-reporter-prometheus-test/target/flink-metrics-reporter-prometheus-test-1.10-SNAPSHOT-shaded-tests.jar > 01:53:44.524 [INFO] Dependency-reduced POM written at: > /home/travis/build/apache/flink/flink-end-to-end-tests/flink-metrics-reporter-prometheus-test/target/dependency-reduced-pom.xml > No output has been received in the last 10m0s, this potentially indicates a > stalled build or something wrong with the build itself. > Check the details on how to adjust your build configuration on: > https://docs.travis-ci.com/user/common-build-problems/#build-times-out-because-no-output-was-received > The build has been terminated > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on issue #9501: [FLINK-12697] [State Backends] Support on-disk state storage for spill-able heap backend
flinkbot edited a comment on issue #9501: [FLINK-12697] [State Backends] Support on-disk state storage for spill-able heap backend URL: https://github.com/apache/flink/pull/9501#issuecomment-523769032 ## CI report: * 6107a00876759dcef075592c8bfa56764f66b6fa : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124167653) * 4263f76529de2d80aa3554ee91c206fc0f9152e5 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124179375) * befcc101f408baca6bb2bf83eea14cb4a9f496bd : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124863326) * df676fa3d969772b8995538adc147bbb86c36189 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/124881729) * 24328ec47f13f4a445f3c431ea6025a368aaedc1 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/125238415) * 38d3ce4f2e78ec98aceea6a9d1f958f903ae5f3f : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/128211309) * cfbb8387223123f660bc5d5b1f02ec36616b6c11 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/128339902) * 2819c8dd4ede0cb088ac20ba3715e8d8bafb124e : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129069417) * 1cf8f18b890dcbb0e19870cb4be6efc1a600fea5 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129082047) * 20e2c58d7ba5b385a6b6e898a49b8ce9a0c81243 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129230744) * 8af6229634471ccd064c02b71fc60bc5262c8ab1 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129306669) * 80bde06accbe5b4a6bf24d4346b5f3a285998403 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129428242) * f7233cb92215a8e85515fc897e079f875e3e3db4 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/130546181) * 728f37b2743a9e969c3219b6ded682c9a497adc7 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131594441) * cda6e8090f29f406298c2ae8c922f1902f3c6122 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131602364) * 57b4b4b6f94058a194b2206f61236872040f6f11 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131612775) * 6c59df1414336a026a2e5a72087bff49b9627c19 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/131636770) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] WeiZhong94 commented on issue #9865: [FLINK-14212][python] Support no-argument Python UDFs.
WeiZhong94 commented on issue #9865: [FLINK-14212][python] Support no-argument Python UDFs. URL: https://github.com/apache/flink/pull/9865#issuecomment-541303687 > > @JingsongLi I noticed that the "getScalarFunction" method in "ScalarSqlFunction" of blink planner was replaced by "makeFunction" method. I think using "getScalarFunction" to get the initial ScalarFunction object and check its language type makes more sense here so I re-add the method. Please take a look at this PR to make sure it does not cause any side effects to blink planner :). > > LGTM, it is reasonable. Can you just use a Scala val for scalarFunction, just like in `TableSqlFunction`? Thanks for your reply! That makes sense to me. I have updated the code in the latest commit. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9501: [FLINK-12697] [State Backends] Support on-disk state storage for spill-able heap backend
flinkbot edited a comment on issue #9501: [FLINK-12697] [State Backends] Support on-disk state storage for spill-able heap backend URL: https://github.com/apache/flink/pull/9501#issuecomment-523769032 ## CI report: * 6107a00876759dcef075592c8bfa56764f66b6fa : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124167653) * 4263f76529de2d80aa3554ee91c206fc0f9152e5 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124179375) * befcc101f408baca6bb2bf83eea14cb4a9f496bd : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/124863326) * df676fa3d969772b8995538adc147bbb86c36189 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/124881729) * 24328ec47f13f4a445f3c431ea6025a368aaedc1 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/125238415) * 38d3ce4f2e78ec98aceea6a9d1f958f903ae5f3f : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/128211309) * cfbb8387223123f660bc5d5b1f02ec36616b6c11 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/128339902) * 2819c8dd4ede0cb088ac20ba3715e8d8bafb124e : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129069417) * 1cf8f18b890dcbb0e19870cb4be6efc1a600fea5 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129082047) * 20e2c58d7ba5b385a6b6e898a49b8ce9a0c81243 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129230744) * 8af6229634471ccd064c02b71fc60bc5262c8ab1 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/129306669) * 80bde06accbe5b4a6bf24d4346b5f3a285998403 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/129428242) * f7233cb92215a8e85515fc897e079f875e3e3db4 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/130546181) * 728f37b2743a9e969c3219b6ded682c9a497adc7 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131594441) * cda6e8090f29f406298c2ae8c922f1902f3c6122 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131602364) * 57b4b4b6f94058a194b2206f61236872040f6f11 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/131612775) * 6c59df1414336a026a2e5a72087bff49b9627c19 : UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] JingsongLi commented on issue #9865: [FLINK-14212][python] Support no-argument Python UDFs.
JingsongLi commented on issue #9865: [FLINK-14212][python] Support no-argument Python UDFs. URL: https://github.com/apache/flink/pull/9865#issuecomment-541302996 > @JingsongLi I noticed that the "getScalarFunction" method in "ScalarSqlFunction" of blink planner was replaced by "makeFunction" method. I think using "getScalarFunction" to get the initial ScalarFunction object and check its language type makes more sense here so I re-add the method. Please take a look at this PR to make sure it does not cause any side effects to blink planner :). LGTM, it is reasonable. Can you just use a Scala val for scalarFunction, just like in `TableSqlFunction`? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] carp84 commented on a change in pull request #9501: [FLINK-12697] [State Backends] Support on-disk state storage for spill-able heap backend
carp84 commented on a change in pull request #9501: [FLINK-12697] [State Backends] Support on-disk state storage for spill-able heap backend URL: https://github.com/apache/flink/pull/9501#discussion_r334227853 ## File path: flink-state-backends/flink-statebackend-heap-spillable/src/main/java/org/apache/flink/runtime/state/heap/space/Allocator.java ## @@ -0,0 +1,50 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.runtime.state.heap.space; + +import java.io.Closeable; + +/** + * Implementations are responsible for allocate space. + */ +public interface Allocator extends Closeable { + + /** +* Allocate space with the given size. +* +* @param size size of space to allocate. +* @return address of the allocated space, or -1 when allocation is failed. +*/ + long allocate(int size); Review comment: ok, got the question, and yes we should better handle this part, will resolve this in new commit. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
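To make the error-handling concern in the review thread above concrete, here is a hypothetical caller-side sketch of the sentinel-based contract in the quoted `Allocator` interface (`-1` meaning allocation failed). The interface is re-declared locally so the snippet is self-contained, and the exception-based handling is just one possible hardening, not the actual fix adopted in the PR:

```java
import java.io.Closeable;

public final class AllocationExample {

    // Local stand-in mirroring the quoted interface: allocate(size) returns an address,
    // or -1 when allocation fails.
    interface Allocator extends Closeable {
        long allocate(int size);
    }

    // One possible hardening: surface the failure explicitly instead of letting -1
    // propagate as if it were a valid address.
    static long allocateOrThrow(Allocator allocator, int size) {
        long address = allocator.allocate(size);
        if (address == -1L) {
            throw new IllegalStateException("Failed to allocate " + size + " bytes");
        }
        return address;
    }

    private AllocationExample() {
    }
}
```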
[GitHub] [flink] flinkbot edited a comment on issue #9646: [FLINK-14004][runtime] Define SourceReaderOperator to verify the integration with StreamOneInputProcessor
flinkbot edited a comment on issue #9646: [FLINK-14004][runtime] Define SourceReaderOperator to verify the integration with StreamOneInputProcessor URL: https://github.com/apache/flink/pull/9646#issuecomment-529229460 ## CI report: * 4f286e24aa0bb04d8e6cc81d64dc7e9ac642b012 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/126367829) * 4208f77cade9cbf9c333e877225b2419e5e3e63e : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/126382681) * 78fbc2d6e6a7275560caac22a26ae3865a7ead27 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/126464941) * 831786bb7057d89928df2f62d301beede7e5f24c : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/131291078) * 05bee3b4be11cd9b1905e60b7cef5edf9ed829b0 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/131632194) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9885: [FLINK-14344][checkpointing] Snapshots master hook state asynchronously
flinkbot edited a comment on issue #9885: [FLINK-14344][checkpointing] Snapshots master hook state asynchronously URL: https://github.com/apache/flink/pull/9885#issuecomment-541299624 ## CI report: * d0c426e6ff61dcad1de4c7d3e9a2a16416e77e54 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/131633477) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services